Site Moved

This site has been moved to a new location - Bin-Blog. All new posts will appear at the new location.


An Update

The introduction did not say much about what I am doing now. These are my personal projects - the ones I am currently working on...

Sudoku - A JavaScript version of the popular Japanese number game Sudoku. This can be used to play Sudoku online. It is a recent production and is still in beta. Some visitors have pointed out a huge problem with it (it creates unsolvable puzzles). I may have to change the underlying algorithm to correct this - which might take some time, as I am extremely busy right now.

RSSPilot - RSSPilot is a program that can be used to create RSS feeds for your website. It can also be used to read RSS feeds offline. This too is in beta - as it has been for the last 4 to 5 months.

Average User's Tutorial for JavaScript - Written as the second part of my 'ABC of JavaScript: An Interactive JavaScript Tutorial'. This is nearing completion - but there is still some work to be done.

That's about it. Of course, these are my PERSONAL projects - not my professional ones. At my office, I am creating a site offering Fantasy Basketball. That should be interesting as I don't know anything about basketball and even less about fantasy games.


An Introduction

More about me can be found on my sites BinnyVA, Bin-Co and Creationism Argument.
I have started this blog - but as yet, I have not introduced myself. I am Binny. Binny V Abraham.

I am a programmer by profession. I currently work at a web development company, Reubro International, as a Perl and PHP programmer. I am also studying for a BCA at IGNOU (VI Semester). I am settled in the southern part of India - Kerala (also known as 'God's own country').


All the languages that I know are listed here.

  • C++ - The only language I have studied professionally.
  • HTML and some related things like DHTML, JavaScript, CSS etc.
  • Perl
  • Tcl/Tk
  • XML
  • PHP
  • Shell scripting in Windows/DOS, UNIX and Linux


I have many websites scattered around the net. The sites I currently manage are given below...

  • Bin-Co - A site offering many scripts and tutorials in languages like Perl, Tcl/Tk, C++, JavaScript etc.
  • BinnyVA - My personal website.
  • Pheleo Ministry - The website of a firm that my father and I manage.
  • Creationism - My views on the evolution-creation controversy.
  • Bible Resources - Many articles by me and my friends, found nowhere else on the web.
  • Jims - A site that I run for my friend Jims.
  • BD - A site for a software of mine - BD: The DIR Replacement.
  • BinnyVA.CoolInc - An experimental site.
  • Bin-Blog - And of course, this blog.


Over the years I have written a lot of articles. The major ones are given below.

Computer Related
  • Assembling A Computer
  • Tcl/Tk Tutorial
  • CGI-Perl Tutorial
  • Perl/Tk Tutorial
  • Basic JavaScript Tutorial
Christianity
  • Evolution Or Creation?
  • Who is the True God?
  • Who is Jesus?
  • Swoon Theory: The proof for the Resurrection of Jesus Christ
  • Foreign Missions In Kerala
  • Kochu Kunchju Upadesi
  • God And Humor
  • Ecology and the Christian
Stories
  • Unknown Relations
Poems
  • I am a Pessimist


I have also authored a number of programs - both good and bad - over the past few years. I have created an XML database of all the software I have written - see the XML file, or have a look at the front end (IE 5.5+ only - sorry).


The Rules of Programming in the Unix Tradition

I am reading the book "The Art of Unix Programming" by Eric Steven Raymond. This set of programming rules is found in that book. I thought someone might find it useful...

  • Rule of Modularity: Write simple parts connected by clean interfaces.
  • Rule of Clarity: Clarity is better than cleverness.
  • Rule of Composition: Design programs to be connected to other programs.
  • Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
  • Rule of Simplicity: Design for simplicity; add complexity only where you must.
  • Rule of Parsimony: Write a big program only when it is clear by demonstration that nothing else will do.
  • Rule of Transparency: Design for visibility to make inspection and debugging easier.
  • Rule of Robustness: Robustness is the child of transparency and simplicity.
  • Rule of Representation: Fold knowledge into data so program logic can be stupid and robust.
  • Rule of Least Surprise: In interface design, always do the least surprising thing.
  • Rule of Silence: When a program has nothing surprising to say, it should say nothing.
  • Rule of Repair: When you must fail, fail noisily and as soon as possible.
  • Rule of Economy: Programmer time is expensive; conserve it in preference to machine time.
  • Rule of Generation: Avoid hand-hacking; write programs to write programs when you can.
  • Rule of Optimization: Prototype before polishing. Get it working before you optimize it.
  • Rule of Diversity: Distrust all claims for 'one true way'.
  • Rule of Extensibility: Design for the future, because it will be here sooner than you think.

Read the whole book at


SiteMaps - A New Technology from Google

Google has come up with something new (nothing new about that, is there?) - a method by which webmasters tell search engines which files should be indexed, where they can be found, and how important they are. Until now, search engines have indexed a site by scanning its HTML files for links to other files. Upon finding a link, the crawler adds it to its list and scans that page next. The problems with this approach are many...

  • Indexes many files that may be private. This is exploited by attackers in a process called Google hacking. If the webmaster did not include a file in the robots.txt exclusion list, it will be indexed by Googlebot. And if the webmaster does include the private file's URL in robots.txt, that is an invitation to attackers - it tells them exactly where the private files can be found.
  • Can be wrong in estimating the importance of a page. For example, I would consider my JavaScript Tutorial page to be more important than my CGI-Perl Tutorial page - but a bot may not be able to guess that.
  • Orphan files won't be listed - if a page has no other pages linking to it, the crawler will never find it.
  • And much more...
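
To picture why orphan files get missed, here is a rough Python sketch of the link-following approach described above (my own illustration - not how any real search engine is actually written): it starts from one page, collects the links on it, and visits each linked page in turn, so a page that nothing links to is never reached.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, limit=50):
    seen, queue = set(), [start_url]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", "replace")
        except Exception:
            continue  # page could not be fetched - skip it
        parser = LinkCollector()
        parser.feed(html)
        for link in parser.links:
            queue.append(urljoin(url, link))  # follow every discovered link
    return seen  # orphan pages never end up in this set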

The new method lets the webmaster submit the location of an XML file that holds the location of every page on his website.

The format of the XML file is fairly simple.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
  <url>
    <loc></loc>
    <lastmod>2004-06-08T09:28:34Z</lastmod>
    <priority>0.8</priority>
  </url>
</urlset>
Let's have a look at one block.

<loc></loc> - The location (URL) of the page.
<lastmod>2004-06-08T09:28:34Z</lastmod> - The 'Last Modified' date of that file.
<priority>0.8</priority> - The importance of the page - this can be anywhere between 0.0 and 1.0.
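
If you want to generate such an entry yourself, a few lines of Python are enough. This is just a minimal sketch of mine (not Google's script), using only the three elements shown above; the URL is a made-up example.

from xml.sax.saxutils import escape

def url_entry(loc, lastmod, priority=0.5):
    """Return one <url> block for a Sitemap file."""
    return ("  <url>\n"
            "    <loc>%s</loc>\n"
            "    <lastmod>%s</lastmod>\n"
            "    <priority>%.1f</priority>\n"
            "  </url>") % (escape(loc), lastmod, priority)

print(url_entry("http://example.com/tutorial.html",
                "2004-06-08T09:28:34Z", 0.8))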

For more details, go to Google's page on the Sitemaps protocol.

But you don't have to worry about making an XML file with the list of all your pages - good ol' Google has provided a Python script that will do the job for you. This script (called Sitemap Generator) can be downloaded from its SourceForge page at

Basic Info...

Name: sitemap_gen
Version: 1.0
Summary: Sitemap Generator
Author: Google Inc.
License: BSD

From the README file...

The script analyzes your web server and generates one or more
Sitemap files.  These files are XML listings of content you make available on
your web server.  The files can be directly submitted to search engines as
hints for the search engine web crawlers as they index your web site.  This
can result in better coverage of your web content in search engine indices,
and less of your bandwidth spent doing it.

The script is written in Python 2.2 and released to the open
source community for continuous improvements under the BSD 2.0 new license,
which can be found at:

The original release notes for the script, including a walk-through for
webmasters on how to use it, can be found at the following site:

How to use the script.

First, you have to create a configuration XML file with the details of your site. Just copy the 'example_config.xml' file that comes with the sitemap_gen script and edit it. The file has enough explanations in it, so it should be easy to create a config file that matches your site. The config file that I used for one of my sites is given below...

<?xml version="1.0" encoding="UTF-8"?>
<!-- configuration script - Bin-Co -->
<!-- base_url (the root URL of the site) has been left blank here -->
<site
   base_url=""
   store_into="D:/code/sitemap.xml"
   suppress_search_engine_notify="1"
   >

<directory  path="D:/code"    url="" />

<!-- Exclude URLs that point to UNIX-style hidden files               -->
<filter  action="drop"  type="regexp"    pattern="/\.[^/]*$"    />

<!-- Exclude URLs that end with a '~'   (IE: emacs backup files)      -->
<filter  action="drop"  type="wildcard"  pattern="*~"           />

<!-- Exclude URLs that point to default index.html files.
  URLs for directories get included, so these files are redundant. -->
<filter  action="drop"  type="wildcard"  pattern="*index.htm*"  />

<!-- Custom Drops -->
<!-- Downloads -->
<filter  action="drop"  type="wildcard"  pattern="*.zip"  />
<filter  action="drop"  type="wildcard"  pattern="*.gz"  />
<filter  action="drop"  type="wildcard"  pattern="*.bz"  />
<filter  action="drop"  type="wildcard"  pattern="*.exe"  />

<!-- Images -->
<filter  action="drop"  type="wildcard"  pattern="*.gif"  />
<filter  action="drop"  type="wildcard"  pattern="*.jpg"  />
<filter  action="drop"  type="wildcard"  pattern="*.png"  />

<!-- Code Files -->
<filter  action="drop"  type="wildcard"  pattern="*.tcl"  />
<filter  action="drop"  type="wildcard"  pattern="*.pl"  />
<filter  action="drop"  type="wildcard"  pattern="*.cgi"  />

<!-- Script files  -->
<filter  action="drop"  type="wildcard"  pattern="*.js"  />
<filter  action="drop"  type="wildcard"  pattern="*.css"  />

<!-- Some Folders must not get in. -->
<filter  action="drop"  type="wildcard"  pattern="*Temp*"  />
<filter  action="drop"  type="wildcard"  pattern="*cgi-bin*"  />

</site>

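The 'drop' filters do most of the work in this config. Just to illustrate how such wildcard patterns behave, here is a small Python sketch of mine (not code from sitemap_gen itself) that applies the same patterns to a few made-up URLs using shell-style matching.

from fnmatch import fnmatch

DROP_PATTERNS = ["*~", "*index.htm*", "*.zip", "*.gz", "*.bz", "*.exe",
                 "*.gif", "*.jpg", "*.png", "*.tcl", "*.pl", "*.cgi",
                 "*.js", "*.css", "*Temp*", "*cgi-bin*"]

def keep_url(url):
    """Return True if the URL survives every 'drop' filter."""
    return not any(fnmatch(url, pattern) for pattern in DROP_PATTERNS)

print(keep_url("http://example.com/code/tutorial.html"))  # True - kept
print(keep_url("http://example.com/code/index.html"))     # False - dropped
print(keep_url("http://example.com/images/logo.png"))     # False - dropped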

The script is actually meant to be run from your web server - but that is not the way I did it. I ran the script on my local machine and then uploaded the XML file it created to my server. Just make sure that the locations you give in the config file correctly point to your pages. Now run the script using Python. I used this command.

python sitemap_gen.py --config=binco.xml

Just type this command in a terminal - you do have Python installed, don't you?

If everything goes well, a message will be shown - in my case it was...

Reading configuration file: binco.xml
Walking DIRECTORY "D:/code\"
Sorting and normalizing collected URLs.
Writing Sitemap file "D:\code\sitemap.xml" with 183 URLs
Search engine notification is suppressed.
Count of file extensions on URLs:
    45  (no extension)
   123  .html
    12  .txt
     3  .xml
Number of errors: 0
Number of warnings: 0

Have a look at the sitemap.xml file that was created - all the pages of your site in one file. Now we have to submit this file to Google. Before doing that, upload the XML file to your server and note its location. Then, log in to the Google Sitemaps site at Webmasters using your Gmail account. If you don't have one, create it now. Then click on the 'Add a Sitemap' link and enter the location of the XML file we just created in the input field. This will add the XML file to the list of sitemaps that will be downloaded and parsed by Google - this can take some time, so be patient.

That's all there is to it. The whole process can be automated - this is how Google wants it to be. Just upload the Python script and its config file to your server and set a cron job to run it every week. Make sure that the config file does NOT have the ' suppress_search_engine_notify="1" ' option. Now, whenever the script runs, it will create the XML file and notify Google that a new Sitemap was made.
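
The notification step itself is just an HTTP request with the sitemap's URL in it. Below is a rough Python sketch of what that ping might look like - the endpoint URL here is the one the sitemap_gen script appeared to use at the time, so treat it as an assumption and check the script or its README before relying on it; the sitemap location is a made-up example.

from urllib.parse import quote
from urllib.request import urlopen

def ping_google(sitemap_url):
    """Tell Google that the sitemap at sitemap_url has been updated."""
    # NOTE: endpoint is an assumption - verify it against the script/README.
    ping = ("http://www.google.com/webmasters/sitemaps/ping?sitemap="
            + quote(sitemap_url, safe=""))
    response = urlopen(ping)
    return response.getcode()  # 200 means the ping was accepted

# Example (hypothetical sitemap location):
# print(ping_google("http://example.com/sitemap.xml"))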

This is another good idea from Google. But even they are not confident that it will work. The following text is taken from the Google Blog:

'We're undertaking an experiment called Google Sitemaps that will either fail miserably, or succeed beyond our wildest dreams.'

Everything that can be used to improve the position of a site in Google search will be abused by some webmasters. And this technology gives a lot of control to webmasters - so undoubtedly they will find a way to cheat using Sitemaps. Till then, let us enjoy this technology.
