Google Search == Internet Search

by admin 9. February 2009 10:56

I noticed a very interesting article from Jeff Atwood concerning the current state of play with respect to search engines. He makes the point that there really is no competition today when it comes to search engines; Google rules the roost.

Although I feel, as Jeff does, that Google has truly earned its position, such a "monopoly" is surely a cause for concern? So, I decided to look a little closer at the statistics for the search engines driving traffic to this blog. Here are the results over a randomly chosen period:

[Chart: search engine referrals to this blog]


From the stats, it seems I am getting 32 times the traffic from Google that I get from the nearest competing search engine. It would seem I had better speak nicely of Google? If this were Microsoft's search engine, the woolly hat brigade would be screaming it from the rafters... hypocrisy is alive and well ;-)



HTML Markup

I don't know about you, but as a Web Developer I sometimes find myself not doing the things I should be doing. The recent rash of SQL Injection attacks has shown me personally how little thought I sometimes give to where the input is coming from when I'm developing forms.
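As a reminder to myself as much as to anyone else: the simplest defence is to never concatenate form input into a SQL string. A minimal sketch, assuming a hypothetical Users table, connection string and txtUserName TextBox:

```csharp
using System.Data.SqlClient;

// The @name placeholder ensures the typed-in value is treated as data,
// never as executable SQL, no matter what the user enters.
string connStr = "Server=.;Database=Blog;Integrated Security=true";
using (SqlConnection conn = new SqlConnection(connStr))
using (SqlCommand cmd = new SqlCommand(
    "SELECT Id, Email FROM Users WHERE UserName = @name", conn))
{
    cmd.Parameters.AddWithValue("@name", txtUserName.Text);
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... use reader["Email"] etc.
        }
    }
}
```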

Then there is the generated markup itself. All our development efforts end up as plain HTML spat out onto a 'page' on a device of some kind. When you consider how sensitive the search bots are to the structure, content and placement of our markup, isn't it amazing that the only time we ever think about it is when we have to generate it dynamically in some code-behind? :-O

If you are using master pages, then you won't want duplicate content, in the shape of identical meta tags, all over your site. So, you will have to generate the meta tags programmatically in the code-behind of each content page:

HtmlHead head = this.Page.Header;
HtmlMeta meta = new HtmlMeta();
meta.Name = "Description";
meta.Content = "Friendly and relevant content";
head.Controls.Add(meta); // without this, the tag never reaches the page
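In practice this sits in the content page's Page_Load, and the page title is worth setting per-page at the same time. A sketch with made-up values:

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Per-page title and description; the text here is illustrative only.
    Page.Title = "Widget Pricing - Example Site";

    HtmlMeta meta = new HtmlMeta();
    meta.Name = "Description";
    meta.Content = "Current prices for our full range of widgets.";
    Page.Header.Controls.Add(meta);
}
```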


You also do not want pages in the secure area of your site to be spidered:

HtmlHead head = this.Page.Header;
HtmlMeta meta = new HtmlMeta();
meta.Name = "googlebot"; // use "robots" to address all well-behaved crawlers, not just Google's
meta.Content = "noindex, nofollow";
head.Controls.Add(meta);
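A site-wide alternative is a robots.txt file at the root of the site; a minimal sketch (the /secure/ path is made up):

```
User-agent: *
Disallow: /secure/
```

Bear in mind that robots.txt advertises the very paths it hides, so the meta tag approach can be preferable for genuinely sensitive areas.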


I recently noticed a big chunk of JavaScript in my markup. I was using it to push the footer to the bottom of the page, rather than relying on copious amounts of needless content and spacers to do the job that CSS cannot. It should have been in an external file, registered and pulled in with the appropriate ASP.NET method. So, I created a JS file called footerFix.js, placed it in its own folder and used the following in the master page code-behind:

string myScript = "/js/footerFix.js";
Page.ClientScript.RegisterClientScriptInclude("myKey", myScript);


This created the following markup in my page:

<script src="/js/footerFix.js" type="text/javascript"></script>


Now I have a smaller page, a faster download and a fairly good chance of the spiders actually getting the info they require.