Googlebot was here for its daily visit this evening (nothing unusual there), but a couple of the URLs it tried to spider were strange: /atom.xmlindex.phpatom.xml and /atom.xmlindex.php. Whilst this site is written in PHP and does contain feeds (which are already indexed), both of these requests returned 404s, as neither file exists on this domain and never has. Judging by the filenames, it was attempting to find an Atom feed, a syndication format similar to RSS.
I can think of only two reasons Googlebot would have tried to spider these addresses: either someone has manually submitted them to Google, or Googlebot is trying new tricks to keep ahead of the competition. The former seems rather unlikely, as it would be a complete waste of time, which leaves only the latter. They already have
Google News, which syndicates stories from 4,500 news sites; perhaps they are planning to do the same with blog feeds, or maybe to launch a dedicated blog search. Could this see them splitting blogs out of the main index entirely? Probably none of these, but I'm going to keep an eye on the logs, and I'd be interested to hear whether anyone else has had similar visits.
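For anyone wanting to watch their own logs for the same thing, here is a minimal sketch. It assumes an Apache combined-format access log; the two sample entries are fabricated purely for illustration (the path matches the odd request described above, and the IP and timestamps are made up), so substitute your real log file in practice.

```shell
# Fabricated sample log entries for illustration only
cat > /tmp/access_sample.log <<'EOF'
66.249.66.1 - - [12/Feb/2004:19:04:12 +0000] "GET /atom.xmlindex.php HTTP/1.1" 404 512 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"
66.249.66.1 - - [12/Feb/2004:19:04:13 +0000] "GET /index.php HTTP/1.1" 200 8192 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"
EOF

# List the paths of Googlebot requests that returned a 404.
# In the combined format, field 7 is the request path and field 9
# is the status code (with default whitespace splitting in awk).
grep 'Googlebot' /tmp/access_sample.log | awk '$9 == 404 {print $7}'
```

Running this against the sample prints /atom.xmlindex.php; pointed at a live access log it would surface any phantom feed URLs the bot is probing for.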