Posts Tagged ‘Google’

Canonicals and noindex results

Sunday, November 1st, 2009

The results of the third experiment are in. This was quite a simple one: to see whether Google would respect a canonical link element on a page that had the noindex robots metatag.

No surprises here, happily. You’d expect Google to read the noindexed page, including the canonical link element, make the adjustment accordingly and index the destination page. That’s exactly what it did. Happy days.

Actually, it did it so quickly (compared with some other canonicals that I’ve implemented elsewhere) that I’m left wondering whether Google might actually be more inclined to pay swift attention to the canonical instruction if the page on which it is found is set not to be indexed. Just speculation, of course.

More on Google and robots.txt

Sunday, November 1st, 2009

I spoke to a couple of fellow SEO types about Google’s behaviour in Experiment 4. I’ve almost been persuaded that the behaviour of Google in the circumstances is not so very controversial.

In the experiment, I found evidence that Google was checking URLs that were disallowed in robots.txt, which initially seemed to me to be a breach of the robots protocol.

Here’s what Google says about robots.txt.

While Google won’t crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web.

Here’s what the site dedicated to the robots.txt protocol says:

The “Disallow: /” tells the robot that it should not visit any pages on the site.

There appears to be a slight discrepancy. Google says it will not crawl or index the content of pages blocked, whereas the robots protocol suggests that blocked agents should not visit the pages.

However, Google states that it may index just the URLs if it finds them elsewhere. This leads to the classic “thin result” in Google where you just see the URL and nothing else. It’s quite possible for such thin results not only to appear but to rank in searches, which has been an interesting way of demonstrating the power of anchor text in the past.

Google will presumably not want to index junk URLs. So when it finds URLs via links on other pages, but finds that they are blocked by robots.txt, it is presumably sending some kind of lightweight HTTP request – enough to confirm that the URL is valid and to pick up the response code without reading any of the content. I’d assume that the approach is roughly as follows (there’s a speculative sketch in code after the list):

  • Response: 200 – index a thin result URL
  • Response: 301 – follow the same process with the destination URL, index if not blocked
  • Response: 4xx or 5xx – don’t index, at least for now
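
To make that speculation concrete, here is the same logic dressed up as a short Python sketch. To be clear: this is just my guess at the behaviour, not anything Google has published; the function name is mine, and it leans on the third-party requests library.

# Speculative sketch of how a crawler might treat a URL it has found in a
# link but is not allowed to crawl. A HEAD request fetches only the
# response headers, so no disallowed content is actually read.
import requests

def check_blocked_url(url):
    response = requests.head(url, allow_redirects=False, timeout=10)
    status = response.status_code
    if status == 200:
        return ("index-thin-result", url)   # URL-only listing
    if status in (301, 302):
        # Repeat the whole process on the redirect target, and index it
        # only if that URL is not itself blocked.
        return ("follow-redirect", response.headers.get("Location"))
    return ("do-not-index", url)            # 4xx or 5xx

print(check_blocked_url("http://www.search-experiments.com/experiments/exp4/experiment4main.shtml"))

Run against the blocked URL from Experiment 4, a check like this would pick up the 301 from the headers alone, which is exactly the distinction that seems to matter here.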

This would explain the result of the experiment. It seems to me that Google is not quite acting within the spirit of the robots protocol, if this is indeed the case.

The upshot of this is that you have to be very careful about the combination of methods that you are using to restrict access to your pages. It’s well known, for example, that Google will not (or cannot) parse a page-level robots noindex instruction if that page is blocked by robots.txt (because they are respecting robots.txt and not looking at the content of the page). For similar reasons, Google would not be able to “see” a canonical link instruction on a page blocked by robots.txt. However, it seems that they can and will respect an HTTP-level redirect, because this response is not part of the “content” of the page.

I wonder if I’m the only person to find this stuff interesting!

Google does not always respect robots.txt… maybe

Tuesday, October 27th, 2009

Here are the results for experiment 4.

To recap: I put a new link on the home page which was blocked by robots.txt. The link was to http://www.search-experiments.com/experiments/exp4/experiment4main.shtml.

Before even creating this page, I blocked all pages in that folder in robots.txt.

Here’s the very text that appears there:

User-agent: *
Disallow: /experiments/exp4/

Google Webmaster Tools confirms that the page is blocked when I use its Crawler Access tool:

Test results:

http://www.search-experiments.com/experiments/exp4/experiment4main.shtml
Blocked by line 3: Disallow: /experiments/exp4/

(However, it’s not yet showing up in the Crawl errors page.)

Then I put a 301 redirect in place on the page, redirecting to my “destination” page.
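
For the record, the redirect itself needs nothing exotic. I won’t pretend this is the only way to do it, but on a typical Apache host a single line in the site’s .htaccess file is enough; the destination URL below is a placeholder rather than the real destination page (which I’d rather not link to here).

# One way to 301 the blocked page (Apache mod_alias). The destination
# URL is a placeholder standing in for the real destination page.
Redirect 301 /experiments/exp4/experiment4main.shtml http://www.search-experiments.com/placeholder-destination.shtml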

If Google properly respects robots.txt, then it should not request the blocked page. If it doesn’t request the blocked page, it shouldn’t find the 301 redirect to the destination page.

As that destination page is not linked to from anywhere else, that page should never appear in the index.

So, what happened?

Well, Google took its time to reindex the home page of the site (it’s not frequently updated and it’s not exactly a high-traffic site). But it did get around to it eventually.

And the destination page has also been indexed.

Now, it is of course possible that some other site has linked directly to the destination page, thereby giving Google an alternative and legitimate route in. The experiment is not, therefore, in a clearly controlled environment. But this seems quite unlikely, unless some other crawler has followed the redirect and republished the destination URL somewhere, or someone was being annoying to the point of being malicious. On a site like this, however, with its minuscule readership, I think the chances of the latter are remote. Incidentally, neither Google nor Yahoo Site Explorer is reporting any links into the destination page.

There was only one way to find out exactly what had happened: to look at the raw server logs for the period and see whether Googlebot had indeed pinged the blocked URL. Unfortunately, when I went to check exactly what Googlebot had been up to, I found that I hadn’t changed the default option in my hosting, which is not to keep the raw logs. So that’s not too smart. Sorry. I’ve got all the stats that I normally need, but none of AWStats, Webalizer or Google Analytics gives me the detail that I need here.

On the balance of probability, however, it seems that Google may be pinging the URLs that you tell it not to access in robots.txt, and checking the HTTP header returned. If it’s a 301, it will follow that redirect, and index the destination page in accordance with your settings for that page.

What’s the practical use of this information? Well, I can imagine a circumstance in which you have blocked certain pages using robots.txt because you are not happy with them being indexed or accessed, and you are planning to replace them with pages that you are happy with. In that case, you shouldn’t rely on Google continuing to respect the robots.txt exclusion once you have arranged for those pages to be redirected.

What’s the next step? Well, I’ve enabled the logs now, and will run a similar experiment in the near future.
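
In the meantime, for anyone in the same position once the raw logs are switched on, a very small filter is all it takes to see whether Googlebot has touched a blocked URL. This assumes an Apache-style access log and the standard Googlebot user-agent string; the file name will vary by host.

# Rough sketch: print any Googlebot requests for the blocked folder
# from an Apache-style access log (file name varies by host).
blocked_path = "/experiments/exp4/"

with open("access.log") as log:
    for line in log:
        if "Googlebot" in line and blocked_path in line:
            print(line.rstrip())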

301 redirect and robots.txt exclusion combined

Tuesday, October 20th, 2009

Experiment 4 is now up on the Search Experiments home page.

What I’m up to here is again pretty simple. I’ve created two pages. The first has been linked to from the home page under Experiment 4, but it has also been blocked by robots.txt (by disallowing the directory in which it resides). To be on the safe side, the robots.txt exclusion was put in place for the directory before the page was even created.

This page, however, will never see the light of day, because it has also been 301-redirected to another page, the “destination” page for the experiment.

Fortunately this blog is so obscure that the destination page is unlikely to receive any other incoming links (please don’t link to it if you’re reading this…).

The hypothesis is that Google will NOT follow the URL from which it is blocked by robots.txt, and so it will NOT discover the 301 redirect, so the destination page should not appear in Google’s index. What we should see instead is a snippet-free URL for the original page.

That’s what should happen if my understanding is right, but my understanding may well be wrong. Results will be reported back here.

Canonical link element and noindex robots metatag

Tuesday, October 20th, 2009

I’ve actually explained what I’m doing in this experiment on the page itself, which is here. The set-up is as follows (there’s a sketch of the resulting markup after the list):

  • Create two almost identical pages
  • Link to the first one
  • Set the first page to “noindex,follow”
  • Give the first page a canonical link element in the head section, pointing to the second page
  • Set the second page to “index, follow”
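
In markup terms, the head of the first page ends up looking something like the sketch below. The title and the destination URL are placeholders for illustration; the real pages live on the experiment itself.

<!-- First page: noindexed, but with a canonical pointing at page two.
     The href is a placeholder, not the real destination URL. -->
<head>
  <title>Experiment 3 (placeholder title)</title>
  <meta name="robots" content="noindex,follow">
  <link rel="canonical" href="http://www.search-experiments.com/placeholder-destination.html">
</head>

The second page simply carries <meta name="robots" content="index,follow"> and no canonical pointing anywhere else.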

Then, sit back and wait for Googlebot to work its magic – and see whether the second page makes it into the index. Really, provided that Google respects the noindex tag, and there’s no good reason why it should not, there should be no chance of the first page making it into the index. So the sole question is whether the second page will make it into the index or not.

My expectation, and hope, is that it will, despite being unlinked from anywhere else. Further variations on this theme will follow if it does not, and may in any case.

Search experiments at Google

Wednesday, August 27th, 2008

As if owning one zeitgeisty domain wasn’t sufficient, it seems that this one, or the phrase from which it is formed, is now in fashion, following a Google blog post about search experiments there.

As the SEO blogs link to and comment about it, the phrase “search experiments” becomes more popular and more competitive – at time of writing, this site has been pushed down on to page 2 (position #11).

The Google post is both interesting and funny. It kicks off with two versions of part of a results set so similar that it is impossible to tell them apart without placing them side by side, and even then it is a struggle. It reminded me obscurely of the Fast Show’s Animation Now sketch, where he moves things “just a tiny bit”.

The difference between the two is an extra half-millimetre of white space around one of the results. I suppose that they didn’t get where they are today by saying at any point “ah, that’s good enough”, but the degree of attention to detail seems beyond obsessive. The poster, Ben Gomes, even refers to the changes as “barely visible”.

It’s interesting to see, however, that as well as experimenting with new features and products, they are always tweaking the main model. If only they paid as much attention to their algo! (Joke, but I know a few webmasters who would be laughing bitterly…)

Effects of taggregation, plus status updates

Friday, August 15th, 2008

I am a little surprised to find that the blog home page was briefly #2 (now #3) in Google for the phrase “search experiments”, and that the site home page is #2 in Yahoo (in each case, the UK varieties). Despite this apparent “success” (I don’t think that the term has driven any search visitors to the site), there remain pages of the site resolutely unindexed.

Google

The preference that Google is showing for the blog home page is also interesting, and it is worth looking into why this might be, particularly because the links that I have created are all to the website home page. Although all the pages on the site link to the blog home, all the pages/posts on the blog link to the site home. 

So what is going on with Google here? A link: operator search returns no results, but Webmaster tools credits the site overall with 39 external links. Eight of these are to the home page, the rest to blog pages. The eight, which I set up, are from a couple of other blogs, one of which is totally weak and the other fairly weak.

The links to blog pages are mostly from Technorati, and all Technorati links are from pages aggregating all blogs with particular tags. The other links look as if they are doing something similar, probably with material taken or scraped from Technorati.

There’s good cross-linking between the blog and the other site pages: all links from blog pages to the main site home page use the phrase; conversely, all links from the non-blog pages to the blog home page include the phrase.

So, crosslinking should pretty much cancel itself out in relation to relative ranking. Which leads to an interesting tentative hypothesis: that simply blogging and using tags can garner external links – from aggregator pages – that are as powerful as hand-edited links from existing sites.

I do have one reasonably powerful incoming link set up (from the home page of a five-year-old site with thousands of organic links), but this is not yet showing up as an external link in Webmaster tools. (This link is to the home page, not the blog.)

OK, it could of course be passing PR without showing up in Webmaster tools. I shall keep an eye out to see whether the relative ranking changes, and when the link shows up in Webmaster tools.

Yahoo

In Yahoo, it’s the site home page that is showing up in the rankings; the blog home page is nowhere to be seen. Indeed, Site Explorer doesn’t recognise the blog home page among the six pages that it currently lists.

However, Site Explorer is giving credit for the one relatively powerful link to the site.

Observations and predictions

1) The blog home page being “ahead” of the home page in Google rankings seems to suggest that the links garnered by tag aggregation – I am disappointed but not wholly surprised to discover that the word “taggregation” has already been coined – may have a significant role to play in getting content indexed and ranked. I will not put it more strongly than that at present. It may be worth experimenting with a new blog, unlinked elsewhere, to test this hypothesis – by watching how it performs up to the point that someone manually links to it.

2) Having a top 3 result for a plausible if specialised phrase does not necessarily generate traffic.

3) Google is more interested in blog content than Yahoo (?)

Prediction: when Webmaster Tools shows the strong site in the external links, the home page for the site will outperform the blog home page in Google. 

Thinking about it, the other possible reason that the blog home page may be outperforming the home page is content – there’s typically a lot more content on the blog page and (obviously enough) the phrase “search experiments” gets mentioned all the time on it.