>>Matt Cutts: Today’s question comes from
Land Lubber in Colorado, who asks: “Does PageRank take into account cross-browser compatibility? If a site isn’t compatible with certain browsers, does that make a difference for Googlebot?” The answer is no on both counts.

So I’ve mentioned this in another video, but
let me just reiterate: PageRank is based on the number of people who link to you and how reputable they are; that is, on the links that come to your site. It is completely independent of the content of your site. So PageRank doesn’t take into account cross-browser compatibility, because it doesn’t take into account the content of the website or the webpage; it only takes into account the links. That’s the essence of PageRank: it looks at what we think of as the reputation of those links.
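
To make that “links only” point concrete, here is a toy sketch in Python of the classic power-iteration form of PageRank (a simplified version of the published PageRank formula, not Google’s production system). Notice that the only input is which page links to which; nothing about a page’s content, HTML quality, or browser rendering ever enters the computation.

    def pagerank(links, damping=0.85, iterations=20):
        """Toy PageRank: 'links' maps each page to the pages it links out to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            # Everyone starts with the small "random jump" share...
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            # ...then each page passes its current rank along its outgoing links.
            for page, targets in links.items():
                for target in targets:
                    new_rank[target] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    # Hypothetical three-page web; dangling pages are ignored in this toy version.
    print(pagerank({"a.com": ["b.com"], "b.com": ["a.com", "c.com"], "c.com": ["a.com"]}))
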
So now the next question: if a site isn’t compatible with certain browsers, does that make a difference for Googlebot? Well, let’s play it through.

Suppose Googlebot comes to your site and Googlebot says, “I would like to crawl a page from your site. Please give it to me so I may index it.” We take that page, we look at it, and we look for textual content on that page. But we’re almost always crawling as Googlebot. Maybe we’ll crawl as Googlebot Mobile, or, you know, AdsBot, or the Google image bot, or something like that, but we try to provide very nice, descriptive ways for you to tell that Google is coming to your site, unless we’re doing a spam check or something like that, or someone’s coming to your site to see whether you’re cloaking, something like that.

So Googlebot comes to your page, it tells
you it’s Googlebot, and it tries to index the page that it gets. So it really doesn’t
have much of a notion of, “How do things render differently for a mobile browser versus Internet
Explorer 6 versus Netscape 2 versus Firefox 4 or whatever?” We’re just going to take a
look at the textual content and try to make sure that we index it.

Now if you want to make sure that you don’t
get in trouble in terms of cloaking or anything like that, you want to make sure that you return the same page to Googlebot that you return to regular users. So just make sure that you don’t have any special code that’s doing an “if Googlebot” check, whether that’s checking if the user agent is Googlebot or if the IP address is from Google. If you’re not doing anything special for Google, and you’re just doing whatever you would normally do for your users, then you’re not going to be cloaking and you shouldn’t be in any trouble as far as that goes.
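
As a concrete illustration of that advice, here is a minimal sketch of a web handler, using Flask purely as a hypothetical example (nothing Google publishes or requires). The commented-out branch shows the kind of user-agent special-casing to avoid; the handler itself returns the same HTML to every visitor.

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def homepage():
        user_agent = request.headers.get("User-Agent", "")
        app.logger.info("Serving the same page to: %s", user_agent)

        # Risky pattern: branching on the crawler is exactly the kind of
        # special-casing that can look like cloaking, so avoid code like this:
        #
        #     if "Googlebot" in user_agent:
        #         return page_built_just_for_google()  # hypothetical helper
        #
        # Safe pattern: every visitor, crawler or human, gets the same page.
        return "<html><body><h1>The same content for everyone</h1></body></html>"

    if __name__ == "__main__":
        app.run()
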
So Google doesn’t look at cross-browser site compatibility or things like that, and in fact Google tries to be relatively liberal, expecting even somewhat broken HTML, because not everybody writes perfect HTML; that doesn’t mean the information on the page isn’t good. There were some studies that showed that 40 percent of all webpages had at least some sort of syntactic error, but if we threw out 40 percent of all pages, you’d be missing 40 percent of all the content on the Web.

So Google tries to interpret content even
if it’s not syntactically valid, even if it’s not well formed, even if it doesn’t validate. For all these sorts of reasons, we have to take the Web as it is and try to return the best page to users, even if the results that we see are kind of noisy.
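
Just to illustrate the kind of leniency being described, the snippet below uses BeautifulSoup (an ordinary Python HTML library, not Google’s actual parser) to pull the readable text out of markup that is full of validation errors: unclosed tags, a missing attribute quote, no closing body or html tags.

    from bs4 import BeautifulSoup

    # Deliberately broken HTML of the sort a forgiving parser still copes with.
    broken_html = """
    <html><body>
      <h1>Cheap widgets<h1>
      <p>We sell <b>great widgets at <a href=/buy>low prices
    """

    soup = BeautifulSoup(broken_html, "html.parser")
    print(soup.get_text(" ", strip=True))
    # Prints the recoverable text, roughly: Cheap widgets We sell great widgets at low prices
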
So historically we haven’t provided any sort of penalty by saying, “Oh, you didn’t validate,” or “It’s not clean HTML.” We don’t have any sort of factor, to the best of my knowledge, that looks at compatibility with certain browsers or cross-browser compatibility of a site. Hope that helps.