SEO Office Hours, January 7th, 2022

This is a summary of the most interesting questions and answers from the Google SEO Office Hours with John Mueller on January 7th, 2022.

Can the consistency of blog posting affect crawling and ranking?

03:28 “I have a blog in which I post almost one article every day and [there’s] another person [posting] about one article a week. According to consistency, […] will it affect the frequency of how Google will crawl my website, or […] does this consistency have any connection with the ranking?”

According to John, “There are a lot of factors that go into ranking, and being able to crawl and index a website is one of those things. But if we’re talking about one page per day or one page a week […] for us to crawl ‒ that is trivial. If we’re talking about millions of pages every day, then sometimes the technical capabilities come into play, and crawl budget is a topic. But if we’re talking about a couple of pages a day, […] or even ten thousand pages a day, then that’s something that we can usually crawl in a reasonable time anyway. So that’s less a matter of being able to crawl it on time and more a matter of all of the other factors that we use around search.”

05:27 “When I used to post one article every day, I saw that Google crawled my website almost every day. […] But when I became inconsistent, I saw that Google crawled the site once every two days or less often. Is this a fact?”

John said, “That can happen. It’s not so much that we crawl a website, but we crawl individual pages of a website. When it comes to crawling, we have two types of crawling roughly. One is a Discovery crawl where we try to discover new pages on your website, and the other is the Refresh crawl where we update existing pages that we know about. For the most part, for example, we would Refresh crawl the homepage once a day or every couple of hours. And if we find new links on that homepage, then we’ll go off and crawl those with the Discovery crawl as well. Because of that, you’ll always see a mix of Discovery and Refresh happening with regards to crawling, and you’ll see some baseline of crawling happening every day.

But if we recognize that individual pages change very rarely, then we realize we don’t have to crawl them all the time. For example, if you have a news website and you update it hourly, then we should learn that we need to crawl it hourly. Whereas if it’s a news website that updates once a month, then we should learn that we don’t need to crawl every hour. That’s not a sign of quality or ranking. Just, from a technical point of view, we’ve learned we can crawl this once a day or once a week, and that’s okay.”
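If you want to see this crawl pattern on your own site, one option is to count Googlebot hits per URL in your server logs. Below is a minimal sketch, assuming a log in the common Apache/Nginx “combined” format and a made-up file name; Search Console’s Crawl Stats report shows a similar picture without log access.

```python
import re
from collections import Counter

# Hypothetical path to an access log in the "combined" format.
LOG_PATH = "access.log"

# IP ident user [date] "METHOD path HTTP/1.x" status size "referer" "user-agent"
LINE_RE = re.compile(
    r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+)[^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
)

hits_per_url = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = LINE_RE.match(line)
        # Matching the user-agent string alone is spoofable; for a real
        # audit, verify Googlebot via reverse DNS lookup as well.
        if match and "Googlebot" in match.group(2):
            hits_per_url[match.group(1)] += 1

# The most frequently re-crawled URLs are typically the homepage and
# hub pages that Google refresh-crawls to discover links to new pages.
for url, hits in hits_per_url.most_common(10):
    print(f"{hits:6d}  {url}")
```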

Need to optimize your crawl budget?

Contact us for crawl budget optimization services.

Do hreflang tags affect a website’s ranking?

09:47 “I have one website [that] performs very well in a particular language. Then I decided to create an English version of that website to target people in a new domain. Should I add an hreflang tag to connect these two separate domains or leave it alone for Google to figure out itself? Can these hreflang tags impact my website’s performance?”

John answered, “Hreflang is on a per-page basis, so it would only make sense if you have equivalent pages in other languages or for other countries. It’s not something that does ‘the whole website’ kind of thing. So if you have some pages that have equivalent versions, using hreflang is a good way to connect them. What happens with hreflang is that the ranking stays the same, but we try to swap out the URL against the best fitting one. So if someone is searching for your website name and we have an English version and a French version, and we can tell the user is in France or searching in French, then we’ll try to show the French version of the homepage. That works across the same [and] different domains.”

John concluded, “that’s essentially a good practice [but] it’s not necessary. It doesn’t change the rankings, but it helps to make sure that your preferred version is shown to the user. It doesn’t guarantee it, but it makes it easier for us to show the preferred language version. So if someone is searching in French and we have your French and your English pages, we wouldn’t accidentally show the English page to them.”
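For reference, hreflang annotations are ordinary link elements in each page’s head, and every version must list all the others, including itself; the x-default entry is an optional fallback. An illustrative pair for two homepages on separate domains (the domain names are made up):

```html
<!-- On https://example.fr/ (the French homepage) -->
<link rel="alternate" hreflang="fr" href="https://example.fr/" />
<link rel="alternate" hreflang="en" href="https://example.com/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />

<!-- On https://example.com/ (the English homepage): the same set,
     so the annotations are reciprocal and self-referential -->
<link rel="alternate" hreflang="fr" href="https://example.fr/" />
<link rel="alternate" hreflang="en" href="https://example.com/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```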

Learn more about hreflang tags in our Ultimate Guide to International SEO. 

Google Chrome data vs. ranking

12:39 “What data does Google Chrome collect from users for ranking?”

John said, “I don’t think we use anything from Google Chrome for ranking. The only thing that happens with Chrome is for the Page Experience report. We use the Chrome User Experience Report data, which is that aggregated data of what users saw when they went to the website with regards to the Page Experience specifically.”

John also reassured listeners that Google doesn’t use Google Analytics data for ranking. Metrics such as Bounce Rate or Time on Page “are sometimes useful for site owners to look at, but that doesn’t mean that they’re useful for search to use.”

SERP features and ranking

27:56 “Since the number of features is increasing in search results, I’m wondering if, and how, Google Search Console is including rankings [from], for example, the Google Map Packs or People Also Ask in metrics like Average Position and Clicks, etc. If not, what’s the best way to see if my website is ranking in these different features?”

John replied, “For the most part, yes, we do include all of that in the Performance report data in Search Console. Anytime we show a URL from your website in the search results, we’ll show that as an impression for that website [and] query.

The Average Position also goes into play there, and it is not like the average position on a page but the average top position. So if your website is visible in positions three, four, and five, for example, then we’ll track three as the position for that individual query. […]

What you don’t see for a lot of these features is a breakdown by the feature type. So you can’t go in and say where is my website always being shown within Google Business profiles or the map searches. We don’t show that, but we do count that as an impression for those individual queries. You could take those queries, try them out and see where your website is being shown, and try to follow it back like that.

Sometimes the different features in the normal search results make things tricky to track. For example, if we show an image from your website in the image thumbnails at the top of a normal search results page, then we’ll also count that as your website appearing in the ranking for that query. And if you look at the search results in a textual way, then you might not see that immediately, but all of that should come into play.

When we launch new features where we also list the website, we do try to watch out to make sure that we also include that in Search Console, so it shouldn’t be the case that we show a link to your website and not track that as an impression with the Position and the Clicks in Search Console.”
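The “average top position” logic is easy to misread, so here is a small illustrative sketch of the aggregation John describes, using made-up impression data (this is an interpretation of the report’s behavior, not Google’s actual code):

```python
from statistics import mean

# Hypothetical data: for each impression (one serving of the query),
# the positions at which any URL from your site appeared.
impressions = [
    [3, 4, 5],  # site shown at positions 3, 4, and 5 -> tracked as 3
    [2, 7],     # tracked as 2
    [6],        # tracked as 6
]

# Only the topmost position per impression counts...
top_positions = [min(positions) for positions in impressions]

# ...and the report averages those top positions.
print(top_positions)                   # [3, 2, 6]
print(round(mean(top_positions), 2))   # 3.67 -> the Average Position
```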

JavaScript strings vs. crawl budget

30:32 “We see that every JavaScript string starting with a slash is interpreted as a URL and is followed by Googlebot. Sometimes the URL is not valid, and we see different crawl errors in Search Console. Is there an official recommendation on how to nofollow such URLs? We used to split the strings into two or more parts. Can having millions of pages with such strings negatively impact the crawl budget?”

John’s response was: “When it comes to crawling, we prioritize things in different ways, and all of these random URL discoveries that we come across, where your URL is mentioned in a text or a JavaScript file, […] tend to be fairly low on the list. So if we have anything important that we recognize on your website, any new pages that you link to, any new content that you’ve created, we’ll prioritize that first. Then if we have time, we’ll also go through all of these random URL mentions that we’ve discovered. So from a crawl budget point of view, this is usually a non-issue.

If you’re seeing that overall we’re crawling too much of your website, then you can adjust the amount of crawling in Search Console with the crawl rate settings. Again, here we still prioritize things, so if you set the setting to be fairly low, then we’ll still try to focus on the important things first. And if we can cover the important things, then we’ll try to go through the rest. From that point of view, if you’re seeing that we’re hitting your server too hard, you can adjust that, and after a day or two it should settle down at that new rate, and we should be able to keep on crawling.

With regards to nofollowing these URLs, you can’t do that in the JavaScript files. We try to recognize URLs in JavaScript because sometimes URLs are only mentioned in JavaScript. What you can do, however, is put these URLs into a JavaScript file that is blocked by robots.txt. And if that file is blocked by robots.txt, then we won’t be able to see the JavaScript file, and we won’t see those URLs. So if it’s a critical thing […], then you could use robots.txt to block that JavaScript file.

The important part here is to keep in mind that your site should still render normally with that file blocked. So in Chrome, you can block that individual URL and test it out, but especially the mobile-friendliness of a page should still be guaranteed. We should still be able to see the layout of the page properly with that JavaScript file blocked.

So if it’s only interactive functionality that is being blocked by that, then usually that’s less of an issue. If it blocks all of the JavaScript and your page doesn’t work at all anymore, then that’s something where I’d say maybe you need to find a different approach to handle that.”
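As a concrete reference for the approach John describes, the robots.txt rule could look like the snippet below, assuming the URL-like strings live in a file at a made-up path such as /js/strings.js. After adding it, test that the page still renders and remains mobile-friendly with that file blocked:

```
# robots.txt (the file path is hypothetical)
# Blocks crawlers from fetching this JavaScript file, so the
# URL-like strings inside it are never discovered as URLs.
User-agent: *
Disallow: /js/strings.js
```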

Struggling with JavaScript SEO on your website?

Drop us a line to receive a JavaScript SEO audit.

Nofollow vs. noindex tags

34:46 “Can rel=”nofollow” be used as “noindex”? For example, when I publish an article on my website, on every page where this article is mentioned, I’ll use rel=”nofollow” on the link to that article.”

John said, “No. Nofollow tells us not to pass any PageRank to those pages, but it doesn’t mean that we will never index those pages. If you want a page to be blocked from indexing, make sure it has a noindex on it. Don’t rely on us not accidentally running across a random link to that page, so I would not assume those two are the same.

In particular, with regards to new content on the web, […] we do sometimes use [links with rel=”nofollow”] for the discovery of URLs as well. So, on the one hand, we might see that link with [or without] a nofollow and still look at it anyway. If you don’t want a page indexed, then make sure it’s not indexed.”
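In markup terms, the two mechanisms John contrasts sit in different places: nofollow is an attribute on the link, while noindex has to be on the target page itself. A quick illustration (the URLs are placeholders):

```html
<!-- On a page that links to the article:
     stops PageRank from passing, but does not prevent indexing -->
<a href="https://example.com/article" rel="nofollow">The article</a>

<!-- In the head of the article page itself:
     this is what actually keeps the page out of the index -->
<meta name="robots" content="noindex">
```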

Problems with getting a page indexed

35:56 “We published a landing page about a month ago, and it hasn’t been indexed yet. I tested [it] with the live URL and requested indexing a few times. I understand indexing doesn’t always happen quickly, but this is the first time a landing page on our site is not indexed after a couple of days, so I’m wondering if there might be something I’ve missed?”

According to John, “It’s really hard to say without knowing the individual URLs. We don’t index everything on the web, so it’s completely common that for most websites, we index some chunk of the website but not absolutely everything on the website, so that might be something that you’re seeing there.

With regards to the amount of content that we index from individual websites ‒ sometimes that relies a little bit on our understanding of the quality of the website itself. So if we think this is a high-quality and important website, then maybe we’ll go off and try to crawl and index that content as quickly as possible, but there’s no guarantee there.

From that point of view, it’s tricky to see what exactly is happening here. What I might do in a case like this is to post in the help forum to make sure that there are no technical issues holding that URL back. Then otherwise give it a little bit more time or see what you can do overall to improve the quality of the website in general, which is usually something that’s more a long-term goal rather than something that you can just quickly tweak and hope that Google will pick it up and tomorrow everything will be different.”

The level of traffic for an article

37:44 “I’m looking at pruning some content on my site. Weak traffic is one of the criteria. What would you consider the minimum acceptable level of traffic to keep an article?”

John replied, “I don’t think purely going off and looking at the traffic to a page is enough reason to say this is a good or bad page. Some pages don’t get a lot of traffic but are extremely important. For example, if you’re selling Christmas trees, then you probably expect those pages to be visible in the search results in December, so if you look at the traffic to your pages in January or March, you might say, […] I should delete all my Christmas tree pages. But that’s not the right thing to do there – these pages will be relevant at some point in the future. Similarly, other kinds of pages on your website might get very little traffic, but they might be really good pages, and they might be important pieces of information on the web overall. So going in and saying at this level of traffic, I will delete everything from my website, I don’t think that makes sense.”

The increase in content uniqueness vs. ranking

43:35 “Does a significant increase in the overall uniqueness of a site’s content have no effect on the site’s ranking and visibility in the search results? Then, is it not worth the effort to fight against content theft?”

John answered, “As far as I know, there is no aspect in our algorithms that says this is something that is unique to this one website, and because there’s something very unique here, we’ll rank it higher for all kinds of other queries. If you’re selling a unique type of shoes and someone is searching for shoes, then it’s not that we would rank your site [higher] because it’s a unique type of shoes. But rather you have shoes, this person is looking for shoes, and maybe other sites also have shoes, and we’ll rank them based on the shoe content that we find there. So it’s not a matter of us going through and saying, well, there’s only something very unique here, therefore we should rank it higher for this more generic term.

Obviously, if you have something unique and someone is searching for that unique thing, then we will try to show your site there, and that’s the reason also for things like the DMCA complaint process, where you can say someone else is ranking with my unique things and I don’t want them to show up because that’s my content, or I have copyright on it at least. […] If you’re seeing that other sites are ranking for that unique thing that you have on your website, and you have a copyright on your content and whatever else is needed to use a DMCA process, then that’s a perfectly fine tool to try to help clean that up. But it’s not the case that we will rank your website higher just because we’ve seen some unique things on your website.”


Hi! I’m Bartosz, founder and Head of SEO @ Onely. Thank you for trusting us with your valuable time, and I hope you found the answers to your questions in this blog post.

If you’re still wondering how exactly to move forward with fixing your website’s technical SEO, check out our services page and schedule a free discovery call, where we will do all the heavy lifting for you.

Hope to talk to you soon!