A summary of the most interesting questions and answers from the Google SEO Office Hours with John Mueller on November 27, 2020. 

Relaunch of Google Search Console’s URL Inspection Tool

1:04 One participant asked John about the status of the URL Inspection Tool's "Request indexing" feature, which used to make it easier to get new URLs indexed by Google but was removed from Google Search Console a few weeks ago.

John didn’t have an update or a timeline. His advice was to improve the quality of your content and your website if your new pages don’t get indexed within a reasonable time. In John’s view, the “Request indexing” feature shouldn’t be used as a regular way of getting Google to index your content.

John mentioned that "the team is working on it, so it's not that it's going away", so we can expect the feature to return.

There was a follow-up question at 7:30 about the possible new features of the URL Inspection Tool, but John had no updates either.

Conflicting hreflangs in sitemap and on pages

8:10 There was a question about how Google deals with conflicting hreflang tags in the sitemap and in the source code. For instance, the hreflang tag in the sitemap might say that a page is for English users in the U.S., while the tag in the source code says it's for French users in the U.S.

John said that Google combines signals in that case. So the page would be treated as both for English and French users in the US.

It becomes genuinely conflicting or confusing when the same hreflang value points to two different pages. Google doesn't prioritize sitemap tags over source code, so the indexing systems would have to take a guess.
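As an illustration of the kind of conflict described above (domain and paths are hypothetical), a sitemap entry and the page's own markup might disagree like this:

```html
<!-- In the XML sitemap: the page is declared for English users in the U.S. -->
<url>
  <loc>https://example.com/page</loc>
  <xhtml:link rel="alternate" hreflang="en-US"
              href="https://example.com/page"/>
</url>

<!-- In the page's <head>: the same page is declared for French users in the U.S. -->
<link rel="alternate" hreflang="fr-US" href="https://example.com/page"/>
```

Per John's answer, Google would combine these signals rather than prefer one source over the other.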

Content syndication and the canonical tag

11:05 The next question was about content syndication. If a publisher with many domains publishes the same piece of content on several websites, what should they do to inform Google that it belongs to the same network of websites?

John recommended using the canonical tag to let Google know about the preferred original content piece. If the canonical tag isn’t used, there might be 20 pages competing against each other in the search results with weak ranking signals. The canonical tag allows you to consolidate your ranking signals for the one preferred page.
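A minimal sketch of the setup John describes, assuming two hypothetical domains in the same publisher network:

```html
<!-- On the syndicated copy at https://partner-site.example/article -->
<!-- The canonical tag points Google at the preferred original, -->
<!-- consolidating ranking signals on that one URL. -->
<link rel="canonical" href="https://original-site.example/article"/>
```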

A follow-up question was whether "internal" linking between several pages belonging to the same publisher would be problematic and could eventually result in a manual penalty.

John commented that this is normal behavior for the most part. Unless the network of websites is really large and there's reason to believe the linking exists purely for SEO purposes, there's nothing to worry about.

Anchor text – how long should it be?

16:49 A participant asked whether Google prefers longer, more informative anchor text, like “You can buy cheap shoes here”, or if the anchor text can just be “Cheap shoes” and it makes no difference for Google.

John said that anchor text shouldn’t be over-optimized for length. Anchor text is used for more context, but it doesn’t necessarily mean that it should be longer. For internal linking, you should focus on making it more informative for your users.
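To illustrate with the examples from the question (URL hypothetical), both of these are fine; what matters is that the anchor describes the target, not that it hits a particular length:

```html
<!-- Short but descriptive -->
<a href="https://example.com/shoes/sale">Cheap shoes</a>

<!-- Longer, with more surrounding context for users -->
<a href="https://example.com/shoes/sale">You can buy cheap shoes here</a>
```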

New Crawl stats report – the difference between “Discovery” and “Refresh”

27:12 Google Search Console was recently updated with a new Crawl stats report that allows you to get more data about Googlebot’s activity on your website.

The question was about the difference between “Discovery” and “Refresh” statuses in the new report.

John said that crawling is split into “Refresh” crawling, which serves to update information about the page, and “Discovery” crawling, which makes it possible to discover new pages, found via internal or external links. For most sites, crawling is primarily focused on refreshing the information about the site. “Refresh” doesn’t mean Googlebot only looks for changes in the content – it may also extract newly added links.

Google News uses different ranking algorithms than Google Search

32:05 Responding to a question about Google News, John stated that it uses different ranking algorithms than Google Search.

Can crawlers read the metadata from an iframe?

34:30 If some content is imported through an iframe, can crawlers read the metadata from the imported content?

John replied that it’s not always possible for crawlers to extract metadata from an iframe. His recommendation was to try to implement the content that you think is important in your own source code or use JavaScript to fetch it. That way, Google can fully access the content after the rendering phase.
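A rough sketch of the two approaches (URLs and the JSON endpoint are hypothetical): metadata inside a framed document may not be picked up, while content placed in your own markup, or fetched and injected with JavaScript, is available to Google after the rendering phase:

```html
<!-- Metadata inside the framed document may not be read by crawlers -->
<iframe src="https://widgets.example.com/embed"></iframe>

<!-- Alternative: fetch the important content and place it in your own DOM -->
<div id="embedded-content"></div>
<script>
  fetch('https://widgets.example.com/embed.json')
    .then(function (response) { return response.json(); })
    .then(function (data) {
      document.getElementById('embedded-content').textContent = data.text;
    });
</script>
```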

How to take advantage of Web Stories with long-form content?

39:20 Web Stories are now displayed in Google Discover, which is a great new way to get more traffic to your site. But how do you use Web Stories when you have a longer piece of content?

John commented that he has seen Web Stories used to build interest in a topic and then link out to a larger piece of content. However, it all depends on the type of content you have.

Updating content while keeping the old content in search results

49:37 An interesting question was asked about content that is regularly updated while users still search for the old content with very similar keywords. The example given was programming documentation: programming languages evolve and the documentation for the most recent version needs to be findable, but programmers also use old versions and need to find that information on Google as well.

John recommended using a single, canonical URL for the most up-to-date version and moving the outdated content to a new URL when a new version arrives.

For instance, you may have a page about the iPhone 14 at example.com/iPhone. When the iPhone 15 is released, you move the iPhone 14 content to example.com/iPhone-14 and update example.com/iPhone with information about the new product. According to John, this approach allows users to find both old and new information on Google.
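Using the hypothetical URLs from the example above, the evergreen page can also link back to the archived versions so both remain discoverable:

```html
<!-- On example.com/iPhone, which always holds the latest version's content -->
<nav>
  Older models:
  <a href="/iPhone-14">iPhone 14</a>
  <!-- each superseded version gets its own permanent URL -->
</nav>
```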