How To Use Google Search Console’s Index Coverage (Page Indexing) Report

Index Coverage (Page Indexing) is a report in Google Search Console that shows the crawling and indexing status of all URLs that Google has discovered for your website. 

It helps you track your website’s indexing status and keeps you informed about technical issues preventing your pages from being crawled and indexed correctly. 

Checking the Index Coverage (Page Indexing) report regularly will help you spot and understand issues and learn how to address them. 

In this article, I will describe:

  • What the Index Coverage (Page Indexing) report is,
  • When and how you should use it,
  • The statuses shown in the report, including types of issues, what they mean, and how to fix them.

How to use the Index Coverage (Page indexing) report

For a page to be ranked and shown to users, it needs to be discovered, crawled, and indexed. During this process, Google finds out about the existence of a URL, examines its content, and then adds the collected information to its index.

If something goes wrong along the way, you can find out why your page is not indexed in Google Search Console’s Page Indexing (Index Coverage) report.

To get to the Index Coverage (Page indexing) report, log in to your Google Search Console account. Then, in the menu on the left, select “Pages” in the Index section:

[Screenshot: the Pages report in the Index section of the Google Search Console menu]

You will then see the report. By ticking the status checkboxes, you can choose what is visualized on the chart:

[Screenshot: the Page indexing chart with the status checkboxes ticked]

“All known pages”, “All submitted pages” vs. “Unsubmitted pages only”

In the upper left corner, you can select whether you want to view:

  • “All known pages”, which is the default option, showing URLs that Google discovered by any means,
  • “All submitted pages”, including only URLs submitted in a sitemap, or
  • “Unsubmitted pages only”, including only URLs that Google discovered through links but that are not present in your sitemap.

You will likely find a stark difference between the statuses of “All submitted pages” and “All known pages.” “All known pages” normally contains more URLs, and more of them are reported as Not indexed. That’s because sitemaps should only contain indexable URLs, while most websites contain many pages that shouldn’t be indexed – for example, URLs with tracking parameters on eCommerce websites. Search engine bots like Googlebot may find those pages by various means, but they should not find them in your sitemap.
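
For reference, here is a minimal sketch of a sitemap that lists only indexable, canonical URLs (the addresses are hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- Only canonical, indexable URLs belong in a sitemap -->
      <url>
        <loc>https://www.example.com/category/product/</loc>
      </url>
      <!-- Do NOT list parameterized duplicates such as
           https://www.example.com/category/product/?utm_source=newsletter -->
    </urlset>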

So always be mindful when opening the Index Coverage (Page indexing) report and make sure you’re looking at the data you’re interested in.

Inspecting the URL statuses

Indexed pages

To browse the URLs that are indexed within your website, go to the View data about indexed pages section, just below the chart.

Here you can see on a chart how the number of your indexed pages has changed over time.

Below the chart, you can explore the list of your indexed pages. But remember that you may not see all of them as:

  • The report shows up to 1,000 URLs, and
  • New URLs may have been added since Google’s last crawl.

To get more information, you can inspect each URL by selecting it from the list and clicking Inspect URL in the panel on the right.

[Screenshot: the list of indexed pages with the Inspect URL option]

Not indexed pages

To see details on why pages are reported as Not indexed, look below the chart in the Page indexing report:

[Screenshot: the list of reasons why pages aren’t indexed in the Page indexing report]

This section displays the reason behind a given status, its source (whether the issue is caused by your website or by Google), and the number of affected pages.

You can also see the validation status – after fixing an issue, you can inform Google that it has been addressed and ask to validate the fix. 

This is possible at the top of the report after clicking on the issue:

[Screenshot: the validation status shown at the top of an issue’s details page]

The validation status can appear as “fixed,” but it can also show “failed” or “not started” – you should prioritize fixing issues with these last two statuses.

You can also see the trend for each status – whether the number of URLs has been rising, dropping, or staying at the same level.

After clicking on one of the issue types, you will see which URLs are affected. In addition, you can check when each URL was last crawled – however, this information is not always up-to-date due to possible delays in Google’s reporting.

There is also a chart showing how the number of pages affected by the issue changed over time.

[Screenshot: the chart showing how an issue changed over time]

Improve page appearance section

Although some of your pages are indexed, they may still be affected by issues you need to address to keep your website healthy.

Google places such URLs in a separate section in the Page Indexing (Index Coverage) report.

If you’re struggling with any issues that apply to this case, like “Indexed, though blocked by robots.txt,” you’ll find the Improve page appearance section below the list of your Not indexed pages.

[Screenshot: the Improve page appearance section]

Here are some important considerations you should be aware of when using the report:

  • Always check if you’re looking at all submitted pages or all known pages. The difference between the status of the pages in your sitemap vs. all pages that Google discovered can be very stark.
  • The report may show changes with a delay, so whenever you release new content, give it at least a few days to get crawled and indexed.
  • Google will send you email notifications about any particularly pressing issues encountered on your site. 
  • Your aim should be to index the canonical versions of the pages you want users and bots to find. 
  • As your website grows and you create more content, expect the number of indexed pages in the report to increase.

How often should you check the report?

You should check the Index Coverage report regularly to catch any problems with crawling and indexing your pages. Generally, try to check the report at least once a month.

But if you make any significant changes to your site, like adjusting the layout or URL structure, or conducting a site migration, monitor the results more often to spot any negative impact. In that case, I recommend visiting the report at least once a week and paying particular attention to the Not indexed status.

URL Inspection tool

Before diving into the specifics of each status in the Index Coverage (Page indexing) report, I want to mention one other tool in the Search Console that will give you valuable insight into your crawled or indexed pages. 

The URL Inspection tool provides details on whether:

  • The page is indexed,
  • The page is indexed but has issues (e.g., problems with structured data), or
  • The page isn’t indexed.

You can find it in Google Search Console in the search bar at the top of the page.

Simply paste a URL that you want to inspect – you will then see the following data:

[Screenshot: the URL Inspection tool’s results for an inspected URL]

You can use the URL inspection tool to:

  • Check the index status of a URL and, in case of issues, see what they are and troubleshoot them,
  • Learn if a URL is indexable,
  • View the rendered version of a URL,
  • Request indexing of a URL – e.g., if a page has changed,
  • View loaded resources, such as JavaScript,
  • See what enhancements a URL is eligible for – e.g., based on the implementation of structured data and whether the page is mobile-friendly.

If you encounter any issues in the Index Coverage (Page indexing) report, use the URL inspection tool to verify them and test the URLs to better understand what should be fixed. 

The statuses in the Index Coverage (Page indexing) report and types of issues

It’s time to look at the Not indexed and Improve page appearance statuses in the Index Coverage (Page indexing) report and discuss:

  • The specific issue types they can show,
  • What causes these issues, and
  • How you should address them.

Not indexed

You may find that many URLs in the Not indexed section have been excluded for the right reasons. But it’s important to regularly check which URLs are not indexed and why to ensure your critical URLs are not kept out of the index.

Excluded by ‘noindex’ tag

Googlebot found a page but could not index it because of a noindex rule – either a meta tag on the page or a header in the HTTP response. It’s worth routinely going through these URLs to ensure the right ones are blocked from the index.
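
For context, the on-page form of this rule is a meta tag in the page’s <head> – a minimal sketch:

    <!-- Keeps the page out of Google's index while still allowing crawling -->
    <meta name="robots" content="noindex">

The equivalent HTTP response header is X-Robots-Tag: noindex, which also works for non-HTML resources such as PDFs.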

Learn how to approach this issue in Justyna Jarosz’s article on the “Excluded by ‘noindex’ tag” status.

Blocked by page removal tool

These URLs have been blocked from Google using Google’s Removals tool. However, this method works only temporarily – typically after 90 days, Google may show them in search results again. If you want to block a page permanently, remove or redirect it, or use a noindex tag.

Check out more about the “Blocked by page removal tool” status on our blog.

Server error (5xx)

As indicated by the name, it refers to server errors with 5xx status codes, such as 502 Bad Gateway or 503 Service Unavailable. 

You should monitor this section regularly, as Google will have trouble indexing pages with server errors. You may need to contact your server administrator to fix these errors or check if they are caused by any recent upgrades or changes on your site. 
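
If you want to spot-check the affected URLs outside of Search Console, a small script is enough. Here is a minimal sketch using Python’s requests library (the URL list is hypothetical):

    import requests

    # Hypothetical URLs copied from the "Server error (5xx)" list
    urls = [
        "https://www.example.com/page-1/",
        "https://www.example.com/page-2/",
    ]

    for url in urls:
        try:
            # HEAD is enough to read the status code without downloading the body
            response = requests.head(url, allow_redirects=True, timeout=10)
            print(url, response.status_code)
        except requests.RequestException as exc:
            print(url, "request failed:", exc)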

Check out Google’s suggestions on how to fix server errors.

Redirect error

A redirect error indicates that a redirect you set up didn’t work, so it didn’t transfer search engine bots and users from the old URL to the new one. Such errors are usually caused by poor redirect configuration, such as redirect chains or loops.
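
One way to diagnose this is to follow the redirect with a script and print every hop. A minimal sketch, again using Python’s requests library with a hypothetical URL:

    import requests

    url = "https://www.example.com/old-page/"  # hypothetical redirecting URL

    try:
        # requests follows redirects by default and records each hop in .history
        response = requests.get(url, timeout=10)
        for hop in response.history:
            print(hop.status_code, hop.url)
        print(response.status_code, response.url)  # the final destination
    except requests.TooManyRedirects:
        # A redirect loop or an overly long chain - exactly the kind of
        # setup that triggers the "Redirect error" status
        print("Redirect loop or chain too long:", url)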

Learn how to fix this issue by reading our article on the “Redirect error” status.

Blocked by robots.txt

Robots.txt is a file containing instructions on how robots should crawl your site.

If a URL with this status should be indexed, Google needs to crawl it first, so go through the URLs blocked by robots.txt and check whether you intended to block them.
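
As a quick reference, a robots.txt rule that blocks crawling looks like this (the paths are hypothetical):

    User-agent: *
    # Block crawling of internal search results and cart pages
    Disallow: /internal-search/
    Disallow: /cart/

If a URL with this status should be indexed, make sure no Disallow rule in your robots.txt matches it.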

Blocked due to unauthorized request (401)

The 401 Unauthorized status code means that a request cannot be completed because it requires logging in with a valid user ID and password. Googlebot cannot index pages hidden behind logins – this tends to occur in staging environments. In this case, either remove the authorization requirement or let verified Googlebot requests through so it can access the pages.

If these URLs shouldn’t be indexed, this status is fine. However, to keep these URLs out of Google’s reach, ensure your staging environment cannot be found by Google. For example, remove any existing internal or external links pointing to it.
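
If you choose to let a verified Googlebot through your authorization layer, Google recommends a reverse-then-forward DNS check on the requesting IP. Here is a simplified Python sketch of that check (production setups usually cache the result):

    import socket

    def is_googlebot(ip: str) -> bool:
        """Reverse-then-forward DNS check for a crawler IP."""
        try:
            # Reverse lookup: the hostname must belong to Google's crawl infrastructure
            hostname = socket.gethostbyaddr(ip)[0]
            if not hostname.endswith((".googlebot.com", ".google.com")):
                return False
            # Forward lookup: the hostname must resolve back to the same IP
            return ip in socket.gethostbyname_ex(hostname)[2]
        except OSError:
            return False

    # Example with an IP from a published Googlebot range
    print(is_googlebot("66.249.66.1"))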

Crawled – currently not indexed

Googlebot has crawled a URL but is waiting to decide whether it should be indexed. 

If you want to learn about what could be causing this status and how to address any issues, be sure to read our article on how to fix “Crawled – currently not indexed”.

Discovered – currently not indexed

This means that Google has found a URL – for example, in a sitemap – but hasn’t crawled it yet. 

Keep in mind that in some cases, it could simply mean that Google will crawl the URL soon. However, this status can also point to crawl budget problems, e.g., when Google views your website as low quality.

If you want to learn more about this status ‒ read our article on how to fix “Discovered – currently not indexed”.

Alternate page with proper canonical tag

This URL is a duplicate that correctly points to the canonical version of the page with its canonical tag. Canonical tags are used to specify the URL that represents the primary version of a page.
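
For reference, a canonical tag is a link element in the page’s <head>. A minimal sketch with hypothetical URLs:

    <!-- Placed on https://www.example.com/product/?color=blue (the duplicate) -->
    <link rel="canonical" href="https://www.example.com/product/">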

In most cases, this status doesn’t need to be fixed. However, if you want to make sure your canonical tags are correct, you should check our guide on how to fix “Alternate page with proper canonical tag” in Google Search Console.

Duplicate without user-selected canonical

This page has duplicates, and no canonical version is specified. As a result, Google doesn’t view the affected URLs as canonical and picks a canonical version itself.

You can use the URL inspection tool to learn which URL Google chose as canonical. For more tips, check our article about the “Duplicate without user-selected canonical” status.

Duplicate, Google chose different canonical than user

You chose a canonical page, but Google selected a different page as canonical. 

The page you want to have as canonical may not be as strongly linked internally as a non-canonical page, which Google may then choose as the canonical version. 

If you want to learn more about possible causes and solutions for this status, read our guide on how to fix the “Duplicate, Google chose different canonical than user” issue.

Not found (404)

A 404 error indicates that the requested page could not be found because it was changed, moved, or deleted. Error pages exist on every website, and a few of them generally won’t harm your site. But whenever a user encounters an error page, it may lead to a negative experience.

If you see this issue in the report, go through the affected URLs and check if you can fix the “Not found (404)” errors. 

Page with redirect

Pages with the “Page with redirect” status redirect to other URLs, so they haven’t been indexed. Pages here generally don’t require your attention.

To permanently redirect a page, make sure you implemented a 301 redirect to the closest alternative page. Avoid redirecting 404 pages to the homepage – Google may treat such redirects as soft 404s.
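
What the 301 looks like depends on your server. For example, assuming a hypothetical nginx setup, a single permanent redirect can be as simple as:

    # Permanently redirect a removed page to its closest alternative
    location = /old-page/ {
        return 301 /new-page/;
    }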

Soft 404

A soft 404 means a page returns a 200 OK status, but its content makes it look like an error page, e.g., because it’s empty or thin. It may also be a custom 404 page with user-friendly content directing visitors to other pages while still returning a 200 OK HTTP code.

To fix soft 404 errors, you can:

  • Add or improve the content on these URLs, 
  • 301 redirect them to the closest matching alternatives, or
  • Configure your server to return proper 404 or 410 codes, as sketched below.
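
For the last option, assuming the same hypothetical nginx setup as above, returning a true “gone” status for permanently removed content looks like this:

    # Tell crawlers this page is permanently gone instead of
    # serving a 200 OK "error" page
    location = /discontinued-product/ {
        return 410;
    }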

Also, as a follow-up, read our article on what soft 404s are in SEO.

Blocked due to access forbidden (403)

The 403 Forbidden status code means the server understands the request but refuses to authorize it. You can either grant access to anonymous visitors so Googlebot can access the URL or, if this is not possible, remove the URL from your sitemaps. And if these URLs shouldn’t be indexed, it’s better to use a noindex tag.

Blocked due to other 4xx issue

Your URLs may not be indexed due to 4xx issues not covered by the other error types. 4xx status code errors generally refer to problems caused by the client ‒ check these pages to learn what the error is.

You can learn more about what is causing each problem by using the URL Inspection tool. Fix the problems according to the specific code that appears. If you cannot resolve the error, remove the URL from your sitemap.

To learn more about this status, read our article on how to fix “Blocked due to other 4xx issue” in Google Search Console.

Improve page appearance

Although the URLs in the Improve page appearance section are part of Google’s index, they still need a closer look on your side. Fix them to ensure the affected URLs don’t sabotage your full visibility and traffic potential.

Indexed, though blocked by robots.txt

Using robots.txt directives is not a bulletproof way to prevent pages from being indexed. Google may still index a page without visiting it, e.g., if other pages link to it.

In this case, the affected page will respond with the “Indexed, though blocked by robots.txt” status.

Here you may need to reevaluate your indexing strategy and decide which pages with this status should stay indexed and which should be blocked from crawling with robots.txt. Remember that for Google to see a noindex tag on a page, it must be able to crawl that page, so don’t block it in robots.txt at the same time.

Page indexed without content

Sometimes, a given URL may get indexed even if:

  • The page you published has no content, or
  • Google can’t read or access such content.

And, although the issue seems to be minor, you shouldn’t ignore it.

For further reading on the topic, take a look at the article on the “Page indexed without content” status on our blog.

Conclusion

The Index Coverage (Page indexing) report shows a detailed overview of your crawling and indexing issues and points to how they should be addressed, making it a vital source of SEO data.

Your website’s crawling and indexing status is not straightforward – not all of your pages should be crawled or indexed. Ensuring such pages are not accessible to search engine bots is as crucial as having your most valuable pages indexed correctly.

The report reflects the fact that your indexing status is not black or white. It highlights the range of states that your URLs might be in, showing both serious errors and minor issues that don’t always require action. If you’re struggling to understand what action you should take to improve your website’s indexing, contact us for technical SEO services.

Ultimately, you should regularly browse Google’s Index Coverage (Page indexing) report and intervene when it doesn’t align with your indexing strategy.

Hi! I’m Bartosz, founder and Head of SEO @ Onely. Thank you for trusting us with your valuable time, and I hope you found the answers to your questions in this blog post.

In case you are still wondering how exactly to move forward with fixing your website’s technical SEO – check out our services page and schedule a free discovery call where we will do all the heavy lifting for you.

Hope to talk to you soon!