How To Fix “Crawled – Currently Not Indexed” in Google Search Console


“Crawled — currently not indexed” is a Google Search Console status indicating that a given URL is known to Google and has been crawled and analyzed, but Google chose not to index it.

Possible reasons for this issue include:

  1. Indexing delays.
  2. Poor content quality.
  3. Deindexing due to insufficient quality.
  4. Poor website architecture.
  5. Duplicate content.

Google’s documentation defines the Crawled – currently not indexed status as:

The page was crawled by Google, but not indexed. It may or may not be indexed in the future; no need to resubmit this URL for crawling.
source: Google

Reading this explanation might feel frustrating, especially if the status affects a page important for your business. Google’s definition doesn’t clarify what happened or what you should do next. All it says is that Googlebot crawled your page but, for some reason, decided not to index it.

According to our research, the Crawled – currently not indexed status is the most common issue reported in the Index Coverage report. It means that you’ve probably already experienced it, or you’re likely to experience it in the future.

It’s crucial to fix the problem as soon as possible. After all, if your page isn’t indexed, it won’t appear in search results, and it won’t get any organic traffic from Google. 

This article presents the possible causes of the Crawled – currently not indexed status and ways of fixing them.

Where can you find the Crawled – currently not indexed status?

You can find the status in the Index Coverage report and the URL Inspection Tool in Google Search Console.

Index Coverage report

Crawled – currently not indexed belongs in the “Excluded” category, which indicates that Google doesn’t think it’s a mistake that the page is not indexed. 

These pages are typically not indexed, and we think that is appropriate. These pages are either duplicate of indexed pages, or blocked from indexing by some mechanism on your site, or otherwise not indexed for a reason that we think is not an error.
source: Google
Screenshot of Index Coverage report

After clicking on the Crawled – currently not indexed status, you’ll see a list of affected URLs. You should examine it and prioritize fixing the issue for pages most valuable to you. 

The report is also available for export. However, you can export only up to 1,000 URLs. If more pages are affected, you can work around this limit by filtering the report by specific sitemaps and exporting each filtered view separately. For example, if you have two sitemaps, each containing 1,000 URLs, you can export them one by one.

URL Inspection Tool

The URL Inspection Tool in Google Search Console can also inform you about URLs that are Crawled – currently not indexed.

The URL Inspection Tool in Google Search Console reports on the Index Coverage status of specific URLs, like Crawled – currently not indexed

The top section of the tool tells you whether the URL can be found on Google or not. If the inspected URL belongs to the Excluded category in the Index Coverage report, the URL Inspection Tool will report the following: “The page is not in the index, but not because of an error.”

Below, you can find more specific information about the current Coverage status of the inspected URL – in the case above, the URL was Crawled – currently not indexed.

Reporting bug: your page might actually be indexed

After noticing the Crawled – currently not indexed status, the first thing you should do is investigate if your page is really not indexed.

It’s not uncommon to see a page marked as Crawled – currently not indexed in the Index Coverage report, while the URL Inspection tool indicates that the page is actually indexed.

The URL Inspection tool allows you to check details about a specific URL, including:

  • Indexing issues,
  • Structured data errors,
  • Mobile Usability,
  • Loaded resources (e.g., JavaScript).

You can also request indexing for a URL or see a rendered version of a page.

Google’s John Mueller addressed the problem of differences between the Index Coverage report and the URL Inspection tool during Google’s SEO Office Hours:

I’ve recently seen some threads like this on Twitter where people saw URLs that were flagged as not being indexed in Search Console. And then, when you check them individually, they are actually indexed. I don’t know exactly what is happening there yet. […] My suspicion is it’s more a matter of timing – we show them in the Search Console report, and then they get indexed over time. Then at some point, they would drop out of the report again. And for whatever reason, dropping out is taking a little bit longer than it should.
source: John Mueller

As John said, it might simply be a delay and data synchronization problem between these two tools, and the status might be updated in the Index Coverage report over time.

However, it’s not always just a delay. Sometimes it’s a reporting bug.

In September, we noticed that some of our indexed articles were being reported as Crawled – currently not indexed.

That definitely wasn’t a delay issue, as old articles were affected too.

Shortly after, other SEOs, including Lily Ray, started noticing this very issue. 

What to do in this situation? Which report to trust?

Generally, the URL Inspection tool shows more up-to-date data than the Index Coverage report. That’s why you should always trust the URL Inspection tool more when forced to choose between these reports.

Causes and solutions for the Crawled – currently not indexed status

Now, let’s get to the bottom of the problem – what causes the status to appear and what you can do to fix it.

Google doesn’t give you a clear answer why your page was crawled but not indexed, but there are a few possible reasons why the status might appear, including:

  • Indexing delay,
  • Page doesn’t meet quality standards,
  • Page got deindexed,
  • Website architecture issue,
  • Duplicate content issues.

Indexing delay

It’s not uncommon for Google to visit a page but take a while to index it. The web is vast, and Google needs to prioritize which pages get indexed first.

In my Ultimate Guide to Indexing SEO, I showed how long it takes for pages on popular websites to get indexed. Here are some of the results of my investigation:

  • Google indexes just 56% of indexable URLs within one day of publication. 
  • After two weeks, only 87% of URLs are indexed. 

source: Tomek Rudzki

If you just published your page, it might be perfectly normal that it’s not indexed yet, and you need to wait a bit longer for Google to index your content.

Solution

You can’t influence the crawling and indexing of your page in the short term, but there are a few things you can do to help your website in the long run:

  • Create an indexing strategy to help Google prioritize the right pages on your site. To do so, you need to decide which pages should be indexed and the best method to communicate it to Google. 
  • Ensure there are internal links to the pages you care about. It will help Google find the pages and learn more about their context.
  • Create a well-optimized sitemap. It can be a simple text file that lists your valuable URLs or an XML file with additional metadata; Google will use it as a roadmap to find your pages faster (a minimal example follows this list).
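
To illustrate, here is a minimal sketch of an XML sitemap. The URLs and dates are placeholders – replace them with your own canonical URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per valuable, indexable page -->
  <url>
    <loc>https://www.example.com/important-page/</loc>
    <lastmod>2021-09-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/another-important-page/</loc>
    <lastmod>2021-09-20</lastmod>
  </url>
</urlset>
```

Once the file is live (for example at https://www.example.com/sitemap.xml), you can submit it in Google Search Console under the Sitemaps report.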

Page doesn’t meet quality standards

Google can’t index all of the pages on the Internet. Its storage space is limited, which is why it needs to filter out low-quality content.

Google’s goal is to provide the highest-quality pages that best answer users’ intent. This means that if a page is of lower quality, Google will most likely ignore it to leave room for higher-quality content. And we can expect these quality standards to only get stricter in the future.

Solution

As a website owner, you should ensure your page provides high-quality content. Check if it’s likely to satisfy your users’ intent and add good quality content if needed. Google offers a list of questions to help you determine the value of your content. Here are some of them:

  • Does the content provide original information, reporting, research or analysis?
  • Does the content provide insightful analysis or interesting information that is beyond obvious?
  • Is this the sort of page you’d want to bookmark, share with a friend, or recommend?
  • If the content draws on other sources, does it avoid simply copying or rewriting those sources and instead provide substantial additional value and originality?

source: Google

Additionally, you can use tips on quality content from Google’s Quality Raters Guidelines. Even though the document is meant mainly for Search Quality Raters to assess the quality of a website, webmasters can use it to get some insights on how to improve their own sites. If you want to learn more, check out our guide on Quality Raters Guidelines.

User-generated content

User-generated content might be a problem from the standpoint of quality.

For example, let’s assume you run a forum, and someone asks a question. Even though there might be many valuable replies in the future, at the time of crawling there were none, so Google may classify the page as low-quality content.

What to do to protect yourself from this situation?

Quora came up with an excellent strategy for the problem. Every unanswered question has the “/unanswered/” prefix in the URL.

Here is an example: https://www.quora.com/unanswered/Are-you-really-happy-with-your-results 

Quora’s robots.txt file blocks all pages with /unanswered/ in their URLs, which means Googlebot can’t crawl them.

Once there’s a reply to the question, the URL changes and becomes available for crawling. This way, Quora blocks access to the low-quality content generated by the users.
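
For illustration, a simplified robots.txt rule implementing this kind of approach could look like the snippet below (an illustrative sketch, not Quora’s actual file):

```
# Block crawling of unanswered question pages (illustrative example)
User-agent: *
Disallow: /unanswered/
```

Once a question gets an answer and its URL no longer contains /unanswered/, it falls outside the Disallow rule and becomes crawlable again.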

Page got deindexed

A URL can suffer from the Crawled – currently not indexed status because it was indexed in the past, but Google decided to deindex it over time.

If you wonder why pages might disappear from the index, it’s often because they were simply replaced by higher-quality content.

Additionally, you should pay attention to algorithm updates. It’s possible that a new algorithm rolled out and your page was affected by it.

Unfortunately, deindexing might also be caused by a bug on Google’s side. For example, Search Engine Land once got deindexed because Google wrongly assumed the site was hacked. 

Solution

The solution to deindexed pages is closely related to their quality. You should always ensure your page serves the best quality content and is up to date. Don’t assume that once a page is indexed, you never need to touch it again. Keep monitoring it and implement changes and improvements if necessary.

[…]pages that drop after a core update don’t have anything wrong to fix. This said, we understand those who do less well after a core update change may still feel they need to do something. We suggest focusing on ensuring you’re offering the best content you can. That’s what our algorithms seek to reward.
source: Google

After fixing the issues, you can submit those URLs to Google Search Console to help Google notice the changes quicker.

Website architecture issue

When John Mueller was asked about possible reasons a page was marked with the Crawled – currently not indexed status, he mentioned another possible cause – poor website structure.

Let’s imagine a situation where you have a good quality page, but the only way Google found it was through your sitemap.

Google might look at the page and crawl it, but since there are no internal links, it would assume the page has less value than other pages. There’s no semantic or structural information to help it evaluate the page. That might be one of the reasons why Google decided to focus on other pages and leave this one out of the index after crawling it.

Solution

Good website architecture is key to maximizing your chances of getting indexed. It allows search engine bots to discover your content and better understand the relationships between pages.

That’s why it’s crucial to provide a good website architecture and ensure there are internal links to the pages you want indexed.

If you want to learn more about website structure, check out our article on How To Build A Website That Ranks And Converts. 

Duplicate content

Adam Gent, an SEO freelancer, shared an interesting case with the SEO community. His page was reporting Crawled – currently not indexed because Google thought it was a duplicate page. 

Google wants to present unique and valuable content to users. That’s why when it realizes during crawling that some pages are identical or nearly identical, it might index only one of them. 

Usually, the other one gets labeled as “Duplicate” in the Index Coverage report. However, that’s not always the case, and sometimes Google assigns the Crawled – currently not indexed status instead.

It’s not entirely clear why Google might choose Crawled – currently not indexed over a dedicated status for duplicate content. One of the possible explanations is that the status will change later after Google decides if there’s a more suitable one for the page. 

Another option might be a reporting bug. Google might simply make a mistake while assigning the statuses. Unfortunately, the situation is more challenging because Crawled – currently not indexed doesn’t give you as much information as a dedicated status for duplicate content. 

How to check if a duplicate page is showing in the search results?

  1. Go to the page that’s not indexed and copy a random text fragment.
  2. Paste the text in Google Search in quotation marks.
  3. Analyze the results. If a different URL with your copied text shows up, it might mean that your page is not indexed because Google chose a different URL to index.
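
For example, the search query would look something like this (the fragment below is a placeholder – use a distinctive sentence copied from your own page):

```
"your distinctive sentence copied from the page"
```

You can optionally add the site: operator (e.g., site:example.com) to limit the results to your own domain and see exactly which of your URLs Google indexed for that text.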

Solution

First and foremost, you should ensure you create original pages. If necessary – add unique content.

Unfortunately, duplicate content is sometimes unavoidable (e.g., when you have separate mobile and desktop versions). You don’t have much control over which version appears in search results, but you can give Google some hints about the original version.

If you notice a lot of duplicate content indexed, evaluate the following elements: 

  • Canonical tags – these HTML tags tell search engines which version of a page is the original one (see the example after this list).
  • Internal links – ensure internal links point to your original content. Google might use them as an indicator of which page is more important.
  • XML sitemaps – ensure only the canonical versions are included in your sitemap.
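
As an illustration, a canonical tag is a single line in the head section of the duplicate page pointing to the preferred URL (the URL below is a placeholder):

```html
<!-- Placed in the <head> of the duplicate or alternate version -->
<link rel="canonical" href="https://www.example.com/original-page/" />
```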

Remember that these are only hints, and Google is not obligated to follow them. In the case described by Adam Gent, Google chose the RSS feed version to index, even though many canonicalization signals pointed to a different original URL. Adam solved the issue by serving a 404 for the feed version to ensure only the original stayed. He also suggested that setting an X-Robots-Tag HTTP header on all feed URLs would stop them from being indexed.
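
For example, on an nginx server the header could be added as in the hypothetical sketch below; the exact configuration depends on your stack and on how your feed URLs are structured:

```nginx
# Hypothetical nginx snippet: mark feed URLs as noindex via an HTTP header
location ~* /feed/?$ {
    add_header X-Robots-Tag "noindex";
}
```

Unlike a robots.txt Disallow rule, the X-Robots-Tag header still lets Googlebot crawl the URL but explicitly tells it not to index it.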

Crawled – currently not indexed vs. Discovered – currently not indexed

The Crawled – currently not indexed status is commonly confused with another indexing issue in the Index Coverage report: Discovered – currently not indexed.

Both of the statuses indicate that the page is not indexed. However, in the case of Crawled – currently not indexed, Google has already visited the page. Meanwhile, in Discovered – currently not indexed, the URL is known to Google, but, for some reason, it hasn’t been crawled yet.

|                            | Crawled – currently not indexed | Discovered – currently not indexed |
|----------------------------|---------------------------------|------------------------------------|
| Page discovered by Google  | Yes                             | Yes                                |
| Page visited by Google     | Yes                             | No                                 |
| Page indexed               | No                              | No                                 |

Some of the reasons for these statuses might be similar, including poor-quality pages and internal linking problems. However, when you see a Discovered – currently not indexed status, you need to additionally investigate why Google couldn’t or didn’t want to access the page. For example, it might indicate problems with the overall quality of the whole website, crawl budget issues, or server overload.

Wrapping up

Crawled – currently not indexed is mainly associated with page quality, but in reality, it can indicate many more problems, like website architecture or duplicate content. 

Here are the key takeaways from the article that can help you deal with the Crawled – currently not indexed status:

  • Add unique and valuable content to your pages. Once you have done that, submit those URLs in Google Search Console. This way, Google may notice the changes quicker.
  • Review your website architecture and ensure there are internal links to your valuable pages.
  • Decide which pages should and shouldn’t be indexed to help Google prioritize the most valuable URLs.

If you need help addressing the Crawled — currently not indexed status on your website, our technical SEO services are what you’re looking for.