Indexing in SEO – A Complete Guide for 2024


Indexing in SEO refers to the process of storing web pages in a search engine’s database, a crucial step for visibility on platforms like Google.

Research conducted by our team in 2023 found that an average of 16% of valuable pages on well-known websites aren’t indexed, indicating a key area for SEO enhancement.

This SEO issue is a critical business problem for your entire organization. If your pages don’t get indexed, the time of your writers, designers, developers, and managers simply goes to waste.

Here are some examples:

  • Walmart.com: 45% of product pages are not indexed.
  • dictionary.cambridge.org: 99.5% of pages are not indexed.

These websites are large. But that doesn’t mean smaller websites aren’t at risk:

  • Even a small website can have indexing issues because of technical problems (e.g., sylvesterstallone.com).
  • There are websites with unique content that have indexing issues (e.g., victoriassecret.com).

This is the Ultimate Guide to Indexing SEO. It compiles everything that I learned from 5+ years of studying this topic, running experiments, and providing Indexing SEO services for our clients.

I wrote this guide to help you understand why some of your website’s pages may not be indexed by Google and provide you with the solutions required to fix this serious problem.

chapter 1

The Google index

Organic traffic is the backbone of online business, but you won’t get any if Google doesn’t index your content.

To understand what indexing is and why Google doesn’t properly index some websites, we need to understand exactly what Google’s index is and how it works.

What is the Google index?

Google has a great analogy:

The Google index is similar to an index in a library, which lists information about all the books the library has available. However, instead of books, the Google index lists all of the web pages that Google knows about. When Google visits your site, it detects new and updated pages and updates the Google index.
source: Google

Essentially, the Google index is a database of web pages that Google knows about. Once these pages are indexed, Google can use the information it has about them and their content to decide to show them in search results.

The concept is fairly simple. But the road to getting indexed is complicated.

Google’s indexing pipeline 

  1. Discovery
    First, Google has to discover a URL (make sure yours is an SEO-friendly URL with this article). As Google moves through the web, it extracts links from newly discovered pages. New pages can be discovered in multiple ways: by following links on other pages, by reading sitemaps, or by looking at where inbound links are coming from.
  2. Crawling
    Then, Google has to visit the page. Google has sophisticated algorithms that decide which URLs should be prioritized, and Googlebot visits the pages that meet the priority threshold. This is known as crawling.
  3. Indexing
    Finally, Google extracts the content of the page, evaluates its quality, and checks whether the content is unique. This is also the step where Google renders the page to see all of its content, evaluate its layout, and analyze various other elements. If everything is fine, the page gets indexed.

This is a fairly simplified breakdown – each of these steps actually consists of additional stages – but these are the crucial steps.

After your page gets through these phases and is successfully indexed, only then can it be ranked for relevant queries and shown to users, bringing organic traffic to your website.

One exception is when you intentionally prevent Google from visiting your page using your robots.txt file, making it impossible for Google to crawl it. Google can then still index the page using a link found on a different page. That being said, you won’t likely get lots of traffic to that page from Google because it won’t know what the page contains and won’t know if it’s relevant to users.

Here’s an example of that happening in the wild with one of Google’s own products.

Screenshot of the Google Jamboard home page indexed without a description in its search snippet.

In this case, Google blocked its own robot, Googlebot, from crawling all pages on the Google Jamboard subdomain.

But Googlebot was still able to find links to Jamboard pages on other websites and used these links for indexing.

This case highlights something vital.

Notice that the indexed home page of Google Jamboard has no description displayed inside the snippet. That’s because Googlebot wasn’t able to access it and relay that information to the index.

As a website owner, you need to make sure that Googlebot can access as much content on your site as possible. Otherwise, Google will have limited information on what your page is about, and your search visibility will suffer.

Does Google index all pages?

The answer is clear: No. 

In the past couple of years, I ran the numbers multiple times, using a database with thousands of different websites.

On average, 16% of valuable, indexable pages on popular websites aren’t indexed. Ever.

And it’s no secret. Google openly admits their goal is not to index every single page on the web. Google’s John Mueller had this to say on the topic:

When it comes to indexing, we don’t guarantee that we will index all pages of the website. And especially for larger websites, it’s really normal that we don’t index everything. That can be the case that maybe we just index 1/10 of a website because it’s a really large website. We don’t really know if it’s worthwhile to index the rest.
source: John Mueller

You might say, “Okay, Google just doesn’t index everything, so I guess if some of my valuable pages aren’t indexed, it’s not a big deal.”

But I think this is the wrong approach. There are actually many large sites that Google can fully index.

You can do various things to help Google index more pages on your website, and you should. If you have a large website, check out this article on how to check the indexing status of a large website.

Every other SEO effort you make on your website will have a diminished ROI if you still have unindexed content.

How long does it take for Google to index a page?

As I already showed you, many pages simply don’t get indexed by Google, and even more don’t ever get crawled.

To make things worse, it’s common that indexing happens with a significant delay.

We track the indexing of many popular websites. This allows us to observe how long it takes for Google to index new pages on average (and remember, we’re skipping the pages that never get indexed here).

These statistics show how common indexing delays are: 

Chart showing how long it takes for Google to index newly published pages.

As you can see:

  • Google indexes just 56% of indexable URLs after 1 day from being published. 
  • After 2 weeks, just 87% of URLs are indexed. 

Google has a sophisticated system of managing how it crawls websites.

Some websites are crawled more frequently, and some websites are visited less frequently. In the short term, you cannot influence it, but there are many things you can do to improve your standing in the long run. We’ll talk about them later.

Partial indexing

There’s one more indexing issue that I’ve studied extensively, and this one is the most difficult to define and address. I call it partial indexing.

While I consider it an indexing issue, an argument can be made that it’s also a ranking issue.

Here’s what it’s all about:

Sometimes a page gets indexed by Google, but parts of its content don’t.

My research shows that these unindexed content fragments can’t be found when you search for them specifically, and they don’t seem to contribute to the page’s overall rankings.

Sometimes, these content fragments are less important, for example, related items/products.

But quite often, it’s the main content of the page, like the main product description on a product page of an eCommerce site.

Website | % of indexed pages with main content not indexed | Additional notes
aboutyou.de | 37% | On mobile, product details are hidden under tabs.
sportsdirect.com | 8% |
charlotterusse.com | 8% |
zappos.com | 16% |
boohoo.com | 14% |
zulily.com | 70% |
lidl.de | 3% |
walmart.com | 45% | On mobile, product details are hidden under tabs.
hm.com | 6% |
samsclub.com | 39% |

In my opinion, the most common cause for partial indexing is duplicate content.

The websites shown above commonly reuse the manufacturer’s product description, and it seems that Google is filtering it out in the indexing/ranking phase.

Check out my article on partial indexing to find out more.

Why is indexing a challenge?

So, why doesn’t Google just index every page it knows about? 

The web is growing

The basic reason is that the web is simply too big. And it’s still growing.

According to WorldWideWebSize, there are over 5 billion pages on the internet as of March 2021. 

And most of those pages aren’t exactly valuable to Google’s users. The web is full of spam, duplicate content, and harmful pages that contain malware and phishing content. 

Google has learned to avoid crawling those pages, let alone indexing them.

Websites are getting heavier

The average website is getting heavier each year.

Websites increasingly depend on JavaScript and modern media formats, including high-resolution images and videos, which are usually more difficult to index.

While this offers new possibilities to users, Google needs to render all that heavy code and fetch all that heavy media to understand what a given page is about.

As all of these challenges only get more serious, we should expect Google to be even pickier when indexing content in the future.

Index selection

Because the web is too big for Google to index fully, Google has to choose which pages it wants to index.

And, obviously, Google wants to focus on quality pages. So Google’s engineers developed mechanisms to avoid crawling low-quality pages.

This means that Google may skip crawling some of your pages because, having seen your other content, it assumes they are low-quality pages.

In this scenario, your pages drop out of the indexing pipeline right at the beginning. 

We’re trying to recognize duplicate content in different stages of our pipeline. On the one hand, we try to do that when we look at content. That’s kind of like after indexing – we see that these two pages are the same, so we can fold them together.

But we also do that, essentially, before crawling, where we look at the URLs that we see, and based on the information that we have from the past, we think, “Well, probably these URLs could end up being the same, and then we fold them together.”

source: John Mueller

The data available thanks to Google Search Console confirms this is happening very often. “Discovered – currently not indexed” is one of the most common indexing issues, and it’s usually caused by:

  1. Low quality (Google detected a common pattern and decided not to waste resources crawling low-quality or duplicate content).
  2. Insufficient crawl budget (Google has too many URLs to crawl and process them all). 

I spoke more about my research on Google Search Console’s most common indexing issues in my article over at SearchEngineJournal.

Assigning priority to URLs

One of Google’s patents on crawl scheduling describes it like this:

“Various criteria are applied to the requested URL crawls so that less important URL crawls are rejected early from the backlog data structure.”

This quote suggests that Google is assigning a crawling priority to every URL before it’s crawled. But more importantly, it states that less important URLs are rejected and may never get crawled!

According to that very patent, the priority assigned to URLs can be determined by two factors:

  1. A URL’s popularity,
  2. Importance of crawling a given URL for maintaining the freshness of Google’s index.

“The priority can be higher based on the popularity of the content or IP address/domain name, and the importance of maintaining the freshness of the rapidly changing content such as breaking news. Because crawl capacity is a scarce resource, crawl capacity is conserved with the priority scores.”

Google’s “Minimizing visibility of stale content in web searching including revising web crawl intervals of documents” patent talks about the factors that define a given URL’s popularity: view rate and PageRank.

But there’s one more factor that may cause Google to give up crawling your URLs – your server. If it responds slowly to crawling, the priority threshold that a URL needs to meet is increased:

“The priority threshold is adjusted, based on an updated probability estimate of satisfying requested URL crawls.

This probability estimate is based on the estimated fraction of requested URL crawls that can be satisfied.

The fraction of requested URL crawls that can be satisfied has as the numerator the average request interval or the difference in arrival time between URL crawl requests.”

So what can you do with all that information? How can you improve the chances that all your URLs will be assigned a high priority and get crawled by Googlebot without hesitation?

  • You need to make the most out of internal linking to make sure new pages have enough PageRank.
  • Just having an XML sitemap isn’t nearly enough if you’re hoping to get your new pages indexed quickly.
  • Having tons of low-quality content may negatively impact other pages on your domain.

When indexing issues are not your fault: Google’s indexing bugs

Google Search is a truly complex mechanism, made of hundreds (and maybe even more) interconnected algorithms and systems. Some of the smartest programmers and mathematicians work there. 

However, like every piece of software, it has some bugs.

To my knowledge, the most famous indexing bug happened on October 1st, 2020, when problems with Google’s mobile-indexing and canonicalization systems caused many pages to drop out of the index.

It was a really rough period because Google had removed the Request Indexing feature from Google Search Console just a day before.

After about two weeks, Google announced that the canonical issue was effectively resolved, with about 99% of the affected URLs restored.

Let me point to another interesting example of Google’s indexing bug.

One of the most popular publishing websites in the SEO industry, Search Engine Land, once got completely deindexed by Google.

Search Engine Land got deindexed because… Google systems wrongly detected that the website had been hacked.

Normally, Google informs website owners about detecting such issues through Google Search Console. However, the team at SEL didn’t receive any notifications in GSC or by email.

What I’m trying to say by talking about these cases is that indexing is a very complex system and that bugs will happen now and then.

Diagnosing your website’s indexing status

As the first step of your indexing journey, you should check your website’s indexing statistics.

You HAVE to know how many pages are not indexed and why. 

Use Google Search Console

The best way is to use the Google Search Console because it has the most accurate data.

  1. Log in to GSC and select a property,
  2. Click on Index -> Pages.

 

Screenshot of Google Search Console's Page Indexing statuses.

The report is divided into two intuitive categories: Not Indexed and Indexed. 

You will quickly notice how many pages on your site are indexed. You can further narrow down the report to see a sample of indexed pages. 

You will easily discover how many pages are:

  • Indexed,
  • Not indexed because of duplicate content, quality issues, server errors, etc. 

You can also easily use this report to diagnose indexing issues. The table below the graph will show you, for example, how many URLs are stuck in the indexing pipeline.

For example, you can take a closer look at pages that got crawled but still aren’t indexed, and pages that Google didn’t index because it ignored your canonical tag.

GSC is a treasure for everyone with a website.

Don’t use the “site:” command

I don’t recommend using the site: command to check your index coverage.

Some people use this command to find out how many pages Google indexed from their website.

However, this is not an accurate method. More importantly, it won’t tell you why some pages may not be indexed. Google Search Console will. 

That doesn’t mean this command is not useful.

You can use it to get a rough estimate of how many pages your competitors have in Google’s index. Just remember, it’s not very accurate!


chapter 2

How to make sure your pages get indexed by Google

You now know that Google’s index is a complex system of interconnected algorithms.

Things can go wrong at each step of the indexing pipeline, and it may not even be your fault.

But there are things you can do to maximize your chances of getting indexed by Google.

How to make sure Google will index your content

1. Make sure the page is indexable

There are three things you need to look at to check if a page is indexable.

  1. The page can’t have the noindex tag 
  2. The page can’t be blocked by robots.txt
  3. The page can’t have a canonical tag pointing to another page.

Let’s dig in.

Noindex

Googlebot is a good citizen of the web.

If you tell Google: “Hey, don’t index this page,” the page won’t be indexed. And there are many ways to do that.

The most commonly known is the “noindex” directive. 

It’s a directive telling Google that it can visit a page, but that the page shouldn’t be included in the Google index.

There are two ways of using the noindex directive:

  1. You can place it in the X-Robots-Tag HTTP header
  2. You can place it in the source code with the classic <meta name="robots" content="noindex"/>
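
Here’s what both options could look like in practice (a minimal sketch; how you set the HTTP header depends on your server or CMS):

    # Sent as an HTTP response header, e.g., by your web server
    X-Robots-Tag: noindex

    <!-- Placed in the <head> section of the page's HTML -->
    <meta name="robots" content="noindex" />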

Robots.txt file

The robots.txt file can be used to give instructions to various web crawlers, telling them whether or not they should access your website or its parts.

You can use robots.txt to tell Google not to crawl a page or multiple pages on your site using the disallow directive.

This blocks Google from visiting the page and seeing its content (although, as described in chapter 1, the URL itself can still end up indexed if Google finds links pointing to it).
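
Here’s a minimal robots.txt sketch (the directory name is a placeholder) that blocks all crawlers from one section of a site and points them to the sitemap:

    # Block all crawlers from internal search result pages
    User-agent: *
    Disallow: /internal-search/

    # Tell crawlers where to find the sitemap
    Sitemap: https://www.example.com/sitemap.xml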

Canonical tag

Finally, you shouldn’t expect Google to index your page if it has a canonical tag in its source code pointing to a different page.

Canonical tags are a way to let Google know about your preferred version of a page when there are many duplicate or near-duplicate versions of the same page on your website.

They come in handy when, for whatever reason, you have duplicate content on your site but want to consolidate ranking signals and let Google index and rank the one master version of the page.

It follows that if a page on your website has a canonical tag pointing to a different page, Google typically won’t index it (keep in mind that Google treats canonical tags as a strong hint rather than a strict directive).
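
For instance, a duplicate product URL could point to its master version like this (URLs are placeholders), with the tag placed in the duplicate page’s <head>:

    <!-- On https://www.example.com/product?color=red -->
    <link rel="canonical" href="https://www.example.com/product" />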

How to check noindex, robots.txt directive, and canonical tag all at once

Per URL

Manually inspecting a page for the three factors mentioned above is time-consuming. Moreover, it’s error-prone!

So when you quickly want to check if a page is indexable, use the SEO Minion plugin. It’s available for Chrome and Firefox. 

SEO Minion will inform you about the reasons why a given page is not indexable.

In bulk

If you want to check a larger amount of URLs, the best way is to use an SEO crawler like Screaming Frog.

First – set the Mode to “List.”


Second, paste the list of URLs to the tool. 


Then click “Start.” 

Once the crawl is done, look at the Indexability column. You will see two self-explanatory results: Indexable / Non-Indexable.


Now you should know whether your pages are indexable. Congrats!

But this is only the beginning.

2. Help Google crawl your website more efficiently

Google should be able to find links to your important pages just by crawling your website. 

However, it gets more complicated when you have a huge website with thousands of pages. There are a couple of ways in which you can help Google discover your URLs and crawl them faster.

Sitemap.xml

The XML Sitemap is a file that should contain links to all the indexable pages of your website.

Here’s what Google has to say about sitemaps:

Search engines like Google read this file to more intelligently crawl your site. A sitemap tells Google which pages and files you think are important in your site and also provides valuable information about these files: for example, for pages, when the page was last updated, how often the page is changed, and any alternate language versions of a page.
source: Google

So you can use sitemaps to inform Google about the pages that you definitely want to be indexed.

Furthermore, you can use it to let Google know when your pages were changed using the <lastmod> parameter, and if there are alternate versions (e.g., when you have multiple language versions, you can use the hreflang tag in the sitemap to point Google to variants of the same page).

Sitemap attribute | Is it supported by Google?
lastmod | Supported
changefreq | Not supported
priority | Not supported

Note that if you overuse the <lastmod> parameter, Google may end up ignoring it. 
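
To make this concrete, here’s roughly what a sitemap entry with <lastmod> and hreflang annotations could look like (URLs and dates are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:xhtml="http://www.w3.org/1999/xhtml">
      <url>
        <loc>https://www.example.com/page</loc>
        <lastmod>2024-01-15</lastmod>
        <xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/page"/>
        <xhtml:link rel="alternate" hreflang="de" href="https://www.example.com/de/page"/>
      </url>
    </urlset>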

Whether you’re creating or reviewing your sitemap, check out this article to avoid making common mistakes in your sitemap.

Only put valuable URLs in the sitemap!

As I mentioned earlier, sitemaps help Google to crawl your website more intelligently.

But if you misuse them, they may actually hurt your site.

Let me show it to you with an example: GoodReads, a very popular brand. 

I checked their index coverage, looking at a sample of their URLs from a sitemap.

It turned out that just 35% of their product pages are indexed. I was shocked, as I know that it’s a very high-quality website. I use it myself, and I love it.

Then I noticed that the sample I checked didn’t include any books. So I decided – let’s download all their sitemaps. 

The result: there were no book pages in their sitemaps. 

Why is it a bad sign? 

Google may prioritize URLs found in sitemaps and skip visiting book pages that are actually the most valuable. 

Recommendation:

You should ensure that sitemaps only list canonical, valuable pages.

Create and submit your sitemap

After you create a sitemap, you should submit it to Search Console’s Sitemaps Tool.


Google might find it on its own, but that can take time.

When it comes to creating a sitemap, it’s effortless.

You don’t have to create the sitemap file on your own. There are many dedicated tools for that. 

For instance, YoastSEO generates it automatically for you if you’re using WordPress. Most SEO Crawlers also offer that feature.

Of course, you can also create a sitemap file on your own, but remember to update it regularly, or you’ll run into trouble.

URL Submission tool

If you want Google to index your page quickly, you can use the URL Inspection tool in Google Search Console. 

To do so, while inspecting a page in the URL Inspection Tool, click on “Request Indexing.”


In the past, this tool was reliable and quick – it worked like a charm.

Once you requested indexing, Google would index the page within 5 minutes. It would even index some low-quality content that you’d otherwise have a hard time getting indexed.

But things changed. Now, indexing takes time, even when you use the URL Submission feature.

So, if you want Google to index your website really fast, you shouldn’t rely on it.

And this feature is just not good enough if you have hundreds of pages that you want to be indexed because there’s a daily limit of URLs you can submit per GSC property.

Rather, you should follow our Indexing Framework. 

As a side note, if you want a new page or just a piece of information to get indexed really fast, publish it on social media. Tweets usually get indexed blazingly fast.

Indexing API 

Just like Bing, Google has an Indexing API. You can use it to ping Google about URLs added, removed, or changed and “force” Google to discover your content more quickly.

Google documentation suggests that it’s quicker than if you used other ways of submitting URLs. 

The Indexing API prompts Googlebot to crawl your pages sooner than updating the sitemap and pinging Google. However, we still recommend submitting a sitemap for coverage of your entire site.
source: Google

Sounds too good to be true, right?

Yeah, there’s a catch.

For now, you can submit only two types of pages. 

Currently, the Indexing API can only be used to crawl pages with either JobPosting or BroadcastEvent embedded in a VideoObject. For websites with many short-lived pages like job postings or livestream videos, the Indexing API keeps content fresh in search results because it allows updates to be pushed individually.
source: Google
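
For reference, a single notification to the Indexing API boils down to a request like this once authentication is set up (the URL below is a placeholder; URL_DELETED can be sent for removed pages):

    POST https://indexing.googleapis.com/v3/urlNotifications:publish

    {
      "url": "https://www.example.com/job-posting/1234",
      "type": "URL_UPDATED"
    }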

The future of indexing? 

Google’s Indexing API is limited to 2 types of pages. 

However, Google has been flirting with the idea of letting the Indexing API work for all pages. Wix and YoastSEO were the companies that helped Google run these tests.

The future of the tool is unknown. However, I know that Bing’s Indexing API lets website owners submit URLs without any restrictions, and it seems that it works for them.

Christi Olson, currently Head of Search Advertising at Microsoft (Bing), has also weighed in on Indexing APIs: she and her team believe that URL submission helps improve crawling efficiency.


Internal linking

An essential aspect of SEO that has a direct effect on indexing is internal linking.

It should be clearly stated that having a URL in the sitemap is not enough to ensure that Google can crawl and index it. 

I go by two rules when it comes to internal linking:

  1. Avoid infinite scroll. 
  2. Don’t have canonical tags pointing to the first page of pagination.

Of course, there are exceptions to these rules. But if you aren’t sure if what you’re doing will work, stick to my rules!

Get a grip on your internal linking

Based on my experience, the following situation is widespread: a page is in the sitemap but cannot be found in your website’s structure. We call pages like that orphan pages.

One of the tools you can use to find orphan pages on your site is Sitebulb. Sitebulb does a great job here, using your XML sitemap as a reference together with data from Google Analytics and Google Search Console.

It will provide you with a list of orphaned pages (ones that it found in the sitemap or elsewhere but couldn’t reach by clicking around your site).


Ideas to boost your internal linking

You might be looking for ways to improve your internal linking and help Google crawl and index your site more thoroughly.

Here are some ideas to look into:

  1. Related products tab
  2. Most popular items
  3. Blog posts.

Writing quality content aligns perfectly with your goal of improving internal linking while also giving you a chance to earn some external links. It’s a win-win!

JavaScript challenges – internal linking

For years, Google had issues with indexing JavaScript websites.

At first, back in the day, Google wasn’t able to deal with JavaScript websites at all.

Then it got better, but Google used an extremely outdated browser for rendering.

As of 2021, the situation has drastically improved. Google can render modern JavaScript without breaking a sweat (although it may slow down your crawling if overused, not to mention its impact on Web Performance!).

However, Google’s handling of JavaScript is still not perfect, and we often have the developers to blame. 

The most common issue is the infinite scroll that’s improperly set up using JavaScript. 

Many websites improperly implement pagination by not using proper <a href> links. Instead, they use pagination that depends on a user action – a click. In other words, Googlebot would have to click a button (“View more items”) to get to the subsequent pages.

Unfortunately, Googlebot doesn’t scroll or click the buttons. The only way to let Google see the second page of pagination is to use proper <a href> links.

source: Google
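
Here’s a minimal sketch of the difference (the URL and button markup are placeholders):

    <!-- Googlebot won't see page 2: the next batch of items only loads after a click -->
    <button onclick="loadMoreItems()">View more items</button>

    <!-- Googlebot can follow this: a plain, crawlable link to the next page -->
    <a href="https://www.example.com/category?page=2">Next page</a>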

Bartosz Góralewicz wrote about Rendering SEO and all the dangers that come with unoptimized JavaScript and rendering in his Rendering SEO Manifesto.

Struggling with rendering issues?

Contact us for Rendering SEO services to address any bottlenecks affecting your search visibility.

You should know that JavaScript is here to stay – more and more elements on a page are generated using this language. So here’s when you may want to take advantage of JavaScript SEO services.

Bad internal linking can hurt your site

Back in 2019, we took a look at Verizon’s website.

55% of their product pages were not indexed in Google. 

One possible reason for such a low indexing ratio was their extensive use of JavaScript.

Their website heavily relied on JavaScript for internal linking. 

We hypothesized that Google didn’t render JavaScript on some pages because it didn’t think it would make a significant change to the page’s content.

And without JavaScript rendered, Verizon’s website was a completely different site, which likely contributed to 55% of their product pages not being indexed.

If you don’t want to fall into this trap, you can contact Onely for internal linking optimization services.

Related items

I mentioned related items as one of the strategies you can use to boost your internal linking. But there’s a catch.

We commonly see that when your related items aren’t really related, Google might not index them. 

I spoke about this very issue last year with Martin Splitt, a web developer advocate at Google. We talked openly about the sample I used for my tests and the methodology of our experiments.

Martin was surprised by the stats and offered his own theory (he didn’t have any data to share at that time): in most cases, the rendering phase works perfectly fine, but then something in the background prevents the content from being indexed.

He used the example of a shop selling accessories for cats, where some of the “related items” aren’t for cats but for dogs.

With this hypothesis in mind, if Google notices that the related items are unrelated, they may be skipped during indexing, meaning that Google won’t see the links inside them.

If that’s the case, it has strong implications. If your online store has a poor suggestion system for related items, you lose on two levels:

  1. First of all, you lose the opportunity to advertise relevant products to your customers.
  2. Secondly, Google may not index your internal links, which weakens your PageRank flow and your website’s structure.

External linking 

Some people get fixated on acquiring external links in unnatural ways, which is a hallmark of black hat SEO.

Even if you think it works short-term, I promise: eventually, you’ll realize you were wasting time.

As Google gets “smarter,” these links are becoming increasingly ineffective.

Our site is an example of how you can gain external links pointing to your website in a fully natural way.

From day one, our focus was on writing high-quality content that would help others.

That’s it. We write and publish, and once it’s up, we promote it on our social media.

If you want to see an example, here is my Ultimate Guide to JavaScript SEO.


Many other websites in our industry use the same strategy, and some probably have even better results.

If you do want to spend time building links besides just writing good content, focus on the following:

  1. PR: reach out to people who might be interested in your content and ask them to include it on their sites.
  2. Guest blogging: share your expertise on other websites. You’ll gain links and traffic, but more importantly, you’ll build your brand in the long term.

Not all content should be indexed

It may sound surprising inside a guide on getting indexed, but you shouldn’t aim to have Google index all of your content.

You should know that having low-quality content indexed may actually damage your website.

A while ago, I wrote an article analyzing why popular websites such as Instagram, Giphy, or Pinterest suddenly lost 40-50% of their SEO visibility. 

I discovered by accident, while going through data in one of the SEO tools, that these sites suffered massive visibility losses around the same time.

This looked interesting, so I tried to find common patterns. And I found one.

Many tag/search pages from these websites used to be ranking high. And then they got deindexed, just like that. 

Why? I would call it “collective responsibility.” I think Google decided that too many low-quality pages of this category were occupying the index and… deindexed ALL of them.

But when this problem happens, it doesn’t just end there.

It’s a vicious circle:

  1. Google crawls low-quality pages.
  2. Google stops visiting the website as often.
  3. Many pages aren’t ever crawled by Google, even if they are high-quality pages.
  4. There are valuable pages that aren’t indexed.

This shows how ranking, crawling, and indexing are interconnected.

Can crawlers find your content? 

Is your content hidden behind login forms?

If you require users to log in, fill out forms, or answer surveys before accessing content, search engines won’t see it. A crawler is definitely not going to log in.

Are you relying on search forms?

Robots cannot use search forms. Some people believe that if they place a search box on their site, search engines will find everything that their visitors search for. I’m sorry, but that won’t happen.

Is text hidden within non-text content?

Non-text media formats (images, video, GIFs, etc.) should not be used to display text that you wish to be indexed. While search engines are getting better at recognizing images, there’s no guarantee they will read and understand the text on images. It’s always best to have any text you want to be indexed within your web page’s HTML markup.

Can search engines follow your site navigation?

Just as the crawler needs to discover your site via links from other sites, it needs a path of links on your own site to guide it from page to page.

If you’ve got a page you want search engines to find, but it isn’t linked to from any other pages, it’s as good as invisible. Many sites make the critical mistake of structuring their navigation in ways that are inaccessible to search engines, hindering their ability to get indexed.

Do you have clean website architecture?

Website architecture is the practice of organizing and labeling content on a website to improve efficiency and findability for users. Good information architecture is intuitive, meaning that users shouldn’t have to think very hard to flow through your website to find something.

If you feel you may need to reorganize your content, contact us for information architecture services.

Common navigation mistakes that can keep crawlers from finding your content:

  • Having a mobile navigation that shows different results than your desktop navigation.
  • Any type of navigation where the menu items are not in the HTML, such as JavaScript-powered navigation (see the example after this list).
  • Personalization, or showing unique navigation to a specific visitor type, could be considered cloaking by Google.
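
To illustrate the second point, a crawlable menu keeps plain links in the HTML, even if JavaScript enhances it later (a minimal sketch; paths and labels are placeholders):

    <nav>
      <ul>
        <li><a href="/category/shoes">Shoes</a></li>
        <li><a href="/category/accessories">Accessories</a></li>
        <li><a href="/blog/">Blog</a></li>
      </ul>
    </nav>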

NEXT STEPS

Here’s what you can do now:

  1. Contact us.
  2. Receive a personalized plan from us to deal with your indexing issues.
  3. Enjoy your content in Google’s index!

Still unsure about dropping us a line? Read how technical SEO services can help you improve your website.

chapter 3

Other things you should know about indexing 

These were the basics that pretty much every website owner should know.

But since this is an Ultimate Guide, this chapter will cover some of the most advanced aspects of indexing.


International SEO

Below you can find a couple of examples of international websites that have issues with indexing. 

 

Website | Number of language versions | % of pages indexed
Deezer.com | 36 | 96%
Victoriassecret.com | 214 | 85%
Yoox.com | 32 | 50%
android.com | 31 | 50%
only.com | 31 | 65%

What happens when you have an online store in multiple languages?

For instance, you offer your products to people from:

  • United States: example.com/us
  • United Kingdom: example.com/uk
  • Australia: example.com/au

What Google sees is duplicate content available under different URLs. Normally, it would decide on the canonical version and only index that.

That’s where the hreflang tag comes in.

You can use it to inform Google about multiple language versions of your site.
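
For the three stores above, each version could reference the others like this (a minimal sketch with hreflang annotations in the <head>; the same information can also go into your sitemap):

    <link rel="alternate" hreflang="en-us" href="https://example.com/us/" />
    <link rel="alternate" hreflang="en-gb" href="https://example.com/uk/" />
    <link rel="alternate" hreflang="en-au" href="https://example.com/au/" />
    <link rel="alternate" hreflang="x-default" href="https://example.com/" />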

If this sounds confusing, you can read more about it in my Ultimate Guide to International SEO. 

Mobile-First Indexing

As of March 2021, all websites fall under Mobile-First Indexing. 

If MFI is a new concept to you, let me briefly explain it: 

Google now crawls the mobile version of your page and uses the information it finds there for ranking.

So your mobile version is the one being crawled, indexed, and ranked.

Don’t let Google index sensitive data

So far, I’ve mostly discussed cases where Google doesn’t want to index content. But it can also happen that Google indexes more than you wish for.

Be careful when you are publishing things like this:

  • Phone number
  • Address
  • E-mail
  • Any other confidential information

Remember that PDFs, Trello boards, and open FTP servers can get indexed by Google too.
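
If you host PDFs or other files that shouldn’t show up in search results, you can keep them out of the index with the X-Robots-Tag response header. Here’s a minimal sketch for an Apache server with mod_headers enabled (adjust for your own setup):

    <FilesMatch "\.pdf$">
      Header set X-Robots-Tag "noindex, nofollow"
    </FilesMatch>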

In Trello, a popular project management tool, you can set a board as either private or public.

And because many boards are set to public, a lot of them have ended up indexed by Google.


After all, Trello makes it easy for Google to find them by putting them in sitemaps.

Be careful whenever you publish sensitive data on the web because removing content from Google’s index also takes time.

This brings me to my next point.

How to delete content from Google? 

You can request content to be removed from Google for legal reasons.

All you need to do is to fill out a form as described in this video.

This feature might come in handy when someone copies your content and publishes it on their own website.

Performance matters

Web Performance is a ranking factor for Google. But this is outside the scope of this article.

If you need any help with your web performance, consider contacting us for a website performance audit.

What I want to talk about here is that there’s evidence that Google crawls slow pages less frequently. And less crawling means less indexing. Simple.

The crawl capacity limit can go up and down based on a few factors:

  • Crawl health: If the site responds quickly for a while, the limit goes up, meaning more connections can be used to crawl. If the site slows down or responds with server errors, the limit goes down, and Googlebot crawls less.
  • Limit set by site owner in Search Console: Website owners can optionally reduce Googlebot’s crawling of their site. Note that setting higher limits won’t automatically increase crawling.
  • Google’s crawling limits: Google has a lot of machines, but not infinite machines. We still need to make choices with the resources that we have.
source: Google

So if you notice that Google crawls your site less frequently or extensively than it used to, your server might be to blame. Reducing your server’s response time should allow Google to crawl faster.

chapter 4

FAQ

And now is the time for your questions 😉

I hope I covered most of them, but if there’s still something on your mind, do let me know!


What is indexing in SEO?

Indexing is the final step of a pipeline that every web page needs to go through in order to be retrieved and displayed to search engine users when their queries are relevant to the given page’s content.

In order to be indexed on Google, every page (with rare exceptions) must first be found by Googlebot, crawled, and rendered so that Google can analyze its content.

Can I place “noindex” in robots.txt? 

Noindex in robots.txt used to be an undocumented feature that Googlebot sometimes respected, but Google officially dropped support for it in 2019. As of now, it doesn’t work.

How can I use GSC to find indexing issues? 

  • Check the number of indexed pages. 
  • Check if a given page is indexed.
  • Check exactly why a page is not indexed.
  • Find interesting crawl stats.

Will the site: command show me all indexed pages?

I found the following fragment in the documentation of Wix (Wix is a popular Content Management System):

“To see if your site has been indexed by search engines (Bing, Google, Yahoo, etc.), enter the URL of your domain with “site:” before it, i.e. “site:mystunningwebsite.com.” The results show all of your site’s pages that have been indexed, and the current Meta Tags saved in the search engine’s index.”

That’s not true. Site:website.com won’t show you every indexed page, and I have gigabytes of data to confirm it.

It shows you just a sample of pages with varying accuracy.

Can I use Google Cache to check how Google indexed my page? 

That’s one of my favorite myths. 

I don’t want to discuss it at length because we have an excellent article on using Google Cache to check indexed content.

TL;DR: While Google Cache is very useful, don’t rely on it in this context.

Are some pages more prone to not getting indexed?

I noticed that certain types of websites are particularly prone to indexing issues:

  1. Large, rapidly changing websites.
  2. International websites.
  3. eCommerce stores that copy content from a manufacturer.
  4. JavaScript websites.
  5. New websites (!!!).

However, as my statistics show, even small websites with up to 10k URLs can often have indexing issues. 

Is having a sitemap enough to get crawled and indexed?

Commonly, especially in the case of large websites, a sitemap is not enough. Google may not crawl a page if it can only find the link in the sitemap. To help your pages reach the crawling priority threshold, make use of internal linking.

Can pages that are blocked in robots.txt be indexed on Google? 

Yes. Google can find links to those pages on other pages. Just google “Google Jamboard.”

Can a page get removed from Google’s index?

Occasionally, a page gets indexed by Google, ranks for prominent keywords, and then suddenly gets deindexed. There could be many reasons for that: 

  • A page is returning 4xx or 5xx errors. 
  • URLs have a noindex meta tag. 
  • Googlebot can’t access the page (blocked by robots.txt file or through password authentication).
  • Google decided it’s duplicate content.
  • A page no longer satisfies Google quality standards (especially after core updates). 
  • Google decided that there wasn’t enough storage to keep it and made room for more important pages.

How can I know if Google deindexed my page?

You should visit the Crawled – currently not indexed report in Google Search Console. 

However, this report will show you two types of URLs:

  • URLs that got deindexed
  • URLs NOT YET indexed (may be indexed in the future). 

What’s the difference between Crawled – currently not indexed and Discovered – currently not indexed?

I see many people asking this question. It’s very easy; I explained it in the table below: 

 

Status | Google discovered it | Google visited it | Google indexed it
Crawled – currently not indexed | Yes | Yes | At the moment – no
Discovered – currently not indexed | Yes | No | No

How often does Google crawl my website?

Google Search Console offers some data that will help you answer that question.

Log into Google Search Console and navigate to Crawl Stats Report in the Settings section.

You can also find out how often Google crawls your website by analyzing your website’s log files, but it requires some expertise.

It’s worth noting that Google determines how often it should crawl your website based on your website’s crawl budget.

How to check if a sample of pages is indexed?

In the previous part of the article, I explained how to check how many pages of your website aren’t indexed and why.

But how to check if a specific sample is indexed? 

The easiest & most accurate way is to use the URL Inspection Tool. 

Screenshot of Google Search Console's URL Inspection Tool.

This way, you can examine other pages of your website. However, after checking around 100 URLs, you will exceed the daily quota.

To check more URLs, you need to use Google Search Console’s Index Coverage (Page Indexing) report.

Keep in mind that this report only shows up to 1000 URLs. So if you have a large website, this method won’t fully solve your problem either.

In one of my articles, Diagnosing Indexing Issues using GSC, I wrote about a workaround that you can use to get around the 1000 URL limit.

Another way is to use Google Analytics or Google Search Console. 

You can export a list of pages that get more than 0 visits from Google. 

If a page gets traffic from Google, then it’s indexed. You should be careful, though – the fact that a page doesn’t get any traffic doesn’t necessarily mean it’s not indexed.

What does Mobile-First Indexing mean?

From now on, all websites are primarily crawled, indexed, and ranked based on their mobile versions.

My website is not indexed. What are the possible reasons?

  1. Your website is new and Google hasn’t had an opportunity to visit it yet.
  2. There are no external links from other websites – Google may not be sure if your website is good enough.
  3. You have technical issues or code that blocks Googlebot from accessing your content. 
  4. Your website got penalized by Google. 
  5. Your internal linking needs some work.
  6. You have a lot of low-quality, thin content.

Indexing ≠ ranking

As a final note, I need to emphasize that indexing is very important, but it’s not ranking. A page can be indexed and not rank for any keywords.

If you have a large website, you probably have some pages that get next to zero clicks and impressions – just look for them in your Google Search Console account!

Ranking and getting traffic is the final, most rewarding step of the SEO journey. But remember that crawling, indexing, and ranking all belong in the same pipeline and are fully interconnected.

 

Key takeaways:

  • Google doesn’t index everything. The statistics are staggering. 16% of valuable, indexable pages aren’t indexed.
  • At the same time, many large websites are fully indexed; an optimized website is easier for Google to index.
  • Indexing is much more complicated than ensuring that a page doesn’t have a “noindex” tag or that it’s not blocked by robots.txt. 
  • eCommerce websites are particularly prone to indexing issues.
  • JavaScript-powered websites aren’t the only ones that can suffer from indexing issues.
  • Unique content helps with indexing, while having duplicate content makes it more of a challenge. 
  • Google Search Console is a crucial tool for diagnosing indexing issues.
  • Because the web is growing, we should expect Google to be even pickier when indexing content in the future.
  • Having a URL in the sitemap is not enough for a page to be indexed by Google. 
  • You shouldn’t aim to have every page indexed by Google. Getting low-quality pages indexed can harm your traffic.
  • Ranking and indexing are tightly related to crawling and discovering new pages.
  • Google can “judge” a page without crawling it by looking at other pages on your site.