This demonstrates the existence of a rendering queue within Google’s indexing pipeline, and shows how waiting in this queue can drastically affect how fast your content gets crawled and indexed.
And it’s one more reason to push as much of your content as possible into plain HTML.
Together with Marcin Gorczyca, I created a subdomain with two folders. Each folder contained a set of pages like this one:
The copy was generated by AI.
One of the folders (/html/) contained pages built with plain HTML only, meaning Googlebot could follow the internal link to the next page (in the case above, “Ostriches”) without fetching additional resources or rendering anything. In the other folder (/js/), that same internal link was only available after executing JavaScript, so Googlebot had to render each page before it could discover the next one.
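For illustration, the two page variants could have looked roughly like this. The file names and markup here are my reconstruction of the setup, not the actual experiment files:

```html
<!-- /html/ variant: the link is present in the initial HTML payload,
     so Googlebot can discover it without rendering anything. -->
<html>
  <body>
    <p>AI-generated copy about ostriches…</p>
    <a href="/html/page-3.html">Ostriches</a>
  </body>
</html>

<!-- /js/ variant: the link only exists after JavaScript runs,
     so Googlebot must render the page before it can follow it. -->
<html>
  <body>
    <p>AI-generated copy about ostriches…</p>
    <script>
      const a = document.createElement('a');
      a.href = '/js/page-3.html';
      a.textContent = 'Ostriches';
      document.body.appendChild(a);
    </script>
  </body>
</html>
```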
Each folder contained seven pages in total, six of which were only accessible via those internal links.
We pinged Google about the existence of the first page in each folder, and we then waited for all pages to get crawled.
With plain HTML, it took just 36 hours for all of the pages to get crawled. That's nearly 9 times faster than in the JavaScript folder.
In 2019, I wrote an article responding to Google’s claims that the delay between crawling and rendering is 5 seconds at median. But back then, our research at Onely mainly focused on the delay between content getting published and getting indexed.
So now we know there’s a very significant difference. But the question remains: why?
Crawl budget would normally be a possible explanation. Every additional file needed to render a page is an additional request that Googlebot needs to make, which counts against your crawl budget. As Erik Hendriks from Google said during WMConf 2019, “crawl volume is gonna go up by 20 times when I start rendering.”
Going back to my ancient article about the rendering delay, Martin Splitt said the following during the November 2019 Chrome Developer Summit:
If this is in fact true, then it would mean the following happened with my experiment (let’s talk about Googlebot getting from the second to the third page in the /js/ folder, just to make the example clear):
- The second page was crawled.
- 5 seconds later (at median; let's be generous and call it one hour), the second page was rendered and Google found the link to the third page.
- The third page was crawled after waiting in the crawl queue for 164 hours.
At the same time, the third page in the /html/ folder got crawled three hours after the second page was discovered. At the 165-hour mark, it had already been 129 hours since Googlebot discovered the final, seventh page in that folder!
So to me, there is just one explanation left:
There’s a significant delay between a page being crawled and being rendered, and that’s why it took Google so long to discover and crawl consecutive pages in the /js/ folder.
Because rendering takes additional computing resources, pages that require rendering have to wait in a rendering queue in addition to the crawl queue, which applies to all pages.
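Under this hypothesis, discovery works like two queues in series: every URL waits in the crawl queue, and JavaScript-dependent pages additionally wait in the rendering queue before their outgoing links are found. Here is a toy model of that effect; the wait times are invented for illustration and are not Google's actual numbers:

```javascript
// Toy model of the two-queue hypothesis. Each page in a chain links only
// to the next one, so every hop pays the crawl-queue wait, and pages whose
// links require JavaScript also pay the rendering-queue wait.
// These hour values are made up for illustration only.
const CRAWL_WAIT_HOURS = 3;    // assumed time a URL sits in the crawl queue
const RENDER_WAIT_HOURS = 24;  // assumed time a page sits in the rendering queue

// Hour at which page N in the chain gets crawled, counting from the
// crawl of page 1 at hour 0.
function hoursUntilCrawled(pageNumber, needsRendering) {
  const perHop = CRAWL_WAIT_HOURS + (needsRendering ? RENDER_WAIT_HOURS : 0);
  return (pageNumber - 1) * perHop;
}

console.log(hoursUntilCrawled(7, false)); // plain HTML chain: 18
console.log(hoursUntilCrawled(7, true));  // JS chain: 162
```

The point of the model is that the rendering delay compounds: it is paid once per hop in the link chain, so deep JavaScript-dependent structures fall further and further behind their plain-HTML equivalents.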
Of course, Google may assign higher rendering priority to popular pages, and the delay may not always be this significant. But before you assume this isn’t an issue for you, run an experiment on your website and check how much time your pages spend in the rendering queue.
Here is my main takeaway: the rendering queue is very real and can significantly slow down Googlebot’s discovery process on your website. Especially for websites that push out tons of content and need it indexed quickly, like news sites, this is a critical issue.