A year later, in 2015, Google stated that it was “generally able to render and understand your web pages like modern browsers,” and deprecated the now-obsolete AJAX crawling scheme.
Then, on May 7th, 2019, Google announced an update to its rendering service. Since then, Google no longer uses Chrome 41; instead, Googlebot is updated in lockstep with the latest version of Chrome. This means Google can finally render pages more or less like the Chrome browser you might be using.
Don’t forget about the following issues:
- You should STILL check if Google can render and index your website.
Let’s look at each of these points in more detail.
1. Rendering Delays
Supported features are one thing, but rendering delays are another.
Why should you bother?
Google STILL has two waves of indexing.
- The first wave (HTML) is instant.
- However, the second wave (rendering) is delayed. It happens only when Google has rendering resources available.
How long should you wait?
Google’s John Mueller discussed it on Twitter: “Usually it’s on the order of days to a few weeks even.”
I’ve seen some articles suggesting that once Google supports the most recent JS features, we’ll no longer need to do dynamic rendering (serving bots a static, pre-rendered version of a page) or hybrid rendering.
Well, as long as Google’s JavaScript rendering is not instant, I CANNOT agree with this.
This is a very dynamic industry; ads can become outdated within hours or days.
There’s a serious risk that Google will render and index your content AFTER it expires…
For rapidly changing content, Google recommends dynamic rendering.
If you want to have a JS website that is successful in these search engines, the best way is to use dynamic rendering or hybrid rendering.
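To make that more concrete, here is a minimal sketch of the dynamic rendering idea, assuming a Node.js/Express server and Puppeteer for pre-rendering (the bot list, domain, and routing are simplified placeholders, not a production setup):

```javascript
// Dynamic rendering sketch: serve pre-rendered HTML to known bots,
// and the normal client-side JS app to everyone else.
const express = require('express');
const puppeteer = require('puppeteer');

const app = express();
const BOT_PATTERN = /googlebot|bingbot|twitterbot|facebookexternalhit/i;

async function prerender(url) {
  // Render the page in headless Chrome and return the final HTML.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const html = await page.content();
  await browser.close();
  return html;
}

app.get('*', async (req, res, next) => {
  const userAgent = req.headers['user-agent'] || '';
  if (BOT_PATTERN.test(userAgent)) {
    // Bots get the fully rendered, static snapshot.
    const html = await prerender(`https://example.com${req.originalUrl}`);
    return res.send(html);
  }
  // Regular users fall through to the normal single-page app.
  next();
});
```

In practice, you would cache the pre-rendered snapshots (or use an off-the-shelf solution such as Rendertron) instead of launching a browser for every request.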
Let me show you what can happen if you don’t serve Open Graph tags/Twitter Cards in the initial HTML:
As you can see, there is no hero image and no description. It definitely doesn’t encourage users to click on such a link.
Please compare it to our website.
Do you see the difference?
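For illustration, here is a sketch of what serving those tags in the initial HTML might look like, using a simple server-side JavaScript helper (the titles, descriptions, and URLs are placeholders):

```javascript
// Sketch: build the social <meta> tags on the server so social media
// crawlers (which usually don't execute JS) see them in the initial HTML.
function buildSocialTags({ title, description, imageUrl, pageUrl }) {
  return `
    <meta property="og:title" content="${title}">
    <meta property="og:description" content="${description}">
    <meta property="og:image" content="${imageUrl}">
    <meta property="og:url" content="${pageUrl}">
    <meta name="twitter:card" content="summary_large_image">
    <meta name="twitter:title" content="${title}">
    <meta name="twitter:description" content="${description}">
    <meta name="twitter:image" content="${imageUrl}">
  `;
}

// Example usage when rendering a page on the server:
const headTags = buildSocialTags({
  title: 'Example Product',
  description: 'A short description shown in the social media preview.',
  imageUrl: 'https://example.com/hero.jpg',
  pageUrl: 'https://example.com/product/123',
});
```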
Traffic from social media websites is huge.
Imagine you are Wish.com and most of your revenue relies on social media platforms…
Seems like a big deal, doesn’t it?
One equation will always hold true: even if Google makes its JS rendering perfect, poorly implemented JavaScript SEO may still lead to a serious SEO disaster.
You should ensure that:
- Google can access the content hidden under tabs (see the sketch after this list).
- Important JS files aren’t blocked in robots.txt.
- JS doesn’t remove any content.
Of course, the above list is not exhaustive; however, it should help you fix the most common JS SEO issues.
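As an example of the first point, here is a sketch of a tab widget that keeps all of its content in the initial DOM, assuming a simple client-side setup (the element IDs and data attributes are placeholders):

```javascript
// Safe pattern: every tab panel is already in the DOM (and in the
// server-rendered HTML), so Google can index it even if it never "clicks".
// The click handler only toggles visibility.
document.querySelectorAll('[data-tab-target]').forEach((tab) => {
  tab.addEventListener('click', () => {
    document.querySelectorAll('.tab-panel').forEach((panel) => {
      panel.hidden = panel.id !== tab.dataset.tabTarget;
    });
  });
});

// Risky pattern (shown for contrast): the panel content only exists after
// a user interaction, so it may never appear in the HTML Google indexes.
// tab.addEventListener('click', async () => {
//   const res = await fetch(`/api/tab-content/${tab.dataset.tabTarget}`);
//   document.getElementById(tab.dataset.tabTarget).innerHTML = await res.text();
// });
```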
5. You Should Still Check if Google Can Render and Index Your Website
Executing JS at scale is a really resource-intensive process.
It may sound a bit cliché, but there are over 130 trillion documents on the web. It’s a huge challenge even for giants like Google (and any other search engine for that matter) to crawl and render the entire internet.
Based on my experience, if a server responds really slowly, Google may skip requesting your JS files and index the content as it is. After all, there are trillions of other documents waiting for Googlebot to visit.
This has strong implications. If your server/APIs respond slowly, you risk Google not indexing content generated by JS.
I expect the issue will still exist, no matter which JS features Google ends up supporting.
There will still be a risk that Google may not fetch some resources simply because its algorithms decide they don’t contribute to the essential content of the page.
It’s just an algorithm; it can still make mistakes, and those mistakes can have a disastrous impact on your presence in Google Search.