Simply put, yes, JS issues can hurt your search engine optimization efforts.
Generally, the more complicated your coding is, the more likely it is to suffer from JS issues that affect SEO. However, the good news is that there are methods to fix these errors and therefore improve your website’s ranking.
Now that we have established that JS issues can, in fact, hurt your SEO efforts, let us look at some of the obstacles that you are most likely to encounter.
JS (and CSS) files are blocked for Googlebot
It is crucial that crawlers are able to render your website correctly, which requires access to the necessary internal and external resources. If crawlers cannot load those resources, Google may render the page incorrectly, leading to differences between how the page appears to a regular visitor and to a search engine bot.
A common problem is blocking important resources in the robots.txt file.
So, what to do about it?
You can verify how your pages are rendered by Google by using the URL Inspection Tool in Search Console. It is best to test a few representative URLs from each section of your site that uses a distinct template.
A critical question is:
Do resources that are not loaded add any significant content to the page and should be crawlable instead?
Also, examine your robots.txt file – are any relevant directories that store assets blocked for Googlebot?
If so, remove any blockades that target critical files.
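For instance, rules along these lines in robots.txt (the directory paths are hypothetical) would prevent Googlebot from rendering your pages properly, and removing or overriding them fixes the issue:

```text
# Problematic: blocks the scripts and styles needed for rendering
User-agent: *
Disallow: /assets/js/
Disallow: /assets/css/

# Fixed: critical resources are crawlable again
User-agent: *
Allow: /assets/js/
Allow: /assets/css/
```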
HTML links (hyperlinks with <a> tag and href attribute) should be used to link to indexable pages so that search engines can:
- crawl and index your pages
- understand the structure of your website.
Google’s documentation provides examples of problematic implementations:
If your paginated pages lead to unique indexable URLs, it is essential to use <a> links for pagination, so that Google can discover and index the additional content on any following pages (such as product pages linked from paginated categories).
For example, Fenty Beauty’s category pages use a Load More button to reveal more products without any <a> tag links that would be visible to web crawlers.
Clicking the button will take you to a URL such as https://fentybeauty.com/collections/makeup-lip?page=2,
but that link is nowhere to be found on the parent category.
This means Googlebot will have problems accessing the paginated pages and discovering products that appear below the initial list of items.
If you are interested in that topic, read our case study from 2022:
In short – avoid JS-only links for critical content and stick with regular HTML hyperlinks.
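To illustrate the difference (the button handler name is hypothetical):

```html
<!-- Crawlable: the URL is exposed in an href, so Googlebot can follow it -->
<a href="/collections/makeup-lip?page=2">Load more</a>

<!-- Not crawlable: no URL appears anywhere in the markup -->
<button onclick="loadMoreProducts()">Load more</button>
```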
You are relying on URLs that contain hashes (#)
Fragment identifiers, also known as anchors or hash fragments, are used to navigate to a specific section within a web page.
URLs containing a hash symbol will not be crawled by Googlebot as separate pages and therefore cannot be indexed, unless the content was already present in the source code.
For your content to be found and indexed properly in any framework, it is best practice to direct search engines to the right page by other means: create new, unique static URLs without the hash symbol, or use a different separator, such as a question mark (?), which is commonly used for parameters.
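For example (hypothetical URLs):

```text
https://example.com/products#sale       <- the fragment is ignored by Googlebot
https://example.com/products?tab=sale   <- crawlable as a separate URL
https://example.com/products/sale       <- crawlable static URL
```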
You are using JS redirects
Google mentions in their documentation that JS redirects should only be used as a last resort.
Also, it can be hard to know whether the desired redirect is actually being executed – there is no guarantee that Googlebot will execute the JS that triggers the URL change every time.
For example, if client-side JS redirects are the default solution for a website migration with many URL changes, it will be less efficient and will take more time for Googlebot to process all the redirects.
Additionally, pages that are set to noindex in the initial HTML do not go through rendering, so Google will never see a redirect that is triggered with JS on those pages.
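As a rough illustration (assuming an nginx server – the exact syntax depends on your stack), a server-side 301 is processed straight from the HTTP response, while a JS redirect forces Googlebot to render the page first:

```text
# Server-side 301: no rendering needed
rewrite ^/old-page$ /new-page permanent;

# JS redirect: Googlebot must render the page and execute the script
# <script>window.location.replace("/new-page");</script>
```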
You are expecting Google to scroll down like real users do
As already mentioned in relation to pagination issues, Googlebot cannot click buttons like a human would. Also, Google cannot scroll the page the way regular users do.
Any content that requires such actions to load will not be indexed.
For example, on infinite pagination pages, Google will not be able to see links to subsequent products (beyond the initial render) as it will not trigger the scroll event.
However, Google is able to render pages with a tall viewport (about 10,000 px), so if additional content is loaded based on the height of the viewport, Google may be able to see “some” of that content.
But you need to be mindful of the 10,000 px cut-off point – content loaded below it will likely not be indexed.
What is more, there is no guarantee that Google will use a high viewport at scale – not all pages may get rendered with it, so not all of their content will get indexed.
If implementing lazy-loading, for example, subsequent products on an ecommerce category, make sure that the lazy-loaded items are only deferred in terms of visual rendering (their images are not downloaded upfront but lazy-loaded), but their links and details are present in the initial HTML without the need to execute JS.
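As a sketch of that pattern (hypothetical product data), the link and product name sit in the initial HTML, while only the image download is deferred:

```html
<li class="product">
  <!-- Crawlable without executing JS -->
  <a href="/products/matte-lipstick">Matte Lipstick</a>
  <!-- Only the image is lazy-loaded -->
  <img src="/img/matte-lipstick.jpg" alt="Matte Lipstick" loading="lazy">
</li>
```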
Generally speaking, for your website to be indexed properly, all content should load without the need for scrolling or clicking. This allows the entire website to be viewed correctly by visitors and crawlers alike.
You can use the Inspection Tool in Google Search Console to verify that the rendered HTML contains all the content that you want indexed.
Google nowadays ranks websites based on their mobile versions, which are often less optimized than their desktop counterparts. Because of mobile-first indexing, you need to ensure Google can see the links in your mobile menu.
Responsive web design is the common answer to that issue.
Ideally, use one set of menu links and style it to work across all screen resolutions – there is no need to create separate menu instances for different resolutions.
Separate menus cause problems either way: if all variants are included in the code at the same time, you double the number of navigation links on the page; if only one appears in the code depending on the screen resolution, remember that only what is visible on mobile will be indexed (mobile-first indexing).
Links present only in the desktop menu will not be taken into account.
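A minimal sketch of the single-menu approach (class names and URLs are hypothetical):

```html
<nav class="main-menu">
  <a href="/women">Women</a>
  <a href="/men">Men</a>
  <a href="/sale">Sale</a>
</nav>
<style>
  /* Desktop: horizontal bar */
  .main-menu { display: flex; gap: 1rem; }
  /* Mobile: the same links, stacked vertically */
  @media (max-width: 768px) {
    .main-menu { flex-direction: column; }
  }
</style>
```

Because the same `<nav>` element serves every resolution, there is no duplicated navigation in the code and nothing is lost under mobile-first indexing.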
Additionally, if your menu is generated by scripts – Google will most likely not crawl it, or at least, not every time. With such an important part of your navigation, this situation is not ideal. If you cannot use solutions like SSR (Server-Side Rendering), please remember to keep your critical links in the unrendered source HTML.
It is best to avoid hiding content behind tabs or “click here to see more”-type buttons. Instead, use a combination of CSS and HTML to keep the content in the code and only hide it from the visual render until a tab is clicked or tapped.
This way, it is far more likely that content will be indexed.
To verify that Google can index your tabbed content, copy a fragment of text hidden under a tab and search for it using the site: operator with the URL of the webpage:
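The query takes this shape (hypothetical URL and text):

```text
site:example.com/product-page "a sentence copied from the hidden tab"
```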
If you see your content in the actual Google Search, you can be certain it got indexed.
You are relying on dynamic rendering
If you take the approach of serving visitors with a fully-featured JS site whilst the server sends a pre-rendered version to Googlebot, that is dynamic rendering.
And it can lead to a number of problems.
Firstly, this creates two instances of the website that you need to manage and maintain (each page has its pre-rendered version served to Googlebot based on user-agent recognition), which naturally requires more resources. You are then required to verify that the version served to Google is equal to what real users see, as major content differences can lead to outdated content being indexed, or worse, your site getting penalized for misleading practices.
As of 2023, Google is not recommending dynamic rendering as a valid, long-term solution.
How to detect if your site uses dynamic rendering?*
First, browse a page with JS disabled in your browser – if it appears blank or nearly empty, the site depends on client-side rendering.
Then do the same but switch the user agent to Googlebot – with JS still disabled, does the page look the same?
Or perhaps it now looks nearly complete (compared to the blank page seen before)?
If so, your site uses dynamic rendering.
*Note that there are edge cases – if the site not only detects the user agent but also verifies that the request comes from actual Google servers, this test will not prove anything. Ask your dev 😉
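If you have already saved two copies of a page – one fetched with a regular browser User-Agent and one with Googlebot’s (e.g. via curl) – a rough comparison can hint at dynamic rendering. Here is a minimal Python sketch; the heuristic and the 2× threshold are our assumptions, not an official check:

```python
import re

def looks_dynamically_rendered(user_html: str, googlebot_html: str) -> bool:
    """Rough heuristic: strip tags and compare the amount of visible text.

    A much larger text payload in the Googlebot version suggests a
    pre-rendered copy is served only to crawlers (dynamic rendering).
    """
    def visible_text(html: str) -> str:
        # Drop script/style blocks, then all remaining tags.
        html = re.sub(r"<(script|style)[^>]*>.*?</\1>", " ", html, flags=re.S | re.I)
        return re.sub(r"<[^>]+>", " ", html)

    user_words = len(visible_text(user_html).split())
    bot_words = len(visible_text(googlebot_html).split())
    # Flag if the bot version carries at least twice as much visible text.
    return bot_words >= 2 * max(user_words, 1)
```

This will not catch every setup (see the note above about server-side verification of Google’s IPs), but a large, consistent gap between the two versions is a strong signal worth raising with your developers.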
For example, take this page on levi.com: https://www.levi.com/PL/en/clothing/women/501-levis-crop-jeans/p/362000111
It is likely the site uses dynamic rendering to serve pre-rendered content to crawlers.
It does not look like it is working correctly though!
Here’s a comparison of the content served to a regular user and to Googlebot:
Additionally, dynamic rendering can be too intensive for your server infrastructure, causing problems with the availability and response times of the pre-rendered HTML. If the server-side rendered version is generated ad hoc, any backend calculations are performed on the fly, only upon an incoming request from Googlebot.
Depending on the size of the resources and JS payloads, this can take some time, hence resulting in atrocious response times (Googlebot is patient, but will not wait forever!).
If any of the JS chunks are not executed during that calculation, you could be missing parts of the page – therefore, that missing content will not be indexed. If a substantial portion of the content is missing from the prerender, it results in thin-content problems on URLs that Googlebot indexes, and this negatively impacts the quality of the entire site.
The recommended long-term solution is to serve the same server-side rendered version of your pages to both crawlers and users. In other words, remove detecting if the incoming request comes from user agent Googlebot to serve it dedicated content – just serve rendered content to everyone.
Your error pages are indexed (soft 404 errors)
Soft 404 errors occur when a page that no longer exists (or has no real content) returns a 200 HTTP status code instead of a 404, so it remains eligible for indexing. This can affect the performance of your website in search results, so it is critical to verify that 404 error codes are returned to Googlebot as expected. This becomes even trickier if your site uses dynamic rendering.
You can also use Google Search Console to identify URLs that are returning 200 HTTP status codes instead of 404 errors. They are usually marked as “Soft 404” in the Page Indexing report.
Then, it is just a matter of changing them to “regular” 404s, with accurate HTTP status code.
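For client-rendered apps where the server cannot easily return a 404, Google’s documentation suggests two workarounds, sketched below (the URL and markup are illustrative):

```html
<script>
  // Option A: send the visitor to a URL whose server response is a real 404
  // window.location.href = "/not-found";

  // Option B: keep the URL, but tell crawlers not to index the error state
  const meta = document.createElement("meta");
  meta.name = "robots";
  meta.content = "noindex";
  document.head.appendChild(meta);
</script>
```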
You are using large JS (and CSS) files that slow down the page’s performance
As crawlers can measure the loading time of websites, it is often useful to reduce the size of large files (both JS and CSS) so that your website can be loaded quickly.
To address this issue, you can use a number of tactics such as:
- Minifying and compressing your JS/CSS files.
- Making sure JS/CSS is not render-blocking.
- Deferring JS that is not needed for the initial page render (for example, JS that handles user-interactions).
- Reducing the use of third-party libraries.
These can all contribute to reducing the size of your files so that they can be loaded quickly, providing an overall positive experience for both visitors and search engine crawlers.
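For instance, deferring non-critical scripts is often a one-attribute change (file names are hypothetical):

```html
<!-- Needed for the initial render: loads and executes normally -->
<script src="critical.min.js"></script>
<!-- Handles user interactions: downloaded in parallel, executed only after the HTML is parsed -->
<script defer src="interactions.min.js"></script>
```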
Core Web Vitals (CWVs) are a set of user-centered metrics introduced by Google to assess the loading performance, interactivity, and visual stability of web pages.
The quickest way to access the CWVs scores for any website is to use PageSpeed Insights.
Input your URL into the PSI tool and, provided the data sample is large enough, it will display page speed metrics and your page’s scores based on real-life data from actual users of your site.
Click on each metric name to see definitions and background on how each metric is calculated.
Here is an example of a PSI result, based on fentybeauty.com page:
The tool not only gives you real user metrics, but also a set of actionable recommendations to share with your development team.
If you cannot cover these issues on your own, working with an experienced SEO specializing in JS will help. At Onely, we are working to get your website indexed even with JS dependencies and provide an overall positive experience for both visitors and search engine crawlers.