These signals include internal links, site navigation, the HTML tags surrounding important text and keywords, inbound links from sources such as paid media services, and URL structure.
As a condensed explanation: instead of the straightforward path Googlebot and Caffeine (the indexer) follow for plain HTML, with JavaScript they must go through the same process with half a dozen extra steps. These include fetching information from external APIs as needed, compiling and executing code, and leaning on many of Caffeine's "special features" to try to render the content accurately.
In many cases, Googlebot can fetch JS content, and the indexer will read and index it. Even so, it is easier, and safer, to stick to approved techniques and types of code, JS or otherwise.
To see how Google views your site and what content may be left out, enter your URL into the URL Inspection tool in Google Search Console. It shows how your pages are being indexed and exposes any gaps between what is really on the page and how Google reads it.
You should also make sure that your robots.txt file allows Google to view, fetch, and index any JS you're using. Sites sometimes block all JS because some of it generates pages that are not meant for human readers. It is better, however, to block individual scripts, directories, or pages. This takes more time and makes the coding more complicated for developers, but it can significantly improve JS-related SEO.
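As a minimal sketch of that selective approach (the /assets/js/ and /internal-tools/ paths are hypothetical placeholders for your own site structure), a robots.txt might allow the JS Google needs for rendering while blocking only the machine-only pages:

    # Let Googlebot fetch the JavaScript and CSS it needs to render pages
    User-agent: Googlebot
    Allow: /assets/js/
    Allow: /assets/css/
    # Block only the pages not meant for human visitors, not all JS
    Disallow: /internal-tools/

Blocking at this level of granularity keeps renderable content open to Caffeine while still hiding what genuinely should not be indexed.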
If you want to improve the SEO of a website, stick with plain HTML wherever possible. Use JS on top to enhance specific functions, but make sure there is always clean, readable HTML underneath, so your pages are crawled and indexed efficiently no matter how Googlebot and Caffeine are altered in the future.
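To illustrate that "HTML underneath, JS on top" pattern (the element ID, markup, and /api/stock endpoint below are hypothetical examples, not a prescribed implementation), the core content here is plain HTML that crawlers can index even if the script never runs:

    <article id="product-info">
      <!-- Indexable content lives in plain HTML, readable without JS -->
      <h1>Blue Widget</h1>
      <p>A durable widget for everyday use.</p>
    </article>

    <script>
      // Enhancement only: fetch a live stock count if JS is available.
      // If this fails or never runs, the HTML above is unaffected.
      fetch('/api/stock/blue-widget')
        .then(function (res) { return res.json(); })
        .then(function (data) {
          var note = document.createElement('p');
          note.textContent = data.inStock + ' currently in stock';
          document.getElementById('product-info').appendChild(note);
        })
        .catch(function () { /* degrade gracefully; core HTML stays intact */ });
    </script>

Because everything important is already in the markup, Googlebot and Caffeine can index the page on the first pass, and the JavaScript only adds a nicety on top.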