
Will Serving Real HTML Content Make A Website Faster? Let's Experiment!

Published September 21, 2022

TL;DR: if your site is Twitter, AirBnB, Apple, Spotify, Reddit, CNN, FedEx, or so many others, then probably yes!

Many of the most common performance problems in websites and applications today are caused by how they load and rely upon JavaScript, and the difficulty involved in solving those problems often depends on the degree of that reliance. When JS reliance is minimal, fixing poor delivery performance can be as simple as instructing the browser to load certain scripts at a lower priority, allowing HTML content to render sooner. But when a site depends on JavaScript to generate its HTML content in the first place, those sorts of optimizations can't help, and in those cases fixing the problem may require deep and time-consuming architectural changes.
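
To illustrate the simple end of that spectrum, deprioritizing a non-critical script can be a tiny change. Here's a minimal sketch (the analytics script URL is a hypothetical example):

```ts
// Dynamically injected scripts are async by default, so this one won't block
// HTML parsing. The fetchpriority attribute additionally hints a lower network
// priority in Chromium-based browsers (it's ignored where unsupported).
const script = document.createElement('script');
script.src = '/js/analytics.js'; // hypothetical non-critical script
script.setAttribute('fetchpriority', 'low');
document.head.appendChild(script);
```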

While it has been around longer, the pattern of using JavaScript to generate a page's content after delivery became particularly popular within the last 5-10 years. The approach was initially intended for web applications with highly dynamic, personalized, real-time content, but nowadays frameworks such as React have made these practices commonplace even among sites that don't share those specialized qualities.

Appropriateness aside, sites built this way can suffer longer initial loading times due to the stepped nature of their content delivery, delayed requests for images and videos, and the time it takes an average device to process code after it's delivered.

Signs of Dependence

Sites that suffer delivery performance issues due to JavaScript over-dependence can be relatively easy to spot. Often these sites will visibly render their content in noticeable steps: first a blank or "skeleton" layout without content, and sometime after that, whenever the JavaScript finishes fetching and generating the HTML, a fully populated page. Those initial layout steps are intended to give users the perception that the page responded quickly, making long loading times seem more tolerable than they otherwise would.

Let's look at some examples.

This WebPageTest filmstrip shows Twitter’s Explore page loading on a 4G connection on a mobile device in Chrome, at 1-second intervals.

Notice how the initial filmstrip keyframes display a loading image (the blue bird in this case), then an unpopulated placeholder page layout, and ultimately the real content. Ironically, that bird will register as the site's First Contentful Paint metric, but the actual page content will replace it much later. Apparently, humans aren't the only audience for visual loading tricks!

Here’s another example. This is AirBnB’s homepage loaded on a cable connection in Chrome on a desktop computer.

Here, the telltale "skeleton" loading screen is visible for about 5 seconds, and only after that page's HTML is generated can the browser begin to discover and subsequently fetch the images that will eventually populate the grid. Those grid images register as the site's Largest Contentful Paint (LCP) metric, one of Google's "Core Web Vitals":

Tradeoffs and How to Know When to Make a Change

Now, it's very important to note that while the examples in this post helpfully display this pattern, the architectural decisions of these sites are made thoughtfully by highly skilled teams. Web development involves tradeoffs, and in some cases a team may deem the initial performance impact of JS-dependence a worthy compromise for benefits they get in other areas, such as personalized content, server costs and simplicity, and even performance in long-lived sessions. But tradeoffs aside, it's reasonable to suspect that if these sites were able to deliver meaningful HTML up-front, a browser would be able to render initial content sooner and their users would see improvements as a result.

And that situation tends to put us in a bind: it's one thing to suspect that a change will improve performance, and another thing to be able to see the impact for yourself. For many sites, the changes involved in generating HTML on the server instead of the client can be quite complicated, making them difficult to prototype, let alone change in production.

It's hard to commit to big, time-consuming changes when you don't know whether they will help...

Enter WebPageTest Opportunities & Experiments

One of my favorite parts of WebPageTest's new Opportunities & Experiments feature is that it can diagnose this exact problem and reliably predict just how much a fix could improve performance.

After running either of these sites through WebPageTest and visiting their opportunities section, you'll find an observation that a large amount of HTML was generated after delivery by JavaScript and may be worth your attention:

Here it is on Twitter’s result:

screenshot of text: A significant portion of HTML content (132.09kb, 52.11% of total HTML) was generated by JavaScript after delivery.

…and on AirBnB's:

another screenshot of similar text to prior image
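
As an aside, you can roughly approximate this measurement yourself. Here's a sketch (not WebPageTest's actual implementation; it assumes Node 18+ with Puppeteer installed and an ES module context for top-level await) that compares the size of the HTML the server delivered against the serialized DOM after JavaScript has run:

```ts
// Rough approximation of "HTML generated by JavaScript after delivery".
import puppeteer from 'puppeteer';

const url = 'https://twitter.com/explore'; // example page from this post

// 1. The HTML the server actually delivered.
const delivered = await (await fetch(url)).text();

// 2. The serialized DOM after scripts have fetched and rendered content.
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url, { waitUntil: 'networkidle0' });
const rendered = await page.content();
await browser.close();

const pct = (100 * (rendered.length - delivered.length)) / rendered.length;
console.log(`~${pct.toFixed(2)}% of the final HTML was generated after delivery`);
```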

Those observations come free to any WebPageTest user as part of any test run, and we're constantly refining and adding more diagnostics to that page. In addition, a particularly novel companion to those observations will be offered to users with access to WebPageTest Experiments, which are part of the WebPageTest Pro plan.

Pro users who click that observation to expand it will be presented with the following experiment:

screenshot of text: Mimic Pre-rendered HTML: This experiment mimics server-generated HTML by swapping the initial HTML with the fully rendered HTML from this test run. Note: this will very likely break site behavior, but is potentially useful for comparing early metrics and assessing whether moving logic to the server is worth the effort.

Once applied, that experiment will test the impact of delivering that site’s final HTML directly from the server at the start, allowing developers to understand the potential impact of making that change before doing any of the work!

The Mimic Pre-Rendered HTML Experiment

Like all WebPageTest experiments, this experiment works by testing the performance of making one (or many) changes to a live site mid-request using a special proxy server and comparing the performance of that test to an identical test that does not modify the site at all. These two groups of test runs are called the Experiment and the Control of a WebPageTest experiment, and we typically encourage users to run at least 3 of each to get a good median run. To make the comparison as fair as possible, WebPageTest runs both the experiment and the control through its experiments proxy server, either making changes on the fly or simply passing requests directly through, respectively. That last part is important because simply proxying a site can impact its performance at least in subtle ways, so it's best not to compare a proxied test to an original unproxied test. With Experiments, our aim is to ensure that the only difference we’re measuring between the experiment and the control is the optimization itself.

The Mimic Pre-Rendered HTML experiment works through one interesting swap, using some special information collected in the original test. As of this summer, every test run on WebPageTest captures the final state of a page's HTML (technically, its DOM) and stores it as part of the test's data. When the initial page is requested during the pre-render experiment, the proxy fetches that stored final HTML and uses it to replace the site's initial HTML response body as it passes the response along to the browser. While not always perfect, for many sites this experiment should reveal the potential performance benefits of an actual implementation in just one click.
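
To make that mechanism concrete, here's a heavily simplified sketch of such a proxy (assumptions: Node 18+ and Express; the origin URL and snapshot file name are hypothetical, and this is not WebPageTest's actual proxy code). It serves the stored snapshot for the initial page request and passes everything else straight through, which is also how the control run treats every request:

```ts
import express from 'express';
import { readFile } from 'node:fs/promises';

const ORIGIN = 'https://example.com'; // hypothetical site under test
const app = express();

app.use(async (req, res) => {
  // The experiment: serve the stored final-HTML snapshot for the initial page.
  if (req.method === 'GET' && req.path === '/') {
    res.type('html').send(await readFile('./final-dom-snapshot.html', 'utf8'));
    return;
  }
  // Everything else passes straight through to the origin (header forwarding
  // omitted for brevity); a control run would do this for every request.
  const upstream = await fetch(ORIGIN + req.originalUrl);
  res.status(upstream.status);
  res.set('content-type', upstream.headers.get('content-type') ?? 'application/octet-stream');
  res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(8080);
```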

As an added tip, I like to combine this experiment with a "disable scripts" experiment, because doing so helps prevent JavaScript-rendered sites like these from unnecessarily re-rendering their content after delivery. I've added that experiment to the following runs.
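
In a hand-rolled prototype like the proxy sketch above, the equivalent of that "disable scripts" step could be as crude as stripping script tags from the snapshot before serving it (a rough regex, not production-grade HTML parsing):

```ts
import { readFile } from 'node:fs/promises';

// Load the stored snapshot (the hypothetical file from the proxy sketch above),
// then remove external and inline scripts so the page can't re-render itself.
const snapshotHtml = await readFile('./final-dom-snapshot.html', 'utf8');
const withoutScripts = snapshotHtml.replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, '');
```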

Predicting the Benefits of Serving Useful HTML

Let’s look at Twitter first. Running the Mimic Pre-Rendered HTML experiment on Twitter’s Explore page gives us the following initial results.

At first glance, we can see the huge, expected impact of meaningful HTML in the comparison video at the top right, where the page is fully populated with content at 3.4 seconds, down from the original 12 seconds.

Notably, a couple of metrics are slower in the experiment run: start render and First Contentful Paint. But that's only because the control site happens to render its bird image very early, and the experiment doesn't render its real content quite as soon as that bird appears.

A huge improvement! But it’s actually even huge...r

More good news! Just beneath the experiment results, WebPageTest added a note telling us that there were notable initial response time differences between the experiment and the control run. Specifically, the experiment took a little longer to arrive at Time To First Byte. This can happen with any experiment due to common network variance or inconsistent server response times, and sometimes it can highlight server issues worth looking into.

But with the pre-render HTML experiment, the variance is expected because the proxy task itself takes a little time to apply mid-flight, given that it requires making a request for that final HTML.

Delays like this, which occur as a result of our proxy tasks, aren't useful in a comparison, and they likely wouldn't exist in a real implementation of the technique on a live site. For that reason, whenever server timing varies by more than 100ms, WebPageTest offers a link to view the experiment results with each run's first byte time ignored. With that link, we can compare the metrics more fairly, as if the experiment and control had delivered their initial HTML at the same moment.
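
Conceptually, that normalization just measures each run's metrics relative to its own first byte. A quick sketch with hypothetical numbers:

```ts
// TTFB normalization with hypothetical numbers (all values in seconds).
const control    = { ttfb: 0.5, lcp: 13.0 };
const experiment = { ttfb: 1.2, lcp: 4.2 };

// Measure each run's LCP relative to its own first byte, removing
// proxy-induced response-time noise from the comparison.
const controlLcp    = control.lcp - control.ttfb;       // 12.5
const experimentLcp = experiment.lcp - experiment.ttfb; //  3.0
console.log(`Normalized LCP improvement: ${(controlLcp - experimentLcp).toFixed(2)}s`); // "9.50s"
```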

Wow! Now that we've normalized the experiment's response time, we’re looking at a 9.32 second improvement in Largest Contentful Paint for new visits to that page on Chrome/mobile with a 4G connection speed.

Just for fun, here’s that experiment shown head-to-head in a real-time video (Note: this comparison video does not include the TTFB normalization above, so render times appear a little later than they would ideally be).

Here's the same experiment on desktop Chrome as well, which is also dramatic, with an LCP more than 6 seconds earlier.

By now, we've probably done enough to understand the impact of this optimization, but it would be possible to refine the experiment further to eliminate some unhelpful noise. For example, an artifact in this experiment's results comparison shows that the experiment had one additional render-blocking request that was not present in the control. This is peculiar, and likely the result of the final HTML snapshot containing link or script elements that were originally added dynamically (and thus non-blocking), yet appear render-blocking when served as static output. A quick glance at the experiment's request waterfall confirms that a Google account stylesheet is to blame, shown with a render-blocking indicator on row 2:

In a real implementation, that blocking request would not exist in the HTML at load time, so you may choose to refine the experiment further by removing it from the source. For now though, our result is dramatic enough without further noise reduction, which would only make the experiment look even faster. Regardless, this situation is a helpful reminder to take these comparisons with a grain of salt: they're a good prediction of how an optimization would apply, but keep an eye out for artifacts that can sometimes skew the results.

Let's move on!

Experimenting on AirBnB

Running the Mimic Pre-Rendered HTML experiment on AirBnB’s homepage gives us the following results (adjusted for differences in proxy timing, once again).

screenshot of the following page

Another huge improvement, especially since this test was run on desktop with a fast cable connection! By serving useful HTML up-front, we get a nearly 5-second improvement in LCP. It's also interesting to note just how early the images begin to load in this case, due to their presence in the initial HTML. Browsers are designed to scan HTML very early after delivery and fetch resources (images, videos, and other assets) that will be necessary for the initial layout. Having those images in the HTML allows the browser to discover them immediately, rather than much later, as in the control run, after JavaScript generates the HTML that contains the image references.
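
The difference is easy to picture in code. In the client-rendered version, the browser can't see an image URL until scripts have downloaded, parsed, and executed (a simplified sketch; the selector and image URL are hypothetical):

```ts
// Client-rendered: the preload scanner never sees this URL in the delivered
// HTML, so the image fetch can't begin until this script has executed.
const img = document.createElement('img');
img.src = '/photos/listing-1.jpg'; // hypothetical grid image
document.querySelector('.grid')?.appendChild(img);

// Server-rendered equivalent: an <img src="/photos/listing-1.jpg"> tag in the
// delivered HTML is discovered within the first chunks of the response, so the
// fetch can start almost immediately.
```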

How About a Few More Sites?

To spread the love, let’s see the impact of this experiment on a few more JavaScript-reliant sites, shall we?

Here's CNN.com (start render over 8 seconds faster and LCP 13 seconds faster on 4G Chrome mobile):

screenshot of the prior linked page

Here’s Spotify (with an almost 7-second improvement in LCP on Desktop Chrome cable):

screenshot of the prior linked page

Here’s Reddit (5.38s LCP improvement on 4G Mobile Chrome):

screenshot of the prior linked page

Here's Apple.com (a 4.0s improvement in start render and LCP on 4G Chrome mobile):

screenshot of the prior linked page

Here's FedEx.com (start render 4 seconds faster on 4G Chrome mobile):

screenshot of the prior linked page

Here’s one from my favorite local ice cream shop (8.47s faster LCP on 4G Chrome mobile):

screenshot of the prior linked page

So many wins! As this post demonstrates, serving useful HTML up-front is faster than client-side generated HTML, often by a lot. And there are many other reasons useful HTML is better, too! Accessibility is a big one: the moment a site becomes interactive (that is, when a page not only looks usable but actually is usable from a user-input perspective) is often the moment it becomes accessible to assistive technology like screen readers.

Okay... I'm convinced. How do I do it?

Unfortunately, that part's beyond the scope of this post, but there are many great articles out there addressing this question head-on. The work involved in rendering JavaScript content on the server instead of the browser can often be complex and techniques will vary depending on your site's architecture.

Fortunately, more and more JavaScript frameworks are offering solutions for server rendering and some even offer it as a default. Given that this problem is so widespread and frequently discussed, you may just find that others have already solved the problem for your technology stack of choice. So go check the documentation, do some googling for terms like "SSR" and "server rendering", and hopefully you'll be well on your way to improvements.
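
As one illustration (a minimal sketch, not a recommendation of any particular stack), here's roughly what the core of server rendering looks like with React and Express. App is a hypothetical root component, and frameworks like Next.js wrap this same pattern with sensible defaults:

```ts
import express from 'express';
import { createElement } from 'react';
import { renderToString } from 'react-dom/server';
import { App } from './App'; // hypothetical root component

const app = express();

app.get('/', (_req, res) => {
  // Render the component tree to an HTML string on the server...
  const markup = renderToString(createElement(App));
  // ...and deliver it as meaningful HTML, with the client bundle loaded
  // non-blockingly to "hydrate" the markup and attach event handlers.
  res.type('html').send(`<!doctype html>
<html>
  <head><title>Server-rendered</title></head>
  <body>
    <div id="root">${markup}</div>
    <script src="/client.js" defer></script>
  </body>
</html>`);
});

app.listen(3000);
```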

Thanks for reading!

I hope this post makes it clear that serving meaningful HTML can be one of the absolute best things you can do for a site's performance. WebPageTest Experiments are designed to help us understand which changes are worth our effort before we do any of the work, and the “Mimic Pre-Rendered HTML” experiment is a particularly great example of that value.

The more information we have, the more informed decisions we can make!
