Revving Up Browser Speed and Performance

Part one of two of my Velocity NY 2015 review focuses on the various ways industry leaders are attempting to improve browser speed and performance.

When I was an industry analyst covering the application performance management space at 451 Research, one of the broad trends I was following was something that I called “preventive performance management.” It was a new-ish class of tools that moved beyond traditional monitoring and triage of performance issues to apply technologies like analytics, event correlation, and automation to prevent those issues from ever impacting users in the first place.

This was a nascent space and these tools were far from plug-and-play, typically requiring a lot of tuning, mapping, and self-learning before they delivered the promised results. But in a space where innovation was typically tied to managing the performance issues caused by new technology adoption (cloud, dynamic languages, mobile, etc.), and where companies were looking to minimize downtime and response times without over-investing in infrastructure, any technology that could help enterprises stay one step ahead of performance issues seemed promising.

In that same vein, what caught my attention on the first day of sessions at O’Reilly Media’s Velocity conference in New York last week was a series of sessions on how to improve your site’s performance in the development and design phase. There were the standard presentations on optimizing design elements such as images, text, and fonts to make pages load faster. But I was more interested in hearing about some new and as-yet-unheralded technologies that can make web applications and sites load faster.

The first session I attended looked at HTTP/2, the first new version of the network protocol the Web runs on in 18 years. HTTP/2 was published as a spec by the Internet Engineering Task Force (IETF) in May. One of the stated goals of HTTP/2 is to decrease network latency to improve web page load times. The most significant way HTTP/2 does this is by allowing multiple requests to be sent, one after the other, on the same TCP connection, while responses to those requests can be received out of order, eliminating the need for multiple connections between the client and the server. HTTP/2 also compresses HTTP headers, allows the server to push resources to the client before they have been requested, and lets the client indicate to the server which resources are more important than others.
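
To make the multiplexing and server-push ideas concrete, here is a minimal sketch of an HTTP/2 server that pushes a stylesheet alongside the page, written against Node.js’s built-in http2 module. This is my own illustration rather than anything shown at the session, and the certificate paths and asset names are placeholders:

```typescript
import * as http2 from 'http2';
import * as fs from 'fs';

// Browsers only speak HTTP/2 over TLS, so the server needs a certificate
// (the key/cert file names below are placeholders).
const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push the stylesheet before the client asks for it, saving a round
    // trip while reusing the same TCP connection as the page itself.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (!err) {
        pushStream.respondWithFile('style.css', { 'content-type': 'text/css' });
      }
    });
    stream.respondWithFile('index.html', { 'content-type': 'text/html' });
  } else {
    stream.respond({ ':status': 404 });
    stream.end();
  }
});

server.listen(8443);
```

Push isn’t a free lunch, though: pushing a resource the browser already has cached just wastes bandwidth, which is one reason the client-side prioritization hints matter.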

In the presentation, HTTP/2 was compared to its predecessor HTTP/1.1, which the vast majority of the Web runs on today. The presenters ran their own test, hosting the same site locally and serving it over both HTTP/1.1 and HTTP/2. The bottom-line result: a page that took 5 seconds to load via HTTP/1.1 took just 2 seconds to load over HTTP/2, with 100ms of network latency in both cases. As network latency was increased, up to 500ms, the difference flattened out.

One of the advantages of HTTP/2 is that it can push content to the browser, minimizing the number of request cycles the browser makes to the server. As network latency increases, however, so does the time spent on the handshakes the browser must make with the server. And HTTP/2, a more complex protocol, takes longer to complete those handshakes than its predecessor.

Still, there are performance improvements to be had from HTTP/2. Just don’t expect them anytime soon. To date, just over 1.5% of the Web uses the new protocol, according to W3 Technologies. [For a demo comparing page load times using HTTP/2 vs. HTTP/1.1, go to: https://http2.akamai.com/demo]. My colleague Andrew Smirnov, a performance engineer at Catchpoint, will take a deeper look at HTTP/2 in an upcoming post on this blog.

The next session looked at another technology that could speed up Web page loading times: the ServiceWorker client-side proxy, which is present in all major browsers, though Google Chrome claims the most advanced implementation (a Google performance engineer conducted the session).

ServiceWorker is a JavaScript API that enables local caching of Web content. It acts as a proxy server, sitting between the web application, the browser, and the network. ServiceWorker can control the web application it is associated with, including navigation and resource requests, and it caches content in a very granular fashion, giving developers complete control over how their apps behave in certain situations, such as when the network is not available. This degree of control makes ServiceWorker an improvement over previous caching tools like AppCache.

As web pages are rendered, a site that uses ServiceWorker checks the cache for content before it makes a network request. If the content has been cached by ServiceWorker and does not require updating, it’s served from the cache rather than from the network. In this way, ServiceWorker can support rich offline experiences not unlike those of a mobile app, which uses similar technology. It can also support faster page load times while online, not only by caching content, but by pre-fetching and caching new content that’s added to the server.
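
To give a feel for that flow, here is a minimal cache-first worker script of my own (not from the session); the cache name and asset list are illustrative, and it assumes TypeScript’s “webworker” library for the Service Worker types:

```typescript
// sw.ts — a minimal cache-first Service Worker
const CACHE_NAME = 'site-cache-v1';                                // illustrative name
const PRECACHE_URLS = ['/', '/styles/main.css', '/scripts/app.js']; // illustrative assets

// Cast the global scope so TypeScript knows about the install/fetch events.
const sw = self as unknown as ServiceWorkerGlobalScope;

sw.addEventListener('install', (event) => {
  // Pre-cache the core assets while the worker installs.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

sw.addEventListener('fetch', (event) => {
  // Answer from the cache first; fall back to the network if there is no match.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```

The page opts in by registering the worker, typically with navigator.serviceWorker.register('/sw.js'); from then on, the fetch handler above sits in front of every request the page makes and decides whether the cache or the network answers it.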

HTTP/2 and ServiceWorker are two of the newer technologies being used to increase the speed of loading Web pages and to enable richer user experiences, and I expect both to eventually deliver on those promises; however, neither will eliminate the need for monitoring of Web applications. A lot can still go wrong when the browser makes a request to the Web server, whether the problem lies with the browser itself, the HTML code, the local network, a DNS lookup, the nearest Internet backbone, an API request, a third-party tag, or the CDN. And if the problem is on your end, in the hardware, networking devices, databases, or other parts of internal infrastructure, you need to know where to start looking.

At Catchpoint, we use “peeling the onion” as a metaphor for monitoring and managing Web performance because there are many layers to uncover and examine. The onion is constantly moving and growing; and because we offer more test types and longer time-series data than any other vendor on the market, we believe we can peel it better than anyone else.

The stakes are high, as customer experience is the lifeblood of any business. If your Website or application is slow, your customers will go elsewhere. In a Gartner survey from earlier this year, the number one reason enterprises gave for investing in APM tools was to “enhance customer experience quality.” When I was at 451, visibility into the customer’s experience similarly always came back as the number one application performance pain point IT organizations were struggling with.

After the afternoon break, I switched from the “Performance: Browsers” track to the “Metrics & Monitoring” track, to see what was new in the world of monitoring and alerting. I’ll review my findings in my next post. Stay tuned!
