
Revisiting the Anatomy of HTTP: Part I

In this article, we discuss in detail the key composite metrics of an HTTP request, such as Time to First Byte (TTFB), Response, Server Response, and others.

One of the key driving factors behind the various web/mobile performance initiatives is the fact that end users’ tolerance for latency has nose-dived. Several published studies have demonstrated that poor performance routinely impacts the bottom line, e.g., the number of users, the number of transactions, and so on. Example studies include this, this, and this. There are several sources of performance bottlenecks, including, but not limited to:

  • Large numbers of redirects
  • Increasing use of images and/or videos without supporting optimizations such as compression and form factor aware content delivery
  • Increasing use of JavaScript
  • Performance tax of using SSL (as discussed here and here)
  • Increasing use of third-party services which, in most cases, become the longest pole from a performance standpoint

Over five years ago, we wrote a blog post on the anatomy of HTTP. In the Internet era, five years is a long time. There has been a sea change across the board, and hence we thought it was time to revisit the subject for the benefit of the community as a whole. The figure below shows the key components of an HTTP request.

[Figure: Key components of an HTTP(S) request]

Besides the independent components, key composite metrics are also annotated in the figure and are defined below.

Time to First Byte (TTFB): The time it took from the request being issued to receiving the first byte of data from the primary URL for the test(s). This is calculated as DNS + Connect + Send + Wait. For tests where the primary URL has a redirect chain, TTFB is calculated as the sum of the TTFB for each domain in the redirect chain.

Response: The time it took from the request being issued to the primary host server responding with the last byte of the primary URL of the test(s). For tests where the primary URL has a redirect chain, Response is calculated as the sum of the response time for each domain in the redirect chain.

Server Response: The time it took from when DNS was resolved to the server responding with the last byte of the primary URL of the test(s). This shows the server’s response exclusive of DNS times. For tests where the primary URL has a redirect chain, Server Response is calculated as the response time for each domain in the redirect chain minus the DNS time.
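To make the relationship between the component timings and these composite metrics concrete, the following minimal Python sketch computes TTFB, Response, and Server Response for a primary URL with a redirect chain. The field names and timing values are illustrative, not Catchpoint’s data model, and we assume the content download time is captured in a “load” component.

```python
# Minimal sketch: composite metrics from per-request component timings (in ms).
# Field names and values are illustrative; they do not reflect Catchpoint's data model.
redirect_chain = [
    {"dns": 35, "connect": 48, "send": 1, "wait": 120, "load": 15},   # e.g., http://example.com -> 301
    {"dns": 20, "connect": 45, "send": 1, "wait": 180, "load": 310},  # e.g., https://www.example.com -> 200
]

def ttfb(req):
    # TTFB = DNS + Connect + Send + Wait
    return req["dns"] + req["connect"] + req["send"] + req["wait"]

def response(req):
    # Response = request issued -> last byte, i.e., TTFB plus content download ("load")
    return ttfb(req) + req["load"]

def server_response(req):
    # Server Response = Response exclusive of the DNS time
    return response(req) - req["dns"]

# With a redirect chain, each composite metric is summed over the requests in the chain.
print("TTFB:           ", sum(ttfb(r) for r in redirect_chain), "ms")
print("Response:       ", sum(response(r) for r in redirect_chain), "ms")
print("Server Response:", sum(server_response(r) for r in redirect_chain), "ms")
```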

A plot from an example test illustrating the aforementioned metrics is shown below:

[Figure: Delta Airlines, Catchpoint web performance]

Anomalies in any metric can be viewed and detected via Catchpoint’s portal in a very straightforward fashion. An example illustrating an anomaly in Wait time (TWait) is shown below:

[Figure: Delta Airlines, Catchpoint server response]

Other composite metrics of interest include:

Render Start: The time it took the browser to start rendering the page.

Document Complete: Indicates that the browser has finished rendering the page. In Chrome it is equivalent to the browser onload event. In IE it occurs just before onload is fired and is triggered when the document readyState changes to “complete.”

Webpage Response: The time it took from the request being issued to receiving the last byte of the final element on the page.

  • For Web tests, the agent waits for up to two seconds of network inactivity after Document Complete before ending the test
  • Webpage Response is impacted by script verbs for Transaction tests
  • For Object monitor tests, this value is equivalent to Response

Client Time: The time spent executing JavaScript and CSS in the browser. Equal to Webpage Response minus Wire Time.

Wire Time: The time the agent spent loading network requests. Equal to Webpage Response minus Client Time.

Content Load: The time it took to load the entire content of the webpage after the connection was established with the server for the primary URL of the test(s). This is the time elapsed from the end of Send until the final element, or object, on the page is loaded. Content Load does not include the DNS, Connect, Send, and SSL time on the primary URL (or any redirects of the primary URL). For Object monitor tests, this value is equivalent to Load.
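The page-level metrics above are related by simple arithmetic. The minimal sketch below, with purely illustrative values, shows Client Time as the remainder of Webpage Response after Wire Time, and Content Load as Webpage Response minus the primary URL’s setup components (the latter formula is our reading of the definition above):

```python
# Minimal sketch relating the page-level composite metrics.
# All values are illustrative milliseconds, not taken from a real test.
webpage_response = 4200   # request issued -> last byte of the final element on the page
wire_time = 3100          # time the agent spent loading network requests

# Client Time: time spent executing JavaScript and CSS in the browser.
client_time = webpage_response - wire_time

# Content Load: elapsed time from the end of Send until the final element is loaded,
# i.e., Webpage Response minus the primary URL's DNS, Connect, SSL, and Send components
# (per our reading of the definition above).
primary = {"dns": 35, "connect": 48, "ssl": 110, "send": 1}
content_load = webpage_response - sum(primary.values())

print(f"Client Time:  {client_time} ms")   # 1100 ms
print(f"Content Load: {content_load} ms")  # 4006 ms
```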

In the context of mobile performance, a key metric that is very often the target of optimization is Above-The-Fold (ATF) time. The importance of ATF stems from the fact that while the user is interpreting the first page of content, the rest of the page can be delivered progressively in the background. A threshold of one second is often used as the target for ATF. Practically, after subtracting the network latency, the performance budget is about 400 milliseconds for the following: the server must render the response, client-side application code must execute, and the browser must lay out and render the content. For recommendations on how to optimize mobile websites, the reader is referred to this and [this](http://storage.googleapis.com/io-2013/presentations/239-%20Instant%20Mobile%20Websites-%20Techniques%20and%20Best%20Practices.pdf).
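As a back-of-the-envelope illustration of the one-second ATF budget described above, the sketch below subtracts an assumed mobile network overhead (one round trip each for DNS, the TCP handshake, and the HTTP request) and shows roughly 400 milliseconds remaining; the 200 ms round-trip time is an assumption, and a TLS handshake would consume additional round trips.

```python
# Back-of-the-envelope sketch of the one-second Above-The-Fold budget.
# The 200 ms round-trip time is an assumed mobile latency, not a measurement.
atf_budget_ms = 1000
rtt_ms = 200

dns_lookup_ms    = rtt_ms   # 1 RTT to resolve the host name
tcp_handshake_ms = rtt_ms   # 1 RTT for SYN / SYN-ACK
http_request_ms  = rtt_ms   # 1 RTT for the request/response round trip
network_overhead_ms = dns_lookup_ms + tcp_handshake_ms + http_request_ms  # 600 ms

# What is left must cover server-side rendering, client-side code execution,
# and browser layout/paint. A TLS handshake would eat further into this budget.
remaining_ms = atf_budget_ms - network_overhead_ms
print(f"Remaining budget: {remaining_ms} ms")  # ~400 ms
```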

A key difference between the figure above and the corresponding figure in the previous blog is the presence of the TSSL component. Unencrypted communication – via HTTP and other protocols – is susceptible to interception, manipulation, and impersonation, and can potentially reveal users’ credentials, history, identity, and other sensitive information. In the post-Snowden era, privacy has been in the limelight. Leading companies such as Google, Microsoft, Apple, and Yahoo have embraced HTTPS for most of their services and have also encrypted the traffic between their data centers. Delivering data over HTTPS has the following benefits:

  • HTTPS protects the integrity of the website by preventing intruders from tampering with exchanged data, e.g., rewriting content, injecting unwanted and malicious content, and so on.
  • HTTPS protects the privacy and security of the user by preventing intruders from listening in on the exchanged data. Each unencrypted request can potentially reveal sensitive information about the user, and when such data is aggregated across many sessions, can be used to de-anonymize identities and reveal other sensitive information. All browsing activity, as far as the user is concerned, should be considered private and sensitive.
  • HTTPS enables powerful features on the web, such as accessing users’ geolocation, taking pictures, recording video, and enabling offline app experiences. Such features require explicit user opt-in, which, in turn, requires HTTPS.

When the SSL protocol was standardized by the IETF, it was renamed to Transport Layer Security (TLS). TLS was designed to operate on top of a reliable transport protocol such as TCP. The TLS protocol is designed to provide the following three essential services to all applications running above it:

  • Encryption: A mechanism to obfuscate what is sent from one host to another
  • Authentication: A mechanism to verify the validity of provided identification material
  • Integrity: A mechanism to detect message tampering and forgery

Technically, one is not required to use all three in every situation. For instance, one may decide to accept a certificate without validating its authenticity; having said that, one should be well aware of the security risks and implications of doing so. In practice, a secure web application will leverage all three services.

In order to establish a cryptographically secure data channel, both the sender and receiver of a connection must agree on which ciphersuites will be used and the keys used to encrypt the data. The TLS protocol specifies a well-defined handshake sequence (illustrated below) to perform this exchange.

[Figure: TLS handshake. The figure assumes an (optimistic) 28 millisecond one-way “light in fiber” delay between New York and London.]

As part of the TLS handshake, the protocol allows both the sender and the receiver to authenticate their identities. When used in the browser, this authentication mechanism allows the client to verify that the server is who it claims to be (e.g., a payment website) and not someone simply pretending to be the destination by spoofing its name or IP address. Likewise, the server can also optionally verify the identity of the client — e.g., a company proxy server can authenticate all employees, each of whom could have their own unique certificate signed by the company. Finally, the TLS protocol also provides its own message framing mechanism and signs each message with a message authentication code (MAC). The MAC algorithm is a one-way cryptographic hash function (effectively a checksum), the keys to which are negotiated by both the sender and the receiver. Whenever a TLS record is sent, a MAC value is generated and appended for that message, and the receiver is then able to compute and verify the sent MAC value to ensure message integrity and authenticity.
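To observe these services in practice, the short Python sketch below performs a TLS handshake, verifies the server’s certificate against the system trust store, and prints the negotiated protocol version and ciphersuite (which also determines the MAC or AEAD algorithm used for record integrity). The host name is an arbitrary placeholder.

```python
import socket
import ssl

# Minimal sketch: perform a TLS handshake and inspect the negotiated parameters.
# "www.example.com" is an arbitrary placeholder host.
host = "www.example.com"

context = ssl.create_default_context()  # enables certificate and host-name verification
with socket.create_connection((host, 443), timeout=10) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=host) as tls_sock:
        print("Protocol:   ", tls_sock.version())   # e.g., "TLSv1.3"
        print("Ciphersuite:", tls_sock.cipher())    # (name, protocol, secret bits)
        # The certificate returned here has already been validated by the handshake.
        subject = dict(entry[0] for entry in tls_sock.getpeercert()["subject"])
        print("Issued to:  ", subject.get("commonName"))
```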

From the figure above we note that TLS connections require two roundtrips for a “full handshake” and thus have an adverse impact on performance. The plot below illustrates the comparative Avg TSSL (the data was obtained via Catchpoint) for the major airlines:

[Figure: Average TSSL for major airlines, measured via Catchpoint]

However, in practice, optimized deployments can do much better and deliver a consistent 1-RTT TLS handshake. In particular:

  • False Start – a TLS protocol extension – can be used to allow the client and server to start transmitting encrypted application data when the handshake is only partially complete, i.e., once the ChangeCipherSpec and Finished messages are sent, but without waiting for the other side to do the same. This optimization reduces handshake overhead for new TLS connections to one roundtrip.
  • If the client has previously communicated with the server, then an “abbreviated handshake” can be used, which requires one roundtrip and also allows the client and server to reduce the CPU overhead by reusing the previously negotiated parameters for the secure session (see the sketch after this list).
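As a rough illustration of the abbreviated handshake, the Python sketch below connects to the same host twice and passes the first connection’s TLS session to the second one; `session_reused` then indicates whether resumption actually occurred. The host name is an arbitrary placeholder, and whether resumption succeeds depends on the server and the negotiated TLS version.

```python
import socket
import ssl

# Minimal sketch of TLS session resumption (the "abbreviated handshake").
# "www.example.com" is an arbitrary placeholder; resumption depends on the server.
host = "www.example.com"
context = ssl.create_default_context()

def tls_connect(session=None):
    tcp_sock = socket.create_connection((host, 443), timeout=10)
    return context.wrap_socket(tcp_sock, server_hostname=host, session=session)

first = tls_connect()            # full handshake; negotiates a session ticket/ID
saved_session = first.session
first.close()

second = tls_connect(session=saved_session)   # attempt an abbreviated handshake
print("Session reused on second connection:", second.session_reused)
second.close()
```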

The combination of both of the above optimizations allows us to deliver a consistent 1-RTT TLS handshake for new and returning visitors and facilitates computational savings for sessions that can be resumed based on previously negotiated session parameters. Other ways to minimize the performance impact of HTTPS include:

  • Use of HTTP Strict Transport Security (HSTS), which restricts web browsers to accessing web servers solely over HTTPS. This mitigates the performance impact by eliminating unnecessary HTTP-to-HTTPS redirects; that responsibility is shifted to the client, which automatically rewrites all links to HTTPS (see the sketch after this list).
  • Early termination (e.g., terminating TLS at an edge server closer to the user), which helps minimize the latency due to the TLS handshake.
  • Use of compression algorithms such as HPACK (for HTTP/2 headers) and Brotli (for response bodies).
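As an example of the first item, the sketch below attaches a Strict-Transport-Security header to every response using Python’s standard-library HTTP server; the max-age value is illustrative, and in production the header would typically be set by the web server or CDN, and only on HTTPS responses.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch: serve an HSTS header on every response.
# In production this header is usually set by the web server/CDN and only over HTTPS;
# the max-age value below (one year) is illustrative.
class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Tell the browser to use HTTPS for this origin (and its subdomains),
        # eliminating HTTP-to-HTTPS redirects on subsequent visits.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"Hello over (assumed) HTTPS\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()
```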

For further discussion on SSL/TLS, the reader is referred to the paper titled, “Anatomy and Performance of SSL Processing” by Zhao et al. and the paper titled, “Analysis and Comparison of Several Algorithms in SSL/TLS Handshake Protocol” by Qing and Yaping.

By: Arun Kejariwal and Mehdi Daoudi
