The Key to Cache: An Intro to Varnish

We’ve published several articles on the topic of caching and its role in optimizing web performance in the past. In this article, we will focus specifically on varnish, which is an HTTP accelerator otherwise known as caching HTTP reverse proxy.

What are HTTP Accelerators?

HTTP accelerators use techniques such as caching, prefetching, compression, and TCP acceleration to reduce the number of requests being served by the web server.

Accelerators are primarily of two types:

  • Web client accelerators
  • Web server accelerators

Varnish is an example of a web server accelerator: it serves as a reverse proxy installed in front of web/application servers. By caching the responses returned from the web server, it reduces the number of requests the web/application server has to handle, cutting bandwidth usage and lowering the server's load.

Varnish, when installed in front of a web server, receives the requests made by the client and attempts to respond to these requests from its cache (varnish cache).

If varnish is unable to respond to the query from its cache, it forwards the request to the backend, receives the response from the backend, stores it in its cache and then delivers it to the client who made the request.

The diagram above explains the process of caching using varnish in a simple manner. The client (a browser, in this case) makes HTTP requests assuming it is communicating with the web server.

The HTTP request is received by Varnish, which is installed right in front of the web server. Assuming it is the first time the resource has been requested, Varnish forwards the request to the web server (Apache or Nginx) and caches the response it receives, so that it can answer the same request later without reaching out to the web server.

One very important point: Varnish caches and accelerates web pages without requiring any changes to your code or backend.

Version 1.0 of Varnish was released in 2006 and it's come a long way since then; websites like Stack Overflow, Drupal, Wikipedia, Reddit, Facebook, Twitter, and twitch.tv currently use it.

VCL: Varnish Configuration Language

One of Varnish's most widely used and most powerful features is customization. We all love customizations, right? Be it our homes, cars, motorbikes, clothes, or even the tools we use. Varnish supports customization through a powerful configuration language called VCL, which controls the behavior of the cache and lets you decide how requests are cached (or not cached) by Varnish.
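As a rough sketch of what VCL customization looks like, the fragment below (VCL 4.0 syntax; the /admin path and the asset extensions are illustrative assumptions, and in practice this would be loaded alongside a backend definition) bypasses the cache for an admin area and strips cookies from static assets so they become cacheable:

```vcl
vcl 4.0;

sub vcl_recv {
    # Never cache the (hypothetical) admin area
    if (req.url ~ "^/admin") {
        return (pass);
    }

    # Strip cookies from static assets so Varnish can cache them
    if (req.url ~ "\.(css|js|png|jpg|gif)$") {
        unset req.http.Cookie;
    }
}
```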

Varnish and HTTP

Since Varnish is an HTTP Accelerator, it is very important to understand how Varnish works with HTTP. Let’s have a look at the points below:

  • Varnish will forward any HTTP request using a method other than GET or HEAD to the backend server and will not cache the response. This means that an HTTP POST request, for example, will be forwarded to the backend by Varnish and the returned response will not be cached.
  • Any HTTP request that includes an Authorization header or a Cookie header will not be served from cache by Varnish; it will be forwarded to the backend. Any HTTP response that includes a Set-Cookie header is also not cached. This is primarily because cookies and authorization headers are specific to individual users, and it is not advisable to cache content that differs from user to user.
  • The Expires and Cache-Control headers specify how long a resource can be cached. Varnish respects both of these headers, but a time-to-live (TTL) value can also be specified using VCL, and a TTL defined in VCL takes priority over the Cache-Control and Expires headers.
  • Varnish works with and supports the ETag and Last-Modified response headers.
  • Varnish supports and works with the Vary header, which HTTP uses to perform cache variations.

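To illustrate the TTL-over-headers behavior described above, a minimal sketch (VCL 4.0 syntax; the 5-minute value is an arbitrary assumption) forces a TTL regardless of what the backend sends in Cache-Control or Expires:

```vcl
sub vcl_backend_response {
    # Override whatever Cache-Control/Expires the backend returned:
    # cache this object for 5 minutes
    set beresp.ttl = 5m;
}
```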
You can read more about the HTTP headers mentioned above in this article.

Understanding and Fixing Varnish Errors

Now, let’s look at some common Varnish errors – 503s and what they mean.

1. Error 503 Backend Fetch Failed

By default, Varnish limits the length of a response header to 8192 bytes (the http_resp_hdr_len parameter). If you are using cache tags (for example, with a Content Management System like Drupal, Magento, or WordPress) that exceed this default, you may end up being greeted with the 503 Backend fetch failed error.

To fix such errors, you have to increase the value of the “http_resp_hdr_len” parameter in your Varnish Configuration File.

Please note that if the value of the “http_resp_hdr_len” parameter exceeds 32768 bytes, you will also have to increase the default response size using the “http_resp_size” parameter.
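Assuming you start varnishd yourself, both parameters can be raised on the command line, or changed at runtime through varnishadm (the values below are illustrative, not recommendations):

```shell
# At startup: raise the per-header limit and the total response-header size
varnishd -a :80 -b 127.0.0.1:8080 \
    -p http_resp_hdr_len=65536 \
    -p http_resp_size=98304

# Or on a running instance
varnishadm param.set http_resp_hdr_len 65536
varnishadm param.set http_resp_size 98304
```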

2. Error 503 All backends failed or unhealthy/Backend is unhealthy

This error generally occurs when there is a problem with a backend/origin server, or when multiple backends are queried for information but all of them fail to provide it.

Some of the common reasons you may end up seeing the error:

  • The backend servers took too long to respond to a request.
  • Intermittent network issues between Varnish & the backend servers

Checks:

  • Ensure that the backend (origin) is configured correctly.
  • Ensure that the Web Server which you are using (Apache/Nginx) is working as expected before installing Varnish.
  • Go through your Web Server’s configuration.

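One concrete way to catch an unhealthy backend early is to attach a health probe to it, so Varnish marks the backend sick before client requests start failing. A sketch (VCL 4.0 syntax; the probe URL and thresholds are assumptions to adapt to your setup):

```vcl
probe healthcheck {
    .url = "/";        # Path the probe requests; assumes "/" returns 200 OK
    .timeout = 1s;     # Each probe must answer within 1 second
    .interval = 5s;    # Probe every 5 seconds
    .window = 5;       # Consider the last 5 probes...
    .threshold = 3;    # ...of which at least 3 must succeed to stay healthy
}

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .probe = healthcheck;
}
```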
3. Error 503 First Byte Timeout

The First Byte timeout error in Varnish simply means that Varnish did not receive an expected response (including errors) from the backend within the specified timeout limit. Varnish comes packed with a lot of default settings for most of its parameters; the value of these may be changed as per your requirements.

Backend timeouts in Varnish are of multiple types:

  • connect_timeout: Specifies how long to wait when establishing a TCP Connection with the backend. The default value is 3.5 seconds or 3500 ms.
  • first_byte_timeout: This parameter is used to define the time for which Varnish would wait for the backend to process and respond. The default value for first_byte_timeout is 60 seconds or 60000 ms.
  • between_bytes_timeout: The between_bytes_timeout parameter defines how long to wait between two successive reads on the backend connection.

One simple way of handling timeout-related issues is to increase the timeouts (specified in seconds) by overriding the defaults in your VCL file (user.vcl, for example).

Sample:

backend default {
    .host = "127.0.0.1";
    .port = "8080";
    .connect_timeout = 2s;        # Wait a maximum of 2 seconds to establish the TCP connection (Apache, Nginx, etc.)
    .first_byte_timeout = 2s;     # Wait a maximum of 2 seconds to receive the first byte from the backend
    .between_bytes_timeout = 2s;  # Wait a maximum of 2 seconds between bytes received
}

Please note that in some cases you may still end up seeing 503 timeout errors, even after increasing the timeout thresholds. This is mostly seen on Web Servers running Apache (Varnish and Apache running on the same server) where the “KeepAlive” setting needs to be turned off.
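If you are in that situation, the change is a single directive in the Apache configuration (the file path varies by distribution; /etc/apache2/apache2.conf is an example, not a given):

```apache
# e.g. in /etc/apache2/apache2.conf or httpd.conf
KeepAlive Off
```

After changing it, reload Apache for the setting to take effect.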

Varnish and CDNs

When talking about CDNs, Akamai and Fastly are two names you cannot ignore, and the same is true when you are talking about Varnish and CDNs. One of the main reasons CDNs came into the picture was to get content as close to the end user as possible. Though CDNs today are not limited to caching, getting content closer to the user remains an integral requirement for many businesses when they opt for a CDN.

CDNs make websites fast. Varnish is an HTTP accelerator. Both are powerful tools that can speed up a website, so imagine the result when the two work together. In simple terms, most CDNs work with Varnish the same way they work with origin servers: if the origin serves assets from the Varnish cache to a CDN, the CDN treats Varnish just like any other origin and caches those assets.

Fastly uses a customized version of Varnish optimized for large-scale deployments. You can read more about how Akamai and Fastly work with Varnish here.

Catchpoint and Varnish

Catchpoint not only lets you see the impact Varnish has on the load time of your website, but it also allows you to capture important metrics from the HTTP response headers, such as Cache Hit, Cache Miss, Via, or any other custom header you may be passing. You can chart the data over time to compare performance and examine trends that reveal actionable insights.

Some of the key metrics which you can monitor using Catchpoint are:

  1. Cache Hit
  2. Cache Miss
  3. Cache Hit/Cache Miss (Broken down by Resource type)
  4. Cache Hit/Miss for Images
  5. Cache Hit/Miss for Scripts
  6. Cache Hit/Miss for CSS files

Here are some visualizations charted using Catchpoint (using the Insights feature).

1. Comparative performance analysis of Response and Webpage Response times for Cache-Hit vs Cache-Miss

2. Location comparative performance of Cache-Hit vs. Cache-Miss

3. Capturing & Charting X-Varnish HTTP Header (Custom Metric using Catchpoint)

Note: The X-Varnish HTTP header allows us to find the correct log entries for a request. For a cache hit, X-Varnish contains both the ID of the current request and the ID of the request that populated the cache; in this scatterplot, you can see the two values separated by a comma. The X-Varnish header makes debugging much easier.
