How to Improve Your Time to First Byte (TTFB)

Hero image for How to Improve Your Time to First Byte (TTFB). Image by Frankie Lopez.

In a previous post I talked about just how crucial a low Time To First Byte (or TTFB) is when you're trying to build a site that ranks well and provides a good user experience for visitors. In this post, I'll look at how you can improve this, and discuss a couple of the best ways I've found to bring your TTFB down.
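Before trying to improve your TTFB, it helps to be able to measure it. As an illustrative sketch (not part of any particular tool, and deliberately minimal), here's a small Python snippet that times the gap between sending a request and receiving the first byte of the response:

```python
import socket
import time

def measure_ttfb(host, path="/", port=80, timeout=10):
    """Measure Time To First Byte: seconds from sending the request
    until the first byte of the response arrives."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        request = (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n"
        ).encode()
        start = time.perf_counter()
        sock.sendall(request)
        sock.recv(1)  # block until the first response byte lands
        return time.perf_counter() - start

# e.g. print(f"TTFB: {measure_ttfb('example.com'):.3f}s")
```

In practice you'd more likely reach for your browser's DevTools (the "Waiting (TTFB)" segment in the Network panel) or a tool like WebPageTest, but the principle is the same: the clock runs from request sent to first byte received.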


How Do You Improve It?

To start with, there are two main areas to focus on in tackling your TTFB. First off, you need to look at the hardware that your website is hosted on: the brawn of your server. Second, you can make some substantial improvements by working on the brains of your website to improve its response speed.

Things like how optimised and well-written the code is can certainly make a difference, but you will also be able to make huge improvements by thinking about fundamental aspects of your website and its architecture: what software is the server running? What database system are you using? How is that data laid out across the database (or databases)?

The Brawn.

Let's start off with the hardware. A great guitarist needs the loudest amp, the best race drivers drive the fastest cars, and the most successful websites sit on the most powerful servers. You'll notice that my analogies here aren't strictly true (my guitar amp only goes up to 10½); hopefully the point isn't lost as I explain how this is only a partial (albeit very large) influence...

Fundamentally, your server needs to be powerful enough to stay up and running when your site is getting traffic. If you don't have the CPU or RAM to stay afloat, you'll start seeing issues such as slow loading (and slow Time To First Byte), or, even worse, your site may go down if the traffic spike is significant or prolonged enough.

When it comes to this type of issue, you have a few choices. Realistically, if you find your server isn't meeting your website's needs, then the best thing to do is upgrade it and invest in your hardware. Many hosts now offer dynamic scaling for your server, where the virtual machine your website sits upon can increase resources (specifically CPU, RAM, or storage) automatically as the website requires. This means you can save money by paying for a lower-spec server that has some surge protection in place, allowing the server to increase in RAM and CPU power when it needs to.

In honesty, this is something I would generally avoid if possible. From experience, the scaling never quite kicks in as quickly as you would hope, leaving the website floundering in a kind of in-between state all the while you continue to lose visitors, money, and the trust of potential customers. Aside from that, I also find that dynamic scaling for relatively short periods of time tends to be significantly more expensive than simply provisioning a higher-capacity machine from the start.

One thing we have seen a lot of over the last year, especially with online groceries when the country went into various lockdowns and people were forced online for their weekly shops, was queuing systems. These balance the load on the site by forcing additional visitors to queue virtually, rather than risking the entire site being compromised for everybody.

It is often a surprise to see a larger brand promote a marketing push only to greet visitors with this outdated-feeling solution. However, it prevents your server from going down through sheer overload, and keeps costs down by reducing hardware and hosting requirements. Obviously, though, it does prevent users from immediately reaching your site and may drive them away. Not ideal, but definitely friendly on the wallet. This sort of thing tends to work best if you are selling something hugely in demand: lockdown grocery delivery slots, the latest PlayStation, or (attempting to) stream the latest from Glastonbury.

The best solution here is to invest in your hardware and in your website, paying for the solution that actually caters to your needs and those of your users, rather than trying to skirt around having to pay up.

The Brains.

Now, moving on to solutions that we as developers have more control over, and which don't hit the wallet quite as hard... There are two things to discuss here: the architecture and infrastructure side of things, and the code side of things.

Photograph of black and white sewing pins appearing like a distributed network diagram by Munro Studio on Unsplash.

The Infrastructure.

I have talked at great length about static site generators in the past, which entirely remove this particular concern, but the fact is that there are a lot of database systems out there, and a lot of website use-cases where dynamic, on-the-fly data is required, and perhaps where a hybrid solution using hydration isn't suitable.

Traditionally, chances were high that you would be running with MySQL on a LAMP stack, but you might well be using MongoDB, Amazon RDS, Postgres, or any number of other software and systems to handle your data.

Some of these will be more suited to your site than others; every one of these systems has its pros and its cons. Generally speaking, MySQL is the default go-to database solution, and with good reason: it powers just over 54% of the internet, according to Slintel. It's relatively easy to work with, it's reliable enough, fast enough, and it handles surprisingly large amounts of data surprisingly well.

However, if you're working with huge amounts of data across multiple tables and databases, you might need to reconsider not just the system you're running on, but also the way you're laying your data out. The issue here is a simple case of scale: the bigger the database, and the bigger the set of data that needs to be traversed, the longer the process takes, and the slower your website responds. Having relational databases can help to reduce the load on each call by cutting down the size of the database being queried and searched through.
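Indexing is the other half of this story: the right index can turn a full scan of a large table into a quick lookup. As a hedged sketch (the `orders` table and column names here are purely illustrative), SQLite's `EXPLAIN QUERY PLAN` shows the difference directly:

```python
import sqlite3

# Illustrative only: a throwaway in-memory table to compare
# the query plan before and after adding an index.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(10000)],
)

def query_plan(sql):
    # The fourth column of EXPLAIN QUERY PLAN output describes the step.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

sql = "SELECT total FROM orders WHERE customer_id = 42"
plan_before = query_plan(sql)   # expect a full table scan

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = query_plan(sql)    # expect a search using the new index

print(plan_before)
print(plan_after)
```

The same principle applies to MySQL, Postgres, and friends (via their own `EXPLAIN`), and the effect compounds: the larger the table, the more a scan costs and the more an index saves.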

Another very popular approach to resolving database slowdown is getting a CDN up and running. CDNs can help by making sure your website is delivered streamlined and minified from the closest viable server, which will really help to cut down on the amount of time it takes for the first byte of your website to reach a user. Some CDNs also offer their own caching, which will help to cut it down even further. The caveat here, of course, is that CDNs generally only host static data, so they will effectively cache the results of your queries, which can quickly put your website out-of-date.
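You steer that CDN caching from your own server, via the `Cache-Control` response header. As a sketch (the specific values here are assumptions, not a universal recipe), a short `max-age` plus `stale-while-revalidate` lets a CDN serve a slightly old copy instantly while it refetches a fresh one in the background:

```python
def cache_headers(content_type, *, max_age=60, swr=300):
    """Illustrative response headers letting a CDN cache a page briefly.
    max_age: seconds the cached copy is considered fresh.
    swr: extra seconds a stale copy may be served while revalidating."""
    return {
        "Content-Type": content_type,
        "Cache-Control": f"public, max-age={max_age}, stale-while-revalidate={swr}",
    }

headers = cache_headers("text/html; charset=utf-8")
print(headers["Cache-Control"])
```

Tuning `max_age` down (or using cache purging, where your CDN supports it) is how you keep the out-of-date problem mentioned above within acceptable bounds.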

Photograph of a person holding a magnifying glass against a computer screen of graphs and code by Sajad Nori on Unsplash.

The Code.

So, on to the final section: the actual flesh and bones of your website.

Time To First Byte can be heavily influenced by the way your website is built, and just how much is going on when a user visits it. I'm using a generic WordPress website as an example here to explore how the TTFB can be affected and improved. Whilst it's not a tech stack I tend to work in, WordPress accounts for around 40% of all websites online and there is a lot of scope for improvement for a generic WordPress site (!), so there is value in exploring it...

First things first: plugins. The more plugins you have installed into your WordPress site, the more queries those plugins have to make when a browser request is received, with each query delaying the delivery of the first byte to a visitor. WordPress does have a bit of a reputation for being slow to load, and part of that is undoubtedly due to the sheer number of plugins that some sites use, checking against the database on every page load before the site can even begin rendering on screens.

Next up, caching. Caching is a huge deal for TTFB, as loading from cache (whether it's browser cache or server cache) can slash the time it takes your site to get onto users' screens, because the server doesn't need to query, compile and then begin to deliver information to your users. I've mentioned this above: introducing a caching layer via a CDN will almost always produce absolutely massive performance increases on even the simplest of sites, assuming you don't need too much to-the-moment dynamic content!

Thirdly, can you optimise your site to reduce the number of HTTP requests? This is particularly relevant when it comes to WordPress sites with multiple plugins (each inevitably importing its own JavaScript and CSS files), but it's a general rule of thumb, true of any website and any tech stack: the fewer requests you make, the faster a website will load. Backlink.io found that on mobile, HTTP requests are an absolute killer for TTFB. If you can, you should be aiming to make as few requests (and to as few different domains) as possible.


Wrapping up

Overall, TTFB is an issue that can seem complicated at first, but it is largely fixed by investing in your hardware. There are definitely changes you can make for free when it comes to optimising your code, and you may even be able to make some changes to the infrastructure of your site without incurring a cost (although this would take significantly more effort), but at the end of the day, the issue is most likely going to lie with your server performance.

Perhaps unsurprisingly: faster servers, sitting on faster networks, mean faster websites.


Categories:

  1. Google
  2. Performance
  3. Search Engine Optimisation