What is a Timing Attack? Side Channels, Examples, and Safer Comparisons

Hero image for What is a Timing Attack? Side Channels, Examples, and Safer Comparisons. Image by Murray Campbell.

Timing attacks sound like something that belongs in a cryptography lecture until you see one reduced to an ordinary string comparison.

A developer sends an API key. The server compares it with the real key. If the first character is wrong, the comparison returns quickly. If the first ten characters are right and the eleventh is wrong, the comparison takes a little longer. That difference may be tiny, but it is still information.

That is the shape of a timing attack. The attacker does not need the application to print the secret. They look at how long the application takes to reject their guesses and use those timings as a side channel. The response body says "no". The clock quietly says "closer".

That is why the example in this Reddit discussion about timing attacks lands with so many web developers. It takes an abstract security idea and shows it hiding in code that people recognise.


A Timing Attack is a Side‑Channel Attack

A side-channel attack leaks information through something around the result rather than through the result itself. Time is one of those channels. Power use, cache behaviour, memory access, error shape, and response size can be side channels too, but timing is the one web developers most often meet.

CWE-208, "Observable Timing Discrepancy", describes the weakness as two operations taking different amounts of observable time in a way that reveals security-relevant information. That is the important phrase: security-relevant information. A slow page is not automatically a vulnerability. A slow path that tells an attacker whether a username exists, whether a signature prefix is correct, or whether a private-key operation took a particular branch is a different matter.


The Simple String‑Comparison Version

The beginner-friendly example is a secret comparison:

const isValidApiKey = (suppliedKey: string): boolean =>
  suppliedKey === process.env.SECRET_API_KEY;

That looks innocent, but ordinary equality checks are not designed as security primitives. A language, engine, or library is free to stop once it knows two strings are different. That is good for speed. It is bad when the difference between "failed immediately" and "failed after checking more bytes" helps an attacker learn about a secret.

The attack model is not "one request gives away the key". It is a repeated measurement. The attacker sends many candidates, records response times, controls for noise as much as possible, and looks for candidates that consistently take longer. If each correct byte causes a little more work before rejection, the search space can shrink from "guess the whole secret" to "learn it piece by piece".

In a real web application, network jitter, scheduling noise, caches, queues, rate limits, and load balancers all make that harder. They do not make the bug disappear. They just make the signal noisier.
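To make the "learn it piece by piece" idea concrete, here is a hypothetical sketch of an early-exit comparison instrumented to count how many characters it inspects before rejecting a guess. The counter stands in for elapsed time, which is noisy to measure directly; the secret and guesses are made up for the illustration.

```typescript
// Instrumented early-exit comparison: returns how many characters it
// inspected before deciding. Real equality checks behave similarly in
// time rather than in a visible counter.
const comparisons = (guess: string, secret: string): number => {
  let inspected = 0;
  const len = Math.min(guess.length, secret.length);
  for (let i = 0; i < len; i++) {
    inspected++;
    if (guess[i] !== secret[i]) return inspected; // early exit leaks position
  }
  return inspected;
};

const secret = 'hunter2-secret';

// A wrong first character is rejected after inspecting one character...
console.log(comparisons('xunter2-secret', secret)); // 1

// ...while a longer correct prefix costs more work before rejection.
console.log(comparisons('hunter2-xecret', secret)); // 9
```

An attacker who can observe that difference, even statistically over many requests, can fix one character at a time instead of guessing the whole secret at once.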


Where Timing Leaks Appear in Web Applications

Authentication flows can leak timing when the application exits early for an unknown account. If an unknown email returns immediately, but a known email triggers password hashing before failing, the response time can become a username enumeration signal. The OWASP Authentication Cheat Sheet calls this out directly: authentication responses need to avoid discrepancy factors, including processing-time differences.

Password reset flows can have the same problem. A visible message such as "this email does not exist" is the obvious leak. A response that says the same thing for every email but takes much longer for real accounts is the quieter version.

Webhook verification is another place where this matters. Stripe, GitHub, Slack, and other systems commonly sign payloads. Your application recomputes a signature and compares it with the supplied one. If that comparison is not timing safe, a malicious caller may get a signal about the expected signature. In Node.js, the official crypto.timingSafeEqual documentation names HMAC digests, authentication cookies, and capability URLs as suitable uses for constant-time comparison.
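As a sketch of that recompute-and-compare flow, here is a hypothetical webhook verifier. The secret, body, and hex-encoded signature format are placeholders rather than any specific provider's scheme; the point is that the final comparison uses timingSafeEqual instead of ===.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Hypothetical verifier: recompute the HMAC over the raw request body,
// then compare digests in constant time.
const verifySignature = (
  rawBody: string,
  suppliedHex: string,
  secret: string,
): boolean => {
  const expected = createHmac('sha256', secret).update(rawBody).digest();
  const supplied = Buffer.from(suppliedHex, 'hex');
  // timingSafeEqual throws on mismatched lengths, so check first.
  if (supplied.length !== expected.length) return false;
  return timingSafeEqual(supplied, expected);
};

// Illustrative usage with a made-up secret and payload.
const secret = 'whsec_example';
const body = '{"event":"ping"}';
const goodSig = createHmac('sha256', secret).update(body).digest('hex');

console.log(verifySignature(body, goodSig, secret)); // true
console.log(verifySignature(body + 'x', goodSig, secret)); // false
```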

Cryptographic implementations can leak timing at a lower level too. That is where the subject gets older and more serious than API key demos.


Big Examples That Made the Risk Real

Timing attacks are not a new web trend. Paul Kocher's 1996 paper, Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems, helped establish the practical risk: implementations can reveal secret material when runtime depends on secret values.

In 2003, David Brumley and Dan Boneh published Remote Timing Attacks are Practical, showing a timing attack against OpenSSL that could extract private keys from an OpenSSL-based web server on a local network. That mattered because it moved timing attacks from "maybe on smartcards or lab hardware" towards ordinary software systems and network services.

The Lucky Thirteen attack is another useful example. It targeted timing differences in TLS and DTLS record processing when CBC-mode encryption was used. The lesson for most web developers is not to hand-roll TLS. It is that mature protocols and libraries can still be affected when padding checks, MAC verification, and error handling take measurably different paths.

More recently, PortSwigger's research on web timing attacks that actually work pushed back on the comfortable idea that network noise makes remote timing attacks irrelevant. The practical techniques have improved. The safe conclusion is not panic; it is to avoid creating timing oracles around secrets in the first place.


Use Constant‑Time Comparison, but Do Not Stop There

For JavaScript running on Node.js, the direct comparison fix is usually to compare bytes with crypto.timingSafeEqual.

import { timingSafeEqual } from 'node:crypto';

const isSameSecret = (supplied: Buffer, expected: Buffer): boolean => {
  if (supplied.length !== expected.length) return false;
  return timingSafeEqual(supplied, expected);
};

That length check is necessary because Node throws when the inputs have different byte lengths. It also shows why the surrounding code still matters. The same Node.js documentation warns that using crypto.timingSafeEqual does not automatically make everything around it timing safe.

A common pattern is to compare fixed-length digests rather than raw user strings. For example, compute an HMAC or hash in a consistent way, then compare the resulting buffers. Python has the same idea in hmac.compare_digest, which is designed to avoid content-based short-circuiting during comparison.
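A minimal sketch of that digest-comparison pattern in Node.js, assuming both values are available as strings: hashing both sides the same way gives timingSafeEqual buffers of identical length, which also avoids leaking the secret's length through an early length check.

```typescript
import { createHash, timingSafeEqual } from 'node:crypto';

// Hash both values identically so the compared buffers are always the
// same fixed length (32 bytes for SHA-256), then compare in constant time.
const sameSecret = (supplied: string, expected: string): boolean => {
  const a = createHash('sha256').update(supplied).digest();
  const b = createHash('sha256').update(expected).digest();
  return timingSafeEqual(a, b); // never throws: lengths always match
};

console.log(sameSecret('token-123', 'token-123')); // true
console.log(sameSecret('token-124', 'token-123')); // false
```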

The comparison function is one piece of the defence, not the whole design.


Keep Authentication Paths Boringly Similar

Login and password reset flows should avoid obvious early exits.

If a user does not exist, do not return a different visible message. Do not return a different status code that says the same thing in the UI but leaks through the network panel. Do not skip all expensive work for missing accounts if doing so makes missing accounts measurably faster than real accounts with wrong passwords.

That does not mean pretending user experience does not matter. It means shaping the flow deliberately. For sensitive systems, use generic responses, perform comparable work where practical, apply rate limiting, and monitor repeated failures. The OWASP password storage guidance also matters because you should be verifying slow password hashes, not comparing plaintext passwords.

Dummy hashing for nonexistent users can be reasonable when username enumeration risk matters. So can account-level throttling, IP-level throttling, bot controls, multi-factor authentication, and clear logging. None of those is a silver bullet.
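Here is one way the dummy-hashing idea can look in Node.js. This is a hypothetical sketch, not a production design: the in-memory user map, the scrypt parameters, and the dummy password are all made up for the illustration. The point is that an unknown account still pays the hashing cost, so "no such user" and "wrong password" take comparable time.

```typescript
import { scryptSync, timingSafeEqual, randomBytes } from 'node:crypto';

type StoredUser = { salt: Buffer; hash: Buffer };

// Hypothetical in-memory user store for the sketch.
const users = new Map<string, StoredUser>();

const register = (email: string, password: string): void => {
  const salt = randomBytes(16);
  users.set(email, { salt, hash: scryptSync(password, salt, 32) });
};

// A fixed dummy record, used when the account does not exist, so the
// slow hash runs on every login attempt regardless.
const dummySalt = randomBytes(16);
const dummyHash = scryptSync('dummy-password-not-in-use', dummySalt, 32);

const login = (email: string, password: string): boolean => {
  const user = users.get(email) ?? { salt: dummySalt, hash: dummyHash };
  const candidate = scryptSync(password, user.salt, 32);
  const match = timingSafeEqual(candidate, user.hash);
  // Only succeed for real accounts, after the work has been done.
  return users.has(email) && match;
};
```

The success check happens after the hashing and comparison, so missing accounts and wrong passwords follow the same expensive path.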


Random Delays are a Poor Primary Fix

Adding a random delay feels tempting. If timing is the problem, make timing messy. But it is not a reliable primary defence. Random delays can hurt legitimate users, increase server load, and still average out under repeated measurement.

Prefer removing the timing difference at the source. Use constant-time comparison for secrets. Avoid branchy logic where secret values determine obvious work. Keep authentication failure paths similar. Use library-level crypto rather than building your own. Add rate limits and monitoring so repeated measurement becomes expensive and visible.


What to Check in a Code Review

Timing issues are easy to miss because the vulnerable code often looks tidy.

I would pay attention to:

  • direct comparison of API keys, webhook signatures, session tokens, reset tokens, or authentication cookies
  • login paths that skip password hashing when the account does not exist
  • password reset, invite, and registration flows that expose account existence through time, status, copy, or redirect shape
  • custom cryptography, especially code with branches based on secret values
  • different response bodies or status codes for security-sensitive failures
  • missing rate limiting on endpoints that can be measured repeatedly

Most teams do not need to become timing-attack researchers. They do need to recognise when code is asking ordinary equality, branching, or error handling to do security-sensitive work.


Wrapping up

A timing attack turns runtime into evidence. The application may say only "wrong key" or "login failed", but the time it takes to say that can reveal something useful to an attacker.

The fix is rarely dramatic. Use the comparison functions your platform provides. Keep secret-dependent paths consistent. Avoid leaking account existence. Rate-limit endpoints that invite repeated guesses. Let mature libraries handle cryptographic details.

The clock is part of the interface, whether we intend it or not. Security-sensitive code should be written with that in mind.

Key Takeaways

  • Timing attacks use response-time differences as a side channel.
  • Ordinary string equality is not a safe primitive for comparing secrets.
  • Remote timing attacks are harder than local demos, but they are not automatically impossible.
  • Use constant-time comparison for API keys, signatures, tokens, and other secret digests.
  • Authentication flows should avoid visible and timing-based user enumeration.
  • Rate limiting, monitoring, and generic responses support the fix, but they do not replace safer comparison and consistent control flow.

Planning a platform change?

I help teams make difficult platform work clearer, from architecture decisions and migrations to launch recovery, performance, and search visibility.