Caching Strategies in React

Hero image for Caching Strategies in React. Image by Arum Visuals.

Modern web applications rely on fetching data, whether from an API, a database, or another source. The challenge is that every request takes time and consumes resources. Without caching, applications can feel sluggish, and servers can become overwhelmed by unnecessary repeated requests.

Caching helps solve this by temporarily storing data, reducing redundant fetching, and improving performance. In React, we can cache data on the client, whilst Next.js provides built-in server-side caching mechanisms. However, in a serverless environment, caching works differently, and we cannot rely on in-memory storage. Instead, we need external solutions like Redis.

This is a challenge I've faced recently in a project I've been working on. The gameplay requires very fast API responses, otherwise it begins to look broken, while the data itself is large and complicated. I cover how I resolved that in much more detail in the case study, but it prompted me to write here, too.

In this article, I will explore different caching strategies for React and Next.js applications, including best practices for caching in serverless environments.


Client‑Side Caching in React

Client-side caching is one of the simplest ways to improve performance. If data does not change often, we can store it in memory or local storage instead of refetching it from an API every time.

Using State for Basic Caching

A basic approach is to store fetched data in React state (something we should all be innately familiar with), backed by localStorage, like this:

```tsx
const [data, setData] = useState<string | null>(null);

useEffect(() => {
  const cachedData = localStorage.getItem("cachedData");
  if (cachedData) {
    setData(cachedData);
  } else {
    fetch("https://api.example.com/data")
      .then((res) => res.text())
      .then((fetchedData) => {
        setData(fetchedData);
        localStorage.setItem("cachedData", fetchedData);
      });
  }
}, []);
```

Here, we first check if the data exists in localStorage. If not, we fetch it and store it for future use.
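One weakness of this approach is that the cached value never expires. A hedged refinement is to store a timestamp alongside the data and treat old entries as stale; the entry shape and helper names below are illustrative assumptions, not part of any API:

```typescript
// Illustrative time-aware wrapper for client-side caching.
// The value is stored together with the time it was cached.
type CacheEntry = { value: string; storedAt: number };

// Wrap a value with a timestamp before writing it to localStorage.
function toEntry(value: string, now: number = Date.now()): CacheEntry {
  return { value, storedAt: now };
}

// Decide whether a cached entry is still usable, given a TTL in milliseconds.
function isFresh(entry: CacheEntry, ttlMs: number, now: number = Date.now()): boolean {
  return now - entry.storedAt < ttlMs;
}
```

On read, an entry that fails the `isFresh` check would simply be discarded and refetched, exactly as in the cache-miss branch above.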

Using React Query for Smarter Caching

For more advanced caching, we can use React Query, which automatically caches API responses and refetches them when needed. It looks something like this:

```tsx
import { useQuery } from "@tanstack/react-query";

const fetchData = async () => {
  const res = await fetch("https://api.example.com/data");
  return res.json();
};

const MyComponent = () => {
  const { data, isLoading } = useQuery({
    queryKey: ["data"],
    queryFn: fetchData,
    staleTime: 60000,
  });

  if (isLoading) return <p>Loading...</p>;
  return <p>{data}</p>;
};
```

React Query allows us to hand off cache management: it intelligently caches responses, reducing unnecessary API calls whilst also ensuring fresh data when we need it.


Server‑Side Caching in Next.js

Next.js offers several built-in caching mechanisms, which we can take advantage of to reduce server load and improve application response times.

Static Site Generation (SSG) for Prebuilt Caching

If data does not change frequently, we can prebuild pages with our data at build time using Static Site Generation (SSG). This is great for pages that rarely change, like Author pages on a blog, for example:

```js
export async function getStaticProps() {
  const res = await fetch("https://api.example.com/data");
  const data = await res.json();

  return { props: { data } };
}
```

Since SSG pages are prebuilt, they do not update unless the site is redeployed. This is perfect for static content but impractical for frequently changing data.

Incremental Static Regeneration (ISR)

ISR builds on SSG by allowing pages to update after deployment at defined intervals. This means we can serve pregenerated content but refresh it periodically, like this:

```js
export async function getStaticProps() {
  const res = await fetch("https://api.example.com/data");
  const data = await res.json();

  return { props: { data }, revalidate: 30 };
}
```

With ISR, a request triggers page regeneration only after the revalidate time has passed, which means that users receive cached content until it is updated. In the example above, the revalidate: 30 option in the return means that the page will be regenerated at most once every thirty seconds, keeping content up to date whilst also reducing API calls.
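The revalidation rule can be sketched as a single decision, taken when a request arrives. This is an illustrative model of the timing logic, not Next.js internals:

```typescript
// Illustrative ISR model: a request triggers background regeneration only
// when the cached page is at least `revalidate` seconds old.
function shouldRegenerate(builtAtMs: number, revalidateSeconds: number, nowMs: number): boolean {
  return (nowMs - builtAtMs) / 1000 >= revalidateSeconds;
}
```

Either way, the request itself is answered from the cached page; regeneration happens in the background for subsequent visitors.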

Server‑Side Rendering (SSR) with Built‑In Caching

For frequently changing data, Server-Side Rendering (SSR) can fetch data on every request. However, Next.js still allows us to add caching at the HTTP level using cache headers:

```js
export async function getServerSideProps({ res }) {
  res.setHeader(
    "Cache-Control",
    "public, s-maxage=60, stale-while-revalidate=30"
  );

  const response = await fetch("https://api.example.com/data");
  const data = await response.json();

  return { props: { data } };
}
```

The result here is similar to ISR (but a different approach): the response is cached by the CDN for 60 seconds, serving fresh data while reducing redundant requests.
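The header combines two directives: s-maxage sets how long a shared cache may serve the response as fresh, and stale-while-revalidate sets how long after that it may still serve the stale copy while refetching in the background. The decision a cache makes can be sketched like this (function and state names are my own):

```typescript
// Classify a cached response by its age against the two Cache-Control windows.
// "fresh": serve from cache; "stale-while-revalidate": serve the stale copy
// and refresh in the background; "miss": must fetch before responding.
function cacheState(
  ageSeconds: number,
  sMaxage: number,
  staleWhileRevalidate: number
): "fresh" | "stale-while-revalidate" | "miss" {
  if (ageSeconds < sMaxage) return "fresh";
  if (ageSeconds < sMaxage + staleWhileRevalidate) return "stale-while-revalidate";
  return "miss";
}
```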


Caching in Serverless Environments

Why Can't We Use In‑Memory Storage in Serverless Functions?

Serverless platforms like Vercel and Netlify do not retain state between function calls. Each request may be handled by a different instance, which means that:

  • Variables stored in memory do not persist between requests.
  • If a function is idle, it is shut down, clearing any data you may have cached.
  • There is no guarantee the same function instance will handle the next request.

For example, this attempt at caching in-memory will not work reliably in serverless functions:

```js
let cache = {};

export default async function handler(req, res) {
  if (cache.data) {
    return res.json(cache.data);
  }

  const response = await fetch("https://api.example.com/data");
  cache.data = await response.json();
  res.json(cache.data);
}
```

Each time a new instance runs, the cache object starts empty, so the API request is made again, negating the attempted caching.
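The failure mode is easy to simulate: if each "instance" gets its own module-level cache object, two requests routed to different instances both miss, while only a reused instance ever hits its cache. An illustrative model, with no real network involved:

```typescript
// Model of isolated serverless instances: each call to makeInstance()
// represents a cold start with its own module-level cache.
function makeInstance() {
  const cache: { data?: string } = {};
  let fetches = 0;
  return {
    // Simulated request handler: serve from the local cache or "fetch" upstream.
    handle(): string {
      if (cache.data !== undefined) return cache.data;
      fetches += 1; // stands in for the real API call
      cache.data = "payload";
      return cache.data;
    },
    get fetches() {
      return fetches;
    },
  };
}
```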

Using Redis for Persistent Serverless Caching

Since serverless functions do not retain memory between executions, we need a persistent external cache to store frequently accessed data. This is where Redis comes in.

What is Redis?

Redis (Remote Dictionary Server) is an in-memory data store designed for lightning-fast data retrieval. Unlike traditional databases, which store data on disk, Redis keeps everything in RAM. This means that reads and writes to it are significantly faster than those to a conventional API. This makes it ideal for caching API responses, session data, and frequently accessed computations that can't be held on the server.

Even though Redis requires a network request to retrieve data, it is much faster than querying an API or database because:

  • Data is stored in memory, avoiding disk reads and complex query processing.
  • Latency is minimal, typically in the sub-millisecond range.
  • Operations are optimised for speed, with commands like GET and SET executing in constant time (O(1)).

Using Redis to Cache API Responses

Instead of hitting an external API every time, we can store responses in Redis and retrieve them virtually instantly:

```js
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL);

export default async function handler(req, res) {
  // Check Redis cache first
  const cachedData = await redis.get("apiData");
  if (cachedData) {
    return res.json(JSON.parse(cachedData)); // Return cached response
  }

  // Fetch fresh data from the API
  const response = await fetch("https://api.example.com/data");
  const data = await response.json();

  // Store in Redis with a 60-second expiration
  await redis.set("apiData", JSON.stringify(data), "EX", 60);

  res.json(data);
}
```

Now, if multiple users request the same data within 60 seconds, they receive it instantly from Redis rather than waiting for the API to respond.
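The handler above follows the cache-aside pattern: check the cache, fall back to the source on a miss, and write the result back with a TTL. The pattern generalises to any store exposing get/set; below is a sketch using a plain Map as a stand-in for Redis so it runs without a server (the interface and names are illustrative, though ioredis exposes an equivalent get/set surface):

```typescript
// Minimal async key-value store interface, modelled on a Redis client.
interface KeyValueStore {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, ttlSeconds: number): Promise<void>;
}

// In-memory stand-in for Redis so the sketch is self-contained.
// (It records the TTL but, unlike Redis, never expires entries.)
class MapStore implements KeyValueStore {
  private m = new Map<string, string>();
  async get(key: string) {
    return this.m.get(key) ?? null;
  }
  async set(key: string, value: string, _ttlSeconds: number) {
    this.m.set(key, value);
  }
}

// Cache-aside: return the cached value if present, otherwise compute,
// store with a TTL, and return the fresh value.
async function cacheAside<T>(
  store: KeyValueStore,
  key: string,
  ttlSeconds: number,
  fetcher: () => Promise<T>
): Promise<T> {
  const hit = await store.get(key);
  if (hit !== null) return JSON.parse(hit) as T;
  const fresh = await fetcher();
  await store.set(key, JSON.stringify(fresh), ttlSeconds);
  return fresh;
}
```

Swapping MapStore for a real Redis client changes nothing in the calling code, which is what makes the pattern a good fit for serverless handlers.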

Why Is Redis Faster Than an API Call?

A typical API request may take 100–500ms due to network latency, authentication, and database queries. Redis, by contrast, can serve cached responses in under 1ms because:

  • It eliminates the need for API authentication on every request.
  • It avoids making database queries for each user.
  • It reduces network latency, as responses are served from the cache instead of a remote data centre.

By integrating Redis into a serverless architecture, we can significantly reduce response times and minimise redundant API calls, making our applications much more scalable.


Wrapping up

Caching is crucial for optimising React and Next.js applications. Client-side caching improves performance for repeated API calls, while Next.js offers built-in caching mechanisms like ISR and HTTP cache headers. In serverless environments, caching must be handled externally using tools like Redis since in-memory storage is not persistent.

Key Takeaways

  • Client-side caching (React Query, localStorage) reduces API calls.
  • Next.js caching (SSG, ISR, SSR) minimises server requests and improves load times.
  • Serverless functions cannot use in-memory storage between requests.
  • Redis provides a persistent caching layer for serverless environments.

Using the right caching strategy makes a huge difference in performance and scalability. Whether we are optimising API calls in React, caching pages in Next.js, or handling data in a serverless environment, a well-planned approach keeps our applications fast and responsive.

