Understanding Event Loop and Concurrency in JavaScript

Hero image for Understanding Event Loop and Concurrency in JavaScript. Image by Predrag Nikolic.

Async JavaScript feels magical right up until it breaks. A loading spinner never appears. A setTimeout(..., 0) runs later than expected. Two requests finish in the wrong order. An await that looked harmless leaves the page feeling sticky.

None of that is random. It usually means our mental model is too vague.

People hear that JavaScript is single-threaded and picture one long queue where everything politely waits its turn. That is close enough to get started and wrong enough to cause trouble. The more useful model is that JavaScript executes one piece of synchronous code at a time on a call stack, whilst the host environment does other work around it and feeds completed work back in according to event loop rules.

Once that clicks, async code stops feeling mystical and starts feeling debuggable.


The Three Moving Parts Worth Caring About

You do not need a computer science diagram with nine boxes to reason about the event loop day to day. Three moving parts carry most of the weight:

  • the call stack, where synchronous JavaScript runs
  • the host environment, which handles things like timers, network I/O, and browser events outside the stack
  • one or more queues, where callbacks wait until JavaScript is allowed to run them

When you call a function, it goes onto the stack. When that function returns, it comes off again. If you start a timer, kick off a fetch, or register a click handler, that work is not sitting on the stack for ten seconds waiting patiently. The browser or Node runtime is tracking it elsewhere.

When that outside work becomes ready, the runtime schedules follow-up JavaScript so it can run later once the stack is clear and the event loop says it is time.

That last bit matters. Ready does not mean run immediately. It means eligible to be queued.
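A quick way to see the difference: even an already-settled promise defers its callback. This sketch just records execution order:

```typescript
// A promise that is already settled still does not run its callback
// synchronously. "Ready" work is queued as a microtask first.
const order: string[] = [];

Promise.resolve().then(() => {
  order.push('then callback');
});

order.push('sync code');

// At this point only 'sync code' is in the array; the callback is
// still waiting in the microtask queue for the stack to empty.
```

The resolved value was ready before `.then` was even called, yet the callback still had to wait its turn.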


What Single‑Threaded Does and Does Not Mean

JavaScript on the main thread executes one frame of synchronous JavaScript at a time. That part really is single-threaded.

What is not single-threaded is the whole runtime. Browsers can track timers, process network responses, handle user input, decode images, and prepare rendering work without your code actively sitting on the stack doing those jobs. Node can wait on file system work, sockets, and OS signals. The runtime can make progress on several things at once even though your JavaScript callback still runs one slice at a time.

That is why concurrency is the right word more often than parallelism.

If you fire off three HTTP requests with Promise.all, the waiting happens concurrently. The callbacks that handle the results still run on the JavaScript thread according to the event loop. If you want your own JavaScript to execute truly in parallel, you are usually into Web Workers, Worker Threads, or similar mechanisms.
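To make the concurrent waiting concrete, here is a small sketch in which a hypothetical delay() helper stands in for real network calls:

```typescript
// Sketch: delay() stands in for real network calls. The three waits
// overlap, so the total time is roughly the longest single wait.
const delay = (ms: number, value: string): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

const fetchAll = async (): Promise<string[]> => {
  const started = Date.now();
  const results = await Promise.all([
    delay(30, 'user'),
    delay(30, 'orders'),
    delay(30, 'settings'),
  ]);
  console.log(`waited ~${Date.now() - started}ms`); // roughly 30ms, not 90ms
  return results;
};
```

The waiting overlaps, but each resolution callback still runs one at a time on the JavaScript thread.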


Tasks, Microtasks, and Why Promises Seem to Jump the Queue

This is the part that causes most day-to-day confusion.

The event loop does not use one undifferentiated queue. The most useful distinction for ordinary development is between tasks and microtasks.

  • tasks include things like timers, message events, and many browser or runtime callbacks
  • microtasks include promise reactions, await continuations, and queueMicrotask() callbacks

The rough rule is:

  1. Run the current synchronous code until the stack is empty.
  2. Drain the microtask queue.
  3. Move on to the next task.
  4. Give the host environment a chance to do other work, including rendering in the browser.

That is why a resolved promise callback usually runs before a zero-delay timer. The timer is a task. The promise continuation is a microtask.

You will also hear the term macrotask. That shorthand is common enough, but for everyday reasoning it is usually enough to think in terms of tasks and microtasks.


A Tiny Example That Explains a Lot

console.log('script start');

setTimeout(() => {
  console.log('timeout');
}, 0);

Promise.resolve().then(() => {
  console.log('promise');
});

console.log('script end');

The output is:

  • script start
  • script end
  • promise
  • timeout

Nothing exotic happened here.

console.log('script start') and console.log('script end') run immediately because they are synchronous. The timer callback is scheduled as a task. The promise reaction is scheduled as a microtask. Once the stack is empty, the runtime drains microtasks before taking the next task, so promise appears before timeout.

If that order surprises you, the surprise is useful. It means you were mentally flattening multiple queues into one.


async and await Are Syntax, Not a New Execution Model

async and await make asynchronous code easier to read, but they do not create a new thread and they do not suspend the whole runtime.

An async function runs synchronously until it hits an await. At that point, it returns a promise to its caller, and the rest of the function is scheduled to continue later when the awaited value settles.

In practice, that continuation behaves like promise reaction work. It rejoins the flow through the microtask queue.

const example = async (): Promise<void> => {
  console.log('inside start');
  await Promise.resolve();
  console.log('inside after await');
};

console.log('before call');
void example();
console.log('after call');

This logs:

  • before call
  • inside start
  • after call
  • inside after await

That ordering is a good reminder that await means resume later, not freeze everything until this line is finished.

For experienced developers, this is also where a lot of hidden sequencing bugs begin. Two awaits written one after the other are often accidentally serial. If the operations are independent, Promise.all or a similar pattern may be the real intention.

const [user, orders] = await Promise.all([
  getUser(),
  getOrders(),
]);

That does not make JavaScript parallel in the language-level sense. It does mean you are no longer waiting for one I/O-bound operation to finish before starting the other.


Why the Browser Still Freezes Even When Your Code is Async

A common novice assumption is that once promises or await appear in the code, the UI should stay responsive automatically. Not so.

Rendering only happens when JavaScript gives the browser room to render. If you keep the main thread busy with long synchronous work, the browser cannot paint in the middle of it just because you changed some state or added a class name first.

button.textContent = 'Saving...';
doVeryExpensiveWork();
button.textContent = 'Done';

If doVeryExpensiveWork() blocks the main thread for two seconds, the user may never see Saving... at all. The browser often cannot paint that intermediate state until after the expensive work ends, by which point the text has already changed again.

The same problem can show up with overgrown microtask chains. Because the runtime drains microtasks before moving on, a large promise chain can delay rendering longer than people expect.

That is one reason "just wrap it in a promise" is such weak performance advice. If the work itself is heavy and still happens on the main thread, you have not really solved the responsiveness problem.
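A minimal sketch makes the point: the function passed to the Promise constructor runs synchronously, so the wrapper changes nothing about when the heavy work happens:

```typescript
// The executor passed to new Promise runs synchronously, so wrapping
// heavy work this way does not defer it, let alone move it off-thread.
const order: string[] = [];

new Promise<void>((resolve) => {
  order.push('heavy work (still synchronous)');
  resolve();
});

order.push('after new Promise');

// 'heavy work (still synchronous)' was pushed first: the constructor
// blocked until the executor finished.
```

Only the work scheduled through `.then` or `await` moves to a later turn; the executor itself runs right away.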


A More Realistic Example: Why a Spinner Sometimes Never Appears

Imagine this code in a browser:

const handleClick = async (): Promise<void> => {
  setIsSaving(true);
  saveLargeDraftSynchronously();
  await sendAnalytics();
  setIsSaving(false);
};

At a glance, that looks fine. It even has an await in it, so it feels asynchronous. But if saveLargeDraftSynchronously() is expensive enough, the browser may not paint the saving state before the heavy work runs. The interface still feels frozen.

The fix is not always to use promises harder. The fix is usually one of these:

  • make the synchronous work smaller
  • split it into chunks and yield between chunks
  • move CPU-heavy work off the main thread
  • change the user experience so the expensive work happens in a less blocking way

That is a more mature event loop lesson: queue choice matters, but so does whether the work should be on the main thread at all.
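As one illustration of yielding before the heavy work, here is a sketch of the click handler with hypothetical stand-ins for the UI and work functions. The idea is to push the expensive synchronous step into a later task so a paint can happen first:

```typescript
// Sketch of the "yield before heavy work" fix. These helpers are
// hypothetical stand-ins for the real UI and work functions.
const states: boolean[] = [];
const setIsSaving = (value: boolean): void => { states.push(value); };
const saveLargeDraftSynchronously = (): void => { /* expensive sync work */ };
const sendAnalytics = async (): Promise<void> => { /* network call */ };

// Resolving via setTimeout moves the continuation to a later task,
// which opens a rendering opportunity in between.
const yieldToTask = (): Promise<void> =>
  new Promise((resolve) => setTimeout(resolve, 0));

const handleClick = async (): Promise<void> => {
  setIsSaving(true);
  await yieldToTask();             // let the saving state paint first
  saveLargeDraftSynchronously();   // heavy work now runs in a later task
  await sendAnalytics();
  setIsSaving(false);
};
```

A single timer yield is a common tactic rather than a guarantee; the browser is not obliged to paint in the gap, which is why requestAnimationFrame-based patterns also exist for this job.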


Concurrency Creates Ordering Problems as Well as Performance Problems

Once several asynchronous operations are in flight together, the next trap is assuming they will finish in the order you started them. They often will not.

That is how innocent features turn into race conditions.

Imagine a search box that requests results on each keystroke. The user types c, then ca, then cat. If the c request is slow and the cat request is fast, the old response can arrive last and overwrite the newest state unless you guard against it.

let latestRequestId = 0;

const search = async (query: string): Promise<void> => {
  latestRequestId += 1;
  const requestId = latestRequestId;

  const results = await fetchResults(query);

  if (requestId !== latestRequestId) {
    return;
  }

  renderResults(results);
};

That bug is not really about fetch. It is about concurrency and ordering. Several operations are progressing independently, and the event loop will happily run whichever completion callback becomes ready first. If your state model assumes older work cannot arrive later, you have built a race condition.

In production code, AbortController is often the better fit because it lets you cancel stale work instead of merely ignoring the result.
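Here is a sketch of that approach, with fetchResults written as a hypothetical helper that honours the abort signal the way a real fetch call would:

```typescript
// Sketch: cancelling stale searches with AbortController. fetchResults
// is a stand-in that respects the signal, as a real fetch call would.
const abortError = (): Error =>
  Object.assign(new Error('Aborted'), { name: 'AbortError' });

const fetchResults = (query: string, signal: AbortSignal): Promise<string> =>
  new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve(`results for ${query}`), 20);
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(abortError());
    });
  });

const rendered: string[] = [];
let currentController: AbortController | null = null;

const search = async (query: string): Promise<void> => {
  currentController?.abort();        // cancel the previous in-flight search
  const controller = new AbortController();
  currentController = controller;
  try {
    rendered.push(await fetchResults(query, controller.signal));
  } catch (error) {
    if ((error as Error).name !== 'AbortError') {
      throw error;                   // ignore cancellations, rethrow real failures
    }
  }
};
```

With a real fetch, you would pass controller.signal in the request options and the aborted request rejects with an AbortError in the same way.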


setTimeout(fn, 0) Is Not 'Run This Now'

Developers routinely reach for setTimeout(..., 0) to defer work until later, and that can be fine, but it helps to be precise about what it means.

It does not mean:

  • run immediately
  • jump ahead of promises
  • guarantee a paint before the callback
  • guarantee a specific millisecond boundary

It means the callback becomes a task that is eligible to run after the current synchronous work finishes and after the runtime has drained pending microtasks. In browsers, timer clamping and runtime scheduling can push it later still.

That is why zero-delay timers are useful as a coarse yield, not as a precision instrument.


Choosing the Right Way to Yield

When code is tying the main thread in knots, the fix is often to yield intentionally rather than hoping the runtime sorts it out for you.

Different yielding tools have different jobs:

  • use a promise continuation or queueMicrotask() when you need to schedule small follow-up work before the next task
  • use setTimeout when it is acceptable to push work to a later task
  • use requestAnimationFrame when the next step should happen around the next paint
  • use Web Workers when the work is genuinely CPU-heavy and should not live on the main thread

If the job can be split up, even a simple chunking approach can help:

const processLargeList = async (items: string[]): Promise<void> => {
  for (let index = 0; index < items.length; index += 100) {
    const chunk = items.slice(index, index + 100);
    renderChunk(chunk);

    await new Promise<void>((resolve) => {
      setTimeout(resolve, 0);
    });
  }
};

That pattern is not glamorous, but it can be the difference between a page that feels frozen and one that stays interactive enough to use.

This is where novice and experienced developers usually part company. Beginners often just need to know that queues exist. More experienced developers need to be deliberate about which queue they are feeding and what tradeoff that creates.

One warning here: microtasks are easy to abuse. If you keep recursively queueing more microtasks, you can delay tasks, input handling, and rendering longer than intended. Higher priority is not the same thing as better.
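A small sketch shows the starvation mechanism: microtasks queued from within microtasks still all run before the next task gets a turn:

```typescript
// Sketch: a chain of microtasks all runs before the next task fires,
// which is how a recursive chain can hold off timers and rendering.
const order: string[] = [];
setTimeout(() => order.push('task'), 0);

let count = 0;
const queueMore = (): void => {
  order.push(`microtask ${count}`);
  count += 1;
  if (count < 3) {
    queueMicrotask(queueMore);     // each microtask schedules another
  }
};
queueMicrotask(queueMore);

// The timer was queued first, but all three microtasks still run
// before 'task' because the microtask queue drains to empty.
```

Cap it at three and this is harmless; make the chain unbounded and the timer never fires at all.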


Browser and Node.js Share the Broad Model, but Not Every Detail

The broad mental model in this article applies across environments: synchronous code runs on a stack, the host runtime manages asynchronous capabilities, and the event loop decides when queued callbacks can run.

The exact details are hostspecific.

Browsers have rendering to worry about. Node.js has event loop phases tied to timers, I/O, and other runtime concerns. Node also has process.nextTick(), which is even easier to misuse because it can run before ordinary promise microtasks and starve other work if abused.

That is worth knowing because developers often learn one diagram and then apply it far too literally everywhere. The diagrams are only mental models. The host environment still matters.

For most frontend work, though, the browser-flavoured model is enough to make better decisions quickly.


How to Debug Event Loop Problems Without Guessing

When async behaviour feels strange, the fix is usually not more intuition. It is better observation.

A few habits help a lot:

  • log both when work starts and when the callback actually runs
  • distinguish synchronous setup from asynchronous continuation in your logging
  • look for accidental serial awaits
  • treat stale-result bugs as ordering problems, not merely networking problems
  • use browser performance tooling when the interface feels blocked
  • check whether the work is heavy, not just whether it is asynchronous

A large number of event loop bugs collapse once you ask two simple questions:

  • what queue is this callback entering?
  • what still has to happen before that queue gets a turn?

Those questions are usually more useful than memorising folklore about promises being faster or timers being later.


The Mental Model Worth Keeping

The event loop is not an obscure implementation detail sitting underneath everyday JavaScript. It is the reason ordinary JavaScript behaves the way it does once code stops being purely synchronous.

If you remember only a few things, remember these:

  • synchronous JavaScript runs to completion on the stack
  • promises and await schedule followup work, they do not create a new thread
  • microtasks run before the next task, which is why promise ordering often surprises people
  • rendering and input handling still need JavaScript to yield
  • concurrent operations do not guarantee ordered results

That is enough to make a lot of async code far less mysterious. It also makes performance conversations sharper. Sometimes the problem is the wrong queue. Sometimes it is stale ordering assumptions. Sometimes the work simply should not be on the main thread in the first place.

Once you can tell those apart, JavaScript concurrency stops feeling like superstition and starts feeling like engineering.


Categories:

  1. Development
  2. Front‑End Development
  3. Guides
  4. JavaScript