Asynchronicity and Promises
Published 2026-05-22, about a 16-minute read.
You're in a technical interview and there are two Staff Developers zooming
with you. They share their screen:
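The embedded snippet isn't reproduced here in text, so here's my reconstruction of it (with two small liberties: a log helper that records the output, and a requestAnimationFrame shim so it also runs outside a browser, where that API doesn't exist):

```javascript
// Shim for non-browser runtimes; in the interview it's the real browser API.
globalThis.requestAnimationFrame ??= (cb) => setTimeout(cb, 16);

const order = [];
const log = (n) => { order.push(n); console.log(n); };

log(1);
setTimeout(() => log(2), 0);
Promise.resolve().then(() => log(3));
queueMicrotask(() => log(4));
requestAnimationFrame(() => log(5));
log(6);
```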
They ask you with a smirk: "What is the expected output?"
Could you reason out what would happen?
Okay, seriously, if you ever find yourself in this situation for a technical
interview, I would run, not walk, away from that company.
But don't run from this blog post! It's helpful to understand these deep
pieces of JavaScript and the basics of asynchronicity, the event loop, and
control flows. And I would even venture to say it's
fun to learn something deeply and the history behind why it
is the way that it is today!
In
my last post I
started to re-implement fetch using
XMLHttpRequest
and
Promise. In this blog post I'm going to try to trace the beginnings of
asynchronicity in JavaScript, specifically before they became a primitive with
APIs that we could handle. Then, I want to talk about applications for async
task queuing today.
Take note of the make and model of your socks right now, because they're about
to be blown right off.
The Beginnings of Asynchronicity and Promises 📎
Promises go by a few different names, and each name in each language carries its own nuances; you really have to read the documentation to understand the differences and when two terms mean the same thing:
- Rust uses Futures from the standard library as its async primitive; this is what the async/await syntax desugars to.
- Scala uses Futures and Promises, where a future represents a future value and a promise is an object that will contain a future value.
- Java has tasks, futures, and something very much like a promise in JavaScript: CompletableFuture.
- Python has Futures and Tasks, which can be configured like Promises in JavaScript.
- C++ has both futures and promises, used specifically on the producing and consuming sides of future values.
- Ruby seems to use asynchronous tasks, promises, and futures.
So the concept is pretty ubiquitous. The implementations vary widely, but the value for programmers is the same: promises let you keep a reference, in hand, to work that will eventually be resolved.
But we can trace the beginnings of futures as an object in programming to one paper (or at least, most of what I read points to this paper as the seminal work putting forward the concept of a future as a primitive). The paper is called "The Incremental Garbage Collection of Processes" by Henry C. Baker and Carl Hewitt.
It's super interesting to read this paper today. Baker and Hewitt use the term
"future", and they acknowledge that Friedman and Wise called them "promises"
and Hibbard called them "eventuals." You also see certain language that is
common today being used even then ("thunks!") Futures were in the air, and
many people were toying with this in the functional programming world.
Just check out this explanation, written in 1977:
When an expression is given to the evaluator by the user, a future for that
expression is returned which is a promise to deliver the value of that
expression at some later time, if the expression has a value. A process is
created for each new future which immediately starts to work evaluating the
given expression. When the value of a future is needed explicitly, e.g. by
the primitive function "+", the evaluation process may or may not have
finished. If it has finished, the value is immediately made available; if
not, the requesting process is forced to wait until it finishes.
Pretty cool! Baker and Hewitt are talking about blocking synchronous execution on future values if those values haven't been resolved yet. And in their case, it was about garbage collection. The basic gist of their article was that if you have something in hand that branches to handle different futures (branches that may be running concurrently), all of the future paths need to be garbage collected, and the runtime needs to be aware of this.
In the end, this paper was important because it showed that future, unresolved, and asynchronous values could be first-class citizens in programming languages.
Enter JavaScript and the Event Loop 📎
If you want to get a summary of the event loop, there is
a talk by
Jake Archibald you can watch on
YouTube. This talk will give you a very good understanding of why JavaScript acts the way it does, why the main thread can get locked, what tasks and the event loop are, and more.
But for those who need a concise TLDW:
- JavaScript runtimes are "single threaded": there is a single call stack, the main thread, and JavaScript can only handle one thing at a time.
- JavaScript can line up work in what's called the task queue. If something is scheduled to happen (say, a callback passed to setTimeout), it is added to this task queue. Tasks on the queue are addressed first in, first out. But they are only addressed after the call stack is empty.
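A tiny demonstration of that second point, with a log helper that records the output:

```javascript
const order = [];
const log = (msg) => { order.push(msg); console.log(msg); };

log("start");
// Zero delay does not mean "run now": this callback is enqueued as a task
// and only runs once the current call stack is empty.
setTimeout(() => log("task"), 0);
log("end");
// → start, end, task
```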
There is another queue called the microtask queue. Historically, there was a
time when JavaScript didn't have a formal idea of microtasks, and the main
thread was all you had to work with. Think about this: how do you have code
act concurrently if the runtime can only work on one thing on the callstack at
a time?
If you enqueue a task, there are no good guarantees about when it will run, because you must depend on the call stack being empty. This means that if you have long-running processes on the main thread and enqueue a callback to console log "Hello, World" after 0 seconds, you will not see the greeting when you expect.
So developers tried to use setTimeout to enqueue some functions
so that they would act somewhat async, but you could always run into long
processes on the main thread that locked the whole application until the call
stack was empty.
Take, for example, a "Promise" class and its use below. (Note that I'm writing this in old, pre-ES6 style, so I'm using regular functions and prototypes instead of arrow functions and classes.)
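The original playground isn't embedded here in text form, so here is a sketch of the idea: a bare-bones "promise" whose callbacks are deferred with setTimeout (STPromise and its shape are my reconstruction, not a spec-compliant Promise):

```javascript
// A naive, pre-ES6 "promise" backed by setTimeout.
function STPromise(executor) {
  this.settled = false;
  this.value = undefined;
  this.callbacks = [];
  var self = this;
  executor(function resolve(value) {
    self.settled = true;
    self.value = value;
    // Flush callbacks via the task queue, the only deferral tool of the era.
    self.callbacks.forEach(function (cb) {
      setTimeout(function () { cb(value); }, 0);
    });
  });
}

STPromise.prototype.then = function (cb) {
  if (this.settled) {
    var self = this;
    setTimeout(function () { cb(self.value); }, 0);
  } else {
    this.callbacks.push(cb);
  }
  return this;
};

// Usage: resolve after 2 seconds, then log the value.
new STPromise(function (resolve) {
  setTimeout(function () { resolve("resolved!"); }, 2000);
}).then(function (value) {
  console.log(value); // "resolved!" after ~2s, as long as the thread is free
});
```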
Hey, that seemed to work exactly as we intended it! What's the issue?
Well, if we were to tie up the main thread with some sort of long running
loop, you'll see that the promise doesn't resolve after the 2 seconds we
expect:
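Here's a self-contained sketch of that failure mode, with a plain setTimeout standing in for the promise's internal timer:

```javascript
let resolvedAfter = 0; // set when the deferred callback finally runs

const scheduled = Date.now();
setTimeout(() => {
  resolvedAfter = Date.now() - scheduled;
  // ~5000ms, not the 2000ms we asked for!
  console.log(`resolved after ${resolvedAfter}ms`);
}, 2000);

// Lock the main thread for 5 seconds with a busy loop:
const start = Date.now();
while (Date.now() - start < 5000) { /* spin */ }
console.log("main thread finally free");
```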
You'll notice the main loop freezes all activity, and the
STPromise "task" doesn't even get kicked off until after the 5
seconds that the main thread is locked. This is not good!
Added Asynchronicity: the Microtask Queue 📎
MutationObserver was introduced around 2012 to 2013. Browsers
started to allow queueing of tasks that would execute before the next normal
task would execute. What this meant, then, was that the callbacks could take
advantage of this and execute more quickly when a DOM mutation happened. This
new queue of special tasks that could happen earlier was eventually dubbed the
microtask queue.
Smart people started to take advantage of this higher-priority queue. If you look at the early source code of Promise libraries such as Bluebird, you'll see that they took advantage of the MutationObserver API by assigning a callback to an observer and then triggering a mutation to execute that callback as a microtask!
Let's update our setTimeout example to try this "hack":
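Here is a sketch of the hack (MutationObserver and document are browser APIs; the fallback branch is only there so the snippet also runs in non-browser runtimes):

```javascript
var order = [];
function log(msg) { order.push(msg); console.log(msg); }

function enqueueMicrotask(callback) {
  if (typeof MutationObserver === "undefined") {
    return queueMicrotask(callback); // modern fallback outside the browser
  }
  var observer = new MutationObserver(function () {
    observer.disconnect();
    callback();
  });
  var node = document.createTextNode("");
  observer.observe(node, { characterData: true });
  node.data = "mutate!"; // any mutation flushes the callback as a microtask
}

log("sync");
enqueueMicrotask(function () { log("microtask"); });
log("still sync");
// → sync, still sync, microtask
```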
Bluebird and other promise libraries used many other options and fallbacks for
forcing microtasks, especially in different browsers or in Node, but this was
the main way it was done in the browser.
So maybe now you can see why "asynchronous" is used instead of "concurrent" for this sort of thing. The code is forcing higher-priority execution between normal macrotasks, but it isn't truly running concurrently, as it would be if the callback were running on a different thread.
It doesn't feel great co-opting this API to get a microtask queued, though,
right?
Things got better. With ES6 we got Promises, which implicitly queue microtasks in their API. Then, in 2019, we got the nice
queueMicrotask API that lets us queue microtasks directly.
Compare the difference:
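Both land on the same microtask queue, so they interleave in the order they were queued (again with a log helper recording the output):

```javascript
const order = [];
const log = (msg) => { order.push(msg); console.log(msg); };

// ES6: the .then callback is implicitly queued as a microtask.
Promise.resolve().then(() => log("promise .then"));

// 2019: queueMicrotask states the intent directly.
queueMicrotask(() => log("queueMicrotask"));

log("sync");
// → sync, promise .then, queueMicrotask
```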
Okay, what about process.nextTick() and
requestAnimationFrame()?
📎
Right.
requestAnimationFrame
is a different beast. You don't use this method to simulate asynchronous code. Although the API looks nearly identical, the callback receives a timestamp argument automatically, and its scheduling is closely tied to your display's refresh rate, which varies between machines and requires care to produce consistent results. Basically, it's different enough that it's not suitable for futures.
So then, what about
process.nextTick()? Well, the biggest issue is that it's only available in Node. But aside from that, it also feels like a different beast: it executes even before microtasks, so it acts somewhat like a pre-microtask queue. I would recommend just using queueMicrotask instead.
So what about that nightmare code interview question then? 📎
You can try it out here in the playground below, but first take a second just to think about it. We know that the regular console logs will happen in their current order. We also know that the setTimeout will enqueue a new task, which will run after the synchronous code and the microtasks. That leaves the Promise and the queueMicrotask: both are microtasks, and they run after the call stack empties but before the setTimeout, in the order they were queued.
The difficult one is the requestAnimationFrame. Depending on the computer you're working on, this may differ. However, this script is so small that there's little to no chance a refresh-rate-scheduled callback would run before the microtasks or even a regular task. For example, on a 60Hz screen the callback would be scheduled about 16.7 milliseconds after the call. JavaScript is fast, and the whole script below clocks in at about 0.67 milliseconds. So we can count on this happening last.
So: 1, 6, 3, 4, 2, and then 5.
Applications 📎
The reason I fell into investigating microtask queueing is that I authored a dependency injection library that includes reactive state. It's a service system, much like Ember's, that allows consumers to subscribe to state updates, with services that lazily instantiate themselves. It makes sharing functionality and state across the page easy.
If you've ever dealt with reactivity, you might know that it's tricky to stay efficient and performant while avoiding notification cycles or infinite render-state updates (I'm looking at you, useEffect()). So to avoid duplicate notifications and infinite cycles when state changes, I use a microtask queue.
I create a Set that lives at the module level of the service base class. When we notify a service or consumer that state has changed, we add that service or consumer to the set. Later, we check whether we've already notified them before doing any further notifying. It's a simple way to avoid cycles and duplicate notifications.
But the real trick is await 0. Since notify is an
async function, we are using promises automatically, and leveraging the
microtask queue.
If you want to see the file in the wild,
check it out here. But here is the gist of the notification function for state changes:
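Since the embed isn't reproduced here, this is my sketch of the idea (the names are illustrative, not the library's real API):

```javascript
const notified = new Set(); // module-level: shared across all notifications

async function notify(subscriber) {
  if (notified.has(subscriber)) return; // already queued during this flush
  notified.add(subscriber);
  // `await 0` wraps 0 in a resolved promise, so the rest of this function
  // is enqueued as a microtask, after all synchronous state changes settle.
  await 0;
  subscriber.update();
  notified.delete(subscriber);
}

// Three synchronous state changes, but only one notification:
let updates = 0;
const subscriber = { update() { updates++; } };
notify(subscriber);
notify(subscriber);
notify(subscriber);
queueMicrotask(() => console.log(updates)); // 1
```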
A simple improvement I can make is to be more explicit about clearing the notification set. Not only does queueMicrotask show my intent better than the cryptic await 0, but this function also no longer needs to be async!
So this is better, in my opinion:
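Again, a sketch with illustrative names rather than the library's actual code:

```javascript
const notified = new Set();

// No longer async: the microtask is queued explicitly.
function notify(subscriber) {
  if (notified.has(subscriber)) return;
  notified.add(subscriber);
  queueMicrotask(() => {
    subscriber.update();
    notified.delete(subscriber); // explicitly clear once flushed
  });
}

let updates = 0;
const subscriber = { update() { updates++; } };
notify(subscriber);
notify(subscriber);
queueMicrotask(() => console.log(updates)); // 1
```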
I'll have to update that library sometime 😗
You can also see this trick happening in a lot of other places. For example, Lit uses it to batch property changes. You can see this trick employed here. It's a bit harder to suss out, but using
await this.__updatePromise; ensures that the rest of the function is enqueued as a microtask. This means all property changes have settled before Lit continues and renders.
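The underlying pattern boils down to something like this sketch (my own simplification, not Lit's actual implementation):

```javascript
class BatchedElement {
  constructor() {
    this.changed = new Map();
    this.updatePending = false;
    this.renders = 0;
  }

  setProperty(name, value) {
    this.changed.set(name, value);
    if (!this.updatePending) {
      this.updatePending = true;
      // Awaiting (or .then-ing) this promise defers the render to a
      // microtask, after every synchronous property change has landed.
      this.__updatePromise = Promise.resolve().then(() => this.render());
    }
  }

  render() {
    this.renders++;
    console.log(`render #${this.renders} with ${this.changed.size} change(s)`);
    this.changed.clear();
    this.updatePending = false;
  }
}

const el = new BatchedElement();
el.setProperty("color", "red");
el.setProperty("size", "large"); // batched into the same render
```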
And in researching this post, I found out that Vue.js also does this with
their scheduler. See if you can understand how this
flushing promise works. (I had to have this explained to me, and to be honest, I'm still going back
to it to try to solidify what's happening in my brain.)
One more cool thing I ran into writing this post... 📎
I'm trying out my new ReplPlayground web component in this post,
and I ran into an issue while demonstrating how the main thread locks. See if
you can find it here:
Do you see it? Notice how it's "executing" for about 5 seconds, and only then do the two console logs show? This is an example of the blocked main thread holding up the console feedback in my component!
The code in the repl-playground actually gets executed using
eval() in a sandboxed iframe. Console messages and return values
are posted out of the iframe so values and logs can be displayed in the
console area of the component. So why does it wait until the main thread is
unblocked to log anything?
I learned that although the script is evaluated in an iframe, the iframe and the main page share a main thread. So the script running inside the iframe locked the thread until the eval was finished. The logs we expected to see are posted, and the event listeners that handle them are asynchronous, but they can't execute until the call stack is empty. This means they wait for eval() to finish before the queued tasks can be triggered.
It's pretty ironic that this very issue came up in the code to demo my blog
post concept!
The way you can resolve this is by using a web worker instead. I took a little
detour to allow an attribute web-worker on my repl playground
element, and if it's present it will execute the code in a web worker which is
a separate thread from the main thread.
Go ahead, inspect the web component and add `web-worker` as a boolean
attribute to it and try it. It will work!
Does that mean you can work concurrently in JavaScript with workers? 📎
Yes!
But there are many caveats: it's really hard to share memory between threads, and, most importantly, I would need to do a whole lot more research on workers to say more! 😅
Yes, that is probably what my next post will be about.
I hope you enjoy this stuff as much as I do! 📎
I learned a lot researching and diving deep into JavaScript history. Let me know if you have any questions or feedback, or if you just want to say hi! Find me on Bluesky or Mastodon. I also have an RSS feed here.