Two core concepts that allow the Vert.x framework to be highly scalable and performant are its event loop (more specifically, the Multi-Reactor pattern) and its message bus, called the EventBus.
But judging from the many questions I answer on StackOverflow about Vert.x, it seems that quite a few developers misunderstand these two very important concepts.
In this article I would like to address misconceptions regarding the Event Loop, such as:
“Vert.x has EventLoop, so it’s single threaded and is using only one CPU”
“Vert.x is multithreaded, so it has to create a thread for each verticle”
The Event Loop is an implementation of the Reactor design pattern.
Its goal is to continuously check for new events and, each time a new event comes in, to quickly dispatch it to someone who knows how to handle it.
But by using only one thread to consume all the events, we’re basically not making the best use of our hardware. Node.js applications, for example, often spawn multiple processes to address that issue.
While providing good isolation, processes are expensive. Vert.x uses multiple threads instead, which are cheaper in terms of system resources.
Hence — multi-Reactor.
To understand how the multi-reactor works in practice, we'll be checking the number of threads with a simple helper call.
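The original snippet isn't shown here, but a reasonable stand-in is a tiny helper over the JDK's `Thread.getAllStackTraces()` (the class and method names are my own):

```java
// Counts live JVM threads via a stack-trace snapshot.
// Not exact (threads may start or die between calls), but close enough.
public class ThreadCounter {
    static int threadCount() {
        return Thread.getAllStackTraces().keySet().size();
    }
}
```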
While not precise, this will serve our purposes well enough.
Let’s see first how many threads we have at the beginning of our program:
Before starting VertX -> 1 thread
Yep, that makes sense: that's only our main thread at this point. Occasionally you may see some other JVM maintenance threads kick in right as the application starts, but that is insignificant for the following examples.
Now we’ll start our Vert.x application:
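The startup snippet isn't shown; a minimal sketch using the standard Vert.x factory method looks like this:

```java
import io.vertx.core.Vertx;

// Create a Vert.x instance with default options.
Vertx vertx = Vertx.vertx();
```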
And check the number of threads again:
After starting VertX -> 3 threads
So, starting Vert.x yields 2 additional threads. One is running the application, and the other is called vertx-blocked-thread-checker, if you're curious.
Now let's deploy a thousand verticles and see how that affects our thread count. Verticles are lightweight actors that usually run on the event loop.
Ignore threadCounts for now; it'll be explained later.
We use a CountDownLatch here since verticles are deployed asynchronously, and we want to make sure that all of them have been deployed before we check the thread count.
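The original snippet isn't shown, so here's a rough reconstruction. `LoggingVerticle` and the variable names are assumptions (the article describes the verticle itself later), and a `ConcurrentHashMap` stands in for the article's HashMap to stay safe across event loop threads:

```java
// Shared map: event loop thread name -> number of verticles assigned to it.
Map<String, Integer> threadCounts = new ConcurrentHashMap<>();
// The latch lets us wait until all 1000 asynchronous deployments complete.
CountDownLatch latch = new CountDownLatch(1000);

for (int i = 0; i < 1000; i++) {
    vertx.deployVerticle(new LoggingVerticle(threadCounts), done -> latch.countDown());
}
latch.await();
System.out.println("After deploying 1000 verticles -> " + threadCount() + " threads");
```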
After deploying 1000 verticles -> 19 threads
So, some simple math here: we had 3 threads before, and now 16 additional threads have been added, all named in the form vert.x-eventloop-thread-X. You could deploy ten thousand verticles, and the number of event loop threads still wouldn't change.
So, two important takeaways until now:
- Vert.x is not single-threaded
- The maximum number of Event Loop threads depends on the number of CPU cores, not on the number of verticles deployed
You can see how the default number of threads is determined here:
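In the Vert.x 3 source, the default is derived from the CPU core count. Paraphrasing the relevant constant from VertxOptions (verify against the version you're using):

```java
// io.vertx.core.VertxOptions (Vert.x 3.x), paraphrased:
public static final int DEFAULT_EVENT_LOOP_POOL_SIZE =
        2 * CpuCoreSensor.availableProcessors();
```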
Now it's time to see what our verticle looks like, and why we pass a HashMap to it:
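A minimal sketch of such a verticle; the class and field names are my assumptions, since the original snippet is not shown:

```java
import io.vertx.core.AbstractVerticle;
import java.util.Map;

// Each instance records, on start, which event loop thread it was assigned to.
public class LoggingVerticle extends AbstractVerticle {
    private final Map<String, Integer> threadCounts;

    public LoggingVerticle(Map<String, Integer> threadCounts) {
        this.threadCounts = threadCounts;
    }

    @Override
    public void start() {
        // start() runs on this verticle's assigned event loop thread.
        threadCounts.merge(Thread.currentThread().getName(), 1, Integer::sum);
    }
}
```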
So, when each verticle starts, it logs which thread it has been assigned.
This code helps us understand how the threads are divided between verticles:
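A sketch of that reporting step, assuming the threadCounts map introduced earlier:

```java
// Print how many verticles landed on each event loop thread.
threadCounts.forEach((threadName, count) ->
        System.out.println(threadName + " -> " + count));
```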
As you can see, each new verticle gets a thread in a round-robin manner.
Looking at the results, you may wonder why we created 16 event loop threads, but verticles registered on only the first 8 of them.
The reason is that we’re deploying verticles very aggressively. In a regular application, you probably won’t be doing that.
So, let's slow things down a bit. We'll deploy the same thousand verticles, but this time one after the other:
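One way to sketch this is to chain the deployments, starting the next one only when the previous completes (the method and class names here are my assumptions):

```java
// Deploy verticles one by one: the next deployment starts only after
// the previous one has completed.
void deploySequentially(Vertx vertx, Map<String, Integer> threadCounts,
                        int remaining, Runnable whenDone) {
    if (remaining == 0) {
        whenDone.run();
        return;
    }
    vertx.deployVerticle(new LoggingVerticle(threadCounts),
            done -> deploySequentially(vertx, threadCounts, remaining - 1, whenDone));
}
```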
The result is that we're using fewer threads than before:
That's because the framework has enough time to react.
That's all good as far as verticles that run on the event loop are concerned.
But what about the second kind, worker verticles?
Unlike regular verticles, which should never block, worker verticles are used to execute long-running or blocking tasks.
Let's now deploy a thousand worker verticles in a similar manner, and see what happens:
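A sketch of the worker deployment, reusing the same latch approach as before; `setWorker(true)` on DeploymentOptions is the Vert.x 3 API for this, while the verticle class and variable names are my assumptions:

```java
// Deploy the same kind of verticle, but on the worker pool this time.
DeploymentOptions workerOptions = new DeploymentOptions().setWorker(true);
CountDownLatch latch = new CountDownLatch(1000);

for (int i = 0; i < 1000; i++) {
    vertx.deployVerticle(new LoggingVerticle(threadCounts), workerOptions,
            done -> latch.countDown());
}
latch.await();
System.out.println("After deploying 1000 worker verticles -> " + threadCount() + " threads");
```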
After deploying 1000 worker verticles -> 27 threads
Deploying one thousand worker verticles added another 20 threads.
That’s because worker verticles use a separate thread pool, which is of size 20 by default.
You can control the size of this pool by calling setWorkerPoolSize() on VertxOptions, then passing the options upon Vert.x initialization:
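For example (a sketch; the pool size of 10 is an arbitrary choice):

```java
// Cap the worker pool at 10 threads instead of the default 20.
Vertx vertx = Vertx.vertx(new VertxOptions().setWorkerPoolSize(10));
```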
Note that unlike regular verticles, worker verticles aren't distributed evenly across threads, since they serve a different purpose:
And by the way, you can also control the size of the event loop pool in a similar manner:
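Again a sketch, with an arbitrary pool size:

```java
// Override the default of 2 * CPU cores for event loop threads.
Vertx vertx = Vertx.vertx(new VertxOptions().setEventLoopPoolSize(4));
```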
Here are a few key points:
- Vert.x is a multithreaded framework
- It uses a controlled number of threads
- For event loop tasks, the thread pool size is twice the CPU core count by default
- For worker tasks, the thread pool size is 20 by default
- The sizes of both thread pools can be easily adjusted
In the second article, we’ll discuss misconceptions regarding Event Bus. And if you’re curious about other “under the hood” aspects of Vert.x, let me know in the comments.
Interested in the Multi-Reactor design pattern in particular, or in design patterns in general? Be sure to check out my “Hands-on Design Patterns with Kotlin” book.