Project Loom: virtual threads in Java

Alexey Soshin
4 min read · May 17, 2020


A few days ago Ron Pressler published an article called “State of Loom”, which was endorsed by all the greats of the JVM community, for all the right reasons.

While an interesting read in itself (I personally liked the “taxi” metaphor a lot), I was skeptical about how useful this solution could be. Ron challenged me to try it for myself.


What we’ll need:

  • the Java 15 Early-Access build that contains Project Loom
  • our favorite command-line tool, since IntelliJ doesn’t know anything about Java 15 at the moment

To begin with, I decided to implement the basic challenge posed by Kotlin coroutines: starting one million threads that simply increment an atomic counter.

While the original challenge was written in Kotlin, it’s easy to rewrite it using Java syntax:

import java.util.concurrent.atomic.AtomicLong;

public class Main {
    public static void main(String[] args) {
        var c = new AtomicLong();
        for (var i = 0; i < 1_000_000; i++) {
            new Thread(() -> {
                c.incrementAndGet();
            }).start();
        }

        System.out.println(c.get());
    }
}

Compile and run it:

javac Main.java && java Main
... Never completes

You can see that with regular platform threads, it simply fails to complete.

Let’s now rewrite it using “virtual threads” provided by Project Loom:

for (var i = 0; i < 1_000_000; i++) {
    Thread.startVirtualThread(() -> {
        c.incrementAndGet();
    });
}

This outputs correctly:

1000000

But I’m still not impressed.

In the article Ron correctly points out that Kotlin had to introduce the delay() function to suspend a coroutine, because using Thread.sleep() would instead put one of the coroutine-scheduling threads to sleep, effectively reducing concurrency. How do virtual threads handle this case?
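The snippet for this experiment didn’t survive here, so what follows is my reconstruction (the 10 ms sleep duration is a guess): each virtual thread sleeps before incrementing, and main prints the counter without waiting for them.

```java
import java.util.concurrent.atomic.AtomicLong;

public class Main {
    public static void main(String[] args) {
        var c = new AtomicLong();
        for (var i = 0; i < 1_000_000; i++) {
            Thread.startVirtualThread(() -> {
                try {
                    // Suspends only this virtual thread, not its carrier thread
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                c.incrementAndGet();
            });
        }
        // main doesn't join the virtual threads, so this prints a partial count
        System.out.println(c.get());
    }
}
```

Since virtual threads are daemons and main never joins them, the JVM exits right after the print, so the counter captures only the threads that finished sleeping in time.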

Outputs:

Something around 400K on my machine

That’s when things start to get interesting! Thread.sleep() is actually aware of being invoked inside a virtual thread, so it suspends the continuation, not the executor.

That’s already awesome. But let’s dig a bit deeper with the following exercise.
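(The exercise code is missing from this export; below is my reconstruction from the description and the output: each task repeatedly generates random UUIDs and keeps the lexicographically greatest one — slow tasks over 1,000,000 draws, fast tasks over just one.)

```java
import java.util.ArrayList;
import java.util.UUID;

public class Uuids {
    // Generate `iterations` random UUIDs and report the lexicographically greatest
    static void best(String label, int iterations) {
        var best = "";
        for (var i = 0; i < iterations; i++) {
            var uuid = UUID.randomUUID().toString();
            if (uuid.compareTo(best) > 0) {
                best = uuid;
            }
        }
        System.out.println("Best " + label + " UUID is " + best);
    }

    public static void main(String[] args) throws InterruptedException {
        var threads = new ArrayList<Thread>();
        for (var i = 0; i < 10; i++) {
            threads.add(Thread.startVirtualThread(() -> best("slow", 1_000_000)));
        }
        for (var i = 0; i < 10; i++) {
            threads.add(Thread.startVirtualThread(() -> best("fast", 1)));
        }
        for (var t : threads) {
            t.join();
        }
    }
}
```

Note how the “slow” results all start with ffff — the maximum over a million random UUIDs — while the “fast” ones are just single random draws.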

So, we’re launching 10 slow tasks and 10 fast tasks. The intuition would be that the fast tasks complete before the slow ones; after all, they’re a million times faster.

But that’s not the case:

Best slow UUID is fffffde4-8c70-4ce6-97af-6a1779c206e1
Best slow UUID is ffffe33b-f884-4206-8e00-75bd78f6d3bd
Best slow UUID is fffffeb8-e972-4d2e-a1f8-6ff8aa640b70
Best fast UUID is e13a226a-d335-4d4d-81f5-55ddde69e554
Best fast UUID is ec99ed73-23b8-4ab7-b2ff-7942442a13a9
Best fast UUID is c0cbc46d-4a50-433c-95e7-84876a338212
Best fast UUID is c7672507-351f-4968-8cd2-2f74c754485c
Best fast UUID is d5ae642c-51ce-4b47-95db-abb6965d21c2
Best fast UUID is f2f942e3-f475-42b9-8f38-93d89f978578
Best fast UUID is 469691ee-da9c-4886-b26e-dd009c8753b8
Best fast UUID is 0ceb9554-a7e1-4e37-b477-064c1362c76e
Best fast UUID is 1924119e-1b30-4be9-8093-d5302b0eec5f
Best fast UUID is 94fe1afc-60aa-43ce-a294-f70f3011a424
Best slow UUID is fffffc24-28c5-49ac-8e30-091f1f9b2caf
Best slow UUID is fffff303-8ec1-4767-8643-44051b8276ca
Best slow UUID is ffffefcb-614f-48e0-827d-5e7d4dea1467
Best slow UUID is fffffed1-4348-456c-bc1d-b83e37d953df
Best slow UUID is fffff6d6-6250-4dfd-8d8d-10425640cc5a
Best slow UUID is ffffef57-c3c3-46f5-8ac0-6fad83f9d4d6
Best slow UUID is fffff79f-63a6-4cfa-9381-ee8959a8323d

The intuition holds only as long as the number of concurrent tasks is <= ACTUAL_CPU_CORES.

That’s not bad, and somewhat expected: currently Project Loom uses a ForkJoinPool to schedule virtual threads.

Although virtual threads are preemptive, as stated in the documentation, scheduling them is currently cooperative, like coroutines. You don’t see the interleaving attributed to OS-thread preemptiveness, at the moment. The article does mention, though, that forced preemption is being considered, which would be an interesting option.

One obvious solution is to use Thread.yield(), and it works just fine. Merely invoking a function, though, doesn’t act as a suspension point. And there’s no suspend keyword, like in Kotlin, to help with that.
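As a sketch (the loop body here is a stand-in, not the article’s code), an explicit Thread.yield() inside a CPU-bound loop gives waiting virtual threads a chance to run:

```java
import java.util.ArrayList;

public class YieldDemo {
    public static void main(String[] args) throws InterruptedException {
        var threads = new ArrayList<Thread>();
        for (var t = 0; t < 10; t++) {
            var id = t;
            threads.add(Thread.startVirtualThread(() -> {
                for (var i = 0; i < 100_000; i++) {
                    Math.sqrt(i);   // CPU-bound busy work, no blocking calls
                    Thread.yield(); // explicit scheduling point for the carrier
                }
                System.out.println("task " + id + " done");
            }));
        }
        for (var thread : threads) {
            thread.join();
        }
    }
}
```

Without the yield, each of these tasks would hold on to its carrier thread until it finished, exactly as in the slow/fast experiment above.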

Other than that, any use of standard Java IO will still yield, as described in the “All Your Blocking Are Belong to Us” section (did I already mention that I love the well-placed puns in the article?).

For example, a not very meaningful System.out.print(" ") somewhere in my function resulted in a context switch. And in any real-world application, IO is so common that this shouldn’t be a real issue anyway.

Conclusions

I must admit that Project Loom is seriously awesome. It seems very viable for end users, effectively providing lightweight concurrency everywhere, without the need for a library or a framework.

But the greatest benefit, I expect, will be for library developers, who, once Java 15 achieves wide enough adoption, can put concurrency concerns aside and focus on providing other benefits for their users.

That’s what we expect from the platform anyway, right? We’ve already handed it garbage collection and code optimization, and now, as the next step, concurrency too.



Written by Alexey Soshin

Solutions Architect @Depop, author of “Kotlin Design Patterns and Best Practices” book and “Pragmatic System Design” course
