I spent three hours yesterday tracing a dropped database connection in a reactive pipeline. The stack trace was useless, pointing to a thread that had died long before the actual error occurred. It's the kind of bug that makes you question your career choices.
Then I saw the notifications about the new Hibernate Reactive release candidate and the latest Quarkus point releases. I pulled them down immediately. I needed a win.
Non-blocking database access in Java used to be a massive headache. Five years ago, you either used raw R2DBC and wrote SQL by hand, or you suffered through terrible performance. The new Hibernate Reactive RC cleans up a lot of the API surface. I spun this up on my M3 Max running Sonoma 14.4 to see if it actually fixed the memory leak issues I’d been fighting in my staging environment.
The ORM Finally Gets Out of the Way
My biggest gripe with reactive ORMs has always been the boilerplate. You want to fetch a record, update a field, and save it. In a blocking world, that’s three lines of code. In reactive, it used to look like a callback nightmare.
The latest updates to the Mutiny integration make this much cleaner. Here is a practical example of how I’m structuring my data layer now.
import io.smallrye.mutiny.Uni;
import org.hibernate.reactive.mutiny.Mutiny;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class UserRepository {

    @Inject
    Mutiny.SessionFactory sessionFactory;

    public Uni<User> updateLastLogin(Long userId, String ipAddress) {
        return sessionFactory.withTransaction((session, tx) ->
            session.find(User.class, userId)
                .onItem().ifNotNull().invoke(user -> {
                    // No explicit persist: dirty checking flushes these changes at commit
                    user.setLastLoginIp(ipAddress);
                    user.setLoginCount(user.getLoginCount() + 1);
                })
        );
    }
}
Notice how we aren’t explicitly calling a save method. The transaction management handles the dirty checking automatically, just like classic Hibernate. This worked perfectly on my local Postgres 16.1 instance.
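For reference, the User entity that repository assumes might look something like this. The field names and table mapping are my own guesses to make the example self-contained, not anything prescribed by the release:

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue
    private Long id;

    private String lastLoginIp;
    private int loginCount;

    public Long getId() { return id; }

    public String getLastLoginIp() { return lastLoginIp; }
    public void setLastLoginIp(String lastLoginIp) { this.lastLoginIp = lastLoginIp; }

    public int getLoginCount() { return loginCount; }
    public void setLoginCount(int loginCount) { this.loginCount = loginCount; }
}
```

Nothing fancy: as long as the entity is attached to the session, mutating it inside the transaction is enough for the update to hit the database.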
I ran a load test against this specific endpoint using wrk. The memory usage dropped from 380MB on my old blocking setup to about 115MB under the exact same load. That is real money saved on cloud hosting if you are running dozens of microservices.
Handling Streams Without Losing Your Mind
Getting data out of the database is only half the battle. You still have to process it. The new Spring Cloud and Quarkus releases both heavily push you toward structured concurrency, but they still fully support reactive streams for high-throughput scenarios.
I built a quick interface and service to test how the new Mutiny stream operators handle backpressure.
import io.smallrye.mutiny.Multi;

public interface OrderProcessor {
    Multi<OrderResult> processPendingOrders(String regionCode);
}
Implementing this requires careful thought about concurrency limits. If you just flat-map everything, you will overwhelm your downstream services.
import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.Uni;

import jakarta.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class ReactiveOrderService implements OrderProcessor {

    private final OrderRepository repository;
    private final PaymentClient paymentClient;

    public ReactiveOrderService(OrderRepository repository, PaymentClient paymentClient) {
        this.repository = repository;
        this.paymentClient = paymentClient;
    }

    @Override
    public Multi<OrderResult> processPendingOrders(String regionCode) {
        return repository.findPendingByRegion(regionCode)
            .onItem().transformToMulti(orders -> Multi.createFrom().iterable(orders))
            // merge(5) caps how many charge operations run concurrently
            .onItem().transformToUni(this::validateAndCharge).merge(5)
            .filter(OrderResult::isSuccessful);
    }

    private Uni<OrderResult> validateAndCharge(Order order) {
        return paymentClient.charge(order.getAmount(), order.getPaymentId())
            .onItem().transform(receipt -> new OrderResult(order.getId(), true, receipt))
            .onFailure().recoverWithItem(new OrderResult(order.getId(), false, null));
    }
}

The merge(5) call is the lifesaver here. It strictly limits the number of concurrent external API calls.
The Connection Pool Gotcha
Here is the thing nobody tells you about mixing reactive frameworks and ORMs right now. If you aren’t careful with your connection pool sizing, you will exhaust your database connections silently.
And the framework won’t crash. It just hangs.
I learned this the hard way last Tuesday. I had my reactive application spinning up thousands of concurrent non-blocking requests, but my underlying Agroal connection pool was still capped at the default 20 connections. The reactive threads were just waiting indefinitely. Always explicitly set your quarkus.datasource.reactive.max-size in your application properties to match your expected concurrent database load.
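In Quarkus terms that means setting the property explicitly in application.properties. The value of 50 below is just an illustration; size it to your own measured concurrency, not mine:

```properties
# Match the reactive pool size to the expected concurrent database load.
# The default is far too small for a service fanning out thousands of requests.
quarkus.datasource.reactive.max-size=50
```

It is an easy thing to forget precisely because nothing fails loudly when you get it wrong.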
Virtual Threads vs Reactive
I know what you are thinking. Why bother with reactive at all when Java 21 gave us Virtual Threads?
It is a valid question. I use virtual threads for 80% of my new services. The code is easier to read and debug. Yet for that remaining 20% where I have thousands of concurrent persistent WebSocket connections streaming market data? Reactive still wins on pure hardware efficiency.
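For contrast, here is the shape of the blocking style those 80% of services use: plain executor code on Java 21, one cheap virtual thread per task. The fetchUser call is a stand-in I made up for a blocking JDBC or HTTP fetch, not a real API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualThreadDemo {

    // Stand-in for a blocking call (JDBC query, HTTP request, etc.)
    public static String fetchUser(long id) {
        try {
            Thread.sleep(5);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "user-" + id;
    }

    public static void main(String[] args) throws Exception {
        // One virtual thread per task; the blocking code stays flat and readable
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<String>> futures = new ArrayList<>();
            for (long id = 1; id <= 1_000; id++) {
                long userId = id;
                futures.add(executor.submit(() -> fetchUser(userId)));
            }
            for (Future<String> f : futures) {
                f.get(); // wait for all tasks to finish
            }
            System.out.println("done: " + futures.size()); // prints "done: 1000"
        }
    }
}
```

No operators, no backpressure vocabulary, and an honest stack trace when something breaks. That readability is exactly why it is my default until the connection counts get extreme.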
The context switching overhead, even with virtual threads, adds up at extreme scale. I benchmarked a virtual thread implementation against the Mutiny implementation above. The virtual thread version started rejecting connections at around 12,000 concurrent users on a 2GB container. The reactive version held steady up to 28,000 before the CPU finally pegged at 100%.
You don’t always need reactive. But when you do, it’s nice to see the tooling finally catching up and making the experience less painful. And the new Gradle RC also dropped with better build caching for these exact frameworks, which cut my local compilation time from 42 seconds to just under 14. I’ll take that trade any day.
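If you want to chase the same build-time gains, the switches live in gradle.properties. These are standard Gradle flags rather than anything specific to the new RC, and your mileage will vary with project layout:

```properties
# Reuse task outputs across builds
org.gradle.caching=true
# Cache the configuration phase as well
org.gradle.configuration-cache=true
```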
