Introduction
In the world of Java development, choosing the right framework can significantly impact the performance and productivity of your applications. With this in mind, Graeme Rocher (the creator of Micronaut) set out to eliminate misconceptions and provide a clear picture of how three major Java frameworks — Micronaut, Quarkus, and Spring Boot — stack up against each other when running on JDK 14.
During the screencast, Rocher recorded stats for several metrics (winner highlighted in red), taking averages over 10 runs; the full results table is available in the results PDF linked in the references below.
Takeaways
- Performance Benchmarks: The study aimed to provide a fair comparison of the performance of Micronaut, Quarkus, and Spring Boot, with a particular focus on time-to-first-response, requests per second, and memory efficiency.
- Transparency in Testing: Rocher made the testing process transparent by providing an unedited screencast of the benchmarks and making the source code available on GitHub, allowing others to replicate the tests.
- Results Varied by Tool: The performance results varied depending on the benchmarking tool used. Quarkus led in time-to-first-response and performed better at higher concurrency levels with the wrk tool, while Micronaut showed better results with the Vegeta tool.
- Spring Boot Data Challenges: Rocher encountered difficulties in obtaining reliable requests-per-second data for Spring Boot due to issues with its Netty implementation and keep-alive connections.
- Misinformation Correction: Part of the motivation for conducting the tests was to address and correct misleading information that had been circulating in the Java community.
- Encouragement for Independent Testing: Despite the findings, Rocher emphasizes the importance of developers running their own benchmarks to determine which framework best meets their specific needs and preferences (a minimal example of how one might do this is sketched after this list).
- Acknowledgment of Bias: The analysis openly acknowledges potential bias, as Object Computing, where Rocher is employed, is the home of Micronaut. However, efforts were made to mitigate this bias through the transparent and replicable testing approach.
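For readers who want to try a similar measurement themselves, the sketch below shows one possible way to time a framework's time-to-first-response: launch the packaged application as a child process, then poll an HTTP endpoint until the first successful reply arrives. The jar path and the /hello endpoint are assumptions chosen for illustration, not details of Rocher's harness, which is available in the GitHub repository linked in the references.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Rough sketch of a time-to-first-response measurement.
// Assumptions (not taken from Rocher's harness): the app is packaged as
// app.jar and exposes GET /hello on localhost:8080.
public class TimeToFirstResponse {

    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();

        // Start the framework application as a child process.
        Process app = new ProcessBuilder("java", "-jar", "app.jar")
                .inheritIO()
                .start();

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/hello"))
                .timeout(Duration.ofMillis(500))
                .GET()
                .build();

        // Poll until the first successful response arrives.
        while (true) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    break;
                }
            } catch (Exception e) {
                // Server not up yet; back off briefly and retry.
                Thread.sleep(20);
            }
        }

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Time to first response: " + elapsedMs + " ms");

        app.destroy();
    }
}
```

Averaging several runs, as Rocher did with 10, helps smooth out JIT warm-up and OS scheduling noise in measurements like this.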
The overarching message from the study is that while benchmarks can guide developers, there is no substitute for personal verification and testing to choose the most suitable framework for a given project.
Remember: when choosing a framework, consider specific project needs, team expertise, and compatibility with existing systems, especially for microservices and IoT workloads.
References
- https://github.com/graemerocher/framework-comparison-2020/blob/master/results.pdf
- https://www.youtube.com/watch?v=rJFgdFIs_k8