Glancing into the Future: Java plans for 2025, Machine Learning on GC logs and agentic software - JVM Weekly vol. 116
2025 has begun, so it’s the perfect time to see what the Java team has in store for us.
On the official Java channel, Nicolai Parlog, Java Developer Advocate, published a summary of what to expect in 2025 regarding JDK projects. If you're curious about what's coming this year (excluding Project Amber, as Nicolai left that for another video) and you only have 10 minutes to spare for Java today, there's no better way to spend it than by watching this video.
However, my textual TLDW is below as well:
One of the most important is Project Babylon, which aims to expand Java's capabilities for foreign programming models such as SQL, differentiable programming, machine learning models, and GPU integration. A key component here is the so-called "code reflection," currently available in the project's repository and being prepared for incubation. After a dynamic 2024, Babylon will continue to develop in 2025 — the team is exploring, among other things, a Java equivalent of ONNX Script, with a prototype potentially being showcased at JavaOne in March 2025.
For those unfamiliar, ONNX stands for Open Neural Network Exchange — an open format for storing machine learning models, enabling their transfer between various frameworks. So I can't wait 🍿.
Project Loom completed work on virtual threads in 2024, including their interaction with object monitors, removing a major hurdle to adoption. The results will be visible as early as March with the release of JDK 24. In 2025, the project will focus on further development (and hopefully finalization 🤞) of the Structured Concurrency API, which simplifies concurrent programming by treating related tasks as a single work unit, as well as the Scoped Values API, a modern alternative to ThreadLocal. I hope to see both stabilized in JDK 25.
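To make "treating related tasks as a single work unit" concrete, here is a rough approximation of the fan-out shape Structured Concurrency targets, built on today's plain ExecutorService. All names are illustrative; the real StructuredTaskScope API (still in preview) replaces this bookkeeping with a scoped, cancel-on-failure block:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FanOut {
    // Fetch two related pieces of data concurrently and join them: the
    // pattern that Structured Concurrency turns into a first-class unit.
    static String fetchOrder() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<String> user = pool.submit(() -> "alice");   // stand-in for a user-service call
            Future<String> product = pool.submit(() -> "book"); // stand-in for a catalog call
            return user.get() + " ordered " + product.get();    // the parent waits for both subtasks
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException("subtask failed", e);    // the unit fails as a whole
        } finally {
            pool.shutdown();                                    // no subtask outlives the method
        }
    }

    public static void main(String[] args) {
        System.out.println(fetchOrder());
    }
}
```

With StructuredTaskScope, the try/catch/finally ceremony collapses into one scoped block, and a failing subtask automatically cancels its siblings.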
Project Leyden, initiated in May 2022, aims to enhance Java's startup time, time to peak performance, and footprint by shifting certain computations from runtime to earlier phases. In 2024, the project introduced its first early-access build, focusing on ahead-of-time (AOT) class loading and linking, which allows classes to be read, parsed, loaded, and linked before runtime, thereby improving startup performance. Looking ahead, the Leyden team is developing features such as AOT method profiling and code compilation, with early drafts already in progress. These enhancements aim to further optimize Java applications by enabling the JVM to utilize pre-recorded profiles and compiled code, reducing the need for just-in-time compilation during execution.
Project Lilliput focuses on memory optimization, reducing object headers from 12 or 16 bytes to 8 bytes, potentially lowering heap usage by 10–20%. JDK 24 introduces experimental support, and the goal for next year will be evaluation and performance testing.
Project Panama will evolve in three main areas: the Vector API, the Foreign Function and Memory (FFM) API, and general performance improvements. Updates planned for JDK 25 include method profiling and ahead-of-time code compilation.
Project Valhalla continues its work on value types and is exploring new mechanisms like null-restricted types. Additionally, preliminary research is underway to improve numeric computations based on value classes. This project is closely tied to Panama initiatives, particularly regarding stable values, which are still under development (and Nicolai promised to describe them in the Amber episode). All these efforts aim to make Java even more adaptable to modern programming environments.
In the video description you will find a ton of mailing-list links adding context to all of these projects, so please check them out! And most importantly, Nicolai mentioned that the best way to contribute is by providing feedback. Your feedback matters, so don't be a stranger.
If you feel like digesting more today, that craving can be satisfied by another publication on the official Java channel: A Deep Dive into JVM Start Up by Billy Korando. As the title suggests, it takes us through the JVM startup process.
Here’s a quick TLDW: starting the JVM is far more complex than it might initially seem. It begins with the JNI function JNI_CreateJavaVM, which parses arguments and identifies the main class before initializing the JVM environment. This includes checking system resources like CPU count and memory, which can influence the choice of garbage collection algorithm. The JVM also sets up HotSpot-specific performance data (hsperfdata) and initializes the metaspace, the native memory region for storing class metadata. Class loading is dynamic, enabling on-demand code generation, a feature crucial for frameworks and tools. All this happens before the application executes its first line of code.
Once a class is loaded, it undergoes a linking process consisting of verification, preparation, and resolution. Verification ensures the structural correctness of the class, preparation sets default values for static fields, and resolution replaces symbolic references in the constant pool with actual memory addresses. These steps aren’t strictly sequential; they can happen at different points between loading and initialization. Interestingly, this entire complex process — class loading, linking, and initialization — takes about 62 milliseconds for a simple "Hello World" program.
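The laziness described above is easy to observe: loading and linking a class does not run its static initializer; that only happens on first active use. A minimal, self-contained illustration (the class and field names are mine):

```java
public class InitDemo {
    static boolean initialized = false;

    static class Lazy {
        static { initialized = true; }       // part of <clinit>, runs only on first active use
        static final int VALUE = compute();  // not a compile-time constant, so also set in <clinit>
        static int compute() { return 42; }
    }

    public static void main(String[] args) {
        // The Lazy class is loaded and linked by now, but not yet initialized.
        System.out.println("Before use, initialized = " + initialized);
        int v = Lazy.VALUE;                  // first active use triggers initialization
        System.out.println("After use, initialized = " + initialized);
        System.out.println("VALUE = " + v);
    }
}
```

Had VALUE been a compile-time constant (e.g. `static final int VALUE = 42;`), the compiler would have inlined it at the use site and no initialization would be triggered at all, which is exactly the kind of subtlety the video digs into.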
It is yet another highly recommended video. While understanding these processes isn’t typically required for everyday programming, it can be useful when optimizing performance, and "knowing" stuff is simply fun. Upcoming JVM features like ahead-of-time (AOT) compilation aim to speed up JVM startup. Project Leyden addresses this area significantly, and JDK 24, as previously mentioned, is set to ship JEP 483, which focuses on ahead-of-time class loading and linking.
But Inside.java isn’t just about videos. Recently, their podcast has been revived. At the end of last year, they released a short episode about JDK 24, and this week a full-length episode dropped, in which Ana-Maria Mihalceanu chats with Jonathan Gibbons about JavaDoc and the evolution it underwent in 2024.
But amidst this influx of materials for "multimedia content fans", let’s not forget about "written word enjoyers". Recently, an article titled JVM Tuning with Machine Learning on Garbage Collection Logs was published on inside.java (which we’ve been orbiting around all day). It presents the results of Yağmur Eren’s master’s thesis, which explored the potential for automatic JVM tuning using machine learning. The research focused on leveraging Garbage Collection logs to train ML models that could predict optimal JVM flag settings, particularly those related to the young generation memory size (e.g., -XX:G1MaxNewSizePercent and -XX:G1NewSizePercent).
A major aspect of the study was processing GC logs, extracting key features, and training five different ML models to predict the optimal configurations. The results demonstrated a significant performance boost (up to 20%) while maintaining acceptable latency levels, suggesting that automating JVM tuning with ML has real potential.
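To make that pipeline concrete, here is a minimal sketch of its first stage: pulling one feature (the average young-collection pause time) out of unified GC log lines, as produced by -Xlog:gc with G1. The feature choice, class name, and aggregation are my own illustration, not taken from the thesis:

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogFeatures {
    // Matches the pause duration at the end of a G1 young-collection line, e.g.
    // "[0.123s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 24M->4M(256M) 1.500ms"
    private static final Pattern PAUSE =
            Pattern.compile("Pause Young.*\\s(\\d+(?:\\.\\d+)?)ms");

    // One candidate ML feature: the mean young-GC pause across a log.
    static double averagePauseMs(List<String> logLines) {
        return logLines.stream()
                .map(PAUSE::matcher)
                .filter(Matcher::find)                       // keep only young-pause lines
                .mapToDouble(m -> Double.parseDouble(m.group(1)))
                .average()
                .orElse(0.0);                                // no pauses logged yet
    }
}
```

In a setup like the one studied, features of this kind would be paired with the flag values used for each run (e.g. -XX:G1NewSizePercent) to form the training examples the models learn from.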
Interestingly, this concept isn’t entirely new. A friend of mine from Krakow worked nearly a decade ago at a startup called Skipjaq - sadly now defunct - which dealt with a similar idea.
And congratulations to Yağmur - how cool must it feel to have your master’s thesis featured on Java’s main news platform! She has since been hired by Oracle, so I can't wait for JDK 30+ to see the results of her research implemented - because I'm sure we will see them in one way or another.
And since we're talking about Oracle's investments in the context of ML, I can't overlook Project Stargate. Announced this week by US President Donald Trump, OpenAI, and SoftBank, it is an initiative aiming to invest $500 billion over the next four years to build new AI infrastructure in the United States. Oracle is one of the main investors and technology partners in this project, alongside companies such as SoftBank, OpenAI, MGX, Arm, Microsoft, and NVIDIA, and will closely collaborate with NVIDIA and OpenAI on the construction and operation of this computational system.
This initiative aims to secure the United States' position in the field of artificial intelligence, create hundreds of thousands of jobs, and generate economic benefits worldwide. Stargate also seeks to support the reindustrialization of the U.S., protect national security, and pave the way for AGI (Artificial General Intelligence). It’s intriguing to see if this will give a boost to the already dynamically developing Java on GPUs.
And since we’ve touched on ML, let’s step off inside.java for a moment and jump to quarkus.io. The Model Context Protocol (MCP) is a new standard enabling AI models to safely utilize external tools and resources. It’s been mentioned in previous editions of JVM Weekly, mostly in the context of client applications. The situation is shifting now, as Max Rydahl Andersen, in his article Implementing a MCP server in Quarkus, demonstrates how to build an MCP server using Quarkus. Using an example server that provides weather forecasts and alerts for locations in the U.S., he shows practical applications of the MCP standard.
What's especially nice is that this example is based on the official MCP "quickstart" guide, allowing for comparison of implementations across different languages.
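For intuition, the server side of MCP boils down to exposing named tools and dispatching incoming tool calls to them; the Quarkus extension described in the article handles the actual JSON-RPC transport and registration for you. A deliberately framework-free sketch (all names here are mine, not the Quarkus or MCP API):

```java
import java.util.Map;
import java.util.function.Function;

public class ToolRegistry {
    // tool name -> handler; a real MCP tool also declares a JSON schema
    // for its arguments, omitted here for brevity.
    private final Map<String, Function<String, String>> tools;

    ToolRegistry(Map<String, Function<String, String>> tools) {
        this.tools = tools;
    }

    // Roughly what happens when a "tools/call" request arrives:
    // look the tool up by name and hand it the argument.
    String call(String toolName, String argument) {
        Function<String, String> tool = tools.get(toolName);
        if (tool == null) {
            throw new IllegalArgumentException("Unknown tool: " + toolName);
        }
        return tool.apply(argument);
    }
}
```

The value of the standard is precisely that any MCP-aware model client can discover and invoke such tools without knowing anything about the server's implementation language.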
And since we’ve mentioned Quarkus and its attempts to carve out a piece of the AI pie, let’s also look at the competition: Spring. Spring has its own MCP support, but this time they’ve focused on a different topic: Agent Systems. Interestingly, just like MCP, this area draws inspiration from Anthropic’s article Building Effective Agents from December last year.
In his article Building Effective Agents with Spring AI, Christian Tzolov, Spring AI Lead, discusses building effective agents based on large language models using Spring AI. He presents five design patterns: Chain Workflow, Parallelization Workflow, Routing Workflow, Orchestrator-Workers, and Evaluator-Optimizer, explaining their implementation in Spring AI. The article also provides some recommendations for building reliable LLM-based systems and teases a Part 2 focusing on more advanced agent features — I’m definitely looking forward to it...
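Of the five patterns, the Chain Workflow is the simplest to picture: each step's output becomes the next step's input. A stubbed, framework-free sketch (in Spring AI each step would wrap a real model call; the class name is mine):

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class ChainWorkflow {
    private final List<UnaryOperator<String>> steps;

    ChainWorkflow(List<UnaryOperator<String>> steps) {
        this.steps = steps;
    }

    // Run the chain: each step receives the previous step's output.
    String run(String input) {
        String current = input;
        for (UnaryOperator<String> step : steps) {
            current = step.apply(current);
        }
        return current;
    }
}
```

In an LLM setting, each step would be a focused prompt (extract, then summarize, then format), which is exactly why the pattern improves reliability: every model call gets one small, checkable job.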
But that’s not all on this topic. If you enjoyed the Spring publication, you’ll love a fantastic initiative on GitHub. Carlos Zela Bueno, the Peru JUG Leader, has prepared a repository, langchain4j-workflow-examples, featuring advanced AI algorithm implementations in Java based on academic papers. The repository focuses on agent architectures and RAG (Retrieval-Augmented Generation) techniques, such as Mixture-of-Agents (MoA) and Corrective RAG (CRAG), using the LangChain4j library and his own jai-workflow. Currently there are only two examples (MoA and CRAG), but the README.md suggests that more are planned, including Adaptive RAG, Self RAG, and Modular RAG. Although the repository has been inactive for a year, the jai-workflow project continues to develop, and I hope Carlos will revisit it. Amazing work — I’m eagerly anticipating more!
It’s fascinating to see agent systems evolving with concrete design patterns emerging in the industry.
To wrap up, since we’ve had so much video content today, let’s end with an even longer playlist. The newsletter, in its latest edition, presented a list of the most popular Java presentations from QCon in 2024. The biggest hit was Paul Bakker’s How Netflix Really Uses Java, with over 100,000 views; no doubt the power of the Netflix brand played a role. Other highlights included Optimizing Java Applications on Kubernetes: Beyond the Basics by Bruno Borges, 1BRC - Nerd Sniping the Java Community by Gunnar Morling, Virtual Threads for Lightweight Concurrency and Other JVM Enhancements by Ron Pressler, and Optimizing JVM for the Cloud: Strategies for Success by Tobi Ajila, which offers tips for working in cloud environments.
PS1: I counted that in total, today I’ve provided you with about 7 hours of content to be added to your saved-for-later list.
PS2: If you STILL need more content, starting today JChampions 2025 is happening — a free conference where Java Champions will deliver sessions on topics of interest to all Java developers. You will find talks from Ken Kousen, Bruno Borges, Josh Long, Holly Cummins, Ken Fogel, Ian Darwin, Frank Delporte, Eric Deandrea, Mohamed Taman, Simon Martinelli, A N M Bazlur Rahman, Bruno Souza, Kirk Pepperdine, Syed M Shaaf, and a lot of other talented speakers... The full list is available here! The event runs from January 23 to 28, 2025, and all sessions will be available on YouTube, so they can be watched even after the conference ends. Sessions from the past four years are also available on the JChampions Conference YouTube channel.
If you want to ask questions during the talks, you can find the registration link here.
PS3: Are you a Staff Engineer or aspiring to become a Tech Lead, Team Lead, or Architect?
JVM Weekly is a media partner for L8 Conference, a conference for Staff+ Engineers, happening in Warsaw on March 17-18, 2025.
The agenda is packed with sessions on Leadership in Engineering, Mentoring, Organizational Culture, and Strategy for Tech Teams.
🚀 Use the code JVM_WEEKLY for a 15% discount.
Better yet, join the conference newsletter to get a 30% discount!
See you in Warsaw!