Beyond JDK 23: Peeking at what the future holds for Java – JVM Weekly vol. 85
Today we return to the standard edition, so as you might guess... it won't be short.
1. Beyond JDK 23 - What the Future Holds for Java
Java 1.0 was introduced to developers at the SunWorld '95 conference, which means the language celebrated its 29th anniversary last Thursday. I'm curious to see how Oracle will celebrate Java's 30th birthday and whether there will be any surprises for us.
Of course, I mean surprises other than new language features, as those following mailing lists and JEP trackers won't find them surprising. And we will explore this slightly more distant future of the language today.
So we will start with Java Language Update – a look at where the language is going, presented by Brian Goetz at Devoxx Greece 2024. This is another iteration of a presentation that Brian refreshes annually with the latest roadmap, showing features such as pattern matching and records in a broader context. For example, on the one hand he presents records as a language construct for modeling data, allowing the declaration of numerous small classes that model "nominal data portions" with names and types. On the other hand, sealed classes and pattern matching are closely related features, enabling the modeling of the alternatives within those types and making the code more concise and less error-prone. There are far more examples like that in the talk.
Overall, it's a very cool presentation, especially if we are trying to build a mental model of how various features relate to each other.
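To make the combination concrete, here is a minimal sketch in the spirit of the talk (the Shape hierarchy and method names are my own illustration, not taken from the presentation): records declare the data, a sealed interface closes the set of alternatives, and pattern matching consumes them exhaustively.

```java
public class ShapeDemo {
    // A closed set of alternatives: only Circle and Rect may implement Shape.
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double width, double height) implements Shape {}

    // Because Shape is sealed, the compiler verifies this switch is
    // exhaustive – no default branch is needed, and adding a new Shape
    // later becomes a compile error here instead of a runtime bug.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle(double r) -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(2, 3)));
    }
}
```

Requires JDK 21+ for record deconstruction patterns in switch.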
And now it's time for a taste of what's coming in a slightly more distant future. Next week (spoiler) we will talk about the final list of JEPs for JDK 23, as the Rampdown phase starts next Thursday. However, for now let’s discuss some other interesting plans – proposals whose timelines go beyond the fall release of the new JDK. I must admit, there are a few exciting features among them.
JEP 478: Key Derivation Function API (Preview)
JEP 478 introduces a new API for Key Derivation Functions (KDFs), expanding Java's capabilities in managing cryptographic keys by proposing an API to derive additional keys from a secret key and additional data. This feature is aimed at improving support for modern cryptographic algorithms, such as HMAC-based Extract-and-Expand (HKDF) and Argon2, which are crucial for applications using Key Encapsulation Mechanisms (KEM) and Hybrid Public Key Encryption (HPKE). It will also help implement higher-level protocols such as TLS 1.3 and lay the groundwork for post-quantum cryptography.
An example of using the javax.crypto.KDF API to initialize and use HKDF looks as follows:
KDF hkdf = KDF.getInstance("HKDFWithHmacSHA256");
KDFParameterSpec params = HKDFParameterSpec.ofExtract()
        .addIKM(initialKeyMaterial)
        .addSalt(salt)
        .thenExpand(info, 42);
SecretKey key = hkdf.deriveKey("AES", params);
JEP 479: Remove the Windows 32-bit x86 Port
The removal of support for Windows 32-bit was already proposed at the JDK 21 level, and JEP 479 will eventually eliminate any traces of this distribution from both the codebase and the JDK build infrastructure. The goal of this JEP is to remove all code fragments used exclusively by the Windows 32-bit x86 port, as well as to end all related testing and development activities, simplifying the JDK's build and testing infrastructure. This is a classic case of eliminating technical debt – on this port, the current implementation of, for example, Virtual Threads is essentially one big hack. I suspect the removal can be expected in JDK 25 or 26, coinciding with the end of the platform's own lifecycle – Windows 10, the last Windows OS with 32-bit versions, will reach end-of-support in October 2025.
JEP 472: Prepare to Restrict the Use of JNI
The removals do not end there. JEP 472 announces (as expected) restrictions on the use of the Java Native Interface (JNI), initially through appropriate warnings to the user. This step aims to prepare developers for future Java versions that will restrict interaction with native code unless explicitly enabled by application developers. This will increase platform integrity by requiring developers to explicitly approve the usage of JDK features (JNI in this case) that may compromise it.
This proposal does not intend to withdraw JNI or limit the behavior of native code invoked through it (at least for now), but to start preparing the Java ecosystem for future changes. However, I suspect that many workloads built on JNI will never switch to the Foreign Function & Memory API (FFM API), so it will be very hard to drop the support entirely.
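For context, the migration target the JDK nudges everyone toward is the FFM API, finalized in JDK 22. A minimal sketch of calling the C standard library's strlen through it, with no hand-written JNI glue (assumes JDK 22+ and a platform libc):

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class StrlenFfm {
    // Calls the C library's strlen through the FFM API instead of JNI.
    static long strlen(String s) throws Throwable {
        Linker linker = Linker.nativeLinker();
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copy the Java string into off-heap memory as a NUL-terminated C string.
            MemorySegment cString = arena.allocateFrom(s);
            return (long) strlen.invoke(cString);
        }
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(strlen("hello"));
    }
}
```

Note that FFM's restricted methods already print a similar runtime warning unless the application is started with --enable-native-access – exactly the opt-in mechanism that JEP 472 extends to JNI.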
JEP draft: Process Reanimation for Serviceability
Now it gets interesting, as we enter the area of drafts. JEP draft: Process Reanimation for Serviceability proposes a new approach to enhancing Java debugging capabilities. It would allow HotSpot Serviceability tools, such as jcmd, to operate on a JVM that has crashed. This would be achieved by reanimating the failed JVM in a new process, enabling the use of native HotSpot JVM diagnostics after a failure. The process involves restoring the memory image of the process from a core dump or minidump file plus the JVM binary, allowing diagnostic commands to be executed in this revived environment. This approach aims to avoid maintaining separate diagnostic code that replicates JVM internals.
The motivation for this JEP is to streamline the post-mortem analysis of JVM crashes, which currently requires various tools that can be cumbersome and inefficient (both in use and maintenance). By reanimating a crashed JVM, the same diagnostic commands used in live scenarios can be employed, ensuring consistency and eliminating the need for separate tools for live and post-crash analysis. This would simplify the diagnostic process and potentially expedite problem resolution in the JVM. Although the application is somewhat niche, in the event of a crash, it could be invaluable and significantly streamline the entire process.
JEP draft: Support HTTP/3 in the HttpClient
Another JEP draft I wanted to mention has not received a number yet, but I must admit that Support HTTP/3 in the HttpClient is also delightful. The draft proposes updating the JDK HTTP client to support the HTTP/3 protocol based on QUIC (Quick UDP Internet Connections). If you are not familiar with the topic, a very nice introduction can be found here. In short, the new protocol does not use TCP like previous versions, but runs over UDP, offering faster connection establishment, eliminating head-of-line blocking, and providing more reliable transport, especially in environments with unstable internet connections.
The update to the HTTP client API is designed to be backward compatible, requiring only minor modifications to support HTTP/3. To use HTTP/3, the API user must select this protocol version as the default for their HTTP client or explicitly specify the version when creating an HTTP request:
var httpClient = HttpClient.newBuilder()
        .version(HttpClient.Version.HTTP_3)
        .proxy(...)
        .build();

var request = HttpRequest.newBuilder(URI.create("https://openjdk.org"))
        .version(HttpClient.Version.HTTP_3)
        .GET()
        .build();
JEP draft: Exception handling in switch (Preview)
Finally, for completeness, the JEP I mentioned a few editions ago. JEP draft: Exception handling in switch (Preview) presents a new approach to exception handling in the switch construct, allowing more convenient management of exceptions thrown by the selector (i.e., the expression after switch). The goal is to enable exception handling directly within the switch block, which can significantly improve code readability and maintainability, as well as facilitate the use of APIs that throw checked exceptions.

Currently, if the selector of a switch evaluates to null or throws an exception, the standard behavior is to throw a NullPointerException or propagate the exception further. This forces developers to wrap the construct in external try-catch blocks, which can obscure the code and make it harder to understand. The JEP proposes extending the switch syntax with a new case throws clause that handles exceptions thrown by the selector directly within the switch block, enabling elegant and consistent management of all possible results of evaluating the selector. An example of the new syntax might look like this:
Future<Box> f = ...
switch (f.get()) {
    case Box(String s) when isGoodString(s) -> score(100);
    case Box(String s) -> score(50);
    case null -> score(0);
    case throws CancellationException ce -> handleCancellation(ce);
    case throws ExecutionException ee -> handleExecution(ee);
    case throws InterruptedException ie -> handleInterrupt(ie);
}
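For contrast, a sketch of how the same logic has to be written today, with the failure paths pulled out of the switch into a surrounding try-catch (the Box record, the scoring values, and the isGoodString criterion are illustrative, not from the draft):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class SwitchToday {
    record Box(String s) {}

    static boolean isGoodString(String s) {
        return s.length() >= 5; // illustrative criterion
    }

    // Today, exceptions thrown by the selector (f.get()) must be caught
    // outside the switch, so success and failure handling live apart.
    static int score(Future<Box> f) {
        try {
            return switch (f.get()) {
                case Box(String s) when isGoodString(s) -> 100;
                case Box(String s) -> 50;
                case null -> 0;
            };
        } catch (ExecutionException | InterruptedException e) {
            return -1; // the branch a `case throws` clause would absorb
        }
    }

    public static void main(String[] args) {
        System.out.println(score(CompletableFuture.completedFuture(new Box("hello"))));
    }
}
```

Requires JDK 21+ for record patterns and `case null`; the `case throws` syntax itself is, of course, not available anywhere yet.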
This change will allow developers to use switch expressions as universal flow control tools in applications, especially those that heavily use APIs throwing exceptions. This should make the code more modular, easier to test, and less error-prone.
We will return to the topic of JEPs (this time in the context of JDK 23) next week. For now...
2. Early builds of the Leyden Project ready for (easier) testing
Since we are already looking ahead to the future of the JDK, I have yet another interesting announcement. You might not know, but there is an official Leyden prototype repository on GitHub that allows developers to experiment with the project's optimizations for improving startup time and application performance. The functionalities introduced by the prototype aim to shift work from runtime to earlier phases, called training sessions. During these sessions the actual behavior of the application is observed: various types of telemetry information are precomputed and stored, and bytecode is precompiled to native code.
From the end-user's perspective, the main functionality introduced in Project Leyden seems to be the enhanced Class Data Sharing (CDS), which, in addition to class metadata and heap objects, also allows storing compiled Java methods. This can be activated with the new flag -XX:+PreloadSharedClasses. It enables loading classes already at application startup, which can significantly speed up the startup process and simplify the implementation of further optimizations.
The new CDS also allows the storage of profiling data, making it possible to apply method profiles from training sessions to accelerate JIT compilation during JVM warm-up. This can be tested using VM flags such as -XX:+RecordTraining and -XX:+ReplayTraining. Training sessions also allow experimentation with compiling frequently used methods during their execution and then storing them in the CDS archive, accessible with the flags -XX:+StoreCachedCode and -XX:+LoadCachedCode.
Finally, we have access to the flags -XX:+ArchiveDynamicProxies and -XX:+ArchiveReflectionData, for testing the pre-generation of dynamic proxies and reflection metadata, respectively, both of which can also be read from the cache. Overall, from a user's perspective, Project Leyden appears to be AppCDS on steroids.
Should you use it in production? Probably not. But I decided to share it with you because the project finally received a decent README on GitHub last week. So, if any of you want to play around with it, now is the perfect time.
3. Interesting Resources on Event Sourcing in Java
Are you familiar with Architecture Weekly? It’s probably the best place if you’re interested in tracking trends not just in specific languages or technologies, but across the entire industry. Its author, Oskar Dudycz, does a fantastic job aggregating what the industry has to offer each week. However, since it’s a broad and comprehensive publication aimed at a wide audience, it rarely focuses on Java. That’s probably why I enjoyed the recent publications so much, where Oskar, who comes from the .NET world and is an expert in Event Sourcing, decided to delve into the latest developments in Java.

Using a shopping cart example, Oskar in his article This is not your uncle's Java! Modelling with Java 22 records pattern matching in practice discusses modern Java features, such as records and pattern matching, for modeling specific domains, showing how these features work together and can be used to evolve state in the context of Event Sourcing, without any frameworks. He explains how events can be recorded and how to build the current state based on the event history, highlighting the importance of domain modeling. He also demonstrates how to use pattern matching to manage the various states of a shopping cart, such as opening, adding products, removing products, confirming, and canceling.
In another article How to write a left-fold streams collector in Java, Oskar expands on the topic, discussing more advanced techniques for state management using streams in Java. He shows how to implement a custom stream collector that enables left-fold aggregation of events in the context of Event Sourcing. Left-fold aggregation, also known as left-fold reduction, is crucial in event-driven systems because it ensures that each event is processed in the order it occurred. This makes it possible to correctly reconstruct the state of an object based on the sequence of events, which is essential for maintaining data integrity and consistency with reality. The article also demonstrates how to create a custom collector that iteratively evolves the state of the shopping cart, considering the order of events. This makes the aggregation process more understandable and efficient.
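To illustrate the idea, here is a simplified sketch of such a left-fold collector with hypothetical names (not the implementation from the article): a Collector that threads a state value through the events in encounter order, rebuilding an aggregate from its history.

```java
import java.util.List;
import java.util.function.BiFunction;
import java.util.stream.Collector;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FoldLeftDemo {

    sealed interface CartEvent permits Opened, ItemAdded {}
    record Opened(String cartId) implements CartEvent {}
    record ItemAdded(String product) implements CartEvent {}

    record Cart(String id, List<String> items) {}

    // A left-fold collector: applies `evolve` to the accumulated state
    // and each element, strictly in encounter order.
    @SuppressWarnings("unchecked")
    static <E, S> Collector<E, ?, S> foldLeft(S initial, BiFunction<S, E, S> evolve) {
        return Collector.of(
                () -> new Object[] { initial },                    // mutable state holder
                (acc, e) -> acc[0] = evolve.apply((S) acc[0], e),  // fold each event in order
                (a, b) -> { throw new IllegalStateException("sequential streams only"); },
                acc -> (S) acc[0]);                                // unwrap the final state
    }

    // How one event evolves the cart state.
    static Cart evolve(Cart state, CartEvent event) {
        return switch (event) {
            case Opened o -> new Cart(o.cartId(), List.of());
            case ItemAdded a -> new Cart(state.id(),
                    Stream.concat(state.items().stream(), Stream.of(a.product())).toList());
        };
    }

    // Rebuilding the current state from the full event history.
    static Cart rebuild(List<CartEvent> history) {
        return history.stream().collect(foldLeft(null, FoldLeftDemo::evolve));
    }

    public static void main(String[] args) {
        Cart cart = rebuild(List.of(
                new Opened("c-1"), new ItemAdded("book"), new ItemAdded("pen")));
        System.out.println(cart);
    }
}
```

The combiner deliberately throws: a left fold is order-dependent, so this collector only makes sense on sequential streams, which is also the point Oskar's article stresses.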
Taking this opportunity, I’ll also share an older project by Oskar that has been updated along with the publication of the above articles. It’s the Introduction to Event Sourcing Workshop available on GitHub, which guides you step-by-step through the various complications you might encounter when working with Event Sourcing systems.
If you’re familiar with the term but would like to understand it better through real code examples, I highly recommend checking it out.
4. Release Radar
Spring Boot 3.3
Historically, new releases of Spring Boot have received their own sections, but I have to admit that Spring Boot 3.3 is not a revolutionary release, though it still has noteworthy updates.
This new release ties in well with the Project Leyden section, as Spring Boot 3.3 offers improved support for CDS (Class Data Sharing), helping to reduce startup time and memory usage. Spring Boot now facilitates creating a CDS-friendly application layout, which can be done by extracting the uber JAR using the jarmode mechanism – a Spring Boot feature that allows launching the application JAR in special modes, previously used for example to create layered Docker images:
java -Djarmode=tools -jar your-application.jar extract
Continuing with features related to application build management, Spring Boot 3.3.0 introduces SBOM (Software Bill of Materials) support, an important improvement in application security and dependency management. SBOM is a detailed inventory of all components that make up the software, including libraries, modules, and dependencies. It’s a helpful tool for managing software security and compliance, providing better understanding and control over its components – something you will be asked about in your next ISO audit (I know from experience). Version 3.3.0 automatically generates SBOM in the CycloneDX format, a standard by OWASP, during the build process, includes it in the uber JAR, and offers an Actuator endpoint to expose it. This allows users to easily obtain detailed information about the components of a running application, increasing transparency and facilitating risk management in Spring Boot-based projects.
The new Spring Boot release introduces several significant changes in metrics and monitoring support. Micrometer 1.13 has deprecated its own support for Jersey in favor of jersey-micrometer from the Jersey project, which led to a similar move in Spring Boot. Micrometer also introduced new annotations, such as @SpanTag, and support for tagged and local fields for Zipkin Brave and OpenTelemetry. Spring Boot has also removed dependency management for Dropwizard Metrics, meaning developers need to specify the version of this library themselves. Additionally, the new release updates the Prometheus client to the stable 1.x branch, which introduces breaking changes due to renamed exported metrics.
Several other dependencies, such as Flyway 10, Infinispan 15, and Git Commit ID Maven 8.0, have also been updated. If you still want more Spring, I recommend checking out the full Release Notes.
Amper 0.3.0
Amper is an experimental build tool developed by JetBrains, aimed at simplifying and streamlining the process of building projects in a declarative manner, a bit like modern CI/CD configuration. An example module.yaml manifest looks like this:
product: jvm/app

dependencies:
  - org.jetbrains.kotlinx:kotlinx-datetime:0.4.0

test-dependencies:
  - io.mockk:mockk:1.13.10

settings:
  kotlin:
    languageVersion: 1.8 # Set Kotlin source compatibility to 1.8
  jvm:
    release: 17 # Set the minimum JVM version that the Kotlin and Java code should be compatible with.
Initially, Amper functioned as a plugin for Gradle, allowing for rapid prototyping and user experience validation. However, the goal of Amper is to provide an intuitive and efficient tool for configuring project builds, one that supports integration with existing systems while gradually introducing new functionalities of its own. This necessitates a gradual move away from relying on other tools, hence the latest update introduces experimental support for a standalone mode, effectively cutting the project's umbilical cord.
Standalone mode in Amper is a new feature that allows for independent management of most build tasks without delegating them to existing systems, such as Gradle. This is a major step, giving JetBrains full control over the developer experience and excellent integration with IDE tools (which essentially means "works well with IntelliJ IDEA and Fleet"). The creators promise that this mode will bring performance improvements, such as faster resolution and downloading of dependencies.
In addition to standalone mode, the latest update Amper 0.3.0 introduces many other minor usability improvements. New enhancements appearing in Fleet 1.35 and IntelliJ IDEA 2024.2 include quick fixes in Kotlin source code, support for creating new projects and modules, and easy merging of duplicated configuration blocks.
Azure Container Apps
Recently, the Microsoft Build conference took place, and while it didn't bring significant news regarding JVM support, one of the announcements might interest you.
Azure Container Apps is a managed service that allows running containerized applications without managing the infrastructure. It enables automatic scaling, microservices deployment, and running applications in different environments. While support for Java is not entirely new (after all, we run applications in containers, so it’s transparent to the runtime), Microsoft has decided to add some features specific to Java.
This means that components enabling observability and application configuration, such as Spring Cloud Config, Spring Cloud Eureka, and others, will no longer need to be run and managed by developers but will be managed and integrated within the Azure Container Apps environment. Developers will be able to, for example, monitor the performance and state of the application using JVM metrics, such as Garbage Collector state or memory usage types.
Microsoft has also significantly expanded the documentation for Container Apps and prepared a complete workshop and a series of guides.
Private opinion – Container Apps is a very convenient solution... but also quite expensive. Use it wisely and keep an eye on your cost calculators, as it’s easy to get carried away with "infinite scalability" if you have a wild imagination.
And since we’re on the subject of Microsoft...
Semantic Kernel for Java GA
...Bruno Borges from Microsoft announced the stable release of Semantic Kernel SDK 1.0 GA.
Semantic Kernel is a text analysis tool that uses semantic language models to understand and interpret the meaning of words in context (hence the strange name), but for most users, it primarily serves as an option to integrate their applications with LLMs. After a period of experimenting with the appropriate API design and usage patterns, the teams responsible for the SDK versions for Python, Java, and .NET have united to create version 1.0 for all these languages (until now, only the .NET version was stable), including the Semantic Kernel Java SDK. This involves unifying naming conventions, syntax, and the available set of features.
Thus, Semantic Kernel Java SDK 1.0 introduces several key new features. The most important of these is the ability to invoke tools. This is a feature known from Langchain4j (which I will mention shortly), allowing the AI service to request the invocation of native Java functions within the query planning process. Additionally, the audio service now supports text-to-speech and speech-to-text conversion. Type conversion has also been improved, enabling users to register and serialize/deserialize types to and from prompts. Observability has also been addressed, introducing hooks to monitor key points in the flow, such as function calls, allowing for better tracking and debugging.
Other new features in Semantic Kernel Java SDK version 1.0 include improvements in API consistency, making it more intuitive (which is helpful), but existing users (including myself) will have to deal with a fair number of breaking changes. The documentation now covers examples in all languages, including Java, and the examples folder has been cleaned up. Simplified Maven dependencies have also been introduced, as well as the ability to easily deploy demo applications using the Azure Developer CLI.
Langchain4j 0.31
But Semantic Kernel is not the only Java library enabling integration with various language models and AI tools that has received an update. Langchain4j in its latest 0.31 release introduces several key features and new integrations that extend the library's capabilities.
The new Langchain4j introduces several innovations in Retrieval-Augmented Generation (RAG) methods. First, it adds the ability to use web search engines as a data source for the RAG model, expanding the scope of this tool's usage. Second, it implements the option to preview the content retrieved by RAG for generating responses, allowing for better management of input data and understanding which fragments influenced the results. Additionally, an experimental feature to use an SQL database as a data source for RAG has been introduced.
In terms of integrations, Langchain4j 0.31.0 also offers new connections with external models and services. Support has been added for Cohere embeddings (a company specializing in generative language models for enterprises), integration with Google Web Search Engine and other search engines like Tavily, and support for Jina AI functionality has been implemented. Significant modifications are also related to Azure OpenAI, where a migration from functions to tools has been carried out, potentially impacting more efficient integration with Azure cloud. The ability to add custom HTTP headers in OpenAI integration has also been introduced. Moreover, Vertex AI Gemini support has been expanded, adding support for system messages and parallel tools, increasing the scalability and complexity management capabilities of operations.
Apache Camel 4.6
Apache Camel is a flexible open-source tool for integrating applications in enterprises. It represents the Enterprise Service Bus (ESB) category, allowing for efficient connection of various systems and applications to exchange data. Camel supports different message routing patterns, simplifying the configuration of data flows between services, both internal and external. This enables developers to focus on business logic rather than the technical details of inter-system communication.
The main tool that underwent significant changes is Camel JBang, a tool that allows for quick application launches without the need to configure a full development environment. It uses JBang, which enables running Java scripts without prior compilation or extensive project building, making it ideal for rapid prototyping, testing, and running simple Camel integrations and scripts. Camel JBang now supports launching with Spring Boot and Quarkus using the --runtime option. The ability to configure logging levels for individual packages in the application.properties file and to define JDBC data sources in the Spring Boot style has also been added.
Rest DSL has gained a new contract-first approach that utilizes OpenAPI v3 specifications. This allows for easily defining Rest DSL in Camel based on an existing OpenAPI specification file. Additionally, new features have been added to components: camel-azure-eventhubs has been refactored for better failover support and reconfiguration, camel-sql now supports variables in SQL queries, and camel-kafka has been updated to Kafka client 3.7, adding JMSDeserializer to handle JMS header types.
Apart from this, other dependencies have been updated, including support for many new AI models using AWS Bedrock. You can find the full release notes here.