All 24 new JEPs for JDK 24: Quantum-Resistant Cryptography, Garbage Collectors, and a lot of cleanups - JVM Weekly vol. 111
Starting today, JDK 24 enters the Rampdown phase!
Starting today, the new Java enters the Rampdown phase – the feature list is frozen and no further new features are to be expected. So let's go through the complete list of changes in the new release.
Exegi Monumentum - New Stable APIs
Let’s start with the crème de la crème: the new APIs that will become part of developers' toolchains with the next Java release.
JEP 485: Stream Gatherers
JEP 485 extends Java's Stream API with the ability to define custom intermediate operations, called gatherers. This allows for more flexible and expressive data processing within streams, enabling transformations that were previously difficult or impossible to achieve using the built-in intermediate operations.
Problem it solves: The existing Stream API provides a fixed set of intermediate operations like mapping, filtering, and sorting. However, it lacks a mechanism (similar to existing Collectors) for defining more complex operations, such as grouping elements into fixed-size windows or removing duplicates based on custom criteria. As a result, developers often had to resort to complicated workarounds or abandon streams in favor of iterative code (gasp!).
Solution provided: Stream gatherers allow developers to define custom intermediate operations for streams. These gatherers can be used with both finite and infinite streams and support sequential and parallel processing. This makes streams more versatile and adaptable to specific application needs (or the whims of a particular developer).
Code example:
import java.util.List;
import java.util.stream.Gatherers;
import java.util.stream.Stream;

public class Main {
    public static void main(String[] args) {
        Stream<String> stream = Stream.of("a", "b", "c", "d", "e");
        // gather(...) plugs a Gatherer into the pipeline as an intermediate operation
        List<List<String>> result = stream
                .gather(Gatherers.windowSliding(3))
                .toList();
        System.out.println(result);
        // Output: [[a, b, c], [b, c, d], [c, d, e]]
    }
}
The above code uses the built-in gatherer Gatherers.windowSliding, which groups stream elements into overlapping windows of a specified size via the new gather(...) intermediate operation. For the detailed mechanism, check out the JEP itself; here, we focus on usage.
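The headline feature, though, is defining your own gatherers. Below is a minimal sketch of a custom stateful gatherer built with Gatherer.ofSequential that drops duplicates based on a key - exactly the kind of operation mentioned above. The distinctBy helper and all names in it are illustrative, not part of the JDK:
import java.util.HashSet;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Gatherer;
import java.util.stream.Stream;

public class DistinctByExample {

    // A custom stateful gatherer: emits only the first element seen for each key.
    static <T, K> Gatherer<T, Set<K>, T> distinctBy(Function<? super T, ? extends K> keyExtractor) {
        Gatherer.Integrator<Set<K>, T, T> integrator =
                (seen, element, downstream) ->
                        // push the element only if its key has not been seen yet;
                        // returning true keeps consuming upstream elements
                        !seen.add(keyExtractor.apply(element)) || downstream.push(element);
        return Gatherer.ofSequential(HashSet::new, integrator);
    }

    public static void main(String[] args) {
        var result = Stream.of("apple", "avocado", "banana", "blueberry", "cherry")
                .gather(distinctBy((String s) -> s.charAt(0)))  // keep one fruit per first letter
                .toList();
        System.out.println(result); // [apple, banana, cherry]
    }
}
Gatherer.ofSequential takes an initializer that creates per-stream state and an integrator that decides, element by element, what to push downstream.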
PS: More about the Stream Gatherers API can be found here.
JEP 484: Class-File API
JEP 484 aims to provide a standard API for parsing, generating, and transforming Java class files. This API allows developers and tools to handle class files according to the format defined in the JVM Specification, facilitating the adoption of new language and VM features as they are introduced.
Problem it solves:
The Java ecosystem has many libraries for handling class files, such as ASM, BCEL, and Javassist. However, the rapid evolution of the class file format, tied to Java's dynamic development and the six-month JDK release cycle, leads to compatibility issues. The class file format changes with every JDK version, introducing new attributes, instructions, or metadata. If a library (e.g., ASM) hasn't been updated to support the latest JDK, attempts to analyze or manipulate such class files can fail.
Furthermore, the JDK itself bundles internal copies of such libraries (for example, a fork of ASM) for processing class files; these copies typically lag one release behind the newest format, which slows the platform's own ability to adopt new features.
Solution provided by the JEP:
Introducing a standard API for handling class files that evolves with the class file format enables both platform components and external tools or libraries to support the latest language and VM features immediately.
This API represents class file elements as immutable objects, mirrors the hierarchical structure of class files, supports lazy processing, and offers both streaming and materialized views of class files. This allows developers to efficiently process class files while enabling JDK components to gradually migrate to the standard API. In the future, this may allow the removal of internal copies of external libraries, such as ASM.
Code example:
import java.lang.classfile.ClassFile;
import java.lang.classfile.ClassModel;
import java.lang.classfile.MethodModel;
import java.nio.file.Files;
import java.nio.file.Path;

public class ClassFileExample {
    public static void main(String[] args) throws Exception {
        Path classFilePath = Path.of("MyClass.class");
        byte[] classData = Files.readAllBytes(classFilePath);
        // Parse the raw bytes into an immutable ClassModel
        ClassModel classModel = ClassFile.of().parse(classData);
        for (MethodModel method : classModel.methods()) {
            System.out.println("Method: " + method.methodName().stringValue());
        }
    }
}
The above code demonstrates how the new API can read a class file, parse it, and display the names of all methods contained in the class.
Lilliput, Garbage Collectors, and Virtual Threads - VM Internals
JEP 404: Generational Shenandoah (Experimental)
Just as previous JDK versions introduced the generational ZGC, JEP 404 aims to enhance the Shenandoah Garbage Collector - a low-latency algorithm that performs parallel and concurrent garbage collection with minimal impact on application response time - by adding experimental generational capabilities. This change is designed to improve throughput, resilience to sudden loads, and memory efficiency.
What problem does it solve:
Traditional generational collectors like G1 or CMS operate under the assumption that most objects quickly become unused, focusing their efforts on young objects. Non-generational Shenandoah requires more memory and intensive work to reclaim space from unreachable objects, which can lead to longer application pauses.
What is the solution:
Introducing a generational mode in Shenandoah splits the heap into two generations: young and old. This allows the collector to focus on reclaiming memory from the young generation, where most objects quickly become unused, leading to shorter pauses and better memory management.
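To experiment with it, the generational mode sits behind experimental flags; based on the JEP, a launch command along these lines should enable it (application.jar is a placeholder):
java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational -jar application.jar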
JEP 475: Late Barrier Expansion for G1
The goal of JEP 475 is to simplify the implementation of garbage collector barriers for G1 by shifting their expansion to a later stage in the compilation process by the C2 compiler. This change aims to relieve the compiler and (interestingly) make the code easier to understand and maintain for JVM developers.
What problem does it solve:
In the current implementation, G1 barriers are expanded early in the compilation process, which increases the complexity of the intermediate code and burdens the C2 compiler. This leads to longer compilation times and makes it difficult to maintain the correct order of memory operations, potentially resulting in errors and difficulties in code maintenance.
What is the solution:
The proposed solution delays the expansion of G1 barriers to a later stage in the compilation process, just before generating machine code. This allows the barriers to be represented in a more compact form in the intermediate code, reducing the compiler's burden and helping maintain the correct order of operations. Additionally, this approach allows for better optimization and simplifies barrier implementation, resulting in better performance and easier code maintenance.
JEP 490: ZGC: Remove the Non-Generational Mode
The goal of JEP 490 is to remove the non-generational mode of the Z Garbage Collector (ZGC) in the HotSpot virtual machine. This change is designed to simplify ZGC code and reduce maintenance costs by focusing solely on the generational mode of ZGC.
What problem does it solve: Maintaining two separate modes of operation for ZGC - generational and non-generational - adds complexity to the codebase and burdens development and maintenance. The generational ZGC mode has been introduced as a better solution for most use cases, making the non-generational mode less necessary.
What is the solution: JEP 490 proposes to remove the non-generational mode of ZGC by deprecating the ZGenerational option and removing the code and tests related to this mode. In future releases, this option will be completely removed, and attempting to use it will result in an error when running the JVM.
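In practice there is nothing left to select: a plain ZGC launch like the one below now always runs the generational collector, and, per the JEP, passing ZGenerational only triggers a deprecation warning (command shown for illustration):
java -XX:+UseZGC -jar application.jar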
There are definitely things to be excited about.
As we are done with garbage collection, now it's time for the Virtual Threads and memory usage!
JEP 491: Synchronize Virtual Threads without Pinning
The goal of JEP 491 is to improve the scalability of code using synchronized methods and blocks by allowing virtual threads that block in such constructs to release their assigned platform threads, enabling their use by other virtual threads. This change eliminates nearly all cases of "pinning" virtual threads to platform threads, which previously limited the number of virtual threads available to handle application load.
What problem does it solve?
Virtual threads introduced in Java 21 are managed by the JDK rather than the operating system, enabling the creation of high-throughput applications using a massive number of them. However, when a virtual thread executes code inside a synchronized method or block and encounters a blocking operation (such as reading from a socket), it cannot "unpin" from its platform thread. This results in "pinning" the virtual thread to the platform thread, limiting application scalability as the blocked platform thread cannot handle other virtual threads.
What is the solution?
JEP 491 proposes modifying the JVM's object monitor implementation so that virtual threads can independently acquire, hold, and release monitors regardless of their platform thread counterparts. This allows the platform thread to be released to handle other virtual threads when a virtual thread blocks inside a synchronized block or method, significantly improving application scalability.
Additionally, diagnostics will be enhanced by expanding the scenarios in which jdk.VirtualThreadPinned events are logged in JDK Flight Recorder, making it easier to identify situations where virtual threads fail to release platform threads.
Code example:
synchronized byte[] getData() throws IOException {
    byte[] buf = new byte[1024];                    // buffer size chosen only for illustration
    int nread = socket.getInputStream().read(buf);  // could block here
    return buf;
}
In the above example, if the read method blocks due to a lack of available bytes, the virtual thread executing getData will be able to "unpin" from its platform thread, freeing the platform thread to handle other tasks. This will enable better scalability for applications using synchronized methods and blocks with virtual threads.
JEP 450: Compact Object Headers (Experimental)
A child of Project Lilliput. The goal of JEP 450 is to reduce the size of object headers in the HotSpot virtual machine from 96–128 bits to 64 bits on 64-bit architectures, aiming to reduce memory usage, improve deployment density, and enhance data locality.
What problem does it solve?
In the current JVM implementation, object headers consume a significant portion of heap memory, especially in applications working with small objects. Reducing the size of the headers allows for more efficient memory usage and reduces the overhead of memory management.
What is the solution?
Each object in memory consists of data and "metadata"—information about what the object is and how it behaves.
In this context:
Mark word is a portion of data within the object that stores technical information, such as whether the object is locked (by threads), or whether the garbage collector is processing it, etc.
Class word contains information about the object's "type" (or class), allowing the program to understand, for example, that "this is a Cat object, not a Dog."
Each of these elements occupies a specific amount of memory.
The proposed solution involves combining the two header words - mark word and class word - into a single 64-bit word by compressing the class pointer and placing it within the mark word. This requires modifications to the locking mechanisms, garbage collection, and class pointer handling to preserve integrity and availability of object type information.
If you'd like to test the impact on your application, you can activate the compact object headers during JVM startup using this command:
java -XX:+UnlockExperimentalVMOptions -XX:+UseCompactObjectHeaders -jar application.jar
The above command will allow you to assess the impact of this option on performance and memory usage in your specific environment.
Java Quantum Leap: Security and Cryptography
JEP 478: Key Derivation Function API (Preview)
JEP 478 introduces an API for Key Derivation Functions (KDF), which are cryptographic algorithms designed to generate additional keys from a secret key and other data. This API allows applications to use KDF algorithms like HMAC-based Extract-and-Expand Key Derivation Function (HKDF) and Argon2. It also provides a framework for security providers to implement these algorithms in Java or native code.
The problem it solves:
With technological advancements, including quantum computing, traditional cryptographic algorithms are becoming increasingly vulnerable to attacks. The absence of a standard KDF API in the Java platform complicates the implementation of modern cryptographic protocols, such as Hybrid Public Key Encryption (HPKE), which are designed to resist quantum attacks. Furthermore, the existing APIs lack a natural way to represent advanced KDF functionalities, limiting developers in implementing and certifying custom KDF algorithms.
Solution:
The introduction of the javax.crypto.KDF class as a standard interface for key derivation functions simplifies the use and implementation of various KDF algorithms. The API supports initialization with appropriate parameters and facilitates key or data derivation. This ensures secure and repeatable key generation between parties sharing knowledge of the cryptographic input. The API also lays the groundwork for post-quantum cryptography support in Java.
Code example:
// ikm, salt and info are illustrative byte[] inputs shared by both parties
KDF hkdf = KDF.getInstance("HKDF-SHA256");
SecretKey derivedKey = hkdf.deriveKey("AES",
        HKDFParameterSpec.ofExtract().addIKM(ikm).addSalt(salt).thenExpand(info, 32));
JEP 496: Quantum-Resistant Module-Lattice-Based Key Encapsulation Mechanism
JEP 496 implements a quantum-resistant key encapsulation mechanism based on module lattices (ML-KEM). Key Encapsulation Mechanisms (KEMs) use public-key cryptography to establish symmetric keys over unprotected communication channels. ML-KEM is designed to resist future quantum attacks and has been standardized by NIST in the FIPS 203 specification.
What problem does it solve?
Advancements in quantum computing pose a threat to currently used cryptographic algorithms like RSA and Diffie-Hellman, which could theoretically be broken using Shor's algorithm. This creates an urgent need for quantum-resistant algorithms to secure data against future threats. ML-KEM, standardized by NIST, provides such protection and helps safeguard applications from potential quantum attacks.
Solution:
JEP 496 proposes implementing ML-KEM with appropriate Java APIs:
KeyPairGenerator for generating ML-KEM key pairs.
KEM for negotiating shared secret keys based on ML-KEM key pairs.
KeyFactory for converting ML-KEM keys to and from encoded formats.
The specification introduces a new family of security algorithms named "ML-KEM" in Java. FIPS 203 defines three parameter sets for ML-KEM: ML-KEM-512, ML-KEM-768, and ML-KEM-1024, which vary in security strength and performance.
Code example:
// Generating an ML-KEM key pair
KeyPairGenerator generator = KeyPairGenerator.getInstance("ML-KEM");
generator.initialize(NamedParameterSpec.ML_KEM_512);
KeyPair keyPair = generator.generateKeyPair();
// Key encapsulation by the sender
KEM kem = KEM.getInstance("ML-KEM");
KEM.Encapsulator encapsulator = kem.newEncapsulator(keyPair.getPublic());
KEM.Encapsulated encapsulated = encapsulator.encapsulate();
byte[] encapsulationMessage = encapsulated.encapsulation(); // Message to send
SecretKey secretKeySender = encapsulated.key(); // Sender's secret key
// Key decapsulation by the receiver
KEM.Decapsulator decapsulator = kem.newDecapsulator(keyPair.getPrivate());
// Receiver's secret key
SecretKey secretKeyReceiver = decapsulator.decapsulate(encapsulationMessage);
// secretKeySender and secretKeyReceiver contain the same secret key
JEP 497: Quantum-Resistant Module-Lattice-Based Digital Signature Algorithm
JEP 497 implements a quantum-resistant digital signature algorithm family based on module lattices (ML-DSA). Digital signatures are crucial for detecting unauthorized data modifications and authenticating signatories' identities. ML-DSA is designed to resist future quantum attacks and complies with NIST's FIPS 204 standard.
The problem it solves: See above — quantum advancements threaten traditional algorithms.
Solution:
ML-DSA is exposed through the standard KeyPairGenerator, Signature, and KeyFactory APIs, and a new family of security algorithms named "ML-DSA" is introduced. FIPS 204 defines three parameter sets for ML-DSA: ML-DSA-44, ML-DSA-65, and ML-DSA-87, offering varying levels of security and performance.
Code example:
// Generating an ML-DSA key pair
KeyPairGenerator generator = KeyPairGenerator.getInstance("ML-DSA");
generator.initialize(NamedParameterSpec.ML_DSA_44);
KeyPair keyPair = generator.generateKeyPair();
// Signing data
Signature signer = Signature.getInstance("ML-DSA");
signer.initSign(keyPair.getPrivate());
signer.update(data);
byte[] signature = signer.sign();
// Verifying signature
Signature verifier = Signature.getInstance("ML-DSA");
verifier.initVerify(keyPair.getPublic());
verifier.update(data);
boolean isValid = verifier.verify(signature);
In this example, an ML-DSA key pair is generated using the ML-DSA-44 parameter set. The private key signs the data, while the public key verifies the signature.
"I Don't Want to Play with You Anymore" - Deprecations
What’s being removed (or will be in the future).
JEP 472: Preparing to Restrict JNI Usage
JEP 472 introduces warnings about the use of Java Native Interface (JNI) and aligns the Foreign Function & Memory (FFM) API to issue consistent warnings. This prepares developers for future versions of Java, where interactions with native code will be restricted by default to ensure platform integrity.
The problem it solves:
Interactions between Java code and native code via JNI can compromise application integrity, leading to undefined behavior like JVM crashes, manipulation of fields without proper access controls, or improper memory management. The introduction of warnings raises awareness among developers and prepares them for upcoming changes.
Solution:
A warning mechanism for JNI usage is introduced, along with adjustments to the FFM API to issue consistent warnings. Developers can bypass these warnings (and potential future restrictions) by selectively enabling access to native interfaces at runtime using the --enable-native-access option.
Example:
# Enable native interface access for all unnamed modules (classes from the classpath)
java --enable-native-access=ALL-UNNAMED -jar application.jar
# Enable native interface access for specific modules
java --enable-native-access=module1,module2 -jar application.jar
These commands run the application with native interface access enabled for specified modules, avoiding warnings related to JNI and FFM API usage.
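For context, here is a minimal sketch of the kind of code that will start emitting such a warning: a class that loads a native library and binds a native method. The library and method names are made up for illustration:
public class NativeHello {
    static {
        // Loading a native library is a "restricted" operation and triggers the
        // JNI warning unless the module was granted --enable-native-access
        System.loadLibrary("hello");
    }

    private static native String greet();

    public static void main(String[] args) {
        System.out.println(greet());
    }
}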
JEP 498: Warn upon Use of Memory-Access Methods in sun.misc.Unsafe
JEP 498 issues warnings on the first use of memory-access methods in the sun.misc.Unsafe class during program execution. These methods were deprecated in JDK 23 and are scheduled for removal in future versions. They have been replaced by standard APIs such as VarHandle (JEP 193, JDK 9) and the Foreign Function & Memory API (JEP 454, JDK 22). The warnings aim to encourage developers to migrate away from sun.misc.Unsafe to supported alternatives, ensuring smooth transitions to modern JDK versions.
The problem it solves:
The sun.misc.Unsafe class, introduced in 2002, allowed low-level operations, primarily for memory access. While powerful, these methods were unsafe, leading to undefined behavior and JVM crashes. Despite being intended for internal use only, developers widely adopted them for their performance advantages, and many libraries failed to enforce adequate safety checks, increasing application risks. When internal APIs were encapsulated in JDK 9 (JEP 260), sun.misc.Unsafe was deliberately kept reachable until standard alternatives existed, which delayed its removal.
Solution:
JEP 498 introduces runtime warnings on the first call to any memory-access method from sun.misc.Unsafe. A command-line option, --sun-misc-unsafe-memory-access={allow|warn|debug|deny}, was added in JDK 23 to control how these methods behave. Its default value, allow in JDK 23, changes to warn in JDK 24, so the first such call now prints a warning.
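As an illustration, the classic theUnsafe reflection hack below is a sketch of code that, under the default warn setting, should now log a warning on its first memory-access call:
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeDemo {
    public static void main(String[] args) throws Exception {
        // The usual (unsupported) way libraries obtain the Unsafe singleton
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long address = unsafe.allocateMemory(8);  // first memory-access call: a warning is printed here
        unsafe.putLong(address, 42L);
        System.out.println(unsafe.getLong(address));
        unsafe.freeMemory(address);
    }
}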
JEP 479: Remove the Windows 32-bit x86 Port
JEP 479 removes source code and support for compiling the JDK on 32-bit Windows x86 systems. Announced for removal in JDK 21, its elimination simplifies JDK build and testing infrastructure, addressing technical debt.
The problem it solves:
Maintaining support for 32-bit Windows x86 consumes resources and complicates the development of new features. On this platform, the implementation of virtual threads (JEP 436) falls back to kernel threads, so they offer no advantage there. Moreover, Windows 10, the last Windows release supporting 32-bit operation, reaches its end of support in October 2025.
Solution:
The specific code paths for 32-bit Windows x86 are removed, and the JDK build system is modified to no longer compile for this platform. Documentation is updated to reflect these changes, streamlining JDK build and testing infrastructure and focusing the OpenJDK community on new features and improvements.
This was the first JDK I used. Rest in Peace.
JEP 501: Deprecate the 32-bit x86 Port for Removal
JEP 501 marks the 32-bit x86 port as deprecated, with plans for removal in future JDK versions. Currently, the only supported 32-bit x86 port is for Linux. Once removed, the only way to run Java programs on 32-bit x86 processors will be via the architecture-independent Zero port.
Problem it solves:
Maintaining the 32-bit x86 port incurs additional costs, particularly for implementing platform-specific solutions or workarounds for new features like Loom, Foreign Function & Memory API, or Vector API. This delays innovation and adoption in the JDK.
Solution:
JEP 501 proposes deprecating the 32-bit x86 port in JDK 24, with its complete removal planned for JDK 25. Attempts to configure builds for 32-bit x86 will result in an error indicating its deprecation. Developers can override this with --enable-deprecated-ports=yes, but there will be no guarantees of successful compilation or functionality.
This deprecation follows prior actions like JEP 449 (deprecating the Windows 32-bit x86 port) and JEP 479 (removing it). With dwindling support for 32-bit OSes and x86 processors, this step simplifies and modernizes JDK development.
JEP 486: Permanently Disable the Security Manager
JEP 486 permanently disables the Security Manager in the Java platform. Introduced in early Java versions, the Security Manager is no longer the primary mechanism for securing client-side code and has rarely been used for server-side code. Marked for removal in Java 17 (JEP 411, 2021), its deprecation received initial criticism but is now widely accepted as a necessary step.
Problem it solves:
The Security Manager added complexity to Java libraries, requiring many methods to check resource access permissions. Despite its intent, it was rarely used, and most applications granting full permissions negated its minimal privileges model. Maintaining the Security Manager was costly, diverting resources from developing modern security mechanisms.
Solution:
JEP 486 introduces:
Removal of the ability to enable the Security Manager at JVM startup with the -Djava.security.manager command-line option.
Prohibition of installing a Security Manager via System.setSecurityManager(...).
Simplification of hundreds of JDK classes previously relying on the Security Manager for resource access decisions.
API modifications to behave as if the Security Manager was never enabled.
These changes allow the Security Manager’s removal from JDK code, enabling a focus on implementing modern security features like new protocols (e.g., TLS 1.3, HTTP/3) and stronger cryptographic algorithms.
java -Djava.security.manager -jar application.jar
This now results in the program terminating with an error stating that enabling the Security Manager is no longer possible.
Developers should update applications to not depend on the Security Manager and consider using modern security mechanisms available in Java.
Cold Start & Build Time Improvements
Now let’s talk about optimizations aimed at improving platform startup time.
JEP 483: Ahead-of-Time Class Loading & Linking
JEP 483 aims to improve Java application startup time by enabling the HotSpot JVM to have immediate access to preloaded and prelinked classes during startup. This is achieved by monitoring an application during a run and saving the loaded and linked forms of all classes in a cache for use in subsequent runs.
In other words, the JDK is getting yet another layer of dynamic caching.
Problem it solves:
Java's dynamic features - such as runtime class loading, dynamic binding, and reflection - though powerful, can significantly increase application startup times. Typical server applications scan hundreds of JAR files, read and parse thousands of class files, load data into class objects, and bind them together during startup. This can take seconds to minutes - though, to be fair, such extreme cases are increasingly rare. JEP 483 addresses this by allowing these operations to be performed in advance and cached, shortening subsequent application launches.
Solution:
The solution extends the HotSpot JVM to include a cache storing preloaded and prelinked classes. The process involves the following steps:
Recording the AOT configuration: run the application in a training mode so the JVM records which classes were loaded and linked into an AOT configuration file (app.aotconf).
Creating the AOT cache: use the recorded configuration to produce the AOT cache file (app.aot).
Using the AOT cache: on subsequent runs, point the JVM at the cache to speed up application startup. The commands are shown below.
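Mapping these steps to concrete commands, based on the flags described in the JEP (the application class and JAR name are placeholders):
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.App
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar
java -XX:AOTCache=app.aot -cp app.jar com.example.App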
By leveraging this cache, the JVM can instantly access preloaded and prelinked classes, significantly reducing application startup time—or so the JDK developers promise. We'll have to wait for benchmarks to verify these claims.
JEP 493: Linking Run-Time Images without JMODs
JEP 493 aims to reduce the JDK size by approximately 25% by enabling the jlink tool to create custom runtime images without using JMOD files. This functionality must be enabled during JDK build time; it is not active by default, and some JDK vendors may choose not to include it.
The problem it solves: A full JDK installation includes two main components:
Run-time image: The executable Java runtime system.
JMOD files: A modular format containing class files, native libraries, configuration files, and other resources for each module in the runtime image.
JMOD files are used by jlink to create custom runtime images. However, their presence duplicates all files already in the runtime image, wasting significant space. JMOD files account for about 25% of the total JDK size. In cloud environments, where container images with installed JDKs are frequently copied over networks from container registries, reducing JDK size would improve efficiency.
For example, first build a JDK in which jlink can operate without JMOD files:
$ configure [ ... other options ... ] --enable-linkable-runtime
$ make images
Then, create a runtime image containing only the java.xml and java.base modules:
$ jlink --add-modules java.xml --output image
$ image/bin/java --list-modules
java.base@24
java.xml@24
In this example, jlink creates a runtime image with only the selected modules, without requiring JMOD files, resulting in a smaller final image.
Nihil Novi Sub Sole - Preview features
That is, these are the JEPs that haven’t received significant updates. For reference:
JEP 487: Scoped Values (Fourth Preview)
The goal of JEP 487 is to introduce Scoped Values, which are a Virtual Thread-friendly alternative to the ThreadLocal API. They allow methods to share immutable data with both their direct and indirect invocations within the same thread and its child threads. Scoped Values are easier to understand than thread-local variables and have lower memory and time overhead, particularly when combined with virtual threads and structured concurrency.
Problem addressed:
In traditional Java programming, data is passed between methods via parameters. However, in complex applications where control flows between different components, passing all necessary data through successive methods becomes impractical. Thread-local variables, used until now, enable data sharing within a thread but come with management difficulties and potential performance issues.
Solution
Scoped Values enable the definition of immutable data that can be shared within the scope of method calls in the same thread and its child threads. This approach allows for clearer and more efficient data passing without modifying method signatures. Compared to thread-local variables, scoped values offer better code readability and lower operational costs, especially in the context of virtual threads and structured concurrency.
Code Example
// ScopedValue and StructuredTaskScope are preview APIs in JDK 24 (--enable-preview)
import java.util.concurrent.StructuredTaskScope;

public class ScopedValueExample {
    private static final ScopedValue<String> USER_ID = ScopedValue.newInstance();

    public static void main(String[] args) {
        ScopedValue.where(USER_ID, "user123").run(() -> {
            System.out.println("User ID: " + USER_ID.get()); // Output: User ID: user123
            // Scoped values are inherited by threads forked inside a StructuredTaskScope
            try (var scope = new StructuredTaskScope<Void>()) {
                scope.fork(() -> {
                    System.out.println("User ID in child thread: " + USER_ID.get()); // user123
                    return null;
                });
                scope.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}
Here, the USER_ID scoped value is bound to "user123" and is visible both in the current thread and in the child thread forked inside the StructuredTaskScope, without being passed as a parameter. (Note that scoped values are inherited by threads forked in a structured scope, not by tasks submitted to an arbitrary ExecutorService.)
JEP history and previous iterations: Scoped Values were first incubated in JDK 20 (JEP 429) and then previewed in JDK 21 (JEP 446), JDK 22 (JEP 464), and JDK 23 (JEP 481). The fourth preview in JEP 487 focuses on gathering further user feedback and ensuring the API aligns with real-world application needs before finalization.
JEP 488: Primitive Types in Patterns, instanceof, and switch (Second Preview)
The goal of JEP 488 is to extend pattern matching in Java to support primitive types across all pattern contexts and to enable their use with instanceof and switch. This functionality is available as a preview feature.
Problem addressed:
Previously, pattern matching in switch did not support primitive types, limiting flexibility and code readability. For example, handling integer-based status codes required using a default case with additional operations instead of directly matching specific values. Additionally, record patterns did not support primitive types, complicating the processing and decomposition of data in records.
Solution:
JEP 488 allows primitive types to be used in pattern matching. This enhancement enables switch statements to directly handle primitive values, simplifying the code and improving readability. Primitive patterns can also be nested within more complex patterns, such as in records, facilitating the natural decomposition of data structures.
Code Example:
String label = switch (x.getStatus()) {
    case 0 -> "okay";
    case 1 -> "warning";
    case 2 -> "error";
    case int i -> "unknown status: " + i;
};
Here, the status code is matched as a primitive int, and the case int i pattern covers all remaining values, eliminating the need for a default case.
Another (Record) Example:
sealed interface JsonValue {
record JsonString(String s) implements JsonValue { }
record JsonNumber(double d) implements JsonValue { }
record JsonObject(Map<String, JsonValue> map) implements JsonValue { }
}
JsonValue json = new JsonObject(Map.of("name", new JsonString("John"),
                                        "age", new JsonNumber(30)));
if (json instanceof JsonObject(var map)
&& map.get("name") instanceof JsonString(String n)
&& map.get("age") instanceof JsonNumber(int a)) {
System.out.println("Name: " + n + ", Age: " + a);
}
With the enhancements in JEP 488, the double component of JsonNumber can be matched directly against an int pattern when the value fits in an int, simplifying data handling and improving readability.
JEP 489: Vector API (Ninth Incubator)
The goal of JEP 489 is to introduce an API that enables vector computations to be expressed in Java, which are reliably compiled at runtime to optimal vector instructions on supported CPU architectures. This allows achieving performance surpassing equivalent scalar computations.
The problem it solves:
Vector computations enable the processing of multiple data elements simultaneously, which is crucial for high-performance applications like multimedia processing and machine learning algorithms. The lack of a standard API in Java has hindered developers from effectively utilizing modern vector instructions available on current processors. As a result, Java applications may not achieve optimal performance compared to applications written in languages that offer direct access to such instructions.
Proposed solution:
The introduction of the Vector API allows Java developers to define vector operations in a way that is independent of processor architecture. This API ensures that vector computations are compiled at runtime to the appropriate vector instructions available on the platform, such as SSE or AVX on x64 and NEON on ARM AArch64. This enables applications to achieve higher performance without losing code portability.
Code example:
// Requires the incubator module: --add-modules jdk.incubator.vector
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorMask;
import jdk.incubator.vector.VectorSpecies;

public class VectorExample {
    public static void main(String[] args) {
        VectorSpecies<Float> species = FloatVector.SPECIES_256;
        float[] a = {1.0f, 2.0f, 3.0f, 4.0f};
        float[] b = {5.0f, 6.0f, 7.0f, 8.0f};
        float[] c = new float[4];
        // SPECIES_256 holds 8 floats, so a mask restricts the lanes to the array length
        VectorMask<Float> mask = species.indexInRange(0, a.length);
        FloatVector va = FloatVector.fromArray(species, a, 0, mask);
        FloatVector vb = FloatVector.fromArray(species, b, 0, mask);
        FloatVector vc = va.add(vb);
        vc.intoArray(c, 0, mask);
        for (float f : c) {
            System.out.println(f);
        }
    }
}
In the example above, the Vector API is used to add two arrays of floating-point numbers in a vectorized way, enabling efficient data processing using modern processor instructions.
History and previous iterations: Still waiting for Project Valhalla.
JEP 492: Flexible Constructor Bodies (Third Preview)
The goal of JEP 492 is to enable placing statements in Java constructors before calling another constructor, i.e., super(..) or this(..). These statements cannot refer to the constructed instance but can initialize its fields. Initializing fields before calling another constructor increases class reliability, especially when methods are overridden.
The problem it solves:
Previously, the first statement in a constructor had to be a call to another constructor (super(..) or this(..)). This limitation restricted the ability to perform operations like argument validation or field initialization before calling the superclass constructor. As a result, developers had to use workarounds like auxiliary static methods, which complicated code and made it harder to maintain.
Proposed solution:
Flexible constructor bodies allow placing statements before calling another constructor. These statements form a "prologue" that enables field initialization or argument validation before calling the superclass constructor. However, the prologue cannot refer to the constructed instance. After the superclass constructor is called, the "epilogue" containing the remaining statements is executed. This approach improves code readability and reliability by eliminating the need for additional helper methods.
Code example:
public class PositiveBigInteger extends BigInteger {
    public PositiveBigInteger(long value) {
        if (value <= 0) throw new IllegalArgumentException("Value must be positive");
        super(Long.toString(value)); // BigInteger has no long constructor, so pass the decimal string
    }
}
In the example above, argument validation for value is performed before calling the superclass constructor BigInteger. If the value is invalid, an exception is thrown earlier, preventing unnecessary object creation.
History and previous iterations: the feature first appeared as a preview in JDK 22 under JEP 447 (then called "Statements before super(...)") and was previewed again in JDK 23 under JEP 482. JEP 492 is the third preview, in JDK 24, retaining the original features. In this iteration, the documentation has been improved and implementation details have been updated, but the functionality itself remains unchanged.
JEP 494: Module Import Declarations (Second Preview)
The goal of JEP 494 is to introduce a concise way to import all packages exported by a module in Java. This simplifies the reuse of modular libraries without requiring explicit import declarations for each package.
The problem it solves
Currently, developers must add numerous import declarations at the beginning of every source file, increasing code complexity and reducing readability. While modules introduced in Java 9 allow grouping packages under a shared name, there has been no mechanism to concisely import all packages exported by a module.
Proposed solution
JEP 494 introduces module import declarations in the form of import module M;, allowing the on-demand import of all public classes and interfaces from packages exported by module M and its transitive dependencies. This enables developers to access a broad set of classes and interfaces with a single module import declaration, simplifying code and improving readability. For instance, import module java.base; provides access to all packages exported by the java.base module, such as java.util and java.nio.file.
Code example:
import module java.base;
public class Example {
public static void main(String[] args) {
List<String> list = List.of("apple", "banana", "cherry");
Path path = Path.of("/example/path");
// ...
}
}
In the example above, the import module java.base; declaration provides access to the List class from the java.util package and the Path class from the java.nio.file package without requiring individual package imports.
History and previous iterations: Module import declarations were first proposed as a preview feature in JEP 476 (JDK 23). The second iteration removes the restriction that prevented modules from declaring a transitive dependency on the java.base module and updates the java.se module declaration to require java.base transitively. With these changes, importing the java.se module now imports the entire Java SE API on demand.
Additionally, on-demand type import declarations are now allowed to override module import declarations.
JEP 495: Simple Source Files and Instance Main Methods (Fourth Preview)
JEP 495 simplifies writing first programs in Java by allowing the creation of simple source files without requiring class definitions and permitting the definition of main methods as instance methods, without the need for the static modifier. This makes Java code more concise and readable, especially for beginner programmers, easing the learning process.
Problem addressed: Traditionally, even the simplest programs in Java required defining a class and a static main method, adding unnecessary complexity for beginners. For example, a "Hello, World!" program must be written as:
public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World!");
}
}
This structure requires beginners to understand concepts like classes, access modifiers, and array arguments, which can be overwhelming in the early stages of learning.
Solution:
JEP 495 allows the creation of simple source files where the main method can be defined directly without an enclosing class. Additionally, helpful methods for console input and output are automatically imported in such simple files, enabling the use of a more compact form.
For more advanced programs requiring data structures or I/O operations, standard APIs outside the java.lang package are automatically imported in simple source files.
Code Example:
void main() {
var list = List.of("apple", "banana", "cherry");
for (var fruit : list) {
println(fruit);
}
}
In the example above, thanks to automatic imports, the List class can be used without manually importing the java.util package.
JEP history and previous iterations: The functionality described in JEP 495 was previously proposed in preview versions through JEP 445 (JDK 21), JEP 463 (JDK 22), and JEP 477 (JDK 23). In each iteration, the functionality was refined and adapted based on user experience and feedback. In this fourth preview version, new terminology was introduced, and the title was updated (again; remember Implicit Classes?), but the functionality itself remained unchanged, allowing for further user feedback and refinement.
JEP 499: Structured Concurrency (Fourth Preview)
JEP 499 simplifies concurrent programming by introducing an API for structured concurrency. Structured concurrency treats groups of related tasks running in different threads as a single unit of work, streamlining error handling, task cancellation, reliability, and observability.
Problem addressed:
Traditional approaches to concurrent programming in Java, based on ExecutorService and Future, allow parallel execution of tasks but do not provide a structure for the relationships between parent and child tasks. This lack of structure can lead to issues such as thread leaks, improper task cancellation, or difficulty diagnosing errors. For example, if one of several parallel tasks fails, the others might continue running, causing undesired side effects.
Solution:
JEP 499 introduces StructuredTaskScope, which enables grouping related tasks into a single unit of work. This allows multiple tasks to be run in parallel and waited upon in an orderly fashion. If one task fails, StructuredTaskScope automatically cancels the others, preventing thread leaks and simplifying error handling.
Additionally, structured concurrency improves the observability of concurrent code, enabling better monitoring and diagnosis of issues.
Code Example:
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    StructuredTaskScope.Subtask<String> user = scope.fork(() -> findUser());
    StructuredTaskScope.Subtask<Integer> order = scope.fork(() -> fetchOrder());
    scope.join();           // Waits for all tasks to complete
    scope.throwIfFailed();  // Throws an exception if any task failed
    String theUser = user.get();   // fork(...) returns a Subtask, not a Future
    int theOrder = order.get();
    return new Response(theUser, theOrder);
}
In the example above, StructuredTaskScope.ShutdownOnFailure creates a scope where two tasks are run in parallel: findUser() and fetchOrder(). The scope.join() method waits for both tasks to complete, and scope.throwIfFailed() throws an exception if any of the tasks failed.
This integrated approach to error handling and task cancellation simplifies code and improves its reliability.
JEP history and previous iterations:
Structured concurrency was first proposed in JEP 428 and introduced as an incubating API in JDK 19. It was then re-incubated in JDK 20 by JEP 437 with a minor update related to inheriting scoped values. The first preview appeared in JDK 21 through JEP 453, where the StructuredTaskScope::fork method was changed to return Subtask instead of Future. Subsequent previews were included in JDK 22 (JEP 462) and JDK 23 (JEP 480) with no significant changes. JEP 499 now proposes the fourth preview in JDK 24, also without major updates.
Uff, that was long. But it only proves how powerful the new JDK release will be (and most of it will carry over into the next Long Term Support release). The only thing I'm missing is any development material about Valhalla. But well, maybe in the fall.
And that was all the 24 new JEPs for JDK 24, folks!