What Is a JIT Compiler?

This is a frequently asked question in many technical interviews. Let’s explore the answer together and learn which JIT compiler parameters are available for tuning.

In simple terms, the JIT (Just-In-Time) compiler compiles bytecode at runtime for methods that are called frequently. As a result, we get native code that is executed directly by the processor, improving performance.
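To see this warm-up effect in action, here is a minimal, self-contained sketch (the class and method names are my own, for illustration, not from the article's repository): the same CPU-bound method is timed before and after many invocations, and on a HotSpot JVM the later call is typically much faster once the method has been JIT-compiled.

```java
// JitWarmupDemo.java -- illustrative only; exact timings vary by JVM and hardware.
public class JitWarmupDemo {

    // A small CPU-bound method the JVM will consider "hot" after many calls.
    static long fib(int n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        // First call: likely still running in the interpreter.
        long t0 = System.nanoTime();
        fib(30);
        long cold = System.nanoTime() - t0;

        // Invoke repeatedly so the method crosses the JIT compilation threshold.
        for (int i = 0; i < 10_000; i++) {
            fib(15);
        }

        // Later call: usually executes as compiled native code.
        long t1 = System.nanoTime();
        fib(30);
        long warm = System.nanoTime() - t1;

        System.out.printf("cold: %d ns, warm: %d ns%n", cold, warm);
        // Run with -XX:+PrintCompilation to watch fib() get compiled.
    }
}
```

Note that this is exactly the kind of measurement JMH exists to do properly; a hand-rolled timing loop like this is only good enough to make the warm-up effect visible.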

It’s also important to note that the level of applied optimizations varies, which directly affects CPU and RAM consumption in our Java application. However, the trade-off is usually worth it, as compiled and optimized code runs much faster.

The HotSpot JVM (Oracle/OpenJDK) includes two JIT compilers, C1 and C2. According to the OpenJDK documentation:

C1 Compiler
A fast, lightly optimizing bytecode compiler.

C2 Compiler
A highly optimizing bytecode compiler, also known as 'opto'.

Tiered Compilation: The Best of Both Worlds
By default, the JVM uses Tiered Compilation, which combines C1 and C2. This balances startup time and peak performance:

C1 compiles methods quickly for faster startup.
C2 recompiles "hot" methods with aggressive optimizations for maximum speed.
However, in some low-latency applications (e.g., high-frequency trading, real-time systems), Tiered Compilation is disabled, as transitioning from C1 to C2 can introduce unwanted pauses.
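You can watch these tier transitions yourself with the `-XX:+PrintCompilation` flag, which logs every compilation event (the tier shows up as a number from 1 to 4 in each log line). A quick way to try it without any application is against `java -version`:

```shell
# Log JIT compilation events; the number after the timestamp is the tier (1-4).
java -XX:+PrintCompilation -version

# The same, with tiered compilation disabled (C2 only),
# as some low-latency setups do.
java -XX:-TieredCompilation -XX:+PrintCompilation -version
```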

JIT Benchmarking Results
I have prepared a GitHub repository with a working JMH setup:
GitHub Repository: https://github.com/stjam/fib-demo

The Readme.md contains jvm run commands for various JIT compiler configurations. Below are my results:
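For reference, the four configurations compared below can be launched roughly like this (the jar path follows the commands shown later in this article; check the repository's Readme.md for the exact invocations):

```shell
# No JIT: fully interpreted
java -jar target/benchmark.jar -jvmArgs='-Xint'

# C1 only: stop tiered compilation at level 1
java -jar target/benchmark.jar -jvmArgs='-XX:+TieredCompilation -XX:TieredStopAtLevel=1'

# C2 only: disable tiered compilation
java -jar target/benchmark.jar -jvmArgs='-XX:-TieredCompilation'

# Default tiered mode: no extra flags
java -jar target/benchmark.jar
```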

Performance Comparison: C1 vs C2 vs Tiered vs No JIT

1️⃣ Without JIT (-Xint)

Performance: 18,821 ops/s
Error: ± 0.055 ops/s (almost no variation)
Explanation:
JVM runs fully interpreted, with no JIT optimizations.
Slowest performance due to lack of compilation.
Minimal error, since JVM execution is stable without optimizations.

2️⃣ Only C1 (-XX:+TieredCompilation -XX:TieredStopAtLevel=1)

Performance: 376,382 ops/s
Error: ± 53,193 ops/s
Explanation:
C1 provides fast compilation but lacks deep optimizations.
Faster than no JIT, but not as optimized as C2 (no aggressive inlining or loop unrolling).
Higher error margin due to dynamic optimizations.

3️⃣ Only C2 (-XX:-TieredCompilation)

Performance: 416,779 ops/s
Error: ± 4,059 ops/s
Explanation:
C2 aggressively optimizes methods after warm-up.
Best peak performance after JVM fully optimizes hot code.
Low error margin (execution is stable after optimizations).

4️⃣ Tiered Compilation (Default JVM Mode)

Performance: 406,206 ops/s
Error: ± 44,724 ops/s
Explanation:
JVM first compiles methods with C1, then recompiles them with C2 for full optimization.
Almost as fast as C2-only mode, since hot methods eventually get optimized.
Higher error margin, because JVM dynamically decides when to recompile methods.

Conclusion

✔️ If you need fast startup, use C1-only (-XX:TieredStopAtLevel=1).

✔️ If you want maximum peak performance, use C2-only (-XX:-TieredCompilation).

✔️ For the best balance, leave JVM in its default Tiered mode.

❌ Disabling JIT completely (-Xint) is only useful for debugging—it should not be used in production.

Experiments Worth Trying
Want to fine-tune JVM performance? Try these JIT experiments:

1️⃣ Force Early Compilation (-XX:CompileThreshold=100)

java -jar target/benchmark.jar -jvmArgs='-XX:CompileThreshold=100'

Forces JIT compilation after roughly 100 method invocations, which can shorten warm-up at the cost of extra compilation work. Note that with Tiered Compilation enabled (the default), this flag is largely ignored in favor of the tier-specific thresholds, so combine it with -XX:-TieredCompilation to see its effect.

2️⃣ Restrict Method Inlining (-XX:MaxInlineSize=0)

java -jar target/benchmark.jar -jvmArgs='-XX:MaxInlineSize=0'

Restricts the inlining of small methods, making execution slower but useful for analysis. Note that frequently called methods may still be inlined via the separate -XX:FreqInlineSize threshold; -XX:-Inline disables inlining entirely.

3️⃣ Encourage Inlining of Larger Methods (-XX:MaxInlineSize=1000)

java -jar target/benchmark.jar -jvmArgs='-XX:MaxInlineSize=1000'

Raises the bytecode-size threshold for inlining so that larger methods become candidates, potentially improving performance at the cost of more compiled code.
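To verify what these inlining flags actually change, HotSpot can log its inlining decisions. PrintInlining is a diagnostic flag, so it has to be unlocked first:

```shell
# Log every inlining decision the JIT makes, including why a method was rejected.
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining -jar target/benchmark.jar
```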

Looking Ahead: Graal JIT (JVM's Future)
Since Java 9 introduced the JVM Compiler Interface (JVMCI), the Graal compiler has been available as an alternative to C2. Written in Java itself, it enables advanced optimizations such as more aggressive escape analysis and partial evaluation, and it powers GraalVM's Ahead-Of-Time (AOT) Native Image compilation, which is popular for cloud applications.

Tip: The Just-In-Time (JIT) compiler enhances Java application performance by converting bytecode into optimized native machine code at runtime. However, different JIT compilation strategies impact startup time, CPU usage, and peak performance. If your application requires fast startup, using C1-only compilation might be beneficial, whereas applications needing maximum performance should leverage C2. The default Tiered Compilation provides a balance between both, but knowing when and how to adjust these settings can make a significant difference in runtime efficiency.
Caution: While JIT tuning can improve performance, modifying JVM parameters too aggressively may lead to unexpected slowdowns. Lowering the compilation threshold forces methods to compile sooner, but excessive compilations can increase CPU overhead. Disabling method inlining can be useful for debugging but may negatively impact execution speed. On the other hand, forcing excessive inlining can lead to higher memory consumption and degrade performance under certain conditions. Before making changes to JIT settings, extensive testing should be conducted to ensure that the optimizations actually improve performance rather than introducing new inefficiencies.
Note: Unlike fully interpreted execution, which runs consistently but slowly, JIT compilation introduces dynamic optimizations that can lead to performance variations. As the JVM identifies frequently used methods and recompiles them for efficiency, benchmark results may fluctuate, especially in Tiered Compilation mode. When measuring performance, it’s important to allow for a warm-up period so that the JIT compiler has time to optimize code execution before meaningful benchmarks are collected.

In the next article, we’ll compare Graal JIT vs C2 and analyze benchmark results! Stay tuned.

Help Spread the Knowledge!
If you found this article useful, please:
Star the repository: https://github.com/stjam/fib-demo
Share it with your colleagues and friends!

The more stars ⭐ and shares, the better!

References

https://openjdk.org/groups/hotspot/docs/HotSpotGlossary.html
https://www.ibm.com/docs/en/sdk-java-technology/8?topic=reference-jit-compiler

Comment: Great insights, Boris! Quick question: how much of an impact does adjusting -XX:CompileThreshold have on memory usage? Looking forward to the Graal JIT comparison!

Reply: Thanks for the comment! Lowering the -XX:CompileThreshold value causes more methods to be JIT-compiled sooner. This can lead to an increase in the amount of compiled code stored in the code cache, which in turn raises memory usage. That's why it's essential to also manage the code cache size to balance performance and memory consumption. Looking forward to sharing more real-world numbers and insights on the Graal JIT comparison!
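Following up on that code-cache point, the relevant flags look like this (the size and jar name here are illustrative, not recommendations):

```shell
# Cap the code cache and print its usage statistics when the JVM exits.
java -XX:ReservedCodeCacheSize=256m -XX:+PrintCodeCache -jar app.jar
```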
