The JIT compiles a method after roughly 10,000 invocations (you can watch this with -XX:+PrintCompilation), and the threshold is configurable with -XX:CompileThreshold. I've read that the reason not to lower that threshold is that the JIT's optimization might be wrong, or that you'd end up optimizing infrequently used code. I have a few questions about this area:
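For reference, this is a minimal sketch of how those HotSpot flags are passed on the command line (`app.jar` is a placeholder for your own application):

```shell
# HotSpot-specific flags: log JIT compilation events and set the
# invocation-count threshold. 10000 is the classic default for the
# non-tiered server compiler; with tiered compilation (the default on
# modern JDKs) separate tier thresholds apply instead.
java -XX:+PrintCompilation -XX:CompileThreshold=10000 -jar app.jar
```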
- I think the wrong optimization (e.g. the speculation behind on-stack replacement) is due to lazy class loading of polymorphic methods. After (I think) three implementations have been seen, the JVM falls back to an index-table lookup; of course, speed suffers as you add more polymorphic implementations. Is polymorphism the only cause, or the major cause, of wrong JIT optimization? If not, what are the others?
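To make the polymorphism point concrete, here is a sketch (class names are hypothetical) of a virtual call site that the JIT profiles: it stays monomorphic, and easy to inline, if only one implementation ever flows through it, but degrades toward a table lookup once several do:

```java
import java.util.List;

interface Shape { double area(); }

class Square implements Shape {
    public double area() { return 1.0; }
}
class Circle implements Shape {
    public double area() { return Math.PI; }
}
class Triangle implements Shape {
    public double area() { return 0.5; }
}

public class CallSiteDemo {
    // The s.area() call below is the "call site" the JIT profiles.
    // Pass only Squares and it is monomorphic; mix in a third
    // implementation and it becomes megamorphic.
    static double sumAreas(List<Shape> shapes) {
        double total = 0.0;
        for (Shape s : shapes) {
            total += s.area();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(
            sumAreas(List.of(new Square(), new Circle(), new Triangle())));
    }
}
```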
- What if I could force loading of all classes at startup, so the JVM can build such index tables up front? Isn't it better to do the overall optimization up front? What's wrong with an optimize-everything approach, and what's the cost, if my goal is speed alone?
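Forcing class loading at startup is straightforward; a minimal sketch (the class names passed in are placeholders for your own application classes) might look like this. Note that this only pays the classloading cost early; by itself it does not trigger JIT compilation:

```java
// Eagerly load (and initialize) a known list of classes at startup
// so the classloading cost is not paid on the hot path later.
public class Preloader {
    // Returns the number of classes successfully loaded.
    static int preload(String... classNames) {
        int loaded = 0;
        for (String name : classNames) {
            try {
                // true = run static initializers now, not on first use
                Class.forName(name, true, Preloader.class.getClassLoader());
                loaded++;
            } catch (ClassNotFoundException e) {
                System.err.println("could not preload " + name);
            }
        }
        return loaded;
    }

    public static void main(String[] args) {
        // placeholder class names; substitute your own hot classes
        System.out.println(preload("java.util.HashMap", "java.util.ArrayList"));
    }
}
```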
- Comparing with C++: if my sources are closed, meaning no third-party libs (just like that low-latency system), is there a way to force optimization up front to bring performance closer to that of C++?
- Peter Lawrey mentioned in his Oracle Magazine article that you can kick in the JIT by artificially running enough test data through the system in production to meet the threshold. Doing so seems dangerous in a production environment; one mishap and you're fired. There must be a better way to do it with minimal risk.
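The lower-risk variant of that idea is usually to run the warm-up during startup, before the service accepts real traffic, rather than against production data. A hedged sketch, where `processOrder` is a stand-in for your real hot method:

```java
public class Warmup {
    // Placeholder for real business logic on the hot path.
    static long processOrder(long id) {
        return id * 31 + 7;
    }

    // Exercise the hot path with synthetic input so the compile
    // threshold is crossed during startup, not under live load.
    static void warmUp(int iterations) {
        long sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += processOrder(i);
        }
        // Use the result so the loop isn't dead-code eliminated.
        if (sink == 42) System.out.println(sink);
    }

    public static void main(String[] args) {
        warmUp(20_000);   // comfortably above a 10,000 threshold
        System.out.println("warm");
    }
}
```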
- Any good reference on this topic (covering both Java and C++) would be greatly appreciated.
Update on #3: I never expect Java to be faster than C++; I just want it to be closer.
is there a way to force optimization upfront to increase performance to be better than that of C++?
Nope. It's a fundamental consequence of the semantics the Java specification enforces, and of the way the JVM ecosystem works, that it will be slower than a C++ implementation, assuming equivalent quality of implementation and code. Have a look at my existing answer on this subject for more details.