Each graph plots performance over time for a number of JVMs. Click on a graph to enlarge it, and click on the legend for version details for each VM.

The jar used for the 2011-03-04-Fri-14-52 execution is here and the sanity runs for each VM and benchmark can be viewed here. OS details are here, and hardware details are here. Methodology notes appear at the end of this page.

[Graph grid: one plot per benchmark (antlr, bloat, chart, eclipse, fop, hsqldb, jython, luindex, lusearch, pmd, xalan) for each of the JVMs ibm-java-i386-60, jdk1.5.0_12, jdk1.6.0_14, jikesrvm-3.1.0, jikesrvm-svn, and jrmc-3.1.0-1.6.0.]
Methodology: Every 12 hours or so, the DaCapo suite is checked out from svn and built, as are a number of JVMs. After correctness testing (reported elsewhere), each JVM (both those built and those binary-released) executes each of the benchmarks in the latest DaCapo release and each of the benchmarks in the svn head. Each benchmark is run for 10 iterations to allow for warm-up of the JVM. In the graphs on this page we report times for the 1st, 3rd and 10th iterations. To minimize bias due to systematic disturbance, the JVMs are iterated in the inner loop. Once all benchmarks in both the release and svn versions of DaCapo have been run, a time check is made, and if time remains in the 12-hour period, another complete set of performance tests is run. Generally we perform about 4 or 5 complete runs in a 12-hour period. Each graph shows a dot for each of the 4 or 5 results at a given time period, and plots the mean of those results. It is beyond the scope of these experiments to perform cross-JVM heap-size sensitivity comparisons, so all benchmarks are run with the same minimum and maximum heap sizes for all VMs (click on the VM name in the legend at top to see the command-line arguments used).
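As a rough illustration of the driver loop described above (not the actual harness used for these runs), the following Python sketch iterates benchmarks in the outer loop and JVMs in the inner loop, and repeats complete passes until the 12-hour window closes. The JVM paths, the heap sizes, and the DaCapo "-n" iteration flag are illustrative assumptions.

#!/usr/bin/env python3
# Illustrative sketch only: paths, heap sizes and flags are assumptions.
import subprocess, time

JVMS = {                          # hypothetical install paths for the VMs under test
    "jdk1.6.0_14":  "/opt/jdk1.6.0_14/bin/java",
    "jikesrvm-svn": "/opt/jikesrvm-svn/bin/rvm",
}
BENCHMARKS = ["antlr", "bloat", "chart", "eclipse", "fop", "hsqldb",
              "jython", "luindex", "lusearch", "pmd", "xalan"]
DACAPO_JAR = "dacapo.jar"          # release or svn-head build of the suite
HEAP = ["-Xms512m", "-Xmx512m"]    # same min/max heap for every VM (illustrative sizes)
ITERATIONS = 10                    # warm-up; the 1st, 3rd and 10th iterations are reported

def one_pass():
    """One complete pass: benchmarks in the outer loop, JVMs in the inner loop
    to minimize bias from systematic disturbance."""
    for bm in BENCHMARKS:
        for vm, java in JVMS.items():
            subprocess.run([java, *HEAP, "-jar", DACAPO_JAR,
                            "-n", str(ITERATIONS), bm], check=False)

deadline = time.time() + 12 * 3600   # roughly a 12-hour window
while time.time() < deadline:         # typically 4 or 5 complete passes fit
    one_pass()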