One of the secrets of modern computing is that, despite the many wonderful advances in hardware, we still frequently need to make software go faster. To create the bigger, more versatile software that society needs, we use ever more 'high level' (i.e. more human friendly) programming languages, but programs written in such languages tend to be rather slow. 'Optimising compilers' are the key to regaining lost performance: they take in high level programs, remove redundant calculations in ways that no human could manage by hand, and produce faster, computer friendly programs as output. Programs can easily become 10, or sometimes even 100, times faster as a result.
JIT compilers are amongst the most complex bits of software ever written. Perhaps surprisingly, there is no book on how to write one; instead, there is a substantial folklore. The most important bit of folklore is what I'll call the 'assumption of warmup': after JIT compilation has completed, we hit the nirvana of 'peak performance' (i.e. after a slow start, the program ends up running fast).
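To make the assumption of warmup concrete, here is a minimal sketch of the standard way it is probed: timing successive in-process iterations of the same workload. The `benchmark` function below is a hypothetical stand-in for a real benchmark; the point is the shape of the timing series, not the workload itself.

```python
import time

def benchmark():
    # A hypothetical workload; any deterministic computation will do.
    return sum(i * i for i in range(100_000))

def in_process_timings(iterations):
    """Time successive in-process iterations of the same workload.

    Under the assumption of warmup, the earliest timings are slow
    (interpretation, then JIT compilation), after which times settle
    at a steady, faster 'peak performance'.
    """
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        benchmark()
        timings.append(time.perf_counter() - start)
    return timings

timings = in_process_timings(200)
# If warmup holds, the mean of the last 50 iterations should be no
# higher than the mean of the first 50 (and is usually far lower).
early_mean = sum(timings[:50]) / 50
late_mean = sum(timings[-50:]) / 50
```

On a JIT-compiled runtime the folklore predicts `late_mean` well below `early_mean`; whether that actually happens is, of course, exactly what our experiment set out to test.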
In our EPSRC-funded work using JIT compilers we had often noticed performance anomalies, without really being able to spot a pattern. We decided to perform a quick experiment to see how long different JIT compilers take to reach peak performance: at first all we did was to measure performance for longer than anyone had done before. The data we got back after a couple of weeks was unexpected: it showed that all of the most widely used JIT compilers (including those in popular web browsers) not only took ages to complete JIT compilation, but that they sometimes got slower over time; and often the same benchmark on the same JIT compiler would sometimes get faster and sometimes get slower over time. We were convinced that we'd done something silly, or that an external factor was at play, so we spent three years designing the most pernickety experiment on JIT compiler performance ever undertaken.
The paper we published (and the accompanying software) shows that the assumption of warmup - the basis upon which all JIT compilers rest - holds in only around 40% of cases. Needless to say, this is not what our field expected, or wanted to hear! So deep-seated are these problems that none of the major JIT compilers has yet been fixed.
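The published analysis used rigorous statistical changepoint detection to decide whether each run warmed up, slowed down, or did neither; as a crude illustration of the idea (not the paper's actual method), one can compare the mean of the first and last segments of a timing series, with a tolerance for noise. The segment size and threshold below are arbitrary choices for the sketch.

```python
def classify(timings, seg=50, threshold=0.05):
    """Crudely classify a series of per-iteration timings.

    A toy stand-in for changepoint analysis: compare the mean of the
    first and last `seg` timings, with a 5% tolerance for noise.
    """
    early = sum(timings[:seg]) / seg
    late = sum(timings[-seg:]) / seg
    if late < early * (1 - threshold):
        return "warmup"    # got faster over time
    if late > early * (1 + threshold):
        return "slowdown"  # got slower over time
    return "flat"          # no clear change over time

# e.g. a run whose per-iteration time halves classifies as warmup:
# classify([2.0] * 50 + [1.0] * 50) == "warmup"
```

Even this toy version makes the surprising finding easy to state: run it over enough (benchmark, JIT compiler) pairs and, per our results, only around 40% come back as "warmup".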
This might sound depressing - and, I suppose, in some ways it is - but we prefer to see it as an opportunity. For example, at King's College London, we've started an industrially funded project to see if we can use the knowledge we've gained to fix one open-source JIT compiler: we know we won't be able to fix everything, but we think we can fix enough things to be useful.
Ultimately, there's a bigger opportunity: JIT compilers are widely used today, even though we've shown that they don't work as well as people thought. Clearly they're already useful, but what I think we've shown is that they could be more useful. We've spent the last few months investigating how we could create a JIT compiler without the problems we found, and we now have some compelling ideas as to how we might do so. Realising those ideas won't be easy, but I'm hopeful that in the future we'll be able to make a lot of software run a lot faster!
The work reported on this blog was done in conjunction with Edd Barrett, Carl Friedrich Bolz-Tereick, Rebecca Killick and Sarah Mount.