Friday, November 6, 2009

How small is your VM?

We all use Intel. Boo. But that's just the way it is. (OK, so some use ARM, but I don't think what I'm about to say will break on ARM at all.)

I'm developing on an Atom (and about to tear my hair out - NetBeans is a write-off, but Eclipse is OK), and if you look at the tech specs for the Atom you'll see its level-one caches are 24k and 32k (data and instruction respectively). The Atom's big brothers have 32k for each - so not that much more.

If one could keep the VM down to less than this, it would be hella fast. If one could keep the VM and important native calls to less than this, it would be hella hella fast.

Another advantage would be that the amount of code to debug and so on would be small as well.

This throws up an interesting question though - how would JITting hurt this? Whenever a call into JITted code was made, part of the level-one instruction cache would be evicted, so going to and returning from the JITted code would be costly; heuristics would be needed to work out just how costly.
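A minimal sketch of the kind of heuristic I mean - the names, the threshold and the cost model are all mine and purely hypothetical:

#include <stddef.h>
#include <stdint.h>

typedef struct Method {
    uint32_t   call_count;    /* times the interpreter has executed it       */
    void     (*jitted)(void); /* NULL until the JIT has produced native code */
} Method;

/* Only compile a method once it has run often enough that the win should
 * outweigh the cost of pushing the interpreter out of the L1 icache. */
enum { JIT_THRESHOLD = 1000 };

static int should_jit(const Method *m)
{
    return m->jitted == NULL && m->call_count >= JIT_THRESHOLD;
}

Where the threshold actually sits would come out of exactly the kind of profiling mentioned below.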

So, is it possible to write the VM in 24k?

I simply don't know, not having written Intel/C code in a long time, but my gut feeling is that it is. Keeping things simple by keeping the number of different opcodes to a minimum (see the last post) may result in longer programs, but those programs will hopefully execute faster.
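To make that concrete, here is a sketch of the sort of tiny dispatch loop I have in mind - the opcode set is made up for illustration, not a design - where the whole interpreter core compiles down to a few hundred bytes of machine code:

#include <stdint.h>
#include <stdio.h>

/* A deliberately tiny, hypothetical opcode set: fewer opcodes means a
 * smaller dispatch loop, at the price of longer bytecode programs. */
enum { OP_PUSH, OP_ADD, OP_JNZ, OP_PRINT, OP_HALT };

static void run(const int32_t *code)
{
    int32_t stack[256];
    int sp = 0;                 /* next free stack slot      */
    const int32_t *pc = code;   /* bytecode program counter  */

    for (;;) {
        switch (*pc++) {
        case OP_PUSH:  stack[sp++] = *pc++;                    break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];       break;
        case OP_JNZ:   pc = stack[--sp] ? code + *pc : pc + 1; break;
        case OP_PRINT: printf("%d\n", stack[--sp]);            break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* Compute 2 + 3 and print it. */
    static const int32_t prog[] = {
        OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT
    };
    run(prog);
    return 0;
}

With only a handful of opcodes the switch stays small enough for the hot loop to sit comfortably in the instruction cache; every opcode we add grows that loop.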

Once the kernel is written, if there is any room left we can add extra opcodes to make things faster - but only after profiling of course!
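As a rough way of checking how much room is left, something like this (assuming a GNU toolchain on Linux, where the linker provides the __executable_start and etext symbols) reports how much of a 24k budget the code has eaten:

#include <stdio.h>

/* Linker-provided symbols on GNU/Linux: start of the loaded image and end of
 * the text (code) segment.  Crude - it counts everything linked in, not just
 * the interpreter's hot loop - but good enough as a sanity check. */
extern char __executable_start, etext;

int main(void)
{
    long text_bytes = &etext - &__executable_start;
    printf("text: %ld bytes (budget 24576, %ld left)\n",
           text_bytes, 24576 - text_bytes);
    return 0;
}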

The final point to make is that the bytecode we execute now lives in the data cache (level one data is still only 24k/32k), and our JIT code essentially becomes a software replacement for the Intel branch predictors.
