Rakudo on JVM progress update, and some questions answered

It’s time for another progress update on the ongoing JVM work. Last time I posted here, we’d reached the point of having a self-hosting NQP ready to merge into the master branch of the NQP repository. That has now happened, so the May release of NQP will come with support for running on the JVM (note that this does not mean the May release of Rakudo will come with this level of capability, and a JVM-based Star release with modules is further off still!). In this post, I will discuss some of the things that have happened in the last few weeks and also try to answer some of the questions left in the comments last time.

The Rakudo Port

With NQP pretty well ported (there are some loose ends to tie up, but it’s pretty capable), the currently ongoing step is porting Rakudo. At a high level, Rakudo breaks down into:

  • The core of the compiler itself: the grammar (for parsing), actions (which assign semantics to the things we parsed), world (which takes care of the declarative aspects of programs), optimizer (tries to cheat without getting caught) and a few other small pieces to support all of this. This is written in NQP.
  • The Perl 6 MOP (meta-object protocol) implementation, which defines what classes, roles, enums, subsets and so forth mean (a small example of what it exposes follows this list). This is also written in NQP.
  • The bootstrap, which uses the MOP to piece together various of the core Perl 6 types. It does Just Enough to let us start writing Perl 6 code to define the rest of the built-ins. Also written in NQP.
  • The core setting, which is where the built-in types, operators and methods live. This is written in Perl 6.
  • A chunk of per-VM code that does lower-level or performance-sensitive things.
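
To make the MOP bullet a little more concrete, here is a small example (the Dog class is just made up for illustration) of the kind of introspection the meta-object protocol provides to ordinary Perl 6 code:

class Dog {
    method bark() { say "woof" }
}
say Dog.^name;       # Dog
say Dog.^methods;    # the methods the class declares, e.g. bark
say Dog.^mro;        # the method resolution order: Dog, Any, Mu
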

The first three of these need…an NQP compiler. And, it turns out that we have one of those hosted on the JVM these days. But could it handle compiling the Perl 6 grammar? It turns out that, after some portability improvements to the various bits of Perl 6 compiler code, the answer was a resounding “yes”. In fact, the grammar was compiled down to JVM bytecode sufficiently accurately that I didn’t encounter a single parse failure on the entire CORE.setting (though there were lots of other kinds of failures that took a lot of work – more on that in a moment). As for the rest of the compiler code, there were bits of lingering Parrot-specific stuff throughout it, but with a little work they were abstracted away. By far the hardest was the BOOTSTRAP, which actually runs a huge BEGIN block to do its setup work and then pokes the results into the EXPORT package. This is kinda neat, as it means the setup work is done once when building Rakudo and then serialized. Anyway, on to the next pieces.

Compiling the Perl 6 setting depends on the Perl 6 compiler working. Since the first thing the setting does is use the bootstrap, which in turn uses the MOP, it immediately brings all of the above three pieces together. While we talk about “compiling” the setting, there’s a little more to it than that. Thanks to various BEGIN time constructs – such as traits, constant declarations and, of course, BEGIN blocks – all of which show up in the CORE setting – we actually need to run chunks of the code during the compilation process. That’s right – we run bits of the file we’re in the middle of compiling while we’re compiling it. Of course, this will be nothing new to Perl folks – it’s just most Perl programmers probably don’t worry about how on earth you implement this. :-) Thankfully, it’s a solved problem in the NQP compiler toolchain, and the stuff that makes NQP BEGIN blocks work largely handles the Rakudo ones too.
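
As a tiny, made-up illustration of the kind of compile-time constructs in question, each of the first three lines below causes code to run while the compiler is still part way through the file:

constant answer = 2 * 21;            # the right-hand side is evaluated at compile time
BEGIN say "this prints during compilation";
sub double($x) is export { $x * 2 }  # applying the 'is export' trait runs code now
say answer;                          # 42, printed at run time as usual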

Anyway, all of this means that even getting the setting to finish the parse/AST phase requires doing enough to run the various traits and so forth. And that in turn brings in the fifth piece: the per-VM runtime support. This includes signature binding, container handling and a few other bits. Thankfully, it no longer involves multiple dispatch, since that is written in NQP these days (apart from some caching infrastructure, which is shared with NQP’s multiple dispatch, and thus was already done). Getting through the parse/AST phase of the setting didn’t need all of the runtime support to be implemented, but it did require a decent chunk of it. Of course, at the start everything is missing, so getting from line 1 to line 100 was O(week), from 100 to 1000 O(day), and each thousand or so from there O(hour). It’s 13,000 or so lines in all.
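
As a rough sketch of my own (not code from the setting), here is the kind of everyday Perl 6 that leans on that runtime support: binding the calls below means type-checking the arguments, filling in a default and collecting the slurpy, while the assignments at the end go through container handling.

sub greet(Str $name, Int $times = 1, *@extra) {
    say "$name x $times, with {+@extra} extra args";
}
greet('JVM');               # the default for $times kicks in
greet('JVM', 3, 'a', 'b');  # the slurpy *@extra collects 'a' and 'b'

my $x = 42;                 # $x is a Scalar container holding an Int
$x = 'now a Str';           # assignment swaps the value inside the container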

The parse/AST step is just the first (though biggest) phase of compiling Perl 6 code, however. Next comes the optimizer, followed by code generation. In theory, the optimizer could have been bypassed. I planned to do that, then discovered it basically worked for the set of optimizations that didn’t need signature binding to participate in the analysis, so I left it in. Code generation is part of the backend, and so is shared with NQP. So it shoulda just worked, right? Well, yes, apart from the fact that code generation is where nqp:: ops get resolved to lower-level stuff. And Perl 6 uses a lot more of them than NQP does. Note that not every op has to be mapped up front; the JVM is late bound enough that it won’t complain unless you actually hit a code path that tries to use something that is not yet implemented. In reality I did a bit of both: implementing those that would surely be hit soon or that were trivial, and leaving some others for later.
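
To give a feel for what those ops look like, here is a little sketch, assuming the use nqp pragma (which lets ordinary Perl 6 code reach ops that are normally the preserve of the compiler and the CORE setting):

use nqp;                    # opt in to nqp:: ops from regular Perl 6 code
say nqp::chars("hello");    # low-level string length op: 5
say nqp::add_i(40, 2);      # low-level native integer addition: 42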

So, some time yesterday, I finally got to the point of having a CORE.setting.class. Yup, a JVM bytecode file containing the compiled output of near enough the entire Perl 6 core setting. So, are we done yet? Oh, no way…

Today’s task was trying to get the CORE setting to load. How hard could that be? Well, it turns out that it does a few tasks at startup, most of which hit upon something that wasn’t in place yet. Typically it was more of the runtime support, though in some cases it was unimplemented nqp:: ops. Of course, there were a handful of bugs to hunt down in various places too.

Anyway, finally, this evening, I managed to get the CORE setting to load, at which point, at long last, I could say:

perl6 -e "say 'hello world, from the jvm'"
hello world, from the jvm

Don’t get too excited just yet. It turns out that many other simple programs will happily explode, due to hitting something missing in the runtime support. There’s still plenty of work to go yet (to give you an idea, trying to say a number explodes, and a for loop hits something not yet implemented), but this is an important milestone.
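
For instance, at the time of writing the first of these one-liners runs, while the other two blow up on missing pieces (the exact failure modes will keep shifting as more gets implemented):

perl6 -e "say 'hello world, from the jvm'"
perl6 -e "say 42"
perl6 -e "for 1..3 { .say }"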

Interop

A couple of the comments in response to my last post asked about interop with Java. There are two directions to consider here: using Java libraries from Perl 6, and using Perl 6 code from Java. Both should be possible, with some marshalling cost, which we’ll no doubt need to spend some time figuring out how to get cheap enough that it’s not a problem. It may well be that invokedynamic is a big help here.

The using-Java-from-Perl 6 direction can probably be made fairly convenient by virtue of the fact that Perl 6 has a nice, extensible MOP. The fact that the object you’re making a call on lives in Java land can be just a detail; we can hide it behind the typical method call syntax, and should even be able to populate a 6model method cache with delegation methods that do the argument mapping. I’m sure there will be plenty of interesting options there. I suspect we’ll want to factor it a little like NativeCall – some lower level stuff in the runtime, and some higher level sugar.
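
To be clear about how speculative this is: nothing below exists yet. As a purely hypothetical sketch of where such sugar could end up (the use java pragma and everything else here is invented for illustration), one could imagine something along these lines:

# Hypothetical only: none of this is implemented yet.
use java 'java.util.ArrayList';   # imagined pragma that builds a 6model type for the Java class
my $list = ArrayList.new;         # looks like any other Perl 6 object
$list.add('ohai');                # normal method call syntax; delegation methods do the marshalling
say $list.size;                   # return value marshalled back to a Perl 6 value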

Going the other way will be more “fun”. I mean, on the one hand the marshalling is just “in the other direction”, which we’d need to do for values coming back from Java land anyway. But trying to work out how to make it feel nice from Java land could be trickier. I don’t believe the “.” operator is very programmable, which probably leaves us with string lookups or code-generated proxy thingies. Or maybe somebody will come up with a Really Great Solution that I hadn’t thought of.

My JVM-related Perl 6 dev focus for now will be getting Rakudo to work decently and starting to get Perl 6 modules working on the JVM, but interop with Java land is certainly on the roadmap of things I think should happen. As with all things, I’m delighted to be beaten to it, but will work on it if it goes undone for too long. :-)

Performance?

The first thing to say on this is that it’s too early to have a really good idea. The final pieces of the gather/take transform (which has global consequences) have yet to land, which will certainly have some negative impact and will need to happen soon. At the same time, I’ve been very much focused on making things work on the JVM at all over making them especially clever or optimal. Numerous things can be done in ways that will not only perform better in a naive sense, but that will also be much easier for the JVM’s JIT to do clever stuff with. There are many, many things we will be able to do in this area.
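
For readers who haven’t met gather/take: it’s the Perl 6 construct for producing a sequence of values lazily, with take handing back values from inside arbitrarily nested code. A tiny example:

my @squares = gather for 1..5 { take $_ * $_ };
say @squares;   # 1 4 9 16 25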

Since I only have a sort-of-working Perl 6 compiler, I can’t say that much about Perl 6 performance. The only result I have to share is that the CORE setting parse completes in around a third of the time that it takes on Parrot (noting it’s not only about parsing, but also some code generation and running of stuff). This is not especially great – of course, we need to do better than that – but it’s certainly nice that the starting point, before I really dig into the performance work, is already a good bit faster.

The other result I have is NQP-related. nwc10++ has been doing performance testing of each commit to the NQP JVM work with a Levenshtein benchmark, which is really great in so far as it gives me a rough idea if I accidentally regress (or improve ;-)) something performance-wise. There, we saw a larger win: around 15 times faster than the same program running in NQP on Parrot.

The big negative news on performance is startup time. Part of this is just the nature of the JVM, but I know another enormous part of it is inefficiencies that I can do something about. I’ve plenty of ideas – but again, let’s make things work first.

From Here

Things will be a little quieter from me over the next week and a bit, due to a busy teaching schedule. But we now have a fledgling Rakudo on the JVM, and from here it’s a matter of making it gradually more capable: first passing the sanity tests, then moving on to the spectests and the ecosystem. There are ways to help for the adventurous. Some ideas:

  • Profile the code generation phase, which is one of the pieces that is slower than expected. Try to figure out why.
  • Have a look at how multiple dispatch stuff is currently set up, and see if the dispatch logic could possibly be shuffled off behind invokedynamic.
  • Try something. See how it explodes. See if you can fix it. (Yes, generic I know. :))
  • Have a look at adding a “make install” target for Rakudo on the JVM.

I’ll be speaking on the JVM work at the Polish Perl Workshop the weekend after next, and hope to have something a bit more interesting than “hello world” to show off by then.

