This week: less than hoped, but still good stuff

This week actually means “the week starting 6th July”, which is around the time much of Europe was being unreasonably hot. I spent the week in lovely Kyiv with my wife – where the weather was, predictably, also hot. I’d hoped for a nice mix of hacking, sight-seeing, and nice food. Well, I got a decent amount of nice food. Unfortunately, I also had a pretty bad time with the pesky hayfever thing that’s been bothering me this year (mostly due to bad sleep and hot weather), got frustrated enough that I decided to try a different anti-allergy medication, reacted less than awesomely to it, and generally spent plenty of time feeling crap. Happily, things are improving a good bit now that I’m back home, and ahead of me are five weeks with only 3 nights that I need to be away from home. After two and a half months in which I was never in the same place for much more than a week, you can’t imagine how happy I am about that – I can only imagine it’ll do wonders for my productivity, and I’m hopeful for my health too. Anyway, let’s take a look through the few bits I did manage to get done.

Multi-dimensional array progress

In the last report, I mentioned I’d got most of the way with the new MultiDimArray representation I was implementing in MoarVM, along with various new ops. This week I got the last loose ends tied up, adding support for cloning, serialization, and deserialization.

With that done, it was time to move on to porting the work over to the JVM. I stubbed in all the new op mappings, so the NQP test file I established while working on the MoarVM implementation would compile on the JVM backend also (the strategy of writing tests at NQP level to exercise backend-level stuff continues to serve us very well). Then I set about working to make them pass. I got up to 71 out of 188 tests passing, which is a decent start – especially given there are various bits of setup work to do early on.


&?BLOCK and a &?ROUTINE fix

The &?BLOCK symbol refers to the current block of code we’re in (which amongst other things provides a way to write recursive, yet anonymous, blocks). &?ROUTINE is the same, but for the current enclosing routine (sub, method, token, etc.). We’ve had &?ROUTINE for a while, but not &?BLOCK. I set out to implement it, and noted that one had to be careful that it refers to the current closure. Glancing at &?ROUTINE, I noticed it didn’t take sufficient care over closure semantics, and soon had a failing test case exposing the issue. So, I fixed the &?ROUTINE bug, wrote tests for &?BLOCK, and got that in place too. That’s one missing feature added and one potential nasty bug down.
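
For instance, an anonymous block can recurse by calling &?BLOCK (a tiny illustration of my own, not from the original post):

my &fact = -> $n { $n <= 1 ?? 1 !! $n * &?BLOCK($n - 1) };   # anonymous recursive factorial
say fact(5);    # 120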

when/default semantics

All the way back in April, I tried to deal with RT #71368, which noted that our when/default semantics were out of line with the design in S04. Trouble is, when I tried to bring us in line with them, I found that I would break a bunch of folks’ code. Of note, this pattern would not be allowed:

sub foo($n) {
    $_ = bar($n);
    # use when/default here against $_, for example:
    when 42 { say 'the answer' }
    default { say 'something else' }
}

That is, setting $_ – which you get fresh per block anyway – for the sake of using when/default. Nobody really felt this should be outlawed. I agreed and put aside my changes.

This week I finally got around to returning to the issue to try and bring it to some kind of conclusion. The result was a commit to the design docs to bring them in line with the semantics that folks seem to prefer (which was a nice simplification also), along with adding some more tests to give us better coverage.

One more regex engine bug down

A while back, we got dynamic quantifiers, so you can use a variable (or any expression, really) to decide how many times to match something:

/'x' ** {$n}/

RT #125521 pointed out an icky bug that showed up when you tried to mix this feature with captures. Thankfully, this turned out to be one of the easier kind of regex engine bugs to figure out: some capture-related code paths simply hadn’t been updated to understand dynamic quantifiers.
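
By way of illustration (an example of my own, not from the ticket), the combination now behaves as you’d expect:

my $n = 3;
say "aaab" ~~ / ('a') ** {$n} /;    # matches "aaa"; $0 holds the three captures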

And the usual other little bits

Here are a few other assorted small things that I dealt with:

  • Fix RT #125537 (type variable resolution failed to look in outer scopes) and add test
  • Fix RT #124940 (for type variable T, my T $x = … could fail to assign)
  • Review test mentioned in RT #125003, correct it, resolve ticket.
  • Fix RT #125574 (missing error on too-late application of ‘is repr(…)’ trait)
  • Fix RT #125513 (could auto-gen a %_ when there already was one in some unusual cases)
  • Review RT #80694, observe weird .^can behavior is gone, add a test for that, suggest a good solution to the problem and test it too

Stay tuned for next week’s report, which already has as much to talk about as this one – and we’re only halfway through the week!


This week: digging into multi-dimensional arrays – and plenty more

This report covers what I got up to during the closing days of June and the opening days of July.

Multi-dimensional array support in MoarVM

I’ve been pondering how to approach the multi-dimensional array aspects of the S09 Perl 6 design document for a while. I started out the implementation phase by taking another pass through the document, with an eye for things that were likely to hurt, or that simply did not fit with the current Perl 6 language as we have it today. That resulted in a gist that I tossed in Larry’s direction. Thankfully, nothing in that list was a huge blocker for getting the majority of the work done.

With the top-down bit out of the way, it was time to move on to the bottom-up. MoarVM’s opset wanted a few additions, the representation API wanted a few extensions, and a new representation (named MultiDimArray) was needed. To recap, a representation is a memory allocation, layout, and access strategy, and is one half of what makes up the Perl 6 notion of “type” (the other half being the meta-object, which cares for all the high-level, VM-independent bits like method dispatch and type relations).

Gradually, I test-drove my way through implementing the new multi-dimensional APIs on the existing 1D dynamic array representation, then started to flesh out the new multi-dimensional representation. By the end of the week, I had the majority of it in place. The new representation can happily, for example, store a 10x10x10 array of 8-bit integers in a single 1000-byte blob. To come is filling out a few more missing pieces, the JVM port of these new guts (thankfully with a nice set of tests to guide the way), and then onwards and upwards to making use of it all at Rakudo level, so we can have multi-dimensional array support in Perl 6.
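
To give a sense of where this is heading, here is a sketch of the Perl 6-level syntax the work should eventually enable (hypothetical at this point, since the Rakudo-level wiring is still to come):

my int8 @cube[10;10;10];    # a shaped native array: 1000 bytes in a single blob
@cube[3;4;5] = 42;
say @cube[3;4;5];           # 42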

Another pre-compilation bug nailed

One of the most annoying kinds of bugs people run into are pre-compilation bugs: ones where your modules work fine, but a version of the module pre-compiled to bytecode breaks in some way. While that’s not always the compiler’s fault (for example, if you monkey-patch recklessly, or meta-program carelessly), most of the time it is. This week I hunted down a bug involving variables typed with subset types running into pre-compilation issues. Thankfully, it wasn’t overly difficult to fix once I worked out what was going on – but most happily, it was also the pre-compilation bug afflicting Text::CSV. It now works just fine pre-compiled.

A few tasks down in the regex engine

For a while, there’s been some debate about the failure semantics of the goal-matching syntax. That is, should:

/'(' ~ ')' \d+/

Backtrack on not finding the closing parenthesis, as if you’d just written:

/'(' \d+ ')'/

Or should it throw an exception? Now, all of the uses of this construct in the Perl 6 grammar want the exception semantics. So, that’s the behavior we’ve had. However, it was argued (on a few occasions over the years) that this was not a desirable behavior for using the construct in normal regexes. My argument was always, “so just write it the other way” – but after enough tickets on the issue it was time for a review. Patrick Michaud wrote up a possible way forward a while ago, and this week I ran that by Larry, who agreed we’d change things. So, I set about putting the change into effect. Here, the design of the goal matching error mechanism came in handy. Actually, the syntax:

/'(' ~ ')' \d+/

Desugars to:

/'(' \d+ [ ')' || <.FAILGOAL(')')>]/

And the FAILGOAL method threw the exception. So, the behavior change simply meant adding:

token FAILGOAL($missing) { <!> }
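
With that default in place, a regex that fails to reach its goal now simply fails to match, rather than throwing (a small example of my own):

say so "(123" ~~ / '(' ~ ')' \d+ /;     # False: no exception, just no match
say so "(123)" ~~ / '(' ~ ')' \d+ /;    # True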

The compiler toolchain already overrode FAILGOAL to throw a more helpful exception, so things continued to work for the Perl 6 grammar’s own needs. The only folks left in the middle are those using the goal matching syntax who wanted the exception. Thankfully, that’s easy to get back in your own grammars:

method FAILGOAL($missing) {
    die "Oh noes, I needed a '$missing'";
}

I also fixed another obscure, but potentially infuriating, bug involving a misguided optimization. I’ll just reference RT #72440 and the patch that fixed it.

More failure mode improvements

Here’s a selection of things I did relating to error reporting, to improve the overall user experience:

  • Fix RT #125120 (bad error reporting if you declared a type X then made a syntax error)
  • Fix RT #108462 (missing redeclaration checks)
  • Fix RT #125335 (lack of escaping in error message about illegal numification)
  • Fix RT #125227 (trait warnings pointed to useless internals line, not relevant position in the source code)
  • Implement RT #112922 (catch impossible default values on parameters at compile time)
  • Add test for and resolve RT #123897 (bad error reporting, improved by implementing RT #112922)

And finally…

There’s the usual collection of things not worth a headline mention, but that are gladly dealt with.

  • Fix RT #125505 (getting Capture elements stripped away Scalar containers)
  • Work on RT #125110 (leading combining characters mishandled in Str.perl) and unfudge tests, plus further fixes to combining chars and Str.perl
  • Fix RT #125509 (=== didn’t work on Complex), plus a few other issues observed with ===
  • Add test coverage for RT #115868, plus improve the two errors that are produced
  • Fix RT #116102 (ENTER phaser did not work as an r-value)
  • Add test coverage for RT #125483 (the “;;” syntax actually influences multi-dispatch when placed prior to the first parameter)
  • Update test now “my ${a}” is parsed as legal Perl 6 (allows anonymous hash with key type declarations)

Grant status update

Over the last 3 months, I’ve been working on my Perl 6 Development Fund grant. Those of you following the blog will have seen plenty of posts in that time about what I’ve been up to. This post, 3 months in, is more administrative than technical, but it contains some nice statistics. (And there’ll be another of the regular weekly-ish reports coming in the next few days!)

Time worked

I have worked a total of 165 hours and 38 minutes. This means there are around 84 hours worth of funding remaining on the grant. I expect to deliver nearly all of these hours by the end of July.

April was by far the most productive month (75 hours). May was the month I got married, and I took time from all of my work for that, so worked only 41 hours on the grant. June was a little better (50 hours) but still around 20 hours short of what I’d hoped, largely due to poor health at the start of the month.

Major achievements

The biggest achievement of the work so far is the implementation of NFG (Normal Form Grapheme) in Rakudo on MoarVM. Along the way, I also implemented the Uni class, which provides all of the Unicode normalization forms. The last two monthly releases of Rakudo, made in late May and late June, have had strings working at grapheme level. By now, there are no known outstanding issues in RT relating to NFG support.
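
As a quick illustration of what grapheme-level strings and the Uni class give you (an example of my own):

say "café".chars;        # 4: counted in graphemes
say "café".NFD.elems;    # 5: codepoints in the NFD normalization form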

Much progress has been made on improving the stability of our concurrent programming features, with various reports of improvement from users. Issues certainly remain, but a number of the most serious bugs have now been addressed.

Startup time is now decidedly lower than at the start of the grant. Measuring informally on my development machine against Perl 5, Rakudo now starts up in just 40% of the time taken to load Perl 5 with Moose, and 160% of the time taken to load Perl 5 with Moo. I’d like to note that I’m certainly not the only person to thank for these startup time improvements; some have come from other Rakudo and MoarVM contributors. And, of course, there’s room for improvement yet!

Both the startup improvements and other work have also helped to lower the base memory footprint, though we certainly have some way to go in this area. Private memory consumed by Rakudo Perl 6 on MoarVM running a simple infinite loop program is still twice that of Perl 5 with Moose loaded, for example. On the other hand, this is just half the memory we once swallowed. One notable improvement I worked on is getting hashes to use a good bit less memory.

RT tickets

I’ve been addressing bugs and missing features noted in the RT queue. 85 unique RT tickets are mentioned in my work log, almost all of which were resolved by the work I did. They were all over the map: semantic wrongness, bad error reporting, crashes, unimplemented things, and so forth.

Further funding needed

There’s enough funding “in the pot” for me to continue my work through July. In that time, I plan to deliver multi-dimensional arrays (including the native packed variety), further address concurrency issues, and resolve more RT issues.

I’d very much like to continue this work for the rest of the year, as we approach the Christmas release. Any potential donors may like to read more about the Perl 6 Core Development Fund.


This week: Unicode 8, loads of fixes, preparing for shaped arrays

It’s been another week of getting lots of small things done – but also gearing up to work on various array-related things, including fixed size and shaped arrays. I expect to have some progress to report on that next week; in the meantime, here’s what I got up to this week.

Update to Unicode 8.0

I got MoarVM’s Unicode database upgraded to the recently released Unicode 8. Part of the work was reading the changes list to see if there were any code changes needed. After establishing that there were not, the actual upgrade was very easy, since we have a script that takes the Unicode database, extracts the bits we need, and packs it into various compact data structures. The final step was looking into 3 test failures in one of the specification test files for Perl 6. It turned out the tests in question were written in a way that made them vulnerable to Unicode adding additional ideographs; I fixed the tests so we should not run into this problem with them in future Unicode upgrades. And with that, Rakudo on MoarVM has Unicode 8 support. Make the SIGN OF THE HORNS, and grab a HOTDOG.
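
For instance, with the updated database the new Unicode 8 characters can be addressed by name (a quick example of my own):

say "\c[SIGN OF THE HORNS]";    # 🤘
say "\c[HOT DOG]";              # 🌭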

Improving error reporting

We work pretty hard to report the programmer’s mistakes in a good and understandable way. Of course, sometimes we fall short, and people file tickets to let us know. I’m keen to fix various of these; many of them are not that hard to fix, but can make quite a big difference to user experience. Here are the ones I’ve just recently fixed:

  • Fix RT #125228 (bad error reporting when ‘is’ trait argument referenced undeclared symbol)
  • Fix RT #125259 (bad error reporting when parameterizing a type in an illegal way)
  • Fix RT #125427 (Rakudo silently accepted trying to overload special-compiler-form operators, which will never work)
  • Fix RT #125339 (no location information for unhandled control exceptions)
  • Fix RT #125441 (bad error reporting when trying to declare a class with the same name as an existing enum element)

Parsing issues

For a while, there’s been a tricky issue in RT regarding parsing of parameters with both where clauses and defaults:

sub bar($percent where { 1 <= $_ <= 100 } = 100) {

This did actually parse. Unfortunately, it parsed as something like:

sub bar($percent where ({ 1 <= $_ <= 100 } = 100)) {

That is, trying to assign 100 to a block. This, of course, explodes when we try to evaluate the constraint. Clearly it was a precedence issue – but oddly, the precedence limit set on parsing the constraint matched what the standard grammar did exactly. I dug a bit deeper – and found a discrepancy in how we interpreted the precedence limit (exclusive vs. inclusive). That was a one line change – but fixing it caused quite a lot of fallout in the specification tests. In all, three or four separate issues had cropped up.

One of them was easy and rewarding: I removed a hack that was working around the precedence limit handling bug. That left the remaining ones, involving adverbs getting attached to the wrong bits of the AST and chained assignments parsing wrongly. Again, I looked for a place where we were out of line with the standard grammar – and failed. I couldn’t see how STD could come out with a different result than Rakudo did. Of course, since STD only parses, we could easily have not noticed it was broken. I didn’t have a build of STD to hand; thankfully, someone on channel did and could quickly paste me the ASTs it produced. And…they were wrong. So, I’d uncovered a bug in the standard grammar that had gone unnoticed for years. Anyway, I worked out some kind of solution, and left Larry to think up a neater one if possible. Since then he’s given me another suggestion, which I’ll try out. Either way, the bug is gone.

The others were thankfully simpler. In one case, a program that ended in an invocant colon, for indirect object syntax, would report a syntax error; this was just a case of failing to look for end of source code as a valid thing to have after it.

Finally, I looked into some oddities around item and list precedence analysis for assignment. This one also ended up with me discovering a discrepancy with the standard grammar. However, even after fixing that, others found the rules still a little surprising. I looked deeper, and found a good way to explain the current semantics. Later, Larry read the discussion and committed a further tweak. So, we’re ahead of the standard grammar in another place now.

Down in MoarVM

I spent some time doing various fixes down at the VM level. I jumped on a couple of segmentation faults (RT #125376), which I like to get on top of since they can, in the worst case, be security issues. I also reviewed a set of patches resulting from running the clang static analyzer over the codebase, and looked through some tickets and pull requests.

The most significant work, however, was adding a “free at next safepoint” mechanism to the fixed size allocator. A common problem in concurrent systems, when you have memory not managed by the GC, is making sure that when you free it, no other thread can possibly be reading it. For things the garbage collector manages, this isn’t an issue; it already considers every live reference to an object when it does a GC run, no matter which thread holds the reference. But in some places, we do want to employ concurrent algorithms, yet not have the GC manage the memory. Generally, these are situations where we’re producing a new “version” of a data structure, putting it in place, and freeing the old one. It’s fine if another thread obtained the previous version and is still looking at it – but that won’t end well if we free it right away.

Thankfully, there’s an easy solution, also inspired by how the GC works: add the memory to a list of things that we should free at the next “safe point”. A safe point is one where we know the state of each thread is such that it could not possibly still be in code looking at the memory in question. It turns out GC runs are natural safe points, so for now it just postpones the freeing until the next GC run. However, the API will let us consider smarter options in the future if needed.

I immediately used this for the NFG synthetics table (a concurrent data structure that can safely be read from without ever taking a lock, and locking is only used to establish an ordering on additions). I’ll also use it to fix an issue with memory safety of dynamic arrays.

Too many spectests for Windows

Recently, “make spectest” stopped working on Windows. I suspected somebody had made some kind of platform-unaware change to the testing-related bits – but a quick glance at the git log ruled out that hypothesis. Finally, I figured it out: on Windows, there is a 16-bit string length field on the process launch data structure, so you can’t send along a command line longer than 32KB. We were invoking the fudge tool with all of the spectests, and had reached the number of tests where the command line became too long for Windows. Thankfully, I could get it to fudge in batches.

Loads of other small fixes

I also chewed my way through a number of other tickets, and fixed up a few more things that I spotted along the way.

  • Verify RT #125365 fixed and add test coverage
  • Fix RT #109322 (bare blocks didn’t take a closure properly, causing reentrancy issues)
  • Review and remove/update tests mentioned as needing review in RT #125016 and RT #125018
  • Fix RT #125015, which complains about HOW(…) sub not existing
  • Fix RT #113950 (optimizer could sometimes cause LEAVE phaser to not execute)
  • Fix an issue where submethods with a role mixed in would not be considered submethods; add a test case
  • Fix RT #125445 (.can and .^can broken on enum values)
  • Fix RT #125402 (cannot assign non-Str to substr-rw; fixed by making it DWIM and coerce)
  • Fix RT #125455 (variable phasers applied to variables directly in package/class bodies didn’t work properly)

This week: fixing lots of things

My first week of Perl 6 work during June was spent hunting down and fixing bugs, some of them decidedly tricky and time-consuming to locate. I also got a couple of small features in place.

Unicode-related bugs

RT #120651 complained that our UTF-8 encoder wrongly choked on non-characters. While they are defined as “not for interchange” in the Unicode spec, this does not, it turns out, mean that we should refuse to encode them, as clarified by Unicode Corrigendum #9. So, I did what was needed to bring us in line with this.
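
A tiny illustration of the now-permitted behavior (my own example, not from the ticket):

say "\x[FFFE]".encode('utf8').elems;    # 3: the noncharacter encodes fine rather than dying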

RT #125161 counts as something of an NFG stress test, putting a wonderful set of combiners onto some base characters. While this was read into an NFG string just fine, and we got the number of graphemes correct, and you could output it again, asking for its NFD representation exploded. It turned out to be a quite boring bug in the end, where a buffer was not allowed to grow sufficiently to hold the decomposition of such epic graphemes.
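
Roughly the kind of thing the ticket exercised, though much tamer (an illustrative sketch of my own):

my $g = "a" ~ "\x[0301]" x 20;    # a base character plus twenty combining acutes
say $g.chars;                     # 1: still a single (synthetic) grapheme
say $g.NFD.elems;                 # 21: its full decomposition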

Finally, I spent some time with the design documents and test suite with regard to strings and Unicode. Now we know how implementing NFG turned out, various bits of the design could be cleaned up, and several things we turned out not to need could go away. This in turn meant I could clean them out of the test suite, and close the RT tickets that pointed us towards needing to review/implement what the tests in question needed.

Two small features

In Perl 6 we have EVAL for evaluating code in a string. We’ve also had EVALFILE in the design and test suite for a while, which evaluates the code in the specified file. This was easily implemented, resolving RT #125294.
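
A quick sketch of how it’s used (the file name here is made up):

spurt 'snippet.p6', "say 'hello from a file'; 42";
say EVALFILE 'snippet.p6';    # prints the greeting, then 42 (the value of its last statement)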

Another relatively easy, but long-missing, feature was the quietly statement prefix. This swallows any warnings emitted by the code run in its dynamic scope. Having recently improved various things relating to control exceptions, this one didn’t take long to get in place.
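
For example (an illustration of my own):

quietly {
    warn "this warning is swallowed";
    say "but this line still runs";
}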

Concurrency bug squishing continues

In the last report, I was working on making various asynchronous features, such as timers, not accidentally swallow a whole CPU core. I’d fixed this, but mentioned that it exposed some other problems, so we had to back the fix out. The problem was, quite simply, that by not being so wasteful, we ran into an already existing race condition much more frequently. The race involved lazy deserialization: something we do in order to only spend time and memory deserializing meta-objects that we will actually use. (This is one of the things that we’ve used to get Rakudo’s startup time and base memory down.) While there was some amount of concurrency control in place to prevent conflicts over the serialization reader, there was a way that one thread could see an object that another one was partway through deserializing. This needed a lock-free solution for the common “it’s not a problem” path, so we would not slow down every single wval instruction (the one in MoarVM used to get a serialized object, deserializing it on demand if needed). Anyway, it’s now fixed, and the core-eating async fix is enabled again.

I also spent some time looking into and resolving a deadlock that could show up, involving the way the garbage collector and event loop were integrated. The new approach not only eliminates the deadlock, but also is a little more efficient, allowing the event loop thread to also participate in the collection.

Finally, a little further up the stack, I found the way we compiled the given, when, and default constructs could lead to lexical leakage between threads. I fixed this, eliminating another class of threading issues.

The case of the vanishing exception handler

Having our operators in Perl 6 be multi-dispatch subroutines is a really nice bit of language design, but also means that we depend heavily on optimization to get rid of the overhead on hot paths. One thing that helps a good bit is inlining: working out which piece of code we’re going to call, and then splicing it into the caller, eliminating the overhead of the call. For native types we are quite good at doing this at compile time, but for others we need to wait until runtime – that is, performing dynamic optimization based upon the types that actually show up.

Dynamic optimization is fairly tricky stuff, and of course once in a while we get caught cheating. RT #124191 was such an example: inlining caused us to end up missing some exception handlers. The issue boiled down to us failing to locate the handlers in effect at the point of the inlined call when the code we had inlined was the source of the exception. Now, a tempting solution would have been to fix the exception handler location code. However, this would mean that it would have to learn about how callframes with inlining look – and since it already has to work differently for the JIT, there was a nice combinatoric explosion waiting to happen in the code there. Thankfully, there was a far easier fix: creating extra entries in the handlers table. This meant the code to locate exception handlers could stay simple (and so fast), and the code to make the extra entries was far, far simpler to write. Also, it covered JIT and non-JIT cases.

So, job done, right? Well…sadly not. Out of the suite of tens of thousands of specification tests, the fix introduced…6 new failures. I disabled JIT to see if I could isolate it and…the test file exploded far earlier. I spent a while trying to work out what on earth was going on, and eventually took the suggestion of giving up for the evening and sleeping on it. The next morning, I thankfully had the idea of checking what happened if I took away my changes and ran the test file in question without the JIT. It broke, even without my changes – meaning I’d been searching for a problem in my code that wasn’t at all related to it. It’s so easy to forget to validate our assumptions once in a while when debugging…

Anyway, fixing that unrelated problem, and re-applying the exception handler with inlining fix that this all started out with, got me a passing test file. Happily, I flipped the JIT back on and…same six failures. So I did have a real failure in my changes after all, right? Well, depends how you look at it. The final fix I made was actually in the JIT, and was a more general issue, though in reality we could not have encountered it with any code our current toolchain will produce. It took my inlining fixes to produce a situation where we tripped over it.

So, with that fix applied too, I could finally declare RT #124191 resolved. I added a test to cover it, and was happy to be a tricky bug down. Between it and the rest of the things I had to fix along the way, it had taken the better part of a full day worth of time.

Don’t believe all a bug report tells you

RT #123686 & RT #124318 complained about the dynamic optimizer making a mess of temp variables, and linked it also to the presence of a where clause in the signature. Having just worked on a dynamic optimizer bug, I was somewhat inclined to believe it – but figured I should verify the assumption. It’s good I did; while you indeed could get the thing to fail sooner when the dynamic optimizer was turned on, running the test case for some extra iterations made it explode with all of the optimization turned off also. So, something else was amiss.

Some bugs just smell of memory corruption. This one certainly did; changing things that shouldn’t matter changed the exact number of iterations the bug needed to occur. So, out came Valgrind’s excellent memcheck tool, which I could make whine about reads of uninitialized memory with exactly one iteration of the test that eventually exploded. It also nicely showed where. In the end, there were three ingredients needed: a block with an exit handler (so, a LEAVE, KEEP, UNDO, POST, temp variable, or let variable), a parameter with a where clause, and all of this had to be in a multi-dispatch subroutine. Putting all the pieces together, I soon realized that we were incorrectly trying to run exit handlers on the fake callframes we make in order to test whether a given multi candidate will bind a certain set of arguments. Since we never run code in such a callframe, and only use it for storage while we test if the signature could bind, the frame was never set up completely enough for running the handlers to work out well. It may have taken some work to find, but it was thankfully very easy to fix.
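
For the curious, here is roughly the shape of code that brings those three ingredients together (a hypothetical reconstruction of my own, not the actual test case):

multi sub demo($x where * > 0) {    # multi-dispatch sub with a where clause
    LEAVE { }                       # a block with an exit handler
    $x * 2
}
demo(21) for ^10_000;               # with enough iterations, code of this shape used to blow up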

Other assorted bits

I did various other little things that warrant a quick mention:

  • Fix RT #125260 (sigils of placeholder parameters did not enforce type constraints)
  • Fix RT #124842 (junction tests fudged for wrong reason, and also incorrect)
  • Fix attempts to auto-thread Junction type objects, which led to weird exceptions rather than a dispatch failure; add tests
  • Review a patch to fix a big-endian issue in MoarVM exception optimization; reject it as not the right fix
  • Fix Failures that were fatalized reporting themselves as leaked/unhandled
  • Investigate RT #125331, which is seemingly due to reading bogus annotations
  • Fix a MoarVM build breakage on some compilers

That week: concurrency fixes, control exceptions, and more

I’m back! I was able to rest fairly well in the days leading up to my wedding, and had a wonderful day. Of course, after the organization leading up to it, I was pretty tired again afterwards. I’ve also had a few errands to run in getting our new apartment equipped with various essentials – including a beefy internet connection. And, in the last couple of days, I’ve been steadily digging back into work on my Perl 6 grant. :-) So, there will be stuff to tell about that soon; in the meantime, here’s the May progress report I never got around to writing.

Before fixing something, be sure you need it

I continued working on improving stability of our concurrent, async, and parallel programming features. One of the key things I found to be problematic was the frame cache. The frame cache was introduced to MoarVM very early on, to avoid memory allocations for call frames by keeping them cached. Back then, we allocated them using malloc. The frames were cached per thread, so in theory no thread safety issues, right? Well, wrong, unfortunately. The MoarVM garbage collector works in parallel, and while it does have to pause all threads for some of the work, it frees them to go on their way when it can. Also, it steals work from threads that are blocked on doing I/O, acquiring a lock, waiting on a condition variable, and so forth. One of the things that we don’t need to block all threads for is clearing out the per-thread nursery, and running any cleanup functions. And some of those cleanups are for closures, and those hold references to frames, which when freed go into the frame pool. This is enough for a nasty bug: thread A is unable to participate in GC, thread B steals its GC work, the GC gets mostly done and things resume, thread A comes back from its slumber and begins executing frames, touching the frame pool…which thread B is also now touching because it is freeing closures that got collected. Oops.

I started pondering how best to deal with this…and then had a thought. Back when we introduced the frame cache, we used malloc to allocate frames. However, now we use MoarVM’s own fixed size allocator – which is a good bit faster (and of course makes some trade-offs to achieve that – there’s no free lunch in allocation land). Was the frame cache worth it? I did some benchmarking and discovered that it wasn’t worth it in terms of runtime (and timotimo++ did some measurements that suggested we might even be better off without it, though it was in the “noise” region). So, we could get less code, and be without a nasty threading bug. Good, right?

But it gets better: the frame cache kept hold of memory for frames that we only ever executed during startup, or only ran during the compile phase and not at runtime. So eliminating it would also shave a bit off Rakudo’s memory consumption too. With that data in hand, it was a no-brainer: the frame cache went away, and with it an icky concurrency bug. However, the frame cache had one last “gift” for us: its holding on to memory had hidden a reference counting mistake for deserialized closures. Thankfully, I was able to identify it fairly swiftly and fix it.

More concurrency fixes

I landed a few more concurrency-related fixes also. Two of them related to parametric role handling and role specialization, which needed some concurrency control. To ease this I added a Lock class to NQP, with pretty much the same API as the Perl 6 one. (You could use locks from NQP before, but there was some boilerplate. Now it’s easy.)

One other long-standing issue we’ve had is that using certain features (such as timers) could drive a single CPU core up to 100% usage. This came from using an idle handler on the async event loop thread to look for new work and check if we needed to join in with a garbage collection run. It did yield to the OS if it had nothing to do – but on an unloaded system, a yield to the OS scheduler will simply get you scheduled again in pretty short order. Anyway, I spent some time looking into a better way and implemented it. Now we use libuv async handlers to wake up the event loop when there’s new work it needs to be aware of. And with that, we stopped swallowing up a core. Unfortunately, while this was a good fix, our CPU hunger had been making another nasty race involving lazy deserialization only show up occasionally. Without all of that resource waste, this race started showing up all over the place. This is now fixed – but it happened just in these last couple of days, so I’ll save talking about it until the next report.

Startup time improvements

Our startup time has continued to see improvements. One of the blockers was a pre-compilation bug that showed up when various lazily initialized symbols were used (of note, $*KERNEL, $*VM, and $*DISTRO). It turns out the setup work of these poked things into the special PROCESS package. While it makes no sense to serialize an updated version of this per-process symbol table…we did. And there was no good mechanism to prevent that from happening. Now there is one, resolving RT #125090. This also unblocked some startup time optimizations. I also got a few more percent off startup by deferring constructing a rarely used DateTime object for the compiler build date.

Control exception improvements

Operations like next, last, take, and warn throw “control exceptions”. These are not caught by a normal CATCH handler, and you don’t normally need to think of them as exceptions. In fact, MoarVM goes to some effort to allow their implementation without ever actually creating an exception object (something we take advantage of in NQP, though not yet in Rakudo). If you do want to catch them and process them, you can use a CONTROL block. Trouble is, there was no way to talk about what sort of control exceptions were interesting. Also, we had some weird issues when a control exception went unhandled (RT #124255), manifesting in a segfault a while back, though by the time I looked at it we were merely giving a poor error. Anyway, I fixed that, and introduced types for the different kinds of control exception, so you can now do things like:

    when CX::Warn { note .message; .resume }

To do custom logging of warnings. I also wrote various tests for these things.
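
Putting that together in a CONTROL block, a small sketch of my own:

{
    CONTROL {
        when CX::Warn { note "logged: {.message}"; .resume }
    }
    warn "attention needed";
    say "execution carries on after the warning";
}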


I helped out others, and they helped me. :-)

  • FROGGS++ was working on using the built-in serialization support we have for the module database, but was running into a few problems. I helped resolve them…and now we have faster module loading. Yay.
  • TimToady++ realized that auto-generated proto subs were being seen by CALLER, auto-generated method ones were not, and hand-written protos that only delegated to the correct multi also were not. He found a way to fix it; I reviewed it and tweaked it to use a more appropriate and far cheaper mechanism.
  • I noticed that calling fail was costly partly because we constructed a full backtrace. It turns out that we don’t need to fully construct it, but rather can defer most of the work (for the common case where the failure is handled). I mentioned this on channel, and lizmat++ implemented it.

Other bits

I also did a bunch of smaller, but useful things.

  • Fixed two tests in S02-literals/quoting.t to work on Win32
  • Did a Panda Windows fix
  • Fixed a very occasional SEGV during CORE.setting compilation due to a GC marking bug (RT #124196)
  • Fixed a JVM build breakage
  • Fixed RT #125155 (.assuming should be supported on blocks)
  • Fixed a crash in the debugger, which made it explode on entry for certain scripts

More next time!


Taking a short break

Just a little update. When I last posted here, I expected to write the post about what I got up to in the second week of May within a couple of days. Unfortunately, in that next couple of days I got quite sick, then after a few days a little better again, then – most likely due to my reluctance to cancel anything – promptly got sick again. This time, I’ve been trying to make a better job of resting so I can make a more complete recovery.

By now I’m into another round of “getting better”, which is good. However, I’m taking it as easy as I can these days; at the weekend I will have my wedding, and – as it’s something I fully intend to only do once in my life – I would like to be in as good health as possible so I can enjoy the day! :-)

Thanks for understanding, and I hope to be back in Perl 6 action around the middle of next week.
