The Unofficial Incomplete SpiderMonkey Bibliography

I’ve started a little side project: The Unofficial Incomplete SpiderMonkey Bibliography. I’ve been interested in doing this since at least March of this year, and I’ve finally done it.

This project was definitely inspired by Chris Seaton’s The Ruby Bibliography; however, I don’t want to focus exclusively on academic publications. There’s lots of excellent writing about SpiderMonkey out there in blog posts, bug reports, and more. My hope is that this becomes a home that helps gather all that knowledge.

On a personal note, I’m particularly interested in older blog posts, especially those that survive only as archive.org links scattered through people’s personal notebooks here and there.

Please, give me a hand: open issues for things you’d like references to, or make pull requests to help fill in the enormous gaps I am certain exist in the bibliography as it stands.

Using rr-dataflow: Why and How?

If you haven't heard of rr, go check it out right away. If you have, let me tell you about rr-dataflow and why you may care about it!

Perhaps you, like me, occasionally need to track a value back to its origin. I'll be looking at the instructions being executed in an inline cache and think, "Well, that's wrong... where did this instruction get generated?"

Now, because rr lets you set a watchpoint and reverse-continue, you can see where a value was last written; it's easy enough to do:

(rr) watch *(int*)($rip)
(rr) reverse-continue

The problem is that, at least in SpiderMonkey, that's rarely sufficient; the first time you stop, you'll likely be seeing the copy from a staging buffer into the final executable page. So you set a watchpoint and reverse-continue again. Oops, now you're in the copying of the buffer during a resize; this can happen a few times before you arrive at the point you're actually interested in.

Enter rr-dataflow. As it says on the homepage: "rr-dataflow adds an origin command to gdb that you can use to track where data came from."

rr-dataflow is built on the Capstone library for disassembly. This allows rr-dataflow to determine for a given instruction where the data is flowing to and from.

So, in the case of the example described before, the process starts almost the same:

(rr) source ~/rr-dataflow/flow.py
(rr) watch *(int*)($rip)
(rr) reverse-continue

However, this time, when we realize the watchpoint stops at an intermediate step, we can simply go:

(rr) origin

rr-dataflow then analyzes the instruction that tripped the watchpoint, sets an appropriate watchpoint, and reverse-continues for you. The process of getting back to where you need to be becomes:

(rr) origin
(rr) origin

Tada! That is why you might be interested in rr-dataflow. The homepage also has a more detailed worked example.

A caveat: I've found it to be a little unreliable with 32-bit binaries, as it wasn't developed with them in mind. One day I would love to dig a little more into how it works, and potentially help make it better there. But in the meantime, thanks so much to Jeff Muizelaar for creating such an awesome tool.

Fixing long delays when a program crashes while running in the VSCode terminal

Symptom: You’re writing broken code (aren’t we all?) and your program is crashing. When it crashes running under the regular OS X Terminal, you don’t see any problems; the program crashes and that’s that.

However, when you do the same thing under VSCode’s integrated terminal, you see a huge delay.

Solution:

launchctl unload -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist

For some reason, crashes in the regular terminal don’t seem to upset ReportCrash, but when they happen in VSCode, ReportCrash takes a tonne of CPU and hangs around for 10-30s. My totally uninformed guess is that ReportCrash thinks the crash is related to VSCode and is sampling everything about the whole VSCode instance. The only evidence I have for this is that the crash delays don’t seem to happen right after restarting VSCode.
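
If you want crash reporting back later (it can be genuinely useful when you need a crash log), the inverse of the command above should do it; this is from memory, so treat it as a sketch:

launchctl load -w /System/Library/LaunchAgents/com.apple.ReportCrash.plist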

Cleaning a Mercurial Tree

(Should have mentioned this in the last post. Oops!)

You've littered your tree with debugging commits. Or you've landed a patch without pushing yourself, so it exists upstream already and you don't need your local copy. Slowly but surely hg wip becomes less useful.

You need hg prune.

It works just like your garden's pruning shears, except it's powered by Evolve, so really it just marks changesets as obsolete, hiding them from view.
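
A sketch of the usual invocations (the revision and revset here are placeholders, not from a real session):

$ hg prune -r <rev>                        # mark a single changeset as obsolete
$ hg prune -r 'draft() and desc("DEBUG")'  # or prune a whole revset of debugging commits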

My Mercurial Workflow

When I joined Mozilla I decided to leap head first into Mercurial, as most of Mozilla's code is stored in Mercurial. Coming from a git background, the first little while was a bit rough, but I'm increasingly finding I prefer Mercurial's approach to things.

I really do find the staging area too complex, and branches heavier-weight than necessary (see my earlier post Grinding my Gears with Git), and I increasingly appreciate the extension architecture that lets you build a Mercurial that works for you.

I have to say though, where I do envy git is that it has some great-looking docs these days, and Mercurial is pretty far behind there; often the best docs are those on the wiki, and it doesn't always feel very well maintained.

With that in mind, let's talk about how I work with Mercurial. This post is heavily inspired by (and my workflow definitely inspired by) Steve Fink's post from last year.

Setup

I mostly use the initial .hgrc setup provided by Mozilla's bootstrap.py. I have made a couple of tweaks though:

Diff settings

[diff]
git = true
showfunc = true
unified = 8

The above stanza helps get diffs in a friendly format for your reviewers. Last time I checked, bootstrap.py didn't set unified = 8.

[color]
diff.trailingwhitespace = bold red_background

The SpiderMonkey team is a stickler for some things, including trailing whitespace. This colourization rule helps it stand out when you inspect a change with hg diff.

Aliases

  • I'm a huge fan of the wip alias that bootstrap.py sets up. It lists recent heads in graph format with a terse log, along with colourized output.
Green hashes are draft revisions; blue are public. The red text highlights the current working directory parent, and yellow text marks bookmark names.

  • Mercurial's default hg log has a different philosophy than git log. Where git log shows you a relative view of history from your current working directory or specified revision, Mercurial's log command by default shows a global view of history in your repository. In a small project, I can imagine that making sense, but to be honest, 95% of the time I find hg log does the wrong thing for what I want. So:

    [alias]
    flog = log -f --template=wip

    This adds hg flog as a 'following log', which is closer in behaviour to git log. The --template=wip bit reuses the colourization and line formatting already provided for the wip alias.

    Honestly though, I use hg wip about 10x more often than I use hg {f}log.

Phases

One of the cool things about Mercurial is its well-developed reasoning about history rewriting. One key element of that is the notion of 'phases', which helps define when a changeset may be rewritten. There's a darn good chance this will be a sensible default for you in your .hgrc:

[phases]
publish = false

Getting to work

I use a clone of mozilla-unified as the starting point. When I start working on something new, unless I have a good reason not to, I'll typically start work off of the most recent central tag in the repo.

$ hg pull && hg up central

Labelling (Bookmarks)

When working in Mercurial, one of the things you get to decide is whether or not you label your commits. This article goes into more detail, but suffice it to say, there's no requirement, as there is in git, to label your lightweight branches (using bookmarks).

I have experimented both with labelling and without, and I have to say, so long as I have hg wip, it's pretty reasonable to get by without bookmarks, especially as my commit messages typically have the bug numbers in them, so labelling the branch feels like redundant information. Maybe labelling would be more useful if you work on a project where commit messages aren't associated with a bug or other identifier.

When developing, I tend to use commits as checkpoints. Later, I use history-rewriting tools to create the commits that tell the story I want to tell. In Mercurial, this means you'll want to enable the Evolve and Histedit extensions (Facebook's chistedit.py is also nice, but not necessary). You'll also want rebase (unlike in git, rebase and histedit are two distinct operations).
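
For reference, the relevant chunk of .hgrc looks roughly like this (rebase and histedit ship with Mercurial; evolve is installed separately, for example via pip):

[extensions]
rebase =
histedit =
evolve =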

A tip with histedit: When I first started with Mercurial, I found myself more uncomfortable with histedit than I was with git. This was because I was used to doing 'experimental' history editing, always knowing I could get back to the repo state I started from just by moving my branch pointer back to the commit I left behind.

Mercurial, with the evolve extension enabled, has a more complicated story for how history editing works. Over time, you'll learn about it, but in the meantime, if you want to be able to keep your old history, hg histedit --keep will preserve the old history and create the new history under a new head.
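
As a sketch (the revision is a placeholder):

$ hg histedit --keep <oldest-revision-to-edit>

The original changesets stay visible alongside the rewritten ones, so you can compare them or discard whichever you like.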

Evolve knows some pretty cool tricks, but I think I'll save that for later once I can explain the magic a little more clearly.

More Extensions

absorb

Absorb is the coolest Mercurial extension I know of. It automates applying edits to a stack of patches: exactly the kind of edits that show up in a code-review-based workflow. If you create stacks of commits and get them reviewed as a stack, it's worth looking into.

The best explanation I know of is this little whitepaper written by Jun Wu, the original author.
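
To give a flavour of how it's used (a sketch, not a transcript): suppose review feedback touches several commits in your stack. Edit the files in your working directory as if you were fixing everything at once, then:

$ hg absorb

Absorb figures out which draft changeset each changed hunk belongs to and folds it in, with no manual histedit required.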

share

One extension I adore is the share extension, which ships with Mercurial. It's very similar in spirit to git-worktree: it allows me to have multiple working copies with a common repository storage. Even better, it works great for having a working copy inside my VM that's backed by my current repo.
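
As a sketch of how I use it (the paths are made up):

$ hg share ~/src/mozilla-unified ~/src/mozilla-unified-review

The second working copy gets its own checkout (and, by default, its own bookmarks) but shares the underlying changeset storage with the first.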


So that was a long, rambling blog post: mostly I just wanted to share the pieces of Mercurial that make me really happy I stuck with it, and to encourage others to give it another shot. While I doubt Mercurial will ever supplant Git, as Git has mindshare for days, at the very least I think Mercurial is worth exploring as a different point in the DVCS design space.

Please, if you've got tips and tricks you'd like to share, or cool extensions, feel free to reach out or leave a comment.

An Inline Cache isn’t Just a Cache

If you read the Wikipedia page for Inline Caching, you might think that inline caches are caches, in the same way you might talk about a processor cache, or a cache in front of a web server like memcached. The funny thing is, that really undersells how they're used in SpiderMonkey (and, I believe, other JS engines), and in hindsight I really wish I had known more about them years ago.

The Wikipedia page cites a paper: L. Peter Deutsch and Allan M. Schiffman, "Efficient implementation of the Smalltalk-80 system", POPL '84, which I found on the internet. In the paper, the authors describe the key aspect of their implementation as the ability to dynamically change the representation of program code and data,

"as needed for efficient use at any moment. An important special case of this idea is > caching> : One can think of information in a cache as a different represenation of the same information (considering contents and accessing information together)"

Part of their system solution is a JIT compiler. Another part is what they call inline caching. As described in the paper, an inline cache is self-modifying code for method dispatch. Call sites start out pointing to a method-lookup routine: the first time the method lookup is invoked, a call to the returned method (along with a guard on type, to ensure the call remains valid) overwrites the method-lookup call. The hypothesis here is that a particular call site will very often resolve to the same method, despite in principle being able to resolve to any method.

In my mind, the pattern of local self-modifying code, the locality hypothesis, as well as the notion of a guard are the fundamental aspects of inline caching.

The paper hints at something bigger, however. On page 300 (PDF page 4):

For a few special selectors like +, the translator generates inline code for the common case along with the standard send code. For example, + generates a class check to verify that both arguments are small integers, native code for the integer addition, and an overflow check on the result. If any of the checks fail, the send code is executed.

It's not clear if the authors considered this part of the inline caching strategy. However, it turns out that this paragraph describes the fundamental elements of the inline caching inside SpiderMonkey.

When SpiderMonkey encounters an operation that can be efficiently performed under certain assumptions, it emits an inline cache for that operation. This is an out-of-line jump to a linked list of 'stubs'. Each stub is a small piece of executable code, usually consisting of a series of sanity-checking guards, followed by the desired operation. If a guard fails, the stub will jump either to another stub generated for a different case (the stubs are arranged in a linked list) or to the fallback path, which will do the computation in the VM, then possibly attach a new stub for the heretofore not-observed case. When the inline cache is initialized, it starts out pointing to the fallback case (this is the 'unlinked' state from the Smalltalk paper).

SpiderMonkey generates these inline caches for all kinds of operations: Property accesses, arithmetic operations, conversion-to-bool, method calls and more.

Let's make this a little more concrete with an example. Consider addition in JavaScript: function add(a,b) { return a + b; }

The language specifies an algorithm for figuring out what the correct result is based on the types of the arguments. Of course, we don't want to have to run the whole algorithm every time, so the first time the addition is encountered, we will attempt to attach an inline cache matching the input types (following the locality hypothesis) at this particular operation site.

So let's say you have an add of two integers: add(3,5). The first time through will be an inline cache miss, because no stub has been generated yet. At this point, SpiderMonkey will attach an Int32+Int32 cache, which consists of generated code like the following pseudocode:

int32_lhs = unbox_int32(lhs, fail); // Jump to fail if lhs is not an int32
int32_rhs = unbox_int32(rhs, fail); // Likewise for rhs
res = int32_lhs + int32_rhs;
if (res.overflowed()) goto fail;    // Int32 overflow: let the fallback handle it
return res;

fail:
  goto next stub on chain

Any subsequent pair of integers being added (add(3247, 12), etc.) will hit in this cache and return the right thing (outside of overflow). Of course, this cache won't work in the case of add("he", "llo"), so on a subsequent miss, we'll attach a String+String cache. As different types flow through the IC, we build up a chain (*) handling all the types observed, up to a limit: to save memory, we typically stop extending a chain once it's too long to provide any value. The chaining here is the 'self-modifying' code of inline caching, though in SpiderMonkey it's not actually the executable code that is modified, just the control flow through the stubs.
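
To make the chain-building concrete, here's a hypothetical sequence of calls and, roughly, what I'd expect the IC at this site to do:

add(3, 5);        // miss: the fallback runs the full algorithm, then attaches an Int32+Int32 stub
add(3247, 12);    // hit: the Int32+Int32 stub handles it
add("he", "llo"); // miss: falls through the chain, attaches a String+String stub
add(1.5, 2.25);   // miss again: yet another stub gets appended, and so on, up to the limit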

There have been a number of different designs in SpiderMonkey for inline caching, and I've been working on converting some of the older designs to our current state of the art, CacheIR, which abstracts the details of the ICs to support sharing them between the different compilers within the SpiderMonkey engine. Jan's blog post introducing CacheIR has a more detailed look at what CacheIR looks like.

So, in conclusion, inline caches are more than just caches (outside the slightly odd Deutsch-Schiffman definition). The part of me that likes to understand where ideas come from would be interested in knowing how inline caching evolved from its humble beginnings in 1983 to the more general system I describe above in SpiderMonkey. I'm not very well read on inline caching, but I can lay down an upper limit: in 2004, the paper "LIL: An Architecture Neutral Language for Virtual-Machine Stubs" describes inline cache stubs of similar generality and complexity, suggesting the shift happened somewhere between 1983 and 2004.

(*) For the purposes of this blog post, we treat the IC chains as a simple linked list; the reality is more complicated, to handle the type inference system.


Thank you so much to Ted Campbell and Jan de Mooij for taking a look at a draft of this post.


Addendum: Tom Schuster points out, and he's totally right, that the above isn't entirely clear: this isn't the -most- optimized case. IonMonkey will typically only fall back to inline caches where it can't do something better.

Erasure from GitHub

Boy, GitHub doesn't handle you abandoning an email address particularly well on its Contributors page.

If you compare the Contributors page for Eclipse OMR with the output of git shortlog, you notice a divergence:

(Screenshot: the Contributors page for Eclipse OMR on GitHub.)
$ git shortlog -sn --no-merges | head -n 4
   176    Matthew Gaudet
   168    Leonardo Banderali
   137    Robert Young
   128    Nicholas Coughlin

Turns out, the issue here is that I gave up my IBM email address on GitHub when I left. Now GitHub can't link my commits in OMR to me, so I no longer show up as a contributor.

I'm not actually upset about this, but I do wonder about the people who (wrongly) say "GitHub is your resume", and the ways this disadvantages people.

Grinding my Gears with Git

Since I started working at Mozilla, I have been doing a lot of work with Mercurial, and development in the Bugzilla way. So much so that I've not really used git much in the last four months.

Coming back to git to collaborate on a paper I am writing with some researchers, I find some things really bothersome that I had sort of taken for granted after years of becoming one with the Git way.

  • Branches / pull requests are too heavyweight! This might be a side effect of writing a paper, but what I find myself desperately wanting to do is produce dozens of independent diffs that I can throw up for other authors to look at, especially speculative diffs that change a section of the paper in a direction I'm not sure we actually want to go. This isn't so much a criticism of git as it is of the GitHub style of collaboration.
  • The staging area is way too complicated for a simple workflow of trying to make quick changes to a small project. When doing full-fledged production software engineering, I have found it useful, but working on a paper like this? It's just extra friction that doesn't produce any value.

I have another rant for another day about how LaTeX and version control make a pretty bad workflow, due to the mismatch in diff semantics (or, alternatively, the hoops one needs to jump through to get a good version-controllable LaTeX document).

Open tabs are cognitive spaces

I love this blog post by Michail Rybakov, Open tabs are cognitive spaces. It provides insight into a way of working (the hundreds-of-tabs model) I've always found baffling and confusing. I wonder whether, if some of the tooling he talks about were more common, I wouldn't drift more towards a thousand-tab style myself.

Also, I learned something super cool:

The Sitterwerk library in St.Gallen follows a serendipity principle. Every book has an RFID chip, so you can place books everywhere on the shelves - the electronic catalogue will update the location of the book accordingly. The visitors are encouraged to build their own collection, placing books that they used for their research together, thus enabling the next reader to find something relevant by chance, something they didn’t know they were looking for.
— Open tabs are cognitive spaces

I absolutely love that idea, as much as it scares the organized person in me.

Note Taking in my Work

My entire professional career I've been a note-taker at work. In the beginning, I used the Journal template of Lotus Notes.

Image borrowed from the University of Windsor IT department

I used the journal extensively throughout my internship with IBM. Each day got a new entry, and each day's entry was filled with goals, notes on what I was learning, and debugging notes about what I was working on. Everything went in the journal, even PDFs of papers I was reading, along with the notes about them. Using its full text search, the Notes journal was how I survived my internship, and the huge volume of information that I had to deal with there.

Strangely, the single-full-text-DB practice didn't come back to school with me, and I returned to paper notes in classes; perhaps for the better. At work, though, it's hard to write down a stack trace and find it again later, so when I went back to IBM as a full-time employee, I wanted to have another database of notes. For all its power, the Lotus Notes DB had some downsides I didn't like, so I didn't want to return to it and went hunting for a solution.

I landed on Zim Wiki. It served me quite well through my term, though in the last few months I worked there I got a MacBook, and I discovered that while Zim is functional on a Mac, it isn't excellent (I used the instructions here along with an Automator app to launch it).

I've tried to make sure every intern I worked with found their own solution to taking notes. Some took up Zim, but the last intern I worked with at IBM also had a Mac, and he introduced me to an interesting app called Quiver (thanks, Ray!), which I've been using for the last few months.

Quiver in action. In this view, the Markdown source is being rendered as a preview in the split-pane view.

Quiver's most distinctive feature is the notion of cells. Each document in a Quiver notebook consists of one or more 'cells' in order. A cell has a type, corresponding to its content: Text, Code, Markdown, LaTeX or Diagram. The code cells are powered by the Ace editor component, and have syntax highlighting built in.

You can add cells, split cells and re-order them, using keystrokes to change between types.

Documents can be tagged for organization, and the full-text search seems pretty good so far.


So far, my experience with Quiver has been quite positive. Every day one of the first things I do is create a new note in my "Work Logs" notebook, titled by the date, and in it I write down my day's goals as a checklist, as I understand them in the morning. In that note I keep casual debugging notes, hypotheses I need to explore, etc. I also have a note for many of the bugs I work on, where I collate more of the information in a larger chunk, if warranted.

One of the magical things (to me at least, coming from Zim) is that rich text paste works very well; this is great for capturing IRC logs formatted by my IRC client, or syntax-highlighted code from here or there. I can also capture images by pasting them in (though this also worked OK in Zim).


There are some concerns I have with Quiver though, that make me give it at best a qualified recommendation.

  • As near as I can tell, it's developed by a single developer, and I don't think he makes enough on it to work on it full time. While the application has been solid as a rock for me, it's clear there are still lots of places where work could be done. For example, there's no Touch Bar support (to be honest, I just want to use the emoji picker, though it would be neat to see the cell type accessible from the Touch Bar).
  • There are also a few little bugs I've encountered, almost all related to rich text editing. For example, the checkboxes are a bit finicky to edit around (and don't behave like bullets as I expected).

Overall, I am really enjoying Quiver, and will definitely keep using it.

Trying out Visual Studio Code

In the interests of personal growth, I've been exploring some alternatives to my old-school vim+make setup. Especially for exploring a new codebase of appreciable size, I find myself wanting a little more.

While I'm absolutely certain I could get clang_complete working in the way I want, I figure... maybe it's time to think about other tools? It's 2018 after all, and my development flow has been pretty much stagnant since about 2015, when I upgraded from grep to the silver searcher; before that, it had been pretty much stuck since ~2012, when I started doing C++ the majority of the time.

I've been really interested in seeing what Microsoft has been doing with Visual Studio Code, and I've been using it as my exclusive development environment for a little over a month now. There was a bit of a rocky setup getting IntelliSense working with SpiderMonkey, and I'm not entirely proud of the solution I came up with (hacking together the c_cpp_properties.json in such a way that changes to defines, if and when they happen, are going to cause me trouble; alas, there's no support for -include right now), but it works!

It's been a long time since I've used an IDE, and I have to say... I like it. Having tight local feedback on syntax errors is worth so much of the other pain VSCode has put me through, but also having access to IntelliSense is pretty amazing. The built-in terminal has become incredibly powerful for me, allowing me to use the command-line tools I want (hg wip) without leaving the IDE, and the ability to cmd-click an error message's filename:line-number to jump there in the editor is pretty great.

As a very long-time vim user, I find myself a little surprised at how little I miss modal editing. I think the only motion I regularly miss is jump-to-enclosing-(brace, paren).

VSCode has a lot really going for it:

  • Almost all the settings are done via JSON files. While I normally hate hand-editing JSON, it's a refreshing change from most software's control panels, and allows great granularity, doubly so since VSCode is syntax checking its own settings files. 
  • Lots of passive information sources. The editor uses the gutters to great effect in providing information, such as highlighting lines that have changed in the patch you are working on. It has a minimap, similar to Sublime Text (though I've never used Sublime), and inside the minimap similar gutter information is used to highlight search matches, syntax errors, etc.
The green bar in the gutter is saying this is new, uncommitted code.

  • Fuzzy file open (cmd-P) is a built in feature. 
  • Find by symbol is pretty magical (cmd-T).
  • The code command line tool allows me to open files, even from the built in terminal. 

Now, I shouldn't say that it's been entirely without pain.

  • The Mercurial plugin is quite basic and doesn't serve my needs particularly well, leading me to use the command line in the built-in terminal. This is mostly fine, though I've yet to hook up 'code' as my editor.
  • Occasionally IntelliSense just loses its mind. It generally seems to get better after a restart.

I've tried out the debugger integration, which was... OK; though, that could mostly reflect my comfort with a command line debugger.

I have yet to put the extensions through their paces though. So far, all I've installed are the C++ tools, the mercurial extension, and one for trimming whitespace at the end of edited lines.

Overall, a month in, I'm very impressed. We'll see if it sticks!

FOSDEM Community Track (February 2017)

*cough* This was a draft post that it turns out I totally forgot about. Looking it over though, it seems fairly complete, despite my never having posted it.


I was at FOSDEM speaking as part of the Ruby DevRoom this year. I had a great time, and you can watch my talk here.

However, the Ruby track was only on the first day. The second day, I spent some time at the "Community" track... despite the fact that I couldn't get into most talks because of room size issues!

During the Mentoring 101 talk, those who couldn't get into the room instead held a round table in the hallway outside the room, which I found fascinating. The topic of the round table started with "How do you mentor new people in your community?", but also stretched into how to encourage new people to become part of your community.

There was a good spread of projects participating in the talk, including community members from WordPress, LibreOffice and Apache Spark.

I took some notes from that discussion, which I'll share and expand on below:

Signposting:

Many people made the point that it's really important as a community that you demonstrate the variety of ways in which your community is willing to take contributions. https://make.wordpress.org/ was called out as a good example of this: it lists 16 different WordPress subteams, each of which points out what kind of work they do and how you can get involved.

Other signposting pointed out:

  • Issue labels: "Beginners" is a good choice, though some communities go further and have a "first time contributor" tag. A comment made by a number of people was the importance of curating these beginner tags and ensuring they are properly laid out. Similarly, it's really important that more experienced developers don't tackle these, to avoid them drying up. Stories were told of some projects that would actively reject pull requests for "first time contributor" bugs if the author had done work on the project before.
    • Some people pointed out a good tag that's not common enough was "second time contribution" -- these are the slightly larger tasks that really help hook people into a community.
  • Recognition: Some projects make a big deal of recognizing everyone who contributes. LibreOffice apparently sends out a paper certificate.
  • Non-code contributions: super important to call out their value! Documentation, bug triage, and reproduction got a huge number of nods.

Onboarding

  • Face to face is super important: Hangouts, Skype, etc. It's important to build those personal relationships. If you're geographically close, coffee shops.
  • Open sprint day: A day where a large fraction of the community tries to show up simultaneously to work on a sprint together (virtual or real world!)
  • Have people document their own onboarding struggles. Easy contribution, but also super valuable.

Advertising to new contributors:

Sites exist to pull in new contributors.

There are university programs asking students to try to contribute to OSS: Having smooth paths to help them is great.

Culture

  • Be aware! The loudest culture wins.

Reading Testarossa Compiler Logs

The Testarossa compiler technology is the one included in both Eclipse OMR and OpenJ9. One of the interesting features of the OMR compiler technology is its pervasive use of semi-structured logging to explain the actions of the compiler as it compiles.

This is particularly important in the OpenJ9 compiler, which makes a huge number of decisions at runtime based on data drawn from the running Java application or the state of the system (most compilation in OpenJ9 happens asynchronously on one of four compilation threads).

You can generate logs using the tracing options documented in the OMR Compiler Problem Determination Guide.

For Java, this typically means passing some -Xjit:trace* option, in addition to a log file specification.

If you download a build of OpenJ9 (from AdoptOpenJDK, let's say), you can test this out by generating logs for every method compiled while running java -version, like this:

$ java -Xjit:traceIlGen,log=logFile -version

You can modify this to see what was compiled by adding verbose to the Xjit options:
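
Something along these lines should do it (I'm reconstructing the command here; the exact verbose output will vary from build to build):

$ java -Xjit:verbose,traceIlGen,log=logFile -version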

I've truncated the verbose output for space.

Of course, if you log everything, you'll likely produce huge logs that are a slog to deal with, so the Testarossa technology provides filtering mechanisms. For example, let's say we want to restrict the log to just the method java/lang/String.hashCode()I:

$ java  -Xjit:'{java/lang/String.hashCode*}(traceIlGen,log=logFile)'  -version

The additional quoting is there to deal with the shell wanting to handle many of the characters in that option string.

So, traceIlGen isn't a particularly interesting option unless you're looking at how bytecode becomes intermediate representation, at which point it becomes great. traceFull is a useful alias for a number of tracing flags (though, despite the name, not all of them).

$ java -Xjit:'{java/lang/String.hashCode*}(traceFull,log=logFile)' -version

Using the above command, I got a traceFull log for java/lang/String.hashCode()I, and put it up on GitHub as a Gist. The rest of this post will talk about that gist.

So, if you look at it, the logs are XML... ish. There are pieces that try to form XML, but other pieces are unaware of that and write to the log as plain text.

I personally have waffled from time to time as to whether or not the artifice is worthwhile, or problematic. I lean towards worthwhile now, but have not always.

The basic pattern for most of a traceFull log is as follows:

  • A section on IlGen, the mapping of bytecode to trees (the Testarossa IL).
  • A dump of the trees, any new elements of the symbol reference table, and the CFG.
  • The optimization results for an optimization pass.
  • Another dump of the trees.

The last two points repeat until the optimization strategy is done executing.

Optimizations will number the transformations they make to allow selective disablement.

<optimization id=9 name=coldBlockOutlining method=java/lang/String.hashCode()I>
Performing 9: coldBlockOutlining
[     2] O^O COLD BLOCK OUTLINING: outlined cold block sequence (9-10)
[     3] O^O COLD BLOCK OUTLINING: outlined cold block sequence (5-5)

This comment does an excellent job of explaining it, though the idea has also been called "optimization fuel" before.

As far as reading the trees, I'll defer to the documentation about the intermediate representation, contained in this directory, and in particular this document, Intro to Trees.

There's a lot more in these logs, but I'm a bit tired, so I'll leave this here. The logs are not dense, but they can be invaluable in understanding the decisions the compiler has made over a compilation and in identifying bugs.

Some notes on CMake variables and scopes

I've been doing a lot of work on CMake for Eclipse OMR for the last little while.

CMake is a really ambitious project that accomplishes so much with such simplicity it's like magic... so long as you stay on the well trodden road. Once you start wandering into the woods, because your project has peculiar needs or requirements, things can get hairy pretty quickly.

There's a pretty steep ramp from "This is amazing, a trivial CMakeLists.txt builds my project" to "How do I do this slightly odd thing?"

We'll see how much I end up talking about CMake, but I'll start with a quick discussion of variables and scopes in CMake.

Variables and scopes in CMake

First, a quick note of caution: Variables exist in an entirely separate universe from properties, and so what I say about variables may well not apply to properties, which I am much less well versed in.

Variables are set in CMake using set:

set(SOME_VARIABLE <value>)

The key to understanding variables in CMake in my mind is to understand where these variables get set.

Variables are set in a particular scope. I am aware of two places where new scopes are created:

  1. When add_subdirectory is used to add a new directory to the CMake source tree and
  2. When invoking a function

Each scope when created maintains a link to its parent scope, and so you can think of all the scopes in a project as a tree.

Here's the trick to understanding scopes in CMake: Unlike other languages, where name lookup would walk up the tree of scopes, each new scope is a copy by value of the parent scope at that point. This means add_subdirectory and function inherit the scope from the point where they're called, but modification will not be reflected in the parent scope.

This can actually be put to use to simplify your CMakeLists.txt. A surprising amount of CMake configuration is still done only through what seem to be 'global' variables, despite the existence of more modern forms. For example, despite the existence of target_compile_options, if you need to add compiler options only to C++ compiles, you'll still have to use CMAKE_CXX_FLAGS.

If you don't realize, as I didn't, that scopes are copied by value, you may freak out about contaminating the build flags of other parts of a project. The trick is realizing that the scope copying limits the impact of the changes you make to these variables.
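
Here's a minimal sketch of what I mean (the flags and directory names are made up):

# Top-level CMakeLists.txt
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall")
add_subdirectory(child)   # child starts from a copy of this scope, including -Wall

# child/CMakeLists.txt
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-exceptions")
# Targets in child/ (and below) get -fno-exceptions; the parent scope never sees it.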

Parent Scope

Given that a scope has a reference to the scope it was copied from, it maybe isn't surprising that there's a way in CMake to affect the parent scope:

set(FOO <foo value> PARENT_SCOPE)

This sets FOO in the parent scope... but not the current scope! So if you want to read FOO back again and see the updated value, you'll want to write to FOO without PARENT_SCOPE as well.
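
A minimal sketch (the function and value are hypothetical):

function(pick_install_dir)
    set(INSTALL_DIR "/opt/example" PARENT_SCOPE)  # visible to the caller once we return
    set(INSTALL_DIR "/opt/example")               # also set locally, so we can read it below
    message(STATUS "Installing to ${INSTALL_DIR}")
endfunction()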

Cache Variables

Cache variables are special ones that persist across runs of CMake. They get written to a special file called CMakeCache.txt.

Cache variables are a little bit different. They're typed (as they interact with CMake's configuration GUI system), and they tend to override normal variables (which makes a bit of sense). Mostly, though, I'll defer to the documentation on this subject!
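
As a quick sketch (the variable name is hypothetical), a typed, documented cache entry looks like this:

set(MY_FEATURE_TOGGLE ON CACHE BOOL "Enable the hypothetical feature")

Users can then drive it from the command line with cmake -DMY_FEATURE_TOGGLE=OFF, and the value sticks around in CMakeCache.txt for later runs.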

Scope Tidbits:

There are a couple of other random notes related to scoping I'd like to share.

  1. It appears that not all scopes are created equal. In particular, targets seem to always use target-affecting variables from the enclosing directory scope, not from function scopes:

    function(add_library_with_option)
        set(CMAKE_CXX_FLAGS "-added-option")
        add_library(foo T.cpp)
    endfunction(add_library_with_option)

    It's been my experience that the above doesn't work as expected, because the add_library call doesn't seem to see the modification of the CXX flags.

  2. Pro Tip: If anything has gone wrong in your CMakeLists.txt, try looking in the cache! It's just a text file, but can be crazy helpful to figure out what's going on.

Paper I Love: "The Challenges of Staying Together While Moving Fast"

(Prefix inspired by Papers We Love)

I recently had the opportunity to meet Julia Rubin when she was visiting IBM. While we met for only a few minutes, we had a great (albeit short) conversation, and I started looking into her publications. After only one read (out of a small pile!), I've already found a paper I want to share with everyone:

"The Challenges of Staying Together While Moving Fast: An Exploratory Study" - Julia Rubin and Martin Rinard (PDF)

This paper speaks to me: It really validates many of my workplace anxieties, and assures me that these feelings are quite universal across organizations. The industry really hasn't nailed building large software products, and there's a lot of work that could be done to make things better. 

The paper includes a section titled "Future Opportunities". I hope academia listens, as there are great projects in there with potential for impact on developers lives. 

Lambda Surprise

Another day of being surprised by C++

typedef int (*functiontype)();
int x = 10;

functiontype a, b;
a = []() -> int { return 10; }; // OK: a capture-less lambda converts to a function pointer
b = [&x]() -> int { return x; }; // Type error: a capturing lambda does not

My intuition had said the latter should work; after all, the only thing that changed was the addition of a capture expression.

However, this changes the type of the lambda so that it's no longer coercible to a regular function pointer (and in fact, others I've talked to suggest that the surprising thing is that the assignment to a works at all).

sigh.

There is a workaround:

#include <functional> 
typedef std::function<int()> functiontype;
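
Putting it together, here's a complete sketch showing that std::function happily holds both lambdas:

#include <functional>

typedef std::function<int()> functiontype;

int main() {
    int x = 10;
    functiontype a = []() -> int { return 10; };  // still fine
    functiontype b = [&x]() -> int { return x; }; // now fine too: std::function can wrap a capturing lambda
    return a() + b();                             // 20
}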

It's funny: Before working with C++ lambdas, I had been thinking they would provide huge amounts of power when working with (and possibly providing!) C/C++ API interfaces. Alas, they are special beasts.

Debugging a Clang plugin

Writing this down here so that maybe next time I google it, I will find my own page :)

The magical incantation desired is:

$ gdb --args clang++ .... 
Reading symbols from clang++...(no debugging symbols found)...done.
(gdb) set follow-fork-mode child

Doing this means that gdb will not get stuck just debugging the clang driver!
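
A related trick I believe works (not part of the original incantation): ask the driver to print the underlying cc1 command with -### and then debug that command directly, skipping the driver entirely.

$ clang++ -### <your usual arguments>
$ gdb --args <the cc1 command line printed above>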

Going to be speaking at the Ruby devroom 2016!

I will be speaking this year at the FOSDEM Ruby devroom about the challenges the Ruby+OMR JIT compiler faces, and how they can be surmounted with your help! The abstract is below, or on the FOSDEM website. 

Highly Surmountable Challenges in Ruby+OMR JIT Compilation

The Ruby+OMR JIT compiler adds a JIT to CRuby. However, it has challenges to surmount before it will provide broad improvement to Ruby applications that aren’t micro-benchmarks. This talk will cover some of those challenges, along with some brainstorming about potential ways to tackle them.

The Ruby+OMR JIT compiler is one way to add JIT compilation to the CRuby interpreter. However, it has a number of challenges to surmount before it will provide broad improvement to Ruby applications that aren’t micro-benchmarks. This talk will cover some of those challenges, along with some brainstorming about potential ways to tackle them.

The challenges range from small to large. You can get a sneak peek by looking through the issue tracker for the Ruby+OMR preview.  

Boobytrapped Classes

Today I learned: You can construct Boobytrapped class hierarchies in C++.

Here's an example (Godbolt link)

#include <iostream> 
struct Exploder { 
 // Causes explosions! 
}; 

struct Unexploder { 
  void roulette() {} 
};

template<class T>
struct BoobyTrap : public T { 
  /* May or may not explode. 
  */
  void unsafe_call () { exploder(); }
  void safe_call() {} 

  private: 

  void exploder() { T::roulette(); } 
}; 

int main(int argc, char** argv) { 
    BoobyTrap<Unexploder> s; 
    s.safe_call();
    s.unsafe_call(); // Click! We survived! 

    BoobyTrap<Exploder> unsafe;
    unsafe.safe_call(); 

    // Uncomment to have an explosion occur. 
    // Imagine this with conditional compilation?
    // unsafe.unsafe_call(); 
    return 0;
}

The wacky thing here is that you can totally use the safe_call member function of the BoobyTrap class independent of the parent class, because unsafe_call is only instantiated if you call it!

This feels awkward, because it divides the interface of BoobyTrap into callable and uncallable pieces. I can't decide if I think this is a good idea or a bad one.

Pro:

  • You can knit together classes, and so long as the interfaces match well enough for the calls you actually make, you're OK.

Con:

  • Feels like Fragile Base class ++

Thanks to Leonardo for pointing this out!