SPLASH 2024: Impressions and Feelings

I thought it would be useful to sit down and write up some of my thoughts on SPLASH 2024 while they are still fresh.

Due to happy nuptials (& a pressing desire to get home), I was only able to attend SPLASH for 2.5 days: Wednesday, Thursday, and Friday morning.

The beauty of any conference is of course the Hallway Track, so there are many papers and presentations I missed and still need to read or watch. In this write-up I’ll just highlight the papers / presentations I managed to catch. Missing something here says nothing other than that I likely missed it :)

REBASE

Wednesday was REBASE. It was my first time attending REBASE, and I quite liked it. Industry / academic cross-overs are very valuable in my opinion.

After REBASE ended, a group of us ended up chatting in the room for so long that we missed the student research competition and the food!

Thursday

The day opened with a keynote by Richard P. Gabriel, talking about his career and how he sees AI, having lived through a few AI winters.

  • Wasm-R3: Record-Reduce-Replay for Realistic and Standalone WebAssembly Benchmarks was quite cool. As an engine developer it’s right up my alley, but it also addresses a real use-case I see, which is the generation of benchmarks from real applications.

  • WhiteFox: White-box Compiler Fuzzing Empowered by Large Language Models. This was quite neat, and honestly a decent use for an LLM in my mind. The basic idea is to provide the code of an optimization (in a deep learning compiler like PyTorch, in the paper) to an LLM, and have it describe the essential features of a test case that would exercise it, including example code. Then, using those essential features and example code, generate fuzz-test cases. There’s a feedback loop here to make sure the generated test cases actually exercise the optimizations as predicted (I’ve put a rough sketch of the shape of this loop after this list). Their results really seem to speak for themselves -- they’ve been called out by the PyTorch team for good work. Overall I was pretty impressed by the presentation.

  • Abstract Debuggers: Exploring Program Behaviors Using Static Analysis Results. This was a really neat piece of work. The basic thrust is that most static analyzers either say “Yep! This is OK” or “Nope, there’s a problem here”. The challenge is that interpreting how a problem arises is often a bit of a pain, and furthermore, all the intermediate work a static analyzer does is hidden inside it, providing no value to users.

    The authors of this paper ask the question (and provide a compelling demo of) “What if you expose a static analyzer like a debugger?” What if you could set breakpoints, and step through the sets of program states that lead to an analysis failure? They make a compelling case that this is actually a pretty great interface, and I’m very excited to see more of it (I’ve put a toy illustration of the idea after this list).

    As a fanatic about omniscient debugging, I found myself wondering what the Pernosco of static analysis would look like; alas, I never managed to formulate the question in time during the session, and then didn’t get a chance to talk to the presenting author afterwards.
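
To make the WhiteFox-style loop a bit more concrete, here’s a minimal Python sketch of how I understood it. To be clear, this is my own toy paraphrase: query_llm, generate_tests, and optimization_triggered are hypothetical placeholders, not the paper’s actual implementation.

    # Hypothetical sketch of a WhiteFox-style white-box fuzzing loop.
    # None of these functions are the paper's real code; they are placeholders
    # that illustrate the feedback structure described in the list above.

    def query_llm(prompt: str) -> str:
        """Placeholder for an LLM call; returns a canned response here."""
        return "requirement: ops must be fusible; example: torch.relu(x) + x"

    def generate_tests(requirements: str, n: int) -> list[str]:
        """Placeholder: ask the LLM for n candidate test programs based on the
        extracted requirements and example code."""
        return [f"# test {i} derived from: {requirements}" for i in range(n)]

    def optimization_triggered(test_program: str) -> bool:
        """Placeholder: compile/run the test with instrumentation and report
        whether the targeted optimization pass actually fired."""
        return "fusible" in test_program  # stand-in for real instrumentation

    def whitefox_style_loop(optimization_source: str, rounds: int = 3) -> list[str]:
        keepers: list[str] = []
        # Step 1: ask the LLM what a test needs to look like to exercise this
        # optimization (the "essential features" plus example code).
        requirements = query_llm(f"Describe inputs that trigger:\n{optimization_source}")
        for _ in range(rounds):
            # Step 2: generate candidate fuzz tests from those requirements.
            candidates = generate_tests(requirements, n=5)
            # Step 3: feedback loop -- keep only tests that actually exercise
            # the optimization, and fold that signal back into the next prompt.
            hits = [t for t in candidates if optimization_triggered(t)]
            keepers.extend(hits)
            requirements = query_llm(
                f"Refine the requirements; {len(hits)}/{len(candidates)} tests triggered it."
            )
        return keepers

    if __name__ == "__main__":
        print(len(whitefox_style_loop("def fuse_elementwise(graph): ...")))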
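
The abstract-debugger idea is easiest to appreciate by poking at it, so here’s a toy sketch of what “stepping through a static analysis” could feel like: a tiny interval-domain analysis with a debugger-ish next()/state interface. Again, this is my own illustration, not the authors’ tool.

    # Toy illustration of the "debug a static analyzer" idea: step through the
    # abstract states of a tiny interval analysis, one statement at a time.
    # This is my own sketch, not the tool presented in the paper.

    from dataclasses import dataclass, field

    @dataclass
    class AbstractState:
        # Maps variable name -> (low, high) interval.
        intervals: dict = field(default_factory=dict)

        def copy(self) -> "AbstractState":
            return AbstractState(dict(self.intervals))

    def transfer(state: AbstractState, stmt: tuple) -> AbstractState:
        """Apply one statement to the abstract state.
        Statements: ("assign", var, lo, hi) or ("add", var, delta)."""
        new = state.copy()
        if stmt[0] == "assign":
            _, var, lo, hi = stmt
            new.intervals[var] = (lo, hi)
        elif stmt[0] == "add":
            _, var, delta = stmt
            lo, hi = new.intervals[var]
            new.intervals[var] = (lo + delta, hi + delta)
        return new

    class AbstractDebugger:
        """Step through the analysis like a debugger: call next() to advance one
        program point and inspect the abstract state along the way."""
        def __init__(self, program):
            self.program = program
            self.pc = 0
            self.state = AbstractState()

        def next(self) -> AbstractState:
            if self.pc < len(self.program):
                self.state = transfer(self.state, self.program[self.pc])
                self.pc += 1
            return self.state

    if __name__ == "__main__":
        program = [("assign", "x", 0, 10), ("add", "x", 5), ("add", "x", -20)]
        dbg = AbstractDebugger(program)
        for _ in program:
            print(dbg.pc, dbg.next().intervals)  # watch x's interval evolve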

Friday

  • Redressing the balance: a yin-yang perspective on information technology. Konrad Hinsen used the idea of Yin and Yang to interrogate the way in which we work in information technology. In his presentation, Yang is the action precipitated by the thought of Yin; his argument is that we have been badly imbalanced in information technology, focused on the Yang of “build fast and break things” and not nearly enough on the balancing Yin of “think and explore”. As a result, tools and environments for thought have been left un-built, while the focus has landed on tools for shipping products.

    His hope is that we can have a vision of software that’s more Yin-focused; his domain is scientific software, and he’s interested in software with layers -- documentation, formal models, execution semantics.

  • Mark-Scavenge: Waiting for Trash to Take Itself Out. This neat paper proposes a new concurrent GC algorithm that tries to eliminate wasted work caused by evacuating objects which end up being dead by the time they are evacuated. This is done by performing evacuation using the set of sparse pages selected during a previous GC cycle, only evacuating objects that are rediscovered (still live) in the following cycle (a loose sketch of the idea follows below).

    As a last-ditch measure, the GC can always choose to evacuate a sparse page immediately, making use of headroom.

    It was quite a compelling presentation, with good results for the JVM.
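
To illustrate the deferred-evacuation trick as I understood it (and only as I understood it; this is a loose Python sketch of the idea, not the algorithm from the paper):

    # Loose sketch of the deferred-evacuation idea: pick sparse pages in one GC
    # cycle, but only evacuate the objects on them that are still live
    # (rediscovered) by the *next* cycle; objects that die in between are never
    # copied at all. This is my paraphrase, not the paper's algorithm.

    def select_sparse_pages(pages: dict, threshold: float = 0.3) -> set:
        """Candidate pages whose live ratio is below the threshold."""
        return {p for p, page in pages.items()
                if len(page["live"]) / page["capacity"] < threshold}

    def gc_cycle(pages: dict, candidates: set) -> tuple:
        """One marking cycle: evacuate only the objects on candidate pages that
        are still live now; return the next cycle's candidates and a copy count."""
        copied = 0
        for p in candidates:
            still_live = pages[p]["live"]   # objects rediscovered by this mark
            copied += len(still_live)       # only these get evacuated
            pages.pop(p)                    # page is reclaimed after evacuation
        return select_sparse_pages(pages), copied

    if __name__ == "__main__":
        # Two pages: page 1 is sparse, page 2 is dense.
        pages = {
            1: {"capacity": 10, "live": {"a", "b"}},
            2: {"capacity": 10, "live": {f"o{i}" for i in range(9)}},
        }
        candidates = select_sparse_pages(pages)   # cycle N: select, don't copy yet
        pages[1]["live"].discard("b")             # "b" dies before cycle N+1
        candidates, copied = gc_cycle(pages, candidates)  # cycle N+1: evacuate survivors
        print(f"copied {copied} object(s)")       # only "a" is copied, "b" never was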

The Things I Missed

There’s a whole bunch of presentations and papers I missed that I would definitely like to catch up on.

Conclusion

Every year that I come to an academic conference as an industry practitioner, I am reminded of the value of keeping yourself even a little bit connected to the academic world. There’s interesting work happening there, and it’s always nice to hear dispatches from worlds which may be one possible future!