One of my OSS libraries is scalajs-benchmark, which provides a means to benchmark Scala.JS-generated code. I’ve recently added some new features to it:
- ability to save results, with JMH JSON being one of the supported formats
- batch mode, which allows you to press a button and have all suites & benchmarks run, with the results saved automatically
- the calculation logic now matches JMH itself
This is very useful because, with a tool like jmh-visualizer, you can load and compare the JMH JSON files.
If you haven’t seen them, I also maintain online benchmarks, which I’ve recently updated to include a matrix of Scala versions × Scala.JS versions. A number of the online benchmarks measure the performance of parts of the stdlib collections. Now that I’ve got batch mode and can use jmh-visualizer to compare results, I’ve also started running all benchmarks on a stable machine and saving the results in the scalajs-benchmark repo.
I find this very useful and interesting. You can just drag a few files from my repo into jmh-visualizer and immediately identify performance regressions/improvements. Here’s an example that compares Scala 2.13.2 collection building compiled with Scala.JS 1.0.1 vs Scala.JS 1.1.0, and you can see there are some improvements but also some regressions.
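To give a feel for what this kind of comparison involves, here’s a minimal sketch in Python of diffing two JMH JSON result files, roughly what jmh-visualizer does under the hood. The benchmark names and scores below are made up for illustration; the JSON shape (a list of entries with a `benchmark` name and a `primaryMetric.score`) follows the standard JMH JSON output format.

```python
import json

# Two made-up JMH JSON result sets, standing in for files saved from two
# different Scala.JS versions. Names and numbers are purely illustrative.
old_results = json.loads("""[
  {"benchmark": "example.VectorBuild", "mode": "avgt",
   "primaryMetric": {"score": 120.0, "scoreError": 2.0, "scoreUnit": "us/op"}},
  {"benchmark": "example.ListBuild", "mode": "avgt",
   "primaryMetric": {"score": 80.0, "scoreError": 1.5, "scoreUnit": "us/op"}}
]""")
new_results = json.loads("""[
  {"benchmark": "example.VectorBuild", "mode": "avgt",
   "primaryMetric": {"score": 100.0, "scoreError": 1.8, "scoreUnit": "us/op"}},
  {"benchmark": "example.ListBuild", "mode": "avgt",
   "primaryMetric": {"score": 90.0, "scoreError": 1.2, "scoreUnit": "us/op"}}
]""")

def score_changes(old, new):
    """Return {benchmark: % change in score} for benchmarks present in both runs."""
    old_scores = {e["benchmark"]: e["primaryMetric"]["score"] for e in old}
    changes = {}
    for e in new:
        name = e["benchmark"]
        if name in old_scores:
            pct = 100.0 * (e["primaryMetric"]["score"] - old_scores[name]) / old_scores[name]
            changes[name] = pct
    return changes

# In avgt mode a lower score (time per op) is better, so a positive
# change means a regression and a negative change an improvement.
for name, pct in score_changes(old_results, new_results).items():
    direction = "slower" if pct > 0 else "faster"
    print(f"{name}: {pct:+.1f}% ({direction})")
```

Running this against the sample data flags one improvement and one regression, which mirrors the mixed results in the 1.0.1 vs 1.1.0 comparison above.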
So, with all that background, I finally arrive at my questions.
- Is this something the Scala / Scala.JS teams would find useful and would like to see maintained?
- Is it something you’d consider helpful?
- Would it only become helpful if some missing feature were added?
- If so, I’m happy to maintain and update all the data with every new Scala and Scala.JS release.
- I was also thinking I could add a page to allow quick comparison of result sets, rather than faffing around with URLs or downloading the repo and manually dragging files into jmh-visualizer. I don’t need that ability myself because I have the repo checked out, but if there’s a decent amount of interest I’d be happy to do it.