One of my OSS libraries is scalajs-benchmark, which provides a means to benchmark Scala.js-generated code. I’ve recently added some new features to it:
- ability to save results, with JMH JSON being one of the supported formats
- batch mode, which allows you to press a button and have all suites & BMs run, with the results saved automatically
- the calculation logic now matches JMH itself
This is very useful because, using a tool like jmh-visualizer, you can load and compare the JMH JSON files.
If you haven’t seen, I also maintain online benchmarks, which I’ve recently updated to include a matrix of Scala versions × Scala.js versions. A number of the online benchmarks measure the performance of parts of the stdlib collections. Now that I’ve got batch mode and can use jmh-visualizer to compare results, I’ve also started running all BMs on a stable machine and saving the results in the scalajs-benchmark repo.
So, finally, with all that background, I arrive at my questions:
- Is this something the Scala / Scala.js teams would find useful and would like to see maintained?
- Is it something you’d consider helpful?
- Would it be helpful if only some missing feature were added?
If so, I’m happy to maintain and update all the data with every new Scala and Scala.js release.
I was also thinking I could add a page to allow quick comparison of result sets, rather than faffing around with URLs or downloading the repo and manually dragging files into jmh-visualizer. I don’t need that ability myself because I have the repo checked out, but if there’s a decent amount of interest I’d be happy to add it.
I’m getting notifications of likes on this, which is very nice, but I’m guessing that I was too wordy in my post and people might be missing the bottom part. Just in case, allow me to highlight (and in bold, for people who are quickly skimming):
**Please answer the questions at the end of the post if you have strong feelings either way.**

It wasn’t so much meant as a show-and-tell; I’m imagining it’s more useful to some community members than it is to me, and if that’s the case, I’d like to help. Us everyday Scala users get soooooo much value from all the work put in by its authors, maintainers, and those on the Contributors forum, that if there’s something I can do to contribute back to the Contributors I’d be very happy to help.
Personally, I’ve never played too much with the online benchmarks because they tend to measure the collections, which, although important, are too far removed from my area of influence in terms of performance. When developing Scala.js, I concentrate on our own set of benchmarks, located at
Now, although we have a number of precise benchmarks there, and they are very useful to me (they are the source of the measurements that we display at https://www.scala-js.org/doc/internals/performance.html), the UI in that repo kind of sucks, TBH. It would be really nice to investigate whether we can reuse your library with the benchmarks in that repo, to get better batch execution and the JMH visualizer. That would be cool! If you get bored at some point you could try it yourself, but otherwise I will probably give it a try next time I have to run those benchmarks.
One thing that I would need for this to be useful to me is that it can easily be used with a SNAPSHOT version of Scala.js published locally. That’s because the only times I actually look at performance graphs are during improvements to the Scala.js linker, and so I need to run the benchmarks with a SNAPSHOT version. I guess that is probably already supported by your library/tool, though.
Hey @sjrd, forgive my late response. I had many other things going on and simply forgot.
It’s trivial to take the benchmarks you have in your repo and integrate them into scalajs-benchmark, be it into the existing online BMs that I maintain, or into a new, separate, exclusive set. Porting your existing BMs mostly just consists of copy-pasting them into wrappers that give them a name (see the sketch below).
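To make that concrete, a wrapper would look something like this; `runDeltaBlue()` is a hypothetical stand-in for one of your existing benchmark entry points, so treat this as a rough sketch rather than a finished port:

```scala
import japgolly.scalajs.benchmark._
import japgolly.scalajs.benchmark.gui._

object ScalaJsBenchmarks {

  // Hypothetical stand-in for an existing benchmark's entry point.
  def runDeltaBlue(): Unit = ???

  // The wrapper just gives the existing code a name so SJB can run it.
  val suite = GuiSuite(
    Suite("Scala.js Benchmarks")(
      Benchmark("deltablue")(runDeltaBlue())
    )
  )
}
```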
Also, using SNAPSHOT versions of Scala.js is trivial because it’s just an sbt-managed dependency; SJB (scalajs-benchmark) doesn’t care, so no problem there, especially seeing as you’re so careful to ensure the IR is backwards-compatible.
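Concretely, after an `sbt publishLocal` in your Scala.js checkout, it should just be a matter of bumping the plugin version in the benchmark project (the version number here is hypothetical):

```scala
// project/plugins.sbt
// Use a locally published Scala.js snapshot; sbt resolves it from the
// local ivy repository by default after `publishLocal`.
addSbtPlugin("org.scala-js" % "sbt-scalajs" % "1.6.0-SNAPSHOT")
```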
So we’re good on those two fronts, but there’s also a potential issue: each “build” or “deployment” of a SJB-app is a single environment. In your case, environment = {Scala.JS version, fast/full, ±opt, ±gcc}. Within that environment you can press a single button, walk away and come back to find all it’s BMs have run and results have been saved locally. It’s kind of up to you (and not automated (yet?)) how you compare results between environments. My process is I just commit BM results (JMH JSON) to my repos, and to compare I simply drag two into JMH Visualizer. That’s fine for me but I’m not sure if that’s going to be fine for you, given you generate graphs for .../performance.html. I imagine for your usecase, you’d have one app, compile it different ways (±gcc, etc) - that part’s easily automate-able. From memory, you can pass a query param to auto-start the BMs in batch-mode - opening a browser is easily automated but if you want to close the browser when the BMs are done so that your automation can move on to launching the next set of BMs, you’d have to build that bit yourself. (Can tabs close themselves via JS? If so I can add that ability to SJB). And finally once all the results are available, it would be easy enough to parse them and generate the graphs but yeah, you’d have to adapt whatever code you already have to parse JMH JSON.
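For a rough idea of what that adaptation involves, here’s a minimal sketch that pulls (benchmark name, score) pairs out of a JMH JSON file. It assumes circe for the JSON parsing, and `scores` is just a hypothetical helper name; it only reads the two fields a graph would need:

```scala
import io.circe.Json
import io.circe.parser.decode

// JMH JSON is an array of runs, each carrying a "benchmark" name and a
// "primaryMetric" object whose "score" is the headline measurement.
def scores(jmhJson: String): Either[io.circe.Error, List[(String, Double)]] =
  decode[List[Json]](jmhJson).map(_.flatMap { run =>
    for {
      name  <- run.hcursor.get[String]("benchmark").toOption
      score <- run.hcursor.downField("primaryMetric").get[Double]("score").toOption
    } yield (name, score)
  })
```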
So, in summary: I don’t know exactly what your current process is or how much manual intervention it requires, but if you think SJB would be an improvement, I’d be happy to help.
Oh, and finally: the BMs that I maintain online are cut by {Scala version, Scala.js version, fast/full}, so I now see that even if I added your BMs to it, it unfortunately wouldn’t get you all the way (there’s no ±opt / ±gcc axis). The most appropriate solution (assuming one wants to use SJB) is probably to have your own repo that uses SJB as a normal library.
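If you go that route, pulling in SJB should just be a one-liner in that repo’s build; something like the following (coordinates from memory and the version is hypothetical, so double-check against the README):

```scala
// build.sbt -- add scalajs-benchmark as a normal library dependency
libraryDependencies += "com.github.japgolly.scalajs-benchmark" %%% "benchmark" % "0.7.0"
```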