Not knowing anything about JS performance tuning, this is my best guess, too. There is no exception handling involved in `appended` or `prepended`. Both are as straightforward as they could be in the new implementation: a single megamorphic call up front (which wasn’t the case in the old implementation) and then some auxiliary calls to small methods which are all monomorphic and inlinable.
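A rough sketch of the call pattern described above (all names here are invented for illustration; this is not the actual `Vector` code): one virtual call up front that may be megamorphic, dispatching to a concrete class whose helpers are small, private, and therefore monomorphic and easy to inline.

```scala
// Hypothetical shape of the new code path, for illustration only.
abstract class Vec[A] {
  def prepended(a: A): Vec[A] // the single (potentially megamorphic) call site
  def apply(i: Int): A
  def length: Int
}

final class Vec1[A](elems: Array[AnyRef]) extends Vec[A] {
  def apply(i: Int): A = elems(i).asInstanceOf[A]
  def length: Int = elems.length

  def prepended(a: A): Vec[A] =
    new Vec1[A](copyPrepend(a.asInstanceOf[AnyRef], elems))

  // Small, monomorphic, inlinable helper.
  private def copyPrepend(a: AnyRef, as: Array[AnyRef]): Array[AnyRef] = {
    val res = new Array[AnyRef](as.length + 1)
    res(0) = a
    System.arraycopy(as, 0, res, 1, as.length)
    res
  }
}
```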
What does “undefined behaviour” mean in this case? Your further explanations seem to indicate that it reliably throws an exception but a different kind. Is this correct? Why can’t it throw the right one?
Could we have separate code paths for the JVM and Scala.js? (Hey, wasn’t there some kind of preprocessor SIP for this kind of thing?)
It looks like performance on the JVM and Scala.js is very much at odds here, and there is no single solution that will be fast on both. On the JVM an exception handler is essentially free, but we save a few percent (in the `apply` performance tests) by not doing the bounds checking twice.
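To make the trade-off concrete, here is a minimal sketch (a hypothetical class, not the actual implementation) of the two strategies for `apply`: relying on the JVM’s essentially free exception handler versus an explicit up-front bounds check, which is the portable option since `ArrayIndexOutOfBoundsException` is undefined behavior on Scala.js.

```scala
final class IntSeq(data: Array[Int]) {
  // JVM-friendly: let the array access fault and translate the exception.
  // On the JVM the handler costs essentially nothing when it never fires.
  def applyViaCatch(i: Int): Int =
    try data(i)
    catch {
      case _: ArrayIndexOutOfBoundsException =>
        throw new IndexOutOfBoundsException(i.toString)
    }

  // Portable: check bounds explicitly before the access, so we never rely
  // on ArrayIndexOutOfBoundsException (undefined behavior on Scala.js).
  def applyViaCheck(i: Int): Int = {
    if (i < 0 || i >= data.length)
      throw new IndexOutOfBoundsException(i.toString)
    data(i)
  }
}
```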
BTW, does Scala.js honor `@inline` annotations? All JVM benchmarks were done with full inlining (as used by a normal Scala release build), and we rely on inlining of small methods by the Scala compiler for best performance.
That is explained at https://www.scala-js.org/doc/semantics.html#undefined-behaviors. In fastOpt mode (only), it reliably throws an `UndefinedBehaviorError` whose `getCause()` is the exception that should have been thrown. This is only for debugging purposes. In fullOpt mode, however, this becomes unchecked for performance reasons, and the optimizer/compiler is allowed to do whatever it pleases with it (including removing the code, letting it pass without an exception, or whatever really).
That increases maintenance burden and decreases testability. Every time we have a different path in the JVM and in JS, that increases the chances that a bug in one goes undetected and that libraries are not portable.
Yes, Scala.js honors `@inline`, even more so than Scala/JVM. And that is always the case, not just when using some flags. Our optimizer is better than scalac’s optimizer, because we always have a closed-world assumption.
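For reference, the annotation in question looks like this (a trivial example of my own, not from the `Vector` code). On the JVM it is a hint that scalac’s optimizer only acts on when the optimizer is enabled; the Scala.js optimizer applies it unconditionally.

```scala
object Bits {
  // @inline asks the optimizer to inline this method at call sites.
  // scalac honors it only with its optimizer enabled;
  // the Scala.js optimizer honors it always.
  @inline def isPowerOfTwo(n: Int): Boolean =
    n > 0 && (n & (n - 1)) == 0
}
```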
The implementation has now been changed to no longer catch `ArrayIndexOOBE` (#8827 and #8829). It would be really nice to check whether that fixes the `TypeError` and changes the performance (bounds checks might be cheaper than exception handlers on JS, according to @sjrd).
@japgolly could you maybe do the magic again? The current version number is 2.13.2-bin-ca30256.
Also, did we find out whether `v :+ a` / `a +: v` experience a big performance regression on the JVM too?
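For readers following along, those two operators are single-element append and prepend on an immutable `Vector`; each returns a new collection:

```scala
val v = Vector(1, 2, 3)
val appended  = v :+ 4 // appends 4:   Vector(1, 2, 3, 4)
val prepended = 0 +: v // prepends 0:  Vector(0, 1, 2, 3)
```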
@japgolly Could you double-check that the website was indeed updated? It still mentions the commit b4428c8, which was before the recent changes.
@sjrd Try a force-reload. I think GitHub Pages doesn’t do caching properly. I can see ca30256 when I view it.
Hum, I had already tried a force-reload. I tried again and it didn’t change anything. I see ca30256 at https://japgolly.github.io/scalajs-benchmark/, but when I click on it I get to https://japgolly.github.io/scalajs-benchmark/res/scala-2.13.2.html, which mentions b4428c8, and executing the `Vector index` benchmark still reports the `TypeError`.
@sjrd My apologies! The GitHub Pages caching problem affects me so often that I just force-refreshed the index, saw the update, and thought it was good. In fact I’d put the results in the wrong directory. I’ve fixed it, re-uploaded, and confirmed all the way through now. Please try again, and sorry about that.
Cool, thanks! Now it works. I’m not sure how performance changed, but at least the `TypeError` is gone.
Appending and prepending individual elements is much faster with the new Vector on the JVM. I think this was from the last complete run that I made and should reflect the current status for these operations: http://szeiger.de/tmp/alice_bignvector_opt_jdk8/vector-results.html