@JVMName annotation to control bytecode-level names


I think it would be really useful to have an annotation to control how Scala symbols (methods, parameters, etc.) are named after they make their way into the bytecode. For example:

class Klass {
  @JVMName("plus")
  def +(other: Klass): Klass
}

Scala’s typechecker would use the name +, while at the bytecode level the method would be called plus.

I can now see at least three benefits of having this annotation:

Nicer Java interoperability

Currently, if we want to call a symbolic method (say, +:) from Java, we have to either use its encoded name $plus$colon (which is very ugly) or provide a separate, nicely-named forwarder method (which is boilerplate). With the @JVMName annotation, Java would simply use the name specified in it.
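To make the encoding concrete, here is a hedged sketch (in Java, since the interop side is what is at stake) of how scalac turns symbolic names into bytecode names; the mapping table below is a simplified, hand-picked subset, not the real NameTransformer:

```java
import java.util.Map;

// Simplified sketch of scalac's symbolic-name encoding: each operator
// character becomes a $-prefixed mnemonic in the emitted bytecode name.
// Only a small subset of the real mapping table is shown here.
public class NameEncoder {
    private static final Map<Character, String> OPS = Map.of(
            '+', "$plus",
            ':', "$colon",
            '-', "$minus",
            '=', "$eq",
            '<', "$less",
            '>', "$greater");

    public static String encode(String scalaName) {
        StringBuilder sb = new StringBuilder();
        for (char c : scalaName.toCharArray()) {
            sb.append(OPS.getOrDefault(c, String.valueOf(c)));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The +: method mentioned above ends up as $plus$colon in bytecode.
        System.out.println(encode("+:"));
    }
}
```

This is the name a Java caller has to spell out today, which is exactly the ugliness @JVMName would remove.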

Working around overloading limitations

Suppose we want to have an overloaded method like this:

def method(ints: List[Int]): Unit
def method(strings: List[String]): Unit

Currently this is not possible, because List[Int] and List[String] have the same erasure, so the overloaded variants would get the same bytecode-level signature. But with the @JVMName annotation, we could simply do this:

@JVMName("intsMethod") def method(ints: List[Int]): Unit
@JVMName("stringsMethod") def method(strings: List[String]): Unit

and problem solved!
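The erasure clash is not specific to Scala; the same limitation exists on the JVM for Java code. A small illustrative sketch (class and method names are made up):

```java
import java.lang.reflect.Method;
import java.util.List;

public class ErasureDemo {
    static void method(List<Integer> ints) {}

    // Uncommenting this overload fails to compile: after erasure both
    // methods would have the identical signature method(List).
    // static void method(List<String> strings) {}

    public static void main(String[] args) throws Exception {
        Method m = ErasureDemo.class.getDeclaredMethod("method", List.class);
        // At runtime the parameter type is the raw List; the Integer
        // type argument has been erased.
        System.out.println(m.getParameterTypes()[0].getName());
    }
}
```

Since only the bytecode-level name clashes, giving each overload a distinct @JVMName would sidestep the problem entirely.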

Better control over binary compatibility

For example, it could be possible to rename a method while retaining backwards binary compatibility:

@JVMName("oldName") def newName

What do you guys think?


It’s an interesting idea, but I think it’s going to be a can of worms, and ultimately won’t be pulling its weight. Both Java interoperability and binary compatibility (as described here) can already be achieved with simple method delegation. There’s indeed a bit of repetition in the parameter list, but it’s not introducing a new concept for what I consider very limited use-cases.
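For reference, the delegation being alluded to is just a plain forwarder. A minimal sketch in Java (names hypothetical) of keeping an old entry point alive after a rename:

```java
public class Api {
    // The method was renamed from oldName to newName.
    public int newName() {
        return 42;
    }

    // A hand-written forwarder keeps old callers binary compatible.
    @Deprecated
    public int oldName() {
        return newName();
    }
}
```

The cost is one boilerplate method per rename (plus the repeated parameter list), which is the repetition acknowledged above.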

The problem with such an annotation is that all the overriding and overloading checks would still need to be done, only now on both the JVM names and the Scala names. Some new checks that would become necessary:

  • you’d need to enforce the same JVMName annotation on overriding methods
  • a new method should not accidentally override an inherited method via its @JVMName, even though the two are otherwise unrelated
  • for any method, you’d need to check that there are no other methods with the same signature whose @JVMName matches the current method’s name (accidental illegal overloading)

There are probably other corner cases. Overall, the additional complexity (not only in the implementation, but for users as well, since they will need to deal with new error messages) is too high a cost for relatively low benefits.


Odersky has already discussed the possibility of this in Dotty: giving every symbolic method a non-symbolic name, stored in an annotation, that can be used as well. This proposal seems to fit with that plan. As all Dotty features will eventually be Scala features, this seems like something you could piggyback on and bring forward into a version of Scala before 3.0.



I’m a bit confused how this is supposed to work. I thought if you declare
a symbol + in one compilation unit and call it from another, the compiler
matches declaration and call because both are translated to the same JVM
name $plus. If you change the JVM name, how would the compiler match
declaration and call?

 Best, Oliver


When compiling against binary Scala code, scalac doesn’t look at JVM signatures; instead it deserializes a @ScalaSignature annotation. This is a binary Java annotation emitted by scalac for every Scala class. It contains full Scala-level signatures, along with information that isn’t available through plain bytecode (including Scala annotations).

This way the compiler can know the @JVMName of every method and emit appropriate bytecode at the call site.

Of course, in order for this to work well, checks mentioned by @dragos need to be enforced.



I see.

I think I have some code somewhere that uses Scala reflection and assumes
that every JVM name is an encoded version of the source name, so that’s
going to break.

 Best, Oliver


For the legality checks, Scala.js collapses them all into one fairly simple check:

  • If a method A.m matches B.m and A extends B, then A.m and B.m must have the same @JSName.

This is implemented here: https://github.com/scala-js/scala-js/blob/v1.0.0-M1/compiler/src/main/scala/org/scalajs/core/compiler/PrepJSInterop.scala#L601-L616


A simpler solution to most of the problems would be an annotation that generates a delegate in order to avoid the boilerplate:

@Delegate("plus") def +(other: Klass): Klass

would be equivalent to

def +(other: Klass): Klass
def plus(other: Klass): Klass = +(other)

This would be broader than a non-symbolic name option: it could be used for common delegation tasks like backwards compatibility during API evolution, or as something more general than @BeanProperty.


Sounds like a good fit for a macro library.


What about user-defined methods that match a JSName of another method?


Well … I was about to say that our codegen for run-time overload resolution would discover that it cannot disambiguate the two methods, and report a compile error. But, well, it doesn’t. The fix is trivial, though. See https://github.com/scala-js/scala-js/issues/3047


My personal primary motivation for @JVMName was to solve the overloading/erasure problem, which this solution doesn’t solve. Also, @JVMName doesn’t affect the language itself and is therefore tool-agnostic.
@Delegate, like annotation-macro-based libraries, would require dedicated support from tools and IDEs not based on the presentation compiler (such as IntelliJ).

What worries me more about @JVMName is how it would interact with structural types. It seems that Scala.js does not have a solution for this with respect to its @JSName: https://github.com/scala-js/scala-js/issues/956