Allowing double definitions through name mangling

I found this comment on Stack Overflow by Shelby Moore III today, replying to a question about type erasure causing a double definition (I don’t know if they ever asked it on scala-debate, which is why I’m asking it here).

> Does anyone know why Scala did not just automatically create different erased names? If you call these methods from outside of Scala which have the workarounds provided in the answers, you will need to know which implicit parameter to pass in order to get the method you want. How is this qualitatively different than needing to manually know which auto-mangled method name to call if calling from outside Scala? The auto-mangled names would be much more efficient and eliminate all this boilerplate! Someday I will get around to asking on scala-debate.

Despite the fact that it would break binary compatibility, it seems like a pretty good idea to me. It’d make it easier for users of other JVM languages to call such methods, as the author of that comment stated, and it’d also be easier for Scala developers to write them, since currently every extra overload that erases to the same signature requires an extra implicit parameter.

Currently, a Java user might have to call a method like foo(listOfInts)(scala.Predef.DummyImplicit) if you have a Scala method like this:

def foo(l: List[Int])(implicit d: DummyImplicit) = ???
def foo(l: List[Foo]) = ???

but with some name mangling, that could be changed to foo_Int(listOfInts) or something like that, which to me seems more manageable.
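To make the comparison concrete, here is a compiling sketch of the DummyImplicit workaround next to a hypothetical hand-mangled version (the mangled names below are illustrative only, not an actual compiler scheme):

```scala
case class Foo(n: Int)

object WithDummy {
  // Both overloads would erase to foo(List) and clash without the
  // extra DummyImplicit parameter list on one of them.
  def foo(l: List[Int])(implicit d: DummyImplicit): Int = l.sum
  def foo(l: List[Foo]): Int = l.map(_.n).sum
}

object ManuallyMangled {
  // What caller-visible mangled names might look like: no implicit
  // parameter needed, at the cost of distinct source-level names.
  def fooInt(l: List[Int]): Int = l.sum
  def fooFoo(l: List[Foo]): Int = l.map(_.n).sum
}
```

From Scala, WithDummy.foo(List(1, 2, 3)) resolves via the implicitly supplied DummyImplicit; a Java caller, by contrast, has to pass the dummy argument explicitly, which is exactly the boilerplate the comment complains about.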

I understand that this is a change that probably won’t happen anytime soon; I’m bringing it up in the hope that something like it might be implemented in the future.

See a related discussion starting here: Principles for Implicits in Scala 3

One proposed mechanism is the @alpha annotation; there is an open issue for it on the Dotty issue tracker.
Kotlin uses the @JvmName annotation to disambiguate methods whose parameter types erase to the same JVM signature.