Thanks @S11001001 for the insightful comments.
For the record, I’d like to publicly thank you for your blog post. You did a really good job at describing the problems with value classes, and your analysis motivated me to kick start the proposal with Erik. I’ll add this acknowledgement to our current proposal.
I left out lengthy examples to keep the code snippets easy to read, but I’ll add `wrapFoo` and `mdl`. They illustrate what we meant by:

> Note that the rationale for this encoding is to allow users to convert between the opaque type and the underlying type in constant time. Users have to be able to tag complex type structures without having to reallocate, iterate over, or inspect them.
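To make the constant-time point concrete, here is a minimal sketch using the opaque type syntax that eventually shipped in Scala 3 (the `Meters`, `wrapAll`, and `value` names are my own illustration, not from the proposal):

```scala
object Units:
  opaque type Meters = Double

  object Meters:
    // Retyping a Double as Meters compiles to nothing: no allocation.
    def apply(d: Double): Meters = d
    // Tagging a whole structure is O(1): the list is returned as-is,
    // with no reallocation, iteration, or inspection of its elements.
    def wrapAll(ds: List[Double]): List[Meters] = ds
    def value(m: Meters): Double = m

@main def demo(): Unit =
  val raw = List(1.0, 2.0, 3.0)
  val tagged: List[Units.Meters] = Units.Meters.wrapAll(raw)
  // Same underlying object: the tag is purely a compile-time view.
  assert(tagged eq raw)
```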
If the type equivalence between the dealiased type and the opaque type definition is described by the user, the implicit conversions will not be synthesized, as explained in the proposal. @non has also mentioned the possibility of user-defined `=:=` et al. instances in his last commit here.
I will explore this idea, talk to the Scala team about it, and see whether it can fit in the implementation without being part of the proposal. The proposal is already ambitious as it is.
Indeed, it may be a problem to typecheck `wrapSomeMap` with only those private synthetic implicit conversions in scope. I’ll have a closer look and try to find a way to make it typecheck.
Interesting observation, but I think this will not be a problem in the Scalac implementation. When triggering the implicit search for `Ordering[Logarithm]` inside the opaque type companion, the typer doesn’t yet know that `Logarithm =:= Double`, so it will look for an instance of that implicit, fail, and then try to apply the implicit conversion from `Logarithm => Double`. The result of this last search will be `Ordering[Double]`, taken from `scala.Predef`. In Dotty, this could be a problem if the first step of the implicit search sees `Ordering[Double] =:= Ordering[Logarithm]`.
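For reference, here is how this resolution looks in the syntax that eventually shipped in Scala 3, where the equality is simply transparent inside the defining scope, so the companion can delegate to the existing `Ordering[Double]` (a sketch, not the synthesized-conversion encoding discussed above):

```scala
object Logs:
  opaque type Logarithm = Double

  object Logarithm:
    def apply(d: Double): Logarithm = math.log(d)
    def exponent(l: Logarithm): Double = l
    // Inside the defining scope, Logarithm = Double is known, so the
    // standard Ordering[Double] instance satisfies Ordering[Logarithm].
    given Ordering[Logarithm] = Ordering.Double.TotalOrdering

@main def sortDemo(): Unit =
  import Logs.Logarithm
  val xs = List(Logarithm(100.0), Logarithm(10.0))
  // Outside the scope, the given from the companion is found by implicit search.
  val sorted = xs.sorted
  assert(Logarithm.exponent(sorted.head) == math.log(10.0))
```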
Can’t this happen in other cases as well, specifically when relying on `+` being provided by an implicit?
In my opinion, if someone writes a public definition without a type, they’re looking for trouble. I strongly discourage it. The case you point out cannot be addressed in a principled way, or at least I don’t see how it could.
What I would propose is an enabled-by-default warning for users who define public methods in opaque type companions without an explicit return type. I believe this warning could be given a bigger scope, too, and flag these cases all over your program.
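To illustrate the kind of leak such a warning would catch, here is a hedged sketch in the Scala 3 syntax that later shipped (`Logs` and `leaky` are my own hypothetical names):

```scala
object Logs:
  opaque type Logarithm = Double
  def apply(d: Double): Logarithm = math.log(d)
  // No explicit result type: inside this scope `l + 1.0` is a Double,
  // so the inferred *public* signature leaks the representation.
  def leaky(l: Logarithm) = l + 1.0

@main def leak(): Unit =
  // Compiles: callers see Double, not Logarithm, defeating the abstraction.
  val d: Double = Logs.leaky(Logs(2.0))
  assert(d == math.log(2.0) + 1.0)
```

With an explicit `: Logarithm` result type, the assignment to `Double` would be rejected outside the defining scope.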
The idea is that users who want to specify upper and lower bounds are forced to define a type member in a trait:

```scala
trait T {
  type OT <: Any
}
```

and then implement it:

```scala
object T extends T {
  opaque type OT = String
}
```

just as you would with type aliases.
Opaque types need to be defined inside an entity after all, so the overhead of adding this type member in a trait is minimal. Would this cover all the scenarios you’d like to use upper and lower bounds on?
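A runnable sketch of this trait encoding in Scala 3 syntax, which supports implementing an abstract type member with an opaque type (the `IdModule`/`StringIds` names and the `AnyRef` bound are my own illustration):

```scala
trait IdModule:
  type Id <: AnyRef            // the bound clients get to see

object StringIds extends IdModule:
  opaque type Id = String      // hidden implementation; conforms to the bound
  def make(s: String): Id = s
  def show(id: Id): String = id

@main def bounds(): Unit =
  val id: StringIds.Id = StringIds.make("user-1")
  val r: AnyRef = id           // the upper bound is visible at use sites
  assert(StringIds.show(id) == "user-1")
```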
Interesting, it’s the first time I’ve heard of Flow.
We can certainly consider doing so, but I’m not sold on its utility. One of the things I like the most about opaque types is that they have the same syntax (semantics-wise) as type aliases, and don’t require explicit type ascriptions. If we add this, we create a new mental model of opaque types that users need to learn. The fewer rules, the better.
As I explained in the meeting, this has several problems:
- APIs of different opaque types get mixed, hampering the readability of the code.
- Users cannot define a method `tag` for two different opaque types that have the same underlying type. The same happens with implicits.
- Use sites of these opaque types do not know where these methods are defined. It’s way clearer to see `Logarithm.tag` than `tag` somewhere in your program.
I don’t like the idea of defining multiple opaque types in the same prefix. I’m personally in favor of opaque type companions, and I think companions are a natural way of thinking about Scala code. Its addition does not add overhead to the language; instead, it creates a more consistent language that converges towards common and widespread language features.
As @adriaanm mentioned in the meeting, a non-negligible part of Scala developers, specifically beginners, already think that an object with the same name as a type alias is a companion.
I haven’t given this too much thought, but it will inherit it. If you want to override it, you can do that too. This is consistent with the behaviour of type aliases.
Yes, and this needs to be made clearer in the proposal. @xeno-by and @dragos pointed it out in an email before the meeting. The golden rule of opaque types is: the runtime will box/unbox whenever the underlying type needs to. Hence, they do not add extra boxing.
Despite the boxing of `AnyVal` instances, note that primitive boxing is cheaper than what `AnyVal` does, and therefore faster.
For example, let’s take the `Logarithm` example from the proposal and inspect its bytecode. In the value class example, the compiler triggers the instantiation (via `new`) of every logarithm in the expression `val xs = List(Logarithm(12345.0), Logarithm(67890.0)).map(_ + x)`. This is not the same bytecode as for opaque types, which use `scala.Predef.double2Double` to convert `scala.Double` to `java.lang.Double`, and whose implementation is just a cast: `(d: scala.Double).asInstanceOf[java.lang.Double]`. This cast is cheaper for the runtime than the `new` instantiation because:
- It is intrinsified and is a fundamental mechanism of the JVM.
- It doesn’t have to go through the initializers of the value class and the extended classes (traits).
- When you instantiate a new object, you waste a lot of memory on object headers, fields, metadata, etc. I haven’t checked yet, but my guess is that `java.lang.Object` is optimized to avoid all this waste, therefore being easier on memory consumption.
Opaque types have more non-obvious advantages over value classes if we follow the reasoning of the golden rule for opaque types. If we compile `val xs = List(Logarithm(12345.0), Logarithm(67890.0)).map(_ + x)` with an `Array` instead of a `List`, we have zero boxing/unboxing because arrays are specialized.
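The array point can be checked directly. A sketch in Scala 3 syntax, with my own minimal `Logarithm` rather than the proposal’s full example: since the opaque type erases to `Double` everywhere, an `Array[Logarithm]` is a primitive `double[]` at runtime.

```scala
object Logs:
  opaque type Logarithm = Double
  // Created inside the defining scope, where Logarithm = Double is known,
  // so this is a primitive Array[Double]: no element is boxed.
  val samples: Array[Logarithm] = Array(math.log(1.0), math.log(2.0))
  def first: Double = samples(0)   // reads straight from the double[]

@main def arrayDemo(): Unit =
  // The erased runtime class is double[], not an array of boxed values.
  assert(Logs.samples.getClass == classOf[Array[Double]])
  assert(Logs.first == 0.0)        // math.log(1.0) is exactly 0.0
```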
Opaque types do not solve the problem of boxing/unboxing (this is a problem of the runtime), but they are a mechanism for adding wrapper types without any extra overhead that would not have been incurred had the underlying type been used.
Thanks, I’ll add this! I mentioned it in the meeting, but I forgot to make it explicit.