I was super-excited when I first heard about opaque types in Scala 3. They were going to be one of the best features of Scala 3!

What I took away back then is that if we define `opaque type Nat = Int`, then `Nat` is treated in all userland code as something completely disjoint from `Int`, and only something deep inside the compiler knows that at runtime it's nothing more than an ordinary `Int`. At least that's how I read the word "opaque": we hide the `Int` from the rest of the world.

This behaviour is the basis for the "newtype" pattern, and it sounded like that's exactly what opaque types were designed for.
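As a concrete illustration of that reading, here's a minimal newtype sketch (the names `Nats`, `Nat`, and `fromInt` are mine, not from any library):

```scala
object Nats:
  opaque type Nat = Int

  object Nat:
    // Smart constructor: the only sanctioned way to build a Nat from the outside.
    def fromInt(i: Int): Option[Nat] =
      if i >= 0 then Some(i) else None

  extension (n: Nat)
    def value: Int = n             // explicit unwrapping
    def +(other: Nat): Nat = n + other

@main def demo(): Unit =
  import Nats.*
  val n = Nat.fromInt(3).get
  println((n + n).value)           // prints 6
  // val bad: Nat = 42             // does not compile: outside Nats, Int is not a Nat
```

Outside `Nats`, the equation `Nat = Int` is invisible, which is exactly the disjointness I expected everywhere.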
However, I later stumbled upon a couple of cases that made me wonder whether I was wrong.
The first ticket boils down to this:
```scala
type IsInt[A] = A match
  case Int => true
  case _   => false

scala.compiletime.constValue[IsInt[Int]]     // true, as expected
scala.compiletime.constValue[IsInt[String]]  // false, as expected

object Foo:
  opaque type Foo = Int

scala.compiletime.constValue[IsInt[Foo.Foo]] // compile-time error, expected false
```
From what @dwijnand and @sjrd explained, to dotty `Foo` is now neither `Int` nor non-`Int`; its relation to `Int` is simply unknown. I really don't understand why, from a design point of view (maybe there are technical limitations, though). We've hidden everything about `Int`; we defined this type for the sole purpose of being different from `Int`; we only define it in terms of `Int` for its runtime characteristics. On the other hand, even `true` here would be more acceptable, provided we make it clear that compile-time operations dealias opaque types.
I ran into this case in a pretty bad situation: lots of case classes with newtype-over-opaque types all over the codebase, and a match type at the core of the type class derivation mechanism. `AnyVal` would work here just fine, but I thought "AnyVal is so Scala 2, I'll just rewrite some derivation bits with a macro". That brings us to the second case, which boils down to the following macro:
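For contrast, here's a sketch of the `AnyVal` route I dismissed (the name `UserId` is hypothetical, standing in for one of my newtypes). Since a value class is a real nominal type rather than an alias, the match type from the first case reduces for it as expected:

```scala
// Hypothetical value class standing in for one of the newtypes.
final case class UserId(value: Int) extends AnyVal

type IsInt[A] = A match
  case Int => true
  case _   => false

@main def demoAnyVal(): Unit =
  // UserId is nominally distinct from Int, so the match type reduces:
  println(scala.compiletime.constValue[IsInt[UserId]]) // false
  println(scala.compiletime.constValue[IsInt[Int]])    // true
```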
```scala
import scala.quoted.*

inline def getType[T] =
  ${ getTypeImpl[T] }

private def getTypeImpl[T: Type](using Quotes): Expr[Any] =
  import quotes.reflect.*
  val tpe = TypeRepr.of[T]
  tpe.asType match
    case '[t] =>
      Expr((tpe.show, TypeRepr.of[t].show))
```
Depending on either the place where you call the macro or how you refer to `T` (fully qualified `Example.MyT` or just `MyT`), the `_2` of the tuple can be different things! Sometimes it's the opaque type (as expected, and the same as `_1`), sometimes it's the dealiased type.
@nicolasstucki has explained that opaque types are treated as their RHS inside their companion objects, which kind of makes sense: we need to construct them and work with the underlying type somehow. But then why on earth would different references (namespaced or not) give different results? Is that also by design? How is a user supposed to debug that?
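For reference, that scoping rule can be seen in a small sketch (the names `Ids` and `make` are mine): inside the defining object the alias is transparent, outside it is abstract:

```scala
object Ids:
  opaque type Id = Int

  // Inside Ids, Id is known to be Int, so both directions compile:
  val raw: Int = (42: Id)
  def make(i: Int): Id = i

@main def demoScope(): Unit =
  val id: Ids.Id = Ids.make(1)
  // val i: Int = id   // does not compile: outside Ids, Id is opaque
  println(id)          // at runtime it is still an Int, prints 1
```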
Sorry if this turned into a bit of a rant, but at this point so much time has been spent working around and debugging opaque types that I've started to question whether I got this idea of opaque-types-as-newtypes even remotely right (the docs advertise a similar use case). If they were designed as newtypes, then to me it looks like it doesn't work out. Unlike macros and even match types, which are more suited to library developers and/or advanced users, opaque types look so simple and userland'y, and yet they break code in such subtle and hard-to-debug ways.
I have a few ideas in mind for how opaque types could be improved, but first I wanted to know whether I had the idea wrong from the beginning.