The fact that they’re available for abstract type members is the inconsistency IMHO.
TypeTag will be superseded by quoted.Type in the new principled metaprogramming framework we are working on. We still need to work out the details of a migration strategy. I guess we'll either have to keep TypeTag around, or we can make it an alias of quoted.Type.
Why not, though? If an abstract type member has a stable path (e.g. it's a member of a global object), then generating its TypeTag is repeatable; it isn't weak, because it's stable. That TypeTag always contains the same information when summoned from different parts of the program and can be meaningfully compared with a subtype check: it will return true against another tag summoned elsewhere, or against compatible type bounds. Weak tags for type parameters, by contrast, become invalid as soon as they go out of scope.
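For concreteness, here's a minimal Scala 2 sketch of the repeatability argument (names are made up, and weakTypeTag is used only so the snippet compiles regardless of how the materializer classifies abstract members):

import scala.reflect.runtime.universe._

object Defs {
  type T // abstract type member behind a stable path: Defs.T
}

object Demo extends App {
  // Two independent call sites; both tags describe the same stable path,
  // so comparing them is meaningful and always gives the same answer.
  val tagHere  = weakTypeTag[Defs.T]
  val tagThere = weakTypeTag[Defs.T]
  assert(tagHere.tpe =:= tagThere.tpe)
}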
In practical terms, not generating TypeTags for opaque types means I can't bind them in my DI framework - even though they're stable under =:=. Not generating them for type members might mean a couple of features just going away, so it's bad-bad news for me.
Hmm perhaps you’re right.
But then I think the default TypeTag for an opaque type shouldn't expose its underlying type but just handle it like other abstract types, no? So if you want to expose the underlying type via the TypeTag, you still have to define a custom TypeTag.
Well yeah, for my use case I don't care about the content of the tag as long as subtype checks are available for all stable paths. But someone else might want to look inside the opaque type, e.g. to port standalone newtype deriving from Haskell. In that case, having the content of the opaque type available in a WeakTypeTag would help the implementor - but for that we'd need to add yet another constructor to scala.reflect.api.Type, and maybe that's going too far?
This opaque type feature is just reinventing features and syntax that already exist in Scala 2 (with a new keyword).
Opaque types are just self-types for type aliases.
We already have self-types for traits, which express type constraints that can only be seen inside the traits themselves. We could extend the self-type syntax to type aliases to describe type constraints that can only be seen inside the type aliases themselves. Since we also want the type constraints to be visible from companion objects, we could additionally allow access modifiers on self-types.
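For reference, a minimal sketch of the existing feature this generalizes - a trait self-type, whose constraint is only visible inside the trait body (names are made up):

trait Item { this: Ordered[Item] =>
  // Inside Item, `this` is additionally known to be an Ordered[Item]:
  def isBefore(other: Item): Boolean = this < other
}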
Instead of opaque type ID <: Any = Long, we should reuse the current syntax:

type ID = Any {
  private[ID] this: Long =>
}

object ID {
  // This compiles because the `private[ID]` modifier grants the conversion
  // inside the ID companion object.
  def toLong(id: ID): Long = id
}
The self-type solution is just a combination of existing syntax, which, I think, is more elegant than introducing new keywords.
The current Scala 2 language has some arbitrary inconsistent decisions.
- Constructor parameters are allowed in classes but not in traits.
- Self-types are allowed in classes and traits but not type aliases.
- Access modifiers are allowed on classes, trait members, and class primary constructors, but not on self-types.
The first inconsistent decision will be fixed in SIP-25. I hope we can also fix the other inconsistent decisions instead of introducing more.
I suppose you can still use standard Java reflection, right?
Okay, cool. For reference, the case I desperately need (with no way to work around it short of an almost insuperable amount of work) is actually ClassTag – specifically, being able to get at the runtime identity of the Class in such a way that it can be used as a key in a Map. My DI relies on this.
The relevant code is here. I’m fine with changing the implementation, and don’t care about ClassTag per se; what I care about is that there be some sort of typeclass that allows similar functionality, preferably without having to change the (many hundreds of) call sites.
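Roughly the shape involved, as a hedged sketch (class and method names are made up, not the actual code linked above): the ClassTag's runtimeClass serves as the Map key.

import scala.reflect.ClassTag

// A tiny binding registry keyed by runtime Class identity.
final class Bindings private (underlying: Map[Class[_], Any]) {
  def bind[A](value: A)(implicit tag: ClassTag[A]): Bindings =
    new Bindings(underlying + (tag.runtimeClass -> value))

  def lookup[A](implicit tag: ClassTag[A]): Option[A] =
    underlying.get(tag.runtimeClass).map(_.asInstanceOf[A])
}

object Bindings {
  val empty: Bindings = new Bindings(Map.empty)
}

// Usage:
//   val b = Bindings.empty.bind("hello").bind(42)
//   b.lookup[String]  // Some("hello")
//   b.lookup[Int]     // Some(42)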
ClassTag already works with Dotty.
… and there are no changes planned.
If this is intended to run on the JVM, I hope you aren't putting a Class as a key in a map.
Instead, use the JDK's ClassValue to associate a JVM type with a value.
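For what it's worth, a minimal sketch of that from Scala (ClassValue caches one value per Class, computed lazily, and cooperates with class unloading):

object ClassNames {
  // java.lang.ClassValue computes and caches a value once per Class.
  val simpleNames: ClassValue[String] = new ClassValue[String] {
    override protected def computeValue(cls: Class[_]): String = cls.getSimpleName
  }
}

// ClassNames.simpleNames.get(classOf[String])  // "String", computed on first access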
Is the ClassTag for an opaque type identical to its underlying type's ClassTag? If opaque types are supposed to be a purely compile-time fiction, I would assume so. But in some sense ClassTag is compile-time reflection through which the difference would be visible.
5 posts were split to a new topic: On using Classes in Maps versus java.lang.ClassValue
I was looking at this in the Dotty 0.14.0-RC1 release notes. It’s very cool! I see some room for improvement there and this seemed like a good place to put my suggestion.
implied arrayOps {
  inline def (arr: IArray[T]) apply[T] (n: Int): T = (arr: Array[T]).apply(n)
  inline def (arr: IArray[T]) length[T] : Int = (arr: Array[T]).length
}
Having to repeat inline def (arr: IArray[T]) makes the above more cumbersome than AnyVal. I understand that IArray methods must be implemented as statics to avoid boxing, but defining them in the companion object doesn't seem strictly necessary. There should be a way to add a bunch of methods at the same time. Also maybe take advantage of export? I'm thinking that since an instance of IArray must be an Array, define IArray's methods in the type declaration, and use this for the instance.
opaque type IArray[T] = Array[T] {
  def apply(n: Int): T = this.apply(n)
  def length: Int = this.length
}
or
opaque type IArray[T] = Array[T] {
  export this.{apply, length}
}
I wish they had gone with inline class instead of the weird opaque type syntax.
inline class MyString(self: String) { def greet = s"hello $self" }
First, it would generally require less boilerplate, and second, it makes more sense conceptually with respect to companion objects: types cannot have companion objects, but classes can. The fact that opaque types are an exception and can have companion objects is fairly ugly.
For the first point, compare:
implicit inline class MyString(self: String) { def greet = s"hello $self" }
With:
opaque type MyString = String

object MyString {
  implicit def apply(self: String): MyString = self

  implied MyStringOps {
    def (self: MyString) greet = s"hello $self"
  }
}
I'm pretty worried about opaque types being extremely surprising in use cases where runtime types are inspected. Here are a few surprising cases…
object Foo {
  opaque type Foo = List[Int]
  def from(list: List[Int]): Foo = list
}

val list = List(1,2,3)
val foo: Foo.Foo = Foo.from(list)

(foo: Any) match {
  case List(1,2,3) =>
  case _ =>
}
As a user I didn't want to have to care about the runtime type of Foo. Now, all of a sudden, pattern matches may contain more edge cases and require a lot more thought in order to defend against situations like these.
Another case: a Map[Any, T] would squash these two values into the same key.
Map(list -> 1, foo -> 2) // Map(List(1,2,3) -> 2)
Same with Sets:
Set(list, foo) // Set(List(1,2,3))
I know that there are some cases where we want to have both type-safety and performance, but I think that removing the ability to tell at runtime the difference between an opaque type and its underlying type will likely lead to quite a lot of surprises, and may end up as one of those things where we are telling noobies “oh yea, don’t use opaque types unless you know what you’re doing”.
Actually that was stated in the original post:
Thank you, yes, I read that; I just wanted to say that I think that behavior will be very surprising sometimes.
Hrm, you are not wrong but:
- You are effectively describing the difference between value classes (which aren't going away) and opaque type aliases: value classes box in order to preserve identity whenever they are upcast (including when they are passed through generic parameters), whereas opaque type aliases don't (see the sketch after this list).
- Non-opaque type aliases, which are already in Scala, also don’t have an identity. The only difference is that, for them, the lack of separation exists even absent the upcast.
- More broadly, downcasts with pattern matching often carry this risk already: there is often a possibility that the object you are downcasting secretly mixes in one or more classes/traits that you are not aware of (usually because it was upcast before you ever saw it). In my personal opinion, the culprit here is the pattern of upcasting to Any and then performing an unsafe downcast after. I know that this pattern is really common and that my opinion isn’t going to stop people from doing it (and getting bitten by the results) but… I think that the incremental cost of introducing one more way to create unexpected results out of unsafe downcasts really isn’t going to change anything. Yes a few users will do things like what you are showing and get bitten… but those same users are probably already used to occasional strangeness out of their pattern matches.
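To make the first point concrete, here is a hedged sketch (made-up names, Scala 3 syntax) contrasting the two at runtime:

// Value class: boxes when upcast to Any, so its identity survives the upcast.
class Meters(val value: Double) extends AnyVal

object Durations {
  // Opaque type alias: fully erased, no runtime identity of its own.
  opaque type Seconds = Double
  def seconds(d: Double): Seconds = d
}

@main def demo(): Unit = {
  (new Meters(1.0): Any) match {
    case _: Meters => println("still a Meters") // matches: the box remembers the wrapper
    case _         => println("something else")
  }
  (Durations.seconds(1.0): Any) match {
    case _: Double => println("just a Double")  // matches: the alias has vanished
    case _         => println("something else")
  }
}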
This might be a bit preposterous, but has the idea of making regular type aliases opaque (i.e. types specified without the opaque modifier) been considered? If current-style transparent type aliases still serve a useful purpose, a transparent modifier could be added to them instead of adding the opaque modifier (the idea being that only a minority of type aliases would need to be defined as transparent; the rest would be better served by being opaque by default).
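For context, a hedged sketch (made-up names, Scala 3) of the two behaviours whose defaults would be swapped:

// Today's default: a transparent alias, freely interchangeable with Long.
type UserId = Long
val a: UserId = 42L // fine
val b: Long = a     // also fine

// Opt-in opacity today; under this suggestion it would be the default.
object Ids {
  opaque type AccountId = Long
  def apply(l: Long): AccountId = l
}
// val c: Long = Ids(42L) // does not compile: AccountId is not a Long outside Ids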