Proposal to remove general type projection from the language


Sure. But to identify a larger set of type projections that are sound one has to come up with rules that identify these projections and an argument that these rules still ensure soundness. And then one has to evaluate the rules whether they are convincing, simple enough to specify and implement, that they don’t degrade compiler performance too much, and that they are of real usefulness. So, overall, a large effort.


So I’m not sure if I’ve understood this correctly, type projections from classes will be allowed but not from traits? Could not projections from traits inheriting from AnyRef be allowed?


Traits are class types. They’re unaffected by this proposal.


In both the Summingbird example I posted and the STM example we had abstract types that were upper bounded by a trait with the projections.

trait Foo[A <: Foo[A]] {
  type Bar
}

def use[A <: Foo[A]](b: A#Bar): Unit

I wonder if that bound makes this safe.
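For comparison, the projection in `use` can often be rewritten with a path-dependent parameter instead. A minimal sketch, where `IntFoo` and the method body are hypothetical additions for illustration:

```scala
trait Foo[A <: Foo[A]] {
  type Bar
}

// Hypothetical concrete instance, added only for illustration
class IntFoo extends Foo[IntFoo] {
  type Bar = Int
}

// Instead of `def use[A <: Foo[A]](b: A#Bar)`, take the value first
// and refer to its type member through the path `a.Bar`:
def use[A <: Foo[A]](a: A)(b: a.Bar): a.Bar = b

val f = new IntFoo
val r: Int = use(f)(42)   // a.Bar is known to be Int at this call site
```

The value `a` then serves as the proof that `Bar`'s bounds are satisfied, which is exactly what the proposal asks for.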


Is this “literally” true? Or does it need to be extrapolated to “Traits are class types. They’re unaffected by this proposal, in cases when those classes/traits are VALID (inhabitable)”?

In particular, while playing with self-types and `>: this.type` constraints I eventually came up with strange code (which is in fact VALID (compilable) code in the current Dotty master and on Scastie). Also, after a careful read of the aforementioned discussion on GitHub, I found an even more aggressive (2-year-old) example of such code.

So here is what I am talking about (complete snippet on Scastie):

  // !!! THIS IS COMPILABLE CODE !!! - `Absurd` and `SelfAbsurd` compiled successfully
  trait Absurd {
    type Nonsense >: (String | Int) <: (String & Int)

    def stringIsInt(x: String): Int = x : Nonsense
    def intIsString(x: Int): String = x : Nonsense
  }

  // the thought that I needed to check this was a logical consequence
  // of some desperate attempts described in issues #5879, #5878
  trait SelfAbsurd {
    this: (String | Int) =>
    type Nonsense >: this.type <: (String & Int)

    def selfIsInt(): Int = this : Nonsense
    def selfIsString(): String = this : Nonsense
  }

Now when one tries to use the “basically allowed” class/trait projection, one will see the following (which I believe is absolutely fair):

scala> type T = Absurd#Nonsense
1 |type T = Absurd#Nonsense
  |         ^^^^^^
  |Absurd is not a legal path
  |since it has a member Nonsense with possibly conflicting bounds String | Int <: ... <: String & Int

scala> type S = SelfAbsurd#Nonsense
1 |type S = SelfAbsurd#Nonsense
  |         ^^^^^^^^^^
  |SelfAbsurd is not a legal path
  |since it has a member Nonsense with possibly conflicting bounds SelfAbsurd <: ... <: String & Int

So basically, #-projection will not work for that “Nonsense” in “Absurd” and “SelfAbsurd” (actually, it shows that the Dotty compiler is not as “naive” as it may seem at first glance).

But here I see a more fundamental problem: one can write a whole lot of abstract class/trait code which is effectively equivalent to Nothing (for example, in the aforementioned snippet it is obvious that effectively `Absurd =:= Nothing` and `SelfAbsurd =:= Nothing` as well). The problem is not just that one may compose a trait effectively equivalent to Nothing, but that a trait could become Nothing silently: the contradiction may not be obvious to see, and once some neighboring/involved classes or traits are edited, other classes may silently become contradictory.
The same applies to generic methods with contradictory constraints. (So the compiler moves too far away from the idea “if it compiles, it works” toward something like “do not forget to test the reachability of everything in unit tests”, at least if you are writing a library of abstract classes/interfaces.)
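A minimal sketch of that hazard, with hypothetical traits `A`, `B`, and `Combined` (not taken from the snippet above): each piece compiles on its own, and the contradiction exists only in the combination, so nothing flags it at the definition site.

```scala
trait A { type T <: String }
trait B { type T <: Int }      // imagine this bound was added in a later edit

// Still compiles: T is now bounded by String & Int, so the only
// type that can ever inhabit the member is Nothing.
trait Combined extends A with B

// We can even instantiate it, but the member has become useless:
val c = new Combined { type T = Nothing }
// def mk(x: c.T): String = x   // fine to declare, impossible to ever call
```

No error or warning points at `Combined`; the member quietly collapsed to Nothing when `B` changed.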

So here I can see a fair reason for @jeremyrsmith 's thoughts. (It would be more interesting to detect types that are effectively `=:= Nothing` ahead of time, rather than swallowing that and relying only on an instantiation proof.)


I agree it would be nice if the compiler could always check whether bounds make sense. Unfortunately it can’t. See our paper The Essence of Dependent Object Types for an explanation and Nada Amin’s thesis for more details. So since complete bounds checking is infeasible, the only remaining alternative is to check paths instead, which is what the compiler does.


No. We need to require an actual runtime value, not just an expression. That’s because a value is a proof that its type members satisfy its bounds; expressions that are not values could instead be infinite loops, or code that fails with some exception.


I don’t believe this is correct. See my post “you can always cheat” above. Even with value dependent types, Scala is a statically typed language. So if the compiler sees that there is a value (from its compile time perspective), it believes it. As far as I can see, there is zero runtime checking involved. It’s simply that it looks for a term instead of a type. (And anything else would be a performance nightmare, and anyway not working because of HKT erasure on the JVM).

And that’s why I still believe that it doesn’t matter whether the value is marked for erasure or not. It’s there for the compiler at compile time, that’s what counts.


Is anyone going to address my point? Why remove a language feature when you can just render it harmless?


Playing devil’s advocate.
Is it really worth removing a feature because it’s possible to use it to construct contrived examples that result in a ClassCastException, while with this feature removed it is still possible to construct contrived examples that result in a ClassCastException? Considering that it will make a lot of code hard to port to Scala 3.


It seems that what @Sciss wants is a way to say that some parameter or field should be erased, meaning that it won’t actually be passed or stored, but not meaning that it will not be constructed.

In other words, we’d have:

erased val x: T = foo(...)

become after erasure (irrespective of whether it happens in a class body or in a block):

foo(...)

I think that would be a generally more useful notion of erased values/parameters. The old erased semantics could be renamed to “uncomputed”, or something of the sort.

uncomputed val x: T = foo(...)
List.empty[x.S]  // error: `x` may not be a computable value

In fact, I believe this is why a special singleton kind for types (with the syntax def foo[x.type]: List[x.type] = ... proposed on GitHub some time ago) could also work. It would declare that there is some value x somewhere whose type we are referring to, but, unlike the current erased modifier, it would not mean that this x is never computed.

@odersky any thoughts on this?


I like @LPTK’s idea, but to address the other questions:

Of course asInstanceOf lets you do bad things, if you use it incorrectly. Soundness is only about code without (incorrect) casts (and with restrictions on null).

Not caring about soundness, of course, is a consistent position.

The point is not about runtime checking. You can only select x.A if x is a valid proof that A is between its bounds. Since Scala isn’t a proof assistant, the only proofs you can trust are the ones you actually compute to a value. You can write tons of expressions that prove that Any <: Nothing, which is absurd, but no such proof can produce a non-null value (they’ll crash or loop). So the typechecker, at compile time, only trusts proofs that will (at runtime) be actual values — that is, not erased ones (modulo @LPTK’s proposal).
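A small sketch of that distinction in current Scala (the names `HasN`, `unstable`, and `stable` are made up for illustration):

```scala
trait HasN { type N >: Int <: AnyVal }

// An expression of type HasN that never produces a value:
def unstable: HasN = throw new Exception("no value here")

// A real value, which witnesses that the bounds of N are satisfiable:
val stable: HasN { type N = Int } = new HasN { type N = Int }

val n: stable.N = 42        // fine: `stable` is a stable value, so `stable.N` is a legal path
// val m: unstable.N = 42   // error: `unstable` is not a stable path, so it proves nothing
```

The typechecker trusts `stable` precisely because evaluating it produced a value; `unstable` type-checks as an expression but can never serve as a proof.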


Interesting idea. A problem is that we cannot detect “cases like the one in the ticket” very accurately; we must be extremely conservative. That is, for all type projections A#B that should be forbidden, it does seem sound to say that A#B is a completely unknown type, only equal to itself — but then, it’s just another syntax for a type variable.


@Blaisorblade To take the idea further, would type projections T#X still be unsound if we looked only at the upper bound of X in T, and not at its lower bound? As in, if we had class T { type X >: S <: String } we could manipulate values of type T#X as strings, but we could not, for example, give values of type S to functions whose domain is T#X.
I could see such restricted projections still being super useful.
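To make the two directions concrete, here is the same shape in today's Scala with a value-backed path where the proposal would allow the bare projection `T#X`; `Lo`/`Hi` stand in for `String`/`S` above, since `String` is final and cannot have the subtype that example assumes:

```scala
trait Lo
trait Hi extends Lo
class T { type X >: Hi <: Lo }

// Reading at the upper bound: always safe, t.X is some subtype of Lo.
def useUpper(t: T)(x: t.X): Lo = x

// Writing at the lower bound: this is the direction the restricted
// projection would forbid for a bare T#X.
def useLower(t: T)(h: Hi): t.X = h

val t = new T { type X = Hi }
val h = new Hi {}
val back: Lo = useUpper(t)(useLower(t)(h))
```

With a real value `t`, both directions are sound (the value proves the bounds hold); the restriction would only matter for `T#X` written without any value in scope.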


That sounds like a really interesting compromise! Should be both sound and useful :slight_smile:


@LPTK I don’t have a proof but intuitively speaking, that makes sense to me. In practice, you might want to also get equalities to work (say { type X = S }): if you want that, that could be sound as well — as long as upper and lower bound coincide, you don’t get potentially unsound subtyping constraints. But then, we need to be careful when intersecting aliases and upper bounds!


I’ve opened an issue for the proposal above and motivated it in Dotty’s issue tracker:

closed #80