Proposal to remove general type projection from the language


Sure. But to identify a larger set of type projections that are sound, one has to come up with rules that identify these projections, together with an argument that these rules still ensure soundness. And then one has to evaluate whether the rules are convincing, simple enough to specify and implement, don’t degrade compiler performance too much, and are of real usefulness. So, overall, a large effort.


So I’m not sure if I’ve understood this correctly: type projections from classes will be allowed, but not from traits? Could projections from traits inheriting from AnyRef not be allowed?


Traits are class types. They’re unaffected by this proposal.
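For illustration, a minimal sketch (trait and member names made up) of a projection whose prefix is a trait, i.e. a class type, which stays legal under the proposal:

```scala
trait Store {
  type Entry
  def get: Entry
}

// `Store` is a class type, so the projection `Store#Entry` remains legal
// under the proposal; only projections from abstract types go away.
def describe(e: Store#Entry): String = e.toString
```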


In both the Summingbird example I posted and the STM example we had abstract types that were upper bounded by a trait with the projections.

trait Foo[A <: Foo[A]] {
  type Bar
}

def use[A <: Foo[A]](b: A#Bar): Unit
I wonder if that bound makes this safe.
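For contrast, here is a hedged sketch (the `IntFoo` class is made up) of the uncontested case: once the prefix of the projection is a concrete class, `Bar` is fully determined and the projection stays legal; the contested case is `A#Bar` where `A` is only a type parameter:

```scala
trait Foo[A <: Foo[A]] { type Bar }

final class IntFoo extends Foo[IntFoo] { type Bar = Int }

// IntFoo is a class type, so IntFoo#Bar (which is just Int) is still a
// legal projection; the proposal only targets A#Bar for an abstract A.
def useConcrete(b: IntFoo#Bar): Int = b
```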


Is this “literally” true? Or does it need to be extrapolated to “Traits are class types. They’re unaffected by this proposal, in cases when those classes/traits are VALID (inhabitable)”?

In particular, while playing with self-types and >: this.type constraints, I eventually came up with some strange code (which is in fact VALID (compilable) code in the current Dotty master and on Scastie). Also, after a careful read of the aforementioned discussion on GitHub, I found an even more aggressive (2-year-old) example of such code.

So here is what I am talking about (complete snippet on Scastie):

  // !!! THIS IS COMPILABLE CODE !!! - `Absurd` and `SelfAbsurd` compile successfully
  trait Absurd {
    type Nonsense >: (String | Int) <: (String & Int)

    def stringIsInt(x: String): Int = x : Nonsense
    def intIsString(x: Int): String = x : Nonsense
  }

  // the thought that I needed to check this was a logical consequence
  // of some desperate attempts described in issues #5879, #5878
  trait SelfAbsurd {
    this: (String | Int) =>
    type Nonsense >: this.type <: (String & Int)

    def selfIsInt(): Int = this : Nonsense
    def selfIsString(): String = this : Nonsense
  }
Now when one tries to use the “basically allowed” class/trait projection, one will see the following (which I believe is absolutely fair):

scala> type T = Absurd#Nonsense
1 |type T = Absurd#Nonsense
  |         ^^^^^^
  |Absurd is not a legal path
  |since it has a member Nonsense with possibly conflicting bounds String | Int <: ... <: String & Int

scala> type S = SelfAbsurd#Nonsense
1 |type S = SelfAbsurd#Nonsense
  |         ^^^^^^^^^^
  |SelfAbsurd is not a legal path
  |since it has a member Nonsense with possibly conflicting bounds SelfAbsurd <: ... <: String & Int

So basically #-projection will not work for that “Nonsense” member of “Absurd” (and this actually shows that the Dotty compiler is not as “naive” as it may seem at first glance).

But here I see a more fundamental problem: one can write a whole lot of abstract/trait code which is effectively equivalent to Nothing (for example, in the aforementioned snippet it is obvious that effectively Absurd =:= Nothing, and SelfAbsurd =:= Nothing as well). The problem is not only that one may compose a trait that is effectively equivalent to Nothing, but also that a type could become Nothing silently: the contradiction may not be obvious to see, and once some neighboring/involved classes/traits are edited, other classes may silently become contradictory.
The same applies to possible generic methods with contradictory constraints. (So the compiler drifts too far from the idea “it should work if it compiles” toward something like “do not forget to test the reachability of everything in unit tests”, at least if you are writing a library of abstract classes/interfaces.)
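The “silent contradiction” scenario can be sketched with two traits whose bounds are individually fine but jointly uninhabitable (names hypothetical); note that the combined trait still compiles without complaint:

```scala
trait LowBound  { type T >: String }
trait HighBound { type T <: Int }

// Each bound is fine in isolation; together they demand
// String <: T <: Int, so no value of Mixed can ever exist.
// Yet this declaration compiles -- the contradiction only surfaces
// when someone tries to instantiate Mixed or use it as a path.
trait Mixed extends LowBound with HighBound
```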

So here I can see some fair reason for @jeremyrsmith’s thoughts. (It would be more interesting to detect types that are effectively =:= Nothing ahead of time, rather than swallowing that and relying only on an instantiation proof.)


I agree it would be nice if the compiler could always detect bounds that make no sense. Unfortunately it can’t. See our paper The Essence of Dependent Object Types for an explanation, and Nada Amin’s thesis for more details. So since complete bounds checking is infeasible, the only remaining alternative is to check paths instead, which is what the compiler does.


No. We need to require an actual runtime value, not just an expression. That’s because a value is a proof that its type members satisfy their bounds; expressions that are not values could instead be infinite loops, or code that fails with some exception.
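A minimal sketch of this point (names made up): a concrete value acts as the proof, and the path-dependent type hangs off that value:

```scala
trait Box {
  type Elem
  def elem: Elem
}

// A concrete value of Box is living proof that Elem's bounds are
// satisfiable, so the path-dependent type b.Elem is safe to use:
def first(b: Box): b.Elem = b.elem
```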


I don’t believe this is correct. See my post “you can always cheat” above. Even with value-dependent types, Scala is a statically typed language. So if the compiler sees that there is a value (from its compile-time perspective), it believes it. As far as I can see, there is zero runtime checking involved. It’s simply that it looks for a term instead of a type. (And anything else would be a performance nightmare, and would anyway not work because of HKT erasure on the JVM.)

And that’s why I still believe that it doesn’t matter whether the value is marked for erasure or not. It’s there for the compiler at compile time, that’s what counts.
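A sketch of the “compiler believes it” point (names made up, and assuming lazy vals count as stable paths here): the type checker accepts a stable identifier as a prefix without ever evaluating it, even when forcing it would loop forever:

```scala
trait Box2 { type T }

// A well-typed "proof" that, if evaluated, is an infinite loop:
def bogus: Box2 { type T = Int } = bogus

// The type checker treats the lazy val as a stable prefix without
// evaluating it; nothing at run time ever verifies the "proof":
lazy val b: Box2 { type T = Int } = bogus

// b.T is known statically to be Int; calling f never forces b:
def f(x: Int): b.T = x
```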


Is anyone going to address my point? Why remove a language feature when you can just render it harmless?


Playing devil’s advocate.
Is it really worth removing a feature because it’s possible to use it to construct contrived examples that result in a ClassCastException, while with this feature removed it is still possible to construct contrived examples that result in a ClassCastException? Considering that it will make a lot of code hard to port to Scala 3.