Proposal: Restricted Type Projection Sugar

Problem: Parameter Explosion
The removal of type projection on abstract types (T#A) in Scala 3 leaves us with no way to access the type arguments hidden inside a type parameter’s parameterized upper bound. This forces us to “lift” every internal type variable into the class signature.

This is not just combinatorially verbose; it breaks encapsulation. We are forced to declare “useless” parameters (implementation details of the child components) solely to reconstruct the parent type in the extends clause.

Example
Consider a Pipe defined by an Input Schema and an Output Schema.

  • Schema has 2 parameters: Row type and Validator type.
  • Merging two Pipes requires merging their schemas (Union of Rows, Intersection of Validators).
trait Schema[+Row, -Val]
trait Pipe[I <: Schema[?,?], O <: Schema[?,?]]

Current Scala (12 Parameters)
To implement MergePipes, we cannot simply take two Pipes. We must extract R1, V1 etc., because the MergeLogic in the extends clause requires them explicitly.

// The Type Logic (Operator)
// We are forced to define this on the raw components
trait MergeLogic[R1, V1, R2, V2] extends Schema[R1 | R2, V1 & V2]

case class MergePipes[
  // Left Pipe Internals
  IR1, IV1, I1 <: Schema[IR1, IV1],
  OR1, OV1, O1 <: Schema[OR1, OV1],
  // Right Pipe Internals
  IR2, IV2, I2 <: Schema[IR2, IV2],
  OR2, OV2, O2 <: Schema[OR2, OV2]
](
  left: Pipe[I1, O1], 
  right: Pipe[I2, O2]
) extends Pipe[
  // Visual Noise: We must manually thread the useless params here
  MergeLogic[IR1, IV1, IR2, IV2], 
  MergeLogic[OR1, OV1, OR2, OV2]
]

Proposed Syntax (2 Parameters)
We define MergeLogic to accept the Schema containers directly, using projection to access Row and Val internally. The MergePipes class now only takes the two pipes.

// The Type Logic (Refactored)
// Now accepts the containers and projects internally
trait MergeLogic[S1 <: Schema[?,?], S2 <: Schema[?,?]] 
  extends Schema[S1#Row | S2#Row, S1#Val & S2#Val]

type PTop = Pipe[?, ?]

case class MergePipes[P1 <: PTop, P2 <: PTop](left: P1, right: P2) 
  extends Pipe[
    // Clean: We extract the schemas from the pipes via projection
    MergeLogic[P1#I, P2#I], 
    MergeLogic[P1#O, P2#O]
  ]

The Proposal
Allow C#T only if C has an upper bound that defines the type parameter T.

This is not general type projection; it is merely syntactic sugar for, and equivalent to, the explicit-parameter version.
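
For concreteness, here is a sketch of my own (not part of the original post) of what the proposed MergeLogic would be sugar for in today's Scala, with the projections replaced by explicit parameters tied together through the bounds:

// Hypothetical desugaring of the proposed MergeLogic (the name is mine)
trait MergeLogicExplicit[
  R1, V1, S1 <: Schema[R1, V1],
  R2, V2, S2 <: Schema[R2, V2]
] extends Schema[R1 | R2, V1 & V2]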

It is clearly not just syntactic sugar if it makes the number of type parameters dependent on some kind of “overloading” at the type level. :wink: I’m afraid what you’re aiming for is much more involved than your proposal suggests.

I believe there is a misunderstanding of the proposal. There is no “overloading” or dynamic arity resolution involved. The transformation happens strictly at the definition site, statically, just like for context bounds.

When we write def f[T: Ordering], the compiler desugars this into a signature with an additional implicit parameter (using Ordering[T]). The “user-facing” arity differs from the “internal” arity. That is standard syntactic sugar.
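
For reference, that desugaring looks roughly like this (a routine Scala 3 example, not code from the thread):

// Source: user-facing arity is one type parameter, no explicit evidence parameter
def max[T: Ordering](a: T, b: T): T =
  if summon[Ordering[T]].lt(a, b) then b else a

// Desugared: the compiler adds a using-parameter, so the internal arity is larger
def maxDesugared[T](a: T, b: T)(using ord: Ordering[T]): T =
  if ord.lt(a, b) then b else a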

My proposal works the same way:

  1. Source: class Pair[C <: Code[...]]
  2. Desugars to: class Pair[P, N, C <: Code[P, N, ...]]

The underlying class always has the exploded parameter list. The sugar merely allows the user to declare it using the high-level shape, with the compiler filling in the fresh type variables for the bounds.
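
Applied to the running example, my reading of that expansion is the following sketch (an illustration, not compiler output):

// Source (proposed syntax):
//   case class MergePipes[P1 <: PTop, P2 <: PTop](left: P1, right: P2)
// Desugars to fresh parameters for every argument fixed by the bounds, roughly:
case class MergePipesExpanded[
  I1 <: Schema[?, ?], O1 <: Schema[?, ?], P1 <: Pipe[I1, O1],
  I2 <: Schema[?, ?], O2 <: Schema[?, ?], P2 <: Pipe[I2, O2]
](left: P1, right: P2)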

Can this case be solved with the tracked modifier from the Modularity Improvements proposal?

I don’t think so, unfortunately, since it doesn’t help with type parameters, and member types behave rather differently, particularly in terms of variance.
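
To make the variance point concrete, here is a minimal sketch of my own (not from the thread) of the member-type encoding and what it costs:

// Member-type encoding: no definition-site variance annotations are possible
trait SchemaM { type Row; type Val }

// The variance that Schema[+Row, -Val] gives us for free must instead be
// spelled out as bounds in a refinement at every use site:
type SchemaOf[R, V] = SchemaM { type Row <: R; type Val >: V }

// Subtyping now comes from refinement bounds rather than +/- annotations:
val ev = summon[SchemaOf[Int, Any] <:< SchemaOf[AnyVal, Nothing]]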

@Stephane Most likely your case can be emulated by encoding general type projections through match types, though I think it would be a good idea to make general type projection syntax type-check as if such a match type were used.
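
A sketch of what I understand that emulation to look like (my code, not the poster's):

// The projection S#Row emulated via a match type
type RowOf[S <: Schema[?, ?]] = S match
  case Schema[r, v] => r

// Reduces fine on a concrete application:
val ok = summon[RowOf[Schema[Int, Any]] =:= Int]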

I don’t think that would work. Match types don’t reduce on abstract types. Type members also don’t have definition-site variance annotations.

Thinking more about this, I think one crucial piece that is currently missing from Scala is bivariance. Scala has covariant (+T), contravariant (-T), and invariant type parameters (T). But variance actually forms a lattice with invariance at the bottom, and a fourth option at the top, namely bivariance.

This is exactly the variance that is usually desired for helper types that are not meant to contribute directly to the subtyping hierarchy, but are simply there for accounting reasons, as in my example.

Without bivariance, the current system is incomplete: it cannot express all the cases where a helper type needs bivariance to define the correct subtyping relation. It also prevents the proposed translation from working in general.
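
To illustrate the last point with today's Scala (my sketch): the fresh parameters that the translation introduces are pure accounting, yet they still participate in subtyping.

// Exploded helper of the kind the translation would produce
class Lifted[R, V, S <: Schema[R, V]](val schema: S)

// A Schema[Int, Any] is also a Schema[AnyVal, Any] (Row is covariant), so both
// Lifted[Int, Any, Schema[Int, Any]] and Lifted[AnyVal, Any, Schema[Int, Any]]
// are legal instantiations, yet today they are unrelated types. Bivariant R and
// V would make them interchangeable, which is what the translation relies on.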

Given that having an incomplete variance lattice is fundamentally unnatural, I could imagine that completing it would lead to all kinds of benefits, both for Scala users and for the compiler logic itself. It would also probably be easy to add; getting people to agree on the surface syntax for it might be the hardest part :wink:

The hardest part for now would probably be writing a SIP. Re: syntax, I don’t think that will be a hard part, if only because bivariance annotations will likely remain niche. I proposed a +-T marker before, but it could be anything, like *T, or a keyword such as bivariant T or phantom T. Btw, that earlier proposal includes another, shorter motivation for introducing bivariance: to allow typeclasses to use covariance/contravariance annotations without having to fear that doing so would outlaw bivariant data types like Const from implementing them.
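
For readers who have not seen that earlier motivation, here is a small sketch of the Const situation (my example, with hypothetical names):

// Const ignores its second parameter, so morally it is bivariant in A
final case class Const[B, A](value: B)

// A typeclass that insists on a covariant type constructor:
trait CoFunctor[F[+_]]:
  def map[A1, B1](fa: F[A1])(f: A1 => B1): F[B1]

// [A] =>> Const[B, A] is invariant in A, so it is rejected where F[+_] is
// expected; a bivariant A would satisfy both F[+_] and F[-_].
// def constCoFunctor[B]: CoFunctor[[A] =>> Const[B, A]] = ???   // variance error today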

Nice! I’m glad I’m not the only one who realizes how incomplete the current system is. It’s a bit like the natural numbers without 0.

Why do you think the hardest part would be writing the SIP? If there is no fundamental blocker, I could give it a try.

Because no one has written one yet. My proposal was from before Scala 3 was released and the SIP process was reopened, and I forgot about it until now. The next hard step after that would be figuring out some questions about semantics. One I can think of immediately: should a bivariant parameter be immediately substitutable by any type within its type bounds, or should it only vary co-/contravariantly one step at a time with respect to its current value? That is, given trait X[+-T]; val t: X[Int], does one need two type ascriptions to get ((t: X[Any]): X[String]), or just one, t: X[String]? String is neither a subtype nor a supertype of Int, but X[Int] is nevertheless assignable to X[String] by first assigning to X[Any]/X[Nothing] and then to X[String], and I’m not sure whether that process should be automatic. And the next hard step after that one would be the implementation itself.

You’re very welcome to write a SIP if you’re up to it!

What about ±T? :wink:

I am personally partial to :. But a less controversial option might simply be something like a @bivariant annotation (or even @private, in which case they would not be explicitly settable, and always be inferred).

Concerning the semantics, there is only one correct position dictated by theory: bivariant type parameters are ignored when it comes to subtyping.

Regardless, after taking a look at the Scala compiler code related to variance, I have low hopes that this will ever be fixed. Therefore, until there is a credible sign of willingness from the Scala team, I do not plan to work on a SIP for this.
