Better support for structurally typed records

It would be quite a bit of work to make a tuple a subtype of one of its prefixes, and I am not sure it would be worth it in the end. One potential problem is the following: it would be nice if, at some point in the future, we could get a flat layout for tuples, where tuples and their elements are not boxed. But that is incompatible with subtyping, since subtyping is not allowed to change representation.

What we can do is add a conversion from longer tuples to shorter ones. E.g.

val person2: Person2
val person1: Person1 = person2.convert

This would need a conversion macro on tuples with a signature like this one:

extension [A <: Tuple](x: A) inline def convert[B]: B

The macro would check that the target type B is shorter than A if it is a regular tuple, or that it defines a subset of A’s names if it is a named tuple.

Should this conversion be implicit? Maybe, but we can start with the explicit one and then see how burdensome it is.

EDIT: On second thought, this should be an instance of Conversion. That will already provide the convert method. Proposals such as into parameters would then directly apply.
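For illustration, here is a hand-written sketch of what one such Conversion instance could look like for a concrete pair of tuple types; the macro would derive instances like this automatically, and the element types here are just placeholders:

```scala
// Hypothetical sketch: a hand-written Conversion from a longer tuple
// to one of its prefixes. The convert macro would derive instances
// like this for any valid pair of tuple types.
val truncate: Conversion[(String, Int, Boolean), (String, Int)] =
  longer => (longer._1, longer._2)

val person2: (String, Int, Boolean) = ("Ada", 36, true)
val person1: (String, Int) = truncate(person2)
```

Registered as a given, such an instance would also apply implicitly where, say, an into parameter or an explicit convert call expects the shorter type.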


I think that could already be pretty useful. Currently I see the following pattern quite often in our codebases:

case class Person(id: UUID, otherData: X)

trait HasId[A] {
  def id(a: A): UUID
}

implicit val hasIdPerson: HasId[Person] = ???

def somethingThatNeedsId[A: HasId](a: A) = ???

// Usage

to work around the fact that different data types might have shared fields. This is doable, but quite a bit of boilerplate, especially if there are multiple permutations of HasX.
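Completed and made runnable, the pattern above looks roughly like this (the payload fields and the Invoice type are made-up stand-ins; real code would have one given per type and per shared-field abstraction, which is where the boilerplate comes from):

```scala
import java.util.UUID

// Hypothetical stand-ins: two unrelated types that share an id field.
case class Person(id: UUID, name: String)
case class Invoice(id: UUID, total: BigDecimal)

trait HasId[A]:
  def id(a: A): UUID

// One instance per type; multiply by every HasX abstraction needed.
given HasId[Person] = _.id
given HasId[Invoice] = _.id

def somethingThatNeedsId[A: HasId](a: A): UUID = summon[HasId[A]].id(a)

// Usage
val ada   = Person(UUID.randomUUID(), "Ada")
val adaId = somethingThatNeedsId(ada)
```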

If we could do something like:

type HasId = (id: UUID) *: EmptyTuple
def somethingThatNeedsId(hasId: HasId) = ???

// Usage

I think that would already be better than the status quo, and it is basically what I would reach for structural types for (if they were not shunned within Scala).

I would not say structural types are shunned in Scala. It’s just that structural types abstracting from classes need reflection to implement field selection, which follows from the way they are defined.

It’s an interesting idea to generalize convert to work on arbitrary types that define the required fields. That could be quite powerful (in my design it just worked on named tuples).

Another useful functionality is an operator like

person.project(_.name, _.age)

which takes a number of selector functions and constructs a named tuple by applying all these functions to the receiver. That again could work on named tuples and classes alike. Tentative signature:

extension [A](x: A) transparent inline def project(inline selectors: (A => Any)*): Tuple

The project macro would

  • check that all selectors are indeed selection functions of the form _.f that mention a field f in the type A,
  • check that all selection names are different,
  • construct a named tuple (f_1 = x.f_1, ..., f_n = x.f_n)
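By way of illustration, a call like person.project(_.name, _.age) would expand to code equivalent to the following hand-written version (Person and its fields are made up for the example; the named-tuple labels are elided and a plain tuple is used here):

```scala
// Hypothetical receiver type for the example.
case class Person(name: String, age: Int, email: String)

val person = Person("Ada", 36, "ada@example.org")

// What the project macro would construct, written out by hand; with
// named tuples the result would carry the labels (name = ..., age = ...).
val projected: (String, Int) = (person.name, person.age)
```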

In each case, the tradeoff is between

  • additional work at the conversion site to construct a named tuple
  • additional work at selection using reflection on a structural type

I tried to think of such examples. At first glance, use cases that involve Iskra or Tyqu would work fine with named tuples. The case of Chimney might be more problematic unless we have that conversion between conformal named tuple types.

#19075 is an experimental implementation that shows a possible encoding of named tuples into regular tuples with opaque aliases as element types. It’s actually surprisingly simple.


This proposal is very exciting!


A little off topic, but why do structural types need to be defined via reflection? What makes them so different from case classes that they need reflection? SIP-17 makes it seem that reflection was always part of the plan, but why did it need to be?

Specifically, SIP-17 talks about ways to resolve qual.sel, the first being looking to see if qual defines sel, the second being implicit extensions in scope, and the third being the scala.Dynamic stuff. Would it be possible to define a record type such that it falls under the first category? (Genuinely curious. I’m a total noob when it comes to the inner workings of dotty)

The following is a somewhat informal description of why this is not possible. The details are probably incorrect, but the general idea should be helpful.

Here is an example:

import scala.reflect.Selectable.reflectiveSelectable // needed for structural member access

class A(val x: Int)

class B(val x: Int, val y: Int)

type WithX = Any { val x: Int }

val l: List[WithX] = List(A(1), B(2, 3))

l.forall(wx => wx.x > 0)

This code looks innocent: after all, both A and B define an x. But we actually have no guarantees about the binary representation of A and B.
In the first, x might be at offset 0, while in the other it might be at offset 1 (for example, if y is placed before it).
So we need some way to ask at runtime “where is x?”, and that’s exactly what reflection allows us to do!

You’ll note that in cases where we already know where the data will be, we don’t need reflection (narrowing refinements):

abstract class Foo:
  val x: Any

class A(val x: Int) extends Foo

class B(val x: Int, val y: Int) extends Foo

type FooWithX = Foo {val x: Int} // narrowing refinement

All descendants of Foo will have the same offset for x.
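This can be seen in code: selecting x through the narrowing refinement compiles and runs without the reflectiveSelectable import, because x is already a member of Foo and the refinement only narrows its type. The snippet below repeats the definitions above so it is self-contained:

```scala
abstract class Foo:
  val x: Any

class A(val x: Int) extends Foo
class B(val x: Int, val y: Int) extends Foo

type FooWithX = Foo { val x: Int } // narrowing refinement

// No reflection needed: x is looked up through Foo's known layout,
// and the refinement narrows its type from Any to Int.
val l: List[FooWithX] = List(A(1), B(2, 3))
val sum = l.map(_.x).sum // 1 + 2 = 3
```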

I propose we leave it at that for this thread, feel free to open another one if you have more questions