I think this is a tempting but ultimately counterproductive way to think about it.
Scala’s best meta-feature is that its features work together. Practically every time this isn’t true, it rankles.
Take context functions, for instance. Methods take context but functions can't?! This cripples your ability to create abstractions, and forces you to fall back on `trait FunctionWithTheContextIWant { def apply(foo: Foo, bar: Bar)(using MyContext): Baz }`. Generalizing has been a big win, I think.
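For concreteness, here's the kind of thing context function types let you write directly; the types `Foo`, `Bar`, `Baz`, and `MyContext` are placeholders of my own, not anything from a real library:

```scala
// Illustrative placeholder types
case class Foo()
case class Bar()
case class MyContext(tag: String)
case class Baz(msg: String)

// The pre-context-functions workaround: a one-off trait per signature
trait FunctionWithTheContextIWant:
  def apply(foo: Foo, bar: Bar)(using MyContext): Baz

// With context function types, it's just an ordinary (composable) type:
val f: (Foo, Bar) => MyContext ?=> Baz =
  (foo, bar) => Baz(summon[MyContext].tag)

@main def demo(): Unit =
  given MyContext = MyContext("ctx")
  println(f(Foo(), Bar()).msg)  // prints "ctx"
```

Because `f` has an ordinary function type, it can be passed around, stored, and composed like any other function, which is exactly what the bespoke trait couldn't do.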
Scala has plenty of expressive power to handle the fewer-explicit-type-mentions-for-data feature. But I don’t think we should bolt on something simple and consider the more general case as “not a priority at all”. We should get the general case completely clear, then if we think it’s too powerful to unleash or too hard to implement, we can take the easy case first.
In particular, we have four different features we could build off of in the data case.
- We can view it as implicit conversion of tuples.
- We can view it as a spread operation a la varargs `xs*` (with or without a spread operator).
- We can view it as a particular case of relative scoping (with or without a scoping operator).
- We can view it as novel literal syntax, unlike everything else.
If it is an implicit conversion of tuples, then we aren't dealing just with `("Leslie", (1966, 9, 15))`, but also with `("Leslie", dob)` where `dob` is a tuple type, not a `DateOfBirth` class. We might not unlock it, but that's the generalization.
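Today's `Conversion` mechanism already sketches what this reading would mean; the classes and the given below are illustrative, not a proposal, and note that because conversions don't chain, the nested tuple shape has to be handled by the same conversion:

```scala
import scala.language.implicitConversions

case class DateOfBirth(year: Int, month: Int, day: Int)
case class Person(name: String, dob: DateOfBirth)

// One conversion from the whole tuple shape to the class
given Conversion[(String, (Int, Int, Int)), Person] =
  (t: (String, (Int, Int, Int))) =>
    Person(t._1, DateOfBirth(t._2._1, t._2._2, t._2._3))

@main def demo2(): Unit =
  val p: Person = ("Leslie", (1966, 9, 15))  // literal tuple converts
  val dob = (1966, 9, 15)                    // a plain tuple value...
  val q: Person = ("Leslie", dob)            // ...converts too
  println(p == q)  // prints "true"
```

The `q` case is the generalization in question: the conversion fires for any expression of the right tuple type, not just for tuple literals.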
If it is a spread operation, then you'd expect it to spread wherever you need it, at least if the feature is generalized at some point. So `s.substring(r*)` should work, where `r` is a 2-tuple of ints and `*` is our spread operator. Maybe `Array[Int](p*, p*, (3, 5)*)` should work too; it's common for spreads to expand into varargs. Maybe `("Leslie", (ym*, 15)*)*` should work, where `ym` is a 2-tuple containing two ints. Also common for spreads.
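Only the Seq-to-varargs spread exists today; a tuple spread like `r*` is hypothetical. Here's the existing spread alongside my guess at what the tuple form would desugar to (not a spec):

```scala
// The existing spread: a Seq expanded into a varargs parameter
def sum(xs: Int*): Int = xs.sum
val ys = Seq(1, 2, 3)
val total = sum(ys*)               // 6

// The hypothetical s.substring(r*) would presumably desugar to
// positional arguments drawn from the tuple's components:
val s = "generalize"
val r = (2, 7)                     // a 2-tuple of ints
val sub = s.substring(r._1, r._2)  // what s.substring(r*) might mean

@main def demo3(): Unit =
  println(total)  // prints "6"
  println(sub)    // prints "neral"
```

The varargs case and the positional-arguments case are what makes mixing them, as in `Array[Int](p*, p*, (3, 5)*)`, a natural expectation once a spread operator exists at all.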
If it is relative scoping, then `..of(1958, 9, 15)` should work too (where `..` is the prefix relative scoping operator), again assuming we decide to go for more generalization.
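For the relative-scoping reading, today you must name the companion explicitly; the `..` form in the comment below is hypothetical, and `of` is a helper I've defined for illustration:

```scala
case class DateOfBirth(year: Int, month: Int, day: Int)
object DateOfBirth:
  def of(y: Int, m: Int, d: Int): DateOfBirth = DateOfBirth(y, m, d)

// Today: the expected type's companion must be spelled out
val dob: DateOfBirth = DateOfBirth.of(1958, 9, 15)

// Hypothetical relative scoping: the expected type supplies the prefix
// val dob2: DateOfBirth = ..of(1958, 9, 15)
```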
If it's its own special snowflake, unrelated to everything else, then it should be expected to pass a much higher bar, because you're introducing a new feature that intentionally has no broader use, nothing adjacent that helps you reason about it. It's just yet another thing to learn, for one particular use case. Scala is already perilously heavy on separate things to learn, all for good reason, pretty much, but we can't discount the burden. Enums? Match types? `summonInline`? Context bounds? Context functions? Named tuples, maybe?
Thus, I disagree that this
is a good policy from which to approach holistic language design. You absolutely do want to cover the common use case well, but if you’re spending your force-programmers-to-learn-a-new-thing budget, it should be a wise expenditure. This means considering very carefully whether one can solve other pressing problems with the same concept. Especially since the other pressing problems which are related are on the table right now.
So I advocate, strongly, for considering all the possible generalizations even if the feature for now ends up just being `val CaseClass = (5, 1, 2, "herring", true)`: purely literals, only in named cases, etc. If we haven't thought through the generalizations, and tentatively picked one of which we're implementing a special case, we won't know how to set it up for potential future language development. In the not-very-long run this leads to a kitchen sink language or a static language.