Varargs limitations
But that’s a different feature.
You certainly can’t
def dotProduct(xs: Double*, ys: Double*) = ???
dotProduct(3, 5, 7, 2)
because there’s no way to tell how much of each vararg to use. If it’s ever a pain point that we can’t
def f(xs: Int*, ys: Double = 1.0) = xs.sum * ys
then the language could be modified to allow it, with the rule being that if a parameter appears after varargs, you must always reference it by name. Incidentally, if you allowed an empty varargs, this would also double as a way to mandate parameter names:
def substr(str: String, *, position: Int, length: Int) = ...
substr("herring", 3, 4) // Unclear ("r" or "ring"?) and not allowed
substr("herring", position = 3, length = 4) // Aha
Anyway, if there are deficiencies in varargs itself, we should fix those. Note that you can have varargs at the end of each parameter block, so it’s only the default-after-varargs that is awkward. (And if it’s really, really important, you can fake it with `using`.)
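A sketch of what I mean (hypothetical names; the “default” lives in a `given` rather than a default parameter):

```scala
// Sketch: faking a "default parameter after varargs" with a using clause.
// Factor and f are hypothetical names, for illustration only.
case class Factor(value: Double)
object Factor:
  given default: Factor = Factor(1.0)  // the stand-in for `ys: Double = 1.0`

def f(xs: Int*)(using factor: Factor): Double = xs.sum * factor.value
```

Then `f(1, 2, 3)` picks up the default given, and `f(1, 2, 3)(using Factor(2.0))` overrides it explicitly.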
I think your proposal is entirely orthogonal to varargs. The point of varargs is to allow as many arguments of the same type as you need. The point of your proposal is to not have to repeat things that are known. There is of course an interaction: if the function has varargs, your proposal works with that, too. But the concerns are almost entirely separable.
No bikeshedding
Field names: whether yes or no, that’s not this proposal
Okay, but this is exactly the opposite from what you’re proposing. This is about adding redundant information that you ought to already know. Which is better:
[
  {"x": 5, "y": 7},
  {"x": 2, "y": 9},
  {"x": 6, "y": 0},
  {"x": 4, "y": 4}
]
Array(
  Vc(5, 7),
  Vc(2, 9),
  Vc(6, 0),
  Vc(4, 4)
)
If you think the latter has unacceptable redundancy, but the former is just clear, well, I think that’s because you’ve gotten used to thinking of data as essentially untyped save for a bundle of key-value pairs. And there’s nothing terribly wrong with that viewpoint–it works fine in many instances. But the idea that data should be typed also has advantages. And objectively (just count the characters!), the kv-pair version is the one with greater redundancy (and less safety).
Now, Scala has been lacking a good way to interface with the land of key-value pairs where the keys are strings and the values are well-defined types. That’s what the named tuples feature provides. Check it out! Now if you want, in Scala you can
Array(
  (x = 5, y = 7),
  (x = 2, y = 9),
  (x = 6, y = 0),
  (x = 4, y = 4)
)
Who knows what about data?
I agree that it’s a bit of a pain; the reason I mentioned it is because it makes far more explicit what the data types are. You have said so right there.
If you worry about whether something changes your data, can’t I worry about having no idea what the data even is? Presumably there was some good reason why the type was `FooRequest`, not `Int`. And some good reason why `FooRequest` holds a `FooRequest.Id`, not an `Int`. `[5]` says to me, “I know you think types are important, but I don’t, just use this data if you can”. That’s fair enough, but “hey, there’s a good reason this isn’t a bare `Int`” is also fair enough.
Now, I do agree that the `FooRequest(FooRequest.Id(5))` thing is kind of ridiculous. You ought to be able to tell from context what is what, which is the point of the relative scoping proposal. This would get it down to `FooRequest(Id(5))`, possibly with an extra dot or two before `Id`.
Your proposal would take it all the way down to `FooRequest([5])`. I can imagine this being even better, but I can also imagine this hiding an important distinction. This isn’t exactly an objection, but I do want to point out, again, that there are tradeoffs here. It’s not all just rainbows and ponies; people decided for some reason that being explicit was important, and you’re overriding that.
This is exactly backwards from the reasoning for requiring explicit `infix` to allow infix notation, incidentally. People there, including @lihaoyi, were arguing strenuously that the library designer should be in charge of such decisions of usage. I argued otherwise, to empower the user over the library designer.
So I am sympathetic to making the same argument here: go ahead and `[5]` it if you want to (and if there’s only one possible meaning given the types available).
But I cannot accept both arguments at once; it’s simply inconsistent.
Why other features matter
With varargs, you have both a literal and a broadcast version:
def f(xs: Int*) = xs.sum
f(2, 3) // Fine
val numbers = Array(2, 3)
f(numbers*) // Also fine
f(Seq(2, 3)*) // This is cool too
Is there any reason to restrict this capability to varargs? Not really. Maybe you want it guarded behind a keyword…but maybe not? The guarded version of the spread operator is proposed here.
Your proposal would, I think, supersede that, because it’s not that hard to write an extra `[]` or `()*` or whatever; the point is to not have to type the name of whatever you’re unpacking. But on the other hand, if you just view it as a spread, like with varargs, then
Array[Vc](
  (5, 7)*,
  (2, 9)*,
  (6, 0)*,
  (4, 4)*
)
is very close to the feature you’re proposing. The main difference is with named arguments, where your proposal parallels function arguments, but named tuples can’t be partially named.
So, anyway, it’s important to consider all of these things together, because many of them are trying to accomplish similar things, and we don’t want to end up with three ways to do the same thing.
Well, we might. But we can’t assess the tradeoff fairly by ignoring or undervaluing parts of it, and championing others in those cases where it shines. My goal here is to illuminate the tradeoffs, not reject the proposal.
Also, other languages have made different tradeoffs. C# has the same feature in two different forms (one for most objects, where you have to say `new()` over and over again, and one for collections, which comes with a bunch of restrictions on what counts as a collection).
Kotlin “is getting” sounds overly optimistic; the last word on that thread is, “We’re exploring possible designs for collection literals but aren’t ready to provide or commit to a specific version”, suggesting that this isn’t an easy thing to get right.
Swift has decided that being initializable from array literals or dictionary literals is an opt-in feature, as part of its general initialization-with-literals capability.
These are all rather different tradeoffs than we’re discussing here.
So definitely the “wow this is weird, we shouldn’t” reflex is inappropriate. But it’s also important to make sure this fits Scala well, isn’t compromising aspects of Scala that are its strengths, and that out of various ways to accomplish something similar, we get ones that cover the important use cases but don’t provide too many different ways to do the same thing.
In particular, if we go with this, I think we should be very clear on (1) what else it would render unnecessary, and (2) how big a mess you can get yourself into by leaning on the feature too heavily.
Some questions
What happens if there are multiple apply methods?
class C private (i: Int):
  override def toString = s"C$i"

object C:
  def apply(i: Int) = new C(i)
  def apply(i: Int, j: Int) = new C(i + j)
  def apply(i: Int, s: String) = new C(i + s.length)

val a = Array[C]([2], [3, 4], [5, "eel"])
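For comparison, today’s fully explicit calls already resolve among these overloads from the argument types, and presumably each bracket group would have to do the same. A runnable sketch:

```scala
// Sketch: overload resolution with explicit apply calls, as it works today.
class C private (i: Int):
  override def toString = s"C$i"

object C:
  def apply(i: Int) = new C(i)
  def apply(i: Int, j: Int) = new C(i + j)
  def apply(i: Int, s: String) = new C(i + s.length)

// Each call picks an overload from its argument types:
val a = Array[C](C(2), C(3, 4), C(5, "eel"))  // C2, C7, C8
```

With `[...]`, the expected element type `C` would direct the compiler to `C.apply`, but each bracket group still needs ordinary overload resolution among the three candidates.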
Does it work with unapply too?
case class Box(s: String)

val b = Box("salmon")
// Does this work?
b match
  case ["eel"] => "long!"
  case ["salmon"] => "pink!"
  case _ => "dunno"

val b2: Box = ["herring"]
// Does this print "I am a herring"?
for [x] <- Some(b2) do println(s"I am a $x")
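For reference, the explicit `unapply`-based versions of both the `match` and the `for` work today; the question is whether the bracketed patterns above would desugar to this (a sketch):

```scala
// Sketch: the explicit unapply-based equivalents that work today.
case class Box(s: String)

val b = Box("salmon")
val answer = b match
  case Box("eel")    => "long!"
  case Box("salmon") => "pink!"
  case _             => "dunno"

// A pattern in a for generator also goes through unapply:
val words = for case Box(x) <- List(Box("herring")) yield s"I am a $x"
```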
Does it work if the type isn’t a class? Does it see through type aliases?
opaque type Meter = Double
object Meter:
  inline def apply(d: Double): Meter = d

val a = Array[Meter]([2.5], [1.0], [2.6], [0.1])
type IntList = List[Int]
object IntList:
  def apply(i: Int, j: Int, k: Int) = k :: j :: i :: Nil

val xs: IntList = [3, 2, 1]

type ListOfInt = List[Int]
// ListOfInt.apply not found, or uses List[Int].apply?
val ys: ListOfInt = [3, 2, 1]
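Today, at least, an object that merely shares a type alias’s name is an ordinary object found by the usual scoping rules, so the explicit call works (sketch):

```scala
// Sketch: an object that happens to share a type alias's name.
type IntList = List[Int]
object IntList:
  def apply(i: Int, j: Int, k: Int): IntList = k :: j :: i :: Nil

val xs: IntList = IntList(3, 2, 1)  // builds 1 :: 2 :: 3 :: Nil
```

The open question is whether `[3, 2, 1]` would consult this object, fall back to `List.apply` through the alias, or fail outright for `ListOfInt`, which has no such object at all.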
Does it trigger implicit conversions?
import language.implicitConversions
given Conversion[Option[String], String] with
  def apply(o: Option[String]) = o.getOrElse("")

val o: Option[String] = None
val xs: Array[String] = ["eel", Some("bass"), o]
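Note that with an explicit `Array(...)`, the conversion already fires element by element today, so the question is whether `[...]` keeps that behavior (sketch):

```scala
// Sketch: implicit conversions already apply per-argument in an explicit call.
import scala.language.implicitConversions

given Conversion[Option[String], String] with
  def apply(o: Option[String]): String = o.getOrElse("")

val o: Option[String] = None
val xs: Array[String] = Array("eel", Some("bass"), o)  // Some("bass") and o are converted
```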
If a class can take itself (or a supertype) as an argument, can you nest `[]` as deep as you want?
val s: String = "minnow"
// One of `String`'s constructors takes a `String`
val ss: String = [[[[[[[[[["shark"]]]]]]]]]]
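For what it’s worth, the explicit version of that nesting already typechecks today, since `java.lang.String` has a copy constructor (sketch):

```scala
// Sketch: nesting the explicit String copy constructor today.
val ss: String = new String(new String(new String("shark")))
```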