I thought there was already a proposal floating around to deprecate @? (Which I would very much like to do – IMO it’s one of the longest-standing warts in the language.)
I totally agree with this; the order should not matter.
Not binding it is better than binding to the first, in my opinion.
But of course binding both is best, if possible.
In case I’m not the only one who dislikes the changes, I’ll list my issues with them here, in case they resonate with others:
We already have syntax for this; it looks like:
def reduce[A](xs: List[A])(using Monoid[A]): A = ...
It supports both anonymous and named instances, and it makes the ordering quite explicit (in case you need to use the Monoid for path-dependent types). Unless you also propose to remove this syntax, I’m against adding more syntax that means the same thing, has hard-to-reason-about ordering semantics (it’s super easy to touch something there and accidentally produce a different method ordering and break your binary compatibility without realizing it), saves like 3 characters per typeclass in total, and looks super out of place.
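For illustration, here is a minimal, self-contained sketch of both forms; the Monoid shape with unit and combine is my own placeholder, not something defined in the proposal:

trait Monoid[A]:
  def unit: A
  def combine(x: A, y: A): A

// Anonymous context parameter: summon the instance when you need it.
def reduceAnon[A](xs: List[A])(using Monoid[A]): A =
  xs.foldLeft(summon[Monoid[A]].unit)(summon[Monoid[A]].combine)

// Named context parameter: handy when you refer to the instance directly,
// e.g. for path-dependent types.
def reduceNamed[A](xs: List[A])(using m: Monoid[A]): A =
  xs.foldLeft(m.unit)(m.combine)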
Already solved, see above.
A more honest example would be something like
def breadthFirstList[SubNode <: Node : Monoid](n: SubNode): List[SubNode] = ...
and then you see SubNode.unit and wonder when Node added a unit method (or imagine a class with several such generics defined and aptly named, then providing methods with different typeclasses)…
Once you start naming your generics (and you should), this reads terribly. Sure there can be examples where it looks great but I don’t want to pay the cost of changing givens syntax again for such negligible (imo) gains.
Frankly, I already have a hard time remembering the current given syntax. My trick for remembering it is thinking “despite looking weird, it’s still a def, and the parameters respect the same ordering and syntax, even if some parts are optional”. Now you are taking that away from me as well.
Sorry, given what polymorphic function do I provide compare? When I read that, that’s the immediate interpretation I got. This was only worsened by the as, which I took to name the implicit Ord for A.
Hard to argue against beliefs. Mine are in the opposite direction, especially given Scala’s reputation on syntax.
Finally, the given deferred: I think this one is intuitive, doesn’t break existing syntax, and provides functionality sorely needed since Scala 2. I think it’s a smash hit. I’m fine with deprecating abstract givens in favour of it.
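For readers who skipped that part of the proposal, here is a rough sketch of what a deferred given looks like; the SortedOps trait is my own example, and the exact import paths for the experimental feature are my assumption:

import scala.language.experimental.modularity
import scala.compiletime.deferred

trait SortedOps[Elem]:
  given Ordering[Elem] = deferred   // concrete subclasses must have an Ordering[Elem] in scope
  def max(x: Elem, y: Elem): Elem =
    if summon[Ordering[Elem]].lt(x, y) then y else x

class IntOps extends SortedOps[Int] // the deferred given is filled in from Ordering[Int] here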
I recognize the appeal of the changes, though I would still have argued about the syntax even before Scala 3 was out; to me, now that 3 is out (and since 2019 already), this is not a welcome change.
This confused me while reading the proposition, as I’ve always seen them written without the extra space (I only read the edited version).
While I agree that the context bound looks much like a type ascription in its current form, adding the extra space doesn’t really seem clearer to me. Whitespace just feels so arbitrary…
The other style is
def copy :C = fromSpecific(array)
as seen from a bug report contributor. I feel sure it will catch on because they are so consistent or persistent with it that it must offer some great benefit I do not yet appreciate.
Personally, I prefer “no space” because it is punctuation.
On the limitations of automatic naming with multiple bounds, we have already coped with anonymous givens bumping into each other as on a crowded subway platform, anonymous and yet with fates intimately intertwined.
I agree with @rcano. I found that Scala 3 neatly removed the need for context bounds. def sum[A: Monoid] is barely any shorter than def sum[A](using Monoid[A]). The latter is also regular, works with all cases today (anonymous, explicit name), and is easier to search for as a newcomer.
Without fail, newcomers ask me about context bounds; that happens much less with implicit parameters, because it is far easier to find what you are looking for with names than with symbols. I already have a hard time today explaining when to use context bounds and when to use an implicit parameter list. This proposal will make it even more difficult.
This came up as a question on the original “whiteboard” proposal, where it was suggested (as an idea) that the compiler could synthesise a single object with the members of both, to make it unambiguous.
However, if that never happens, I guess that when there are conflicts you have to go back to context parameters, or introduce a new type parameter with a constraint (=:=) that the types are equal (not sure if this would introduce ambiguities).
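Here is a concrete sketch of that second workaround, with placeholder type classes of my own (Pretty and Loud both define render, so member-style selection on a single type parameter would clash):

trait Pretty[A] { def render(a: A): String }
trait Loud[A]   { def render(a: A): String }

// Split the bounds across two type parameters tied together with `=:=`,
// so each witness can still be reached unambiguously.
def describe[A: Pretty, B: Loud](a: A)(using ev: A =:= B): String =
  summon[Pretty[A]].render(a) + " / " + summon[Loud[B]].render(ev(a))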
I’m thirding what @rcano said. Context bounds were a nice shorthand in Scala 2 when you didn’t need to use the parameter directly, and implicit parameters were good for when you did need to use them.
Scala 3 has made context bounds much less necessary with anonymous using parameters, and using is way more powerful than context bounds have been or ever will be. As an example, because using parameters can come before real parameters, I can cause those using parameters to influence what inputs are allowable in a certain context.
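Here is a small sketch of that; the Codec type class and the names are mine:

trait Codec[A]:
  type Wire
  def encode(a: A): Wire

given intCodec: Codec[Int] { type Wire = String } =
  new Codec[Int] {
    type Wire = String
    def encode(a: Int): String = a.toString
  }

// Because the using clause comes before the regular parameters, the types of
// those parameters can depend on the resolved instance.
def send[A](using c: Codec[A])(payload: A, alreadyEncoded: c.Wire): Unit =
  println(s"${c.encode(payload)} / $alreadyEncoded")

@main def demo() = send(42, "41")   // the second argument must be c.Wire, i.e. String for Int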
For what it’s worth, I don’t use context bounds either, only (using Monoid[A]), and none of these proposals would change whether I did or not.
But I still need to read whatever other people like to write; I don’t begrudge them their syntactic sugar, but I do prefer when it’s clean and regular.
Sure. I personally also like it regular and can choose not to use context bounds. I can also read the context bounds because I know what they mean, in case someone else likes to use them.
For me the problem is more for new developers. Scala is considered complex. That is partly bad marketing, but it also comes from having too many ways to do the same thing, combined with being a powerful language. Each of these alone might not be a problem, but together they add up. IMO the perfect language would remove unnecessary choice, and stylistic choices are often unnecessary since they do not in any way change how expressive a language is. These proposals add even more choice, so I think they would make things worse, not better.
I believe Context Bounds for Type Members is a natural evolution of where context bounds in Scala should go. Scala has been trying to close the gap between generic type parameters and type members, and this will make the language even more regular. Why should we be able to add context bounds to generic type parameters but not to type members? If Scala starts using Self in Typeclass Improvement, this will probably even be required in order to have feature parity. The confusion with <: is real, but it is a minor concern compared to the benefits.
One question, though, is whether the syntaxes will be interchangeable.
Will the following code compile?
class Map:
  type Key : {Ord, Show, Eq}
  type Value : {Show}

class Map[K: {Ord as ord, Show, Eq}, Value: {Show}] {}
Why are we creating such a special rule for this? It seems very rare, so rare that I would think it would be acceptable to make the example fail with an ill-formed error and require the user to write the corrected desugaring.
The run example I gave is a common case. I hit the problem in the wild very quickly, so I don’t think these cases are rare.
I just noted something which ties the whole context bound thing together quite neatly. Here is an important principle:
If A: B and C is a member of B, then A.C is well-defined.
With the new context bounds, that works when A is a type as well as when A is a term. Indeed:
If x: Monoid then x.unit is well-defined, since unit is a member of Monoid. By the same token, if A is a type with A: Monoid then A.unit is also well-defined.
So, yes, in that sense, context bounds are a form of types for types, and types can be selected with members of their context bounds. It all holds together!
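For concreteness, here is the term-level half of that as actual code, with the type-level analogue in a comment since it is proposal syntax (the Monoid shape is my assumption):

trait Monoid[T] { def unit: T }

// Term level: x: Monoid[Int] and unit is a member of Monoid, so x.unit is well-defined.
val x: Monoid[Int] = new Monoid[Int] { def unit = 0 }
val zero: Int = x.unit

// Type level, under the proposal (not valid today): if A: Monoid, then A.unit is well-defined.
// def empty[A: Monoid]: A = A.unit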
I admit there’s a blemish: for now this only works for the first context bound of a type. But maybe we can fix that? In A: {C, D} we could accept all selections A.M of members of either C or D, as long as the selection is unambiguous. If we wanted to use A itself as a value, we’d have to insist that it has only a single context bound, or else demand an explicit as binding. (Or maybe: disallow using A as a value by itself no matter how many context bounds it has, and only allow selections.)
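To make that rule concrete, here is a sketch with placeholder type classes of my own; the selections use proposal syntax and are shown only as comments:

trait Show[A] { def show(a: A): String }
trait Read[A] { def read(s: String): A }

// Under the sketched rule (not valid today):
// def roundTrip[A: {Show, Read}](a: A): A =
//   A.read(A.show(a))   // show is only in Show, read only in Read: unambiguous
// If both bounds defined, say, a member named `name`, then A.name would be
// rejected as ambiguous unless disambiguated with an explicit `as` binding.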
About binary compatibility: suppose we have versions 3.x and 3.{x+1}. In 3.x the context bound desugars to a trailing implicit parameter (the old ABI); in 3.{x+1} it desugars to a leading one (the new ABI). If a 3.{x+1} client calls a method from a 3.x library, the compiler can determine this (because TASTy records the compiler version) and emit the call against the old ABI. So in 3.{x+1} we can just use the new ABI.
We need more than binary compatibility. We need to be able to migrate a library to a new version without changing client code. Clients with explicit using arguments would break if we move the parameter section.
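A small sketch of that breakage, with the Monoid shape and names being mine:

trait Monoid[A] { def unit: A; def combine(x: A, y: A): A }

given intMonoid: Monoid[Int] = new Monoid[Int] {
  def unit = 0
  def combine(x: Int, y: Int) = x + y
}

// Today def reduce[A: Monoid](xs: List[A]): A desugars with the context
// parameter last, so a client may pass it explicitly:
def reduce[A](xs: List[A])(using m: Monoid[A]): A = xs.foldLeft(m.unit)(m.combine)
val n = reduce(List(1, 2, 3))(using intMonoid)

// If a new compiler moved the using clause in front of (xs: List[A]), this
// call site would no longer compile and would have to be rewritten.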
I think the using clause can only be moved first if there are references to its parameters. Otherwise we hit compatibility problems, where clients have to be rewritten. Requiring actual typeclass resolution to be delayed is a rather big ticket. Not sure we can pull this off. Alternatively, we could keep the proposed scheme, which works in many cases, and improve type class resolution independently. Maybe we can improve it enough to handle the case you brought up.
We should really strive to desugar in the “obvious” way, i.e.:
def foo[X : A](i: Int)
// becomes
def foo[X](using X: A[X])(i: Int) // with delayed instantiation
Or to have a way to “teleport” a scope:
def foo[X : A](i: Int)
// becomes
def foo[X](i: Int)(using X: A[X]) // but i: Int can "see" X
Anything else and we’ll run into parameter-dependency issues.
With every feature we add to the type system, we’ll wish we had done it differently.
For example, when adding qualified types, this was one of the hassles I ran into:
def relatedOps[A]
    (using BoundedUnsignedInt[A]) // Can't use context bound, as it desugars as last parameter instead of first
    (iop: (Int, Int) => Int)
    (bop: (A, A) => A)
    (x: A, y: A with fits(iop)(x, y)): Boolean =
  bop(x, y).toNat == iop(x.toNat, y.toNat)
What this exact piece of code does is unimportant, but we can see that I was unable to use a context bound, as the fits present in the last clause could not access the value.
It’s even possible to devise examples where it is the very first clause:
def silly[A : BoundedUnsignedInt](x: A with fits(_ + _)(bound, bound))
Could we synthesize an object for such cases?
object A extends C[A] with D[A] {
  val ca = implicitly[C[A]]
  val da = implicitly[D[A]]
  export ca._
  export da._
}
That should work, I think? And it can be done syntactically, even without typechecking. Bytecode-wise it’s a bit wasteful to generate a new class and everything, but maybe we could optimize the object A away in the common cases where only individual member references or individual traits are needed, and only preserve it when someone actually needs an instance of C[A] with D[A] to pass somewhere.
In cases where the two typeclasses contain members with clashing names, I think such cases are uncommon enough that falling back to summon or implicitly is fine.
Would this work for the type A: B declarations introduced in this proposal?
trait Foo {
  type A: Monoid
  println(A.unit)
}
Your proposal doesn’t cover this case, but it seems to me that we could make it work with exactly the same desugaring as we use for type-parameter context bounds, whatever that ends up being.
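For example, one could imagine the following; this is only my guess at a possible desugaring, reusing the proposal’s deferred givens, and it assumes a Monoid type class with a unit member:

// `type A: Monoid` inside a trait could desugar to a type member plus a
// deferred given witness, mirroring the type-parameter case:
trait Foo {
  type A
  given Monoid[A] = compiletime.deferred   // hypothetical; `deferred` as in the proposal
  // println(summon[Monoid[A]].unit)       // so that A.unit can resolve through this witness
}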