If we want to completely replace implicit, we also need different names or even different mechanisms for:
DummyImplicit. Used to differentiate between methods that have the same signature after erasure. To me this always felt like a hack. Why can’t the compiler do it by itself?
@implicitNotFound. Can there be a better way to provide custom error messages? (Both mechanisms are sketched below.)
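For reference, a minimal sketch of the two mechanisms listed above; the identifiers (Render, JsonEncoder, the error message) are made up for illustration:

import scala.annotation.implicitNotFound

object Render {
  // Without the extra DummyImplicit parameter, the two overloads would have
  // the same signature after erasure (both take a List) and would not compile.
  def render(xs: List[Int]): String = xs.mkString(",")
  def render(xs: List[String])(implicit d: DummyImplicit): String = xs.mkString("|")
}

// @implicitNotFound replaces the generic "implicit not found" error
// with a custom message when no JsonEncoder[A] is in scope.
@implicitNotFound("No JsonEncoder found for ${A}; you may need to import your codecs.")
trait JsonEncoder[A] {
  def encode(a: A): String
}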
I would argue the @implicitNotFound annotation is redundant. The Rust compiler can give suggestions for missing imports (candidate implicit values) by itself, by scanning the classpath. The Scala compiler could do that too. I’ve raised that issue in the implicits topic (“why implicits in Scala aren’t a runaway success, but people love Rust with typeclasses”).
Untrue. I use it frequently to provide custom error messages. It’s not just a better suggestion for missing imports. FWIW, I currently use macros to customize the error annotation.
There is a good example about postconditions in “implicit function types”.
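For context, here is a minimal sketch of that postconditions example in current Scala 3 syntax (the published version may differ in details); it is where the ensuring and result used below come from:

object PostConditions {
  opaque type WrappedResult[T] = T

  // Retrieve the value under test from the context.
  def result[T](using r: WrappedResult[T]): T = r

  // The predicate is a context function, so it can simply mention `result`.
  extension [T](x: T)
    def ensuring(condition: WrappedResult[T] ?=> Boolean): T = {
      assert(condition(using x))
      x
    }
}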
I like this pattern.
But in my practice I sometimes need to use implicit conversions and extensions for such cases.
So the code:
import PostConditions.{ensuring, result}
val s = List(1, 2, 3).sum.ensuring(result == 6)
will be converted into something like:
{ // braces are needed if we do not want isDistinctFrom to be visible outside the expression
  import PostConditions.{given, _}
  val s = List(1, 2, 3).sum.ensuring(6.isDistinctFrom)
}
It is not very convenient, and it takes time to get used to. So it seems that a preferable construction would be something like:
val s = new Postcondition(List(1, 2, 3).sum) {
  ensuring(6.isDistinctFrom)
}
I am not sure that this is a good way to reuse a scope. But it still seems to be the least error-prone and most succinct way in such a case.
It seems that, in general, there is still not enough context reuse in such patterns.
Unfortunately the new Conversions don’t have parity with existing implicit defs yet. There are at least three differences (the two shapes being compared are sketched after this list):
Conversions do not support path-dependency currently - [1]
Combining them with macros (a.k.a. inlines) is very odd now because they’re “values”. They can be upcast, and they can lose their “inline” status through upcasting. Because of that, the compiler will emit an error on a straightforward definition of a macro Conversion; you’ll have to define two traits and override a runtime method with an inline method. Example: [2]
Given and conversion names cannot be overloaded, because they are not methods – even when they take parameters [3]. This gets in the way of the implicit punning technique – replacing a typeclass’s companion with a slew of implicit defs to ensure that whenever a typeclass is imported, its syntax is ALWAYS available with it. Examples: [usage][implementation with overloading]. Dotty’s extension methods, and the fact that they’re available whenever an implicit is in lexical scope (which happens in strictly more cases than when a type name is imported), make this trick unnecessary in most cases, except when extension methods are insufficient, as in @morgen-peschke’s example.
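For readers following along, a small sketch, in current Scala 3 syntax, of the two shapes being compared above (the conversion itself is a made-up example):

import scala.language.implicitConversions

// Old style: an implicit method.
implicit def intToStr(i: Int): String = i.toString

// New style: an ordinary value of the Conversion class, which is why the
// limitations listed above (path-dependency, overloading, inline behaviour) apply.
given intToStrConv: Conversion[Int, String] = _.toString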
Honestly I don’t find the syntax great. given doesn’t seem to give us anything that implicit doesn’t: if we find-and-replace all the givens with implicits, we’d get all the benefits and much less disruption. The usage of :, and how strongly it binds left and right, also goes against what someone programming in Scala would expect.
Some more specific issues:
This syntax is pretty confusing: it looks like some kind of refinement type, but it isn’t:
given intOrd: Ord[Int] {
  def compare(x: Int, y: Int) =
    if (x < y) -1 else if (x > y) +1 else 0
}
If Scala didn’t have refinement types maybe this could be ok, but as is I find this extremely hard to parse. And the precedence is all wrong: normally { has higher precedence than :, but here it’s reversed.
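For comparison, this is what an actual refinement type looks like in a value definition; Ord and the extra member are made up for illustration:

trait Ord[T] { def compare(x: T, y: T): Int }

// Here the braces are part of the *type*: intOrdNamed has type Ord[Int]
// refined with an extra member.
val intOrdNamed: Ord[Int] { def name: String } = new Ord[Int] {
  def compare(x: Int, y: Int): Int = Integer.compare(x, y)
  def name: String = "intOrdNamed"
}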
I also think the anonymous given instances look much worse than what we currently have with implicits:
given Ord[Int] { ... }
given [T](given Ord[T]): Ord[List[T]] { ... }
Neither of these looks particularly good. The top one has a type expression in a place that looks like no other language construct in existence, while the bottom one looks like keyword soup, with two givens that do not match any normal English usage of the word.
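For comparison, a sketch of the same two instances with today’s implicits, which require names (a minimal Ord trait is assumed):

trait Ord[T] { def compare(x: T, y: T): Int }

object OrdInstances {
  implicit val intOrd: Ord[Int] = new Ord[Int] {
    def compare(x: Int, y: Int): Int = Integer.compare(x, y)
  }

  implicit def listOrd[T](implicit ord: Ord[T]): Ord[List[T]] = new Ord[List[T]] {
    // Lexicographic comparison, falling back to length.
    def compare(xs: List[T], ys: List[T]): Int =
      xs.zip(ys).iterator
        .map { case (a, b) => ord.compare(a, b) }
        .find(_ != 0)
        .getOrElse(Integer.compare(xs.length, ys.length))
  }
}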
Alias givens honestly would look fine as implicit:
given global: ExecutionContext = new ForkJoinPool()
implicit global: ExecutionContext = new ForkJoinPool()
In fact, that’s almost the syntax now! We just need to make the def/val/lazy val optional, and we’re done.
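For reference, the closest form that compiles today keeps the val (and wraps the pool, since ForkJoinPool itself is not an ExecutionContext):

import java.util.concurrent.ForkJoinPool
import scala.concurrent.ExecutionContext

// The current syntax, with the explicit val that the proposal would make optional.
implicit val global: ExecutionContext = ExecutionContext.fromExecutor(new ForkJoinPool())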
I am personally not a huge fan of the extension method syntax. The precedence between given, :, extension and { seems all wrong in the given examples:
given stringOps: extension (xs: Seq[String]) {
  def longestStrings: Seq[String] = {
    val maxLength = xs.map(_.length).max
    xs.filter(_.length == maxLength)
  }
}
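For comparison, the existing Scala 2 way to express the same extension, sketched with an implicit class (the wrapper name is made up):

object SeqStringSyntax {
  implicit class StringOps(val xs: Seq[String]) extends AnyVal {
    def longestStrings: Seq[String] = {
      val maxLength = xs.map(_.length).max
      xs.filter(_.length == maxLength)
    }
  }
}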
Overall, the usage of : in the given syntax doesn’t work out. It looks too much like something we’re already familiar with, and it comes with the baggage of the precedence we’re already used to, but it is parsed in a totally different way. Using a new keyword would be far better than overloading : in this way.
Can you outline (again, if I missed it) why extend is too common but given is rare enough? What’s the threshold (say, in the share of Scala files using a particular identifier)? Was this measured on some corpus?
I find extend or extension for making extension methods far more straightforward, and important enough to merit a keyword IMO. Reusing keywords was part of what made implicit confusing (conversions vs the static dependency injections).
Both extend and given are common, given maybe more than extend. So both are a problem, as would be any other common name we take as a keyword. That said, the disruption is cumulative. So taking away two common words is roughly twice as bad as taking away one.
I think there is a misunderstanding about precedence. When I write
given f: TC { ... }
the precedence of : is (as always) less than the precedence of {, not greater as you state. The crucial point to convey is that f: is a label for the whole definition that follows. It’s not that f has just type TC but that f stands for the whole anonymous class TC{...} including any type refinements it might contain. So the relationship with refinement types is fully intentional.
That’s also a problem with the alias syntax proposed by @smarter and @julienrf. If I were to write
given f: TC = new TC { ... }
then f would not get any type refinements in the right hand side.
Indeed, the fact that we can no longer have abstract given definitions is an important limitation to me. I guess the problem is that it wouldn’t play well with the given instance definition syntax. Consider for example the following definition:
given foo: Foo
Is it an abstract definition, or an instance definition (whose members are missing and should cause a compilation error)?
We wouldn’t have this limitation if we had to put an = between the lhs and the rhs of a given definition:
given foo: Foo // abstract given definition
given foo: Foo = ... // concrete given definition that requires a right-hand side
Also, one comment about the fact that we can define anonymous instances: we probably judge this capability through the eyes of programmers who are used to always having to write names for implicit instances, and we probably see an improvement here because we’d have fewer things to write, but we lack the experience with this feature to really estimate its cost. For example, I often find it useful to be able to refer to an implicit value by its term identifier. Typically, when an implicit value is not found and I need to understand why, the ability to explicitly refer to one specific implicit value lets me see what the problem would be if that value were picked. (I’m not sure I am clear…!)
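A concrete sketch of that debugging technique, with made-up names:

trait Encoder[A] { def encode(a: A): String }

object Encoders {
  implicit val intEncoder: Encoder[Int] = _.toString
  implicit def listEncoder[A](implicit a: Encoder[A]): Encoder[List[A]] =
    xs => xs.map(a.encode).mkString("[", ",", "]")
}

import Encoders._

// If implicitly[Encoder[List[String]]] fails with a generic "not found",
// referring to a candidate by name pinpoints the actual gap:
//   listEncoder[String]   // error: no implicit Encoder[String] in scope
// With anonymous instances there is no name to reach for.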
Error reporting might worsen as well: how do we refer to an anonymous given definition in an error message? Here is an example. The nice thing with the “named” version is that the code produced by the error message is actual Scala code that can be copy-pasted.
I am not sure I follow. Why would we want to have a type refinement in the rhs? If we want to define a given instance for a type refinement of TC, we would write the following, right?
given f: TC { type A = Int } = new TC { type A = Int; ... }
That said, the disruption is cumulative. So taking away two common words is roughly twice as bad as taking away one.
But the motivation clearly says we want to move from mechanism to intent; I believe that’s only possible if we also distinguish linguistically between the different forms. Otherwise the critique stated in various posts holds: we’re just replacing implicit everywhere with given everywhere.
Could one not run a crawler across, say, the community build, and see what the actual collisions would be?
I know I can invoke it as 33 absDif 44, but how would the function call syntax work here – is it absDif(33, 44) or absDif(33)(44), or is it simply not defined? From my use cases, putting them into one parameter list would probably be the most useful, e.g. atan(dy, dx), randomRange(lo, hi), etc.
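A sketch of the kind of definition the question is about, written in current extension-method syntax (absDif is a made-up example and does not answer the function-call question above):

extension (a: Int) {
  def absDif(b: Int): Int = (a - b).abs
}

val d1 = 33.absDif(44)   // ordinary method selection
val d2 = 33 absDif 44    // the infix form mentioned above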
Aside from the glaring issues with syntax (the given syntax in general is completely out of place with respect to the rest of the syntax in Scala, as others have commented), I would actually like to see some real-world, non-trivial examples (i.e. something equivalent to cats) of the usages of given, because
This seems like a very serious limitation. I definitely know that I wouldn’t be able to use given, because there is a big difference between something being initialized as a val, a lazy val, or a def. (It’s also completely out of place in the Scala language, as has been commented elsewhere: everywhere else in Scala, when a variable is initialized, either implicitly or explicitly, you can control how it’s initialized.)
I honestly get the pervasive feeling that this is being shoved into the language at the last minute without properly addressing these considerations or seeing whether it will hold up in the real world. Typically such features would have a proper RFC, or be hidden behind a flag or a plugin in other languages, to see whether the feature actually justifies itself – and that can only be seen when it’s used in non-trivial ways.