This is really helpful, thanks!
There is one point that I think deserves additional clarification: does term inference consist of an embedded mini-language within Scala which is intentionally kept as distinct as practical from “normal” Scala constructs, or is the goal for the feature to feel seamless with the rest of Scala?
There are different advantages either way. The different look of embedded mini-languages calls attention to them, which is important if attention is needed. For instance, for-comprehensions have a set of syntactic forms (`for`, `yield`, `<-`, destructuring without `match`) that just don’t look like anything else in the language. But they work quite differently too, so that’s perhaps an advantage. In contrast, implicit classes look and act just like classes, so everything you know about making private defs etc. to hide implementation details works equally well there as anywhere else; you can reuse all your knowledge, and that’s also perhaps an advantage.
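For example (names invented for the sketch), a private helper inside an implicit class is hidden exactly as it would be in any other class:

```scala
object RichSyntax {
  // An implicit class behaves like an ordinary class: the private
  // rendering helper is an implementation detail, invisible to callers.
  implicit class TimesOps(n: Int) {
    def doubledString: String = render(n * 2)
    private def render(x: Int): String = s"<$x>"
  }
}

import RichSyntax._
// 3.doubledString uses the hidden helper and yields "<6>"
```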
Because I don’t know the answer to this question (and don’t have sufficient familiarity to have a good basis with which to judge), it’s a bit hard to evaluate the proposal in total.
Intuitively, I am most convinced by the feature changes to term construction (the existing val-and-def-based one is weird, with namespace collisions having important yet undetectable consequences), but least convinced by the syntactic changes. Still, here seems as good a place as any to address the whole set of features.
Implied instances
The name is questionable. They are inferred, they are given, but they are implied by…what? Some are implied by other terms, but if the feature as a whole is “term inference”, then if we must have a new keyword it seems like `infer` or `inferred` is preferable. Having extra synonyms to learn for the same kind of thing is not a virtue (though having separate names for separate things is).
Then we have novel syntax that gives a third way (beyond `val` and `object`) to create instances. Having two is bad enough (cf. the perennial question “should I use a lazy val or an object?”, followed by experts giving a detailed answer that often ends up approximately as a shrug). I think if we’re going to do this, it needs to be better justified.
The main change here is that, previously, implicit vals were just vals: you could make them lazy, put them in a list, use them explicitly. Now, presumably, you can’t do any of those things. Maybe there is a good reason for this, but it’s not spelled out. Likewise, implicit defs were just defs: you could curry them, close over them, call them explicitly. Again, presumably you now can’t.
Those are pretty big differences. If that actually is the difference, then maybe it really does warrant the separate syntax.
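For reference, the flexibility being given up looks like this in current Scala; the names here are invented for the sketch:

```scala
// An implicit val is just a val: it can be lazy...
implicit lazy val defaultInt: Int = 42

// ...it can sit in a list, like any other term...
val pool: List[Int] = List(defaultInt, 0)

// ...and it can be used explicitly just as well as implicitly.
def needsInt(implicit i: Int): Int = i
val viaInference: Int = needsInt     // filled in by the compiler
val viaExplicit: Int = needsInt(7)   // passed explicitly by hand
```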
However, there is another enormous problem with the existing synthesis of terms that doesn’t seem to be addressed here at all: specification of priority. Given the gymnastics currently required (mostly with extension methods, admittedly) to make sure the right thing is inferred, resorting to tricks like fake inheritance hierarchies as an immensely clumsy way to shift priority, a proposal that doesn’t clearly address this is, in my opinion, missing the opportunity to solve one of the biggest pain points of term construction.
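The trick in question, sketched with invented names: implicits defined in a derived object outrank those inherited from a parent trait, so priority is encoded as a little inheritance hierarchy.

```scala
// Low-priority fallback lives in a parent trait...
trait LowPriority {
  implicit def fallback: String = "generic"
}

// ...and the preferred implicit, defined in the derived object,
// wins resolution by the "defined in a subclass" specificity rule.
object Prio extends LowPriority {
  implicit def preferred: String = "specific"
}

import Prio._
def pick(implicit s: String): String = s
// pick resolves to "specific", not "generic"
```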
I don’t have a good solution to these problems. I do think that using `infer` in place of `implied` would read better.
Inferable Parameters
The same comment about syntax applies, but considering the ongoing problems with manually specifying implicit parameters, I tentatively agree that the novel syntax is worth it.
However, there’s one really ugly part of it, despite it being very pleasing from a no-irregularities standpoint:
```scala
def foo given (i: Int) (c: Char) given (b: Boolean): Double = ???
```
I’ve written it all out with spaces, but visually the closest binding is `(i: Int) (c: Char)`; the second-closest binding is `(b: Boolean): Double`; and the instances of `given` bind it all together. But that’s profoundly unlike how it is meant to be interpreted.
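For contrast, current Scala sidesteps the question by confining implicit parameters to a single trailing list, which is exactly the irregularity the proposal is trying to remove. A sketch with invented names:

```scala
// The two separate "given" sections above must be merged into one
// trailing implicit list in current Scala.
def foo(c: Char)(implicit i: Int, b: Boolean): Double =
  if (b) (c + i).toDouble else 0.0

implicit val shift: Int = 1
implicit val enabled: Boolean = true
// foo('a') picks up both implicits from scope: ('a' + 1).toDouble == 98.0
```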
Because of improvements to argument syntax, there is no longer ambiguity about `apply` (as far as I’m aware). That frees up `apply` to be a soft keyword used to clarify precedence:
```scala
def foo given(i: Int) apply(c: Char) given (b: Boolean): Double = ???
```
The `(b: Boolean): Double` isn’t so bad, but also isn’t fixed.
Alternatively, because Dotty type inference no longer works one parameter block at a time, there’s no particular reason that `given` has to be external. So we can move it inside every time, and allow a syntactic variant where the terminal one, if unnamed, can go outside:
```scala
def foo(given i: Int)(c: Char)(given b: Boolean): Double = ???
```
Hey, it looks like Scala, and intuitive precedence works!
```scala
def foo(c: Char) given Boolean : Double = ???
```
Still kinda suggests `Boolean : Double`, but oh well.
As far as calling goes, I prefer `given`/`give` or `implicit`/`explicit` or some other keyword-based approach.
Also, there is still the “how do you infer the right term in complex cases” issue that AFAICT isn’t solved here either.
Implied Imports
The name is incredibly misleading. An “implied import” sounds like something that you ought to have to import but actually don’t; that is, it makes things less explicit. The feature is exactly the opposite: it forces you to be more explicit about where you’re getting your term-inference goodies from.
Also, right now there are various lookups in companion objects that save you from having to import terms at all; everything just works out. I’m not sure how this is intended to port over.
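For example, today an `Ordering` placed in a companion object is found with no import at all at the use site (names invented for the sketch):

```scala
case class Box(value: Int)

object Box {
  // Lives in the companion, so it is in the implicit scope of Box
  // everywhere, without any import.
  implicit val byValue: Ordering[Box] = Ordering.by[Box, Int](_.value)
}

// No import of Box.byValue needed here:
val sortedBoxes = List(Box(2), Box(1)).sorted
```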
I think it’s perhaps an advantage, but the naming needs to be fixed somehow.
`import scala.collection.converters._ with inference` is very clear.
Also, I am unconvinced this will actually solve anything. People with IDEs will just plaster more imports at the top of their file and still nobody will know where anything is coming from. Furthermore, people will need to pick out the terms they need and they’ll still collide in an unmanageable mess when importing from different sources.
The solution here seems to me to be mostly tooling, not language constructs. It would be just as reasonable to allow turning the imports off:
```scala
import scala.collection.converters.{implied => _, _}
```
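That rename-to-underscore form already works for hiding ordinary implicits today; a sketch with invented names:

```scala
object Imps {
  implicit val defaultN: Int = 3
  val other: String = "visible"
}

// Hide the implicit, keep everything else from the wildcard.
import Imps.{defaultN => _, _}
// implicitly[Int] would now fail to compile here; `other` remains usable.
```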
Implicit Conversions
This section doesn’t explain how to convert, say, `Vector[A]` to `ParallelVector[A]`. If you can’t, I think the feature is worthless: just remove it and make people call an explicit extension method to convert. If you can express generic types, it’s not like any function I know of in Scala 2, and while the feature is exciting, I can’t reason about it sensibly.
Note that Rust has the same challenge with its `Into` and `From` traits, but Rust allows the implementation to be generic, so that’s okay: you just
```rust
impl<A> Into<ParallelVector<A>> for Vector<A> { ... }
```
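For comparison, the generic case is expressible in current Scala as a plain `implicit def` with a type parameter; `ParallelVector` here is invented for the sketch:

```scala
import scala.language.implicitConversions

// Stand-in for a real parallel collection, purely for illustration.
case class ParallelVector[A](underlying: Vector[A])

// A generic implicit conversion today is just a polymorphic def.
implicit def vectorToParallel[A](v: Vector[A]): ParallelVector[A] =
  ParallelVector(v)

// Applied silently when the expected type calls for it:
val pv: ParallelVector[Int] = Vector(1, 2, 3)
```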
Because the reworking of implicits is holistic and consists of several parts, I think we need to consider the parts together. Each one alone seems less appealing than they do in conjunction. Taken together, I think there’s a good deal of potential, but I am not sold on the syntax changes, and I think we’re missing some key features too: discoverability of which terms are and aren’t being inferred, and selection of terms to infer when conflicts arise.