Proposal To Revise Implicit Parameters

Excluding the implicit for now, since it’s newly proposed syntax, in Dotty both of these things work and behave the same.

2 Likes

There is another underlying issue with this proposal (in case it wasn’t completely clear when communicating earlier), which is that it’s not solving an underlying issue with implicits. Rather, it’s creating a new pattern/keyword for something that is currently already possible, and one that only works in some cases and not others (which means we are still stuck in the cases it doesn’t cover). Let me illustrate. Let’s assume that we have the following code:

class UserClient(implicit executionContext: ExecutionContext) {
    def getUser(id: String)(implicit correlationId: CorrelationId): Future[User] = ???
    def query(offset: Int = 0, limit: Int = 100)(implicit correlationId: CorrelationId): Future[List[User]] = ???
}

This kind of code is very typical in backend-style Scala code. Now in this case, the ExecutionContext is the “environment” that this SIP is trying to address (let’s ignore for now the fact that you can also pass the ExecutionContext in the methods). So yes, we now have new syntax and new keywords to work with the fact that we can pass the ExecutionContext in a different way.

However we still have the problem with CorrelationId. For those wondering what it is, you can read https://blog.rapid7.com/2016/12/23/the-value-of-correlation-ids/, but the tl;dr is: a CorrelationId is just a unique ID that a webserver gets when a request is made against it, and it’s expected to pass this ID along everywhere to help in diagnosing/tracing issues in distributed microservices; hence the UserClient needs to pass this CorrelationId along in all requests it makes.

Importantly this is not an environment/context: the value of this variable is going to be different every time getUser/query is called (unlike ExecutionContext, which really is a context that often has the same value for the lifetime of an application). Using implicit is still the right abstraction to use, because it’s very rare to ever manually provide a CorrelationId (this is typically only done in tests), but treating the CorrelationId as an environment is just wrong.
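To make the per-call nature concrete, here is a minimal runnable sketch (Scala 2 syntax; Handler, handleRequest and fetchUser are illustrative names, not from any real library): each incoming request binds its own CorrelationId, and downstream calls pick it up implicitly.

```scala
case class CorrelationId(value: String)

object Handler {
  // Each incoming request binds its own CorrelationId; every downstream
  // call in the same scope picks it up implicitly.
  def handleRequest(requestId: String): String = {
    implicit val cid: CorrelationId = CorrelationId(requestId)
    fetchUser("alice") // cid is threaded through implicitly
  }

  def fetchUser(name: String)(implicit cid: CorrelationId): String =
    s"user=$name cid=${cid.value}"
}
```

Unlike an ExecutionContext fixed at construction time, a fresh CorrelationId flows through each call.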

So what if we somehow need to change getUser/query with respect to implicits/default parameters/overrides/currying (and still keep the CorrelationId implicit)? Well, this SIP may solve this, however it’s still inconsistent with the ExecutionContext (hence why I call it a scapegoat), and we still have the same issues that we are currently stuck with in Scala 2.

Hence the fundamental issue with the SIP: we are “solving” (solving is in quotation marks because this solution is already possible) a subset of issues by creating an entirely new syntax (which already raises question marks, since this issue, as well as others, should be treated more like a bug report than a new feature) instead of fixing the QoL issues now.

I would rather solve the underlying issues with mixing default parameters/overrides/implicits rather than just avoid this problem and say that everything is an “environment” or “context” and give a completely new syntax for it.

This SIP appears to work with CorrelationId above, but if that is the case then it’s completely unclear and inconsistent how given works, since the SIP doesn’t work with constructors; so we have different syntax for the same thing in different places. It’s not at all clear (either in the definition or in how it’s used) what level of scoping this definition applies at.

With current implicits, it’s really clear. Having the implicit ExecutionContext in the class constructor parameter list means it will take the implicit ExecutionContext when you instantiate UserClient and it will then persist there, whereas having an implicit CorrelationId in a method means that every call of the method requires its own instance of CorrelationId. Note that if you remove all of the implicit keywords in my example, the behaviour is exactly the same; ergo nothing has changed apart from us having to supply the parameters explicitly (that’s what gives it the clarity).

With this SIP, it’s very unclear what the intent is; not everything is an environment. Saying that the implicit keyword is “overused” is like saying that the val keyword is overused; on its own it’s not a justification of an argument.
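A small runnable sketch of that scoping difference, with Env as a simplified stand-in for ExecutionContext (all names here are illustrative): the constructor implicit is bound once at `new`, while the method implicit is resolved per call.

```scala
case class Env(label: String)            // stand-in for ExecutionContext
case class CorrelationId(value: String)

class UserClient(service: String)(implicit env: Env) {   // env bound once, at `new`
  def getUser(id: String)(implicit cid: CorrelationId): String =
    s"$service/${env.label}/$id/${cid.value}"            // env persists; cid is per call
}

// Explicit supply, as one would do in a test:
val client = new UserClient("users")(Env("test-env"))    // constructor-scope implicit
val a = client.getUser("u1")(CorrelationId("c1"))        // method-scope implicit
val b = client.getUser("u1")(CorrelationId("c2"))        // same client, new cid per call
```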

EDIT: Yes, the whole definition of environment is a bit loose; at the end of the day we are just dealing with currying/application of parameters. The point is that we are introducing extra inconsistent rules to solve one subset of issues.

1 Like

Apparently it works in Dotty already? I didn’t know, but if so, then great.

Second, as Martin points out, (implicit ctx: Context) gets old after a while. It’s clunky and effortful.

Then let’s tweak the syntax to allow anonymous implicits! Writing , implicit _: Context) or , implicit Context) are both possible, and would make it just as concise, while also benefiting the existing implicits-as-a-curried-parameter-list pattern and without overhauling the whole surface syntax.

def f(x: T, implicit Context) // non-curried
def g(x: T)(implicit Context) // curried

Since we’re no longer using the = syntax, let’s call it implicits-in-mixed-param-list rather than implicits-as-defaults

So those three problems are now solved. What else is there? If we claim that implicits-in-mixed-param-list “creates a whole new language” and results in tons of problems, let’s work through them!

So far we have:

  • Eta expansion needs to work slightly differently for implicits-in-mixed-param-list
def f(x: Int, implicit y: Int): Int
f _ // (x: Int) => f(x, implicitly[Int])
  • We’d need to preserve current Scala’s behavior that an entirely implicit parameter list can be elided
def f(x: Int)(implicit y: Int): Int
f(1) // f(1)(implicitly[Int])
  • We’d need to use Dotty’s type inference algorithm for right-to-left dependent types

  • We’d want to not use the b: B = implicit syntax, and instead use implicit b: B syntax:

def f(x: T, implicit ctx: Context)
  • We’d want some shorthand for providing anonymous implicits
def f(x: T, implicit Context)

Some are already fixed in Dotty, some require us to tweak the syntax in a superficial way, and some are minor special cases that aren’t any more weird than the current special cases around implicits.
None of these are “changing the entire language”.
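For reference, the elision behavior in the second bullet is already current Scala; a minimal runnable sketch:

```scala
// Current Scala 2: an entirely implicit parameter list can be elided,
// or passed explicitly as a whole.
def f(x: Int)(implicit y: Int): Int = x + y

implicit val defaultY: Int = 10

val elided   = f(1)     // compiler supplies defaultY, i.e. f(1)(10)
val explicit = f(1)(32) // whole implicit list passed by hand
```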

In exchange, we make passing implicits just as easy as passing in any other parameter:

def f(x: T, implicit ctx: Context)
f(1)
f(1, ctx = new Context())

// def read[T](input: String, implicit reader: Reader[T]): T
upickle.default.read[Int]("1")
upickle.default.read[Int]("one", reader = IntWorkReader)
upickle.default.read("one", reader = IntWorkReader) // type inference

// def write[T](input: T, implicit writer: Writer[T]): String
upickle.default.write[Int](1)
upickle.default.write[Int](1, writer = IntWorkWriter)
upickle.default.write(1, writer = IntWorkWriter) // type inference

// def call(implicit cwd: os.Path)
os.proc("ls").call()
os.proc("ls").call(cwd = os.pwd / "workingdir")

// def runAsync[T](f: => T, implicit ec: ExecutionContext)
runAsync(println("hello World"))
runAsync(println("hello World"), ec = WorkerPool)

Whether the implicit use site is typeclass-based, drives type inference (e.g. read above), is driven by type inference (e.g. write), or is a simple context value (f, call, runAsync), a user would need to learn zero special syntax or concepts to manually pass in an implicit parameter. They would do the obvious thing, which would just work.
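For contrast, here is what manual supply looks like in current Scala 2, where the implicit argument goes positionally in its own parameter list (Reader, intReader and wordReader are hypothetical stand-ins, not the real uPickle API):

```scala
// A hypothetical typeclass: there is no `name =` syntax for the implicit;
// you pass the entire implicit parameter list positionally.
trait Reader[T] { def read(s: String): T }

implicit val intReader: Reader[Int] =
  new Reader[Int] { def read(s: String) = s.toInt }

val wordReader: Reader[Int] =
  new Reader[Int] { def read(s: String) = if (s == "one") 1 else 0 }

def read[T](input: String)(implicit r: Reader[T]): T = r.read(input)

val inferred = read[Int]("1")               // implicit resolved from scope
val manual   = read[Int]("one")(wordReader) // whole implicit list passed explicitly
```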

6 Likes

Actually, there is a very easy solution to the migration problem per se:

  1. In 2.14, allow, but do not require, the new syntax (whatever that syntax may be) for explicit application of implicit arguments
  2. Possibly even warn when the old syntax is used in a way that arguments end up being passed to implicit parameters
  3. In Scala 3.0, disallow explicitly passing implicit arguments with the old syntax
  4. In Scala 3.1, or 3.0 with a -Xsource:3.1, give the new meaning to the old syntax

This is not any longer than other migration periods we’ve been talking about, such as _ for type lambdas. Moreover, it has two major advantages:

  • Definition sites need not be rewritten, at any point
  • No need for libraries to evolve in lock step with their users; the users are the sole responsible for migrating their own code, independently of the choice of the library
2 Likes
  3. In Scala 3.0, disallow explicitly passing implicit arguments with the old syntax

That’s where it breaks down. We have to be able to cross compile a large subset of the language, including the compiler itself, with either 2.13 or 3.0. We can do this now, and make crucial use of it. It’s currently not even clear whether 2.14 will be out before 3.0, so we can’t rely on it for migration. Even if it is out before 3.0, we would have to give people time to change their code, so we’d need a full release cycle between 2.14 and 3.0. This would put 2.14 in a timeframe of maybe 2 years from now and 3.0 another 2 years later. We will not wait that long.

We could introduce the new syntax in 2.13.x; it is a backward and forward binary compatible change and backward source compatible change, which we have precedent for.

3 Likes

We could introduce the new syntax in 2.13.x; it is a backward and forward binary compatible change and backward source compatible change, which we have precedent for.

I’d be in favor of that, if it were possible, since it would also solve the problem of what to do with context bounds (which is the one aspect where we kept the old definition syntax, and which consequently causes migration problems now).

In this thread we discuss revised implicit parameters, which is however an integral part of a larger design to come up with a better system for context abstraction. I realize that the larger design was lacking some context. The doc pages explain what the new features are but give no motivation for why we would want to make the changes. I have now revived some of the materials of the PRs and added to them to produce an overview page filling that gap.

Let me just add that for me this is the single most important change for Scala. It makes all the difference between a language that is future-proof and one that is stuck in the past. The one person on this thread who has actually used the new system in anger states it is “vastly superior” to what we have now. So, my closing appeal to everyone here: Don’t let what you’re used to influence your judgement about what is the best solution.

5 Likes

I thought Implicit Function Types were supposed to solve this. Are they still on the roadmap? How do they fit into the new design?

I was assuming that 2.14 and 3.0 were going to be sequential. On that basis:

2.13 May 2019
2.14 December 2021
3.0 May 2024
3.1 December 2026

That’s assuming, optimistically, that release intervals can be held at the 2.13 length and don’t continue to lengthen as they have over recent cycles. Personally my preference would be for Scala to move to a yearly release cycle. On that basis the new collections library would have ended up in 2.15, not 2.13.

The plan is to go into feature freeze for 3.0 this summer and to ship about one year later.

Recently we have

2.9 => 9 months
2.10 => 1 year 7 months
2.11 => 1 year 3 months
2.12 => 2 years 6 months
2.13 => 2 years 5 months+

So at least from the outside the current time projections don’t look realistic.

2 Likes

To be fair, the reasons he gave for why it is vastly better all seem to be related to problems which would also be solved by e.g. @smarters’ proposal:

def foo(i: Int)(implicit Ctx): String => Double = ???

foo(42)(implicit new Ctx)("bar")
foo(43)("baz")
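For comparison, the closest you can get in current Scala 2 is the following runnable sketch; note the .apply needed in the elided case, which is precisely the wart under discussion (Ctx and foo are illustrative names):

```scala
class Ctx
implicit val ctx: Ctx = new Ctx

def foo(i: Int)(implicit c: Ctx): String => Double = s => i + s.length

val explicit = foo(42)(new Ctx)("bar") // explicit Ctx, then apply the result
val elided   = foo(43).apply("baz")    // implicit elided; .apply is needed, or
                                       // "baz" would be taken as the Ctx argument
```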

I suspect that everybody in this thread is very much in favor of solving those usability issues with implicits. The question is, how radically do you want to change the syntax? And if you want to change it more radically than is strictly necessary for solving the usability issues, is there enough justification for it?

@drdozer In your experience, would the given for implied syntax still be a huge improvement over regular implicits, if they received those improvements?

To me, this seems self-contradictory. Surely “one person tried it out, in isolation, and liked it” isn’t the benchmark we need to reach to decide upon embarking on the single most important change for Scala!

This thread isn’t just about yes/no on a particular proposal, but has also been about alternatives: the same benefits but less migration cost, or a different approach with potentially better outcomes. “This proposal is better than the status quo” is a strawman: nobody is putting the status quo forward as an alternative!

It’s clear there isn’t agreement that this is the right thing to do. Everyone wants the same thing, and nobody is being unreasonable: the proposal and arguments presented so far are simply not sufficiently convincing, and so people are not convinced. It’s always possible to push these things in as a fait accompli and just merge it, but if we want consensus for such an important change, that just means there’s work to do.

Maybe with more work (explanations, demos, migrated libraries, user-testimonials, …) people may be convinced of this proposal, and we will have consensus. But let us not forget the alternative: that with more work we may realise a consensus that this is in fact the wrong thing to do, and be very grateful to have realised that before it gets baked into the language forever!

7 Likes

Generally, splitting explicit parameters from implicit parameters seems like a clean abstraction to me. I also like the use of different keywords for different concepts rather than co-mingling them all under “implicit”.

I’m not sure that I like the keyword “given” very much, but it is at least short. Possibly “given” could be used in the declaration and “give” at the call site, or perhaps even “using” or “with”. E.g.

f("abc")
(f give global)("abc")
f("abc") give ctx
(f give global)("abc") give ctx

or

(f using global)("abc") using ctx

Another alternative naming strategy could be to use provided/provide. E.g.

def max[T](x: T, y: T) provided (ord: Ord[T]): T = 
  if (ord.compare(x, y) < 1) y else x

max(2, 3) provide IntOrd

maximum(xs) provide (descending provide (ListOrd provide IntOrd))

provide global for Universe { type T = String ... }
provide ctx for Context { ... }

Hence I wonder whether the objections are more to do with the choice of keyword names rather than the structure of the syntax being proposed.

Rob

The proposal lumps three things together:

(1) Lifting some limitations of Scala 2 implicits - can now be any parameter list, not just the last

(2) Changes keyword from implicit to given

(3) Changes the syntax

The proposal does not explain why these three things need to be lumped together. Maybe there is a reason why (1) needs some changes to syntax, but surely not such a dramatic change.

These three changes should be discussed separately:

(1) Sounds good. Why not?

(2) Come on. Those who complain about “too many implicits” don’t really hate the keyword, they hate features that make the code obscure, and this proposal does not make anything clearer.

(3) Looks like a major obfuscation of method calls. Please, please, please no.

give (x) given (x) given (x) give (x) give (x) given (x) given (x) given (x) give (x) given (x) given (x) give (x) give (x) given (x) give (x) given (x) give (x) given (x) given (x) give (x)

I wonder which of this is so important to Martin - (1), (2) or (3)?

Please explain what exactly you mean by “splitting explicit parameters from implicit parameters”.

I don’t care much what keyword we use. I just find it silly to argue that we need to change it because people don’t like “implicits”

Regardless of what the keyword will be, the proposed syntax is counter-intuitive, because it makes a single method call look like a bunch of method calls.

This is really helpful, thanks!

There is one point that I think deserves additional clarification: does term inference consist of an embedded mini-language within Scala which is intentionally kept as distinct as practical from “normal” Scala constructs, or is the goal for the feature to feel seamless with the rest of Scala?

There are different advantages either way. The different look of embedded mini-languages calls attention to them, which is important if attention is needed. For instance, for-comprehensions have a set of syntax (for, yield, <-, destructuring without match) that just don’t look like anything else in the language. But they work quite differently too, so that’s perhaps an advantage. In contrast, implicit classes look and act just like classes, so everything you know about making private defs etc. to hide implementation details works equally well there as anywhere else; you can reuse all your knowledge and that’s also perhaps an advantage.
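As a small runnable illustration of that point (TimesSyntax and IntOps are made-up names): an implicit class can use ordinary class features such as private helper methods, exactly like any other class.

```scala
object TimesSyntax {
  // An implicit class is an ordinary class plus an implicit conversion;
  // private members work as in any other class.
  implicit class IntOps(private val n: Int) {
    private def clamp(x: Int): Int = math.max(x, 0) // ordinary private helper
    def timesTwoClamped: Int = clamp(n) * 2
  }
}

import TimesSyntax._
val r = (-3).timesTwoClamped // extension-method call, resolves to IntOps
```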

Because I don’t know the answer to this question (and don’t have sufficient familiarity to have a good basis with which to judge), it’s a bit hard to evaluate the proposal in total.

Intuitively, I am most convinced by the feature changes to term construction (the existing val-and-def based one is weird, with namespace collisions having important yet undetectable consequences), but least convinced by the syntactic changes. But here seems as good a place as any for addressing the whole set of features.

Implied instances

The name is questionable. They are inferred, they are given, but they are implied by…what? Some are implied by other terms, but if the feature as a whole is “term inference”, then if we must have a new keyword it seems like infer or inferred is preferable. Having extra synonyms to learn for the same kind of thing is not a virtue (though having separate names for separate things is).

Then we have novel syntax that gives a third way (beyond val and object) to create instances. Having two is bad enough (c.f. the perennial question “should I use a lazy val or an object?” followed by experts giving a detailed answer that often ends up approximately as a shrug). I think if we’re going to do this, it needs to be better justified.

The main change here is that previously, implicit vals are just vals. You can make them lazy, you can put them in a list, you can use them explicitly. Now, presumably, you can’t do any of those things. Maybe there is good reason for this, but it’s not spelled out. Likewise, implicit defs are just defs. You can curry them, close over them, use them explicitly. Again, now presumably you can’t.
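A runnable sketch of the current behavior being described, where an implicit val/def is an ordinary val/def that happens to be implicit (the implicit String is for illustration only; it is poor practice in real code):

```scala
// Today, `implicit` is a modifier on an otherwise ordinary definition:
implicit lazy val greeting: String = "hello"  // it can be lazy
def shout(implicit s: String): String = s.toUpperCase

val explicitUse = shout("bye")    // it can be passed explicitly
val implicitUse = shout           // or resolved from scope
val stored      = List(greeting)  // and it is just a value, storable anywhere
```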

Those are pretty big differences. If that actually is the difference, then maybe it really does warrant the separate syntax.

However, there is another enormous problem with existing synthesis of terms that doesn’t seem to be addressed here at all: specification of priority. Given the gymnastics currently required–mostly with extension methods, admittedly–to make sure the right thing is inferred (resorting to tricks like fake inheritance hierarchies as an immensely clumsy way to shift priority), a proposal that doesn’t clearly address this is, in my opinion, missing out on the opportunity to solve one of the biggest pain points with term construction.
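For readers unfamiliar with the trick being referred to, here is a minimal runnable sketch of the priority-via-inheritance pattern in current Scala (Show, Shows and LowPriorityShows are illustrative names): instances defined in a subclass win over those inherited from a parent trait.

```scala
trait Show[A] { def show(a: A): String }

// The "fake inheritance hierarchy": low-priority fallbacks live in a
// parent trait, so definitions in the object below take precedence.
trait LowPriorityShows {
  implicit def anyShow[A]: Show[A] =
    new Show[A] { def show(a: A) = a.toString }
}
object Shows extends LowPriorityShows {
  implicit val intShow: Show[Int] =
    new Show[Int] { def show(a: Int) = s"Int($a)" } // wins over anyShow
}

import Shows._
def render[A](a: A)(implicit s: Show[A]): String = s.show(a)

val hi = render(3)    // picks intShow (defined in the subclass)
val lo = render(true) // falls back to anyShow
```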

I don’t have a good solution to these problems. I do think that using infer in place of implied would read better.

Inferable Parameters

The same comment about syntax applies, but considering the ongoing problems with manually specifying implicit parameters, I tentatively agree that the novel syntax is worth it.

However, there’s one really ugly part of it, despite it being very pleasing from a no-irregularities case:

def foo given (i: Int) (c: Char) given (b: Boolean): Double = ???

I’ve written it all out with spaces, but visually the closest binding is (i: Int) (c: Char); the second closest binding is (b: Boolean): Double; and the instances of given bind it all together.

But that’s profoundly unlike how it is meant to be interpreted.

Because of improvements to argument syntax, there is no longer ambiguity about apply (as far as I’m aware). That frees up apply to be a soft keyword to use to clarify precedence:

def foo given(i: Int) apply(c: Char) given (b: Boolean): Double = ???

The (b: Boolean): Double isn’t so bad, but also isn’t fixed.

Alternatively, because Dotty type inference no longer works a parameter block at a time, there’s no particular reason that given has to be external. So we can move it inside every time, and allow a syntactic variant that the terminal one, if unnamed, can go outside:

def foo(given i: Int)(c: Char)(given b: Boolean): Double = ???

Hey, it looks like Scala, and intuitive precedence works!

def foo(c: Char) given Boolean : Double = ???

Still kinda suggests Boolean : Double, but oh well.

As far as calling goes, I prefer given/give or implicit/explicit or some other keyword-based approach.

Also, there is still the “how do you infer the right term in complex cases” issue that AFAICT isn’t solved here either.

Implied Imports

The name is incredibly misleading. An “implied import” sounds like something that you ought to have to import but actually don’t–that is, it makes things less explicit. The feature is exactly the opposite: it forces you to be more explicit about where you’re getting your term inference goodies from.

Also, right now there are various lookups for companion objects that prevent you from needing to get terms–everything just works out. I’m not sure how this is intended to port over.

I think it’s perhaps an advantage, but the naming needs to be fixed somehow.

import scala.collection.converters._ with inference

is very clear.

Also, I am unconvinced this will actually solve anything. People with IDEs will just plaster more imports at the top of their file and still nobody will know where anything is coming from. Furthermore, people will need to pick out the terms they need and they’ll still collide in an unmanageable mess when importing from different sources.

The solution here seems to me to mostly be tooling, not language constructs. It seems to me just as reasonable to allow turning the imports off:

import scala.collection.converters.{implied => _, _}

Implicit Conversions

This section doesn’t explain how to convert, say, Vector[A] to ParallelVector[A]. If you can’t, I think the feature is worthless. Just remove it and make people call an explicit extension method to convert. If you can express generic types, it’s not like any function I know of in Scala 2, and while the feature is exciting I can’t reason about it sensibly.
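For comparison, current Scala 2 does let a conversion be generic when written as a polymorphic implicit def (a method, not a first-class function value); a runnable sketch, with ParallelVector as a made-up stand-in:

```scala
case class ParallelVector[A](elems: Vector[A]) // hypothetical target type

object Conversions {
  import scala.language.implicitConversions
  // Generic conversion: works for Vector[A] for any A.
  implicit def vectorToPar[A](v: Vector[A]): ParallelVector[A] =
    ParallelVector(v)
}

import Conversions._
val pv: ParallelVector[Int] = Vector(1, 2, 3) // conversion applied by expected type
```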

Note that Rust has the same challenge with Into and From traits, but Rust allows the implementation to be generic, so that’s okay: you just

impl<A> Into<ParallelVector<A>> for Vector<A> { ... }

Because the reworking of implicits is holistic and consists of several parts, I think we need to consider the parts together. Each one alone seems less appealing than they do in conjunction; in conjunction I think there’s a good deal of potential but I am not sold on the syntax changes, and think we’re missing some key features too (discoverability of which terms are/aren’t being inferred and selection of terms to infer when conflicts arise).

2 Likes

To me, this seems self-contradictory. Surely “one person tried it out, in isolation, and liked it” isn’t the benchmark we need to reach to decide upon embarking on the single most important change for Scala!

No, but, “I never tried working with it but it looks strange to me and therefore I am against” is not a valid argument either.

Here’s another testimonial, this time from Raul Raya, who is also the author of Kotlin’s implicits proposal: https://scala.love/episode-47-with-raul/. It starts at 54:11. He states:

  • The new implicits system is Dotty’s most exciting feature
  • The “god-like” nature of the implicit modifier is current Scala’s biggest problem.

I agree we need large scale experience reports before we finalize this. What I will propose is the following: We keep new style implicits by and large as they are implemented in the Dotty codebase. A lot of thought went into them and they have been carefully designed and calibrated. If people have ideas how to improve or fill in some details I am very open to integrate them. But we will not start from zero.

If, a year from now, the consensus is that most people hate the new implicits, we will retire them before going into 3.0 final. We can at that point consider whether we would want to backport some of the improvements to the old implicits. But I bet it won’t come to that.

2 Likes

Thanks for going into details of the proposal!

There is one point that I think deserves additional clarification: does term inference consist of an embedded mini-language within Scala which is intentionally kept as distinct as practical from “normal” Scala constructs, or is the goal for the feature to feel seamless with the rest of Scala?

The intention is to be separate. There is a set of constructs to define values, methods, classes and there is one separate construct to inject such definitions into the implicit context.

Implied instances

The name is questionable.

The name was chosen in the sense that implied t for T means that t designates the instance that is implied for type T. (It is implied by the fact that the definition is written.) So, if T is Ordering[Int], say, and no specific instance is given, we pick the instance that is implied for this type. You could also say “canonical”. We can definitely try to come up with a better name. We already went through, and discarded, a lot of them, including witness, capability, default, impl, instance. We should discuss this (and the other points you raise on implied instances) when the thread for instance definitions is open. I personally think implied works best so far, but I am not overly attached to the term. I briefly considered inferred and agree it might also work, but then discarded it in favor of implied.

Inferable Parameters

However, there’s one really ugly part of it, despite it being very pleasing from a no-irregularities case:

        def foo given (i: Int) (c: Char) given (b: Boolean): Double = ???

Yes, I agree that is a problem, but I don’t worry too much about it, as it is a special advanced case. Most people would put their given parameters at the end, like they do now. And then it works well I think.

As far as calling goes, I prefer given/give or implicit/explicit or some other keyword-based approach.

I don’t agree with that part. We should keep to the principle that parameter definitions and arguments use exactly the same syntax. And given works really well for that. “the result of a function application, given some contextual property” works both colloquially and in the precise mathematical sense. And I like the infix position, since that re-uses intuitions for parsing Scala that people have anyway, and it reinforces the colloquial sense of given as a connective.

Implied Imports

The name is incredibly misleading.

Agreed. The syntax is actually the other way round: import implied, which is better: we import implied instances. We should also use the term “import implied” to describe it, instead of swapping the noun and the adjective, which comes naturally but distorts the meaning.

Implicit Conversions

This section doesn’t explain how to convert, say, Vector[A] to ParallelVector[A] .

It could be:

implied [A] for Conversion[Vector[A], ParallelVector[A]] = 
  v => v.par

Or, writing the apply method out explicitly:

implied [A] for Conversion[Vector[A], ParallelVector[A]] {
  def apply(v: Vector[A]) = v.par
}

I did not see what was hard or special about it, but maybe I am overlooking something.