Proposal To Revise Implicit Parameters

Has it been considered to give up on passing given parameters explicitly?
So you would have to write

implied for Ctx = new Ctx

to override the Ctx passed to foo. It might be slightly inconvenient sometimes but then at least the separation – between regular parameters which are passed in explicitly and given parameters which are inferred – is complete.

There is one thing I still find lacking in the proposal, even if I completely go along with the philosophy behind it. I think I’ve brought it up before but can’t recall getting any response.

implied impliedCtx for Ctx = new Ctx
def bar() given Ctx = ???
def foo() given (givenCtx: Ctx) = bar()

The Ctx passed to bar is givenCtx (which should definitely stay that way!). With implicits it was obvious that an implicit parameter was itself implicit. But now I find it completely non-obvious, because of the disconnect between the different keywords, that a given parameter is also implied.
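For comparison, here is the same propagation written with today's implicits, where the parameter being itself implicit is exactly what makes the forwarding work (a minimal sketch; Ctx is just a placeholder class):

```scala
class Ctx(val name: String)

def bar()(implicit ctx: Ctx): String = ctx.name

// givenCtx is itself implicit inside foo's body,
// so it is passed on to bar automatically
def foo()(implicit givenCtx: Ctx): String = bar()
```

Calling foo()(new Ctx("outer")) yields "outer", showing that bar received foo's parameter without it being mentioned in foo's body.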

I think there’s a more fundamental difference: all these languages (I think?) guarantee some form of “typeclass coherence”, that is, different instances/impls of typeclasses/traits must not overlap. Once you have that, hiding term inference makes sense, since the term will always come from the same place. The same isn’t true in Scala, and it seems to me that bringing over nameless instances but not coherence would be the worst of both worlds: how do I do “find all references” on a nameless implied? And how can the compiler give me a useful error message when two nameless implied instances are ambiguous?


Personally, I am already using this proposal along with the other contextual abstractions in a 5500+ line codebase. I really like the given keyword, I think it makes sense to say it’s some constraint on the current values in scope.

My only complaint – confirmed with a non-Scala programmer – is this: given that the inferable parameters in def fancy given A, B, C = "Fancy!" do not really look like a parameter list (which is the intent), why should they be applied as a whole list when you only want to update a single one?

For example, using only new features in this proposal:

trait A
trait B
trait C

def fancy given A, B, C = "Fancy!"

implied A_ for A
implied B_ for B
implied C_ for C


val newFancy = fancy given (implicitly[A], implicitly[B], new C {})

Now I must summon all the other values to change a single implied argument. However, I don’t think it’s really possible to prove which implied parameter list is intended if you allow partial application and infer the remaining arguments.

This causes me never to use given at the use site; instead I update the implied scope in a previous statement in a block before calling, or otherwise restrict myself to only single-argument implied parameter lists.
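The workaround described above can be sketched with today's implicits (all names are illustrative): shadowing a single implicit in a block before the call, rather than summoning all three values.

```scala
trait A; trait B; trait C

implicit val a: A = new A {}
implicit val b: B = new B {}
implicit val c: C = new C { override def toString = "default C" }

def fancy(implicit a: A, b: B, c: C): String = s"Fancy with $c"

// update the implicit scope in a previous statement in a block,
// instead of passing all three arguments explicitly at the call site
val newFancy = {
  implicit val c: C = new C { override def toString = "custom C" }
  fancy
}
```

The inner c shadows the outer one by name, so only that single argument changes; a and b are still resolved from the outer scope.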

I’m not sure I agree with this. The previously mentioned use case of a CorrelationId does not seem like a constraint, just a parameter. Personally, I find this style to be very common and useful. I wonder, then, if this proposal is focusing on too narrow a use case, which may be resulting in the disagreement.

It’s not in this proposal, but the other abstraction, Context Queries, would probably be most idiomatic for that use case:

type Transactional[O] = given CorrelationId => O

def getUser(id: String): Transactional[Future[User]]

or whichever is the most appropriate name.
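A sketch of how such a context query could be used, in the current Scala 3 spelling of implicit function types (?=> corresponds to the proposal's given CorrelationId => O; User and its fields are made up for illustration):

```scala
case class CorrelationId(value: String)
case class User(id: String, correlation: String)

// current Scala 3 spelling; in the proposal's syntax this was
//   type Transactional[O] = given CorrelationId => O
type Transactional[O] = CorrelationId ?=> O

// the CorrelationId is never declared as a parameter,
// yet it is available in the body via summon
def getUser(id: String): Transactional[User] =
  User(id, summon[CorrelationId].value)

given CorrelationId = CorrelationId("req-42")

val u: User = getUser("alice")
```

At the call site the CorrelationId in scope is threaded through automatically, so transactional code never mentions it explicitly.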

One of Rust’s main strengths is the friendliness of its compiler errors. The Rust compiler very often suggests possible corrections, and the first one frequently works. If I forget some import (use in Rust parlance) needed for a typeclass to work, the Rust compiler often suggests it to me. What will the Scala compiler say here?

object Main {
  implicit class RichInt(value: Int)(implicit name: String) {
    def print(): Unit =
      println(s"$name: $value")
  }

  def main(args: Array[String]): Unit = {
    // implicit val name: String = "bbb"
    5.print()
  }
}

The answer is just:

value print is not a member of Int

The Scala compiler doesn’t suggest any possible solution. Rust would search for candidates, order them by suitability and show e.g. the first 5.

Changing implicit to given won’t change the fact that the Scala compiler doesn’t try to offer possible corrections.

IntelliJ offers an implicits-expansion display to help decipher already-working code that uses implicits. That helps a lot, but it works only when the code is already correct. When trying to fix problems in incorrect code, compiler suggestions are required for a good developer experience.


Are Swift protocol extensions coherent? I have not found anything asserting this. Anyway, I believe coherence is a minor concern at best. And even with coherence, the necessity to somehow identify instances does not go away: you either have to reject two conflicting instances at the point where they are defined or at the point where they are used, that’s all. But you don’t need a user-defined name for that.

I believe we have already made some progress with error messages, but further improvements would definitely help. I believe @olafurpg had some ideas about this. Anyway, any pull requests in that area would be greatly appreciated! But as you write yourself, that issue is orthogonal to the current discussion.

IIUC, in such cases a library user does not pass such an argument into a function.
In many languages this can be implemented via a thread-local variable.
It is useful in any case, but I don’t think such a pattern is a killer feature.

There are killer features of implicit parameters, though. And I agree that in such cases they are more like constraints:

maximum(xs) given descending

It is more natural, at least for people at our company (we use SQL very often):

  select max(xs) over (order by salary)
     from table
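With today's implicits the same call-site override can be sketched roughly like this (maximum and descending are the hypothetical names from the example above):

```scala
// the Ordering acts as the "constraint" on T
def maximum[T](xs: List[T])(implicit ord: Ordering[T]): T = xs.max(ord)

val descending: Ordering[Int] = Ordering[Int].reverse

val byDefault  = maximum(List(1, 3, 2))             // implicit ordering: 3
val overridden = maximum(List(1, 3, 2))(descending) // explicit override: 1
```

The default instance is inferred, while the descending order is supplied explicitly only where the context differs.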

Thread locals aren’t compatible with asynchronous programming in general, and that has bitten me a lot when using Futures with LiftWeb, which uses a lot of thread locals. Personally, I use thread locals only as a last resort to avoid headaches, and I don’t have much positive experience with them.

You also get no compilation errors or IDE support when a thread local is missing or set to a wrong value. It can also be harder to see where a thread local comes from, as it may be set in some very deeply nested method, whereas implicits are passed directly from higher-level methods to lower-level methods.

Version with thread locals:

def highLevel() = {
  // the thread local may be set in some deeply nested method called from here
  ...
}

def lowLevel() = {
  val value = searchDeepSomewhereToGetThreadLocal() // first you need to make sure which thread-local container is the correct one
  ...
}

Version with implicits:

def highLevel() = {
  // you can't push the implicit definition into some deeply hidden method, it has to be in scope here
  implicit val anImplicit: Int = computeValueUsingHeavyMachinery()
  lowLevel()
}

// you don't need to figure out which implicits are available to you, because you have them all in the signature
def lowLevel()(implicit value: Int) = {
  ...
}

You could use a local implied instead, as @Jasper-M suggests:

{ implied for C = new C {}
  val newFancy = fancy
}


Yes, we could do without given in applications. But I have the impression it’s useful functionality to have. The workaround of a local implied is a bit clunky at times.

I do think that given already has the connotation of propagating automatically to callees. If I am allowed to take some property as a given, everything I call is allowed to assume the same property. The situation is really analogous to other languages where there is one construct to require and propagate constraints (in Haskell: … => …) and another to establish base properties (in Haskell: instance).

You are right, my previous statement was a little extreme.
The main idea is that in such a case a library user need not even know about such parameters. It works in the background. So the syntax does not matter (at least for me).

It’s a good observation. But note that CorrelationId could not be just any parameter type. It could not be an alias of Int, say; that would be a terrible thing to do. So it needs to be a special type, and the constraint would be “there is an instance of CorrelationId in scope”. True, sometimes it is more direct to think of these things as parameters, and you can. But thinking of them as constraints gives better guidance. The statement “here is an (implicit x: Int) parameter” looks OK to beginners at first. “There is an instance of Int in scope” is immediately seen as nonsensical. So, better guidance.
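A small sketch of why the special type matters (the definitions here are hypothetical): as a dedicated wrapper type, “there is an instance of CorrelationId in scope” is a meaningful constraint, whereas an alias of Int would let any implicit Int satisfy it.

```scala
// a dedicated wrapper type, deliberately not an alias of Int
final case class CorrelationId(value: Int)

def trace(msg: String)(implicit cid: CorrelationId): String =
  s"[${cid.value}] $msg"

// exactly one CorrelationId instance in scope; an unrelated
// implicit Int elsewhere could never be confused with it
implicit val cid: CorrelationId = CorrelationId(7)

val line = trace("user lookup")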

Fair enough. I do agree that “there is an instance of Int in scope” would be more obviously wrong to a beginner. I might even agree that the given syntax expresses the idea of that constraint more clearly than the previous syntax :smiley:.

A few thoughts from the peanut gallery:

That’s a good question. Does this imply that we should get more hard-assed in the spec about requiring names for implied instances? Weak error messages are arguably an even bigger Achilles’ heel for Scala 2 than implicits are, so this is a consideration. Yes, Dotty has made excellent progress there, but error messages can’t be treated entirely in isolation – the language affects what is available to explain when something goes wrong.

This feels sort of like type ascriptions on implicit vals: it seems like something you could safely leave off, until things go wrong and you realize you really should have been more rigorous about it all along.

Having just done a Dotty overview for my office yesterday, I’ll note that the given keyword was more natural to teach than I had originally expected. I’m a bit more iffy on implied (I generally agree with @Ichoran’s points about word usage), but given works well when talking about an instance or function. So I’m generally in favor of that choice.


This is already possible in Dotty with implicit function types; it’s not dependent on this change.

Not sure what the argument here is, but at the end of the day we are talking about parameters that are either passed explicitly or implicitly; this is literally what we are talking about. The technical definition of an implicit is being able to go from a type T to a value t (hence why implicits roughly correspond to a limited subset of Prolog).
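The “type T to a value t” view can be sketched with a classic typeclass derivation, where each implicit definition acts like a Prolog clause (Show is an illustrative typeclass, not part of the proposal):

```scala
trait Show[T] { def show(t: T): String }

// a fact: Show[Int]
implicit val showInt: Show[Int] = new Show[Int] {
  def show(t: Int): String = t.toString
}

// a conditional rule, like a Prolog clause: Show[List[T]] :- Show[T]
implicit def showList[T](implicit s: Show[T]): Show[List[T]] = new Show[List[T]] {
  def show(ts: List[T]): String = ts.map(s.show).mkString("[", ", ", "]")
}

// asking for Show[List[Int]] makes the compiler derive a value
// of that type by chaining the rules above
def render[T](t: T)(implicit s: Show[T]): String = s.show(t)
```

The compiler's search for a term of the requested type is exactly the resolution step the Prolog analogy refers to.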

The specific type that T happens to have is not really important; the reason why we use CorrelationId instead of just String is the same reason we would make a special type Email to represent emails rather than just using String. It has little to do with the fact that we are dealing with “contexts” or “constraints” and more to do with the fact that it’s considered idiomatic and good Scala practice to do so, especially when you add property-based tests (which are governed by types), structured logging (also governed by types) and even things like validation (i.e. you can use safe constructors to validate that an Email is always valid, which means whenever you see an instance of Email you know it’s correct).
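A minimal sketch of the safe-constructor idea mentioned above (the validation rule is deliberately simplistic):

```scala
// the constructor is private, so an Email can only be obtained
// through the validating factory method: every instance is valid
final class Email private (val value: String) {
  override def toString: String = s"Email($value)"
}

object Email {
  def fromString(s: String): Option[Email] =
    if (s.contains("@")) Some(new Email(s)) else None
}
```

Email.fromString("a@b.com") yields Some(...), while Email.fromString("nope") yields None, so invalid values never become Email instances.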


A Scala expression is a tree of method calls. The easier it is to see the method calls, the easier it is to read the code. This is why method calls are written as compactly as possible in any language that comes to my mind.

Inlining and other compiler or JVM optimizations should not be relevant to understanding the code.

Perhaps you feel that an additional argument list is (almost?) the same as a following unnamed apply method call. In my mind, they are quite different, at least once you want to find what is being called in the source or the API docs, or when you want to understand a stack trace. But let’s for the sake of argument consider an additional argument list and an unnamed apply call as (almost?) the same thing.

In this case, your fundamental unit is basically any named method call together with all directly following unnamed apply calls. I don’t know if there is a name for it, so let’s call it a quasi-call. In that view, a Scala expression is a tree of quasi-calls, and quasi-calls should be written as compactly as possible.

In any case, if you want to represent a tree, your nodes need to be clearly identifiable as tightly delineated units, and not look like loosely connected pieces.

Well, your principles (1), (2) and (3) are quite restrictive. And I don’t see them explained much other than that you thought about this very deeply and that’s why we should trust you and accept that they are not open to discussion.

To be fair, this isn’t Martin operating in a vacuum. We’ve been talking about this subject for literally years now, and the current proposal is the outcome of enormous amounts of debate, some here and lots in the Dotty repo.

While I don’t think there is a consensus yet (and I’m not sure we are going to get one), many folks have been involved in the discussion leading up to this, and it has evolved considerably over the past year…


I don’t think that actually contributes much to why people love them. As a consequence, it tends to make the relevant definition easy to find, but it’s the findability, not the uniqueness, that I think is key. The uniqueness helps it “just work”, and when it doesn’t “just work” it’s easy to find out why not. In Scala 2 this is not so much the case.

In fact, uniqueness is a huge burden on those languages that have it, as some things can very sensibly not always need the same context – for example, sort order. All the solutions with terms uniquely specified by type have impaired usability: you need two (or more) methods, or two or more classes (?!), everything wrapped in newtypes, etc. It’s really an awful mess compared to being able to specify what you want.
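Scala's standard library already shows this: the default instance “just works”, but the call site can still specify an exception (using Ordering as the example context):

```scala
val xs = List(3, 1, 2)

val ascending  = xs.sorted                         // default Ordering[Int]
val descending = xs.sorted(Ordering[Int].reverse)  // an explicit exception to the default
```

With one unique instance per type and no override mechanism, the descending sort would instead need a second method or a newtype wrapper.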

So if we can keep the ability to specify exceptions while still making it easy for things to “just work” most of the time, and easy to diagnose when they don’t, I think we not only catch up to but actually surpass the other languages’ usability in this regard. Some use cases will, admittedly, remain not quite as simple; but we have classes of problems that can be neatly solved in ways that other languages (Haskell and Rust, anyway) can’t match.