Proposal To Revise Implicit Parameters

Okay. Well, it definitely is! I think this decision has prompted a lot of the pushback; familiar constructs feel more welcoming than novel ones, typically.

I figured that this was the logic behind it, but it is kind of awkward because “implied” is both an adjective and a past-tense verb:

“He implied that his solution was better.” (Past tense verb.)
“The implied meaning is not always clear.” (Adjective.)

Because of this conflict, and with no contextual cues in the code to disambiguate, I find it mildly uncomfortable to read implied c for T. Rather than comfortably settling on the adjective form implied c as opposed to maybe explicit c, it seems like a sentence fragment: who implied c for T, and when did they do it? I guess whoever wrote the code did? If it were present tense I could read it as an imperative, a directive to the computer: imply c for T. But it just ends up seeming vaguely awkward.

I expect quite a number of people will have the same discomfort with it that I do, and for the same reason (though it took me a while to figure out what the problem actually was and how to express it).

Can we then please limit it to the case where it works well? I don’t see any compelling use case for early givens since implicit arguments are going away and thus the call chain becomes unambiguous. Just create a temporary class or newtype that wraps the given and proceed from there.

def foo given (i: Int) = GivenI(i)   // wrap the early given

case class GivenI(i: Int) {
  def apply(c: Char) given (b: Boolean): Double = {
    implied for Int = i   // re-expose the wrapped value to implicit search
    ...
  }
}

Yes, it’s a bit annoying to write, and maybe an optimizer will have to chew on it to get it zero-cost, and yes, formally you can now create GivenI(i) instances that act kind of like a partially applied function without actually being one.

But in exchange for removing a really awkward part of the syntax, I think it’s worth it.

(Note: more ambitiously, it could be an anonymous class: def foo given (i: Int) = new { def apply(c: Char) given (b: Boolean) = ... }. Even more ambitiously, inline new could be a directive to make it all resolve into a single method call.)

Also, on the calling side, it makes everything clean:

(foo given 7)('e') given true

actually is much more like what it looks like (i.e. (foo given 7) is an operation that produces an intermediate, GivenI(7), and that intermediate is applied). As it stands now, it is a very weird syntax for function application that is isomorphic to foo(7, 'e', true).

It is not very much better because of the past-tense verb form: “The import implied that we needed more functionality.” This brings to mind: “that we have an import implies (or did, when we wrote it) this module path”. Which is of course nonsense; we are using import as a verb (imperatively) to state that the things implied in foo.bar.baz._ should also be accepted. Neither verb + past-tense-verb + module-path-noun nor verb + adjective + module-path-noun makes much sense.

It’s rather weird, even in the improved order.

Of course it doesn’t always have to make sense in English, but it is somewhat jarring, and it is best to minimize such things when possible.

Oh, right, just like Rust. For some reason I had decided that it was the instance of Conversion that had to be parameterized, rather than the…uh…what do you call that thing? Synthesizer of implied instances? Anyway, oops–of course the type parameter should just be pushed out to the implied.

No issues there, then!

(Except the how-do-we-control-precedence-of-compatible-inferences question, but that is a question throughout.)

That is actually covered by the new implicit resolution rules.

This goes through all possible solutions and variations in gory detail.

That does get kind of gory!

I don’t know if it will be transparent enough as to why various things are chosen, but it does look like it will at least overcome the shortcomings in capability of the existing scheme. So, thumbs up for functionality, but I worry a bit about it exacerbating the impression of implicits being impenetrable.

(E.g. we have a lot of training in using numbers for prioritizing, but much less in the harder problem of inferring priority from a graph whose topology might not even be easily inspected due to distribution across multiple files etc.)

I am sure it can be packaged up and made more accessible. But I believe prioritizing in this way is almost always the wrong thing to do, so I don’t see an interest in making it very convenient.

We will have a way to treat implicits functionally instead. It’s currently called implicit match, and is used like this:

implicit match {
  case ev1: T1 => ...
  case ev2: T2 => ...
  ...
}

That gives you a functional way to try implicits in sequence instead of an indirect prioritization scheme. (implicit match is still provisional but it is clear we will have something that’s equivalent).
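For contrast, here is a minimal sketch (plain Scala 2, with illustrative names such as Show and LowPriorityShow that are not from the proposal) of the indirect, inheritance-based prioritization that implicit match is meant to supersede:

```scala
// Scala 2 sketch: priority encoded indirectly via trait linearization.
trait Show[A] { def show(a: A): String }

trait LowPriorityShow {
  // Fallback: found only if nothing more specific is in scope.
  implicit def showAny[A]: Show[A] = (a: A) => a.toString
}

object Show extends LowPriorityShow {
  // Higher priority: members of the companion beat inherited ones.
  implicit val showInt: Show[Int] = (i: Int) => s"Int($i)"
}

// implicitly[Show[Int]] resolves to showInt; implicitly[Show[Boolean]]
// falls back to showAny. implicit match would instead express this
// ordering directly, as a sequence of cases tried top to bottom.
```

The point of contrast: the inheritance encoding spreads the priority order across a type hierarchy, while implicit match states it in one place.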


That ambiguity is the price we pay to allow currying and omitting of apply, right? I don’t complain about that because I don’t have a better suggestion (if I did I would!).

But the proposed syntax that rips the argument lists apart makes it hard to grasp intuitively that they are still part of the same call, and I don’t see the need. Once you start nesting, no one can read it any more.

m given (n given (a: A)(b: B) given (c: C))(o given(d: D)(e: E) given (f: F)) given (p given (g: G)(h: H) given(i: I))

Can’t we at least have some syntax that keeps the argument lists together (nothing but whitespace between them)? Just put something inside the parentheses instead of outside, like @Rex Kerr’s suggestion, or mine. Since mine with the dots was hard to read, how about sharps instead:

def m(# a: A, b: B #)(c: C, d: D)(# e: E, f: F #): R

which would be equivalent to, if this was legal:

def m(implicit a: A, b: B)(c: C, d: D)(implicit e: E, f: F): R

This can be called like this:

m(a, b)(c, d)(e, f)

m(a, b)(c, d)

m(a, b)(c, d)(##)

m(##)(c, d)

m(##)(c, d)(##)

Anonymous parameters would be simply underscores, just like in pattern matching:

def m(# _ : A, _ : B #)(c: C, d: D)(# _ : E, _ : F #): R

The issue is, with currying and inlining and other optimizations, the number of method calls is not a good measure for anything. Why would you care whether it’s one method or two? The JVM will most likely treat this differently anyway.

It might look strange at first. My bet is, after using it a bit, it will be very natural. I might be wrong, in which case we’ll try to tweak it.


As a last resort I’m OK with that plan, but it’d be so much better if we agreed on things sooner rather than later. E.g. my proposal would benefit from a gradual migration strategy like the one proposed by @sjrd in Post 63 and Post 65; if we wait until shortly before 3.0 to reconsider it, then every milestone of the migration plan will have to be significantly delayed.

Literally nobody has said that. You have made a proposal, and gotten a lot of feedback. Detailed, constructive feedback, both positive and negative, including proposed alternatives which are specced out as much as can be without sending a patch for the compiler. Please don’t dismiss that as “It looks strange to me”, because it isn’t.

You are clearly frustrated getting a lot of the same feedback and concerns, over and over, across multiple threads. Not just from me, but from multiple people. But the reason that same feedback keeps appearing is not because people are stubborn and obstinate: it is because you are not responding to it in a sufficiently understandable, convincing manner to win people over, and so the concerns remain.

By putting up a new proposal, people have the implicit/implied/given expectation (ha!) that you want to hear feedback, concerns, and alternatives. Isn’t that what proposals are for?

Perhaps what we need to do is set expectations for the discussion right:

  • What do you want from the discussion? Do you want only minor tweaks to the current proposal? Are you interested in alternative proposals? Are you interested in meta-feedback, e.g. “this proposal could be more convincing if it had XYZ”?

  • Is there any actionable outcome? Are we looking for a go/no-go consensus? What’s the expected outcome if people are favorable, unfavorable, or favorable to something else? Are we just trying to improve this specific proposal, with no actionable decision point? Or is there no actionable output except the discussion itself, and will work keep churning along regardless?

  • Where do alternatives come in? This thread, or elsewhere? Do alternatives need a detailed, self-contained spec before being worth discussion? Or only alternatives with an implementation in Dotty? Or perhaps we do not have {time,resources,effort} to spare on properly discussing alternatives, and so this proposal is all we’ve got?

  • Since we’ve started bringing in social proof in an ad-hoc manner, should we formalize that into a proper survey? That would save a lot of “this guy here likes this design more” “that guy there likes that design more” boilerplate discussion, and leave that to the end under a more rigorous process everyone can agree on beforehand.

  • What about repeated feedback, i.e. things that have been brought up before? Is that welcome, if the feedback still applies? Or is it unwelcome, because it has already been assimilated? How do we know whether something has been assimilated, rather than forgotten?

I think by settling these fundamentals right, we’d be able to have a much more productive and much less frustrating discussion for everyone.


I don’t actually need to use it to know how natural the given syntax is going to feel, as I have enough prior art with other things.

This is something that doesn’t feel natural now, but which I will very likely learn to feel is natural:

def f(x: Double) given RoundingMode: Double
f(x) given HalfUp

This is not:

def f given RoundingMode(x: Double): Double
(f given HalfUp)(x)

The visual precedence is wrong in the definition, the groupings suggest a syntax tree that doesn’t accord with the underlying construct, and even having to consider that this might possibly occur in the code will negatively impact my experience.

The fact that we have prior art in (c + 2)(foo) having foo be an argument to the + method is just, to me, a condemnation of that part of the current rules. The compiler scolds me:

scala> class C { def +(i: Int)(j: Int) = ??? }
defined class C

scala> val c = new C
c: C = C@536bb027

scala> c + 2
<console>:61: error: missing argument list for method + in class C
Unapplied methods are only converted to functions when a function type is expected.
You can make this conversion explicit by writing `+ _` or `+(_)(_)` instead of `+`.
       c + 2
         ^

No conversion unless a function type is expected–yes, please! I want to know.

In other contexts, the presence of a trailing (x) alone isn’t enough to convert something to a function type, so it shouldn’t be here either. (Try (c + 2).pipe(_(3)) for example.)
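A sketch of the behavior being praised, using the hypothetical class C from the transcript above (Scala 2.13, where scala.util.chaining provides pipe):

```scala
import scala.util.chaining._

class C { def +(i: Int)(j: Int): Int = i + j }
val c = new C

// c + 2                    // error: missing argument list for method +
val f: Int => Int = c + 2   // OK: a function type is expected, so + eta-expands
f(3)                        // applies the second argument list: 2 + 3

// (c + 2).pipe(_(3))       // error again: the receiver position of pipe
//                          // supplies no expected function type,
//                          // so no conversion happens
```

In other words, a trailing application after `.pipe` does not rescue the unapplied method; only an expected function type does, exactly as the compiler message above states.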

The rule that what is inside parentheses is evaluated first is one of the most inviolable in mathematical and programming syntax, and it’s broken here. Yes, if the curried and uncurried forms are treated as equivalent, then it’s the same thing, but having to do the mental juggling between different forms is a burden.


Dotty has indeed made great strides in improving the Scala language, yet regarding this proposal I fear we may be putting the cart before the horse. The ideal language-change trajectory should always be an iteration of “spec->discussion->spec…” and only finally implementation, so we don’t waste time on work only to see it rejected later. Indeed, in many situations it might be easier to present a working feature to play around with and convince people, which is what the Dotty project is all about, but this may also get us emotionally attached to ideas we put a lot of effort into. For me, the “already too much work was done” argument is irrelevant to this (technical/user-experience) discussion. Powers-that-be may decide later (outside of this discussion) that this argument is valid, since we don’t live in an ideal world and there are other concerns (budget, timetable, etc.).

If Scala 3 books and documentation are a concern, can we reasonably state that?


I mostly just lurk in here, but I feel rather compelled to say something.

I’ve taught plenty of Scala newbies about the current implicit system. Conceptually, most folks understand it relatively quickly. It’s pretty straightforward and the fact that the implicit keyword is used in parameter lists and also when declaring implied values is a huge clue to what’s going on, even to the uninitiated. Many folks are also familiar with multiple parameter lists (or at least don’t see it as a terribly strange concept), so the idea of one parameter list being “different” is not such a great leap.
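A minimal Scala 2 sketch of that symmetry, where the single implicit keyword marks both the declaration and the parameter list (the names here are illustrative):

```scala
trait Greeting { def text: String }

// Declaring an implied value with `implicit`...
implicit val english: Greeting = new Greeting { def text = "hello" }

// ...and requesting one, with the very same keyword:
def greet(name: String)(implicit g: Greeting): String =
  s"${g.text}, $name"

greet("world")  // the compiler fills in `english`
```

The same word appearing on both sides is the "huge clue" referred to above: a reader can pattern-match supply site to demand site.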

I do agree that there are warts, particularly the arbitrary limiting to a single, final implicit param list. I do not agree that these warts warrant such a drastic, complicated syntax change. I’ve read the proposal multiple times and am still struggling with some of the basics. What was appealing about the old implicit system is that it just made slight modifications to already familiar concepts. With the new system, fundamental things (like the definition and calling of a function) have become pretty unfamiliar through the insertion of given. The use of for is even more confusing.

I will echo what @lihaoyi, @mdedetrich and others have said: let’s identify the major pain points in the current implicit system and try to solve them with minimal changes. I think there have been many compelling alternatives presented that deserve serious consideration, alternatives that build on familiar concepts. This is one of the things I love most about Scala: you don’t need to understand everything deeply at first, but you can still have pretty good intuition about what code does, largely because it builds on familiar constructs from other languages. The old implicits, even though they’re a relatively unfamiliar concept, actually scaled gradually with the developer, following the Scala philosophy. I do not think the same can be said for the proposed system.

On a more meta level, I am also frustrated that this SIP process appears to be somewhat for show. @lihaoyi has been very diplomatic about pointing out some of the uncertainty around the goals of this process, but I’m afraid I will be more blunt. When highly-qualified contributors from the community spend hours engaging with the material, politely raising legitimate and well-reasoned concerns, and constructively presenting alternatives in good faith, why are they being dismissed so flippantly?


@lihaoyi Thanks for the meta discussion! I believe you identified the problems quite clearly. I was not communicating well enough. Part of the problem was the format of the discussion, where (as a SIP committee) we decided we would split issues into separate threads and push them out individually for discussion. That means the current thread was greatly lacking context. I was remiss in not giving better motivation and framing, and was consequently frustrated to see many counter-proposals that had already been discussed and discarded previously on PRs.

So let me try to give some more background first and then discuss expectations.

Background

A big challenge in this discussion is that we come from widely different assumptions. A year ago I would have had a similar approach to many people here who are pushing back. But at some point last year I tried to step out of my comfort zone and asked myself a hard question:

  • If implicits are so good, why are they not the runaway success they should be? Why do the great majority of people who are exposed to implicits hate them, yet the same people would love Haskell’s type classes, or Rust’s traits, or Swift’s protocols? The usual answer I get from people who are used to current implicits is that we just need minor tweaks and everything will be fine. I don’t believe that anymore.

  • Otherwise put: What can we learn from the other languages? The main distinguishing factor is that their term synthesis is separate from the rest of programming, and that they more or less hide what terms get generated. What terms are generated is an implementation detail, the user should not be too concerned about it.

  • By contrast, Scala exposes implementation details completely, and just by adding implicit we get candidates for term inference. The advantage of that approach is that it is very lightweight. We only need one modifier and that’s it. The disadvantage is that it is too low-level. It forces the programmer to think in terms of mechanism instead of intent. It is very confusing to learners. It feels a bit like Forth instead of Pascal. Yes, both languages use a stack for parameter passing and Forth makes that explicit. Forth is in that sense the much simpler language. But Pascal is far easier to learn and harder to abuse. Since I believe that’s an apt analogy I also believe that fiddling with Forth (i.e. current implicits) will not solve the problem.

So that led to a new approach that evolved over time. Along the way many variants were tried and discarded. In the end, after lots of experimentation, I arrived at the following principles:

  1. Implicit parameters and arguments should use the same syntax
  2. That syntax should be very different from normal parameters and arguments.
    EDIT: In fact it’s better not to think of them as parameters at all, but rather see them as constraints.
  3. The new syntax should be clear also to casual readers. No cryptic brackets or symbols are allowed.
  4. There should be a single form of implicit instance definition. That syntax must be able to express monomorphic as well as parameterized and conditional definitions, and stand-alone instances as well as aliases.
  5. The new syntax should not mirror the full range of choices of the other definitions in Scala, e.g. val vs def, lazy vs strict, concrete vs abstract. Instead one should construct these definitions in the normal world and then inject them separately into the implicit world.
  6. Imports of implicits should be clearly differentiated from normal imports.
  7. Implicit conversions should be derived from the rest, instead of having their own syntax.
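As a sketch of principle 4, the proposal's single implied ... for form (not valid in current Scala; the instance names and the Ord example are illustrative) is meant to cover all the variants in one syntax:

```scala
// Monomorphic, stand-alone instance:
implied intOrd for Ord[Int] { ... }

// Parameterized and conditional: an Ord[List[T]] exists given an Ord[T].
implied listOrd[T] given (ord: Ord[T]) for Ord[List[T]] { ... }

// Alias: inject a value constructed in the "normal world" (principle 5):
implied global for ExecutionContext = ExecutionContext.global
```

One definition form replaces the current zoo of implicit val, implicit def, and implicit object.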

I arrived at these principles through lots of experimentation. Most of them were not present from the start but were discovered along the way. I believe these principles are worth upholding, so I am pretty stubborn when it comes to weakening them. And I also believe that, given the mindshift these principles imply, there is no particular value in keeping the syntax close to what it is now. In fact, keeping the syntax close has disadvantages for learning and migration.

Expectations

So, how can we make the discussion more productive and less frustrating for everyone?

  • Both meta feedback and proposals to change details would be very valuable. They should be clearly identified as one or the other.

  • Alternative proposals are welcome as well, but maybe they are better developed on separate threads.

  • Actionable outcomes: I’d be glad if we could come up together with a number of changes to the proposal that we could agree on and that could be implemented in short order. It would be great if we could arrive at a go/no go decision of the whole thing by consensus, but I have the feeling that will be hard to achieve at present, since it would require more practical experience of people working with the new constructs (myself included).

  • Alternatives could be worked out on separate threads. I’m happy to give feedback on the iterations. To be fully considered, they’d need to be at the same level of worked out detail as the current proposal. I.e. we need an informal spec and an implementation with which one can experiment.

  • I don’t believe in surveys, because of selection bias. This is particularly pronounced here since the people participating in the survey would mostly be used to current implicits. If I had been asked a year ago what I prefer in a survey I would probably have picked current implicits as well!

  • Repeated feedback: This is probably unavoidable, since not everyone is current on what has been discussed, and sometimes it is unclear why previous feedback was not incorporated. On the other hand, given the quantity of issues, I am unable to respond to all repeated feedback in undigested form. So, a proposal: can we make a collective effort to process repeated feedback? E.g. if, say, explicitly is proposed, can we mine the GitHub history, note that it was proposed, try to distil the previous discussion, and then continue here? That would make it much easier for me to respond and would avoid monopolizing the thread with my comments.

Thanks again for the constructive feedback!


For a discussion about concrete syntax it seems surprising that there was so little said about this or similar variants of marking implicit parameters with some type of “implicit parens”, different from the plain explicit ones.

This seems to give about the same benefits for the abstract syntax as the given keyword while possibly being more immediately recognizable on the surface.

This actually replies to my post even before I sent it, but there are trade-offs here and not everyone here seems to consider the same syntactic elements clear and cryptic.


See also New implicit parameter & argument syntax · Issue #1260 · lampepfl/dotty · GitHub for context.

Reading Contextual Abstractions as a whole, I think it is a very good step forward.
I like the separation from the usual function arguments. It makes using libraries with context bounds much more readable. (I think context bounds are the main feature that should be known by library users.)
I like most things except one:

  • Implied Imports

When I think about someone actually having to use them, it seems terrible.
It means that, in the best case, every usage of such an import leads to a question for Google.

I have only one question: how can I easily get rid of the need to use Implied Imports when I am solving library-integration tasks?

It seems there are no such tools in Scala :frowning:
I do not have a concrete suggestion, but I think the answer lies in scope management.
For example: https://kotlinlang.org/docs/reference/scope-functions.html

I think we could do that. Having leading implicit parameters followed by normal parameters is nice for orthogonality, but there are no use cases that cannot be worked around easily. So I would be prepared to drop them if we get a better syntax for the common case instead. I’m not sure we’d still need several given sections in that case; there could be only one.

This restriction is consistent with thinking of implicit parameters as constraints, which is what e.g. Haskell does. If you communicate constraints, you are one level removed from the order in which your constraint evidence should be aligned in a list of curried parameters. So passing all constraint evidence at the end is a sensible thing to do, if we can work around the limitations.


Has it been considered to give up on passing given parameters explicitly?
So you would have to write

implied for Ctx = new Ctx
foo()

to override the Ctx passed to foo. It might be slightly inconvenient sometimes but then at least the separation – between regular parameters which are passed in explicitly and given parameters which are inferred – is complete.

There is one thing I still find lacking in the proposal, even if I completely go along with the philosophy behind it. I think I’ve brought it up before but can’t recall getting any response.

implied impliedCtx for Ctx = new Ctx
def bar() given Ctx = ???
def foo() given (givenCtx: Ctx) = bar()

The Ctx passed to bar is givenCtx (which should definitely stay that way!). With implicits it was obvious that an implicit parameter was itself implicit. But now I find it completely non-obvious, because of the disconnect between the different keywords, that a given parameter is also implied.

I think there’s a more fundamental difference: all these languages (I think?) guarantee some form of “typeclass coherence”. That is, different instances/impls of typeclasses/traits should not overlap. Once you have that, hiding term inference makes sense, since the term will always come from the same place. The same isn’t true in Scala, and it seems to me that bringing over nameless instances but not coherence would be the worst of both worlds: how do I do “find all references” on a nameless implied? And how can the compiler give me a useful error message when two nameless implied instances are ambiguous?
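To make the concern concrete, a Scala 2 sketch (the Codec name and instances are illustrative): today an ambiguity error can at least name the competing instances, which nameless instances could not offer:

```scala
trait Codec[A] { def name: String }

implicit val utf8Codec: Codec[String]  = new Codec[String] { def name = "utf8"  }
implicit val asciiCodec: Codec[String] = new Codec[String] { def name = "ascii" }

// implicitly[Codec[String]]
// error: ambiguous implicit values:
//   both value utf8Codec ... and value asciiCodec ...
// With nameless implied instances there is nothing for the message,
// or for an IDE's "find all references", to point at.
```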


Personally, I am already using this proposal along with the other contextual abstractions in a 5500+ line codebase. I really like the given keyword, I think it makes sense to say it’s some constraint on the current values in scope.

My only complaint, having confirmed it with a non-Scala programmer, is this: given that the inferrable parameters in def fancy given A, B, C = "Fancy!" do not really look like a parameter list (which is the intent), why should they be applied as a whole list if you only want to update a single one?

for example, using only new features in this proposal:

trait A
trait B
trait C

def fancy given A, B, C = "Fancy!"

implicit object A_ extends A
implicit object B_ extends B
implicit object C_ extends C

println(fancy)

val newFancy = fancy given (implicitly[A], implicitly[B], new C {})

Now I must summon all the other values just to change a single implied argument. However, I don’t think it’s possible to really prove which implied parameter list is the correct one if you allow partial application and generate the other arguments.

This causes me never to use given at the use site and instead update the implied scope in a previous statement in a block before calling, or otherwise restrict myself to only single argument implied parameter lists.