Proposal: Changes to implicit resolution rules

Dear scalamaris,

By now, most of the big implicits-related changes in Dotty have been discussed on this forum, but there’s one less visible but still important aspect that is worth considering in detail: the set of rules that determine which implicit will be selected by implicit search. These rules are not fundamentally different in Dotty, but they have been tweaked as described in Changes in Implicit Resolution. Since even a small change could have unanticipated effects on code that relies heavily on implicits, I invite you to look at each rule in detail. When posting a comment in this thread, make sure to always indicate which rule you’re referring to, to avoid confusion.

This document was discussed at the latest SIP retreat; see the section “Implicit Resolution changes” in the minutes. I suggest keeping the minutes open side-by-side with the document when looking at the rules, since they provide some additional context.

If you have ever used inheritance to enforce a prioritization scheme for implicits (e.g., object Foo extends LowPriorityFoo { ... }), your feedback on rule 7 would be useful: as discussed in the minutes and demonstrated in dotty/implied-priority.scala at main · lampepfl/dotty · GitHub, it makes it possible to prioritize implicits without relying on inheritance: would you use such a scheme in practice? Let us know!
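For a concrete picture before diving into the linked file, here is a minimal sketch (invented names, not taken from implied-priority.scala) of the idea: under rule 7, all else being equal, the alternative with fewer implicit parameters wins, so an extra, always-available DummyImplicit parameter demotes an alternative without any LowPriority base trait.

trait Codec[A]

object Codec {
  // These two alternatives would be ambiguous on types alone; under
  // rule 7, viaJson wins because it takes no implicit parameters.
  implicit def viaJson[A]: Codec[A] = new Codec[A] {}

  // The extra DummyImplicit parameter (always satisfiable) marks this
  // alternative as less specific, i.e. lower priority.
  implicit def viaBinary[A](implicit d: DummyImplicit): Codec[A] = new Codec[A] {}
}

// implicitly[Codec[Int]] then resolves to viaJson.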

5 Likes

I think the changes are mostly good. But just a couple of things to point out:

For (3) – Package prefix no longer part of implicit scope:
It occurs to me that with the availability of top-level methods – assuming top-level definitions in a package can come from two different dependencies without one clobbering the other (this may be a bad assumption, as it’s different from current package object behavior… but if it’s not true, it might be something to look at, because it could cause problems!) – if the package prefix were not dropped from the implicit scope, it would give a free solution to the orphan instance problem. I think I brought this up to Martin or Guillaume at last year’s typelevel summit and there was a good reason why I was wrong, but I don’t remember what it was.

Regarding (7) – implicit parameter count affects priority of resolution:
This seems like it will cause a lot of difficulty. Right now you can explicitly prioritize derivations with the inheritance trick… it might not be pretty, but it’s only necessary for a small amount of code (hopefully!), and at least it can be done. If the number of implicit parameters becomes a sorting factor, it could make some derivations impossible (at least, that’s my concern).

The proposal isn’t clear about whether this new prioritization mechanism complements or replaces the inheritance-based mechanism. If the previous mechanism still exists and takes priority over the new one, then there’s no real cause for concern. But otherwise, it will be impossible to prioritize a more specific derivation over a less specific one (because the more specific one will almost always have more implicit parameters).
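A minimal sketch of the worry (Fields is a hypothetical piece of derivation evidence, invented for illustration):

trait Encoder[A]
trait Fields[A] // hypothetical evidence that A is a case class

trait LowPriorityEncoders {
  // generic fallback: zero implicit parameters
  implicit def fallback[A]: Encoder[A] = ???
}
object Encoder extends LowPriorityEncoders {
  // The more specific derivation necessarily needs evidence, i.e. an
  // implicit parameter. If parameter count outranks the inheritance
  // trick, fallback wins even though forCaseClass is the intended
  // instance.
  implicit def forCaseClass[A](implicit fields: Fields[A]): Encoder[A] = ???
}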

2 Likes

Do these changes take effect under the Scala2 compatibility flag as well?

1 Like

Nice! Most of these changes look excellent to me!

The rules don’t mention anything about subtyping and variance, though, and I think it’s very, very important to clarify how those factors affect implicit resolution. How do subtyping and variance in return values affect resolution? How do subtyping and variance in implicit parameters affect resolution? The same goes for “declaration scope”, by which I mean: if one were to use the Scala 2 inheritance trick I show below, how (if at all) would that affect implicit resolution in Scala 3?

My other feedback is about rule 7. Off the top of my head there are three reasons to use the prioritisation-of-implicits trick:

1. You’re providing generic implicits for multiple subtypes of the same supertype.

trait Functor[F[_]]
trait Monad[F[_]] extends Functor[F]

trait LowPri {
  implicit def functor[F[_]]: Functor[F] = ???
}
trait HighPri extends LowPri {
  implicit def monad[F[_]]: Monad[F] = ???
}

The goal is that monad should be chosen over functor.
Rule 7 supports this case iff the Monad is considered more specific than Functor due to subtyping.
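A usage sketch of the intent (Instances is just a hypothetical object gathering the implicits):

object Instances extends HighPri
import Instances._

// Both functor and monad can produce a Functor[List]; the intent is
// that monad wins because Monad[List] <: Functor[List].
val f = implicitly[Functor[List]]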

2. You’re providing specific implicits for HK types parameterised by multiple subtypes of the same supertype.

case class SuperQuickLossyEq[-A](areEq: (A, A) => Boolean)

object SuperQuickLossyEq {
  trait LowPri {
    implicit val anyref: SuperQuickLossyEq[AnyRef] = apply(_ eq _)
  }
  trait HighPri extends LowPri {
    // silly but just an example
    implicit val string: SuperQuickLossyEq[String] = apply(_.length == _.length)
  }
}

The goal is that for strings, SuperQuickLossyEq[String] should be chosen. Scala 2 would go for SuperQuickLossyEq[AnyRef] because of the contravariance.

Rule 7 supports this case iff the String implicit is considered more specific than the AnyRef one due to subtyping and ignoring typeclass variance.
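A usage sketch, restating the claim above (EqInstances is hypothetical):

object EqInstances extends SuperQuickLossyEq.HighPri
import EqInstances._

// Both instances are eligible here. Per the claim above, Scala 2 goes
// for anyref, because contravariance makes SuperQuickLossyEq[AnyRef] a
// subtype of SuperQuickLossyEq[String] and hence "more specific"; the
// goal is string.
val eq = implicitly[SuperQuickLossyEq[String]]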

3. You want to mix specialised implicits with generic ones.

trait MapReduce
trait Parallel

trait LowPri {
  implicit def sequential: MapReduce = ???
}
trait HighPri extends LowPri {
  implicit def parallel(implicit p: Parallel): MapReduce = ???
}

The goal is that if parallelism is possible, it is used; otherwise we fall back to a sequential implementation.

Rule 7 actually does the opposite here! According to the rules, A (sequential) is considered more specific because it has no implicit params, whereas B (parallel) does. B should be more specific, right? There’s more specificity in the solution / solved equation. A would work with and without a given Parallel; B wouldn’t. You can go from B to A, but not from A to B (without adding a Parallel). That’s more specific, right? So maybe it’s a typo?

Finally, as to the idea of modelling priority as types and then requiring users to import those as well, I can’t see myself ever using that in my libraries. The reason is that in all the cases I can think of where I’ve used the prioritisation trick, it’s been to communicate to Scala rules that are actually unambiguous in whatever domain I’m working in. There have been a few rare times where maybe my code, and definitely a library or two I’ve used, has a bunch of built-in defaults and then a special object that users should import if they want to change the defaults. And come to think of it, those scenarios fall under my example 3 above.

4 Likes

I’m still dubious about this import aspect:

where import takes precedence over definition.

The extensive use of elaborate inheritance chains to guide implicit resolution (see, for instance, the adapters for Java 8 Streams in the standard library) indicates that it is useful to have explicit control over priority.

I don’t like the new indirect rule 7 scheme any better than the old one, precisely because it’s indirect. You need a ton of boilerplate to get anywhere, and you have to craft it via arcane rules for it to work right at all.

The changes to the rules are fine, but the problem of specifying implicit resolution is still just as inadequately solved as before. If we want to solve that problem, there needs to be something much more direct.

Just allow logic, and add syntax for it. For example:

given foo[F <: Foo](using F) as Example[F]
given bar[B <: Bar](using B) priority > foo as Example[B]
given quux[Q <: Quux](using Q) priority < foo, bar as Example[Q]

4 Likes

The feature is a lot bigger than you quickly sketch here. If priority is transitive (and must have cycle detection) and tie-breaking has to be part of it, this seems on the late side for 3.0.

Bringing implicits into a higher scope by nesting (which seems the cleanest solution for most situations, but may be verbose) may no longer work if you need to explicitly enumerate enclosing scopes and other scopes. It’s not immediately obvious to me whether this may be problematic.
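For reference, here is my reading of the nesting approach in a toy example (under the new rules, as I understand them, the inner definition wins where Scala 2 would report an ambiguity):

case class Config(name: String)

object Outer {
  implicit val outerConf: Config = Config("outer")

  object Inner {
    implicit val innerConf: Config = Config("inner")

    // Nesting acts as a tie-breaker: the more deeply nested implicit
    // takes precedence over the one in the enclosing scope.
    def resolved: Config = implicitly[Config]
  }
}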

Rule 7 actually does the opposite here! According to the rules, A (sequential) is considered more specific because it has no implicit params, whereas B (parallel) does. B should be more specific, right?

That would be nice but it’s impossible. We tried to do that: it led to lots of exploding implicit searches. Think about it: If you are constructing a tree and you have an easy direct case and a complicated case with nested implicits you want to try the easy case first. Otherwise you will fall into lots more divergence holes. The effects we observed were even worse than that. Instead of divergence we got heap overflows after minutes of compute time since it always tried implicits with the largest tuples first. If your search tree fanout is 22 that’s what happens. So, no, that’s not in the cards.
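To illustrate with a toy example (invented names, not the code we actually measured): both alternatives below can produce a Tree[A] for any A, so the tie-breaking order matters a great deal.

trait Tree[A]

object Tree {
  // easy direct case: no implicit parameters
  implicit def leaf[A]: Tree[A] = new Tree[A] {}

  // complicated case: two nested searches for the same type
  implicit def node[A](implicit l: Tree[A], r: Tree[A]): Tree[A] =
    new Tree[A] {}
}

// Under "fewer parameters first", a search for Tree[X] succeeds
// immediately with leaf. Under "more parameters first", it would dive
// into node, whose nested searches again prefer node, and so on until
// the divergence checker (or the heap) gives out.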

Since “more parameters first” is impossible, we have two choices that remain:

  1. fewer parameters first
  2. no relationship; if we have an implicit with nested parameters and one without, they are ambiguous unless there’s some other tie-breaker.

(2) still risks unexpected divergence. If there is no other tie-breaker, we have to try both versions before we can issue an ambiguity error. After all, the version with more parameters might fail in a recursive call, in which case we should return the simpler version instead of reporting an ambiguity. So (1) is clearly best from the perspective of being able to control implicit search complexity.

Here is a test file that demonstrates various ways implicit priority can be controlled, both planned ahead (which is what placing implicits in subclasses provides) and after the fact, when one wants to add new implicits with lower or higher priority than existing implicits that did not plan for these extensions.

Interestingly, “fewer parameters first” allows the after-the-fact addition of new implicits both at higher priority and at lower priority than existing implicits, with a trick shown in test4 and test5 of that file. So it gives a lot of flexibility that would otherwise be missing.
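A hedged reconstruction of the idea (this is not the actual test file; E and the instance names are invented for illustration):

trait E[T] { def str: String }

object Existing {
  implicit def t1[T]: E[T] = new E[T] { def str = "existing" }
}

object Extensions {
  // Lower priority after the fact: the extra, always-available
  // DummyImplicit parameter makes t2 lose ties against t1 under rule 7.
  implicit def t2[T](implicit d: DummyImplicit): E[T] =
    new E[T] { def str = "added at lower priority" }

  // Higher priority after the fact: a refined result type makes t3
  // more specific than t1 whenever an E[String] is wanted.
  class EStr extends E[String] { def str = "added at higher priority" }
  implicit def t3: EStr = new EStr
}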

Finally, as to the idea of modelling priority as types and then requiring users to import those as well, I can’t see myself ever using that in my libraries.

You don’t need to import these types. They will be completely transparent to users.

Implicit priorities are hard. I generally stay away from them. So I would expect only experts to use them in their libraries. But then, the new priority scheme is a lot better than the old one, since it does not require breaking modularity by putting implicits into special classes, and it allows after-the-fact addition of implicits with tailored priority.

I am sorry not to have the time to write a more accessible description of how to tailor priorities than the spec rules and this test. Somebody else will have to pick that up.

1 Like

The feature is a lot bigger than you quickly sketch here. If priority is transitive (and must have cycle detection) and tie-breaking has to be part of it, this seems on the late side for 3.0.

I agree. Also, I believe relying on implicit priority is an anti-pattern. I am convinced that we will need it much less with the new metaprogramming possibilities, in particular summonFrom, which lets you control explicitly the order in which implicit alternatives are tried.
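For instance, here is a minimal sketch of the summonFrom approach, reusing the MapReduce/Parallel example from earlier in the thread (the cases are tried in the order they are written):

import scala.compiletime.summonFrom

trait MapReduce
trait Parallel

inline def mapReduce: MapReduce = summonFrom {
  case p: Parallel => new MapReduce {} // tried first: use parallelism if available
  case _           => new MapReduce {} // otherwise fall back to sequential
}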

Since using implicit priority is an anti-pattern, I do not want to spend language footprint on it. The rules we have happen to be optimal for controlling the complexity of implicit search, which is the thing I care most about here. That they also manage to give precise control of priorities, even if it is indirect, is a nice bonus.

1 Like

It’s not an “anti-pattern” if everyone’s using it and it’s required knowledge for using the language effectively. Are implicit priorities going to become any less required for newcomers to learn in the future? Realistically, no. Making them less arcane and more explicit would directly benefit the teaching and use of Scala in industry; it’s not just a nice bonus.

2 Likes

Implicit resolution rules should not be considered required knowledge.

I’ve been using some implicits in my code, but never to the extent that I had to worry about resolution rules. If there is an ambiguity, it is more likely a bad design or an outright bug. When dependencies use complicated resolution schemes, I expect them to just work without my worrying about how they work.

It’s not an “anti-pattern” if everyone’s using it and it’s required knowledge for using the language effectively.

Some designers of some very advanced libraries use it, but over the whole set of libraries I don’t think it is very common. Also, one should ask why priorities are used. In the case of the Scala collections up to 2.12, it was so that you would get the right CanBuildFrom without having to do anything. So priorities were an essential part of the recipe, but users of the libraries should not have to be concerned with them. That’s how it should be.

I believe that if users of your library need to know about priorities, then most of the time the design is wrong. So in the end the only people who need to be concerned with priorities are the designers of these advanced libraries. That’s about four orders of magnitude fewer people than “everyone”.

1 Like

I disagree: implicit priorities appear nearly everywhere that implicits appear (in my experience).

Stratifying between “users” and “library authors” ends up with said “users” not knowing the basic tools of the language, being unable to code themselves out of any non-trivial problem, and then leaving for Kotlin, where the toolbox is far smaller but also perceived by “users” to be more obvious – a failure of education. The big break for Scala 3 is making implicits a more visible, obvious and explicit part of the toolbox, which will ensure that implicits make their way into a larger share of “users” code. For a part of that toolbox to remain arcane will, I believe, be a disservice to any “user” who doesn’t know the exact path to becoming a “library author”, but simply has a problem with their existing use of givens – a problem that priorities would solve, and that would otherwise prompt them to scrap all their code, if only they knew about the mechanism designed to solve it, buried somewhere deep in a specification only the graybeards ever read. I believe language mechanisms should be democratized. The categorizing of developers is hurting adoption and hurting the image of Scala. You shouldn’t have to have a graybeard on your team to use the language effectively – you should be able to get a full grasp of the entire toolbox, including the “advanced” parts, right from the online documentation and IDE hints.

5 Likes

Gratifying to my ego as it might be to believe that I’m a “one in ten thousand” Scala programmer, I really doubt this is the case. And even if it is the case that only a small percentage of Scala programmers are creating libraries where they feel they need to use implicit prioritisation, a significantly higher percentage will at some point maintain, or at least contribute to, those libraries. Plus, even users are encouraged to look at and learn from the source code of the libraries they use. I understood it to be a general rule of software that ideally one should always understand one level below the level one actually programs in.

3 Likes

The first show-stopper I see here (there might be more) is that the Java 9+ module system forbids “split packages”, that is, having different modules put classes in the same package.

The previous mechanism still exists, it’s not wrong per se, and removing it would break a ton of code, starting with the Predef itself.

1 Like

We should document the exact behavior of the Scala 2 compatibility mode at some point. Currently, it disables rules 2, 3, and 4.

1 Like

From the Scala 2 spec:

If there are several eligible arguments which match the implicit parameter’s type, a most specific one will be chosen using the rules of static overloading resolution.

However, there is one important difference with Scala 2 here: contravariance is now handled differently for implicit/overloading resolution, as described in dotty/compiler/src/dotty/tools/dotc/typer/Applications.scala at 979d097b3d5876479bb76a9bc1f55b3f54aadd62 · lampepfl/dotty · GitHub (it seems that we’re currently lacking user-facing documentation for this). The motivation for this change is that it makes contravariant typeclasses behave the way people actually want them to behave; for example, we could make Ordering contravariant in its type parameter and still have Ordering[Int] be preferred over Ordering[Any].
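A small sketch of the difference (Ord is a stand-in for illustration, not the real Ordering):

trait Ord[-A]

object Ord {
  implicit def ordAny: Ord[Any] = new Ord[Any] {}
  implicit def ordInt: Ord[Int] = new Ord[Int] {}
}

// Scala 2: by contravariance Ord[Any] <: Ord[Int], so ordAny counts as
// "more specific" and wins. Under the new comparison, ordInt is
// preferred, which is what users generally expect.
val o = implicitly[Ord[Int]]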

1 Like

(note that this change is also special cased in the Scala 2 compatibility mode: if an implicit search is ambiguous, we try it again with the old overloading resolution rules and warn if that succeeds)

2 Likes

My observations are somewhere in between. I wouldn’t say everywhere, but scanning around our company codebase I certainly find things called LowPriorityImplicits scattered around in various libraries, not just ones that I would call “very advanced”. It’s a tool that experienced Scala devs routinely reach for when they hit this sort of problem.

I don’t know whether the new machinery deals cleanly with all of the needed use cases – it might – but I do think Martin’s underestimating the prevalence of this pattern…

4 Likes

All these things will continue to work, and there are even better alternatives now. The question is, should Scala spend language features to address this directly? My answer is an emphatic no.