Principles for Implicits in Scala 3

I agree that this is the #1 problem. I also basically agree with your comparison to Rust.

I disagree, however, that reworking the term inference system is not part of the solution. The problem with any-random-mix-of-values-defs-and-implicit-classes as a source of inferred terms is that it gets really hard for everyone (compiler included) to know what is going on.

Whether this proposal does enough to simplify term inference so that it’s easy enough to make the compiler be more helpful, I do not know. It will, I think, be easier on users; but as the existing difficulty is mostly in making sure the one right thing is available, and wrong things are not available, I’m not sure that being easier on users is enough.

Rust does that by weakening the system of term inference so that there can be only one right thing, and not many places in which to look for it. That’s handy, but very limiting; if you have a type T, then ordered trees keyed by T can have only one ordering. More generally, you can’t use it for scope injection at all, because the scope is always the same.

Since Rust has zero-cost newtypes, and the newtypes can have alternate orders, and Into is pretty flexible, there are kinda clunky but moderately okay workarounds. But I don’t think we can just emulate Rust in full here without giving up a lot.
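To illustrate (my own sketch, not from the thread): the Scala analogue of that Rust newtype workaround is a zero-overhead wrapper type carrying its own Ordering. The names Desc and NewtypeDemo are made up for the example.

```scala
// Hypothetical sketch: a value class (zero-allocation wrapper in most cases)
// that gives Int an alternate, descending Ordering.
final case class Desc(value: Int) extends AnyVal

object Desc {
  // Found automatically via the companion object (implicit scope).
  implicit val descOrdering: Ordering[Desc] =
    Ordering.by[Desc, Int](_.value).reverse
}

object NewtypeDemo {
  def main(args: Array[String]): Unit = {
    println(List(3, 1, 2).sorted)                           // List(1, 2, 3)
    println(List(3, 1, 2).map(Desc(_)).sorted.map(_.value)) // List(3, 2, 1)
  }
}
```

Because the alternate Ordering lives in the wrapper’s companion object, no import is needed at the use site, which is one place Scala’s richer implicit scope pays off over the single-instance Rust model.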

I do think the questions of discoverability and comprehensible errors are very important, though.


Yes, it is presently a theorem prover about types. That’s why people worry about soundness: has the compiler correctly proven (and is it even possible to prove that) that this thing is valid given the types involved?

So there are dual functions of implicits: one is as a constraint that limits what is valid to call; the other is a source of functionality that allows you to exploit the now-known constraint.
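A minimal Scala sketch of that duality (the names are mine): the implicit Ordering[A] both constrains which A the method accepts and supplies the comparison the body exploits.

```scala
object DualRole {
  // The implicit parameter is a constraint (no Ordering[A], no call)
  // and a capability (the body uses ord to do the actual comparing).
  def maxOf[A](xs: List[A])(implicit ord: Ordering[A]): A =
    xs.reduceLeft((a, b) => if (ord.gteq(a, b)) a else b)

  def main(args: Array[String]): Unit = {
    println(maxOf(List(1, 5, 3))) // 5 - Ordering[Int] is in implicit scope
    // maxOf(List(new Object))    // would not compile: no Ordering[Object]
  }
}
```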


Implementing suggestions in compilation errors would let us validate whether redesigning implicits improves discoverability. The proportion of correct suggestions would be an objective measure of that (i.e. of discoverability).

A question: Rust prohibits orphan instances, so I was under the impression that every implementation has to be defined alongside either the trait or the type it implements the trait for. In the analogous situation, Scala would not require an import at all; the implicit would be available anyway. Is that correct, or did I miss something?

Ok, technically, type inference is a kind of theorem proving. But most people writing code would not consider themselves in the business of proving theorems. For them, type inference is just a convenience allowing them to omit stuff. Telling them they need to worry about “proving” things sounds like a mighty deterrent.

I’m talking more about the availability of extension methods than typeclass instances per se. In Rust, a typeclass instance is not available as a value (even trait objects used for virtual methods are bound to specific struct or enum instances); instead, having a trait visible in scope adds its extension methods to a type.

Example code in Rust

use struct_here::MyStruct;
//use trait_here::MyTrait; // rustc suggests that; IntelliJ inserts it automatically

mod struct_here {
    pub struct MyStruct;

    impl MyStruct {}
}

mod trait_here {
    use struct_here::MyStruct;

    pub trait MyTrait {
        fn print_hello(&self);
    }

    impl MyTrait for MyStruct {
        fn print_hello(&self) {
            println!("Hello, World!")
        }
    }
}

fn main() {
    let instance = MyStruct {};
    instance.print_hello(); // <- situation: I'm writing this line of code
}

Compiler error:

error: no method named `print_hello` found for type `struct_here::MyStruct` in the current scope
26 |     instance.print_hello(); // <- situation: I'm writing this line of code
   |              ^^^^^^^^^^^
   = help: items from traits can only be used if the trait is in scope; the following trait is implemented but not in scope, perhaps add a `use` for it:
   = help: candidate #1: `use trait_here::MyTrait`

Analogous code in Scala

import TypeClasses1.struct_here.MyStruct

object TypeClasses1 {
  object struct_here {
    class MyStruct
  }

  object trait_here {
    import struct_here.MyStruct

    trait MyTrait[A] {
      def printHello(): Unit
    }

    implicit val myTraitForMyStruct: MyTrait[MyStruct] =
      new MyTrait[MyStruct] {
        override def printHello(): Unit =
          println("Hello, World!")
      }
  }

  def main(args: Array[String]): Unit = {
    val instance = new MyStruct
    instance.printHello() // <- the line I'm trying to write
  }
}

Compiler error:

Main.scala:22: error: value printHello is not a member of TypeClasses1.struct_here.MyStruct
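For contrast, here is a sketch (mine, adapting the names above) of the Scala 2 pattern that does deliver the extension method, at the cost of an extra implicit class whose required import the error message says nothing about:

```scala
object struct_here {
  class MyStruct
}

object trait_here {
  import struct_here.MyStruct

  trait MyTrait[A] { def printHello(): Unit }

  implicit val myTraitForMyStruct: MyTrait[MyStruct] =
    () => println("Hello, World!")

  // The bridge from typeclass instance to method syntax; without
  // `import trait_here._` at the call site, printHello is not found.
  implicit class MyTraitOps[A](private val self: A) {
    def printHello()(implicit ev: MyTrait[A]): Unit = ev.printHello()
  }
}

object SyntaxDemo {
  import struct_here.MyStruct
  import trait_here._

  def main(args: Array[String]): Unit =
    (new MyStruct).printHello() // prints "Hello, World!"
}
```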

I know that typeclasses work somewhat differently in Scala and Rust (and the above Scala code can’t be fixed with an extra import), but Rust’s approach doesn’t look inherently less capable. Typeclasses are ultimately used for extension methods and extra constants. Here’s how a Rust typeclass can add a constant to a base type:

use struct_here::MyStruct;
use trait_here::MyTrait;

mod struct_here {
    pub struct MyStruct;

    impl MyStruct {}
}

mod trait_here {
    use struct_here::MyStruct;

    pub trait MyTrait {
        const MY_CONSTANT: u8;
    }

    impl MyTrait for MyStruct {
        const MY_CONSTANT: u8 = 8;
    }
}

fn main() {
    println!("{}", MyStruct::MY_CONSTANT); // MY_CONSTANT added to MyStruct
}

If I forget to add use trait_here::MyTrait; then rustc prints the following:

error[E0599]: no associated item named `MY_CONSTANT` found for type `struct_here::MyStruct` in the current scope
  --> src/
5  |     pub struct MyStruct;
   |     -------------------- associated item `MY_CONSTANT` not found for this
23 |     println!("{}", MyStruct::MY_CONSTANT); // MY_CONSTANT added to MyStruct
   |                    ----------^^^^^^^^^^^
   |                    |
   |                    associated item not found in `struct_here::MyStruct`
   = help: items from traits can only be used if the trait is in scope
help: the following trait is implemented but not in scope, perhaps add a `use` for it:
1  | use trait_here::MyTrait;

Overall, in Rust I do not even need to know the names of typeclasses. The Rust compiler will find them for me and suggest them. Same for IntelliJ: it will suggest methods or constants from typeclasses and import those typeclasses (traits) immediately.

Things change when I use a generic type - then we need generic type constraints, i.e. we must write the typeclass names explicitly, e.g. (snippet from my project):

fn mul_wide<A: FixI32, B: FixI32, R: FixI64>(a: &A, b: &B) -> R { ... }

but even then rustc can help when I omit something, e.g. when I change A: FixI32 to just A. Suggestions don’t always work, but that’s better than no suggestions, as in Scala. In this case rustc suggested changing A to A: FixedPoint, where FixedPoint is a supertrait of FixI32. The suggestion failed, but was somewhat close to the truth. I can search for subtraits of FixedPoint - there are only 4 of them (FixI32, FixI64, FixU32 and FixU64), so the choice is easy.

I’m a huge fan of the proposed changes to implicits as well as these principles. As a software engineer and researcher who hops around a variety of codebases in different languages, my main frustrations with Scala implicits as they currently exist are:

  1. Oftentimes when I hit an error for a missing implicit argument that I then fix by importing some module, the fix seems random or arbitrary. As I write code without an IDE with auto-imports, the ambiguity as to why a particular import fixes the problem can make the code harder to understand. A specific syntax for importing instances for implicit arguments is a huge quality-of-life improvement.
  2. The syntax for implicit conversions in Scala is not super explicit. While it’s a small issue, it’s annoying to have to remember the difference between implicit def and the other areas where implicit appears. I think the implied instance-based syntax that is being proposed will make it much more clear that an implicit conversion is being defined.
  3. The Scala 2 syntax for implicit parameter lists is also confusing. It looks as though only the first item in the list is implicit. Fixing that with the given syntax will make things a lot clearer, while also allowing for multiple implicit parameter lists (if I’m understanding the proposal correctly). The given syntax is the proposal I’m most excited about, as I believe it much better illustrates the “contextual” nature of implicit parameters.
  4. Generally the different meaning of implicit depending on the context makes code harder to understand.

While these may seem like small quibbles for more experienced or involved Scala engineers, these issues are why I do not make use of implicits in my own Scala code. I do think, as has been mentioned, that more helpful compiler errors are necessary to make using implicits easier for newcomers.
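A sketch of point 3, using the syntax as it eventually landed in Scala 3 (the keywords differ from the proposal text; Show, describe and demo are made-up names): the using clause marks every parameter in it as contextual, removing the Scala 2 ambiguity.

```scala
// Scala 3 syntax sketch. In Scala 2, `(implicit s: Show[A], n: Numeric[A])`
// makes BOTH parameters implicit, though it reads as if only `s` were.
// In Scala 3 the clause-level `using` keyword makes that explicit.
trait Show[A]:
  def show(a: A): String

given Show[Int] = (a: Int) => s"Int($a)"

def describe[A](a: A)(using s: Show[A]): String = s.show(a)

@main def demo(): Unit =
  println(describe(42)) // Int(42)
```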


I agree with this argument, and it’s why the evidence proposal sits poorly with me. I know that there’s a theorem prover in there, and I sorta-vaguely understand what is going on with it, but it’s not how I think about normal Scala programming. And I think that 95%+ of the Scala programmers out there would simply scratch their heads at it.

So while the whole types-as-proofs thing is real, I’d be very cautious about shoving people’s faces in it too aggressively - IMO, it’s likely to turn a number of people away…

Thanks for explaining! To give better error messages here, the compiler would have to know the complete set of sources in a project and its dependencies. I have the impression that’s made possible by the fact that in Rust package management and compilation are tightly integrated. The Scala and Java world is different in that respect.

Are sources really needed? Lots of information can be extracted from ordinary .class files. For example, if I have the following class in Java:

class MyComparator implements Comparator<MyType> {

Then I can analyze MyComparator with reflection and know that it parametrizes Comparator by MyType.

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Comparator;

public class Main {
    class MyType {
    }

    class MyComparator implements Comparator<MyType> {
        public int compare(MyType o1, MyType o2) {
            return 0;
        }

        public boolean equals(Object obj) {
            return false;
        }
    }

    public static void main(String[] args) {
        Class<?> klass = MyComparator.class;
        Type comparatorType = klass.getGenericInterfaces()[0];
        ParameterizedType preciseType = (ParameterizedType) comparatorType;
        System.out.println(preciseType.getActualTypeArguments()[0]);
    }
}


Running it prints:

class Main$MyType

Scala constructs are obviously more complicated than Java ones, but Scala has its own reflection mechanism based on extra information stored (“pickled”) in .class files.

Sorry, it’s the class files. But there again, the first step is to realize which sources or class files form part of a system. And in the Java world, that’s not so easy…


I’m not really sure what the problem with the Java world is, but anyway, the compiler suggestions don’t need to be totally exhaustive or perfectly accurate (I’ve shown an example where a Rust compiler suggestion was inaccurate and that wasn’t a deal breaker). The Scala compiler could recognize only a subset of patterns related to the definition of typeclasses, extension methods, etc., with the intention of suggesting them to the programmer. For example, the Scala compiler could try the following strategies:

  1. global typeclass recognition, i.e. finding all implicit vals and defs with stable paths and recording the types for which they can be instantiated (without any extra imports in scope)
  2. local typeclass search - trying members of all values / results of methods in scope to see if they provide a relevant typeclass instance
  3. the same for implicit conversions / extension methods

Library authors would then probably reorganize their libraries to fit into the suggestions heuristics so library users would have easier time finding implicits thanks to compiler suggestions.

The above strategy misses some cases, e.g. an implicit method with a stable path whose implicit argument requires a special import, i.e.:
implicit def methodWithStablePath[T](implicit arg: ThisTypeNeedsExtraImport[T]): Result[T] = ???

The final form of the suggestion heuristics would be the result of negotiation between library authors (who would vote on the most useful heuristics) and compiler authors (who would reject infeasible solutions).


A post was merged into an existing topic: Proposal To Revise Implicit Parameters

I think this rests on a faulty assumption. People often complain about things that aren’t the actual problem. For example, a lot of people complain about lazy evaluation in Haskell, but many people who program in Haskell say that isn’t the real issue, and that the complaint is really about other problems (for example, how hard it is to diagnose space leaks).

The fact of the matter is, when you are new to a language and still learning it, you will often complain about something vaguely related to your actual problem but not the actual problem itself, and such complaints can even be contradictory. For example, I have seen people complain about “magic implicits in Scala” and yet have no issue with Guice DI in Java, or with Scala collections (which rely on implicits).

I think when evaluating these questions, you really need to look into the details rather than concluding, in a reductionist manner, that whatever is being complained about needs to be reworked because some people complained about it. That is not to say I am dismissing feedback: when people complain about a problem, there is a problem. The point is figuring out where the problem actually lies.

I have personally had to teach Scala many, many times to new people, and from personal experience the number one grand issue with implicits is neither the concept nor the syntax. It’s the fact that, until recently, there has been no tooling to reveal both how an implicit is used and where an implicit comes from, plus the fact that the compiler errors for implicits are incredibly misleading.

What do you think will happen when you have an implicit marshaller which in turn asks for an implicit decoder, the decoder is missing from scope, but the Scala compiler “usefully” tells you that the marshaller is missing (even though you have an implicit marshaller right there)?

This isn’t really true. All other languages that have some form of implicits (barring Prolog) have always phrased it in terms of parameter passing. In Haskell it’s the same, and OCaml’s implicits proposal is the same as well. None of these languages have come up with keywords or alternate constructs to “hide” the fact that you are passing parameters.

Rust is a bad comparison: it doesn’t even have implicits, and it is about as explicit as a language can get. I am not sure why we are even comparing traits and typeclasses to implicits; we are comparing apples to oranges.

I am not trying to sound negative here, but I have issues with the core hypothesis that is the impetus for this proposal, for the reasons I have already stated. In my opinion, we put the cart before the horse: some people were complaining about implicits (and to put that into perspective, the largest complaints were happening around 5-10 years ago and centered on implicit conversions, which isn’t even related to what we are talking about), and because of this we have to redesign everything.

I mean, does this mean we have to completely reinvent how Scala does static typing because people have problems with how types are inferred in Scala? Or completely reinvent subtyping because of all the problems associated with it? Or the crazy DSLs/overloading/pure-FP monad thingies?

Because I can say one thing: I have heard many more complaints about those things than about implicits. I very rarely hear people complaining about implicits in the public sphere in an intellectually honest manner; often it’s just trolling, or it’s someone already predisposed to dislike the language stating their “opinion” (i.e. someone committed to dynamic typing is usually always going to complain about the complexity of static types in a language like Scala).

Recalling from memory from teaching new people the language, the only real complaints I hear about implicits is how terrible the compiler is about reporting them.

As for saying that implicit is incoherent and inconsistent with the rest of the language: by the same reasoning we should also get rid of final, case class, lazy, etc. Meanwhile, the proposal has inconsistencies of its own, such as overloading keywords (i.e. for), which has no precedent in the language. In fact, a lot of things about this proposal seem to create precedents that haven’t existed before.


This is also a really good point. Implicits are essentially a subset of Prolog, but that doesn’t mean that people generally like writing programs in terms of theorem proving.

This is honestly the main issue, and I don’t see how this SIP addresses it (if it did, I would be more amenable to it). The underlying fundamentals of implicits aren’t being changed; we are just repackaging them. As far as I can see, you can change the syntax and hide the parameter passing as much as you want, but if the Scala compiler still gives you completely wild error messages, then most of this effort is wasted (in fact, it could make the problem worse).

If IntelliJ can do it (which it does currently almost flawlessly), then I don’t see why the actual compiler can’t, especially considering we are going to be shipping libraries with TASTY trees. Honestly, I would rather the effort be spent on making the error reporting around missing/incorrect implicits better. If you are taking Rust as a language example, that’s probably the main thing making it easier for newcomers: not how they designed traits, but the fact that, for the most part, the Rust compiler is really helpful. The rest of the Rust language is actually just as hard/confusing/obtuse as Scala; tracking memory management through linear types is not that easy, especially when you see how people do async programming in Rust (using their so-called “easy” traits).


I think @mdedetrich is correct in saying that error messages are the current biggest problem with implicits, and not the superficial spelling/syntax. When you get an implicit wrong, you get zero help from the compiler about what to do, and in many cases the compiler actively misleads you: that is what most confuses newbies, and even experienced people.

This can be seen from @tarsa’s comments, @Mocuto’s comments, and even @Ichoran’s comments on another thread Proposal: Implied Imports and /u/kag0’s comment on reddit. I’ll chip in my own experience and say it applies to me as well: when learning how to use Futures, or learning how to use SLICK, the imports-are-wrong-and-error-messages-awful property was far more demoralizing than things like the choice of syntax or interference with currying.

Better error messages should definitely be possible in the compiler just by scanning the classpath; no package manager involved. People generally don’t have “forgot to add an SBT dependency to pull in implicit from other ivy artifact” problem, people have a “forgot to add an import for an implicit already on the classpath, often in the same library” problem. All this stuff is already on the classpath, we just need to scan for it in the compiler and surface it. I think @tarsa makes a reasonably compelling argument that Rust’s usability comes from the error reporting, and not from the superficial syntax.

Error messages like:

methodA is not a member of ClassA, but there is an implicit extension available

implicit scala.concurrent.ExecutionContext not found, but there are implicit instances available at scala.concurrent.ExecutionContext.Implicits.global and scala.concurrent.ExecutionContext.parasitic

Would make much more of a difference than changing the syntax. In fact, in the last case you can even see people have manually tried to implement this missing feature using @implicitNotFound annotations, when really this should be the standard auto-generated error message for any implicit that is not found.
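That manually implemented feature looks like this (a sketch of mine; the Codec typeclass and the suggested import path are hypothetical): @implicitNotFound lets a library author pre-write the error message shown when resolution fails, and scala.concurrent.ExecutionContext itself carries such an annotation.

```scala
import scala.annotation.implicitNotFound

// Hypothetical typeclass; the message names a made-up import path.
@implicitNotFound("No Codec[${A}] in scope. Try import mylib.codecs._")
trait Codec[A] {
  def encode(a: A): String
}

object CodecDemo {
  implicit val intCodec: Codec[Int] = (a: Int) => a.toString

  def write[A](a: A)(implicit c: Codec[A]): String = c.encode(a)

  def main(args: Array[String]): Unit = {
    println(write(42)) // 42
    // write("hi") would fail to compile with:
    // "No Codec[String] in scope. Try import mylib.codecs._"
  }
}
```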


@jvican let’s keep feedback on the proposal’s syntax/spelling and other details on its own thread Proposal To Revise Implicit Parameters. This thread was spun off explicitly to avoid being bogged down in such detailed discussions

@jvican I’ve moved my comments there, I haven’t been following these discussions so thanks for pointing out I was off topic here.

Yes, this!

Especially when using libraries such as cats, doobie, shapeless, etc. where knowing the correct import syntax.thing._ or import instances.thing._ incantation is essential.

For simple context/injected parameters like ExecutionContext we can use @implicitNotFound - this same logic could easily be extended to have the compiler error list possible instances of the implicit type where @implicitNotFound isn’t specified, so long as:

  • those instances are defined in the codebase being compiled, or in the library where the implicit trait is defined
  • there aren’t too many

It doesn’t even need to be fast, as this is a special case for an error that’s not expected to happen in most compiles.

The other problem is derived types, such as Doobie’s Read and Circe’s Encoder/Decoder. Here the problem is usually one where you have a nested tree of case classes (or equivalent) and some nested member of that structure doesn’t have a defined instance of the relevant typeclass.

An error message of the form “Unable to derive an instance of Read[Foo] because a Get instance for nested type UUID was not available” would go a very long way toward helping debug such usages!

Now that we have HLists baked into the language, I can only see such patterns becoming more common, and any improvements to the generic derivation machinery that allow for such errors would be a godsend.


If Intellij can do it (which it does currently almost flawlessly now), than I don’t see why the actual compiler can’t do this, especially considering we are going to be shipping libraries with TASTY trees.

It’s because the actual compiler gets invoked with a set of files to compile and a classpath. The classpath points to a transitively unbounded number of classes. As long as the compiler’s interface stays like this, there is no way to establish a boundary between what should be considered and what should not.

A build tool or an IDE has advantages here because it has a project description: it knows what files are part of your project and what your direct dependencies are. So it can choose to search for candidates in that set of files, for instance.

Don’t get me wrong. It would be great if the compiler could give that help, and we should think hard how to do it. It will require a major redesign of the tool chain. In particular it would completely change the interface between compilers, build tools, and IDEs. So it requires buy-in from many people.

That said, I am still convinced that fixing some technical issues is not enough to solve the implicit problem. Yes, some problems can be addressed by tooling. I put a lot of effort into better recursive implicit search diagnostics and that already pays off (for instance, it would correctly diagnose the Marshaller/Decoder problem that you mention). And we can and will do a lot more. But I am convinced we need a better language design as well.

This isn’t really true. All other languages that have some form of implicits (barring Prolog) have always phrased it in terms of parameter passing. In Haskell it’s the same.

Haskell has implicit parameters, which nobody uses. I was referring to typeclasses which do term inference in a way that hides dictionary parameters completely.