Principles for Implicits in Scala 3

As suggested by @lihaoyi, I have broken out my comment on the principles of the new implicit design into a separate thread. Discussions about these principles or proposals of different design approaches would fit most naturally into this new thread.

The material here complements the Overview and Relationship with Scala-2 Implicits document pages with more thoughts on motivations and design principles. Because of its origin as a comment, it is written differently from other proposals in that it reflects my personal thoughts much more subjectively. So you should take it as an argument I bring forward rather than a neutrally formulated end result of deliberations.

Background

I started the implicit redesign after having asked myself a hard question:

  • If implicits are so good, why are they not the run-away success they should be? Why do the great majority of people who are exposed to implicits hate them, yet the same people would love Haskell’s type classes, or Rust’s traits, or Swift’s protocols? The usual answer I get from people who are used to current implicits is that we just need minor tweaks and everything will be fine. I don’t believe that anymore.
  • Otherwise put: What can we learn from the other languages? The main distinguishing factor is that their term synthesis is separate from the rest of programming, and that they more or less hide what terms get generated. What terms are generated is an implementation detail, the user should not be too concerned about it.
  • By contrast, Scala exposes implementation details completely, and just by adding implicit we get candidates for term inference. The advantage of that approach is that it is very lightweight. We only need one modifier and that’s it. The disadvantage is that it is too low-level. It forces the programmer to think in terms of mechanism instead of intent. It is very confusing to learners. It feels a bit like Forth instead of Pascal. Yes, both languages use a stack for parameter passing and Forth makes that explicit. Forth is in that sense the much simpler language. But Pascal is far easier to learn and harder to abuse. Since I believe that’s an apt analogy I also believe that fiddling with Forth (i.e. current implicits) will not solve the problem.

Design Principles

So that led to a new approach that evolved over time. Along the way many variants were tried and discarded. In the end, after lots of experimentation, I arrived at the following principles:

  1. Implicit parameters and arguments should use the same syntax
  2. That syntax should be very different from normal parameters and arguments.
    EDIT: In fact it’s better not to think of them as parameters at all, but rather see them as constraints.
  3. The new syntax should be clear also to casual readers. No cryptic brackets or symbols are allowed.
  4. There should be a single form of implicit instance definition. That syntax must be able to express monomorphic as well as parameterized and conditional definitions, named as well as anonymous instances, and stand-alone instances as well as aliases.
  5. The new syntax should not mirror the full range of choices of the other definitions in Scala, e.g. val vs def, lazy vs strict, concrete vs abstract. Instead one should construct these definitions in the normal world and then inject them separately into the implicit world.
  6. Imports of implicits should be clearly differentiated from normal imports.
  7. Implicit conversions should be derived from the rest, instead of having their own syntax.
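For concreteness, here is a small sketch of what these principles can look like in source code, written in the `given`/`using` syntax that Scala 3 eventually adopted (at the time of this thread the keywords were still in flux, with `implied` and `delegate` among the candidates):

```scala
trait Ord[T]:
  def compare(a: T, b: T): Int

// A named, monomorphic instance (principle 4: one form of instance definition).
given intOrd: Ord[Int] with
  def compare(a: Int, b: Int): Int = a.compareTo(b)

// A conditional (parameterized) instance, expressed with the same form.
given listOrd[T](using ord: Ord[T]): Ord[List[T]] with
  def compare(a: List[T], b: List[T]): Int =
    a.zip(b).map((x, y) => ord.compare(x, y))
      .find(_ != 0)
      .getOrElse(a.length - b.length)

// Context parameters use `using`, visibly distinct from normal
// parameters (principles 1-3): they read as constraints, not arguments.
def max[T](a: T, b: T)(using ord: Ord[T]): T =
  if ord.compare(a, b) > 0 then a else b
```

The names `Ord`, `intOrd`, and `listOrd` are illustrative only; the point is that one syntactic form covers the whole range of instance definitions.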

I arrived at these principles through lots of experimentation. Most of them were not present from the start but were discovered along the way. I believe these principles are worth keeping up, so I am pretty stubborn when it comes to weakening them. And I also believe that, given the mindshift these principles imply, there is no particular value in keeping the syntax close to what it is now. In fact, keeping the syntax close has disadvantages for learning and migration.

Feedback

It would be good to have people’s feedback on the principles themselves as well as on how the actual proposal fits with those principles.

16 Likes

I always thought of them as (formal/mathematical) assumptions

Will there be a separate SIP for implied imports (or where should I comment on its semantics)?

There will be a separate SIP. Should be out shortly. I believe @smarter was taking the lead on this one.

Would you explain what you mean by “run-away success”?
For example, I think the Scala collection library is a success that would be impossible without implicits.

Are there concrete tasks which can be judged as successes or failures?

Maybe the main disappointments with implicits come from applying them to the wrong tasks.
For example, I cannot imagine the builder pattern succeeding whether there is new syntax or not.

1 Like

In my opinion, changing keywords from implicit to given/ implied/ whatever is just another minor tweak when it comes to beginner friendliness, albeit bringing heavy migration pain.

First I’ll link to my previous post on this subject:

The typical problem with implicits is that an implicit conversion doesn’t work, and changing the syntax for implicits has zero influence on that. A pretty rare and simple-to-fix problem is when we have to disambiguate between implicit and explicit parameter lists with an explicit apply:

def method(explicit1: Int)(implicit implicit1: String): String => Int =
  x => explicit1 + x.length

implicit val context: String = "context" // an implicit String must be in scope

method(5).apply("wow") // implicit1 stays implicit; "wow" goes to the returned function

Why does an implicit conversion not work in a particular place (but work somewhere else)?

  • because you forgot to import some implicit conversions or implicit values
  • types don’t match so implicit conversions are rejected silently
  • there are implicits ambiguities
  • you forgot to mark something implicit
  • etc

What does the Scala compiler report when it can’t find an extension method coming from an implicit conversion (or an implicit class, which is syntactic sugar for it)?
value print is not a member of Int
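To ground that error message, here is a minimal sketch (the `PrintOps` enrichment is hypothetical) of the situation where it appears:

```scala
object PrintSyntax {
  // `implicit class` is sugar for an implicit conversion to a wrapper type.
  implicit class PrintOps(val i: Int) extends AnyVal {
    def print: String = s"printed: $i"
  }
}

object Demo {
  import PrintSyntax._     // forget this import and the compiler only says:
  def ok: String = 1.print //   "value print is not a member of Int"
                           // with no hint that an import would fix it.
}
```

Nothing in the error points the reader at `PrintSyntax`; that is exactly the discoverability gap being discussed.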

What does the Rust compiler say?

error[E0369]: binary operation `<<` cannot be applied to type `f32`
 --> src/main.rs:6:1
  |
6 | x << 2;
  | ^^^^^^
  |
  = note: an implementation of `std::ops::Shl` might be missing for `f32`
error[E0119]: conflicting implementations of trait `main::MyTrait` for type `main::Foo`:
  --> src/main.rs:15:1
   |
7  | impl<T> MyTrait for T {
   | --------------------- first implementation here
...
15 | impl MyTrait for Foo { // error: conflicting implementations of trait
   | ^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `main::Foo`

I wrote an application using Rust. Here’s a very simplified case from it.
In one file I have:

pub trait FixedPoint where Self: Sized {
  ...
}
pub trait FixI32: FixedPoint<Raw=i32> {
  ...
}
impl<T: FixedPoint<Raw=i32>> FixI32 for T {}

In a second file I have:

#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
pub struct NoFractI32(i32);

impl FixedPoint for NoFractI32 {
  ...
}

In a third file I forgot to import the relevant typeclasses (signalled by commenting them out):

// use demixer::fixed_point::{FixedPoint, FixI32, FixU32};
use demixer::fixed_point::types::{NoFractI32, Log2D};

#[test]
fn initial_cost_corresponds_to_one_bit_costs_series() {
  ...
  let new_tracker = old_tracker.updated(NoFractI32::ONE.to_fix_i32());
  ...
}

What does the Rust compiler say?

error[E0599]: no method named `to_fix_i32` found for type `demixer::fixed_point::types::NoFractI32` in the current scope
  --> tests/cost_tracking.rs:33:59
   |
33 |     let new_tracker = old_tracker.updated(NoFractI32::ONE.to_fix_i32());
   |                                                           ^^^^^^^^^^
   |
   = help: items from traits can only be used if the trait is in scope
help: the following trait is implemented but not in scope, perhaps add a `use` for it:
   |
20 | use demixer::fixed_point::FixI32;
   |

Plus a similar error message about FixedPoint. Following the compiler’s suggestions correctly solves the problem with missing imports (uses in Rust parlance).

There’s a huge discrepancy between scalac error messages and rustc error messages. The usefulness of Rust’s error messages is IMO the main reason Rust is so well liked.

In Scala we don’t have the simple rules and useful error messages that Rust has. Instead:

  • Scala has very complicated prioritized implicit resolution, while Rust just rejects any ambiguities
  • typeclass instances in Scala can be defined anywhere, while Rust rejects orphans
  • typeclass instances in Scala can be stored in anything, while Rust allows only global definitions
  • Scala just says that a method is not a member of some type and doesn’t even try to suggest any solution, while Rust gives a ready-to-copy-and-paste code snippet that usually solves the problem

Rust also has more simplifications compared to Scala, e.g. Rust doesn’t have method overloading: Justification for Rust not Supporting Function Overloading (directly) - Rust Internals

Complex implicit conversions/classes/whatever are written almost exclusively by library authors. A library user’s job is usually to have the proper implicits imported into scope. Implicits defined by ordinary Scala programmers are usually simple, like an implicit correlation ID or an implicit ExecutionContext.
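The “simple” end of that spectrum looks roughly like this; a sketch (the method and names are made up) of an implicit ExecutionContext threaded through a method without ceremony:

```scala
object SimpleImplicitExample {
  import scala.concurrent.{Await, ExecutionContext, Future}
  import scala.concurrent.duration._

  // A typical application-level implicit: no derivation, no priorities,
  // just one context value passed along without boilerplate.
  def fetchLength(s: String)(implicit ec: ExecutionContext): Future[Int] =
    Future(s.length)

  def run(): Int = {
    implicit val ec: ExecutionContext = ExecutionContext.global
    // `fetchLength` picks up `ec` implicitly; only the data is passed explicitly.
    Await.result(fetchLength("hello"), 1.second)
  }
}
```

Nothing here requires understanding implicit resolution priorities; it is plain context passing, which is what most non-library code uses implicits for.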

Providing useful suggestions in compilation errors for all existing code may be unfeasible now, but once the Scala compiler starts giving suggestions to fix missing implicits in scope, library authors will reorganize their libraries to make the compiler’s suggestions more useful. The Rust compiler was designed to provide useful error messages from the start (i.e. from the first public release, I think), so if we want to compete with Rust in this area we must make useful error messages a core feature of the compiler.

Rust’s syntax related to typeclasses wasn’t perfect either, but that didn’t give Rust bad PR. Some explanation: Redirecting...

Using just the trait name for trait objects turned out to be a bad decision. The current syntax is often ambiguous and confusing, even to veterans, and favors a feature that is not more frequently used than its alternatives, is sometimes slower, and often cannot be used at all when its alternatives can.

To summarize:
The typical problem with implicits is not the syntax, but figuring out what to import to have the required extension methods available on a type.

6 Likes

I agree that Scala can learn a thing or two from Rust’s compiler errors. But that is completely orthogonal to the proposal. You can change the syntax and improve error messages.

I disagree that the proposal is just about changing implicit to implied/given. It is about making the language more regular, and having more distinct syntax for the various usages of implicits, like conversions and type classes. The new syntax also makes a clearer distinction between normal parameters and implicit parameters.

That said, you make a fair observation about cryptic error messages and we should maybe give it more emphasis than is currently done.

I completely agree.

It does not matter which. If the IDE code assistant does not help me, I will not like such methods.
I need people in my company not to waste time on this.
So I would prefer ‘CollectionHelper.asScala(someVal)’ just because it is simpler to remember, if I do not use it often.

Theoretically compiler errors are orthogonal to language redesign, but in this case language redesign is motivated by comparisons to other languages that have similar features (look at the section “background” in Martin’s original post). Therefore I’ve provided a more informed comparison, I think (I’ve actually programmed in Rust for many months).

I also think that Rust’s syntax isn’t any better than current Scala’s syntax for defining typeclasses. Rust typeclasses are pretty limited and it is often easier to define them in Scala. But because Scala is more flexible and powerful, people can encode much more complex typeclass derivations in Scala, leading to libraries that are very complex to use. However, logic that can be easily expressed in both languages doesn’t look worse in Scala.

2 Likes

(1) That syntax should be very different from normal parameters and arguments.
EDIT: In fact it’s better not to think of them as parameters at all, but rather see them as constraints.

I see. I always thought of implicit parameters as, well, parameters you can omit. If they are not parameters, then what are they?

Constraints or assumptions or the like, sounds like facts to be asserted or used for reasoning, as if the Scala compiler has now become a theorem prover. Like i < 5 is a constraint. But that’s not what we mean.

Implicits are really just a convenient way to stash away some instances and get them back via their type. Essentially, a key store with types as keys. Let’s use the keyword implied as if it were a key store:

def caller(): Unit = {
  implied.put("Yo")
  called(42)

  called(42)("Hello")
}

def called(i: Int)(s: String = implied.get): Unit = ???

2 Likes

I wasn’t able to find documentation covering the differences (if any) between implicit scope in Scala 2 and Dotty. Can you point me in that direction?

I’d like to review it before commenting further, but generally I’m concerned by anything which increases the “import tax” of implicits, which this point seems to imply is desirable.

2 Likes

I agree that this is the #1 problem. I also basically agree with your comparison to Rust.

I disagree, however, that reworking the term inference system is not part of the solution. The problem with any-random-mix-of-values-defs-and-implicit-classes as a source of inferred terms is that it gets really hard for everyone (compiler included) to know what is going on.

Whether this proposal does enough to simplify term inference so that it’s easy enough to make the compiler be more helpful, I do not know. It will, I think, be easier on users; but as the existing difficulty is mostly in making sure the one right thing is available, and wrong things are not available, I’m not sure that being easier on users is enough.

Rust does that by weakening the system of term inference so that there can only be one right thing, and not many places in which to look for it. That’s handy, but very limiting; if you have a type T then your ordered trees containing type T as a key can have only one order. More generally, you can’t use it for scope injection at all, because the scope is always the same.

Since Rust has zero-cost newtypes, and the newtypes can have alternate orders, and Into is pretty flexible, there are kinda clunky but moderately okay workarounds. But I don’t think we can just emulate Rust in full here without giving up a lot.
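To make the ordered-tree point concrete: in Scala an alternative Ordering instance can be supplied explicitly at the use site, which Rust’s one-instance-per-type coherence rule only permits via newtypes:

```scala
import scala.collection.immutable.TreeSet

// Default instance picked up implicitly from Ordering's companion:
val ascending = TreeSet(3, 1, 2)

// Same element type, different order, chosen explicitly at the call site.
// Under Rust's coherence rules, i32 could have only one Ord instance.
val descending = TreeSet(3, 1, 2)(Ordering.Int.reverse)
```

The explicit second argument list is exactly the escape hatch that a one-instance-per-type discipline rules out.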

I do think the questions of discoverability and comprehensible errors are very important, though.

4 Likes

Yes, it is presently a theorem prover about types. That’s why people worry about soundness: has the compiler correctly proven (and is it even possible to prove that) that this thing is valid given the types involved?

So there are dual functions of implicits: one is as a constraint that limits what is valid to call; the other is a source of functionality that allows you to exploit the now-known constraint.

1 Like

Implementing suggestions in compilation errors would let us validate whether redesigning implicits improves discoverability or not. The number of correct suggestions would be an objective measure of that (i.e. of discoverability).

A question: Rust prohibits orphan instances, so I was under the impression that all implementations have to be defined with a type that forms part of the trait that’s implemented. In an analogous situation Scala would not require an import at all; the implicit would be available anyway. Is that correct, or did I miss something?
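For reference, the Scala situation I mean is an instance placed in a companion object, which the implicit scope finds without any import; a sketch with a hypothetical `Show` typeclass:

```scala
trait Show[T] { def show(t: T): String }

class Point(val x: Int, val y: Int)

object Point {
  // Lives in the companion object, so it is part of the implicit scope
  // of Point: no import is needed at the call site.
  implicit val showPoint: Show[Point] =
    (p: Point) => s"Point(${p.x}, ${p.y})"
}

object Render {
  def render[T](t: T)(implicit s: Show[T]): String = s.show(t)
  // Render.render(new Point(1, 2)) compiles without importing showPoint.
}
```

This is roughly the non-orphan case: the instance is co-located with the type it is for, and resolution succeeds with no import at all.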

Ok, technically, type inference is a kind of theorem proving. But most people writing code would not consider themselves in the business of proving theorems. For them, type inference is just a convenience allowing them to omit stuff. Telling them they need to worry about “proving” things sounds like a mighty deterrent.

I’m talking more about the availability of extension methods than about typeclass instances per se. In Rust, a typeclass instance is not available as a value (even trait objects used for having virtual methods are bound to specific struct or enum instances); instead, visibility of a typeclass adds extension methods to a type.

Example code in Rust https://www.ideone.com/SXefDQ

use struct_here::MyStruct;
//use trait_here::MyTrait; // rustc suggest that, IntelliJ inserts automatically
 
mod struct_here {
    pub struct MyStruct;
 
    impl MyStruct {}
}
 
mod trait_here {
    use struct_here::MyStruct;
 
    pub trait MyTrait {
        fn print_hello(&self);
    }
 
    impl MyTrait for MyStruct {
        fn print_hello(&self) {
            println!("Hello, World!")
        }
    }
}
 
fn main() {
    let instance = MyStruct {};
    instance.print_hello(); // <- situation: I'm writing this line of code
}

Compiler error:

error: no method named `print_hello` found for type `struct_here::MyStruct` in the current scope
  --> prog.rs:26:14
   |
26 |     instance.print_hello(); // <- situation: I'm writing this line of code
   |              ^^^^^^^^^^^
   |
   = help: items from traits can only be used if the trait is in scope; the following trait is implemented but not in scope, perhaps add a `use` for it:
   = help: candidate #1: `use trait_here::MyTrait`

Analogous code in Scala https://www.ideone.com/kY7zRG

import TypeClasses1.struct_here.MyStruct
 
object TypeClasses1 {
  object struct_here {
    class MyStruct
  }
 
  object trait_here {
    trait MyTrait[A] {
      def printHello(): Unit
    }
 
    implicit val myTraitForMyStruct: MyTrait[MyStruct] =
      new MyTrait[MyStruct] {
        override def printHello(): Unit =
          println("Hello, World!")
      }
  }
 
  def main(args: Array[String]): Unit = {
    val instance = new MyStruct
    instance.printHello()
  }
}

Compiler error:

Main.scala:22: error: value printHello is not a member of TypeClasses1.struct_here.MyStruct
    instance.printHello()

I know that typeclasses work somewhat differently in Scala and Rust (and the above Scala code can’t be fixed with an extra import), but Rust’s approach doesn’t look inherently less capable. Typeclasses are ultimately used for extension methods and extra constants. Here’s how a Rust typeclass can add a constant to a base type:

use struct_here::MyStruct;
use trait_here::MyTrait;

mod struct_here {
    pub struct MyStruct;

    impl MyStruct {}
}

mod trait_here {
    use struct_here::MyStruct;

    pub trait MyTrait {
        const MY_CONSTANT: u8;
    }

    impl MyTrait for MyStruct {
        const MY_CONSTANT: u8 = 8;
    }
}

fn main() {
    println!("{}", MyStruct::MY_CONSTANT); // MY_CONSTANT added to MyStruct
}

If I forget to add use trait_here::MyTrait; then rustc prints the following:

error[E0599]: no associated item named `MY_CONSTANT` found for type `struct_here::MyStruct` in the current scope
  --> src/main.rs:23:30
   |
5  |     pub struct MyStruct;
   |     -------------------- associated item `MY_CONSTANT` not found for this
...
23 |     println!("{}", MyStruct::MY_CONSTANT); // MY_CONSTANT added to MyStruct
   |                    ----------^^^^^^^^^^^
   |                    |
   |                    associated item not found in `struct_here::MyStruct`
   |
   = help: items from traits can only be used if the trait is in scope
help: the following trait is implemented but not in scope, perhaps add a `use` for it:
   |
1  | use trait_here::MyTrait;
   |

Overall, in Rust I do not even need to know the names of typeclasses. The Rust compiler will find them for me and suggest them. Same for IntelliJ - it will suggest methods or constants from typeclasses and import those typeclasses (traits) immediately.

Things change when I use a generic type - then we need generic type constraints, i.e. we have to write typeclass names explicitly, e.g. (snippet from my project):

fn mul_wide<A: FixI32, B: FixI32, R: FixI64>(a: &A, b: &B) -> R { ... }

but even then rustc can help when I omit something, e.g. when I change A: FixI32 to just A. The suggestions don’t always work, but they’re better than no suggestions, as in Scala. In this case rustc suggested changing A to A: FixedPoint, where FixedPoint is a supertrait of FixI32. The suggestion failed, but was somewhat close to the truth. I can search for subtraits of FixedPoint: there are only 4 of them (FixI32, FixI64, FixU32 and FixU64), so the choice is easy.

I’m a huge fan of the proposed changes to implicits as well as these principles. As a software engineer and researcher who hops around a variety of codebases in different languages, my main frustrations with Scala implicits as they currently exist are:

  1. Oftentimes when I hit an error for a missing implicit argument that I then correct by importing some module, it seems random or arbitrary. As I write code without an IDE with auto-imports, the ambiguity as to why a particular import fixes the problem can make the code harder to understand. A specific syntax for importing instances for implicit arguments is a huge quality-of-life improvement.
  2. The syntax for implicit conversions in Scala is not super explicit. While it’s a small issue, it’s annoying to have to remember the difference between implicit def and the other areas where implicit appears. I think the implied instance-based syntax that is being proposed will make it much more clear that an implicit conversion is being defined.
  3. The Scala 2 syntax for implicit parameter lists is also confusing. It looks as though only the first item in that list is implicit. Fixing that with the given syntax will make things a lot more clear, while also allowing for multiple implicit parameter lists (if I’m understanding the proposal correctly). The given syntax is the proposal I’m most excited about, as I believe it much better illustrates the “contextual” nature of implicit parameters.
  4. Generally the different meaning of implicit depending on the context makes code harder to understand.

While these may seem like small quibbles for more experienced or involved Scala engineers, these issues are why I do not make use of implicits in my own Scala code. I do think, as has been mentioned, that more helpful compiler errors are necessary to make using implicits easier for newcomers.

3 Likes

I agree with this argument, and it’s why the evidence proposal sits poorly with me. I know that there’s a theorem prover in there, and I sorta-vaguely understand what is going on with it, but it’s not how I think about normal Scala programming. And I think that 95%+ of the Scala programmers out there would simply scratch their heads at it.

So while the whole types-as-proofs thing is real, I’d be very cautious about shoving peoples’ faces in it too aggressively – IMO, it’s likely to turn a number of people away…

Thanks for explaining! To give better error messages here, the compiler would have to know the complete set of sources in a project and its dependencies. I have the impression that’s made possible by the fact that in Rust package management and compilation are tightly integrated. The Scala and Java world is different in that respect.

Are sources really needed? Lots of information can be extracted from ordinary .class files. For example, if I have such a class in Java:

class MyComparator implements Comparator<MyType> {
  ...
}

Then I can analyze MyComparator with reflection and know that it parametrizes Comparator by MyType. https://www.ideone.com/EeLiq4

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.Comparator;
 
public class Main {
    class MyType {
    }
 
    class MyComparator implements Comparator<MyType> {
        @Override
        public int compare(MyType o1, MyType o2) {
            return 0;
        }
 
        @Override
        public boolean equals(Object obj) {
            return false;
        }
    }
 
    public static void main(String[] args) {
        Class<?> klass = MyComparator.class;
        Type comparatorType = klass.getGenericInterfaces()[0];
        System.out.println(comparatorType.getTypeName());
        ParameterizedType preciseType = (ParameterizedType) comparatorType;
        System.out.println(preciseType.getActualTypeArguments()[0]);
    }
}

Result:

java.util.Comparator<Main$MyType>
class Main$MyType

Scala constructs are obviously more complicated than Java ones, but Scala has its own reflection mechanism based on extra information stored (“pickled”) in .class files.