Principles for Implicits in Scala 3

@odersky one thing that’s not clear is why you think classpath scanning is so difficult: we’d be loading pre-compiled metadata generated earlier, so it’s not like we need to run heavy compilation logic to extract the information. Since we control the compiler we can generate whatever metadata we want beforehand, at a time when the compiler already has the information in a convenient representation.

Classpath scanning is pretty common in JVM-land. Spring does it. Dropwizard does it. IntelliJ and Eclipse obviously do it. Even Ammonite does it, to power its autocomplete. Sure, it’s a bit fiddly, and there are things to be careful about, but it’s something that’s been done before many times in many projects. It’s certainly going to be orders of magnitude easier than overhauling both the syntax and semantics of a core language feature and collectively upgrading a hundred million lines of code.
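For a sense of what’s involved, here is a minimal sketch of the enumeration step (my own illustration, not a proposed compiler API; it only lists .tasty entries, whereas a real implementation would read the pre-generated metadata):

import java.io.File
import java.util.jar.JarFile
import scala.collection.JavaConverters._

// Enumerate the .tasty entries visible on a classpath of jars and directories.
def tastyEntries(classpath: Seq[File]): Seq[String] =
  classpath.flatMap { entry =>
    if (entry.getName.endsWith(".jar"))
      new JarFile(entry).entries.asScala
        .map(_.getName)
        .filter(_.endsWith(".tasty"))
        .toSeq
    else {
      // Directory on the classpath: walk it recursively.
      def walk(f: File): Seq[File] =
        if (f.isDirectory) Option(f.listFiles).toSeq.flatten.flatMap(walk)
        else Seq(f)
      walk(entry).map(_.getPath).filter(_.endsWith(".tasty"))
    }
  }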

4 Likes

It doesn’t even need to work with every build tool out of the box. If the compiler needs extra information from the build tool for its suggestion heuristics, then let that be optional. E.g. invoking:

scalac <some options> --implicits-suggestions-classpath=$A:$B:$C <other options>

would cause the compiler to print suggestions on compilation errors. Without --implicits-suggestions-classpath there would be no suggestions. A fair deal, I think.

1 Like

OK. The compiler needs to scan the whole classpath; as you wrote, just the Tasty files, since implicits and extension methods cannot hide in Java class files. If the classpath is too big, it could do the scanning only when -explain is set (maybe triggered by a timeout). That all looks feasible, and it would surely be a big help.

1 Like

Absolutely - it only needs to be a “reasonable best effort”; it’s a guide, not an essential feature.

It’s also okay to be slow because this will only happen on a particular class of build failures (and good search heuristics can help), but it’s not practical to do anything that would blow the heap!

That’s an orthogonal concern, the changes are unrelated to improved error messages. Of course… if the changes help with this then brilliant, but that’s not the primary goal!

Thanks for starting this discussion! It’s important to align on goals before getting into the implementation.

Can we talk about what “good” and “run-away success” mean? While I think that implicits are useful in some scenarios, ultimately I think they should be used sparingly. No matter the implementation, we’re always talking about the compiler turning a type into a value through some automatic process not explicitly directed by the programmer. This has a large cost in terms of readability, maintainability, etc.

When I hear “run-away success” it seems like @odersky wants implicits to be used more often in Scala code. I hope that is not the case, due to the aforementioned cost. Rather, I would like to see implicits that are used rarely but effectively, and that come with great tooling to offset the readability cost. And if we can lower the readability cost by relating implicits to other foundational language concepts (like multiple parameter lists or parameter defaults), that is even better.

Yes, implicits are sometimes useful, and somewhat unique to Scala, and an interesting concept in programming languages. But let’s not make them into a defining feature of Scala code or a feature that eats up the language complexity budget. I’m well aware that Martin has described implicits as one of the defining features of Scala, and as the creator that is certainly his prerogative. But as someone who loves Scala (pretty intensely!), I just can’t say that I agree.

4 Likes

I’ve said it earlier, but I’ll repeat it: Martin’s motivation for the language redesign was the perception of implicits in Scala compared to their equivalents in e.g. Rust:

The answer is tooling, not syntax.

Imagine Rust without suggestions in compilation error messages (and without imports agnostic auto-completion in IntelliJ). Would it still be loved?

I think a proposal for a type-class-inspired syntax that emphasizes the intent of type classes rather than the mechanism is a good idea, but it has little chance of being accepted as is, due to the migration costs for little practical gain.

However, if the type-class-oriented syntax came with simplifications and restrictions that could ensure the resulting type classes are easier to reason about and give rise to better error messages, the proposal would have a better chance of being widely accepted.

So I’d propose to keep the original implicit mechanisms as the general-purpose (but often too-powerful-for-its-own-good) tool, and also add a more user-friendly special syntax for type classes that works similarly behind the scenes, yet has restrictions and is easier for everyday users to work with.

In particular, it should be possible to deal with prioritization and orphan instances more elegantly than with the “hacky” approaches that have been proposed recently for Dotty, which only complicate the already very complicated rules of implicit resolution in ad-hoc ways. And ideally, we’d also have a way to enforce type class instance coherence.
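For readers unfamiliar with the term, here is a minimal sketch of an orphan instance (illustrative names):

trait Eq[A] { def eqv(x: A, y: A): Boolean }
final case class Point(x: Int, y: Int)

object ThirdParty {
  // An "orphan": defined in neither Eq's nor Point's companion object, so
  // implicit search won't find it automatically. Clients must import it
  // explicitly, and two such orphans for the same type can silently clash.
  implicit val pointEq: Eq[Point] = new Eq[Point] {
    def eqv(a: Point, b: Point): Boolean = a.x == b.x && a.y == b.y
  }
}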

4 Likes

I so totally agree with every word @mdedetrich says.

For the record, here is a minimized case where the Rust compiler gives an incorrect suggestion (regarding a typeclass), but it’s still a good starting point: https://www.ideone.com/MD2Ah8

trait SuperTrait {
    type Raw;
 
    fn raw(&self) -> Self::Raw;
}
 
trait SubTrait: SuperTrait<Raw=i32> {}
 
// compiler suggests changing 'B' to 'B: SuperTrait'
// while correct solution is 'B: SubTrait'
fn generic_method<A: SubTrait, B>(a: A, b: B) -> A::Raw {
    a.raw() + b.raw()
}
 
fn main() {}

Result:

error: no method named `raw` found for type `B` in the current scope
  --> prog.rs:12:17
   |
12 |     a.raw() + b.raw()
   |                 ^^^
   |
   = help: items from traits can only be used if the trait is implemented and in scope; the following trait defines an item `raw`, perhaps you need to implement it:
   = help: candidate #1: `SuperTrait`

The standard library’s lack of Monoid seems to be an important symptom

I found that Scala’s typeclass conventions contributed as much to my learning curve as the theory and semantics of typeclasses, if not more. I haven’t found anywhere these conventions are explained well except “Scala with Cats”.

For example, in the circe library we have Decoder.instance. This used to seem arcane, exactly the kind of thing that might make newcomers feel stupid and deter them from continuing with Scala. After reading “Scala with Cats” I now see: “Oh, Decoder.instance is a helper function to construct a typeclass instance for the Decoder typeclass, and this is a common convention across Scala”.
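Concretely, something like this (a sketch; the Person type is made up, but the shape of Decoder.instance is real):

import io.circe.{Decoder, HCursor}

case class Person(name: String, age: Int)

// Decoder.instance lifts a plain function HCursor => Decoder.Result[Person]
// into a typeclass instance, Decoder[Person].
implicit val personDecoder: Decoder[Person] =
  Decoder.instance { (c: HCursor) =>
    for {
      name <- c.downField("name").as[String]
      age  <- c.downField("age").as[Int]
    } yield Person(name, age)
  }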

Compared to a language like golang, Scala might seem to suffer from “canonical Scala” requiring layers of conventions and non-standard libraries that are exogenous to the “official” language. It sounds like the mission for Scala 3 acknowledges some of this and is attempting to build some best practices into the language.

For example, if typeclasses are such a powerful, obvious, necessary part of Scala, why is there no Monoid in the standard library? It seems to me that by respecting the diversity of approaches to these core language features (e.g. scalaz vs cats) we make Scala less approachable, because newcomers must learn 1. the language, 2. the standard library, 3. the “standard” non-standard libraries (for which discovery is a huge issue for newcomers), and 4. the conventions to weave them all together. In golang a user need only learn (1) and (2), and I think this should be the case for Scala 3 as well.
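For reference, the Monoid in question is tiny; a sketch in the cats/scalaz style:

trait Monoid[A] {
  def empty: A                  // identity element
  def combine(x: A, y: A): A    // associative operation
}

object Monoid {
  implicit val stringMonoid: Monoid[String] = new Monoid[String] {
    def empty: String = ""
    def combine(x: String, y: String): String = x + y
  }

  // Generic code can then fold any collection of As:
  def combineAll[A](as: Seq[A])(implicit m: Monoid[A]): A =
    as.foldLeft(m.empty)(m.combine)
}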

Also, things I do with implicits that I hope are supported in Scala 3:

I’m writing a game in Scala where performance is a major concern, empirically dominated by dynamic memory allocations. To address this I have found these practices related to implicits useful:

  • avoiding use of implicit def foo[A], because it reallocates typeclass instances on each interface call, and instead doing manual specialization with val fooA = foo[A] (see the sketch after this list). (I know Dotty has done work on instance derivation but I haven’t studied it.)
  • impure “typeclass instances” eg. DoubleLinkedList.setNext(node, nextNode) = node.concreteNextNode = nextNode
  • “typeclass instances” that are both impure and close over context, eg. an instance of DoubleLinkedList which sets head of list by def setHead(container, newHead) = this.myMap(this.contextualKey)(container) = newHead
  • pseudocode: def DoubleLinkedList.isEmpty[A, B, C, H](h: H, t: InferType[A])(implicit instances for A, B, C, H), where the compiler can’t infer the type parameters from the argument h alone but can with a hint about A; InferType[A] is Option[A] with default value None, which spares the client from writing isEmpty[verbose concrete type parameters](h)
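A minimal sketch of the first point (illustrative Show typeclass and names):

trait Show[A] { def show(a: A): String }

object Instances {
  implicit val intShow: Show[Int] = new Show[Int] {
    def show(a: Int): String = a.toString
  }

  // An implicit def allocates a fresh Show[List[A]] at every summoning site:
  implicit def listShow[A](implicit s: Show[A]): Show[List[A]] =
    new Show[List[A]] {
      def show(as: List[A]): String = as.map(s.show).mkString("[", ", ", "]")
    }

  // Manual specialization: materialize the instance once and reuse it,
  // avoiding the per-call allocation on hot paths.
  val listIntShow: Show[List[Int]] = listShow[Int]
}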

These practices have helped me leverage implicits and I hope they are supported in the implicit redesign.

2 Likes

It would be great to see a good part of Cats imported into the Scala standard library, including Monoid. Perhaps the new collections library would be more amenable to creating Monad instances. I would object, though, to having a standard Monoid instance for Ints, since Int forms a monoid under both addition and multiplication and neither choice is canonical.

Would it be possible to achieve all 7 of the design principles with macros? Racket, for example, is famous for defining its entire class and trait systems with macros (classes, traits, paper). Through composable and easy-to-use macros, Racket exposes high-level syntax for classes, traits, contracts, etc., while still keeping the lower-level substrates available and approachable as well.

A macro for defining extension methods, for example, could expand to an implicit class. If that macro were part of the standard library, the compiler might scan the classpath for extensions defined with it when emitting “method not found” error messages. IDEs (or metals?) could also offer to auto-import an extension while a developer is typing a method call, or even list all importable extensions in the auto-completion list.
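For example (the @extension annotation here is hypothetical; the expansion below it is ordinary Scala today):

// Hypothetical surface syntax:
//   @extension def words(s: String): Array[String] = s.split(' ')
//
// ...which a macro could expand to the familiar encoding:
object StringExtensions {
  implicit class StringWordsOps(private val self: String) extends AnyVal {
    def words: Array[String] = self.split(' ')
  }
}

// usage:
//   import StringExtensions._
//   "hello world".words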

Macros for typeclasses already exist in simulacrum, machinist, etc. Could changes to the macro system make them easier to implement, easier to understand the implementation, and easier to use as syntax without confusing IDEs?

Likewise for context parameters (ExecutionContext, RequestID, UserID, Viewer, etc), could some form of def/context macro expand to def + implicit functions?
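For concreteness, the pattern in question with today’s implicit parameters (the names are made up):

import scala.concurrent.{ExecutionContext, Future}

final case class RequestId(value: String)

// Context threaded through implicit parameters; a def/context macro (or
// Scala 3's implicit function types) could abstract the repeated
// "(implicit ec: ExecutionContext, req: RequestId)" clause away.
def loadUser(id: Long)(implicit ec: ExecutionContext, req: RequestId): Future[String] =
  Future(s"[req=${req.value}] user $id")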

Implicits and macros in Scala 2 enable useful patterns but are arguably advanced features that users of those patterns need to understand to some degree as well. If the patterns were expressible as abstractions and if those abstractions weren’t leaky, Scala could have the approachability of Rust traits, Swift protocols, Kotlin extension functions, and Racket classes without losing the core part of the language that enables this.

1 Like

@dsilvasc Macros might help in some specific cases, but they have restrictions compared to Racket; last I checked, creating declarations in macros was either very fragile or unsupported, depending on which Scala version you consider.

I’ve long wondered why Scala cannot follow Racket, which I’m familiar with. But after working on Dotty for a while, I have a hypothesis: you must restrict your language significantly to allow macros like Racket’s (which are, basically, extremely powerful compiler plugins). Java and Scala give you extreme freedom in writing mutually recursive modules (classes, packages, …). That’s very convenient for users, and mutually recursive modules are painful or impossible in many other languages (Haskell, ML, …).
But they complicate compilation a lot, especially in combination with type inference — compilation must be lazy enough to avoid infinite loops on mutually recursive modules, and that’s hard for compiler authors, and harder for macro authors.
That’s why certain features are so hard to support. For instance, you might want to create declarations in class A based on the type of some other code in B, and have them available right away, as in Racket! But typechecking B can then require knowing declarations in A, making the whole thing circular!

3 Likes

I agree that we should solve existing problems with implicits first, and that the main problem is compiler diagnostics. Some points about this:

  1. For reporting errors about a broken implicit chain (an implicit that would apply if only the implicit it itself needs were available), https://github.com/tek/splain does a great job (if a bit terse). I seem to recall a proposal for merging it into the compiler, which would solve this to a great degree. (Perhaps the output could be made more explanatory, though.)
  2. Much of the discussion was about the difficulty of finding a suggestion for x.foo where foo is not a method on x and is therefore suspected to be an extension method. (It could also be a misspelling or simply a bad guess.) The way I see it, this is a type mismatch, with expected type ? { def foo: ? } and actual type x.type, and it should trigger a search for a suitable implicit conversion, like any type mismatch does (see the sketch after this list). However, because the expected type is structural, searching for suggestions may not be as worthwhile. In any case, the new mechanism for extension methods may obviate this issue. But when the sought type is simpler than a conversion returning a structural type, and therefore easier to search for, we could at least start with that.
  3. If there were some set of conventions the compiler could limit its suggestion search to, that might make things simpler. The conventions could be based on most existing libraries, and they would also encourage libraries to follow them.
  4. Having the compiler dig up and present suggestions would be great, but it’s not the main thing that’s needed. The more basic thing is just to share, in a user-friendly way, whatever information the compiler does have. Some of the compiler flags, as well as Splain, do this to some extent, but not in a way that’s user-friendly enough.
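A minimal sketch of the situation in point 2 (illustrative names):

import scala.language.implicitConversions

class FooOps(val self: Int) {
  def foo: String = s"foo of $self"
}

object Conversions {
  // With this conversion in scope, `1.foo` typechecks: the compiler repairs
  // the missing member by converting Int to a type of shape { def foo: ? },
  // which is exactly the search for suggestions described above.
  implicit def toFooOps(x: Int): FooOps = new FooOps(x)
}

// usage:
//   import Conversions._
//   (1).foo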
2 Likes

I’m not at all convinced that we need to learn this from other languages, or that implicits are too low-level, or that this proposal makes them less low-level. The only way to change this is to make a huge, coerced change in how Scala programmers make design decisions, and I am absolutely opposed to that.

The motivation for each principle is not obvious. I’m sure your experimentation is more than trustworthy, but without the individual motivations spelled out we can’t debate them. It would also be helpful to rank each principle in terms of importance. Meanwhile, I will try to guess the motivations and give my rankings.

I think you mean that explicitly passing an implicit argument should mirror the syntax for declaring the implicit argument? If so, to me this would be a “nice-to-have” at most.

This is a HUGE, FUNDAMENTAL change to Scala. I’m also really not sure I like it.

Clear in what sense? In what sense is the current syntax not clear?

Why?? These seem to be the biggest syntax-changing principles but I don’t see any gain. On the contrary they make the language more rigid and less orthogonal.

I think this is completely orthogonal to everything else. It could be a separate proposal, before or after this one.

Not sure what this means.

Isn’t the whole point of this thread to debate these principles? I mean it seems to me these principles are what is controversial, aside from the bikeshedding that was being done before this was made into a thread.

AFAICT the only “mindshift” mentioned is in the “EDIT” part of principle 2, that implicits are no longer parameters but constraints. The rest are just a “requirements document” for the syntax.

In conclusion, I think there is still a very big missing “Why” here. The costs and risks are tremendous, so ideally the “Why” that I would like to see would be strong enough to make it a no-brainer.

The problems with implicits that I am aware of fall into 3 categories: (1) the learning curve, (2) pain points working with code that uses implicits, and (3) pain points writing code that uses implicits. There are a lot of proposals to solve most of them, most with minimal risk and cost. On the other hand, it’s very unclear which of those problems this proposal solves, and how effectively. (Making a case for how it solves a problem is a first step, although it doesn’t prove that it will; that is why all software development needs good risk management.)

5 Likes

I have to agree as well: I personally don’t think we need to be looking outward to other languages nearly as much as is being made out, nor do I think that “implicits are not a run-away success”.

I don’t see the benefit. The new syntax proposed in the related SIPs will require major refactoring of any non-trivial project, and it does not seem to me to make the concept easier for “new” Scala developers to learn (indeed, as many have mentioned, if we are determined to find an issue with implicits, it is the lack of information in compiler errors rather than the syntax).

1 Like

Given that we have a statically typed language and the proposed changes do not alter the semantics, it should be fairly easy to do all the refactoring with an automated tool.

You just parse the Scala 2 code into an abstract syntax tree and re-render that tree in the newly proposed Scala 3 syntax.

Also, old-style implicits are still supported, so you don’t have to migrate right away.

I will believe that an automated tool can do that when I see it. Not that it isn’t feasible, but as with all software products, until it’s finished you don’t know what might go wrong.

This looks problematic to me because it breaks the following usage pattern:

Assemble my library/framework in a package (along with its set of implicits) and say: all you need to use this package is ‘import myPackage._’. A client need know nothing about the internal organization of myPackage; they can just use it.
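Concretely, the pattern looks like this (illustrative names):

// In the library, the package object carries the implicits:
package object myPackage {
  final case class Meters(value: Double)
  implicit val metersOrdering: Ordering[Meters] = Ordering.by(_.value)
}

// Client code today: one wildcard import brings in everything,
// implicits included:
//   import myPackage._
//   List(Meters(2), Meters(1)).sorted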

Now, a client should use one of:

import myPackage._

or

import myPackage._
implicit import myPackage._

Depending on whether implicit values exist in myPackage. But this breaks encapsulation.