Principles for Implicits in Scala 3

That’s an orthogonal concern; the changes are unrelated to improved error messages. Of course, if the changes help with this then brilliant, but that’s not the primary goal!

Thanks for starting this discussion! It’s important to align on goals before getting into the implementation.

Can we talk about what “good” and “run-away success” mean? While I think that implicits are useful in some scenarios, ultimately I think they should be used sparingly. No matter the implementation, we’re always talking about the compiler automatically turning a type into a value through a process not explicitly directed by the programmer. This has a large cost in terms of readability, maintainability, etc.

When I hear “run-away success” it seems like @odersky wants implicits to be used more often in Scala code. I hope that is not the case, due to the aforementioned cost. Rather, I would like to see implicits that are used rarely but effectively, and that come with great tooling to offset the readability cost. And if we can lower the readability cost by relating implicits to other foundational language concepts (like multiple parameter lists or parameter defaults), that is even better.

Yes, implicits are sometimes useful, and somewhat unique to Scala, and an interesting concept in programming languages. But let’s not make them into a defining feature of Scala code or a feature that eats up the language complexity budget. I’m well aware that Martin has described implicits as one of the defining features of Scala, and as the creator that is certainly his prerogative. But as someone who loves Scala (pretty intensely!), I just can’t say that I agree.

4 Likes

I’ve said it earlier, but I’ll repeat it: Martin’s motivation for the language redesign was the perception of implicits in Scala compared to their equivalents in e.g. Rust:

The answer is tooling, not syntax.

Imagine Rust without suggestions in compilation error messages (and without import-agnostic auto-completion in IntelliJ). Would it still be loved?

I think a proposal for a type-class-inspired syntax that emphasizes the intent of type classes rather than the mechanism is a good idea, but it has little chance of being accepted as is, due to the migration costs for little practical gain.

However, if the type-class-oriented syntax came with simplifications and restrictions that could ensure the resulting type classes are easier to reason about and give rise to better error messages, the proposal would have a better chance of being widely accepted.

So I’d propose keeping the original implicit mechanism as the general-purpose (but often too-powerful-for-its-own-good) tool, while also adding a more user-friendly special syntax for type classes that works similarly behind the scenes but is restricted and easier to work with for everyday users.

In particular, it should be possible to deal with prioritization and orphan instances more elegantly than with the “hacky” approaches that have been proposed recently for Dotty, which only complicate the already very complicated rules of implicit resolution in ad-hoc ways. And ideally, we’d also have a way to enforce type class instance coherence.
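
For reference, the standard Scala 2 prioritization idiom being alluded to is roughly this (a sketch with hypothetical names): instances inherited from a parent trait lose to instances defined in the companion object itself.

trait Show[A] { def show(a: A): String }

trait LowPriorityShow {
  // Fallback: found only when nothing more specific is in scope.
  implicit def fallbackShow[A]: Show[A] = (a: A) => a.toString
}

object Show extends LowPriorityShow {
  // Wins over fallbackShow because instances defined directly in the
  // companion object take priority over those inherited from a parent trait.
  implicit val intShow: Show[Int] = (i: Int) => s"Int($i)"
}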

4 Likes

I so totally agree with every word @mdedetrich says.

For the record, here is a minimized case where the Rust compiler gives an incorrect suggestion (regarding a typeclass), but it’s still a good starting point: https://www.ideone.com/MD2Ah8

trait SuperTrait {
    type Raw;
 
    fn raw(&self) -> Self::Raw;
}
 
trait SubTrait: SuperTrait<Raw=i32> {}
 
// compiler suggests changing 'B' to 'B: SuperTrait'
// while correct solution is 'B: SubTrait'
fn generic_method<A: SubTrait, B>(a: A, b: B) -> A::Raw {
    a.raw() + b.raw()
}
 
fn main() {}

Result:

error: no method named `raw` found for type `B` in the current scope
  --> prog.rs:12:17
   |
12 |     a.raw() + b.raw()
   |                 ^^^
   |
   = help: items from traits can only be used if the trait is implemented and in scope; the following trait defines an item `raw`, perhaps you need to implement it:
   = help: candidate #1: `SuperTrait`

The standard library’s lack of Monoid seems to be an important symptom

I found that Scala’s typeclass conventions contributed as much to my learning curve as the theory and semantics of typeclasses, or more. I haven’t found any place where these conventions are explained well except “Scala with Cats”.

For example, in the circe library we have Decoder.instance. This used to seem arcane, exactly the kind of thing that might make newcomers feel stupid and deter them from continuing with Scala. After reading “Scala with Cats” I now see: “Oh, Decoder.instance is a helper function to construct a typeclass instance for the Decoder typeclass, and this is a common convention across Scala libraries.”
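
For newcomers hitting the same wall, the convention is roughly this (a sketch with a toy Decoder, not circe’s actual code):

trait Decoder[A] {
  def decode(s: String): Either[String, A]
}

object Decoder {
  // The "instance" convention: lift a plain function into a typeclass
  // instance, analogous to circe's Decoder.instance.
  def instance[A](f: String => Either[String, A]): Decoder[A] =
    new Decoder[A] {
      def decode(s: String): Either[String, A] = f(s)
    }

  implicit val intDecoder: Decoder[Int] =
    instance { s =>
      try Right(s.toInt)
      catch { case _: NumberFormatException => Left(s"not an Int: $s") }
    }
}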

Compared to a language like golang, Scala might seem to suffer from “canonical Scala” requiring layers of conventions and non-standard libraries that are exogenous to the “official” language. It sounds like the mission for Scala 3 acknowledges some of this and is attempting to build some best practices into the language.

For example, if typeclasses are such a powerful, obvious, necessary part of Scala, why is there no Monoid in the standard library? It seems to me that by respecting the diversity of approaches to these core language features (e.g. scalaz vs cats) we make Scala less approachable, because newcomers must learn (1) the language, (2) the standard library, (3) the “standard” non-standard libraries (for which discovery is a huge issue for newcomers), and (4) the conventions to weave them all together. In golang a user need only learn (1) and (2), and I think this should be the case for Scala 3 as well.

Also, things I do with implicits that I hope are supported in Scala 3:

I’m writing a game in Scala where performance is a major issue, dominated empirically by dynamic memory allocation. To meet this goal I have found the following implicit-related practices useful:

  • avoiding implicit def foo[A], because it reallocates a typeclass instance on each interface call; instead I do manual specialization with val fooA = foo[A] (see the sketch after this list). (I know Dotty has done work on instance derivation, but I haven’t studied it.)
  • impure “typeclass instances”, e.g. a DoubleLinkedList instance where def setNext(node, nextNode) = node.concreteNextNode = nextNode
  • “typeclass instances” that are both impure and close over context, e.g. an instance of DoubleLinkedList that sets the head of a list via def setHead(container, newHead) = this.myMap(this.contextualKey)(container) = newHead
  • def isEmpty[A, B, C, H](h: H, t: InferType[A])(implicit arguments involving A, B, C, H) on DoubleLinkedList, where the type parameters can’t be inferred by the compiler from the argument h alone but can be inferred with a hint about A (where def InferType[A]: Option[A] = None), sparing the client from writing isEmpty[verbose concrete type parameters](h)
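
Here is a minimal sketch of the first bullet (hypothetical names): the implicit def allocates a fresh instance every time it is summoned, while the cached val does not.

trait Codec[A] { def encode(a: A): String }

object Codec {
  implicit val intCodec: Codec[Int] =
    new Codec[Int] { def encode(a: Int): String = a.toString }

  // Summoning this allocates a new Codec[List[A]] each time, e.g. once
  // per call in a hot loop.
  implicit def listCodec[A](implicit ca: Codec[A]): Codec[List[A]] =
    new Codec[List[A]] {
      def encode(as: List[A]): String = as.map(ca.encode).mkString(",")
    }

  // Manual specialization: summon once, cache in a val, and reuse it.
  val intListCodec: Codec[List[Int]] = listCodec[Int]
}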

These practices have helped me leverage implicits and I hope they are supported in the implicit redesign.

2 Likes

It would be great to see a good part of Cats imported into standard Scala, including Monoid. Perhaps the new collections library would be more amenable to creating Monad instances. I would object, though, to having a standard Monoid instance for Int.
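
Presumably the objection is that Int forms a monoid in more than one natural way, so no single instance deserves to be the implicit default. A sketch:

trait Monoid[A] {
  def empty: A
  def combine(x: A, y: A): A
}

object IntMonoids {
  // Both are lawful Monoid[Int] instances; blessing either one as the
  // standard implicit would silently privilege one over the other.
  val additive: Monoid[Int] = new Monoid[Int] {
    def empty: Int = 0
    def combine(x: Int, y: Int): Int = x + y
  }

  val multiplicative: Monoid[Int] = new Monoid[Int] {
    def empty: Int = 1
    def combine(x: Int, y: Int): Int = x * y
  }
}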

Would it be possible to achieve all 7 of the design principles with macros? Racket, for example, is famous for defining its entire class and trait systems with macros (classes, traits, paper). Through composable and easy-to-use macros, Racket exposes high-level syntax for classes, traits, contracts, etc., while still keeping the lower-level substrates available and approachable as well.

A macro for defining extension methods, for example, could expand to an implicit class. If that macro were part of the standard library, the compiler might scan the classpath for extensions defined with it when emitting “method not found” error messages. IDEs (or metals?) could also offer to auto-import an extension while a developer is typing a method call, or even list all importable extensions in the auto-completion list.
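
For concreteness, such a macro might expand to roughly the standard implicit-class encoding (a sketch with hypothetical names, not any existing macro's actual output):

object StringExtensions {
  // The usual Scala 2 encoding: an implicit value class wrapping the
  // receiver, whose methods act as extension methods on String.
  implicit class RichIndent(private val s: String) extends AnyVal {
    def indented(n: Int): String = (" " * n) + s
  }
}

// After `import StringExtensions._`, the call "hello".indented(2) compiles.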

Macros for typeclasses already exist in simulacrum, machinist, etc. Could changes to the macro system make them easier to implement, make their implementations easier to understand, and make them easier to use as syntax without confusing IDEs?

Likewise for context parameters (ExecutionContext, RequestID, UserID, Viewer, etc), could some form of def/context macro expand to def + implicit functions?
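
For illustration, the hand-written form such a def/context macro could generate might look like this (a sketch; UserService and fetchUser are hypothetical):

import scala.concurrent.{ExecutionContext, Future}

object UserService {
  // What the macro would expand to: an ordinary def that receives its
  // context (here an ExecutionContext) as an implicit parameter.
  def fetchUser(id: Long)(implicit ec: ExecutionContext): Future[String] =
    Future(s"user-$id")
}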

Implicits and macros in Scala 2 enable useful patterns, but they are arguably advanced features that users of those patterns also need to understand to some degree. If the patterns were expressible as abstractions, and if those abstractions weren’t leaky, Scala could have the approachability of Rust traits, Swift protocols, Kotlin extension functions, and Racket classes without losing the core part of the language that enables all this.

1 Like

@dsilvasc Macros might help in some specific cases, but they have restrictions compared to Racket; last I checked, creating declarations in macros was either very fragile or unsupported, depending on which Scala version you consider.

I’ve long wondered why Scala cannot follow Racket, which I’m familiar with. But after working on Dotty for a while, I have a hypothesis: you must restrict your language significantly to allow macros like Racket’s (which are, basically, extremely powerful compiler plugins). Java and Scala give you extreme freedom in writing mutually recursive modules (classes, packages, …). That’s very convenient for users, and mutually recursive modules are painful or impossible in many other languages (Haskell, ML, …).
But they complicate compilation a lot, especially in combination with type inference — compilation must be lazy enough to avoid infinite loops on mutually recursive modules, and that’s hard for compiler authors, and harder for macro authors.
That’s why certain features are so hard to support. For instance, you might want to create declarations in class A based on the type of some other code in B, and have them available right away, as in Racket! But typechecking B can then require knowing declarations in A, making the whole thing circular!
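
To make the circularity concrete, a sketch: mutually recursive classes are everyday Scala, so a macro that injects members into A based on B’s type could not typecheck B first, because B already refers back to A.

// Legal, everyday Scala: each class's body refers to the other.
class A { def partner: B = new B }
class B { def partner: A = new A }
// If a macro were to add members to A derived from B's inferred types,
// neither class could be fully typechecked before the other.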

3 Likes

I agree that we should solve the existing problems with implicits first, and that the main problem is compiler diagnostics. Some points about this:

  1. For reporting errors about a broken implicit chain (where an implicit would be available if only the implicit it itself needs could be found), https://github.com/tek/splain does a great job (if a bit terse). I seem to recall a proposal to merge it into the compiler, which would solve this to a great degree. (Perhaps the output could be made more explanatory, though.)
  2. Much of the discussion was about the difficulty of finding a suggestion for x.foo where foo is not a method on x and is therefore suspected to be an extension method. (It could also be a misspelling or simply a bad guess.) The way I see it, this is a type mismatch with expected type ? { def foo: ? } and actual type x.type, which triggers a search for a suitable implicit conversion just like any other type mismatch does. However, because the expected type is structural, searching for suggestions may be less worthwhile. In any case, the new mechanism for extension methods may obviate this issue. But when the sought type is simpler than a structural type, if that’s easier, then at least start with that. (See the sketch after this list.)
  3. If there were some set of conventions the compiler could limit its suggestion search to, that might make things simpler. The conventions could be based on most existing libraries, and they would also encourage libraries to follow them.
  4. Having the compiler dig up and present suggestions would be great, but it’s not the main thing that’s needed. The more basic thing is simply to share, in a user-friendly way, whatever information the compiler does have. Some of the compiler flags, as well as Splain, do this to some extent, but not in a way that’s user-friendly enough.
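
Regarding point 2, the situation is roughly this (a sketch with hypothetical names): foo is not a member of x’s static type, so the call compiles only if an implicit conversion (or, in the new scheme, an extension method) supplies it.

import scala.language.implicitConversions

class Target { def foo: Int = 42 }
class X

object Conversions {
  // With this conversion in scope, (new X).foo typechecks; without it,
  // the compiler could search for exactly such a conversion to suggest.
  implicit def xHasFoo(x: X): Target = new Target
}
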
2 Likes

I’m not at all convinced that we need to learn this from other languages, or that implicits are too low-level, or that this proposal makes them less low-level. The only way to change this is to make a huge, coerced change in how Scala programmers make design decisions, and I am absolutely opposed to that.

The motivation for each principle is not obvious. I’m sure your experimentation is more than trustworthy, but without the individual motivations spelled out we can’t debate them. It would also be helpful to rank the principles by importance. Meanwhile, I will try to guess the motivations and give my own rankings.

I think you mean that explicitly passing an implicit argument should mirror the syntax for declaring the implicit argument? If so, to me this would be a “nice-to-have” at most.
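
For context, here is what that mirroring might mean in Scala 2 terms (a sketch): an explicitly passed implicit argument looks like any other argument, so nothing at the call site marks it as implicit.

object Example {
  def sorted[A](xs: List[A])(implicit ord: Ordering[A]): List[A] =
    xs.sorted(ord)

  // Explicit passing today is indistinguishable from an ordinary argument
  // list: nothing says that `reverse` here fills an implicit slot.
  val descending = sorted(List(3, 1, 2))(Ordering.Int.reverse)
}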

This is a HUGE, FUNDAMENTAL change to Scala. I’m also really not sure I like it.

Clear in what sense? In what sense is the current syntax not clear?

Why?? These seem to be the biggest syntax-changing principles but I don’t see any gain. On the contrary they make the language more rigid and less orthogonal.

I think this is completely orthogonal to everything else. It could be a separate proposal, before or after this one.

Not sure what this means.

Isn’t the whole point of this thread to debate these principles? I mean it seems to me these principles are what is controversial, aside from the bikeshedding that was being done before this was made into a thread.

AFAICT the only “mindshift” mentioned is in the “EDIT” part of principle 2, that implicits are no longer parameters but constraints. The rest are just a “requirements document” for the syntax.

In conclusion, I think there is still a very big missing “Why” here. The costs and risks are tremendous, so ideally the “Why” that I would like to see would be strong enough to make it a no-brainer.

The problems with implicits that I am aware of fall into 3 categories: (1) the learning curve, (2) pain points when working with code that uses implicits, and (3) pain points when writing code that uses implicits. There are a lot of proposals to solve most of them, most with minimal risk and cost. On the other hand, it’s very unclear which of those problems this proposal solves, and how effectively. (Making a case for how it solves a problem is a first step, although it doesn’t prove that it will; that is why all software development needs good risk management.)

5 Likes

I also have to agree: I personally don’t think we need to be looking outward to other languages nearly as much as is being made out, nor that “implicits are not a run-away success”.

I don’t see the benefit. The new syntax proposals in the related SIPs will require major code refactoring for any non-trivial project, and they do not seem to me to make the concept easier to learn for “new” Scala developers (indeed, as many have mentioned, if we are determined to find an issue with implicits, it is the lack of information in compiler errors rather than the syntax).

1 Like

Given that we have a statically typed language, and that the proposed changes do not alter the semantics, it should be fairly easy to do all the refactoring with an automated tool.

You just parse the Scala 2 code into an abstract syntax tree, and emit that tree in the newly proposed syntax for Scala 3.

Also, old-style implicits are still supported, so you don’t have to do it right away.
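
For illustration, the mechanical rewrite being described would take input like this (a sketch; the proposed target syntax was still in flux at the time of this thread):

object Before {
  // Scala 2 input the tool would parse:
  implicit val descending: Ordering[Int] = Ordering.Int.reverse

  def sortDesc(xs: List[Int])(implicit ord: Ordering[Int]): List[Int] =
    xs.sorted(ord)
}
// The emitted output would replace the `implicit val` with an instance
// definition in the proposed syntax and the `implicit` parameter marker
// with the proposed parameter keyword.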

I will believe that an automated tool will be able to do that when I see it. Not that it isn’t feasible, but like all software products, until it’s finished you don’t know what might go wrong.

This looks problematic to me because it breaks the following usage pattern:

Assemble my library/framework in a package (along with its set of implicits) and say: all you need to use this package is ‘import myPackage._’. A client need know nothing about the internal organization of myPackage; they can just use it.

Now, under the proposal, a client must use either:

import myPackage._

or

import myPackage._
import implied myPackage._

depending on whether implicit values exist in myPackage. But this breaks encapsulation: the client now has to know whether the package defines implicits.

I’ve been trying to catch up on all of these related threads recently, while also asking myself where I have observed problems and downsides with implicits (personally, when first learning Scala, and more recently when reading other people’s code as they learn it).

Several of the cases have been brought up already, here and elsewhere: conversions, poor compiler/tooling output, explaining them to noobs, etc. I think the poor compiler/tooling issue is HUGE. IntelliJ’s recent steps forward have moved the situation from ‘untenable’ to ‘fine for non-complex use cases’. But there’s still a ways to go there.

The biggest meta-issue I’ve seen for early-to-mid Scala folks, myself included, ended up being “When should I use them?” and “How can I avoid horrible-to-use and horrible-to-understand code when using them?”. My understanding of this has evolved quite a bit through a combination of:

  1. Personal experience / banging head against the wall / hard knocks
  2. Learning MUCH more about FP through years of study, conferences, and videos
  3. Internalizing the mathematical foundations of FP
  4. Learning Haskell

Now, after many years, I feel I can use implicits effectively and responsibly, not to the detriment of the codebase I’m working in. The biggest takeaway I’ve internalized about implicits is indeed as Martin mentions here:

  • They should generally be treated as constraints rather than parameters.

YES!!!

This realization took me a long time to come to, but looking back over my last 6+ years with Scala, most of the non-tooling problems I’ve seen with implicits roughly boil down to treating them as “just another parameter” whose syntax one can conveniently exploit to avoid typing a few characters. The complexity and readability costs of this mistake are common, cumulative, and in my opinion the biggest downside of existing implicits.
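
One way to see the constraint reading in today’s syntax (a sketch): a context bound states a requirement on a type parameter without naming a value parameter at all.

object Maxing {
  // Constraint-flavoured: "A must have an Ordering" is stated as a bound,
  // with no named parameter to mistake for ordinary data.
  def maxOf[A: Ordering](xs: List[A]): A = xs.max

  // Parameter-flavoured spelling of the same thing, which invites treating
  // the instance as "just another parameter".
  def maxOf2[A](xs: List[A])(implicit ord: Ordering[A]): A = xs.max(ord)
}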

Marking this powerful, useful concept as something distinct from “just another parameter” (even if, in low-level reality, that’s what it boils down to) is likely to have a huge impact on reducing the detrimental effects I mentioned above, as well as to help push the training of new Scala developers, and the explanation of implicits and their responsible use, in a much better (and simpler) direction.

This is why my own thoughts on the current proposal have changed. I no longer consider @lihaoyi ’s alternate proposal, which retains more of the familiar syntax on the grounds that it is “much more familiar to people”, to be the better approach, though I would have initially. I think the lack of familiar syntax should be considered a feature rather than a bug for a concept as powerful and easy to misuse as implicits. This feature should be “easy to use ‘the right way’” and “hard to use ‘the wrong way’”, for some suitable definition thereof; and if familiar syntax doesn’t achieve that goal because it maps too easily onto the familiar (and thus onto misuse), then the familiarity is a disadvantageous flaw in any proposal.

My two cents…

4 Likes

I also had to take a course in Haskell (LYAHFGG in my case) to understand monads, typeclasses, and FP’s approach to application design in general (because after years of doing imperative OOP I couldn’t see the deep sense in avoiding side effects through seemingly complicated mechanics). But I do not think changing the look of the programming language as proposed here will help anything. All the new constructs are generally as flexible as the previous ones. The biggest change is the removal of implicit conversions. Other than that, the similarity is so high that automatic rewriting is proposed. I do not see how the new syntax makes bad patterns harder to use, except for implicit conversions; and to remove implicit conversions you do not need to remove the implicit keyword.

Splitting implicit into e.g. given and implied won’t prevent programmers from thinking of them in the same terms. In fact, you must describe given and implied using very similar terminology, such as “implicit instances”. If we set implicit conversions aside, the situation is as follows:

  • (in current Scala) implicit marks declarations that are automatically passed to parameters marked implicit
  • (in future Scala) implied and given mark declarations that are automatically passed to parameters marked given

Not a big change. The biggest difference is having to remember when to use implied and when to use given, where previously there was a single keyword (implicit).

1 Like

I suppose my post was a longer-winded way of saying that using syntax unfamiliar to people (who would otherwise see a familiar-ish syntax and use implicits in a familiar-ish-but-leads-to-bad-code way) is a net win, even if the two proposals (and the existing state) all have similar expressive power.

Someone upthread (or cross-thread) made the comment that “Java folks understood implicits relatively easily by comparison to Guice”, and I think that captures exactly what I don’t want to see as a model for what this feature is used for. I’ve seen it already, and the results are awful. Hard pass, thanks. :slight_smile:

1 Like

implied and given will be equally easy to explain by comparison to Guice.

Remember that Guice has a more complicated model than implicits in Scala:

  • in current Scala you have just implicit
  • in future Scala there could be given and implied
  • Guice, OTOH, has several annotations and helper types (Inject, Provider, Singleton, etc.) plus various scopes (e.g. request), plugins, etc.

Both the implicit marker and the implied and given markers can be seen as simplified versions of Guice annotations.

It’s just a matter of time until given is given the same abuse as implicit.

Scala is not a policeman that prevents people from:

  • abusing tuples instead of defining classes with proper names
  • creating very deep inheritance hierarchies
  • passing functions with super-generic signatures like String => Int through many layers of methods, so that their contract and purpose are very hard to track
  • using mutability
  • using unsafe constructs
  • etc

Scala never warns about such design issues. One could add such a policeman.