Alternative proposal for implicits

Update: For the full proposal, visit here. It’s different from the proposal originally posted here, mainly due to the addition of dedicated type-class syntax and a new component for declaring implicit interpretations – “lenses”.


Motivation

This (pre) SIP is a direct follow-up to the updated implicits proposal. I feel that the previous SIP doesn’t accomplish its main goal – making implicits clearer while retaining their usability. If anything, it makes things worse.

I believe it’s not that the proposal has specific problematic parts which sum up to a big problem, but rather that the whole approach is misguided. It views all of the design patterns achieved today with implicits as different forms of “term inference”: dependency injection, extension methods, type classes and implicit conversions.

Making this abstract association between those designs via a common construct is a major source of the confusion surrounding these features. A developer hardly ever needs to understand the underlying connection between these designs.

At its core, this SIP takes the polar opposite approach and wishes to break this abstraction and give each design its own unique set of constructs. Call a spade a spade; or as we say in Hebrew, call a child by his name.


Dependency injection & Type Classes

Implicit parameters are at the core of two major design patterns: dependency injection and type-classes. Both patterns are a form of the separation-of-concerns principle; the difference between them lies in the use of type-parameterized (generic) parameters in the latter but not in the former.

The original wording – implicit – is quite successful in capturing the idea of this feature, but it conflates two different aspects of the mechanism: (a) declaring where a function may accept an implicit value, and (b) the action of making a value implicit and thus available to those functions. If I’m not mistaken, this is the only keyword in the language that has multiple different meanings.

This polysemy is evident not just in the core mechanism of implicits; it is also reflected in the use of the keyword as a definition construct in all of the “implicit design patterns”, which makes it seem as if all of those patterns are strongly related, when in reality they are not.

Thus, it would be fitting to use a different wording that will (a) remain close to the original wording, (b) still capture the essence of the mechanism and – most importantly – (c) distinguish between the definition site and the call site.

Implication

The mechanism should be thought of as a cloud residing in a certain scope, from which all the values are automatically applied to functions invoked in said scope. This is the “implication cloud” of the scope.

In order to add a value to that cloud, one has to imply it. Implying a value is an active operation on that value; it is not a defining property of that value. It is an operation with (local) side effects.

In order to make parameters of a function able to accept values from the implication cloud (where the function is called), one has to define them as implied.

Defining parameters as implied also automatically places them in the cloud of the function’s scope, and thus propagation of these implied parameters throughout nested invocations is done with minimal syntax.
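
For illustration, here is a sketch of that propagation in the proposed syntax (the fetchUser, renderUser and User names are hypothetical):

def fetchUser(id: Long)(implied ec: ExecutionContext): Future[User] = ???

def renderUser(id: Long)(implied ec: ExecutionContext): Future[String] =
  fetchUser(id).map(_.toString) // `ec` is in this scope's cloud, so it is applied automatically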

The result is something fairly similar to what we’re used to with the basic usage of implicits:

// dependency injection

object Future {
  def apply[A](body: => A)(implied ec: ExecutionContext) = ???
}

imply ExecutionContext.global
Future { ... }

// type class

trait Ordering[A] { ... }
object Ordering {
  object IntOrdering extends Ordering[Int] { ... }
}

imply Ordering._
Seq(2, 1, 3).sorted

Only parameters are implied

It is possible to imply any instance available in the language; however, it is impossible to define anything other than parameters as implied – not class, object, def, val nor var.

Previously, defining these “things” as implicit served only two use-cases: (a) adding values to the implication cloud in a local scope; and (b) forcing these things into the implication cloud of any importing scope.

The first use-case is now replaced with imply, which is anonymous and accepts any expression:

imply 2
imply otherVariable
imply function(arguments)
imply new Ordering[Int] { ... } // obviously without `new` if anonymous classes change syntax

The second use-case – forcing values into the implication cloud on import – is quite problematic. It uses a construct (implicit) in the place of a modifier, but it serves as an operative keyword; it is not an intrinsic property of a “thing”, but is rather an operation with side effects.

It may very well be that the prevalence of this use-case increased due to the conflation of the definition site and the call site that implicit suffers from. With the new syntax, it becomes apparent that this use-case is odd:

imply object IntOrdering extends Ordering[Int] { ... }

Type-classes should be defined just like any other object; if one wishes to emphasize their usage as a type-class, one should either state it in their documentation or annotate them with @typeclass[Ordering] (perhaps a new standard annotation?).
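
For example, assuming such an annotation existed:

@typeclass[Ordering]
object IntOrdering extends Ordering[Int] { ... }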

It may still be highly useful to provide a shortcut for the two underlying operations of using type-classes:

import imply Ordering._
import imply Ordering.IntOrdering
import Ordering.{imply IntOrdering}

Unlike the previous SIP, I believe that this mechanism of “implicit import” should only serve as a shortcut for these two operations, and shouldn’t provide a new namespace. Modularity and encapsulation can be easily achieved with other constructs:

object Ordering {
  private val secret = ???
  object IntOrdering extends Ordering[Int] { ... } // uses `secret`
}

import imply Ordering._ // implies `IntOrdering`, but not `secret` (as it is private)

Context bound

The following is not a direct continuation of the previously laid-out line of thought, and is therefore a more “optional” idea in this SIP – dropping context bounds (by deprecation first, of course).

Context bounds are perhaps not the major source of confusion surrounding implicits, but their irregularity does add to that confusion, while they only serve as a shortcut for something that can be expressed with existing constructs (an implied parameter with a single type-parameter).
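
To illustrate, a context bound today and its longhand form under this proposal (a sketch):

def max[A: Ordering](xs: Seq[A]): A = ???                  // with a context bound
def max[A](xs: Seq[A])(implied ord: Ordering[A]): A = ???  // the equivalent implied parameter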

What makes them irregular?

  1. They only work with a single type-parameter; any type-class with more than one type-parameter cannot be expressed via context bounds.
  2. The : syntax makes it seem as if the generic type is bounded by the type-class, because the upper and lower type bounds use similar syntax (<:, >:).
  3. They can be expressed in an alternative way, while the other (type) bounds cannot. This makes it seem as if all bounds are somehow related to implicits.

Context bounds also allow for anonymous implied parameters, which can be potentially expressed in other ways:

def foo[A](arg: Int)(implied _: Ordering[A], _: Eql[A, A]) = ???

Dropping context bounds also means that we can probably do away with implicitly / summon.
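
That is because an implied parameter is named, so the instance can be referenced directly instead of summoned (a sketch, reusing max from above):

def max[A](xs: Seq[A])(implied ord: Ordering[A]): A =
  xs.reduce((a, b) => if (ord.gt(a, b)) a else b) // `ord` is used by name; no implicitly[Ordering[A]] needed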


Extension methods

I wasn’t here in the early days of Scala, but I understand that in some way implicit parameters were introduced to the language as an abstraction of extension methods (and conversions). Regardless, I believe that extension methods are quite different from implicit parameters, and should be completely separated from that concept.

For starters, extension methods don’t have anything to do with functions or parameters. They are all about extending objects in a different way than inheritance. The same thing could be said about type-classes, which may be seen as a form of ad hoc polymorphism, but they are more general in the sense that they can extend multiple types (with more than one type parameter), while extension methods cannot.

Moreover, extension methods are part of a “magical class” – an extension – which is constructed invisibly and tied to an (extended) instance. One might think of this behind-the-scenes instance as “implied”, but this has nothing to do with the “implication cloud” of implicit parameters; in no way does the extended instance declare that it accepts implied methods from the implication cloud.

Lastly, extensions are a unique “thing” in the language. They are not an implied class, object, def, etc. Their semantics differ from those of other existing constructs; they are constructed (instantiated) differently and have a special “self” member (referring to the extended instance). Type-classes do not have any such special semantics, but are rather a pattern of using the regular “things” differently.

Therefore, extension methods deserve their own unique syntax:

extension Pretty extends Any { 
  // `this` is the extended instance
  def prettyToString() = s"Pretty: ${this.toString()}" 
}
1.prettyToString()

trait RandomSelection[A] {
  def random(): A
}
extension SeqOps[A] extends Seq[A] with RandomSelection[A] {
  override def random(): A = ???
}
Seq(1,2).random()

Since an extension has no defined constructor, it cannot accept parameters. This reflects the notion that extensions should not maintain state (they really shouldn’t), and are not a replacement for delegation / decoration patterns. It also means that context bounds are illegal on the type-parameters of the extension (but not on the methods themselves).
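
A sketch of these rules (the Tagged, SortedOps and SeqExtras names are hypothetical):

extension Tagged(tag: String) extends Any { ... }       // illegal: extensions cannot accept parameters
extension SortedOps[A: Ordering] extends Seq[A] { ... } // illegal: context bound on the extension's type-parameter
extension SeqExtras[A] extends Seq[A] {
  def largestBy[B: Ordering](f: A => B): A = ???        // legal: context bound on the method itself
}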

Learning from past mistakes, extensions should probably not have their own bounds (like the old view bounds); they should not be used to implement type-classes, nor to define the interface of a function. Furthermore, chaining extensions (an extension extending another extension) should probably also be disallowed.


Implicit conversions

Lastly, we have implicit conversions; or rather, type conversions. They too differ from implied parameters. Much like extensions, they are often thought of as “implied” because their mechanism works behind the scenes, but they are in fact a “thing” with its own special semantics:

conversion intToDouble(i: Int): Double = ???

It is important to remember that conversions are not extensions. The latter extends an object and can refer to it after it has been constructed, while the former cannot.
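
A sketch of the distinction (the IntOps name is hypothetical):

extension IntOps extends Int {
  def doubled(): Int = this * 2 // the extension keeps referring to the extended instance via `this`
}

conversion intToDouble(i: Int): Double = i.toDouble // only produces a new value; the original instance is not referenced afterwards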

There is also the topic of chaining conversions, which I have no strong opinion about, and can remain the same as far as I’m concerned.


At this point, even if it is the right thing to do, nobody will do it :roll_eyes:

Please don’t discourage good discussion.

There are a lot of advantages to this proposal. The current system of givens has a lot of issues. There’s also Rex’s proposal, which I like a lot.

No solution is perfect, but I think through robust discussions and back-and-forth we could arrive at something much better.


Most of these are QoL issues which are easily solvable (and have been extensively documented). I think it makes much more sense to solve those QoL issues than to throw out the baby with the bathwater.

The problem is that the solution is being presented as all-or-nothing. Our only choice is to either accept this solution or have nothing happen, because all suggestions for improvements to current implicits have been shot down.

Also, as people have started using the new solution, many QoL issues have surfaced that even the old one didn’t have, so I don’t see it as a general net improvement.

Are you talking about givens or Eyal’s suggestions?


A bit off topic, but it would be interesting to see a list of QoL issues that have arisen from actual usage so far.


To be clear: under this proposal, would the companion object automatically be part of the “implicit cloud”? If not, the import tax for typeclasses would be unpleasantly increased.

If someone is up for starting a thread for this, I have a couple of things to add.

Instead of starting a new thread, I’d encourage feedback on the dotty implicit proposal to go to the existing Updated Proposal: Revisiting Implicits since that’s the thread monitored by the SIP committee.


Hey, sorry for the lack of response. I’ve taken a bit of time to read, think and experiment with ideas related to the topic.

After reading a lot about type-classes and re-visiting some previous threads in the overall discussion, I have come to realize that this proposal is not enough; it doesn’t give an adequate solution to the problems faced with type-classes in both Scala and Haskell.

Another thing I took note of in the overall discussion is the apparent confusion surrounding the conflation of the term context. It means one thing for type-classes – a set of compile-time constraints – and an entirely different thing for injection – ephemeral run-time shared data (similar to React’s context).
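
In today's syntax, the same keyword expresses both meanings (the Request, Session and Response types are hypothetical):

// "context" as a set of compile-time constraints (type-classes)
def sort[A](xs: Seq[A])(implicit ord: Ordering[A]): Seq[A] = ???

// "context" as ephemeral run-time shared data (injection)
def handle(request: Request)(implicit session: Session): Response = ???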

My conclusion from both of these is that the endeavor of making type-classes easy, fluent and useful in Scala should be separated (orthogonal?) from the “implicit features” (injection, extensions and conversions). It seems that the historical attempts to bend these features to make type-classes work are what cause a lot of misuse and confusion in the language. Type-classes deserve their own distinct syntax, constructs and rules.

I would then like to update my proposal to reflect that conclusion by making the “implication” feature even weaker (a sketch follows the list):

  1. It should not be possible to import imply (one can still import and then imply). Importing and then implying values is hardly a common use-case for anything other than type-classes.
  2. Definitely no context bounds, or at least detach the concept completely from “implicit” and associate it with the new type-class constructs.
  3. Do not allow for implied parameters with type-parameters (which is a bit of an irregularity, so I’m not entirely sold on that).
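
A sketch of these restrictions (the first line remains allowed; the rest are marked by rule number):

imply ExecutionContext.global                                   // still allowed
import imply Ordering._                                         // (1) no longer possible
def sort[A: Ordering](xs: Seq[A]): Seq[A] = ???                 // (2) no context bounds
def sort[A](xs: Seq[A])(implied ord: Ordering[A]): Seq[A] = ??? // (3) no implied parameters with type-parameters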

As for type-classes, I believe they should be explored someplace else. This has already been done, but not with the mind-set of differentiating them from implicits and giving them a solution for their own unique set of problems and use-cases, instead of a generic abstraction over many unrelated concepts.

However, I do see some connection between type-classes, extensions and conversions, in the sense that they are all a set of compilation rules / hints / constraints that can be imported. I’d be tempted to call this concept “lenses”, as in adding a lens to an optical scope (adding compilation constraints to a lexical / programming scope). Funny though, it seems that Haskell already managed to use this term for something else (ugh).


IIRC this is also a mechanism for bringing extension methods into scope, so it may be better to allow this and simply specify a different way of importing typeclass instances - or go the other way and allow something like import extensions to bring in just extension methods.

I’m a little leery of this as well, as it’s entirely plausible that a context object could be somewhat generic, without being a typeclass.

Currently in my proposal you only need a simple import to get them, but as I said in my last comment, I think this should be combined with a new module / namespace component – lens – dedicated to resolving compile-time rules, unlike regular import, which is dedicated to resolving names without any side effects.

You’d still be able to declare (implied arg: JsonFormat[String]), but not arg: JsonFormat[A] nor arg: A.

My concern is not that people will still try abusing implied for type-classes, as this would be impossible to do without being able to import “implications”. My concern is that this would somehow conflict with the new type-class system, as it might make function definitions harder to resolve for both the compiler and – more importantly – the developer.

That would really reduce the utility of things which can’t be typeclasses, but act as a locally global context. For example, an overly simplistic memoizing wrapper might look something like this:

def memo[A, B](f: A => B)(input: A)(implied memory: mutable.Map[A, B] @@ Memo): B =
  memory.get(input) match
    case Some(cached) => cached // return the cached result
    case None =>
      val result = f(input)
      memory += (input -> result) // cache the freshly computed result
      result

Fully generic, but completely incompatible with type classes. JsonFormat[_] could (and probably should) be a typeclass, but something like this wouldn’t be as easy to convert.


I’m not sure I follow the example. It’s basically a getOrElseUpdate, and I’m not sure why the map is implied. But never mind that; let’s keep the generics as long as they don’t horribly conflict with the new type-classes (which I’m not sure will be a problem).

Spray’s JSON formats are one of the prime examples of type-classes. If they don’t fit the new model, then the model has failed.

Agreed, I was attempting (badly) to explain why I didn’t use JsonFormat in my example.


It’s taken me a little while, but I formalized a full proposal here.

Actually, it’s not yet complete, since I still need to fill in the parts about extensions, conversions and implications, but those are already discussed here. The parts about lenses and type classes are new.


Fully complete now :slight_smile:

I’ve only scanned through this so far; I’d like to take a more in-depth look to see what comprehensive ideas other than the currently implemented one are out there. Meanwhile, thanks for the time you put into this.

I don’t think calling lenses “lenses” is the best choice, since a lens concept already exists in FP land, including the Monocle library in Scala. Any fitting alternative names you/someone can think of?


@eyalroth - It looks like a decent proposal for what it’s trying to do, but unfortunately I think it has two downsides that render it unsuitable:

(1) It’s not clear that you can actually support the use-cases that we have now without completely rethinking code (e.g. that typeclasses are traits). Scala 3 is supposed to be backwards-compatible to a large extent, at least with manual rewrites!

(2) Personally, I think the move towards more distinct features is exactly backwards. I don’t want to learn one computational scheme for how to make change, and a separate one for how to do taxes, and yet another for accounting for liquids, and…I just want to learn arithmetic and apply it all over the place. Similarly, I want a language with powerful general-purpose term inference that can be used for whatever term inference is good for. Implicit conversions infer a term of one type from a term of another type; implicit vals provide default terms to infer when one is asked for; implicit defs provide a way to synthesize default terms given types and other default terms. Extension methods locally infer a term with more capability than the old one. The more this can be unified, the better, IMO.

There is little downside to a powerful, convenient abstraction. People who like to reason from first principles can do so. People who like well-defined use cases can apply “patterns”. If you create a myriad of individual features, each may be slightly more refined, but you can’t reason from first principles any more; you have N different things to learn, plus N(N-1)/2 interaction terms to understand. No thank you!

(I like some of the designs you’ve proposed, but since I think the overall push is in the wrong direction, I’ll leave it to others to discuss those.)


I don’t believe it breaks anything that was previously possible with implicit objects and traits. Inheritance of type classes is still supported, but merely modeled differently. AFAIU this is also the way they are modeled in Haskell, and how they are, in a general sense, considered an alternative to inheritance.

I would love to see examples and try to work through them.

But that’s the whole point – that lack of distinction between those different features is what makes implicits so hard to grasp and understand. It’s like trying to abstract over whatever software does as a Turing machine with only the most basic operations.

Such generic abstractions that fail to capture separate ideas with separate structures and constructs may be extremely flexible, but they are also extremely low-level and hard to understand; after all, assembly is the most general-purpose language out there, yet it is extremely hard to work with.

I don’t know what you refer to as “first principles”, but those features are still quite generic and suited for multiple purposes. It’s the extremely generic abstraction of “term inference” that allows for so many abusive design patterns, or ones that expect a huge amount of understanding from the developer to connect the dots and see the greater picture.

I’ve now added a new section which compares Scala 2 implicits with the proposal.
