Can We Wean Scala Off Implicit Conversions?

I think the tooling is significantly improving in this area (IntelliJ can now import orphan implicits, and the Scala 3 compiler can suggest importing orphan implicits). Hopefully this will solve the problem that they are hard to discover, and they won’t be problematic anymore? I think in some situations orphan implicits are the right solution (typically, when a first library defines some typeclasses, a second library provides data types, and you want to define instances of those typeclasses for those data types).

3 Likes

Yeah, I don’t mean that we should prohibit orphans in general, since there are certainly places where they are the right tool for the job. I’d just like to nudge people towards putting implicits (both conversions and typeclasses) into companion objects wherever possible, since those are pretty foolproof and lack many of the pitfalls that orphan implicits tend to have.
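To illustrate why companion objects are the foolproof option: a given placed there is found by implicit search with no import at all. A minimal sketch (Show, User, and render are invented names for the example):

```scala
trait Show[A]:
  def show(a: A): String

final case class User(name: String)
object User:
  // Instance lives in the companion object, so it is in the implicit
  // scope of User everywhere, with no import required.
  given Show[User] with
    def show(u: User): String = s"User(${u.name})"

def render[A](a: A)(using s: Show[A]): String = s.show(a)

@main def companionDemo(): Unit =
  assert(render(User("ada")) == "User(ada)")
```

There is nothing to discover or import here, which is exactly what orphan instances lack.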

Enabling implicit conversions in companion objects with the current language feature flag (definition site), but orphan implicits with a different language feature flag at the use site, seems like a good compromise. It’d be an easy transition for me, but I worry about people new to the language.

When I first read here about turning on implicit conversions at the use site I thought it was the worst idea. Who would bother to use implicit conversions if you have to use an annotation at every call site? But then I thought if it was one import at the top of the file that would be fine.

I have no objections. At the moment I am waiting for the dust to settle on Scala 3 and the many changes to compiler options that have happened in the 2.13 series as I don’t want to break users’ builds over and over for compiler options that require source changes.

3 Likes

It will be impossible for a library creator to instantiate all types, though. This cannot be done without changing export to allow generic types, or it has to be done with an implicit conversion, right?

Hi, I am still trying to migrate specs2 to Scala 3. On my roadmap there’s an attempt to convert all implicits to given clauses and extension methods.

I have however stumbled on a few issues. For example in specs2 there is a syntax to provide and/or logic for both Booleans and Results:

success and failure
true and false

This works by having 2 implicit defs transforming both Boolean and Result into a ResultLogicalCombinator instance supporting and and or. I could not transform this code into extension methods because then I would have name clashes with the generated methods. This seems to happen because the implicit defs use by-name parameters. In the example above it is important to use by-name parameters because we can use or on 2 assertions where the first one throws an exception and the second one succeeds.

Moreover, in that case implicit classes have an advantage over extension methods: it is possible to define private members, like so:

implicit class LogicalOps(r: =>Result):
  private lazy val result: Either[Throwable, Result] = Result.evaluate(r)

  def and(other: =>Result): Result = ???
  def or(other: =>Result): Result = ???

and then result can be shared by all the class methods. With extension methods this code has to be duplicated.
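To make the by-name point above concrete, here is a self-contained sketch with simplified stand-ins for the specs2 types (this is not the real specs2 API): the first operand throws, yet or still recovers, because nothing is evaluated before the shared lazy val.

```scala
sealed trait Result
case object Success extends Result
final case class Failure(msg: String) extends Result

object Result:
  // Evaluate a by-name result, capturing any thrown exception.
  def evaluate(r: => Result): Either[Throwable, Result] =
    try Right(r) catch case t: Throwable => Left(t)

implicit class LogicalOps(r: => Result):
  // Evaluated at most once, shared by all methods of the class.
  private lazy val result: Either[Throwable, Result] = Result.evaluate(r)
  def or(other: => Result): Result = result match
    case Right(Success) => Success
    case _              => other   // first operand failed or threw

def boom: Result = throw new Exception("boom")

@main def byNameDemo(): Unit =
  // `boom` is only evaluated inside Result.evaluate, so the exception
  // is captured and the second operand wins.
  assert(boom.or(Success) == Success)
```

With an eager receiver, boom would throw before or was ever called.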

Then I have other cases with implicit defs and by-name parameters which I wanted to remove. At first I thought I would be clever and do the following:

implicit def convertByName[T, S](t: =>T)(using convert: Conversion[() => T, S]): S =
    convert(() => t)

This way I would have only one implicit def left in my whole codebase and I would define given Conversion[() => Something, SomethingElse] everywhere else.

Unfortunately this doesn’t play well with type inference. The search seems to favour convertByName over other implicits, trying to find instances of Conversion[() => T, S] and failing with an ambiguity, saying for instance that:

  • there is a Conversion[() => String, S]
  • there is a Conversion[() => Int, S]

Even if those 2 conversions are actually:

given Conversion[() => String, Option[String]]
given Conversion[() => Int, Option[Int]]

and the S that we expect is something entirely different like Result.

Overall this motivates me to remove some of the DSL craziness in the next version of specs2, but I don’t see how to completely remove implicit defs (though I would like to, since there would be fewer concepts to understand in Scala).

Personally I don’t think implicit classes can be removed, because extension methods are just not enough; I’ve needed implicit classes for abstract types and fields as well. I guess we’ll know after a while with Scala 3: we’ll be able to run an analysis and see if people are still using them.
That said, you can still simulate them the original way, before they were added.
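That original encoding is a plain wrapper class plus an implicit def, which is what implicit class desugars to anyway. A minimal sketch (RichInt2 is an invented name):

```scala
import scala.language.implicitConversions

final class RichInt2(val self: Int):
  def squared: Int = self * self

// Pre-2.10 style: the implicit def written out by hand instead of
// being generated by `implicit class`.
implicit def toRichInt2(i: Int): RichInt2 = RichInt2(i)

@main def simulateDemo(): Unit =
  assert(3.squared == 9)
```

The hand-written version also lets you keep private state and abstract members in the wrapper class, which is the gap with extension methods mentioned above.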

Regarding

implicit def convertByName[T, S](t: =>T)(using convert: Conversion[() => T, S]): S =
    convert(() => t)

You can use your own lazy converter type

trait MyLazyConverter[T, U] {
  def apply(t: => T): U
}
given MyLazyConverter[String, Option[String]] = Option(_)
given MyLazyConverter[Int, Option[Int]] = Option(_)

def withOptionLike[T](t: => T)(using conv: MyLazyConverter[T, Option[T]]): Option[T] = conv(t)

def test = {
  withOptionLike("someString")
}

I have started a project in the Swift language recently; it has no implicit conversions. I have overcome most use cases (via extensions and extension constraints) except conversion to Boolean in an if statement. It is not comfortable to write something like

if (a < b).isTrue

I use implicit conversions so you can do something like:

val q = quote { query[Person] }
val f = quote { q.filter(p => p.name == "Joe") }

Otherwise you have to do:

val q = quote { query[Person] }
val f = quote { unquote(q).filter(p => p.name == "Joe") }

That is to say, implicit conversions are my auto-splicing mechanism. Here’s what the types approximately look like:

val q: Quoted[Query[Person]] = quote { query[Person] }
val f: Quoted[Query[Person]] = quote { unquote(q).filter(p => p.name == "Joe") }

In other words, unquote should do Quoted[Query[Person]] => Query[Person] specifically inside of a quote block. If there’s a better way to do this I’m open to alternate approaches.

Thanks for the tip but there are still places where I would need implicit conversions. For example I have a type of interpolated string which is defined by:

extension (sc: StringContext):
    inline def s2(inline variables: Interpolated*): Fragments = ???

Where Interpolated is the kind of values which you can “inject” into the interpolated string. So I need to be able to convert many different types to Interpolated and make the values by-name to control the evaluation.

Where Interpolated is the kind of values which you can “inject” into the interpolated string.

Maybe instead you could take values of any type:

extension (sc: StringContext):
    inline def s2(inline variables: Any*): Fragments = ???

and let the macro do the wrapping based on the type of each argument.

1 Like

The topic was already discussed in this thread; the only proposed solution to vararg conversions was Odersky’s tuple-based one, and it doesn’t work with string interpolation.
Also worth noting that even for non-varargs, writing something like

def foo[A, B, C, D](a: A, b: B, c: C, d: D)(using MyTargetType[A], MyTargetType[B], MyTargetType[C], MyTargetType[D]) = ...

is a downgrade compared to implicit conversions (or implicit constructors, or however you want to call them) from a readability and maintainability standpoint.

If Scala 3 expanded varargs to tuples instead of Seqs, this could be solved with a “simple” inline method using compiletime operations, avoiding macros entirely, by doing something like

  extension (sc: StringContext):
    inline def s2(inline variables: Tuple): Fragments = {
      type requiredImplicits[T <: Tuple] = T match {
        case EmptyTuple => EmptyTuple
        case h *: tail => Interpolated[h] *: requiredImplicits[tail]
      }
      val eachRequired = scala.compiletime.summonAll[requiredImplicits[variables.type]]
      ...
    }
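For what it’s worth, the summonAll-over-a-match-type pattern in that sketch does compile today when the caller passes an explicit tuple rather than varargs. A runnable version (Show and showTuple are invented names, and Tuple.Map plays the role of the requiredImplicits match type above):

```scala
import scala.compiletime.summonAll

trait Show[A]:
  def show(a: A): String

given Show[Int] with
  def show(a: Int): String = a.toString
given Show[String] with
  def show(a: String): String = a

inline def showTuple[T <: Tuple](t: T): List[String] =
  // Summon one Show instance per element of T, in order.
  val instances = summonAll[Tuple.Map[T, Show]].toList.asInstanceOf[List[Show[Any]]]
  instances.zip(t.toList).map((s, v) => s.show(v))

@main def tupleDemo(): Unit =
  assert(showTuple((1, "a")) == List("1", "a"))
```

The missing piece remains the varargs-to-tuple expansion at the call site, which is exactly what the post is asking for.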

It would still be an improvement over having no alternative at all, or over writing a macro, but compared to implicitly converted parameters it is more complicated, non-reusable, less helpful in compiler errors, harder to document in doc tools, and inlined, which could cause bytecode bloat.

2 Likes

I agree that if we recommend to replace implicit conversions by macros, we’re making things severely more complex and harder to understand, which is completely the opposite of what removing implicit conversions is supposed to achieve.

12 Likes

One admittedly kludgey way around it is to add a dummy parameter for disambiguation. It typically comes with an obscure type and a default value.

There’s also the possibility that at some point we will be able to use @alpha annotations (or whatever they might be renamed to) for disambiguation. At present that does not work, and it would require quite a bit of effort to make it work. But it might happen. The idea here is that you could define

@alpha("fooOnBar") def foo(x: => Bar) = ...
@alpha("fooOnBaz") def foo(x: => Baz) = ...

The encoded names of foo would be fooOnBar and fooOnBaz, so there’s no conflict at the bytecode level. We “just” have to teach the frontend to know this as well.

Private fields in implicit classes: yes, this requires extra code to port to extension methods.

1 Like

I hope implicit conversions will stay, maybe more constrained than now, but stay. They might look ugly from the inside of the compiler, but they definitely help reduce unwanted clutter in the final business code when done right. They might not be to everyone’s taste, but they are definitely not evil per se.

5 Likes

Here is one more issue I encountered with the fact that we cannot define conversions with by-name parameters.

This currently means that I have to use an implicit def for a by-name conversion. However if I have 2 applicable conversions, one with given and one with implicit def then a user has to remember to import the given one with MyConversions.{given _, _}. Otherwise only the implicit def will trigger.

This is one aspect that I find problematic with implicit definitions and given definitions. They don’t follow the same import rules and not being able to completely dispense with implicit definitions makes scoping error-prone.

In fact, you can import both implicit definitions and given definitions with given. So you could recommend the user writes:

import MyConversions.given

that will import both definitions.

3 Likes

I know I’m a little late to the game, and I didn’t read the entire thread, but I just wanted to put in my vote for those who have suggested that “implicit constructors”, even if very limited in scope, can be quite useful. I often find myself getting rid of repetitive test code (where I’m calling libraries that I do not wish to clutter with using c: Conversion parameters, because you shouldn’t have to change your non-test code to make tests better) by defining implicit conversions that promote e.g. a simple string literal to an error message.

I very much agree that using implicit conversions for extension methods is evil, but I would be sad to lose the ability above. By construction, this is a case where you need an orphan implicit: you don’t want to declare that e.g. all Strings can be implicitly converted to an error message outside of tests, so you shouldn’t put it on either companion object. The restriction that type inference doesn’t consider implicits at all, and so only works when the need for an explicit conversion is “obvious” by being a concrete type, would be totally acceptable in most of these cases.
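A minimal sketch of that test-only orphan (ErrorMessage and TestSyntax are invented names for the example):

```scala
import scala.language.implicitConversions

final case class ErrorMessage(text: String)

// An orphan on purpose: it lives in neither companion object, so it
// only kicks in where a test file explicitly imports it.
object TestSyntax:
  given Conversion[String, ErrorMessage] = ErrorMessage(_)

def fail(e: ErrorMessage): String = s"failed: ${e.text}"

@main def orphanDemo(): Unit =
  import TestSyntax.given
  // The string literal is promoted to ErrorMessage by the conversion.
  assert(fail("bad input") == "failed: bad input")
```

Production code never sees the conversion; only scopes that import TestSyntax.given get the shorthand.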

I think with IDE tooling that reveals implicits, there is little downside to providing implicit conversions as long as type inference isn’t slowed down. I was surprised to learn that about half the people on my team leave IntelliJ’s Show Implicit Hints on at all times, but whatever works for them! This way, half the team gets concise code and the other half gets the benefits of full explicitness – everyone wins.

I also agree that we need to have a way to partially apply type parameters in order to make use of the Conversion typeclass/Rust Into pattern without it getting ugly, since making nice DSLs with implicit conversions is a very important Scala ability IMHO. It sounds like that’s planned, which is very exciting.

1 Like

I am also a little late to the game.

I work on a refined types systems that uses an implicit conversion:

def log(x: Double > 0d): Double = ???

log(1d)

In this example, 1d: Double is implicitly converted to a Double > 0d.

Another example:

def sqrt(x: Double > 0d): Double > 0d = ???
def square(x: Double): Double > 0d = ???

def distance(x: Double, y: Double): Double > 0d = refined {
  sqrt(square(x) + square(y)) //Implicit unbox (only in the `refined` capability)
}

As you can see, these two implicit conversions are extremely important for my project and I currently don’t see any alternative in Scala 3.
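A stripped-down sketch of the first conversion, assuming invented names and not the library’s actual encoding (the real library presumably checks literals at compile time, while this sketch checks at runtime):

```scala
import scala.language.implicitConversions

opaque type Positive = Double
object Positive:
  // Conversion lives in the companion, so it is in the implicit scope
  // of Positive without any import.
  given Conversion[Double, Positive] with
    def apply(d: Double): Positive =
      require(d > 0, s"$d is not > 0")
      d
  extension (p: Positive) def value: Double = p

def sqrtPos(x: Positive): Double = math.sqrt(x.value)

@main def refinedDemo(): Unit =
  assert(sqrtPos(4.0) == 2.0)  // 4.0 is converted to Positive here
```

Without the implicit conversion, every call site would need an explicit wrap such as sqrtPos(Positive.from(4.0)), which is the clutter the post is trying to avoid.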

I think implicit conversions can be evil in most cases, but they shine in some niches.

A good alternative could be a given-like import as proposed above to prevent accidentally shadowing an unexpected conversion.

In fact, later discussion on this topic has moved here

1 Like