Proposed Changes and Restrictions For Implicit Conversions

To tell the truth, I cannot completely understand it. What does it mean?
After reading the proposal I have the feeling that such an action means we understand the drawbacks of that option, so we take responsibility at our own risk.
Here are the drawbacks (I don’t know whether they are true or not):

  • It is magic and hard-to-maintain functionality.
  • The Scala team is going to reduce usage of that feature as much as possible. And it is doubtful that you will be able to provide pull requests for probable bugs; it is just too difficult, and we cannot guarantee they will be welcome, because it is really difficult and it is not our main priority.
  • It significantly increases compilation time.
  • It is unwelcome functionality and it would be better not to use it.

Here are the quotes which led me to this conclusion:

Even a single point from the list can make the choice unacceptable. I just don’t know how to make a decision.
At least this feature is stable in Scala 2.

I can only hope that the situation with “value classes” and “opaque types” will get an appropriate resolution at some point. I hope that our use case deserves it.
We do not need full support for implicit conversions; we just need some sort of “unboxing”.

I don’t know why we keep removing the “Scala magic” and turning it into a boring language. If that’s the direction we are going, I might as well just use Python then.

5 Likes

What would happen to conversions between Scala/Java primitive types, e.g. implicit def boolean2Boolean(x: Boolean): java.lang.Boolean? Would these also need import implicitConversions?

The Argument Conversions use case can already be satisfied by type classes, e.g.:

class Iterable[+A]:
  def concat[F[_]](xs: F[A])(using Foldable[F]): Iterable[A] = ...

This requires no implicit conversions nor new syntax.
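
For concreteness, here is a minimal self-contained sketch of that approach; the Foldable trait, its toIterable method, and concatAll below are illustrative stand-ins rather than actual library definitions:

// Toy capability type class: anything F[_] that can be folded out into an Iterable.
trait Foldable[F[_]]:
  def toIterable[A](fa: F[A]): Iterable[A]

object Foldable:
  given Foldable[List] with
    def toIterable[A](fa: List[A]): Iterable[A] = fa
  given Foldable[Option] with
    def toIterable[A](fa: Option[A]): Iterable[A] = fa.toList

// The "Argument Conversions" use case expressed with the type class,
// written as an extension so it does not clash with the real Iterable.concat.
extension [A](self: Iterable[A])
  def concatAll[F[_]](xs: F[A])(using fold: Foldable[F]): Iterable[A] =
    self ++ fold.toIterable(xs)

@main def foldableDemo(): Unit =
  println(List(1, 2).concatAll(Option(3)))  // List(1, 2, 3)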

  1. Bulk Extensions

Seems quite useful.

Unrelated, one of the most addictive features of implicit conversions is allowing DSLs to reuse literals. For example, a parser combinator library which allows strings to be treated as literal parsers ("trait" ~> whitespace ~> ...); or a calculation library which allows formation of expressions involving literals (expr + 42).
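
To make the current state concrete, here is a rough sketch of how such a library achieves this today via an implicit conversion, assuming a hypothetical minimal Parser type (not any real library’s API):

import scala.language.implicitConversions  // needed where the conversion is applied

// Hypothetical minimal parser type, just enough to show the shape of the DSL.
case class Parser[+A](run: String => Option[(A, String)]):
  def ~>[B](next: Parser[B]): Parser[B] =
    Parser(in => run(in).flatMap { case (_, rest) => next.run(rest) })

// The conversion that lets a bare string literal stand for a literal parser.
given Conversion[String, Parser[String]] = lit =>
  Parser(in => Option.when(in.startsWith(lit))((lit, in.drop(lit.length))))

val whitespace: Parser[Unit] =
  Parser(in => Some(((), in.dropWhile(_.isWhitespace))))

val header = "trait" ~> whitespace  // "trait" is silently converted to Parser[String]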

Via overloaded literals, Haskell achieves this feature without implicit conversions. Indeed, Scala already overloads for comprehensions (achieving a kind of “programmable syntax”). It could be useful to investigate supporting overloaded literals via a very limited type class, something like:

trait FromLiteral[A, B]:
  def fromLiteral(value: A)(using Literal[A]): B

where:

enum Literal[A]:
  case StringLit extends Literal[String]
  ...
object Literal:
  given Literal[String] ...
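
Filling in the elided parts with one possible (purely illustrative) choice, a library would opt in per target type, and the compiler would insert the fromLiteral call wherever a literal appears in a position expecting that type; none of this desugaring exists today:

enum Literal[A]:
  case StringLit extends Literal[String]
  case IntLit    extends Literal[Int]

object Literal:
  given Literal[String] = StringLit
  given Literal[Int]    = IntLit

trait FromLiteral[A, B]:
  def fromLiteral(value: A)(using Literal[A]): B

// A library opting in for its own type (Keyword is hypothetical).
case class Keyword(text: String)

given FromLiteral[String, Keyword] with
  def fromLiteral(value: String)(using Literal[String]): Keyword = Keyword(value)

// Today the call must be explicit; the idea is that the compiler would insert it
// automatically when a String literal appears where a Keyword is expected:
val kw: Keyword = summon[FromLiteral[String, Keyword]].fromLiteral("trait")
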
4 Likes

yes, but

  • this requires global rewrites of libraries to support the new structure

  • this is significantly more complicated. I believe that since we are all experts we tend not to see it. But for a newcomer:

    • there’s an extra type parameter - and even worse it’s a higher kinded one. What is F[_]?
    • the actual argument is of type F[A] – what is this F? It’s not descriptive at all!
    • there’s an additional constraint that says Foldable[F] – no idea what that is!

compare to

  def concat(xs: Iterable[A]): Iterable[A] = ...

That’s much clearer: concat takes an Iterable. Everyone gets that.

The FromLiteral typeclass idea is very intriguing! We should look into that.

I think the simplicity here is deceptive. “concat takes an Iterable[A], except where it doesn’t” is a much more unpleasant rabbit hole to go down. If you start from, “Scala is strongly typed,” then it’s just confusing when concat("hi there") works - despite String being a Java type that doesn’t extend Iterable.

It only looks simpler when both parties can be satisfied by, “just trust that it works” - which is fine for a student because Scala is the vehicle for learning, not the target. It doesn’t work well when you’re trying to onboard a junior engineer and Scala is the learning target.

If you try and actually explain how it works, then you end up covering the same complexities as if you had to explain the typeclass version (most notably implicit scope), and the conceptual hooks they could use to associate with this new knowledge are missing. A Foldable[F] constraint is much nicer to reason about than, “there’s an invisible conversion that has to exist” - especially because it gives them a heads up that something more complicated is going on.

And that’s without having to add the additional burden of explaining that, while this does exist in the standard library, it’s not something they should use.

8 Likes

Maybe it’s a good idea to combine this with the FromDigits class, to make it a more powerful mechanism.

4 Likes

I think the simplicity here is deceptive. “concat takes an Iterable[A], except where it doesn’t” is a much more unpleasant rabbit hole to go down

Right now, you have a point. That’s why I propose

def concat(xs: ~Iterable[A]): Iterable[A] = ...

which says it takes an Iterable or something that is convertible to an Iterable. Conversions are a thing in all languages including Haskell and Rust. That avoids the rabbit hole and is still a lot simpler than the F[_] solution.
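
For example, assuming (as I read the proposal) that a ~Iterable[A] parameter accepts any argument for which a scala.Conversion into Iterable[A] is in scope, a caller could make a third-party type acceptable like this; Ring is a made-up non-collection type:

// Hypothetical third-party type that is not itself an Iterable.
case class Ring[A](elems: Vector[A])

// With this in scope, concat(Ring(Vector(1, 2, 3))) should type-check against
// xs: ~Iterable[A], with the conversion inserted at the call site.
given ringToIterable[A]: Conversion[Ring[A], Iterable[A]] = ring => ring.elems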

3 Likes

If there’s a way to fix the underlying issues with overloading that make the magnet pattern necessary, the main thing that’s left is simple conversions - and non-symbolic approaches are much friendlier in general.

Compare the debugging journey for these alternatives, assuming we’re trying to call this with some non-standard Java class JClass<A> which has the missing dependency squirreled away in a utils project:

Implicit conversion marker

def concat(xs: ~Iterable[A]): Iterable[A] = ...
  1. Why isn’t this working?
  2. What’s that ~ mean again?
    This one is particularly bad, as Googling a symbol is tricky, and ~ has a bunch of other meanings.
  3. Ok, it’s a conversion marker.
  4. Do I have a conversion?
  5. I don’t have one.
  6. Where can those live again?
  7. IDE can’t find it, what’s the trait name again so I can grep for it?
  8. Conversion, well that makes sense in hindsight, what goes where?
  9. Ok, looks like I should grep for Conversion[JClass[A], Iterable[A]]
  10. Well, that was a bust.
  11. Looks like they wrote it as Conversion[JClass[T], Iterable[T]]
  12. At least I found it

Direct encoding with the Conversion typeclass

def concat[In](xs: In)(using Conversion[In, Iterable[A]]): Iterable[A] = ...
  1. Why isn’t this working?
  2. Hmm, what does using mean again?
  3. Looks like I need a Conversion, do I have one?
  4. I don’t have one, where can those live again?
  5. IDE can’t find it, at least I know I can grep for Conversion[JClass[A], Iterable[A]]
  6. Well, that was a bust.
  7. Looks like they wrote it as Conversion[JClass[T], Iterable[T]]
  8. At least I found it

Encoding using a capability typeclass (in this case, Foldable)

def concat[F[_]](xs: F[A])(using Foldable[F]): Iterable[A] = ...
  1. Why isn’t this working?
  2. Hmm, I wonder what F[_] means.
  3. Dunno, might as well ignore it.
  4. Hmm, what does using mean again?
  5. Oh, the docs use [_] a lot, looks like F[_] means JClass[_]
  6. Looks like I need a Foldable[JClass], do I have one?
  7. I don’t, where can those live again?
  8. IDE can’t find it, at least I know I can grep for Foldable[JClass]
  9. Found it

All three take about the same number of steps, but both the Conversion and Foldable versions provide the breadcrumbs needed to lead the neophyte in the right direction, which the symbolic one just doesn’t provide.
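
For reference, the definition that the Conversion greps finally turn up might look roughly like the following; JClass here is a made-up stand-in for the non-standard Java class in the scenario, and the Foldable journey ends at an analogous given Foldable[JClass] instance:

import scala.jdk.CollectionConverters.*

// Made-up stand-in for the non-standard Java class from the scenario.
class JClass[T](private val values: java.util.List[T]):
  def javaValues: java.util.List[T] = values

// The definition the grep eventually turns up (note the T, not A, in the key).
given jclassToIterable[T]: Conversion[JClass[T], Iterable[T]] =
  jc => jc.javaValues.asScala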

7 Likes

Even the compiler is currently smart enough to suggest missing typeclass imports. It should be possible to make it smart enough to suggest the same for ~Iterable

1 Like

Provided they’re on the class path. If you have to grep your organization’s git repos to find it, the compiler can’t help you.

1 Like

I agree the higher-kinded generic type is confusing and off-putting. I’m not suggesting ~ isn’t useful (I think it is useful), just that it does overlap with existing features.

In any case, having seen the drawbacks of implicit conversions, as addictive as they can be at times, I’m excited to see some movement toward alternatives. Looking promising!

1 Like

Two more questions:

  1. Is the Boolean in an if expression considered a ~Boolean for the purpose of supporting implicit conversions in that context?
  2. Is it worth it (if possible) to enable ~Foo in pattern matching:
bar match
  case foo: Foo => println("bar is really a foo")
  case likeafoo: ~Foo => println("bar is like a foo")

Edit: Ignore the second question. Pattern matching is about runtime information, so it is irrelevant here.

I find @morgen-peschke’s example actually pretty good.

IMO the type class approach:

def concat[In](xs: In)(using Conversion[In, Iterable[A]]): Iterable[A] = ...

is way better than

def concat(xs: ~Iterable[A]): Iterable[A] = ...

IMO it is easier to read, easier to Google for, and more regular. The only downside I see is that you still have to do the conversion within the method body and that it is slightly longer, which can become a bit repetitive. And even then, it seems worth it given the advantages.
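
To make that downside concrete: the body has to apply the conversion itself, something like the illustrative extension below (with ~, the compiler would instead insert the conversion at the call site):

extension [A](self: Iterable[A])
  // Illustrative only; the real concat is not defined this way.
  def concatVia[In](xs: In)(using conv: Conversion[In, Iterable[A]]): Iterable[A] =
    self ++ conv(xs)  // the explicit conv(xs) is the extra step being discussed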

Ultimately I think ~ is simply not explicit enough (which is why we are having this whole discussion), so I am afraid we will have the discussion again in a few years to get rid of ~.

6 Likes

As I understand it, ‘~’ is needed only for source compatibility between 3.0 and 3.1? (The needed change to the sources being: importing implicit conversions on the client side.) Maybe issuing Scala 4.0 instead of 3.1, or changing the version scheme to SemVer after 3 (scala3-1.0.0, scala3-2.0.0), would be a better solution?

I would think no. The default case should be “no conversions”. Are there any good use cases where someone wants to convert to Boolean implicitly?

Maybe weird DSLs with custom booleans or maybe for libs that have a Boolean = True | False-style bool

Hi,

Does this mean that many of the standard library methods will end up with prototypes looking like this?

If so, then I think that you will end up in the same place as we did with the Scala 2.8 standard library, with method prototypes that are too confusing for beginners, and you end up pretending the complexity isn’t there and offering fake method prototypes to make the methods look simpler.

I generally believe that we should be aiming for the minimum amount of complexity necessary to achieve the required outcome.

But I’m also not really convinced that now is necessarily the best time to be tackling this problem. I would prefer to see Scala 3 ship, see what the top issues are reported by users, and then focus on alleviating those pain points.

3 Likes

I don’t know about “good” use cases, but I have a use-case where I use the possible conversion to cause a compile-time failure with a better message. It is in a DSL that has its own Bool and if, so if someone accidentally uses a Scala if with the DSLBool they get a very clear error message:

trait DSLBool
object DSLBool {
  import scala.language.experimental.macros
  import scala.reflect.macros.blackbox
  implicit def toBooleanError(mybool : DSLBool) : Boolean = macro toBooleanErrorMacro
  def toBooleanErrorMacro(c: blackbox.Context)(dfbool : c.Tree) : c.Tree = {
    c.abort(c.enclosingPosition,
      """Type mismatch. It appears you are trying to use a DSL `Bool` where a (Scala) `Boolean` is expected.
        |Make sure you use `dslIf` and not `if`.""".stripMargin
    )
  }
}

IMO, this is not a good enough reason to allow it, but here is an example use-case for the record.

There used to be special syntax that meant exactly this (<%) and it was deprecated.
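
For readers who never encountered it, here is roughly what <% (a “view bound”) looked like in Scala 2 and what it desugared to; the Bag class is just a vehicle for the example:

// Scala 2 (deprecated since 2.11): "B can be viewed as an Iterable[A]".
class Bag[A](val underlying: Iterable[A]) {
  def concat[B <% Iterable[A]](xs: B): Iterable[A] =
    underlying ++ (xs: Iterable[A])  // the view is applied implicitly here

  // ...which the compiler desugared to an implicit conversion parameter:
  def concatDesugared[B](xs: B)(implicit view: B => Iterable[A]): Iterable[A] =
    underlying ++ view(xs)
}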

7 Likes