Proposed Changes and Restrictions For Implicit Conversions

We need bulk extensions (as I explained earlier) to avoid the implicit conversion here. So, yes, we should get rid of it, but first we need to provide an alternative.

1 Like

Bulk extensions and argument conversions can work with user-defined functions.
But what are you planning for the language constructs if, while, and return?

Our current code will gain a lot of boilerplate if we migrate to Scala 3 as things stand.

To prepare for when implicit conversions are disabled, you can append .convert to your non-Boolean values (assuming a small helper extension, shown below):

extension [T](x: T) def convert[U](using c: Conversion[T, U]): U = c(x) // assumed helper; not in the standard library

given Conversion[Int, Boolean] = _ > 0
if 1.convert then println("success")
1 Like

We use three-valued logic in our business logic; it is common practice in database languages, where null-aware logic is used heavily.
So a more typical example is:

if ((a < b).convert) {
}
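
For context, here is a minimal sketch of that scenario (the Bool3 type, lessThan, and the given are illustrative, not our actual code):

enum Bool3:
  case True, False, Unknown

// A comparison over nullable data can yield Unknown, as in SQL
def lessThan(a: Option[Int], b: Option[Int]): Bool3 =
  (a, b) match
    case (Some(x), Some(y)) => if x < y then Bool3.True else Bool3.False
    case _                  => Bool3.Unknown

// Treat Unknown as false, like SQL's WHERE clause does
given Conversion[Bool3, Boolean] = _ == Bool3.True

With such a given in scope, the (a < b).convert above resolves to a Boolean.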

It is ironic, but the most annoying part is the brackets.
Nobody loves brackets in if expressions; they were even removed in Scala 3.
Actually, if implicit conversions are just completely removed, we will not suffer significantly more than we do now.

Of course it is not a tragedy, it is just unpleasant.

You have proposed Argument Conversions:

class Iterable[+A]:
  def concat(xs: ~IterableOnce[A]): Iterable[A] = ...
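
For illustration, with that marker a call site like the following would compile without any language import, because Array[Int] is convertible to IterableOnce[Int] (a sketch of the proposal, not current Scala):

val ys: Iterable[Int] = List(1, 2).concat(Array(3, 4))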

Would it be possible to add expression conversions as well?

if ~(a < b) then
return ~"select * from table"

At least it would spare us the magic names.

The problem is that ~ is already used for bitwise not:

~1 // -2
1 Like

I understand that it is a difficult question; I can only propose an idea. Martin made Scala 2 and Scala 3, and he can certainly work miracles, so I can hope at least :)

2 Likes

That said, I personally define my own extension (below). You won’t be able to use it with Int, though, since Int already defines unary_~ as bitwise not, so I do not think it will make it into the language.

// prefix ~x converts x to the expected type, via any given function or Conversion
extension [U](x: U)
  def unary_~[T](using c: U => T) = c(x)

given Conversion[String, Boolean] = _.length > 0

if ~"1" then println("success")
1 Like

Btw, caller-side ~ can be an interesting workaround.

I.e. tired:

given conversionWhichEventuallyWillBeExpired: Conversion[A, B] = ???

wired:

import scala.unlockLicenseForLibraryProgramming

given conversionWhichReallyNeeded: Conversion[A, ~B] = ???

Two pull requests that implement some variations of the proposed changes:

I believe the convertibleTo approach, restricted to only being on parameters of defs, will still fall short of many good use cases, as was mentioned several times in this thread.

For example, one thing everyone knows about:

val l = List("a", "b")
val r = l.flatMap(c => Array(c))

That example requires an implicit conversion from Array[String] to IterableOnce[String]. But the conversion is not applied directly to an argument of flatMap; it applies to the result type of the lambda passed to flatMap. This is not expressible with convertibleTo.

A similar use case exists in Laminar, a popular Scala.js UI library whose type-safe API for HTML tag builders is powered by Modifier[El]s. A number of things, like String, Int, and Seq[Modifier[El]], have an implicit conversion to Modifier[El] (for some Els), and every single method that takes a Modifier[El] should accept the things that can be converted to it. That also applies in a case similar to flatMap, namely inContext, a method of the form

def inContext[El <: Element](makeModifier: El => Modifier[El]): Modifier[El]

which has a lambda whose result type is Modifier[El].

And what about the zillions of methods that accept js.Function1[A, B] or any other js.FunctionN? Currently we can pass Scala FunctionNs to them. Will we have to add convertibleTo to every single one of them to preserve that ability? What about the reverse conversion, from js.FunctionN to FunctionN? Should we now add convertibleTo to every single method that takes a function parameter in the entire Scala ecosystem? Or does it mean that converting JS functions to Scala functions is not a valid implicit conversion?
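
To make the first case concrete, here is a typical Scala.js call site (a sketch that only compiles under Scala.js; the facade method is made up, but js.Any.fromFunction1 is the real conversion involved):

import scala.scalajs.js

// Hypothetical facade for a JS API taking a callback
def addEventListener(tpe: String, listener: js.Function1[js.Any, Unit]): Unit = ???

// Today this works because Scala.js provides the implicit conversion
// js.Any.fromFunction1 from scala.Function1 to js.Function1:
addEventListener("click", (e: js.Any) => println(e))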

It is clear to me that there are (named) types that are clearly meant to be used as the target or source of implicit conversions, in all situations. With the proposed convertibleTo solution, we will have to put convertibleTo everywhere those types are used, and even that will fall short of use cases like flatMap and inContext.

On the other hand, I see very few situations in which I want a particular parameter of type T to accept stuff convertible to T, but where I don’t want that conversion to apply elsewhere where a T is expected. One could argue that ++=(TraversableOnce) is such a use case, but it’s in the same family as flatMap(x => TraversableOnce) so it doesn’t explain why convertibleTo is an appropriate solution to that problem.

In conclusion, I don’t see convertibleTo as an appropriate proposal to replace existing well-designed uses of implicit conversions.

5 Likes

These are good counter examples. But I still think we should try to phase out implicit conversions without the language import. So it would be good to see proposals how we can support more use cases and still do that.

Note that even today, use of new-style Conversion instances gives a feature warning. The only reason you don’t see them that much is that everybody still uses the old-style conversions, which will go away.

It’s been proposed above to mark the types themselves as convertible to. Modifier, js.FunctionNs and IterableOnce could be marked as such. I would also propose being able to mark types as convertible from (for the js.FunctionN → FunctionN case, for example).

Nobody defines Conversion instances because they cause feature warnings for users. We won’t be able to move people away from implicit def to Conversion as long as there is no good way to design something that causes no warnings for users.

I am not sure; this might be too sweeping. For instance, we will want to make some Seq[T] parameters convertibleTo, but I don’t think it would be a good idea to accept conversions to Seq everywhere.
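
A sketch of the worry, assuming a user-defined String-to-Seq conversion:

import scala.language.implicitConversions

given Conversion[String, Seq[Char]] = _.toSeq

// Intended: opt in for one parameter only (the convertibleTo marker is elided here)
def render(cells: Seq[Char]): String = cells.mkString

// Feared: if Seq itself were marked convertibleTo, the conversion would
// kick in anywhere a Seq is expected, including surprising spots like:
val xs: Seq[Char] = "oops"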

Fortunately the flatMap and inContext use cases have a simple fix: we can lift permission to convert to function types. I.e. when we write

def inContext[El <: Element](makeModifier: convertibleTo El => Modifier[El]): Modifier[El]

we also accept “conversions” to the type El => Modifier[El] that work by converting the result type of that function. An alternative would be to change the syntax to allow

def inContext[El <: Element](makeModifier: El => convertibleTo Modifier[El]): Modifier[El]

but unfortunately that refinement could not be expressed in Scala-2 code without additional syntax. Note that annotations on types are inherently less reliable here since they can be propagated by type inference to places where we don’t expect them.
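
What that lifting amounts to can be expressed today as an ordinary given, shown here only as a sketch of the semantics (the proposal would build the rule into the compiler rather than require such a given):

// Lift a result-type conversion B => C to a conversion on function types
given liftResult[A, B, C](using c: Conversion[B, C]): Conversion[A => B, A => C] =
  f => f.andThen(c)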

So that leaves the scala.Function <-> js.Function conversions. I note that this sort of mutual adaptation is usually a big mistake. That’s what we did with the Scala/Java collection conversions and we had to back out of it again. In the case of functions, I assume it’s OK since morally these are the same anyway.

But in that case I’d just special case these conversions in the compiler. Just like we will probably want to special case numeric widening conversions.

Should we also allow convertibleTo on other types? Again I am not sure. It invites misuse, since it’s cheap to make a type convertibleTo and that might cause unforeseen conversions, in particular since the type might be inferred. So, in the interest of strictness I think it’s better to restrict it to parameters. After all, the aim here is to get rid of surprising implicit conversions.

I wonder if it’s worth being a little more explicit about borrowing (pun intended) from Rust? into is a lot shorter than convertibleTo, and although it’s less self-documenting, it seems fair to say that a keyword added to the language is not expected to be fully self-documenting. Going a step further, one could also have a type called Into[T], and then require the code to call .into() or .convert. The compiler can still optimize that stuff away, but then syntactic worries like the one above (“can I write El => convertibleTo Modifier[El]”) go away. You could even remove the requirement to call .into() if you really want.

I see that @odersky explicitly rejected ConvertibleTo[T] above, though that was for the typeclass version that I agree is cumbersome, mostly because of the nuisance type parameter.

def foo(xs: Into[IterableOnce[A]]): Unit

looks more familiar and efficient to me than

def foo(xs: convertibleTo IterableOnce[A]): Unit
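
For what it’s worth, here is a minimal library-level sketch of what an Into[T] could look like. This is hypothetical; a real design might special-case it in the compiler to avoid any wrapping:

import scala.language.implicitConversions

object IntoLib:
  // Plain T values are accepted directly, thanks to the lower bound
  opaque type Into[T] >: T = T
  // Anything convertible to T is also accepted as an Into[T]
  given [S, T](using c: Conversion[S, T]): Conversion[S, Into[T]] = s => c(s)
  extension [T](x: Into[T]) def into(): T = x

import IntoLib.{*, given}
given Conversion[Int, String] = _.toString

def greet(name: Into[String]): Unit = println(s"hello, ${name.into()}")

@main def demo() =
  greet("Ada") // a String is an Into[String] directly
  greet(42)    // an Int gets in via the Conversion[Int, String]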
1 Like

What about

def foo(xs: into IterableOnce[A]): Unit

? into initially feels a bit off in terms of meaning, but it certainly is shorter and no camelCase is a plus, too.

I agree that into > convertibleTo as a keyword, even if it’s a little off. Even just to might work?

I probably shouldn’t have mixed the two dimensions in the comparison. I think the term “into” is better than “convertible to” because of brevity, and (separately) think existing type syntax (Into/ConvertibleTo[T]) is better than a (hopefully soft) keyword because of the possible need to figure out how it fits inside other complex type definitions.

Is the current plan to allow something like def foo(xs: IterableOnce[Into[T]]), or does that defeat the whole point of this exercise?

I think the change to Conversion is well-intentioned (make it more explicit; shame people out of using this) but toothless.

Identifying the useful cases vs. the pathological cases would have been better. In particular, I think two simple changes (and maybe a third) would suffice. The first two are (each of these has probably been suggested by others):

  • Don’t consider implicit conversions when resolving methods. That should be done with extension methods; using implicit conversions for it is a misuse (a necessary misuse in some cases historically, but hopefully those cases are now solved). I think the bulk extension method problem has been solved (but I’m not sure, because I don’t work with Scala 3 yet, unfortunately).

  • For implicit conversions on arguments (which should be the only place they can be used), never consider local or import scope; only companion scope. Argument conversions are not always replaceable by typeclasses, but where they aren’t, hopefully the use cases are fully defined. Where some heterogeneous ad-hoc polymorphism of arguments is wanted, it could be defined by an implicit conversion that uses a typeclass (see the sketch below).
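
Here is that last idea sketched out: a conversion that lives only in the companion of the target type and is driven by a typeclass (all names illustrative):

import scala.language.implicitConversions

trait Renderable[A]:
  def render(a: A): String

given Renderable[Int] with
  def render(a: Int) = a.toString

case class Fragment(text: String)
object Fragment:
  // Companion scope only: found without any import at the call site
  given [A](using r: Renderable[A]): Conversion[A, Fragment] =
    a => Fragment(r.render(a))

def emit(f: Fragment): Unit = println(f.text)

@main def run() = emit(42) // Int -> Fragment via Renderable[Int]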

I think implicits in general got a hugely bad rap, and implicit conversions specifically got a hugely bad rap (even though not many people abused them, IME). The change to given is more than enough overreaction/rebranding; let’s focus on reducing actual cognitive burden without disabling legitimate things.

TBH I think this is already a sufficient step to take. We do not need to get rid of implicit conversions everywhere, just where they are most confusing, which is often when calling methods. IIRC those scenarios were what made the implicit Java conversions confusing. That use case is in any case subsumed by extension methods, single or bulk.

Once you have removed the triggering of implicit conversions via method calls, a conversion only triggers at a point where:

  • (a) someone explicitly specified the target type, as the type of the method argument or constructor argument, etc., and either

  • (b1) that type defined conversions in the companion object or

  • (b2) someone explicitly imported the conversion into scope

Specifying twice that we want the implicit conversion to occur is about as explicit as you can hope to be. Asking people to add a language import on top of that, or to use special keywords like into or types like Into[T], is totally unnecessary.
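
As a concrete sketch of the (a) + (b1) path (types illustrative; note that today this still needs the feature import, which is exactly what this argument says should be unnecessary):

import scala.language.implicitConversions

case class Fahrenheit(deg: Double)
case class Celsius(deg: Double)
object Celsius:
  // (b1) the conversion is defined in the companion of the target type
  given Conversion[Fahrenheit, Celsius] = f => Celsius((f.deg - 32) / 1.8)

// (a) the parameter explicitly names Celsius as the expected type
def setThermostat(target: Celsius): Unit = println(target)

@main def demo2() = setThermostat(Fahrenheit(72)) // converts via the companion given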

I do get the feeling that tightening up implicit conversions in this way is not a good return on investment. While implicit conversions did get a bad reputation a decade ago, and they remain a slightly tricky edge case in advanced library code, I don’t find overuse of implicit conversions to be a big complaint in recent years. Even in the com-lihaoyi ecosystem, which uses implicit conversions pretty liberally, it just never gets brought up as something problematic.

And the downsides of language churn are very real; it doesn’t help the Scala 2-3 migration effort for 3.x to be seen to be less stable than 2.12/2.13 were when they first released.

13 Likes

Another use-case where implicit conversions are useful is “equivalence” between two types/values.

Reusing my example from the old thread:

def log(x: Double > 0d): Double = ???
log(1d)

A plain Double value like 1d should be accepted where Double > 0d is expected. In the actual implementation of this code, an implicit conversion is called. This implicit method looks like:

implicit inline def refineValue[A, B, C <: Constraint[A, B]](value: A)(using inline constraint: C): A / B = {
    Constrained(compileTime.preAssert[A, B, C](value, constraint.getMessage(value), constraint.assert(value)))
}

Where Constraint[A, B] is a typeclass validating the value: A input.
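
The rough shape of that typeclass, as can be inferred from the calls above (a simplification, not the library’s actual definition):

trait Constraint[A, B]:
  inline def assert(value: A): Boolean
  inline def getMessage(value: A): String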

I’m not sure if an alternative implementation using the proposed ~ is possible:

type >[A, V] = Constrained[A, Greater[V]]
opaque type ~Constrained[A, B] = ???

I actually don’t want my users to be forced to write ~ every time they’re using a refined/constrained parameter.

The b1 and b2 solutions named above seem to address this problem.

1 Like