Proposed Changes and Restrictions For Implicit Conversions

A possible middle ground:

  1. Implicit argument conversions are available if either (a) the magic import is available at the call site, or (b) the conversions are defined on the companion object of the source type, the target type, or a type argument of the target type, AND the target type is decorated with a soft modifier convertible (or some other name).
  2. There is a non-distinguished convertible trait ConvertibleTo[T] (or convertible trait Into[T]) available in the standard library. By convention, if a library designer wishes to define an implicit conversion from T to U in a case where they control either T or U, they can define a Conversion[T, ConvertibleTo[U]] on the companion object of T or U. So the standard library could define a Conversion[String, ConvertibleTo[Seq[Char]]] on the companion object of String or of Seq, and then use ConvertibleTo[Seq[Char]] where it would otherwise have used an annotated convertible Seq[Char].

This way, it is very clear from an argument type’s declaration that it can accept implicit conversions. The compiler can safely avoid implicit searches unless an argument type is declared as convertible, or the magic import is available. And there is still a way for methods that accept types that are not themselves implicitly convertible to opt in to implicit conversions on those types.
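
A rough sketch of how the ConvertibleTo convention might look in code. Everything here uses hypothetical names, and the convertible soft modifier does not exist today, so it only appears in comments; the rest is ordinary Scala 3.

// Hypothetical standard-library trait (would be declared `convertible`):
trait ConvertibleTo[+T]:
  def value: T

// The standard library could place this on the companion of String or Seq:
given Conversion[String, ConvertibleTo[Seq[Char]]] =
  s => new ConvertibleTo[Seq[Char]] { def value = s.toList }

// A library method opting in to implicit conversions on its argument:
def takeChars(cs: ConvertibleTo[Seq[Char]]): Int = cs.value.length

// Under the proposal, this call would need no magic import, because the
// parameter's type is declared convertible:
// takeChars("hello")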

I think there’s some thought needed about what happens in cases where the compiler can infer a convertible type. For example, if I define def foo[T](x: T): T = ??? and I have

val y: SomeTypeConvertibleToBar = ???
val x: Bar = foo(y)

The compiler can infer that T = Bar, so does that mean that the implicit conversion is available (without the magic import) if Bar is convertible? I think it would be acceptable to forbid this, and force the caller to use the magic import, or the library writer to use def foo[T](x: ConvertibleTo[T]): T. I have not thought through the corner cases here.

You might worry that library authors will sprinkle convertible around too liberally. Again, this doesn’t worry me very much: if it is very clear from a type’s definition that it can be produced via implicit conversions, remembering the search rules for where those conversions can be defined is not so hard. The only downside will be potentially slower compile times, but it will at least be abundantly clear whose fault it is.

(Note: convertible is a bad name because it is ambiguous between “convertible to” and “convertible from”, but in the proposal, it would be “convertible from (some other type)”).

“If it makes you say shun, then shun it.”

The assumption is that the language import matters to people’s behavior.

Do we know whether this is actually true? I have dozens of imports in most files. The barrier to adding another one that the compiler says I need is approximately zero. It certainly wouldn’t be a barrier to me.

If the compiler doesn’t warn me and things mysteriously don’t work right because of a missing import, that’s easy too: I just always include that import, in every single file.

Same deal as with givens. I never (intentionally) fail to import givens. Who knows if I might need them? Just import and don’t worry about it unless there’s a problem. Usually the companion object rules and such take care of it–I try to structure my code so they do. But can I be sure? Generally, no. So…

Anyway, personally, while I hope that the details about import-free conversions get nailed down to be optimal, as long as the “just add an import” option exists, I think the stakes are low.

Also, I think in cases where method argument annotation would be needed, people would instead adopt the best practice of using a Rust-style .into() at the call site (extension method with given conversion), because the chance that the annotation will be everywhere you need it in every library is probably roughly zero except in very unusual cases. (Those unusual cases might be important so it might still be worth defining the annotation.)

Regardless, I would imagine that the most common solutions will be to either import, or .into().
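
For reference, a minimal sketch of such a Rust-style .into(), built on the standard scala.Conversion type class; the name into and the UserId example are made up for illustration:

// Explicitly apply whatever given Conversion is in scope, Rust-style.
extension [A](a: A)
  def into[B](using c: Conversion[A, B]): B = c(a)

// Hypothetical example conversion and usage:
final case class UserId(value: Long)
given Conversion[Long, UserId] = UserId(_)

val id: UserId = 42L.into[UserId]  // explicit at the call site, no language import needed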

1 Like

I have given this some thought but am not yet convinced. The alternative would be to demand

Set.empty[Foo] + notfoo.convert

Isn’t that clearer? Also if cases like this happen frequently, what’s to stop us from defining:

extension [T](s: Set[T]) def +~ (x: into T) = s + x

What would be a more complete example, where a case like this would arise and we do not have control over the addition method?

That’s completely fine. If people want maximum power they can have it. But there’s also work in teams where members are worried that the design becomes unmaintainable. Here a team lead could easily put a foot down and decree “no language imports” and that would be easy to check.

Also, teaching newcomers. It’s all well and good to say “implicit conversions are dangerous, use them sparingly” but without a clear control where they are inserted, this is not actionable. Requiring the language import helps. First, it means I need explicit opt in. Second, it means I can withdraw the language import and see by way of error messages where all the implicit conversions in my code are.

A meta-design point why we want to allow implicit conversions in arguments but not elsewhere: Implicit conversions are sometimes useful to support ad-hoc polymorphism. We already have two alternatives in our toolbox to address this: overloading and typeclasses. But overloading can lead to a combinatorial explosion of method definitions, typeclasses are more heavyweight, and neither can deal with variadic arguments. So allowing implicit conversions as a third way to support ad-hoc polymorphism seems reasonable.
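
To make the variadic point concrete, here is a small sketch (the Json types and conversions are made-up names): a single varargs method whose heterogeneous arguments are each adapted by an implicit conversion, something a single type parameter with a type class bound cannot express per element.

import scala.language.implicitConversions

sealed trait Json
final case class JStr(s: String) extends Json
final case class JNum(n: Double) extends Json

given Conversion[String, Json] = JStr(_)
given Conversion[Int, Json]    = i => JNum(i.toDouble)

// One varargs method; each argument is adapted individually to Json.
def jsonArr(elems: Json*): Vector[Json] = elems.toVector

val mixed = jsonArr("a", 1, "b", 2)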

My thesis is that that’s it. There is no other widespread, legitimate use case of user-defined implicit conversions in simple Scala 3 code. For more advanced code a language import is a reasonable requirement. Now, prove me wrong. :wink:

1 Like

Actually, the language import seems to be a good middle ground, but I’m not sure about the exact behaviour of this import: is it also required at the use/call site, or only when defining an implicit conversion?

If it is required at the call site, is the export feature compatible with this language import? In the quoted project, for example, there is no situation where the user would not enable implicit conversions while using this library.

I have a class

case class Multiple[+A](value: A, num: Int) {
  def * (operand: Int): Multiple[A] = Multiple(value, num * operand)
  // etc.
}

object Multiple {
  implicit def toMultipleImplicit[A](value: A): Multiple[A] = Multiple(value, 1)
}

The whole point of the class is to allow rapid data entry of serial values in code. Forcing the user of the class to add a “~” in every method where they use the class would sabotage its purpose.
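
To illustrate, a sketch of the kind of call site this enables, assuming the Multiple definition above; Colour and its values are made up, and the import of the conversion is shown explicitly:

import scala.language.implicitConversions
import Multiple.toMultipleImplicit  // bring the conversion into scope for the receiver of *

sealed trait Colour
case object Red extends Colour
case object Green extends Colour

// Rapid data entry: bare values are lifted to Multiple(value, 1), and
// Red * 3 first converts Red, then applies Multiple's * method.
val runs: Seq[Multiple[Colour]] = Seq(Red * 3, Green, Red)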

3 Likes

This seems like the heart of the question to me. I have no problem with a language import at the definition site, but requiring one at the use site seems like it would be an unreasonable burden on libraries. One of the major selling points for Scala is the amount of power you can get from libraries (which helps justify a stdlib that is a lot slimmer than those of many languages); we shouldn’t introduce speed bumps that are basically non sequiturs to the business logic that is using the library.

(Like @Iltotore I’m not clear on whether the proposal is to require such a use-site import or not; my point is just that we should keep library consumers in mind as a use case where such an import is inappropriate.)

2 Likes

The alternative .convert sounds plausible; it would thematically be similar to how we deal with => and * arguments already: as sugar for the library callsite, but in library “internals” often expanded to full lambdas/Seqs and passed around that way.

I’m not sure it’s better than the status quo, as in exchange for limiting implicits it would provide a whole new user-facing api they need to remember to call everywhere, and new confusing errors when it’s wrong, but it’s workable. These same annoyances exist with passing around =>s or typeclass-ed method parameters today, so nothing new.

The extension method wrapper approach, on the other hand, doesn’t work. Even if we wanted to “put things in a collection and deal with them later”, there are dozens of collection operators on dozens of collections people use in different places, many of which are outside the standard library. Wrapping them all up front is impossible, and wrapping them on demand would be super tedious and messy.

I guess my question here is: how widespread a problem is this these days? And would the proposed solution really help?

If you look at Python, anyone and everyone can do crazy stuff, with every feature and every internal piece of the runtime exposed, and yet life goes on without too much issue. Java is similar: anyone can pull up sun.misc.Unsafe and go to town. Both of these can easily produce code as convoluted as anything Scala language features can create!

I don’t see a similar “language import for advanced features” approach being used in any mainstream language. And at least in my professional and OSS experience, language imports have had zero role in determining the subset of Scala to use, even in orgs with relatively strict automatic enforcement of guidelines (e.g. ours, where we lean heavily on -Xfatal-warnings/@SuppressWarnings).

Are there really thousands of engineering teams and orgs out there relying on language imports to control their usage of the language? And what makes Scala so special that we need to take an approach that no other popular language community does here?

I agree with @odersky’s point about newcomers. Knowing that you have to add imports, but not really understanding why, leads to a lot of frustration for newcomers. Even worse, because of the interaction of inheritance, implicits, and name clashes, we have found ourselves in a situation where if you include two common DSL imports in the same file, you will break compilation, but removing one of them (or in some cases, either of them) will compile just fine. Even worse, the error message says that an implicit conversion is missing when in fact the problem is that that implicit conversion is present twice because of inheritance.

For a newcomer, being able to copy-paste code and have it compile is a very important ability. Of course they don’t expect that no imports are necessary, but for simple name imports, IDEs are usually pretty good at doing the work for you. Having to copy-paste all the wildcard imports and cross your fingers is not a good situation. At least with the magic import for implicit conversions, a newcomer knows that they should be able to copy-paste code with confidence if that import is not present.

I think right now I agree with most of this. The only concerns are:

  • Should ad-hoc polymorphism only apply to methods? Currently overloading and typeclasses only apply there, so it’s conceivable that we could make implicit conversions apply there too. Some added user-facing complexity, but not unprecedented. Everything I don’t like about limiting implicit conversions to methods, basically already applies to typeclasses today.

  • Are language imports the way to go to restrict things? I have personally seen zero evidence they work, despite writing code in a variety of environments. They could work, they are meant to work, but they just don’t. And there are other, better, ways to nudge developers in a direction you want (e.g. see the section “Linting Workflows” in Scala at Scale at Databricks)

What I don’t like about Set.empty[~Foo]: at first glance it looks like the type of the Set is not Foo but a type which is convertible to Foo. So, seeing

val s1           = Set.empty[~Foo]
val s2: Set[Foo] = s1
val s3           = s1 + notFoo
s3 + notFoo

It looks (again at first glance, not after reading the docs) as if the assignment of s1 to s2 involves a conversion for every element in s1. Moreover, it is not clear to me what type s3 would have: Set[Foo] or Set[~Foo].

Edit:
Also, if I get it right, then what is actually desired with Set.empty[~Foo] is a kind of zero-overhead adapter: apply some code (in this case a conversion) every time the type parameter occurs as a parameter in Set’s methods (and what about when T occurs as a return type?).

If we want to support something like that, then we should maybe strive for a more general solution.
Something like:

object ConversionToFoo:
  inline def convert[T](t: T)(using c: Conversion[T, Foo]): Foo = c(t)

val s1           = Set.empty[Foo pre-processed by ConversionToFoo.convert]
val s2           = s1 + notFoo
val s3: Set[Foo] = s1
s3 + notFoo

which desugars to

val s1            = Set.empty[Foo]
val s2            = s1 + ConversionToFoo.convert(notFoo)
val s3 : Set[Foo] = s1
s3 + notFoo

which, after inlining etc., is more or less zero overhead. In this case I would assume s3 has type Set[Foo] and calling s3 + notFoo would fail.

1 Like

This leads to an interesting hypothetical: would fixing variadic arguments so they could support heterogeneous arguments and their corresponding type parameters + givens (and thus be amenable to typeclasses w/o implicit conversions) significantly change the situation?

Typeclasses are already considerably more lightweight now, with the way that extension methods are now imported, so if variadic methods can be fixed the use cases for implicit conversions could be shrunk further.

3 Likes

Just to expand on this idea a bit further, another possibility is to allow the convertible modifier to be placed on type aliases, such that convertible type T = U means that implicit conversions to T can be declared, but such conversions produce values of type U. In that case, you could just have type ConvertibleTo[T] = T in the standard library. Assuming this doesn’t cause too many compiler headaches, you could then use ConvertibleTo[Seq[Char]] with no .convert overhead inside the method.
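
Spelled out with the hypothetical names above, it might look like this (shown only in comments, since the convertible alias syntax is not valid Scala today):

// In the standard library (hypothetical syntax):
//   convertible type ConvertibleTo[T] = T

// A library method: the parameter is just a Seq[Char] inside the body,
// so no .convert is needed:
//   def takeChars(cs: ConvertibleTo[Seq[Char]]): Int = cs.length

// At the call site, implicit conversions into Seq[Char] would be admitted
// because the declared parameter type is a convertible alias:
//   takeChars("hello")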

IMHO: It will not.

Yes, it is. In such a situation a code assistant does not help, and Google does not help. There will be no quick way to find answers in the documentation, so it will be a toothache: you would have to know a library by heart to use it effectively. Such an approach dramatically decreases the usability of a library.

So, everywhere you used to write

def f(x: Multiple[T])

you now would have to write

def f(x: into Multiple[T])

Would that be so bad? It’s a bit longer, sure, but also clearer.

Yes, fair enough. I was not sure whether you meant Set.+ and other stdlib operations like it literally or just as an illustration of a general principle. If you meant it literally, I agree extension methods are not practical. But then I would argue that an explicit .convert is preferable in any case. Note that List.+ or Seq.+ would not insert conversions anyway since those types are covariant. So it’s better to make it clear that Set is different by an explicit .convert, IMO.

That’s a good question, and we should probably discuss this on a separate thread. Languages do define subsets, some with pragmas, and some with imports. For Scala it seems to be essential to me to have some means of expressing this, since we are faced with two challenges:

  • having an evolving language where certain features become first experimental, then standard, and other features become deprecated and are then phased out
  • the desire to give guidance in a very orthogonal and unopinionated language.

Are language imports a good way to achieve this? If not, is there a way to improve them? I’ll open a separate thread for this.

3 Likes

I think it would be painful in some situations, for example if the majority of your codebase heavily uses “implicitly convertible into” parameters, like the example explained above (refined types).

A potential solution to this caveat would be to allow conversions at type definition:

opaque type ~Constrained[A, B] = ??? //Allows implicit conversions to `Constrained` without `into` or `~` at parameter-level

//Conversion from A to Constrained[A, B]
given [A, B]: Conversion[A, Constrained[A, B]] = ???
1 Like

It’s not so bad, but consider that the alternative is “use one extra import statement”.

For a few carefully-crafted libraries maybe it’s worth it. For almost everyone else, the balance between maintaining many annotations vs. one import per relevant file would I think lean towards the import.

1 Like

Could such annotations also be added to language constructs like if and while?