Proposed Changes and Restrictions For Implicit Conversions

That is a good point. So I’m quoting it to stress it. :stuck_out_tongue:

I do have a segue about it, though. I’m still using Laminar as my “benchmark” for what a good design using implicits looks like. In Laminar, it is common practice to define user functions that take Modifiers as arguments. See here for one of the examples that does that (the function renderInputRow):

and here for the sequence of the “Big Video” tutorial that explains the rationale:

For these user functions, we also want stuff that can be converted to Modifier[El] to actually be converted. With the into keyword at the parameter site, we’ll have to teach users of Laminar to declare all parameters of type Modifier[El] with the into keyword.
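To make this concrete, here is a minimal sketch of what such a user function would look like under the into proposal (the signature is a simplified stand-in for the real Laminar API, and into is the proposed syntax, not current Scala):

def renderInputRow(label: String, mods: into Modifier[HtmlElement]*): HtmlElement =
  ... // every parameter meant to accept convertible values needs its own `into`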

It is quite likely that, in those situations, the alternative of “use one extra import statement” will prevail, even with carefully crafted library code.

This wouldn’t be an issue if we could instead annotate the definition of Modifier itself, as a type-to-convert-to.


Now, if we do annotate types instead of parameters, how can we still respect the meta-design point mentioned at

?

I think we can do this by being quite specific about what we can annotate, and where those annotations are looked up. For the purposes of giving examples, I will use the following syntax:

conversionTarget trait Foo

I believe it would be fine to restrict what we can annotate to nominal type definitions. That means:

  • class definitions
  • trait definitions
  • opaque type definitions

Specifically, this excludes (non-opaque) type aliases and type parameters. This is an important ingredient to limit the influence of type inference on where implicit conversions can happen.

The other ingredient is: when do we even look for an implicit conversion? I think for that we can stick to the existing rules, modulo the conversionTarget restriction: if I have an expected type E known to be a reference to a conversionTarget T, and the term doesn’t conform to E, we can look for implicit conversions to E. Since conversionTarget only applies to nominal types, and we must know that E is a reference to a conversionTarget before starting to look for implicit conversions, that should dramatically reduce the influence of type inference on where implicit conversions can come into play.
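Here is a sketch of how that rule would play out, using the hypothetical conversionTarget syntax from above (Modifier and the given are illustrative):

conversionTarget trait Modifier[El]   // proposed syntax, not current Scala

given [El]: Conversion[String, Modifier[El]] = s => new Modifier[El] {}

def render[El](mods: Modifier[El]*): Unit = ()

render("hello")
// The expected type Modifier[El] is statically a reference to a
// conversionTarget, so the compiler may search for a conversion.

val x = "hello"
// No expected type referring to a conversionTarget: no search happens,
// so type inference cannot smuggle a conversion in.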


One place this runs into trouble is Scalatags, which is ~identical to Laminar, except it uses a type alias for Modifier in a generic Bundle trait. This is because it provides multiple implementations - a Text-based implementation, a DOM implementation, and a virtual-DOM implementation - all of which have different concrete types for Modifier. This is all in line with the “objects as modules” thing that Scala espouses; what used to be a JVM package is now an object, allowing inheritance and abstraction and all that.

Apart from letting the library share code internally, this also allows code sharing of user code between different implementations by programming against the abstract Bundle type (ScalaTags). In this case the concrete implementations are not known statically, but we still need implicit conversions to the abstract Modifier type to work.
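For readers unfamiliar with Scalatags, here is a heavily simplified sketch of that structure (names and signatures are illustrative, not the real API):

trait Bundle:
  type Modifier                       // abstract: each backend picks its own type
  def div(mods: Modifier*): Modifier

object Text extends Bundle:
  type Modifier = StringBuilder => Unit
  def div(mods: Modifier*): Modifier = sb => mods.foreach(_(sb))

// User code written against the abstract Bundle: here Modifier is an abstract
// type, not a nominal type definition, so a nominal-only conversionTarget
// annotation could not be attached to it.
def userWidget(b: Bundle)(children: b.Modifier*): b.Modifier = b.div(children*)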

It’s possible that the library can be refactored to avoid an implicit conversion to an abstract type, but it might be a pretty invasive refactoring to make that happen.


Ah. That seems to kill the restriction on nominal types. :frowning: I’ll have to think about how we can remove that restriction in a way that might be acceptable to @odersky.

TBH I think I would be OK with a method-parameter-level annotation for implicit conversion targets. It’s different from how it works now, but it brings things closer to how typeclass / by-name-param / varargs sugar works, and we’ve been living with those forever.

We would need to do it thoughtfully, though. To me, that means:

  • The annotation must be concise. into is far too verbose for something that may be all over the codebase. * and => are OK levels of conciseness, so something like ~ for implicit conversion targets would fit nicely.

  • We need a concise, standard way to apply the implicit conversion, for the “put thing into collection of type T” use case. IMO .convert is too verbose too. Something like ~t may be OK, and would fit nicely with ~ being used to annotate the param, * used for defining and expanding varargs, and =>/() => used for defining and wrapping by-name params (see the sketch after this list).
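Put together, the suggestion would look something like this (purely hypothetical syntax, shown only to illustrate the intended level of conciseness):

def render(mods: ~Modifier*): Unit = ...      // `~` marks a conversion target,
                                              // the way `*` marks varargs
val mods = List[Modifier](~"title", ~footer)  // `~t` applies the conversion
                                              // explicitly, e.g. into a collection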

No matter what we do, having to care about “implicit conversion target” v.s. “not implicit conversion target” is something we are adding to the language. Previously, library users just annotated the relevant type and things just magically worked. But with a concise enough syntax, similar to the other special ways we can annotate method parameters, I think it could be OK


A method-level annotation can be accomplished with a typeclass, but everyone agrees that using typeclasses directly would be too verbose, so perhaps we could make typeclasses more ergonomic? One thing that has always bothered me about typeclasses in Scala is that in the (reasonably common) case where you don’t care about the type, you still have to name it:

def foo[T: Typeclass](x: T): Unit = { ... only accesses methods via Typeclass[T] ... }

In Rust, you don’t have to name the type:

fn foo(x: impl Typeclass) { ... }

This is why the Into pattern works so well:

fn foo(x: impl Into<Int>) { ... }

is more verbose than the proposed x: into Int, but it’s not a special language construct.

We could do something similar in Scala, but more powerful. Suppose that in addition to being able to ascribe a type to a parameter, we could also ascribe a type lambda TL, where the semantics are

def foo(x: TL)

is syntactic sugar for

def foo[T](x: T)(using TL[T])

Then, we could write the very first definition as

def foo(x: Typeclass[_]): Unit = { ... only uses methods on Typeclass ... }

Similarly, for implicit conversions, you could write

def foo(x: Conversion[_, Int]): Unit = { ... }

This is still pretty verbose, but with a succinct alias for Conversion like Into, it’s actually less verbose than Rust

def foo(x: Into[_, Int]): Unit
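For comparison, here is a sketch of what that would desugar to in current Scala, following the expansion rule above (names are illustrative):

def foo[X](x: X)(using conv: Conversion[X, Int]): Unit =
  val i: Int = conv(x)  // the conversion is applied explicitly where needed
  println(i)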

If we want to get really fancy, we could even use an operator like =:> and infix types to get

def foo(x: _ =:> Int): Unit = { ... }

but that’s probably too much and I don’t even know if it parses.

This is maybe too radical of a proposal, but if you agree with the premise that it would be nice to make typeclasses lightweight enough that they can be used without modification for method-level implicit conversions, then it’s maybe worth thinking through other options.

And yes, I’m well aware that this proposal will be very confusing given the recently changed meaning of _ in types. It will have to wait for 3.2 at a minimum, and even then the potential for confusion might be too great. I don’t want to focus too much on this specific proposal, just to argue that something like it might solve not just the problem at hand, but also make typeclasses easier to use in general.

// With the proposed sugar, Into needs only one type parameter:
type Into[T] = [X] =>> Conversion[X, T]
def foo(x: Into[Int]): Unit = { ... }

I sent this before by email, not sure why it didn’t show up.

Why not simply have them all extend an empty marker trait?

Mark that as convertible-to and use it as an upper bound of the base type alias.

Wouldn’t that solve the problem?
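If I follow, the suggestion is something like this (hypothetical conversionTarget syntax again, with illustrative names):

conversionTarget trait ConvertTarget  // empty marker trait

trait Bundle:
  type Modifier <: ConvertTarget      // abstract alias upper-bounded by the marker

// Every concrete Modifier would then be statically known to be a
// conversionTarget, so the expected-type rule from earlier could apply.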

No, it can’t. It doesn’t work for varargs. That’s why we’re having this whole discussion. :wink:

(also it doesn’t work for JS/Native types in interoperability scenarios, like for js.FunctionNs)

That’s a good point that I wasn’t thinking about, though I would bristle a little at the notion that it’s the reason for the whole discussion – varargs aren’t mentioned until very close to the bottom of this very long thread. It’s true that heterogeneous varargs would not be supported, but it’s not clear to me how important that really is.

I personally favor conversionTarget/convertible annotation on types by a wide margin, but I was trying to follow the discussion here. I think having a special language construct for implicit conversions, but only on method declarations, will make newcomers even more scared of Scala.

If I understand correctly, this can be circumvented using the Fragable mechanism.

Fragable is a very complicated infrastructure, relying on auto-tupling, tuples-as-HLists, and recursive typeclass definitions. That’s not something you want to explain to the typical beginner writing their custom Laminar function. Gosh, I have to stare at the code for a while to get what it means!

(Not to mention that relying on auto-tupling instead of varargs means that you cannot have f(fixedArg1, fixedArg2, vararg1, vararg2, ..., varargN).)

There is always the magic import for heterogeneous varargs. Maybe the right question is whether the standard library would suffer without heterogeneous varargs. If not, then maybe it’s okay to limit them to advanced use requiring the magic import?


I think the into proposal is actually quite modest and it does address the issue of varargs. The question is whether it is general enough. E.g. in Laminar, we’d have to add into annotations also on user-defined functions. I tend to think that’s actually OK, since it makes the types of these functions less magical.

Another issue is that conversion of arguments would not be preserved under eta-expansion. Example:

def f[T](xs: Seq[T]) = ...
val a: Array[String] = ...
f(a)  // OK
List(a, a).map(f)  // error, since arrays are not sequences. 

On the other hand, the abstraction is leaky already. E.g. if we add

val b: Seq[String] = ...

then

List(a, b).map(f)

already gives an error today. So maybe taking the line of restricting implicit argument conversions to a minimum would work.

It is modest, but it is a modest addition to the language, with very little meaningful subtraction. It means that, compared to the current Scala 3 spec, a new user still needs to learn everything they currently do about implicit conversions; they just also need to learn when they apply. Since we expect into to be in the standard library, you can’t meaningfully tell a newcomer that they don’t need to worry about learning how implicit conversions work.

It is true that a user (beginner or experienced) can more confidently rule out implicit conversions as a potential problem in some cases, but they have to first go through a checklist (Am I passing an argument to a function? Does this argument have into? Is the import present?). Since, IIUC, into will be common in the standard library (and in practice, potentially common in many libraries), ruling out such problems might not save much time. There are now also times when the new additions are themselves a source of confusion (“Wait, how come an implicit conversion applies here, there’s no into keyword? Oh, shoot, didn’t notice that import in that other file. Hmm, should I add into to the declaration and remove the import, or use the import here?”)

The same objections apply to conversionTarget/convertible too, but they come with considerably less declaration clutter. It seems to me that it is just as easy to look at a type’s declaration to understand that it is effectively always preceded by into, but I could be wrong about that. conversionTarget would also permit implicit-conversion-by-type ascription, but that specific case doesn’t seem to be a central worry here.

EDIT: Of course, the performance benefits to the compiler of limiting the surface area of conversions to into are potentially substantial, though the library writer can still thwart the gains by overusing into.


I’m :+1: on making this intent clear at the definition site. That would pretty much wipe out the most legitimate need for implicit conversions, IMHO.

But, isn’t this kind of backsliding on the <% implicit view thing that was deprecated so many years ago?

For context, the “legitimate use case” I’m talking about seems like kind of an edge case, but I think it’s a reasonably common one: you want to define a variadic method that takes any number of what Rust would call (I think) “trait objects” – i.e. a value along with a typeclass instance that permits an operation upon the value in which you’re interested. (Existing examples of this use case don’t necessarily have a typeclass, or make it obvious that it’s essentially a “trait object”, but they could be restructured that way.)

You can’t use typeclasses / context bounds for this, because it’s variadic. But, if type parameters could be variadic as well (with a variadic type parameter being able to have a context bound)… that might be a more principled way to supplant that use case. Maybe the already built-in HList stuff is sufficient to enable something like that?
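For concreteness, here is a sketch of that trait-object use case in current Scala, with hypothetical names; the implicit conversion is what lets each vararg bring its own typeclass instance along:

import scala.language.implicitConversions

trait Show[A]:
  def show(a: A): String

// A value bundled with its typeclass instance, roughly Rust's `dyn Show`.
trait Showable:
  def show: String

given toShowable[A](using s: Show[A]): Conversion[A, Showable] =
  a => new Showable { def show = s.show(a) }

// Variadic, so one context bound per argument is impossible today; the
// conversion above is what makes printAll(x, y, z) work for mixed types.
def printAll(items: Showable*): Unit = items.foreach(i => println(i.show))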

What happens in pattern matching?

case class Foo(l: into Long)
val f = Foo(5)
val x = f match
  case Foo(1) => 1
  case _ => 2

Does the unapply signature then look like this:

def unapply(arg: Foo): Option[into Long]

Can we do type X = into Long?

In my proposal into cannot be used on arbitrary types, only on parameter types. So the unapply could not be written like this.

So there is no way to have an implicit conversion occur in pattern matching?
If that is so, this is too crippling and unexpected a behavior, IMO.

There is no implicit conversion in pattern matching today, neither with given Conversion nor with implicit def.

You may be confusing this with cooperative equality between primitive numeric types? Basically, the fact that in Scala, (1: Any) == (1L: Any) is true.
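A quick demonstration (this is current Scala behavior, runnable as-is):

val a: Any = 1        // boxes to java.lang.Integer
val b: Any = 1L       // boxes to java.lang.Long
println(a == b)       // true: `==` on Any uses cooperative numeric equality
println(a.equals(b))  // false: plain Java equals does not cooperate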


Oh yeah, you are right. It’s the cooperative equality.