Proposed Changes and Restrictions For Implicit Conversions

Why should such a decision be made for a method rather than for a type?

I don’t see the point of the parameter annotation, other than as an unpleasant design whose aim is to restrict the ability to inject types. Why should a library author decide how their code works with types that may simply be unknown to them? I like Scala for the ability to add new base-level types (custom numbers, for example); it is one of Scala’s undeniably strong features. But with a parameter annotation it starts to look like a joke: if a library author did not anticipate a custom type (type class), using that custom type becomes unpleasant, simply because the library author did not need it.

@AMatveev - You have articulated some of the reasons why I prefer a language to enable rather than restrict.

There is a big difference between lifetimes & ownership in Rust and implicit conversions in Scala 3. Lifetimes & ownership in Rust are a foundation of the language; they are present everywhere and they bring correctness advantages (although they are sometimes too restrictive). OTOH, implicit conversions in Scala 3 are treated as second-class (or third-class) citizens that should be used sparingly. Scala 3 brings new language constructs that reduce the need for implicit conversions, so overall they should be rarely seen (at least that’s my impression). Additionally (as written in the first post), there is a plan to improve type inference for code that doesn’t have implicit conversions, so sprinkling ~ (the implicit conversion annotation) over your codebase would degrade type inference, make your code fail to typecheck more often, and people you ask for help would suggest you get rid of unnecessary implicit conversions first.

3 Likes

Let’s look at how this applies to the use of implicit conversions in dotty-cps-async for automatic coloring.

A little recap of what it is:
If we have some async/await structure, then we technically split our code into two parts (colors): one works with async expressions (i.e., F[T]) and one with sync expressions (T without F).
If we want to use an asynchronous expression inside a synchronous function, we write await(expr) instead of expr; to transform the synchronous into the asynchronous, we use async.

Example:

   val c = async[Future] {
     val url = "http://www.example.com"
     val data = await(api.fetchUrl("http://www.example.com"))
     val theme = api.classifyText(data)
     val dmpInfo: String = await(api.retrieveDMPInfo(url, await(theme), "1"))
     dmpInfo
   }

Note that inserting async/await adds nothing to the business logic, and when the underlying monad supports result caching, it can be fully automated. I.e., the last code snippet can be rewritten as:

   import cps.features.implicitAwait.given

   val c = async[Future] {
     val url = "http://www.example.com"
     val data = api.fetchUrl("http://www.example.com")
     val theme = api.classifyText(data)
     val dmpInfo: String = api.retrieveDMPInfo(url, theme, "1")
     dmpInfo
   }

We can see that the code is much cleaner. The awaits are inserted here by transparent inline implicit conversions. (runnable example: dotty-cps-async/TestImplicitAwait.scala at master · rssh/dotty-cps-async · GitHub)
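For intuition, the shape of such a conversion is roughly the following. This is a simplified sketch, not the real dotty-cps-async code; AwaitCapability and its await method are illustrative placeholders for the macro machinery:

import scala.language.implicitConversions

// Hypothetical capability that an async block would provide in scope.
trait AwaitCapability[F[_]]:
  def await[T](fa: F[T]): T // placeholder for the macro-based await

// Inside an async block, an F[T] appearing where a T is expected
// triggers this conversion, which inserts the await for us.
given implicitAwait[F[_], T](using cap: AwaitCapability[F]): Conversion[F[T], T] with
  def apply(fa: F[T]): T = cap.await(fa)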

Now let’s look at how this can be aligned with the new restrictions on implicit conversions:

  1. Argument Conversions.
    Not an option. Usually the external API is third-party.
    In general, I can’t imagine that in industrial programming somebody will devote time to analyzing how a function will be called: I guess eventually everything will converge on two styles:
    A) the ‘~’ will be everywhere;
    B) no ‘~’ at all.
    So, I guess, in general this will not work at all.

  2. Bulk extensions… I guess not applicable.

  3. Explorative programming – this will work (though it requires an additional import clause), but this is not a temporary situation: the code with automated coloring is ready for production. We allow programmers not to care about async/await in the same way that they do not care about malloc/free calls now.

So, for my use case – this is worse than now. If you inevitably want to change the status quo, maybe we can have a few other options:

    1. Some way to enable conversions in an inline code block which is passed to a macro.
    2. Add PleaseIReallyNeedImplicitConversion[A,B] <: Conversion[A,B] to the standard library (sketched below).
    3. Some way to reexport language flags …
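The second option could be as small as a marker subclass, sketched here under the assumption that the compiler would exempt it from the new restrictions:

// Hypothetical marker: conversions declared through it would always be
// allowed to fire, signaling deliberate intent at the declaration site.
abstract class PleaseIReallyNeedImplicitConversion[-A, +B] extends Conversion[A, B]

// Example declaration using the marker:
given PleaseIReallyNeedImplicitConversion[Int, BigInt] with
  def apply(i: Int): BigInt = BigInt(i)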
2 Likes

I can, but then the compiler doesn’t tell me whether that conversion is still needed or not. I’ve tried to somehow get a compilation warning on an unnecessary conversion, but can’t get it right:
Scastie - An interactive playground for Scala.

import scala.language.implicitConversions
import scala.util.NotGiven

extension [T] (v: T)
  def unary_~ [U](using NotGiven[T =:= U])(using conv: Conversion[T, U]): U = conv(v)
  @deprecated("no conversion fired here, so this call can be removed")
  def unary_~ : T = v

@main def mainz = {
  // this should compile, but give a warning that conversion is unnecessary
  // instead I get ambiguous overload compilation error
  val x: String = ~"5" 
  println(x)
}

Not sure if the compilation error is a bug or not.

I’m personally not a fan of this proposal. It’s true that Dotty is meant to replace generic language features with more specific, use-case-specialized features, but I think here the tradeoff isn’t great: one very familiar user-facing feature that does something well (Array => Seq, String => Seq, etc.) vs. a bunch of special cases that kind-of/sort-of end up composing together to fit the original use case in a very roundabout way.

In particular:

  1. I do not think language imports are useful. I have never ever seen a person deterred from what they wanted to do by a language import, and they end up auto-imported and collapsed at the top of the file by the editor. Whether or not that’s ideal, that is the reality after the last decade of experience, and this proposal does nothing to change it.

  2. The ~ argument annotation doesn’t feel right to me. As others have mentioned, the “Scala way” of doing things would be to annotate the type rather than the argument. However, for the common case, people would be defining the implicit conversions in the companion object: those definitions already live in the same file, near the declaration site, so having to add a separate annotation seems redundant.

IMO the change making implicit conversions a typeclass rather than a def is already sufficiently inconvenient to make them not the default choice, and we already provide specialized features for several existing use cases, which further reduces usage of the raw feature. I think that is sufficient for now.

The experience of people using e.g. Python shows that having complex features be exposed is not the problem: after all, nearly every Python internal API, hook, and advanced feature is exposed, and they generally don’t have problems with people using them too much. Instead, it is the features presented in classes and trainings and documentation that shape usage patterns and community.

If we focus on teaching people how to write webservers and clients and interpreters and other concrete use cases, that’s what the community will focus on. And if we teach people how to twist the language in a pretzel with implicit parameters and conversions and cakes and fancy type signatures, that is what the community will focus on.

At least in the past, the Scala core documentation (docs, books, conferences, etc.) has been heavily focused on the latter. That is great for PL-research, less great for professional and production use cases. I think that’s what we should fix rather than adding more bureaucratic hoops to the language that people are just going to jump through anyway.

9 Likes

On reflection, I think one better way of tackling this problem would be as follows:

  1. Prohibit implicit conversions from kicking in when someone calls a method, e.g. foo.doesntExist
  2. Implement bulk extensions, as described in the top post
  3. Anyone who wants both implicit-conversion behavior and bulk-extension behavior has to define both an implicit conversion and a bulk extension

I think such an approach would be more elegant than the language imports and ~-annotations described in this thread, and make the language more orthogonal rather than less (conversions v.s. extensions are now fully separate).

Furthermore, the fact that someone has to jump through a small number of hoops to get the combined implicit-conversion+bulk-extension behavior (defining both explicitly, rather than just a single implicit conversion) would be sufficient inconvenience to make people think twice about asking for both, and allow the majority of folks to just get what they ask for.

As an example, for Array => Seq and String => Seq, we want implicit conversion and bulk extension. For String => fastparse.P we also want both. But for String => scalatags.Text.Frag, we only want the implicit conversion, and the current bulk-extension behavior is incidental and unwanted. Splitting things as above would reduce the amount of power in the feature while also allowing me to be more precise in cases like String => scalatags.Text.Frag, guiding me to the happy path of least power rather than forcing me to jump through hoops to get what I know I want.
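To make the split concrete, here is what the two halves could look like in today’s Scala, using a hypothetical Celsius wrapper; the proposed bulk-extension feature would generate the hand-written forwarder below automatically:

import scala.language.implicitConversions

case class Celsius(value: Double):
  def toFahrenheit: Double = value * 9 / 5 + 32

// Half 1: the implicit conversion, for value positions only.
given Conversion[Double, Celsius] = Celsius(_)

// Half 2: the "bulk extension", written by hand today; it makes
// Celsius members selectable directly on a Double.
extension (d: Double)
  def toFahrenheit: Double = Celsius(d).toFahrenheit

@main def demo =
  def printTemp(c: Celsius): Unit = println(c.value)
  printTemp(21.5)            // conversion fires: Double in a Celsius position
  println(21.5.toFahrenheit) // extension fires: member selection on a Double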

9 Likes

By the way, in the std library these are already handled separately. In 2.13 StringOps and ArrayOps are value classes.

From a user point of view I agree with this. I don’t think anyone would be discouraged from using a feature because of some mandatory import.

However, if I understand the proposal correctly, when no such import is present, type inference can be faster and more precise. The price of a mandatory import is quite small if there is a notable difference in type inference for all the code that doesn’t need it, IMO. Especially since that would be most code by far, in my experience. Though I think your suggestions are good, as long as they do not interfere with the ability to improve type inference provided by Odersky’s proposal.

The bulk export extensions seem like a great addition, even without taking into account any changes to implicit conversions. Maybe we can separate that from the rest of the proposal?

Other than that I don’t really have much of an opinion on points 1. and 3. Making implicit conversions a bit more explicit seems like a good addition, though I need them so rarely, that it’s not going to make much of a difference in the code I am working with.

  1. I think special syntax for argument conversion looks complicated and it does not seem to be worth it :-1:
  2. bulk extension with export seems fantastic!! and also if exports in general could be relaxed in terms of what can be exported, that would also be very nice :+1:
  3. supporting explorative programming is important, and it is OK by me if it becomes a bit more bulky by always requiring some import, if that is required to make type inference in general more precise and faster, which is a very nice improvement :+1:
1 Like

I think some concrete examples of where type inference would improve if implicit conversions are restricted would also be very interesting.

1 Like
  1. I think special syntax for argument conversion looks complicated and it does not seem to be worth it :-1:

But if we disallow implicit conversions, how else would we handle the very common situations where these argument conversions are inserted today? We cannot rewrite all libraries to use type classes instead: it would cause too much breakage and would make type signatures more complicated. Compare

def concat(xs: ~IterableOnce[A]): Iterable[A]

with

def concat[I: ConvertibleTo[IterableOnce[A]]](xs: I): Iterable[A]

The first is shorter, more efficient, and more familiar.

disallow implicit conversions

Hmmm. I thought that the third option, importing language.implicitConversions, meant that you can still use implicit conversions. So why not just stick with that import for argument conversions as well (sorry if I’m missing something in my reasoning here)?

The goal must be to make the language import unnecessary. But if most libraries demand it, that would be self-defeating.

Instead of annotating API parameter types, I wonder if we could annotate the definitions themselves, that is:

@ConversionTarget trait IterableOnce[A] { ... }

This way implicit conversions can be restricted to only trigger when the expected type is a conversion target.
If that doesn’t cover enough use cases, I can think of two generalizations:

  • Allow this annotation on method parameters too, just like the original ~, for situations in which you don’t control the target type.
  • Have a dual @ConversionSource (are there any good use cases for this?); both markers are sketched below.
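For illustration, the markers themselves could be ordinary annotations that the compiler would special-case. This is purely hypothetical; no such compiler support exists:

import scala.annotation.StaticAnnotation
import scala.language.implicitConversions

// Hypothetical markers the compiler would recognize.
class ConversionTarget extends StaticAnnotation
class ConversionSource extends StaticAnnotation

@ConversionTarget trait Json
case class JsonString(value: String) extends Json

// Would be allowed to fire implicitly, because the target type opted in.
given Conversion[String, Json] = JsonString(_)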
3 Likes

If, for instance, my library method requires an ‘Employee’ instance, I don’t think I should be deciding at the definition site how users come up with an ‘Employee’ instance. They can pass it explicitly or use a conversion. I personally don’t like the idea of taking this decision into my own hands: that seems like the library user’s choice.

To enable everyone to design what they want, I would have to write ‘~Employee’ everywhere.

4 Likes

I think that would defeat the purpose. IterableOnce can be an inferred type. That means that

  • the compiler still does not know whether implicit conversions need to be inserted or not; it depends on the instantiations of type variables, which might come after the decision needs to be taken (see the sketch below)
  • the reader still does not know where implicit conversions are inserted; it depends on types that are implicit themselves.
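A small sketch of the first point, assuming IterableOnce were annotated as a conversion target:

def first[T](x: T, y: T): T = x

val arr = Array(1, 2, 3)

// The expected type of `arr` is an undetermined variable T. Whether T gets
// instantiated to IterableOnce[Int] (a conversion target) or to something
// else is only known after both arguments are typed, which is after the
// point where a conversion on `arr` would have had to be inserted.
val r = first(arr, List(1, 2, 3))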

I firmly believe it has to be the method that’s annotated, not the type. Exactly like we indicate a required type class instance with a context bound or using clause in a method.

If, for instance, my library method requires an ‘Employee’ instance, I don’t think I should be deciding at the definition site how users come up with an ‘Employee’ instance. They can pass it explicitly or use a conversion. I personally don’t like the idea of taking this decision into my own hands: that seems like the library user’s choice.

The idea is that, in the future, the default will be that nobody uses an implicit conversion anymore. What will typically happen is:

  • Either the library designer already knows about common (explicit) conversions into Employee, maybe because the library defines them itself. That’s the case for the conversion from Array to IterableOnce in the standard library, for instance. Then the library can annotate methods so that the conversions are allowed to be implicitly inserted,
  • or the library designer does not consider any conversions; then all conversions at the use site must be explicit.

In essence, implicit argument conversions are just a convenient way to avoid overloads. The library designer could define a zillion overloads instead, one for each argument type that could otherwise be implicitly converted. Or manage that with a type class. Or allow the conversion. But, just like overloads or using clauses, it has to be the library designer who thinks of these things. If we throw it back to the user, we are back to the current status quo where anything goes.
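A sketch of the three options side by side, reusing the ConvertibleTo name from the comparison above (illustrative signatures only):

// 1. Overloads: one per accepted argument type.
def concat[A](xs: IterableOnce[A]): Iterable[A] = ???
def concat[A](xs: Array[A]): Iterable[A] = ???

// 2. Type class: one generic signature, plus an extra abstraction.
trait ConvertibleTo[-T, +U]:
  def convert(t: T): U

def concatTC[A, I](xs: I)(using c: ConvertibleTo[I, IterableOnce[A]]): Iterable[A] = ???

// 3. Proposed: allow an implicit conversion at this parameter only.
// def concat[A](xs: ~IterableOnce[A]): Iterable[A]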

5 Likes

I understand the objective better now. Makes sense. :+1:

This “~” declaration-site behavior could be defined in library code, using a type-lambda alias and a context bound:

case class Foo()
case class Bar()

/* Some type `T` implicitly convertible to `U` */
type `~`[U] = [T] =>> Conversion[T, U]

def someMethod[BAR: ~[Bar]](bar: BAR): Unit = ???

given Conversion[Foo, Bar] with
  override def apply(foo: Foo) = ???

def example: Unit =
   someMethod(Foo()) // no warning, as intended by API provider

I don’t know how prevalent this use case will be, but this doesn’t seem too bad in terms of boilerplate (it’s just BAR: ~[Bar]). My main worry would be users confused by the more complex type signature.

Maybe it’s worth experimenting with this approach to see if better ergonomics are needed. This also creates a path towards simple desugaring rules vs more complex language semantics.
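One wrinkle worth noting: inside someMethod the argument still has type BAR, so the body must apply the conversion explicitly. A sketch, relying on the ~ alias defined above:

def someMethod[BAR: ~[Bar]](bar: BAR): Unit =
  // The context bound only supplies the Conversion instance;
  // the body has to invoke it to obtain an actual Bar.
  val b: Bar = summon[Conversion[BAR, Bar]](bar)
  println(b)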

1 Like

But is there a situation where this would make a difference? By the time we’re inferring the type of an expression, if its expected type is still an undetermined type variable such that we don’t know whether it’s going to be upper-bounded by a type which is a ConversionTarget, then I don’t think that an implicit conversion will ever be inserted around that expression. Maybe there’s some corner case where it would make a difference, but there are already corner cases today where implicit conversions don’t kick in because type inference takes precedence: Implicit conversion is not applied when the compiler has to also infer a type parameter · Issue #8803 · lampepfl/dotty · GitHub

2 Likes