Proposed Changes and Restrictions For Implicit Conversions

This looks really nice. The export feature in particular looks really useful, even outside the scope of this proposal.

Are there any downsides to it that I’m not seeing right now, which led to the original design where the exported members had to be stable identifiers?

1 Like

How will we be able to use an argument conversion with operators (if, while)? It is a killer feature for us. There is just no way to redefine “if” without losing readability.

I appreciate the thoughtfulness you put into identifying different situations in which implicit conversions are legitimately useful (or at least, otherwise common).

But, one criticism (in the spirit of candor) is that I’m not sure the proposed solution follows from the examples. The proposed solution is to tighten restrictions on the Conversion construct, but what about instead making it unnecessary for the legitimate use cases among those you identified?

At least #2 (bulk extensions) seems like a legitimate use case (at least in Scala 2) which ought to be replaceable by normal extension methods (though maybe the new import requirements interfere with that?).

In cases where I’ve used #1, it’s usually newtype-related; e.g. the method takes a TinyList and it’s annoying to have to write TinyList(List(foo)) everywhere, when List(foo) is obviously tiny (here meaning <= 256 elements). I think that’s probably solvable with inline in Scala 3 at the argument level, though it would probably be less clear (but safer), and a little more work at the end of the day.
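For concreteness, here is a hedged sketch of that newtype pattern in current Scala 3. Only TinyList and the 256 bound come from the post; the other names are invented, and it uses a runtime check plus a Conversion rather than the inline approach mentioned above:

import scala.language.implicitConversions

object tiny:
  opaque type TinyList[A] = List[A]

  object TinyList:
    def apply[A](xs: List[A]): TinyList[A] =
      require(xs.sizeIs <= 256, "a TinyList may hold at most 256 elements")
      xs
    // This conversion is what lets call sites pass List(foo) directly.
    given [A]: Conversion[List[A], TinyList[A]] = xs => apply(xs)

  extension [A](t: TinyList[A]) def toList: List[A] = t

With the given in TinyList’s companion, a method taking a tiny.TinyList[A] accepts a plain List[A] at any call site that enables implicit conversions.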

Another notion to put out there: I can’t recall ever having a legitimate use case for implicit conversions that escape a project/package (at least, not one that isn’t addressed in Scala 3 already). Maybe forcing implicit conversions to be package-private to some parent package would help restrict them without any new semantic rules?
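That restriction can already be approximated today. A minimal sketch, assuming a hypothetical mylib package and Name type:

package mylib

case class Name(value: String)

// Usable anywhere inside mylib, but invisible to downstream users,
// so the conversion cannot escape the library.
private[mylib] given Conversion[String, Name] = Name(_)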

I am not convinced that we’re addressing the problem at the right point here.

Even if they are used as intended, they make it hard for a reader of the code to figure out what goes on.

Let’s solve that directly. @explicit { ... } produces warnings on everything inside that isn’t explicit, including implicit conversions. Now if you get into more than trivial difficulty, you can fix it with a recompile. Furthermore, IDEs can already show what’s going on AFAIK, but if there are any pieces missing, let’s fix them.

Another important reason is that unrestricted implicit conversions make type inference less precise and less efficient. The problem is that we can never be sure whether some type in the context constrains a type variable.

Normally this isn’t an issue: type inference works fine. If there are assumptions needed to make it work better in some cases, it would be better to be able to (1) have the compiler tell you, if you supply a flag, that it’s struggling here (e.g. by instrumenting the depth and breadth of the search); and (2) be able to manipulate the assumptions.

One way to do this locally is to use shadowing. If you import language.enableFeature.implicitConversions, you turn on the feature; import language.disableFeature.implicitConversions would turn it off. You can always get the behavior you want within your scope.


I think the suggestions are fine. I’m just not convinced that they really tackle the problem. Point (3) is basically just, “But sometimes you actually do want them”, and then we’re in the same position as before. Unless we get to the point where we say, “No, Scala doesn’t support that, period,” I think it’s better to solve the pain points than to guide people away from a feature that is still there but has pain points.

1 Like

Is this something that can be generally allowed (not just inside extensions) and also for imports?

1 Like

It can be generally allowed for exports but not for imports. It’s a question of evaluation order. Say you have

import f(x).a
...
a + a

when is f(x) evaluated? I believe it would have to be evaluated at the point where the import appears, so the code would rewrite to

val $temp = f(x)
...
$temp.a + $temp.a

The alternative semantics would rewrite at the point of access, i.e.:

...
f(x).a + f(x).a

That would be not only surprising but also potentially costly. It would not work at all if we import a type, since type prefixes may not be expressions. On the other hand, evaluating at the point of import does not work either, since we get into a mess for toplevel imports outside a class. So it was a wise decision to restrict import prefixes to stable identifiers, and we should keep it.

For exports, we have the same question.

export f(x).{a, b}

must mean

private val $temp = f(x)
def a = $temp.a
def b = $temp.b

since otherwise we could not export types. This time we do not have a problem with that interpretation, since vals are allowed wherever exports are allowed. Note that for extension methods the two evaluation strategies lead to the same result.

extension (x: T)
  export f(x).a

rewrites to either

def a(x: T) = { val $temp = f(x); $temp.a }

or

def a(x: T) = f(x).a

but those two are equivalent.
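To make the rewrites above concrete, here is a runnable sketch in current Scala 3 (Ops, f, C, and a are invented names for illustration):

class Ops(x: Int):
  def a: Int = x + 1

def f(x: Int): Ops = Ops(x)

// Today's stable-prefix encoding of the proposed `export f(x).a`:
class C(x: Int):
  private val underlying = f(x)
  export underlying.a

// The expansion that `extension (x: Int) export f(x).a` would produce:
extension (x: Int)
  def a: Int = f(x).a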

Are there any downsides to it that I’m not seeing right now, which led to the original design where the exported members had to be stable identifiers?

Just that we modeled exports after imports, and this would introduce a deviation between the two.

4 Likes

You can already do shadowing with language imports.

import language.implicitConversions

turns on implicit conversions and a nested

import language.implicitConversions as _

turns them off again.
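Putting the two together (intToStr and the object names below are made up):

import scala.language.implicitConversions   // enabled from here on

object Outer:
  implicit def intToStr(i: Int): String = i.toString
  val ok: String = 42   // conversions enabled: compiles silently

  object Inner:
    import scala.language.implicitConversions as _   // turned off again
    val warns: String = 42   // the same conversion now draws a feature warning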

2 Likes

Oh, that’s handy! I didn’t realize it would work like that.

I’m not 100% sure it should work like that. This compiles instead of complaining about no HashSet:

object Main {
  import collection.mutable.HashSet
  def main(args: Array[String]): Unit = {
    import collection.mutable.{HashSet => _}
    val a = HashSet.empty[String]
    a += "salmon"
    println(s"Hello, $a world!")    
  }
}

But, anyway, it’s good to have the capability!

It’s currently special-cased for language features and shadowing of root imports. But yes, we might want to generalize that to all imports, i.e. a suppressing import would also suppress all imports further out.

This is awesome. And I was completely unaware this was possible. Is this explicitly documented somewhere?

This is the first time I have ever seen this. And a quick Google search on “scala implicit conversions turning them off and then back on” didn’t show anything.

1 Like

What about annotating arguments at the call site instead? This way the IDE could highlight the conversion annotation differently, depending on whether it fires or not. Example:

someIterable.concat(~Array(a, b, c))

inject requires a conversion, while a conversion annotation declares an optional conversion, i.e. it’s inserted by the compiler when it’s needed. That would also enable implicit conversions in ifs, while loops, calls to external libraries, etc.

5 Likes

Thinking about this more, I don’t think I’d know, when I write this, that I also want to accept things that can be implicitly converted to IterableOnce.

Scala 3 already took the more complicated type class approach with Conversion. Maybe we should just take an implicit converter where there are now magnets, since those signatures are already complex. They don’t really become all that much more complex, in the sense that with magnets you’re already going to read the documentation to understand how you’re supposed to call the method. The type class may make that clearer rather than less clear.
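A hedged sketch of what that could look like (concatTo and the Array given are invented; Conversion and IterableOnce are the real library types):

// Accept anything convertible to IterableOnce via an explicit type class
// parameter instead of relying on a silent implicit conversion:
def concatTo[A, B](prefix: List[A], suffix: B)(using conv: Conversion[B, IterableOnce[A]]): List[A] =
  prefix ++ conv(suffix)

given [A]: Conversion[Array[A], IterableOnce[A]] = arr => arr.iterator

// concatTo(List(1, 2), Array(3, 4)) == List(1, 2, 3, 4)

The conversion is applied explicitly inside the method, so no use-site language import is needed, and the signature documents that converted arguments are welcome.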

For the case where you have a non-magnet conversion like the example, maybe the call site annotation makes sense.

2 Likes

What about annotating arguments at the call site instead? This way the IDE could highlight the conversion annotation differently, depending on whether it fires or not. Example:

Big no to this. Having to know that a conversion is expected is the same as writing it yourself. Furthermore, it’ll lead to people randomly trying to prefix things with ~ to see if it works (or whatever symbol/notation is chosen). Pretty much how I throw & and * around randomly (figuratively speaking) in Rust to try to satisfy the compiler, because I refuse to track silly colored pointer types in my head.

3 Likes

The proposed use-site ~ is exactly the same as .into() in Rust, just spelled differently.

I’m not up to date with Rust, nor do I use it regularly; I just did the expedited introduction by reading the book twice, once in 2016 and once in 2020, and then writing a small project. I think my point still stands. Anything that fosters “casting” habits is a big no for me. (I call “casting habits” the act of trying the various cursory “casting” mechanisms to see if one makes the compiler shut up, such as as[Thing], ~thing, &thing, *thing, or .into().)

A use-site annotation is indeed redundant. One can just write argument.inject or argument.into or whatever we choose to call the conversion method.
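Such a method is a one-liner today; a minimal sketch (the name into is illustrative, not an existing library method):

extension [A](a: A)
  // Resolve a given Conversion and apply it explicitly at the use site,
  // so nothing fires silently.
  def into[B](using conv: Conversion[A, B]): B = conv(a)

// With e.g. `given Conversion[Int, String] = _.toString` in scope: 3.into[String]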

But I believe a declaration-site annotation does make sense. For instance, in the concat method it is very much by design that conversions should be admitted. Those conversions exist in the same library, and the otherwise necessary overloads of concat were omitted because the designers knew a conversion was available. The same holds for the magnet pattern: the magnet type is co-designed with the conversions into it.

Though it’s not so much concat that is designed to accept a converted argument; it’s actually IterableOnce that is designed to be converted to. So maybe it makes more sense to annotate the type/class declaration rather than every individual method declaration.
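The magnet side of this is expressible today. A runnable sketch (LengthMagnet, describe, and both conversions are invented) where putting the Conversion givens in the companion of the target type is what marks that type as designed to be converted to:

import scala.language.implicitConversions

class LengthMagnet(val length: Int)

object LengthMagnet:
  // Co-designed conversions live in the target type's companion,
  // so they are found whenever a LengthMagnet is expected.
  given Conversion[String, LengthMagnet] = s => LengthMagnet(s.length)
  given [A]: Conversion[Seq[A], LengthMagnet] = xs => LengthMagnet(xs.length)

def describe(m: LengthMagnet): String = s"length = ${m.length}"

// describe("hello") and describe(Seq(1, 2, 3)) both compile via the givens.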

1 Like

I think it’s a matter of style.

I prefer a style where a language enables you more than it restricts you. So I am in favor of allowing implicit conversions, symbolic operators, infix methods, optional braces, and so on. This is very helpful for rapidly creating code that is readable and maintainable for someone who has similar preferences.

However, when one is primarily concerned with having to figure out, modify, and clean up code created by others who may have different preferences, then I can understand preferring more restrictions. I don’t fully understand why this cannot be adequately handled by linters and code-rewriters (not to mention IDEs). But I accept that it is an important consideration for some people despite having these other tools that can help. So perhaps it does make sense for this style. It sounds reasonably consistent with this sort of concern.

However, with all of these things, I think it is a good idea to ask: is this restriction which makes the language more complicated necessary even in light of other tools that we have or can build (with comparable amounts of work) to solve the same problem? To me it seems like the answer would be “it’s not worth it” in this case, but then I’m not a very good judge because my stylistic preference isn’t aligned with the goal here.

Still, I think it’s good to ask the question.

8 Likes

Why should such a decision be made for a method rather than for a type?

I don’t see the point of a parameter annotation, unless the goal is a design that restricts the ability to inject types. Why should a library author decide how to work with types that may simply be unknown to them? I like Scala for the ability to add base types (custom numbers); it is one of Scala’s undeniably strong features. But with a parameter annotation it would look like a joke: if a library author does not need any information about a custom type (type class), using that custom type becomes unpleasant, just because the library author did not need it.

@AMatveev - You have articulated some of the reasons why I prefer a language to enable rather than restrict.