Can We Wean Scala Off Implicit Conversions?

Implicit conversions are still the best way to maintain two views of a computation flow: a coarse-grained one, where you don’t want to think about details that should be handled automatically, and a precise one, where you want to control every detail. Implicit conversions can be the layer that turns the second into the first without boilerplate. A potential example:

async {
  import cps.implicitAwait   // I don't want to think about details, like in Loom
  val urlData = fetchData(url)
  val theme = classify(urlData)
  retrieveDMPInfo(url, theme, userId)
}

and

async {
  // I want to color things myself
  val urlData = await(fetchData(url))
  val theme = classify(urlData)
  await(retrieveDMPInfo(url, await(theme), userId))
}

1 Like

I was very much in favour of this at first. I believe implicit conversions should be used as little as possible in non-test code. But they’re very, very, very useful in test code. They…

  • help migrations (as mentioned above)
  • enable conciseness - I want my non-test code to be as safe as possible, which often means semantic layers for type safety; in tests that becomes boilerplate, since you often construct data manually for verification and don’t really care about type safety, the whole point of most tests being runtime value validation.
  • DSLs - I’m personally not very interested in modelling code so that it reads like natural language, but DSLs are quite useful for dev UX and for expressing complex concepts while reducing the complexity of their construction. From time to time I find a good use case in non-test code, but in test code I find this far more useful, far more often. Implicit conversions often become a very important building block, often in a non-argument position, meaning that typeclasses aren’t a drop-in replacement that preserves the DSL usage.

So while I’d be fine with having implicit conversions removed from non-test code, I’d still want a scalac flag or similar to retain implicit conversions for test code. Being without implicit conversions would be significantly detrimental when writing (and maintaining) tests.

3 Likes

I think int2bigInt is a case that requires both features: implicit constructor and extension method. How do you think we could handle it?
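To make the point concrete, here is a minimal Scala 3 sketch of covering int2bigInt-style usage with the two features; Big is a made-up stand-in for BigInt to keep the snippet self-contained:

import scala.language.implicitConversions

// Made-up stand-in for BigInt, so the sketch is self-contained
final case class Big(value: Long):
  def +(that: Big): Big = Big(value + that.value)

// "implicit constructor": lets an Int be passed where a Big is expected
given Conversion[Int, Big] = n => Big(n.toLong)

// extension method: lets an Int appear on the left of Big's operators
extension (i: Int)
  def +(that: Big): Big = Big(i.toLong) + that

@main def demo(): Unit =
  def twice(b: Big): Big = b + b
  println(twice(21))    // Conversion applies at the argument position
  println(1 + Big(41))  // extension method applies to the receiver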

3 Likes

So what if we kept scala.language.implicitConversions indefinitely but started requiring it at the use site? Then

  • we would enjoy superior type inference or performance for the vast majority of the code, and
  • for the rare cases where implicit conversions are what the author desires, be it tests or whatever else, she can have the import there, so it’s still possible.

Everybody wins :tada:

I’m sure sbt-tpolecat maintainers would happily remove -language:implicitConversions if it meant better inference/performance for their users :wink: /cc @DavidGregory084

In other words, we won’t pay the price for implicit conversions, if we don’t use them :+1:
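Concretely, a sketch of what a use site might look like if the import became mandatory rather than a mere -feature warning (the stdlib Int-to-BigInt conversion is just an example):

object UseSite:
  import scala.language.implicitConversions  // would be required here, at the use site
  val n: BigInt = 42  // the stdlib Int => BigInt conversion applies only because of the import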

3 Likes

I have the impression that a consensus is emerging to allow implicit conversions only under a language import at the use site. My original speculation that this option could at some point no longer be necessary has not been borne out. It looks like there are sensible use cases for implicit conversions in prototyping and testing, but any such use case is better restricted to a single project.

Implicit conversions are also useful for defining some interaction patterns between a library and its users, if they are well designed. But there’s an inherent problem with them: normally, an API provides a contract between a library and its users. That contract should be as restricted as possible: it should allow just the required functionality and nothing extra. But implicit conversions in general violate this principle. A library defining them says in effect: “here are the methods you can call, and here is a bunch of extra things that make the calls more convenient, but that can also kick in in arbitrary places and interact strangely with other library uses”.

The primary example where these unwanted interactions are excluded by design is the magnet pattern. Here, the library defines its own magnet type and implicit conversions into that type. As long as every magnet type is restricted to a single library, and conversions into that type are only defined in the same library, interactions between different libraries are excluded. But the magnet types may not even share superclasses or traits except for Object; they have to be completely disjoint from one another. We could restrict implicit conversions to just that use case. Since the compiler has no idea what a project or library is we’d have to restrict conversions into a magnet type to be in the same source file, but that’s probably still OK. Define an implicit conversion to a class M to be interference-free if (1) M does not have any base classes or traits except Object, AnyVal, or Any and (2) the conversion is in the same source file as the definition of M.
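For illustration, a minimal sketch of an interference-free magnet in current Scala 3 (all names here are made up):

import scala.language.implicitConversions

// Magnet type: no base classes except Any/AnyRef, and its conversions live
// in the same source file, so both conditions above are satisfied.
final class CompleteMsg private (val text: String)

object CompleteMsg:
  given Conversion[String, CompleteMsg] = new CompleteMsg(_)
  given Conversion[Int, CompleteMsg] = n => new CompleteMsg(n.toString)

def complete(msg: CompleteMsg): String = s"done: ${msg.text}"

@main def demo(): Unit =
  println(complete("all ok"))  // String => CompleteMsg
  println(complete(404))       // Int => CompleteMsg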

One could consider still allowing interference-free conversions without the language import. But I am not sure it’s worth it, since the magnet pattern use case can be addressed with type classes as well. Also, I am not sure whether type inference can improve if we still have to account for interference-free conversions everywhere.

2 Likes

It seems to be about a trade-off between boilerplate code (if I may call language import clauses that) and rules guarding freedom from interference.

I would like to be able to easily use a set of implicit conversions and other imported things in multiple source files of a project. I would be happy if I could group multiple import clauses, including language import clauses, so that I would need only one line per source file; then the need to explicitly import implicit converters would not bother me.

There has been discussion of such an export feature; I don’t know its current status. Do you think this would reduce the need for boilerplate code when using implicit conversions with the language-import requirement?
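Something like the following sketch, assuming export can forward givens this way (all names made up):

object conversions:
  import scala.language.implicitConversions
  given Conversion[Int, String] = _.toString

// One shared object that re-exports everything a source file needs
object prelude:
  export conversions.given

@main def demo(): Unit =
  import scala.language.implicitConversions  // the part an export may not cover
  import prelude.given
  val s: String = 42  // uses the re-exported conversion
  println(s)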

Would requiring such an import help the compiler with type inference speed and robustness when no import is present (and thus no implicit conversions are applied in that scope)?

1 Like

I don’t think the magnet pattern excludes library interactions – one of its advantages is that it IS extensible from other libraries: even without orphan implicits I can create my own types with conversions into the magnet. And this is something I do in reality, e.g. with doobie above; after converting it to the magnet pattern I defined a bunch of utility types in private code that interact with the new magnet-powered interpolations by converting into the magnet type.

This feature is implemented in Dotty, but I’m not sure whether exporting language.implicitConversions will enable it; it would be good if it did.

Is that something where we could improve export by supporting a generic type parameter?

Hmmm… What about extension exports?

class A(val x: Int)
class B[T](val a: T, val y: Int)

extension (b: B[A])
  export b.a._                      // an extension export
  def justSomeExtension: Unit = {}  // an extension def

val a = new A(1)
val b = new B[A](a, 5)
println(b.x)
2 Likes

I don’t know the full implications … but on the surface I quite like this suggestion.

The same behaviour could be achieved by defining an explicit list of forwarder methods under the extension, so allowing export under an extension is really just a convenience that removes potential boilerplate.
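For example, a hand-written equivalent of the extension export above, which compiles today without any new feature:

class A(val x: Int)
class B[T](val a: T, val y: Int)

extension (b: B[A])
  def x: Int = b.a.x                // explicit forwarder for A.x
  def justSomeExtension: Unit = {}

@main def demo(): Unit =
  val b = new B[A](new A(1), 5)
  println(b.x)  // prints 1, via the forwarder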

1 Like

Quoting two of the points raised against implicit conversions earlier in this thread:

  • They make it hard to see what goes on in code. For instance, they might hide bad surprises like side effects or complex computations without any trace in the source code.
  • They usually give bad error diagnostics when they are not found, so code using them feels brittle and hard to change to developers not intimately familiar with the codebase.

How are these two points not true for the suggested alternatives, i.e. typeclasses and magnets?

Type classes are more often than not provided by implicit defs, which incurs the first point.
The second point is so bad for type classes that I never recommend libraries that use them to people learning Scala:

  • Type classes are not auto-completable. An IDE will never help you out with whatever combination of implicit defs you need in scope to craft an instance.
  • Scaladoc has no idea about them. No tool in Scala-land helps you answer the question “how do I pass this value to this method, given that it requires this typeclass?”. This is in fact so bad that a feature was introduced to Scaladoc to hide the actual method signature in the collections API (CanBuildFrom), because the tool has no means to explain to you or shed any light on what that is.
    In Java, when a method takes a type Foo and I don’t know how to get one, I go to the Javadoc for the type, hit usages, and find all the methods that return said type, which quickly reveals how to get one.
  • It’s impossible to know whether, given the libraries you have in scope (which, for type-class-heavy libraries, means following someone’s recommendation on what you need), your required typeclass is already there, or is derivable, or you have to implement one.
  • Compiler errors when a type class is missing are unhelpful: typeclasses trigger Prolog-style logic reasoning, and a valid derivation path may fail due to one missing part, but scalac has to abort the entire path and unhelpfully say “I just can’t find the typeclass for your value”.
  • You end up with the worst kind of documentation: booklets. These are non-navigable (no immediate way to reach what you want), non-discoverable (you don’t know whether the book will ever cover what you need), incomplete (you depend entirely on the author foreseeing your use case and having provided a snippet about it, and it won’t cover every single type, because libraries grow faster than these booklets) books of prose that are a huge waste of time compared to Javadoc.
  • The reality is that most people end up on Gitter simply asking one of the library authors (or someone with equivalent knowledge) how to proceed. And Humans as a Service for documentation sounds horrible.

While the point about making things hard on the compiler still stands, and it’s worth discussing, I don’t find valid any of the arguments about implicit conversions making code hard and non-obvious, unless we accept that all of those are also true for type classes, which everyone has blessed.

4 Likes

@rcano - I think your analysis is largely right. Typeclasses suffer from many of the same problems as implicit conversions.

However, the benefits you gain from typeclasses are enormous (in many instances). So one has to wonder whether implicit conversions also pull their weight sufficiently, even if you accept that typeclasses do.

My personal answer is yes. Other people seem to feel not. But I think you’re absolutely right that there will be painful aspects of typeclasses even if implicit conversions are completely removed… and also that the same fixes that would help typeclasses be more approachable may also help implicit conversions.

1 Like

FWIW, recently IntelliJ suggests imports that would supply missing implicits, and it does a much better job than before of autocompleting extension methods and then adding the import.

2 Likes

A great example of a place where typeclasses cannot be used is the Scalatags library, where you often write code like this:

div( // takes Modifier*
  "Title",
  Seq(1, 2, 3),
  someHtmlElement,
  h3(someRxValue),
)

Each argument has a different type, and this cannot be written without implicit conversions (at least I don’t see how). It is also pretty common to write your own conversions to Modifier (from Future/Rx/other types), and currently that works seamlessly.

I understand that implicit conversions can be tricky when abused, but there are cases where they give us clean, readable code that works. I have hundreds of views written like that, with ~15+ of my own types that implicitly convert to Modifier, and there has been no problem with them at all.
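For reference, a minimal sketch of such a user-defined conversion, assuming the Scalatags Text backend on the classpath; Cell is a made-up stand-in for an Rx-like container:

import scala.language.implicitConversions
import scalatags.Text.all.*

// Made-up reactive container standing in for Rx/Future
final case class Cell[T](var value: T)

// User-defined conversion into a Scalatags Frag: render the current value
given [T]: Conversion[Cell[T], Frag] = cell => stringFrag(cell.value.toString)

@main def demo(): Unit =
  val title = Cell("Hello")
  println(div("Title", Seq(1, 2, 3), h3(title)).render)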

3 Likes

Maybe all of the above points could be addressed by imposing a restriction on the implicit conversion result. Instead of having:

@java.lang.FunctionalInterface
abstract class Conversion[-T, +U] extends Function1[T, U]

We could have:

@java.lang.FunctionalInterface // don't know if it will still be applicable
abstract class Conversion[-T, +U] extends (T => Conversion.Result[U]) {
  // restriction: can't produce U directly, can only produce Conversion.Result[U]
  final def apply(v: T): Conversion.Result[U] = Conversion.Result.wrap(convert(v))
  protected def convert(v: T): U
}

object Conversion {
  opaque type Result[+U] = U
  object Result {
    // make sure user code can't create a Conversion.Result, to ease inference
    private[Conversion] def wrap[U](r: U): Result[U] = r
  }
  def unwrap[U](r: Result[U]): U = r
}

That would make implicit conversions somewhat more explicit and painful to use, but it would keep most of their advantages anyway. However, it wouldn’t work for the ScalaFX use case described in Can We Wean Scala Off Implicit Conversions? - #23 by kavedaa

imposing a restriction on implicit conversion result?

This will not help in the example with implicit await (described above in this thread).

Yes indeed. I failed to convey my point. What I was trying to get at is that removing implicit conversions is like having a disease spreading through your body and chopping off the part where it has manifested most visibly. You have not cured yourself, and the problem is rooted deeply in Scala (that’s what I was trying to show with the enumerated points).

Implicit conversions are a natural thing; everybody liked them before 2010, and the number of languages that have them is increasing, not decreasing (furthermore, the languages that have added them are far more popular than Scala, so more programmers are becoming familiar with this concept than with, say, type classes).

Nobody disliked their expressiveness; what people disliked was the inability to get insight into the result (as has clearly been discussed in this thread: people fear them in code written by others, like libraries). These are the same problems that I showed for typeclasses. Dotty was a new compiler from scratch that essentially repeated the same mistake (my biased personal opinion) as scalac: being a traditional batch compiler whose job is turning directories of text into binary.

I’m of the opinion that a language as rich as Scala needs a compiler whose main task is answering questions about code, as fast and as richly as possible. I don’t think text is a good enough communication medium between Scala and the programmer. When a graph of implicit derivations fails to produce the type class I need, I want to see the fully typed graph of paths attempted by the compiler, with collapsible/expandable nodes, to understand why it failed. Most of the time I have a clear idea of the path I think the compiler should have taken, and I really want an answer as to why that path failed. There’s no way to convey this information via text in a console without flooding it.

I don’t think Scala failed at the features level; I think it failed (in a relative way: Scala is by all means a successful language) at the insights level. The richer the language, the more you should be able to communicate, but scalac and scaladoc don’t.

Summary: I believe killing off implicit conversions will deject people and invalidate code, while providing no nice alternative and not solving the problems for which they are being killed.

6 Likes

We went through this example earlier.

I do agree with @rcano that the problem is not so much implicit conversions, but orphan implicits in general. Implicits defined in companion objects, whether conversions or typeclasses, are generally foolproof and relatively discoverable. “Orphan” implicits defined elsewhere are generally problematic, as they can be accidentally shadowed, can be forgotten to be imported, and are generally difficult to find.
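A small sketch of the difference (all names made up): the companion given is found through the implicit scope of the type itself, while the orphan works only after an explicit import:

import scala.language.implicitConversions

final case class Meters(value: Double)

object Meters:
  // non-orphan: lives in the companion, always found via the implicit scope
  given Conversion[Double, Meters] = Meters(_)

// orphan: lives in an unrelated object, must be imported, can be shadowed
object SneakyConversions:
  given Conversion[Int, Meters] = n => Meters(n.toDouble)

def describe(m: Meters): String = s"${m.value} m"

@main def demo(): Unit =
  println(describe(1.5))  // works with no import: companion given is found
  import SneakyConversions.given
  println(describe(3))    // works only because of the import just above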

Perhaps we should be discouraging orphan implicits in general?

2 Likes