Can We Wean Scala Off Implicit Conversions?

In this case, we can avoid the extension noise by using SAM syntax (I’ve just tried, it works):

trait Fragable[T]:
  extension (x: T)
    def toFrags: List[Frag]

given Fragable[Int] =
  IntFrag(_) :: Nil
given Fragable[String] =
  StringFrag(_) :: Nil
given [A: Fragable]: Fragable[List[A]] =
  _.flatMap(_.toFrags)
given Fragable[EmptyTuple] =
  _ => Nil
given [A: Fragable, B <: Tuple: Fragable]: Fragable[A *: B] =
  x => x.head.toFrags ++ x.tail.toFrags
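Since the snippet above references types it doesn’t define, here is a self-contained version, with stand-in Frag types (IntFrag and StringFrag are made up for illustration) and a small usage demo:

```scala
// Stand-in fragment types; assumed for illustration only.
sealed trait Frag
case class IntFrag(i: Int) extends Frag
case class StringFrag(s: String) extends Frag

trait Fragable[T]:
  extension (x: T) def toFrags: List[Frag]

// Each given is a lambda, accepted via SAM conversion.
given Fragable[Int] = IntFrag(_) :: Nil
given Fragable[String] = StringFrag(_) :: Nil
given [A: Fragable]: Fragable[List[A]] = _.flatMap(_.toFrags)
given Fragable[EmptyTuple] = _ => Nil
given [A: Fragable, B <: Tuple: Fragable]: Fragable[A *: B] =
  x => x.head.toFrags ++ x.tail.toFrags

// Works for arbitrary tuples of Fragable elements:
val frags = (1, "two", List(3, 4)).toFrags
```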
5 Likes

I agree. But the two standard library conversions are there essentially for the extension methods they provide. I.e. I want all sequence ops on Arrays. But I rarely want to silently convert an array to a sequence. I think for that case demanding an explicit toSeq is fine. So I’d try to model the Array and String conversions as extension methods instead.
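As a rough sketch of that direction (the method names here are made up, not the actual stdlib API), sequence-like ops could be offered as extension methods while the conversion itself stays explicit:

```scala
// Hypothetical sketch: a couple of Seq-like ops on Array exposed as
// extension methods, so no silent Array => Seq conversion is needed.
extension [A](xs: Array[A])
  def headOptionDemo: Option[A] =
    if xs.isEmpty then None else Some(xs(0))
  def mkStringDemo(sep: String): String =
    xs.toSeq.mkString(sep)  // explicit toSeq where a Seq is really wanted

val h = Array(1, 2, 3).headOptionDemo
val s = Array(1, 2, 3).mkStringDemo(",")
```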

True. Complexity is created by “unknown unknowns”. That means as long as it’s your own code, and you know what you are doing, you might prefer implicit conversions. But as soon as it leaks out to users of libraries, I’d try to avoid them. I believe the language import is the right control mechanism then: use it in your own code, but don’t force it on your users.

[EDIT] Here is an idea to improve usability of conversions:

  1. Implement an extension method convert in Conversion:

    trait Conversion[-A, +B]:
      def apply(x: A): B
      extension (x: A) def convert: B = apply(x)
    
    
  2. Offer a refactoring in the IDE or compiler to rewrite every implicitly inserted conversion on E to E.convert.

That way a workflow could look like this:

  • Compile with language.implicitConversions
  • Before you ship, apply the refactoring, and scrutinize your code for unwanted conversions and possible further refactorings.
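A tiny sketch of the two phases on a single conversion (Frag here is a stand-in type, and since the convert extension is only proposed above, phase 2 is approximated with the existing Conversion#apply):

```scala
import scala.language.implicitConversions

case class Frag(s: String)            // stand-in target type
given Conversion[String, Frag] = Frag(_)

// Phase 1: the conversion is inserted silently while developing.
val implicitStyle: Frag = "hello"

// Phase 2: after the proposed refactoring every insertion point is
// visible; written with the existing Conversion#apply for now.
val explicitStyle: Frag = summon[Conversion[String, Frag]]("hello")
```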
2 Likes

That sounds like a great idea. And once it’s converted to .convert, regular IDE refactoring tools are more likely to be able to change it to something else if you prefer.

I’m not really sure how replacing Magnet with a conversion can cover magnets that use dependent types:

trait Magnet {
  type Out 
  def value: Out
}

def complete(magnet: Magnet): magnet.Out = magnet.value

Akka HTTP’s DSL is built on this pattern, and losing such an API might be a big drawback/limitation for the community when switching to Dotty.

1 Like

I guess the same behaviour could be achieved by using type classes (Scastie example).

2 Likes

I think option2Iterable in most cases goes into the “implicit conversions are evil” category.
For example, I hate that

def conv(s: String): Option[String] = ???
List.empty[String].flatMap(conv)

compiles. conv’s return type has nothing to do with lists, so why can we use .flatMap? It was a big struggle for me in foreign codebases, and it was a big struggle for every beginner I pair-programmed with. I usually write an extension method, or use .map().collect() or .map().flatten. The extension method and the flatten always raise a red flag for the reader (why is it not a .flatMap? ohh, because it does not return a list), and the collect is explicit.

Also, I think the ++ syntax can lead to confusion without the .toList. (I understand why Option is treated as a special list but, still, you need to keep a smaller codebase in your head when you read others’ code if these things are explicit.)
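For comparison, the explicit spellings mentioned above, with a hypothetical conv (note that in current Scala, Option is an IterableOnce, so the flatMap version compiles even without option2Iterable):

```scala
// Hypothetical converter, for illustration only.
def conv(s: String): Option[String] =
  if s.nonEmpty then Some(s.toUpperCase) else None

val xs = List("a", "", "b")

val viaFlatMap = xs.flatMap(conv)                           // implicit widening
val viaFlatten = xs.map(conv).flatten                       // Option made visible
val viaCollect = xs.map(conv).collect { case Some(x) => x } // fully explicit
```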

The only place where I like implicit conversions is when I need to change/refactor data structures for prototypes, and not in the whole app. (For example, changing two database views to one external source: I don’t need to modify the code everywhere if I write the deriving conversions from the new format to the old format in the testing/prototyping period.) But I would let this convenience go if we got rid of option2Iterable and could still write extension methods.

4 Likes

My 2c are that I have at least a dozen libraries that use a pattern similar to String => Frag. For example:

sealed trait GE  // graph element
case class Constant(value: Double) extends GE
trait UGen extends GE // unit generator dsp block
    
implicit def doubleIsConstant(value: Double): Constant = Constant(value)

or

trait Ex[+A] {  // expression
  def value: A
}

implicit def doubleConstant(value: Double): Ex[Double] = ...
implicit def stringConstant(value: String): Ex[String] = ...

This doesn’t work with just extension methods, and the type class approach would just be horrific (I would hate Scala if it forced me to pass type classes everywhere here). This is a very fine case for implicit conversions (or implicit constructors, as lihaoyi calls them) IMHO.

case class SinOsc(freq: GE, phase: GE) extends UGen

SinOsc(400.0, 0.0)  // ok
case class PrintLn(in: Ex[String], tr: Trig) extends Act

PrintLn("Hello world", LoadBang())  // ok
1 Like

Looking at what has been discussed so far, I have to say that I find implicit constructors significantly easier to understand than context-bound variadic generic tuples. Implicit constructors aren’t unique to Scala, and C# and C++ both have them. People coming from Java know of the adapter pattern, and implicit constructors just smooth it out a bit. Far fewer people would know how to work with context-bound variadic generic tuples.

If you look at the problems people have with implicit conversions, it’s almost always the fact that they provide extension methods when they really just wanted an implicit constructor, or vice versa. I think narrowing “implicit conversions” to “implicit constructors” and forcing people to jump through additional hoops in the occasional case that they want both would be a reasonable way forward. It would fix ~all the existing issues with conversions and provide hardcoded support for the two most common use cases (extension methods and implicit constructors) in a simple and understandable fashion.

The proposed def f[T: Fragable](x: String, y: T**) = ... does look workable, and isn’t too awkward. Maybe if we paper over the context-bound variadic generic tuple stuff with nice enough syntax it’ll be just another magic incantation that people can cargo cult? The fact that it desugars to something else could be an implementation detail.

Having a python-like ** syntax that works for any ProductN could have a lot of value beyond just dealing with Fragable, but that may be a different discussion.

4 Likes

Unfortunately your suggestion doesn’t cover the return type. I should have specified this explicitly: what I’m looking for is achieving the following Akka HTTP DSL:

parameters("arg1".as[String], "arg2".as[Int]) { (arg1, arg2) =>
  // ...
}
// or
parameters("arg1".as[String]) { (arg1) =>
  // ...
}

These two code samples utilize three features: auto-tupling, the magnet pattern, and dependent types:

def parameters(pdm: ParamMagnet): pdm.Out = pdm()

where Out might be Function1, Function2, etc., depending on the resolution of the magnet pattern. It might be possible to achieve this in type-parameter position, but type inference might suffer because of the lack of type context. I will try to play with it tomorrow.
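A heavily simplified sketch of that pattern (not Akka HTTP’s actual code) shows how the dependent Out determines the shape of the continuation:

```scala
import scala.language.implicitConversions

trait ParamMagnet:
  type Out
  def apply(): Out

def parameters(pdm: ParamMagnet): pdm.Out = pdm()

// One hypothetical instance: a single String parameter name yields a
// magnet whose Out expects a one-argument handler.
given Conversion[String, ParamMagnet { type Out = (String => String) => String }] =
  name =>
    new ParamMagnet {
      type Out = (String => String) => String
      def apply(): Out = handler => handler(name)
    }

// The refined Out makes this expect exactly one handler argument:
val result = parameters("arg1") { arg1 => s"got $arg1" }
```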

3 Likes

Dotty has auto untupling for functions or whatever it’s called, which might make this a lot easier. But it’s also possible with a typeclass: https://scastie.scala-lang.org/gjrW5eOQQqaes1xP7HVunQ

3 Likes

I was unfamiliar with the term, so I looked up the C# reference. In fact, these look to me just like implicit conversions, with the restriction that they have to be defined in the implicit scope (as we would call it) of the source or target type. C#’s implicit scope definition is very close to Scala’s. I don’t know whether one can define a proper subclass of implicit conversions that are just implicit constructors. How would you define them?

The fact that both C# and C++ have user-defined implicit conversions (and very few other languages have them, it seems) does not count as a recommendation for me. These are literally the two most complex mainstream languages out there.

3 Likes

Well, speaking from my own experience, I can only say that Haskell (with the exception of Idris, which requires Haskell to bootstrap) is the most complicated language I’ve ever tried to use, by a long mile. A few years back I couldn’t even manage to build and run a Haskell Hello World on an Ubuntu install. And I’ve programmed, with at least minimal success, in C, C++, C#, Basic, Javascript, Java, Pascal, Bash and Coral.

In C I guess you do user-defined implicit conversions (like many things) through “simple” text-substitution macros. Between them, C / C++ / C# have had enormous success in implementing a huge percentage of the world’s most challenging programming development problems. I’m not saying that means we must slavishly follow them, or that we are bound to facilitate every pattern or capability they offer, but we should consider why they have been and still are so popular.

3 Likes

As the larger tuples are already going to be backed by arrays, would it make sense to simply shift how varargs are modeled so that they’re all backed by tuples and provide some quality of life tooling around them?

Stuff like:

  1. An extension method to convert a TupleN to a Seq if all the types are the same.
  2. A default lift so you don’t have to write this for every typeclass you create:
    given [A: Fragable, B <: Tuple: Fragable]: Fragable[A *: B] with
      extension (x: A *: B)
        def toFrags = x.head.toFrags ++ x.tail.toFrags
    
1 Like

Premature optimization is (the root of all) evil; implicit conversions are malicious :smile:

I do not have a lot to add to the discussion other than that I support the general notion of reducing the usage of implicit conversions. In my experience, they often cause more confusion than clarity.

For instance, the Akka HTTP use case that was brought up in here is a good example of an API that might seem neat at first, but in my opinion becomes more confusing and cumbersome the longer you use it, and is highly susceptible to abuse. I can’t say it’s all because of the implicit conversions, but they do add to the confusion surrounding this API.

I absolutely agree. I have personally converted the doobie library from using HLists + typeclasses for its SQL string interpolation to using the magnet pattern (aka implicit constructors).

PRs: #1035 #1045

The resulting diff, +45 −155, tells a lot about the benefits of these two approaches. And all that code was removed while the library gained more features than it had before – e.g. the ability to nest SQL interpolations – and became much easier to understand.

Also, everyone again forgets the elephant in the room – implicit macro conversions. They can never be replaced by typeclasses, even dependently-typed ones, because they transform (trees of) values, not types. They are used by many libraries and DSLs, like sbt, quill, shapeless, refined, logstage and distage. I have written before, and again on this forum, about how the not-well-thought-out introduction of the Conversion typeclass may harm this pattern. Obviously, removing conversions outright will harm it much more.

4 Likes

How about the following limitation?

  • Implicit conversions no longer provide extension methods; they can now only trigger based on expected result types
  • If you want extension methods, use the explicit extension method syntax

Other than that, I think implicit scope is fine as is: companion object scope is great when it can be used since those will be picked up automatically without imports, while “orphan” implicit constructors defined outside companion objects are occasionally necessary for integrating independent libraries. Both have their place, though companion-object implicits should be preferred where possible.

While not as drastic a change as getting rid of implicit conversions entirely, this would be a conservative step in making implicit conversions less powerful and error-prone. This change would explicitly split out “implicit constructors” (or the “magnet pattern”) and “extension methods” into two orthogonal use cases. Given the popularity of the new dedicated “extension methods” language feature, I think such a split would not be too controversial.

For people who want both together they can still get it by asking for both (perhaps with a bit of boilerplate), but by default people would reach for the specific tool they need. In most cases, this would be strictly less powerful than the status quo of using “implicit conversions” to serve all purposes.
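A small sketch of the two sides of that split, with made-up example types:

```scala
import scala.language.implicitConversions

case class Fraction(n: Int, d: Int)

// "Implicit constructor" side: triggers on the expected result type.
given Conversion[Int, Fraction] = Fraction(_, 1)
val three: Fraction = 3  // fires because a Fraction is expected here

// "Extension method" side: member access is requested explicitly,
// via the dedicated extension syntax rather than a conversion.
extension (n: Int) def asFraction: Fraction = Fraction(n, 1)
val two = 2.asFraction
```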

7 Likes

I would like to add that if we go with this route, it would be good to add two implicit constructors to the stdlib that I have seen requested by newcomers a lot.

  1. From A to Option[A]
  2. From A to List[A]

The first one is probably the most requested one; pretty useful when you have a lot of optional arguments and want to avoid the boilerplate at the call site.
The second one is useful in situations like foo(flags = List(onlyOneFlag)); which, from what I can remember, was common in Python libraries.

I have always thought that the boilerplate is better than the “magic” of implicit conversions, especially considering that those two are very open and can lead to strange bugs. But if we restrict such conversions to be applicable only when calling a method, I guess they would help improve the conciseness of the code.
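A hedged sketch of what those two stdlib constructors could look like (names and signatures are assumptions, not an actual proposal):

```scala
import scala.language.implicitConversions

// Hypothetical stdlib additions: lift any A into Option[A] or List[A].
given toOption[A]: Conversion[A, Option[A]] = Some(_)
given toList[A]: Conversion[A, List[A]] = _ :: Nil

// Example APIs that benefit from the lifts:
def greet(name: String, nickname: Option[String] = None): String =
  nickname.getOrElse(name)
def run(flags: List[String]): Int = flags.size

val g = greet("Ada", "addy")  // no Some(...) boilerplate at the call site
val n = run("onlyOneFlag")    // no List(...) wrapping
```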

One situation where conversions were pretty handy was when consuming a library from another JVM language that uses its own FunctionX types. Being able to just use Scala’s Function made it look like Scala. The same applies if the other language uses its own Unit type.
Yet I guess this is rather the exception. Although ugly, I would be fine with using an explicit conversion.
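As a rough sketch of that bridging (TheirFunction is a made-up stand-in for the other language’s function type):

```scala
import scala.language.implicitConversions

// Hypothetical function type from the other JVM language's stdlib.
trait TheirFunction[A, B]:
  def call(a: A): B

// Bridge: any Scala function can appear where TheirFunction is expected.
given [A, B]: Conversion[A => B, TheirFunction[A, B]] =
  f =>
    new TheirFunction[A, B] {
      def call(a: A): B = f(a)
    }

def theirApi(f: TheirFunction[Int, String]): String = f.call(42)

val scalaFn: Int => String = i => s"n=$i"
val r = theirApi(scalaFn)  // converted; no manual adapter needed
```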

2 Likes

I think that after this discussion it becomes clear that implicit conversions are not just evil. There are good use cases that have not given problems in real world production code. That said, I don’t think we have talked much about the gain/cost. Personally I find the cost of worse and slower type inference quite big compared to the benefits. If I understood correctly, type inference is worse for all code, even if no implicit conversions are involved.

So I would rather look for alternatives to the ‘implicit constructor’ pattern and the macro implicit conversions. I think the type-class based approach works well enough to replace the implicit constructor. Sure, it might be more code and initially harder to grok; however, I think this is still a lower cost than worse type inference in all code. That said, I wouldn’t really know if and how we could facilitate macro implicit conversions in any other way.

3 Likes

What would alternative solutions look like? I mean, if “implicit conversions are evil”, shouldn’t we try to find alternatives? To me, the combination of implicit conversions + macros sounds quite scary. Of the examples you mentioned, I’ve only used sbt and quill, and my experience with their DSLs is not great, AFAICT.

1 Like