Can We Wean Scala Off Implicit Conversions?

Dotty has automatic parameter untupling for functions (or whatever it’s called), which might make this a lot easier. But it’s also possible with a typeclass: https://scastie.scala-lang.org/gjrW5eOQQqaes1xP7HVunQ
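For reference, a minimal sketch of what parameter untupling looks like in Scala 3, assuming that is the feature meant here:

// Scala 3 parameter untupling: a two-parameter lambda can be applied
// directly to the tuple elements, no explicit `case (a, b)` pattern needed.
val pairs = List((1, 2), (3, 4))
val sums  = pairs.map((a, b) => a + b)  // List(3, 7)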

3 Likes

I was unfamiliar with the term, so I looked up the C# reference. In fact these look to me just like implicit conversions, with the restriction that they have to be defined in the implicit scope (as we would call it) of the source or target type. C#'s implicit scope definition is very close to Scala’s. I don’t know whether one can define a proper subclass of implicit conversions that are just implicit constructors. How would you define them?

The fact that both C# and C++ have user-defined implicit conversions (and very few other languages have them, it seems) does not count as a recommendation for me. These are literally the two most complex mainstream languages out there.

3 Likes

Well, speaking from my own experience, I can only say that Haskell (with the exception of Idris, which requires Haskell to bootstrap) is the most complicated language I’ve ever tried to use, by a long mile. A few years back I couldn’t even manage to build and run a Haskell Hello World on an Ubuntu install. And I’ve programmed, at least with minimal success, in C, C++, C#, Basic, Javascript, Java, Pascal, Bash and Coral.

In C I guess you do user-defined implicit conversions (like many things) through “simple” text-substitution macros. Between them, C / C++ / C# have had enormous success in tackling a huge percentage of the world’s most challenging programming problems. I’m not saying that means we must slavishly follow them, or that we are bound to facilitate every pattern or capability they offer, but we should consider why they have been, and still are, so popular.

3 Likes

As the larger tuples are already going to be backed by arrays, would it make sense to simply shift how varargs are modeled so that they’re all backed by tuples and provide some quality of life tooling around them?

Stuff like:

  1. An extension method to convert a TupleN to a Seq if all the types are the same.
  2. A default lift so you don’t have to write this for every typeclass you create (base instances sketched below):
    given [A: Fragable, B <: Tuple: Fragable]: Fragable[A *: B] with
      extension (x: A *: B)
        def toFrags = x.head.toFrags ++ x.tail.toFrags
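For completeness, a hedged sketch of the base pieces such a lift assumes; the exact shape of Fragable and the instances below are made up for illustration:

trait Fragable[A]:
  extension (a: A) def toFrags: List[String]

given Fragable[Int] with
  extension (a: Int) def toFrags = List(a.toString)

given Fragable[String] with
  extension (a: String) def toFrags = List(a)

given Fragable[EmptyTuple] with
  extension (e: EmptyTuple) def toFrags = Nil

// with the tuple lift above in scope, mixed tuples just work:
def frags[A: Fragable](a: A): List[String] = a.toFrags
val example = frags((1, "x", 2))  // List("1", "x", "2")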
    
1 Like

Premature optimization is (the root of all) evil; implicit conversions are malicious :smile:

I do not have a lot to add to the discussion other than that I support the general notion of reducing the usage of implicit conversions. In my experience, they often cause more confusion than clarity.

For instance, the Akka HTTP use case that was brought up here is a good example of an API that might seem neat at first, but in my opinion becomes more confusing and cumbersome the longer you use it, and is highly susceptible to abuse. I can’t say that it’s all because of the implicit conversions, but they do add to the confusion surrounding this API.

I absolutely agree. I have personally converted the doobie library from using HLists + typeclasses for its SQL string interpolation to using the magnet pattern (a.k.a. implicit constructors).

PRs: #1035 #1045

The resulting diff:

+45 −155

says a lot about the relative benefits of the two approaches. And all that code was removed while the library gained more features than it had before – e.g. the ability to nest SQL interpolations – and became much easier to understand.
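For readers unfamiliar with the pattern, here is a heavily simplified, hedged sketch of the shape it takes – the names (Frag, Encoder, frag) are invented for illustration and are not doobie’s actual API:

import scala.language.implicitConversions

final case class Frag(sql: String, params: List[Any]):
  def ++(other: Frag): Frag = Frag(sql + other.sql, params ++ other.params)

trait Encoder[A]
given Encoder[Int] = new Encoder[Int] {}
given Encoder[String] = new Encoder[String] {}

// the "implicit constructor": anything with an Encoder becomes a Frag
given valueToFrag[A](using Encoder[A]): Conversion[A, Frag] =
  a => Frag("?", List(a))

extension (sc: StringContext)
  def frag(args: Frag*): Frag =
    sc.parts.iterator
      .map(Frag(_, Nil))
      .zipAll(args, Frag("", Nil), Frag("", Nil))
      .map((p, a) => p ++ a)
      .foldLeft(Frag("", Nil))(_ ++ _)

// values and fragments mix freely; nesting falls out of the design:
val minAge = 18
val filter = frag"age > $minAge"
val query  = frag"select name from users where $filter"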

Also, everyone again forgets the elephant in the room – implicit macro conversions. They can never be replaced by typeclasses, even dependently-typed ones, because they transform (trees of) values, not types. They are used by many libraries and DSLs, such as sbt, quill, shapeless, refined, logstage and distage. I have written before, more than once on this forum, about how the not-well-thought-out introduction of the Conversion typeclass may harm this pattern. Obviously, removing conversions outright would harm it much more.

4 Likes

How about the following limitation?

  • Implicit conversions no longer provide extension methods; they can only be triggered by an expected result type
  • If you want extension methods, use the dedicated extension method syntax (see the sketch after this list)
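A hedged sketch of what the split would look like in practice (all names here are invented for illustration):

import scala.language.implicitConversions

// extension methods: no conversion involved, just the dedicated syntax
extension (s: String)
  def wordCount: Int = s.split("\\s+").count(_.nonEmpty)

// "implicit constructor": a conversion that fires only against an expected
// result type, here the parameter type of `render`
final case class Html(text: String)
given Conversion[String, Html] = s => Html(s"<p>$s</p>")

def render(h: Html): String = h.text

val page = render("hello")   // conversion triggered by the expected type Html
val n    = "hello".wordCount // extension method, no conversion needed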

Other than that, I think implicit scope is fine as is: companion-object scope is great when it can be used, since those implicits will be picked up automatically without imports, while “orphan” implicit constructors defined outside companion objects are occasionally necessary for integrating independent libraries. Both have their place, though companion-object implicits should be preferred where possible.

While not as drastic a change as “getting rid of implicit conversions” entirely, this would be a conservative step in making implicit conversions less powerful and less error-prone. This change would explicitly split “implicit constructors” (or the “magnet pattern”) and “extension methods” into two orthogonal use cases. Given the popularity of the new dedicated “extension methods” language feature, I think such a split would not be too controversial.

People who want both together can still get them by asking for both (perhaps with a bit of boilerplate), but by default people would reach for the specific tool they need. In most cases this would be strictly less powerful than the status quo of using “implicit conversions” to serve all purposes.

7 Likes

I would like to add that if we go with this route, it would be good to add to the stdlib two implicit constructors that I have seen requested by newcomers a lot:

  1. From A to Option[A]
  2. From A to List[A]

The first one is probably the most requested; it is pretty useful when you have a lot of optional arguments and want to avoid the boilerplate at the call site.
The second one is useful in situations like foo(flags = List(onlyOneFlag)), which, from what I can remember, was common in Python libraries.
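For concreteness, a hedged sketch of what those two conversions could look like; connect and its parameters are invented for the example:

import scala.language.implicitConversions

given liftOption[A]: Conversion[A, Option[A]] = Some(_)
given liftList[A]: Conversion[A, List[A]] = List(_)

def connect(host: String, port: Option[Int] = None, flags: List[String] = Nil): String =
  s"$host:$port $flags"

val c = connect("localhost", 8080, "verbose")  // instead of Some(8080) and List("verbose")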

I have always thought that the boilerplate is better than the “magic” of implicit conversions, especially considering that those two conversions are very open-ended and can lead to strange bugs. But if we restrict such conversions to be applicable only when calling a method, I guess they would help improve the conciseness of the code.

One situation where conversions were pretty handy was when consuming a library from another JVM language that uses its own FunctionX types. Being able to just pass Scala’s function values made it look like Scala. The same applies if the other language has its own Unit type.
Yet I guess this is rather the exception. Although ugly, I would be fine with using an explicit conversion.
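A hedged sketch of the kind of adapter meant here – the foreign TheirFunction1 type and all other names are made up, and the same conversion could also be applied explicitly:

import scala.language.implicitConversions

trait TheirFunction1[A, B]:
  def invoke(a: A): B

def theirHigherOrder(f: TheirFunction1[Int, String]): String = f.invoke(42)

given toTheirFn[A, B]: Conversion[A => B, TheirFunction1[A, B]] =
  f => a => f(a)

val show: Int => String = i => s"value: $i"
val implicitly = theirHigherOrder(show)            // the conversion adapts the Scala function
val explicitly = theirHigherOrder(toTheirFn(show)) // same conversion, applied by hand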

2 Likes

I think that after this discussion it has become clear that implicit conversions are not just evil. There are good use cases that have not caused problems in real-world production code. That said, I don’t think we have talked much about the gain/cost ratio. Personally, I find the cost of worse and slower type inference quite big compared to the benefits. If I understood correctly, type inference is worse for all code, even when no implicit conversions are involved.

So I would rather look for alternatives to the ‘implicit constructor’ pattern and to macro implicit conversions. I think the typeclass-based approach works well enough to replace the implicit constructor. Sure, it might be more code and initially harder to grok, but I think that is still a lower cost than worse type inference in all code. That said, I wouldn’t really know if and how we could facilitate macro implicit conversions in any other way.

3 Likes

What would alternative solutions look like? I mean, if “implicit conversions are evil”, shouldn’t we try to find alternatives? To me, the combination of implicit conversions + macros sounds quite scary. Of the examples you mentioned, I’ve only used sbt and quill, and my experience with their DSLs is not great, AFAICT.

1 Like

shapeless & refined also use implicit macro conversions for literals & for compile-time checking.

In particular, there’s no alternative for implicit macro conversions in refined’s use-case – refined must execute arbitrary user-supplied code at compile time against arbitrary user values to be able to refine its type against the predicate, it must have access to the tree of the value and the refinement must happen invisibly for the user.

I won’t argue from that position since I do not believe it. I can concede that magnet pattern / implicit constructors are a much better use for conversions than arbitrary undisciplined conversions between types, added for dubious “convenience” of not typing a few more symbols, that can greatly hurt a codebase’s maintenance, but I don’t know a way to separate them syntactically such as to only allow “good” conversions – they are fundamentally just implicitly applied functions either way.

1 Like

It is possible to achieve this without implicit conversions.
In refined (assuming an alias like type Positive = Int Refined numeric.Positive):

def foo(x: Positive) = ???

With singleton-ops:

import singleton.ops._

type Positive[P] = Require[P > 0]
def foo[P <: Int with Singleton](x: P)(implicit positive: Positive[P]) = ???

In Dotty, the goal is to go even further and have the type system support these constraints directly.
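A hedged sketch of that direction, using the compile-time integer operations that ship with Scala 3 (scala.compiletime.ops); whether this fully covers the refined/singleton-ops use cases is a separate question:

import scala.compiletime.{constValue, error}
import scala.compiletime.ops.int.*

inline def foo[P <: Int & Singleton](x: P): Int =
  inline if constValue[P > 0] then x
  else error("expected a positive literal")

val three = foo(3)  // compiles
// foo(-1)          // rejected at compile time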

Regarding refined and Scala 3, there is also the gap of c.eval, which has no equivalent in the Dotty macro system.

When do implicit constructors, or implicit conversions in general, affect type inference?

And could we just disable that, so that they are only activated when the expected type is sufficiently known?

1 Like

Can we offer more choices to the user instead of taking things away? “Evil”, “hard to see”, etc. are very subjective and not solid arguments, whereas the lack of implicit macro conversions is a real problem. I don’t think we should get rid of an existing working solution, creating a new problem, just for the sake of it or out of personal preference.

4 Likes

Offering more choices doesn’t always mean better. In fact, IMO, offering more choices in many situations makes things worse.

1 Like

I believe any such restriction would be too drastic to be acceptable. Essentially, we’d have to restrict implicit conversions to situations where the target type was completely known before type inference. As soon as the expected type contained an inferred type variable, it would fail. Even handling overloaded functions would be a major headache.
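To illustrate with a hypothetical example: here the expected type of the argument is List[A] with A still an uninstantiated type variable, so under a rule that only allows conversions against fully known expected types, a conversion like the one below could not apply:

import scala.language.implicitConversions

given vectorToList[A]: Conversion[Vector[A], List[A]] = _.toList

def render[A](items: List[A]): String = items.mkString(", ")

val s = render(Vector(1, 2, 3))  // the conversion would need to fire while A is not yet inferred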

Correction: that works for literals with singleton types only. Current refined allows refinement of things that aren’t singleton-typed: e.g. BigDecimal values are not literals, but they are supported (see BigDecimalSpec), same for Symbols, and you could realistically write a refinement check for any value that you can parse from a Scala tree.

Actually, with macro trickery it is possible to grab the actual tree by specifying the argument index.

def foo[P](x: BigDecimal)(implicit xArg: GetArg.Aux[0, P], positive: PositiveBigDecimal[P]) = ???  // GetArg grabs the tree of the argument at index 0 as P

Hmm, is that in Scala 2 or 3? I was under the impression that Scala 3 does not allow access to trees outside of the macro application – the c.enclosingTree APIs are deprecated even in Scala 2.