Proposed Changes and Restrictions For Implicit Conversions

What happens in pattern matching?

case class Foo(l : into Long)
val f = Foo(5)
val x = f match 
  case Foo(1) => 1
  case _ => 2

Does the unapply signature look like this:

def unapply(arg : Foo) : Option[into Long]

Can we do type X = into Long?

In my proposal into cannot be used on arbitrary types, only on parameter types. So the unapply could not be written like this.

So there would be no way to have an implicit conversion occur in pattern matching?
If that is so, this is too crippling and unexpected a behavior, IMO.

There is no implicit conversion in pattern matching today, neither with given Conversion nor with implicit def.

You may be confusing this with cooperative equality between primitive numeric types, i.e., the fact that in Scala (1: Any) == (1L: Any).
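A tiny self-contained demo of cooperative equality and how it surfaces in literal patterns (no implicit conversion is involved anywhere):

```scala
// Cooperative equality: boxed primitive numeric values compare equal
// across types, so a literal Int pattern can match a Long scrutinee.
// No implicit conversion is inserted here.
val sameValue: Boolean = (1: Any) == (1L: Any)   // true

val f: Any = 5L
val x: String = f match
  case 1 => "one"
  case 5 => "five"   // matches because (5: Any) == (5L: Any)
  case _ => "other"
// x == "five"
```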


Oh yeah, you are right. It’s the cooperative equality.

@odersky So I was really enthusiastic about this change but I’m running into a nasty problem with API tractability that I don’t think is solvable by export clauses or really any of the proposed things.

I am trying to implement extension methods in every possible case where I need unquotation but it’s getting very, very cumbersome.

Right now Quill’s quotation mechanism universally relies on a conversion Quoted[T] => T (let’s call this implicit-unquoting). This applies not only to Quill data-types but built in Scala data-types as well.
For example:

case class Person(name: String, age: Int)
val filterJoes: Quoted[Person => Boolean] = quote { (p: Person) => p.name == "Joe" }

// I am relying on Quoted[Person => Boolean] => (Person => Boolean) implicit-unquoting
val a = quote { filterJoes.apply(query[Person].head: Person) } // (Quoted[Person => Boolean]).apply(Person)

In order to have a usable API, I need to allow my users to use a Quoted[Person => Boolean] as a Person => Boolean inside of a quoted section, but that also means I need to convert the argument of Function1 from Quoted[Person] to Person when needed.
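As a self-contained sketch of the shape of implicit-unquoting (not Quill's real implementation, which is macro-based and operates on query ASTs rather than wrapped values; Quoted and isJoe here are stand-ins):

```scala
import scala.language.implicitConversions

// Stand-in for Quill's Quoted: just wraps a value so the conversion
// can be demonstrated without the macro machinery.
case class Quoted[T](value: T)

// The implicit-unquoting conversion Quoted[T] => T.
given [T]: Conversion[Quoted[T], T] = _.value

val isJoe: Quoted[String => Boolean] = Quoted(_ == "Joe")

// The conversion lets a Quoted[String => Boolean] be applied directly,
// as if it were a String => Boolean:
val hit: Boolean = isJoe("Joe")
```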

All four of the below possibilities (a, b, c, and d) need to be supported.

val people: Quoted[Person] = quote { query[Person].head }
val filterJoes: Quoted[Person => Boolean] = quote { (p: Person) => p.name == "Joe" }

// (Quoted[Person => Boolean]).apply(Person)
val a = quote { filterJoes.apply(query[Person].head) }
// (Quoted[Person => Boolean]).apply(Quoted[Person])
val b = quote { filterJoes.apply(people) }                    
// (Person => Boolean).apply(Person)
val c = quote { ((p: Person) => p.name == "Joe")(query[Person].head) }
// (Person => Boolean).apply(Quoted[Person])
val d = quote { ((p: Person) => p.name == "Joe")(people) }

Now since I don’t control the Function1.apply method, I can’t just change it to Function1.apply[A, R](a: ~A): R, so I need to write extension methods for all 4 of the above cases (well, actually 3, because Function1.apply(A) already exists):

object Forwarders:
  extension [A, B](inline f: Quoted[A => B])
    inline def apply(inline a: Quoted[A]) = unquote(f).apply(unquote(a))
    inline def apply(inline a: A) = unquote(f).apply(a)

  extension [A, B](inline f: A => B)
    inline def apply(inline a: Quoted[A]) = unquote(f).apply(unquote(a))
    // inline def apply(inline a: A) = unquote(f).apply(a) // Already exists

This essentially means that I need to at least 3x the size of my API footprint to define all of these methods, but it gets worse!

Each one needs a Quoted[T] / T variation for its input and output respectively. If they have multiple T arguments then each argument needs to have a Quoted[T] and T method, for example:

trait Query[T]:
  def groupByMap[A, G, R](f: A => G)(m: A => R)
// This needs to have the following defined:
extension [T](q: Query[T])
  //inline def groupByMap[A, G, R](inline f: A => G)(inline m: A => R) // already given
  inline def groupByMap[A, G, R](inline f: Quoted[A => G])(inline m: A => R)
  inline def groupByMap[A, G, R](inline f: A => G)(inline m: Quoted[A => R])
  inline def groupByMap[A, G, R](inline f: Quoted[A => G])(inline m: Quoted[A => R])


extension [T](q: Quoted[Query[T]])
  inline def groupByMap[A, G, R](inline f: A => G)(inline m: A => R)
  inline def groupByMap[A, G, R](inline f: Quoted[A => G])(inline m: A => R)
  inline def groupByMap[A, G, R](inline f: A => G)(inline m: Quoted[A => R])
  inline def groupByMap[A, G, R](inline f: Quoted[A => G])(inline m: Quoted[A => R])

This represents a literal cartesian explosion of the different possibilities!
Now I could possibly change it to inline f: ~Function1[A, G], which would hopefully bring the API down to 2x, but that only works for methods that I control. What if I want to use Function2[T/Quoted[T], T/Quoted[T], R]? I don’t have the option to use ~T with that.

Therefore, I don’t think export clauses would help me at all, because they can’t account for the needed Quoted[T] => T conversion in every single argument (also, I’m not sure whether they can be inline).

For this reason, I think that I need to stay with implicit conversions Quoted[T] => T (and possibly T => Quoted[T]) somehow. The only realistic possibility that I am seeing is injecting a Converter[Quoted[T], T] (and possibly Converter[T, Quoted[T]]) whenever I open a quote, e.g.:

val q = quote { // inject `implicit Converter[Quoted[T], T]` here:
  ...
}

Will this be possible?
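The scoping part of this idea can at least be sketched today with a block-local given (Quoted and insideQuote are illustrative stand-ins, not Quill's API; whether the compiler will keep allowing this kind of conversion is exactly the open question):

```scala
import scala.language.implicitConversions

case class Quoted[T](value: T)

// A conversion declared inside a block only applies within that block,
// which is the effect "inject a Converter when opening a quote" needs.
def insideQuote(): Int =
  given [T]: Conversion[Quoted[T], T] = _.value
  val q = Quoted(41)
  val n: Int = q      // conversion applies here
  n + 1

// Outside the block no conversion is in scope, so e.g.
// `val m: Int = Quoted(1)` would not compile here.
```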

P.S. Maybe if extension methods could be defined on ~T, it would mitigate the issue for me:

// I.e. define these extensions on both Query[T] and Quoted[Query[T]]
extension (~Query[T])
  inline def groupByMap[A, G, R](inline f: ~Function1[A,G])(inline m: ~Function1[A, R])

… but that would still not solve the cartesian-explosion problem for data types that I don’t control (e.g. Function2). Not unless I could extend them as well:

extension [A, B, R](inline f: ~Function2[A, B, R])
  inline def apply(a: ~A, b: ~B): R

Also, I think this extension capability would introduce lots of scope conflicts.


I don’t think forcing the usage of annotations on the definition or call side in order to use implicit conversions is a good idea.

Even if it’s sad to say that after some work has already been put into an implementation in the compiler…

This idea doesn’t bear its weight, IMHO, and makes a “legal” pattern much more awkward.

First of all:

If one has to choose between doing some severe mental gymnastics to “properly” design where implicit conversions could happen and then annotating a lot of things appropriately, or just adding one single line of code with the magic import, I guess I know what everybody will do.

If things stopped working without that import at some later point in time, I guess people would loudly complain that they have to rewrite a lot of code. That’s a clear path to annoying a lot of people, and that wouldn’t be good for Scala. (I don’t think any automatic migration would suffice in this case, if it would even be possible at all.)

The other issue that hasn’t been named so far, and which the potential issues in Quill show nicely, is that conversions are actually a proper design pattern that isn’t otherwise expressible in any concise way:

Conversions are “Auto-Adapters”. That’s a useful pattern!

If I have an API producing Foos and an API consuming Bars, neither of which I really control, I need some Adapter between Foos and Bars. In the tradition of Scala, which gives me for example objects so I don’t have to implement the common Singleton pattern over and over, Scala also gives me implicit conversions to deal with the somewhat common situations where Adapters are needed.

I would agree that if I only need to convert some types in a few calls, doing that “by magic” would be too surprising, and some explicit adapt call would be preferable.

But when there is a whole large Foo API to be made compatible with some large Bar API, something like conversions comes in very handy!
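A minimal sketch of this “Auto-Adapter” use, with illustrative Foo/Bar/consume names standing in for the two APIs one doesn’t control:

```scala
import scala.language.implicitConversions

case class Foo(n: Int)   // produced by "their" first API
case class Bar(n: Int)   // consumed by "their" second API

// The auto-adapter: one declaration bridges the whole Foo-producing API
// to the whole Bar-consuming API, instead of a manual adapt call at
// every use site.
given Conversion[Foo, Bar] = f => Bar(f.n)

def consume(b: Bar): Int = b.n   // stand-in for the Bar-consuming API

val r = consume(Foo(3))          // the adapter is inserted automatically
```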

This works fine as long as the two APIs are substantially different, so that no confusion between them can arise even in the presence of implicit conversions.

This constraint, that the APIs shouldn’t share the same purpose and therefore be quite similar (as in the case of, for example, Scala and Java collections), makes this an appropriate use of the “Auto-Adapter” pattern, IMHO.

For cases where this constraint does not hold, a Facade (which is a related pattern) or maybe delegation would be more appropriate. Or just doing the adaptation manually, to avoid any confusion, as is now done in the case of the mentioned collections.

As long as this constraint holds, “Auto-Adapters” are very useful.

I think most of the other use cases shown in this thread also revolve around this pattern, for example the examples with Scalatags.

But I don’t think the point of not being able to control both sides (definition and call) has been stressed enough so far!

Inside an app / lib you can always rewrite your code. But Adapters are needed especially in cases where you don’t control either side (especially the definition side of things!).


The problems of the past with implicit conversions were mostly caused by the unrestricted nature of implicit functions. An implicit function (that could do whatever!) could be inserted by the compiler anywhere, anytime. And people overused that power to make Scala code feel very dynamic. But then you also get, of course, all the problems of dynamic code, where nobody understands all of those brittle magic type conversions that happen everywhere…

The problem was that implicit functions were too powerful, and largely overused.

But we don’t have unrestricted implicit functions anymore (modulo compatibility, for now), as I understand it.

Having explicit Conversion instances, even with some additional restrictions, is good enough, IMHO! Just because the old conversions were problematic, we shouldn’t cripple an otherwise perfectly valid language feature. That would be overdoing things out of overreaction.

The only real remaining issue is the very bad reputation of implicit conversions in Scala. But what worked for implicits in general could work here too. Let’s just rename the feature!

To something that even people coming from Java would like: let’s just call them (Auto)Adapters (with an adapt method), and forget about any references to those “bad implicit conversions”! :slight_smile:


As I’m here, one actual question: am I a mad person because I’ve now written, a few times,

import scala.Conversion as as

to use it as an infix type? (Btw. that parser bug also occurs in Metals)

given (Foo as Bar) = Bar(_)

Doesn’t this look beautiful? :sweat_smile: :crazy_face:


Yes, the use of implicit conversions for automatic coloring in dotty-cps-async can also be viewed as an auto-adapter. Maybe think about a new syntax for auto-adapters instead of conversions? Also, I see that in both cases the conversions are between T and F[T]. Could this be a pattern restriction?

That ignores the difference between library and user code. The magic import has to happen use-side, so it’s a fairly big deal. The library writer will have every incentive to add a little bit more documentation to avoid the import.

As you say, it’s the difference between

“properly” design where implicit conversions could happen, than annotate a lot of things appropriately

and allowing implicit conversion everywhere in user land, but forcing the user to write an import. What would be the better design?


See: Implement `into` modifier on parameter types by odersky ¡ Pull Request #14514 ¡ lampepfl/dotty ¡ GitHub


@odersky I’m so sorry to keep pestering you about this. I would love to get rid of implicit conversions in Quill, but Implement `into` modifier on parameter types by odersky · Pull Request #14514 · lampepfl/dotty · GitHub does not really help me, and I commented as much there. Please see my previous comment above as to why this is the case.
In absence of any clarity on the issue, I will keep on pestering.


Yes, maybe we have to accept that into does not help for these use cases. I just wanted to say that in general, conversions of the form F[T] -> T are super dangerous, precisely because they are so expressive. You can hide all sorts of evaluation under the F! It’s a DSL designer’s dream. That’s precisely the reason it should not be in Scala.

I agree that the case of Quill is more limited; we just need to see whether there is a way to support that specific use case without going all the way in. It will take time. Until then, there’s the language import, which will always be available.


Coming back to the original problem:

Maybe a SAM type that specializes Quoted[A => B]? Then you could achieve what you want by defining the right apply method.

What would that practically look like? If it’s just something that lets me call .apply on a Quoted[Function1], it wouldn’t be of much help. Fundamentally, quotation needs to compose, something like this:

val a: Quoted[Int] = quote { foo + bar }
val b: Quoted[Int] = quote { a + baz }
// b captures foo + bar + baz

I can imagine extending some kind of SAM type Convertable[Quoted[T], T] and doing something like this:

object quote extends Convertable[Quoted[T], T]:
  inline def apply(inline qt: Quoted[T]): T = ...

The issue is that both the method and the arguments would need to be inline (as shown).

As others stated above, implicit conversions have good use cases that are not reproducible in such a concise manner. What about refined types?

In both Iron and Refined, implicit conversions are important for automatic refinement.

Refined:

// This refines Int with the Positive predicate and checks via an
// implicit macro that the assigned value satisfies it:
val i1: Int Refined Positive = 5

Iron 2:

//An inline implicit conversion requiring a given instance of `Constraint[Int, Positive]` is called.
val x: Int :| Positive = 5

They both use implicit conversions for compile-time verification (using either macros or inline).

These conversions are totally suited for this use case IMHO.

Also note that these libraries do not make implicit conversions uncontrollable, since they do compile-time verification and have a way to convert the refined type back to its unrefined type (Iron uses opaque type subtyping, and AFAIK Refined uses another implicit conversion).
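The compile-time-check part can be sketched without either library, using an inline constructor on an opaque type (illustrative names, not Iron’s or Refined’s API; Iron’s actual mechanism additionally routes through an implicit conversion backed by a Constraint given):

```scala
import scala.compiletime.error

// Illustrative sketch of compile-time refinement: an opaque type whose
// constructor checks a constant argument during compilation.
opaque type Positive = Int

object Positive:
  inline def apply(inline n: Int): Positive =
    // `inline if` forces the condition to be evaluated at compile time
    // for constant arguments; a failing check aborts compilation.
    inline if n > 0 then n
    else error("value must be positive")

  extension (p: Positive) def toInt: Int = p

val ok = Positive(5)        // compiles
// val bad = Positive(-1)   // rejected at compile time
```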

From what I understand, the real problem with implicit conversions is that they get abused. I understand the concern of restricting this feature to discourage bad usage, but the language imports look sufficient to me.

4 Likes