Can We Wean Scala Off Implicit Conversions?

shapeless & refined also use implicit macro conversions for literals & for compile-time checking.

In particular, there’s no alternative for implicit macro conversions in refined’s use-case: refined must execute arbitrary user-supplied code at compile time against arbitrary user values to refine its type against the predicate. It must have access to the tree of the value, and the refinement must happen invisibly to the user.
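For concreteness, a minimal sketch of that use-case with refined on Scala 2 (autoRefineV from eu.timepit.refined.auto is the implicit macro conversion in question):

import eu.timepit.refined.api.Refined
import eu.timepit.refined.auto._
import eu.timepit.refined.numeric.Positive

val ok: Int Refined Positive = 5    // compiles: the macro checks the tree of 5 against Positive
// val bad: Int Refined Positive = -5  // rejected at compile time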

I won’t argue from that position since I do not believe it. I can concede that the magnet pattern / implicit constructors are a much better use of conversions than arbitrary, undisciplined conversions between types, added for the dubious “convenience” of not typing a few more symbols, which can greatly hurt a codebase’s maintainability. But I don’t know a way to separate them syntactically so as to allow only the “good” conversions: either way, they are fundamentally just implicitly applied functions.
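To make the distinction concrete, a hedged sketch (the names are mine, not from any library): an implicit constructor converts only into a library-owned target type, while the “undisciplined” kind converts between unrelated existing types.

final case class Source(bytes: Array[Byte])
object Source {
  // "implicit constructor": the conversion lives in the target's companion
  implicit def fromString(s: String): Source = Source(s.getBytes("UTF-8"))
}

def upload(src: Source): Unit = println(s"uploading ${src.bytes.length} bytes")
upload("hello")  // the conversion acts like an extra overload/constructor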

1 Like

It is possible to achieve this without implicit conversions.
In refined:

def foo(x: Positive) = ???

With singleton-ops:

import singleton.ops._  // provides Require and the > type operator

type Positive[P] = Require[P > 0]
def foo[P <: Int with Singleton](x: P)(implicit positive: Positive[P]) = ???
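Assuming those definitions, usage would look roughly like this (Scala 2.13, which infers the literal type for P):

foo(5)    // compiles: Require[5 > 0] holds
// foo(-5)   // rejected at compile time: cannot prove -5 > 0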

In dotty, the goal is to go even further and have the type system support these constraints.

Regarding refined and Scala 3, there is also the gap of c.eval, which does not exist in the dotty macro system.

When do implicit constructors, or implicit conversions in general, affect type inference?

And could we just disable that, so that conversions are only activated where the expected type is sufficiently known?

1 Like

Can we offer more choices to the user instead of taking things away? “Being evil”, “hard to see”, etc. are very subjective and not solid arguments, whereas the lack of implicit macro conversions is a real problem. I don’t think we should get rid of an existing, working solution and create a new problem just for the sake of it or out of personal preference.

4 Likes

Offering more choices isn’t always better. In fact, IMO, offering more choices in many situations makes things worse.

1 Like

I believe any such restriction would be too drastic to be acceptable. Essentially, we’d have to restrict implicit conversions to situations where the target type was completely known before type inference. As soon as the expected type contained an inferred type variable, it would fail. Even handling overloaded functions would be a major headache.
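A small sketch of the kind of case described here (the names are illustrative):

class Box[T](val value: T)
implicit def intToBox(i: Int): Box[Int] = new Box(i)

def takeBox[T](b: Box[T]): T = b.value

// The expected type Box[T] contains the uninstantiated variable T, so the
// compiler has to pick the conversion Int => Box[Int] while solving for T.
val r: Int = takeBox(1)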

Correction: that works for literals with singleton types only. Current refined allows refinement of things that aren’t singleton-typed: BigDecimal values are not literals, but they are supported (see BigDecimalSpec), and the same goes for Symbols; you could realistically write a refinement check for any value that you can parse from a Scala tree.
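For example, something along the lines of that spec (Scala 2; refineMV is refined’s compile-time refinement macro):

import eu.timepit.refined.api.Refined
import eu.timepit.refined.numeric.Positive
import eu.timepit.refined.refineMV

// BigDecimal(1.5) is not a literal, but the macro can still evaluate the
// tree at compile time and check the predicate against the result.
val d: BigDecimal Refined Positive = refineMV[Positive](BigDecimal(1.5))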

Actually, with macro trickery it is possible to grab the actual tree by specifying the argument index.

def foo[P](x: BigDecimal)(implicit xArg: GetArg.Aux[0, P], positive: PositiveBigDecimal[P]) = ???

Hmm, is that in Scala 2 or 3? I was under the impression that Scala 3 does not allow access to trees outside of the macro application; the c.enclosingTree APIs are deprecated even in Scala 2.

That’s in Scala 2. I used c.enclosingImplicits.last.tree for that. I don’t know enough about Scala 3.

I think I took us too far from the OP. Let’s continue this in a separate thread if you wish to discuss this some more.

For most of my use cases for implicit-constructors/magnet-pattern, the target type is always a concrete type: os.Source, requests.RequestBlob, scalatags.Text.Frag, fastparse.P[Unit]. Would those count as “target type was completely known before type inference”?

Some of my implicit conversions like sourcecode.Text[T] or mill.Target[T] do have a type variable, but those seem to be the minority.

1 Like

Even in those cases, the implicit conversion is selected explicitly. When type inference begins it knows exactly what it has to solve. You could still say that type inference doesn’t need to look for possible implicit conversions.

To be honest, I still don’t understand the case that this whole issue is about. When is type inference slowed down by needing to consider potential implicit conversions?

2 Likes

Yes, this approach works, which is nice, but I just came across one nasty usability disadvantage.
With the magnet pattern you can have neat overloads:

def someMethod(magnet: MagnetA): magnet.Out = ???
def someMethod(magnet: MagnetB): magnet.Out = ???
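where the magnets might be defined along these lines (a hedged sketch; the member names and implicit constructors are illustrative, not from the codebase):

trait MagnetA {
  type Out
  def result: Out
}
object MagnetA {
  // implicit constructor: builds the magnet from the "real" argument type
  implicit def fromInt(i: Int): MagnetA { type Out = Int } =
    new MagnetA { type Out = Int; def result: Int = i }
}
// MagnetB is analogous, with its own set of implicit constructors.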

This feature is used in our codebase in tests; we have a bunch of traits that look like:

trait TestDomainFooModelRepository {
  // definition of test repository ...

  def givenExists(model: Foo): F[Foo] = ???
}
trait TestDomainBarModelRepository {
  // definition of test repository ...

  def givenExists(model: Bar): F[Bar] = ???
}

and sometimes we need to support a generic givenExists for some repositories (usually when the model is not plain, but an ADT). In order to support the overload without conflicts, such a repository will be defined as:

trait TestDomainBazModelRepository {
  // definition of test repository ...

  def givenExists(model: BazMagnet): F[model.Out] = ???
}

This approach allows us to mix different repositories into tests and to use the givenExists method uniformly to create fixtures/state for the tests for different domain models.

I also have code that relies on an implicit conversion and that extension methods cannot express. It’s the proxy pattern, but without all the code duplication.

class A(val x: Int)
class B(private val a: A, val y: Int)
implicit def getA(b: B): A = b.a

Then I can get x from B directly like

val b: B = ...
println(b.x)

Without implicit conversions, implementing the proxy pattern requires a lot of boilerplate, as in Java, e.g. writing forwarder methods for all the methods provided by a member. I can’t think of a better way to do this, and I think Scala should do better than Java in this case.
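For contrast, a sketch of the forwarder version of B from above:

class B(private val a: A, val y: Int) {
  def x: Int = a.x  // one hand-written forwarder per proxied member
}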

1 Like

The proxy pattern came up several times now. The stdlib examples are also instances of this. So, we should take note that this needs to be supported somehow.

3 Likes

Isn’t export enough?

class A(val x: Int)
class B(private val a: A, val y: Int) {
  export a._
}

2 Likes

The only limitation I see in export vs. implicit proxies is the ability to export the “most updated class”:

class A(val x: Int)
class B[T](private val a: T, val y: Int) {
  export a._
}

val a = new A(1)
val b = new B[A](a, 5)
println(b.x) // error

1 Like

If we had extension exports, as was suggested way up, I think we could get around that.

Can you make this slightly more concrete? I’m having trouble seeing why this wouldn’t work using the typeclass approach.

Sure, with the suggested typeclass approach you might end up with an overload conflict due to identical erasure, because I want to keep different implementations of givenExists per trait: Scastie - An interactive playground for Scala.
The result will be:

module class Main$ inherits conflicting members:
  method givenExists in trait TestDomainFooModelRepository of type [A](model: A)(using c: Conversion[A]): Id[c.Out]  and
  method givenExists in trait TestDomainBarModelRepository of type [A](model: A)(using c: Conversion[A]): Id[c.Out]
(Note: this can be resolved by declaring an override in module class Main$.)

It’s possible to solve the issue by pushing givenExists into the Conversion typeclass and having only one givenExists definition, but I feel this would make the logic more indirect and debugging more difficult when you haven’t mixed in the right repository with the right type class, whereas the magnet pattern gets more helpful suggestions from the IDE (IntelliJ IDEA), because all the overloads show up in method completion.
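A hedged sketch of that alternative, reusing the names from the error message above (Conversion with an Out member, Id as the effect type):

type Id[A] = A

trait Conversion[A] {
  type Out
  def create(model: A): Id[Out]
}

trait TestRepositories {
  // single definition; the per-model behavior lives in the instances
  def givenExists[A](model: A)(implicit c: Conversion[A]): Id[c.Out] =
    c.create(model)
}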