To tell the truth, I cannot completely understand it. What does it mean?
After reading the proposal, I have the feeling that such an action means that we understand the drawbacks of that option, so we take responsibility at our own risk.
Here are the drawbacks (I don’t know whether they are true or not):
It is magic and untenable functionality.
The Scala team is going to reduce usage of that functionality as much as possible. And it is doubtful that you will be able to provide a pull request for probable bugs; it is just too difficult, and we cannot guarantee it will be welcome, because it is really difficult and it is not our main priority.
It significantly increases compilation time.
It is unwelcome functionality, and it would be better not to use it.
Here are the quotes which lead me to this conclusion:
Even a single point from the list can make the choice unacceptable. I just don’t know how to make a decision.
At least this feature is stabilized in Scala 2.
I can only hope that the situation with “value classes” and “opaque types” will eventually reach some appropriate resolution. I hope that our use case deserves it.
We do not need full support for implicit conversions, at least; we just need some sort of “unboxing”.
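As a rough sketch of what such “unboxing” could look like with an opaque type (all names here, UserId and persist, are hypothetical and only for illustration, not part of the proposal):

opaque type UserId = Long

object UserId:
  def apply(raw: Long): UserId = raw
  // The only conversion we would need: unboxing back to the underlying type.
  given Conversion[UserId, Long] = id => id

def persist(raw: Long): Unit = println(s"persisting $raw")

@main def unboxDemo(): Unit =
  import scala.language.implicitConversions
  persist(UserId(42)) // unboxed via the Conversion given, nothing more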
I don’t know why we keep removing the “Scala magic” and turning it into a boring language. If that’s the direction we are going, I might as well just use Python then.
What would happen to conversions between Scala/Java primitive types, e.g. implicit def boolean2Boolean(x: Boolean): java.lang.Boolean? Would these also need import implicitConversions?
The Argument Conversions use case can already be satisfied by type classes, e.g.:
class Iterable[+A]:
  def concat[F[_]](xs: F[A])(using Foldable[F]): Iterable[A] = ...
This requires no implicit conversions nor new syntax.
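To make that concrete, here is a minimal, self-contained sketch of the type-class version (the Foldable with a toList method and the Bag class are stand-ins invented for illustration, not the standard library API):

trait Foldable[F[_]]:
  def toList[A](fa: F[A]): List[A]

object Foldable:
  given Foldable[List] with
    def toList[A](fa: List[A]): List[A] = fa
  given Foldable[Option] with
    def toList[A](fa: Option[A]): List[A] = fa.toList

// Invariant stand-in for Iterable, to keep the sketch free of variance details.
class Bag[A](val elems: List[A]):
  def concat[F[_]](xs: F[A])(using f: Foldable[F]): Bag[A] =
    Bag(elems ++ f.toList(xs))

@main def foldableDemo(): Unit =
  val bag = Bag(List(1, 2)).concat(Option(3)) // Foldable[Option] resolved implicitly
  println(bag.elems) // List(1, 2, 3)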
Bulk Extensions
Seems quite useful.
Unrelated, one of the most addictive features of implicit conversions is allowing DSLs to reuse literals. For example, a parser combinator library which allows strings to be treated as literal parsers ("trait" ~> whitespace ~> ...); or a calculation library which allows formation of expressions involving literals (expr + 42).
Via overloaded literals, Haskell achieves this feature without implicit conversions. Indeed, Scala already overloads for-comprehensions (achieving a kind of “programmable syntax”). It could be useful to investigate supporting overloaded literals via a very limited type class, something like:
trait FromLiteral[A, B]:
  def fromLiteral(value: A)(using Literal[A]): B
where:
enum Literal[A]:
  case StringLit extends Literal[String]
  ...
object Literal:
  given Literal[String] ...
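Filling in the elided pieces, here is a self-contained sketch of how this might be wired up for the parser-literal example above (every name here is hypothetical, not an existing API: FromLiteral, Literal, Keyword, keyword):

enum Literal[A]:
  case StringLit extends Literal[String]

object Literal:
  given Literal[String] = Literal.StringLit

trait FromLiteral[A, B]:
  def fromLiteral(value: A)(using Literal[A]): B

// A toy parser type: a String literal can stand in for a keyword parser.
case class Keyword(word: String)

given FromLiteral[String, Keyword] with
  def fromLiteral(value: String)(using Literal[String]): Keyword = Keyword(value)

// A combinator that accepts anything a String literal can be overloaded into.
def keyword[B](s: String)(using fl: FromLiteral[String, B]): B = fl.fromLiteral(s)

@main def literalDemo(): Unit =
  println(keyword[Keyword]("trait")) // Keyword(trait)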
I think the simplicity here is deceptive. “concat takes an Iterable[A], except where it doesn’t” is a much more unpleasant rabbit hole to go down. If you start from, “Scala is strongly typed,” then it’s just confusing when concat("hi there") works - despite String being a Java type that doesn’t extend Iterable.
It only looks simpler when both parties can be satisfied by, “just trust that it works” - which is fine for a student because Scala is the vehicle for learning, not the target. It doesn’t work well when you’re trying to onboard a junior engineer and Scala is the learning target.
If you actually try to explain how it works, then you end up covering the same complexities as if you had to explain the typeclass version (most notably implicit scope), and the conceptual hooks they could use to associate with this new knowledge are missing. A Foldable[F] constraint is much nicer to reason about than, “there’s an invisible conversion that has to exist” - especially because it gives them a heads-up that something more complicated is going on.
And that’s without having to add the additional burden of explaining that, while this does exist in the standard library, it’s not something they should use.
I think the simplicity here is deceptive. “concat takes an Iterable[A], except where it doesn’t” is a much more unpleasant rabbit hole to go down.
Right now, you have a point. That’s why I propose
def concat(xs: ~Iterable[A]): Iterable[A] = ...
which says it takes an Iterable or something that is convertible to an Iterable. Conversions are a thing in all languages, including Haskell and Rust. That avoids the rabbit hole and is still a lot simpler than the F[_] solution.
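Semantically, the ~ marker can be read as roughly equivalent to threading an explicit scala.Conversion context parameter (a sketch only, not the proposed syntax; the String-to-Iterable[Char] conversion is purely illustrative):

// Accepts an Iterable, or anything for which a Conversion to Iterable is in scope.
def concat[A, T](base: Iterable[A], xs: T)(using conv: Conversion[T, Iterable[A]]): Iterable[A] =
  base ++ conv(xs)

given Conversion[String, Iterable[Char]] = (s: String) => s.toSeq

@main def convDemo(): Unit =
  println(concat(List('a', 'b'), "cd").toList) // List(a, b, c, d)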
If there’s a way to fix the underlying issues with overloading that make the magnet pattern necessary, the main thing that’s left is simple conversions - and non-symbolic approaches are much friendlier in general.
Compare the debugging journey for these alternatives, assuming we’re trying to call this with some non-standard Java class JClass<A> which has the missing dependency squirreled away in a utils project:
Implicit conversion marker
def concat(xs: ~Iterable[A]): Iterable[A] = ...
Why isn’t this working?
What’s that ~ mean again?
This one is particularly bad, as Googling a symbol is tricky, and ~ has a bunch of other meanings.
Ok, it’s a conversion marker.
Do I have a conversion?
I don’t have one.
Where can those live again?
IDE can’t find it, what’s the trait name again so I can grep for it?
Conversion, well that makes sense in hindsight, what goes where?
Ok, looks like I should grep for Conversion[JClass[A], Iterable[A]]
Well, that was a bust.
Looks like they wrote it as Conversion[JClass[T], Iterable[T]]
Oh, the docs use [_] a lot, looks like F[_] means JClass[_]
Looks like I need a Foldable[JClass], do I have one?
I don’t, where can those live again?
IDE can’t find it, at least I know I can grep for Foldable[JClass]
Found it
All three take about the same number of steps, but both the Conversion and Foldable versions provide the breadcrumbs needed to lead the neophyte in the right direction, which the symbolic one just doesn’t provide.
Even the compiler is currently smart enough to suggest missing typeclass imports. It should be possible to make it smart enough to suggest the same for ~Iterable.
I agree the higher-kinded generic type is confusing and off-putting. I’m not suggesting ~ isn’t useful (I think it is useful), just that it does overlap with existing features.
In any case, having seen the drawbacks of implicit conversions, as addictive as they can be at times, I’m excited to see some movement toward alternatives. Looking promising!
IMO it is easier to read, easier to Google for, and more regular. The only downside I see is that you still have to do the conversion within the method body and that it is slightly longer, which can become a bit repetitive. And even then, it seems worth it given the advantages.
Ultimately I think ~ is simply not explicit enough (which is why we are having this whole discussion), so I am afraid we will have the discussion again in a few years to get rid of ~.
As I understand it, ‘~’ is needed only for source compatibility between 3.0 and 3.1? (The needed changes to the sources: importing implicit conversions on the client side.) Maybe issuing Scala 4.0 instead of 3.1, or changing the version scheme to have SemVer after 3 (scala3-1.0.0, scala3-2.0.0), would be a better solution?
Does this mean that many of the standard library methods will end up with prototypes looking like this?
If so, then I think you will end up in the same place as we did with the Scala 2.8 standard library, with method prototypes that are too confusing for beginners, and you end up pretending the complexity isn’t there and offering fake method prototypes to make the methods look simpler.
I generally believe that we should be aiming for the minimum amount of complexity necessary to achieve the required outcome.
But I’m also not really convinced that now is necessarily the best time to be tackling this problem. I would prefer to see Scala 3 ship, see what the top issues are reported by users, and then focus on alleviating those pain points.
I don’t know about “good” use cases, but I have a use-case where I use the possible conversion to cause a compile-time failure with a better message. It is in a DSL that has its own Bool and if, so if someone accidentally uses a Scala if with the DSLBool they get a very clear error message:
trait DSLBool
object DSLBool {
  import scala.language.experimental.macros
  import scala.reflect.macros.blackbox

  implicit def toBooleanError(mybool: DSLBool): Boolean = macro toBooleanErrorMacro

  def toBooleanErrorMacro(c: blackbox.Context)(dfbool: c.Tree): c.Tree = {
    c.abort(c.enclosingPosition,
      """Type mismatch. It appears you are trying to use a DSL `Bool` where a (Scala) `Boolean` is expected.
        |Make sure you use `dslIf` and not `if`.""".stripMargin
    )
  }
}
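For context on how this plays out at a use site: when a DSLBool value appears where a Boolean is expected (for example as the condition of a plain Scala if), the compiler finds the implicit toBooleanError conversion in the companion object, expands the macro, and the macro immediately aborts compilation with the friendlier message, so the user sees the DSL-specific error instead of a generic type mismatch.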
IMO, this is not a good enough reason to allow it, but here is an example use-case for the record.