Proposed Changes and Restrictions For Implicit Conversions

The problem I see is that if the library owner did not use ~, I’m possibly stuck with many explicit conversions (imagine if the Scala library isn’t modified to continue Array support as stated above). Why should a library have to anticipate exactly how its code will be used? This does not seem acceptable to me, neither as a library writer nor as a library user.

2 Likes

Instead, why don’t we take a cue from what was done with regular given imports, which were separated from normal imports to avoid confusion?
import conversion Foo => Bar must be explicitly required to allow implicit conversion as specified.

1 Like

Hmm, in some cases this should be something like:

   import conversion[F[_], T] F[T] => T

(with type parameters in import).

That’s the situation in almost every other language… And it’s nothing new. If the library owner chooses an argument type that’s needlessly restrictive (say Seq instead of IterableOnce) you are out of luck if you want to pass it an iterator. Library designers are supposed to think about these things, and users should lobby them if a design is too restrictive.

I think I should respond to this, since it perpetuates a myth. You may not have followed my Coursera videos but others have (800K enrollments as of last count). I don’t think you’ll find anything like “a pretzel with implicit parameters and conversions and cakes and fancy type signatures” there. Same for “Programming in Scala”. Sure, it covers the language itself and general functional programming principles rather than applications, but there must be a place for people to start!

The myth I want to counter is that complicated types are related to programming language research. They are not. If I tried to publish a paper with fancy type signatures, the reviewers would usually kick it out immediately since they would not understand it. Good research always tries to bring out the simplest version of a concept, and the simplest way to explain it without resorting to handwaving. It’s industry and hobbyists where you find the fancy type signatures. So, I agree that fancy types are a problem, but I think it is too naive to believe they come about because of PL research. It would be good to make a more detailed study of where these signatures arise and why.

6 Likes

That’s why I qualified it with “at least in the past” :stuck_out_tongue:. The community as a whole has been moving away from over-cleverness, and I am thankful for it, but I don’t think it’s disputable that the history casts a long shadow. It’s not surprising that someone seeing /: #<< or =++> operators in the standard library or toolchain will assume it to be idiomatic and follow suit. It’s good that there is now broad consensus that those experiments in language/API/library design didn’t pan out, and I’m glad for all the efforts to try and move things in a different direction.

Perhaps “Research” was the wrong choice of word here. “Experimentation” or “Exploration” may be a better fit for what I mean.

My concerns with the current proposal still apply though, and I agree with others saying that needing to put ~ in every single standard library method taking a Seq or IterableOnce seems pretty invasive.

The original proposal does mention type-inference performance though. I’d be willing to suffer quite a lot of inconvenience for a sufficiently large speedup…

3 Likes

Hey, I like those and use them a lot, why were they deprecated in the first place?

As in the previous discussion, export cannot replace implicit conversion because it doesn’t support exporting methods of a generic type. Is that going to be different?

Scala 3 gives us a lot, but this proposal undermines a core requirement we have for Scala.
Scala 2 looks preferable in this respect.

When we chose Scala instead of Kotlin, Scala won because it allows us to enrich the base types: numbers, dates, strings. That specialization gives us a significant boost when we are doing calculations, working with databases, and handling localization.

In Kotlin this is just impossible, so Kotlin loses. But when we use value classes in Scala, we expect good integration with the rest of the ecosystem, and implicit conversion gives us that ability.
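A minimal sketch of that use case, assuming a hypothetical `UserId` value class (every name below is invented for illustration):

```scala
// Hypothetical example: a value class over a primitive, plus an
// "unboxing" Conversion so it integrates with Long-based APIs.
import scala.language.implicitConversions

final class UserId(val value: Long) extends AnyVal

object UserId:
  // The "unboxing" conversion: a UserId can be used where a Long is expected
  given Conversion[UserId, Long] = _.value

// A library function that only knows about the underlying type
def findRow(id: Long): String = s"row-$id"

@main def unboxDemo(): Unit =
  assert(findRow(UserId(42L)) == "row-42")
```

With the conversion in the companion object it is found automatically at call sites; without implicit conversions, every call site would need an explicit `.value`.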

1 Like

It looks like that’s a particular use case where implicit conversions are essential. In that case, you could just add the language import everywhere. Then your code would compile, and you would explicitly point out the “magic” it uses. Would that make sense?

1 Like

To tell the truth, I cannot completely understand it. What does it mean?
After reading the proposal I have the feeling that such an action means we understand the drawbacks of that option, so we take responsibility at our own risk.
Here are the drawbacks (I don’t know whether they are true or not):

  • It is magic and untenable functionality.
  • The Scala team is going to reduce usage of that feature as much as possible. It is doubtful that you will be able to provide pull requests for probable bugs; it is just too difficult, and we cannot guarantee they will be welcome because it is really difficult and not our main priority.
  • It significantly increases compilation time.
  • It is unwelcome functionality, and it would be better not to use it.

Here are the quotes which led me to this conclusion:

Even a single point from the list can make the choice unacceptable. I just don’t know how to make a decision.
At least this feature is stable in Scala 2.

I can only hope that the situation with “value classes” and “opaque types” will get an appropriate solution someday. I hope that our use case deserves it.
We do not need full support for implicit conversions; we just need some sort of “unboxing”.

I don’t know why we keep removing the “Scala magic” and turning it into a boring language. If that’s the direction we are going, I might as well just use Python then.

5 Likes

What would happen to conversions between Scala/Java primitive types, e.g. implicit def boolean2Boolean(x: Boolean): java.lang.Boolean? Would these also need import implicitConversions?

The Argument Conversions use case can already be satisfied by type classes, e.g.:

class Iterable[+A]:
  def concat[F[_]](xs: F[A])(using Foldable[F]): Iterable[A] = ...

This requires no implicit conversions nor new syntax.
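For concreteness, here is a self-contained sketch of that encoding, written as a standalone method to keep it runnable, and with a deliberately stripped-down Foldable (not the full cats/scalaz one):

```scala
// Simplified Foldable typeclass (sketch only, not the real cats.Foldable)
trait Foldable[F[_]]:
  def toIterable[A](fa: F[A]): Iterable[A]

object Foldable:
  given Foldable[List] with
    def toIterable[A](fa: List[A]): Iterable[A] = fa
  given Foldable[Option] with
    def toIterable[A](fa: Option[A]): Iterable[A] = fa.toList

// concat accepts anything with a Foldable instance -- no conversion needed
def concat[F[_], A](base: Iterable[A], xs: F[A])(using F: Foldable[F]): Iterable[A] =
  base ++ F.toIterable(xs)

@main def foldableDemo(): Unit =
  assert(concat(Vector(1, 2), List(3, 4)) == Vector(1, 2, 3, 4))
  assert(concat(Vector(1), Option(2)) == Vector(1, 2))
```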

  1. Bulk Extensions

Seems quite useful.

Unrelated, one of the most addictive features of implicit conversions is allowing DSLs to reuse literals. For example, a parser combinator library which allows strings to be treated as literal parsers ("trait" ~> whitespace ~> ...); or a calculation library which allows formation of expressions involving literals (expr + 42).

Via overloaded literals, Haskell achieves this feature without implicit conversions. Indeed, Scala already overloads for comprehensions (achieving a kind of “programmable syntax”). It could be useful to investigate supporting overloaded literals via a very limited type class, something like:

trait FromLiteral[A, B]:
  def fromLiteral(value: A)(using Literal[A]): B

where:

enum Literal[A]:
  case StringLit extends Literal[String]
  ...
object Literal:
  given Literal[String] ...
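To make the idea concrete, here is a hand-written sketch of what a compiler supporting overloaded literals might insert for the source literal "trait" (FromLiteral, Literal, and LiteralParser are all hypothetical here; real support would need compiler desugaring):

```scala
// Hypothetical overloaded-literal machinery (sketch only)
enum Literal[A]:
  case StringLit extends Literal[String]
  case IntLit extends Literal[Int]

object Literal:
  given Literal[String] = Literal.StringLit
  given Literal[Int] = Literal.IntLit

trait FromLiteral[A, B]:
  def fromLiteral(value: A)(using Literal[A]): B

// Toy target type: a parser that matches one exact token
final case class LiteralParser(token: String)

object LiteralParser:
  given FromLiteral[String, LiteralParser] with
    def fromLiteral(value: String)(using Literal[String]): LiteralParser =
      LiteralParser(value)

@main def literalDemo(): Unit =
  // What the compiler would insert where the source says just "trait":
  val p = summon[FromLiteral[String, LiteralParser]].fromLiteral("trait")
  assert(p == LiteralParser("trait"))
```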
4 Likes

yes, but

  • this requires globally rewriting libraries to support the new structure

  • this is significantly more complicated. I believe since we are all experts we don’t tend to see it. But for a newcomer:

    • there’s an extra type parameter - and even worse it’s a higher kinded one. What is F[_]?
    • the actual argument is of type F[A] – what is this F? It’s not descriptive at all!
    • there’s an additional constraint that says Foldable[F] – no idea what that is!

compare to

  def concat(xs: Iterable[A]): Iterable[A] = ...

That’s much clearer: concat takes an Iterable. Everyone gets that.

The FromLiteral typeclass idea is very intriguing! We should look into that.

I think the simplicity here is deceptive. “concat takes an Iterable[A], except where it doesn’t” is a much more unpleasant rabbit hole to go down. If you start from, “Scala is strongly typed,” then it’s just confusing when concat("hi there") works - despite String being a Java type that doesn’t extend Iterable.

It only looks simpler when both parties can be satisfied by, “just trust that it works” - which is fine for a student because Scala is the vehicle for learning, not the target. It doesn’t work well when you’re trying to onboard a junior engineer and Scala is the learning target.

If you try and actually explain how it works, then you end up covering the same complexities as if you had to explain the typeclass version (most notably implicit scope), and the conceptual hooks they could use to associate with this new knowledge are missing. A Foldable[F] constraint is much nicer to reason about than, “there’s an invisible conversion that has to exist” - especially because it gives them a heads up that something more complicated is going on.

And that’s without having to add the additional burden of explaining that, while this does exist in the standard library, it’s not something they should use.

8 Likes

Maybe it’s a good idea to combine this with the FromDigits class, to make it a more powerful mechanism.

4 Likes

I think the simplicity here is deceptive. “concat takes an Iterable[A], except where it doesn’t” is a much more unpleasant rabbit hole to go down

Right now, you have a point. That’s why I propose

def concat(xs: ~Iterable[A]): Iterable[A] = ...

which says it takes an Iterable or something that is convertible to an Iterable. Conversions are a thing in all languages, including Haskell and Rust. That avoids the rabbit hole and is still a lot simpler than the F[_] solution.

3 Likes

If there’s a way to fix the underlying issues with overloading that make the magnet pattern necessary, the main thing that’s left is simple conversions - and non-symbolic approaches are much friendlier in general.

Compare the debugging journey for these alternatives, assuming we’re trying to call this with some non-standard Java class JClass<A> which has the missing dependency squirreled away in a utils project:

Implicit conversion marker

def concat(xs: ~Iterable[A]): Iterable[A] = ...
  1. Why isn’t this working?
  2. What’s that ~ mean again?
    This one is particularly bad, as Googling a symbol is tricky, and ~ has a bunch of other meanings.
  3. Ok, it’s a conversion marker.
  4. Do I have a conversion?
  5. I don’t have one.
  6. Where can those live again?
  7. IDE can’t find it, what’s the trait name again so I can grep for it?
  8. Conversion, well that makes sense in hindsight, what goes where?
  9. Ok, looks like I should grep for Conversion[JClass[A], Iterable[A]]
  10. Well, that was a bust.
  11. Looks like they wrote it as Conversion[JClass[T], Iterable[T]]
  12. At least I found it

Direct encoding with the Conversion typeclass

def concat[In](xs: In)(using Conversion[In, Iterable[A]]): Iterable[A] = ...
  1. Why isn’t this working?
  2. Hmm, what does using mean again?
  3. Looks like I need a Conversion, do I have one?
  4. I don’t have one, where can those live again?
  5. IDE can’t find it, at least I know I can grep for Conversion[JClass[A], Iterable[A]]
  6. Well, that was a bust.
  7. Looks like they wrote it as Conversion[JClass[T], Iterable[T]]
  8. At least I found it
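For reference, a runnable sketch of this encoding; Bag and JClass are invented stand-ins, while scala.Conversion is the real standard-library typeclass:

```scala
// Sketch of the "direct Conversion typeclass" signature discussed above
class Bag[A](val items: Iterable[A]):
  def concat[In](xs: In)(using c: Conversion[In, Iterable[A]]): Iterable[A] =
    items ++ c(xs) // applied explicitly, so no language import is needed

// A Java-ish wrapper with its Conversion "squirreled away" elsewhere
final case class JClass[A](elems: A*)
given [A]: Conversion[JClass[A], Iterable[A]] = _.elems

@main def conversionDemo(): Unit =
  assert(Bag(List(1, 2)).concat(JClass(3, 4)).toList == List(1, 2, 3, 4))
```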

Encoding using a capability typeclass (in this case, Foldable)

def concat[F[_]](xs: F[A])(using Foldable[F]): Iterable[A] = ...
  1. Why isn’t this working?
  2. Hmm, I wonder what F[_] means.
  3. Dunno, might as well ignore it.
  4. Hmm, what does using mean again?
  5. Oh, the docs use [_] a lot, looks like F[_] means JClass[_]
  6. Looks like I need a Foldable[JClass], do I have one?
  7. I don’t, where can those live again?
  8. IDE can’t find it, at least I know I can grep for Foldable[JClass]
  9. Found it

All three take about the same number of steps, but both the Conversion and Foldable versions provide the breadcrumbs needed to lead the neophyte in the right direction, which the symbolic one just doesn’t provide.

7 Likes

Even the compiler is currently smart enough to suggest missing typeclass imports. It should be possible to make it smart enough to suggest the same for ~Iterable

1 Like