I honestly don’t have the qualifications to argue about whether method lookup happens before or after the expression type is resolved. :)
But the current changes will break so much of our Scala code. I am getting very discouraged. I just don’t know how to motivate and organize the migration while Scala 2 is still alive (though I believe some day it will become easy). I have never used experimental features; I have always checked the language specification. So why have we ended up with so much code debt?
I think that shows the point nicely. The question of when implicit conversions are or are not considered is so complicated that it has become untenable. We explain that an implicit conversion is inserted when a type error would arise otherwise. But that’s clearly only a rough approximation. So what happens in practice is that:
- Compiler writers spend an unreasonable amount of effort figuring out how to hide the complexity from the user and make implicit conversions work most of the time.
- Developers experiment with implicit conversions and usually don’t ask why when they don’t work. They just try something else until it does work.
Except when developer A writes a library with implicit conversions that blows up in the face of developer B, who uses the library in their code in unexpected ways. And we know that happens all too often.
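To make that failure mode concrete, here is a minimal sketch with invented names (Path, depth, and the conversion are all made up; no real library is implied):

```scala
import scala.language.implicitConversions

// Hypothetical library code: any String can stand in for a Path.
case class Path(segments: List[String])
given Conversion[String, Path] = s => Path(s.split("/").toList)

def depth(p: Path): Int = p.segments.length

@main def surprise(): Unit =
  println(depth("a/b/c")) // 3, as the library author intended
  println(depth("a,b,c")) // also compiles, but silently yields 1
```

The second call never produces the type error the user might expect; the conversion fires and quietly does the wrong thing.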
Another appreciable benefit of the declaration-site syntax is the indication it provides to API users that they should expect an implicit conversion at the call site.
Users are often confused when they call a method and their arguments’ types don’t match the expected parameter types – especially when they don’t have the proper implicits in scope and the compiler reports an error. The declaration marker can help steer them in the right direction.
Along this line of thinking, it might be desirable to add documentation annotations for parameters that could link to known/provided Conversion instances. IDEs or even the Scala compiler could use those to suggest imports that might be relevant to the mismatching values provided. (I’m not aware that such annotations exist.)
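For illustration only, such an annotation could look roughly like this (convertibleFrom, UserId, and lookup are all invented; as noted above, nothing like this exists today):

```scala
import scala.annotation.StaticAnnotation

// Hypothetical documentation annotation: tools could read the type
// argument and suggest imports for a matching Conversion instance.
class convertibleFrom[From] extends StaticAnnotation

case class UserId(value: Long)

// Hypothetical use: documents that a Long argument is acceptable
// here whenever a Conversion[Long, UserId] is in scope.
def lookup(@convertibleFrom[Long]() id: UserId): Option[String] = None
```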
The best would be to get rid of implicit conversions altogether. They’re not worth the trouble; they have already caused so much harm to Scala developers and to Scala’s reputation.
Allowing instances of the Conversion[From, To] type class to be defined only in From’s or To’s companion object could be a compromise, if implicit conversions absolutely have to stay.
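For reference, that compromise already works today without any special rule, because companion objects are part of the implicit scope (Meters is an invented example):

```scala
import scala.language.implicitConversions

case class Meters(value: Double)

object Meters:
  // Lives in the companion, so it is found via the implicit scope of
  // Meters; no import of the given itself is needed at the use site.
  given Conversion[Double, Meters] = Meters(_)

val distance: Meters = 3.5 // converted through the companion given
```

The restriction being proposed would simply make this placement the only legal one.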
Well, getting rid of them would also likely get rid of a lot of Scala developers. I have a lot of code that is trim and powerful precisely because of implicit conversions. It would be worse if it happened suddenly, but even if it happens gradually, it removes one of Scala’s killer features for me.
I think we should spend much more brainpower on understanding whether it’s possible to mitigate the downsides than we should on deciding which of each others’ features we should take away because we don’t personally like them.
For instance, I don’t like for-comprehensions. I think their use, especially with futures, badly obscures what is going on. They have their own weird irregular syntax, and the expansions are non-obvious except in the simplest cases. However I recognize that for some they are a killer feature–far better for them than they are bad for me–and it would be a bad idea to remove them.
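For readers who haven’t seen it, here is a small illustration of the expansion being referred to (fetchA and fetchB are made-up placeholders):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def fetchA(): Future[Int] = Future(1)
def fetchB(a: Int): Future[Int] = Future(a + 1)

// Sugared form: reads as sequential code.
val sugared: Future[Int] =
  for
    a <- fetchA()
    b <- fetchB(a)
  yield a + b

// What the compiler generates: nested flatMap/map calls.
val desugared: Future[Int] =
  fetchA().flatMap(a => fetchB(a).map(b => a + b))
```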
If the solution is to annotate every method parameter, then I agree that removing them completely would be better… Every parameter in the standard library with a type that is or contains one of IterableOnce, Iterable or Seq would have to be annotated in order to maintain the current integration of Array and String. And that’s only for the most obvious example.
```scala
val x: ~Foo = myBar // implicit conversion from myBar.type to Foo
```
Probably not. I don’t see a strong reason to allow this. If you want the right-hand side to be converted to Foo, I think it’s better to write that explicitly. Also, allowing this would strengthen the association of ~ with a type, which I believe is wrong. The type ~Foo has exactly the same instances as Foo. I believe the correct role of ~ is as a parameter annotation, alongside prefix => and postfix *. By-name, vararg, and convertible parameters all influence what we can pass to a method and how the argument is evaluated.
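To spell out the analogy (the ~ line is the proposal’s syntax only, so it is left commented out):

```scala
// by-name: prefix => controls when the argument is evaluated.
def log(msg: => String): Unit =
  if sys.env.contains("DEBUG") then println(msg)

// vararg: postfix * controls how many arguments can be passed.
def sum(xs: Int*): Int = xs.sum

// convertible: the proposed ~ would control whether an implicit
// conversion may adapt the argument (not valid Scala today):
// def indexOf(s: ~Seq[Char]): Int = ???
```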
What happens with backward compatibility when we want to invoke code from a library in a previous version of Scala that assumes implicit conversions are in place? Supposedly that will still be possible through Tasty, no?
We assume that all parameters of methods coming from such a previous version allow conversions.
If implicit conversions are gone completely, how would you rewrite the standard library to allow String and Array for every Iterable or Seq argument? I see only three possible answers, which are all unacceptable:
1. Overloads everywhere, which would likely triple the method count.
2. Drop support for arrays and strings. This would make Scala an unnecessarily hostile language and break lots of existing code and learning materials.
3. Rewrite the standard library with type classes. This would make things significantly more complicated than with ~ (sketched below).
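For concreteness, here is a rough sketch of what the third option means; AsIterableOnce and count are invented for illustration, not a proposed design:

```scala
// One possible type-class encoding: every method gains an extra
// abstraction and a using-parameter, compared to a single ~ marker.
trait AsIterableOnce[-C, +A]:
  def apply(c: C): IterableOnce[A]

object AsIterableOnce:
  given fromIterableOnce[A]: AsIterableOnce[IterableOnce[A], A] = xs => xs
  given fromArray[A]: AsIterableOnce[Array[A], A] = arr => arr.iterator
  given fromString: AsIterableOnce[String, Char] = s => s.iterator

def count[C, A](xs: C)(using conv: AsIterableOnce[C, A]): Int =
  conv(xs).iterator.size

@main def demo(): Unit =
  println(count(List(1, 2, 3))) // 3
  println(count(Array(1, 2)))   // 2
  println(count("abc"))         // 3
```

Every Seq-taking method in the standard library would need this treatment, and every error message would mention the extra type class.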
The problem I see is that if the library owner did not use ~, I’m possibly stuck with many explicit conversions (imagine if the Scala library weren’t modified to continue Array support as stated above). Why should the library have to anticipate exactly how its code will be used? This does not seem acceptable to me, neither as a library writer nor as a library user.
Instead, why don’t we take a cue from what was done with given imports, which were separated from regular imports to avoid confusion? An import conversion Foo => Bar would have to be explicitly required to allow the implicit conversion as specified.
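For comparison, the given-import separation being referenced looks like this today (Converters is an invented object; the import conversion form above is the poster’s proposal, not current Scala):

```scala
object Converters:
  given Conversion[Int, BigInt] = BigInt(_)

import Converters.*     // brings in ordinary members, but NOT givens
import Converters.given // givens must be requested explicitly
```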
That’s the situation in almost every other language… And it’s nothing new. If the library owner chooses an argument type that’s needlessly restrictive (say Seq instead of IterableOnce) you are out of luck if you want to pass it an iterator. Library designers are supposed to think about these things, and users should lobby them if a design is too restrictive.
I think I should respond to this, since it perpetuates a myth. You may not have followed my Coursera videos but others have (800K enrollments at last count). I don’t think you will find anything like “a pretzel with implicit parameters and conversions and cakes and fancy type signatures” there. Same for “Programming in Scala”. Sure, it covers the language itself and general functional programming principles instead of applications, but there must be a place for people to start!
The myth I want to counter is that complicated types are related to programming language research. They are not. If I tried to publish a paper with fancy type signatures the reviewers would usually kick it out immediately since they would not understand it. Good research always tries to bring out the simplest version of a concept, and the simplest way to explain it without resorting to handwaving. It’s industry and hobbyists where you find the fancy type signatures. So, I agree that fancy types are a problem but I think it is too naive to believe they come about because of PL research. It would be good to make a more detailed study where these signatures arise and why.
That’s why I qualified it with “at least in the past” . The community as a whole has been moving away from over-cleverness, and I am thankful for it, but I don’t think it’s disputable that the history casts a long shadow. It’s not surprising that someone seeing /:#<< or =++> operators in the standard library or toolchain will assume it to be idiomatic and follow suit. It’s good that there is now broad consensus that those experiments in language/API/library design didn’t pan out, and I’m glad for all the efforts to try and move things in a different direction.
Perhaps “Research” was the wrong choice of word here. “Experimentation” or “Exploration” may be a better fit for what I mean.
My concerns with the current proposal still apply though, and I agree with others saying that needing to put ~ in every single standard library method taking a Seq or IterableOnce seems pretty invasive.
The original proposal does mention type-inference performance though. I’d be willing to suffer quite a lot of inconvenience for a sufficiently large speedup…
As in the previous discussion, export cannot replace implicit conversions because it doesn’t support exporting the methods of a generic type. Is that going to be different?
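A minimal sketch of the limitation being described, with invented wrapper classes; the failing case is left commented out:

```scala
// Works: the members of List[Int] are statically known, so export
// can generate forwarders for them.
class Wrapper(val xs: List[Int]):
  export xs.{head, size}

// Does not compile: A's members are not statically known, so there
// is nothing for export to forward.
// class Logged[A](val underlying: A):
//   export underlying.*
```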
Scala 3 gives a lot, but with this proposal it strikes at the base of our requirements for Scala.
Scala 2 looks preferable in that respect.
When we chose Scala over Kotlin, Scala won because it lets us enrich base types: numbers, dates, strings. That specialization gives us a significant boost when we are doing calculations, working with databases, and handling localization.
In Kotlin that is simply impossible, so Kotlin loses. But when we use value classes in Scala, we expect good integration with the rest of the ecosystem, and implicit conversion gives us that ability.
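For illustration, the pattern looks roughly like this (Money and applyDiscount are invented names, not actual code from the post):

```scala
import scala.language.implicitConversions

// A value class over a base type, with a Conversion back to it so it
// interoperates with APIs that only know the underlying type.
class Money(val amount: BigDecimal) extends AnyVal

object Money:
  given Conversion[Money, BigDecimal] = _.amount

// A third-party-style function that only knows the base type.
def applyDiscount(price: BigDecimal): BigDecimal =
  price * BigDecimal("0.9")

@main def run(): Unit =
  val price = Money(BigDecimal(100))
  println(applyDiscount(price)) // Money converts to BigDecimal implicitly
```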
It looks like that’s a particular use case where implicit conversions are essential. In that case, you could just add the language import everywhere. Then your code would compile, and you would explicitly point out the “magic” it uses. Would that make sense?
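Concretely, that would mean adding this to each affected file, or enabling it build-wide with the equivalent scalac option:

```scala
// Per-file opt-in to implicit conversions:
import scala.language.implicitConversions

// Or for the whole project, pass the compiler flag:
//   -language:implicitConversions
```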