Most of these are QoL issues which are easily solvable (and have been extensively documented). I think it makes much more sense to solve those QoL issues than to throw out the baby with the bathwater.
The problem is that the solution is being presented as an all or nothing. Our only choice is to either accept this solution or have nothing happen because all suggestions for improvements to current implicits have been shot down.
Also, as people have started using the new solution, many QoL issues have surfaced that even the old one didn’t have, so I don’t see it as a general net improvement.
Instead of starting a new thread, I’d encourage feedback on the dotty implicit proposal to go to the existing Updated Proposal: Revisiting Implicits since that’s the thread monitored by the SIP committee.
Hey, sorry for the lack of response. I’ve taken a bit of time to read, think and experiment with ideas related to the topic.
After reading a lot about type-classes and re-visiting some previous threads in the overall discussion, I have come to realize that this proposal is not enough; it doesn’t give an adequate solution to the problems faced with type-classes in both Scala and Haskell.
Another thing I took note of in the overall discussion is the apparent confusion surrounding the conflation of the term context. It means one thing for type-classes – a set of compile-time constraints – and an entirely different thing for injection – ephemeral run-time shared data (similar to React’s context).
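To make the distinction concrete, here is a minimal sketch of the two meanings in current Scala 3 syntax (Show, RequestContext and the given instances are illustrative names I made up, not part of any proposal):

```scala
// 1. Type-class context: a set of compile-time constraints on a type parameter.
trait Show[A]:
  def show(a: A): String

given Show[Int] = i => s"Int($i)"

def describe[A: Show](a: A): String = // "A: Show" is the compile-time constraint
  summon[Show[A]].show(a)

// 2. Injected context: ephemeral run-time shared data threaded through calls.
case class RequestContext(userId: String)

def audit(msg: String)(using ctx: RequestContext): String =
  s"[${ctx.userId}] $msg"

given RequestContext = RequestContext("alice")
```

The first resolves entirely at compile time from the type; the second is just a value passed along implicitly at run time.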
My conclusion to both of these is that the endeavor of making type-classes easy, fluent and useful in Scala should be separated (orthogonal?) from the “implicit features” (injection, extensions and conversions). It seems that the historical attempts to bend these features to make type-classes work is what causes a lot of misuse and confusion in the language. Type-classes deserve their own distinct syntax, constructs and rules.
I would like then to update my proposal to reflect that conclusion by making the “implication” feature even weaker:
It should not be possible to import imply (one can still import and then imply). Importing and then implying values is hardly a common use-case for anything other than type-classes.
Definitely no context bounds, or at least detach the concept completely from “implicit” and associate it with the new type-class constructs.
Do not allow for implied parameters with type-parameters (which is a bit of an irregularity, so I’m not entirely sold on that).
As for type-classes, I believe they should be explored someplace else. This has already been done, but not with the mind-set of differentiating them from implicits and giving them a solution to their own unique set of problems and use-cases, instead of a generic abstraction over many unrelated concepts.
However, I do see some connection between type-classes, extensions and conversions, in the sense that they are all a set of compilation rules / hints / constraints that can be imported. I’d be tempted to call this concept “lenses”, as in adding a lens on an optical scope (adding compilation constraints on a lexical / programming scope). Funny though, it seems that Haskell already managed to use this term for something else (ugh).
IIRC this is also a mechanism for bringing extension methods into scope, so it may be better to allow this and simply specify a different way of importing typeclass instances - or go the other way and allow something like import extensions to bring in just extension methods.
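For reference, this is how a plain import already brings extension methods into scope in current Scala 3 (StringWords is a made-up object for illustration):

```scala
// Extension methods live as ordinary members of some object...
object StringWords:
  extension (s: String)
    def words: List[String] = s.split("\\s+").toList

// ...and a regular wildcard import is what makes them applicable.
// Without this import, "one two".words would not compile.
import StringWords.*
```

So any scheme that restricts importing implied values would need a separate story for extension methods, which is what the comment above is getting at.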
I’m a little leery of this as well, as it’s entirely plausible that a context object could be somewhat generic, without being a typeclass.
Currently in my proposal you only need a simple import to get them, but as I said in my last comment, I think this should be combined with a new module / namespace component – lens – dedicated for resolving compile-time rules, unlike regular import which is dedicated for resolving names without any side effects.
You’d still be able to declare (implied arg: JsonFormat[String]), but not arg: JsonFormat[A] nor arg: A.
My concern is not that people will still try abusing implied for type-classes, as this would be impossible to do without being able to import “implications”. My concern is that this would somehow conflict with the new type-class system, as it might make function definitions harder to resolve for both the compiler and – more importantly – the developer.
That would really reduce the utility of things which can’t be typeclasses, but act as a locally global context. For example, an overly simplistic memoizing wrapper might look something like this:
def memo[A, B](f: A => B)(input: A)(implied memory: mutable.Map[A, B] @@ Memo): B =
  memory.get(input) match
    case Some(cached) => cached
    case None =>
      val result = f(input)
      memory += (input -> result)
      result
Fully generic, but completely incompatible with type classes. JsonFormat[_] could (and probably should) be a typeclass, but something like this wouldn’t be as easy to convert.
I’m not sure I follow the example. It’s basically a getOrElseUpdate, and I’m not sure why the map is implied. But never mind that; let’s keep the generics as long as it doesn’t horribly conflict with the new type-classes (which I’m not sure will be a problem).
Spray’s JSON formats are one of the prime examples of type-classes. If they don’t fit the new model, then the model has failed.
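For context, here is a minimal self-contained sketch of the JsonFormat pattern, modeled loosely on spray-json (the names mirror the library, but nothing here uses its actual API):

```scala
// A JsonFormat type class: one instance per serializable type.
trait JsonFormat[A]:
  def write(a: A): String

given JsonFormat[String] = s => "\"" + s + "\""
given JsonFormat[Int] = i => i.toString

// Instances for generic types are derived from instances for their parts;
// this derivation step is what makes it a type class rather than a plain
// context object.
given [A](using fmt: JsonFormat[A]): JsonFormat[List[A]] =
  xs => xs.map(fmt.write).mkString("[", ",", "]")

def toJson[A](a: A)(using fmt: JsonFormat[A]): String = fmt.write(a)
```

It is exactly this derivation of `JsonFormat[List[A]]` from `JsonFormat[A]` that any replacement type-class model has to support.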
It’s taken me a little while, but I formalized a full proposal here.
Actually, it’s not yet complete, since I still need to fill in the parts about extensions, conversions and implications, but those are already discussed here. The parts about lenses and type classes are new.
I only scanned through this so far; I’d like to take a more in-depth look to see what comprehensive ideas, other than the one currently implemented, are out there. Meanwhile, thanks for the time you put into this.
I don’t think calling lenses “lenses” is the best choice, since a lens concept already exists in FP land, including the Monocle library in Scala. Any fitting alternative names you/someone can think of?
@eyalroth - It looks like a decent proposal for what it’s trying to do, but unfortunately I think it has two downsides that render it unsuitable:
(1) It’s not clear that you can actually support the use-cases that we have now without completely rethinking code (e.g. that typeclasses are traits). Scala 3 is supposed to be backwards-compatible to a large extent, at least with manual rewrites!
(2) Personally, I think the move towards more distinct features is exactly backwards. I don’t want to learn one computational scheme for how to make change, and a separate one for how to do taxes, and yet another for accounting for liquids, and so on; I just want to learn arithmetic and apply it all over the place. Similarly, I want a language with powerful general-purpose term inference that can be used for whatever term inference is good for. Implicit conversions infer a term of one type from a term of another type; implicit vals provide default terms to infer when one is asked for; implicit defs provide a way to synthesize default terms given types and other default terms. Extension methods locally infer a term with more capability than the old one. The more this can be unified, the better, IMO.
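For readers less familiar with the terminology, the four flavors of term inference listed above can be sketched in current Scala 3 syntax (Ord and the instances are illustrative, not from any library):

```scala
import scala.language.implicitConversions

// Implicit conversion: infer a term of one type from a term of another type.
given Conversion[Int, BigInt] = BigInt(_)

// Implicit val: a default term to infer when one is asked for.
given defaultSeparator: String = ", "

// Implicit def: synthesize a default term from types and other default terms.
trait Ord[A]:
  def lt(a: A, b: A): Boolean

given Ord[Int] = (a, b) => a < b

given [A](using o: Ord[A]): Ord[Option[A]] =
  (x, y) => (x, y) match
    case (None, Some(_))    => true
    case (Some(a), Some(b)) => o.lt(a, b)
    case _                  => false

// Extension method: locally infer a term with more capability than the old one.
extension (s: String) def shout: String = s.toUpperCase + "!"
```

All four are resolved by the same mechanism of asking the compiler for a term of a given type, which is the unification being argued for.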
There is little downside to a powerful, convenient abstraction. People who like to reason from first principles can do so. People who like well-defined use cases can apply “patterns”. If you create a myriad of individual features, each may be slightly more refined, but you can’t reason from first principles any more; you have N different things to learn, plus N(N-1)/2 interaction terms to understand. No thank you!
(I like some of the designs you’ve proposed, but since I think the overall push is in the wrong direction, I’ll leave it to others to discuss those.)
I don’t believe it breaks anything that was possible previously with implicit objects and traits. Inheritance of type classes is still supported, but merely modeled differently. AFAIU this is also the way they are modeled in Haskell, and how they are, in a general sense, considered an alternative to inheritance.
I would love to see examples and try to work out on them.
But that’s the whole point - that lack of distinction between those different features is what makes implicits so hard to grasp and understand. It’s like trying to abstract over whatever a piece of software does as a Turing machine with only the most basic operations.
Such generic abstractions that fail to capture separate ideas with separate structures and constructs may be extremely generic, but are also extremely low-level and hard to understand; after all, assembly is the most general purpose language out there, but it is extremely hard to work with.
I don’t know what you refer to as “first principles”, but those features are still quite generic and suited for multiple purposes. It’s the extremely generic abstraction of “term inference” that allows for so many abusive design patterns, or ones that expect a huge understanding from the developer to connect the dots and see the greater picture.
If I’m understanding correctly, disallowing extension of generics would disallow postfix extensions like .some or .pure[F] from cats, which would be a deeply unpleasant hit to usability (particularly with the type inferencing issues around methods like foldLeft).
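For illustration, here is roughly what such postfix extensions on a fully generic receiver look like in plain Scala 3; the Applicative trait below is a stand-in for the cats type class, not the real one:

```scala
// A stand-in for cats.Applicative, reduced to the one method we need.
trait Applicative[F[_]]:
  def pure[A](a: A): F[A]

given Applicative[Option] with
  def pure[A](a: A): Option[A] = Some(a)

// Extensions on a bare type parameter A: applicable to any value at all.
extension [A](a: A)
  def some: Option[A] = Some(a)
  def pure[F[_]](using F: Applicative[F]): F[A] = F.pure(a)
```

Both `42.some` and `"x".pure[Option]` rely on extending a fully generic receiver, so disallowing implied parameters with type parameters would rule this style out.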