Principles for Implicits in Scala 3

I think most proposals regarding type classes in Dotty still make use of implicits, right? Maybe global coherence could be added, but I don’t think it is a solved problem yet.

Regardless, I don’t know how we could achieve any restrictions, mostly because I don’t think we know yet what should be restricted. I don’t mind if there are no restrictions, but at least I would like to have a consensus on what could be considered idiomatic use of implicits.

(@mdedetrich I don’t mind critique, these discussions hopefully will help the proposals forward, so please don’t think I am offended or anything. Just trying to convey my point of view).

1 Like

I agree with this 100%, and have brought it up before. Technical solutions are useless unless you fully understand the use cases, and I don’t believe we do right now given the current debate as well as the sparsity of use-case-related verbiage in any of the proposals so far.

The actual details of the proposal to change implicits are, in themselves, immaterial compared to how the proposal interacts with the existing design patterns, antipatterns, and codebases. This statement of principles is a step in the right direction, but I think there’s still a lot more analysis of use cases and downstream effects that is missing, vs. the bikeshedding over keywords that these discussions tend to get stuck on.

7 Likes

The issue with powerful abstractions is that it’s very hard to predict where they will lead us. AFAIR implicits were brought to Scala to make Scala’s collection framework smarter. After that, implicits were used to model typeclasses, type-safe heterogeneous lists and many other use cases. In fact implicits are the enablers of the so-called Hascalator - implementing everything from Haskell in Scala. Should we care what is implemented in Haskell when redesigning implicits in Scala? If yes, then should we prevent any potential abuses that happen in Haskell (I don’t know of any because I generally don’t program in Haskell)?

Haskell’s implicit parameters are name-based, while Scala’s are type-based; they don’t behave the same at all and have very different expressiveness, so conflating them seems like a mistake. OTOH Scala’s implicit parameters are generalizations of type classes. But we’re getting totally off topic here :slightly_smiling_face:

Some of the last few messages are assuming that there are some bad use cases for implicit parameters. If so I would like to hear some examples with justifications. We don’t need a list of good uses, we need a list of bad uses, and then we need to see which of those everyone agrees are bad.

There may be some that some people think are bad and other people think are good. That is not justification for a language change. We need to find uses that everyone wants removed.

Even if you can come up with such a rule, such restrictions could be added without redesigning implicits. It’s still not a reason to make such a tremendous change to Scala.

And if you can’t find any use case or rule that everyone agrees should be disallowed by the language, then there is certainly no justification for shaking up the whole syntax.

It’s interesting how the discussion of the new syntax started with (1) let’s fix some limitations and irregularities, then shifted to (2) let’s radically rethink this to make it the run-away success it ought to be, and now it became (3) let’s design it in a way that discourages use to prevent abuse.

I don’t think you’re offensive at all! My point was just that (in my opinion) Scala is a language that is more about combining orthogonal features to tackle problems than about having one dedicated tool that does only one job well.

I agree that looking at use cases is very important to validate the design. Two years ago I gave a talk about it. Roughly, I categorized use cases first into conversions (to be used sparingly) and parameters. Parameters have a large number of use cases, including

  • prove theorems
  • establish context
  • set configurations
  • inject dependencies
  • model capabilities
  • implement type classes

It’s further explained on slides 26ff of the talk.
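To make a couple of these categories concrete, here is a minimal sketch in current implicit syntax; the `Ctx` and `Show` names are mine for illustration, not taken from the talk.

```scala
// Establish context: an implicit parameter threads ambient data
// through a call chain without cluttering every signature.
case class Ctx(user: String)

def greet()(implicit ctx: Ctx): String = s"hello, ${ctx.user}"

// Implement type classes: an implicit value witnesses a capability
// (here: rendering as a String) for a specific type.
trait Show[A] { def show(a: A): String }

implicit val showInt: Show[Int] = new Show[Int] {
  def show(a: Int): String = a.toString
}

def describe[A](a: A)(implicit s: Show[A]): String = s.show(a)

// With a Ctx in implicit scope, callers never pass it explicitly:
implicit val defaultCtx: Ctx = Ctx("alice")

val greeting = greet()      // "hello, alice"
val rendered = describe(42) // "42"
```

The same two shapes (an implicit value, and a function taking an implicit parameter) also cover the theorem-proving, configuration, dependency-injection and capability use cases; only the intent differs.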

There’s also a discussion about the implicit footprint on slides 15 and 16 that is relevant here. Essentially it says that one has a limited implicitness budget. Being very implicit in one dimension has to be counter-balanced by more explicitness in other dimensions. That one was adapted from a blog post by Aaron Turon.

I believe it would be a good idea to take the example code from that talk and compare to how it looks in the new syntax.

As to anti-patterns: I believe most of them have to do with busting the limited implicitness budget. I.e.

  • Using too many implicits
  • Using conflicting implicits
  • Using implicits on types that are not specific enough
  • Making it hard to realize that implicits are used (i.e. letting them hide in long import lists,
    or hiding them in package prefixes of types used elsewhere).
  • Overuse of implicit conversions.
  • Hiding side effects in implicit arguments

I am sure there are others.

That’s why I think that the implicit-as-a-modifier syntax can be a trap for the uninitiated. We are normally used to the fact that a modifier can be put on anything. E.g. everything can be final, or private, or protected, all vals can be lazy, all traits can be sealed, and so on. But not everything should be implicit! In fact there are very tight criteria that have to be met before you should make a definition implicit. Having a completely separate syntax serves as a better marker that something different is going on.
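As a hedged illustration of that contrast, here is how the two styles might look side by side; the dedicated syntax shown is the `given` form that this line of work eventually became in Scala 3, and all names (`Ord`, `intOrd`, `stringOrd`, `max`) are made up.

```scala
trait Ord[A] { def compare(x: A, y: A): Int }

// Modifier style: `implicit` bolted onto an ordinary val. Nothing in
// the shape of the definition signals its special role in implicit search.
implicit val intOrd: Ord[Int] = new Ord[Int] {
  def compare(x: Int, y: Int): Int = Integer.compare(x, y)
}

// Dedicated syntax (the `given` form this proposal eventually became in
// Scala 3): the definition form itself marks it as an implicit instance.
given stringOrd: Ord[String] = new Ord[String] {
  def compare(x: String, y: String): Int = x.compareTo(y)
}

// Both kinds of definition are found by the same implicit search:
def max[A](x: A, y: A)(implicit ord: Ord[A]): A =
  if (ord.compare(x, y) >= 0) x else y
```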

4 Likes

Thinking about what confused me: the thing that took me longest to grok and clarify is that you couldn’t use implicit classes for creating typeclass instances. You could only use implicit vals, defs and objects, although often you were just using an implicit def to create a class of typeclass instances. If implicit classes could be replaced with explicit extension method syntax, that would be a big win in terms of simplification.
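For illustration, a small sketch of the simplification, assuming Scala 3 (where both forms still compile); the names are mine.

```scala
// Scala 2 style: an implicit class bundles a wrapper class with an
// implicit conversion into it, just to add a method to Int.
implicit class RichIntOps(n: Int) {
  def squared: Int = n * n
}

// Scala 3 extension method: the same effect with no wrapper class at all.
extension (n: Int) def cubed: Int = n * n * n

// Both give the appearance of new methods on Int:
val a = 4.squared // 16
val b = 3.cubed   // 27
```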

1 Like

There actually was a proposal along these lines: https://github.com/lampepfl/dotty/pull/4153

The idea is similar to what has been done in Swift, among others: To treat a type class as a trait and a type class instance as some form of extension of the trait. In the end things did not work out so well. That is, they got too complicated for my taste. That’s when I changed tack, worked on extension methods as a separate feature and started to think about witnesses, which eventually led to the current proposal.

1 Like

Therefore implicit was proposed as a separate keyword, i.e. implicit can be an alternative to val or def, not an extra attribute. But that is discussed in other topics.

1 Like

There are plenty of valid use cases of an implicit def/val/lazy, I don’t think you can really claim this. Examples were provided earlier of such cases.

I believe I watched the talk. As I said, implicit conversions are not what’s in question here, only implicit parameters.

It sounds like your argument in this post boils down to this (please confirm or correct): no specific implicit parameter is bad, but using them too much is bad, therefore they need a syntax that creates more friction, i.e. it should be harder to use implicits.

In other words the objective of this proposal is to make defining implicits and using them in definitions more verbose.

Personally I disagree with the premise – it’s not the quantity of implicitness that’s at issue, it’s the decision of what should be implicit and what shouldn’t. It’s a decision comparable to the many other design tradeoffs that API designers make. That which is implicit is that which people reading the code don’t see. The keyword “implicit” makes this point louder than the other keywords that everyone is busy arguing about. There are things people should see when they read code, and there are things they shouldn’t. To me, implicit overuse raises the question of why there are so many things people don’t want seen in the code, which could be an interesting discussion – is it a superficial appeal, like looking “prettier”? Is it an overcompensating reaction to being fed up with too much explicitness in other languages? Is it a legitimate goal for which implicits are currently too crude a tool? Etc.

If so making it harder for API authors to use implicits is curing the symptom not the disease, and may cause more harm than good.

But even granting the premise, that the issue is quantity of implicitness per se, the idea that the solution is to make it harder to use implicits seems to me to make a number of unproven assumptions (such as, that if you make something more verbose people will use it less), which carries a large risk. Why not instead start with the low-risk steps that can be taken, and reassess then? There really is no sudden urgency.

Your passion for solving the long-standing pain points related to implicits is laudable, and I realize your specialty is in language design, but if people would put the energy into it there are a lot of more mundane improvements that can be implemented with zero risk. Wouldn’t it be better to do that first and then see where things stand? Isn’t that the lesson that the software industry keeps having to be taught, and don’t we see it validated all around us?

3 Likes

If macros are anything to go by (i.e. an experimental feature that was buggy as hell and relied on compiler internals), then making things harder/annoying to use is not going to prevent people from using them if there is no alternative and/or there is a legitimate use case for them.

I am surprised that you say that. Did you actually read the proposal? If yes, then you should have noted that the new scheme leads to definitions that are more concise than the old ones rather than more verbose. Sometimes dramatically so. They are just syntactically more distinct from normal definitions and do not suffer from the problem that many combinations of modifier and normal definition are nonsensical.

Your passion for solving the long-standing pain points related to implicits is laudable, and I realize your specialty is in language design, but if people would put the energy into it there are a lot of more mundane improvements that can be implemented with zero risk. Wouldn’t it be better to do that first and then see where things stand? Isn’t that the lesson that the software industry keeps having to be taught, and don’t we see it validated all around us?

My specialty is also compilers and tools and I am pretty proud of what we have achieved in this respect. One does not exclude the other.

2 Likes

I’m just trying to understand your last message.

So is your point then to reduce implicit use not by making it harder, but by making it not use the connotation of modifiers, namely that it’s a free-for-all mix-and-match?

I’m not so sure I agree with that assessment either – most traits can’t be sealed [although it’s fairly obvious when they can and can’t be], and sometimes lazy is harmful, and sometimes leaving it off results in an NPE. Choosing the wrong access modifier can have grave repercussions, resulting in a prematurely frozen API or the need to break user code, or keeping something useful out of reach for reuse. In contrast, making something implicit or not has no effect on the compiled code; IIUC changing it is only a source-compatibility issue, and the only real impact is aesthetic. The fact that it’s such a big issue IMO underscores how crucial aesthetics are in general, making the difference between understandable code and cryptic code.

In short, I have a hard time accepting that being a modifier (not pure ease-of-use) conveys universal validity of usage.

2 Likes

Yes, that is closer. I want a single, syntactically quite distinctive way of defining an implicit instance of a type. The theory being: if that’s what you do, then the fact that this now becomes an implicit is the most important part, so we want to draw people’s attention to it. The mechanics of how you do that are secondary. Actually, in the current system getting the mechanics right is also very hard, in particular if your implicits are polymorphic or conditional or if they define operators. If all of these come together, getting it right is so far the domain of true experts (that is, everybody else who attempts it is likely to screw it up one way or another). In the new system, it’s completely obvious how to achieve this.
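As a hedged illustration of the polymorphic-and-conditional case plus an operator, here is how it might look in the `given`/`extension` syntax that Scala 3 eventually adopted; the `Semigroup` and `|+|` names are a sketch of my own, not from the proposal.

```scala
trait Semigroup[A] { def combine(x: A, y: A): A }

given Semigroup[Int] = new Semigroup[Int] {
  def combine(x: Int, y: Int): Int = x + y
}

// A conditional instance: Option[A] is a semigroup whenever A is.
// In the old system this would be spelled roughly as
//   implicit def optionSemigroup[A](implicit s: Semigroup[A]): Semigroup[Option[A]] = ...
// where choosing def vs val, the placement, and the operator wiring were
// easy to get wrong. The given form makes the shape explicit:
given optionSemigroup[A](using s: Semigroup[A]): Semigroup[Option[A]] =
  new Semigroup[Option[A]] {
    def combine(x: Option[A], y: Option[A]): Option[A] =
      (x, y) match {
        case (Some(a), Some(b)) => Some(s.combine(a, b))
        case _                  => x.orElse(y)
      }
  }

// An operator defined as an extension method, resolved through the givens:
extension [A](x: A)(using s: Semigroup[A]) def |+|(y: A): A = s.combine(x, y)

val n = 1 |+| 2                     // 3
val o = Option(1) |+| Option(2)     // Some(3)
```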

1 Like

Amen! This is exactly what I was trying to express previously! I’m glad to see this as part of the guiding principles for the future of implicits.

Can someone please specify what these “very tight criteria” are?

As far as I can tell, there is no coherent definition of such a thing, since there are so many exceptions that the exercise is fruitless. I can say right now that, apart from implicit conversions (which we should stop talking about), for every orthogonal combination of implicit there are multiple valid cases.

I have personally written every such combination at least once, and I think that if you ask library authors of major libraries they would have probably done it 10 times more.

I honestly would like to see some concrete cases of what is considered as “wrong usage of implicits” (and more importantly wrong usages of implicits that never have an equivalent right usage) so that we have some basis to make some argument.

3 Likes

Sounds good.

Could you give some examples?