I think that library authors bring invaluable experience to the Committee by commenting on design decisions and trade-offs that we miss, and they greatly enrich discussions. That’s why I’d like to invite you to join our next meeting. If you’re up for it, please let me know and I’ll send you all the details.
Thank you for the invitation, but it might be rather difficult for me to join time-wise because I live in the Japan time zone, and I don’t want to impose on other attendees. That said, please send me the details and I’ll try my best to make it.
There’s currently a discussion on whether EnumEntry should be made a trait (currently an abstract class).
I don’t have a firm opinion either way, but the quantitative arguments currently favour staying with an abstract class. If you have time, please chime in at the link below.
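For readers following the link, here is a minimal sketch of the trade-off under discussion, with simplified stand-in names (EnumEntryAsClass/EnumEntryAsTrait are not Enumeratum’s real API):

abstract class EnumEntryAsClass
trait EnumEntryAsTrait

class Existing

// With an abstract class, an entry cannot extend anything else:
// class Red extends Existing with EnumEntryAsClass // does not compile

// As a trait, EnumEntry could be mixed into a class that already
// has a superclass:
class Green extends Existing with EnumEntryAsTrait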
Exciting development yesterday: @odersky opened a proposal to bring a new language construct enum to Scala!
In light of this, it might be a good idea to postpone the proposal to add an enum lib to the Scala Platform.
My current thoughts:
It looks like it should be safe and easy for Enumeratum users to migrate to the enum construct, and future developments should aim at making this easier where possible.
A large chunk of enumeratum-core will be made redundant by enum, namely the Enum and EnumEntry machinery. The stuff in enumeratum.values might still be worth keeping for now (depending on where the proposal goes).
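To make the overlap concrete, here is a rough before/after sketch; the proposal’s syntax was still in flux at the time, so the second half is illustrative only:

import enumeratum._

// Today, with enumeratum-core:
sealed trait Color extends EnumEntry
object Color extends Enum[Color] {
  val values = findValues
  case object Red extends Color
  case object Green extends Color
  case object Blue extends Color
}

// Under the proposal, roughly:
// enum Color { case Red, Green, Blue }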
I think that’s interesting, definitely! Dotty is bringing an array of amazing new features.
Although I’m optimistic this could be implemented in Scalac, I’m not convinced that we should stage this proposal in light of Martin’s proposal. We don’t know yet when Dotty will be production-ready, and so we should try to satisfy our current needs even though the future looks quite promising.
Since enumeratum is the best we have, what about continuing the process to add it to the Scala Platform? Ultimately, we can migrate code in future versions, hopefully with the help of an automatic tool.
@jvican I see. I’m not familiar with the current progress of Dotty. Given that we do not yet know when it will be production ready, I think it may still make sense to try to get Enumeratum into the platform for current users.
I just want to chime in and share a little (trivial?) insight about enum use-cases.
We have two types of enumerations:
Non-observable - All the programmer cares about is easily setting distinct (type-safe) states, not their values. In this case the compiler can encode the enumeration values in any fashion.
Observable - The programmer sets the (encoded) value of each enumeration entry, either by explicitly (sparsely) setting each entry, or by defining some index -> value transfer function.
It is possible to offer enumerations with just the observable option, but maybe there is some advantage to non-observable enumerations (I can’t think of any, aside from a hardware-synthesis perspective).
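To illustrate the distinction, here is a sketch using Enumeratum (discussed above), since its values module expresses both kinds:

import enumeratum._
import enumeratum.values._

// Non-observable: only the distinct states matter; the encoding is
// left entirely to the library/compiler.
sealed trait Direction extends EnumEntry
object Direction extends Enum[Direction] {
  val values = findValues
  case object North extends Direction
  case object South extends Direction
}

// Observable: the programmer fixes each entry's encoded value
// explicitly (the "sparse" case).
sealed abstract class ErrorCode(val value: Int) extends IntEnumEntry
object ErrorCode extends IntEnum[ErrorCode] {
  val values = findValues
  case object NotFound extends ErrorCode(404)
  case object Teapot extends ErrorCode(418)
}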
In the broader picture, it would be useful for Scala to consider how its variants interop with other languages on the JVM (what does it look like from Java?). I have often simply resorted to using a Java enum, because it is easy to use from both Scala and Java and I can have it in public-facing APIs that are intended for use by multiple JVM languages. I don’t compile to Scala.js, so that tints my view significantly – I’m in a polyglot shop mostly on the JVM.
Enumeratum looks great, and for pure Scala projects I’ll probably recommend its use once it is in the SP. The lack of extra dependencies is huge (if it used Shapeless I probably would not use it – just one more classpath conflict waiting to happen, where some developer struggles with strange errors and wastes a day or two on the problem before coming to me for help).
I have many times had to write the boilerplate code that converts from strings and/or ints to types, and to make up for the lack of compile-time support for uniqueness, I write a test that explodes spectacularly if the identifiers are not unique. Unfortunately, without the ability to enumerate all the types automatically, a developer can add an extra type, fail to add it to the Set, and then break a lot of stuff that the tests cannot find, since the tests must enumerate all the values in the set to assert the required properties. This is one reason I still reach back to a Java enum – at bare minimum I can find all the instances and assert what I need to assert for every one of them in a test, even if it requires quite a bit of boilerplate on the Java side and in the test.
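A minimal sketch of that failure mode, with hypothetical names:

sealed trait Kind { def id: String }
object Kind {
  case object A extends Kind { val id = "a" }
  case object B extends Kind { val id = "b" }
  // A newly added case object can silently be forgotten here:
  val all: Set[Kind] = Set(A, B)
}

// The uniqueness test passes even when an entry is missing from `all`,
// because it can only check what `all` enumerates:
assert(Kind.all.toList.map(_.id).distinct.size == Kind.all.size)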
@rssh Please note that the document has been obsolete since 2014, as it became clear that the goals can be achieved without language changes or syntax additions.
Such language additions would be a bad idea, considering the lessons of case, in terms of language footprint and the question of migrating existing code.
All that needs to be done is to have an @enum annotation that can be added to existing code:
@enum // add this ...
sealed trait Switch // ... to existing code like this
object Switch {
  object On extends Switch
  object Off extends Switch
}
This takes roughly a day to implement, and a week to add documentation and tests, and iron out bugs and corner cases.
I have updated the document to reflect this and prevent further confusion.
Btw, a language annotation *is* a language change.
There also exists the possibility of not changing the syntax at all, but just having a marker interface for Enum.
(The behaviour of the compiler – inserting the JVM flag – and the restriction to have only sealed implementations would be practically the same.) I’m not sure the differences between the keyword, annotation, and marker-class variants are big enough to be more than a question of taste. The main differences from the current situation with Enumeration are common to all of them: Java compatibility, no mandatory nesting of values into a class, no boilerplate with type aliases, and a closer resemblance to algebraic classes.
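A hypothetical sketch of that marker-interface variant; Enum here is an imagined marker trait that the compiler would special-case, not an existing API:

sealed trait Switch extends Enum
object Switch {
  object On extends Switch
  object Off extends Switch
}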
No, the goals cannot be achieved without language changes. (They may be achievable without syntax additions or new typing rules, though.)
case and enum are fundamentally different in that respect. The product of case classes can be replicated with semantically equivalent source-level user code (including with respect to Java interop). In other words, one can manually desugar a case class, compile the resulting source code, and obtain observably equivalent bytecode.
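A simplified sketch of that manual desugaring (the real one has more members, e.g. copy and productPrefix):

// Hand-written equivalent of `case class Point(x: Int, y: Int)`:
class Point(val x: Int, val y: Int) extends Product2[Int, Int] with Serializable {
  def _1: Int = x
  def _2: Int = y
  def canEqual(that: Any): Boolean = that.isInstanceOf[Point]
  override def equals(that: Any): Boolean = that match {
    case p: Point => x == p.x && y == p.y
    case _ => false
  }
  override def hashCode: Int = (x, y).hashCode
  override def toString: String = s"Point($x, $y)"
}
object Point {
  def apply(x: Int, y: Int): Point = new Point(x, y)
  def unapply(p: Point): Option[(Int, Int)] = Some((p.x, p.y))
}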
It is not the case for enum: I cannot write source code that does not use the enum keyword/@enum annotation/Enum magical trait, yet would be observably equivalent to the actual enum, at least from the perspective of Java interop (which is the whole point of this thing, isn’t it?).
According to this criterion, enum needs to be a language change – as in, it must be spec’ed in the core language specification. Whether or not you can implement it as a macro annotation, a compiler plugin, or a compiler fork is irrelevant; the enum functionality creates a unique expressive power which needs to be part of the language.
Note that this does not preclude its surface syntax being an annotation. But if it is, that annotation needs to be specified in the language specification, just like a keyword would be. (There are precedents for this, e.g., @strictfp.)
So let me repeat my initial statement: the goals of Java interop-enabled enums can be achieved without syntax changes, and even without changing the type system, but definitely not without language changes. If leaving syntax and typing rules untouched were enough to not be a language change, then Scala.js would not be a language change either (it preserves the entire syntax and typing rules of Scala), yet I don’t think anyone would deny that its semantic mismatches are language changes.
My initial reply was solely in response to my name being mentioned in relation to an obsolete proposal that I discarded in 2014.
The only purpose of my comment was to correct the impression that this proposal could lend its support to similar proposals made in 2018 – not to litigate the meaning of “language change”.
That an annotation living in an unrelated third-party library constitutes a language change is a perfectly valid opinion to hold, but not an opinion anyone needs to subscribe to.
Whether you call it an opinion, a myth, or a shared belief, the fact is that dialogue becomes very costly if we don’t share a set of definitions. Sébastien carefully explained one of them. You’re of course free to disagree, but I don’t see what “that’s just your opinion, man!” contributes to the discussion unless you argue your point.
I believe it’s important to specify as part of the language anything that is not directly expressible in source code (“at user-level”), especially when it interacts with other parts of the language. Note that this goes beyond the litmus test of emitting the same bytecode. We should also consider whether ill-formed code is properly rejected if all we have is the desugaring instead of the language construct. Most type checking rules also aren’t expressible in source code, although it’s very interesting to think about how we could make more of them customizable (that would likely result in a different language, though!).
Again, this comes back to the need for a common, agreed-upon set of definitions. Without one, we couldn’t reason about the meaning of Scala programs without also specifying all macros and other extensions in use. This is why I’ve always argued against macros that go beyond blackbox.
Since enumerations affect bytecode generation, pattern match emission and analysis, as well as type checking, I think we have to accept that they can’t just live in a library. Normally, we try our hardest to bake as few things as possible into the language, so that other libraries can take advantage of the same extension mechanism (string interpolation is my favorite example here), but I don’t think this is one of those.
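Since string interpolation came up as the favourite example: it is a purely library-level extension point that needs no compiler support. A toy sketch, with a hypothetical sql interpolator:

object SqlSyntax {
  implicit class SqlHelper(val sc: StringContext) extends AnyVal {
    // A real interpolator would return a parameterized query, not a String.
    def sql(args: Any*): String = sc.s(args.map(a => s"'$a'"): _*)
  }
}

import SqlSyntax._
val userId = 42
sql"select * from users where id = $userId" // select * from users where id = '42'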
PS: Sébastien’s example of case classes is an interesting one, because we’ve been trying to move as much of what makes them special out of the compiler and into the purview of library authors. Paul’s name-based pattern matching is a great example of that, as it affords any class the same zero-overhead pattern matching that was formerly reserved for case classes. Sadly, this mechanism itself was never fully specified. Current efforts around deriving could remove even more of what makes case classes unique.
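For the curious, a minimal sketch of name-based pattern matching: the extractor’s result type only needs isEmpty/get members, so no Option is allocated.

final class NonEmptyMatch(private val s: String) {
  def isEmpty: Boolean = s == null || s.isEmpty
  def get: String = s
}
object NonEmpty {
  def unapply(s: String): NonEmptyMatch = new NonEmptyMatch(s)
}

"hello" match {
  case NonEmpty(value) => println(value) // prints "hello"
  case _ => println("empty")
}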
It’s third party. That’s the entirety of the argument, and I agree with it. Soc’s approach isn’t a Scala language change; it is opt-in.
Making an annotation to reduce boilerplate is a few days’ work. Making a change in the compiler costs several weeks or months plus a year of lead time and much bike-shedding, and it has consequences for all downstream tooling developers. Case in point: there is already a third-party effort to introduce @deriving support (https://gitlab.com/fommil/scalaz-deriving/), and I had to abandon efforts to do the same for @data because the meta macros were discontinued (https://gitlab.com/fommil/attic/tree/master/stalagmite). I have no time to rewrite it, but I may return with a much smaller-scope version.
To add some context to why I am sticking up for Soc on this point: I am no longer maintaining ensime, having prioritised family life. This constant increase in the scope of the core language is one of the reasons why ensime will cease to work fairly soon. ScalaIDE has already ceased to work on the most recent version of scala. Neither are commercially funded. It feels to me like we’re hitting the point where tooling volunteers cannot keep up: we’ve reached bursting point. I’d like it if discussions were about reducing the complexity, or bolstering our current position, not adding more things.
Actually, I see this differently: to me, enum has nothing to do with Java interop. Enum is a standard language feature that is available in almost all other mainstream languages; it is basically a sealed collection of wrapped values (ideally at zero cost, typically an int or string) that also lets you look up an enum value by its wrapped value.
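For concreteness, that lookup is roughly what Enumeratum’s values module provides today; a sketch using its documented API:

import enumeratum.values._

sealed abstract class Status(val value: Int) extends IntEnumEntry
object Status extends IntEnum[Status] {
  val values = findValues
  case object Active extends Status(1)
  case object Inactive extends Status(2)
}

Status.withValue(1) // Status.Active
Status.withValueOpt(99) // None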
That makes no sense to me at all. If you let third party macros change language semantics, your IDE is just as broken. The fundamental problem with scala ide has nothing whatsoever to do with adding language constructs, which we do not do lightly. Enums are the perfect example. We’ve tried for years to support it in the library with subpar results.
On the contrary: third party plugins are often easier to support in the IDE than changes to the language. scalaz-deriving, for example, is well supported in all major editors, and this can be updated outside the compiler release cycle, with new features being backported to older versions of scala trivially. I would be happy to explain the details to anybody who is writing a compiler plugin or advanced macro how they can do this, it’s why I wrote https://github.com/ensime/pcplod
If something can be achieved with a compiler plugin or a macro, then that is the best thing to do, no? There is little reason to take an existing third party thing and smash it into the compiler, thereby removing its release cycle freedom and tying its interactive compiler support to the compiler codebase.
FYI, I am not asserting that language changes are to blame for breaking Scala IDE (in fact I know the details, as I’m sure you do). I’m saying that new language features require additional work in the editor and tooling, and that adding new features (without a plan to upgrade the tooling along with the language) will break the tooling going forward. The Scala ecosystem has a huge gap in tooling at the moment, and I don’t think we can put all our eggs in the LSP-WG basket. I’m bringing this up as a meta-issue: in general, core Scala development needs to slow down, with more hands on the tooling, or it will lead to JetBrains vendor lock-in.
BTW, you glossed over the suggestion to use the existing @deriving and @xderiving. I do not understand why a new effort has been initiated at EPFL (or Lightbend?) without at least talking to me about lessons learnt, given that I’ve written an entire library around the concept. Perhaps my code is so awesome that every line of it makes perfect sense, with no further comment needed? Yes, let’s go with that. But, really, it feels like NIH from where I’m standing, and that’s quite demotivating.
I don’t imagine that anyone has any problems with scalaz-deriving… but scalaz-deriving is an entirely different matter.
The entire scalaz ecosystem is built around different goals, different core principles, and a different style of community from core Scala. At a time when the drive is to make Scala more modular, when minimal dependencies are a big plus point for any library, and when cats is the FP library du jour anyway, it’s going to be a hard sell to suggest that scalaz-core be pulled in as a transitive dependency for anything used in general Scala tooling.