Yep—and I do it all the time. Problem is that most developers don’t, and they don’t intend to make their classes extendable.
But case classes typically are part of an ADT. The ADT is the enum, not the case class itself.
If case-classes are only intended for ADTs then replacing them with enums seems like a completely valid idea.
Speaking only for myself, I do see quite a bit of non-ADT case classes, where the
case part is being used to get a sensible toString, a copy method, apply-syntax for construction, a hashCode, an equals method, etc.
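All of that machinery comes from the single case keyword; a minimal sketch, using a made-up Endpoint class (not from any real library):

```scala
// Non-ADT case class, used only for the generated conveniences.
final case class Endpoint(host: String, port: Int)

val a = Endpoint("localhost", 8080) // apply-syntax construction
val b = a.copy(port = 9090)         // generated copy method

// Generated toString, structural equals, and hashCode:
//   a.toString == "Endpoint(localhost,8080)"
//   a == Endpoint("localhost", 8080)
```

None of this requires the class to be extendable, which is why final-by-default case classes cost these users nothing.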
It’s normal for libraries to be used in ways developers did not intend. Why should I prevent users from extending my class?
Maybe for small, often-used methods or classes, for compiler optimization, but otherwise? You’re just going to force them to use reflection, and that’s even worse.
The problem is maintaining binary compatibility: it is not possible to make a class final after the fact if one notices that one forgot to. Having it be an explicit decision would be an improvement IMO.
And in those cases, it would be a breaking change, and arguably inconsistent with regular classes.
I must say that I dislike the idea of having a too-opinionated library. Libraries evolve with experimentation which is much more feasible when allowing users to freely extend from classes. Those experiments may very well lead to people contributing to the library or maybe publishing the way they would have liked to use it.
One can always document classes with “internal API” warnings. Perhaps all that is needed here is a new annotation to make these intentions clearer.
OTOH, if the thread goes on long enough, maybe they’ll drop subtyping in Scala 3 after all. Just out of exasperation.
There may be other ergonomics, e.g.,
- sealed case class for non-final, with subclasses in this unit; otherwise,
- abstract for non-final, because all bets are off for foreign extensions: the class is open, but you don’t get the usual goodness.
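For reference, sealed already expresses the first of these for ordinary classes today; a quick sketch (Shape/Circle are invented names):

```scala
// `sealed` means: open for extension, but only within this compilation unit.
sealed class Shape(val name: String)

class Circle(val radius: Double) extends Shape("circle") // OK: same file
// In a different file, `class Square extends Shape("square")` would not compile.
```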
@eyalroth And my point is that it should be the writer of the class that decides whether it allows experimentation or not. Making that choice implicitly/silently has—in my mind—proven to lead to the wrong results.
What specifically are you referring to? Issuing a warning for classes and case-classes without final/abstract modifiers would be a good start in my mind. Then it becomes clear that whoever created the class intended for it to be extended or not, and omission was not due to oversight.
But the real question is, do library authors have all the knowledge about how their libraries will be used to make this call? I’m not convinced they do. We software engineers spend a lot of time building upon other people’s work and I always end up extending external libraries’ classes to fix bugs or polish rough edges in ways that the library authors would have probably not deemed possible/appropriate. But the upside is that this makes me much faster at making changes to my code.
If classes were to be final by default, we would need a mechanism to replace how easy it is now to extend software without forking and publishing it. If I wanted to extend a library with my own subclass or with a small fix to its implementation, I would now need to fork the library, make the change, and publish it so that I can consume it from my other module. This workflow is not user-friendly and slows everyone down. Conventional build tools don’t optimize for this use case; source-based build tools like Pants and Bazel do.
Note that Class Hierarchy Analysis does an excellent job at optimizing open class hierarchies, so we cannot really say that closed class hierarchies will have better performance on the average case.
All the arguments of the kind “but I want to extend/fix that library” are short-sighted, because they are accidental.
It is accidental that a library author chose to give a feature as a class rather than a static method, for example. If they chose the static method, you couldn’t “fix it” either.
It is accidental that the class you want to fix is only instantiated by yourself. If it was also instantiated by the library itself, your “fix” would not apply to these instances.
It is accidental that the method you override is responsible for the problematic behavior. The problematic behavior could be in the initialization of a private
val, or in a private method deep inside the call graph, or in a separate helper private class. In all those cases, you wouldn’t be able to “fix” the library either.
I can continue all day. If unforeseen extension of a class is what allows you to work around a bug, it is accidental. We must not make language design decisions based on that kind of reasoning.
Fix the library instead! You would have to do that anyway if you were in any of these other situations, accidentally.
It may be accidental, but it is the line between “I will use this library” and “This is shit, let’s write it from scratch”. If I have a problem and I can fix it right now in my code, I will go with it and probably make a PR to fix it upstream later. If I cannot use it immediately, I will look for alternatives, as it is hardly an option to put the implementation of business logic on hold while some random person on the internet merges my PR and makes a release. Forking is also rarely worth the effort. So my view is that making things final by default can easily lead to much less code reuse and much less code contributed to open-source software by non-maintainers.
I understand that this is accidental and covers only some cases, but still, being able to apply this to 50% of libraries is better than 0%.
I pretty much agree with what @Krever has just said.
I agree it would be short-sighted if these accidental situations happened rarely, but I don’t subscribe to that view. These accidental design decisions that allow downstream users to fix/work around behavior they don’t like occur quite often, in my experience. I don’t have data to back this up, but I imagine that some static analysis could yield some interesting answers to the question of how often libraries extend public classes of third-party software. Remember, too, that users only patch behavior on classes that are user-facing and that usually are instantiated by them and then passed around the rest of the APIs.
We have discussed whether making class hierarchies final by default is worth it or not from different perspectives. The strongest argument against it is that it would break an immense amount of code and it’s too late for it. That alone is a good argument to not consider this change for future versions of Scala, as we care about binary compatibility and not breaking people’s mental model about the language.
However, I haven’t seen many people challenging @joshlemer’s initial claim about why making class hierarchies final is the best default:
And I think that it’s worth not taking the “pretty rare” and “code smells” claims for granted. We don’t actually have data to back up how rare it is, and the usefulness of extending open class hierarchies is not about good API design, but about working around behaviors you don’t like (or that may be too specific for your use case) in user land. If everyone used Bazel or Pants in our community, this wouldn’t be an issue, since we would have a frictionless way of forking and then changing somebody else’s code.
It has happened to me a few times that I wanted to do something with a library and was eventually told by developers that the only practical way of doing what I wanted was using some undocumented method. The lesson is that library developers usually don’t anticipate all the use cases.
Making concrete classes final by default would make sense under one scenario: if there was a separation of type and implementation, by which I mean, if every concrete class (i.e. the implementation) extended a type that exposes exactly the same members. I would actually love to have a language that enforces that. If we had separation of type and implementation, you could always create your own implementation based on an existing one via delegation.
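A sketch of what that separation could look like in today’s Scala, with a trait as the type and a final class as one implementation (Cache and its members are made-up names, not a real API):

```scala
// The type: exactly the members every implementation must expose.
trait Cache {
  def get(key: String): Option[String]
  def put(key: String, value: String): Unit
}

// One implementation; final, because the trait is the extension point.
final class MapCache extends Cache {
  private val m = scala.collection.mutable.Map.empty[String, String]
  def get(key: String): Option[String] = m.get(key)
  def put(key: String, value: String): Unit = m.update(key, value)
}

// A user customizes behavior by delegating to an existing implementation,
// never by subclassing it.
final class LoggingCache(underlying: Cache) extends Cache {
  def get(key: String): Option[String] = {
    val v = underlying.get(key)
    println(s"get($key) = $v")
    v
  }
  def put(key: String, value: String): Unit = {
    println(s"put($key, $value)")
    underlying.put(key, value)
  }
}
```

Because every consumer depends on Cache rather than MapCache, the delegating wrapper is a drop-in replacement without any subclassing.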
To add on to this, it is also accidental that they didn’t make the class final. It is quite possible that they meant to and forgot.
The problem is, non-final classes present very real problems, ranging from incorrectness to security vulnerabilities.
Extending a class which should have been final may allow the extending class to violate class invariants which are not exposed as part of a public API, but are inadvertently broken due to the class being non-final.
Worse still, a security-critical method may take an instance of some class (that does not normally allow exploitation), but an extending class is able to leak data (e.g. a private key) out of the API, because the author(s) forgot to make the class final.
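A small sketch of the invariant problem (all names invented for illustration): every checked mutation path preserves the invariant, but a subclass can add an unchecked one because the class was left non-final.

```scala
// The author intends the invariant (value >= 0) to hold for every instance,
// but forgot `final`, so a subclass can break it.
class NonNegative(protected var n: Long) {
  require(n >= 0, "must start non-negative")
  def value: Long = n
  def add(m: Long): Unit = { require(m >= 0); n += m } // checked mutation
}

// The subclass bypasses the checked `add`, violating the invariant
// that every consumer of NonNegative relies on.
class Broken(n0: Long) extends NonNegative(n0) {
  def subtract(m: Long): Unit = n -= m // unchecked: value can go negative
}
```

Code that accepts a NonNegative has no way to tell it was handed a Broken, which is exactly the trust problem described above.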
My 2 cents …
Well, no, that’s not the middle-ground.
Instead of extending classes and tying yourself to private and brittle APIs in order to fix the library, far better ideas would be to:
- contact the authors of the library, like via a GitHub issue
- fork the library, fix it, then send a PR
- publish and use your own fork in case the above doesn’t work in a reasonable time
Point nr 3 sounds like a lot of work, but with the tooling we have, publishing your own stuff in your own private Maven repository is not that complicated. All companies end up with their own Maven repo of stuff.
I personally make all of my classes final by default; I also recommend this to other people, and I’ve seen others do the same, so this argument is not valid for an increasing number of Scala libraries.
Also, I agree with @jvican in the sense that library authors do not know how their library will get used, and in my own work I had to patch Java libraries, so I know this has been happening, and for Java, open classes have been a good thing.
However, from the library’s point of view, it’s best if you limit the exposed API, because if you expose internals that may change in the future, you obviously can’t change those internals later without breaking the API. The less you expose as a library author, the more future-proof the API is. So for a library author the safe choice is to make those classes final and only open them when users show a genuine need for it.
I actually wish Scala adopted C#'s convention in this regard … all classes should be final by default and mark the open ones via a keyword like
virtual. That’s the sanest approach and even though C# libraries aren’t friendly to brittle patches like in Java, C# developers have been surviving this limitation just fine.
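A sketch of that convention in Scala terms (Scala 3 did in fact add an open soft modifier along these lines; Greeter/Child are illustrative names):

```scala
// The author opts in to extension explicitly, C#-style.
open class Greeter {
  def greet: String = "hello"
}

class Child extends Greeter {
  override def greet: String = super.greet + ", world"
}
// A class not marked `open` signals that extension is unsupported;
// Scala 3 can warn about such ad-hoc extensions (under -source future,
// unless scala.language.adhocExtensions is imported).
```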
Btw, I’m basically echoing the thoughts of Anders Hejlsberg, the lead architect of C#, read this interview:
I strongly agree with the 3 paths you mentioned, but the first 2 are rarely acceptable, as they bring a development delay. The third one is fully acceptable, but only worth doing in the case of big libraries.
Also, you mentioned case classes, and I can’t recall a single case when I needed to patch those. Having case classes final is not a problem at all; it’s normal classes that need patches sooner or later.
I also don’t get the ‘API breakage’ argument. If I decide to rely on internals, I’m not really in position to make any complaints, am I?
As a library author, if I left my class open in one version, it’s not internal, it’s become part of the public API. I am not in a position to break some of my users’ code by releasing a new version where it is final, or even where I’ve broken the undocumented “outgoing contract”, as Anders Hejlsberg calls it (brilliant interview btw!).