I pretty much agree with what @Krever has just said.
I agree it would be short-sighted if these accidental situations happened rarely, but I don’t subscribe to that view. In my experience, these accidental design decisions that allow downstream users to fix or work around behavior they don’t like occur quite often. I don’t have data to back this up, but I imagine some static analysis could yield interesting answers to the question of how often libraries extend public classes of third-party software. Remember, too, that users only patch behavior on classes that are user-facing and that are usually instantiated by them and then passed around the rest of the APIs.
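To make the pattern concrete, here is a minimal sketch of what I mean by a downstream patch. The class and method names are purely illustrative, not from any real library: imagine a third-party class whose behavior you dislike, which you subclass in user land and then pass through APIs that accept the parent type.

```scala
// Hypothetical third-party class (names are illustrative only).
// Because it was accidentally left open, downstream users can extend it.
class JsonFormatter {
  def format(key: String, value: String): String =
    s"\"$key\": \"$value\""
}

// Downstream workaround: override the behavior we don't like,
// without forking the upstream library.
class LowercaseKeyFormatter extends JsonFormatter {
  override def format(key: String, value: String): String =
    super.format(key.toLowerCase, value)
}

object Demo {
  // Any API written against the parent type accepts the patched instance.
  def render(formatter: JsonFormatter): String =
    formatter.format("UserId", "42")

  def main(args: Array[String]): Unit =
    println(render(new LowercaseKeyFormatter)) // prints "userid": "42"
}
```

Had `JsonFormatter` been final by default, the only recourse would be wrapping it behind a new type (and losing substitutability with every API that expects the original class), or forking the library.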
We have discussed from different perspectives whether making class hierarchies final by default is worth it. The strongest argument against it is that it would break an immense amount of code and it’s too late for that. That alone is a good argument not to consider this change for future versions of Scala, since we care about binary compatibility and about not breaking people’s mental model of the language.
However, I haven’t seen many people challenge @joshlemer’s initial claim about why making class hierarchies final is the best default:
And I think it’s worth not taking the “pretty rare” and “code smells” claims for granted. We don’t actually have data on how rare it is, and the usefulness of extending open class hierarchies is not about good API design, but about working around behaviors you don’t like (or that are too specific for your use case) in user land. If everyone in our community used Bazel or Pants, this wouldn’t be an issue, since we would have a frictionless way of forking and then changing somebody else’s code.