Make concrete classes final by default

I guess we disagree :wink:

That’s what they teach you in school. In practice, when you design a library meant for long-lived binary compatibility, a lot of design decisions are vastly different from what you do in applications with more-or-less source-compatibility requirements.

See also my ScalaSphere talk, where I spend the entire talk on these issues:


I guess you can teach me better :slight_smile:
The key word is “in general” :wink:

We build our solutions on such core libraries as:
- Anorm (SQL)

With any such library, final classes (or the absence of interfaces) are a headache.
It is a real headache in practice.
We implement many services around these libraries, and final classes force us to hack bytecode or use reflection and proxies.

I recognize the words of a real teacher :slight_smile:

OK, you are right.
But in my practice we have had to use AspectJ to override a constructor to work around this (or else write a different branch of the ORM reader, for example).
Of course, someone could say we should change the core library. OK, we should.
But in real life our customers do not want to wait until the library author approves and merges such a change.

So I love teachers who let me get rid of AspectJ :slight_smile:

And I love libraries that can be decorated easily :wink:
For example, I love XAResource (Infinispan).

With that interface I can use atomic operations in a small project and customize them in an ERP environment.
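To illustrate the point about decoration, here is a minimal sketch. `Resource`, `SimpleResource`, and `LoggingResource` are hypothetical names, a stand-in for something like XAResource rather than the real API:

```scala
// A minimal sketch of why interfaces make decoration easy.
// `Resource` is a hypothetical stand-in for an interface like XAResource.
trait Resource {
  def commit(): String
}

final class SimpleResource extends Resource {
  def commit(): String = "committed"
}

// A cross-cutting concern (logging, metrics, transactions) can be layered on
// by wrapping the interface, with no bytecode manipulation or AspectJ needed:
final class LoggingResource(underlying: Resource) extends Resource {
  def commit(): String = {
    val result = underlying.commit()
    s"[log] $result"
  }
}

val r: Resource = new LoggingResource(new SimpleResource)
// r.commit() == "[log] committed"
```

Because callers only depend on the trait, the small-project version and the ERP-customized version are interchangeable.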

In Java it is common practice to declare a standard separately from its implementations (JDBC, JTA, and so on).
Maybe they were too-good pupils, but I am really glad of it :slight_smile:

I do not dispute that this approach is good for Scala.js.

But if it is the general Scala recommendation, it seems like a language weakness.

I have never seen something like: “Do not use interfaces in Java; they cause compatibility problems.”

The big difference with the examples you give (i.e. the javax interfaces) is that they are explicitly intended to be extended (or implemented if you will) by a third party.

I am just trying to use commonly known examples which show that

is not always true.

I agree that there are libraries that never need any AOP customization or any other implementations.
But there is another reality, where an author simply cannot imagine that such requirements exist, or lacks the time to design for such cases.

- How can the author make his work easier to use?
- How can he “motivate” others not to fork it?

I do not think the best answer is to say something like:
“Final classes are best practice in Scala, so I just cannot do anything.”

Of course there is no silver bullet :wink:

But I think if you write a library which may need some AOP, it is better to expose an interface than not to :slight_smile:
At the very least, it is easier to patch.

What is wrong with forks? You can still make PRs to the original library. Using bytecode magic is quite dangerous and very error-prone (I still remember, when working on a Minecraft mod, how bytecode editing was really the last resort, because it always meant conflicts with other mods and very probably getting broken by the next game or compatibility-framework update, so within a month or two). What happens when you decide to update the library? Aren’t there issues when some other library uses the library you are hacking at runtime? Isn’t maintaining a fork and trying to push your work upstream an easier and safer option?


Sorry, it seems I was not precise.

I mean a separate branch which is not intended to be merged.
For example, the author does not allow these changes because they are too “uncommon”. Or he cannot support them. Or he does not allow traits because he considers them bad practice or does not want to support them, or we do not want to argue with him :slight_smile:

Of course it should not happen, but it does.

It requires time and money, so we must carefully calculate which option is better:

1. We can make a branch/fork. It may be quite fast, but can we support a separate branch?
2. We can make a pull request. It is the best option, but how much time does it require, and will the author want to merge it?
3. We can make a bytecode patch. It can be quite fast, but can we support library upgrades?

There is no silver bullet :frowning:

I personally prefer the second option, but it does not depend only on me.


sjrd: “everything must be final or sealed, unless explicitly intended to be extended”

AMatveev: “Separating implementation from declaration should give much gain.”

Can’t you have both? Or do I misunderstand?


I hope I understand your question correctly.
Yes, I prefer both, but some library authors do not want to make interfaces; maybe they think that making classes final is enough.
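For concreteness, here is a minimal sketch of what “both” could look like (all names here are hypothetical, not from any real library): the library exposes a trait, keeps its implementation final, and users can still substitute or wrap it.

```scala
// The public contract of the hypothetical library:
trait Repository {
  def find(id: Int): Option[String]
}

// The library's own implementation stays final, so it can evolve freely
// without breaking subclasses (there cannot be any):
final class InMemoryRepository(data: Map[Int, String]) extends Repository {
  def find(id: Int): Option[String] = data.get(id)
}

// A user can still add behavior by delegating through the trait:
final class CachingRepository(underlying: Repository) extends Repository {
  private var cache = Map.empty[Int, String]
  def find(id: Int): Option[String] =
    cache.get(id).orElse {
      val found = underlying.find(id)
      found.foreach(v => cache += id -> v)
      found
    }
}
```

The maintainer gets binary-compatibility freedom from `final`, and the user gets extensibility from the trait.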

But let’s return to the main theme: Make concrete classes final by default

I think it is a bad idea. I think freedom is good :slight_smile:
I think most people are smart, so freedom will give much more gain.
Of course I do not suggest making all fields public, because that makes code assistance less comfortable :wink:

Why do you think making classes final by default somehow bites your freedom?

The question is not about forbidding you from writing inheritable classes, but about making the (possibly most) often-used pattern (at least in some programming paradigms) of having classes final not require spelling out the keyword final at every declaration.

And one more point (as was mentioned before): the question is also about people who were not thinking about inheritance of their classes at all (and thus wrote nothing about finality). Did they mean them to be inheritable? As far as I can see, it is usually thought that more restrictive rules are safer; thus, classes that were not intended to be inheritable should not be inheritable. That seems reasonable, and it does not seem to bite your freedom. Do you dispute that?
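A small sketch of what the flipped default means in practice (the `open` opt-in keyword shown in the comments is the kind of marker being discussed in the proposal, not something you can assume in the language at the time of this thread):

```scala
// Today: a class is open to extension unless the author says otherwise.
class A         // anyone may extend
final class B   // nobody may extend
sealed class C  // may only be extended in the same file

// Under "final by default", the explicit markers flip (hypothetical sketch):
// class A        // would mean final
// open class A   // explicit opt-in to inheritance
```

The change is purely about which intent must be spelled out: today silence means “open”, under the proposal silence would mean “closed”.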


“The road to hell is paved with good intentions”

If they suggested ‘final by default’ together with a ‘force extend’ escape hatch, then it would not be about forbidding.

Default finalization alone will make my choice smaller.
Now I can choose between:

  • inheritance
  • AspectJ

They suggest effectively forbidding me the first option in practice.

I do not think, I am sure :slight_smile:

But that is not the interesting question.
The real question is: should I sacrifice my freedom for the safety of others?

I do not think so.
I think inheritance is a very bad thing (although sometimes it is the lesser evil) :slight_smile:

So if someone has a habit of using inheritance whenever possible, that is his problem.

I just do not understand how inheritance can happen accidentally.

By forgetting to use final? If you extend a class which was never intended to be extended, you can waste a lot of time (e.g. breaking interface contracts which are not checked by types and may manifest as random errors anywhere, anytime, or getting back from other library calls a result that is not an instance of your class, which you passed as an argument, but of the original un-extended one).
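A minimal sketch of one such silently broken contract (all class names are hypothetical). Suppose a library author writes an `equals` that compares runtime classes, never expecting subclasses:

```scala
// Hypothetical library class, written without subclassing in mind:
class Point(val x: Int, val y: Int) {
  override def equals(other: Any): Boolean = other match {
    // Comparing getClass means any subclass instance is never equal to a Point.
    case p: Point => p.getClass == getClass && p.x == x && p.y == y
    case _        => false
  }
  override def hashCode: Int = (x, y).##
}

// A user extends it without reading the equals implementation:
class ColoredPoint(x: Int, y: Int, val color: String) extends Point(x, y)

val p  = new Point(1, 2)
val cp = new ColoredPoint(1, 2, "red")

// The subclass silently fails to compare equal to the base class:
// p == cp  is false, cp == p is false,
// and a Set[Point] built by the library will never "contain" the subclass:
// Set[Point](p).contains(cp) is false.
```

Nothing fails to compile; the breakage only shows up at runtime, far from the `extends` clause that caused it. Had `Point` been final, the misuse would have been a compile error instead.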


Why do you think you can manage my risks better than I can? And of course I do not want my risk management to be implemented by forbidding me to drive a car :wink:

It’s not about managing your risks. It’s about managing mine, as the library maintainer.

The risks of not being able to evolve my library, because to do it I would have to break some obscure use case that was never intended. And no matter that reasonable developers will accept that they were abusing an internal API, there are unreasonable developers who will be angry because I broke their exploit. And angry library users are what drive library maintainers into burnout. I don’t want to be driven into burnout by angry users of my free stuff.


I learned about it several days ago. Just do not try to convince me that it would be better for me :wink: With this proposal my personal interests would be bitten.

Maybe this is interesting:

The aim of decreasing coupling is clear and correct; it is the golden rule.
I am just wondering why, when I talked about interfaces, I got this response:

I really believe that James Gosling (Java’s inventor) is right when he said:
Programming to interfaces is at the core of flexible structure.

It is interesting that he said:
During the memorable Q&A session, someone asked him: “If you could do Java over again, what would you change?” “I’d leave out classes,” he replied.

So I think finalizing classes is not a perfect solution; it just freezes the problem.
Maybe it works well somewhere, but not in general.
By the way, language designs that decouple inheritance from subtyping (interface inheritance) appeared as early as 1990; a modern example of this is the Go programming language.

@sjrd you’ve mentioned binary compatibility several times here. Forgive my ignorance, but why is it important? I’d argue that most library maintainers should not worry themselves about it, and that a library with requirements of binary compatibility is quite a niche case.

It’s all in my Scala Sphere talk. Watch it there:

A library that does not need to care about binary compatibility is niche.


This is a really good talk! It’s probably the perfect time for me to watch it since I’m starting to work on a contribution to Mockito Scala – which is quite a new project. Would you consider coming to Israel in the summer to talk in the Scalapeno conference (assuming there will be one next year)?

This may not be the core issue here, but I’d like to address the point you brought up in the talk about breaking source compatibility by adding methods because of implicit-classes. IMHO this is an acceptable way of breaking source compatibility, much like it would be in Java when adding methods in non-final classes.

If one decides to extend a non-final library class, one should be aware that it’s unsafe and cannot expect this “API” to remain intact. With implicit classes it’s even worse: when extending a non-final class, an added method would break the user code (because of the missing override modifier), while with implicit classes the code would likely still compile. Perhaps the Scala compiler should prevent “overriding” existing methods with implicit classes?
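A minimal sketch of that hazard (`Box` and `BoxOps` are hypothetical names, not from any real library):

```scala
// Suppose version 1 of some library ships this class:
class Box(val value: Int)

// A user "adds" a method to it through an implicit class:
object BoxSyntax {
  implicit class BoxOps(val box: Box) {
    def doubled: Int = box.value * 2
  }
}

import BoxSyntax._
val b = new Box(21)
// Today this resolves to the extension method:
// b.doubled == 42

// If version 2 of the library later adds its own `doubled` member with
// different semantics, this call site still compiles, but silently starts
// calling the member instead: real members always win over implicit
// conversions in Scala's resolution, and no warning is emitted.
```

So the upgrade changes behavior without any compile error, which is exactly the kind of silent source-compatibility break discussed above.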

Anyway, back to the topic at hand.

As you elegantly demonstrated in your talk, the deeper a library is down the dependency stack, the higher the risk of a non-binary-compatible change to screw things up, and the lower the importance of maintaining source compatibility.

Thing is, I believe this “formula” goes both ways: the closer a library is to the top of the stack, the more important it is to maintain source compatibility and the less important to maintain binary compatibility. Why? Well, the “higher” a library is, the less likely it is for other libraries to depend on it, and the more likely it is for an “end-user” to use it directly; meaning breaking binary compatibility becomes less of a risk, while breaking source compatibility makes it more likely that the end-users will have to concern themselves with it.

I’ll make the assumption that most library maintainers (in terms of sheer numbers) are maintaining libraries closer to the top of the stack – libraries which may very well be of niche use only, and may not be used by a large community of other developers. I would also make the assumption that these libraries require less skill to be maintained, and are expected to make major changes more often.

Given these assumptions, I’d say that most library maintainers should not be so concerned with binary compatibility. It is obviously a plus to have, but I don’t expect most maintainers to have the necessary skill and/or time to deal with these concerns. I would much rather have them experiment with their libraries until they find balance, and only then start thinking about the long-term repercussions of their design decisions.


Martin is proposing sealed instead: PRE-SIP: Make classes `sealed` by default.