Make concrete classes final by default

Not easily. The initialization order is all screwed up.
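
A minimal sketch of the kind of problem I mean (hypothetical Named/User names, assuming plain Scala 2 initialization order):

trait Named {
  val name: String
  val greeting = s"Hello, $name" // evaluated while the trait initializes
}

class User extends Named {
  val name = "Alice" // assigned only after the trait body has already run
}

// new User().greeting == "Hello, null": the trait body reads `name`
// before the subclass constructor assigns it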

Does that affect anything if no trait that is used contains any vals or vars, though?

I do not know exactly.
But if we take a look at the trait's decompilation:

trait SimpleTrait {
  var someVal: String
}

javap -s -c -l -p SimpleTrait.class > st.txt
Compiled from "SimpleTrait.scala"
public interface ru.bitec.app.gtk.lang.SimpleTrait {
  public abstract java.lang.String someVal();
    descriptor: ()Ljava/lang/String;

  public abstract void someVal_$eq(java.lang.String);
    descriptor: (Ljava/lang/String;)V
}

A fully abstract trait compiles to a plain interface, so there should be no binary-compatibility problem with it.
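
For contrast, a sketch (hypothetical RichTrait, same javap flags) of roughly what a trait with a concrete member compiles to on Scala 2.12+:

trait RichTrait {
  // a concrete member: no longer fully abstract
  def greet(name: String): String = s"Hi, $name"
}

// javap shows, roughly:
//   public interface RichTrait {
//     public java.lang.String greet(java.lang.String);  // default method
//     public static java.lang.String greet$(RichTrait, java.lang.String);
//   }
// and classes mixing in the trait get compiler-generated forwarders,
// which is where the binary-compatibility hazards come from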

So maybe we need a keyword which guarantees that a trait has no compatibility problems.

It seems like a bad decision in general.
Separating implementation from declaration should give a lot of gain.
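
A minimal sketch of what I mean, with hypothetical names:

// the declaration is a pure trait...
trait UserRepository {
  def find(id: Long): Option[String]
}

// ...and the implementation can stay final; clients depend only on the trait
final class InMemoryUserRepository(data: Map[Long, String]) extends UserRepository {
  def find(id: Long): Option[String] = data.get(id)
}

// third parties can still decorate without touching the final class
final class LoggingUserRepository(underlying: UserRepository) extends UserRepository {
  def find(id: Long): Option[String] = {
    println(s"find($id)")
    underlying.find(id)
  }
}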

I guess we disagree :wink:

That’s what they teach you in school. In practice, when you design a library meant for long-lived binary-compatibility requirements, a lot of design decisions are vastly different from what you do in applications with more-or-less source-compatibility requirements.

See also my ScalaSphere talk, which I devote entirely to these issues.


I guess you can teach me better :slight_smile:
The key phrase is “in general” :wink:

We build our solution on top of such core libraries as:
-EclipseLink
-JDBC
-Anorm SQL
And in every one of these libraries, final classes or the absence of interfaces is a headache.
It is a real headache in practice.
We implement many services around these libraries, and final classes force us to hack bytecode or use reflection and proxies.

I recognize the words of a real teacher :slight_smile:

Ok, you are right.
But in my practice we have to use AspectJ to override the constructor to work around this (or we would have to write a different branch of the ORM reader, for example).
Of course, someone can say we should change the core library; ok, we should.
But in real life our customers do not want to wait until the library author approves and merges that change.

So I love the teachers who let me get rid of AspectJ :slight_smile:

And I love libraries that can be decorated easily :wink:
For example, I love XAResource (Infinispan).

With that interface I can use atomics in a small project, and customize it in an ERP environment.
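
A sketch of such a decorator (a hypothetical LoggingXAResource; possible only because XAResource is an interface):

import javax.transaction.xa.{XAResource, Xid}

final class LoggingXAResource(underlying: XAResource) extends XAResource {
  def commit(xid: Xid, onePhase: Boolean): Unit = {
    println(s"commit($xid, onePhase = $onePhase)")
    underlying.commit(xid, onePhase)
  }
  // the remaining methods simply delegate
  def start(xid: Xid, flags: Int): Unit = underlying.start(xid, flags)
  def end(xid: Xid, flags: Int): Unit = underlying.end(xid, flags)
  def prepare(xid: Xid): Int = underlying.prepare(xid)
  def rollback(xid: Xid): Unit = underlying.rollback(xid)
  def forget(xid: Xid): Unit = underlying.forget(xid)
  def recover(flag: Int): Array[Xid] = underlying.recover(flag)
  def isSameRM(other: XAResource): Boolean = underlying.isSameRM(other)
  def getTransactionTimeout(): Int = underlying.getTransactionTimeout()
  def setTransactionTimeout(seconds: Int): Boolean = underlying.setTransactionTimeout(seconds)
}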

In Java it is common practice to declare a standard separately from its implementations (JDBC, JTA, and so on).
Maybe they were too-good pupils, but I am really glad about it :slight_smile:

I do not dispute that this approach is good for Scala.js.

But if it is the general Scala recommendation, it seems like a language weakness.

I have never seen anything like: “Do not use interfaces in Java; they cause compatibility problems.”

The big difference with the examples you give (i.e. the javax interfaces) is that they are explicitly intended to be extended (or implemented if you will) by a third party.

I just try to use commonly known examples which show that this is not always true.

I agree that there are libraries that never need any AOP customization or any other implementations.
But there is another reality, where an author just cannot imagine that there are such requirements, or lacks the time to design for such cases.

-How can the author make his work easier to use?
-How can he “motivate” others not to fork?

I do not think the best way is to say something like:
“Final classes are the best practice in Scala, so I just cannot do anything”
:slight_smile:

Of course there is no silver bullet :wink:

But I think if you write a library which may need some AOP, it is better to provide an interface than not to :slight_smile:
At least that makes it easier to patch.

What is wrong with forks? You can still make PRs to the original library. Using bytecode magic is quite dangerous and very error-prone (I still remember, when working on a Minecraft mod, how bytecode editing was really the last resort, because it always meant conflicts with other mods and very probably getting broken by the next game or compatibility-framework update, so within a month or two). What happens when you decide to update the library? Aren’t there issues when some other library uses the library you are hacking at runtime? Isn’t maintaining a fork and trying to push your work upstream an easier and safer option?


Sorry, it seems I was not precise.

I mean a separate branch which is not intended to be merged.
For example, the author does not accept the changes because they are too “uncommon”. Or he cannot support them. Or he cannot allow traits because he considers them bad practice or does not want to support them, or we do not want to argue with him :slight_smile:

Of course it should not happen, but it does.

It requires time and money, so we must carefully weigh which option is better:

1-We can make a branch/fork. It may be quite fast, but can we support a separate branch?
2-We can make a pull request. It is the best option, but how much time does it require, and will the author want to merge the request?
3-We can make a bytecode patch. It can be quite fast, but can we support library upgrades?

There is no silver bullet :frowning:

I personally prefer the second option, but it does not depend only on me.


sjrd: “everything must be final or sealed, unless explicitly intended to be extended”

AMatveev: “Separating implementation from declaration should give much gain.”

Can’t you have both? Or do I misunderstand?


I hope I understand your question correctly.
Yes, I prefer having both, but some library authors do not want to provide interfaces; maybe they think that making classes final is enough.

But let’s return to the main theme: Make concrete classes final by default

I think it is a bad idea. I think freedom is good :slight_smile:
I think most people are smart, so freedom will give much more gain.
Of course, I do not suggest making all fields public, because that would make code assistance less comfortable :wink:

Why do you think making classes final by default somehow limits your freedom?

The question is not about forbidding you to write inheritable classes, but about making the (possibly most) frequently used pattern, at least in some programming paradigms, of having classes final not require spelling out the keyword final at every declaration.

And one more point (as was mentioned before): the question is also about people who were not thinking about inheritance of their classes at all (and thus wrote nothing about finality). Did they mean their classes to be inheritable? As far as I can see, more restrictive rules are usually considered safer, so classes that were not intended to be inheritable should not be inheritable. That seems reasonable, and it seems not to limit your freedom. Do you dispute that?
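
For illustration, a sketch of the shape such a rule could take, assuming a hypothetical open modifier (Scala 3 later adopted one along these lines):

class Invoice        // final by default: the common case needs no keyword
open class Connector // inheritance stays available, but must be opted into

class MyConnector extends Connector // fine: Connector is explicitly open
// class MyInvoice extends Invoice  // would be rejected: Invoice was never opened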


“The road to hell is paved with good intentions”

If they suggested “final by default” and “force extend”, then it would not be about forbidding.

Default finalization alone will make my choice smaller.
Now I can choose between:

  • inheritance
  • AspectJ

In practice, they suggest forbidding me the first option.

I do not think, I am sure :slight_smile:

But that is not the interesting question.
The real question is: should I sacrifice my freedom for the safety of others?

I do not think so.
I think inheritance is a very bad thing (although sometimes it is the lesser evil) :slight_smile:

So if someone has a habit of using inheritance whenever possible, that is his own trouble.

I just do not understand how inheritance can happen accidentally.

By forgetting to use final? If you extend a class which was never intended to be extended, you can waste a lot of time (e.g. breaking interface contracts which are not checked by types and may manifest as random errors anywhere, anytime, or getting back from a library call not an instance of the subclass you passed in as an argument, but the library’s un-extended one).
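
A sketch of that second failure mode, with hypothetical Money/AuditedMoney names:

class Money(val amount: BigDecimal) {
  def plus(other: Money): Money = new Money(amount + other.amount)
}

// subclassing a class that was never designed for extension
class AuditedMoney(amount: BigDecimal, val auditTag: String) extends Money(amount)

val m: Money = new AuditedMoney(BigDecimal(10), "invoice-7").plus(new Money(BigDecimal(5)))
// m is a plain Money: the audit tag is silently lost, and a later
// m.asInstanceOf[AuditedMoney] fails with a ClassCastException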


Why do you think you can manage my risks better than I can? And of course I do not want my risk management implemented by forbidding me to drive a car :wink:

It’s not about managing your risks. It’s about managing mine, as the library maintainer.

The risks of not being able to evolve my library, because to do it I would have to break some obscure use case that was never intended. And no matter that reasonable developers will accept that they were abusing an internal API, there are unreasonable developers who will be angry because I broke their exploit. And angry library users are what drive library maintainers into burnout. I don’t want to be driven into burnout by angry users of my free stuff.


I have known about it for several days. Just do not try to convince me that it would be better for me :wink: With this proposal, my personal interests would be hurt.

Maybe this is interesting.

The aim of decreasing coupling is clear and correct; it is the golden rule.
I am just wondering why, when I talked about interfaces, I got the response I did.

I really believe that James Gosling (Java’s inventor) is right when he said:
“Programming to interfaces is at the core of flexible structure.”

It is also interesting that, during a memorable Q&A session, someone asked him: “If you could do Java over again, what would you change?” “I’d leave out classes,” he replied.

So I think finalizing classes is not a perfect decision; it just freezes the problem.
Maybe it works well somewhere, but not in general.
By the way, language designs that decouple inheritance from subtyping (interface inheritance) appeared as early as 1990; a modern example of this is the Go programming language.

@sjrd you’ve mentioned binary compatibility several times here. Forgive my ignorance, but why is it important? I’d argue that most library maintainers should not worry themselves about it, and that a library with binary-compatibility requirements is quite a niche case.