will no longer be valid Scala unless the method is defined with the @infix annotation.
I must honestly admit that I find this very odd. It means that what is, to me, an essential part of the Scala language, namely simple syntax with minimal clutter, will be deprecated.
Instead, one would have to clutter @infix all over one’s business logic if one prefers this syntax.
Additionally, what counts as valid Scala syntax becomes confusingly conditional: a method b may or may not be legal depending on how method is defined. Regularity is broken.
Hopefully I’ve misinterpreted something. If not, I think this is a very bad idea.
I have to agree. Both the @alpha and @infix annotations, and all the rules that come with them, seem to do little but add extra mandatory boilerplate, confusion, and irregularity. But I assume a SIP discussion about this will have to happen anyway.
I agree. I use infix notation ubiquitously to write cleaner-looking code that is more readily understood. This includes both Scala and Java standard library methods, e.g. xs take 5 and now minus Duration.ofSeconds(5)
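Concretely, this is the kind of code in question (a runnable sketch; the list and instant here are placeholders):

```scala
import java.time.{Duration, Instant}

val xs = List(1, 2, 3, 4, 5, 6, 7)
val pre = xs take 5                                   // infix, same as xs.take(5)
val cutoff = Instant.now minus Duration.ofSeconds(5)  // same as Instant.now.minus(...)
```

Under the proposal, both calls would eventually warn unless take and minus were marked @infix at their definition sites.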
Though I am sympathetic to your point. The current problem is that there is too much choice in how to call methods, which leads to very inconsistent usage. This rule makes Scala more opinionated and enforces a style. Personally I am in favor of this style, though I can understand if you are not.
Though, as a library designer, you still have the choice of how the code should be used. I think writing definition-site boilerplate is worth it, since you only write it once. So if you prefer a method b, you can actually still use that.
There are two separate questions here. One is whether it is good to have the @infix annotation. The other is whether it belongs in Scala 3.0.
As I’ve said before, we seem to have picked up a mentality somewhere that Dotty is our one chance to make all the changes we’ve been dreaming of for the last ten years. This is a very harmful mentality. Scala, like all software, needs to evolve constantly yet safely. There is something called “change management.” We need to pace changes so that people can adapt to them.
It was recently observed that more than a fifth of community-build projects are still on sbt 0.13, nearly two years after sbt 1 came out, whereas almost all sbt 1.x builds are on 1.2.x. (https://gitter.im/sbt/sbt?at=5ce6d7a6ad024978c617d022)
My interpretation of that is that when people are nervous about how much work an upgrade will require, they push it off indefinitely, but when they believe upgrading is simple they are quick to do it.
So even if you believe that infix syntax should be largely prohibited, and that backticks should accrue another overloaded meaning, please please please can we not have so many changes at once? I don’t care if each individual change is easy to make, or if it can hopefully be automated, or if you have an elaborate backwards-compatibility and deprecation scheme. Making too many changes at once is, plain and simple, a major risk factor.
I don’t care how close together releases are, but too many changes in one release is just too dangerous.
In fact, @infix will not be enforced at once. The annotation is available for use, but by default it will do nothing in Scala 3.0. Only if you compile with -strict will it give a deprecation warning when methods lacking an @infix annotation are used as infix operators. In versions after 3.0 the warnings will be issued by default.
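To make the opt-in concrete, here is a sketch of the definition-site marking. (Note: in the Scala 3 that eventually shipped, this became the infix soft modifier shown below rather than an annotation; Vec is a made-up example class.)

```scala
class Vec(val x: Double, val y: Double) {
  // Opted in: `v dot w` remains legal even under -strict / -source future
  infix def dot(that: Vec): Double = x * that.x + y * that.y
  // Not opted in: under strict mode, callers should write v.scaled(k)
  def scaled(k: Double): Vec = new Vec(x * k, y * k)
}

val v = new Vec(3.0, 4.0)
val d = v dot v // allowed, because `dot` is marked infix
```

The point of contention is exactly this: whether `v dot v` compiling cleanly should depend on a mark the library author may or may not have written.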
I usually make that decision differently. If I do not need the newer version, I will not waste my time on migration.
Usually that minimizes the number of upgrades, so I can spend my time more effectively.
It is difficult to estimate how hard migration to Dotty will be for us.
For example, the removal of class shadowing will break 6,000 lines of code in a project of 1,000,000 lines (IIUC there will be no automatic migration tool for this), and it will make our lives harder.
But we will do the migration if compilation becomes twice as fast.
The problem with definition-site is that you often use code that you do not define. If not for that, there would be no point in libraries or Java interoperability.
Let’s look at this from a different perspective. If infix notation were the default style and dot notation could only be used if annotated at the definition site, would you still be in favor of this change? If not, I’d argue you are not in favor of this change as it is, just in favor of the adoption of your preferred style.
I don’t. I favor “all of the above”, and you can’t have “all of the above” without “all” =D
I always prefer definition-site flexibility rather than use-site flexibility. I like that Scala has definition-site variance instead of use-site variance, and I have campaigned for having a definition-site choice of the number of empty parameter lists on a method, rather than a use-site choice. Similarly, I prefer Scala’s definition-site “method names can have any combination of upper or lower case characters and underscores” over something like Nim’s old use-site “you can call a method with whatever case you want, with whatever underscores you want” flexibility.
This case is all about definition-site vs use-site choice of infix-vs-method syntax, and so naturally I’d prefer the former.
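Definition-site variance, for example, is fixed once in the library and every use site benefits (a minimal sketch):

```scala
// The library author declares Box covariant exactly once...
class Box[+A](val value: A)

// ...and every caller gets the subtyping for free, with no use-site
// wildcards (contrast Java's List<? extends Number> at each call site).
val b: Box[Any] = new Box[String]("hi")
```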
I don’t think most of those examples really apply to infix vs dot notation.
Variance is a good fit for definition-site flexibility, because it expresses something meaningful about what type of operations a particular type supports.
Nim’s use-site casing and particularly its ambivalence towards underscores is so deeply unpleasant that AFAIK it’s never been considered in another language, so it’s very nearly a strawman argument - nobody is asking for that.
The call-site freedom to omit empty parens is directly applicable, and I agree that removing it would make things more predictable.
Frankly, I don’t mind either style, because it’s a difference that doesn’t change anything but the style. Supporting one or the other, or both doesn’t change much of anything.
Supporting one, and the other in a context-dependent fashion is far more confusing. Hiding that decision behind an annotation is going to be a “gotcha” moment, particularly when learning.
Okay, well, that explains it, then. I don’t always. I think it depends on who should have control.
In the case of variance, because of the difficulty in making a library variance-safe, the library designer either has to get control, or the library design is complex, impossible, unsafe, or a mixture of all three (all of which are in evidence in Java).
In the case of infix notation, the library designer doesn’t really know how the user is going to use the thing (maybe they have some hunches), so the designer is operating from a position of relative ignorance while the user has full information. Control goes to the user.
In the case of variable names, allowing identifiers to appear in a wide variety of visually distinct forms is ridiculous, and nobody should get to make that choice: it’s confusion waiting to happen and should be forbidden.
Parens suffer from a constant tension between accuracy and usability. If you want to enable accuracy for users, the library designer has to set the number of parameter lists, so it’s a collaboration between the two; users can’t get that accuracy unless the decision is fixed on the library side.
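The parentheses case can be sketched like this: the library author fixes the shape once, and call sites follow it (Resource is a made-up class):

```scala
class Resource {
  def size: Int = 42     // no parameter list: property-like, called as r.size
  def close(): Unit = () // empty parameter list: side-effecting, called as r.close()
}

val r = new Resource
val n = r.size // parens are not an option here
r.close()      // and are expected here (auto-application is deprecated)
```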
As an example of how the user needs control, for instance, this is exceedingly clean and clear code to me:
val pre = thing take 5
val end = thing drop 14
But in another context you want to make a different choice:
Both of those look fine to me in isolation, but put together, it looks awfully inconsistent:
val pre = thing take 5
val end = thing drop 14
thing.
filter(x => bar(x) < bax(x)).
take(8).
foldLeft(z)((acc, x) => acc append x)
Why use two conventions when one will do?
val pre = thing.take(5)
val end = thing.drop(14)
thing
.filter(x => bar(x) < bax(x))
.take(8)
.foldLeft(z)((acc, x) => acc append x)
I don’t care which single convention we end up using, but if it’s a choice between two conventions because each one looks awful in some cases, or one convention that looks reasonable in both cases, I prefer one convention.
If infix notation is always an option, a codebase can use a linter to keep things consistent within that codebase.
If infix notation is gated by library designers, it’s nearly guaranteed that no non-trivial project will be able to consistently use infix notation - eventually they’ll need to depend on a library that doesn’t bother with adding the annotations.
But in that case, you shouldn’t let the library authors decide. You should just remove infix as an option except on symbolic methods. Letting them decide still gives you both conventions, just inconsistently.
You certainly don’t want stuff like this:
import org.foo.Foo
import org.bar.Bar
object Main {
val foo = new Foo
val bar = new Bar
def main(args: Array[String]): Unit = {
for (arg <- args) {
foo append arg // annotated @infix
bar.append(arg) // Not @infix
}
}
}
And there’s no way to avoid this when the compiler is the arbiter of what can be infix (assuming there isn’t some -Xinfix flag to enable everything without warnings, at which point the annotations were pretty pointless anyway).
So I agree with @dcsobral - I cannot distinguish arguments for @infix from arguments against having infix at all. In fact, the former seems only to accomplish the latter, but in an awkward, inconsistent way where eventually everyone knows that you should never use infix, yet you get ugly inconsistencies in the meantime while everyone figures it out.
Also, it’s simply not true that method syntax is always really close to infix notation in clarity.
val p = foo and (bar or (baz and bippy) or quux)
val q = foo.and(bar.or(baz.and(bippy).or(quux)))
(Snap quiz: is p the same as q or not?)
Operators get lost in the maze of parens and dots. You can hope someone would @infix them, but if they didn’t, you’re stuck. Furthermore, people can add @infix in version 2.7.1 because it was noticed in 2.7.0, but then it’s source-incompatible, which is awfully annoying for a point release.
So, to reiterate:
- For consistency, since libraries differ, style should be enforced by linters or formatters, not compiler warnings or errors.
- Users are in the best position to know which style works for both the code they’re writing and the project they’re involved with. Thus users, not library writers, should have the final say on the syntax.
I haven’t advocated removing infix, just for consistency in how it is used.
This doesn’t seem logical to me.
If users differ, and libraries are all the same, then we should certainly give the choice to the user. This is what we do with indentation, whitespace, and other things: users care (sometimes), libraries care never. We don’t see libraries defining whether or not their methods require whitespace inside curly brackets, for example.
If libraries differ, then I think we should give the choice to the library as to how it is called. If the library has a broken API, then fix it, or write a wrapper, or live with the broken API. This is exactly the same tradeoff as the use-site-parens-vs-declaration-site-parens decision, or the definition-site-method-name-casing. If a library author chooses bad names for their functions, e.g. using snake_case instead of camelCase, we don’t say “let’s just let the users call the functions with whatever names they like, teams can install linters if they want consistency”.
In the case of infix-vs-dot-notation, do libraries differ, or do users differ? Does the fact that 1 + 2 + 3 makes more sense than 1.+(2).+(3) depend on the library codebase, or on a user codebase? How about foo and (bar or (baz and bippy) or quux) versus foo.and(bar.or(baz.and(bippy).or(quux)))? In both cases, it seems to me it depends entirely on the library: 1 + 2 + 3 making more sense is a property of +, and foo and (bar or (baz and bippy) or quux) making more sense in your example is a property of the definitions of foo#and, bar#or, baz#and, etc. As you say, libraries differ!
Whether I am writing a reactive Akka application, a pure-functional Scalaz application, or a better-Java jetty-jackson-hibernate application, whether to use infix syntax seems to depend entirely on the library being called, and not on the user. Since it depends on the library, I say it should be determined by the library. “The library may define a bad API” is not a good reason for letting users do whatever they want as a way of “improving” it.