That’s also fine with me.
Just to push on this a bit more, I don’t think it’s as bad as you make it seem. We already have ad hoc rules for semicolon inference, operator precedence/binding, and underscore shorthand, which don’t cause too much confusion in practice:
- There are certainly edge cases in Scala semicolon inference where e.g. two newlines behave differently from one newline, but by and large it is not a real problem for users. We even had breaking syntax changes in Scala 3 around this (e.g. Surprising line continuations in Scala 3)

- Precedence/binding confusion does happen sometimes, but not more than in any other programming language. And adding parens to resolve precedence issues, similar to what you described users would need to do for dot companion shorthand, is something people have been doing since they were 8 years old in math class
- We have a similar bunch of ad hoc rules for `_.foo` shorthand, which binds to the nearest enclosing `()`, `,`, or `=`. Again, there are edge cases, and people do hit them sometimes, e.g. how `{ println("hello"); _.foo }` desugars, but only very rarely
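For reference, the `_` shorthand binds the generated lambda to the smallest enclosing expression, which is exactly where the edge cases come from. A plain-Scala illustration (the commented line is the gotcha):

```scala
// underscore expands to a lambda at the smallest enclosing expression:
List(1, 2, 3).map(_ + 1)          // x => x + 1, gives List(2, 3, 4)

def inc(x: Int): Int = x + 1
List(1, 2, 3).map(inc(_))         // fine: expands to x => inc(x)
// List(1, 2, 3).map(inc(_ + 1))  // expands to inc(x => x + 1): type error
```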
These features could easily have been made unambiguous by adding more syntax - explicit semicolons, explicit parens around every expression, explicitly named/scoped parameters for every lambda - but I think they are better off being concise despite the edge cases.
I think dot companion shorthand falls in a similar category of language feature, and would benefit from the single-dot making it as concise as possible while still being semantically unambiguous
It’s arguable that these shorthands can cause compounding confusion, more than the sum of their parts, hence the caution about adding one more shorthand despite the existing ones being OK. CoffeeScript may be an example of that. But Swift has a very similar syntax to Scala: semicolon inference, method chaining, operator precedence, and dot companion shorthand. The ambiguity in theory does not turn out to be a problem in practice, and people seem to read and write code like the snippet below, including both of these language features, without issue:
```swift
let newButton = UIButton(type: .custom)
    .backgroundColor(.blue)
    .title("Just a button")
    .titleStyle(font: .systemFont(ofSize: 12), textColor: .white)
    .touchUpInside(target: self, selector: #selector(buttonAction))
```
The rule is not ad hoc. The informal explanations are ad hoc.
But I think “leading infix” from point 1 deserves its own line item separate from semicolon inference.
As with optional braces, “proper formatting” makes everything just work, but deviation suddenly requires rules you can’t remember (and wouldn’t want to if you could).
Maybe the test is: What are the unintuitive ways people will bend or break the syntax?
Well, one can certainly exaggerate how bad it is, though underscore shorthand was bad enough to get substantial changes from 2 to 3; and operator precedence is an ongoing pain point when it gets too ad hoc: the situation with `:` in extension methods vs regular methods isn’t great.
So, yes, it’s not necessarily a disaster. Wouldn’t be the first time.
However, it’s bad enough that I think alternatives are worth thinking about.
```scala
foo(Color.Red) // works

foo(
  println("Hi")
  Color.Red
) // works

foo(.Red) // works

foo(
  println("Hi")
  .Red
) // FAILS

foo(
  println("Hi")
  (.Red)
) // works
```
This isn’t obviously a rare use-case.
So, anyway, I agree that it’s not a showstopper. But given that (1) `.Foo` is kinda clunky anyway and (2) there seems to be a pretty good solution, I think it’s worth carefully assessing whether ad-hoc rules are worth it for enabling `.Foo`.
After thinking about it for a while, I have come to the conclusion that the `.Red` syntax is not a good fit for Scala. It causes syntactic ambiguities and I find it an eyesore, since `.` is so entrenched as an infix operator. Even if we make an analogy with path separators `/`, the prefix `.` is still different, since prefix `/` indicates the global scope but prefix `.` indicates a very specific local scope.

That said, we could come back to the alternative without the `.`. Why was that dismissed? Ambiguities could be resolved by ordering, i.e. `Red` as a member of the companion of the target type would be considered only if it does not resolve to anything by other means. We do lots of disambiguation rules like that for selections. So far, it would be the first for simple identifiers, but there’s no hard rule why identifiers could not have fall-back resolvers.
The main reason I can see against it is that it would be fragile. An identifier like `Red` in a program would be OK or give a “not found” error depending on where it appears. If you see lots of code that uses `Red` without qualification, you might be surprised if your use does not pass. And the reasons for this could be subtle. For instance, adding an overloaded variant to a method would mean that the method arguments now need full qualification, since no target typing is available.
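To make that fragility concrete, here is a sketch under the unqualified-`Red` proposal (the fallback resolution in the comments is hypothetical; none of it exists in Scala today):

```scala
enum Color:
  case Red, Green, Blue

def paint(c: Color): Unit = ()

// paint(Red)   // would resolve: the expected type Color supplies the fallback scope

// adding an overload can remove the target type during overload resolution:
def paint(c: Color, alpha: Double): Unit = ()

// paint(Red)   // could now fail, requiring the full qualification Color.Red
```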
I don’t think this is a good analogy (hence why it seems counterintuitive). I think a better analogy would be “’s” in English: Mike’s red ↔ `Mike.Red`. In that way, a prefix period is like using “one’s”, “someone’s” or “his”: his red ↔ `.Red`

In that light, it makes a lot more sense, and feels more intuitive.

Maybe we could add something instead of removing the period, to be more in line with “his”, for example `*.Red`, `?.Red` or `...Red`

(But regardless, I am not sure either analogy is useful in deciding if the period is a good choice)
I don’t think “dismissed” is the right word here. They were discussed in detail and their tradeoffs enumerated repeatedly. I can repeat some of them again below:

- `.foo` syntax is ambiguous in a small number of cases due to method chaining over multiple lines. But un-prefixed `foo` syntax is ambiguous all the time, with any variable that you may have in scope with the same name

- Un-prefixed `foo` can be disambiguated by resolution fallback rules in the typer, which is less obvious both to machines and to humans than `.foo` syntax, which can be disambiguated by precedence rules in the parser. Having to run full typechecking name resolution to figure out where the feature takes effect is much more involved for machines and humans than simply parsing the code in question (even if you can then only expand the `.foo` to its fully qualified path during/after typechecking)

- Un-prefixed `foo` can come in two variants: either it is opt-in with a flag (per method param, or per type), or it applies universally to every definition site
  - If it is opt-in, that means it cannot be used on existing libraries unless retrofitted. This is less than ideal, since there’s a ton of existing code that could benefit right away, e.g. all code using `enum`s, `sealed trait`s with their `case`s in the companion, types with factory methods in their companion, etc.
  - If it is on by default for everyone, then it probably brings way too much stuff into scope in every expression that has a target type. “Everything in the companion `object Foo`” is a lot of stuff to bring into scope every time a `type Foo` is expected
  - Alternately, it could apply only to special members of the companion, e.g. only the constructor. This limits the scope pollution, but also limits the usefulness vs. the original implementation in Swift, where calling factory methods of the target type on its companion was a major use case (including those taking parameters)

- IIRC @odersky himself has repeatedly rejected the idea of scope injection, or bringing additional identifiers into scope in a user-configurable way. I can’t google up any examples at the moment, but I quite clearly remember that being the case, going back long before this particular proposal. Therefore it is not surprising that an approach that involves bringing new identifiers into scope in a user-configurable way is deemed unlikely to get support
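Sketching the Swift-style factory use case in hypothetical Scala (the leading-dot call syntax in the comments does not exist in Scala today; the companion definitions are plain Scala):

```scala
case class Color(r: Int, g: Int, b: Int)
object Color:
  val Red = Color(255, 0, 0)                                 // value in the companion
  def gray(level: Int): Color = Color(level, level, level)   // factory method

def fill(c: Color): Unit = ()

// hypothetical dot companion shorthand, resolved from the target type Color:
// fill(.Red)        // would expand to fill(Color.Red)
// fill(.gray(128))  // factory methods taking parameters, a major Swift use case
```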
It’s not so much dismissal as a study of pros and cons. I don’t dispute that `.foo` has an “ick” factor and looks very unusual. But Swift seems to demonstrate that the “ick” factor is a non-issue for a wide base of not-necessarily-sophisticated users (iOS app developers), which to me indicates that the Scala community should be able to get used to it as well.

If we decide to go with an un-prefixed `foo`, I would be fine with that too. It’s just that the arguments in favor of having a prefixed `.foo` do seem very reasonable.
> If it is on by default for everyone, then it probably brings way too much stuff into scope in every expression that has a target type. “Everything in the companion `object Foo`” is a lot of stuff to bring into scope every time a `type Foo` is expected
I think after an initial experimental phase it should be on always, or we should drop the idea. I am against adding additional mode switches. About the concern of bringing into scope “way too much stuff”: I was imagining restricting it to members that actually return a value of the target type. E.g. if the expected type is `Color`, then `Red` could be referenced unqualified but `values` could not.

It’s true that this is a form of scope injection, and I am generally not a fan of that. So I am still sitting on the fence here. However, if we want to have this form of target scoping, then I would prefer unqualified over prefix `.`.
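A sketch of that restriction against a Scala 3 `enum` (the unqualified references in the comments are the hypothetical feature; `values` and `valueOf` are the real synthetic companion members):

```scala
enum Color:
  case Red, Green, Blue
// the companion also gets values: Array[Color] and valueOf(name: String): Color

def fill(c: Color): Unit = ()

// under the restriction, with expected type Color (hypothetical syntax today):
// fill(Red)             // OK: Red is a Color
// fill(valueOf("Red"))  // OK: valueOf returns a Color
// val all = values      // NOT in scope: values returns Array[Color], not Color
```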
I gave a solid use-case above where this limitation is too restrictive.
Which one? I could not recall an example where the expected type alone was not enough.
I think this sounds like a reasonable restriction. That should significantly cut down on the scope pollution, while still bringing in everything that would be useful. And if we make it a fallback scope only looked up if the existing name resolution falls through, it would be 100% source and binary compatible
Restricting it to only members that return a value of the type does rule out things like factory methods inside nested objects, e.g.
```scala
trait Foo
object Foo {
  object stuff {
    def nestedFactory(): Foo = new Foo {}
  }
}
```
But maybe that’s an uncommon enough use case it’s OK.
One thing I’d like to call out, that maybe hasn’t been said explicitly here, is that this “relative scoping” should work for pattern matching as well. This is the case in Java `enum`s and `switch` statements, where un-qualified names are required:
```java
enum Level {
  LOW,
  MEDIUM,
  HIGH
}

class HelloWorld {
  public static void main(String[] args) {
    Level myVar = Level.MEDIUM;
    switch (myVar) {
      case LOW: System.out.println("low"); break;
      case MEDIUM: System.out.println("medium!"); break;
      case HIGH: System.out.println("high!!!"); break;
    }
  }
}
```
And in Swift, where qualified names are allowed, but dot-prefixed shorthand is normally used:
```swift
switch state {
case nil:
    removeLoadingSpinner()
    removeErrorView()
    renderContent()
case .loading?:
    removeErrorView()
    showLoadingSpinner()
case .failed(let error)?:
    removeLoadingSpinner()
    showErrorView(for: error)
}
```
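For comparison, here is what the same kind of match looks like in Scala 3 today, where the cases must be qualified (or imported); the relative-scoping alternative in the final comment is hypothetical:

```scala
enum Level:
  case Low, Medium, High

def describe(l: Level): String = l match
  case Level.Low    => "low"      // today: qualified, or `import Level.*`
  case Level.Medium => "medium!"
  case Level.High   => "high!!!"

// with relative scoping (hypothetical): case Low => ...  or  case .Low => ...
```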
What about marking at the definition site which parameter allows relative scoping, as in:
```scala
final case class Shape(@relative geometry: Shape.Geometry, @relative color: Shape.Color)
```
Or even do this for target typing as such:
```scala
final case class Shape(@targetType geometry: Shape.Geometry, @targetType color: Shape.Color)
```
It would allow disambiguation in the case of overloaded variants.
Wouldn’t it work, though? `options.CompilerOptions.ParserLogLevel` has a companion object `options.CompilerOptions.ParserLogLevel` with a member `INFO` of that type? (since `ParserLogLevel <: LogLevel`)
Not with the proposed restriction. `LogLevel.INFO` does not have the type `ParserLogLevel`. It can only get that type through an implicit or explicit conversion.

But it is of type `LogLevel`, no?
Definition site differences aren’t apparent when reading code. Having magic be fickle in response to the whimsy of the library designer just means you can’t rely on it, so your code style standard for readability should be: always use the fully qualified name.
So I think this would defeat the point.
True, but I don’t see the problem here. The way things are defined at the definition site is always relevant. See for example the whole `infix` discussion. I agree that a solution that works without depending on the definition site is attractive, but it depends on the price that it comes with. There is always some catch, so choosing what is “better” is a matter of opinion.
Reading through all the posts in this thread, I see a lot of resistance to a leading dot. Yes, other languages may have this too, but these are other languages, so the language construct may induce a different feeling. For Scala it does not feel good. Allowing changes at the definition site is just one way to solve this, as I also tried to put forward. There are others as well.

Now I am just another Scala user, not even an expert, so what gives. In the end, the number of people that use our language will show if we made the right choices overall.