There wasn’t much appetite for my “Even less explicit typing on def implementations” suggestion. However, it (and some other suggestions I’ve seen here, including irrefutable deconstruction in method arguments) could all be handled in practice by supporting method (un-)currying. Like this:
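A sketch of the idea (the trait and names here are my own illustrative assumptions; the proposed form is shown only in a comment, since it isn’t valid Scala today, next to the function-to-SAM conversion that already works and whose logic it would reuse):

```scala
object UncurryDemo {
  // Hypothetical trait, for illustration only.
  trait Handler {
    def onSuccess(result: Int): String
  }

  // Proposed (NOT valid Scala today): implement an inherited def
  // directly from a function value, reusing the function-to-SAM logic:
  //
  //   class Impl extends Handler {
  //     def onSuccess = res => "got " + res
  //   }

  // The machinery it would piggyback on, which already works: a
  // function literal adapted to a single-abstract-method trait.
  val h: Handler = res => "got " + res
}
```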
The logic for this is already implemented for turning functions into SAMs via @FunctionalInterface. Allowing defs to be implemented by adapting functions would address a host of ergonomic issues with explicit-but-boring type annotations and deconstructions, since those are all already handled by the function-to-SAM machinery.
The problem of overloading remains: what is the expected type for the right-hand side of def onSuccess = ... if onSuccess is overloaded in the parents?
It’s also unclear how we reconcile “true” defs with such “uncurried” defs at the bytecode level, and therefore in terms of interoperability with Java/JavaScript. If they override a true def, they should have the uncurried signature for bytecode/interop; but if they don’t, they shouldn’t be uncurried. That inconsistency is bound to blow up at some point.
I’d be happy if it simply failed with overloaded methods. Or only worked if there was exactly one overloaded variant with the expected number of arguments.
It’s also unclear how we reconcile “true” defs with such “uncurried” defs at the bytecode level
Would it even be visible at that level? Could this not be just front-end sugar? So when you find a matching def to override, override it as normal, and just plumb the arguments into the function.
So:
def onFailure = _.stringify
becomes rewritten as:
def onFailure(f: F) = f.stringify
modulo any other rewrites or expansions that take place.
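To make the rewrite concrete: since the thread doesn’t pin down F or stringify, this sketch substitutes a hypothetical Failure type, and writes out by hand the expansion the front end would perform:

```scala
object RewriteDemo {
  // Hypothetical stand-ins for F and stringify; illustration only.
  final case class Failure(reason: String) {
    def stringify: String = s"Failure($reason)"
  }

  trait Sink {
    def onFailure(f: Failure): String
  }

  // The proposed `def onFailure = _.stringify` would be rewritten by
  // the front end into exactly this explicit form:
  class Impl extends Sink {
    def onFailure(f: Failure): String = f.stringify
  }
}
```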
Oh - so you’re thinking of the case where the superclass method is refactored between def foobar(x: Int) and def foobar: Int => Int after you’ve written your implementation?
Would this not be the usual problems arising from somebody refactoring a super-type that you are extending?
What I mean is, you have the same problem if somebody changes your supertype def from def foobar(x: Int) to def foobar(x: Int, s: String = "Hi Mum"): you get a linking error, and when you recompile, it all works again.
Yeah, I concur. Inferring arguments when inheriting is reasonable; it has tradeoffs, but it’s certainly workable.
But equivocating between def foo(x: Int): Int and def foo: Int => Int is not a good idea if we’re ever going to care in any context about the difference between methods and functions.
I cannot know from that definition how it will appear to other languages
Sorry, I’m really not following what the problem is. Perhaps it’s a bad case of Friday. Can you lay out a specific witness scenario where it is a problem?
The above should behave the same w.r.t. expected types as the proposed:
trait A {
  def onSuccess(...): ...
  def onSuccess(...): ...
}

trait B extends A {
  def onSuccess = res => ...
}
I also don’t think there are problems with the bytecode representation. Obviously, when implementing a proper uncurried def, you’ll get a proper uncurried def in the bytecode, which means eta-expanding what’s used on the rhs of the def as necessary.
Having spent the weekend thinking on this, over some beers, I think it’s a larger change than I first thought. I’d be happy to back off from this if we can make parameter types in def implementations optional. To really make defs-from-functions work beautifully, I think you need Haskell-style partial function application, which is probably a bridge too far for Scala.
You cannot know exactly how it’s going to be compiled down:

1. Left as is. You’d have a foobar: Int => Int, or in bytecode ()LFunction1; (Scala actually does some optimization here to a specialized Function type). This means you’d call it like this from Java:
   foo.foobar().apply(42) == 43;
2. Expanded to a method. You’d have a foobar(Int): Int, or in bytecode (I)I. This means you’d call it like this from Java:
   foo.foobar(42) == 43;
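The two shapes can be written out side by side in plain Scala (class names here are my own); note that from Scala the two calls read identically, which is exactly why the choice is invisible at the use site:

```scala
object ShapeDemo {
  // Variant 1: a function-valued def; compiles to ()Lscala/Function1;
  class AsFunction {
    def foobar: Int => Int = _ + 1
  }

  // Variant 2: a proper method; compiles to (I)I
  class AsMethod {
    def foobar(x: Int): Int = x + 1
  }

  // From Scala both calls look the same:
  val a = new AsFunction().foobar(42) // sugar for .foobar.apply(42)
  val b = new AsMethod().foobar(42)   // direct method invocation
}
```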
With your proposed scheme, the chosen variant depends entirely on the superclasses of the class defining this foobar, making it non-obvious and non-local.
Note that you have exactly the same problem with the “lambda syntax for SAM types” that both Scala and Java support:
abstract class A { def test(n: Int): Int }
trait Base { def foo: A }
class Derived extends Base {
def foo = { x: Int => x + 1 }
// ^ It's a lambda..? It's a SAM type..? (It's Superman!)
}
(new Derived).foo.apply(42) // error: value apply is not a member of A
// ^ oops, not a lambda actually! (must be Superman then)
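For the record, the definition itself does compile: the return type of foo is inherited from Base#foo, so the lambda is SAM-converted to an anonymous A, and the call that works goes through A’s single abstract method:

```scala
object SamGotcha {
  abstract class A { def test(n: Int): Int }
  trait Base { def foo: A }
  class Derived extends Base {
    // The expected type A comes from the overridden Base#foo, so this
    // lambda is SAM-converted rather than being typed as Int => Int.
    def foo = (x: Int) => x + 1
  }

  // The working call: through test, not apply.
  val result = (new Derived).foo.test(42)
}
```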
So there really is nothing extraordinary with the approach proposed here.
I think the difference with lambdas for SAMs is that you are (usually) very aware that you are building a SAM instance from the lambda. You know that the lambda is being packaged up into a new form before it is handed off.
So I guess the real question, which gets us back to my no-arg types thread, is whether there’s a clean way to extend the SAM syntax to multi-method types. Perhaps there isn’t. After all this talking, this seems to be the nub of the missing functionality we are skirting around.
How exactly would you be more aware of that than, under this proposal, of the fact that you would be building a method with lambda syntax? The information required for both is precisely the same non-local knowledge: the type signature of the overridden method.
Well with SAMs we’re basically retreating a bit from the idea that every function object has .apply, instead we say Function1 is part of a group of function-like types, and function application will expand to whichever method call is appropriate for the function type.
Over here we’re talking about blurring the distinction between function objects and methods. That’s a whole new level of magic.