Allow defs to be implemented from functions

There wasn’t much appetite for my “Even less explicit typing on def implementations” suggestion. However, it (and some other suggestions I’ve seen here, including irrefutable deconstruction in method arguments) could all be handled in practice by supporting method (un-)currying. Like this:

trait SucceedOrFail[S, F, T] {
  def onSuccess(s: S): T
  def onFailure(f: F): T
}

trait Stringifier[T] {
  def (t: T) stringify: String
}

implied successFailStringifier[S, F, T]
    given Stringifier[S], Stringifier[F] for SucceedOrFail[S, F, String] {
  def onSuccess = _.stringify
  def onFailure = _.stringify
}

The logic for this is already implemented for turning functions into SAMs via @FunctionalInterface. Allowing defs to be implemented by adapting functions would be a mechanism to address a host of ergonomic issues with explicit but boring type annotations and deconstructions, since those are all already handled by the function-to-SAM machinery.
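For reference, the existing function-to-SAM conversion this would generalize already works in Scala today. A minimal sketch (the `Callback` and `register` names are illustrative only):

```scala
// A SAM type: a trait with a single abstract method.
trait Callback { def run(result: Int): String }

def register(cb: Callback): String = cb.run(42)

// The lambda is adapted into a Callback instance by the compiler,
// exactly the machinery the proposal would reuse for def implementations.
val out = register(r => s"got $r")
```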


The problem of overloading remains: what type is the expected type for the right-hand-side of def onSuccess = ... if onSuccess is overloaded in the parents?

It’s also unclear how we reconcile “true” defs with such “uncurried” defs at the bytecode level, and therefore in terms of interoperability with Java/JavaScript. If they override a true def, they should have the uncurried signature for bytecode/interop; but if they don’t, they shouldn’t be uncurried. That inconsistency is bound to blow up at some point.

if onSuccess is overloaded in the parents?

I’d be happy if it simply failed with overloaded methods. Or only worked if there was exactly one overloaded variant with the expected number of arguments.

It’s also unclear how we reconcile “true” def s with such “uncurried” def s at the bytecode level

Would it even be visible at that level? Could this not be just front-end sugar? So when you find a matching def to override, override it as normal, and just plumb the arguments into the function.

So:

def onFailure = _.stringify

becomes rewritten as:

def onFailure(f: F) = f.stringify

modulo any other rewrites or expansions that take place.

No, I don’t think that’s acceptable, because then the bytecode/interop of

def foobar = { (x: Int) => x + 1 }

becomes highly dependent on whether or not there is a def foobar(x: Int) in one of the superclasses. This would be extremely surprising.


Oh - so you’re thinking of the case where the superclass method is refactored between def foobar(x: Int): Int and def foobar: Int => Int after you’ve written your implementation?

Would this not be the usual problem arising from somebody refactoring a super-type that you are extending?

What I mean is, you have the same problem if somebody changes your supertype def from def foobar(x: Int) to def foobar(x: Int, s: String = "Hi Mum"): you get a linking error, and when you recompile it all works again.

No, that’s not what I meant. I mean when I see the following definition:

def foobar = { (x: Int) => x + 1 }

I cannot know from that definition how it will appear to other languages. Now the very shape of this method can be altered by its superclasses.

Also it poses a problem for binary compatibility within Scala (for the exact same reason).

I really think this is not a sane design.

Yeah, I concur. Inferring arguments when inheriting is reasonable; has tradeoffs, but it’s certainly workable.

But equivocating between def foo(x: Int): Int and def foo: Int => Int is not a good idea if we’re ever going to care in any context about the difference between methods and functions.

I cannot know from that definition how it will appear to other languages

Sorry, I’m really not following what the problem is. Perhaps it’s a bad case of Friday. Can you lay out a specific witness scenario where it is a problem?

I tend to agree with @sjrd, however note that Dotty has a related optimization limited to result types which are implicit functions: https://github.com/lampepfl/dotty/blob/b26740d1681fdd7b187aed708e3474cfb8196cc6/compiler/src/dotty/tools/dotc/transform/ShortcutImplicits.scala#L16-L43
So there might be a reasonable argument to be made to extend this to regular functions (or get rid of the optimization completely…)


Exactly the same type as if you had:

def provideImpl(onSuccess: ...): ... = ...
def provideImpl(onSuccess: ...): ... = ...

provideImpl(onSuccess = res => ...)

The above should behave the same w.r.t. expected types as the proposed:

trait A {
  def onSuccess(...): ...
  def onSuccess(...): ...
}
trait B extends A {
  def onSuccess = res => ...
}

I also don’t think there are problems with the bytecode representation. Obviously when implementing a proper uncurried def, you’ll get a proper uncurried def in the bytecode, which means eta expanding what’s used on the rhs of def as necessary.
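In other words, the proposed elaboration could be sketched as follows (the `Parser` and `LengthParser` names are hypothetical, and this is a hand-written equivalent of what the compiler might emit, not a confirmed design):

```scala
trait Parser { def parse(s: String): Int }

// Proposed surface syntax:   def parse = _.length
// Hand-written equivalent the compiler could elaborate to:
class LengthParser extends Parser {
  private val impl: String => Int = _.length
  // The override keeps the proper uncurried signature in bytecode;
  // the function value is simply applied on the right-hand side.
  def parse(s: String): Int = impl(s)
}
```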

Having spent the weekend thinking on this, over some beers, I think it’s a larger change than I first thought. I’d be happy to back off from this if we can get types on parameters in def implementations made optional. To really make defs-from-functions work beautifully, I think you need Haskell-style partial function application, which is probably a bridge too far for Scala.

What sjrd means is the following I think:

If, from Scala, you see a method defined as

def foobar = { (x: Int) => x + 1 }

you cannot exactly know how it’s going to be compiled down:

  • Left as is: you’d have a foobar: Int => Int, or in bytecode ()LFunction1; (Scala actually does some optimization here to a specialized Function type). This means you’d call it like this from Java:
    foo.foobar().apply(42) == 43;
  • Expanded to a method: you’d have a foobar(Int): Int, or in bytecode (I)I. This means you’d call it like this from Java:
    foo.foobar(42) == 43;

With your proposed scheme, the chosen variant entirely depends on the superclasses of the method defining this foobar, making it non-obvious and non-local.


Understood. Thanks. So I’m happy to can this proposal.

Note that you have exactly the same problem with the “lambda syntax for SAM types” that both Scala and Java support:

abstract class A { def test(n: Int): Int }
trait Base { def foo: A }

class Derived extends Base {
  def foo = { x: Int => x + 1 }
  // ^ It's a lambda..? It's a SAM type..? (It's Superman!)
}
(new Derived).foo.apply(42) // error: value apply is not a member of A
// ^ oops, not a lambda actually! (must be Superman then)

So there really is nothing extraordinary with the approach proposed here.

I think the difference with lambdas for SAMs is that you are (usually) very aware that you are building a SAM instance from the lambda. You know that the lambda is being packaged up into a new form before it is handed off.

So I guess the real question, which gets us back to my no-arg types thread, is whether there’s a clean way to extend the SAM syntax to multi-method types. Perhaps there isn’t. After all this talking, this seems to be the nub of the missing functionality we are skirting around.

How exactly would you be more aware of that than, under this proposal, of the fact that you would be building a method with lambda syntax? The information required for both is precisely the same non-local knowledge: the type signature of the overridden method.

Well with SAMs we’re basically retreating a bit from the idea that every function object has .apply, instead we say Function1 is part of a group of function-like types, and function application will expand to whichever method call is appropriate for the function type.

Over here we’re talking about blurring the distinction between function objects and methods. That’s a whole new level of magic.

OK, well if you’re blurring the lines then there’s an obvious projection to:

trait Foo {
  def a(i: Int): String
  def b(s: String): Float
}

from

(a: Int => String, b: String => Float)

So perhaps there’s some solution to this riffing on the idea of re-writing the record with named arguments into an anonymous instance.
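To make the idea concrete, here is the adaptation such sugar would have to generate, written out by hand (the `fromFunctions` helper and its parameter names are hypothetical):

```scala
trait Foo {
  def a(i: Int): String
  def b(s: String): Float
}

// Adapts a "record" of named function arguments into an anonymous
// instance of the multi-method trait.
def fromFunctions(a0: Int => String, b0: String => Float): Foo = new Foo {
  def a(i: Int): String = a0(i)
  def b(s: String): Float = b0(s)
}

val foo = fromFunctions(a0 = _.toString, b0 = _.length.toFloat)
```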

Maybe there’s something like that in https://github.com/ThoughtWorksInc/feature.scala ?