Even less explicit typing on def implementations

sbt:foo> ++2.12.8!
sbt:foo> set scalacOptions += "-Yinfer-argument-types"
sbt:foo> consoleQuick
[info] Starting scala interpreter...
Welcome to Scala 2.12.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_152).
Type in expressions for evaluation. Or try :help.

scala> trait Foo {
     |   def doFoo(age: Int, name: String): List[Double]
     | }
defined trait Foo

scala> val myFoo = new Foo {
     |   def doFoo(age, name) = age.toDouble :: name.length.toDouble :: Nil
     | }
myFoo: Foo = $anon$1@36ca6b9

See https://github.com/scala/scala/pull/6505


In the case of SAM types you can use function syntax to implement them, which allows you to omit the parameter types along with the return type.
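For instance, a minimal sketch with a hypothetical single-method trait (`Adder` is not from the thread, just an illustration):

```scala
trait Adder {
  def add(a: Int, b: Int): Int
}

// SAM conversion: the lambda's parameter types and return type
// are all taken from Adder.add, so nothing has to be repeated.
val adder: Adder = (a, b) => a + b
```

Calling `adder.add(2, 3)` then dispatches through the synthesized implementation.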

So this is the point, isn’t it? The compiler is clearly capable of doing this inference, because it will adapt a function expression to the SAM type. But this isn’t possible for callbacks with, say, both a success and a failure method.
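To make that concrete, here is a hypothetical two-method callback trait (names invented for illustration). Because it is not a SAM, the lambda shorthand does not apply and every parameter type must be written out again at the implementation site:

```scala
trait Callback {
  def onSuccess(result: List[Double]): Unit
  def onFailure(error: Throwable): Unit
}

val log = new StringBuilder

// Not a SAM: both methods must be implemented with fully explicit
// parameter types, even though the trait already declares them.
val cb = new Callback {
  def onSuccess(result: List[Double]): Unit = log ++= s"ok:${result.sum}"
  def onFailure(error: Throwable): Unit = log ++= s"err:${error.getMessage}"
}
```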

type T1 = Future[Option[Either[List[String], Map[Int, Long]]]]

Yeah, this is exactly what I want to avoid because a) it’s me doing the compiler’s job for it, and b) at no point do we end up with readable code. We now have to use T1 and T2 in the method declaration, which is yet another thing that the next person reading the code needs to read and then chase down.

To reiterate, it’s at the point of overriding abstract methods in throw-away implementations that I want to be able to skip providing types.


Cheers. I didn’t know that existed in 2.12.8, but wouldn’t it be great if it were baked into Scala 3 <3

That -Yinfer-argument-types option has been removed in 2.13. We don’t intend to bring it back, sorry. Multiple reasons: compiler performance and complexity, also the potential for abuse: 99% of the time you shouldn’t have those inferred.


The thing is, in order to have them inferred, I just end up mechanically converting:

def something(something: A, somethingElse: B): C

into:

def something: (A, B) => C

Just to get type inference at the throw-away implementation sites. But now we’ve lost the opportunity for stylized and enforceable documentation in the original declaration. And at the potential run-time cost of allocating that function instance.
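A hypothetical side-by-side of that rewrite (trait and member names invented for illustration):

```scala
trait MethodStyle {
  // Named, documentable parameters, but implementors must repeat the types.
  def combine(a: Int, b: String): String
}

trait FunctionStyle {
  // Lambda parameter types are inferred at the implementation site, but the
  // parameter names are gone from the declaration, and every call goes
  // through an allocated Function2 instance.
  def combine: (Int, String) => String
}

val m = new MethodStyle {
  def combine(a: Int, b: String) = b * a
}

val f = new FunctionStyle {
  def combine = (a, b) => b * a
}
```

Both produce the same results; the difference is purely in what the declaration documents and what the implementation has to repeat.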


There wasn’t much positive feedback for this, and I’ve thought about it some more and come up with something more general that handles this and other things: Allow defs to be implemented from functions

I’m sorry, but that proposal is much worse. It’s a half-step towards erasing the distinction between functions and methods, making it impossible to be sure which one you mean while still making people care about which it is.

This one is far better. The rules are simple and clear, and the infrastructure mostly exists to do it; the only question is whether it’s a good idea.

This is not the same thing as -Yinfer-argument-types. There’s no type inference going on here, just looking it up in the parent. (That’s done anyway with explicit types to figure out whether you need to override.)

Okay, yes it is. Oops.

So the plus is: less typing, and less keeping track of potentially big and awkward types; the minus is: it’s less explicit, and it’s another language feature to keep track of.


I thought that that is what -Yinfer-argument-types actually does, despite its name.

Yeah, oops, you’re right.

I really don’t understand @adriaanm’s comment then.

I really don’t get the “potential for abuse” part. This forces the implementer to use the required types. The compiler yells at you now anyway if you don’t exactly copy the type signature. There’s no flexibility; the only “abuse” is that you don’t need to repeat the types when they’re completely defined anyway. I don’t see how it’s any more of an abuse than closures.

The only tiny difference (benefit!) is that if you’re confused about which trait implementations have defaults and which don’t, and you also muck up the types, inferring the types will catch your (double) mistake. I’ve done this before; thought I had to supply method foo with types P and Q, but actually foo was provided and used types P and R, and so my code compiled but failed. Inferring argument types fixes that.
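A hypothetical sketch of that double mistake (names `Base`, `foo`, `bar` invented for illustration). Because `Base` already provides `foo`, a `foo` written with the wrong explicit parameter types becomes an overload rather than an override, compiles fine, and the default silently remains in use:

```scala
trait Base {
  def foo(p: Int, r: Long): Long = r   // provided with a default
  def bar(p: Int): Int                 // abstract
}

val b: Base = new Base {
  // Overload, not override! The second parameter should have been Long.
  def foo(p: Int, q: String): Long = q.length.toLong
  def bar(p: Int) = p + 1
}

// Calls through Base still dispatch to the provided default.
b.foo(1, 2L)
```

Under inferred argument types, `def foo(p, q) = q.length.toLong` would instead take the inherited `(Int, Long)` signature, and the body would fail to compile because `Long` has no `length`, catching the mistake.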

I can’t easily tell how it would affect compiler performance, but the relevant code doesn’t seem extensive, and the return type is already inferred so the wildcarding of types already has to happen. And the equivalent computation already has to be performed for the corresponding method that returns a function.
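As a small sanity check that the parent is already consulted today (hypothetical trait for illustration), return-type inference at an override site works like this:

```scala
trait Render {
  def show(x: Int): String
}

// The result type of show is omitted: it is inferred from the RHS and
// checked against Render.show's declared String, so the inherited
// signature is already looked up for explicitly-typed parameters.
val r: Render = new Render {
  def show(x: Int) = x.toString
}
```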

So I can’t really see how these could be the most compelling reasons not to have the (already implemented!) feature. Maybe there are implementation details that actually make it muck things up a lot more than it seems. But on the surface, it seems that there must be other reasons, and these factors are just (small) fringe benefits?


:(.

I’ve never heard about this feature. To be honest, it seems like a good idea to me too. By inferring the result type we could screw up a public API, but inferring argument types seems much safer, and I can’t imagine how it could hurt us.

Having to compute the overrides / overloads of a member while computing its signature is potentially very expensive (all base types need to be considered, potentially causing a long chain of class loading), and it creates more coupling between different compilation units (which is also bad purely from a SW eng perspective). This reduces the potential for incremental compilation / parallel type checking (in future / in hydra). To improve type checking, we must make type checking (and name + signature resolution) a more local operation than it is now.


So, under this proposal, if you don’t supply types for the arguments of a def, the compiler goes up the hierarchy until it finds a method with the same name and arity, and uses those types. If it’s ambiguous, or no match can be found, it barfs. Don’t we essentially have to do this work already for every def marked override? We need to validate that it does, in fact, override something. And there’s the edge case where the overridden declaration potentially widens the argument types that the current explicitly typed arguments need to handle. Lastly, we have to do this walking on all the other defs, to check that they aren’t overriding a final declaration.


I think that happens at a later phase. I’m not sure if that explains it.

So the override and final checks happen way after the types are all worked out? OK. But ultimately that work has to happen, and it has to be checked even for an IDE to give early feedback as you type. So I’m not seeing how, in practice, inferring parameter types pulls in classes that wouldn’t be pulled in anyway. But I don’t maintain the compiler, so I’ll take your word for it.

Right, that’s why I said I’m not sure if it suffices to explain why.

I don’t even see how this could be, as the expected return type of the overridden method is used to type the overriding body:

trait A { def f: Int => Int }
object B extends A { def f = _ + 1 }

To clarify, I should have generalized my comment to talk about preferring to avoid inferring signatures for non-private members – whether it’s missing types for arguments or its result. For the result type, however, you have the RHS to validate the inferred type against, whereas with argument types you just have to take the inherited one. Also, what do you do for overloaded methods? Imagine you were inheriting the argument types from some method in a super class, and now someone adds an overload for that method – what should be the inferred signature in the subclass?

Regarding other checks happening later – yes, correct use of override/final is checked during refchecks, which is a phase after typers.

I’m not sure whether you’re using “validate” to mean “check” (as in the top-down process in bidirectional type inference, opposite to “infer” which goes bottom-up). But in any case, don’t you agree that if you leave the return type off, the inherited return type is needed in order to complete type inference of the RHS?

Now, I agree that public methods should probably have all their types specified. But there are plenty of cases where you’re not implementing public methods, especially when creating anonymous class instances, which can be very common in some libraries/DSLs.

I don’t think anyone should expect that adding an overload to a base class will not break any code using that base class. Besides the brittle base class problem (what if the subclass was already defining the overload?), it may obviously break plain user code by making it ambiguous.


Thanks @adriaanm - that’s clearer to me now. Sorry, I don’t have the guts of how the compiler works baked into my brain yet. Perhaps one day.

The major case where this is an issue is in implementing anonymous, disposable instances. So you are typically implementing public methods, but you are not doing so in a context where a 3rd party can ever see a documentable implementation corresponding to that instance.

“whereas with argument types you just have to take the inherited one”

This is exactly the behaviour I want. No inference. No magic. Just take exactly the type you’d expect from the def being implemented.

“Also, what do you do for overloaded methods?”

Must override a def with:

  • the same name
  • the same arity

And must not:

  • have zero candidates
  • have more than one candidate
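Under those rules, a hypothetical sketch (this is proposed syntax, not currently valid Scala; the trait and member names are invented for illustration):

```scala
trait Parent {
  def run(n: Int, s: String): String
  def go(x: Double): Unit
  def go(x: Float): Unit
}

new Parent {
  def run(n, s) = s * n   // OK: exactly one candidate named `run` with arity 2
  // def go(x) = ()       // error: two candidates named `go` with arity 1,
  //                      //   so explicit types would be required here
  // (remaining overloads elided from the sketch)
}
```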

“Imagine you were inheriting the argument types from some method in a super class, and now someone adds an overload for that method – what should be the inferred signature in the subclass?”

Damn those people who re-engineer superclasses that we extend! But in practice this happens now for normal defs; you only catch it on recompile. And under my rules, restricting this feature to things with exactly one candidate to override, it would then fail to compile, complaining that you’d omitted argument types on a def with multiple candidates for overriding.

I’ve refactored my code in some places to use functions instead of anonymous classes, mostly due to the lack of argument type inference.

//instead of
new Maker[T] {
  def applyTo(a: LongTypeName[AndGeneric, WithF[T]]) = ...
  def stateOf(a: LongTypeName[WithF[T], AndGeneric]) = ...
}

//I ended up with something like this, which is shorter:
new Maker[String](
  applyTo = { a => ... },
  stateOf = { b => ... }
)

//even though this could have better performance/readability:
new Maker[String] {
  def applyTo(a) = ...
  def stateOf(a) = ...
}

If the cost of this feature is too high, then OK. Hopefully it can be reconsidered later (Scala 3.1), because as far as I can see it doesn’t break anything, and all the work happens at compile time only.
