@steven-collins-omega The way lift-json constructs values by doing implicit conversions everywhere makes it very hard for type inference to figure out what it should be doing, since it can’t rely on the expected type. To avoid adding extra annotations, you could replace List with a JList helper, with JList defined a bit like this (I don’t know much about lift-json, so no clue whether this is a good idea in practice, but it works for your example):
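A minimal, self-contained sketch of the idea (hypothetical: the AST below mimics the shape of lift-json's JsonAST rather than depending on the real net.liftweb.json, and intToJValue stands in for lift-json's implicit DSL conversions):

```scala
// Stand-in AST, shaped like lift-json's JsonAST (hypothetical, for illustration)
sealed trait JValue
final case class JInt(n: BigInt) extends JValue
final case class JArray(elems: List[JValue]) extends JValue

object JsonDsl {
  import scala.language.implicitConversions
  // Plays the role of lift-json's implicit Int => JValue conversion
  implicit def intToJValue(n: Int): JValue = JInt(n)
}

object JList {
  // Requiring the element conversion as an implicit parameter pins down the
  // expected type of each argument, so inference no longer has to guess.
  def apply[A](xs: A*)(implicit conv: A => JValue): JValue =
    JArray(xs.iterator.map(conv).toList)
}
```

With JsonDsl._ in scope, JList(1, 2, 3) builds a JArray without any extra type annotations, because each argument is converted eagerly under a known expected type.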
@ollijh Because for expressions desugar to map/flatMap/… calls, what you’re asking for would be equivalent to having decode("123").map((x: Int) => x) infer intDecoder because the map call happens to only work in that case. This would be impossible to do in general, since the result of the implicit search can influence the type of the following .map, and thus the relationship between the type of the argument passed to map and the result type of the decode call. In a nutshell, I’m afraid for-expressions are not restricted enough to let us improve type inference in this case, but maybe I’m overlooking something?
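To make the situation concrete, here is a minimal decoder setup (hypothetical names; decode and intDecoder stand in for whatever library is under discussion):

```scala
trait Decoder[A] { def decode(s: String): Option[A] }

object Decoder {
  implicit val intDecoder: Decoder[Int] = new Decoder[Int] {
    def decode(s: String): Option[Int] =
      try Some(s.toInt) catch { case _: NumberFormatException => None }
  }
}

def decode[A](s: String)(implicit d: Decoder[A]): Option[A] = d.decode(s)

// decode("123").map((x: Int) => x)  // does not compile: the type argument A
//                                   // must be resolved (and intDecoder found)
//                                   // before the .map call is even typed
val ok = decode[Int]("123").map(x => x) // explicit type argument works
```

The commented line shows why the inference cannot flow backwards: the implicit search for Decoder[A] happens when typing decode("123"), before the compiler looks at the argument of .map.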
@Lasering As discussed in the linked issue and the gitter discussion linked from it, this is doable, but it’s not clear that the extra complexity in the implementation would be worth it, since this is such an edge case.
import cats.arrow.Arrow
import cats.implicits._
val f = ((_: Int) + 1) *** ((_: Int) * 2)
(10, 20) |> f                   // ok
(10, 20) |> (_ + 1) *** (_ * 2) // missing parameter type for expanded function
When a definition is overridden, if no explicit result type is given, we now always pick the result type of the overridden definition instead of inferring a more specific type (I’m actually not sure whether this is the exact behavior Scala 2.12 follows, but it is Dotty’s behavior). This avoids accidentally changing your API when you change the bodies of definitions.
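A small hypothetical illustration of that rule (names invented for this sketch):

```scala
class Base {
  def items: Seq[Int] = Seq(1, 2)
}

class Derived extends Base {
  // No explicit result type: under Dotty's rule, items gets the overridden
  // result type Seq[Int], not the more specific inferred type List[Int].
  override def items = List(1, 2, 3)
}
```

So changing the body of items in Derived cannot accidentally leak a more specific type into the public API.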
When the result type of an anonymous class is inferred, Dotty never infers a structural type with extra definitions (i.e. Foo { val x: Int } in your example); these types are problematic, since calling x requires Java runtime reflection.
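Concretely (a hypothetical Foo, sketching the behavior described above):

```scala
abstract class Foo { def f: Int }

// Dotty infers bar: Foo, not the structural type Foo { val x: Int },
// so the extra member x is not part of bar's type.
val bar = new Foo {
  def f: Int = 1
  val x: Int = 2
}

// bar.x  // would only typecheck through a structural type, and calling it
//        // would then go through Java runtime reflection
```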
It’d be useful to know more about your usecases to think of alternative patterns that could be used for them.
Like @jackkoenig above, I’m involved in chisel. In attempting to see how much work porting the firrtl sub-project will be, I ran into an issue. It boils down to this:
trait A {
  val a: Int = 0
}

trait B {
  val b: Int = 1
}

object C {
  def apply(b: B): Unit = b match {
    case a: A => println(s"${a.a + a.b}")
  }
}
This example typechecks in Scala 2.12, but not in Dotty 0.13.0-RC1. Is this intended behavior that is considered good? The fix the compiler suggests is ${a.a + b.b}, which seems bad because it hides the fact that a and b are the same object. I also tried case a: A | B =>, which I thought should work but is evidently wrong.
That one is an actual issue; you can work around it by writing case a: A & B => ..., see https://github.com/lampepfl/dotty/issues/3208. If this is blocking you from porting code, we can try to prioritize it.
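Applied to the example above, the workaround looks like this (the traits are repeated so the sketch is self-contained; it uses the Scala 2 spelling A with B so it compiles on both compilers, where in Dotty you would write A & B):

```scala
trait A { val a: Int = 0 }
trait B { val b: Int = 1 }

object C {
  def apply(x: B): Unit = x match {
    // Dotty: case y: A & B =>
    case y: A with B => println(s"${y.a + y.b}") // both members are visible on y
    case _           => println("not an A")
  }
}
```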
Can this restriction be lifted for final val overrides (which are supposed to always narrow under the Literal Singleton Types proposal, afaik)? I have multiple pieces of code that intentionally use override narrowing to either (1) compute String singletons or (2) derive & recover type members from overrides of GADT member values, e.g.
private[example] trait HandlerBase {
  type Target
  def handlesTarget: TargetGroup.Aux[Target]
  protected val target: TargetGroup
  def process(t: Target): Unit
}

// derive type Target from `final val target`
trait Handler extends HandlerBase {
  override type Target = target.T
  override def handlesTarget: TargetGroup.Aux[target.T] = target
}

sealed trait TargetGroup { type T }
// companion with the standard Aux alias, implied by the snippet above
object TargetGroup { type Aux[T0] = TargetGroup { type T = T0 } }

case object strTgt extends TargetGroup { type T = String }

// usage
final class Handler1 extends Handler {
  final val target = strTgt
  def process(t: String): Unit = {}
}