In this context, “non-deterministic” has to be understood only as a metaphor. Technically, Scala implicit conversions are deterministic: they don’t change program behaviour from one run or re-compile to the next. “Non-deterministic” is sometimes used in arguments against implicit conversions to describe non-obvious behaviour.
It is based directly on an example taken straight from the Scala documentation FAQ “Where does Scala look for implicits?”, in the part that explains the implicit scope of an argument’s type.
class A(val n: Int) {
  def +(other: A) = new A(n + other.n)
}
object A {
  implicit def fromInt(n: Int) = new A(n)
}
1 + new A(1)
The example doesn’t specify what the resulting type of the 1 + new A(1) expression should be. How should anyone learning the language figure out that the compiler infers it to be A? From there their train of thought can go off the rails, and I think it can make it hard to even understand how the mechanism works. That alone makes it a poor example, but it is worse than that.
Let’s try to fully understand how this can manifest as a problem. The root cause is that this supposedly simple example actually demonstrates type inference driven by implicits. In the given example there is an Int => A implicit conversion defined in scope, which drives the expression to have type A and results in the value A(2). But if we instead defined the implicit conversion for A as A => Int, the result would be 2 with type Int.
class A(val n: Int)
object A {
  implicit def toInt(a: A) = a.n
}
1 + new A(1) // result is 2
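To make the mechanism explicit, here is roughly what the compiler rewrites each variant to (a sketch of the view application, not actual compiler output):

// with Int => A in scope the receiver is converted,
// because Int's + has no overload accepting A
A.fromInt(1) + new A(1) // new A(1 + 1), i.e. A(2): A

// with A => Int in scope the argument is converted,
// because Int's +(Int) then becomes applicable
1 + A.toInt(new A(1)) // 1 + 1, i.e. 2: Int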
It seems I can simply change an implicit conversion in a library, and client code can still compile yet run differently. I’m sure it doesn’t happen often, but when it does I can imagine it being quite hard to debug, especially if one’s experience and knowledge of implicit conversions is not top-notch.
It is driven by which implicit conversions are in scope. Code such as 1 + new A(1) (returned from a def without a declared return type) or val x = 1 + new A(1) (which can be part of a more complex computation) can be steered by different imports, or by implicit conversions available from the implicit scope of types within the same package (which is even more hidden). It can also be changed by redefining the implicit conversion(s) in A; see the puzzler below. The problem is that if I change an implicit conversion, type inference can let the code keep compiling with a different meaning, as the sketch below shows.
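Here is a minimal sketch of that failure mode (the names IntToA, AToInt, Client1 and Client2 are made up for illustration): the very same expression compiles in both clients, but its inferred type and value depend solely on which import is in scope.

import scala.language.implicitConversions

class A(val n: Int) {
  def +(other: A) = new A(n + other.n)
}
object IntToA { implicit def fromInt(n: Int): A = new A(n) }
object AToInt { implicit def toInt(a: A): Int = a.n }

object Client1 {
  import IntToA._
  val x = 1 + new A(1) // x is inferred as A, value A(2)
}
object Client2 {
  import AToInt._
  val x = 1 + new A(1) // x is inferred as Int, value 2
}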
The documentation promotes this by using an example that is prone to it.
Puzzler for you
What is the result? Which rule is used and why? Can you tell without trying?
Where is the rule defined?
class A(val n: Int) {
  def +(other: A) = new A(n + other.n)
}
object A {
  // Defining both conversions in A
  implicit def fromInt(n: Int): A = new A(n)
  implicit def toInt(a: A): Int = a.n
}
1 + new A(1)
What is the result of the last expression?
- 2
- A(2)
- Compilation fails as ambiguous
What if someone adds the second implicit to A sometime later? Hopefully it doesn’t compile. But the breakage might manifest far away from the change if type inference makes things line up. The worst case is that it will compile (I believe) and quietly mean something different.
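For what it’s worth, here is a sketch of that scenario as I understand the view-application rules (do verify it in a REPL): the client line keeps compiling, but a conversion of the argument is tried before a conversion of the receiver, so its meaning changes.

// before: only fromInt(n: Int): A exists in object A
val x = 1 + new A(1) // rewritten to A.fromInt(1) + new A(1), so x: A == A(2)

// after toInt(a: A): Int is added to object A as well
val x = 1 + new A(1) // now rewritten to 1 + A.toInt(new A(1)), so x: Int == 2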
Conclusion
It seems that using plain implicit conversions and relying on type inference is quite dangerous.
Would it help to require that implicit conversions are only tried when there is an explicit target type annotation? When implicit conversions are used for type substitution, the target type is known.
// these wouldn't compile if an implicit conversion is needed
val x = 1 + new A(1)
def compute() = 1 + new A(1)

// these would compile, because the target type is explicit
val x: A = 1 + new A(1)
val y: Int = 1 + new A(1)
def compute(): A = 1 + new A(1)
As a side note, the example is made needlessly harder because it uses infix syntax. A beginner trying to read it is possibly not used to that yet. Docs should expect that the reader is not yet fluent in all of Scala’s syntactic sugar.
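For comparison, the same expression written with explicit method-call syntax makes it clearer that + is just a method on the (possibly converted) receiver:

1.+(new A(1)) // the infix 1 + new A(1) desugars to this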