This isn’t quite right either though. The role that `<: Singleton` plays is to modify type inference by preventing the widening that would happen otherwise. Granted, a subtype bound doesn’t work for all the reasons rehearsed in this thread and elsewhere, but a context bound doesn’t work either because, if anything, it has even less of a relationship with inference than a subtype bound would.
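For concreteness, here’s a minimal sketch of the inference behaviour in question (assuming a compiler with literal types, i.e. Scala 2.13 or Dotty; the name `narrow` is mine, for illustration only):

```scala
// An upper bound of Singleton tells the compiler not to widen the
// inferred type of T away from the singleton type of the argument
def narrow[T <: Singleton](t: T): T = t

val a = narrow(23)   // a: 23  -- widening suppressed by the bound
val b = identity(23) // b: Int -- the usual widening applies
```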
```scala
def foo[T: Singleton](t: T) ...
```
is supposed to be equivalent to,
```scala
def foo[T](t: T)(implicit st: Singleton[T]) ...
```
In the normal run of things the inference and widening (or otherwise) of `T` is done and dusted before we consider resolving `Singleton[T]`. At least with `<: Singleton` we have a marker which is available at the point at which we need it for influencing the solving of `T`.
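To make that ordering problem concrete, here’s a rough sketch of the context bound encoding, with a hypothetical `IsSingleton` type class standing in for `Singleton[T]` (the names are illustrative, not an existing library API):

```scala
// Hypothetical type class whose instances exist only for singleton types
trait IsSingleton[T]
object IsSingleton {
  implicit def instance[T <: Singleton]: IsSingleton[T] =
    new IsSingleton[T] {}
}

def foo[T](t: T)(implicit ev: IsSingleton[T]): T = t

// T is inferred (and widened to Int) before implicit search begins,
// so the compiler looks for IsSingleton[Int] and finds nothing: the
// implicit arrives too late to influence the solving of T
foo(1) // does not compile
```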
A `Singleton` type class could be made to work if it had magical, type-inference-affecting properties, but it’s not clear to me that that would be particularly desirable.
I’ve come round to the idea that an annotation is the simplest way forward for indicating that we don’t want the normal widening rules applied,
```scala
def foo[T @dontwiden](t: T) ...

foo(1) // T =:= 1
```
On its own this doesn’t require that `T` be a singleton … if we also want that constraint then we can use `ValueOf`,
```scala
def foo[T @dontwiden: ValueOf](t: T) ...

foo(1)   // OK, T =:= 1

def bar: Int = 23
foo(bar) // Nope, no ValueOf instance
```
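For reference, `ValueOf` (available in Dotty and in Scala from 2.13) works for this purpose because the compiler only synthesizes instances of it for singleton types:

```scala
// valueOf[T] summons the unique value of a singleton type T
val one: 1 = valueOf[1] // OK: a ValueOf[1] instance is synthesized

// valueOf[Int] would not compile: Int is not a singleton type, so no
// ValueOf[Int] instance is synthesized
```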
FWIW, I think what we have here is another example where elaboration comes apart from typing (see my comments on implicits here). I think it would be worth thinking about whether we can design a uniform mechanism for allowing programmers to influence these sorts of decisions.