Excellent! Yes, singleton is a “type of types”, or a kind. In the Dotty issue, what I suggest is:
we should consider something like an annotation or a modifier. The only problem is that we dislike using annotations for such core features, and modifiers on type variables don't really exist yet. Honestly, I'd use annotations for this.
Nitpick: the Scala language specification already mentions scala.Singleton, with exactly the same problems. This might affect possible choices regarding transition.
We could make Singleton a type class that's known to the compiler. That is, instead of
[T <: Singleton]
we’d write
[T: Singleton]
The compiler would create an inferred Singleton argument if T is a singleton type. Ideally, we'd find a way to eliminate the evidence parameter afterwards. This is not (yet) straightforward, since we currently lack a way to make context bounds define erased parameters.
This isn't quite right either, though. The role that <: Singleton plays is to modify type inference by preventing the widening that would otherwise happen. Granted, a subtype bound doesn't work, for all the reasons rehearsed in this thread and elsewhere; but a context bound doesn't work either because, if anything, it has even less of a relationship with inference than a subtype relationship would.
def foo[T: Singleton](t: T) ...
is supposed to be equivalent to,
def foo[T](t: T)(implicit st: Singleton[T]) ...
In the normal run of things the inference and widening or otherwise of T is done and dusted before we consider resolving Singleton[T]. At least with <: Singleton we have a marker which is available at the point at which we need it for influencing the solving of T.
A Singleton type class could be made to work if it had magical type inference affecting properties, but it’s not clear to me that that would be particularly desirable.
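To see the ordering problem concretely, here is a small sketch using today's `ValueOf` as a stand-in for the proposed `Singleton` type class (the object and method names are invented for illustration):

```scala
object ContextBoundTooLate {
  // `ValueOf` stands in for a compiler-known `Singleton` type class here.
  def foo[T](t: T)(implicit v: ValueOf[T]): T = v.value

  def main(args: Array[String]): Unit = {
    // With an explicit singleton type argument the evidence is found:
    val x: 1 = foo[1](1)
    println(x)
    // But `foo(1)` alone would not compile: T is widened to Int before
    // `ValueOf[T]` is resolved, and there is no `ValueOf[Int]` instance.
  }
}
```

The point being that the evidence exists whenever T actually is a singleton type; what's missing is any way for its resolution to influence the earlier inference of T.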
I’ve come round to the idea that an annotation is the simplest way forward for indicating that we don’t want the normal widening rules applied,
def foo[T @dontwiden](t: T) ...
foo(1) // T =:= 1
On its own this doesn’t require that T be a singleton … if we also want that constraint then we can use ValueOf,
def foo[T @dontwiden: ValueOf](t: T) ...
foo(1) // OK, T =:= 1
def bar: Int = 23
foo(bar) // Nope, no `ValueOf` instance
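For comparison, something close to the intended behaviour is already expressible today by combining the `<: Singleton` bound with `ValueOf`; this is only a sketch of the status quo, not the proposed `@dontwiden` mechanism:

```scala
object DontWidenSketch {
  // The Singleton bound stops widening; ValueOf then requires (and recovers)
  // an inhabitant of the inferred singleton type.
  def foo[T <: Singleton](t: T)(implicit v: ValueOf[T]): T = v.value

  def main(args: Array[String]): Unit = {
    val x = foo(1)
    val check: 1 = x // compiles: T was inferred as the literal type 1
    println(check)
    // Given `def bar: Int = 23`, the call foo(bar) would be rejected:
    // Int is not a singleton type, so T cannot be inferred to satisfy the bound.
  }
}
```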
FWIW, I think what we have here is another example where elaboration comes apart from typing (see my comments on implicits here). I think it would be worth thinking about whether we can design a uniform mechanism for allowing programmers to influence these sorts of decision.
Hmm, so it seems we have to special case Singleton as a context bound in more places. That makes it less appealing as a solution. But I am not fond of annotations either. It would be the first instance where an annotation influences typing.
Also I'm not clear on the use-case where you want to mandate that a singleton-type value is passed explicitly, when you also have access to it implicitly. Why?
The point is to have a way to influence type inference so that it infers singleton types. A typical scenario is a class that needs path-dependence on a module passed as an argument.
I proposed a solution based on supertype bounds on the original thread. The advantage is that the compiler can use it to direct type inference more easily than with the type class approach.
I think it could also be used to allow sound type projections, if they are made on singleton types; i.e. allow syntax Singleton[A]#T, which can be handy as we sometimes want to talk about the members of a singleton type without having the value around to express the proper path-dependent type selection.
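The `Singleton[A]#T` syntax is hypothetical; today the member can only be selected path-dependently, with the value in scope. A minimal sketch of the status quo (class and method names invented for illustration):

```scala
object PathDependentSketch {
  class C { type T }
  class CInt extends C { type T = Int }
  val c = new CInt

  // `c.T` is only expressible because the stable value `c` is in scope;
  // the proposal would let us project on a singleton type without it.
  def roundTrip(x: c.T): c.T = x

  def main(args: Array[String]): Unit =
    println(roundTrip(42))
}
```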
Here is an example. In that case it widens even with the Singleton bound, but at least it gives an error saying it needs a singleton type.
❯ scala
Welcome to Scala 2.12.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191).
Type in expressions for evaluation. Or try :help.
scala> class C { type T }
defined class C
scala> class D[T <: C](val t: T)
defined class D
scala> val c = new C
c: C = C@575e572f
scala> val d = new D(c)
d: D[C] = D@50b0afd7
scala> implicitly[c.T =:= d.t.T]
<console>:14: error: Cannot prove that c.T =:= d.t.T.
       implicitly[c.T =:= d.t.T]
       ^
scala> val d2 = new D[c.type](c)
d2: D[c.type] = D@3a11c0eb
scala> implicitly[c.T =:= d2.t.T]
res2: c.T =:= d2.t.T = <function1>
scala> class E[T <: C with Singleton](val t: T)
defined class E
scala> val e = new E(c)
<console>:13: error: inferred type arguments [C] do not conform to class E's type parameter bounds [T <: C with Singleton]
       val e = new E(c)
               ^
<console>:13: error: type mismatch;
 found   : C
 required: T
       val e = new E(c)
                     ^
scala> val e = new E[c.type](c)
e: E[c.type] = E@d949bc4
scala> implicitly[c.T =:= e.t.T]
res3: c.T =:= e.t.T = <function1>
I agree with @nafg, but I don’t know how much code will break if this is changed.
As I mentioned on this issue, there are two type inference problems I see here.
Inconsistency between singleton literals and objects
For objects:
trait Foo
object Bar extends Foo
def foo[T <: Foo](t : T) : T = t
val f = foo(Bar)
//f is Bar.type and NOT Foo
While OTOH, for literals:
def foo[T <: Int](t : T) : T = t
val f = foo(1)
//f's type is Int with the value 1, and not the literal type `1`
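A runnable rendition of the two snippets above (method names disambiguated for illustration):

```scala
object WideningInconsistency {
  trait Foo
  object Bar extends Foo
  def fooObj[T <: Foo](t: T): T = t
  def fooLit[T <: Int](t: T): T = t

  def main(args: Array[String]): Unit = {
    val f: Bar.type = fooObj(Bar) // compiles: T is inferred as Bar.type
    val g: Int = fooLit(1)        // T is widened to Int, not the type `1`
    println(s"$f $g")
  }
}
```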
The meaning of <: Singleton is “Must be a singleton”
I opt for a mechanism that preserves the original type narrowness. Currently we get:
def fooInt[T <: Int](t : T) : T = t
def fooXInt[T <: Int with Singleton](t : T) : T = t
val one = 1
fooInt(1) //Int = 1
fooInt(one) //Int = 1
fooXInt(1) //Int(1) = 1
fooXInt(one) //fails: `one` already has the widened type Int, which is not a singleton type
Just to try to draw out something actionable before diving further into alt-designs, do we all agree that the current incarnation of Singleton as an upper bound should be marked experimental for the 2.13 release, while these alt-designs cook?