Singleton subtyping, and can we mark scala.Singleton as experimental?


I was chatting with @adriaanm earlier about the new scala.Singleton marker trait which is slated to be included in Scala 2.13, and thought this forum would be a better place to hold that discussion.

I was wondering if we could keep Singleton as experimental, behind a language import for 2.13? It seems to me that how it works may need to drastically change, because of issues found when combining singletons and unions (see lampepfl/dotty#1551).


I think the fundamental issue is that Singleton is not an appropriate kind of type to be considered an upper bound, as described in lampepfl/dotty#4944.

My reason for thinking Singleton shouldn’t be an upper bound is that (concrete) types behave like sets. So the following types can be described:

Int = {..., -2, -1, 0, 1, 2, ... }

1.type = { 1 }

2.type = { 2 }

and that <: behaves exactly like subsetOf on sets. We then have 1 <: Int, 2 <: Int and 1 | 2 <: Int, because all of 1, 2 and 1 | 2 are subsets of Int:

1.type = { 1 } subsetOf {..., -2, -1, 0, 1, 2, ... }
2.type = { 2 } subsetOf {..., -2, -1, 0, 1, 2, ... }
1 | 2 = { 1 } U { 2 } = { 1, 2 } subsetOf {..., -2, -1, 0, 1, 2, ... }

However, Singleton describes the set of all types that contain a single value. So really, Singleton is more like:

Singleton = { { 1 }, { 2 }, { 3 }, { "hello" }, { "world" }, { 'a' }, ... }

Notice that 1.type = { 1 } is not a subset of this set; rather, it is an element of it.

Therefore, 1 <: Singleton violates the idea that A <: B if and only if A subsetOf B, and this is IMO why it interacts poorly with union types. If you do not violate this property, then you do not break union types. For example, we have:

Int <: AnyVal and Char <: AnyVal
Int | Char <: AnyVal

this works because Int is a real subset of AnyVal.

If we instead use Singleton in supertype position:

1 <: Singleton and 2 <: Singleton
1 | 2 <: Singleton

but { 1, 2 } is not a subset of Singleton! { { 1 }, { 2 } } is a subset of Singleton, but that is not the set described by 1 | 2.
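For what it’s worth, the subset behaviour of unions of singleton types can already be observed directly in Scala 3 / Dotty syntax; a minimal sketch (UnionSubsetDemo is an illustrative name):

```scala
// Subset intuition for unions of singleton types (Scala 3 syntax).
object UnionSubsetDemo {
  val a: 1 | 2 = 1      // ok: the value 1 is an element of { 1, 2 }
  val b: Int = a        // ok: { 1, 2 } is a subset of Int, so 1 | 2 <: Int
  // val c: 1 | 2 = 3   // rejected: 3 is not an element of { 1, 2 }
}
```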


I don’t see how this mixing of concepts could ever be reconciled, and I hope its inclusion won’t shut the door permanently on allowing singletons to participate in Unions. That would be an enormous missed opportunity (see: TypeScript)!

I think the solution is to hold our horses on allowing users to specifically name the type Singleton, especially as an upper bound. Perhaps it could be enabled with a compiler flag, or behind a language feature import, and treat it as experimental i.e. may change in 2.14 or 3.0.

I think that this idea of singleton-ness could be better expressed via a typeclass, and in fact we already have one such typeclass: it is called ValueOf[T].

Although ValueOf[T] is more general than just Singleton types – users can create their own instances by:

implicit val valueOfString: ValueOf[String] = new ValueOf("Hello")

– if a restriction is desired for only specifically Singleton types, then perhaps a Singleton[T] typeclass could be created which works in the same way ValueOf[T] does, but which is not instantiable by users (i.e. it has a private constructor).
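A minimal sketch of what such a typeclass could look like (SingletonEv, derive and theValue are made-up names; the compiler’s role is approximated here by piggybacking on the existing <: Singleton bound and ValueOf in Scala 2.13):

```scala
// Hypothetical sketch: a Singleton-like typeclass with a private constructor,
// so users cannot conjure instances for non-singleton types themselves.
final class SingletonEv[T] private (val value: T)

object SingletonEv {
  // Stand-in for compiler magic: an instance is derivable only when T is
  // known to be a singleton type (approximated with the existing bound).
  implicit def derive[T <: Singleton](implicit v: ValueOf[T]): SingletonEv[T] =
    new SingletonEv(v.value)
}

object SingletonEvDemo {
  def theValue[T](implicit ev: SingletonEv[T]): T = ev.value

  val five: 5 = theValue[5]   // ok: 5 is a singleton type
  // theValue[Int]            // rejected: no SingletonEv[Int] instance exists
}
```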

Thank you for your time!


Excellent! Yes, singleton is a “type of types”, or a kind. In the Dotty issue, what I suggest is:

we should consider something like an annotation or a modifier (tho modifiers on type variables don’t exist yet).

the only problem is that we dislike using annotations for such core features, and modifiers on type variables don’t really exist yet. Honestly I’d use annotations for this.

Nitpick: the Scala language specification already mentions scala.Singleton, with exactly the same problems. This might affect possible choices regarding transition.


We could make singleton a typeclass that’s known to the compiler. I.e. instead of

[T <: Singleton]

we’d write

[T: Singleton]

The compiler would create an inferred Singleton argument if T is a singleton type. Ideally, we’ll find a way to eliminate the evidence parameter afterwards. This is not (yet) straightforward since we are lacking a way to make context bounds define erased parameters.


@odersky Yes I like that!


I think we should widen the scope of this discussion by also addressing the following issue:


scala.Singleton is not new though. It already exists, and is rather widely used (in scala-reflect API at least)


This isn’t quite right either though. The role that <: Singleton plays is to modify type inference by preventing the widening that would happen otherwise. Granted that a subtype bound doesn’t work for all the reasons rehearsed in this thread and elsewhere, but a context bound doesn’t work either because, if anything, it has even less of a relationship with inference than a subtype relationship would.

def foo[T: Singleton](t: T) ...

is supposed to be equivalent to,

def foo[T](t: T)(implicit st: Singleton[T]) ...

In the normal run of things the inference and widening or otherwise of T is done and dusted before we consider resolving Singleton[T]. At least with <: Singleton we have a marker which is available at the point at which we need it for influencing the solving of T.

A Singleton type class could be made to work if it had magical type inference affecting properties, but it’s not clear to me that that would be particularly desirable.
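This ordering can be seen with the existing ValueOf in Scala 2.13 (needsValue and needsValueS are illustrative names):

```scala
object InferenceOrderDemo {
  // Implicit-parameter style: T is inferred (and widened to Int) before the
  // implicit search runs, so ValueOf[Int] is looked up and not found.
  def needsValue[T](t: T)(implicit v: ValueOf[T]): T = v.value
  // needsValue(1)   // does not compile: no ValueOf[Int] instance

  // Bound style: <: Singleton steers inference to keep T = 1, and the
  // compiler-provided ValueOf[1] is then found.
  def needsValueS[T <: Singleton](t: T)(implicit v: ValueOf[T]): T = v.value

  val one = needsValueS(1)   // ok: T was solved as the literal type 1
}
```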

I’ve come round to the idea that an annotation is the simplest way forward for indicating that we don’t want the normal widening rules applied,

def foo[T @dontwiden](t: T) ...
foo(1) // T =:= 1

On its own this doesn’t require that T be a singleton … if we also want that constraint then we can use ValueOf,

def foo[T @dontwiden: ValueOf](t: T) ...
foo(1)   // OK, T =:= 1

def bar: Int = 23
foo(bar) // Nope, no `ValueOf` instance

FWIW, I think what we have here is another example where elaboration comes apart from typing (see my comments on implicits here). I think it would be worth thinking about whether we can design a uniform mechanism for allowing programmers to influence these sorts of decision.


Surely that would be

def foo[@dontwiden T: ValueOf](t: T) ...

Though I’m wondering if this wouldn’t be a good place to introduce a new syntax in line with - and + type annotations.

def foo[^T: ValueOf](t: T) ...

Would almost certainly be easier on the parser and, to my eyes, is more readable - albeit at the cost of self-documentation


Hmm, so it seems we have to special case Singleton as a context bound in more places. That makes it less appealing as a solution. But I am not fond of annotations either. It would be the first instance where an annotation influences typing.


Could a “special function” be provided by the compiler that always infers the most specific type?

def foo[T: ValueOf](t: T): Unit = println(t)


or similar?


Also I’m not clear on the use-case where you want to mandate that a singleton-type value is passed explicitly, when you also have access to it implicitly. Why

def foo[T <: Singleton: ValueOf](t: T) = println(t)


rather than

def foo[T: ValueOf] = println(valueOf[T])


Or rather, more charitably:


def foo[T <: Singleton](t: T) = println(t)


rather than

def foo[T: ValueOf] = println(valueOf[T])


The point is to have a way to influence type inference so it infers singleton types. Typical scenario is when you have a class that needs path-dependence on a module passed in argument.
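A small sketch of that scenario (Graph and Builder are illustrative names; Scala 2.13, with the type argument passed explicitly). Without the Singleton bound, G would be inferred as Graph and the path-dependent types would no longer line up:

```scala
class Graph { class Node }

// The Singleton bound ensures G is a singleton type such as graph.type,
// so members selected through g are path-dependent on the argument.
class Builder[G <: Graph with Singleton](val g: G) {
  def mkNode(): g.Node = new g.Node
}

object PathDemo {
  val graph = new Graph
  val builder = new Builder[graph.type](graph)
  val node: graph.Node = builder.mkNode()   // the paths agree
}
```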


I proposed a solution based on supertype bounds on the original thread. The advantage is that the compiler can use it to direct type inference more easily than with the type class approach.

I think it could also be used to allow sound type projections, if they are made on singleton types; i.e. allow syntax Singleton[A]#T, which can be handy as we sometimes want to talk about the members of a singleton type without having the value around to express the proper path-dependent type selection.


Remind me why or when the default is to widen? Is it possible that revisiting that question would obviate the need to control it?


Here is an example. In that case it widens even with the Singleton bound, but at least it gives an error saying it needs a singleton type.

❯ scala
Welcome to Scala 2.12.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_191).
Type in expressions for evaluation. Or try :help.

scala> class C { type T }
defined class C

scala> class D[T <: C](val t: T)
defined class D

scala> val c = new C
c: C = C@575e572f

scala> val d = new D(c)
d: D[C] = D@50b0afd7

scala> implicitly[c.T =:= d.t.T]
<console>:14: error: Cannot prove that c.T =:= d.t.T.
       implicitly[c.T =:= d.t.T]

scala> val d2 = new D[c.type](c)
d2: D[c.type] = D@3a11c0eb

scala> implicitly[c.T =:= d2.t.T]
res2: c.T =:= d2.t.T = <function1>

scala> class E[T <: C with Singleton](val t: T)
defined class E

scala> val e = new E(c)
<console>:13: error: inferred type arguments [C] do not conform to class E's type parameter bounds [T <: C with Singleton]
       val e = new E(c)
<console>:13: error: type mismatch;
 found   : C
 required: T
       val e = new E(c)

scala> val e = new E[c.type](c)
e: E[c.type] = E@d949bc4

scala> implicitly[c.T =:= e.t.T]
res3: c.T =:= e.t.T = <function1>


It wouldn’t be influencing typing … it would be influencing type inference.


The scalac plugin API uses trailing annotations … see the CPS plugin, for example.


I agree with @nafg, but I don’t know how much code will break if this is changed.
As I mentioned on this issue, there are two type inference problems I see here.

Inconsistency between singleton literals and objects

For objects:

trait Foo
object Bar extends Foo
def foo[T <: Foo](t : T) : T = t
val f = foo(Bar)
//f is Bar.type and NOT Foo

While OTOH, for literals:

def foo[T <: Int](t : T) : T = t
val f = foo(1)
//f's type is Int with the value 1, and not the type 1

The meaning of <: Singleton is “Must be a singleton”

I opt for a mechanism that preserves the original type narrowness. Currently we get:

def fooInt[T <: Int](t : T) : T = t 
def fooXInt[T <: Int with Singleton](t : T) : T = t

val one = 1 
fooInt(1) //Int = 1 
fooInt(one) //Int = 1 
fooXInt(1) //Int(1) = 1 
fooXInt(one) //fails


If we wanted to be bold, we could add a notion of modifiers for type parameters combined with a singleton soft keyword:

def foo[singleton T](t: T) = ...


This could open the door to other modifiers that influence inference of type parameters, e.g. a precise modifier to always infer the most precise type (which could be a singleton or a union, but not necessarily), and an erased modifier to only infer types representable after erasure (see for why this might be useful).