Pre-SIP: Exact type annotation


Adding [!T] or [@exact T] to prevent scalac from (eagerly) widening types.


The Scala compiler tends to eagerly widen types in various situations, which helps avoid too much divergence in type signatures. However, there are cases where we want the type to remain as narrow/exact/precise as possible. Let’s discuss some examples:

Literal arguments widening

def id[T](arg : T) : T = arg
final val one = id(1)
val testOne : 1 = one //error

The compiler widens 1 to Int when it is applied as the arg of id.
Currently, to keep the narrow 1 type, we can apply a Singleton upper bound to prevent the widening. However, the Singleton upper bound presents two problems. First, the function will then no longer accept non-singleton values. Second, Singleton is recommended for deprecation because it may produce unsoundness (see related issues).
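For reference, the Singleton-bound workaround looks like this (note that, as discussed further down the thread, the final modifier is also needed to keep the val’s inferred type narrow):

```scala
def idNarrow[T <: Singleton](arg: T): T = arg

final val one = idNarrow(1) // T is inferred as 1; final keeps the val’s type narrow
val testOne: 1 = one        // compiles
// idNarrow(scala.util.Random.nextInt()) // error: Int is not a singleton type
```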

Another way to overcome this issue is through a direct macro or an inline implicit conversion macro (which we want to minimize in the future). The macro can inspect the type of the underlying tree and use that instead of the type T. Clearly this is a hack and not a good way to maintain type narrowness.
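A related workaround available in Scala 3 is a transparent inline method, which exposes the type of the inlined tree; a sketch, not necessarily the exact macro mechanism referred to above:

```scala
transparent inline def idExact[T](inline arg: T): T = arg

final val one = idExact(1) // the transparent result type is the precise literal type 1
val testOne: 1 = one       // should compile
```

This still forces every call site through an inline definition, which is exactly the kind of hack the proposal aims to avoid.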

Variance type widening in implicit search

Summoning a covariant or contravariant type class that forwards the variant type argument to another implicit search results in unexpected widening to the lower or upper bound, respectively. This looks like a bug, but it is a result of the underlying widening mechanism, and changing it may cause too much breakage.

trait NotNothing[T]
given [T](using util.NotGiven[T <:< Nothing]): NotNothing[T] = new NotNothing[T] {}

trait Fuzzy[+T]
given [T](using ce: NotNothing[T]): Fuzzy[T] = new Fuzzy[T]{}

class Foo
summon[Fuzzy[Foo]] //error

The same issue affects implicit search for a type class with a contravariant type argument, TC[-T], which is quite a common pattern. We cannot have T be a literal and keep its exact type without it being widened, unless we wrap it in an invariant trait.

Variance type widening in method arguments

Variance type widening plagues not only implicit search, but also direct method application.

class Foo[+W]

def id[W](that : Foo[W]) = that

final val f1 = new Foo[1]
final val f1id = id(f1)

val f1Works : Foo[1] = f1    //works!
val f1Fails : Foo[1] = f1id  //Error: type mismatch;


Proposal

Add a unique keyword or annotation to signify that we want the exact/narrow/precise type, which will prevent the widening mechanism from taking place.

Keyword annotation

Keyword annotation options we need to choose from:

  1. [!T]
  2. [@exact T]
  3. [@precise T]
  4. [@preserve T]
  5. [@narrow T]

The annotation can be mixed with variance annotations: [!+T], [@exact -T]
If we choose [!T] we need to decide if it’s better to allow [!+T], [+!T], or both.

The “narrowest possible type”

So what is the narrow type that we’re aiming at here?
We don’t change what Scala considers as the narrow type, but only prevent widening.
So for the following:

def id[!T](t : T) : T = t
val x = id(1)

I would opt for a behavior that will get equivalent result to:

final val $t = 1
def id[T](t : T) : T = t
val x = id($t)

Again, this is just what I’m aiming at type-wise. I don’t think we should have the desugaring.
The exception is tuples. See next section.


Tuples

I believe tuple construction should be special-cased if we apply the exact type annotation to it.

def id[!T](t : T) : T = t
val x = id((1, 2, 3, "hi"))
val xTest : (1, 2, 3, "hi") = x //should work

Union Types

Should union types be preserved as well?
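For concreteness, the question is whether the following would hold (using the hypothetical [!T] syntax from this proposal):

```scala
def id[!T](t: T): T = t

val x = id(if math.random() > 0.5 then 1 else "hi")
val xTest: 1 | "hi" = x // should this work, i.e. should the union of the exact
                        // alternative types be inferred rather than Int | String?
```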

Multiple Arguments and Varargs

What should happen in the following example:

def id1[!T](arg1: T, arg2 : T) : List[T] = ???
def id2[!T](args: T*) : List[T] = ???

I believe we should disallow this and generate an error when an exact type is associated with more than one argument.
Alternatively, we can allow T to become a union of the individual arguments’ exact types. I don’t think this is really useful.


Lists

I believe list construction should behave the same when applied to an exact type argument.

def id[!T](arg : T) : T = arg
id(1 :: Nil) // : List[Int]
id(1 :: 2 :: Nil) // : List[Int]
id(List(1, 2, 3)) // : List[Int]

List construction is not special. Even if we allow varargs to accommodate a union of their arguments as the exact type, it still does not matter, because we won’t change the standard library to use the exact type annotation on :: and List.apply.

Related issues and discussions

Discussion / Open Questions

  1. If you agree with the proposal, what annotation should we introduce?
  2. This proposal does not deprecate Singleton or claim to be its replacement. Should it?
  3. What about tuples? Should we special-case them?
  4. Should union types be preserved?
  5. How to deal with multiple arguments and varargs?
  6. Anything else?

IMO this is clearly something that has been missing from Scala for a while, even though I never ran into these issues myself. I am in favor of something like this making it in.

Some comments to your discussion points:

“precise” has come up a lot in discussions I’ve had about this topic over the years. So far it seems to me that it is the most accurate.

IMO it should at least claim to be its replacement, yes.

It’s not so much special-casing tuples as adding the precise annotation to their constructors and methods, is it?

Yes, definitely. Why should they not be preserved? We should get as precise a type as possible at the time of inference, period.

Infer the most precise type that allows stuff to typecheck. That will most likely mean a non-widened union type of the arguments’ types. For example:

def preciseList[@precise T](xs: T*): List[T] = xs.toList

val x: Int = 6
val list1 = preciseList(true, x, "foo")
val list2: List[true | Int | "foo"] = list1 // correct

I believe there are 2 distinct problems that we want to address here:

  1. Find a replacement for Singleton bounds.
  2. Have a way to disable widening of inferred types in certain situations.

In your example, the first problem is about getting a precise type for the instantiation of T in id, which is what we can currently achieve using the Singleton bound:

def id[T](x: T): T = x
def id2[T <: Singleton](x: T): T = x
val x = id(3)
val y = id2(3)

The snippet above is typed as follows (reproduce using scala-cli compile -Xprint:typer):

val x: Int = test.id[Int](3)
val y: Int = test.id2[(3 : Int)](3)

Here we can see that T is instantiated to 3, but this only helps so much, as 3 is then widened to Int (therefore trying to assign y to a val z: 3 causes an error). This second problem is not, and I think should not be, linked to type variables.

  1. To solve the first problem, I like your idea of a @precise annotation. It could also be a keyword (def id[precise T]) or a type-class (def id[T: precise]). This could be straightforward to implement (see quick tests here: [Experiment] Precise annotation · lampepfl/dotty@15fa50c · GitHub, would need additional desugaring to get a nicer syntax).

  2. To address the wider (no pun intended!) problem of disabling widening, I think we need some additional syntax/concepts/machinery. I recently explored a few (long-standing) ideas that I summarized here: Precise type inference in Scala 3, but all have drawbacks for which we don’t have good solutions yet.


That’s an issue and could break things. What you suggest changes all tuples such that the type of (1, 2, 3, "hi") is (1, 2, 3, "hi") and not (Int, Int, Int, String). We must not touch Tuple.apply or TupleX[...] declarations.


I think both problems are one and the same; Singleton helps us avoid widening in all the cases I presented, but it is a bad solution for the reasons I mentioned in the OP.

I could go either way on the keyword/annotation and have no preference.


While I don’t mind a wider-angle lens here, I feel like dependent def is a carpet bomb while the precise keyword is a surgical strike. There is an issue of timeline and how we introduce such features into the language. Can precise and dependent def co-exist? Should they co-exist? We at least need to answer these questions to know whether this SIP can move forward or should be held back to see if there is a larger concept that can take its place.

On a personal note, I need the precise feature like yesterday (heck, years ago)! You should see the hacks I’ve got in my code base to avoid the widening plaguing it at every turn. So personally I would really like a quick resolution here. But if we have a better concept for the language, then I will hold this SIP back.


I guess my point is that for your first “id” example with the literal to work, you need both the Singleton bound and the final modifier. The first avoids widening of the type variable and the second of the inferred type for the whole right-hand side.

And in this example, it seems like !T impacts both the widening of the type variable and of the inferred type for the whole right-hand side. So !T would have a stronger meaning than T <: Singleton.

Singleton and precise are clearly not the same. The first is a guarantee, the second is best effort. The first requires the argument to be a singleton type and in turn allows subtypes of Singleton type to be used in place of paths. The second does not restrict the actual argument, and does not give you a path.
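The distinction can be illustrated with a sketch (the @precise form is the hypothetical syntax from this proposal):

```scala
def s[T <: Singleton](x: T): T = x
def p[@precise T](x: T): T = x // hypothetical syntax, per this proposal

val i: Int = 5
// s(i) // error: Int is not a singleton type -- Singleton is a guarantee
// p(i) // fine: T = Int -- precise is best effort and does not restrict the argument
```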


Just a note from the peanut gallery: I don’t find the [!T] notation especially intuitive, even by our usual standards of symbols. I’d favor the [@precise T] version (or something like that) – it’s wordier, but for what I expect won’t show up often in typical application-land code I think that’s fine, and it’s much more glanceable.


I’ve experimented with this a little, and what I found currently lacking is the prevention of contravariant widening when the compiler tries to make things compile:

  trait TC[-T]
  given [T <: Int @precise](using Int <:< T) : TC[T] with {}
  given [T <: String](using String <:< T) : TC[T] with {}

  final val shouldFail = summon[TC[1]]
  final val shouldWork = summon[TC["hi"]]

The code compiles successfully, although it should have failed on summon[TC[1]]. The reason is that the compiler is still eager to widen to make things compile, by exploiting the contravariance. This SIP should prevent this from happening.


On what grounds should it not compile? You request a T such that Int <:< T. The compiler chooses T = Int, which is the most precise type that allows the application to compile.


OK, I simplified the example too much, so it looks weird. The point of the example is to show that the compiler will widen a narrow type to make things compile, but I don’t want that to happen in some cases.

Here is a better example that illustrates my point. I want to make a positive value constraint on an argument. I want that constraint to be at compile-time, but in case there is no compile-time information, then the runtime information should be checked.

import compiletime.ops.any.IsConst
trait Positive[-T <: Int]
type ITE[I <: Boolean, T, E] <: T | E = I match
  case true  => T
  case false => E
//Should pass compilation if `T` is positive or if `T` is not known at compile-time
given [T <: Int @precise](using ITE[IsConst[T], T > 0, true] =:= true): Positive[T] with {}

def positive[T <: Int @precise](arg: T)(using Positive[T]): Unit = assert(arg > 0)

positive(1) // should pass
positive(-1) // should get compile error, but we get runtime error
val one = 1
positive(one) // should pass
positive(-one) // should get runtime error

The compiler is eager to widen T in order to get this to compile, although it is clearly not what we want here.
(Even this example is more simplified and does not show why we need contravariance, but enough to make the argument [pun intended])


OK now I see better what you are trying to achieve.

That said, I still think it is not reasonable to expect the compiler to do what you wish it would do in this case. I still fail to see what would be the principled reason preventing the compiler from inferring T = Int. It is still the most precise type that allows the call to typecheck. Type inference is all about finding an instantiation of type variables that allows a call to typecheck. Within one application (a method call), the type inference is going to try its best to find an instantiation that makes it typecheck.

IMO, the real problem here is that ITE[IsConst[T], T > 0, true] =:= true is “unsound” in that it does not respect the LSP: there are things you can achieve with T1 (Int) that you cannot achieve with T2 <: T1 (-1). Because of that, you are fundamentally fighting against the type system, and you will forever need non-principled ways to “hide” stuff from the compiler.


That’s a matter of how we specify what @precise does, and this is what this SIP is about. I think it’s just as fine to define @precise to act like an upper bound of the precise type representation. So if T can have -1 as its upper bound, then @precise forces it so.

Even without this trick, it’s something I can achieve with macros (and have done so). My current workaround for this widening problem is a type Exact[T] that converts T into ValueOf[T] in case T is a literal, just to have an invariant holder. So the above example looks like:

trait Positive[-T]
given ct[T <: Int](using (T > 0) =:= true): Positive[ValueOf[T]] with {}
given rt[T <: Int]: Positive[Int] with {}

def positive[T](arg: Exact[T])(using Positive[T]): Unit = assert(arg.value > 0)

I’m fighting against the compiler because I need contravariance (not shown why in this example), but I also need to keep the precise type of compile-time values. Why should not @precise enable me to do so?

The thing on which we disagree is that, for me, T cannot have -1 as its upper bound, since that prevents the application (the call) from typechecking.

Then what if, instead of changing the semantics of the upper bound, @precise disables contravariance for the type argument it is applied to, within the definition where it is applied?

Thanks @soronpo for trying my rough prototype and for sharing your examples. It really helps to have real-world use-cases to guide the discussion.

A general question: it looks to me like type parameters that can be instantiated to singleton types only, and that appear only once, are similar in purpose to term-reference types (x.type), which do exactly that: give you a singleton type for a given parameter.

For example, positive could be defined as (currently buggy but will work after is merged):

type Positive2[T <: Int] = ITE[IsConst[T], T > 0, true] =:= true
def positive2(arg: Int)(using Positive2[arg.type]): Unit = assert(arg > 0)

id could also be typed as:

def id2(arg : Any): arg.type = arg

So how do term-references compare to @precise type parameters? In which situation are @precise type parameters needed and term-references can not be used?

.type cannot always be applied. It’s like having a Singleton upper bound. Some values are not singletons, and I want to use them as arguments.
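For example (illustrative):

```scala
def id2(arg: Any): arg.type = arg

val i: Int = 5
val a = id2(i)      // fine: i is a stable path, so the result is typed i.type
val t = id2((1, 2)) // the argument is not a path and the tuple is not a singleton,
                    // so the precise tuple type (1, 2) cannot be recovered here
```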

I also made progress, and it seems that I can make it work without contravariance for the implicit search. A bit annoying, but if the semantics do not fit with how Scala is supposed to work, then so be it.

The only thing left is special-casing tuples. Currently this yields an error:

class Id[T <: Any @precise](t: T)

final val x = Id((1, 2, "hi"))
val xTest: Id[(1, 2, "hi")] = x //error

As I mentioned earlier, we should not change tuple construction to use @precise, because that can break existing code. So we need T to special-case direct tuple application to keep the precise type.

Made some progress on the implementation:


OK, I advanced some more on this. Check out the test cases to understand the semantics: