scala.compiletime.ops.int limitation

One use-case of scala.compiletime.ops.int is to avoid having to write code like the following:

sealed trait Nat
case class Zero() extends Nat
case class Succ[N <: Nat](n: N) extends Nat

type Sum[A1 <: Nat, A2 <: Nat] <: Nat = A1 match
  case Zero => A2
  case Succ[predA1] => Sum[predA1, Succ[A2]]

However, note that Sum[Zero, N] reduces to N even when N is unknown. The same cannot be said of 0 + N.

So one can write

def foo[N <: Nat](n: N): Unit =
  summon[Sum[Zero, N] =:= N]

but not

import scala.compiletime.ops.int.*

def bar[I <: Int](i: I): Unit =
  summon[0 + I =:= I]

Could those kinds of reduction rules be added to scala.compiletime.ops?

Could? Yes.
Should? I’m not so sure, mainly because once we do this, it won’t stop there: perhaps the entire set of algebraic rules for integers should come into play. And the real question, beyond the interesting research subject, is what the real-world use-cases are.

There are not that many basic identities. I can really only think of

n + 0 = n,  0 + n = n,  n * 0 = 0,  0 * n = 0,  1 * n = n,  n * 1 = n

I don’t think rules beyond these need to be covered (though it wouldn’t hurt). One could even argue that any rule coverage is better than none; it doesn’t have to be all or nothing.
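For concreteness, here is a minimal sketch of the situation today (the def names are just illustrative). With constant operands the identities already reduce; with an abstract operand they do not, which is exactly what the proposal would change:

import scala.compiletime.ops.int.*

// Constant operands already reduce today:
def constantsReduce: Unit =
  summon[3 + 0 =:= 3]
  summon[1 * 5 =:= 5]

// With the proposed rules, the same summons would also compile for an
// abstract I <: Int; today none of these do:
// def identities[I <: Int]: Unit =
//   summon[I + 0 =:= I]   // n + 0 = n
//   summon[0 + I =:= I]   // 0 + n = n
//   summon[I * 0 =:= 0]   // n * 0 = 0
//   summon[I * 1 =:= I]   // n * 1 = n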

Without these rules, a Scala project using scala.compiletime.ops.int might eventually have to move away from it and define its own compile-time integer types. The latter can at least cover half of these cases (while scala.compiletime.ops.int, with this addition, would cover all of them).
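To make the “half” concrete, here is a sketch reusing the Nat/Sum encoding from the start of the thread (the def name is illustrative): a match type only reduces when its scrutinee, the first argument, is concrete, so a user-defined encoding gets the constant-on-the-left identities but not their mirror images.

def coveredByUserEncoding[N <: Nat]: Unit =
  summon[Sum[Zero, N] =:= N]     // 0 + n = n: reduces, the scrutinee is Zero
// summon[Sum[N, Zero] =:= N]    // n + 0 = n: stuck while N is abstract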

For a “real-world” use-case, see here. Given how physical quantities are encoded in this library, no generic function like nano could be coded as easily and safely without the n + 0 = n reduction rule.
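As a purely hypothetical illustration of why that rule matters for such functions (Scaled and shift below are invented for this sketch and are not the linked library’s API): a generic function that shifts a type-level exponent cannot be used as a no-op today, because Exp + 0 does not reduce back to Exp when Exp is abstract.

import scala.compiletime.ops.int.*

// Hypothetical encoding: a value tagged with a type-level power-of-ten exponent.
final case class Scaled[Exp <: Int](value: Double)

// Generic rescaling: shifts the exponent at the type level
// (the numeric conversion of the value is elided in this sketch).
def shift[Exp <: Int, By <: Int](x: Scaled[Exp]): Scaled[Exp + By] =
  Scaled(x.value)

// With n + 0 = n, shift[E, 0](x) would again be a Scaled[E] and could be used
// wherever a Scaled[E] is expected; today Exp + 0 stays unreduced.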

More generally, these rules are about allowing certain types of generic functions to type-check. The advantages of being able to write generic code are well known.

Not just identity rules, but all the ring rules of ℤ. That includes associativity, commutativity, distributivity, additive inverses, etc.
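Spelled out as type-level equivalences, this would mean something like the following sketch (none of these type-check today for abstract operands, just like the bar example above):

import scala.compiletime.ops.int.*

// What the full set of ring rules would enable for abstract A, B, C <: Int:
def ringLaws[A <: Int, B <: Int, C <: Int]: Unit =
  summon[(A + B) + C =:= A + (B + C)]    // associativity
  summon[A + B =:= B + A]                // commutativity
  summon[A * (B + C) =:= A * B + A * C]  // distributivity
  summon[A - A =:= 0]                    // additive inverses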

Those would be a bonus (but I can imagine that full reduction to a canonical form would be much harder).

The basic rules I proposed would already allow scala.compiletime.ops.int to achieve more than parity with a user-defined version.

@mbovel worked on something along those lines, bringing operation chains into a common representation (1 + x becomes x + 1, Y + 0 becomes Y, etc.).
That way you get a lot of these properties “for free”.

It was a really cool project, he’ll be able to tell you more about it ^^


Indeed, I worked on something similar.

My project was about adding a normalization step to arithmetic type-level operations, in order to derive more equivalences. It enabled all your basic identities, and some more:

import scala.compiletime.ops.int.*

// Non-constant arguments are sorted.
val m: Int = 2
val n: Int = 3
summon[1 + m.type =:= m.type + 1]
summon[1 + m.type + 1 =:= m.type + 2]
summon[m.type + n.type =:= n.type + m.type]
summon[2 * m.type * n.type =:= 2 * n.type * m.type]

// -x is normalized to -1 * x
summon[m.type - n.type =:= -1 * n.type + m.type]

// Summing x n times is normalized to x * n.
summon[2 * m.type =:= m.type + m.type]
summon[2 * m.type + 2 * m.type =:= m.type + 3 * m.type]

// Terms with coefficient 0 are removed.
summon[1 + m.type - 1 =:= m.type]

In the first version I made, operations are normalized to sums of products, while in the second version products are not automatically distributed over sums, to avoid a potential exponential blow-up. Also, in my first version the operations are normalized as soon as possible, while in my second version this is only done as a last resort, in the type comparer.
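As a toy model of what such a normalization buys (invented names, much simpler than the actual implementation): representing a sum as a constant plus one coefficient per variable already gives commutativity, merged duplicate terms and dropped zero coefficients for free.

// Toy normal form for sums: a constant plus one integer coefficient per variable.
final case class SumNF(constant: Int, coeffs: Map[String, Int])

def add(a: SumNF, b: SumNF): SumNF =
  val merged = (a.coeffs.keySet ++ b.coeffs.keySet)
    .map(v => v -> (a.coeffs.getOrElse(v, 0) + b.coeffs.getOrElse(v, 0)))
    .filter(_._2 != 0)   // terms with coefficient 0 are removed
    .toMap
  SumNF(a.constant + b.constant, merged)

@main def normalizationDemo(): Unit =
  // (1 + m) + (m - 1) normalizes to the same form as m + m and 2 * m.
  val onePlusM  = SumNF(1, Map("m" -> 1))
  val mMinusOne = SumNF(-1, Map("m" -> 1))
  println(add(onePlusM, mMinusOne))   // SumNF(0, Map(m -> 2))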

In the end, this work was not merged. We are instead investigating more general solutions to term equivalence and precise types, in the framework of liquid/refinement types for Scala.
