I was writing some Scala 3 code where I wanted something like:

```scala
val one: 1 | 2 = 1
val two: 2 | 3 = one + 1
```

but this does not compile.
On the other hand:

```scala
val one: 1 = 1
val two: 2 = one + 1
```

does compile. So it seems the compiler understands that if you add 1 to a value of type `1`, you get one of type `2`, but it can't distribute that across a union of integer singleton types. It seems tractable to me that, where you have a union of singleton types of the same primitive type, you could distribute operations across the members of that union.
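For reference, `scala.compiletime.ops.int` already exposes this singleton arithmetic at the type level, so the working case can even be written with a type-level sum in the ascription (a minimal sketch of what I'd expect the standard library's compile-time ops to accept):

```scala
import scala.compiletime.ops.int.+

val one: 1 = 1
// The type `1 + 1` reduces to the singleton type `2` at compile time,
// so this ascription typechecks just like `val two: 2 = one + 1`.
val two: 1 + 1 = one + 1
```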
Are there any issues I'm missing with adding support in the compiler for these kinds of operations on primitive values? It could be rather useful, I think.
It occurs to me that if you have `A1 | A2 | A3 | ...` where each `Ai <: A`, and you call a method `m` defined on `A` that each `Ai` may override with a more specific return type `Ri`, then the result should have type `R1 | R2 | R3 | ...`. This wouldn't require any reflection, since it just propagates static information about the refined override return types.
Then we can consider `Int` as having 2^32 subtypes, each with a different `def +[N <: Int](n: N): scala.compiletime.ops.int.+[this.type, N]`, but that could be done virtually at typechecking time.
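The type-level `+` referred to here does already exist for constant singleton types; a quick sketch of what the compiler can reduce today:

```scala
import scala.compiletime.ops.int.+

// `1 + 2` reduces to the literal type `3`, so this evidence is summonable.
val ev = summon[(1 + 2) =:= 3]

// Ascribing through the type-level op also works: `2 + 3` reduces to `5`.
val five: 2 + 3 = 5
```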
Maybe a combinatorial explosion of possibilities, resulting in slow compilation times? I don't know for sure, but it looks likely this would be misused precisely because it would be so powerful.
You could pretty easily do arbitrary bitset arithmetic at that point, which means that your type tests could need to store multiple not-very-sparse 2^32-bit type occupancy sets. At least, that’s what you have to watch out for.
The time taken wouldn’t be exponential (unless you have exponentially many types) since as far as I can tell the occupancy would always be concrete and thus could just be directly calculated. But it might require unreasonable amounts of computation and memory, depending on what exactly was allowed.
I mean, I'm not talking about making 2^32 subtypes of `Int`. Just keep the actual singleton-type computation, but make the compiler check each side of a union:

```scala
val one: 1 = 1
val twoOrThree: 2 | 3 = one + 1 // attempt to check `one + 1` against `2`, and if that fails, against `3`
```
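Until something like that exists, a workaround that should typecheck today (an untested sketch, assuming constant folding applies whenever the expected type is a singleton) is to ascribe the singleton branch explicitly and let it widen into the union:

```scala
val one: 1 = 1
// Ascribing `2` gives `one + 1` a singleton expected type, so the
// constant fold applies; `2` then widens into `2 | 3` by subtyping.
val twoOrThree: 2 | 3 = (one + 1): 2
```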
Anyway, it depends what is allowed. Is `val two: 1 + 1 = 2` okay? What about `type nuggets = 4 | 6 | 10 | 20`? What about `val complicated: (2 | 4 | 6 | 8 | 10 | 12 | 16) & (1 | 4 | 9 | 16) = (one + one) * (one + one)`? What about `type sum = a + b`, with parameterized `a` and `b`?
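For what it's worth, several of these are already expressible with `scala.compiletime.ops` and plain union/intersection types. A hedged sketch (note the `complicated` case uses a literal on the right-hand side, since folding an arithmetic expression against a union expected type is exactly what doesn't work yet):

```scala
import scala.compiletime.ops.int.+

// `1 + 1` reduces to the literal type `2`, so this is fine today.
val two: 1 + 1 = 2

// A union of literal types is an ordinary type alias.
type nuggets = 4 | 6 | 10 | 20
val n: nuggets = 10

// Intersections of unions also check, at least against a literal:
// 4 is a member of both unions.
val complicated: (2 | 4 | 6 | 8 | 10 | 12 | 16) & (1 | 4 | 9 | 16) = 4
```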
I personally would allow any of it: if you're going to do type computations on subsets of ints, just go the whole way, put a timeout and a size check in the compiler, and complain if people do anything too intractable.
But depending on what you allow, it could require almost arbitrary computation.
I don't get your point. How can a singleton type be, for example, 1 and 2? This shouldn't be some sort of bitset; it's still union and intersection types.
Pretty sure that would be possible, but the compiler would have to check, for both `1` and `2`, that the resulting type is a subtype of `2 | 3`. This should not be a problem to compute, as long as only one operand is a union type.

However, if both operands are union types, we get the combinatorial explosion @Ichoran mentions:
What should `(0 | 1) + (0 | 2)` (as types) be? `0 | 1 | 2 | 3`, right?
But then `(0 | 1) + (0 | 2) + (0 | 4)` is already a union of 8 types, and if we add `+ (0 | 8)`, we get 16 types. The number of types doubles with each summand (and there is no proper way to represent these types more efficiently), so we would get exponential compilation times.
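The doubling is easy to demonstrate at the value level; a small runtime sketch of the growth the typechecker would have to track:

```scala
// Each summand of the form (0 | 2^k) doubles the set of reachable sums:
// four summands already yield 2^4 = 16 distinct values, i.e. a union
// of 16 literal types.
val summands = List(Set(0, 1), Set(0, 2), Set(0, 4), Set(0, 8))
val sums = summands.reduce((a, b) => for x <- a; y <- b yield x + y)
// sums is the set {0, 1, 2, ..., 15}
```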
Yes, you could definitely have exponential blowup in compile times, but I think only in cases where you are typechecking a union type ascription. In practice, I think these ascriptions would not get enormous.
Also, there have traditionally been many ways to write code that typechecks very slowly (I think you can construct an exponential implicit search without too much trouble), but that usually isn't a reason in and of itself not to have a feature. The more important question is: are there important cases that do not take too long to compile? I think the answer is yes in this case.