I wish to discuss this subject. It may lead to a SIP.
Here are some examples of what I wish to do:
```scala
case class Foo[N <: Int with Singleton]()

@unroll for (i <- 0 to 5)
  Foo[i]()

inline def inFoo(num: Int): Int = ???

@unroll for (i <- 0 to 5)
  Foo[inFoo(i)]()

@unroll for (i <- 0 to 5)
  @unroll for (j <- 0 to inFoo(i))
    Foo[j]()
```
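As a point of reference, this kind of compile-time expansion can be approximated with recursive inline methods and `scala.compiletime` operations (Scala 3 syntax, which did not exist at the time of this discussion; the `unroll` method below is a hand-rolled sketch, not the proposed annotation):

```scala
import scala.compiletime.constValue
import scala.compiletime.ops.int.{+, <}

// Expands body(I), body(I + 1), ..., body(N - 1) at compile time:
// each recursive call is inlined with a concrete literal index type.
inline def unroll[I <: Int, N <: Int](inline body: Int => Unit): Unit =
  inline if constValue[I < N] then
    body(constValue[I])
    unroll[I + 1, N](body)

@main def demo(): Unit =
  val seen = scala.collection.mutable.ListBuffer.empty[Int]
  unroll[0, 3](i => seen += i)
  println(seen.toList) // List(0, 1, 2)
```

Because each expansion step fixes the index to a literal type, the body is type-checked once per iteration, which is exactly the "check all generated expressions" behavior described below; the run-time result, however, is fully unrolled rather than kept as a loop.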
A few points/issues:
@unroll may be possible to implement using meta and macros, but an annotation cannot be placed on a for expression.
I only require compile-time unrolling, not run-time unrolling. What do I mean by this? I want the compiler to check all the generated unrolled expressions for compilation errors (since we can have implicit constraints on the types), but keep the loop as-is for run-time execution. Maybe we should have two kinds of @unroll?
One option may be to create some kind of language mechanism that translates a constrained, inline-able run-time variable into a union of compile-time types. E.g. i <- 0 to 5 becomes the type union 0 | 1 | 2 | 3 | 4 | 5.
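For illustration, such a union is directly expressible with Dotty-style union and literal types; only the automatic range-to-union translation is the hypothetical part (the alias name `Idx` is mine):

```scala
// A hand-written union of the literal types 0 through 5;
// the proposal is that `i <- 0 to 5` would produce this automatically.
type Idx = 0 | 1 | 2 | 3 | 4 | 5

val ok: Idx = 3      // compiles: 3 is a member of the union
// val bad: Idx = 7  // would not compile: 7 is outside the union

// Every member of the union widens to Int, so ordinary arithmetic works.
def pick(i: Idx): Int = i * 2
```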
My own use case is a DSL library I’m building for hardware design.
But I’ll give a simple example that implements a fixed-size vector (without the internal structure).
This example also uses the singleton-ops library (XInt is an alias for Int with Singleton).
```scala
class FixedSizeVector[L <: XInt]() {
  def concat[L2 <: XInt](that: FixedSizeVector[L2])(implicit l: L + L2) =
    new FixedSizeVector[l.OutInt]
  def +(that: FixedSizeVector[L]) = new FixedSizeVector[L]
}

object FixedSizeVector {
  def apply[L <: XInt](implicit check: Require[L > 0]) = new FixedSizeVector[L]
}

object TestVector {
  val v1 = FixedSizeVector[5]
  val v2 = FixedSizeVector[2]
  val v3: FixedSizeVector[40] =
    v1 concat v2 concat v1 concat v2 concat v1 concat v2 concat
      v1 concat v2 concat v1 concat v2 concat v1
  // val v4 = FixedSizeVector[-1]
  // Error: could not find implicit value for parameter check:
  //   singleton.ops.Require[singleton.ops.>[-1,0]]
}
```
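For comparison, the arithmetic that singleton-ops encodes through implicits can be sketched directly with Scala 3's built-in compile-time operations; `Vec` below is a hypothetical stand-in for `FixedSizeVector`, not the library's API:

```scala
import scala.compiletime.constValue
import scala.compiletime.ops.int.+

// The length lives only in the type and is recovered as a value
// with constValue at each inline expansion site.
class Vec[L <: Int]:
  inline def length: Int = constValue[L]
  inline def concat[L2 <: Int](that: Vec[L2]): Vec[L + L2] = new Vec[L + L2]

val v1 = new Vec[5]
val v2 = new Vec[2]
val v3: Vec[7] = v1.concat(v2) // 5 + 2 reduces to the literal type 7
```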
This example works well. However, it is not possible to write:

```scala
for (i <- 1 to 5)
  FixedSizeVector[i]
```
Honestly, it sounds like an edge case to me – I don’t think I’ve ever desired this particular feature, in 30+ years of programming. I can see it as a compiler plugin, but I’m a bit skeptical that it’s going to have anywhere near enough usage to be part of the standard system…
FixedSizeVector[i.type] does not work for this: i.type does not change every loop cycle.
I don’t mind writing this, but SIP-23 does not support it.
In general, I think it should be possible to place any inline-able literal in a literal type position. If the .type syntax is required to do this, I don’t mind, but this concept is not supported as a language feature (yet?).
Is that a reason not to do this? You don’t see literal types used a lot, but they were implemented in SIP-23 scalac (TLS) and in Dotty. If I can write Foo[5] but not Foo[2+3], or better yet Foo[foo(<literal>)], isn’t that half a feature? You could upgrade libraries like Spire to support matrices and vectors with type-safe lengths. How is this a bad thing? Granted, I am not aware how hard it is to implement or how it affects the compiler’s performance.
On the surface, no, but as I understand it they’re key to some major libraries like Shapeless.
Hmm. Fair argument. Mind, I’m not dead-set against it, just always a bit cautious about adding stuff to the standard platform – I’m generally of the “each feature must fight for its life” philosophy about such things…
I agree with you here. Every proposed feature should be subject to a cost/benefit check.
I’m fairly set on making a SIP and letting the committee decide this. First I need as much input as possible from potential users, and maybe from others who know the inner workings of the compiler. If someone can tell me for sure that this is far too complicated to implement (or even impossible), then I’ll drop this entirely.
I’m not sure about the status of SIP-23. One issue is that you can’t mark i as a final val, so it doesn’t get a singleton type; that might be why this is rejected.
You can do that today in many languages (Agda, Coq, Idris…). However, you’ll soon demand that the typechecker know that 2 + 3 = 5, that n + m = m + n, and so on, and that is far from trivial.
Refinement types (as in Liquid Haskell) use SMT solvers to figure these things out and work much better, but IIUC nobody has yet integrated those into the production version of a major compiler. There is work on this for Scala, though (I’m not sure of the details), see
Actually, I have most of this covered using the singleton-ops library. It was pretty simple to implement via a macro, so I doubt it would be much more complicated to implement in the compiler itself. However, what I mainly require is the ability to unroll a loop at compile time.
Hmmm… Nice hack. I believe it will pass enough compilation phases to allow the macro to run.
However, AFAIK (please correct me if I’m mistaken), the macro will still get the result of the loop and not the loop structure.
Macros which accept structure are annotation macros, and they cannot be placed in front of a for loop.
I suppose it may be possible to do the following:
```scala
@unroll
class ToBeUnrolled() {
  for (i <- 0 to 5)
    Foo[i]
}

ToBeUnrolled()
```
And yet I feel this is too cumbersome, and it will become very restrictive in what @unroll supports (or make it very complicated to implement and test).
The AST representing `for (i <- 1 to 5) Foo[i.type]` will be passed to the unroll macro. In the worst case, the AST of something like `intWrapper(1).to(5).foreach(i => Foo[i.type])` will be passed to your macro, but I don’t think that would make much of a difference.
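That desugaring can be observed directly with plain Scala (no hypothetical features): both forms below perform the same traversal over the same indices.

```scala
import scala.collection.mutable.ListBuffer

val viaFor = ListBuffer.empty[Int]
for (i <- 1 to 5) viaFor += i

// Roughly what the compiler rewrites the for expression into:
val viaDesugared = ListBuffer.empty[Int]
Predef.intWrapper(1).to(5).foreach(i => viaDesugared += i)

assert(viaFor == viaDesugared) // identical traversal
```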