Status of specialization in Scala 3

What is the status of specialization in Scala 3? While I was recently posting a few bug reports regarding @specialized for Scala 2.13, I was made aware that Dotty 0.27 simply ignores @specialized.

What is the current status and plan?

  • will @specialized be implemented eventually?
  • will some automatic specialization be implemented?
9 Likes

See https://github.com/lampepfl/dotty/blob/d8f1b4f7de4dbef63b16fda14d12a9e718668573/docs/docs/typelevel.md
§ Code Specialization

2 Likes

Is that feature available now?

This is very different from the @specialized annotation, as it only specializes methods. It can't export specialized types like @specialized does, so it is quite limited. It is also less practical, as it requires users to instantiate each specialization manually, and separately instantiated specializations will not be shared.

1 Like

We are currently working on supporting function specialization:

7 Likes

Could transparent also work on classes? For example, here is a snippet from FScape (a DSP framework):

  type Shp[E] = FlowShape[E, E]

  class Stage[A, E <: BufElem[A]](implicit tpe: StreamType[A, E])
    extends GraphStage[Shp[E]] {

    val shape: Shp[E] = ???

    def createLogic(): Logic[A, E] = {
      val res: Logic[_, _] = if (tpe.isDouble) {
        new Logic[Double, BufD](shape.asInstanceOf[Shp[BufD]])(_ - _)
      } else if (tpe.isInt) {
        new Logic[Int   , BufI](shape.asInstanceOf[Shp[BufI]])(_ - _)
      } else {
        assert (tpe.isLong)
        new Logic[Long  , BufL](shape.asInstanceOf[Shp[BufL]])(_ - _)
      }
      res.asInstanceOf[Logic[A, E]]
    }
  }

  class Logic[@specialized A, E <: BufElem[A]](shape: Shp[E])(diff: (A, A) => A)
                                              (implicit tpe: StreamType[A, E]) 
    extends GraphStageLogic(shape) { ... }

(BufElem wraps the primitive arrays in this case.)
This is a predictable way I found to get guaranteed specialized instances in Scala 2. Is it conceivable to allow transparent on a class? I think that would be great, and it would capture most of the performance optimisations possible today in Scala 2.
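For comparison, something close to the dispatch above can already be written in Scala 3 with a `transparent inline` factory and `compiletime.erasedValue`. This is only a hedged sketch: the names (`makeLogic`, `LogicD`, etc.) are illustrative, not FScape's API, and, as noted earlier in the thread, each specialization is still a hand-written class and only statically known type arguments benefit:

```scala
import scala.compiletime.erasedValue

trait Logic[A] { def diff(a: A, b: A): A }

// One hand-written monomorphic subclass per supported primitive.
final class LogicD extends Logic[Double] { def diff(a: Double, b: Double): Double = a - b }
final class LogicI extends Logic[Int]    { def diff(a: Int,    b: Int):    Int    = a - b }
final class LogicL extends Logic[Long]   { def diff(a: Long,   b: Long):   Long   = a - b }

// At each call site with a statically known A, the inline match selects
// the matching monomorphic class at compile time; the cast is needed
// because the branch does not refine A.
transparent inline def makeLogic[A]: Logic[A] = inline erasedValue[A] match {
  case _: Double => (new LogicD).asInstanceOf[Logic[A]]
  case _: Int    => (new LogicI).asInstanceOf[Logic[A]]
  case _: Long   => (new LogicL).asInstanceOf[Logic[A]]
}
```

Unlike @specialized, this does nothing for call sites where the type argument is only known at runtime, which is exactly the case the `tpe.isDouble` chain above handles.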

Forgive me if I'm being an idiot since I know very little about @specialized, but why can't inline with a union of the primitives be sufficient?

type Specialized2[A, B] = A | B
object Specialized2 {
  type Res[A, B] = (A, B) match {
    case (Int, Int) => Int
    case (Long, Long) => Long
    case (Float, Float) => Float
    case (Double, Double) => Double
  }
  extension [A, B](inline x : Specialized2[A, B])
    inline def + (inline y : Specialized2[A, B]) : Res[A, B] = (x, y) match {
      case (x : Int, y : Int) => (x + y).asInstanceOf[Res[A, B]]
      case (x : Long, y : Long) => (x + y).asInstanceOf[Res[A, B]]
      case (x : Float, y : Float) => (x + y).asInstanceOf[Res[A, B]]
      case (x : Double, y : Double) => (x + y).asInstanceOf[Res[A, B]]
      case _ => ???
    }
}

import Specialized2._
inline def foo(a : Specialized2[Int, Double]) = a + a

 
@main def main : Unit = {
  val fooDbl = foo(1.0)
  val fooInt = foo(1)
  println(fooDbl)
  println(fooInt)  
}

(I would rather have used opaque types, but they currently cannot be used with inline methods.)

This does not come close to even beginning to solve any of the problems @specialized solves.

Inlining can give you some "abstraction without regret" with regard to type classes, as in that example, but it's a totally different thing from what @specialized does.

trait Numeric[A] {
  def zero: A
  extension (a: A) {
    def +(b: A): A
    def *(b: A): A
    def -(b: A): A
    def /(b: A): A
  }
}

given Numeric[Double] with {
  inline def zero = 0.0
  extension (a: Double) {
    inline def +(b: Double) = a + b
    inline def *(b: Double) = a * b
    inline def -(b: Double) = a - b
    inline def /(b: Double) = a / b
  }
}

abstract class MathLib[N: Numeric] {
  def computeThing(a: N, b: N): N
}
object MathLib {
  inline def apply[N: Numeric] = new MathLib[N] {
    def computeThing(a: N, b: N): N = {
      (a + b) * (a - b)
    }
  }
}

Now we have written one generic method that works on all Numeric types and generates efficient code without incurring all the usual indirection.
What this doesn't do, however, is eliminate boxing, like @specialized does. When you call MathLib[Double].computeThing(4.2, 2.4), the compiler will generate bytecode equivalent to the following:

final class $anon() extends MathLib[Double](given_Numeric_Double) {
  def computeThing(a: Double, b: Double): Double = {
    (a + b) * (a - b)
  }

  override def computeThing(a: Object, b: Object): Object = {
    Double.box(computeThing(Double.unbox(a), Double.unbox(b)))
  }
}

val a: MathLib[Double] = new $anon()

Double.unbox(a.computeThing(Double.box(4.2), Double.box(2.4)))

So boxing and unboxing still happens (twice). The bytecode optimizer, if enabled, may still be able to optimize this away, though, if Scala 3 can reuse the 2.13 optimizer.
@specialized does a lot of fragile magic so that the compiler knows at the call sites that it's dealing with a specialized class and can call the specialized methods directly.
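To make that "fragile magic" concrete, here is a rough sketch of what the Scala 2 compiler does for a specialized class. The synthesized names are shown for illustration only; the exact shape of the generated code varies:

```scala
// Scala 2 source:
class Box[@specialized(Int, Double) A](val a: A) {
  def get: A = a
}

// scalac synthesizes, roughly, one subclass per specialized primitive:
//
//   class Box$mcD$sp(a: Double) extends Box[Double](a) {
//     def get$mcD$sp: Double = ...      // unboxed accessor
//     override def get: Double = get$mcD$sp
//   }
//
// and rewrites call sites whose static type is Box[Double]:
//   new Box(1.0)  becomes  new Box$mcD$sp(1.0)
//   b.get         becomes  b.get$mcD$sp
// so no boxing occurs on either side of the call.
```

It is this call-site rewriting, coordinated across separately compiled units, that inline-based encodings cannot reproduce.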

I had not noticed this, thank you so much. I guess that is the reason for the carefully chosen dot product example with array arguments so that boxing is not an issue.

I am confused and concerned after reading these answers. Does this mean transparent is supposed to replace @specialized? I have used @specialized to prevent boxing, as boxing was causing significant performance issues in my code. Can transparent be used for the same goal?

What is the experience of people writing high performance code with this solution?

1 Like

I'd like to revive this topic, seeing that function specialization was added and that there's a PR for tuple specialization.

The lack of support for @specialized is a blocker for a multitude of libraries that were left with no viable alternative (neither inline nor code generation produces a stable, binary-compatible API that hides the specialization from the user).

15 Likes

Rectifying what I wrote: there is no PR, but there is an open issue.

Recently Spire was also ported to Scala 3, and laments the loss of performance. I don't see projects like Breeze and Algebird (and transitively, Spark) making the upgrade without this either.

Waiting for Project Valhalla to deliver this functionality on the JVM probably means waiting 3+ years; I don't think we can wait that long.

10 Likes

See also On-demand specialization. One of Martin's remarks there is:

The best bet is probably to compile and specialize from Tasty

There are some more recent discussions at:

3 Likes