What kinds of macros should Scala 3 support?


I have authored two open-source Scala projects (Chymyst and curryhoward), and in both I use def macros. Both projects have an embedded-DSL flavor, so most likely my perception is skewed as to which features of def macros are “important”. In brief, here is what def macros do for me; I don’t see these features mentioned in Olafur’s summary:

  1. Enable compile-time reflection: for example, I can say f[A => B => Int](x + y), where f is a macro, and use reflection to inspect the type expression A => B => Int at compile time. I can then build a type AST for that type expression and compute something from it (e.g. a type class instance). Note that def macros do not convert type parameters into ASTs; only x + y is converted to an AST when the macro is expanded. So here I am using macros for staged compilation: reflection at the first stage, and generated code compiled at the second stage. (The curryhoward project uses this to generate code for an expression by running a logic theorem prover on the expression’s type signature.)
  2. Inspect the name and the type signature of the expression to the left of the equals sign. For example, in the curryhoward project I can write def f[A,B](x: A, y: A => B): B = implement, where implement is a macro. In that macro, I can see that the left-hand side is defining a method called f with type parameters A and B, return type B, and arguments x, y of specific types. I can also inspect the enclosing class to determine that f is a method of that class, and to see what other methods that class has. Another use of this “left-side inspection” that’s great for DSLs is a construction such as val x = makeVar, where makeVar is a macro that uses the name x as a string to initialize something, instead of requiring val x = makeVar(name="x"). The result is a more concise DSL.
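Point 1 can be illustrated with Scala’s runtime-reflection Types API, which mirrors what a def macro sees at compile time via c.weakTypeOf[T]. This is only a sketch of the decomposition step (a macro context can’t run standalone), using runtime reflection on a concrete type in place of the macro’s type parameters:

```scala
import scala.reflect.runtime.universe._

// Decompose a function type the way a macro could decompose
// f[A => B => Int] once A and B are known at the call site.
// A Function1[A, R] type is a TypeRef whose type arguments are
// the argument type and the result type.
def argAndResult(t: Type): (Type, Type) = t match {
  case TypeRef(_, _, List(a, r)) => (a, r)
  case other => sys.error(s"not a Function1 type: $other")
}

val t = typeOf[Int => String => Boolean]
val (arg, rest) = argAndResult(t)
// arg is Int; rest is String => Boolean, which can be decomposed
// further, recursively, to build a full type AST.
```

Repeating the decomposition on `rest` yields the remaining pieces of the type AST, which is exactly the information a theorem-prover-style macro needs.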

It would be a pity to see such useful features disappear when the new Scala 3 macro system is created.

On the other hand, I also ran into breakage in the current def macros as soon as I tried to transform ASTs. Even a very simple transformation, such as removing an always-true guard and replacing { case x if true => f(x) } with { case x => f(x) }, leads to a compiler crash despite all my efforts to preserve the syntax tree’s attributes. It would be good to fix this kind of breakage in the new macros.

Another question: I noticed that type class derivation in Kittens and in Shapeless is limited to polynomial functors. Contravariant functors, for example, are not derived, and function types such as A => B are generally not supported for type class derivation. I wonder whether this limitation will continue in Scala 3. I hope not!
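For concreteness, here is the kind of instance that currently has to be written by hand: a sketch with an illustrative Printer typeclass (the name is mine, not from any library), where contramap is the defining operation of a contravariant functor and derivation via product/coproduct structure does not apply directly:

```scala
// A contravariant typeclass: it consumes values of A rather than
// producing them, so Shapeless-style structural derivation
// (mapping over a product of fields) doesn't fit it directly.
trait Printer[A] { def print(a: A): String }

object Printer {
  implicit val intPrinter: Printer[Int] = (a: Int) => a.toString

  // contramap: pre-compose with a function *into* A
  def contramap[A, B](p: Printer[A])(f: B => A): Printer[B] =
    (b: B) => p.print(f(b))
}

// A Printer for String, obtained by mapping into Int:
val lengthPrinter: Printer[String] =
  Printer.contramap(Printer.intPrinter)((s: String) => s.length)
```

A derivation scheme that understood function types could generate such instances mechanically instead.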


The way it seems to me, ' means “quote,” and is a good choice because it’s a quote symbol but is not used in Scala for strings – actually it’s used for Symbols, which are like code identifiers. So '{ ... } is completely new syntax – a new kind of literal expression for Exprs. On the other hand, ~ is a standard prefix operator in Scala (along with -, +, and !), and normally means “bitwise negation.” So it looks like it’s just a method on Expr that “flips” it from an Expr into an actual value.


To summarize for people like me who haven’t done much macro programming and did not understand the docs well:

  • ': Real Thing into Representation of Thing (It now stands for “something”)
  • ~: Representation of Thing into Real Thing (It now is something)

Which fits very much into @nafg’s reasoning for ' and ~.
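A standard illustration from the proposal makes the pairing concrete. This is a sketch in the proposed syntax (it won’t compile with today’s macros), building a power function at compile time:

```scala
// '{...} turns real code into an Expr (a representation of a thing);
// ~ splices an Expr back into real code being built.
def powCode(x: Expr[Int], n: Int): Expr[Int] =
  if (n == 0) '(1)
  else '{ ~x * ~powCode(x, n - 1) }
```

Each recursive step splices the representation back into a larger quoted expression, so powCode(x, 3) builds code equivalent to x * x * x * 1.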


The real question is what kinds of macros shouldn’t Scala 3 support?

– a whitebox fan

P.S. But in today’s racially-charged environment, I’m glad blackbox is catching a break for once. Is there an actual Marvel hero named Blackbox? Because there oughtta be.


I don’t think it should support a white box-fan


Thanks, good to know these use cases. The way you describe it, it looks like these would work in the new system. The whole point of exposing Tasty trees is to allow decompositions like the ones you describe.


Hi folks, new poster here! Just to let you know, we (me + NEU/CTU folks) are working on a large-scale analysis of macro usage for exactly this purpose. The idea is to look at how macros (of all kinds) are used in the wild and collect use cases, so we can make an informed decision about which constructs should be supported through a transition. It seems there is already some interest in this, so it would be interesting to hear your thoughts if you haven’t posted already!


Hi there!

We use macros in a couple of projects:

They give us handy and efficient JSON/binary serialization. Here are the code and the results of benchmarks which compare (on some custom domain) both of them with the best Scala/Java serializers that have bindings to Scala case classes and collections:

The most interesting feature we are waiting for is opaque types (SIP-35) that would work properly with macros. That would allow us to avoid using annotations (like @named, @stringified, etc.) on case class fields to tune representation properties or binding.

Instead, we want to use configuration functions for macro calls which override the defaults without touching the sources of the data structures, like some of these configuration functions:

But instead of strings, they should take some type parameter(s), as is modeled here:


Kotlin now has a ticket on this and looks like they are working towards it:


It would be great for Scala to also support creating and consuming API jars (or any API description), which would enable much better interop with Buck, Bazel, Pants, and similar tools.


In AVSystem commons library we’re using macros for:

The last use case is the one that doesn’t seem to have gotten enough love so far in all the macro discussions, and it would be a serious blow for us if it weren’t supported in Scala 3.

Also, our macro engines rely heavily on annotation processing, i.e. accessing annotations of inspected types, classes, methods, parameters, etc., which influence how code is generated.

When using macros, we also try to follow these principles where possible:

  • use only blackbox macros
  • avoid implicit macros, especially the ones that generate a lot of code
  • avoid arbitrarily deep type inspection (e.g. only inspect the shallow structure of a case class; don’t go into fields) - this means, for example, no fully recursive typeclass derivation
  • in order to avoid problems with incremental compilation, macros that inspect types should only be invoked in the same compilation unit where the inspected types are defined (e.g. in the companion object of the inspected class)
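The last two principles can be sketched together by hand: a derivation-style instance that inspects only the top level of a case class (delegating fields to implicit instances rather than recursing into their types) and that lives in the companion object of the inspected class. The Show typeclass and all names here are illustrative, not from the library:

```scala
trait Show[A] { def show(a: A): String }

case class Point(x: Int, y: Int)

object Point {
  // Lives in the same compilation unit as Point, per the
  // incremental-compilation principle. Only Point's shallow
  // structure is used: instances for the field types arrive
  // implicitly, so we never inspect Int ourselves.
  implicit def showPoint(implicit si: Show[Int]): Show[Point] =
    (p: Point) => s"Point(${si.show(p.x)}, ${si.show(p.y)})"
}

implicit val showInt: Show[Int] = (i: Int) => i.toString

val rendered = implicitly[Show[Point]].show(Point(1, 2))
```

A macro following these rules would generate showPoint-like code, but still stop at the field boundary.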


I agree with you.

I think it would be great if Scala 3 had the ability to do this with blackbox macros.

Maybe it can be done with something like:
Pre-SIP: export, dual* of import

class SomeThingProxy extends SomeThing {
  // Generate empty methods for blackbox macros.
  export SomeThing
}

There is a new proposal for typelevel programming on the table, which is intended as a safer alternative to whitebox macros.

In a nutshell:

  • If you need to create new types and see these types in the same compilation run, transparent functions are the mechanism to use.

  • If you need to create new data and types for use in downstream projects, you can use principled metaprogramming and Tasty reflection.


I like the new transparent mechanism quite a lot, and can see its power. The only thing that seems to be missing, offhand, is something equivalent to Generic and LabelledGeneric – a way to convert between the various sorts of strongly-typed products. Are there plans to add a mechanism for that?
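To make the request concrete, here is the kind of conversion Generic provides, written out by hand. TupleGeneric and its members are illustrative names, not the shapeless API (shapeless derives the equivalent instance, with an HList Repr, via a macro):

```scala
case class User(name: String, age: Int)

// A hand-rolled analogue of shapeless's Generic: a typeclass
// witnessing that User converts to and from a product Repr.
trait TupleGeneric[A] {
  type Repr
  def to(a: A): Repr
  def from(r: Repr): A
}

// The "Aux"-style annotation keeps Repr statically known to callers.
val userGen: TupleGeneric[User] { type Repr = (String, Int) } =
  new TupleGeneric[User] {
    type Repr = (String, Int)
    def to(u: User): Repr = (u.name, u.age)
    def from(r: Repr): User = User(r._1, r._2)
  }
```

The question is whether transparent functions can produce this witness (and a labelled variant) automatically for any case class.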


Wouldn’t the new breed of macro annotations (which generate code that becomes visible in dependent projects) fulfill this use case?


Quite possibly, but I’m concerned that it may make applications structurally messy. Or possibly I’m misunderstanding how these macro annotations would come into play.

I guess the question is: if I am writing an application that depends on New Circe (however that works in the new world), can I do my serialization without having to break things down into multiple projects? I don’t mind things getting a little complex for libraries; I’m more concerned if that’s the case for routine applications.

That’s really the use case that I’d like to see fully worked out in the new world – my observation is that many problems seem to be Circe-complete. (That is, they turn out to want essentially the same machinery as Circe.) So if a consumer of New Circe could operate with reasonably minimal boilerplate, I’ll believe that many use cases are solved. But so far, I don’t quite grok how that would work in the new environment.


No. Generics are currently generated on the fly for any case class and are available in the same project where the case class is defined. Limiting them to dependent projects only would kill the entire ecosystem of typeclass derivation…


In that case, I think we would need one standard Generic-like instance generated by the Scala compiler for each case class, so that libraries like circe can leverage it via implicits. And transparent methods could simplify this process (reducing the amount of implicit definitions needed). Additionally, it would be really cool if transparent methods and/or implicits could be cached somehow, so we don’t end up generating as much duplicated code as today.


I think we would need one standard Generic-like instance generated by the Scala compiler for each case class

I’d rather we retain a generic way to do compile-time reflection and to surface that information in generated code - terms or types - rather than blessing one compiler-baked Generic implementation. Many projects rely on the ability to do their own compile-time and runtime reflection, for example:

  • beanpuree (https://github.com/limansky/beanpuree/blob/master/README.md), which inspects Java classes, not Scala classes, and as such can’t be served by a Scala-specific Generic;
  • jsoniter (https://github.com/plokhotnyuk/jsoniter-scala), which relies on compile-time reflection and macros to generate very efficient low-level JSON parsing code;
  • distage (https://github.com/pshirshov/izumi-r2), our own project, a hybrid runtime and compile-time dependency injection framework that crucially relies on TypeTags - we also generate custom type tags for higher-kinded types (https://github.com/pshirshov/izumi-r2/blob/develop/distage/distage-model/src/main/scala/com/github/pshirshov/izumi/distage/model/reflection/universe/WithDITypeTags.scala).

Additionally, it would be really cool if transparent methods and/or implicits could be cached somehow, so we don’t end up generating as much duplicated code as today.

Implicits cannot be cached because they’re neither coherent nor pure. Transparent methods have all those problems, and additionally depend on call-site scope.
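The scope-dependence is easy to demonstrate, which is why a cache keyed only on the requested type would be unsound. An illustrative sketch (the Tag typeclass and object names are made up for the example):

```scala
// The same implicit search, for a Tag, resolves to different
// values in different scopes, so its result cannot be cached
// globally by type alone.
trait Tag { def name: String }

def whichTag(implicit t: Tag): String = t.name

object ScopeA {
  implicit val tag: Tag = new Tag { def name = "A" }
  def run: String = whichTag  // resolves to ScopeA.tag
}

object ScopeB {
  implicit val tag: Tag = new Tag { def name = "B" }
  def run: String = whichTag  // resolves to ScopeB.tag
}
```

Any caching scheme would have to take the entire implicit scope at the call site into account, not just the type being searched for.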


We plan to put typeclass derivation in the language, by adding a scheme where typeclasses can be defined automatically for case class hierarchies. What transparent functions give us in this respect is that we can have a simple fold-based derivation mechanism and still get the full power of Generic and LabelledGeneric. At least I hope so - we still have to try that out.


How can invocations of def mImpl(x: Expr[T]) be minimized?

If I understand correctly, in this case:

transparent def m(x: T) = ~mImpl('x)
def mImpl(x: Expr[T]) = ...

mImpl will be called for every usage of method m:

val m1 = m(1) //first call mImpl
val m2 = m(2) //second call mImpl

This can be very expensive.
Is it possible for mImpl('x) to be called only once, in the class definition?

transparent trait Factory(val a: T) {
  transparent def apply(x: T) = ~mImpl('x, 'a)
}

object Factory extends Factory("type")

At compile time this would be transformed into:

object Factory extends Factory("type") {
  transparent def apply(x: T) = x match {
    case 0 => ...
    case _ => ...
  }
}