What kinds of macros should Scala 3 support?


If the first run of the type checker succeeds, then running it again but with refined types will presumably also succeed, and will not achieve anything. Whitebox macros are useful when they are needed for the (first) type-checking phase to complete successfully – i.e., they guide the type checker for the right types and implicits to be inferred.
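For readers less familiar with the distinction, it shows up directly in how a Scala 2 macro is declared. Below is a minimal sketch (the macro names and bodies are made up for illustration; the relevant API is `scala.reflect.macros.blackbox.Context` vs `whitebox.Context`):

```scala
import scala.language.experimental.macros
import scala.reflect.macros.{blackbox, whitebox}

object Macros {
  // Blackbox: the declared return type (String) is taken at face value.
  // The type checker trusts the signature and never needs the expansion
  // in order to infer types.
  def asString(x: Any): String = macro asStringImpl
  def asStringImpl(c: blackbox.Context)(x: c.Tree): c.Tree = {
    import c.universe._
    q"$x.toString"
  }

  // Whitebox: the declared return type (Any) is only an upper bound.
  // The type checker uses the actual type of the expansion, so the macro
  // can refine types and guide inference and implicit search.
  def identityRefined(x: Any): Any = macro identityRefinedImpl
  def identityRefinedImpl(c: whitebox.Context)(x: c.Tree): c.Tree = x
}
```

With the whitebox version, `Macros.identityRefined("a")` is typed as `String` even though the signature says `Any`; the blackbox version could never do that.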


If that would be interesting for the development group, we’d be happy to help run such a more detailed survey on a larger scale.

It would definitely be interesting!


I was wondering what the best method here would be, and I suppose any automated means of checking the source code would both raise privacy concerns and be hard to do accurately (because of transitive dependencies).

So a self-reported usage survey would be the way to go. We could ask people which macro libraries they (consciously :slight_smile: ) use in their projects, with a division between blackbox/whitebox/annotation macros if the library offers more than one kind. Maybe @olafurpg has a good starting list of such projects? Plus a free-form area where people could state whether they have custom macros, and what the use cases are.

What do you think?



When designing the survey, try to keep in mind that, at least in my experience, a lot of Scala code is close to Java and is written and maintained by people not that familiar with advanced topics such as macros and their variations. So if you are interested in real statistics, try to keep the questions very specific, so that the person answering them doesn't need to know what a whitebox/blackbox macro is, or whether they're even using macros in the first place :slight_smile:


Ah yes, sure, I wouldn't expect anyone not interested in macro development to know about the difference. What I had in mind was asking about specific features of a library, if it offers blackbox, whitebox, and annotation macros.


@adamw But I think that users would likely not know what kinds of macros they were using, at least not the difference between whitebox and blackbox. The annotation/def macro distinction is easier to answer, so that could be a useful data point.

Maybe it’s easier to ask: Can you give us a shortlist of the macros you use most often (name of macro and library where it comes from)? Given the list, we can do the classification ourselves.


This is very true, and I like the question. Perhaps you should even add a list of 10 or 20 “common” macros, because users may not realize that what they use is a macro. That’s the counterpart of having macros so nicely integrated: most users may not even know they use them.
For example, from a user’s point of view, it is not evident that:

  def values = ca.mrvisser.sealerate.values[Whatever]

is a macro call.


Yes, you are right, sorry I didn’t express what I had in mind clearly enough before :). As I was replying to @Krever, I thought about explicitly asking about usage of a set of known libraries which use macros (shapeless, circe, etc.). Here the data set obtained by @olafurpg might be very helpful.

Plus a free-form field so that people might add to the list if anything is missing.


That looks like a good plan. So we collect a set of libraries that define macros and run a survey asking which of these libraries people use in their projects?


What if a library defines both blackbox and whitebox macros? You won’t know which are being used.

Also how do you collect the set of libraries?


I have authored two open-source Scala projects (Chymyst and curryhoward), and in both I use def macros. Both projects have an embedded-DSL flavor, so my perception of which def macro features are “important” is most likely skewed. In brief, here is what def macros do for me; I don’t see these features mentioned in Olafur’s summary:

  1. Enable compile-time reflection: for example, I can say f[A => B => Int](x + y) where f is a macro, and I can use reflection to inspect the type expression A => B => Int at compile time. Then I can build a type AST for that type expression and compute something from it (e.g. a type class instance). Note that def macros do not convert type parameters into ASTs; only the x + y will be converted to an AST when the macro is expanded. So here I am using macros for a form of staged compilation, where I use reflection at the first stage and create code to be compiled at the second stage. (The curryhoward project uses this to create code for an expression by running a logic theorem prover on the expression’s type signature.)
  2. Inspect the name and the type signature of the expression to the left of the equals sign. For example, in the curryhoward project I can write def f[A,B](x: A, y: A => B): B = implement where implement is a macro. In that macro, I can see that the left-hand side is defining a method called f with type parameters A and B, return type B, and arguments x, y of specific types. I can also inspect the enclosing class to determine that f is a method of that class, and to see what other methods that class has. Another use of the “left-side inspection” that’s great for DSLs is a construction such as val x = makeVar, where makeVar is a macro that uses the name x as a string to initialize something, instead of requiring val x = makeVar(name="x"). The result is a more concise DSL.
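As a rough Scala 2 sketch of the “left-side inspection” idea: the `Var` class and `makeVar` macro below are hypothetical, but the key call, `c.internal.enclosingOwner`, is a real API that exposes the symbol of the definition being initialized:

```scala
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

class Var(val name: String) // hypothetical DSL variable

object Dsl {
  def makeVar: Var = macro makeVarImpl
  def makeVarImpl(c: blackbox.Context): c.Tree = {
    import c.universe._
    // During typing of the right-hand side, the enclosing owner is the
    // `val` being defined, so its name is available at compile time.
    // (.trim removes the trailing space the compiler appends to val names.)
    val name = c.internal.enclosingOwner.name.decodedName.toString.trim
    q"new Var($name)"
  }
}

// val x = Dsl.makeVar   // would expand to roughly: new Var("x")
```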

It would be a pity to see such useful features disappear when the new Scala 3 macro system is created.

On the other hand, I also encountered the breakage in the current def macros, as soon as I tried to transform ASTs. Even a very simple transformation - such as removing if true and replacing { case x if true => f(x) } by { case x => f(x) } - leads to a compiler crash despite all my efforts to preserve the syntax tree’s attributes. It would be good to fix this kind of breakage in the new macros.

Another question: I noticed that type class derivation in Kittens and in Shapeless is limited to polynomial functors. Contravariant functors, for example, are not derived, and generally function types such as A => B are not supported for type class derivation. I wonder if this limitation will continue in Scala 3. I hope not!


The way it seems to me, ' means “quote,” and is a good choice because it’s a quote symbol but is not used in Scala for strings – actually it’s used for Symbols, which are like code identifiers. So '{ ... } is completely new syntax – a new kind of literal expression for Exprs. On the other hand, ~ is a standard prefix operator in Scala (along with -, +, and !), and normally means “bitwise negation.” So it looks like it’s just a method on Expr that “flips” it from an Expr into an actual value.
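To make that concrete, here is a small sketch in the proposed syntax (adapted from the style of the examples in the meta-programming proposal; the exact API may still change):

```scala
import scala.quoted._

// '{...} quotes code: the argument is not evaluated but turned into an Expr.
// ~ splices an Expr back into a quote, where it becomes a real expression.
inline def assertMacro(cond: => Boolean): Unit = ~assertImpl('(cond))

def assertImpl(cond: Expr[Boolean]): Expr[Unit] =
  '{ if (!(~cond)) throw new AssertionError("assertion failed") }
```

So `'` takes you one level up (a value becomes a representation of code), and `~` takes you one level back down (a representation is inserted as actual code).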


To summarize for people like me who haven’t done much macro programming and did not understand the docs well:

  • ': Real Thing into Representation of Thing (It now stands for “something”)
  • ~: Representation of Thing into Real Thing (It now is something)

Which fits very much into @nafg’s reasoning for ' and ~.


The real question is what kinds of macros shouldn’t Scala 3 support?

– a whitebox fan

ps) but in today’s racially-charged environment, I’m glad blackbox is catching a break for once. Is there an actual Marvel hero named Blackbox? because there oughtta be.


I don’t think it should support a white box-fan


Thanks, good to know these use cases. The way you describe it, it looks like these would work in the new system. The whole point of exposing Tasty trees is to allow decompositions like the ones you describe.


Hi folks, new poster here! Just to let you know, we (me + NEU/CTU folks) are working on a large-scale analysis of macro usage for exactly this purpose. The idea is to look at how macros (of all kinds) are used in the wild and to extract use cases, so that there can be an informed decision on which constructs should be supported by a transition. It seems there is some interest in this already, so it would be interesting to hear your thoughts if you haven’t posted already!


Hi there!

We use macros in a couple of projects:

They allow us to get handy and efficient JSON/binary serialization. Here are the code and the results of benchmarks which compare (in some custom domain) both of them with the best Scala/Java serializers that have bindings to Scala case classes and collections:

The most interesting feature that we are waiting for is opaque types (SIP-35) working properly with macros. It would allow us to avoid using annotations (like @named, @stringified, etc.) on case class fields to tune representation properties or binding.

Instead, we want to use some configuration functions for macro calls which override defaults without touching the sources of the data structures, like some of these configuration functions:

But instead of strings, they should take some type parameter(s), as modeled here:


Kotlin now has a ticket on this, and it looks like they are working towards it:


It would be great for Scala to also support creating and consuming API jars (or any API description), which would enable much better interop with Buck, Bazel, Pants and similar tools.


In AVSystem commons library we’re using macros for:

The last use case is the one that doesn’t seem to have gotten enough love so far in all the macro discussions, and it would be a serious blow for us if it weren’t supported in Scala 3.

Also, our macro engines rely heavily on annotation processing, i.e. accessing annotations on inspected types, classes, methods, parameters, etc., which influence how code is generated.

When using macros, we also try to follow these principles where possible:

  • use only blackbox macros
  • avoid implicit macros, especially the ones that generate a lot of code
  • avoid arbitrarily deep type inspection (e.g. only inspect the shallow structure of a case class, don’t go into fields) - this rules out e.g. fully recursive typeclass derivation
  • in order to avoid problems with incremental compilation, macros that inspect types should be invoked only in the same compilation unit where the inspected types are defined (e.g. in the companion object of the inspected class)
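The last principle can be illustrated with a sketch (`JsonCodec` and its `materialize` macro are hypothetical stand-ins for a macro-based typeclass):

```scala
final case class User(name: String, age: Int)

object User {
  // The macro that inspects User's fields is invoked in the same
  // compilation unit (the companion object), so the incremental compiler
  // recompiles this instance whenever the definition of User changes.
  implicit val userCodec: JsonCodec[User] = JsonCodec.materialize[User]
}
```

If the instance were instead materialized in a distant file, editing `User` would not necessarily trigger recompilation of the generated code, leading to stale expansions.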