As a macro fan, I wanted to add my two cents: IMHO, the strength of macros has always been that they eliminate boilerplate. They accomplish that by generating the boilerplate at compile time (either directly, for custom macros, or indirectly, as in shapeless and magnolia), from the smallest and/or most idiomatic way to express it. For example, writing a case class (a small and idiomatic thing) can get you the boilerplate of a JSON serializer, or a pretty-printer, or what have you. Annotation macros are in a similar vein, but for use cases where the boilerplate can’t be captured in a value and must be a generated definition.
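To make the reduced-vs-expanded contrast concrete, here is a minimal sketch. The names (User, UserJson) are hypothetical, and the serializer is hand-written to stand in for what a derivation macro or a library like shapeless/magnolia would generate at compile time:

```scala
// The reduced form: one small, idiomatic line.
case class User(name: String, age: Int)

// The expanded boilerplate that derivation would produce at compile time,
// hand-written here purely for illustration (names are hypothetical).
object UserJson {
  def encode(u: User): String =
    s"""{"name":"${u.name}","age":${u.age}}"""
}

object Demo {
  def main(args: Array[String]): Unit =
    println(UserJson.encode(User("Ada", 36)))
}
```

The point is that only the first line needs to be read and maintained; the object below it is the part nobody wants checked into source control.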
Physically generating the boilerplate and writing it back to the source file is not remotely the same thing. It solves the problem of writing the boilerplate, but that was only a small problem to begin with compared to the problem of reading the boilerplate. After all, code is read many more times than it is written. The reduced form of the boilerplate (e.g. the case class) is much more readable than the expanded boilerplate itself. Evidence for this can be seen in Java projects that use code generation via annotation processors: the generated code is never checked into source control; it is instead generated only during the compilation pipeline.
I think the idea of using the rewrite system as a code-generation tool is clever (smarter, even), but it won’t find much adoption unless it can be a transparent part of the compilation pipeline, i.e. it does not modify source files, only the AST that goes to the next compilation phase (basically what annotation macros do in Scala 2).