you can definitely run something post-namer, pre-typer (I have some examples if you want to see them, e.g. the tests of pcplod)
The problem is that “naming” doesn’t really name stuff… so everything is pretty much verbatim what’s in the source code. We need another official phase in there that typechecks annotations.
Even if something can be rewritten as a compiler plugin, it makes the experience less pleasant for the user. One way to mitigate this somewhat is for authors to release an sbt plugin that sets up the right compiler flags, but that only helps sbt users. So it’s a step down both for users and for authors.
What exactly is the cost of supporting annotation macros?
@fommil Are you sure? I’ve cloned the repo and NoddyPlugin runs after parser and before namer. Maybe you meant other tests?
I would be surprised if you can actually run things in the middle of the frontend. PhaseAssembly should respect the hard links specified via runsRightAfter on packageobjects and typer; otherwise a globalError is thrown (IIRC, this is what I got when I forced the hard link in my plugin too).
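For readers unfamiliar with how plugin phases are placed: a component declares soft ordering constraints with `runsAfter`/`runsBefore` (hard links use `runsRightAfter`). A minimal sketch of a component that, like the NoddyPlugin mentioned above, runs after `parser` and before `namer` (plugin and phase names here are illustrative; this needs scala-compiler on the classpath):

```scala
import scala.tools.nsc.{Global, Phase}
import scala.tools.nsc.plugins.{Plugin, PluginComponent}

// Illustrative plugin skeleton for scalac 2.x.
class EarlyPlugin(val global: Global) extends Plugin {
  val name = "early-demo"
  val description = "runs between parser and namer"
  val components = List(EarlyComponent)

  object EarlyComponent extends PluginComponent {
    val global: EarlyPlugin.this.global.type = EarlyPlugin.this.global
    val phaseName = "early-demo"
    val runsAfter = List("parser")           // soft constraint: after parsing
    override val runsBefore = List("namer")  // soft constraint: before naming

    def newPhase(prev: Phase): Phase = new StdPhase(prev) {
      def apply(unit: global.CompilationUnit): Unit = {
        // unit.body here is raw parser output: no symbols or types yet
      }
    }
  }
}
```

Whether PhaseAssembly actually honours the `runsBefore` constraint here, or reorders the phase, is exactly the question under discussion.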
We can move this discussion to Gitter to avoid hijacking this thread. @fommil
Annotation macros have never been officially supported by scalac; they have always required an external compiler plugin. Supporting annotation macros is not free; there are challenges that need to be solved:
semantic APIs: annotation macros can only be robustly supported with syntactic APIs. Exposing semantic APIs to macro annotations introduces a cyclic dependency between expanding annotation macros and typechecking (which depends on public members synthesized by annotations).
tooling: IntelliJ, the REPL, scaladoc, zinc, etc., need to accommodate the annotation macro expansion pipeline. For scaladoc, annotations should be able to attach docstrings to synthesized public members.
All of these problems are solvable, but addressing them means less time spent on other improvements. I would say that scripted code generation is a stronger alternative to annotation macros than compiler plugins are. Compiler plugins are not portable and therefore suffer from the tooling problem.
Thank you for sharing your thoughts @chandu0101. Can you estimate how easy it would be to migrate your library from annotation macros to code generation via scripting? What do you think would be the tradeoffs from that change?
If out-of-the-box macro annotation support is going to be huge work, then I don’t mind going with scripting for now. Could you please share some material (links) on Scala code generation via scripting?
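For context, build-time code generation in sbt typically hooks into `sourceGenerators`, which runs a task before compilation and adds its output files to the compile. A minimal sketch (the generated file contents and names are purely illustrative):

```scala
// build.sbt fragment (sbt 1.x syntax)
Compile / sourceGenerators += Def.task {
  val file = (Compile / sourceManaged).value / "Generated.scala"
  // Write any Scala source you like here; a real generator would
  // typically render templates from a model.
  IO.write(file, "package generated\n\nobject Answers { val meaning: Int = 42 }\n")
  Seq(file)
}.taskValue
```

The generated sources land under `sourceManaged`, so they stay out of version control and are regenerated on each build.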
Would it make sense to have macro annotations that do not change the annotated class, but do code generation instead? For me, having an official language-integrated way to do code generation would be an enormous advantage (compared to scripting) and it would cover most of my use cases for macro annotations.
As an example usage, one could have a set of annotated classes in some model package, that automatically generate corresponding transformed classes in some sibling gen package.
There are several advantages:
The whole code base can be type-checked, and then the macro annotations can be invoked (they would obviously have to generate code into a different project). Many macro annotations would benefit from being able to see type-checked trees, with return types inferred, symbols available, and fully-resolved names.
This would prevent “abuses” of macro annotations (such as this), where one uses syntactic forms that are not valid Scala but pass the parser and can be manipulated by syntactic macro annotations. I think this was one of the original design goals of macro annotations, but it was also recognized as being too flexible and making Scala not look like Scala.
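A concrete sketch of the model/gen split proposed above, written out by hand (the annotation name and the synthesized `fieldNames` member are hypothetical, just to show what a generator might emit):

```scala
object model {
  // @genMirror  // hypothetical annotation marking this class for code generation
  final case class User(id: Long, name: String)
}

object gen {
  // What a codegen step might emit into the sibling `gen` namespace:
  // a mirror of the model class, plus extra members on the companion.
  final case class User(id: Long, name: String)
  object User {
    // example synthesized member: a stable list of the field names
    val fieldNames: List[String] = List("id", "name")
  }
}
```

Users would import only `gen.User`; `model.User` exists solely as input to the generator.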
I think so. As long as the generated class mirrors everything defined in the original (model) class, but adds members to the companion object. Users would always use the gen version; the model version would exist only to ease code generation.
Macro annotations are instrumental for frameworks like Freestyle, and as far as I can tell what we do today is not possible with codegen.
Freestyle modifies the definitions of the annotated classes to remove boilerplate and make those classes implement the patterns it needs for programs to remain compatible in vanilla Scala contexts where Freestyle is not in use.
This could not be done with codegen without creating additional classes and types.
In the context of Free and Tagless Final, carrying over an F[_] (representing, in the former, the total Coproduct of algebras and, in the latter, the target runtime to which programs are interpreted) is just boilerplate to most users. Removing it and using the FS dependent type materialized by EffectLike instead is one of Freestyle’s features, and a foundation on which the entire library and its companion libraries are built.
There are many other frameworks that modify annotated classes in place, doing trivial things like adding companions or members to companions. Not being able to do that in place, and having to generate code elsewhere instead, would change not only the usage but also the semantics of where implicits are placed, and could result in other undesired behaviors that users would be responsible for fixing.
Not supporting annotation macros would be a major breaking change and would break a ton of user code, not only from a compilation standpoint but also in the semantics and usage of these projects’ public APIs.
There are a few points I’d like to clarify about my understanding of what the community requirements are, just so we’re all on the same page:
annotation macros are currently provided by a third-party plugin, are notoriously broken in build tooling (presentation compiler, IntelliJ, coverage, etc.), and have some very bizarre behaviour in many cases when types get involved.
We have many existing macro libraries using annotation macros, e.g. simulacrum, deriving.
meta paradise offered an improved API over macro paradise, but unfortunately the tooling breakages are very bad (no support for for-comprehensions, crashes in the presentation compiler, broken scaladocs, etc.), with technical challenges mounting.
compiler plugins allow placement of meta programming at a specific phase in the compiler and are therefore more stable for tooling (only intellij requires custom support, and it is not hard to write it). But unfortunately quasiquotes (and the meta API) are not supported for compiler plugins and the ability to typecheck arbitrary trees is not available (which is only really “best efforts” in macros anyway, discovered through trial and error).
the difference for a Scala developer between using a compiler plugin and a macro is completely negligible. It’s a one-liner in both cases and involves no code changes.
we should strive to use codegen where we can (e.g. the creation of new ADTs), but there are many places where we simply cannot (e.g. when modifying a compilation unit to have access to knownSubTypes, existing classes, and companions)
many current macro annotations require access to type information that is not available from early phases in a compiler plugin.
The obvious options seem to be:
add support for annotation macros to the Production Ready Macros, including full support for all tooling (the presentation compiler, IntelliJ, etc.).
or, make a meta-like API available to compiler plugin authors and add some earlier typechecking / fqn / dealiasing phases (and the testkit!), such that existing plugins like simulacrum, freestyle and deriving can all be easily ported.
Synthesising members that need to be visible to other code currently being typechecked is a very delicate procedure; e.g. our implementation of case classes in the Namer is pretty tough to understand. The hard part is that you want to base the logic of the macro on Types, rather than just syntax, but to get types you might trigger typechecking of code that will observe a scope before your macro has added/amended it. This could manifest as a fatal cyclic error, or as a failure to typecheck code that expects the synthetic members to be available.
How can we provide API/infrastructure support to macro authors who try to walk this tightrope? This is a core question, but one that requires a serious investment of research.
This question is orthogonal to the question of what API should be used to interrogate or synthesize trees or types, or whether macros can be executed in the IntelliJ presentation compiler (both Hard Problems™️ in and of themselves!)
The last time I looked at Java annotation processors, they didn’t work in IntelliJ, and maintaining a separate implementation of each expansion was necessary, e.g. for Lombok.
However, I’m coming around to the idea of having a separate implementation for the IDE and the compiler, for perf reasons. For example, in stalactite (the deriving macro) we don’t do any implicit derivations in IntelliJ, which dramatically speeds up the dev cycle. It’s only during the actual compile that errors will be discovered, and a lot of people prefer that because it doesn’t get in the way, and it means you’re interacting with the real compiler to fix tricky implicit problems. I’m in favour of fewer false red squigglies, and letting more through to the real compiler to catch.
To add to this, Scalameta annotation macros and the inline/meta proposal don’t expose semantic APIs to annotation macros. They are purely syntactic. This limits the capabilities of annotation macros but avoids introducing a cyclic dependency between typechecking and macro expansion. It seems that syntactic APIs are still sufficient to accomplish many impressive applications.
I’m using macro annotations in my scala-js-preact library, the Scala.js facade for the Preact JS library. Macro annotations help me reduce boilerplate code in the public API. The resulting API is a lot cleaner and simpler.
The first implementation of the library API was without macro annotations. It contained a few objects with factory methods for creating entities, and I was not happy with that API: it required a lot more description in the docs and was a lot more complex. Macro annotations contain all this logic inside, and at the same time their effect is still understandable (not too much magic).
I’m not sure that I can switch the current API to another metaprogramming technique. As far as I understand, at the moment I have 2 alternatives:
Using code generation with the build tool. Actually, in my case, it could work, but I don’t want to ship my library with an extra sbt plugin. It could confuse users and makes the library less convenient for getting started.
A compiler plugin. It could work too, but it has the same disadvantages as an sbt plugin. And it also makes the code less portable.
Just wanted to chime in here to plead for macro annotations not to be abandoned. As mentioned before, libraries like freestyle make complex patterns easy that would otherwise require quite a lot of boilerplate.
For my personal project I use them to build interfaces between a Scala (JVM) server and a Scala.js client. In the JS client I use them to, for example, convert Scala futures to JS promises and Scala lists to JS arrays, so that I can easily call the methods in Scala.js from plain vanilla JS.
Replacing this with compiler plugins would be way over my head; macro annotations are easy. Some other plain old codegen seems like reinventing the wheel to me: it would be way more cumbersome, as it would require parsing the annotated code one way or another.
One of the big plusses (hate to mention it here) of Scala over Kotlin is its metaprogramming capabilities and the amount of boilerplate they can reduce. Losing macro annotations would hurt the language in this respect. I understand that making things work nicely in the IDE is hard, but I’d be perfectly happy with macro annotations that have no IDE support. IDE support for fancy Scala is abysmal as is, red squiggly lines everywhere. In the case of annotation-based codegen I’m not bothered by the IDE borking; no big deal.
We use them for grafter, which is a dependency-injection library.
why are macro annotations important for you and your users?
Because they allow us to modify the dependency graph of an application by doing nothing more than adding a new dependency to a case class. This is a lot of productivity gained.
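The case-class-as-dependency-graph idea can be sketched in plain Scala (the component names here are illustrative, and grafter’s macro annotations normally derive the reader/wiring machinery that makes this automatic):

```scala
// Two leaf components of an application.
final case class Database(url: String)
final case class HttpClient(timeoutMs: Int)

// The application's dependency graph is itself just a case class;
// adding a new dependency means adding one more field here.
final case class App(db: Database, http: HttpClient)

val app = App(Database("jdbc:postgresql://localhost/test"), HttpClient(5000))
```

The point being made above is that the annotation keeps this shape while removing the boilerplate around it; hand-writing the wiring for every component is what users would be back to without it.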
can you use alternative metaprogramming techniques such as code generation scripts or compiler plugins to achieve the same functionality? How would that refactoring impact your whitebox macro?
I have no idea whether I would know how to replace them, but I would be extremely annoyed to have to change this code, which would have a major impact on our applications and components.