Annotation macros

you can definitely run something post-namer pre-typer :slight_smile: (I have some examples if you want to see… e.g. tests of pcplod)

The problem is that “naming” doesn’t really name stuff… so everything is pretty much verbatim what’s in the source code. We need another official phase in there that typechecks annotations.

Even if something can be rewritten as a compiler plugin, it makes the experience for the user less pleasant. A way to mitigate this somewhat is for authors to release an SBT plugin that sets up the right compiler flags, but that only helps SBT users. So it’s a step down for users and for authors.
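
For illustration, here is a minimal sketch of such an sbt plugin; the organisation, artifact name, plugin name and flag are placeholders, not a real library.

import sbt._
import sbt.Keys._

// Hypothetical sbt AutoPlugin shipped by a macro/plugin author, so that users
// add one line to project/plugins.sbt instead of configuring scalac themselves.
object MyMacrosSbtPlugin extends AutoPlugin {
  override def trigger = allRequirements

  override def projectSettings: Seq[Setting[_]] = Seq(
    // equivalent to the user writing addCompilerPlugin(...) in their build
    addCompilerPlugin("com.example" %% "my-macros-compiler-plugin" % "0.1.0"),
    // fail the build loudly if the compiler plugin is somehow missing
    scalacOptions += "-Xplugin-require:my-macros"
  )
}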

What exactly is the cost of supporting annotation macros?

4 Likes

@fommil Are you sure? I’ve cloned the repo and NoddyPlugin runs after parser and before namer. Maybe you meant other tests?

I would be surprised if you can actually run things in the middle of the frontend. It seems that PhaseAssembly should respect the hard links in packageobjects and typer specified by runsRightAfter. Otherwise a globalError is thrown (IIRC, this is what I got when I forced the hard link in my plugin too).
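
For reference, this is roughly what such a phase placement looks like in a plugin component (plugin and phase names are made up); whether PhaseAssembly actually honours these soft constraints between parser and namer is exactly the question above.

import scala.tools.nsc.{Global, Phase}
import scala.tools.nsc.plugins.{Plugin, PluginComponent}

class NoddyLikePlugin(val global: Global) extends Plugin {
  val name = "noddy-like"
  val description = "tries to run between parser and namer"
  val components = List[PluginComponent](Component)

  private object Component extends PluginComponent {
    val global: NoddyLikePlugin.this.global.type = NoddyLikePlugin.this.global
    val phaseName = "noddy-like"
    // soft constraints only; a hard link would use runsRightAfter instead
    val runsAfter = List("parser")
    override val runsBefore = List("namer")

    def newPhase(prev: Phase): Phase = new StdPhase(prev) {
      def apply(unit: global.CompilationUnit): Unit = {
        // unit.body is still an untyped tree at this point
      }
    }
  }
}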

We can move this discussion to Gitter to avoid hijacking this thread. @fommil

Annotation macros have never been officially supported by scalac; they have always required an external compiler plugin. The “cost” of supporting annotation macros is not zero: there are challenges that need to be solved first.

  • semantic APIs: annotation macros can only robustly be supported with syntactic APIs. Exposing semantic APIs to macro annotations introduces a cyclic dependency between expanding annotation macros and typechecking, which depends on public members synthesized by annotations (a small sketch follows this list).
  • tooling: IntelliJ, the REPL, scaladoc, zinc, etc., need to accommodate the annotation macro expansion pipeline. For scaladoc, annotations should be able to attach docstrings to synthesized public members.
  • annotation macro signature and discovery, see https://github.com/scalacenter/macros/issues/6
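
To make the cyclic-dependency point concrete, here is a sketch with a hypothetical @jsonCodec annotation that is meant to synthesize a codec in the companion:

// Hypothetical annotation. If its expansion needs semantic information
// (e.g. the inferred types of Foo's fields), the compiler must typecheck
// Foo before expanding the annotation...
@jsonCodec
final case class Foo(name: String, age: Int)

object Elsewhere {
  // ...but typechecking this usage requires the synthesized member Foo.codec
  // to already exist, i.e. the annotation must already have expanded.
  val codec = Foo.codec
}

With a purely syntactic API the expansion only looks at the source tree of Foo, so the cycle never arises.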

All of these problems are solvable, but addressing them means less time spent on other improvements. I would say that scripted code generation is a stronger alternative to annotation macros than compiler plugins are. Compiler plugins are not portable and therefore suffer from the same tooling problem.

Thank you for sharing your thoughts @chandu0101. Can you estimate how easy it would be to migrate your library from annotation macros to code generation via scripting? What do you think would be the tradeoffs from that change?

@olafurpg

Can you estimate how easy it would be to migrate your library from annotation macros to code generation via scripting? What do you think would be the tradeoffs from that change?

If out-of-the-box macro annotation support is going to be a huge amount of work, then I don’t mind going with scripting for now. Could you please share some material (links) on Scala code generation via scripting?

1 Like

Would it make sense to have macro annotations that do not change the annotated class, but do code generation instead? For me, having an official language-integrated way to do code generation would be an enormous advantage (compared to scripting) and it would cover most of my use cases for macro annotations.

As an example usage, one could have a set of annotated classes in some model package that automatically generate corresponding transformed classes in a sibling gen package.
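
A rough sketch of that shape (the annotation name and the generated members are made up):

// model/Person.scala -- written by hand; the annotation does not modify it
package myapp.model

@generateInto("myapp.gen") // hypothetical code-generating annotation
final case class Person(name: String, age: Int)

// gen/Person.scala -- emitted into the sibling gen package, never edited by hand
package myapp.gen

final case class Person(name: String, age: Int)

object Person {
  // e.g. a derived instance attached to the generated companion
  implicit val ordering: Ordering[Person] = Ordering.by(p => (p.name, p.age))
}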

There are several advantages:

  • The whole code base can be type checked, and then the macro annotations can be invoked (they would obviously have to generate code into a different project). Many macro annotations would benefit from being able to see type-checked trees, with return types inferred, symbols available, and fully resolved names.

  • This would prevent “abuses” of macro annotations (such as this) where one uses syntactic forms that are not valid Scala but pass the parser and can be manipulated by syntactic macro annotations. I think this was one of the original design goals of macro annotations, but it was also recognized as being too flexible and making Scala not look like Scala.

Would it make sense to have macro annotations that do not change the annotated class, but do code generation instead?

Would this support cases like @deriving above that inserts members into the companion object?

I think so, as long as the generated class mirrors everything defined in the original (model) class but adds members to the companion object. Users would always use the gen version; the model version would exist only to ease code generation.

material(links) on scala code generation via scripting

It’s quite a general topic, and seems to be more popular in other programming languages. Here are some Scala projects using code generation

In other languages

Code generation definitely has problems, but it’s a capable replacement for macro annotations in many scenarios.
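
For reference, the lowest-tech Scala route is an sbt source generator; a minimal sketch, where the file name and generated content are placeholders:

// build.sbt
sourceGenerators in Compile += Def.task {
  val file = (sourceManaged in Compile).value / "generated" / "Greetings.scala"
  IO.write(
    file,
    """package generated
      |object Greetings {
      |  def hello: String = "hello"
      |}
      |""".stripMargin)
  Seq(file)
}.taskValue

The generated sources land under target/, are compiled with the rest of the project, and any tool that understands the sbt build sees them as ordinary Scala code.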

1 Like

Macro annotations are instrumental for frameworks like Freestyle, and as far as I can tell what we do today is not possible with codegen.

Freestyle modifies the definitions of the annotated classes to remove boilerplate and make those classes implement the patterns it needs, so that programs remain usable in vanilla Scala contexts where Freestyle is not in use.

For example, the following definition:

@free trait Algebra {
  def doSomething: FS[Int]
}

is expanded to (simplified):

trait Algebra[F[_]] extends EffectLike[F] {
  def doSomething: FS[Int] = FreeS.liftF(...)
}

This could not be done with codegen without creating additional classes and types.
In the context of Free and Tagless, carrying around an F[_] (representing, in the former, the total Coproduct of algebras and, in the latter, the target runtime to which programs are interpreted) is just boilerplate to most users. Removing that, and using the FS dependent type materialized by EffectLike instead, is one of Freestyle’s features and one of the foundations on which the entire library and its companion libraries are built.

There are many other frameworks that modify annotated classes in place, doing trivial things like adding companions or members to companions. Not being able to do that in place, and having to generate code instead, would change not only the usage but also the semantics as to where implicits are placed, and could result in other undesired behaviors that users would be responsible for fixing.

Not supporting annotation macros would be a major breaking change and would break a ton of user code, not only from a compilation standpoint but also in the semantics and usage of these projects’ public APIs.

6 Likes

There are a few points I’d like to clarify about my understanding of what the community requirements are, just so we’re all on the same page:

  • annotation macros are currently provided by a third-party plugin, are notoriously broken in build tooling (presentation compiler, IntelliJ, coverage, etc.) and have some very bizarre behaviour in many cases when types get involved.
  • We have many existing macro libraries using annotation macros, e.g. simulacrum, deriving.
  • meta paradise offered an improved API over macro paradise, but unfortunately the tooling breakages are very bad (no support for for-comprehensions, crashes in the presentation compiler, broken scaladocs, etc.) with technical challenges mounting.
  • compiler plugins allow placement of metaprogramming at a specific phase in the compiler and are therefore more stable for tooling (only IntelliJ requires custom support, and it is not hard to write). But unfortunately quasiquotes (and the meta API) are not supported for compiler plugins, and the ability to typecheck arbitrary trees is not available (which is only really “best efforts” in macros anyway, discovered through trial and error).
  • the difference for a Scala developer between using a compiler plugin and a macro is completely negligible: it’s a one-liner in both cases and involves no code changes.
  • we should strive to use codegen where we can (e.g. the creation of new ADTs), but there are many places where we simply cannot (e.g. when modifying a compilation unit to have access to knownSubTypes, existing classes, and companions)
  • many current macro annotations require access to type information that is not available from early phases in a compiler plugin.

The obvious options seem to be:

  1. add support for annotation macros to the Production Ready Macros, including full support for all tooling (the presentation compiler, IntelliJ, etc.).
  2. or, make a meta-like API available to compiler plugin authors and add some earlier typechecking / fqn / dealiasing phases (and the testkit!), such that existing plugins like simulacrum, freestyle and deriving can all be easily ported.

2 Likes

Annotation macros were suggested as a solution to a concern regarding opaque types; while the issue might also be solvable using scripted code gen, it would be substantially less user-friendly for the simple use case in question.

2 Likes

Synthesising members that need to be visible to other code currently being typechecked is a very delicate procedure, e.g. our implementation of case classes in the Namer is pretty tough to understand. The hard part is that you want to base the logic of the macro on Types, rather than just syntax, but to get types you might trigger typechecking of code that will observe a scope before your macro has added/amended it. This could manifest as a fatal cyclic error, or as a failure to typecheck code that expects the synthetic members to be available.

How can we provide API/infrastructure support to macro authors who try to walk this tightrope? This is a core question, but one that requires a serious investment in research.

This question is orthogonal to the question of what API should be used to interrogate or synthesize trees or types, or whether macros can be executed in the IntelliJ presentation compiler (which are both Hard Problems™️ in and of themselves!)

One small part of this research I’d like to see is a review of Java’s annotation processors (http://docs.oracle.com/javase/9/docs/api/javax/annotation/processing/package-summary.html#package_description). What can we learn from the design? What restrictions are imposed that would be too strict for our uses? How (if at all) do Java annotation processors integrate into IntelliJ’s presentation compiler?

6 Likes

The last time I looked at Java annotation processors, they didn’t work in IntelliJ and maintaining a separate impl of each expansion was necessary, e.g. Lombok.

However, I’m coming around to the thinking of having a separate impl for the IDE and the compiler, for perf reasons. For example, in stalactite (the deriving macro) we don’t do any implicit derivations in IntelliJ… which dramatically speeds up the dev cycle. It’s only during the actual compile that errors will be discovered, and a lot of people prefer that because it doesn’t get in the way, and it means you’re interacting with the real compiler to fix tricky implicit problems. I’m in favour of fewer false red squigglies, and letting more through to the real compiler to catch.

To add to this, Scalameta annotation macros and the inline/meta proposal don’t expose semantic APIs to annotation macros. They are purely syntactic. This limits the capabilities of annotation macros but avoids introducing a cyclic dependency between typechecking and macro expansion. It seems that syntactic APIs are still sufficient to accomplish many impressive applications.
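
For readers who haven’t seen the new-style syntax, a purely syntactic annotation macro looks roughly like this (it requires the scalameta paradise plugin; simplified from the documented examples):

import scala.meta._

class hello extends scala.annotation.StaticAnnotation {
  inline def apply(defn: Any): Any = meta {
    // `defn` is the parsed tree of the annotated definition; there is no
    // typechecker or symbol table available inside this block.
    defn match {
      case q"object $name { ..$stats }" =>
        q"""object $name {
              def hello: String = "hello"
              ..$stats
            }"""
      case _ =>
        abort("@hello must annotate an object")
    }
  }
}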

I’m using macro annotations in my scala-js-preact library, which is a Scala.js facade for the JS library. Macro annotations help me reduce boilerplate code in the public API; the resulting API looks a lot cleaner and simpler.
The first implementation of the library API was without macro annotations. It contained a few objects with factory methods for creating entities, and I was not happy with it: it required a lot more description in the docs and was a lot more complex. The macro annotations contain all this logic inside, and at the same time their effect is still understandable (not too much magic).

I’m not sure that I can switch the current API to another metaprogramming technique. As far as I understand, at the moment I have two alternatives:

  1. Code generation with the build tool. Actually, in my case, it could work, but I don’t want to ship my library with an extra sbt plugin. It could confuse users and it makes the library less convenient for getting started.
  2. A compiler plugin. It could work too, but it has the same disadvantages as an sbt plugin, and it also makes the code less portable.

1 Like
  • I use the @dom and @fxml macro annotations to convert XML literals into HTML elements or FXML JavaBeans in Binding.scala (a usage sketch follows this list)
  • I use macro annotations to generate test suites in example.scala
  • For Binding.scala, I need the untyped tree to extract the original for/yield expression, which would be messed up by typer due to the implicit CanBuildFrom.
  • For example.scala, I need code generation for a class, not an expression.
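
Roughly, the @dom usage pattern looks like this (adapted from the library’s published examples; details simplified):

import com.thoughtworks.binding.Binding
import com.thoughtworks.binding.Binding.Var
import com.thoughtworks.binding.dom
import org.scalajs.dom.raw.Node

object Example {
  // The @dom macro rewrites the XML literal (and the .bind calls inside it)
  // before typer ever sees it, which is why the untyped tree matters here.
  @dom def render(count: Var[Int]): Binding[Node] =
    <div>Clicked { count.bind.toString } times</div>
}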

Compiler plugins can achieve the same functionality, but I would need to rewrite the outer structure of those libraries as compiler plugins.

1 Like

Just wanted to chime in here to plead for macro annotations not to be abandoned. As mentioned before, libraries like Freestyle make complex patterns easy that would otherwise require quite some boilerplate.

For my personal project I use them to build interfaces between a Scala (JVM) server and a Scala.js client. In the JS client I use them to, for example, convert Scala futures to JS promises and Scala lists to JS arrays, so that I can easily call the methods in Scala.js from plain vanilla JS.
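
For a sense of what that boilerplate looks like when written by hand (names and annotations below are illustrative, not taken from the project):

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.scalajs.js
import scala.scalajs.js.JSConverters._
import scala.scalajs.js.annotation.JSExportTopLevel

object UserApi {
  // the idiomatic Scala API shared with the JVM server
  def findNames(prefix: String): Future[List[String]] = ???

  // the kind of wrapper a macro annotation could generate for plain-JS callers
  @JSExportTopLevel("findNames")
  def findNamesJs(prefix: String): js.Promise[js.Array[String]] =
    findNames(prefix).map(_.toJSArray).toJSPromise
}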

Replacing this with compiler plugins would be way over my head; macro annotations are easy. Plain old codegen seems like reinventing the wheel to me: it would be way more cumbersome, as it would require parsing the annotated code one way or another.

One of the big plusses (hate to mention it here) of Scala over Kotlin is its metaprogramming capabilities and the amount of boilerplate they can reduce. Losing macro annotations would hurt the language in this respect. I understand that making things work nicely in the IDE is hard, but I’d be perfectly happy with macro annotations that have no IDE support. IDE support for fancy Scala is abysmal as is, red squiggly lines everywhere. In the case of annotation-based code gen I’m not bothered by the IDE borking; no big deal.

7 Likes

Hi,

towards what end do you use macro annotations?

We use them for grafter, which is a dependency-injection library.

why are macro annotations important for you and your users?

Because they allow us to modify the dependency graph of an application by basically doing nothing but adding a new dependency to a case class. This is lots of productivity gained.
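
A sketch of what that looks like in practice (the types below are illustrative; the wiring itself is what grafter’s annotations generate):

// Components in grafter are ordinary case classes; the macro annotations
// generate the Reader instances that build them from the application config.
case class ServerConfig(port: Int)
case class Database(url: String)
case class HttpServer(config: ServerConfig, database: Database)

// Adding a new dependency is just adding one constructor parameter, e.g.
//   case class HttpServer(config: ServerConfig, database: Database, cache: Cache)
// and the regenerated readers absorb the change; no other wiring is edited.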

can you use alternative metaprogramming techniques such as
code generation scripts or compiler plugins to achieve the same
functionality? How would that refactoring impact your whitebox macro?

I have no idea if I would know how to replace them, but I would be extremely annoyed to have to change this code, since it would have a major impact on our applications and components.