What kinds of macros should Scala 3 support?

I agree with you.

I think it would be great if Scala 3 had the ability to do this with blackbox macros.

Maybe it could be done with something like:
Pre-SIP: export, dual* of import

@RpcProxy
class SomeThingProxy extends SomeThing {
  // Generate empty forwarder methods for the blackbox macro to fill in.
  export SomeThing
}

There is a new proposal for typelevel programming on the table, which is intended as a safer alternative to whitebox macros.

In a nutshell:

  • If you need to create new types and see these types in the same compilation run, transparent functions are the mechanism to use (a sketch follows this list).

  • If you need to create new data and types for use in downstream projects, you can use principled metaprogramming and Tasty reflection.
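
As a concrete illustration of the first point, a minimal sketch in the transparent inline form this proposal eventually took in Scala 3 (elsewhere in this thread the older transparent def / ~ spelling is used); defaultOf is a made-up name:

transparent inline def defaultOf(inline kind: String): Any =
  inline kind match {
    case "int"    => 0    // this call site is typed as Int, not Any
    case "string" => ""   // this call site is typed as String
  }

val i: Int    = defaultOf("int")     // the more precise type is visible here,
val s: String = defaultOf("string")  // in the same compilation run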

I like the new transparent mechanism quite a lot, and can see its power. The only thing that seems to be missing, offhand, is something equivalent to Generic and LabelledGeneric – a way to convert between the various sorts of strongly-typed products. Are there plans to add a mechanism for that?

Wouldn’t the new breed of macro annotations (which generate code that becomes visible in dependent projects) fulfill this use case?

Quite possibly, but I’m concerned that it may make applications structurally messy. Or possibly I’m misunderstanding how these macro annotations would come into play.

I guess the question is, if I am writing an application that depends on New Circe (however that works in the new world), can I do my serialization without having to break things down into multiple projects? I don’t mind things getting a little complex for libraries; I’m more concerned if that’s the case for routine applications.

That’s really the use case that I’d like to see fully-worked in the new world – my observation is that many problems seem to be Circe-complete. (That is, they turn out to want essentially the same machinery as Circe.) So if a consumer of New Circe could operate with reasonably minimal boilerplate, I’ll believe that many use cases are solved. But so far, I don’t quite grok how that would work in the new environment.

No. Generics are currently generated on the fly for any case class and are available in the same project where the case class is defined. Limiting them to dependent projects only would kill the entire ecosystem of typeclass derivation…
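
For readers unfamiliar with the status quo being defended here, a small sketch of today’s shapeless-style usage (User is a made-up class); the Generic instance is materialized by a whitebox macro in the very compilation unit that defines the case class:

import shapeless.{::, Generic, HNil}

case class User(name: String, age: Int)

// Materialized on the fly, right next to the definition - no downstream project needed.
val gen = Generic[User]
val repr: String :: Int :: HNil = gen.to(User("Ada", 36))
val back: User = gen.from(repr)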

In that case, I think we would need one standard Generic-like instance generated by the Scala compiler for each case class, so that libraries like circe can leverage it via implicits. And transparent methods could simplify this process (reducing the number of implicit definitions needed). Additionally, it would be really cool if transparent methods and/or implicits could be cached somehow, so we don’t end up generating as much duplicated code as today.
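
For context, a sketch of what such a compiler-generated, Generic-like instance could look like; this is roughly the shape that later shipped in Scala 3 as scala.deriving.Mirror, and Person / fieldNames are illustrative names rather than part of any proposal:

import scala.deriving.Mirror
import scala.compiletime.constValueTuple

case class Person(name: String, age: Int)

// The compiler synthesizes Mirror.ProductOf[Person] automatically; its type members
// (element types and labels) play the role of Generic / LabelledGeneric.
inline def fieldNames[A](using m: Mirror.ProductOf[A]): List[String] =
  constValueTuple[m.MirroredElemLabels].toList.map(_.toString)

val names = fieldNames[Person]   // List("name", "age")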

I think we would need one standard Generic-like instance generated by the Scala compiler for each case class

I’d rather we retain a generic way to do compile-time reflection and surface that information in generated code, whether as terms or types, rather than blessing one compiler-baked Generic implementation. Many projects rely on the ability to do their own compile-time and runtime reflection, for example:

  • beanpuree - beanpuree/README.md at master · limansky/beanpuree · GitHub - inspects Java classes, not Scala classes, and as such can’t be served by a Scala-specific Generic.

  • jsoniter - GitHub - plokhotnyuk/jsoniter-scala: Scala macros for compile-time generation of safe and ultra-fast JSON codecs - relies on compile-time reflection and macros to generate very efficient low-level JSON parsing code.

  • distage - GitHub - 7mind/izumi: Productivity-oriented collection of lightweight fancy stuff for Scala toolchain - our own project, a hybrid runtime and compile-time dependency injection framework that crucially relies on TypeTags; we also generate custom type tags for higher-kinded types: https://github.com/pshirshov/izumi-r2/blob/develop/distage/distage-model/src/main/scala/com/github/pshirshov/izumi/distage/model/reflection/universe/WithDITypeTags.scala

Additionally, it would be really cool if transparent methods and/or implicits could be cached somehow, so we don’t end up generating as much duplicated code as today.

Implicits cannot be cached because they’re not coherent and not pure. Transparent methods are all that + dependent on call site scope.
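
To make the coherence point concrete, a hedged sketch (Show, Lower and Upper are made-up names): the same implicit search can resolve to different instances at different call sites, so the compiler cannot cache one resolution and reuse it globally without changing program meaning.

trait Show[A] { def show(a: A): String }

object Lower { implicit val showInt: Show[Int] = i => s"int: $i" }
object Upper { implicit val showInt: Show[Int] = i => s"INT: $i" }

def render(i: Int)(implicit s: Show[Int]): String = s.show(i)

// The same expression picks up whichever instance is in scope at the call site.
locally { import Lower._; render(1) }  // "int: 1"
locally { import Upper._; render(1) }  // "INT: 1"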

We plan to put typeclass derivation in the language, by adding a scheme where typeclasses can be defined automatically for case class hierarchies. What transparent functions give us in this respect is that we can have a simple fold-based derivation mechanism and still get the full power of Generic and LabelledGeneric. At least I hope so - we still have to try that out.
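
As a rough sketch of what language-level derivation could look like (this is close to the derives clause Scala 3 eventually shipped; Eq and Tree are illustrative, and the instance body is a placeholder rather than a real fold over the structure):

import scala.deriving.Mirror

trait Eq[A] { def eqv(x: A, y: A): Boolean }

object Eq {
  // The single entry point the compiler needs in order to expand `derives Eq`.
  // Placeholder logic: a real instance would fold over the Mirror's elements.
  def derived[A](using m: Mirror.Of[A]): Eq[A] = (x, y) => x == y
}

enum Tree derives Eq {
  case Leaf(value: Int)
  case Node(left: Tree, right: Tree)
}

val same = summon[Eq[Tree]].eqv(Tree.Leaf(1), Tree.Leaf(1))  // true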

How can we minimize def mImpl(x: Expr[T]) invocations?

If I understand correctly, in this case:

transparent def m(x: T) = ~mImpl('x)
...
def mImpl(x: Expr[T]) = ...

mImpl will be called at every usage of the method m:

val m1 = m(1) // first call to mImpl
val m2 = m(2) // second call to mImpl

That can be very expensive.
Is it possible for mImpl('x) to be called only in the class definition?

transparent trait Factory(val a: T) {
  transparent def apply(x: T) = ~mImpl('x, 'a)
}

object Factory extends Factory("type")

At compile time it would be transformed into:

object Factory extends Factory("type") {
  transparent def apply(x: T) = x match {
    case 0 => ...
    case _ => ...
  }
}

@AMatveev but mImpl is called at compile-time just like current macros, so what’s the difference?

- compilation time
- code size

With 4,000,000 lines of code and 10,000 custom generated types, this becomes very important :)

(These are real numbers from one of our projects)

@AMatveev It seems that something like what you’re asking for (run a macro once per class) could be achieved by typeclass derivation. But a macro like transparent def apply(x: T) is too powerful to be optimized like you ask: it must be expanded at each invocation (just like with current macros) because it can generate different code for each x.
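
To make the “expanded at each invocation” point concrete, a minimal sketch in the ${ ... } / '... syntax that later replaced ~ / ' (sign and signImpl are made-up names); the generated code genuinely differs per call site, so there is no single expansion to reuse:

import scala.quoted.*

inline def sign(inline x: Int): String = ${ signImpl('x) }

// Produces different code depending on the argument: a string literal when the
// argument is a compile-time constant, a runtime check otherwise.
def signImpl(x: Expr[Int])(using Quotes): Expr[String] =
  x.value match {
    case Some(n) if n >= 0 => Expr(s"non-negative literal $n")
    case Some(n)           => Expr(s"negative literal $n")
    case None              => '{ if $x >= 0 then "non-negative" else "negative" }
  }

// In a downstream compilation unit (macros expand only in later runs):
//   sign(1)                         // becomes the literal "non-negative literal 1"
//   sign(-2)                        // becomes the literal "negative literal -2"
//   sign(scala.io.StdIn.readInt())  // falls back to the runtime if/else above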

I know, but I was asking about the difference between transparent and Scala 2 macros in this regard, which seems to be “none”.

That would be great. We use code generation at the moment.
It has disadvantages, but I cannot see any simple alternative.

I’m not sure how relevant this thread is anymore, but one more note:

I recently started to play with quill, which is an awesome lib for DB access. It relies on a quoted DSL to construct SQL queries during compilation. I believe this won’t be possible without whitebox macros. It would be very sad if such libs weren’t possible in Scala 3.

Excuse me if it was discussed before, but I could find neither quill nor quoted DSL in this thread.

I believe it won’t be possible without the whitebox macros.

Can you explain why it won’t be possible without whitebox macros? quill is surely a good use case so it’s worth figuring out what’s needed to support it.

Quill propagates refinement types along with their quoted SQL snippets in order to perform SQL construction & optimization at compile time. Greatly simplified, it basically takes

val x = quote{1}
val y = quote{x + 2}
val z = quote{y + 3}

And quote refines the types being returned so that the contents of each call are visible in the type as an annotation:

val x: Quote[Int] @ast("1") = quote{1}
val y: Quote[Int] @ast("1 + 2") = quote{x + 2}
val z: Quote[Int] @ast("1 + 2 + 3") = quote{y + 3}

And then proceeds to perform optimizations at compile-time based on the annotated AST:

val x: Quote[Int] @ast("1") = Quote(1)
val y: Quote[Int] @ast("1 + 2") = Quote(3)
val z: Quote[Int] @ast("1 + 2 + 3") = Quote(6)

Quill doesn’t need all the power of whitebox macros. The annotations it uses to pass data around between invocations are “side channel”-ish: they never affect the “primary” type, e.g. Quote[Int], but only the contents of downstream annotations.

Without the ability to pass annotations between calls via their types, Quill falls back to performing the optimisations and transformations at runtime. This works, but pushes computation overhead and failure reporting to runtime, whereas with the annotations present it can do all that and report any errors before you ever run any code. This would be the case if whitebox macros did not exist (blackbox macros cannot refine the types they return based on the captured AST).

Thanks for the explanations!

Note that in the proposed new metaprogramming framework, we can still create new types after typing; it’s just that those types are unavailable to the main type checking itself. So refinement types could be created as a side channel during macro expansion.

However, there’s another problem: we need to solve dependencies and separate compilation. Refinement types depend on other refinement types in fairly arbitrary ways, and this has to be handled. A large part of the more sophisticated machinery in Scala’s typer is there to deal with this problem. It seems a later phase of macro expansion would have to implement something similar.

It seems that it’d be possible to implement Quill’s compile-time query generation with inline. @odersky, will the tree of the method marked with inline be visible if it is used as a parameter of a macro?

will the tree of the method marked with inline be visible if it is used as a parameter of a macro?

Yes. You can get at it using Tasty reflection. @nicolasstucki can give more details. He just presented a paper at the Scala Symposium that shows a key technique for doing this.
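
For readers landing here later, a minimal sketch of what “getting at the tree” looks like with the reflection API that shipped in Scala 3; showTree and the commented call-site code are illustrative, not from the thread:

import scala.quoted.*

// A macro that returns the structure of whatever expression it receives. If the
// argument is (or calls) an inline method, the already-inlined tree is what
// Tasty reflection sees here.
inline def showTree[T](inline expr: T): String = ${ showTreeImpl('expr) }

def showTreeImpl[T: Type](expr: Expr[T])(using Quotes): Expr[String] = {
  import quotes.reflect.*
  Expr(expr.asTerm.show(using Printer.TreeStructure))
}

// In a downstream compilation unit:
//   inline def query = 1 + 2
//   showTree(query)   // yields the tree structure of the inlined 1 + 2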