Pre-SIP: Usable quotes API in std lib

Motivation

As a user of the Scala language, after much effort, it has become apparent that the way the quotes API is defined offers an unfortunately low level of usability. This is caused by several factors.

  1. Because the std lib defines all the types within quotes.reflect as abstract type members, the API is extremely difficult to use.

    1. IntelliJ cannot resolve the extension methods at all. So the many developers who use IntelliJ and would like to program against this API are severely limited in code completion, which is debilitating when working with an API of compiler-level complexity. Maybe Metals fares better; I'm not sure.

    2. One might say “IntelliJ just needs to fix its completion”, but that does not address the fact that this style of definition is (a) unusual and (b) completely loses exhaustive pattern matching. Imagine typing symbol.tree mat..., looking to match on the tree of a symbol: not only can the IDE not find .tree, but even if you ascribe it with (symbol.tree: Tree) mat..., you cannot exhaustively match on what the options are. Many people would probably quit here. Speaking from personal experience, I gave up on this API twice before I finally wrote an entire wrapper around it, and was then able to make actual progress. The std-lib definition looks like this:

      type DefDef <: ValOrDefDef
      
      given DefDefTypeTest: TypeTest[Tree, DefDef]
      
      val DefDef: DefDefModule
      
      trait DefDefModule { this: DefDef.type =>
        def apply(symbol: Symbol, rhsFn: List[List[Tree]] => Option[Term]): DefDef
        def copy(original: Tree)(name: String, paramss: List[ParamClause], tpt: TypeTree, rhs: Option[Term]): DefDef
        def unapply(ddef: DefDef): (String, List[ParamClause], TypeTree, Option[Term])
      }
      
      given DefDefMethods: DefDefMethods
      
      trait DefDefMethods:
        extension (self: DefDef)
          def paramss: List[ParamClause]
      
          def leadingTypeParams: List[TypeDef]
      
          def trailingParamss: List[ParamClause]
      
          def termParamss: List[TermParamClause]
      
          def returnTpt: TypeTree
      
          def rhs: Option[Term]
        end extension
      end DefDefMethods
      
  2. The scoped manner in which types like Term and TypeRepr are defined makes them very restrictive to work with at scale. When writing a large or complicated program, it's necessary to be able to split code up into multiple files, and to define types and abstractions for your domain logic.

    1. Imagine trying to define the following type:

      final case class Function(
          rootTree: Tree,
          params: List[Function.Param],
          body: Term,
      )
      object Function {
      
            final case class Param(
                name: String,
                tpe: TypeRepr,
                tree: Tree,
                fromInput: Option[Expr[Any] => Expr[Any]],
            )
      
      }
      
    2. In order to do this, you have to either define it within your function body, like:

      def myCode(using quotes: Quotes): Any = {
        import quotes.reflect.*
  final case class Function( /* ... */ )
        
        // do stuff with Function
      }          
      
    3. Or within a class, like:

final class MyCode(using val quotes: Quotes) {
  import quotes.reflect.*
  final case class Function( /* ... */ )
      
        // do stuff with Function
      }
      
    4. It would be insane to try to define all of this within a single function, so many open-source libs go with an approach like #3. But even then, it's very easy to end up with multi-thousand-line files inside something like MyCode, because the type system makes it very difficult to split things out. It is sometimes possible, if you try really hard, to split things out into separate files, but even then there are limitations, and it makes things very messy:

      final class Types1(using val quotes: Quotes) {
        import quotes.reflect.*
      
        final case class Function(
            rootTree: Tree,
            params: List[Function.Param],
            body: Term,
        )
        object Function {
      
          final case class Param(
              name: String,
              tpe: TypeRepr,
              tree: Tree,
              fromInput: Option[Expr[Any] => Expr[Any]],
          )
      
          def parse(term: Term): Function =
            ??? // TODO (KR) :
      
        }
      
      }
      
      final class Types2[Q <: Quotes](using val quotes: Q) {
        import quotes.reflect.*
      
        final case class Function(
            rootTree: Tree,
            params: List[Function.Param],
            body: Term,
        )
        object Function {
      
          final case class Param(
              name: String,
              tpe: TypeRepr,
              tree: Tree,
              fromInput: Option[Expr[Any] => Expr[Any]],
          )
      
          def parse(term: Term): Function =
            ??? // TODO (KR) :
      
        }
      
      }
      
      final class Logic1(using val quotes: Quotes) {
        val types: Types1 = Types1(using quotes)
        import quotes.reflect.*
        import types.*
      
        def getFunction(term: Term): Function =
          Function.parse(term) // error, wrong `Quotes` type
      
      }
      
      final class Logic2(using val quotes: Quotes) {
        val types: Types1 = Types1(using quotes)
        import types.*
        import types.quotes.reflect.* // this matters
      
        def getFunction(term: Term): Function =
          Function.parse(term)
      
      }
      
      final class Logic3(using val quotes: Quotes) {
        val types: Types2[quotes.type] = Types2(using quotes)
        import quotes.reflect.*
        import types.*
      
        def getFunction(term: Term): Function =
          Function.parse(term)
      
      }
      
    5. It seems like it should be very intuitive to be able to do something along the lines of:

      import scala.quoted.ast.*
      
      final case class Function(
          rootTree: Tree,
          params: List[Function.Param],
          body: Term,
      )
      object Function {
      
        final case class Param(
            name: String,
            tpe: TypeRepr,
            tree: Tree,
            fromInput: Option[Expr[Any] => Expr[Any]],
        )
      
        def parse(term: Term): Function =
          ??? // TODO (KR) :
      
      }
      
      def myCode(expr: Expr[Any])(using quotes: Quotes): Function =
        Function.parse(expr.asTerm)
      
    6. As a general principle, if using an API borderline forces you to define all related logic in a single file, the API is not designed properly.

Potential Downfalls

It is possible that there is something inherent to the Quotes API that forces all instances to be scoped to the same Quotes instance, but this seems unlikely, for a few reasons:

  1. The API enforces that the only way you get an instance of Quotes is via an inline def plus a spliced impl method, so it's not as if there are many instances of Quotes coming from different roots. You only ever get an initial instance from one place, and any other nested instances are derived from that one.
  2. If you really care about the exact instance of Quotes which a Symbol or Tree belongs to, it seems far more detrimental to have an API that encourages files thousands of lines long, with dependent types everywhere, where the only thing making them usable is a global import quotes.reflect.* at the top. Besides, if you are quoting and splicing Exprs, and have helper types like final case class MyType(repr: TypeRepr), then any MyType created in some nesting technically has the wrong Quotes instance anyway.
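To illustrate that second point, here is a hypothetical sketch (the wrapper type is invented for illustration): a helper created inside a nested splice compiles fine, yet the TypeRepr it wraps is scoped to the outer Quotes, not the nested instance that is actually current at that point.

```scala
import scala.quoted.*

def outerImpl(using q1: Quotes): Expr[Int] = {
  import q1.reflect.*
  // hypothetical helper type, scoped to q1, as the current API forces
  final case class MyType(repr: TypeRepr)

  '{
    1 + ${
      // inside this splice a nested Quotes instance is conceptually current,
      // yet nothing stops us from building a q1-scoped MyType here
      val helper: MyType = MyType(TypeRepr.of[Int])
      Expr(2)
    }
  }
}
```

So the path-dependent scoping does not actually guarantee that every value was built under the "current" Quotes; it only makes mixing scopes inconvenient.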

Suggested Design


package scala.quoted.ast

trait Quoted private[ast] {
  def quotes: Quotes
}

trait Symbol private[ast] extends Quoted

sealed trait Tree extends Quoted {

  def symbol: Symbol

}

sealed trait Statement extends Tree

sealed trait Term extends Statement

sealed trait Definition extends Statement

sealed trait ValOrDefDef extends Definition

trait ValDef private[ast] extends ValOrDefDef
object ValDef {
  
  def apply(symbol: Symbol, rhs: Option[Term])(using quotes: Quotes): ValDef =
    quotes.reflect.ValDef.apply(symbol, rhs)
  
}

This way, you still need an instance of Quotes to create instances of things, but you are not burdened with AST nodes being scoped as inner types. The implementations can then happen elsewhere, like:

package scala.quoted.ast.impl

private[quoted] trait Tree { self: ast.Tree =>

  def symbol: ast.Symbol = ??? // actual implementation
  
}

private[quoted] final case class ValDef(quotes: Quotes, /* ... */) extends ast.ValDef

Also, without the limitation of the inner classes, you can have nice top level definitions like:

final class Expressions[F[_]] // ...

type Id[A] = A

trait ProductMirror[A] {

  val tpe: Type[A]
  val label: String
  val fields: Seq[ProductMirror.Field[?]]

  final case class Field[B](
      idx: Int,
      name: String,
      sym: Symbol, // no quotes nesting, just a normal class
      tpe: Type[B],
      get: Expr[A] => Expr[B],
  ) {

    def getExpr[F[_]](expressions: Expressions[F]): Expr[F[B]] = ??? // ...

    def typeClass[F[_]]: Expr[F[B]] = ??? // ...

  }

  def typeClasses[F[_]]: Expressions[F] = ??? // ...

  def instantiate(f: [b] => Field[b] => b): A = ??? // ...

  def instantiateEither[L](f: [b] => Field[b] => Either[L, b]): Either[L, A] = ??? // ...

  // and many other very easily usable builders, no fighting with scope

}

And derivation is just as easy:

trait Show[A] {
  def show(a: A): String
}
object Show {

  def product[A](g: ProductMirror[A])(using quotes: Quotes): Expr[Show[A]] = {
    def fields(a: Expr[A]): Seq[Expr[String]] =
      g.fields.flatMap { f =>
        Seq(Expr(f.name + " = "), '{ ${ f.typeClass[Show] }.show(${ f.get(a) }) })
      }
    def all(a: Expr[A]): Seq[Expr[String]] =
      Seq(
        Seq(Expr(g.label + "(")),
        fields(a),
        Seq(Expr(")")),
      ).flatten

    '{
      new Show[A] {
        def show(a: A): String = ${ Expr.ofSeq(all('a)) }.mkString
      }
    }
  }

}

In a world without this nesting constantly getting in your way, there is, IMO, really no need for mirrors. All the mirrors give you is a boatload of asInstanceOf, and uncertainty about what is inlined and what isn't.

With a usable API, it's actually easier and more type safe to implement generic type classes directly with quotes/exprs, instead of type-level summon functions and mirrors. But currently, doing this requires jumping through all sorts of extra hoops.

It also gives you far more control over the code that's generated. And you can easily do things like caching instances in lazy vals outside your instance, so that when you summon an instance, it just reads the cached lazy val.

TLDR: there are lots of amazing things you can do with quotes & exprs, but the way the API is defined makes the programmer's life quite a bit more difficult.

1 Like

I would suggest you attempt to create this as an external library first that uses the existing scheme internally.

I would say I don’t see this ever changing again, to avoid breakage for all those who already moved to Scala 3 macros. But if you create a library that is better and sits on top, that’s the proper way forward: support the ‘old’ API and offer a better one.

4 Likes

@soronpo, before I posted this yesterday, this is what I was going through the exercise of doing. The reason I made the post suggesting the change be made to the std lib instead of an external library is as follows:

  1. This API is HUGE, and unless there were some kind of code-generation mechanism, keeping it in sync with the std lib would be humanly impossible.
  2. The effort of converting these types back and forth was very grueling, and compiled very slowly, so it seemed easier to convert it at the std-lib level.

Do you think there would be an openness to contributing this to the std lib, if it could be done in a backwards-compatible manner?

The std lib very rarely changes and the changes are also minor and backwards compatible. And I bet you can actually do most of the work with AI. Just write a partial API of what you want to do so a pattern is clearly understandable by the AI and ask it to complete it for you.

I really don’t see why it should be like this and even if so, I’m skeptical a new API from scratch would be much faster.

If you can make this backwards compatible, sure. I’m not sure this needs to go through a SIP process. Currently changes to the stdlib are up to the decision of the compiler team.

1 Like

What would be the best place to get feedback from the compiler team? Discord? Github issue?

I feel your pain. Not only is the API hard to use, it is a footgun, too.

Regarding exhaustivity, though, I don’t think you can hope for exhaustive pattern matches unless you freeze the language. If you pattern-match on user-provided code, I think you have to settle for supporting only a defined subset of code constructs, and try to give a helpful error message otherwise.
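A minimal sketch of that approach (the chosen subset and the function name are just for illustration): match only the constructs you support and report a readable error otherwise, instead of relying on exhaustivity checking.

```scala
import scala.quoted.*

// describe a tree if it falls in a supported subset; otherwise abort
// with a helpful message rather than crashing on an unhandled case
def describe(using q: Quotes)(tree: q.reflect.Tree): String =
  import q.reflect.*
  tree match
    case vd: ValDef => s"val ${vd.name}"
    case dd: DefDef => s"def ${dd.name}"
    case t: Term    => s"term of type ${t.tpe.show}"
    case other      => report.errorAndAbort(s"unsupported construct: ${other.show}")
```

The `case vd: ValDef` patterns work today because the API provides TypeTest givens for each node type; what it cannot give you is a compile-time guarantee that the cases are complete.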

Regarding the current design of Quotes as a module with type definitions inside, one can legitimately question, as you do, whether it does not cause more problems than it solves. I’d even say that your example of

final class Logic3(using val quotes: Quotes) {
  val types: Types2[quotes.type] = Types2(using quotes)

is still rather mild in terms of how complex things can get.

However, my preferred solution would be to improve usability of this sort of modular programming, as it would be useful much more broadly than just the Quotes API. There are some ideas in this thread.

1 Like

To clarify, I think that the functionality provided by macros in scala-3 is absolutely amazing. The amount of power under the hood, and the things you can achieve, is amazing. The only thing I have a problem with here is usability and scoping.

On your point about not having exhaustive pattern matching: this is a huge bummer, and a totally fair point. That being said, I still think having actual traits defined, instead of an abstract type B <: A, improves both:

  1. The compiler's ability to understand the code, and to generate match statements at all. Having the IDE write this for you:
    x match {
      case A => ???
      case B => ???
      case C => ???
      case _ => ???
    }
    
    is still WAY better than getting:
    x match {
      case 
    }
    
    because it can't figure out anything.
  2. The programmer's ability to understand the code. All of this module or dependent nesting, even if the semantics and usability of such a concept were improved, feels like overkill and is confusing, in my opinion. Why should the user be exposed to such levels of complexity? It feels very natural to me that I have a scala.quoted.Expr[?], and as long as I have a given Quotes instance, I can call .asTerm and get a scala.quoted.ast.Term. Then I can click on .asTerm, because the IDE actually knows it exists, see trait Term, and find it defined like a normal type that I understand and see every day as a Scala developer.

@TomasMikula My 2 questions for you would be:

  1. What value do you see in having these types defined within some dependent module (even with a better way to express that), as opposed to having them be top-level definitions?
  2. Could you provide an example of how your module improvement proposal would look for the following example? Admittedly, I had a bit of a difficult time following the other example, potentially because it was not as related to real examples I have experienced. Would it be possible to define the following helper as a top-level definition?
    final class K0[Q <: Quotes](using val quotes: Q) {
      import quotes.reflect.*
    
      trait ProductGeneric[A] {
    
        val fields: Seq[Field[?]]
    
        final case class Field[I](
            idx: Int,
            symRepr: Symbol,
            constructorSymRepr: Symbol,
            typeRepr: TypeRepr,
            tpe: Type[I],
            valDef: ValDef,
            get: Expr[A] => Expr[I],
        )
    
      }
    
    }
    

The quotes API was designed this way since it needs to hide the compiler. Without the compiler doing the actual work, you would need to re-implement most of its functionality in the quotes implementation. That would take many years, and the result would probably still not be a 100% match. So that’s not a viable option. That leaves you with two possibilities:

  • Hide by type abstraction. That’s what’s done in the Quotes API.
  • Hide by wrapping everything. You’d need to wrap every exposed type and introduce global bijective maps to go back and forth without losing reference identity. I believe that’s also a lot of work, and I doubt the added complexity of the interface layer gets amortized by easier usage. But if you want to go ahead with the idea of an alternative facade for the quotes API that would be the way to go.

One caveat though: There is no way a massive blob like that will land in the standard library without extensive trials in the community at large. So it will need to start life as a separate library. Then, if most people would agree that it’s an important improvement for their work, we can discuss whether to include this in stdlib at some later point.

4 Likes

When I say “my preferred solution would be to improve usability of this sort of modular programming”, it is where I’d prefer efforts be directed, given the current state. I didn’t mean to imply that the current design was superior.

Your K0 is already a top-level definition, so I suppose you want to make ProductGeneric top-level. But you already know how to do that, too. For example, you can add the [Q <: Quotes] type parameter or val quotes: Quotes member to the ProductGeneric trait:

trait ProductGeneric[Q <: Quotes, A] { ... }

// or 

trait ProductGeneric[A] {
  val quotes: Quotes
  ...
}

The problem arises when you need to convince the typechecker that, for example, TypeRepr inside p1: ProductGeneric is the same type as TypeRepr inside p2: ProductGeneric. The proposal (and the proposal linked from it) are supposed to help with that problem.

For example, it would allow you to define a type alias

type ProdGeneric(using q: Quotes)[A] = ProductGeneric[q.type, A]

// or

type ProdGeneric(using q: Quotes)[A] = ProductGeneric[A] { val quotes: q.type }

In any context where given q: Quotes is available, you would simply use the type ProdGeneric[A] (inferred to be ProdGeneric(using q)[A]). If you had p1, p2: ProdGeneric[A], the compiler would know that TypeRepr inside p1 is the same as TypeRepr in p2, which is the problem I was trying to solve.

As it is currently defined, a Quotes object is tightly coupled to the exact symbol and position a macro is expanded from; it's not “global” in that sense.

1 Like

Thanks for all the responses! It seems like the feedback is that my best path forward is to create a 3rd-party lib, so I did that.

After embarking on this journey, I am fully convinced that using mirrors is not worth it. It seems to me that what Scala developers looking to derive via macros really need is a sort of “macro mirror”, which exposes APIs similar to shapeless/magnolia but doesn't try to hide behind mirrors. The implementation of quoting and splicing in Scala 3 is SO good that, with the right helpers, it's actually easier than mirrors, generates better code than mirrors, and gives you much more control over knowing what will be generated. This is what oxygen-meta aims to provide.

If anyone comes across this post and is interested in seeing how this works, I’ve started a video series:

1 Like

I wouldn’t say Mirrors are not worth it. Shapeless, Magnolia, and now Mirrors lowered the entry level required to generate code at compile time, without requiring users to learn low-level APIs. Without them, Circe/Pureconfig/etc. would not have become a thing. Libraries like Chimney or Ducktape also started as something Shapeless/Mirrors-based, and only later were rewritten to macros.

But I would say that to deliver something with as good a developer experience as Java’s counterparts, type class derivation should be more user friendly: provide some logs, show output code on demand, have good human-readable errors, great runtime performance, low compilation times. And that is much harder to do with Mirrors and quite easy with macros… provided that there are some good utilities for common cases. And these utilities are easier to develop as a third-party library (that can iterate as fast as necessary) than as something tied to the compiler’s release cycle and backward-compatibility requirements.

I am happy for a new effort. If you are interested, perhaps we can create a working group somewhere, where people developing such utilities could exchange experience?

1 Like

@MateuszKubuszok, I would encourage you to watch the monoid video linked above. I would strongly challenge the idea that a user of shapeless/magnolia could implement a typeclass any more easily than I have done above using a macro implementation. In the vast majority of common cases, even though you are implementing with macros, with the right helpers, I have made it through 5 typeclass-derivation videos without once being exposed to Tree/Term/TypeRepr. I feel pretty confident saying this is some pretty great stuff, and I feel like I've hit the nail on the head with this one… I would love to get some sort of working group together on this to continue to improve!!!

provided that there are some good utilities for common cases

I think that this is the most important part to hit on. My experience with the mirror implementations is that these libraries provide helpers for common cases. If your use case goes beyond the simple cases, it's borderline impossible to get what you need done using mirrors. With a macro implementation, this is not the case. As demonstrated in the video series above, things are not set up as “this hard-coded case either works for me or doesn't”. Everything is done with composable building blocks in their own right, and then there are some super-usable helpers on top of those for the most common of cases.

I’ve seen it, and I looked at the code on GitHub. I have no doubt that there can be some API that would help people migrate away from Shapeless/Mirrors to macros and not deal with Trees - because I showed benchmarks of code implemented with such an API. :wink:

I have no doubt that it is already useful, just as I have no doubt that any approach you take with such a library would sit well with some people and not so much with others, especially since different libraries might have different goals. There is already more than one such library - for Chimney, I had to develop chimney-macro-commons to make it easier to write cross-platform macro code. But there are a lot of Chimney-specific assumptions in it, which might limit its usability in other cases. (Which is why I started working on a new library without these limitations, but still allowing macro cross-compilation.)

But I don’t want this post to become a library contest: there is a place for more than one such library in the community (we have something like 7-8 newtype libraries). I write this just to clarify that it would probably be more about maintainers of several different approaches sharing their findings and collaborating. The biggest issue with such a library (well, any macro library) is that in the long run your users run into more and more corner cases, and it quickly becomes a collection of workarounds. And it is a lot of duplicated effort when several maintainers have to independently design, from scratch, workarounds for the same problem. Whether in the long run there will be one such library or multiple is irrelevant - it would be valuable for people dedicated to solving this problem to have some common place for discussing with the compiler team what would be OK and what would be cursed-and-soon-to-break-again, a place for sharing solutions, newly found gotchas, etc.

You don’t have to sell me on macros :wink: I made a whole presentation where I showed that if you want to:

  • optimize the compilation times
  • emit fast bytecode
  • provide sane human-readable errors, ideally all found issues at once, not the “first issue fails the compilation with implicit not found”
  • provide opt-in diagnostics how the derivation went, what decisions were made and why
  • do all of the above for nested derivation

with macros, one can just write the same code as they do to manipulate data in FP codebases. With Shapeless/Mirrors, one is doing Prolog programming on method signatures, often with failure messages as good as Prolog’s false (“implicit not found”), and making things user-friendly… well, it quickly turns into a hard riddle. I didn’t start thinking about a “macro standard library” because I found macros easier than Mirrors, but because I believe them to be the only way to deliver mature, production-ready libraries (for me, “when it’s wrong, it fails to compile” on its own is just PoC quality, and we’ll keep losing developers to Java/Kotlin if we keep thinking that’s enough).

Sorry for the digression :folded_hands:

1 Like

We are definitely in agreement here. And I wasn't trying to say that this library is the one true macro library - just that the concept of a “mirror representation” within a macro, using Symbol, Term, TypeRepr, etc. under the hood, as opposed to the current representation of Mirrors using type Labels <: Tuple, is, I think, 100% the direction Scala, and all libraries, should move towards.

You say it best here:

but because I believe them to be the only way to deliver mature, production-ready libraries

The implied statement under the hood here is that Mirrors do not meet that standard, and I 100% agree with that sentiment.

1 Like

I would like to revive this thread. Since my post 6 months ago, I have been working on my 3rd-party version of the ideas I proposed here, and I think it is worth sharing and exploring the possibility of merging this into the standard library. The change proposes the same API that currently exists, except it moves everything into top-level definitions. Here is the draft PR that I am hoping can gain acceptance from the Scala maintainers: DRAFT (seeking feedback) : started refactoring of Quotes.reflect into a top level package by Kalin-Rudnicki · Pull Request #24833 · scala/scala3 · GitHub

TLDR:

The world of Scala meta-programming with macros opens up like a blooming flower when everything you need isn't hidden away in a 5,500+ line file that is nearly unparsable by both humans and IDEs.

You can use these types exactly like you’d expect to use any other Scala type:

package myProject.pkg

import scala.quoted.compiletime.*

final case class MyHelperThing(term: Term, typeRepr: TypeRepr, symbol: Symbol)
object MyHelperThing {
  def unapply(term: Term)(using Quotes): Option[MyHelperThing] = ???
}

There is no crazy (using quotes: Quotes)(term: quotes.reflect.Term, typeRepr: quotes.reflect.TypeRepr, symbol: quotes.reflect.Symbol) going on everywhere, making it extremely difficult to get things to type-check. In that world, have fun wasting hours or days trying to figure out how to define two helpers in two different files while making the types line up.
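As a concrete sketch of that pain (the class and method names here are invented for illustration), consider two helpers that each capture a Quotes; the compiler cannot see that the two captured instances coincide:

```scala
import scala.quoted.*

final class HelperA(using val qa: Quotes) {
  import qa.reflect.*
  def typeOf(t: Term): TypeRepr = t.tpe
}

final class HelperB(using val qb: Quotes) {
  import qb.reflect.*
  val a = HelperA(using qb)

  def use(t: Term): Unit =
    a.typeOf(t) // error: found qb.reflect.Term, required a.qa.reflect.Term
}
```

The workarounds in today's API are to import a.qa.reflect.* instead of qb.reflect.*, or to parameterize HelperA with [Q <: Quotes], which is exactly the Types1/Types2/Logic3 dance shown earlier in this thread.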

Here are a couple of examples of things I was able to achieve using a wrapper like this. The only “gotcha” I ran into was forgetting to change the symbol owner when defining a lazy val, which the existing API doesn't protect you from anyway.

HTTP Client + Server

Allows you to parse a Scala trait into a matching HTTP client + server, using the same concepts as a canonical derives JsonCodec, except you are diving into a trait instead of using the more easily accessible caseFields.

@experimental
trait UserApi {

  @route.get("/user/%")
  def userById(
      @param.path id: UUID,
  ): IO[ApiError, User]

  @route.get("/user")
  def allUsers(): UIO[Chunk[User]]

  @route.post("/user")
  def createUser(
      @param.body.json create: CreateUser,
  ): UIO[User]

  @route.get("/user/search")
  def userSearch(
      @param.query firstName: Option[String] = None,
      @param.query lastName: Option[String] = None,
  ): UIO[Set[User]]

  @route.get("/user/%/events")
  def userEvents(
      @param.path userId: UUID,
      @param.query numEvents: Option[Int],
  ): ServerSentEvents[String, UserEvent]

  @route.get("/abc/%/ghi")
  def macroTest(
      @param.path.custom value: CustomPathItem,
      @param.query instant: Option[Instant],
      @param.query limit: Option[Int],
      @param.header authorization: String,
  ): IO[String, String]

}
object UserApi {

  given DeriveEndpoints[UserApi] = DeriveEndpoints.derived
  given DeriveClient[UserApi] = DeriveClient.derived

}
// if you have a generic HTTP `Client`, then you get an instance of `Api`, implemented over HTTP
trait DeriveClient[Api] {
  def client(client: Client): Api
}

// if you have an implementation of your trait, you can serve it over HTTP
trait DeriveEndpoints[-Api] {
  def endpoints: Growable[Endpoint[Api]]
  final def appliedEndpoints(api: Api): AppliedEndpoints = AppliedEndpoints { endpoints.map(_(api)) }
}

SQL query DSL

Allows you to derive read/write Codecs for JDBC PreparedStatement/ResultSet, a Scala query DSL, and SQL migrations generated from your classes

final case class Person(
    @primaryKey id: UUID,
    groupId: UUID,
    first: String,
    last: String,
    age: Int,
)
object Person extends TableCompanion[Person, UUID](TableRepr.derived[Person])

final case class Ints(
    a: Int,
    b: Int,
)
object Ints extends TableCompanion[Ints, Unit](TableRepr.derived[Ints]) {

  @compile
  val intsOrderByABOffsetOptional: QueryIO[(Option[Int], Option[Int]), Ints] =
    for {
      l <- input.optional[Int]
      o <- input.optional[Int]
      i <- select[Ints]
      _ <- orderBy(i.a.asc, i.b.desc)
      _ <- limit(l)
      _ <- offset(o)
    } yield i

  @compile
  val personSearch: QueryIO[(Option[String], Option[String]), Person] =
    for {
      first <- input.optional[String]
      last <- input.optional[String]
      p <- select[Person]
      _ <- where if p.first == first && p.last == last
    } yield p

  @compile
  val selectSubQuery1: QueryO[(Person, Option[Note])] =
    for {
      p <- select.subQuery("sub1") {
        for {
          p <- select[Person]
          _ <- orderBy(p.first.asc)
          _ <- limit(const(2))
        } yield p
      }
      n <- leftJoin[Note] if n.personId == p.id
    } yield (p, n)

}

Extremely easy typeclass derivation

import oxygen.meta.k0.*
import scala.quoted.*

trait MyMonoid[A] {
  def zero: A
  def join(a: A, b: A): A
}
object MyMonoid {

  given int: MyMonoid[Int] =
    new MyMonoid[Int] {
      override def zero: Int = 0
      override def join(a: Int, b: Int): Int = a + b
    }

  given string: MyMonoid[String] =
    new MyMonoid[String] {
      override def zero: String = ""
      override def join(a: String, b: String): String = a + b
    }

  private def zeroImpl[A: Type](gen: ProductGeneric[A], instances: Expressions[MyMonoid, A])(using Quotes): Expr[A] =
    gen.instantiate.id { [b] => (_, _) ?=> (field: gen.Field[b]) =>
      val fieldInstance: Expr[MyMonoid[b]] = field.getExpr(instances)
      '{ $fieldInstance.zero }
    }

  private def joinImpl[A: Type](gen: ProductGeneric[A], instances: Expressions[MyMonoid, A])(aExpr: Expr[A], bExpr: Expr[A])(using Quotes): Expr[A] =
    gen.instantiate.id { [b] => (_, _) ?=> (field: gen.Field[b]) =>
      val fieldInstance: Expr[MyMonoid[b]] = field.getExpr(instances)
      val fieldAExpr: Expr[b] = field.fromParent(aExpr)
      val fieldBExpr: Expr[b] = field.fromParent(bExpr)
      '{ $fieldInstance.join($fieldAExpr, $fieldBExpr) }
    }

  private def derivedImpl[A: Type](using Quotes): Expr[MyMonoid[A]] = {
    val gen: ProductGeneric[A] = ProductGeneric.of[A]
    gen.cacheVals.summonTypeClasses[MyMonoid]().defineAndUse { instances =>
      '{
        new MyMonoid[A] {
          override def zero: A = ${ zeroImpl[A](gen, instances) }
          override def join(a: A, b: A): A = ${ joinImpl[A](gen, instances)('a, 'b) }
        }
      }
    }
  }

  inline def derived[A]: MyMonoid[A] = ${ derivedImpl[A] }

}
final case class MyCaseClass(a: Int, b: String) derives MyMonoid

// generates:
  given MyMonoid[MyCaseClass] = {
    lazy val instance_a: MyMonoid[Int] = MyMonoid.int
    lazy val instance_b: MyMonoid[String] = MyMonoid.string
    new MyMonoid[MyCaseClass] {
      override def zero: MyCaseClass =
        MyCaseClass(
          instance_a.zero,
          instance_b.zero,
        )
      override def join(a: MyCaseClass, b: MyCaseClass): MyCaseClass =
        MyCaseClass(
          instance_a.join(a.a, b.a),
          instance_b.join(a.b, b.b),
        )
    }
  }

Auto-derivation of FromExpr and ToExpr

// FromExprT + ToExprT are basically the same as FromExpr/ToExpr, but they require a Type[A]

final case class CC1(int: Int, string: String, boolean: Option[Boolean]) derives FromExprT, ToExprT

sealed trait myAnnot extends scala.annotation.Annotation derives FromExprT, ToExprT
final case class myAnnot1() extends myAnnot
final case class myAnnot2(a: String) extends myAnnot
final case class myAnnot3(b: Int) extends myAnnot
final case class myAnnot4(c: List[Boolean], cc1: CC1) extends myAnnot

Conclusion

If all of this can be achieved by one person with a wrapper around a wrapper of the std-lib, I’m not sure what other proof is needed that this is a viable and valuable approach.

Links:

4 Likes