Does the compiler recognize and optimize the same macro expansions?

For example, scalamock relies on macros.

myClass.myMethod.when(1).returns(0)
myClass.myMethod.when(2).returns(3)

// ...
myClass.myMethod.verify(*).once()

or

myClass.myMethod.returns(1)

// ...

myClass.myMethod.times == 1

All expansions of myClass.myMethod in both examples produce the same result. Does the compiler already optimize that?

If not, optimizing it could hugely improve compilation times.

No, that’s not possible, since we have no way of knowing whether the macro is pure or not. Even if it is pure, we don’t know what its actual inputs are, which the compiler would need to check for equality. The set of information available to a macro is huge.
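For instance (a minimal Scala 3 sketch, not scalamock’s actual implementation), a macro can read its own expansion position, so two textually identical call sites legitimately expand to different code:

```scala
import scala.quoted.*

// Hypothetical impure macro: its result depends on where it is expanded,
// so caching one expansion and reusing it at another call site would be wrong.
inline def here: String = ${ hereImpl }

def hereImpl(using Quotes): Expr[String] =
  import quotes.reflect.*
  val pos = Position.ofMacroExpansion
  // Two identical-looking `here` calls on different lines yield different strings.
  Expr(s"${pos.sourceFile.name}:${pos.startLine + 1}")
```

(As with any Scala 3 macro, the definition has to be compiled before the call sites that use it; this is only meant to show one of many hidden inputs an expansion can depend on.)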

Is it impossible as in fundamentally impossible, or impossible as in just hard to implement?

I don’t know the internals, but there could probably be a solution involving a cache:

  1. Hash based on signature
  2. If there is no cache hit, curry the function: precompute the generic part and store it, then apply it to the provided arguments.
  3. If there is a cache hit, take the precomputed part and apply it to the provided arguments.
    Maybe also assign a unique id to each signature, so that if the signature changes, the cache is invalidated.
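A rough sketch of that scheme in plain Scala (outside the compiler; all names here are hypothetical, and a real macro signature would be a compiler-internal structure rather than a String):

```scala
import scala.collection.mutable

// Hypothetical cache keyed by a hash of the macro's signature.
// The stored value is the precomputed "generic part", curried so it can
// later be applied to the call-site arguments.
object ExpansionCache:
  private val cache = mutable.Map.empty[Int, Seq[Any] => String]

  def expand(signature: String, args: Seq[Any])
            (precompute: String => Seq[Any] => String): String =
    val key = signature.hashCode                                    // 1. hash based on the signature
    val generic = cache.getOrElseUpdate(key, precompute(signature)) // 2. miss: precompute and store
    generic(args)                                                   // 3. hit: apply to the arguments
```

Invalidation via a unique id per signature would then amount to dropping the entry whenever the signature’s hash changes.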

Choosing where to apply this could be done on the client side, e.g. with an annotation and some restrictions enforced by the compiler.

In order to use a cache, you need a priori knowledge that the function is idempotent. Macros are arbitrary functions, so they are not known to be idempotent. Therefore, it’s not even worth exploring technically how a cache could be built. Using a cache at all would be wrong.
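To see why, here is a small (non-macro) Scala illustration: memoizing a function whose result depends on hidden state silently returns stale values, which is exactly the failure mode a compiler-level expansion cache would risk:

```scala
import scala.collection.mutable

// A generic memoizer: correct only for pure (idempotent) functions.
def memoize[A, B](f: A => B): A => B =
  val cache = mutable.Map.empty[A, B]
  a => cache.getOrElseUpdate(a, f(a))

var calls = 0
def impure(x: Int): Int = { calls += 1; x + calls } // result depends on hidden state

val cached = memoize(impure)
val first  = cached(1) // computes: calls becomes 1, result is 2
val second = cached(1) // cache hit: still 2, but a direct impure(1) would now return 3
```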

In your macro, you may be able to build a cache of your own, since you know more about the semantics of your macro. But the compiler cannot do that for you.

Ok, thank you for the explanation