PRE SIP: ThisFunction | scope injection (similar to kotlin receiver function)

That’s a good point; in particular the Kotlin example in https://github.com/lampepfl/dotty/issues/5591 shows that Kotlin does resolve extension methods of this unqualified. So at least for the designers of Kotlin it made sense that this.method and method are exactly the same thing, without weird exceptions…

You’re talking about inconsistency between two very different mechanisms. It doesn’t take much effort to find multiple (potential?) inconsistencies, e.g.:

  • there’s case object and case class, but no case trait - inconsistency
  • you can do import stableIdentifier._ but can’t do import unstableIdentifier._ - inconsistency
  • you can have class parameters, and will soon have trait parameters, but no object parameters are planned - inconsistency (this one is admittedly absurd, but there’s still some kind of inconsistency here)
  • you can apply var to constructor parameters but not to method parameters - inconsistency
  • you can import from any stable identifier, but you can alias a stable identifier into a new val only if it is not a package (i.e. val newSomething = stableIdentifier works for any stable identifier except a package) - inconsistency
  • you need value.type to get the type of a non-literal stable value, but for literals you don’t need the .type suffix - inconsistency
  • etc
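
To make the singleton-type point concrete, a small sketch (valid in both Scala 2.13 and Scala 3; the names are my own):

```scala
// literal types need no suffix, stable references do
val x: Int = 42
val lit: 42 = 42        // the literal type 42, written directly
val ref: x.type = x     // the singleton type of x needs the .type suffix
```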

Shall we solve all of them? Maybe, maybe not. We need to consider the consequences, the need for them, how they fit into the language (are they consistent with the spirit of Scala?), etc.

Sometimes this.member(args) is the same as member(args), but not always. Sometimes one form compiles and the other doesn’t. Sometimes both compile but refer to something different. That’s because there’s some overlap between them, even though they are in fact governed by completely different rules.
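
A minimal illustration of the “both compile but refer to something different” case, using a local definition that shadows a member (my own example):

```scala
class C {
  def greet: String = "member"
  def demo: String = {
    def greet: String = "local"   // a local definition shadows the member...
    greet + "/" + this.greet      // ...so the two forms resolve to different methods
  }
}
```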

A big counterargument against extension methods on this is that extension methods are mostly useful for achieving ad-hoc polymorphism for types you don’t control. E.g. you can enrich java.lang.String only with extension methods (Dotty ones or Scala 2 ones, whatever). You can’t mix additional traits into java.lang.String, as you don’t have control over it, i.e. you can’t edit it and directly add the methods you want. But (in the current version of Scala) if you write code that uses this, then that means you have full control over the class of which this is an instance. Therefore it’s much more natural to just add extra traits to that class instead of going through the contortions of adding extension methods.

Instead of this (extension methods on a class you control):

  class X(val v: Int) {
    def hello(): Unit = {
      this.extensionMethod()
      // extensionMethod() // alternative potential syntax
    }
  }

  implicit class RichX(x: X) {
    def extensionMethod(): Unit = println(x.v + 5)
  }

you write this (ordinary OOP mixins):

  class Y(val v: Int) extends Mixin {
    def hello(): Unit = {
      methodFromMixin()
    }
  }

  trait Mixin { this: Y =>
    def methodFromMixin(): Unit = println(v + 5)
  }
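
For reference, a runnable adaptation of both snippets (my own variation, returning Ints instead of printing):

```scala
// extension-method variant: `this.extensionMethod` compiles via the implicit view,
// while a bare `extensionMethod` would not (no such binding in scope)
class X(val v: Int) {
  def hello: Int = this.extensionMethod
}
implicit class RichX(x: X) {
  def extensionMethod: Int = x.v + 5
}

// mixin variant: `methodFromMixin` is available unqualified through inheritance
trait Mixin { this: Y =>
  def methodFromMixin: Int = v + 5
}
class Y(val v: Int) extends Mixin {
  def hello: Int = methodFromMixin
}
```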

In order for extension methods on this to make more sense, this would have to be of a foreign type you don’t control, e.g.

library.method { this => // rebinding 'this' to an instance of foreign type
  // now this makes sense, as we can't mix anything into 'this'
  this.extensionMethod()
  // or the more concise but confusing syntax
  extensionMethod()
}

But how often would such a thing happen in idiomatic Scala code? I think very rarely (but I may be wrong here).

Kotlin is heavily oriented toward integrating with Java libraries and frameworks, thus Kotlin programmers are often forced to deal with deficient Java APIs, and extending them ad hoc makes a lot of sense. OTOH, Scala APIs are usually pretty rich out of the box.


Now that gets interesting with the recent proposal for self-type syntax using val this: T. If you adopt that, plus the above suggestion to allow someType.baz { this =>, what you get is that the whole “auto-prefix names with this” behavior is generalized: rather than this meaning the current class scope, it means a variable in scope named this, which happens to default to the current class instance.

(Note: I am not personally in favor of these, but if you’re going to look into them I think this would be more elegant.)

Interesting idea. I’m not sure how we should understand OuterClass.this in your interpretation, though I guess it could keep its usual interpretation as a way of referring to some potentially shadowed this that happens to be associated with a class name.

How are Java frameworks related to this? You can’t get a “foreign this” in Java either; there’s no way an extension method on this could help interface with a Java framework – it only makes sense with Kotlin’s own rebinding of this in scopes.

Thing is, implicit function types, together with the already-implemented lexical scoping for implicits, already implement scope enhancement – but bizarrely, only for methods. The only result of this inconsistency is that, to introduce new top-level functions/values, people will just add extension methods on constants or objects that are always visible, e.g. 't or 0. Instead of underscore.js we’ll have zero.scala; let me demonstrate:

trait Fixture[A] extends Conversion[0, A]

trait TestFramework[A] {
  def (testName: String) in (test: given (Fixture[A]) => Unit): Unit = ???
}

This is enough to solve the problem of importing fixture members in tests – all members of the fixture are now accessible via the 0. prefix inside the test:

trait Greeter {
  def greet(name: String): String = s"Hello $name"
}

case class MyFixture(name: String, greeter: Greeter)

object MyTest extends TestFramework[MyFixture] with App  {
  "say hello" in {
    assert(0.greeter.greet(0.name) == s"Hello ${0.name}")
  }
}

There are always literals or objects around to attach names to for scope injection, and I expect that at least some DSLs, like Akka’s GraphDSL, will migrate to implicit functions in Scala 3. However, the above is a hack: names “injected” onto literals like this are second-class, e.g. you can’t import from them, and you can’t inject type names into scope this way. The fact that scope injection can be “almost done” using such patterns just means that it should be provided in full power by the language instead; otherwise we’ll be stuck with zero.scala. I will 100% use this approach in dotty, because typing in { ctx => import ctx._ ; ... } in every test tires me.
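
For reference, here is my own sketch of the 0-trick in current Scala 3 syntax (the snippet above uses pre-release Dotty syntax; MyFixture and the `in` runner are hypothetical names of mine):

```scala
import scala.language.implicitConversions

case class MyFixture(name: String, greeting: String)

// hypothetical runner: supplies the fixture as a context parameter to the test body
def in(testName: String)(body: MyFixture ?=> String): String =
  body(using MyFixture("world", "Hello"))

// any in-scope fixture becomes reachable through the literal 0
given fixtureAt0(using f: MyFixture): Conversion[0, MyFixture] = _ => f

val result = in("say hello") {
  s"${0.greeting} ${0.name}"
}
```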

Alternately, you could go the route that Scalatest has and implement your test DSL with intermediate objects and by-name parameters.

Aside from avoiding the scope injection issue, it’s much easier for users to extend your DSL if there are types they can add extension methods to.

To make this concrete, your example would look like this in Scalatest:

trait Greeter {
  def greet(name: String): String = s"Hello $name"
}
class MyFixture {
  val name: String = ???
  val greeter: Greeter = ???
}

class MyTest extends WordSpec  {
  "say hello" in new MyFixture {
    assert(greeter.greet(name) == s"Hello $name")
  }
}

@morgen-peschke:
Here’s a quite neat overview of testing styles in ScalaTest: Fixtures in Scala — three simple ways to reduce your test code boilerplate | by Jakub Dzikowski | SoftwareMill Tech Blog
The way you presented has the advantage of easy composability (new Fixture with ExtraData1 with ExtraData2), but for now it suffers from a lack of direct support for parametrization (traits do not have parameters yet, but will in Scala 3), and you don’t get tear-down: you would have to wrap the new Fixture with Whatever { <test code> } in some extra function, e.g. withTeardown(new Fixture with Whatever { <test code> }).

@kai:

Extension methods are useful when a type’s interface is deficient. I was presuming that this would be more useful for Java types, as Java lacked multiple inheritance of behaviour for a long time (Java 8 brought that with default methods), so making rich APIs was tedious.

As I’ve written before - marking something implicit doesn’t add any named member to any scope. That’s how it works in both Scala 2 and current Dotty / Scala 3. Look here if you want to read about scopes: https://www.scala-lang.org/files/archive/spec/2.13/02-identifiers-names-and-scopes.html

OTOH, marking something implicit is sometimes required to make something else compile. If I have def myMethod[A: MyContext] ... then when invoking it I need to have an implicit MyContext[A] instance in scope. That’s how it has worked since the beginning of implicit contexts.
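
A minimal sketch of that requirement (MyContext and myMethod are the hypothetical names from above):

```scala
trait MyContext[A] { def label: String }

// [A: MyContext] is shorthand for an extra implicit parameter of type MyContext[A];
// calls compile only when such an instance is in scope
def myMethod[A: MyContext]: String = implicitly[MyContext[A]].label

implicit val intCtx: MyContext[Int] = new MyContext[Int] { def label = "MyContext[Int]" }
```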

  1. I have already proposed:
function { this =>
  <some code using members of new this directly>
}

It’s explicit, comprehensible and use-site configurable.

  2. I don’t see much improvement in:
  "say hello" in {
    assert(0.greeter.greet(0.name) == s"Hello ${0.name}")
  }

over

  "say hello" in { f =>
    assert(f.greeter.greet(f.name) == s"Hello ${f.name}")
  }

Using 0.something instead of f.something wouldn’t pay off when using an IDE, as the IDE would first suggest methods defined directly on Int, and only after those would you see the extension methods.

  3. Let’s not mix things together. Rebinding this is a completely different thing from unqualified extension methods / implicit views.

Here’s how you search for a binding in the current scope:

Bindings of different kinds have a precedence defined on them:

  1. Definitions and declarations that are local, inherited, or made available by a package clause and also defined in the same compilation unit as the reference to them, have highest precedence.
  2. Explicit imports have next highest precedence.
  3. Wildcard imports have next highest precedence.
  4. Definitions made available by a package clause, but not also defined in the same compilation unit as the reference to them, as well as imports which are supplied by the compiler but not explicitly written in source code, have lowest precedence.

Here’s how you search for implicit views:

  1. In a selection e.m with e of type T, if the selector m does not denote an accessible member of T. In this case, a view v is searched which is applicable to e and whose result contains a member named m. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of T. If such a view is found, the selection e.m is converted to v(e).m .
  2. In a selection e.m(args) with e of type T, if the selector m denotes some member(s) of T, but none of these members is applicable to the arguments args. In this case a view v is searched which is applicable to e and whose result contains a method m which is applicable to args. The search proceeds as in the case of implicit parameters, where the implicit scope is the one of T. If such a view is found, the selection e.m is converted to v(e).m(args) .

How do you want to merge searching for implicit views on this with searching for a binding in the current scope? There have to be some precedence rules. Currently for e.m the precedence rules state that implicit views are tried last. That is good, because searching for implicits is costly, so it should be done last. If we carry that rule over to searching for a binding in scope, then all packages, class members (including class members from outer classes), local variables and methods, function and method parameters, etc. will have precedence over extension methods / implicit views on this.
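
The “implicit views are tried last for e.m” rule can be seen in a small example (Box and RichBox are my own illustrative names):

```scala
class Box(val v: Int) { def double: Int = v * 2 }

implicit class RichBox(b: Box) {
  def double: Int = -1          // never selected: Box already has an applicable `double`
  def triple: Int = b.v * 3     // selected: Box has no `triple` member
}
```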

Implicits are quite heavy, so we should rather strive to find a way to reduce their compilation-performance impact than enable implicits everywhere. I don’t use scalaz / cats on a regular basis, but I remember that when I was adding import scalaz._ ; import scalaz.Scalaz._ to classes a few years ago, it slowed down the IntelliJ IDE and the Scala compiler considerably. Simply being more selective with implicits made performance much better, e.g. import scalaz.std.syntax.options._ (or something like that). The Shapeless library is another proof that heavy use of implicits drags compilation speed down. People are inventing extra macros that use few implicits (if any) to implement things that could be done without those extra macros in Shapeless, but at a much higher compilation-performance cost.

As I’ve written before - marking something implicit doesn’t add any named member to any scope.

Your statement is a non sequitur; I’m literally showing how to add new methods to a scope from a programmer’s point of view. The low-level details of how the compiler will implement this are completely irrelevant.

Kotlin has a feature where receivers nest - that is, if an identifier is not found on the inner this, it will be searched for on the outer this. It seems your proposal does not have this - a new binding of this would make the previous one inaccessible if it follows the rules for variables.

On the contrary, extension methods on 0 do stack, and since in dotty implicits have lexical priority, methods on 0 introduced in inner scopes will override methods from the outer scope, while the outer-scope conversion will still be available for non-intersecting methods.

From a programmer’s point of view they achieve very similar things - they allow introducing unqualified names within a scope without imports. The latter is much better in the presence of the former, of course.

I think it could follow the rules for nested classes, where there is a fallback from the inner this to the outer this.
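
The nested-class fallback being referred to looks like this (my own minimal example):

```scala
class Outer {
  val name = "outer"
  class Inner {
    val id = 1
    // `name` is not a member of Inner, so lookup falls back to the enclosing Outer.this
    def describe: String = s"$name/$id"
  }
}
```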

I’ve thought a bit about receiver functions in Kotlin in the context of type-safe builders and wondered whether they are really type-safe. It turns out they aren’t unless you use special extra annotations:
https://kotlinlang.org/docs/reference/type-safe-builders.html

When using DSLs, one might have come across the problem that too many functions can be called in the context. We can call methods of every available implicit receiver inside a lambda and therefore get an inconsistent result, like the tag head inside another head:

html {
  head {
    head {} // should be forbidden
  }
  // ...
}

In this example only members of the nearest implicit receiver this@head must be available; head() is a member of the outer receiver this@html, so it must be illegal to call it.

To address this problem, in Kotlin 1.1 a special mechanism to control receiver scope was introduced.

Also, somehow I don’t find Kotlin receiver functions particularly succinct at the definition site (in the case of type-safe builders, at least). In Dotty 0.19.0-RC1 I can do:

// imagine A, B, C and D are some sort of tags or other nested structures

type X = A|B
type Y = B|D
type Z = X|Y

case class A(items: X|B*)
case class B(items: A|C|D*)
case class C(items: A|Z*)
case class D(items: C|D*)

@main def main = {
  val x = A(B(D(C())), A())
//  val x = A(B(D(C())), A(), C()) // doesn't typecheck
  println(x)
}

How would one emulate that with type-safe builders based on receiver functions? I guess it would be rather convoluted and verbose.

Yep, to get tear-down with scope injection I usually stack the cleanup with an abstract class. Works pretty well:

trait Greeter {
  def greet(name: String): String = s"Hello $name"
}
def cleanup(greeter: Greeter): Unit = ???  // tear-down logic for the fixture

def greeterFixture[A](body: Greeter => A): A = {
  val greeter: Greeter = ???
  try body(greeter)
  finally cleanup(greeter)
}
abstract class MyFixture(val greeter: Greeter) {
  val name: String = ???
}

class MyTest extends WordSpec  {
  "say hello" in greeterFixture(new MyFixture(_) {
    assert(greeter.greet(name) == s"Hello $name")
  })
}

It’s not perfect, but certainly gets the job done, and the boilerplate hasn’t been annoying enough to look into abstracting it into a helper … yet :wink:

Actually, with implicit function types it can be done more succinctly.

"say hello" in greeterFixture { new FixtureContext {
  assert(greeter.greet(name) == s"Hello $name")
}}

We write something like that today.

After these words I have understood that I really only need to import implicits, because overriding this has significant disadvantages.

So I can compare the template:

 aspect{new aspectContext{
    
 }}

with

 aspect{implicit context => 
    
 }

for the case where there is a single implicit instead of multiple ones.

OK, we can live with such boilerplate code, and we do live with it.

But I do not understand why implicit function types (https://dotty.epfl.ch/docs/reference/contextual/implicit-function-types.html) are considered very good for a single implicit but not needed for multiple ones. I need to inject multiple implicits more often than a single one, because my main single implicit is declared in the root class.

Based on this part of the implicit function types reference:

Conversely, if the expected type of an expression E is an implicit function type (given T_1, ..., T_n) => U and E is not already an implicit function literal, E is converted to an implicit function literal by rewriting to

It doesn’t look like this is limited to a single implicit.
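
A small sketch confirming this with two context parameters (hypothetical Prefix/Suffix names of mine; Scala 3 `?=>` syntax for implicit function types):

```scala
case class Prefix(s: String)
case class Suffix(s: String)

// an implicit function type with TWO context parameters
def render(body: (Prefix, Suffix) ?=> String): String =
  body(using Prefix("<<"), Suffix(">>"))

def wrapped(using p: Prefix, q: Suffix): String = p.s + "hello" + q.s

// `wrapped` is not a context-function literal, so the compiler rewrites it to
// (p: Prefix, q: Suffix) ?=> wrapped -- both parameters are filled in
val out = render(wrapped)
```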

Note: the reason the syntax I mentioned works for tests is that it is generally local to the test, so finding out what’s in scope only involves a quick intra-file check. It wouldn’t work nearly as well if it were used across multiple files.

I’m not quite sure what you’re looking for that the current proposed syntax doesn’t offer. One of the examples in the reference doc does use multiple implicits (both Row and Table are implicitly given to the cell helper).

I was playing around in scastie, and the syntax it supports is pretty much what it sounds like you’re asking for:

  import dsl._
  val t = table {
    row {
      cell("top left")
      cell("top right")
    }
    row {
      cell("bottom left")
      cell("bottom right")
    }
  }
  println(t)

The only difference is that you’d have to import the DSL once per file, which isn’t exactly a bad thing.
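
For reference, a minimal sketch of such a dsl object using context functions (my own mutable Table/Row helpers, not the actual scastie code):

```scala
import scala.collection.mutable.ListBuffer

// my own mutable helpers standing in for the dsl object
class Row   { val cells = ListBuffer.empty[String]; override def toString = cells.mkString("Row(", ", ", ")") }
class Table { val rows  = ListBuffer.empty[Row];    override def toString = rows.mkString("Table(", "; ", ")") }

// each builder runs its body with the node under construction as a context parameter
def table(init: Table ?=> Unit): Table = { val t = new Table; init(using t); t }
def row(init: Row ?=> Unit)(using t: Table): Unit = { val r = new Row; init(using r); t.rows += r }
def cell(s: String)(using r: Row): Unit = r.cells += s
```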

You are right, but in practice I will not pass package/object implicits in the arguments of an implicit function. It is a way to emulate:

import implicit it._

But I am afraid of such techniques.

I think it is a bad practice; I do not know why it is in the documentation.
It is my personal opinion, of course, but I will never use such an approach for builders.
I would prefer the ‘it’ technique that has been introduced here.
It has been discussed already, so I do not know whether it needs repeating.

I personally prefer, for orphan tasks and DSLs, something like:

(I do not advertise “basic” but implicits make code more succinct in some cases)

new doSomeThingContext{
  println("good bye delayInit")
}.execute()

instead of

execute{
  import implicit doSomeThingContext._ 
}

But it seems that it is my personal taste. It happens :slight_smile:

Wouldn’t that make it difficult to provide alternate implementations in the tests?

Can you expand on why you think it’s a bad practice to explicitly import new syntax?

I do not think so. Actually, it helps to manage scope. We use objects to implement scope reuse; it is relevant for library integration.

I have said that it is bad practice to use the implicit-import style for the builder pattern.
I think so because the grammar of such a builder has very low coupling, so:

  • it is difficult to navigate
  • it is difficult to document
  • it has a problem of name clashing

I think ordinary developers are not smart enough to use such a technique in our company.

bad practice to explicitly import new syntax?

I have not said this. It is just less succinct.
There is a full analogy with the motivation for implicit function types: it just makes code shorter.
I like good code-assistant support.


Scope injection has nothing to do with mutable builders; this is a non sequitur. I want scope injection for ZIO Test’s immutable builder – right now all tests written with it must include boilerplate imports:

import zio.test.Assertion._
import zio.test._

Every time – only because ZIO Test chose to use a constructor parameter for test construction instead of inheritance. Having scope injection for top-level members evens the odds and would allow suite, testM and other functions to be used in a Spec expression without using inheritance and without constantly repeating boilerplate imports.


Somehow this thread spent dozens of posts discussing mutable builders. If scope injection really has nothing to do with mutable builders, then maybe this thread should be split in two?

It depends on what can be injected. If the author of ZIO doesn’t give you anything to inject, then scope injection doesn’t help you at all.

The author of DefaultRunnableSpec could go for e.g. an abstract lazy val instead of a by-name constructor parameter, and that would open up the ability to mix in helper classes, just as in ScalaTest you mix lots of them into your test base classes.

The way ZIO’s DefaultRunnableSpec is made prevents creating helpful base classes, so even if you get rid of:

import zio.test.Assertion._
import zio.test._

then you’re still left with:

import caliban.GraphQL._
import caliban.TestUtils._

because there’s no way to mix them in so that they are available in constructor parameters.

This thread is a discussion of scope injection, not scope injection itself, obviously.

The fact that you need “base classes” to extend a scope is a direct consequence of the lack of scope injection: code must be arbitrarily structured ahead of time to allow for scoping, which means inheritance is hugely favored over composition. However, Dotty actually adds many tools to boost composition, e.g. export.

If author of ZIO doesn’t give you anything to inject then scope injection doesn’t help you at all.

Nope, you don’t need prescience on the author’s part with scope injection; you may mix your own arbitrary environment with export:

object ZIOEnv {
  export zio.test.Assertion._
  export zio.test._
}

object CalibanEnv {
  export caliban.GraphQL._
  export caliban.TestUtils._
}

class CalibanSpec(spec: (scope ZIOEnv.type) => (scope CalibanEnv.type) => ZSpec[TestEnvironment, Any, String, Any])
  extends DefaultRunnableSpec(spec(ZIOEnv)(CalibanEnv))

object ExecutionSpec extends CalibanSpec(
  suite(...)
)
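
The export half of this sketch already works in current Scala 3 (the scope parameter syntax is hypothetical); a minimal check with made-up Assertions/Helpers objects standing in for the zio.test and caliban members:

```scala
// made-up objects standing in for zio.test / caliban members
object Assertions { def assertEqual(a: Int, b: Int): Boolean = a == b }
object Helpers    { val answer = 42 }

object Env {
  export Assertions.*   // re-exports assertEqual as a member of Env
  export Helpers.*      // re-exports answer as a member of Env
}

import Env.*
val ok = assertEqual(answer, 42)
```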

It is important to note that wildcard imports of implicits have always been error-prone for me, so I feel comfortable just avoiding them.
