PRE SIP: ThisFunction | scope injection (similar to Kotlin receiver functions)

Of course we do not do all our computation on the JVM. But that has a price, and it does not change the fact:
I would prefer a language with better JDBC integration. Being 1.5-3 times faster is a good win for free.

The collections library has been rewritten for less than that.

Actually, JDBC doesn’t look so obvious any more. Check out the Skunk library, for example (still in early development, but one of the most intriguing things out there) - it eschews JDBC in favor of speaking the raw Postgres protocol. It is more expressive, far more powerful, arguably easier to use, and more efficient.

JDBC is an ancient library, and it shows. It’s an important consideration, but it’s by no means trolling to question whether it’s still the right way to go…

It is just a well-known example.
In general you need data, metadata, and an iterator. And it is a bad idea to keep data and metadata in one place when the data is big. So OOP is bad :slight_smile:
If you use the wrong abstraction, you will always get a worse result.

It is a very beautiful idea. I only have doubts that there will be stable, well-supported Scala drivers for most data servers in general.

You are right, it can look uncomfortable.

Scope functions have two variants (this or it). Each one has advantages and disadvantages.
The documentation says:
The scope functions do not introduce any new technical capabilities, but they can make your code more concise and readable.

Due to the similar nature of scope functions, choosing the right one for your case can be a bit tricky. The choice mainly depends on your intent and the consistency of use in your project.

I think the most important part is:

  • but they can make your code more concise and readable

Here is my take on what it takes to implement a scoped it using Dotty with all the latest bells and whistles (if there is a better way to do this, please share!):

class Foo(var a: Int = 0, var b: String = "init"):
  override def toString: String =
    s"Foo($a, $b)"

class InContext[A](val a: A)

def it[A](using inContext: InContext[A]): A = inContext.a

extension [A](a: A)
  def inContext(block: InContext[A] ?=> Unit): A =
    block(using InContext(a))
    a

@main def run(): Unit = 
  val f = Foo().inContext {
    it.a = 42
    it.b = "hello"
  }

  println(f)

I have a few gripes with this:

  1. class InContext[A] is littering the namespace: It is not usable by itself, but still needs to be public due to the signature of inContext;
  2. def it[A] has the same problem, only arguably more so;
  3. This is a personal preference, but I find implicit function types hard to read. It looks like the caller is expected to pass a function InContext[A] => Unit when in fact it is inContext itself that creates and applies the context. This is possibly just a case of unfamiliarity.

While they assert this, I don’t know if I buy that. Having to look up the function definition to figure out what was in scope was really annoying, back when I programmed in Perl.

The it version is less bad, as at least there’s something for the autocomplete to key off, but the this version dumps stuff into the current namespace, without a clear way to figure out what’s in scope.


Scala 2.13:

import scala.util.chaining._

class Foo(var a: Int = 0, var b: String = "init") {
  override def toString: String = 
    s"Foo($a, $b)"
}

val f = new Foo().tap { it =>
  it.a = 42
  it.b = "hello"
}

println(f) // prints: Foo(42, hello)

I don’t see how that’s relevant. The topic is creating a scoped context along with functions to work on that context. Implicit function types was touted as a solution to this.

Implicits are an even worse transgression. You have to look into:

  • Companion objects;
  • The surrounding scope;
  • givens (local/implicit function args);
  • The import list;
  • Predef.scala;

I like the power they bring but they are surely a lot more complex than rebinding this in a closed scope.
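
To make one of those lookup places concrete, here is a minimal Scala 3 sketch (the Box class and the entry-point name are invented for illustration) showing a given being found in a companion object with no import at the call site:

```scala
// Hypothetical example: the given Ordering lives in the companion
// object of Box, one of the places a reader has to know to check.
case class Box(n: Int)

object Box:
  given Ordering[Box] = Ordering.by(_.n)

@main def companionLookup(): Unit =
  // no import anywhere, yet .sorted finds the Ordering via the companion
  val sorted = List(Box(3), Box(1), Box(2)).sorted
  assert(sorted.map(_.n) == List(1, 2, 3))
```

Nothing at the call site hints at where the Ordering comes from, which is exactly the readability cost being discussed.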

edit: Not to mention that we already have this-rebinding in a form:

class A {
  val x = 1
  class B {
    val y = 2
    def addThem = y + x
  }
}

I don’t mind implicits, partially because they’re stuff you’re not supposed to interact with directly, and when you do need to interact with them, it’s as arguments rather than identifiers (which really narrows the relevant namespace).

Extension methods are likewise OK for me because they’re anchored on some known object, so you have a place to start from, and they can’t shadow the methods defined on an instance: it might fail to compile, but it won’t surprise you at runtime.
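
That compile-time guarantee can be seen in a small sketch (Greeter, Exts, and the method names are invented): an extension method that collides with a real member is simply never selected by dot syntax:

```scala
class Greeter:
  def greet: String = "instance"

object Exts:
  extension (g: Greeter)
    // legal to define, but never selected via g.greet: the member wins
    def greet: String = "extension"
    // inside the extension, g.greet also resolves to the member
    def shout: String = g.greet.toUpperCase

import Exts.*

@main def extensionDemo(): Unit =
  assert(Greeter().greet == "instance") // member wins, no runtime surprise
  assert(Greeter().shout == "INSTANCE")
```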

Rebinding this means you have to be aware of the names injected into the local namespace, which shadow the existing values in a non-obvious way.

There’s a big difference between rebinding this in the context of an obvious class definition, and rebinding this in completely ordinary-looking code!

What should a user understand when they see this?

val x = foo(bar)
someType.baz {
  x + 1
}

Is x coming from the outside, or is it coming from some imported this scope? No one can tell without looking at the definition. It’s a net loss in terms of language understanding IMHO.

Now, say x was coming from the outside scope, but that the baz method did in fact import some this scope (where x was not defined). What if someone later adds an x field to the corresponding type? We may break (sometimes silently) all the this-importing user code out there that was capturing a reference to an outside x variable. For instance, the meaning of the code above will change.

This is not dissimilar to surprises that can arise when you’re wildcard-importing an object into your scope (as in import obj._). But people are more careful with this sort of thing, and it catches the eye. Wildcard imports should only be performed on robust APIs that are not expected to change in unexpected ways. But this scope injection exposes people to the API of random classes, without a visual clue at the use site of what’s happening.
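
For comparison, here is what that wildcard-import opacity already looks like in ordinary Scala (scala.math is just a stand-in for any wildcard-imported object):

```scala
import scala.math.* // every public name of scala.math is now unqualified

@main def wildcardDemo(): Unit =
  // nothing at the use site says where abs or Pi come from
  assert(abs(-3) == 3)
  assert(Pi > 3.14 && Pi < 3.15)
```

At least the import line is visible somewhere in the file; the proposed this-injection would not even leave that clue.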


The only real difference between my solution and yours is:

value.tap { it =>
  assert(it == value)
}

vs

value.inContext {
  assert(it == value)
}

It doesn’t seem to me that the extra implicits are bringing any improvement here. Your mechanism is very convoluted.

No. Implicit function types were meant to reduce boilerplate when passing the same implicits over and over (the primary example given by Martin Odersky was the Context type in the Dotty compiler).

Currently we have:

def functionWithContext(param: Param)(implicit context: Context): ReturnType = {
  // use 'context' directly
}

or

// Context is uselessly bound to ParamType here
def functionWithContext[ParamType: Context](param: Param): ReturnType = {
  val context = implicitly[Context[ParamType]]
  // use 'context' directly
}

or hacks like https://twitter.com/viktorklang/status/841702704749637632, but those bring scoping problems. I’ve tried hacks like that and had problems with managing the implicit scope.

Implicit function types let you instead formulate that:

// concise and non-hacky way
// also you can shovel multiple implicit parameters into 'InContext' type
def functionWithContext(param: Param): InContext[ReturnType] = {
  // use 'context' directly
}

Despite the syntax InContext[ReturnType], the implicit parameter doesn’t have to be related to ReturnType in any way. The desugaring could be as follows:

def functionWithContext(param: Param): Context => ReturnType = context => {
  // use 'context' directly
}
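
In current Scala 3 this pattern is expressible directly as a context function type; here is a minimal runnable sketch (Context’s field and the function body are invented for the example):

```scala
case class Context(user: String)

// the InContext alias from above, as a Scala 3 context function type
type InContext[R] = Context ?=> R

def functionWithContext(param: Int): InContext[String] =
  val context = summon[Context] // use 'context' directly, no parameter list
  s"${context.user}:$param"

@main def contextDemo(): Unit =
  given Context = Context("alice")
  assert(functionWithContext(1) == "alice:1")
```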

As of now you can’t even explicitly import this into scope. I think the code below is explicit enough to be comprehensible:

val x = foo(bar)
someType.baz { this => // this screams: caution! things can be shadowed here!
  x + 1
}

That would be very useful in tests:

"sth" must "be red" in test(args) { this => // importing fixture members
  ... // short test
}
"sth" must "be green" in test(args) { this => // importing fixture members
  ... // short test
}
"sth" must "be blue" in test(args) { this => // importing fixture members
  ... // short test
}

private def test(params: Params)(action: Fixture => Unit): Unit = ???
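
Until something like that exists, the closest working encoding names the fixture explicitly in each test body; a runnable sketch (Fixture and its color field are hypothetical):

```scala
final case class Fixture(color: String)

// today's version: the fixture is an explicit lambda parameter that the
// proposed `this =>` form would let you elide
def test(color: String)(action: Fixture => Unit): Unit =
  action(Fixture(color))

@main def fixtureDemo(): Unit =
  test("red") { fixture => assert(fixture.color == "red") }
```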

Now, say x was coming from the outside scope, but that the baz method did in fact import some this scope (where x was not defined). What if someone later adds an x field to the corresponding type?

Can’t the same thing be used as an argument against inheritance?

// otherfile.scala
trait T {
  // def x = 5 // user-added field
}

// main.scala
class A {
  def x = 1
  class B extends T {
    def y = 2
    def addThem = y + x // where does x come from?
  }
}

You still have to look at otherfile.scala to figure out where x is coming from.

This is not dissimilar to surprises that can arise when you’re wildcard-importing an object into your scope (as in import obj._ ). But people are more careful with this sort of things, and it catches the eye.

In my experience, many if not most Scala libraries recommend or at least use wildcard imports in their ‘Getting started’ section. This is often more or less a requirement to get the library working, not something that’s carefully added by the user.

But I’m not married to the idea of being able to rebind this specifically. The mechanism that enables all this in Kotlin is scope-local extensions (which, for Kotlin, is also the enabler of this-rebinding).


I’ve mentioned this at the very beginning of my answer: in code like yours it’s obvious that you’re creating a new scope in which some things may be implicitly imported (by inheritance). That’s not the case at all in regular code where this would be imported Kotlin-style without any clue at the use site.

On a side note, your example also reminds me of the “fragile base class” problem. Which is why deep, wild class hierarchies are discouraged in favor of small ADTs, type class hierarchies, and modules with clear interfaces.

I think most library authors are careful about what they put in the objects they recommend importing with a wildcard. At least, they should be!


Nevertheless, it is not the most comfortable thing to write:
import some.word.that.i.always.forget._

A receiver function can help in such a situation.

Of course a good programmer knows by heart everything he needs. But it is an annoying situation for me.
There are some solutions:

But a receiver function looks more comfortable for optional libraries:

doSomeAspect {
  ....
}

instead of

doSomeAspect { it =>
  import implicit it
}
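
A legal approximation of that receiver-function call shape today uses a context function, though the aspect’s members still need an explicit reference (SomeAspect and helper are invented names):

```scala
object SomeAspect:
  def helper(n: Int): Int = n * 2

// the aspect is put into implicit scope for the body; no `import implicit`
// needed, but unqualified access to its members is still not possible
def doSomeAspect[A](body: SomeAspect.type ?=> A): A =
  body(using SomeAspect)

@main def aspectDemo(): Unit =
  val r = doSomeAspect {
    summon[SomeAspect.type].helper(21) // still an explicit reference
  }
  assert(r == 42)
```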

Would scala-records solve your request for named tuples, if they were still published for Scala 2.12-2.13 (or if you’re using 2.11)?

I think we’re all missing that this can already be enriched by scoping in Dotty, via implicit functions:

object StringScope:
  extension (s: String)
    def capWords: String = s.split(" ").map(_.capitalize).mkString(" ")

def stringScope[A](f: StringScope.type ?=> A): A =
  f(using StringScope)

object TopLevelScope:
  extension (self: Any)
    def capWords(s: String): String = StringScope.capWords(s)

def topScope[A](f: TopLevelScope.type ?=> A): A =
  f(using TopLevelScope)

object App extends App:
  stringScope {
    println("hello world".capWords)
  }
  topScope {
    // this is ok
    println(this.capWords("hello world again"))
    // but not this...
    // println(capWords("hello world again"))
  }

The only restriction is that you HAVE to write this as a prefix to call the extension method; you can’t call it as a top-level name. I think this restriction is arbitrary: all scopes in Scala 2 so far have an implicit this, and if extension methods could apply to this when calling an unqualified method, this would in effect allow adding new “top-level” methods by enriching (self: Any).

IIUC: there would be dynamic overhead, which would be too significant for us.