PRE SIP: ThisFunction | scope injection (similar to Kotlin receiver functions)

Perhaps I’m missing something. The builder pattern appears to work by using a block of statements to set mutable values in the object being built, together with some support for lifting into scope the verbs for configuring that object. The tagless DSL approach works by configuring a value through an argument list (rather than a block of statements), and the verbs need to be brought into scope by conforming to the argument type(s). I don’t want to get into a holy war about mutability vs expressions. But if the functionality we want to provide is a mechanism to flexibly build complicated, domain-specific expressions with type safety, then it looks more like an IDE tooling issue to me than a language one: the IDE should be able to reliably suggest which verbs are available within the value-building context.


OK, this sounds interesting. Do you have a link so I can take a look? Perhaps looking at a real example, and at a real scale, I’ll see why builder support is required. Cheers :slight_smile:

Sorry, I cannot provide a link to our real project; it is not open source yet.
There are many DSLs in Kotlin, but I do not think we need to go far.
Let’s imagine an implementation of ScalaFX in terms of:

  • implicit function types
  • DelayedInit (deprecated)
  • Kotlin receiver functions

IMHO: receiver functions would be the gold standard of simplicity.

OK, what I’m struggling with if we use a block-based notation is this:

html {
  head { title("My awesome page") }
  body {
    head // this is available because we are nested within `html` despite not being directly within it
  }
}

Now, with the DSL based upon application, this doesn’t happen because body.apply(...) doesn’t accept the return value of head.
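To make the contrast concrete, here is a minimal sketch of an application-based DSL in which the invalid nesting simply fails to typecheck. All names here (`HtmlDsl`, `Head`, `Body`, `Html`) are hypothetical, not from any actual library:

```scala
// Hypothetical application-based HTML DSL: nesting is constrained by
// argument types, so `head` cannot appear where only body content fits.
object HtmlDsl {
  final case class Head(title: String)
  final case class Body(children: List[String])
  final case class Html(head: Head, body: Body)

  def head(title: String): Head     = Head(title)
  def body(children: String*): Body = Body(children.toList)
  def html(h: Head, b: Body): Html  = Html(h, b)

  val page: Html = html(
    head("My awesome page"),
    body("some text")
    // body(head("..."))  // would not compile: `body` only accepts Strings
  )
}
```

Because the structure is encoded in argument types, both the compiler and the IDE know exactly which constructors are valid at each position.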


Ah, interesting point – yeah, I can see that forbidding certain sub-nestings isn’t going to work in the obvious block-based approaches. That’s a good argument for your approach. (Which I generally like, although I wish there was an obvious way to avoid the extra parens.)

I do not think it would be a problem with receiver functions and implicit arguments, if receiver functions supported implicit val shadowing.
Something like:

But that is a secondary task.
The first task is hierarchical scope management, which allows increased cohesion.
IMHO: it simplifies writing, supporting, and using

  • code
  • documentation
  • etc

by several times (at least for my tasks).

Perhaps I’ve written it somewhere else, but IMO scope injection would make tests much better.

Consider this code:

package scope

import java.io.Closeable
import java.nio.file.{Files, Path}

import org.scalatest.{FlatSpec, MustMatchers}

class FixturesDemoSpec extends FlatSpec with MustMatchers {
  behavior of "something"

  it must "do thing 1" in test(1, "a") { fixture =>
    import fixture._
    // test body
  }

  it must "do thing 2" in test(2, "b") { fixture =>
    import fixture._
    // test body
  }

  it must "do thing 3" in test(3, "c") { fixture =>
    import fixture._
    // test body
  }

  class Fixture(val resource: Closeable, val service: Closeable, val tempDir: Path)

  def test(arg1: Int, arg2: String)(body: Fixture => Unit): Unit = {
    val resource: Closeable = ???
    val service: Closeable  = ???
    val tempDir: Path       = ???
    try {
      body(new Fixture(resource, service, tempDir))
    } finally {
      service.close()
      resource.close()
    }
  }
}
Assuming the average test body is just a few lines, the additional import fixture._ makes the code a bit clumsy.

Replacing this:

  it must "do thing 1" in test(1, "a") { fixture =>
    import fixture._
    // test body
  }

with this:

  it must "do thing 1" in test(1, "a") { this =>
    // test body
  }
would make tests much more elegant (no repetition) and somewhat more readable.

I don’t see how implicit functions, implicit conversions, etc would replace scope injection without adding a lot of bloat (effectively making that pointless).

I really think this proposal is worth serious consideration. Kotlin receiver functions solve many of the problems Scala’s implicits claim to solve, but often without “dirtying” the file-context with a very wide import.

Basically, they’re “just” scoped extension methods, which naturally carries clear scope boundaries while being incredibly flexible in practice.

A small primer

A simple extension method in Kotlin looks like fun String.scream() = this + "!!!".
Note how the scope of the receiver is transferred into the body of the extension method. This gets really powerful when you combine it with scoping, as in fun foo(bar: String.() -> Unit) = "hello".bar(): in the context of the foo body, bar is available on all strings, and the function body rebinds this to the String receiver, so that foo { println(this + "!!!") } prints hello!!!

Kotlin uses this feature to spice up operations over POJOs to allow things like

val initializedBean = JavaBean().apply {
    x = "hello"
    y = 42
}

where a globally imported apply is defined as inline fun <T> T.apply(block: T.() -> Unit): T (the full implementation is in the Kotlin standard library)

I’ve been tinkering with implicit function types in Dotty and they are far more complex while not quite being able to achieve the functionality of the simple function above.
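To illustrate that gap, here is a minimal sketch of the closest approximation with implicit (context) function types, written in current Scala 3 syntax; `applyOn` is a made-up helper, not a standard method:

```scala
// A context function `JavaBean ?=> Unit` passes the receiver implicitly,
// but unlike Kotlin it does not rebind `this`: members of the receiver
// still need an explicit `summon` (or separately exported accessors).
class JavaBean {
  var x: String = ""
  var y: Int    = 0
}

def applyOn[T](self: T)(block: T ?=> Unit): T = {
  block(using self)
  self
}

@main def demo(): Unit = {
  val bean = applyOn(new JavaBean) {
    summon[JavaBean].x = "hello" // in Kotlin this would just be `x = "hello"`
    summon[JavaBean].y = 42
  }
  println(bean.x + " " + bean.y)
}
```

The receiver does flow into the block implicitly, but without `this`-rebinding the call sites stay noisier than their Kotlin counterparts.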

This is nice, but I think Scala’s scoping rules are already flexible and complex enough IMHO.

Overloading the meaning of this would make the language significantly more complex and even harder (not simpler) to understand when reading code. All that for very little return: today, you can already implement the following, which I think is preferable as it’s slightly less implicit and less magic:

val initializedBean = JavaBean().apply { it =>
  it.x = "hello"
  it.y = 42
}
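In fact, Scala 2.13 already ships essentially this combinator as `tap` in `scala.util.chaining` (a sketch, assuming Scala 2.13+; `JavaBean` is a stand-in class):

```scala
import scala.util.chaining._

class JavaBean {
  var x: String = _
  var y: Int    = _
}

object Demo extends App {
  // `tap` passes the value to the block for its side effects and then
  // returns the value, playing the role of an explicit-parameter `apply`.
  val initializedBean = new JavaBean().tap { it =>
    it.x = "hello"
    it.y = 42
  }
  println(initializedBean.x + " " + initializedBean.y) // prints: hello 42
}
```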

That logic is fine when there is no choice; today that is not the case. Scala has a very big disadvantage for us: people at our company prefer the Kotlin JDBC library. Named tuples or receiver functions could make the situation better, but Scala has no such functionality. :frowning:

Named tuples or receiver functions could make the situation better, but Scala has no such functionality.

But don’t case classes support the role of named tuples?

I think case classes cannot adequately support the role of named tuples.
IMHO: OOP fits quite badly for big relational data.
There are some related themes:

But it seems very doubtful to me.
When I think about Scala I think about performance and memory for big data (performance test).

When I want to process multidimensional relational arrays, it is very doubtful to:

  • use access by key
  • emulate anonymous classes which do not really exist

See also a very ancient question: Relational VS Object Oriented Database Design

That looks uncomfortably like dynamic scoping


Tuples are case classes:

package scala

/** A tuple of 3 elements; the canonical representation of a [[scala.Product3]].
 *  @constructor  Create a new tuple with 3 elements. Note that it is more idiomatic to create a Tuple3 via `(t1, t2, t3)`
 *  @param  _1   Element 1 of this Tuple3
 *  @param  _2   Element 2 of this Tuple3
 *  @param  _3   Element 3 of this Tuple3
 */
final case class Tuple3[+T1, +T2, +T3](_1: T1, _2: T2, _3: T3)
  extends Product3[T1, T2, T3] {
  override def toString(): String = "(" + _1 + "," + _2 + "," + _3 + ")"
}

The main drawback of a tuple is that it’s a case class with undescriptive field names. Thus, if you want proper field names you create your own case class. In a way, a custom case class is then a named tuple.

Also, most tuple types are unspecialized, which means they suffer from boxing, unlike custom-made case classes.

Why do you use raw JDBC?


I understand that to continue a constructive discussion I would need to make a very good SIP proposal. I am not a language designer, so it is not shameful not to have enough qualification to do it myself.
I just say I do not think that case classes are very good for relational big data. They require memory copying for JDBC wrappers, at least.

I think it is just inconvenient. This is plainly visible in any Scala JDBC library’s documentation.

Are you kidding or trolling :slightly_smiling_face:? Of course we use it. The Kotlin wrappers are just more comfortable.

Since tuples are case classes themselves, creating (instances of) them takes at least as much copying as creating case classes. Sometimes more, as tuples are more prone to boxing due to lack of specialization (as I said in the post above).

I was asking why you use raw JDBC. There are many higher-level abstractions for relational databases to choose from.

It is important to note that there is a big difference between creating objects and filling them with unsorted data.
Just for fun, let us compare the System.arraycopy function and Scala’s for statement.
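For illustration, a rough micro-comparison sketch (this is not a rigorous benchmark: JIT warm-up is ignored, so treat the timings as indicative only and use JMH for real measurements):

```scala
object CopyVsLoop {
  def main(args: Array[String]): Unit = {
    val n    = 1 << 20
    val src  = Array.tabulate(n)(identity)
    val dst1 = new Array[Int](n)
    val dst2 = new Array[Int](n)

    val t0 = System.nanoTime()
    System.arraycopy(src, 0, dst1, 0, n)       // intrinsic bulk copy
    val t1 = System.nanoTime()
    var i = 0
    while (i < n) { dst2(i) = src(i); i += 1 } // element-by-element copy
    val t2 = System.nanoTime()

    println(s"arraycopy: ${t1 - t0} ns, loop: ${t2 - t1} ns")
  }
}
```

Both loops produce identical arrays; the interesting part is only how much work the JVM does per element.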

I am not sure I understand your question completely, so these are just my thoughts.
Of course raw Java JDBC is the most powerful and flexible way to work with databases. But everything has a price: it also requires high qualification, it is error prone, and it involves much boilerplate code. So we need a compromise. But if we had a good row abstraction we would have a compromise at the next level of quality.

If scala had good row abstraction and better scope management I would be more happy.

A tuple is (at least shallowly) immutable by design (and that’s a good design). You cannot do:

val tuple = (1, "abc", 'z')
tuple._2 = "something else" // doesn't compile

OTOH with case classes you can mark fields as vars instead of default vals:

case class MyType(var a: Int, var b: String, var c: Char)
val myValue = MyType(1, "abc", 'z')
myValue.b = "something else" // compiles fine

If you want to reuse already created objects then custom case classes offer more flexibility than standard tuples. However, I don’t recommend that in general - side-effect free code is often easier to reason about.
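For completeness, the idiomatic immutable alternative to mutating fields is building a new value with `copy`, which tuples support precisely because they are case classes (a small sketch; `MyType` reuses the class from above):

```scala
object CopyDemo extends App {
  val tuple = (1, "abc", 'z')
  // `copy` returns a new tuple with `_2` replaced; the original is untouched
  val updated = tuple.copy(_2 = "something else")
  println(updated) // (1,something else,z)
  println(tuple)   // (1,abc,z)

  case class MyType(a: Int, b: String, c: Char)
  val myValue = MyType(1, "abc", 'z')
  val changed = myValue.copy(b = "something else")
  println(changed) // MyType(1,something else,z)
}
```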

JDBC is a low level API, not meant to be concise, readable, strongly typed or programmer friendly. Therefore there are tons of higher level libraries (built on top of JDBC) that provide improved API. Scala is oriented towards high-level abstractions rather than low-level ones.

Let us change the name of the proposal, then.
I am sure a good row abstraction must also be able to be mutable. There should be a choice at least.

I cannot understand such logic. I am afraid that with such an answer to a request for a better row abstraction, it is only a matter of time before Scala becomes yet another language for me. Kotlin evolves much faster than I thought when we chose Scala. :slightly_smiling_face:

Actually, if you really care about avoiding allocations at all cost (which includes excess software maintenance costs) then you should avoid the whole Java platform. Project Panama and Project Valhalla are not ready yet, so extracting maximum possible performance is hard on the JVM. Because of that, e.g. network drivers written in Java are much slower than ones written in Rust, C# or Go (IIRC they, or someone else, came to the wrong conclusion that Java is generally bad for network-related purposes, because they missed the fact that the Java platform contains a lot of C/C++ code for performance-critical tasks).

Otherwise, if you care about final performance of your application and not performance of a small part of it, then you should profile the performance and measure what’s the impact of a specific portion of your app on the whole app.
