Pre-SIP: ThisFunction | scope injection (similar to Kotlin receiver functions)

You are right. If they solve the problem of name clashes, we can use implicit functions, for example:

// decorator for the init method call
html(new { // automatic scope injection
  body {
    table {
    }
  }
})

But as I have said, the code assistance will be awful.
The library will be very complicated because of the global variables.
For example, our DSL markup language has over 100 classes and over 500 properties across these classes.
When I think about putting all of that in one global list I feel some toothache :slight_smile:
We use pure XML now, and I am not sure that a DSL with implicits would make sense. But we do have Java POJO classes, and that is useful.

If we make a DSL for such markup one day, I am not sure that ordinary people could maintain its code and documentation.

A use case would be type checking inside a context, for example:

type Context

def (this: Context) infer(term: Term) = {
  term match {
    case Term.Pi(domain, codomain) =>
      extend(assertIsTypeAndEval(domain)).run {
         infer(codomain)
         // other stuff in the new context
      }
  }
}
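
For comparison, roughly the same shape can be written today with Scala 3 context functions and a named receiver. The sketch below is only illustrative (Term, Context, extend and assertIsTypeAndEval are made-up stand-ins); the point is that the receiver still has to be spelled out as ctx, which is exactly the boilerplate a this-parameter would remove.

enum Term {
  case Pi(domain: Term, codomain: Term)
  case Universe
}

case class Context(bindings: List[Term]) {
  def extend(t: Term): Context = Context(t :: bindings)
  def run[A](body: Context ?=> A): A = body(using this)
}

// placeholder: would check that the term is a type and evaluate it
def assertIsTypeAndEval(term: Term)(using Context): Term = term

def infer(term: Term)(using ctx: Context): Term =
  term match {
    case Term.Pi(domain, codomain) =>
      // the receiver still has to be written out as `ctx`
      ctx.extend(assertIsTypeAndEval(domain)).run {
        infer(codomain) // other stuff in the new context
      }
    case Term.Universe => Term.Universe
  }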

Sorry to be that annoying person, but I’m unsure what the problem is that this addresses. Using out-of-the-box Scala 2 features, I have fluent DSLs for HTML and JSON that ‘just work’ and have representational independence.

val myPage = 'html()(
  'head()(
    'title()("My page")),
  'body('class := "myStyle")(
    'h2('id := "ex_123")("This is my awesome page"),
    'div()("I wrote this page.")
  )
)

val js = jObject(
  'name := "Matthew",
  'food := "Fish" :: "Chips" :: Nil
)

This uses an underlying tagless DSL for HTML and JSON, respectively, and some syntactic flourishes to provide a := assignment syntax and to coerce native types into the DSL types. So you can interpret an HTML expression to, e.g., strings, or DOM, or harvest all IDs, or validate classes. Similarly, the JSON DSL can be evaluated to JSON strings, to an ultra-light in-memory DOM, to first cuts at type dictionaries from values, and so forth. Now I will grant that there is some machinery involved in achieving this. Scala 3 makes this machinery much nicer. But I don’t see what a dedicated builder extension adds that is not met by using an appropriate tagless encoding with a suitable layer of sugar over the top. What does a builder extension add?
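
To make that concrete, a heavily simplified sketch of this kind of tagless encoding (not the actual library; HtmlAlg, RenderString and CollectIds are made-up names) might look like this:

trait HtmlAlg[Repr] {
  def element(tag: String, attrs: Map[String, String], children: Repr*): Repr
  def text(s: String): Repr
}

// one interpreter: render to an HTML string
object RenderString extends HtmlAlg[String] {
  def element(tag: String, attrs: Map[String, String], children: String*): String = {
    val attrStr = attrs.map { case (k, v) => s" $k=\"$v\"" }.mkString
    s"<$tag$attrStr>${children.mkString}</$tag>"
  }
  def text(s: String): String = s
}

// another interpreter: harvest all id attributes
object CollectIds extends HtmlAlg[List[String]] {
  def element(tag: String, attrs: Map[String, String], children: List[String]*): List[String] =
    attrs.get("id").toList ++ children.flatten
  def text(s: String): List[String] = Nil
}

// the page is written once, against the abstract algebra...
def myPage[Repr](alg: HtmlAlg[Repr]): Repr = {
  import alg._
  element("html", Map.empty,
    element("body", Map("class" -> "myStyle"),
      element("h2", Map("id" -> "ex_123"), text("This is my awesome page"))))
}

// ...and interpreted in as many ways as there are interpreters
val asString = myPage(RenderString) // the rendered HTML string
val allIds   = myPage(CollectIds)   // List("ex_123")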

3 Likes

FYI symbol literals have been / are being dropped https://github.com/scala/scala-dev/issues/459

Yeah - I ran into that when I started porting this to Scala 3 over a beer one night. The updated version allows string literals in place of the symbol literals.

IIUC:

val myPage = 'html()(
  'head(), 'div()("I wrote this page.")
)

It will compile correctly, even though the structure is invalid.

It is not a problem for JSON, and it is not a big problem for HTML (HTML has a very flexible structure).
But for many DSLs which have a lot of context-dependent grammar (SQL, XML with XSD, etc.),
static type checking can greatly improve usability.

:))))
Sorry, I will not try to prove to you that

  • static type checking
  • auto code completion

are very good things.
:))

If you are in the camp of dynamic schemas, you simply do not have such problems :wink:

That’s fair, but to @drdozer’s other point – is there anything you need that can’t be done with Scala 3’s implicit function types? I mean, I tend to think of this sort of strongly-typed context-specific syntax as the killer app for that feature…

The DSLs I showed are, intentionally, fairly unstructured. However, it is not difficult to extend these DSLs to make them statically type checked, and provide auto completion. It would be relatively trivial to provide an alphabet of HTML tag names taken from e.g. the XHTML DTD, and constrain that elements can only be nested within others as the DTD allows. The glue is all provided by an implicit that provides the .apply method to use the left-hand side as if it were a constructor. I didn’t bother adding strong child type constraints because life is short, and these things are best done by code generation from the external type system, but it’s not in principle difficult to do. And you can still render it into a tagless abstraction which retains the ability to separate construction from interpretation.
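
As a rough illustration of that glue (simplified here to ordinary tag values rather than an implicit .apply sprinkled over symbol or string literals, and with made-up names), the construction syntax can bottom out in a plain immutable tree:

sealed trait Node
final case class Element(tag: String, attrs: Seq[(String, String)], children: Seq[Node]) extends Node
final case class Text(value: String) extends Node

// a tag used "like a constructor": first argument list for attributes,
// second for children
final case class Tag(name: String) {
  def apply(attrs: (String, String)*)(children: Node*): Element =
    Element(name, attrs, children)
}

object HtmlSyntax {
  val html = Tag("html"); val head = Tag("head"); val title = Tag("title")
  val body = Tag("body"); val h2 = Tag("h2"); val div = Tag("div")

  // the := sugar for attributes
  implicit class AttrNameOps(private val name: String) extends AnyVal {
    def :=(value: String): (String, String) = name -> value
  }

  // coerce native strings into DSL text nodes
  import scala.language.implicitConversions
  implicit def stringToText(s: String): Node = Text(s)
}

import HtmlSyntax._

val samplePage =
  html()(
    head()(title()("My page")),
    body("class" := "myStyle")(
      h2("id" := "ex_123")("This is my awesome page"),
      div()("I wrote this page.")))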

The difference I think I’m discerning is that my style of building structures is a standard, immutable, referentially transparent expression. Builder patterns seem to locally construct a mutable object and then set properties on it, before “releasing” it to the containing scope. That is, incidentally, the big difference between my HTML DSL and the one in scalatags.

I disagree with you; I think it is difficult in comparison to Kotlin.
https://kotlinlang.org/docs/reference/type-safe-builders.html
And I have not seen any simple example which has comparable flexibility out of the box.

Yes, there is.

I just do not know how it can be done simply with implicit functions.
It is very simple with 5 functions, but when there are 500–1000 functions it is implicit hell :slight_smile:

Kotlin receiver functions provide simple hierarchical scope management.

Perhaps I’m missing something. The builder pattern appears to work by using a block of statements to set mutable values in the object being built, together with some support for lifting into scope verbs for configuring that object. The tagless DSL approach works by configuring a value through an argument list (rather than a block of statements), and the verbs need to be brought into scope by conforming to the argument type(s). I don’t want to get into a holy war about mutability vs expressions. But if the functionality we want to provide is a mechanism to flexibly build complicated, domain-specific expressions with type safety, then it looks more like an IDE tooling issue to me than a language one: that the IDE can reliably suggest to you what verbs are available within the value-building context.
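
Roughly, the contrast being drawn is this (sketched with made-up names; in Scala the builder has to be given a name, which is where a Kotlin-style receiver would let it be this):

// builder style: a block of statements mutates the object being built
class DivBuilder {
  var id: String = ""
  var text: String = ""
}
def buildDiv(init: DivBuilder => Unit): DivBuilder = {
  val b = new DivBuilder
  init(b)
  b
}
val viaBuilder = buildDiv { b =>
  b.id = "ex_123"
  b.text = "I wrote this page."
}

// expression style: the same value configured through an argument list
final case class Div(id: String, text: String)
val viaExpression = Div(id = "ex_123", text = "I wrote this page.")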

1 Like

OK, this sounds interesting. Do you have a link so I can take a look? Perhaps looking at a real example, and at a real scale, I’ll see why builder support is required. Cheers :slight_smile:

Sorry, I cannot provide a link to our real project; it is not open source yet.
There are many DSLs in Kotlin, but I do not think we need to go far.
Let’s imagine an implementation of ScalaFX in terms of

  • implicit function types
  • DelayedInit (deprecated)
  • Kotlin receiver functions

IMHO: receiver functions would be the gold standard of simplicity.

OK, what I’m struggling with, if we use a block-based notation, is this:

html {
  head { title("My awesome page") }
  body {
    head // this is available because we are nested within `html` despite not being directly within it
  }
}

Now, with the DSL based upon application, this doesn’t happen, because body.apply(...) doesn’t accept the return value of head.
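
A sketch of how that falls out of the types (with made-up, illustrative names): if head builds a HeadElement and body only accepts FlowContent children, the bad nesting simply fails to compile.

sealed trait HeadContent
sealed trait FlowContent
final case class Title(text: String) extends HeadContent
final case class HeadElement(children: Seq[HeadContent])
final case class Div(text: String) extends FlowContent
final case class Body(children: Seq[FlowContent])
final case class Html(head: HeadElement, body: Body)

def title(text: String): Title = Title(text)
def head(children: HeadContent*): HeadElement = HeadElement(children)
def div(text: String): Div = Div(text)
def body(children: FlowContent*): Body = Body(children)
def html(head: HeadElement, body: Body): Html = Html(head, body)

val ok = html(
  head(title("My awesome page")),
  body(div("I wrote this page.")))

// html(head(), body(head()))  // does not compile:
//                             // HeadElement is not FlowContent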

1 Like

Ah, interesting point – yeah, I can see that forbidding certain sub-nestings isn’t going to work in the obvious block-based approaches. That’s a good argument for your approach. (Which I generally like, although I wish there was an obvious way to avoid the extra parens.)

I do not think it would be a problem with receiver functions and implicit arguments, if receiver functions supported implicit val shadowing.
Something like:

But that is a secondary task.
The first task is hierarchical scope management, which allows increased cohesion.
IMHO: it simply makes writing, supporting, and using

  • code
  • documentation
  • etc.

several times easier (at least for my tasks).

Perhaps I’ve written it somewhere else, but IMO scope injection would make tests much better.

Consider this code:

package scope

import java.io.Closeable
import java.nio.file.{Files, Path}

import org.scalatest.{FlatSpec, MustMatchers}

class FixturesDemoSpec extends FlatSpec with MustMatchers {
  behavior of "something"

  it must "do thing 1" in test(1, "a") { fixture =>
    import fixture._
  // test body
  }

  it must "do thing 2" in test(2, "b") { fixture =>
    import fixture._
  // test body
  }

  it must "do thing 3" in test(3, "c") { fixture =>
    import fixture._
  // test body
  }

  class Fixture(val resource: Any, val service: Any, val tempDir: Any)

  def test(arg1: Int, arg2: String)(body: Fixture => Unit): Unit = {
    val resource: Closeable = ???
    val service: Closeable  = ???
    val tempDir: Path       = ???
    try {
      body(new Fixture(resource, service, tempDir))
    } finally {
      resource.close()
      service.close()
      Files.delete(tempDir)
    }
  }
}

Assuming the average test body is just a few lines, the additional import fixture._ makes the code a bit clumsy.

Replacing this:

  it must "do thing 1" in test(1, "a") { fixture =>
    import fixture._
  // test body
  }

with:

  it must "do thing 1" in test(1, "a") { this =>
  // test body
  }

would make tests much more elegant (no repetition) and somewhat more readable.

I don’t see how implicit functions, implicit conversions, etc would replace scope injection without adding a lot of bloat (effectively making that pointless).

I really think this proposal is worth serious consideration. Kotlin receiver functions solve many of the problems Scala’s implicits claim to solve, but often without “dirtying” the file-context with a very wide import.

Basically, they’re “just” scoped extension methods, which naturally carry clear scope boundaries while being incredibly flexible in practice.

A small primer

A simple extension method in Kotlin looks like fun String.scream() = this + "!!!".
Note how the scope of the first term is transferred into the body of the extension method. This gets really powerful when you combine it with scoping, as in fun foo(bar: String.() -> Unit) = "hello".bar(), where in the context of the foo body, bar is available on all strings, and the function body rebinds this to the String receiver, such that foo { println(this + "!!!") } prints hello!!!

Kotlin uses this feature to spice up operations over POJOs to allow things like

val initializedBean = JavaBean().apply {
    x = "hello"
    y = 42
}

where a globally imported apply is defined as inline fun <T> T.apply(block: T.() -> Unit): T (full implementation here)

I’ve been tinkering with implicit function types in Dotty and they are far more complex while not quite being able to achieve the functionality of the simple function above.
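
For a flavour, here is a rough sketch of that kind of encoding (illustrative only; applyTo and JavaBean are made-up names, not the exact code): the receiver becomes a context parameter, but this is not rebound, so it still has to be summoned or named explicitly inside the block.

class JavaBean {
  var x: String = ""
  var y: Int = 0
}

// Scala 3 context-function counterpart of Kotlin's T.() -> Unit
def applyTo[T](t: T)(block: T ?=> Unit): T = {
  block(using t)
  t
}

val initializedBean = applyTo(new JavaBean) {
  summon[JavaBean].x = "hello" // `this` is not rebound, so the receiver
  summon[JavaBean].y = 42      // must be fetched (or named) explicitly
}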

This is nice, but I think Scala’s scoping rules are already flexible and complex enough IMHO.

Overloading the meaning of this would make the language significantly more complex and even harder (not simpler) to understand when reading code. All that for very little return — today, you can already implement the following, which I think is preferable as it’s slightly less implicit and less magic:

val initializedBean = JavaBean().apply { it =>
  it.x = "hello"
  it.y = 42
}

1 Like

That is good logic when there is no choice, but today that is not the case. Scala has a very big disadvantage for us: people at our company prefer a Kotlin JDBC library. Named tuples or receiver functions could make the situation better, but Scala has no such functionality. :frowning: