val myPage = 'html()(
  'head(), 'div()("I wrote this page.")
)
It will compile correctly.
That is not a problem for JSON, and not a big problem for HTML (HTML has a very flexible structure).
But for the many DSLs that have a lot of context-dependent grammar (SQL, XML with an XSD, etc.),
static type checking can greatly improve usability.
:))))
Sorry, I will not try to prove to you that
static type checking and
auto code completion
are very good things
:))
If you are in the dynamic-typing camp, you simply do not have such problems.
That’s fair, but to @drdozer’s other point – is there anything you need that can’t be done with Scala 3’s implicit function types? I mean, I tend to think of this sort of strongly-typed context-specific syntax as the killer app for that feature…
The DSLs I showed are, intentionally, fairly unstructured. However, it is not difficult to extend these DSLs to make them statically type checked, and provide auto completion. It would be relatively trivial to provide an alphabet of HTML tag names taken from e.g. the X-HTML DTD, and constrain that elements can only be nested within others as the DTD allows. The glue is all provided by an implicit that provides the .apply method to use the left-hand-side as if it were a constructor. I didn’t bother adding strong child type constraints because life is short, and these things are best done by code generation from the external type system, but it’s not in principle difficult to do. And you can still render it into a tagless abstraction which retains the ability to separate construction from interpretation.
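As a rough illustration of the kind of nesting constraint described above, here is a minimal sketch using plain sealed traits. The names (`TypedHtml`, `Html`, `Head`, `Body`, `Div`, `Text`) are ours, not the actual DSL from the thread; a real version would be generated from the DTD as suggested:

```scala
// Illustrative sketch: marker traits encode which elements may nest
// where, so invalid structure is rejected at compile time.
object TypedHtml {
  sealed trait Node { def render: String }
  sealed trait HtmlChild extends Node // allowed directly under <html>
  sealed trait BodyChild extends Node // allowed inside <body> / <div>

  final case class Text(s: String) extends BodyChild {
    def render: String = s
  }
  final case class Div(children: BodyChild*) extends BodyChild {
    def render: String = children.map(_.render).mkString("<div>", "", "</div>")
  }
  final case class Head() extends HtmlChild {
    def render: String = "<head></head>"
  }
  final case class Body(children: BodyChild*) extends HtmlChild {
    def render: String = children.map(_.render).mkString("<body>", "", "</body>")
  }
  final case class Html(children: HtmlChild*) extends Node {
    def render: String = children.map(_.render).mkString("<html>", "", "</html>")
  }
}
```

With this encoding, `Html(Head(), Body(Div(Text("hi"))))` compiles, while `Html(Text("hi"))` is a type error, because `Text` is not an `HtmlChild`.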
The difference I think I’m discerning is that my style of building structures is a standard, immutable, referentially transparent expression. Builder patterns seem to locally construct a mutable object and then set properties on it, before “releasing” it to the containing scope. That is, incidentally, the big difference between my HTML DSL and the one in scalatags.
I just do not know how this can be done simply with implicit functions.
It is very simple with 5 functions, but when there are 500-1000 functions it is implicit hell.
Kotlin receiver functions provide simple hierarchical scope management.
Perhaps I’m missing something. The builder pattern appears to work by using a block of statements to set mutable values in the object being built, together with some support for lifting into scope verbs for configuring that object. The tagless DSL approach works by configuring a value through an argument list (rather than block of statements) and the verbs need to be brought into scope by conforming to the argument type(s). I don’t want to get into a holy war about mutability vs expressions. But if the functionality we want to provide is a mechanism to flexibly build complicated, domain-specific expressions with type-safety, then it looks more like an IDE tooling issue to me than a language one. That the IDE can reliably suggest to you what verbs are available within the value building context.
OK, this sounds interesting. Do you have a link so I can take a look? Perhaps looking at a real example, and at a real scale, I’ll see why builder support is required. Cheers
Sorry, I cannot provide a link to our real project; it is not open source yet.
There are many DSLs in Kotlin, but I do not think we need to go that far.
Let’s imagine an implementation of scalafx with:
implicit function types
DelayedInit (deprecated)
Kotlin receiver functions
IMHO: receiver functions would be the gold standard of simplicity
Ah, interesting point – yeah, I can see that forbidding certain sub-nestings isn’t going to work in the obvious block-based approaches. That’s a good argument for your approach. (Which I generally like, although I wish there was an obvious way to avoid the extra parens.)
I do not think it would be a problem with receiver functions and implicit arguments, if receiver functions supported implicit val shadowing.
Something like:
But that is a secondary task.
The first task is hierarchical scope management, which allows increasing cohesion.
IMHO: it simply makes code easier to write, maintain and use.
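For comparison, a hedged sketch of what hierarchical scope management looks like with today’s implicits, where shadowing only works if the inner implicit reuses the outer one’s name (`Ctx`, `line`, `nested` are made-up names for illustration):

```scala
// Sketch: each nested block introduces its own implicit context.
// Reusing the name `ctx` makes the inner implicit shadow the outer
// one; with a different name the two would be ambiguous instead.
object ScopeDemo {
  final case class Ctx(indent: Int)

  def line(s: String)(implicit ctx: Ctx): String = (" " * ctx.indent) + s

  def nested[A](body: Ctx => A)(implicit ctx: Ctx): A =
    body(Ctx(ctx.indent + 2))

  def demo: List[String] = {
    implicit val ctx: Ctx = Ctx(0)
    List(
      line("root"),
      nested { implicit ctx => // shadows the outer ctx in this block
        line("child")
      }
    )
  }
}
```

The need to repeat the exact name `ctx` at every nesting level is part of what the proposal above is trying to avoid.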
Perhaps I’ve written it somewhere else, but IMO scope injection would make tests much better.
Consider this code:
package scope
import java.io.Closeable
import java.nio.file.{Files, Path}
import org.scalatest.{FlatSpec, MustMatchers}
class FixturesDemoSpec extends FlatSpec with MustMatchers {

  behavior of "something"

  it must "do thing 1" in test(1, "a") { fixture =>
    import fixture._
    // test body
  }

  it must "do thing 2" in test(2, "b") { fixture =>
    import fixture._
    // test body
  }

  it must "do thing 3" in test(3, "c") { fixture =>
    import fixture._
    // test body
  }

  class Fixture(val resource: Any, val service: Any, val tempDir: Any)

  def test(arg1: Int, arg2: String)(body: Fixture => Unit): Unit = {
    val resource: Closeable = ???
    val service: Closeable = ???
    val tempDir: Path = ???
    try {
      body(new Fixture(resource, service, tempDir))
    } finally {
      resource.close()
      service.close()
      Files.delete(tempDir)
    }
  }
}
Assuming the average test body is just a few lines, the additional import fixture._ makes the code a bit clumsy.
Replacing this:
it must "do thing 1" in test(1, "a") { fixture =>
  import fixture._
  // test body
}
with:
it must "do thing 1" in test(1, "a") { this =>
  // test body
}
would make tests much more elegant (no repetition) and somewhat more readable.
I don’t see how implicit functions, implicit conversions, etc would replace scope injection without adding a lot of bloat (effectively making that pointless).
I really think this proposal is worth serious consideration. Kotlin receiver functions solve many of the problems Scala’s implicits claim to solve, but often without “dirtying” the file-context with a very wide import.
Basically, they’re “just” scoped extension methods, which naturally carries clear scope boundaries while being incredibly flexible in practice.
A small primer
A simple extension method in Kotlin looks like fun String.scream() = this + "!!!".
Note how the scope of the first term is transferred into the body of the extension method. This gets really powerful when you combine it with scoping, as in fun foo(bar: String.() -> Unit) = "hello".bar(), where in the context of the foo body, bar is available on all strings, and the function body rebinds this to String such that foo{ println(this + "!!!") } prints hello!!!
Kotlin uses this feature to spice up operations over POJOs to allow things like
val initializedBean = JavaBean().apply {
  x = "hello"
  y = 42
}
where a globally imported apply is defined as inline fun <T> T.apply(block: T.() -> Unit): T (full implementation here)
I’ve been tinkering with implicit function types in Dotty and they are far more complex while not quite being able to achieve the functionality of the simple function above.
This is nice, but I think Scala’s scoping rules are already flexible and complex enough IMHO.
Overloading the meaning of this would make the language significantly more complex and even harder (not simpler) to understand when reading code. All that for very little return — today, you can already implement the following, which I think is preferable as it’s slightly less implicit and less magic:
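The code this post refers to is not preserved here, but a plain-function version of the Kotlin `apply` example shown earlier in the thread, with the object passed as an explicit parameter rather than a rebound `this`, might look like the following sketch (`applyOn` is our name; Scala 2.13’s `scala.util.chaining.tap` provides the same operation):

```scala
// Less-magic alternative: the configuration block receives the object
// as an ordinary named parameter instead of rebinding `this`.
object ExplicitBuilder {
  final class JavaBean {
    var x: String = ""
    var y: Int = 0
  }

  implicit class ApplyOn[T](private val self: T) extends AnyVal {
    def applyOn(block: T => Unit): T = { block(self); self }
  }

  val initializedBean: JavaBean = new JavaBean().applyOn { b =>
    b.x = "hello"
    b.y = 42
  }
}
```

The extra `b =>` and `b.` prefixes are exactly the cost being debated: slightly more ceremony in exchange for an explicit, non-magical scope.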
It is good logic when there is no choice, but today that is not the case. Scala has a very big disadvantage for us: people at our company prefer the Kotlin JDBC library. Named tuples or receiver functions could make the situation better, but Scala has no such functionality.
I do not think case classes can adequately fill the role of named tuples.
IMHO: OOP fits big relational data quite badly.
There are some related themes:
package scala
/** A tuple of 3 elements; the canonical representation of a [[scala.Product3]].
*
* @constructor Create a new tuple with 3 elements. Note that it is more idiomatic to create a Tuple3 via `(t1, t2, t3)`
* @param _1 Element 1 of this Tuple3
* @param _2 Element 2 of this Tuple3
* @param _3 Element 3 of this Tuple3
*/
final case class Tuple3[+T1, +T2, +T3](_1: T1, _2: T2, _3: T3)
  extends Product3[T1, T2, T3]
{
  override def toString(): String = "(" + _1 + "," + _2 + "," + _3 + ")"
}
The main drawback of a tuple is that it’s a case class with undescriptive field names. Thus, if you want proper field names, you create your own case class. In a way, a custom case class is then a named tuple.
Also most tuple types are unspecialized and that means they suffer from boxing, unlike custom made case classes.
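A small illustration of both points (`User` is a made-up example type):

```scala
// A tuple has positional, undescriptive field names; a case class is
// in effect a named tuple with the same shape.
object NamedTupleDemo {
  val row: (Int, String, Boolean) = (1, "alice", true)
  // row._2 says nothing about what the field means.

  final case class User(id: Int, name: String, active: Boolean)
  val user: User = User(1, "alice", active = true)
  // Self-describing fields, and the Int and Boolean are stored
  // unboxed, unlike in the unspecialized Tuple3.
}
```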
I understand that to continue a constructive discussion I would need to write a very good SIP proposal. I am not a language designer, so it is no shame to admit I lack the qualifications to do that myself.
I am just saying that I do not think case classes are a very good fit for big relational data. At the very least, they require memory copying in JDBC wrappers.
I think they are simply inconvenient, and this is plainly visible in the documentation of any Scala JDBC library.
Are you kidding or trolling? Of course we use it. The Kotlin wrappers are just more comfortable.