Implicit Function Types

Effect tracking is one of those things where, if you try to get it “right”, you often end up with incredibly complex solutions that also have a fair few downsides. Monad transformers (MT) in particular have a real problem: they are very hard to compose (especially once you also take code maintenance and changes into account).

True as far as it goes, but I think the more recent solutions (e.g. free coproducts) are not watering down the principles, rather they are looking to retain the principled effect tracking of MT while offering something easier to compose and use. We’ve had a Cambrian explosion of such libraries over the last year or so, now it’s a case of letting a consensus form and the documentation/tutorials/etc. catch up. I really think this process is going fine, and language-level support for sum/union types will make it even better.

And again, even with how cumbersome they currently are, MTs are working, deployed in production, because they solve real problems. The most important thing is to not mess up the working cases.

Admittedly I am a bit late to this conversation, but after seeing @odersky and Dmitry’s talks at Scala Days, I feel somewhat obliged to comment (you guys asked for feedback, so here it is :slight_smile: ).

I asked a variation of this question after Dmitry’s talk, but here it is again explained further. I agree that having implicit parameter boilerplate on many functions is not ideal, so I can get behind the stated motivation for this change. What bothers me, like @lchoran, is that this implementation takes what is fundamentally an input to your function (some implicit context) and expresses it in the output type of the function.

I think this is generally quite confusing, especially for new developers, and is made worse by the fact that there’s an already-confusing subject beneath it (implicits). And it’s doubly confusing, because a developer might a) not know that they have to provide the implicit, and b) not know what to do with that weird contextualized return type.

Further complicating matters is that the name of the implicit parameter is no longer found in the function signature (e.g. thisTransaction in the blog example). While you can figure out where this comes from, it is yet another source of confusion for a developer who is not aware of what’s happening.
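To make the concern concrete, here is a minimal, hypothetical sketch using a plain (non-implicit) function type to show the shape being discussed; in Dotty the returned function’s argument would be implicit, so it would not even be named at the call site (Transaction and transfer are made-up names):

```scala
// Hypothetical Transaction type standing in for the blog example's context.
case class Transaction(id: Int)

// In the proposed Dotty syntax this would be `implicit Transaction => T`.
type Transactional[T] = Transaction => T

// The transaction is logically an *input*, yet it appears only in the
// *return type* of the method:
def transfer(amount: Int): Transactional[String] =
  tx => s"transferred $amount in transaction ${tx.id}"

// The caller supplies the transaction even though the parameter list
// never mentions it:
val result = transfer(100)(Transaction(1))
```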

I also question whether it’s really saving that much code. In the Dotty compiler example, there are ~2K instances of (implicit ctx: Context) being replaced by ~2K instances of Contextualized[ReturnType] or similar, unless I’m misunderstanding something.

Perhaps there’s another reason to make this change, besides the one in the blog post? Perhaps that reason is much more compelling, and outweighs the additional confusion this change introduces?

Unless someone can convince me otherwise, this is not a feature I would use in Dotty. Explicitly stating the implicit parameters seems much more straightforward and isn’t really that much more code.

1 Like

Hi,
I have watched Mr. Odersky’s presentation at Scala Days and read his blog post on this subject, and I would just like to say that, from my point of view as a Scala user, the solution presented feels excellent.

In the code bases I have at work, this change would easily save thousands of lines of code, particularly if you use implicits as a form of DI.

For example, if you are working with akka-http, you will probably have functions that look like this:

def doSomething(param: String)(implicit client: HttpExt, ec: ExecutionContext, materializer: Materializer) = ...

This is just a minimal list of dependencies, but it’s easily possible to have many more. Implicit function types can greatly reduce the boilerplate in such cases.
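A sketch of what the reduction could look like, written in the Dotty syntax proposed at the time (not valid Scala 2; the alias name HttpProgram and the result type Result are made up for illustration):

```scala
// Hypothetical alias bundling the three implicit dependencies once:
type HttpProgram[T] = implicit (HttpExt, ExecutionContext, Materializer) => T

// Each function now states the bundle in its result type instead of
// repeating the implicit parameter list:
def doSomething(param: String): HttpProgram[Result] = ...
```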

1 Like

Hi, it is great.
Unfortunately, I think it still lacks user-friendliness when it comes to code assistance.
Would something like an “implicit import” be possible?

For example, we use SQL DSL to work with EclipseLink:

for( ra <- new OQuery(EmployeeAta.Type) {
  where((t.firstName === "John".ns) and (t.lastName === "Way".ns))
  forUpdate()
}){
  val t = ra.copyAro()
  println(s"${t.firstName},${t.lastName}")
}

It would be great to replace the anonymous classes with implicit function types. However, if we do this, the methods “where” and “forUpdate” will be global.
The SQL grammar is quite complex, and we need those DSL methods to be context-dependent.
For example, “forUpdate” must be visible only inside “OQuery”.
This is important because our application programmers use some methods quite rarely, and without smart code assistance it is uncomfortable.

How can we add “context dependency” in such a case?

for( ra <- OQuery(EmployeeAta.Type) { implicit q: OQuery =>
  import OQuery.helper._
  where((t.firstName === "John".ns) and (t.lastName === "Way".ns))
  forUpdate()
}){
  val t = ra.copyAro()
  println(s"${t.firstName},${t.lastName}")
}

For DSLs it would be quite usable.
For example, with Anorm it would be great if “import anorm._” were done automatically in some cases, for example:

doSql {
  SQL"""
    select a.address_id AS country_lang from address a
    where a.address_id = $id
  """.as(SqlParser.int("country_lang").single)
}

Now we need to write:

doSql { implicit connection =>
  import anorm._
  SQL"""
    select a.address_id AS country_lang from address a
    where a.address_id = $id
  """.as(SqlParser.int("country_lang").single)
}

Ideas for this kind of tighter scoping of contextual identifiers were brought up quite some time ago by @lihaoyi and @stanch on the scala-debate mailing list. Short recap: They used scala-async as an example, which provides two main operations: async and await. await is defined to be only meaningful within the context of an async block. However, as scala-async behaves today, await is simply sitting in the global scope cluttering the namespace:

import scala.async.Async.{async, await}

val future = async {
  val f1 = async { ...; true }
  val f2 = async { ...; 42 }
  if (await(f1)) await(f2) else 0
}

With Scope Injection, you could define async as:

object Holder {
  val await = ???
}
def async[T](thunk: import Holder => T): Future[T] = ???

This would give you the call-site syntax:

import scala.async.Async.async

val future = async {
  val f1 = async { ...; true }
  val f2 = async { ...; 42 }
  if (await(f1)) await(f2) else 0
}

There were also some other ideas for boilerplate-free implicit and scope injection, especially useful for writing DSLs in Scala (among them one idea which is basically Dotty’s implicit function types). You can find more in the current README.

DSL Paradise: Current State

The initial discussion resulted in an early prototype: http://github.com/dsl-paradise/dsl-paradise

I took the liberty of continuing the work on this prototype. The current state is a Scala compiler plugin that supports the proposed implicit context propagation (i.e., implicit functions for Scala 2) and scope injection: GitHub - pweisenburger/dslparadise: Scala compiler plugin for boilerplate-free context propagation and scope injection

It supports the following three use cases:

  • Implicit Context Propagation

    def f(a: Int `implicit =>` String) = println(a(5))
    
    def g(implicit x: Int) = x.toString
    
    > f("Hi, " + g)
    // desugaring
    > f { implicit imparg$1 => "Hi, " + g }
    Hi, 5
    

    Note: This issue has already been addressed in Dotty by implicit function types. The original DSL Paradise proposal is more restricted in the sense that it only defines implicit function types with a single function argument (however, implicit functions can be curried). The proposal also does not specify that arguments to implicit functions should be resolved from the implicit scope at the call site. The current implementation, however, now supports this.

  • Scope Injection

    class Thingy {
      val u = 6
      val v = 7
    }
    
    def f(a: Thingy `import =>` Int) = println(a(new Thingy))
    
    > f(4 + u - v)
    // desugaring
    > f { imparg$1 => import imparg$1._; 4 + u - v }
    3
    
  • Static Scope Injection

    object Thingy {
      val u = 6
    }
    
    def f(a: Int `import` Thingy.type) = println(a)
    
    > f(u + 1)
    // desugaring
    > f { import Thingy._; u + 1 }
    7
    

The current implementation defines the implicit and import functions simply as type aliases for the standard function type (and thus, retains compatibility with standard Scala):

type `implicit =>`[-T, +R] = T => R
type `implicit import =>`[-T, +R] = T => R
type `import =>`[-T, +R] = T => R
type `import`[T, I] = T

Nested Contexts: When nesting implicit functions, using a fresh name for each compiler-generated implicit argument can result in ambiguous implicit values (this problem is solved in Dotty by refining the precedence rules for implicit resolution).

This compiler plugin allows specifying a fixed name to be used for the implicit argument, enabling implicit argument resolution for nested contexts by shadowing the implicit argument of the outer context (the README gives a more detailed example).

2 Likes

This is very cool. Perhaps this implicit function type implementation should be a candidate (after a little rework) for inclusion in Scala 2.14, which is supposed to be the bridge between Scala 2 and Scala 3. I am looking forward to having implicit function types, and would support that wholeheartedly.

The scope injection feature also looks interesting, but it seems to me that in terms of capabilities it should be entirely subsumed by implicit function types, no? (i.e., you can encode anything scope injection does with implicit function types)

Sounds like a great idea :slight_smile:

I’m not entirely sure. I could imagine an encoding like this:

Definition site:

def await(implicit ev: AwaitCapability) = ???
def async[T](thunk: implicit AwaitCapability => T): Future[T] = ???

Call site:

import scala.async.Async.{async, await}

val future = async {
  val f1 = async { ...; true }
  val f2 = async { ...; 42 }
  if (await(f1)) await(f2) else 0
}

This example encoding ensures that await can only be called if an AwaitCapability is in the implicit scope (which is the case in async blocks thanks to implicit function types). But even in this example, you still need to make sure that await is in the lexical scope. So we still have identifiers in the global scope (increasing the chance for name clashes) that are actually only ever useful in some specific lexical contexts.

I think that scope injection is a more direct solution to this issue. I guess that you cannot encode the exact same feature with implicit function types only. Could be that the encoding is good enough though, so that we don’t want to have scope injection as a separate concept; not fully sure about the best trade-off.

Thanks a lot.

I think “Static Scope Injection” is quite easy and sufficient. “Scope Injection” can be implemented with “Static Scope Injection” plus implicit function types.

I hope that one day “Static Scope Injection” will be implemented in Dotty. It would make Scala DSLs more powerful and more convenient.

1 Like

Yes, you end up with the identifiers in the global scope, also accessible from outside of the restricted context, so the encoding is not exactly equivalent. But IMHO it doesn’t warrant adding such a complex feature as scope injection, because you can have a nice error message when the construct is used outside of its scope and the implicit is not found, reading something like “this construct can only be used inside the scope of […] construct”, which I’d say is good enough.
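The suggested error message can in fact be provided today with scala.annotation.implicitNotFound; a minimal sketch, assuming a hypothetical AwaitCapability trait like the one in the encoding above:

```scala
import scala.annotation.implicitNotFound

// The annotation customizes the compile error shown when `await` is
// called with no capability in the implicit scope.
@implicitNotFound("`await` can only be used inside an `async` block")
trait AwaitCapability

def await[T](value: T)(implicit ev: AwaitCapability): T = value

// Inside an `async` block the capability would be provided implicitly;
// here we provide it by hand for illustration:
implicit val cap: AwaitCapability = new AwaitCapability {}
val x = await(42) // compiles; without `cap` it fails with the message above
```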

1 Like

I’m interested: will it work with currying?

def someFunction(arg: SomeType)(implicit implArg: SomeImplicitType)

In any case, scope injection allows us to:

  • Use third-party libraries
  • Minimize possible conflicts in the global scope

@mdedetrich Thanks for the reply which I just managed to see now (yikes). Your example is compelling and something I see frequently. Just to make sure I’m understanding, I think you’re saying that your example would be replaced as follows:

def doSomething(param: String)(implicit client: HttpExt, ec: ExecutionContext, materializer: Materializer) = ...

becomes:

def doSomething(param: String): Contextualized[...]

Where Contextualized essentially defines those implicit dependencies. This does cut down on the boilerplate; however, I will say that I also see a pattern like this:

def doSomething(param: String)(implicit ctx: Context) = {
  implicit val ec = ctx.ec
  implicit val materializer = ctx.materializer
  // ...
}

I suppose this last variation also has boilerplate that gets stripped away in the declaration of the implicit vals, so it does save something. What I continue to question, though, is the cost of muddying the waters between inputs and outputs, as discussed above.

I am getting the idea that implicit function types are useful for more than just cutting down on boilerplate, and that’s good. But originally I was responding to the stated motivation.
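A runnable analogue of the Contextualized alias guessed at above, using a plain function type (the Context fields are made up; in Dotty the alias would be `type Contextualized[T] = implicit Context => T`, so the context would be passed implicitly rather than explicitly):

```scala
// Hypothetical Context bundling the dependencies from the example above.
case class Context(ec: String, materializer: String)

// Plain-function analogue; the implicit-function version would let the
// compiler thread the Context through automatically.
type Contextualized[T] = Context => T

def doSomething(param: String): Contextualized[String] =
  ctx => s"$param via ${ctx.materializer}"

val res = doSomething("request")(Context("global-ec", "akka-materializer"))
```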

For comparison, there are extension functions in Kotlin:
see: https://kotlinlang.org/docs/reference/extensions.html
With this functionality we can build an SQL DSL:

fun search(name: String, minAge: Int?) = list {
    where {
        upper(c.name) like upper("%$name%")
        if (minAge != null) {
            and {
                c.age gte minAge
            }
        }
    }
}

In this example “where” is declared as:

fun where(op: WhereExpr.() -> Unit) =
            add(WhereExpr(this), op)

see: https://github.com/edvin/kdbc/blob/master/src/main/kotlin/kdbc/expression.kt

This is the equivalent of scope injection.
It’s very convenient.
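For comparison with Scala as it stands today, the Kotlin `where { ... }` shape can be approximated with an ordinary lambda, at the cost of naming the receiver; a minimal, hypothetical sketch (WhereExpr and its methods are made up):

```scala
// Hypothetical minimal builder mirroring the Kotlin `where { ... }` shape.
class WhereExpr {
  private val clauses = scala.collection.mutable.Buffer.empty[String]
  def like(column: String, pattern: String): Unit =
    clauses += s"$column LIKE '$pattern'"
  def render: String = clauses.mkString(" AND ")
}

// Without scope injection, the receiver must be a named lambda parameter:
def where(op: WhereExpr => Unit): String = {
  val w = new WhereExpr
  op(w)
  w.render
}

// Call site: `w` must be named, unlike Kotlin's implicit receiver `this`.
val sql = where { w => w.like("name", "%John%") }
```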

Right, I can totally see both points. Scope injection is very convenient, but it also adds complexity to the language, which may be unnecessary since you can also encode it. Having all values in the global scope (using the encoding) can become more of a problem when you start to have name clashes between identifiers.

Yes, currying should work, e.g., with the current plugin, you can write:

def someFunction(arg: SomeType): Int `implicit =>` (SomeTypeT `implicit =>` SomeTypeU)

Currying does not work with overloading, however; for example:

class OverloadingTest extends FunSuite {
  def doNothing(a: Int)(implicit b: String): Unit = {}
  def doNothing(a: Int)(implicit b: BigDecimal): Unit = {}
  test("string") {
    implicit val b: String = ""
    doNothing(1)
  }
}

fails to compile with:

Error:(13, 5) ambiguous reference to overloaded definition,

So, in the global scope, there will be conflicts very quickly.

I understand that Scala has a lot of complexity, and views (implicit vals) can make code quite surprising.
But “import anorm._” at the top of the file can be a real headache, and there is currently no way to easily overcome it.
:frowning:

I’m a little surprised to learn that there’s no syntax for implicit function values.

I was expecting that if given something like,

type Foo = implicit (Int, String) => implicit Boolean => Int

I would be able to write,

val foo: Foo =
  implicit (i: Int, s: String) => implicit (b: Boolean) =>
    s.length + i + (if(b) 1 else 0)

As it stands, it seems that you’d either need to write methods to be eta-expanded,

def foo1(implicit b: Boolean) = s.length + i + (if(b) 1 else 0)
def foo2(implicit i: Int, s: String): implicit Boolean => Int = foo1
val foo: Foo = foo2

or work with implicitly,

val blah: Foo =
  implicitly[String].length + implicitly[Int] + (if(implicitly[Boolean]) 1 else 0)

Neither of which are particularly pleasant.

I think we need to be able to name implicit function arguments if we’re going to be able to combine this feature cleanly with dependent function types. For instance, I think we ought to be able to write things along the lines of,

trait Bar { type T }
type Foo = implicit (b: Bar) => b.T => ...

More generally, I think we should be aiming to make implicit/dependent/polymorphic completely orthogonal, at least sufficiently to allow for a one-one correspondence between method types and function types.

8 Likes

Good point. Implicit closures are supported (same as in Scala 2.12), but the syntax does not allow giving the parameter a type. I.e., all you can do is:

implicit x => ...

We should generalize that to arbitrary closures.

3 Likes

Alternatively, we can create non-implicit functions and apply them implicitly.

The alternative approach is implemented in a library called feature.scala:

Examples can be found in its Scaladoc.

I think non-static scope injection is a very nice feature. It is heavily exploited in Kotlin as a convenient and easy
way to define builders/DSLs, and it has proven to be a really powerful thing.
It is built on top of Kotlin’s extension methods / extension functions / extension lambdas,
and in fact it is really close to what is described above,
but uses the word “extension” rather than “implicit”.
So Kotlin’s

ThisArg.() -> ResultType

Should become something like

ThisArg `extension import =>` ResultType

using DSL Paradise notation, or alternatively I personally would prefer something like

// without regular arguments - similar to Kotlin's `ThisArg.() -> ResultType`
(@this @import ThisArg) => ResultType
// or with them  - similar to Kotlin's `ThisArg.(RegularArg1, RegularArg2) -> ResultType`
(@this @import ThisArg, RegularArg1, RegularArg2) => ResultType

Also, I think it is very interesting to know that Kotlin also allows resolving “scope injection clashes” by
marking different blocks/lambdas with labels; this is available as the Qualified this expression.

Brief demonstration of that Kotlin’s features may look like this:

Kotlin’s extension methods/functions/lambdas:

    fun testExtensions() {
        data class PointXY(val x: Int, val y: Int)

        // extension method/function
        fun PointXY.mullXY(): Int {
            // first argument of extension method passed through `this`
            assert(this is PointXY)
            assert(this.x == x)
            assert(this.y == y)
            return x * y;
        }

        // extension field / local variable / method argument - a value of an extension function type, initialized with an extension lambda
        val addXY: PointXY.() -> Int = {
            // first argument of the extension lambda passed through `this`
            assert(this is PointXY)
            assert(this.x == x)
            assert(this.y == y)
            x + y;
        }

        // usage
        val p0: PointXY = PointXY(10,6)
        assert(60 == p0.mullXY())
        assert(16 == p0.addXY())
    }

Kotlin’s resolution of hidden names (Qualified this expression):

    fun testExtensionsScopeResolution() {
        data class PointXY(val x: Int, val y: Int)
        data class PointYZ(val y: Int, val z: Int)

        val pXY = PointXY(1,2)
        val pYZ = PointYZ(3,4)

        // Kotlin( `with(obj) {...}` ) == Kotlin( `obj.apply {...}`  ) == Scala( `{import obj._; ... }` )
        with(pXY) labelOuterLambda@ {
            pYZ.apply labelInnerLambda@ {
                // `x` and `z` resolve without clashes
                assert(x == pXY.x)
                assert(z == pYZ.z)
                // `this` resolves to `pYZ`; however, all outer `this`-es are also available via the qualified notation
                assert(this@labelOuterLambda == pXY)
                assert(this@labelInnerLambda == pYZ && this == pYZ)
                // ambiguity of `y` resolution can be resolved with labels; by default it is resolved in the nearest scope
                assert(y == pYZ.y && y == this.y)
                assert([email protected] == pYZ.y)
                assert([email protected] == pXY.y)
            }
        }
    }

So I think it would be nice if Scala had something similar (or something better). In my opinion, without scope injection in Scala, Kotlin will look much better than Scala in this area.

2 Likes

Yes, indeed, this is correct.

Yeah, the point is that sometimes you can’t put all of the config variables inside one global config (or doing so is detrimental for other design reasons).