Implicit Function Types

TL;DR: this could be solvable by abstracting over non-value types.

That seems like a very good point. In fact, it is somewhat orthogonal to implicits and applies to the reader monad as well, though it matters less there. I think the current syntax should mean option 2 for consistency (I haven’t checked what’s implemented; I’d call 1 a bug), but there is a consistent way (though I’m not 100% happy with it).

Nowadays we can’t abstract over parts of a method type: method types are not value types, and type aliases only support value types.
We know that f1 and f2 below aren’t equivalent operationally, because of the function-method distinction, while f2 and f3 are equivalent (Reader is just a transparent alias).

def f1(a: A)(b: B): C = foo
def f2(a: A): B => C = b => foo
def f3(a: A): Reader[B, C] = b => foo
type Reader[S, T] = Function1[S, T]
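
To see the operational difference concretely (a sketch, assuming the A, B, C, a and foo from above; Scala 2 syntax):

val g1: B => C = f1(a) _ // f1 is a method: it becomes a function value only via eta-expansion
val g2: B => C = f2(a)   // f2 already returns a Function1 value (a closure)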

To abstract over parts of a method type, you might consider something like this, with f4 behaving like f1, ignoring the concrete syntax:

type ImaginedReader[S, T] = (s: S): T
def f4(a: A): ImaginedReader[B, C] = b => foo

I don’t know the concrete syntax for the body of f4; what I wrote doesn’t make complete sense (trick questions: is s in scope in foo? Why is closure syntax not creating a closure?). But let me ignore syntax bikeshedding for this post, also because this problem doesn’t appear in the implicit case.

I’m not sure how this would look in Scala, but Haskell syntax happens to support this better (I’m not 100% sure this actually works in Haskell, but it makes sense):

type ImaginedReader s t = s -> t
f4 :: A -> ImaginedReader B C
f4 a b = foo

Allowing that is a non-trivial extension, but at least it fits within what we understand: the compiler would just eagerly inline such type aliases.
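
Spelled out as a sketch (using the imagined, non-checkable syntax from above):

// type ImaginedReader[S, T] = (s: S): T   -- a non-value type alias
// def f4(a: A): ImaginedReader[B, C]      -- after eager inlining becomes:
// def f4(a: A)(s: B): C                   -- the same method type as f1
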
Note 1: I considered having a separate keyword from type, but couldn’t pick a decent one; I had picked non-value-type as a strawman, but it was so ugly I didn’t like my own idea.
Note 2: “eagerly inline type aliases” is actually very non-trivial since all type definitions in Scala are mutually recursive, so I expect significant implementation restrictions and I’m not sure such aliases could be exported.

Back to @Ichoran’s concern, it could be solved by

type ImplicitReader[S, T] = (implicit s: S): T
// or maybe it's
//  type ImplicitReader[S, T] = implicit (s: S): T
// ? I'm not sure, though `implicit` outside parens makes more sense.
def f5: ImplicitReader[Foo, Bar] = f5body // no abstraction here needed, just like the blog post, so fewer syntax questions

Do we want literally that? More brainstorming is needed, but I think that’s a start.

And it turns out this goes in a similar direction, if one understands type ImplicitTuple3[A, B, C] = ... as a non-value type alias to be inlined at each use site (which I think gives the effect you want).
A technical problem is that nowadays a parameter block is not a type at all, just part of the syntax of method types, but changing that sounds conceptually plausible. Alternatively, one could just abstract over the whole binder.

Another technical problem is that various non-value types (such as method types) have not only free type variables but also free term variables. So you can write foo1 and foo2 below, but Baz and foo3 aren’t allowed.

def foo1(ctx: Context)(tree: ctx.Tree): ctx.Tree
type Bar = (ctx: Context)(tree: ctx.Tree): ctx.Tree
def foo2: Bar
type Baz(ctx: Context) = (tree: ctx.Tree): ctx.Tree
def foo3(ctx: Context): Baz(ctx)

But you could generally allow type members to abstract over paths, as happens in DOT; for non-eager types, the value arguments could be erased during compilation. Right now, any such abstraction can only be done by defining Baz as a class, and it’s less obvious that ctx can be erased there.
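
For illustration, the class-based encoding mentioned above might look like this (a sketch; Context and Tree are assumed abstract, as in the foo1 example):

trait Context { type Tree }

// ctx becomes a runtime field of Baz, so it is much less obvious that the
// compiler could erase it:
abstract class Baz(val ctx: Context) {
  def apply(tree: ctx.Tree): ctx.Tree
}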

1 Like

Beware: this change drops an arbitrary restriction that the language has in comparison to implicits in many other systems (like Agda, Coq, …). If a method can take some arguments, a closure should be able to abstract over those. Another such restriction forbids polymorphic function values.

Sure; I can certainly sympathise with the argument that, given that we already have implicit parameters, they should be first-class. I’m just worried about the use of implicit parameters that are parameters in a genuine sense (i.e. not fully determined by their type, the way implicits used as typeclasses are). We have only one major example of this kind of implicit in the standard library, the ExecutionContext passed implicitly to various methods on Future, and many (myself included) regard it as a mistake; I really don’t want to see that style proliferate.

2 Likes

I’m just worried about the use of implicit parameters that are parameters in a genuine sense (i.e. not fully determined by their type, the way implicits used as typeclasses are). We have only one major example of this kind of implicit in the standard library, the ExecutionContext passed implicitly to various methods on Future, and many (myself included) regard it as a mistake; I really don’t want to see that style proliferate.

This style for me is the essence of Scala. I see lots of examples where it is used to great effect in the code we write: implicit Contexts in dotty, implicit Positions in Sebastien’s tree transformers, implicits for capabilities and effects. All of it in clean first-order style, no bulky monads required. I’ll bet you it’s the future :slight_smile:
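
For a concrete flavour of that style, here is an illustrative sketch (not actual dotty code):

trait Context { def owner: String }

// The context flows through implicitly, in direct style, with no monadic plumbing:
def currentOwner(implicit ctx: Context): String = ctx.owner

def report(msg: String)(implicit ctx: Context): Unit =
  println(s"[$currentOwner] $msg")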

3 Likes

I’d bet against it, having had to maintain that kind of code. Marking an effectful call as <- vs = is subtle but still visible, allowing the code for effects to walk the line between completely invisible (like, say, Quasar or the various async things in Python) and overly verbose; for me that’s the very best part of Scala. If all you have is “this effect exists somewhere in this block”, then that’s the same user experience as checked exceptions (“one or more of these functions can throw this exception, but I can’t tell which without inspecting them all”).

I’m not attached to monads as such (indeed I share your concern about composability; I’m just hoping that getting sum types will make it a non-issue). What I want is a consistent representation of effects, with the ability to shift perspective between viewing an effectful function as a simple function with an effect, or as a (more complex) pure value.

And indeed this PR helps that, as far as it goes. Having seen a roadmap that has “effect system” further down the road for Scala, the things I see as important are a) that any work on implicits now is done with a view to extending/subsuming them with the future general effect system, so that implicit parameters will ultimately be (perhaps sugar for) “plain old Scala effects” rather than a funny thing with their own semantics, and b) that the syntactic representation adopted for effects preserves the visibility they have in the current for/yield encoding, which for all its performance issues and compositionality concerns has proven very successful in today’s Scala.

I see that in the other direction: the future effect system is ultimately (perhaps sugar for) plain old Scala implicits, rather than a funny thing with their own semantics.

1 Like

If that were possible I would agree. But many useful effects (e.g. optionality, nondeterminism, async) can’t be modeled as an extra parameter. Whereas an extra parameter can be very neatly modeled as an effect (Reader).
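
For reference, the Reader encoding of “parameter as effect” is just this (standard textbook code, not from this thread):

// Reader[R, A]: a computation that needs an environment R to produce an A.
case class Reader[R, A](run: R => A) {
  def map[B](f: A => B): Reader[R, B] = Reader(r => f(run(r)))
  def flatMap[B](f: A => Reader[R, B]): Reader[R, B] = Reader(r => f(run(r)).run(r))
}

// The “extra parameter” moves into the result type:
def greeting: Reader[String, String] = Reader(name => s"Hello, $name")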

Marking an effectful call as ← vs = is subtle but still visible, allowing the code for effects to walk the line between completely invisible (like, say, Quasar or the various async things in Python) and overly verbose

I don’t get that part of it. I guess we both agree that in the future we will have very fine-grained effect systems, so much so that a lot of code will be effectful in some way or another. But then you end up with large parts of your program written in monadic structure. The structure itself does not tell you which effects the code has; you need to turn to the types for that.

On the other hand, every time you introduce a new effect category (and it could be as ubiquitous as “not proven to be total”), you need to completely refactor your code into the monadic scheme. And that means you are willing to take a slowdown of (I estimate) 10×, risk stack overflows, be extremely verbose, and have a really hard time composing all your fine-grained effects. Or you go free, which means better composition but likely even more boilerplate. I can see this working in the sense that you very forcefully tell your users: “don’t use effects, it’s just too painful”. So it could have educational value. But if you have to deal with effects, it’s utterly suboptimal in several dimensions.
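
To make the refactoring cost concrete, here is a toy example of my own (not from the thread), with the same logic in direct style and after threading a “might be absent” effect monadically:

// Direct style (partial on empty lists):
def sumEnds(xs: List[Int]): Int = xs.head + xs.last

// Monadic style: every intermediate step now lives inside the comprehension:
def sumEndsOpt(xs: List[Int]): Option[Int] =
  for {
    h <- xs.headOption
    l <- xs.lastOption
  } yield h + l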

Now, of course sometimes you need to use monads. E.g. for Future, or Task, or probabilistic programming, or generally every effect that needs a continuation to model (I am ignoring async here). If I look at algebraic effects (as in Eff, for instance), there’s no way you can model these in their full generality with implicit parameters. You need resumable exceptions or something equivalently powerful to model them all. That’s fine. It’s why we have monads.

But there is also a large class of effects which I call “simple effects” in contrast to the general “fancy effects” described above. These just indicate that your code might “do” something (such as change state, raise an exception, do I/O, access high-security data, raise an NPE, the list goes on). These effects can very conveniently be modelled with something like implicit parameters (we also need a way to avoid capture, so there’s something still missing here). And I see no reason why I should go into monadic contortions to model these effects. They are perfectly compatible with direct style execution, and the types tell me what the code does in any case.
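
A minimal sketch of such a “simple effect” as an implicit capability (all names are illustrative):

// CanIO is evidence that the code may perform I/O; the effect shows up in
// the types, but the code stays in direct style:
trait CanIO

def putLine(s: String)(implicit io: CanIO): Unit = println(s)

def greet(name: String)(implicit io: CanIO): Unit = putLine(s"Hello, $name")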

In summary: To characterize what your code does, use a type. There’s no need to contort your program logic.

4 Likes

Martin just addressed this comment in
https://github.com/lampepfl/dotty/pull/1775/commits/9d57d81096de12f69cbcd8f4113243cc041cd520 (and fixups thereof), making my comment outdated. If this is the approach, I’d like the same optimization to apply to closure-returning methods though.

I don’t get that part of it. I guess we both agree that in the future we will have very fine-grained effect systems, so much so that a lot of code will be effectful in some way or another. But then you end up with large parts of your program written in monadic structure. The structure itself does not tell you which effects the code has; you need to turn to the types for that.

I’m deeply concerned about this already, to be honest. Checking the signatures on mouseover isn’t good enough (as seen with checked exceptions; I have experience with the difficulty of debugging such cases); at the very least I would want the IDE to distinguish each effect visually, perhaps with coloured underlines like those currently used for implicit conversions. (I appreciate that types will always make this possible, but possible isn’t enough if it isn’t implemented consistently by the tools.) The monad transformer encoding of effects is cumbersome, but it does explicitly distinguish the different varieties of effect at each call; if the effect is only in the type signature and not visible at the call site, then I think that’s going to be too magic to be maintainable.

you are willing to take a slowdown of (I estimate) 10×, risk stack overflows, be extremely verbose, and have a really hard time composing all your fine-grained effects.

The fact that people are willing to pay these costs in today’s Scala should show how important and valuable proper management of effects is. And really there’s no fundamental reason all these costs should be there - I expected to complain about these issues to you, not have you complain about them to me :slight_smile: . I would say: fix the causes of these problems properly, rather than hacking around them by introducing special cases. Proper tail call support would eliminate the stack overflow issues and hence the biggest performance problem (trampolining). Proper sum/union types (already coming AIUI?) would make it easy to compose fine grained effects. Removing excessive intermediates in a for/yield is a general issue with various possible solutions (a colleague today claimed rewrite rules would solve it?) that will need to be fixed one way or the other anyway (e.g. for collections), and would address the rest of the performance overhead.

But there is also a large class of effects which I call “simple effects” in contrast to the general “fancy effects” described above. These just indicate that your code might “do” something (such as change state, raise an exception, do I/O, access high-security data, raise an NPE, the list goes on). These effects can very conveniently be modelled with something like implicit parameters (we also need a way to avoid capture, so there’s something still missing here). And I see no reason why I should go into monadic contortions to model these effects.

It’s not a contortion, it’s a perfectly natural style that’s already very common in idiomatic Scala. I just don’t see these things as different enough to be worth having two different sets of semantics where one can cover all the use cases (especially given how tight the complexity budget for Scala is). Isn’t Scala all about having a unified, coherent model, and then recovering the performance details as optimisations where necessary? (compare the unified type hierarchy and @specialized to the Java approach to primitives).

In summary: To characterize what your code does, use a type. There’s no need to contort your program logic.

I have to disagree. Effectful code needs to look different at the call site if it’s to be maintainable.

I believe the argument being made is that monad transformers are way too cumbersome for the benefit they provide (versus other, lighter solutions that are much easier to compose), and I tend to agree. Scala is actually slowly starting to move away from MTs, whether it’s by using stacks or other methods entirely. Effect tracking is one of those things where, if you try to get it “right”, you often end up with incredibly complex solutions that also have a fair few downsides. MTs just have a real problem in that they are very hard to compose (when you also need to take into account code maintenance and changes)

Proper TCO isn’t going to happen any time soon (at least on the JVM). The JVM mandates a stack trace as part of its security model, and current tools for attempting to rewrite the AST to remove tail-call issues have created problems in other areas. AST rewriting also can’t really happen across libraries, due to the way the JVM is designed. This may be solved by scala-native, but it isn’t something that can be solved universally.
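
For context, the gap in today’s scalac is easy to demonstrate (standard behaviour):

import scala.annotation.tailrec

// Self tail calls are rewritten to a loop by scalac:
@tailrec def countDown(n: Int): Int = if (n <= 0) 0 else countDown(n - 1)

// Mutual tail calls are not rewritten; each call consumes a stack frame.
// This is the case plugins like TwoTails target:
def isEven(n: Int): Boolean = if (n == 0) true else isOdd(n - 1)
def isOdd(n: Int): Boolean = if (n == 0) false else isEven(n - 1)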

Re TCO, have you seen https://github.com/wheaties/TwoTails?

Also I guess the question is whether the reason monad transformers and the free monad are cumbersome is inherent or could be fixed.

Yes, and this (TwoTails, a Scala compiler plugin for mutual tail recursion) is what I meant by limitations. Also, I believe that Jason Zaugg has directly commented on TwoTails and the reasons why its techniques are not currently being implemented by scalac (you would probably have to ask him to clarify more; I remember him mentioning it on Gitter once).

Note that the difference between full TCO and Martin’s optimization is that full TCO is a very aggressive optimisation technique that is likely to get a lot of things “wrong” (either by degrading performance or, at worst, by creating a semantic change), whereas closure inlining only in the specific case of implicit functions is very safe (and it would still be faster than monad comprehensions with full TCO, unless the full TCO is also combined with aggressive inlining).
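
Roughly, the safe special case being described is this (my paraphrase in Dotty-era syntax, not the PR’s literal output):

// A method whose result is an implicit function type, e.g.
//   def typed(tree: Tree): implicit Context => Tree = ...
// has a statically known shape, so the compiler can compile it as if it were
//   def typed(tree: Tree)(ctx: Context): Tree = ...
// and a fully applied call such as typed(t) then allocates no closure.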

Effect tracking is one of those things where, if you try to get it “right”, you often end up with incredibly complex solutions that also have a fair few downsides. MTs just have a real problem in that they are very hard to compose (when you also need to take into account code maintenance and changes)

True as far as it goes, but I think the more recent solutions (e.g. free coproducts) are not watering down the principles; rather, they are looking to retain the principled effect tracking of MTs while offering something easier to compose and use. We’ve had a Cambrian explosion of such libraries over the last year or so; now it’s a case of letting a consensus form and the documentation/tutorials/etc. catch up. I really think this process is going fine, and language-level support for sum/union types will make it even better.

And again, even with how cumbersome they currently are, MTs are working, deployed in production, because they solve real problems. The most important thing is to not mess up the working cases.

Admittedly I am a bit late to this conversation, but after seeing @odersky and Dmitry’s talks at Scala Days, I feel somewhat obliged to comment (you guys asked for feedback, so here it is :slight_smile: ).

I asked a variation of this question after Dmitry’s talk, but here it is again, explained further. I agree that having implicit parameter boilerplate on many functions is not ideal, so I can get behind the stated motivation for this change. What bothers me, like @Ichoran, is that this implementation takes what is fundamentally an input to your function (some implicit context) and expresses it in the output type of the function.

I think this is generally quite confusing, especially for new developers, and is made worse by the fact that there’s an already-confusing subject beneath it (implicits). And it’s doubly confusing, because a developer might a) not know that they have to provide the implicit, and b) not know what to do with that weird contextualized return type.

Further complicating matters is that the name of the implicit parameter is no longer found in the function signature (e.g. thisTransaction in the blog example). While you can figure out where this comes from, it is yet another source of confusion for the developer who is not aware of what’s happening.

I also question whether it’s really saving that much code. In the Dotty compiler example, there are ~2K instances of (implicit ctx: Context) being replaced by ~2K instances of Contextualized[ReturnType] or similar, unless I’m misunderstanding something.

Perhaps there’s another reason to make this change, besides the one in the blog post? Perhaps that reason is much more compelling, and outweighs the additional confusion this change introduces?

Unless someone can convince me otherwise, this is not a feature I would use in Dotty. Explicitly stating the implicit parameters seems much more straightforward and isn’t really that much more code.

1 Like

Hi,
I have watched Mr. Odersky’s presentation at Scala Days and read his blog post on this subject, and I would just like to say that, from my point of view as a Scala user, the presented solution feels excellent.

In the code bases I have at work, this change would easily save thousands of lines of code, particularly if you use implicits as a form of DI.

For example, if you are doing work with akka-http, you will probably have functions that look like this:

def doSomething(param: String)(implicit client: HttpExt, ec: ExecutionContext, materializer: Materializer) = ...

This is just a minimal list of dependencies, but it’s easily possible to have many more. Implicit function types can greatly reduce the boilerplate in such cases.
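
For instance, the dependency bundle above could be named once (a sketch in the Dotty syntax of the blog post; the alias name and the nesting of implicit function types are my assumption):

type AkkaEnv[T] =
  implicit HttpExt => implicit ExecutionContext => implicit Materializer => T

def doSomething(param: String): AkkaEnv[Future[String]] = ...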

1 Like

Hi, this is great.
Unfortunately, I think there is still a lack of user-friendliness when it comes to code assistance.
Would something like an “implicit import” be possible?

For example, we use SQL DSL to work with EclipseLink:

for (ra <- new OQuery(EmployeeAta.Type) {
  where((t.firstName === "John".ns) and (t.lastName === "Way".ns))
  forUpdate()
}) {
  val t = ra.copyAro()
  println(s"${t.firstName},${t.lastName}")
}

It would be great to replace the anonymous classes with implicit function types. However, if we do this, the methods “where” and “forUpdate” become global.
The SQL grammar is quite complex, and we need those DSL methods to be context-dependent.
For example, “forUpdate” must be visible only inside “OQuery”.
This is important because our application programmers use some methods quite rarely, and without smart code assistance it is uncomfortable.

How can we add “context dependency” in such case?

for (ra <- OQuery(EmployeeAta.Type) { implicit q: OQuery =>
  import OQuery.helper._
  where((t.firstName === "John".ns) and (t.lastName === "Way".ns))
  forUpdate()
}) {
  val t = ra.copyAro()
  println(s"${t.firstName},${t.lastName}")
}

For DSLs this would be quite usable.
For example, with Anorm it would be great if “import anorm._” happened automatically in some cases:

doSql {
  SQL"""
    select a.address_id AS country_lang from address a
    where a.address_id = $id
  """.as(SqlParser.int("country_lang").single)
}

Now we need to write:

doSql { implicit connection =>
  import anorm._
  SQL"""
    select a.address_id AS country_lang from address a
    where a.address_id = $id
  """.as(SqlParser.int("country_lang").single)
}

Ideas for this kind of tighter scoping of contextual identifiers were brought up quite some time ago by @lihaoyi and @stanch on the scala-debate mailing list. Short recap: They used scala-async as an example, which provides two main operations: async and await. await is defined to be only meaningful within the context of an async block. However, as scala-async behaves today, await is simply sitting in the global scope cluttering the namespace:

import scala.async.Async.{async, await}

val future = async {
  val f1 = async { ...; true }
  val f2 = async { ...; 42 }
  if (await(f1)) await(f2) else 0
}

With Scope Injection, you could define async as:

object Holder {
  val await = ???
}
def async[T](thunk: import Holder => T): Future[T] = ???

Which would give you a call-site syntax:

import scala.async.Async.async

val future = async {
  val f1 = async { ...; true }
  val f2 = async { ...; 42 }
  if (await(f1)) await(f2) else 0
}

There were also some other ideas for boilerplate-free implicit and scope injection, especially useful for writing DSLs in Scala (among them one idea which is basically Dotty’s implicit function types). You can find more in the current README.

DSL Paradise: Current State

The initial discussion resulted in an early prototype: http://github.com/dsl-paradise/dsl-paradise

I took the liberty of continuing the work on this prototype. The current state is a Scala compiler plugin that supports the proposed implicit context propagation (i.e., implicit functions for Scala 2) and scope injection: https://github.com/pweisenburger/dslparadise

It supports the following three use cases:

  • Implicit Context Propagation

    def f(a: Int `implicit =>` String) = println(a(5))
    
    def g(implicit x: Int) = x.toString
    
    > f("Hi, " + g)
    // desugaring
    > f { implicit imparg$1 => "Hi, " + g }
    Hi, 5
    

    Note: This issue has already been addressed in Dotty by implicit function types. The original DSL Paradise proposal is more restricted in the sense that it only defines implicit function types with a single function argument (however, implicit functions can be curried). The proposal also does not specify that arguments to implicit functions should be resolved from the implicit scope at the call site. The current implementation, however, supports this by now.

  • Scope Injection

    class Thingy {
      val u = 6
      val v = 7
    }
    
    def f(a: Thingy `import =>` Int) = println(a(new Thingy))
    
    > f(4 + u - v)
    // desugaring
    > f { imparg$1 => import imparg$1._; 4 + u - v }
    3
    
  • Static Scope Injection

    object Thingy {
      val u = 6
    }
    
    def f(a: Int `import` Thingy.type) = println(a)
    
    > f(u + 1)
    // desugaring
    > f { import Thingy._; u + 1 }
    7
    

The current implementation defines the implicit and import functions simply as type aliases for the standard function type (and thus retains compatibility with standard Scala):

type `implicit =>`[-T, +R] = T => R
type `implicit import =>`[-T, +R] = T => R
type `import =>`[-T, +R] = T => R
type `import`[T, I] = T

Nested Contexts: When nesting implicit functions, using a fresh name for each compiler-generated implicit argument can result in ambiguous implicit values (this problem is solved in Dotty by refining the precedence rules for implicit resolution).

This compiler plugin allows specifying a fixed name to be used for the implicit argument, which enables implicit argument resolution for nested contexts by shadowing the implicit argument of the outer context (the README gives a more detailed example).

2 Likes

This is very cool. Perhaps this implicit function type implementation should be a candidate (after a little rework) for inclusion in Scala 2.14, which is supposed to be the bridge between Scala 2 and Scala 3. I am looking forward to having implicit function types, and would support that wholeheartedly.

The scope injection feature also looks interesting, but it seems to me that in terms of capabilities it should be entirely subsumed by implicit function types, no? (i.e., you can encode anything scope injection does with implicit function types)

Sounds like a great idea :slight_smile:

I’m not entirely sure. I could imagine an encoding like this:

Definition site:

trait AwaitCapability // the capability that async provides to its thunk
def await[T](f: Future[T])(implicit ev: AwaitCapability): T = ???
def async[T](thunk: implicit AwaitCapability => T): Future[T] = ???

Call site:

import scala.async.Async.{async, await}

val future = async {
  val f1 = async { ...; true }
  val f2 = async { ...; 42 }
  if (await(f1)) await(f2) else 0
}

This example encoding ensures that await can only be called if an AwaitCapability is in the implicit scope (which is the case in async blocks thanks to implicit function types). But even in this example, you still need to make sure that await is in the lexical scope. So we still have identifiers in the global scope (increasing the chance for name clashes) that are actually only ever useful in some specific lexical contexts.

I think that scope injection is a more direct solution to this issue. I guess that you cannot encode the exact same feature with implicit function types only. It could be that the encoding is good enough, though, so that we don’t want to have scope injection as a separate concept; I’m not fully sure about the best trade-off.