Implicit Function Types

But that would change the rules of the game completely. It would not resemble implicits in Scala at all, it would be a completely different language design. And you would have to answer a host of other questions, like how these things behave under generics.

1 Like

Depending on what restrictions were placed, the design could be rather different, yes. But it’s the logical extension of the direction implicit functions are already going. Maybe it is too much of an extension.

The current proposal allows implicits to be part of the type only for functions and provides the equivalent of some automatic currying/uncurrying to jump between a view of the implicit as part of the return value and a “plain” return value. I am convinced that this saves a good deal of boilerplate, but I’m not convinced that it’s the only extension that saves a good deal of boilerplate or that there aren’t alternate or complementary changes that would save more.
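Concretely, the jump between the two views looks something like this (a sketch in the proposed syntax, with Transaction standing in for the blog post’s example type):

class Transaction
type Transactional[T] = implicit Transaction => T

// Viewed one way, g returns a Transactional[Int]; viewed the other way,
// its body is expanded as if it took an (implicit t: Transaction), and a
// call site with an implicit Transaction in scope gets back a plain Int.
def g(x: Int): Transactional[Int] = x + 1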

I’m also not sure that this as it is doesn’t “change the rules of the game completely”. I can now say that a method is returning a Foo[A], and it doesn’t give me a Foo[A] at all when I use it. That’s a pretty new game.

In the surface syntax, yes. But the crucial point here is that implicit A => B is just syntactic sugar for ImplicitFunction1[A, B], and we know what that is. By contrast, we have no idea what adding implicit to every type would mean, nor if it would make sense in the end to do so (my hunch would be it wouldn’t).
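For reference, the underlying trait is roughly the following (sketched from memory of the proposal; the actual definition may differ in details such as variance):

trait ImplicitFunction1[-T0, +R] {
  def apply(implicit x: T0): R
}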

1 Like

I guess it depends how and whether one attempts to abstract the idea.

After thinking it through a bit more, I think that there is a natural interpretation of implicit that applies to anything but it is not identical to the idea in implicit functions, so it is an awkward fit to extend it in the way I was thinking. In particular, conflating an implicit (function from arguments to result) and a function from (implicit arguments) to result seems, on further reflection, like a dangerous idea.

If you use an implicit-argument function as a return value, you can consider it in either of two ways:

  1. It is a formalism that expresses your intent to require an implicit argument, or
  2. It is an actual object, created at each call, that evaluates once an implicit argument is available

If the latter, then I worry about the performance hit; this is definitely not a zero-cost abstraction if you’re adding the creation of an ImplicitFunction object to every method call. If the former, then I don’t think we’re necessarily prevented from coming up with other low-boilerplate syntax to express the intent when what you really mean is “I don’t care how you do it, just carry along these implicit values” not “I really want a function that takes implicit arguments”. Especially if you tackle the group-of-implicits problem and have an ImplicitTuple3 etc. that is isomorphic to a parameter block of three implicits (and then ImplicitFunction2[A, B, Z] would have a natural relationship with ImplicitTuple2[A, B] => Z).
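A minimal sketch of what such a hypothetical ImplicitTuple2 could look like (none of these types exist today, and materialize is an invented name):

case class ImplicitTuple2[A, B](a: A, b: B)

object ImplicitTuple2 {
  // summon both components from the enclosing implicit scope
  implicit def materialize[A, B](implicit a: A, b: B): ImplicitTuple2[A, B] =
    ImplicitTuple2(a, b)
}

// ImplicitFunction2[A, B, Z] would then correspond to ImplicitTuple2[A, B] => Z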

It seems like implicit tupling is slightly more general than implicit functions in that implicit tuples allow one to talk in a generic way about groups of implicit parameters distinct from what you might use them for, whereas implicit functions require you to have a transformation in mind.

Beware: this change merely drops an arbitrary restriction that implicits in Scala had compared to many other systems (like Agda, Coq, …). If a method can take some arguments, a closure should be able to abstract over those. Another such restriction forbids polymorphic function values.

I agree the blog post is showing a single (though potentially important) use case, and it’s not 100% clear whether you’d want to use that. In particular, instead of seeing when this pattern works, I’m curious to know when this pattern fails, to see if there are pitfalls, use cases which can’t be expressed, cases where you get the wrong value inferred, and so on.

I’m much more concerned about unintended side effects for this change:

We fix the problem by introducing a new disambiguation rule which makes nested occurrences of an implicit take precedence over outer ones.
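To illustrate the quoted rule (a hedged sketch; Context is an arbitrary stand-in type):

class Context

def outer(implicit outerCtx: Context) = {
  def inner(implicit innerCtx: Context): Context =
    implicitly[Context] // both implicits are eligible here; the new rule picks
                        // the nested innerCtx instead of reporting an ambiguity
  inner
}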

1 Like

TL;DR this could be solvable by abstracting over non-value types.

That seems a very good point. In fact, it is somewhat orthogonal to implicits and applies to the reader monad as well, though it matters less there. I think the current syntax should mean option 2 for consistency (I haven’t checked what’s implemented; I’d call option 1 a bug), but there is a consistent way (though I’m not 100% happy with it).

Nowadays we can’t abstract over parts of a method type, because method types are not value types and type aliases only support value types. In the example below, we know that f1 and f2 aren’t operationally equivalent because of the function-method distinction, while f2 and f3 are equivalent.

type Reader[S, T] = Function1[S, T]

def f1(a: A)(b: B): C = foo           // method with two parameter lists
def f2(a: A): B => C = b => foo       // method returning a function value
def f3(a: A): Reader[B, C] = b => foo // same as f2, through the value-type alias

To abstract over parts of a method type, you might consider something like this, with f4 behaving like f1, ignoring the concrete syntax:

type ImaginedReader[S, T] = (s: S): T
def f4(a: A): ImaginedReader[B, C] = b => foo

I don’t know the right concrete syntax for the body of f4; what I wrote doesn’t make much sense (trick questions: is s in scope in foo? why does the closure syntax not create a closure?). But let me ignore syntax bikeshedding in this post, also because this problem doesn’t appear in the implicit case.

I’m not sure how this would look in Scala; Haskell syntax happens to support this better (I’m not 100% sure this actually works in Haskell, but it makes sense):

type ImaginedReader s t = s -> t

-- the synonym expands, so f4 can bind both arguments
f4 :: A -> ImaginedReader B C
f4 a b = foo

Allowing that is a non-trivial extension, but at least it fits within what we understand: the compiler should just eagerly inline such type aliases.
Note 1: I considered having a separate keyword from type, but couldn’t pick a decent one—I had picked non-value-type as a strawman but it was so ugly I didn’t like my own idea.
Note 2: “eagerly inline type aliases” is actually very non-trivial since all type definitions in Scala are mutually recursive, so I expect significant implementation restrictions and I’m not sure such aliases could be exported.

Back to @Ichoran’s concern, it could be solved by

type ImplicitReader[S, T] = (implicit s: S): T
// or maybe it's
//  type ImplicitReader[S, T] = implicit (s: S): T
// ? I'm not sure, though `implicit` outside parens makes more sense.
def f5: ImplicitReader[Foo, Bar] = f5body // no abstraction here needed, just like the blog post, so fewer syntax questions

Do we want literally that? More brainstorming needed, please, but I think that’s a start.

And it turns out this goes in a similar direction—if one understands type ImplicitTuple3[A, B, C] = ... as a non-value type alias to be inlined at each use site (which I think gives the effect you want).
A technical problem is that nowadays a parameter block is not a type at all, just part of the syntax of method types, but changing that sounds conceptually plausible. Alternatively, just abstract over the whole binder.

Another technical problem is that various non-value types (such as method types) have not only free type variables but free term variables. So you can write foo1 and foo2, but Baz and foo3 aren’t allowed.

def foo1(ctx: Context)(tree: ctx.Tree): ctx.Tree     // fine: a dependent method
type Bar = (ctx: Context)(tree: ctx.Tree): ctx.Tree  // imagined alias; ctx is bound within it
def foo2: Bar                                        // fine: Bar has no free term variables
type Baz(ctx: Context) = (tree: ctx.Tree): ctx.Tree  // not allowed: the alias takes a term parameter
def foo3(ctx: Context): Baz(ctx)                     // not allowed: a type applied to a term

But you could generally allow type members to abstract over paths, as happens in DOT; for non-eager types, the value arguments could be erased during compilation. Right now, any such abstraction can only be done by defining Baz as a class, and there it’s less obvious that ctx can be erased.
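For comparison, the class-based encoding available today looks roughly like this (a sketch, with Context and Tree as in the example above):

trait Context { type Tree }

class BazAsClass(val ctx: Context) {
  // ctx is an ordinary constructor field, so it exists at runtime;
  // erasing it is much less obviously possible than for a type alias
  def apply(tree: ctx.Tree): ctx.Tree = tree
}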

1 Like

Beware: this change merely drops an arbitrary restriction that implicits in Scala had compared to many other systems (like Agda, Coq, …). If a method can take some arguments, a closure should be able to abstract over those. Another such restriction forbids polymorphic function values.

Sure; I can certainly sympathise with the argument that, given that we already have implicit parameters, they should be first-class. I’m just worried about the use of implicit parameters that are parameters in a genuine sense (i.e. not fully determined by their type, the way implicits used as typeclasses are). We have only one major example of this kind of implicit in the standard library, the ExecutionContext passed implicitly to various methods on Future, and many (myself included) regard it as a mistake; I really don’t want to see that style proliferate.

2 Likes

I’m just worried about the use of implicit parameters that are parameters in a genuine sense (i.e. not fully determined by their type, the way implicits used as typeclasses are). We have only one major example of this kind of implicit in the standard library, the ExecutionContext passed implicitly to various methods on Future, and many (myself included) regard it as a mistake; I really don’t want to see that style proliferate.

This style for me is the essence of Scala. I see lots of examples where it is used to great effect in the code we write: implicit Contexts in dotty, implicit Positions in Sebastien’s tree transformers, implicits for capabilities and effects. All of it in clean first-order style, no bulky monads required. I’ll bet you it’s the future :slight_smile:

3 Likes

I’d bet against it, having had to maintain that kind of code. Effectful call as <- vs = is subtle but still visible, allowing the code for effects to walk the line between completely invisible (like, say, Quasar or the various async things in Python) and overly verbose; for me that’s the very best part of Scala. If all you have is “this effect exists somewhere in this block”, then that’s the same user experience as checked exceptions (“one or more of these functions can throw this exception, but I can’t tell which without inspecting them all”).

I’m not attached to monads as such (indeed I share your concern about composability; I’m just hoping that getting sum types will make it a non-issue). What I want is a consistent representation of effects, with the ability to shift perspective between viewing an effectful computation as a simple function with an effect, or as a (more complex) pure value.

And indeed this PR helps that, as far as it goes. Having seen a roadmap that has “effect system” further down the road for Scala, I guess the things I see as important are a) that any work on implicits now is with a view to extending/subsuming them with the future general effect system, so that implicit parameters will ultimately be (perhaps sugar for) “plain old scala effects” rather than a funny thing with their own semantics. b) that the syntactic representation adopted for effects preserves the visibility they have in the current for/yield encoding, which for all its performance issues and compositionality concerns has proven very successful in today’s Scala.

I see that in the other direction: the future effect system is ultimately (perhaps sugar for) plain old Scala implicits, rather than a funny thing with their own semantics.

1 Like

If that were possible I would agree. But many useful effects (e.g. optionality, nondeterminism, async) can’t be modeled as an extra parameter, whereas an extra parameter can be very neatly modeled as an effect (Reader).
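For instance, a minimal hand-rolled Reader sketch (libraries offer richer versions), threading an invented Config parameter as an effect:

case class Config(verbose: Boolean)

case class Reader[R, A](run: R => A) {
  def map[B](f: A => B): Reader[R, B] = Reader(r => f(run(r)))
  def flatMap[B](f: A => Reader[R, B]): Reader[R, B] =
    Reader(r => f(run(r)).run(r))
}

def ask[R]: Reader[R, R] = Reader(identity)

// the "extra parameter" shows up only in the type, not at the call sites
def step: Reader[Config, String] =
  ask[Config].map(c => if (c.verbose) "verbose mode" else "quiet mode")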

Effectful call as <- vs = is subtle but still visible, allowing the code for effects to walk the line between completely invisible (like, say, Quasar or the various async things in Python) and overly verbose

I don’t get that part of it. I guess we both agree that in the future we will have very fine-grained effect systems, so much so that a lot of code will be effectful in some way or another. But then you end up with large parts of your program written in monadic structure. The structure itself does not tell you which effects the code has; you need to turn to the types for that.

On the other hand, every time you introduce a new effect category (and it could be as ubiquitous as “not proven to be total”) you need to completely refactor your code into the monadic scheme. And that means you are willing to take a slowdown of (I estimate) 10x, risk stack overflows, be extremely verbose, and have a really hard time composing all your fine-grained effects. Or you go free, which means better composition but likely even more boilerplate. I can see this work in the sense that you very forcefully tell your users: “don’t use effects, it’s just too painful”. So it could have educational value. But if you have to deal with effects, it’s utterly suboptimal in several dimensions.

Now, of course sometimes you need to use monads. E.g. for Future, or Task, or probabilistic programming, or generally every effect that needs a continuation to model (I am ignoring async here). If I look at algebraic effects (as in Eff, for instance), there’s no way you can model these in their full generality with implicit parameters. You need resumable exceptions or something equivalently powerful to model them all. That’s fine. It’s why we have monads.

But there is also a large class of effects which I call “simple effects” in contrast to the general “fancy effects” described above. These just indicate that your code might “do” something (such as change state, raise an exception, do I/O, access high-security data, raise an NPE, the list goes on). These effects can very conveniently be modelled with something like implicit parameters (we also need a way to avoid capture, so there’s something still missing here). And I see no reason why I should go into monadic contortions to model these effects. They are perfectly compatible with direct style execution, and the types tell me what the code does in any case.
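A rough sketch of that style (all names invented for illustration):

// capability tokens: holding one is permission to "do" the effect
trait CanIO { def println(s: String): Unit }
trait CanMutate

// direct style, but the effect still shows up in the type
def report(msg: String)(implicit io: CanIO): Unit =
  io.println(msg)

def bump(counter: Array[Int])(implicit m: CanMutate): Unit =
  counter(0) += 1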

In summary: To characterize what your code does, use a type. There’s no need to contort your program logic.

4 Likes

Martin just addressed this comment in
https://github.com/lampepfl/dotty/pull/1775/commits/9d57d81096de12f69cbcd8f4113243cc041cd520 (and fixups thereof), making my comment outdated. If this is the approach, I’d like the same optimization to apply to closure-returning methods though.

I don’t get that part of it. I guess we both agree that in the future we will have very fine-grained effect systems, so much so that a lot of code will be effectful in some way or another. But then you end up with large parts of your program written in monadic structure. The structure itself does not tell you which effects the code has; you need to turn to the types for that.

I’m deeply concerned about this already to be honest. Checking the signatures on mouseover isn’t good enough (as seen with checked exceptions - I have experience with the difficulty of debugging such cases); at the very least I would want the IDE to distinguish each effect visually, perhaps with coloured underlines like those currently used for implicit conversions. (I appreciate that types will always make this possible, but possible isn’t enough if it isn’t implemented consistently by the tools). The monad transformer encoding of effects is cumbersome but it does explicitly distinguish the different varieties of effect on each call; if the effect is only in the type signature and not visible at the call site then I think that’s going to be too magic to be maintainable.

you are willing to take a slowdown of (I estimate) 10x, risk stack overflows, be extremely verbose, and have a really hard time composing all your fine-grained effects.

The fact that people are willing to pay these costs in today’s Scala should show how important and valuable proper management of effects is. And really there’s no fundamental reason all these costs should be there; I expected to complain about these issues to you, not have you complain about them to me :slight_smile: . I would say: fix the causes of these problems properly, rather than hacking around them by introducing special cases. Proper tail call support would eliminate the stack overflow issues and hence the biggest performance problem (trampolining). Proper sum/union types (already coming, AIUI?) would make it easy to compose fine-grained effects. Removing excessive intermediates in a for/yield is a general issue with various possible solutions (a colleague today claimed rewrite rules would solve it?) that will need to be fixed one way or the other anyway (e.g. for collections), and would address the rest of the performance overhead.
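For instance, union types (as proposed for Dotty) would let fine-grained error effects combine without a transformer stack; a sketch with invented error types:

trait IOError; trait ParseError

def readFile(path: String): Either[IOError, String] = ???
def parse(s: String): Either[ParseError, Int] = ???

// the two error effects compose by plain subtyping
def run(path: String): Either[IOError | ParseError, Int] =
  readFile(path).flatMap(parse)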

But there is also a large class of effects which I call “simple effects” in contrast to the general “fancy effects” described above. These just indicate that your code might “do” something (such as change state, raise an exception, do I/O, access high-security data, raise an NPE, the list goes on). These effects can very conveniently be modelled with something like implicit parameters (we also need a way to avoid capture, so there’s something still missing here). And I see no reason why I should go into monadic contortions to model these effects.

It’s not a contortion, it’s a perfectly natural style that’s already very common in idiomatic Scala. I just don’t see these things as different enough to be worth having two different sets of semantics where one can cover all the use cases (especially given how tight the complexity budget for Scala is). Isn’t Scala all about having a unified, coherent model, and then recovering the performance details as optimisations where necessary? (compare the unified type hierarchy and @specialized to the Java approach to primitives).

In summary: To characterize what your code does, use a type. There’s no need to contort your program logic.

I have to disagree. Effectful code needs to look different at the call site if it’s to be maintainable.

I believe the argument being made is that monad transformers are way too cumbersome for the benefit they provide (versus other, lighter solutions that are much easier to compose), and I tend to agree. Scala is actually slowly starting to move away from MTs, whether it’s by using stacks or by other methods entirely. Effect tracking is one of those things where, if you try to get it “right”, you often end up with incredibly complex solutions that also have a fair few downsides. MTs just have a real problem: they are very hard to compose (when you also take into account code maintenance and changes).

Proper TCO isn’t going to happen any time soon (at least on the JVM). The JVM mandates a stack trace as part of its security model, and current tools for attempting to rewrite the AST to remove tail-call issues have created problems in other areas. AST rewriting also can’t really happen across libraries, due to the way the JVM is designed. This may be solved by scala-native, but that isn’t something that can be solved universally.

Re TCO, have you seen https://github.com/wheaties/TwoTails?
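(For context, a minimal illustration of the gap TwoTails targets: scalac’s built-in @tailrec handles direct self-recursion only, so mutual recursion like the sketch below still consumes stack.)

import scala.annotation.tailrec

@tailrec def countdown(n: Int): Int = // accepted: direct self-recursion
  if (n == 0) 0 else countdown(n - 1)

def isEven(n: Int): Boolean =         // @tailrec would reject these two:
  if (n == 0) true else isOdd(n - 1)  // the tail calls are mutual, so each
def isOdd(n: Int): Boolean =          // hop costs a stack frame today
  if (n == 0) false else isEven(n - 1)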

Also I guess the question is whether the reason monad transformers and the free monad are cumbersome is inherent or could be fixed.

Yes, and this (https://github.com/wheaties/TwoTails) is what I meant by limitations. Also, I believe Jason Zaugg has directly commented on TwoTails and the reasons why its techniques are not currently being implemented by scalac (you would probably have to ask him to clarify more; I remember him mentioning it on Gitter once).

Note that the difference between full TCO and Martin’s optimization is that full TCO is a very aggressive optimisation technique that is likely to get a lot of things “wrong” (either by degrading performance or, at worst, by creating a semantic change), whereas closure inlining only in the specific case of implicit functions is very safe (and it would still be faster than monad comprehension with full TCO, unless full TCO is also combined with aggressive inlining).

Effect tracking is one of those things where, if you try to get it “right”, you often end up with incredibly complex solutions that also have a fair few downsides. MTs just have a real problem: they are very hard to compose (when you also take into account code maintenance and changes)

True as far as it goes, but I think the more recent solutions (e.g. free coproducts) are not watering down the principles; rather, they are looking to retain the principled effect tracking of MTs while offering something easier to compose and use. We’ve had a Cambrian explosion of such libraries over the last year or so; now it’s a case of letting a consensus form and the documentation/tutorials/etc. catch up. I really think this process is going fine, and language-level support for sum/union types will make it even better.

And again, even with how cumbersome they currently are, MTs are working, deployed in production, because they solve real problems. The most important thing is to not mess up the working cases.

Admittedly I am a bit late to this conversation, but after seeing @odersky and Dmitry’s talks at Scala Days, I feel somewhat obliged to comment (you guys asked for feedback, so here it is :slight_smile: ).

I asked a variation of this question after Dmitry’s talk, but here it is again, explained further. I agree that having implicit-parameter boilerplate on many functions is not ideal, so I can get behind the stated motivation for this change. What bothers me, like @Ichoran, is that this implementation takes what is fundamentally an input to your function (some implicit context) and expresses it in the output type of the function.

I think this is generally quite confusing, especially for new developers, and is made worse by the fact that there’s an already-confusing subject beneath it (implicits). And it’s doubly confusing, because a developer might a) not know that they have to provide the implicit, and b) not know what to do with that weird contextualized return type.

Further complicating matters is that the name of the implicit parameter is no longer found in the function signature (e.g. thisTransaction in the blog example). While you can figure out where this comes from, it is yet another source of confusion for a developer who is not aware of what’s happening.

I also question whether it’s really saving that much code. In the Dotty compiler example, there are ~2K instances of (implicit ctx: Context) being replaced by ~2K instances of Contextualized[ReturnType] or similar, unless I’m misunderstanding something.
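That is, roughly this kind of rewrite (hypothetical signatures; the alias name follows the pattern discussed above):

trait Context; trait Tree

// before: ~2K of these
def typedBefore(tree: Tree)(implicit ctx: Context): Tree = ???

// after: still ~2K mentions of Context, one per signature
type Contextualized[T] = implicit Context => T
def typedAfter(tree: Tree): Contextualized[Tree] = ???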

Perhaps there’s another reason to make this change, besides the one in the blog post? Perhaps that reason is much more compelling, and outweighs the additional confusion this change introduces?

Unless someone can convince me otherwise, this is not a feature I would use in Dotty. Explicitly stating the implicit parameters seems much more straightforward and isn’t really that much more code.

1 Like

Hi,
I have watched Mr. Odersky’s presentation at Scala Days and read his blog post on this subject, and I would just like to say that, from my point of view as a Scala user, the presented solution feels excellent.