PRE-SIP: Suspended functions and continuations

Because some of us don’t think monads everywhere are the answer, and that looking past that may bring us 5-10 years into the future.

4 Likes

I think this is a very intriguing proposal: while I haven’t done enough with this style in other languages to be fully comfortable with it myself yet, it’s clearly very popular, and a good implementation might well help broaden Scala’s appeal. I suspect I’d wind up using it at least a fair amount, if it provides good boilerplate reduction.

The most important thing I don’t fully grok yet is what this would look like when interoperating with monadically-wrapped code. If I have an existing codebase or collection of libraries written using cats-effect or ZIO and want to build a program using suspend (or vice versa), what do the interfaces between those look like? I suspect that it can be done (and hope that it’s straightforward), but don’t have the intuition for it yet. My guess is that clean interfaces here would be crucial for adoption, given the amount of investment folks have in the monadic approach.

Also: what are the implications for mixed code? I have multiple large projects that have evolved over time, such that they use both Future and IO/ZIO engines. Given that type classes are clearly important here, what would it look like when you need to interface between those different async models? I’m a little concerned that that could get messy at the call sites, but hoping that it wouldn’t be too hard to write clean adapters.

1 Like

Well, I agree. On the flip side, though, if this SIP gets accepted we will finally get some tech debt, and points for Scala 4 to remove cruft from Scala 3.

1 Like

Note that, technically, monadic reflection can be built over any continuation system, including the proposed one, if the Continuation interface is made available. [Ironically, the latest version of Loom does not expose Continuation directly to userland, so monadic reflection over Loom continuations in that form is not possible with the latest JVM version.]

Exposing some form of continuation is good.

On the other hand, I'm uncomfortable with the suspended part, because I have a feeling that a better way exists, one that would produce fewer integration problems.
Why: because embedding a call to suspended code (code with continuations) into non-suspendable code (code without continuations) is relatively hard: if you have a higher-order function that accepts non-suspendable code, then when you receive a suspended function, you can do nothing.

But embedding non-suspendable code into suspendable code is trivial: just wrap it in an interface with a suspension point.
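
For illustration, a minimal sketch of that wrapping, using a hypothetical Suspend context type (my own placeholder, not an API from the proposal):

trait Suspend

def plainTwice(x: Int): Int = x * 2  // ordinary, non-suspendable code

// The same code lifted into a suspendable shape: the body is adapted into a
// context function automatically, so the wrapping is trivial.
def suspendableTwice(x: Int): Suspend ?=> Int =
  plainTwice(x)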

Therefore, the right continuation solution should have an interface where a higher-order function accepts suspendable functions by default. This would allow us to avoid having two implementations of each higher-order method, one for suspended and one for non-suspended arguments. For strict interfaces without continuations, we have (or will have) pure functions.

So, instead of having a parallel hierarchy of functions, I prefer to have a Continuation capability in the library, which assumes that the code which uses this capability is CPS-transformed.

In such a way, the higher-order problem can be solved. Two other problems (integration with external libraries, and coloring) become a little harder, but I think this is an acceptable cost for higher-order support.
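
A rough sketch of what "suspendable by default" could look like, again with a made-up Suspend capability and a made-up withSuspendable helper, purely for illustration:

trait Suspend

// A higher-order function whose argument is suspendable *by default*.
def withSuspendable[A](f: Suspend ?=> A)(using Suspend): A = f

@main def demoSuspend(): Unit =
  given Suspend = new Suspend {}
  // Plain, non-suspendable code still fits, because it is adapted for free.
  println(withSuspendable(21 * 2)) // 42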

1 Like

Yes, we are not strictly proposing function coloring. I believe the other way, where we make Suspend an implicit requirement via context functions, can potentially be used to automatically CPS-transform functions that carry that requirement. This approach would be similar in ergonomics to LOOM, where you get non-blocking CPS underneath through VirtualThread or the earlier Continuation interface.

The map(_.bind) case in our examples already works non-blocking with our implementation based on LOOM; context functions with requirements seem to work with map. We could potentially piggyback on LOOM for the JVM if we figure out a way to have a different runtime on Native and JS.

Another alternative is to create a LOOM-style runtime for just the continuations, disregarding virtual threads and the things we leave abstract to ExecutionContext, but that is more or less the implementation of what we proposed above.

1 Like

If we are relying on Loom, I would expect to be able to implement the .bind example or the LazyList generator example just using “blocking” of Loom virtual threads, without needing any Scala language changes at all. In fact, I believe that is the whole motivation of Loom in the first place: to be able to write direct-style async code without needing to worry about compile-time transformations or function coloring, by pushing the complexity into the JVM runtime.
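
For concreteness, here is a rough sketch (my own, not from the proposal) of a blocking generator on virtual threads (JDK 21), with no language changes; generate and yieldNext are made-up names:

import java.util.concurrent.SynchronousQueue

def generate[A](producer: (A => Unit) => Unit): LazyList[A] =
  val queue = new SynchronousQueue[Option[A]]()
  Thread.ofVirtual().start { () =>
    producer(a => queue.put(Some(a)))  // blocks cheaply on a virtual thread
    queue.put(None)                    // signal end of stream
  }
  LazyList.continually(queue.take()).takeWhile(_.isDefined).map(_.get)

@main def fibDemo(): Unit =
  val fibs = generate[Int] { yieldNext =>
    var a = 0; var b = 1
    while a < 100 do { yieldNext(a); val next = a + b; a = b; b = next }
  }
  println(fibs.toList) // List(0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89)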

If this proposal relies on Loom, what is the additional value of this proposal that justifies the addition of new Scala syntax and keywords? If we believe the proposal can be used without Loom, how do we expect to solve the HOF issue?

7 Likes

how does the compiler know if the user wants the doThing calls to happen async-sequentially or async-concurrently?

That would depend on the implementation of map and the ExecutionContext, Scheduler, or surrounding policies for structured concurrency.

Our tests in our internal library with LOOM show that HOFs already work using just context functions and assuming a structured scope that forks and tracks the fibers. Maybe we can figure out a way not to color the functions and just use a Suspend or similar as a context parameter while still performing the CPS transformation where needed.

The proposal exclusively seeks a direct style. Maybe we can do that with colored functions or with context types. LOOM is JVM-only, but Scala is multi-platform. LOOM currently does not provide functions like continuation, and it's focused on virtual threads. We think depending on LOOM for this feature is fine if we have alternatives for Scala.js and Scala Native from which we can construct functions like continuation.

If the impl is not dependent on LOOM, we can solve the HOF problem by having a contextual type Suspend that works alongside the implicit system. The compiler could then treat lambdas and function bodies with this context type as candidates for CPS. This would probably require instrumenting both declarations and call sites, and we are not sure of the implications for binary compatibility, but it would not require an additional suspend keyword.

Would it work if, instead of a suspend keyword, we had just Suspend ?=> A? Context functions carry implicits in a way that composition with HOFs and other combinators works as-is today. The missing part is the CPS transformation over the body to create the suspension-point state machine and loops.

class Suspend
val x: Suspend ?=> Int = 1
val res = List(1, 2).map(_ + x) // this is ok
given Suspend = new Suspend
println(res) // List(2, 3)

I think such integration may look like this https://github.com/47deg/TBD/pull/30

There are better integrations for IO or any data type if its ADT has support for the particular case of suspension. In that case, it could also suspend instead of requiring the integration to run unsafeToCompletableFuture or a similar terminal function.
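
To make that boundary concrete, here is a hedged sketch of the non-suspending integration path with cats-effect (using its unsafeToCompletableFuture, as mentioned above); the await helper is made up, and the join() becomes cheap if the calling code runs on a Loom virtual thread:

import cats.effect.IO
import cats.effect.unsafe.implicits.global

// Made-up helper: run an IO to completion at the integration boundary.
def await[A](io: IO[A]): A =
  io.unsafeToCompletableFuture().join()

@main def demoAwait(): Unit =
  val program: IO[Int] = IO.pure(20).map(_ + 22)
  println(await(program)) // 42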

I think overall, what is missing from this proposal is a clear description of:

  1. What does this provide to a user that Loom does not? All your user-facing examples can be implemented with Loom with no compiler support. Are there user-facing code snippets that cannot work on Loom, but that you can get working with this proposal? The internal machinery like the continuation function isn't that interesting, because unless we can show improvements to user-facing code, it's just new syntax for library authors to implement the same thing they could already implement.

  2. What's the plan for Scala.js and Native? You repeatedly list being "cross-platform" as an advantage of your proposal over Loom, but AFAICT the only current implementation relies on Loom, and the whole cross-platform design is hand-waved away as "we'll figure something out". If you want to claim that being cross-platform is an advantage of your design, you actually have to provide a design that works cross-platform. Loom is probably more than a person-decade of work at this point, so figuring out an equivalent for Scala.js and Native is not something we can take for granted.

  3. What are the limitations of this proposal? Every other implementation of this in any language has significant user-facing limitations. Even hardcoded-into-the-runtime implementations like Loom or Go’s lightweight threading hit issues with FFI calls into C code or OS-level blocking syscalls. We need a clear picture of what trade-offs this proposal makes, and where in the solution space this fits.

I'm fully in support of the stated motivation: direct-style code is what everyone learns in programming 101. If we can express monads and stuff in a direct style, without all this map/flatMap inner-platform-effect noise, while preserving all the other benefits of using monads, that's great. But it's a tough problem to crack.

16 Likes

TLDR: This pre-SIP proposal to add "suspended functions and continuations" to Scala 3 needlessly imports problematic legacy Kotlin approaches to solving a problem that Loom already solves in a theoretically more performant and user-friendly fashion.

In this discussion, I’m going to ignore “side-effects” since it is my firm belief that “side-effect tracking” is not commercially relevant, and if it were (which it’s not), it would have another solution that is not connected to the challenges around async programming.

Kotlin attempted to solve the async problem by introducing language-level coroutines, which are implemented by specially translating functions marked by suspend into computations that may asynchronously suspend through (essentially) “saving” JVM stack to the heap.

This solution allows Kotlin to support async programming without the level of ceremony and boilerplate common in “reactive” solutions, including those based on monads. More importantly, it allows Kotlin to support direct imperative style, which is familiar to nearly all programmers and requires no training.

Unfortunately, what was once an advantage for Kotlin has become a liability and legacy decision: Kotlin chose to solve the async problem by modifying the Kotlin language, rather than the Kotlin runtime.

This means that information on which functions are async leaks into the programming language syntax itself. Kotlin has re-invented the colored functions problem, with all of its well-known drawbacks.

An alternative solution to the async problem would have been to bake async into the Kotlin runtime, rather than the language, essentially by reinventing virtual threads, which would have allowed Kotlin developers to write Kotlin code that is oblivious to whether it is async or sync.

Unfortunately for Kotlin, the JVM innovated at a rapid pace that few could predict, and now Loom has given us full async programming in a direct imperative style, without the need to mark any functions. Loom also transparently works with all legacy code, making it fully modern and asynchronous (with a few exceptions such as synchronized and file IO).

Kotlin is now in a weird situation where its (at the time) forward-looking support for async programming has become legacy: a solution that is inferior, in terms of maximum theoretical performance as well as user-friendliness, to the virtual threading solution powered by the Loom advancement of the JVM.

Scala 3 can already support fully async programming with no additional syntax or compiler passes, merely by running on a post-Loom JVM! In this world, we are able to write fully asynchronous code in a direct imperative style, without introducing the two-colored functions problem, or importing Kotlin's (now legacy) solution to the problem.

Now, in the spirit of fairness, I should point out that advancements in Loom do not help Scala.js or Scala Native. For all of the good that Loom does, its benefits are restricted to the JVM. Currently, the vast majority of all commercially relevant Scala development occurs on the JVM. While I love Scala.js and Scala Native and support their development, these markets are tiny and will remain so for the foreseeable future. The JVM is where all the commercial dollars are going and where Scala’s strengths remain.

In my view, the Scala.js and Scala Native markets are insufficient to justify any language-level changes to the async problem. If those markets were commercially relevant, then that might be an argument to invest resources in solving the problem, but it should be solved in the way that Loom solved it, which is a substantially and objectively superior solution (transparent runtime support) to the way Kotlin solved it (two-colored functions). Under no circumstances would I recommend that Scala 3 copy the Kotlin model to support the Scala.js and Scala Native communities.

In summary, there is no reason to pollute the Scala 3 language with two-colored functions when Loom already solves the async problem in a way that is significantly better than the way Kotlin solved it.

Notes on Proposal

  • Addition of continuations is not necessary or helpful to achieve structured concurrency; indeed, the JDK itself will be getting structured concurrency primitives
  • Being able to eliminate Either is a non-goal given Scala 3 can leverage exceptions, which are compatible with direct imperative style
  • Union types and flow typing provide a nice way to reduce the need for Option and keep things simple (a small illustration follows this list)
  • Non-blocking sleep and generators are already free with Loom
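
As a small illustration of the union-types point (my own example, assuming the -Yexplicit-nulls flag so that flow typing narrows nullable unions):

// With -Yexplicit-nulls, a nullable union plus flow typing can stand in
// for Option in simple cases.
def lookup(env: Map[String, String], key: String): String | Null =
  if env.contains(key) then env(key) else null

@main def demoLookup(): Unit =
  val port = lookup(sys.env, "PORT")
  if port != null then println(port.length) // narrowed from String | Null to String
  else println("PORT not set")
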
23 Likes

Implementing continuations at the compiler-level, as you noted, benefits all compiler targets, such as Scala.js and Scala Native.

Personally, I don't care about native compilation (even as I realize that many other people do), but I care about JavaScript — it is my opinion that, given the trends, JavaScript will dominate the market for scripting. And it's already giving the JVM a run for its money. The appeal of languages like Scala or Clojure is the benefit one gets from using a mature runtime and ecosystem of libraries, such as those of the JVM or JavaScript/Node.js. And if the language is designed to leech off a host, it might as well target multiple hosts. Because it's awesome to use the same language and the same base libraries on both the server and the client (the major appeal of JavaScript, actually). Also, Java++ will always be the next Java.

There’s also something to be said about Java’s timeline. The next LTS may support Loom, but many companies are still on Java 8 (2014), and I predict another 2 decades at least for the Scala language and libraries to be able to comfortably target and rely on Loom.


Not sure what you mean by two-colored functions being undesirable — isn’t that the whole point of types and effect systems? What are IO data types, if not much needed coloring? And why can’t Loom be used as an optimization only?

Personally, I like Kotlin’s continuations due to Arrow Fx.

There are downsides to the use of suspend in Kotlin versus IO, such as IO no longer being a data structure with a custom interpreter, which reduces its composability (via flatMap). However, suspend does suspend side effects, has better performance, and I can't see why it couldn't interface with our IO data types. But in spite of all that, I believe it has a better UX in the context of Scala.

Truth be told, if the compiler did this, then our IO data types become a tougher sell for your average programmer.

7 Likes

I agree that Javascript has become hugely important, including for scripting, but the bulk of backend development occurs in traditionally server-side languages such as Python, Java, C#, C++, etc.

The Javascript ecosystem is quite well-established and mature, and features (via Typescript) an extremely capable and industry-supported solution for static typing.

In my view, it would be a mistake to bet Scala’s future on the possibility of “winning” the unproven Javascript market. If anything, we should double down on what’s working: bread & butter APIs (REST + GraphQL), microservices, and data engineering.

In any case, even if we all agreed that Javascript is the future of Scala (which I do not assent to; I have no confidence in that whatsoever), I would strongly advocate for taking Loom's approach to async programming rather than Kotlin's approach, which would involve zero language-level changes to Scala 3. Rather, it would be an internal implementation detail in Scala.js (actually, a rather large number of them, which would ultimately mean Scala.js could support all the java.util.concurrent.* machinery).

That proposal (which is not this proposal), I could certainly get behind (assuming I were not footing the bill, because quite honestly, and in my opinion, there are much more pressing concerns to solve than async on Javascript!), because it would solve the async problem in the correct way, without permanently marring Scala 3 language syntax and semantics on account of niche platform-specific limitations.

7 Likes

Would it work If instead of a suspend keyword we have just Suspend ?=> A

Does that mean a function f: suspended A => B will have the type A => (Suspend ?=> B)?

If yes, let's say we have a List[A] and a method map[B](f: A => B): List[B].
Then List(1, 2, 3).map(x => f(x)) will have the type List[Suspend ?=> B]. Is that what we want?

val result = List(1, 2, 3).map(f)
if result.head == 1 then
   // will we ever get here, or not?

I think we need to distinguish two questions:

  • Should the sync/async distinction be reflected in types?
  • How does async get implemented?

I believe the answer to the first question is "yes, but not in the traditional way". The traditional way leads to the colored function problem: we need to duplicate large bodies of code in sync and async versions. Or, alternatively, we have to sprinkle a lot of type variables around just to accommodate the possibility that everything can be async. I believe we can do much better by switching the viewpoint from effects to capabilities. An async computation would then be a computation that references the Suspend capability. It turns out that this change in viewpoint gives a much better solution to the effect polymorphism problem.
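
For illustration only, a minimal sketch of that viewpoint, with Suspend as an ordinary trait standing in for a capability (this is my own sketch, not the research project's design):

trait Suspend

// A computation that may suspend simply requires the capability...
def fetchUser(id: Int)(using Suspend): String =
  s"user-$id" // a real implementation would suspend while doing I/O

// ...and callers holding the capability use it in direct style.
def greet(id: Int)(using Suspend): String =
  "Hello, " + fetchUser(id)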

The second question, how async is implemented, is largely orthogonal to the first. It could be by mapping to a monad, which is the most common solution for Scala today. Or by mapping to state machines, provided we can extend that to higher-order functions. Or by building on a runtime with support for coroutines and continuations, which (hopefully) Loom will provide. Once we buy into capabilities, we’ll have another interesting option: duplicate every function that can take a suspendable argument into sync and async versions. This may look like it reintroduces function coloring but doesn’t really since it is all codegen instead of source. Also, the capability type system tells us what we need to duplicate, so it’s hopefully not pervasive.

We are just starting a large-scale (7 persons over 5 years) project to research these things. The aim is to find a unified approach to resources and effects that could become a high-level analogue to what Rust is for lower-level systems programming. The sync/async problem is one of the fundamental problems we study. We will work on a concrete type system for suspendability, and will look into different implementation strategies and compare them.

Here is a slide deck that describes the project. Btw, I am still looking to fill some roles in this project. Here's a job ad for a postdoctoral scientist: https://recruiting.epfl.ch/Vacancies/2452/Description/2. Another ad for a research engineer will follow.

20 Likes

I strongly disagree with this position.

The async/sync distinction disappears in a language with green threads. Literally the only reason we have the async/sync distinction is because the JVM (unlike Go, etc.) chose not to give us green threads, but rather, to give us operating system level threads, which was a mistake given the highly concurrent nature of modern applications (perhaps foreseeable, perhaps not).

In a perfect world, Thread.sleep() (etc.) would always be efficient. It is only when Thread is an OS-level thread that it becomes impossible to make it so, at least, due to the way OS threads are implemented and handled. Loom brings us that perfect world, more or less, and with time it will be more and more true.

Tracking the sync/async distinction is absolutely not remotely relevant to software development in any language with green threads (see also: Go, Haskell, Erlang, etc.), and is a bit like using the type system to track whether MOVAPS is being used to copy floating-point values (that's an implementation detail that should be dealt with by your language, e.g. compiler + runtime).

We are just starting a large scale (7 persons over 5 years) project to research these things.

Given this research project, I hope it's crystal clear that now is not the time to import legacy Kotlin syntax + semantics into the Scala 3 language, since any attempt to do so would be both ill-advised in light of Loom, and premature in light of Scala 3 research topics.

6 Likes

It’s a fair question: what do we want to track in types? I agree it’s a tradeoff that has to be analyzed carefully, and that different people might make different choices.

In the particular case of suspension/delays, it seems to me that the main reason you want to track it is that a computation might hang on to some other critical resource that you also want to track. In that case it makes a difference whether the resource is only blocked for a short time, or indefinitely, until the suspended computation resumes. You might argue that even non-suspending long-running computations have the same problem. True, but I remember that Microsoft at some point decreed that any computation running longer than 40ms (or thereabouts) had to be wrapped in a future.

6 Likes

I mostly agree with JDG here. Lots of things are worth tracking in types, but I don't think the sync/async distinction is one of them.

Let's go back to the fundamentals: why don't people just spawn threads? One thread per Future? One thread per Actor? A lot of early Scala libraries did this. Why didn't it work?

The basic issue with this approach here is one of performance and resource footprint: threads are expensive, context switching is expensive, so you typically won’t want to have more than O(1,000) threads in an application that may have O(100,000,000) objects. Thus people invented async to try and multiplex workflows onto a small number of threads, to reduce the performance and resource footprint concerns.

But in the end, the problem with threads is not semantics, but performance! All the libraries and frameworks around async exist to try to make "async code" look exactly like "direct-style" threaded code, because semantically that's what people want. People invented "async backpressure" because they can no longer rely on blocking to slow down upstream producers. Everyone wants the semantics of threads, but without their cost issues.

Consider another data point: threads are avoided on the JVM in high-performance IO-bound use cases in favor of async, but they're not avoided in languages like Python, where they are the preferred mechanism for IO-bound operations. Why? Because everything else in Python is expensive enough that threads are comparatively cheap! Again, this emphasises that avoiding threads is a performance/cost/footprint issue, and not an issue of programming semantics.

Loom, by and large, fixes the cost of threads, and makes them cheap. You can now spawn O(10,000,000) threads without issue. Suddenly the whole reason for async goes away! Futures are still useful as a programming model, as are Actors, but they no longer need to be async. They can have a thread each, they can block sometimes, no problem at all. No need for async.
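
To make the "threads are now cheap" point concrete, here is a small sketch (my own, JDK 21+, plain Scala with no compiler support) that runs a hundred thousand blocking tasks, one virtual thread each:

import java.util.concurrent.{Callable, Executors}
import scala.jdk.CollectionConverters.*

@main def manyVirtualThreads(): Unit =
  // One virtual thread per task; each task "blocks" for 100 ms very cheaply.
  val executor = Executors.newVirtualThreadPerTaskExecutor()
  try
    val tasks = (1 to 100000).map { i =>
      (() => { Thread.sleep(100); i }): Callable[Int]
    }
    val results = executor.invokeAll(tasks.asJava) // waits until all are done
    println(results.size())                        // 100000
  finally executor.shutdown()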

Thus, given that the JVM has Loom, I don't think it's worth integrating compiler-level async functionality at this point. Async was important in the past, in some high-performance use cases. That use case has largely been satisfied by Loom, in a much better way. Even without Loom, I would argue that it doesn't quite meet the threshold for building into the language, and anyway all the implementations/proposals presented so far (including this one!) have so many limitations around HOFs as to be mostly unadoptable. I say this as someone who tried in earnest to adopt Scala-Continuations back in 2012.

Scala.js is cool - I am literally the world's biggest proponent of Scala.js - but I don't think such a major investment just for Scala.js is worth it. Furthermore, if we're willing to invest 7 people times 5 years of effort in Scala.js, there are a million other things the project could benefit from more than an async CPS transformation.

20 Likes