Suspendable functions and coroutines

Kotlin’s suspendable functions and coroutines made async programming look very easy.

To be more precise, they make sequential async code look like plain sequential code. async/await and for comprehensions do the same, but Kotlin’s approach gives the feeling of encapsulating the async effect inside the method (to some extent): a caller in another suspendable function invokes it like a normal function.

It makes executing async code sequential by default and makes you opt in if you want to run multiple async functions in parallel. Not always, but IMO, most of the time we are executing async code sequentially, which is why I find Kotlin’s approach very attractive for modern async programming.
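To make that concrete in Scala terms: with plain Futures today the same sequential-by-default vs. opt-in-parallel distinction exists, it just isn’t hidden inside the method signature. A rough sketch (fetchUser/fetchOrders are made-up stand-ins):

```scala
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Made-up stand-ins for real asynchronous calls.
def fetchUser(id: Long): Future[String] = Future(s"user-$id")
def fetchOrders(id: Long): Future[Int]  = Future(3)

// Sequential by construction: the second Future is only created
// after the first one has completed.
def sequential(id: Long): Future[(String, Int)] =
  for {
    user   <- fetchUser(id)
    orders <- fetchOrders(id)
  } yield (user, orders)

// Opt-in parallelism: both Futures are started eagerly, then combined.
def parallel(id: Long): Future[(String, Int)] = {
  val userF   = fetchUser(id)
  val ordersF = fetchOrders(id)
  for {
    user   <- userF
    orders <- ordersF
  } yield (user, orders)
}
```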

It would be nice if Scala adopted this feature.

3 Likes

Before proceeding further I would like to mention a few things.

There are already some approaches for async / await sugar, e.g.

OpenJDK’s Project Loom is aiming to add JVM-level support for lightweight threads (fibers) and other features to support a wide variety of concurrency primitives: OpenJDK: Loom

Kotlin’s coroutines aren’t properly supported in GraalVM:

Gilles Duboscq (gilles-duboscq) wrote in https://github.com/oracle/graal/issues/366#issuecomment-383547970:

At the moment, Graal does not support compilation of non-reducible loops (i.e., loops with multiple entry-points). This is fine on HotSpot as we can just delegate execution of such methods to either the interpreter or C1 and they are fairly rare (for example javac or scalac would never produce such code patterns).
However, as you have seen, there is no backup strategy available on SVM/native-image.

Structured loops are an important (and conscious) decision in the design of the Graal IR, so I don’t think we want to change that. However, there are ways we could improve support for non-reducible loops that appear in bytecodes.

In particular, we could select the entry point that we want to keep and then duplicate parts of the loop on the other entry path so that a single entry point remains. This would cause some increase in code size, but at least it would allow Graal to better support Kotlin coroutines.

I would probably wait until the situation is clearer. Right now no approach looks future-proof, as they either have blocking issues and/or are in an immature state.

5 Likes

Working with Kotlin and its coroutines feature really makes async code nice to work with. But I would like to mention that coroutines in Kotlin are not coroutines as such; they are a form of delimited continuation, which Martin and co. have worked on before.

The Kotlin team have, however, given delimited continuations a better look and feel (syntax); maybe the Scala community can steal that from them by having a suspend annotation/keyword. Remember, Kotlin took so much from Scala.

A suspend feature, I believe, would make the development of libraries such as cats easier and improve the performance of async code. Suspend functions have done wonders for the Arrow library.

The groundwork has already been done in Scala in the form of delimited continuations and Scala coroutines, so why not make it sexier and ready? JDK 14/15 are coming up soon.

3 Likes

I’m all for this.

I think that calls more for improving for-comprehensions into something like what F# has.

Even with F# computation expressions, the return type is ‘Async’.

You have to explicitly opt in with ‘let!’ so that it performs the async operation in a sequential manner.

the return type is ‘Async’

as it should be. A program performing side effects is a different thing from a value which it may eventually compute. Haskell also has IO, where IO Int is different from Int.
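In Scala terms the same separation shows up with Future (or an IO type). A tiny sketch:

```scala
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

val n: Int = 42                        // a value we already have
val eventual: Future[Int] = Future(42) // a computation that will eventually yield an Int

// The type keeps the two apart: `eventual` cannot be used where an Int
// is expected; we have to go through map/flatMap (or block at the edge).
val doubled: Future[Int] = eventual.map(_ * 2)
```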

I agree.

I was just meaning to say that there are other ways to do concurrency. Not right or wrong. Not better or worse. Different ways which serve different purposes, dev styles, etc.

Coroutines and suspendable functions, for example, do not capture the async effect in the function’s return type. I find that extremely useful for certain use cases.

Disagree. Sync/async (or rather blocking/non-blocking) is just an implementation detail of the underlying runtime and its threading model that the user should not have to worry about. We only have to worry about it because the JVM makes us.

Haskell’s IO is a separate type not because it’s async but because it captures side effects. Being async itself should not be a side effect.

async/await is unfortunately a neglected area in Scala. It has mostly been ignored because we have for comprehensions for that, and they can cover not only async but any monadic effect. They are not bad, but the sole fact that being async forces us into a completely different syntax for writing regular, sequential code is bad.
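To show what I mean by a completely different syntax, here is a sketch of the same trivial logic written synchronously and then against Future (lookup is a made-up stand-in):

```scala
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Synchronous version: plain, direct-style code.
def totalSync(lookup: String => Int): Int = {
  val a = lookup("a")
  val b = lookup("b")
  a + b
}

// Asynchronous version: the same logic, but every step has to be threaded
// through a for-comprehension and the result type changes to Future[Int].
def totalAsync(lookup: String => Future[Int]): Future[Int] =
  for {
    a <- lookup("a")
    b <- lookup("b")
  } yield a + b
```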

Scala is lagging behind its competition in this area, and that’s not good.

3 Likes

Every language makes us worry about designating async boundaries. Keywords or types like async, await, promise, future, channel, task, parallel stream, parallel collection, concurrent collection, observable, actor, etc. all mark async boundaries. Synchronous code replaces them all with strict (eagerly evaluated) collections and values.

Making code asynchronous is equivalent to adding async boundaries. Similarly, making code synchronous is equivalent to removing async boundaries (e.g. by fully awaiting on them in a single place). You always need to do some code refactoring to change between those two styles.

The question is which primitives and which syntax we should choose for async programming. I suspect one of the main obstacles to a convenient async programming syntax is the semantics of the return keyword in Scala. Right now it returns from the nearest enclosing method (source-code wise, not generated-code wise). When used inside a function value, this is implemented by throwing and catching an exception. It breaks referential transparency and refactoring, is inefficient and surprising, and is now ultimately deprecated (Deprecated: Nonlocal Returns). Is a properly working return keyword vital for async programming? What do you think?
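For reference, a minimal sketch of the behaviour I mean (Scala 2 semantics; the return inside the lambda is compiled into throwing a NonLocalReturnControl exception that the enclosing method catches):

```scala
// Scala 2: `return` inside the lambda does not return from the lambda.
// It returns from firstEven by throwing a NonLocalReturnControl exception
// that the enclosing method catches. This is the now-deprecated behaviour.
def firstEven(xs: List[Int]): Option[Int] = {
  xs.foreach { x =>
    if (x % 2 == 0) return Some(x)
  }
  None
}
```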

2 Likes

You definitely want to be explicit when doing concurrency and parallelism, so I have nothing against these tools. But today we need to reach for async even when writing perfectly sequential code, and that’s only because we want to avoid blocking the OS thread, since these threads are an expensive resource.

But wait, why do we even have to worry about OS threads in a high-level language like Scala? Here’s the root of the problem: most of the time we need async only because language constructs (like a method body) are inherently bound to the underlying threading model. They should not be!

I’d be totally fine if return were removed. IMO, the biggest technical/language hurdle to convenient async programming is higher-order functions. Consider something as simple as any collection’s map method: the mapping function must be synchronous. If you want to map asynchronously, you must use some sequence/traverse abstraction.
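A small sketch of what I mean (priceOfSync/priceOfAsync are made-up stand-ins):

```scala
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Made-up stand-ins for a synchronous and an asynchronous lookup.
def priceOfSync(item: String): Int          = item.length
def priceOfAsync(item: String): Future[Int] = Future(item.length)

val items = List("apples", "bread", "milk")

// Synchronous mapping: map happily accepts a plain function.
val pricesSync: List[Int] = items.map(priceOfSync)

// Asynchronous mapping: map is useless here; we have to switch to traverse.
val pricesAsync: Future[List[Int]] = Future.traverse(items)(priceOfAsync)
```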

Could you be more precise about what you want?

What exactly do you want to be cheap and how?

What do you mean by “inherently bound to the threading model”? You can abstract away parallelism by using monads and having def method[F[_]: Monad](args): F[Whatever], then fixing F to e.g. FutureMonad when you want async code and IdMonad when you want sync code. The problem with this approach is that the abstraction is an illusion. Async boundaries are expensive, so you need to be perfectly aware of where you put them to reduce the overhead. If that weren’t the case, the compiler and/or runtime could make everything async and programmers could forget about the sync/async distinction.
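Concretely, I mean something like this sketch, assuming cats is on the classpath:

```scala
import cats.{Id, Monad}
import cats.implicits._
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global

// Business logic written once against an abstract effect F.
def total[F[_]: Monad](lookup: String => F[Int]): F[Int] =
  for {
    a <- lookup("a")
    b <- lookup("b")
  } yield a + b

// Sync: fix F to Id, i.e. no effect at all.
val syncResult: Int = total[Id](s => s.length)

// Async: fix F to Future (cats provides Monad[Future] given an ExecutionContext).
val asyncResult: Future[Int] = total[Future](s => Future(s.length))
```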

2 Likes

It’s simply the fact that a block of code in Scala (syntactically) cannot, during its execution, release or switch the OS (JVM) thread that it’s executing on.

I want to write a piece of business logic. This logic is sequential, which means it doesn’t involve any concurrency or parallelism. Because of that, I would like to use normal syntax with standard control-flow expressions like if, while, local variables, etc.

However, my sequential logic is long running and may contain waiting operations, e.g. waiting for some network response or a sleep. I want these to be non-blocking on the current thread so that I don’t exhaust the thread pool or create thousands of OS threads (if my sequential logic is invoked concurrently on the same pool).

I cannot do this now without lifting my perfectly sequential, boring business logic into completely different syntax with some async primitives.
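To illustrate the lifting, here is a sketch of a trivial polling loop, first blocking and then rewritten against cats-effect 3’s IO (the check effect is a made-up stand-in):

```scala
import scala.concurrent.duration._
import cats.effect.IO

// Blocking version: ordinary control flow, but it parks an OS thread while waiting.
def pollBlocking(check: () => Boolean): Unit =
  while (!check()) Thread.sleep(100)

// Non-blocking version: IO.sleep releases the thread, but the while loop has to
// become a recursive flatMap chain and the result type changes to IO[Unit].
def pollNonBlocking(check: IO[Boolean]): IO[Unit] =
  check.flatMap { ready =>
    if (ready) IO.unit
    else IO.sleep(100.millis).flatMap(_ => pollNonBlocking(check))
  }
```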

Now I admit that in Scala it’s impossible to achieve this ideal situation where I could forget about async entirely, even when writing sequential code. As I mentioned earlier, the biggest hurdle here is the use of higher-order functions, which accept only synchronous functions as parameters.

However, I also think that the situation could be much better than it is now if we had some nicer async-await primitive rather than relying entirely on for-comprehensions.
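One existing step in that direction is the scala-async macro (a sketch; it assumes the scala-async module is on the classpath and, for recent versions, the -Xasync compiler flag):

```scala
import scala.concurrent.{ExecutionContext, Future}
import ExecutionContext.Implicits.global
import scala.async.Async.{async, await}

// Made-up stand-ins for real asynchronous calls.
def fetchA(): Future[Int] = Future(1)
def fetchB(): Future[Int] = Future(2)

// Reads like sequential code; `await` may only appear inside an `async` block,
// and the macro rewrites the whole block into Future#flatMap calls.
def total(): Future[Int] = async {
  val a = await(fetchA())
  val b = await(fetchB())
  a + b
}
```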