PRE-SIP: Suspended functions and continuations

So here are my 2 cents on this topic

Firstly, framing this discussion in terms of sync vs async misses the bigger picture: fundamentally we are debating concurrency and parallelism. To make things clear:

  • Concurrency: An abstraction that lets you represent the notion of performing multiple computations at the same time
  • Parallelism: A hardware/physical capability (i.e. cores/threads) that lets you actually perform multiple computations at once

Various programming languages, for both historical and design reasons, have conflated these concepts, which underlies what is being discussed here. For example Java (up until Loom) has used threads as its core concurrency model, which is great for CPU bound tasks since they bind directly to OS threads, but is terrible for IO bound tasks where your bottleneck is not your CPU being busy computing something but rather waiting for a response from the network/filesystem.

In my opinion, the ideal design if you were building a programming language from scratch is to have an abstraction for concurrency (not parallelism!) as your core abstraction and then have various interpreters/implementations to handle the different cases (i.e. a ForkJoinPool for the IO bound tasks in your webserver/database/filesystem and raw Threads for your CPU bound algorithms). Even with their problems, this is what Scala and Rust do, albeit with different results: both Scala (with Future/IO/Task etc) and Rust (with async) let you represent your logic in purely abstract terms while giving you fine-grained control over how those concurrent computations are run (i.e. executors/execution contexts). The big difference is that because Scala runs on a VM (the JVM) with seamless interop with a high-level language (Java) that doesn’t have the necessary abstractions (pre Loom), there are performance overheads, whereas Rust transforms async code at compile time into a state machine that runs on your specified executor (which is about the fastest way to solve this problem).
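To make the “abstract logic, pluggable execution” point concrete, here is a minimal Scala sketch (names like `combine`, `cpuPool` and `ioPool` are mine, not from any particular library): the same Future-based logic is written once and then handed different executors depending on the kind of work.

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

object PluggableExecution {
  // The logic only expresses *what* runs concurrently, not *where*.
  def combine(a: => Int, b: => Int)(ec: ExecutionContext): Future[Int] = {
    implicit val run: ExecutionContext = ec
    for {
      x <- Future(a)
      y <- Future(b)
    } yield x + y
  }

  // A fixed pool sized to the available cores, suited to CPU bound work ...
  val cpuPool: ExecutionContext =
    ExecutionContext.fromExecutor(
      Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors()))

  // ... and a pool that tolerates threads being parked on blocking IO.
  val ioPool: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

  val onCpu: Future[Int] = combine(40, 2)(cpuPool)
  val onIo: Future[Int]  = combine(40, 2)(ioPool)
}
```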

Commentators who keep repeating that “green threads will solve everything”, citing languages like Go or Elixir as examples, are conveniently ignoring that for CPU bound concurrent tasks you cannot rely on green threads alone. Green threads may be cheap, but for CPU bound concurrent tasks their extra cost is no longer negligible. In more detail:

  • Erlang: Erlang actually has significant problems with long running CPU bound code; anyone who has used it in a non-trivial setting will have come across this. In practice Erlang solves this problem largely the same way Python does, i.e. the CPU bound functions are abstracted away behind FFI, often backed by C (for example a lot of number-based operations don’t run on BEAM’s green threads, otherwise they would be pitifully slow). A quick Google of “erlang CPU bound” demonstrates this.
  • Go: It’s been well documented that if you have pure CPU bound tasks and you use goroutines as your concurrency primitive, the result is notably slower than the C/C++/Java/Rust equivalent using raw threads.
  • Haskell: Haskell has virtual threads but actually distinguishes between IO bound tasks (in the IO monad) and pure computations (not in the IO monad). Because of this, GHC (and the fact that GHC compiles to native code rather than targeting a JVM) can take advantage of the distinction. Also, people doing CPU bound concurrency in Haskell reach for STM, Control.Concurrent.Thread (i.e. standard threads) or other techniques.

In practice this is why languages like Go and Elixir have found their niche in areas where IO bound concurrency matters much more (webservers/networking/etc) rather than in CPU bound problems, where the JVM is still quite heavily used.

So what does this really mean? In essence, if you want a language (and a runtime) that handles both CPU and IO bound tasks performantly, you are going to have the colored function problem, or alternatively separate abstractions for dealing with CPU bound computations (or both). You need some way of marking filesystem/network/IO tasks so they get multiplexed onto a small number of threads, while other CPU bound work (or cheap CPU tasks such as boxing of data structures) is NOT needlessly multiplexed across threads and ideally runs on the same thread(s) with minimal communication between threads. If you only have green threads then your language will be great for webservers but terrible for CPU bound number-crunching algorithms, and vice versa. If you want more evidence for why having only green threads is a bad idea: this is what Rust initially tried, and they ran into all the problems I just described (Rust, being a new native language, had the liberty of experimenting in great detail with different solutions, and ended up scrapping green threads, which were its initial approach).
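As a rough sketch of what “marking” IO work can look like with the tools Scala already has: scala.concurrent.blocking tells the global ForkJoinPool that the enclosed call will park its thread, so the pool can spawn a compensating thread instead of starving the CPU bound tasks that share it. The file path and function names here are purely illustrative.

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.{Future, blocking}
import scala.io.Source

object MarkedIo {
  // IO bound: explicitly marked as blocking so the pool can compensate.
  def readConfig(path: String): Future[String] =
    Future {
      blocking {
        val src = Source.fromFile(path)
        try src.mkString
        finally src.close()
      }
    }

  // CPU bound: an ordinary task on the same pool, no marking needed.
  def checksum(data: String): Future[Int] =
    Future(data.foldLeft(0)((acc, c) => acc * 31 + c.toInt))
}
```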

Ironically, the problem with Scala is that it has the basis for good abstractions to solve this problem in a principled manner, but because of language stability reasons and the JVM its execution has been suboptimal (in contrast to Rust). For example, with Future it would be ideal if the Future.map operation ran, by default, on ExecutionContext.parasitic, since almost all of the time .map operations are cheap CPU tasks that are best executed immediately on the same thread (this is the same reason why, when you read the akka/akka-stream codebase, pretty much every future.map operation is explicitly run on ExecutionContext.parasitic). For blocking calls there is the blocking function, but alas, since Future is in the Scala stdlib, it’s too late to change this default. The bigger point is that even though Scala has the necessary tools to solve this, because of its history (JVM and then multi-platform) there are issues in its execution/implementation.
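To illustrate the point about cheap .map steps (a minimal sketch; fetchUserId/userLabel are made-up names): running the transformation on ExecutionContext.parasitic keeps it on the thread that completed the Future instead of bouncing it through the pool, which is the pattern the akka/akka-stream codebase follows.

```scala
import scala.concurrent.{ExecutionContext, Future}

object CheapMaps {
  def fetchUserId()(implicit ec: ExecutionContext): Future[Int] =
    Future(42) // stand-in for some real asynchronous call

  def userLabel()(implicit ec: ExecutionContext): Future[String] =
    fetchUserId()
      // A cheap, non-blocking transformation: no need to hop onto the pool,
      // so run it parasitically on whatever thread completed the Future.
      .map(id => s"user-$id")(ExecutionContext.parasitic)
}
```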

So, more to the topic of this SIP regarding suspended functions/continuations: I think this effort is going to be largely wasted, and there are plenty of people in the Kotlin community arguing that adopting coroutines was a bit of a mistake. As mentioned earlier, there are very clear issues with HKTs/HOFs when it comes to suspended functions, and unlike in Kotlin/Java/Go these are extremely common in Scala. Also, from personal experience using coroutines in Kotlin: while they are great for trivial linear code that you want to execute asynchronously, they are actually harder to reason about in more complex interruption/concurrency cases. I think it would be far wiser to spend the effort on coming up with and/or improving the abstractions we currently use rather than jumping to coroutines, which work better in languages designed to be simpler (i.e. Go with goroutines, Kotlin with coroutines); those languages are not Scala.
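To make the HOF interaction concrete using an abstraction Scala already has (a sketch, not anything from the proposal): once a function is “colored”, here by returning Future, it no longer fits the ordinary higher-order functions and needs dedicated combinators such as Future.traverse. A suspend-colored function would run into the same mismatch everywhere HOFs are used.

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object Coloring {
  def twice(n: Int): Int = n * 2                      // plain function
  def twiceLater(n: Int): Future[Int] = Future(n * 2) // "colored" function

  val xs = List(1, 2, 3)

  val plain: List[Int]          = xs.map(twice)      // fits map directly
  val nested: List[Future[Int]] = xs.map(twiceLater) // usually not what you want
  val flat: Future[List[Int]]   = Future.traverse(xs)(twiceLater) // needs a dedicated combinator
}
```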

Also, on another note: exceptions are not a replacement for value-based error propagation (i.e. Option/Either/Validation etc). I know a lot of effort has been put into Scala 3 to make exception-based error handling more principled and ergonomic, but no matter what is done there is a whole set of problems where you don’t want to use exceptions at all, because exceptions are far from free on the error path (constructing them and filling in the stack trace in particular is expensive). This is why, for example, you would never use exceptions for form validation on a webserver; those kinds of problems will always use value-based error handling. I think it is fair to say that even in modern idiomatic Scala code, exceptions only represent cases where your program has entered a state from which it is not reasonable to continue, and if you do end up catching exceptions it is typically only to log/print the exception + stacktrace while crashing/recovering; otherwise you use Option/Either or some other validation type.
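A small sketch of the form-validation point (SignupForm, validateName etc are illustrative names, not from any library): the errors are expected values, so they are modelled and composed with Either rather than thrown and caught.

```scala
object FormValidation {
  final case class SignupForm(name: String, age: Int)

  def validateName(name: String): Either[String, String] =
    if (name.nonEmpty) Right(name) else Left("name must not be empty")

  def validateAge(age: Int): Either[String, Int] =
    if (age >= 18) Right(age) else Left("must be 18 or older")

  // Short-circuits on the first error; a Validated-style type would
  // accumulate all errors instead.
  def validate(name: String, age: Int): Either[String, SignupForm] =
    for {
      n <- validateName(name)
      a <- validateAge(age)
    } yield SignupForm(n, a)
}
```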

Why am I bringing this up? I get the impression that people here are arguing we need to move away from monads such as Option/Either towards try/catch/throw because they are “complex to understand” and “don’t scale well when composed with other effects” (such as Future/IO/Task etc). While it’s true that this is a difficult problem, having to work with monads (or more specifically value-based error handling) is something we will always have to deal with in Scala, and brushing it under the rug because it’s “annoying” and “we don’t like dealing with it” isn’t doing anyone any favors.

In summary, adding continuations solves the wrong problem (in the context of Scala), and there have been attempts at this before which have failed, e.g. https://github.com/scala/scala-async. While I am sure this proposal would be more ergonomic than scala-async, that is still not enough of a reason for people to pick it up.
