PRE-SIP: Suspended functions and continuations

On a somewhat related note, there is another plausible point, which is stack traces + exceptions. The nice thing about Future is that it's decoupled from stack traces; by design you are not meant to expect the stack to be consistent (or even be there at all). As is evident to anyone that has used Future along with the standard ExecutionContext (such as ForkJoinPool), the stack traces are meaningless because computations can jump between different threads at whim (that's the whole point of multiplexing computations onto real threads). You can see this even in the Future API, i.e. you have methods like Future.failed to designate a failed Future with a Throwable, but it's just passed around and propagated as a value. You can still recover from exceptions thrown in Future but, as stated before, it's expensive; critically, you don't have to throw (i.e. you can just use Future.failed).
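
To make the "error as a value" point concrete, here is a minimal sketch using the plain standard-library Future: the Throwable inside Future.failed is just carried along through combinators and handled without any try/catch at the call site.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// A failed Future carries its Throwable as a plain value; nothing is thrown
// at the point of construction.
val failed: Future[Int] = Future.failed(new RuntimeException("boom"))

// The failure propagates through combinators as a value...
val mapped: Future[Int] = failed.map(_ + 1)

// ...and can be handled without any try/catch at the call site.
val recovered: Future[Int] = mapped.recover { case _: RuntimeException => -1 }

println(Await.result(recovered, 1.second)) // prints -1
```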

This is one area where I am a bit skeptical of Loom. I haven't looked at Loom in great detail, but if VirtualThread is meant to preserve stack traces, and there is a lot of code out there that assumes stack traces are consistent and properly propagated, then unless I am missing something this will have a performance penalty (versus not caring about the stack at all). This performance penalty is already visible right now, either in the case of Future with custom ExecutionContexts that preserve the stack, or other IO types that propagate the stack in the interpreter. I do believe that Loom's solution to this problem is not going to have the same overhead, but as said previously I don't see how it can be "cost free".

Ultimately though, this is one of the best benefits of doing value-based error handling rather than throwing and catching exceptions: if you throw and expect to catch exceptions it's expensive, and Scala's IO/Async types forced programmers to not rely on the stack for basic error handling (which is a good thing). This matters if my previous point about Loom is correct (i.e. Loom is forced to propagate the stack in order to remain compatible with existing code that relies on try/catch + preservation of the stack to function). I also haven't seen any ability for Loom to granularly handle stack propagation, so that you don't pay the performance penalty if you don't rely on exceptions.

For this reason alone (and others), despite what people claim, Loom is not going to kill Scala's Future, even in the hypothetical where everyone runs JVM 19+ (whatever version is released with Loom) and Scala.js/scala-native is ignored.

This is not a nice thing. In fact, constructing exceptions in Future-based code incurs the cost of building out a stack trace without the benefit, because the stack traces so constructed are useless: they only reflect the call stack from the last "bounce" inside the execution context to the current operation.

Async stack traces in Loom are the same as sync stack traces, and have the same overhead. You pay this overhead only when (1) your code actually fails, and (2) your exception type actually generates a stack trace (not all exception types are wired to generate stack traces; see also NoStackTrace).

I am not a fan of throwing exceptions (versus using typed values), but this is not a reason to prefer values, because if you are using exception types that do not generate stack traces, then throwing will tend to be faster than value-based error propagation (Either, Try, etc.).

Stack tracing is absolutely and positively not a reason that Future will survive in a post-Loom world. Future has no benefits whatsoever with respect to stack tracing compared to Loom.

Future will survive only because people don’t want to rip apart legacy code bases, not because Future conveys any value with respect to stack traces (or anything else, really, since the marginal utility of other benefits is better obtained using more modern data types, like a typed-VirtualThread, for example).


I agree, re-throwing exceptions is one of the options. But then, should these exceptions be tracked in the type system or not? Previously, the main criticism of exceptions was that they were untracked. We now have a way to track them with experimental.saferExceptions, made watertight with capture checking. But that means the successor of Future should have the error type as a type parameter rather than fixed to Exception, because otherwise info about thrown exceptions is not propagated across futures.

1 Like

Well, if you care that much about the cost of the stack trace, you can use scala.util.control.NoStackTrace for exactly this problem; you will just be passing (almost) a bare reference around.
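
For illustration, here is a minimal sketch of what that looks like (the error type name is made up):

```scala
import scala.util.control.NoStackTrace

// Hypothetical domain error: mixing in NoStackTrace makes fillInStackTrace a
// no-op, so constructing it is roughly the cost of allocating a small object.
final case class UserNotFound(id: Long)
    extends Exception(s"no user $id") with NoStackTrace

val e = UserNotFound(42L)

println(e.getStackTrace.length) // prints 0 (no trace was captured)
println(e.getMessage)           // prints: no user 42
```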

But this is all beside the point, because if you care so much about the cost of stack traces then you shouldn't even be using Future.failed (or throwing/catching), which goes back to the point of using value-based error handling.

How does this work exactly? The reason why stack traces are "free" with normal threads is that the stack is part of the OS thread, and since you are already paying the cost of a heavy OS thread, passing the stack along doesn't cost anything extra.

On the other hand, the whole point of green thread/fiber implementations is that they typically do not have any "stack" on them and they have a very small size (e.g. ~1KB for Erlang), so while catching/throwing can be made free, preserving the stack trace, especially for very large non-local calls, is another story. Of course you can just pass the incrementally growing stack along in your virtual thread, but that has a performance penalty, and you also experience problems due to cache locality of threads.

Well, it's another reason on a bucket list of reasons, but regarding the rest of your point: there is no fundamental reason why value-based error propagation is less performant than using try/catch without stack propagation, because in the end it all amounts to the same thing, i.e. a control flow mechanism. On the JVM, value-based error handling can be slower, but that's because the JVM isn't that optimized for it; you can look at Go instead, which has optimized its runtime for value-based error propagation. (Note that my response also assumes we are comparing apples to apples, i.e. if you are referencing error values then you also need to compare that to catching exceptions in order to use the value of the exception being caught.)
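
As a concrete apples-to-apples comparison (function names are illustrative), here are the two encodings of the same fallible operation; in both versions the caller inspects the error rather than just crashing:

```scala
// Exception encoding: the error travels via stack unwinding; the caller
// must try/catch to get at it.
def parseThrowing(s: String): Int =
  if (s.nonEmpty && s.forall(_.isDigit)) s.toInt
  else throw new NumberFormatException(s)

// Value encoding: the error is an ordinary value; no unwinding involved.
def parseValue(s: String): Either[String, Int] =
  if (s.nonEmpty && s.forall(_.isDigit)) Right(s.toInt)
  else Left(s"not a number: $s")

// Both call sites end up inspecting the error as a value in the end.
val viaCatch: Either[String, Int] =
  try Right(parseThrowing("abc"))
  catch { case e: NumberFormatException => Left(e.getMessage) }

val viaValue: Either[String, Int] = parseValue("abc")
```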

More concerningly, though, if you care that much about Loom and the JVM: typical JVM/Java code does preserve and propagate the stack. scala.util.control.NoStackTrace is a Scala-specific feature, and I don't even remember seeing Java programs create their own version of scala.util.control.NoStackTrace to mitigate the cost of stack propagation; in fact, in such cases they use values/null if they care about performance that much.

I think you misunderstood my point: the benefit of Future is precisely that it forced programmers to NOT care about the stack at all, and also to NOT use it as the primary error handling mechanism.

This reminds me of the exact same argument that people were using to justify sun.misc.Unsafe having no reason to exist. In the worst case scenario, even in the context of a library creator/maintainer, such abstractions are necessary, and it has nothing to do with legacy. Whether people like it or not, Future is not going anywhere, for reasons aside from legacy.

1 Like

In order for you to see how my statements are correct, I would have to explain Future, the cost of stack trace generation, the connection between Future and stack traces, the cost of exception throwing, the cost of catching exceptions, and the cost of value-based error propagation (both theoretical and as practiced in Try, Either, ZIO, etc.), and possibly more besides.

I have no interest in explaining these things here, but I will repeat myself: exceptions, and error handling in general, are NOT a reason to use Future, not even slightly (if anything, the reverse), and Loom's impact on exception handling is only net positive.


I think that’s an important question, but one not connected to async/sync or Loom.

If Scala decided to track exceptions via CanThrow (which I would say is not decided, it is opt-in and experimental, and as of yet, lacks broad buy-in), then you would use the following Future successor:

class Future[+E, +A](...) {
  def virtualThread: VirtualThread = vt

  def result: A throws E = ...

  // etc.
}

object Future {
  def apply[E, A](code: => A throws E): Future[E, A] = ...
}

The only explicit support Scala would need, if any (presumably you can “cheat” with casting), is a way to transfer a capability between (virtual) threads, which is needed anyway for all capabilities.

This allows you to have a “handle” on a running computation that you can use for purposes of:

  • generating useful stack traces
  • checking the progress of the computation
  • interrupting the computation because the result is no longer needed

while still having a way to access the typed success value or exception from the completed computation.

Such a data type would have lots of other methods on it, too (e.g. Future#poll), but would omit nearly all of the callback-based machinery of existing Future, including all the monadic machinery (map, flatMap, etc.).

(And while we're at it, Future is not a great name; it's more like RunningTask[E, A].)

1 Like

These things I understand. What is being sold here as black magic is that Loom provides zero-cost catching of exceptions that preserves complete stack traces (if I am not misreading what is being said). The reason why I use the words "black magic" is that no other language has solved this problem.

If you want negligible performance impact in the scenario of actually needing to reference the error/exception, you have try/catch without full stack trace preservation and/or optimisations for local try/catch inside of functions (i.e. Erlang, OCaml), or you treat errors as values (and since it's a value you can reference it).

I don't disagree that Loom is obviously faster than the current JVM in the context of async/threads being blocked. The more pertinent point is: if you accept the proposition that there is a lot of JVM/Java code that relies on full stack trace preservation (either directly, or indirectly, i.e. for debugging), and Loom wants to preserve this property, then it's giving away potential performance. Of course, calculating how much would require benchmarking an implementation of Loom that never generates and/or preserves stack traces vs the current implementation (which apparently does).

Sure, but it's more of a reason compared to using the conventional Java-style try/catch in the scenario where you have to catch (or reference) the value in the context of your program running, and not just in the "let it crash"/500 internal server error scenario.

There is a reason why, if you look at any high-performing Java code where error cases need to be referenced in the normal running of the program, they don't use exceptions, even if there is a lot more boilerplate and/or it's not as ergonomic/idiomatic.

Well I would say the discussion evolved beyond that, so there’s no need to come back to sync/async all the time, but maybe I’m wrong :wink:

Anyway, I don't think either you or I have any data that would quantify, in terms of e.g. lost revenue, the effects of using a typed language in the first place, let alone more advanced constructs. Though I agree that observing the most common sources of developer confusion and bugs is a good indicator as to how to evolve our libraries & frameworks.

I agree that a typed error model might help avoid many bugs; but again, I would say that this is not very different from, if not the same as, an effect system (yes, you do get quite a lot of information from a function whose signature includes a ZIO[R, E, _], both from R and E, as opposed to a "normal" one).

It’s great that ZIO successfully demonstrates how to implement error handling with the traits you enumerate. This might be a very good benchmark for other implementations out there, so that they might try to “do better” (I don’t know if that’s possible, but then the research program that @odersky mentions is supposed to take 5 years, so I think they don’t know it either :wink: )

As for focusing on areas that benefit the industry: I agree that languages like Ballerina look great at first sight. I would like at some point to write something bigger using it, to see how it scales (with such specialised approaches it's often easy to do the common thing, and hard to do the uncommon). You know, Spring, RoR and Tapir all look great when looking at small examples ;).

Taming concurrency and properly handling errors in the presence of remote calls is something that has always confused me, and I find working with types such as IO helpful. So for me, it is an area where at least my coding would benefit. Of course I might be an isolated case, so it's just one more datapoint :slight_smile:


Thinking a bit more about this, maybe you are right that what’s valuable is principled error handling, not an effect system. The difference (probably one of) is how the information propagates across method calls.

A follow-up question to “do we want methods which perform RPC to have a different signature than normal ones” is, “do we want this information to propagate to callers”. That is, do we want the “remoteness” to be viral (as IOs are today), or is it enough to handle all errors for the “remote” marker to disappear from the signature? In yet other words, should the “remote” effect be eliminatable?

If we eliminate the RPC errors, then at some point in the call chain the methods start looking like "normal" ones. If, on the other hand, we propagate the "remote" marker, it will infect all the callers, all the way to the root.

1 Like

Fundamentally speaking, you cannot eliminate RPC call failures; you can only "hide" them, at which point you are going to very quickly experience problems in any non-trivial circumstance. Pretty much every single framework/library that has tried to treat RPC calls the same as local calls has catastrophically failed in some way; the earliest example of this is probably CORBA.

For all of its faults, one of the biggest strengths of Akka actors as a concurrency framework is that it forced programmers to treat every call as a remote call (so you always had to handle potential failure). This means that even if you initially only implemented local concurrency, if you were to scale that out horizontally, practically speaking you would just tweak the instantiation of actors/actor refs and some other constants.

Treating RPC like normal local calls is doomed to failure because RPC calls can fail in ways that local calls cannot. Treating local calls like RPC calls works fine but is overkill in most scenarios, unless you are specifically planning to scale an initially local concurrent program out into a remote/distributed one. Separating RPC from local calls, because you acknowledge they have different properties, has the advantage of being granular while also being principled. Even on a pragmatic level, distinguishing calls that hit the network/filesystem/a remote computer from local calls is immensely powerful, and in my opinion in many cases justifies the extra ceremony (whether it be done via types or other methods).
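
A minimal sketch of that separation using plain Either (all names here are made up for illustration): remote calls surface their distinct failure modes in the signature, while local calls stay plain.

```scala
// Hypothetical failure modes that only remote calls have.
sealed trait RpcError
case object Timeout extends RpcError
final case class ServiceDown(service: String) extends RpcError

// A local call: it cannot time out or hit a network partition, so its
// signature says nothing about RpcError.
def applyDiscount(price: Int): Int = price - 10

// A remote call: the possibility of remote failure is explicit in the type.
def fetchPrice(sku: String): Either[RpcError, Int] =
  if (sku.nonEmpty) Right(100) else Left(ServiceDown("pricing"))

// Callers must acknowledge the remote failure mode before going local.
val quote: Either[RpcError, Int] = fetchPrice("abc-123").map(applyDiscount)
```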


I am content if there is consensus that:

  1. Only actionable information is potentially useful to track.
  2. The costs of tracking must clearly be outweighed by the benefits, which implies some combination of (a) high benefits, and (b) low costs.
  3. The sync/async distinction is not relevant to track.

People can of course disagree on the specifics.

I enjoy effect systems such as ZIO and think they have great and lasting commercial value (not related to “effect tracking” whatsoever), but I am careful to try to avoid biasing language conversations in that direction because the audience of Scala is larger.

As the actionability stems from recoverability, namely a category of recoverability where merely retrying stands some chance of succeeding, I do think the challenges of RPC are more closely solved by a good and principled error system, as well as good compositional concurrency to achieve the efficiency gains made possible through timeouts and cancellations.

However, I view the domain of concurrency as quite outside the language level (at least in a general-purpose and late-stage language like Scala or Java, in which concurrency solutions manifest themselves as new libraries and frameworks), leaving only a good and principled error system as a target for future language evolution.



That’s exactly the problem we’ve been considering since about PRE-SIP: Suspended functions and continuations - #101 by adamw :slight_smile:

1 Like

It's somewhat unfortunate that the effect tracking topic spilled over to Impact of Loom on "functional effects", so I'll continue the effect tracking discussion here instead of scattering it over two topics.

Akka actors, or rather actors in general, try to hide the location of target actors and in general steer the programming model towards location independence, i.e. using isolation and general process management (supervision, restarts, propagating errors higher up the hierarchy) for both local and remote actors. Systems based on Erlang (an actor-heavy platform) claim availability of multiple "nines" (High availability - Wikipedia), and Erlang enthusiasts claim it comes from the approach called "let it crash" (The Zen of Erlang). It seems that the approaches used in actor systems are at odds with remote call tracking.

1 Like

This all feels very, very strange and disjoint to me. It seems like this proposal is trying to create a new way of doing for-comprehensions here:

def getCountryCodeDirect(futurePerson: Future[Person])
    (using Structured, Control[NotFound.type | None.type]): String =
  val person = futurePerson.bind //throws if fails to complete (we don't want to control this)
  val address = person.address.bind //shifts on Left
  val country = address.country.bind //shifts on None
  country.code.bind //shifts on None

…but everything ultimately relies on having some kind of structure that can resolve F[A] => A with a context-shifting construct:

extension [R, A](fa: Either[R, A]) 
  suspend def bind(using Control[R]): A = 
    fa.fold(_.shift, identity) //shifts on Left

extension [A](fa: Option[A])
  suspend def bind(using Control[None.type]): A = 
    fa.fold(None.shift)(identity) //shifts on None
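
For intuition about what `bind` does, here is a crude present-day emulation in plain Scala: no real continuations, just a control-flow exception standing in for the shift (all names below are made up for illustration).

```scala
import scala.util.control.NoStackTrace

// A control-flow exception standing in for the continuation shift.
final case class Shifted(value: Any) extends Exception with NoStackTrace

object Control {
  // Run a block in which `bind` may abort with an error value of type R.
  def apply[R, A](body: => A): Either[R, A] =
    try Right(body)
    catch { case Shifted(r) => Left(r.asInstanceOf[R]) }

  implicit class EitherBind[R, A](private val fa: Either[R, A]) extends AnyVal {
    // "Shifts" on Left by aborting the enclosing Control block.
    def bind: A = fa.fold(r => throw Shifted(r), identity)
  }
}

import Control.EitherBind

// All binds succeed: the block runs to the end.
val ok: Either[String, Int] = Control {
  val x = (Right(1): Either[String, Int]).bind
  val y = (Right(2): Either[String, Int]).bind
  x + y
}

// A Left aborts the rest of the block with that value.
val bad: Either[String, Int] = Control {
  val x = (Right(1): Either[String, Int]).bind
  val y = (Left("nope"): Either[String, Int]).bind // aborts here
  x + y
}
```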

What about things that don’t fit this paradigm at all, like collections? What about Quill queries? I’d love to be able to do something like this!

suspend def peopleAndAddresses: (Person, Address) =
  val p = query[Person].bind
  val a = query[Address].join(a => a.personId == p.id).bind
  (p, a)

…only, this kind of thing doesn’t fit into the continuation-shifting paradigm at all!

The beauty of the for-comprehension construct is that it unifies effects with collections. It means I can show newcomers to Scala a single construct that solves their needs.

for {
  p <- people: List[Person]
  a <- p.address: List[Address]
} yield (p, a)

Can be taught as the same concept as this:

for {
  p <- httpGet[Person]
  a <- httpGet[Address](p.addressId)
} yield (p, a)

So now with this proposal we’re saying that we want to chuck all of that thinking and have a totally different way to do async that is disjoint from everything else. I think that’s awful!

How about we just figure out how to do better for-comprehensions, like my predecessor @fwbrasil in Monadless actually aimed to do? That way we can get an imperative style that works for the whole community, and everyone will benefit with zero changes on their end, since they already do map/flatMap! Why not create something that everyone can benefit from, instead of creating an entirely new syntax that will bifurcate the entire community (again!) and create even more ways of doing things!?


P.S. Here is what I think the vision of Monadless actually is.

Collections would benefit!

def join(people: List[Person]): List[(Person, Address)] =
  yield {
    val p = people.bind
    val a = p.address.bind
    (p, a)
  }

Async would benefit!

def join(id: Int): Future[(Person, Address)] =
  yield {
    val p = httpGet[Person](id).bind
    val a = httpGet[Address](p.addressId).bind
    (p, a)
  }

Quill and Slick would benefit!

def join(people: List[Person]): List[(Person, Address)] =
  yield {
    val p = query[Person].bind
    val a = query[Address].join(a => a.personId == p.id).bind
    (p, a)
  }

Make a better way to do for-comprehensions instead of doing this strange one-off implementation for a handful of specific constructs!


I’ve stumbled upon something that seems related to this topic at large. (Funny enough, I found it in the Scala repo; it’s a submodule there.)

This project offers a very interesting alternative approach to concurrency.

Something like that looks very promising imho.

No monads and direct style. But still all the advantages of monads (and some more on top). Please have a look at the rationale in the README on GitHub for more info.

From the research standpoint it’s not too “exciting” as it employs “only” linear & session types, but that’s a feature: tried out and well understood ideas are actually preferable when doing “serious stuff” in production. At least that’s my opinion.

I don’t know at the moment how Libretto works exactly; I’m still reading the docs and will need to have a look at the code, but it seems to provide a type-checked DSL for its purposes. (Which is maybe something that could be criticized, of course. But maybe the use of a “bolted on” DSL could be healed by some integration into the Scala language, which is also a reason I’m mentioning this here.)

Conceptually, at least, this Libretto lib seems quite well founded at first look. Please find some basic docs below:

How and when type-checking, or better said linearity-checking, is implemented I’m not sure, but as far as I know Scala’s capture-checking could in theory be used to check linearity. In case that’s not possible out-of-the-box, we should try to add such a feature to the compiler soon, as it would solve the concurrency problem (and some more, like resources) really nicely (and cross-platform!).

I think “linear types” (or something with the same properties derived from checked captures) could truly change the world and finally liberate us from monads. (Yeah, no news here, but until now this wisdom did not get enough traction in the mainstream, besides in some sense in Rust, Scala’s “new Behemoth” :wink:).

Also, organizing programs in the form of small little “machines” that work all the time concurrently on their own, and communicate with the outside world only through well-defined (which means type-checked!) message-oriented interfaces (interfaces that have a built-in notion of partial order of events, which is important in the case of effects) looks very promising to me. (You can think “actors” here, but also hardware components inside SoCs, or services on the network, and a lot more things I guess… :slight_smile:)

[On a side note: maybe this lib uses tagless final? If not, this could be an interesting way to implement such a DSL, especially in combination with staging, which would allow specializing such a data-flow program at compile time, and further optimizing it at runtime, similar to what @LPTK did with his linear algebra DSL⁽¹⁾. In fact linear algebra and data-streaming/-flow are even related topics, as one can see in the field of AI; frankly, I’ve never found the code of that Pilatus thingy, which is a real pity… The other thing is: maybe such descriptions of concurrent data-flow programs could even be compiled directly down to hardware, or things like FPGAs; no clue though how it would relate/compare to something like Spatial⁽²⁾ in that case.]

All that said, arguing for some supporting features in Scala for something like Libretto, in case such features would be needed (I don’t know that at the moment), would not invalidate or make the idea to improve for-comprehensions even slightly less attractive! @deusaquilus just nailed it with his last comment.

⁽¹⁾ Finally, a polymorphic linear algebra language (Pearl) - ORA - Oxford University Research Archive

⁽²⁾ (Click through the first tutorial pages of Spatial to see some similarities with Libretto)

1 Like

This is quite interesting; I just spent the last hour going through it, and it actually reminds me of making custom graphs in akka-streams using input and output ports, with the main distinction that in akka-streams concurrency is explicit (you need to use functions like async), whereas with Libretto it’s implicit/automatic.

What appears to be missing, however (at least for me at first glance), is error handling/cancellation/exceptions (do they exist?).

I think Dsl.scala 2.x has covered most of the features proposed in this PRE-SIP. Unlike Monadless, which converts Scala code to monadic calls, Dsl.scala 2.x converts Scala code into virtualized ASTs at the type level and interprets the AST with some type classes.

By the way, Project Loom does not provide multi-pass continuation, which is a good add-on I can see from this proposal.

In fact, Dsl.scala 2.x is used to implement the .bind method in Binding.scala, which can trigger reevaluation of part of the data binding expression multiple times. Therefore, it cannot be implemented on top of Project Loom, because .bind requires something equivalent to multi-pass continuations, not the one-pass continuations provided by Project Loom, even if we just consider Binding.scala on the JVM.

I’m glad people are noticing Libretto :slight_smile:

The custom graph DSL of Akka Streams is a good analogy. Another major distinction, in addition to concurrency in Libretto being implicit, is that the stream combinators written in Libretto are well-wired by construction, so you won’t end up with unconnected or doubly-connected ports. This is checked at what I call assembly time, i.e. when you assemble the blueprint of your program. (And blueprints in Libretto, unlike Akka Streams, are truly mere blueprints, without references to live objects of an already running program.)

What appears to be missing however (at least for me at first glance) is error handling/cancellation/exceptions (do they exist?).

You are not wrong :slight_smile:

Currently, the only way to handle errors is to have them explicit in your types (A |+| Error, analogous to Either[A, Error]). There is also a crash operation that lets you raise an error without saying so in the types, but that one cannot be handled.
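
As a rough standard-library analogy (this is not Libretto's actual DSL, and the names below are made up), an explicit error channel in the spirit of A |+| Error behaves like Either, and handling the error eliminates it from the type:

```scala
// Plain-Scala analogy, NOT Libretto code: the error is explicit in the type...
sealed trait Error
final case class ParseFailure(input: String) extends Error

def parsePort(s: String): Either[Error, Int] =
  s.toIntOption.filter(p => p >= 0 && p <= 65535).toRight(ParseFailure(s))

// ...and handling it eliminates it from the type, analogous to consuming
// the error branch of A |+| Error in a blueprint.
val port: Int = parsePort("8080").getOrElse(80)
val fallback: Int = parsePort("not-a-port").getOrElse(80)
```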

Likewise with cancellation—for now, you have to explicitly design for it (have it in your types). For example, a stream can be polled, or closed. Closing is an explicitly designed operation.

But that won’t be the end of story.

Here’s an opinion that might be controversial:

Cancellation of thread-like constructs is completely wrong, because it’s terrible for composition.

This becomes obvious if you consider cancelling a thread that was supposed to complete a promise (or more generally, communicate with other threads).
And by thread-like constructs I mean threads, actors (Akka), fibers (as in Cats Effect, ZIO), or Kotlin coroutines.

With Libretto, I want to explore an approach to cancellation that is composable, where things like leaky promises are impossible. It is a source of a lot of my personal excitement. (Though it will take some time before I can deliver that.)