PRE-SIP: Suspended functions and continuations

I think that’s an important question, but one not connected to async/sync or Loom.

If Scala decided to track exceptions via CanThrow (which I would say is not decided; it is opt-in, experimental, and as yet lacks broad buy-in), then you would use the following Future successor:

class Future[+E, +A](...) {
  // handle on the (virtual) thread running this computation
  def virtualThread: VirtualThread = ...

  // block until completion; the typed failure surfaces via CanThrow
  def result: A throws E = ...

  // etc.
}
object Future {
  def apply[E, A](code: => A throws E): Future[E, A] = ...
}
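
For illustration, here is a hypothetical usage sketch of that type, assuming the experimental saferExceptions/CanThrow feature and the Future sketch above (fetch and the URL are made up):

import scala.language.experimental.saferExceptions
import java.io.IOException

// a hypothetical remote call whose failure mode shows up in the types
def fetch(url: String): String throws IOException = ???

// submit the computation; it runs on its own (virtual) thread
val f: Future[IOException, String] = Future(fetch("https://example.com"))

// blocking on result surfaces the typed exception; the enclosing try
// provides the CanThrow[IOException] capability
try
  val body: String = f.result
  println(body.take(80))
catch
  case e: IOException => println(s"fetch failed: $e")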

The only explicit support Scala would need, if any (presumably you can “cheat” with casting), is a way to transfer a capability between (virtual) threads, which is needed anyway for all capabilities.

This allows you to have a “handle” on a running computation that you can use for purposes of:

  • generating useful stack traces
  • checking the progress of the computation
  • interrupting the computation because the result is no longer needed

while still having a way to access the typed success value or exception from the completed computation.

Such a data type would have lots of other methods on it, too (e.g. Future#poll), but would omit nearly all of the callback-based machinery of existing Future, including all the monadic machinery (map, flatMap, etc.).

(And while we’re at it, Future is not a great name; it’s more like RunningTask[E, A].)
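
A rough sketch of what the surface of such a handle could look like (names and signatures are purely illustrative, not a proposal):

import scala.language.experimental.saferExceptions
import scala.util.Try

trait RunningTask[+E <: Exception, +A] {
  def virtualThread: Thread                  // handle on the underlying (virtual) thread
  def stackTrace: Array[StackTraceElement]   // for generating useful stack traces
  def poll: Option[Try[A]]                   // check progress without blocking
  def interrupt(): Unit                      // the result is no longer needed
  def result: A throws E                     // block for the typed success value or exception
}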

1 Like

These things I understand; what is being sold as black magic here is that Loom provides zero-cost catching of exceptions that preserves complete stack traces (if I am not misreading what is being said). The reason I use the words “black magic” is that no other language has solved this problem.

If you want negligible performance impact in the scenario where you actually need to reference the error/exception, you either have try/catch without full stack-trace preservation and/or optimisations for local try/catch inside functions (e.g. Erlang, OCaml), or you treat errors as values (and since the error is a value, you can reference it).
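
To make the errors-as-values option concrete, here is a minimal Scala sketch (the config example is made up): the failure is ordinary data that the caller can reference and inspect, with no stack trace attached.

enum ConfigError:
  case Missing(key: String)
  case NotANumber(key: String, raw: String)

def readPort(conf: Map[String, String]): Either[ConfigError, Int] =
  conf.get("port") match
    case None      => Left(ConfigError.Missing("port"))
    case Some(raw) => raw.toIntOption.toRight(ConfigError.NotANumber("port", raw))

// the caller can reference the error value directly
readPort(Map("port" -> "eighty")) match
  case Right(p)  => println(s"listening on $p")
  case Left(err) => println(s"bad config: $err")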

I don’t disagree that Loom is obviously faster than the current JVM in the context of async/threads being blocked. The more pertinent point is that if you accept the proposition that a lot of JVM/Java code relies on full stack-trace preservation (either directly or indirectly, e.g. for debugging), and Loom wants to preserve this property, then it is giving away potential performance. Of course, calculating how much would require benchmarking an implementation of Loom that never generates and/or preserves stack traces against the current implementation (which apparently does).

Sure, but it’s more of a reason compared to using conventional Java-style try/catch in the scenario where you have to catch (or reference) the value in the context of your program running, and not just in the “let it crash”/500-internal-server-error scenario.

There is a reason why, if you look at any high-performing Java code where error cases need to be referenced in the normal running of the program, it doesn’t use exceptions, even if the alternative means a lot more boilerplate and/or is less ergonomic/idiomatic.

Well I would say the discussion evolved beyond that, so there’s no need to come back to sync/async all the time, but maybe I’m wrong :wink:

Anyway, I don’t think either you or I have any data that would quantify, in terms of e.g. lost revenue, the effects of using a typed language in the first place, let alone more advanced constructs. Though I agree that observing the most common sources of developer confusion and bugs is a good indicator of how to evolve our libraries & frameworks.

I agree that a typed error model might help avoid many bugs; but again I would say that this is not very different from, if not the same as, an effect system (yes, you do get quite a lot of information from a function whose signature includes a ZIO[R, E, _], both from R and E, as opposed to a “normal” one).
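
To illustrate, compare what the two signatures tell the reader (the domain types below are made up):

import zio.ZIO

case class UserId(value: Long)
case class User(id: UserId, name: String)
trait Database
case class UserNotFound(id: UserId)

trait UserService:
  // a "normal" signature: nothing about required services or failure modes
  def fetchUser(id: UserId): User

  // the ZIO signature: R = Database (what it needs), E = UserNotFound (how it can fail)
  def fetchUserZ(id: UserId): ZIO[Database, UserNotFound, User]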

It’s great that ZIO successfully demonstrates how to implement error handling with the traits you enumerate. This might be a very good benchmark for other implementations out there, so that they might try to “do better” (I don’t know if that’s possible, but then the research program that @odersky mentions is supposed to take 5 years, so I think they don’t know it either :wink: )

As for focusing on areas that benefit the industry: I agree that languages like Ballerina look great at first sight. I would like at some point to write something bigger using it, to see how it scales (with such specialised approaches it’s often easy to do the common thing, and hard to do the uncommon). You know, Spring, RoR and Tapir all look great when looking at small examples ;).

Taming concurrency and properly handling errors in the presence of remote calls are things that have always confused me, and I find working with types such as IO helpful. So for me, it is an area where at least my coding would benefit. Of course I might be an isolated case, so it’s just one more datapoint :slight_smile:

2 Likes

Thinking a bit more about this, maybe you are right that what’s valuable is principled error handling, not an effect system. The difference (probably one of many) is how the information propagates across method calls.

A follow-up question to “do we want methods which perform RPC to have a different signature than normal ones” is “do we want this information to propagate to callers”. That is, do we want the “remoteness” to be viral (as IOs are today), or is it enough to handle all errors for the “remote” marker to disappear from the signature? In other words, should the “remote” effect be eliminable?

If we eliminate the RPC errors, then at some point in the call chain the methods start looking like “normal” ones. If on the other hand we propagate the “remote” marker, it will infect all the callers, all the way to the root.
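
To make the two options concrete, here is a rough sketch with a hypothetical Remote capability and handler (none of these names exist anywhere; they only illustrate the distinction):

trait Remote // hypothetical capability: "this may perform a remote call"

case class User(name: String)
case class RpcError(message: String)

// Option 1: the marker is viral; every caller must also declare Remote,
// all the way to the root
def fetchUser(id: Int)(using Remote): User = ???
def greet(id: Int)(using Remote): String = s"hello, ${fetchUser(id).name}"

// Option 2: the marker is eliminable; once the RPC errors are handled,
// the method looks like a "normal" one again
def handled[A](body: Remote ?=> A): Either[RpcError, A] = ??? // hypothetical handler

def greetOrDefault(id: Int): String =
  handled(greet(id)) match
    case Right(g) => g
    case Left(_)  => "hello, stranger"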

1 Like

Fundamentally speaking, you cannot eliminate RPC call failures; you can only “hide” them, at which point you are going to very quickly experience problems in any non-trivial circumstance. Pretty much every framework/library that has tried to treat RPC calls the same as local calls has catastrophically failed in some way; the earliest example is probably CORBA.

For all of its faults, one of the biggest strengths of Akka actors as a concurrency framework is that it forced programmers to treat every call as a remote call (so you always had to handle potential failure). This means that even if you initially only implemented local concurrency, scaling out horizontally would, practically speaking, just require tweaking the instantiation of actors/actor refs and some other constants.

Treating RPC like normal local calls is doomed to failure because RPC calls can fail in ways that local calls cannot. Treating local calls like RPC calls works fine but is overkill in most scenarios, unless you are specifically planning to scale out an initially local concurrent program into a remote/distributed one. Separating RPC from local calls, because you acknowledge they have different properties, has the advantage of being granular while also being principled. Even at a pragmatic level, knowing which calls hit the network/filesystem/a remote computer, as opposed to local calls, is immensely powerful and in my opinion in many cases justifies the extra ceremony (whether it be done via types or other means).
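
One possible encoding of that separation, just as an illustration: remote calls carry their extra failure modes in the signature, local calls do not.

import scala.concurrent.duration.FiniteDuration

enum RpcFailure:
  case Timeout(after: FiniteDuration)
  case ConnectionLost(host: String)
  case RemoteError(status: Int)

case class Order(id: Long, total: BigDecimal)

// local call: can only fail in ways local code fails
def totalOf(orders: List[Order]): BigDecimal = orders.map(_.total).sum

// remote call: the network failure modes are right there in the signature
def fetchOrders(customerId: Long): Either[RpcFailure, List[Order]] = ???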

4 Likes

I am content if there is consensus that:

  1. Only actionable information is potentially useful to track.
  2. The costs of tracking must clearly be outweighed by the benefits, which implies some combination of (a) high benefits, and (b) low costs.
  3. The sync/async distinction is not relevant to track.

People can of course disagree on the specifics.

I enjoy effect systems such as ZIO and think they have great and lasting commercial value (not related to “effect tracking” whatsoever), but I am careful to try to avoid biasing language conversations in that direction because the audience of Scala is larger.

As the actionability stems from recoverability, namely a category of recoverability where merely retrying stands some chance of succeeding, I do think the challenges of RPC are more closely solved by a good and principled error system, together with good compositional concurrency to achieve the efficiency gains made possible through timeouts and cancellations.
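
For example, with a compositional concurrency library (ZIO here only because it came up above; fetchQuote and its error type are made up), retries, a timeout, and the implied cancellation of the in-flight attempt compose directly:

import zio._

case class QuoteServiceDown(host: String)

def fetchQuote(symbol: String): ZIO[Any, QuoteServiceDown, BigDecimal] = ???

// retrying stands some chance of succeeding, and the timeout bounds how
// long a caller can be held up; the in-flight attempt is interrupted when
// the timeout fires
val quote: ZIO[Any, QuoteServiceDown, Option[BigDecimal]] =
  fetchQuote("ACME")
    .retry(Schedule.recurs(3))
    .timeout(2.seconds)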

However, I view the domain of concurrency as quite outside the language level (at least in a general-purpose and late-stage language like Scala or Java, in which concurrency solutions manifest themselves as new libraries and frameworks), leaving only a good and principled error system as a target for future language evolution.

:100:

3 Likes

That’s exactly the problem we’ve been considering since about PRE-SIP: Suspended functions and continuations - #101 by adamw :slight_smile:

1 Like

It’s somewhat unfortunate that the effect tracking topic spilled over into Impact of Loom on “functional effects”, so I’ll continue the effect tracking discussion here instead of scattering it over two topics.

Akka actors, or rather actors in general, try to hide the location of target actors and in general steer the programming model towards location independence, i.e. they use isolation and general process management (supervision, restarts, propagating errors up the hierarchy) for both local and remote actors. Systems based on Erlang (an actor-heavy platform) claim availability of multiple “nines” (High availability - Wikipedia), and Erlang enthusiasts claim this comes from the approach called “let it crash” (The Zen of Erlang). It looks like the approaches used in actor systems are at odds with tracking remote calls.
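
For illustration, here is roughly what “let it crash” plus supervision looks like in Akka Typed (a minimal sketch; Job and the worker are made up):

import akka.actor.typed.{Behavior, SupervisorStrategy}
import akka.actor.typed.scaladsl.Behaviors

final case class Job(payload: String)

// the worker does not try to recover from its own failures
def worker: Behavior[Job] =
  Behaviors.receiveMessage { job =>
    if (job.payload.isEmpty) throw new IllegalArgumentException("empty job")
    // ... process the job ...
    Behaviors.same
  }

// "let it crash": a declarative supervision policy restarts the worker on
// failure, regardless of whether the actor happens to be local or remote
val supervised: Behavior[Job] =
  Behaviors.supervise(worker).onFailure[Exception](SupervisorStrategy.restart)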

1 Like

This all feels very, very strange and disjoint to me. It seems like this proposal is trying to create a new way of doing for-comprehensions here:

def getCountryCodeDirect(futurePerson: Future[Person])
    (using Structured, Control[NotFound.type | None.type]): String =
  val person = futurePerson.bind //throws if fails to complete (we don't want to control this)
  val address = person.address.bind //shifts on Left
  val country = address.country.bind //shifts on None
  country.code.bind //shifts on None

…but everything ultimately relies on having some kind of structure that can resolve F[A] => A with a context-shifting construct:

extension [R, A](fa: Either[R, A]) 
  suspend def bind(using Control[R]): A = 
    fa.fold(_.shift, identity) //shifts on Left

extension [A](fa: Option[A])
  suspend def bind(using Control[None.type]): A = 
    fa.fold(None.shift)(identity) //shifts on None

What about things that don’t fit this paradigm at all like collections? What about Quill queries? I’d love to be able to do something like this!

suspend def peopleAndAddresses: (Person, Address) =
  val p = query[Person].bind
  val a = query[Address].join(a => a.fk == p.id).bind
  (p, a)

…only, this kind of thing doesn’t fit into the continuation-shifting paradigm at all!

The beauty of the for-comprehension construct is that it unifies effects with collections. It means I can show newcomers to Scala a single construct that solves their needs.
This:

for {
  p <- people: List[Person]
  a <- p.address: List[Address]
} yield (p, a)

Can be taught as the same concept as this:

for {
  p <- httpGet[Person]
  a <- httpGet[Address](p.id)
} yield (p, a)

So now with this proposal we’re saying that we want to chuck all of that thinking and have a totally different way to do async that is disjoint from everything else. I think that’s awful!

How about we just figure out how to do better for-comprehensions, as my predecessor @fwbrasil actually aimed to do in Monadless? That way we can get an imperative style that works for the whole community, and everyone will benefit with zero changes on their end, since they already implement map/flatMap! Why not create something that everyone can benefit from, instead of creating an entirely new syntax that will bifurcate the entire community (again!) and create even more ways of doing things!?

10 Likes

P.S. Here is what I think the vision of Monadless actually is.

Collections would benefit!

def join(people: List[Person]): List[(Person, Address)] =
  yield {
    val p = people.bind
    val a = p.address.bind
    (p, a)
  }

Async would benefit!

def join(id: Int): Future[(Person, Address)] =
  yield {
    val p = httpGet[Person](id).bind
    val a = httpGet[Address](p.addressId).bind
    (p, a)
  }

Quill and Slick would benefit!

def join(people: List[Person]): List[(Person, Address)] =
  yield {
    val p = query[Person].bind
    val a = query[Address].join(a => a.fk == p.id).bind
    (p, a)
  }

Make a better way to do for-comprehensions instead of doing this strange one-off implementation for a handful of specific constructs!
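
(For reference, the async example above would presumably just desugar to ordinary flatMap/map calls on the same hypothetical httpGet, which is why existing monadic types would need zero changes on their end:)

def join(id: Int): Future[(Person, Address)] =
  httpGet[Person](id).flatMap { p =>
    httpGet[Address](p.addressId).map { a =>
      (p, a)
    }
  }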

9 Likes

I’ve stumbled upon something that seems related to this topic at large. (Funnily enough, I found it in the Scala repo; it’s a submodule there.)

This project, Libretto, offers a very interesting alternative approach to concurrency.

Something like that looks very promising imho.

No monads and direct style. But still all the advantages of monads (and some more on top). Please have a look at the rationale in the README on GitHub for more info.

From the research standpoint it’s not too “exciting”, as it employs “only” linear and session types; but that’s a feature: tried-out and well-understood ideas are actually preferable when doing “serious stuff” in production. At least that’s my opinion.

I don’t know at the moment how Libretto works exactly; I’m still reading the docs and will need to have a look at the code, but it seems to provide a type-checked DSL for its purposes. (Which is maybe something that could be criticized, of course. But maybe the use of a “bolted-on” DSL could be healed by some integration into the Scala language, which is also a reason I’m mentioning this here.)

Conceptually, at least, Libretto seems quite well founded at first look. Please find some basic docs below:

https://continuously.dev/p/libretto/docs/latest/tutorial/basics.html

I’m not sure how and when type checking, or better said linearity checking, is implemented, but as far as I know Scala’s capture checking could in theory be used to check linearity. In case that’s not possible out of the box, we should try to add such a feature to the compiler soon, as it would solve the concurrency problem (and some more, like resources) really nicely (and cross-platform!).

I think “linear types” (or something with the same properties derived from checked captures) could truly change the world and finally liberate us from monads. (Yeah, no news here, but until now this wisdom has not gained enough traction in the mainstream, besides, in some sense, in Rust, Scala’s “new Behemoth” :wink:).

Also, organizing programs in the form of small “machines” that all work concurrently on their own, and communicate with the outside world only through well-defined (which means type-checked!) message-oriented interfaces (interfaces that have a built-in notion of a partial order of events, which is important in the case of effects), looks very promising to me. (You can think “actors” here, but also hardware components inside SoCs, or services on the network, and a lot more things I guess… :slight_smile:)

[On a side note: maybe this lib uses tagless final? If not, that could be an interesting way to implement such a DSL, especially in combination with staging, which would allow specializing such a data-flow program at compile time and further optimizing it at runtime, similar to what @LPTK did with his linear algebra DSL⁽¹⁾. In fact, linear algebra and data streaming/flow are related topics, as one can see in the field of AI; frankly, I’ve never found the code of that Pilatus thingy, which is a real pity… The other thing is: maybe such descriptions of concurrent data-flow programs could even be compiled directly down to hardware, or to things like FPGAs; no clue though how that would relate/compare to something like Spatial⁽²⁾ in that case.]

All that said, arguing for some supporting features in Scala for something like Libretto, in case such features were needed (I don’t know that at the moment), would not invalidate the idea of improving for-comprehensions, or make it even slightly less attractive! @deusaquilus just nailed it with his last comment.


⁽¹⁾ Finally, a polymorphic linear algebra language (Pearl) - ORA - Oxford University Research Archive

⁽²⁾ https://spatial-lang.org/ (Click through the first tutorial pages of Spatial to see some similarities with Libretto)

1 Like

This is quite interesting. I just spent the last hour going through it, and it actually reminds me of building custom graphs in akka-streams using input and output ports, with the main distinction that in akka-streams concurrency is explicit (you need to use functions like async), whereas with Libretto it’s implicit/automatic.

What appears to be missing however (at least for me at first glance) is error handling/cancellation/exceptions (do they exist?).

I think Dsl.scala 2.x has covered most of the features proposed in this PRE-SIP. Unlike Monadless, which converts Scala code to monadic calls, Dsl.scala 2.x converts Scala code into virtualized ASTs at the type level and interprets the AST with some type classes.

By the way, Project Loom does not provide multi-pass continuations, which are a good add-on that I can see in this proposal.

In fact, Dsl.scala 2.x is used to implement the .bind method in Binding.scala, which can trigger re-evaluation of part of a data-binding expression multiple times. Therefore, it cannot be implemented on top of Project Loom: .bind requires something equivalent to a multi-pass continuation, not the one-pass continuation provided by Project Loom, even if we only consider Binding.scala on the JVM.
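
A minimal example of the kind of code in question, assuming Binding.scala’s published Binding/Var API (the variable names are mine):

import com.thoughtworks.binding.Binding
import com.thoughtworks.binding.Binding.Var

val a = Var(1)
val b = Var(2)

// the code after each .bind is re-run whenever a or b changes, i.e. the
// continuation must be resumable more than once (multi-pass), which a
// one-pass continuation cannot express
val sum: Binding[Int] = Binding {
  a.bind + b.bind
}

a.value = 10 // triggers re-evaluation of the expression above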

I’m glad people are noticing Libretto :slight_smile:

The custom graph DSL of Akka Streams is a good analogy. Another major distinction, in addition to concurrency in Libretto being implicit, is that the stream combinators written in Libretto are well-wired by construction, so you won’t end up with unconnected or doubly-connected ports. This is checked at what I call assembly time, i.e. when you assemble the blueprint of your program. (And blueprints in Libretto, unlike Akka Streams, are truly mere blueprints, without references to live objects of an already running program.)

What appears to be missing however (at least for me at first glance) is error handling/cancellation/exceptions (do they exist?).

You are not wrong :slight_smile:

Currently, the only way to handle errors is to have them explicit in your types (A |+| Error, analogous to Either[A, Error]). There is also a crash operation that lets you raise an error without saying so in the types, but that one cannot be handled.

Likewise with cancellation—for now, you have to explicitly design for it (have it in your types). For example, a stream can be polled, or closed. Closing is an explicitly designed operation.

But that won’t be the end of the story.

Here’s an opinion that might be controversial:

Cancellation of thread-like constructs is completely wrong, because it’s terrible for composition.

This becomes obvious if you consider cancelling a thread that was supposed to complete a promise (or more generally, communicate with other threads).
And by thread-like constructs I mean threads, actors (Akka), fibers (as in Cats Effect, ZIO), or Kotlin coroutines.
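
A concrete sketch of that leaky-promise scenario, using plain threads and scala.concurrent (all names are illustrative):

import scala.concurrent.Promise

val p = Promise[Int]()

val worker = new Thread(() => {
  try {
    Thread.sleep(1000) // pretend to do some work
    p.success(42)      // never happens if we are interrupted first
  } catch {
    case _: InterruptedException => () // cancelled: the promise is silently abandoned
  }
})

worker.start()
worker.interrupt() // cancel the thread-like construct

// anyone blocked on p.future is now stuck forever, e.g.
// Await.result(p.future, Duration.Inf) would never return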

With Libretto, I want to explore an approach to cancellation that is composable, where things like leaky promises are impossible. It is a source of a lot of my personal excitement. (Though it will take some time before I can deliver that.)

6 Likes

Following up on this conversation, we have just submitted a SIP proposal: SIP-55 - Concurrency with Higher-Order Coroutines by diesalbla · Pull Request #63 · scala/improvement-proposals · GitHub. It addresses some of the questions regarding syntax and dealing with higher-order functions.

1 Like

Stupid question, but how is this related to the old coroutine experiment?

https://scala-coroutines.github.io/coroutines/

I don’t see it mentioned over there in GitHub.

This proposal looks much more complex than the old one, to be honest.

What I don’t understand: does this proposal make it necessary to have at least two versions of any HOF? I didn’t get this part of the doc. Does it really say that all standard-library HOFs must be touched? What about user-defined HOFs then?

  1. Meta: new thread?

  2. Issue (the same problem exists with epfl/async): changing the API of already existing functions (the collection API in this SIP), which is a massive breaking change.

In dotty-cps-async, we search for additional function variants when we see async operations inside a lambda [in the context of this SIP, this would be using Suspend inside a lambda], and we plan to generate those functions automatically in simple cases.

I think it would be possible in SIP-55 to adopt such an approach for cases where we have an existing API that historically has not supported suspension.

  1. There exist at least three distinct views on how async processing could work with context encoding:
  • All functions are suspendable on platforms with continuation support, and not suspendable when continuations are not supported. [the Java way]
  • Suspendable functions are typed roughly as f: A => (Suspend ?=> B) [this SIP; the upcoming context encoding in dotty-cps-async is quite similar]. A suspendable variant of map is something like
c.map(f: A => (Suspend ?=> B)): Suspend ?=> C[B]

or

o.map(f: A => M[B]): M[C]  // M[B] and M[C] are a simplified view

in monadic encoding.

  • Suspendable functions are typed as {suspend} A => B, where suspend is a capability. The suspendable variant of map is typed
o.map(f: A => B): C,

where ‘=>’ is notation for an impure function. [This is how I currently understand @odersky’s view; I may be wrong about this.]

Looks like eventually all 3 approaches will coexist… [one of the meanings of the ‘Scala way’ :wink: ]

To be honest only the third example makes sense to me.

None of the other proposals solves the effect polymorphism problem¹, and you end up with colored functions, which are poison to HOFs.

c.map(f: A => (Suspend ?=> B)): Suspend ?=> C[B]

That’s not map, that’s a monster! (And map is the most trivial example; for more complex HOFs this would become super complex really fast!)

o.map(f: A => M[B]): M[C]

That’s actually just flatMap… So monads… So not really a substantial improvement over the status quo. (Even cps-async hides a lot of the ugliest details and the need for a lot of contortions, which would make it a good transition path to direct style without throwing away the current battle-tested “monad runtimes”, I guess.)
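
To spell out why coloring is poison to HOFs: without effect polymorphism, every higher-order method ends up needing a second, suspendable variant. A minimal sketch (Suspend and all names are purely illustrative):

trait Suspend

final case class Box[A](value: A):
  // the ordinary variant
  def map[B](f: A => B): Box[B] = Box(f(value))

  // the extra variant that every collection-like type would also need
  def mapSuspend[B](f: A => (Suspend ?=> B)): Suspend ?=> Box[B] =
    Box(f(value))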


¹ Capabilities for Resources and Effects (Slide 12 and previous)