Impact of Loom on “functional effects”

Since this thread has attracted considerable attention and a large number of ZIO users, I will clarify my thoughts on the relationship of functional effect systems to Loom:

  1. Loom obviates the need for any special async support in the runtime system of a functional effect system, because Loom unifies sync and async programming under one model, providing very cheap “virtual threads” that function as the workhorse of all forms of computation (note: there are still a few rough edges in Loom but they will eventually be dealt with).
  2. If you were using a functional effect system (or Future) purely for async programming, then after Loom arrives, there is no need to use a functional effect system anymore. I think a lot of people use Future primarily for async programming, and thus, I think Loom will significantly affect Future usage; a Loom successor to Future (basically a typed VirtualThread that lets you check on the status of running computations, await their result, and interrupt them) would not look like Future, but I am sure we will have multiple choices of such a thing for heavy users of Future.
  3. Most users of functional effect systems are not relying on them primarily for async programming. Rather, they are relying on them for concurrency (compositional timeouts, races, parallelism), resource-safety in the presence of concurrency, typed errors, context, fiber-local state. Functional effect systems should be regarded as a “framework” for building applications in functional Scala, like Akka is a framework for building actor-based systems on the JVM. They will not go away in Loom. But, what will happen is that functional effect systems must be heavily optimized for Loom, and must focus their toolkits on non-async challenges (both of which are already true of ZIO).
  4. Similarly to functional effect systems, actor libraries, STM libraries, concurrency primitives, data flow libraries, streaming libraries, and so forth, will not suddenly go away after Loom. Loom only solves the async problem, but the meaty problems of how to structure concurrent and streaming applications will continue to be solved in the ways they are solved today, only with better interfaces and better performance caused by the Loom upgrade.
  5. Users of functional effect systems like ZIO can enjoy a high-level programming model that takes care of low-level concerns like async programming, virtual threads, blocking computations, resource-safety, and so forth, without having to concern themselves with upgrading code bases. Indeed, I would argue that for those who prefer the functional Scala style, using ZIO future-proofs your application, insulating you from the never-ending wheel of technological progress occurring on the JVM (this is also true of large JVM frameworks like Spring, which will surely seamlessly take advantage of Loom). @fanf has a great testimony along these lines.
  6. In summary, Loom is not going to affect functional effect systems, other than make them more performant and perhaps simplify their interfaces and internal models. Loom will surely kill off async-only use cases for functional effect systems (or Future), which means it will significantly affect Future usage and adoption. Subjectively, I think Loom will accelerate adoption of ZIO, because ZIO has an evolved model of concurrency that’s heavily informed by Haskell and academia, as well as four years of production usage of virtual threads in a functional programming language (it’s way ahead of the structured concurrency enhancements that are coming to the JDK post-Loom).
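
The unification in point 1 can be sketched in a few lines (assuming JDK 21+, where Thread.startVirtualThread is standard API): blocking calls are written in plain direct style, and each runs on a cheap virtual thread, with no async combinators in sight.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class VirtualThreadSketch {
    public static void main(String[] args) throws InterruptedException {
        List<String> results = new CopyOnWriteArrayList<>();
        List<Thread> threads = new ArrayList<>();
        // Each "blocking" task runs on a cheap virtual thread (JDK 21+);
        // no callbacks or async combinators are needed.
        for (int i = 0; i < 3; i++) {
            int n = i;
            threads.add(Thread.startVirtualThread(() -> {
                try {
                    Thread.sleep(10); // parks the virtual thread, not an OS thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                results.add("task-" + n);
            }));
        }
        for (Thread t : threads) t.join(); // plain, direct-style synchronization
        System.out.println(results.size());
    }
}
```

The same code shape works whether the tasks are CPU-bound or blocking on I/O, which is the sense in which Loom unifies sync and async under one model.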

My opinion (probably controversial too, why not? :slight_smile: take it with a grain of salt) is that:

  • Future is a trap, because of its eager computation: it’s really easy to lose track of whether a Future argument (in some function deep in the call stack) is effectively call-by-name or call-by-value (even if the parameter is by-name in the current function, the Future could have been cached in a val earlier). I also see my colleagues getting things like parallelism restriction (i.e. locally limiting the number of simultaneously running Futures) wrong, again because of Future’s tricky nature of mixing immediate and deferred computations. Under Project Loom we can replace Future[Something] with just a function () => Something or () => Try[Something] and reason more easily about execution order.
  • The IO monad is easier to reason about than Futures, as a value of type IO[Something] is always a deferred computation, so it can e.g. be ignored or restarted and the results will be intuitive. ZIO is IMHO a practical low-ceremony IO monad (I’m talking about the ZIO monad only; ZLayer and the other mechanisms are a different story).
  • monads (Future, IO, etc.) are highly portable (I mean between Scala on the JVM, Scala.js, and Scala Native). Right now they are the main abstraction in Scala for writing such code, and they will remain the main abstraction for portable code, as there’s little to no chance that Project Loom will be available in any form for Scala.js and/or Scala Native.
  • IMHO, on the JVM, Project Loom will obviate the need for Futures, IO and the other monads that deal with concurrency.

Project Loom authors are already working on structured concurrency primitives.

Races with timeouts: JEP 428: Structured Concurrency (Incubator) (included in Java 19)

<T> T race(List<Callable<T>> tasks, Instant deadline)
        throws InterruptedException, ExecutionException, TimeoutException {
    try (var scope = new StructuredTaskScope.ShutdownOnSuccess<T>()) {
        for (var task : tasks) {
            scope.fork(task);
        }
        scope.joinUntil(deadline);
        return scope.result(); // Throws if none of the forks completed successfully
    }
}
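
For comparison, until StructuredTaskScope leaves incubation, roughly the same race-with-deadline semantics are available through the long-standing ExecutorService.invokeAny, which returns the first successfully completed task and cancels the rest. A small sketch (the task bodies are made up for illustration):

```java
import java.util.List;
import java.util.concurrent.*;

public class RaceSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // invokeAny returns the first task to complete successfully
            // and cancels the others -- the same "race" semantics that
            // StructuredTaskScope.ShutdownOnSuccess provides.
            String winner = pool.invokeAny(List.<Callable<String>>of(
                () -> { Thread.sleep(500); return "slow"; },
                () -> "fast"
            ), 2, TimeUnit.SECONDS); // overall deadline, like joinUntil
            System.out.println(winner);
        } finally {
            pool.shutdownNow();
        }
    }
}
```

What the structured-concurrency API adds on top of this is scoped lifetimes and clean composition with virtual threads, not a fundamentally new capability.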

The replacement for ThreadLocal was here (Loom early-access build): ScopeLocal (Java SE 19 & JDK 19 [build 5]) (ScopeLocal) - but somehow it’s not present in the mainline builds. Maybe the design still has too many rough edges.

Resource safety in its basic form is provided in Java by try-with-resources and in Scala by scala.util.Using (Scala Standard Library 2.13.6). More advanced usages will be handled by the research around safe resource handling in Scala 3.
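
A minimal illustration of that basic form, with a toy AutoCloseable (the Resource class here is hypothetical): close() is guaranteed to run whether or not the body throws.

```java
public class ResourceSketch {
    // A toy resource; implementing AutoCloseable is all that
    // try-with-resources needs.
    static class Resource implements AutoCloseable {
        static boolean closed = false;
        String read() { return "data"; }
        @Override public void close() { closed = true; }
    }

    public static void main(String[] args) {
        String data;
        try (Resource r = new Resource()) {
            data = r.read();
        } // close() runs here, even if read() had thrown
        System.out.println(data + " " + Resource.closed);
    }
}
```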

Loom solves the syntax problem: with direct style I can do any nesting of ifs, matches, etc., and use higher-order functions freely. It seems to me that sticking to the IO monad gives up the freedom that lightweight threads provide.

Sequential code under Loom is very simple, and such a style is (in 90%+ of cases) good enough to extract high throughput from many running virtual threads (e.g. in a web server).


If you want to see JEP 428 and ZIO compared in detail, I’m giving a talk on that at Functional Scala 2022! My conclusion probably won’t surprise you. :laughing:


How is that any different from Future? Any amount of computation may be performed eagerly and then deferred in a () => Something or () => Try[Something]:

def foo1(arg: T): () => Something =
  () => compute(arg)

def foo2(arg: T): () => Something =
  val res = compute(arg)
  () => res
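
The same trap can be made observable in Java terms (compute and the counter are hypothetical, purely to expose the evaluation order): the two factory methods have identical signatures, yet one defers the work and the other only replays a cached result.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class EagerVsDeferred {
    static AtomicInteger computations = new AtomicInteger();

    static int compute(int x) {
        computations.incrementAndGet();
        return x * 2;
    }

    // Deferred: compute runs only when the thunk is invoked.
    static Supplier<Integer> foo1(int arg) {
        return () -> compute(arg);
    }

    // Eager: compute runs now; the thunk only replays a cached result.
    static Supplier<Integer> foo2(int arg) {
        int res = compute(arg);
        return () -> res;
    }

    public static void main(String[] args) {
        Supplier<Integer> lazy = foo1(21);
        Supplier<Integer> cached = foo2(21);
        System.out.println(computations.get()); // only foo2 has computed so far
        lazy.get();
        cached.get();
        System.out.println(computations.get()); // cached.get() did not recompute
    }
}
```

So the thunk type alone does not tell the caller whether the work is still pending, which is the point being made here.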

Note that Scala’s Future is a close analog of Loom’s VirtualThread already, whereas for the monadic effect systems, a closer analog is () => VirtualThread (a function that, when executed, creates and runs a virtual thread).

Under Loom, the most capable replacement for Scala’s Future is something like:

class Future[A](f: CompletableFuture[Try[A]], vt: VirtualThread) {
  def virtualThread = vt 

  def result: A = f.get.get

  // etc.
}
object Future {
  def apply[A](code: => A): Future[A] = {
    val f = new CompletableFuture[Try[A]]()
    val vt = Thread.startVirtualThread { () =>
      f.complete(Try(code))
    }
    new Future(f, vt)
  }
}

(Perhaps bulked up with appropriate methods for compatibility with legacy Future code bases, though most of them are really not necessary anymore.)

You want to capture the type with which your expression / code succeeds in the signature of Future, but otherwise preserve all the properties that a “running computation” has (which, in Loom, is a VirtualThread). This addition of a type parameter solves one of the drawbacks of Java’s Thread interface: Java threads work off () => Unit rather than () => A, partially because of when in history they were created (a time long before generics and lambdas!).
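
A Java sketch of such a typed wrapper (assuming JDK 21+ for Thread.startVirtualThread; the TypedThread name and its API are made up): pairing the thread with a CompletableFuture recovers the missing type parameter.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;

public class TypedThreadSketch {
    // A minimal typed handle on a running virtual thread: the missing
    // return type is recovered by pairing the thread with a
    // CompletableFuture that carries the result (or failure).
    static final class TypedThread<A> {
        final Thread thread;
        final CompletableFuture<A> result = new CompletableFuture<>();

        TypedThread(Callable<A> body) {
            this.thread = Thread.startVirtualThread(() -> {
                try {
                    result.complete(body.call());
                } catch (Throwable t) {
                    result.completeExceptionally(t);
                }
            });
        }

        // Blocks the caller, which is cheap when the caller is itself virtual.
        A join() { return result.join(); }
    }

    public static void main(String[] args) {
        TypedThread<Integer> t = new TypedThread<>(() -> 40 + 2);
        System.out.println(t.join());
    }
}
```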


I think it’s going a bit too far to say that Loom will obviate the need for Futures. One of their main use cases, namely improving a program’s performance with the help of asynchronous programming, will become obsolete; that’s true.

However, Futures are useful in another dimension, as a programming model. Having a “typed handle to a computation running in the background” allows one to express solutions to some problems in a much nicer, more readable, and hence maintainable way.

With Loom, the implementation of a Future might change, as @jdegoes suggests. But for a user, does it really matter if inside there’s a VirtualThread, or a Promise to be completed asynchronously? Probably not that much. As a user, you want specific functionalities. One big change with Loom is that blocking on a Future won’t be considered a mistake or bad style anymore - it will just be a bridge between the wrapped and direct programming models.

I’ve been interested in this topic for a long time now, so it’s very informative to see these discussions! :slight_smile:

By the way - time and again, people in both this and the original thread, as well as in various Loom presentations and discussions, keep introducing various notions of “computation handles”. Even here, in the JEP 428 snippet, we’ve seen Callable<T>, which is a lazy computation reference. We’ve also seen references to () => A or () => Try[A], not to mention the Future just discussed.

I think it shows that ultimately we do need a way to represent a computation as a value. FP has shown that lazily-evaluated computations have better characteristics (again, from a purely programming-model point of view) than eager ones.

In the end I think that it’s better to have a single datatype (even if it’s sometimes too powerful), rather than endlessly converting between Futures, lazy Futures, Callables, Runnables, homegrown () => Try[T], etc. Yes, that single datatype resembles the IOs that we know today.

But just as Future.get bridges the wrapped and direct programming models, can we bridge IO and direct style? So that we can program in both styles, depending on our needs, in a way that keeps the interruption semantics and properly propagates fiber/thread-locals?


I think Future and IO are fundamentally different, because a Future is a computation which does not depend on the client and has an observation method, while an IO is a description that depends on the client, where the client must explicitly start the IO computation. If we unite these into one data type, we will receive a generic monad type (plus errors) united by map (which is await in direct style), which at first look can be described as ‘sequential composition of the result computations’. Everything else (the patterns of usage and the culture around them) is quite different.

With Loom, you can define IO.get if you can build something like Dispatcher[IO] that preserves fiber/thread-locals and can pass the state back to the caller via a callback. I played with this while working on the ‘loom-support’ branch of dotty-cps-async. In that case, ‘await’ creates a virtual thread (or ensures that you are already on a virtual thread), submits the task to the dispatcher, and waits for the result in a callback on this virtual thread.
One open question: is spawning a virtual thread comparable to simple IO sequencing from a performance point of view? We need some benchmarks to see.

Also, you can do this now (even without Loom) with dotty-cps-async, if you agree to keep all potentially suspending computations in an async block. Both async[IO] and async[Future] will work, following IO and Future rules respectively.

What problems remain unsolved by macros or the Loom runtime, and can be solved only in the compiler? One technical issue (code generation for HOFs, which Loom can simplify on the JVM but not eliminate fully), and one fundamental one: coloring, i.e., how different the function types and result types are for suspended and unsuspended functions. With nearly all existing techniques, suspended functions have types A => M[B] and direct ones A => B.

Note that viewing a suspended computation as Suspended ?=> A brings nothing new. We can say that this is a form of the monadic approach, with M[X] = [X] =>> (Suspended ?=> X). What would be new is some mechanism that allows us to unite A and M[A] into one data type when needed, and to distinguish them when the difference is essential for some purpose. Would this allow us to remove the explicit async block from async computations?

And here, as you already noted, we can see that the principal question is not about Loom at all; instead, it is about the expressivity of the Scala type system.

One possible approach, which looks like a potential solution to me, is representing any computation as monadic and viewing the direct version as the result of an optimization. But this would require ‘shifting’ all of Scala’s semantics; I’m afraid this is a significant change, maybe a few years of research, if it is possible at all. I can imagine something like compiling a function to an expression over a free monad in a first pass, and in a second pass defining a Futamura projection over an interpretation of the free monad to the needed type at the call site.

It’s not going too far. While having a handle on a running computation is useful (as others have pointed out), in Loom you already get a beefed-up version of that with VirtualThread, one which only lacks typed success values, which is trivial to add in a successor to Future.

The reason why Future itself is dead is that 90% of its methods are callback-oriented: flatMap takes a callback, map takes a callback, foreach takes a callback, recover takes a callback, etc. Moreover, the monadic interaction style that derives from flatMap + map is no longer necessary in Loom; instead, users can leverage direct imperative style to achieve the same benefits, with less code and no ramp-up time.
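
The contrast can be sketched with CompletableFuture (the fetch function is hypothetical): the first version threads every step through a combinator taking a lambda, while the second reads as ordinary imperative code, which becomes viable once blocking is cheap.

```java
import java.util.concurrent.CompletableFuture;

public class CallbackVsDirect {
    // Hypothetical async step: pretend this hits the network.
    static CompletableFuture<Integer> fetch(int x) {
        return CompletableFuture.supplyAsync(() -> x + 1);
    }

    public static void main(String[] args) {
        // Callback style: every step is a lambda handed to a combinator.
        int viaCallbacks = fetch(1)
            .thenCompose(a -> fetch(a))
            .thenApply(b -> b * 10)
            .join();

        // Direct style: with cheap blocking, join() at each step reads
        // like ordinary code, and try/catch and loops just work.
        int a = fetch(1).join();
        int b = fetch(a).join();
        int direct = b * 10;

        System.out.println(viaCallbacks + " " + direct);
    }
}
```

Both compute the same result; the difference is purely in the programming model.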

Scala’s current Future is not what is needed in a post-Loom world. What is needed is for someone to add typed success values and some Scala conversion goodies + idioms to a virtual thread. That style of programming won’t use monads, won’t have flatMap / map, and won’t correspond to today’s notion of Future-based programming.

I think programming with values has compelling advantages (functional effect systems FTW!), but Future is not programming with a value, it’s programming with a running-computation (a typed VirtualThread) in a weird monadic style. It has the drawbacks of all possible worlds.

Moreover, while programming with a value does have benefits, I don’t think that pre or post-Loom, the whole world will embrace that style.

Yes, that’s correct. Future is like a typed version of VirtualThread (VirtualThread<A>), whereas IO[A] is like Callable<A>. One represents a started computation (Future), the other an unstarted computation (IO); and the interfaces are necessarily different because the capabilities of unstarted versus started computations are different.
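
This distinction is easy to observe with plain JDK types. A small sketch (the counter exists only to make the behavior visible): a Callable, being unstarted, can be re-run at will, while a FutureTask, once started, runs its callable at most once.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;
import java.util.concurrent.atomic.AtomicInteger;

public class StartedVsUnstarted {
    public static void main(String[] args) throws Exception {
        AtomicInteger runs = new AtomicInteger();
        Callable<Integer> unstarted = () -> runs.incrementAndGet();

        // An unstarted computation can be executed (and retried) at will:
        unstarted.call();
        unstarted.call();

        // A started computation is a one-shot handle: FutureTask runs its
        // callable at most once, no matter how often run() is invoked.
        FutureTask<Integer> started = new FutureTask<>(unstarted);
        started.run();
        started.run(); // no-op: the task has already completed
        started.get();

        System.out.println(runs.get()); // two direct calls plus one run
    }
}
```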

Not remotely, unfortunately. The difference in performance is probably close to 100x (yes, two orders of magnitude!).

I think Future and IO are fundamentally different, because a Future is a computation which does not depend on the client and has an observation method, while an IO is a description that depends on the client, where the client must explicitly start the IO computation.

Of course they are very different, but that doesn’t mean we won’t end up in a situation where we have to juggle both Futures and Callables in a single method. Hence it would be much better to work with a single type, and I think it should be a lazily-evaluated one, as I think our experience in FP shows that this is the better option. But whether this will happen … we’ll see :wink:

if you can build something like Dispatcher[IO] that preserves fiber/thread-locals and can pass the state back to the caller via a callback.

Yes, that’s the main problem, but I think propagating the dispatcher or whatever context is needed through a ScopeLocal (or whatever the name in Loom ends up being) has the potential to work. But I didn’t try this in code, so I might be missing some important obstacle.

Unfortunately I can’t say that I can follow the second part of your post; I’m probably lacking the theoretical background and the experience of actually writing a CPS transform.

But there’s an interesting question hiding here, one that was also raised in the original thread, as the proposal doesn’t really specify what happens: what are the interactions between HOFs and Loom?

Before, the type system forced us to explicitly deal with situations where, e.g., the function passed to .map had side effects, as they were represented at the type level with an IO. So we had to call .sequence, .traverse, or some variant to actually get a value of the desired shape. We had to be explicit about how the side effects should be sequenced, and that was good.

With Loom, as we can block arbitrarily and without leaving a trace in the types, won’t this lead to bugs or at least surprising behavior? We can still be explicit about sequencing of effects of course, but now it will be a matter of discipline, not type-checking by the compiler. It will be up to the particular implementation of e.g. map.

Just like ZIO has Fiber and ZIO, it is neither possible nor desirable to unify unstarted and started computations, because they have different capabilities:

  • An unstarted computation can be started in the background, or executed in the foreground.
  • An unstarted computation can be modified through application of retry or repetition policies.
  • A started computation has a stack trace.
  • A started computation has a status (running, done, waiting, etc.).
  • A started computation can be cancelled or awaited upon.

However, in each category, I agree there are too many different data types all vying to be the preferred data type, even in Java (partly for reasons of backward compatibility).

Ultimately each community needs only one of each: one for started computations, one for unstarted computations. And, arguably, some communities do not want one for unstarted computations, outside of niche use cases (e.g. scheduling logic at specific times).

This is already true. I can call InputStream#read inside of any List#map operation. The only difference is that Loom is making such “synchronously blocking” calls “asynchronously blocking”, i.e. more efficient.


Ok, I was talking more about the general concept; whether you call it a beefed-up VT or a Future doesn’t matter much. Otherwise I think we mostly agree: callbacks will certainly be less useful, though I wouldn’t say they are completely dead. Maybe I’m just used to coding like this, but I can imagine that adding a completion callback using .onComplete results in cleaner code than starting a new VT just to get a future and run something after it returns.

Still, a Future is a value, although it represents something different than an IO. Yes, you can do less with it than with an IO, but it is still a useful thing. Maybe it would be interesting to check with Kotlin/JS programmers: did coroutines or async-await completely replace the usage of futures/promises? I would guess not, but that’s only a guess.

Just like ZIO has Fiber and ZIO, it is neither possible nor desirable to unify unstarted and started computations, because they have different capabilities:

Ok, point taken, I went too far in my unification attempts :wink: Two types it is.

This is already true. I can call InputStream#read inside of any List#map operation. The only difference is that Loom is making such “synchronously blocking” calls “asynchronously blocking”, i.e. more efficient.

Sure, it’s possible, but when writing an app using Akka, ZIO or cats-effect, I would say this is something that should be fixed. Call it bad style or discipline, but that’s how many Scala systems are written.

We can of course now remove this constraint, and say that it is fine to block in HOFs and the above map example is now correct. Or we can go the other way, and codify the conventions into something that the compiler enforces. Isn’t that part of the scope of Martin’s research project?

There are only two reasons for “fixing” such code:

  1. It’s inefficient because it will synchronously block a physical thread. Post-Loom, it will only “async block” a “virtual thread”, so this reason goes away.
  2. It violates referential transparency, which matters a lot to purely functional developers, and not at all to anyone else.

The case for tracking such a thing is niche at best because it ends up degrading (in the long run) to (2), which is a niche market.


I’d add (3): having to be explicit about concurrency, which is by the way one of the advantages of IOs over Futures. But yes, it’s a niche, and in general I agree that it would be nice to expand that niche to other people. Though you have to balance the expansion against sacrificing compiler-verified correctness. We are in a strongly-typed language after all, and the type system is one of the reasons why we’re using Scala instead of Kotlin or Java.


There is no connection between sync/async and concurrency. Sync/async is merely about whether the threads are implemented in the operating system or in the language runtime.

This code is not concurrent:

List(1, 2, 3).map(_ => inputStream.read())

Under Loom, in some cases, it will shift from being synchronous code to being asynchronous code, which means that it will become more efficient. But it will not become concurrent. Neither Loom nor any JEP intends to automatically make code concurrent, nor are we likely to ever see such a thing (it’s academic research, and mostly failed, at that). Rather, we will see libraries and frameworks continue to provide a variety of different paradigms for doing concurrent computation.
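
This is easy to verify with a sketch (the sleeping lambda stands in for inputStream.read(); the counters are there only to observe the behavior): map applies its function to one element at a time, in order, so the observed “in-flight” count never exceeds one.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class NotConcurrent {
    public static void main(String[] args) {
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxInFlight = new AtomicInteger();

        // A stand-in for inputStream.read(): a blocking-looking call.
        List<Integer> out = List.of(1, 2, 3).stream().map(n -> {
            int now = inFlight.incrementAndGet();
            maxInFlight.accumulateAndGet(now, Math::max);
            try { Thread.sleep(5); } catch (InterruptedException e) { /* ignore */ }
            inFlight.decrementAndGet();
            return n * 2;
        }).toList();

        // map runs its function one element at a time, in order --
        // Loom may make the blocking cheaper, but it adds no concurrency.
        System.out.println(out + " max=" + maxInFlight.get());
    }
}
```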

To borrow the example from the other thread:

val list : List[String] = ???
list.map(str => google(str))

There is no concurrency here, and if there were, it would have nothing to do with sync/async.

Moreover, even if tracking the remote vs local bit proved useful and low-cost, that would not in any way be an argument for tracking a concurrent bit. If tracked, however, those two bits would be completely separate, because the presence or absence of one would have no implication on the other.

So in summary, I still see only two (not three) reasons for “fixing” the above example:

  1. It’s inefficient because it will synchronously block a physical thread. Post-Loom, it will only “async block” a “virtual thread”, so this reason goes away.
  2. It violates referential transparency, which matters a lot to purely functional developers, and not at all to anyone else.

Which degrades to (2). And I have personally accepted that, while I would love for Scala to be that language, it has no chance of becoming that language, and that the current arrangement has some advantages for adoption.

On the “Loom killing Scala’s Future” point, there are other reasons why this is unlikely to ever happen: there is a need for a typed value that represents an asynchronously running computation, simply because it’s the correct abstraction to represent a lazy IO/Task that has been executed. For example, with akka-streams, the streams themselves are lazy/purely functional (i.e. you have to use explicit methods such as statefulMapConcat whenever doing side effects, and the streams do not run until you explicitly execute them), but when you do run a stream you get back a materialized value inside of a Future.

This makes perfect sense, because you do have a running computation (you just started the stream) and the computation is asynchronous, running in the background. In the context of akka-streams that materialized value can even be a Control, which is how you gracefully shut down the stream, although other IO solutions like cats-effect have a concept of cancellation/termination, so it’s done in a different way there.

It can be argued that this is meaningless, because in most cases the asynchronous result of running an IO/Task is at the edge of your application in Main (and you have solutions like IOApp for this), but not every application behaves this way. This is why I brought up the example of streams: there are cases where you dynamically start your programs as values, not just when you start the process as in Main, and this is not that uncommon (think of Spark-like use cases).

On top of all this, you also have the fact that since Scala is a cross-platform language (Scala.js/Scala Native), a strict asynchronous type is needed for better FFI with different runtimes. There is a very strong argument that if Scala didn’t have Future in the stdlib, Scala.js would have been a lot less ergonomic/idiomatic and/or slower. To see this, have a look at how painful GHCJS is to use compared to Scala.js: due to Haskell’s lazy evaluation, everything in GHCJS has to be wrapped, whereas with Scala.js this process is largely automated, since we have a type (i.e. Future) that can represent code outside of Scala.js that is being run asynchronously at the current time (which happens all the time in JavaScript).

By default most runtimes (and even “machines”, depending on how low-level you get) are strict, and if you don’t have control over the runtime to do GHC-style strictness analysis, you get performance issues as well; thunks are not free (and we have to deal with this to a certain extent on the JVM too). While it is true that the JVM is the main place where Scala is run, it’s also not the perfect place, and not being tied to the JVM keeps Scala honest in its language design when it comes to not just blindly accepting the JVM/Java’s shortcomings (e.g. typical Java code being over-reliant on exceptions rather than using errors as values).

No, it’s not. It’s completely the wrong abstraction in a post-Loom era, because 100% of the methods on Future that deal with the success or failure values are callback-based, and callback-based programming is known as “callback hell” for a reason: it is unpleasant, full of boilerplate, and greatly interferes with language constructs like try/catch/finally, try-with-resources, etc.

The correct abstraction for a computation that is running is in fact a VirtualThread. The only drawback of the java.lang.VirtualThread is that it fails to hold onto the return and / or exception type of the computation being executed, and therefore, fails to provide access to this information.

This is easily rectified, as I have done above, with a more modern encoding of Future suitable for the Loom era (although it’s just a toy and should not be taken seriously).

Post Loom, and on the JVM, Future is dead, and serves no purpose that is not vastly better served by more modern data types.


Are you arguing that map/flatMap are callback-based and hence create callback hell? If so, that is quite a redefinition of the word, considering that the whole point of the Future/Promise abstraction was to get rid of callback hell, especially if you look at it in the context of programming languages in general, where JS/ES6 popularised the Future/Promise concept in combination with yield/generators precisely to solve this issue (before that, in JS you just had the “Christmas tree”, ergo callback hell, problem of massively nested callbacks/anonymous functions).

I think you are complaining about something else, but it’s definitely not callback hell.


Yes, of course, these are separate topics, but both are relevant when discussing an IO type. Probably I didn’t put things clearly enough; let me try again :slight_smile:

If, because of running list.map(str => google(str)), you end up with a List[IO[String]], then you have to be explicit about how you convert this into an IO[List[String]] (usually you are interested in the List[String], not the List[IO[...]]). And this explicitness is what I think is an advantage of tracking effects (here implemented using an IO type). The compiler forces you to decide whether these should be run in sequence (no concurrency) or somehow concurrently, and if so, with what parallelism.
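
The choice being described can be sketched in plain Java terms (google here is a hypothetical local stand-in for the remote call): whichever option you pick, sequential or bounded-parallel, the decision is written out explicitly at the call site.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class ExplicitSequencing {
    // Hypothetical stand-in for the remote google(str) call.
    static String google(String s) {
        return s.toUpperCase();
    }

    public static void main(String[] args) throws Exception {
        List<String> queries = List.of("a", "b", "c");

        // Choice 1: sequential, one call at a time (no concurrency).
        List<String> sequential = new ArrayList<>();
        for (String q : queries) sequential.add(google(q));

        // Choice 2: concurrent, with an explicit parallelism limit of 2.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<String> concurrent = new ArrayList<>();
        try {
            for (Future<String> f : pool.invokeAll(
                    queries.stream().map(q -> (Callable<String>) () -> google(q)).toList())) {
                concurrent.add(f.get()); // invokeAll preserves submission order
            }
        } finally {
            pool.shutdown();
        }

        System.out.println(sequential + " " + concurrent);
    }
}
```

The effect-typed version makes the compiler demand this decision; the untyped version leaves it to convention.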

That’s just one example of how the compiler has your back and forces you to pay attention to whether you are calling a “normal” or, in this case, remote method.

4 Likes