Future transformation methods should null-check their parameters


Even though a general rule of the Scala library is not to check for null parameters, I believe Future transformation methods should specifically check for them.

The reason for this is that null values can creep in due to order-of-initialization defects in application code; those arise more easily in asynchronous executions expressed as chains of Future transformations. Letting them manifest as an NPE later in the execution, instead of early at the point of the defect, makes diagnosing such issues much harder.

Longer story:

Today I ran into a weird NPE where every frame on the stack was in scala.concurrent._:


(Apologies for the weird format; our system JSON-izes exception reports, a story for another day.)

Anyway, this comes from:

  def zipWith[U, R](that: Future[U])(f: (T, U) => R)(implicit executor: ExecutionContext): Future[R] =
    flatMap(r1 => that.map(r2 => f(r1, r2)))(if (executor.isInstanceOf[BatchingExecutor]) executor else parasitic)

Basically, Future.zipWith will merrily accept null for that when invoked, and only blows up when the transformation itself is eventually executed and that.map attempts to dereference the null.
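To illustrate the failure mode, here is a minimal, self-contained sketch (InitOrderDemo and its members are made-up names for this example): zipWith silently captures the null reference at composition time, and the NPE only surfaces later, when the first future completes and the callback runs.

```scala
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object InitOrderDemo {
  val p = Promise[Int]()
  // Defect: `late` is referenced before its initializer has run,
  // so zipWith silently captures null here -- no error yet.
  val combined: Future[Int] = p.future.zipWith(late)(_ + _)
  val late: Future[Int] = Future.successful(2)
}

// Only once `p` is completed does the callback run and dereference the
// captured null, failing `combined` with an NPE whose stack trace points
// into scala.concurrent._ rather than at the defect:
//   InitOrderDemo.p.success(1)
//   Await.ready(InitOrderDemo.combined, 1.second)
```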

I eventually tracked this down to an order-of-initialization issue in my code, but only after I compiled my own version of Future.scala with an early null check in zipWith. That version threw an NPE with a helpful stack trace, which let me identify where I had composed a future with another future defined in a val; that val was initialized only later, so at the point of composition it was still null.

Unfortunately, the original stack trace produced by the unmodified Future implementation, as shown above, was of no help in establishing this.

I know that the usual principle of Scala libraries is not to check for null parameters because people shouldn’t use them. However, specifically in the case of futures, I think an exception to the rule is warranted: futures are all about asynchronous execution, where subtle order-of-initialization issues can crop up and cause an uninitialized val to be picked up as null even though the developer never explicitly used null. An early check would make the diagnosis of such problems much easier, since the library would ensure the program fails fast, close to the actual point of the defect. The JVM is very good at optimizing null checks down to a CPU trap, so using java.util.Objects.requireNonNull shouldn’t cost much, IMHO.
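For the sake of discussion, here is a user-land sketch of the proposed behavior (zipWithChecked is a hypothetical helper name, not a proposal for the actual API; the real fix would go inside Future.zipWith itself):

```scala
import java.util.Objects.requireNonNull
import scala.concurrent.{ExecutionContext, Future}

object CheckedSyntax {
  implicit class CheckedZip[T](private val self: Future[T]) extends AnyVal {
    def zipWithChecked[U, R](that: Future[U])(f: (T, U) => R)(
        implicit ec: ExecutionContext): Future[R] = {
      // Fails fast at the call site, so the stack trace points at the
      // defect instead of deep inside scala.concurrent._ much later.
      requireNonNull(that, "zipWith: `that` must not be null")
      self.zipWith(that)(f)
    }
  }
}
```

With this in scope, composing with a not-yet-initialized val throws immediately at the composition site, with the application frame on top of the stack.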

As an aside, Java 14 “more helpful NPEs” wouldn’t help with this at all; the only effect it would’ve had in my case would’ve been to display “that.map” as the exception message.

For order of initialization, isn’t that why we have -Xcheckinit?

See [WIP] Scala with Explicit Nulls


I tried recompiling my code (the version before I fixed it) with -Xcheckinit, and it didn’t catch this particular initialization-order issue. As I said, the code uses future composition heavily; it’s easy to get into territory where the compiler can’t help you anymore, and earlier runtime detection would be very beneficial.


Note that a sound compile-time initialization checker is in the works: Improve forward reference handling?


Hi Attila,

I think the easiest path forward is for someone (nudge nudge) to add the aforementioned checks, and then run the benchmarks to see what the impact is. 🙂

I don’t mind giving this a stab if there’s no opposition to it in principle. I wanted to get some initial feedback, though, on whether this makes sense at all to long-time contributors.