Possibility of better contracts for functions


Let me talk a little about contracts in Scala.

By contracts I mean mostly pre- and postconditions of functions, type invariants, and similar functional restrictions on code that are not contained in function signatures or in type/class definitions but must hold when you use those functions or types/classes. Such contracts can also include things like a decreasing measure of a function for termination proofs, and many others. If you are interested, you can read a nice classification of contract types.

Most of you know that there is a way to declare a kind of pre- and postcondition using require and ensuring from Predef. These definitions were originally meant to add runtime assertions that are checked before, or right after, execution of the body of a function with a contract. But this is not their only use. For instance, you can perform deductive verification of your code using tools like Stainless, which allows you to prove formally that your function definition conforms to its contract. Additionally, Stainless gives you the ability to use not only the predefined require and ensuring but also decreases and others.
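As a reminder of the runtime behaviour, here is a small sketch using only Predef (the isqrt function itself is a hypothetical example of mine):

```scala
// Runtime contract checks from scala.Predef:
// `require` throws IllegalArgumentException before the body runs,
// `ensuring` asserts its condition on the result right after it.
def isqrt(x: Int): Int = {
  require(x >= 0, "x must be non-negative") // precondition
  math.sqrt(x.toDouble).toInt
} ensuring (res => res * res <= x)          // postcondition
```

So isqrt(10) returns 3, while isqrt(-1) fails the require check at runtime.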

My background

Probably my background restricts how I think, so it will be easier to understand why something feels problematic to me if you know a couple of facts about where I come from.

My background, besides programming, closely touches the field of formal specification and deductive verification. In particular, I used to teach a university course that covered the Floyd-style verification technique. For formal specification of C programs we used the ANSI/ISO C Specification Language (ACSL). We used a chain of tools, Frama-C -> Jessie2 -> Why3 -> SMT solvers, to perform the verification. At some point in the past, this course used VDM-style specifications, the Dafny language, and JML.

All these specification approaches that I'm pretty familiar with (even though they differ down to their foundations) share some similarities. They definitely influenced my perception of contracts in Scala, and I hope that some ideas could be constructively borrowed from these methods of behaviour specification to improve Scala's way of doing it.

Problems in Scala contracts

I see several problems in the current way of defining contracts in Scala, both in the standard way via Predef and in additional specifications like those Stainless declares. I must note that this is mostly my opinion, and I'm not trying to say that if you don't perceive the following as problems, you're doing something wrong. Nor am I trying to blame the authors of Stainless or of the contracts in Predef.

So, briefly, the problems I see are:

  • I feel the need to put the whole contract of a single unit (a function or a class) in a single place, simply to follow the principle that logically different things belong in different places in the code;
  • adding a contract should affect the code minimally (including its styling), and in particular should not lower its readability too much;
  • contracts in general can be attached to codeless entities, in particular to abstract methods of classes or traits, which is impossible with the current require and ensuring clauses;
  • according to the Liskov substitution principle, contracts should be preserved under inheritance and overriding, which the current ways of definition do not guarantee;
  • different contract elements carry different connotations, so there must be a clean distinction between interface contracts and implementation contracts.


We can take an example from the great lecture on static verification in Scala by Viktor Kunčak, the head of EPFL LARA, the lab that develops the Stainless tool.

Here is a simple function:

def addNaturals(nats: List[Int]): Int = nats.foldLeft(0)(_ + _)

In the example this function is intended to work with natural (non-negative) integers only. We will not discuss the function's name or its implementation, but look at how the definition appears once Viktor adds a contract to it:

def addNaturals(nats: List[Int]): Int = {
  require(nats forall (_ >= 0))
  nats.foldLeft(0)(_ + _)
} ensuring (_ >= 0)

This example uses the require and ensuring from Scala's Predef.
I hope you can see how some of the problems I mentioned arise here:

  • the contract interleaves with the function definition, which disturbs the reading of both the function definition itself and its contract;
  • you need to change the style of your function implementation from a one-liner to a block.

Minimal-effort solution

The first thought on how to make contracts better was to invent twins for the require-like functions that look like the ensuring-like ones.
For instance, addNaturals could be rewritten in the following way:

def addNaturals(nats: List[Int]): Int = {
  nats.foldLeft(0)(_ + _)
} requiring (nats forall (_ >= 0))
  ensuring (_ >= 0)

The same can be done for more complex contracts.
For instance, instead of writing the McCarthy 91 function in this way:

def M(n: BigInt): BigInt = {
  decreases(stainless.math.max(101 - n, 0))
  if (n > 100) n - 10 else M(M(n + 11))
} ensuring (_ == (if (n > 100) n - 10 else BigInt(91)))

you could write it in the following way:

def M(n: BigInt): BigInt = {
  if (n > 100) n - 10 else M(M(n + 11))
} decreasing (stainless.math.max(101 - n, 0))
  ensuring (_ == (if (n > 100) n - 10 else BigInt(91)))

As far as I understand, macro functions could help to implement these new functions in a way that does not require modifications to existing tools (e.g. Stainless).
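For illustration, here is a minimal non-macro sketch of such a requiring twin (a hypothetical helper of mine, not an actual Predef or Stainless API). Note that, being a plain extension method, it checks the precondition only after the receiver block has already been evaluated; reordering the evaluation so the check runs first is exactly what a macro implementation would have to do:

```scala
object contracts {
  // Hypothetical postfix `requiring`, styled after Predef.ensuring.
  implicit class Requiring[A](result: A) {
    def requiring(cond: Boolean): A = { require(cond); result }
  }
}
import contracts._

def addNaturals(nats: List[Int]): Int = {
  nats.foldLeft(0)(_ + _)
} requiring (nats forall (_ >= 0)) ensuring (_ >= 0)
```

With this in scope, addNaturals(List(1, 2, 3)) returns 6, while a negative element makes the requiring clause throw IllegalArgumentException (albeit after the fold has run).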

This basically solves the problem of putting all contract pieces together. Unfortunately, it is not a universal solution, because you still cannot add type invariants or constructor preconditions this way. Also, this approach does nothing for specifying interfaces (as opposed to implementations, i.e., in particular, abstract methods), and nothing for the inheritance-related problems.

Annotation-based solution

Look at how the contract of the last function could be written in ACSL:

/*@ ensures \result == (n > 100 ? n - 10 : 91);
  @ decreases 101 - n;
  @*/
int M(int n) {
  return (n > 100) ? n - 10 : M(M(n + 11));
}
The whole contract is written in a specially formed comment attached to the specified function. This is not ideal, since editors do no syntax highlighting inside the comment; that limitation comes from ACSL being a different language from C, whereas in Scala we could write contracts and code in the same language.

In ACSL we can split the contract into an interface part, attached to the function signature, and an implementation-related part. The first contract is added to the declaration, the second to the definition:

/*@ ensures \result == (n > 100 ? n - 10 : 91); */
int M(int n);

/*@ decreases 101 - n; */
int M(int n) {
  return (n > 100) ? n - 10 : M(M(n + 11));
}
The declaration with its contract can be put into a header file, while the definition with its contract lives in the C file. This makes every implementation conform to the interface contract, while the implementation contract is used only by that particular implementation.

How it could look in Scala

What if we could reuse these contract ideas in Scala? Those comments look much like annotations on a function. For example:

import scala.annotation.StaticAnnotation

class ensures[Res](condition: Res => Boolean) extends StaticAnnotation

Then we could write something like

@ensures({ res: Int => res >= 0 })
def addNaturals(nats: List[Int]): Int = nats.foldLeft(0)(_ + _)

We could have such annotations for all contract members, giving compact contracts that do not disturb the implementations and look much nicer:

@ensures({ res => res == (if (n > 100) n - 10 else BigInt(91)) })
@decreases({ stainless.math.max(101 - n, 0) })
def M(n: BigInt): BigInt = if (n > 100) n - 10 else M(M(n + 11))

Using this way of declaring a contract, we could have contract specifications even for abstract members:

trait McCarthyCalculator {
  @ensures({ res => res == (if (n > 100) n - 10 else BigInt(91)) })
  def M(n: BigInt): BigInt
}

class DirectMcCarthyCalculator extends McCarthyCalculator {
  @decreases({ stainless.math.max(101 - n, 0) })
  def M(n: BigInt): BigInt = if (n > 100) n - 10 else M(M(n + 11))
}
Using annotations, you could even easily write type invariants:

@invariant({ m + z == 15 })
trait DependentFieldsTrait {
  val m: Int
  val z: Int
}
The problem with annotations

To me those examples of annotation-based contracts look very pretty. But they have one big problem: the functions I use as annotation parameters (except in the very first example) refer to parameters (for functions) and fields (for classes) in their bodies, i.e., their evaluation context is actually wrong. Another problem is that the types in the annotations are completely unrelated to the types of the annotated functions (the type Res in the example ensures annotation).

So what if there were a special kind of annotation that puts its parameter functions into a different context, something like

trait FunctionContext {
  type Result
  // ... something else?
}

class ensures(fc: FunctionContext)(condition: fc.Result => Boolean)
  extends FunctionContextStaticAnnotation(fc)
// "condition" works as if it were put inside the annotated function.
where the first parameter would be filled in automatically by the compiler. Or perhaps there is a much better solution?

At the end

Finally, I want to open a discussion on the following questions:

  • do the problems with contracts in Scala that I described make sense to the community?
  • do you find the annotation-based contract examples nice-looking or not?
  • can we do something, with or without additions to the language, to make annotation-based contracts work?

Compile-time parameter constraints

For what it’s worth, it would be easy to make a macro annotation that turns this code:

trait Interface {
  @ensures({ res: Int => res >= 0 })
  def addNaturals(nats: List[Int]): Int
}
class DynamicImplem extends Interface {
  def addNaturals(nats: List[Int]): Int = nats.foldLeft(0)(_ + _)
}
class StaticImplem extends Interface {
  def addNaturals(nats: List[Int]): Int = nats.foldLeft(0)(_ + _)
}


into this:

trait Interface {
  def addNaturals$contract(nats: List[Int])(res: Int): Boolean = res >= 0
  def addNaturals$impl(nats: List[Int]): Int
  final def addNaturals(nats: List[Int]): Int = addNaturals$impl(nats)
}
class DynamicImplem extends Interface {
  def addNaturals$impl(nats: List[Int]): Int =
    nats.foldLeft(0)(_ + _) ensuring addNaturals$contract(nats)
}
class StaticImplem extends Interface {
  def addNaturals$impl(nats: List[Int]): Int = ??? // whatever syntax is needed to make the static verifier work
}
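The expanded shape can also be written by hand and exercised at runtime today. In my sketch below I rename the $-mangled members (which are awkward to type manually) and check the contract with plain ensuring:

```scala
trait Interface {
  // Contract lives on the interface, so every implementation inherits it.
  def addNaturalsContract(nats: List[Int])(res: Int): Boolean = res >= 0
  protected def addNaturalsImpl(nats: List[Int]): Int
  // The public entry point funnels every implementation through the check.
  final def addNaturals(nats: List[Int]): Int =
    addNaturalsImpl(nats) ensuring addNaturalsContract(nats)
}

class DynamicImplem extends Interface {
  protected def addNaturalsImpl(nats: List[Int]): Int = nats.foldLeft(0)(_ + _)
}
```

Calling (new DynamicImplem).addNaturals(List(1, 2, 3)) returns 6, and any override that violated the postcondition would fail the inherited check at runtime.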


I actually don’t find the variants you propose to be any more readable than the original form used by Viktor, and the original form has the advantage of already being implemented.

My initial reaction is that for contracts to be more usable, they’d need to become a first-class construct. Something more like

def addNaturals(nats: List[Int] with { nats.forall(_ >= 0) })
  : Int with { _ >= 0 } =
  nats.foldLeft(0)(_ + _)
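Until such refinements exist in the type system, a runtime-only approximation can be sketched with a hypothetical wrapper class that checks its predicate on construction (all names here are mine, for illustration):

```scala
// Hypothetical runtime stand-in for `A with { pred }`:
// the predicate is checked once, when the value is wrapped.
final case class Refined[A](value: A)(pred: A => Boolean) {
  require(pred(value), "refinement predicate violated")
}

def addNaturals(nats: Refined[List[Int]]): Refined[Int] =
  Refined(nats.value.foldLeft(0)(_ + _))(_ >= 0)
```

A caller then has to produce the refined argument explicitly, e.g. addNaturals(Refined(List(1, 2, 3))(_.forall(_ >= 0))), which makes the precondition check happen at the call boundary rather than at compile time.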


If we are talking terminology, contracts as a first-class construct are not quite the "contracts" I described at the beginning of the post. But the idea is great: instead of a behaviour-specifying addition bolted onto the type system, extend the type system itself so it can express all these possibilities.

I see two problems here.

One is that in practice a contract often takes no less space than the code itself (except for purely functional statements). It seems inconvenient to oblige everyone to write all of it inside the parameters' types. You could say that it's probably better to define something like

type NatsList = List[Int] with { _.forall(_ >= 0) }

but what if I want to abstract over it? Do you think I would need to write something like the following?

type PosList[N: Numeric] = List[N] with { _.forall(_ >= 0) }

Maybe, I’m not sure.

The second is that pre- and postconditions are not the only kinds of contract elements. Security-related properties, termination, temporal properties: all of these can be part of a contract. What you've proposed is a brilliant way to express only a particular subset of them.


In this simple example it is clear that a macro annotation can help.

But there are two more problems I wrote about in the post:

  • the compiler does not check that the type of res in the annotation corresponds to the result type of the annotated function;
  • you cannot use the parameters of the annotated function in the condition (i.e., in the argument of the annotation).


I think it is generally advisable to keep constraints as close as possible to what they constrain, which is why it looks like an extension to the type system in my examples.

You can get pretty far using only with as the constraint term and focusing on restricting the range of types:

trait Constrained {
  val x: Int with { _ + y == 15 }
  val y: Int with { _ + x == 15 }
}

trait Constrained2 {
  val x: Int
  val y: Int
  (x + y) with { _ == 15 }
}

var i, s = 0
while (i < N) with i.0 < i.1 {
  s += i
}
// Would fail, because I omitted i += 1



Is contract-based verification going to work well beyond toy examples?
For example, could it find out whether the following method is defined for
all inputs (assuming Math.log, Math.cos, and Math.sin have proper contracts added)?

def m(x: Double): Double = Math.log(1.0 + 2.0 * Math.cos(x) * Math.sin(x))

Best, Oliver


Macro annotations expand before name resolution and type-checking, so I’m pretty confident that the scheme I’ve exemplified should work when contracts refer to the method arguments. I don’t think making sure the type of res corresponds to the definition is a real problem either.


I can’t agree with you about contract-based verification in general. For instance, there is a project for deductive verification of Linux kernel modules (actually, this project mostly supports the toolchain; the specifications and verification artifacts are not widely available). This project has successfully applied contract-based verification to the fairly complicated field of operating system internals. I’m sure you wouldn’t call that a toy example.

The main problem with contract-based deductive verification is the effort and cost you have to pay to use it. There are not many projects and fields that can pay that much, which is why the science is moving forward more slowly than it could; and even where it has moved forward a lot, the results are not that spectacular or widespread.

Regarding your example, there are two points. First, when we are talking about (more or less) simple pure functions, they do not always need to be specified and verified at all, since they often are their own specification.

This is not always true, though; it depends. For instance, we can consider something like this:

@requires({ x >= 0 })
@ensures({ res: Real => res * res == x })
def sqrt(x: Real): Real = ???

It can be used when our tools are familiar with the multiplication operation and the Real type (whatever it is; it doesn’t matter now). Such a specification tells us that ∀x:Real · x ≥ 0 → sqrt(x) * sqrt(x) == x, and this can be used later. So, even if your tool was not familiar with the square root operation, it becomes familiar with it through its relation to the multiplication operation it already knows.

Talking about sin and cos: if it is enough to know that, e.g., ∀x:Real · sin(x)*sin(x) + cos(x)*cos(x) == 1, together with similar equations, then modern solvers handle this pretty well (and can thus prove verification conditions for functions that use these sin and cos operations). Similarly to the square root example, what the solvers need to know is a relation between sin and cos and something they already know (the multiplication operation and the constant 1, in this example). This case is a little more complicated because a single relation mentions both operations at the same time.

If you want to reason about the actual values of these operations in verification conditions, i.e., if we can prove the correctness of some programs only by knowing how exactly these functions behave, then it starts to depend on whether particular solvers are familiar with these operations (similarly to how they are familiar with the values produced by the multiplication operation). I know of solvers that try to work on this level, but I haven’t heard of their successful application to deductive verification.

There is one more point regarding your example and the applicability of contract-based deductive verification. This kind of verification deals efficiently enough (keeping in mind the costs I was talking about) with structured control systems and data-transformation code rather than with sophisticated mathematical code. And you would probably agree that exactly such systems are the ones that require close verification effort.


Am I right that in this case compiler errors (if there are problems in the code inside the annotations) would still be readable, i.e., they would contain the correct line and position, etc.?


Not 100% sure (haven’t tried), but I think they would, as the moved code trees would retain their original attached positions.