Generic vals, vars and return types

I haven’t found a previous discussion on this topic, so apologies for the noise if it has already been discussed, or is even being worked on in Dotty - I don’t know the status there either.

In TypeScript you can declare generic functions just as in Scala:

function id<A>(a: A): A {
  return a
}

However, in TypeScript functions are first-class values, so the above is equivalent to this:

const id = <A>(a: A) => a

And just to make it clear, id is a plain value and its type is that of a generic function:

const id2: <A>(a: A) => A = id

So we can declare these generics anywhere, including in function return types, and currying actually works in a genuinely useful way:

class Box<A> { 
  constructor(public readonly value: A) {} 
}

function map2<A, B>(ba: Box<A>, bb: Box<B>): <C>(f: (a: A, b: B) => C) => Box<C> {
  return f => 
    new Box(f(ba.value, bb.value))
}

Just to make that clear:

// Inferred type: <C>(f: (a: number, b: string) => C) => Box<C>
const pf = map2(new Box(2), new Box("x"))

pf((a, b) => `${a}${b}`)
//=> Box { value: '2x' }

Here’s the Scala equivalent:

case class Box[A](value: A)

def map2[A, B, C](ba: Box[A], bb: Box[B])(f: (A, B) => C): Box[C] = 
  Box(f(ba.value, bb.value))

This works for plain function calls:

map2(Box(2), Box("x"))((a, b) => s"$a$b")
// res: Box[String] = Box(2x)

But then it breaks when doing currying at the call site:

val pf = map2(Box(1), Box("x")) _
// pf: ((Int, String) => Nothing) => Box[Nothing] = $$Lambda$1323/1759889326@68bd8ca7

And note this doesn’t work if we do a def either:

def pf = map2(Box(1), Box("x")) _
// pf: ((Int, String) => Nothing) => Box[Nothing]

In other words, in current Scala multiple parameter lists are only a trick for inferring the type of the f parameter when doing a normal call with all arguments supplied; they are useless for partial application whenever type parameters are involved, because the undetermined parameters get instantiated (to Nothing) at the point of partial application.
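
A sketch, using the same map2 and Box as above: pinning C explicitly does make the partial application typecheck, but then C is fixed once and for all:

val pf = map2[Int, String, String](Box(1), Box("x")) _
// pf: ((Int, String) => String) => Box[String]

pf((a, b) => s"$a$b")
// res: Box[String] = Box(1x)

// pf can now only ever produce a Box[String]; the polymorphism in C is lost.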

Is this something that can or is being fixed? In Dotty maybe?

It’s called first-class or higher-rank polymorphism, and I wish we had it in such a simple form. What’s weird is that the Scala compiler does have an internal representation of method types, but you can’t write them out explicitly, and lambda expressions cannot implement them. Not sure why that restriction exists.
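
To make the restriction concrete, a minimal Scala 2 sketch (id, f and g are just illustrative names):

def id[A](a: A): A = a

// The compiler tracks the method type [A](a: A)A for id internally, but there
// is no surface syntax to give that type to a value, so eta-expansion has to
// pick one concrete A:
val f = id _              // f: Nothing => Nothing; the polymorphism is gone
val g: Int => Int = id _  // fine, but fixed to A = Int forever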

Anyway, in your case you can do it via some encoding, this way:

scala> case class map2[A, B](ba: Box[A], bb: Box[B]) {
  def apply[C](f: (A, B) => C): Box[C] = Box(f(ba.value, bb.value)) }
// defined case class map2

scala> val pf = map2(Box(2), Box("x"))
val pf: map2[Int, String] = map2(Box(2),Box(x))

scala> pf((a, b) => s"$a$b")
val res0: Box[String] = Box(2x)

My impression was that it’s because functions/lambdas are instances of classes (on the JVM), and thus must have concrete type parameters at compile time. I could be mistaken, however.

You could have:

trait PolyFunction {
  def apply[A](a: A): A
}

But then you would need a lot of PolyFunction traits, something like: PolyFunction0ToA, PolyFunctionAToA, PolyFunctionABToA, PolyFunctionABToB, …

A more pragmatic approach might be:

trait PolyFunction2 {
  def apply(a: Any, b: Any): Any
}

And then the compiler would do something like this:

val f = [A,B](a: A, b: B) => (b, a)
f(1, "foo")

// ~~> compiles to following pseudo IR  ~~>

val f: [A, B](A, B) => Tuple2[B, A] = new runtime.PolyFunction2 {
  def apply(a: Any, b: Any): Any = (b, a)
}
f.apply(1, "foo").asInstanceOf[Tuple2[String, Int]]

As @Jasper-M explains, I don’t see a reason why the runtime would be a limitation here; as usual, we can rely on erasure. I think people are too prone to laying responsibility on the JVM for what are often just language design mishaps.

The only reasonable implementation of this trait is the identity function. You probably want something more flexible like TypedFunction from szeiger’s ErasedTypes on GitHub (src/main/scala/com/novocode/erased/TypedFunction.scala). That function type can encode type-level computations and type constraints.
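
Roughly, the idea is to let the result type be computed from the argument type. Here is a hand-rolled sketch in that spirit (not the actual TypedFunction API; ErasedFn, identityFn and boxFn are made-up names, and Box is the case class from earlier in the thread):

// One trait covers identity, wrapping, and other shapes, because the result
// type is a type-level function of the argument type.
trait ErasedFn {
  type Result[A]
  def apply[A](a: A): Result[A]
}

val identityFn: ErasedFn { type Result[A] = A } = new ErasedFn {
  type Result[A] = A
  def apply[A](a: A): A = a
}

val boxFn: ErasedFn { type Result[A] = Box[A] } = new ErasedFn {
  type Result[A] = Box[A]
  def apply[A](a: A): Box[A] = Box(a)
}

identityFn(42)   // Int = 42
boxFn("hi")      // Box[String] = Box(hi)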

The other kind of constraint that you frequently want to use is implicit evidence (i.e. typeclasses). Dotty takes care of that one with implicit function types.

Yes I agree that one isn’t very useful. But if the compiler backed polymorphic function types with erased PolyFunctionN classes, that would give all the power one could need without having to write ultra-generic classes in order to support all possible permutations of type variables, upper bounds, context bounds etc.

You might find shapeless Polys worth a look … they’re covered in chapter 7 of Dave Gurnell’s book, The Type Astronaut’s Guide to Shapeless: https://underscore.io/books/shapeless-guide/.

Cheers,

Miles

I think you could actually do it pretty simply, by moving all the type parameters from the trait to the method:

trait PolyFunctionN {
  def apply[T1, ... TN, R](v1: T1, ... vN: TN): R
}

The main problem is, many (most?) functions aren’t polymorphic; they’re going to be defined between concrete types (e.g. _.toInt: String => Int). Thus, while a polymorphic method can be eta-expanded into a monomorphic function, it can’t be done the other way around.
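
A quick sketch of that asymmetry (first, firstInt, firstStr and toInt are just illustrative names):

def first[A](as: List[A]): A = as.head

// A polymorphic method can be eta-expanded to a monomorphic function at any
// fixed element type...
val firstInt: List[Int] => Int       = first _
val firstStr: List[String] => String = first _

// ...but a monomorphic function value cannot be generalized back into
// something that works for every element type:
val toInt: String => Int = _.toInt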

Additionally, many polymorphic functions (e.g. Polys in shapeless, as @milessabin mentioned) are defined by implicit mappings (Cases, in shapeless) between some set of concrete types, but not for any possible type. This requires a much more complicated encoding, as can be seen by looking at the implementation for Polys.
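
For reference, a minimal shapeless Poly1 along those lines (assuming shapeless is on the classpath; describe, intCase and stringCase are made-up names, while Poly1, at and Case.Aux are the real shapeless API):

import shapeless.Poly1

// Defined only for Int and String via implicit Cases; any other argument
// type is a compile-time error.
object describe extends Poly1 {
  implicit val intCase: Case.Aux[Int, String] =
    at(i => s"an Int: $i")
  implicit val stringCase: Case.Aux[String, String] =
    at(s => s"a String: $s")
}

describe(42)      // "an Int: 42"
describe("foo")   // "a String: foo"
// describe(1.0)  // does not compile: no Case for Double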

It may be worth looking into whether the Scala compiler could transform methods into FunctionNs or PolyFunctionNs, depending on the context and needs, though I suspect there may be good reasons not to.


Side note: the definition of PolyFunctionN I gave above does not deal with possible type bounds, but I suspect that could be managed by adding bounds as parameters to the trait (at the cost of making the source less readable).

Then to which PolyFunction would the following eta expand?

def foo[A](as: List[A], i: Int): A = as(i)
foo _

There’s actually more precedent for this than I thought. Dotty already compiles functions with more than 22 parameters to an erased type trait FunctionXXL { def apply(xs: Array[Object]): Object }.

By the way, Dotty now has dependent function types, so you can encode first-class polymorphism in a quite straightforward way:

// Conceptually takes a function f of type [T](_:List[T])List[T]
def baz(f: implicit (t: {type T}) => List[t.T] => List[t.T]): String = {
  class Poly{type T}
  f(new Poly{type T = String})(List("a","b")).head *
     f(new Poly{type T = Int})(List(1,2,3)).head
}

baz(ls => ls.reverse)  // 'polymorphic' lambda passed; returns "bbb"

The Poly class is just there because Dotty will currently lose precise type information about T if we just write f(new{type T = Int}). We’d have to write f(new{type T = Int} : {type T = Int}), which is even uglier than the above Poly hack.

You’re right, it’s not that simple. Nevertheless, I think it can be done without abandoning typing entirely.

trait PolyFunctionN[H1[_], ... HN[_], R[_, ... _]] {
  def apply[T1, ... TN](v1: H1[T1], ... vN: HN[TN]): R[T1, ... TN]
}

Then, we can eta-expand the method as follows:

def foo[A](as: List[A], i: Int): A = as(i)

// Helper type lambdas (Scala 2 encoding)
type Const[X] = { type λ[_] = X }
type R = { type λ[T1, _] = T1 }

type Foo = PolyFunction2[List, Const[Int]#λ, R#λ] {
  def apply[T1, T2](v1: List[T1], v2: Const[Int]#λ[T2]): R#λ[T1, T2] = v1(v2)
}

(foo _): Foo

// Equivalent definition of Foo, with a little cleanup
type Foo = PolyFunction2[List, Const[Int]#λ, R#λ] {
  def apply[T1, T2](v1: List[T1], v2: Int): T1 = v1(v2)
}

What do you need a new type for? Why not just use Function2[Any, Any, Any]?

Yes, you could build it on top of the existing function types.
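
Something like this hand-rolled sketch (Swap2 and swap are hypothetical names), which hides a plain Function2[Any, Any, Any] behind a typed facade and relies on erasure, much as a compiler-generated erased wrapper would:

// The untyped Function2 does the work at runtime; the typed apply recovers
// the result type with a cast.
class Swap2(underlying: (Any, Any) => Any) {
  def apply[A, B](a: A, b: B): (B, A) =
    underlying(a, b).asInstanceOf[(B, A)]
}

val swap = new Swap2((a, b) => (b, a))
swap(1, "foo")   // (String, Int) = (foo,1)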

Some relevant links:

Dotty feature request (closed).
Scalaz 8 documentation of polymorphic values.
P∀scal compiler plugin to reduce boilerplate in creating polymorphic values in the usual Scala encoding.
