Proposal to add Dependent Function Types to the Language

Here’s a proposal to add dependent function types to Scala 3. A dependent function type such as

(x: A) => x.B

is the logical first-class version of a dependent method

def m(x: A): x.B

Here’s a link to the doc page.

Motivation

The main reason for adding dependent function types is more regularity. If methods can be dependent, why do you have to lift this into a class with an apply method if you want to do the same for functions? It’s clearer and more efficient if we can express the argument dependency directly in the function type.
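For illustration, here is a minimal sketch (the `Entry`/`Key` names are just placeholders) contrasting today's workaround with what the proposal enables:

    trait Entry { type Key; val key: Key }

    // dependent *methods* have always been possible:
    def extractKey(e: Entry): e.Key = e.key

    // today, a first-class version needs a hand-written trait with a dependent apply method:
    trait Extractor { def apply(e: Entry): e.Key }
    val extractor1: Extractor = new Extractor { def apply(e: Entry): e.Key = e.key }

    // with dependent function types, the dependency is expressed directly in the type:
    val extractor2: (e: Entry) => e.Key = e => e.key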

Originally, Scala did not have dependent function types because we did not see a good way to map them into normal Scala types. Adding a fundamentally new type to your type system is a big decision which should not be taken lightly. The Dotty implementation uses a clever scheme, where a dependent function type is represented as a normal function type with a refinement. This avoids essentially all the added complexity on the type system side.
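Roughly, the encoding looks like this (a sketch reusing the `Entry` example above; the exact approximation of the result type is an implementation detail):

    // the dependent function type
    //   (e: Entry) => e.Key
    // is represented as an ordinary function type whose apply method is refined to be dependent,
    // something like
    //   Function1[Entry, Entry#Key] { def apply(e: Entry): e.Key }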

This proposal is open for discussion in the community and will be discussed in our next SIP meeting, where we’ll take all your feedback into account for the approval/dismissal of this feature.

Going Further

Once we have dependent function types, of course one would like to have polymorphic function types as well. There are some preliminary ideas about that floating around, but polymorphic function types should be a separate SIP, if they get to that stage.
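(Purely as a hypothetical sketch of what such a thing might look like, not part of this proposal:

    // a polymorphic function value, with an explicit type-parameter section in its type
    val reverse: [A] => List[A] => List[A] = [A] => (xs: List[A]) => xs.reverse

where `[A] => ...` denotes a function that is parameterized over a type as well as a value.)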


Apologies if this is an irrelevant question, but in the context of the method-type <==> function-type unification, are there any plans to define “overloaded function types”? In particular, even now it is possible to write

  type TOverloadedFunc = (Int => Int) with ((String, String) => String)
  trait OverloadedFunc extends (Int => Int) with ((String, String) => String)

But can we expect that something like ((Int, Int) => Int) with ((String, String) => String) will become available and meaningful?
By the way, right now it is possible to define

  type TOverloadedFunc2 = ((Int, Int) => Int) with ((String, String) => String)

- and it compiles fine, but I can’t find any way to instantiate it :slight_smile:

I’m fully aware of the JVM runtime limitations that make this task not so straightforward, but perhaps it could be implemented at least in the form of some “semi-structural” / inline types, which would exist only at compile time and be used only in inline functions? (Meaning that after inlining the body of such an inline function, the “overloaded function type” would be replaced by the particular type in which those overloaded methods were defined in some regular way.)

Do you perhaps mean “or” instead of “and” (i.e. with) in your examples? With the Proposal to Add Union Types to the Language you will be able to write

(Int => Int) | ((String, String) => String)

I don’t quite get what you are referring to.

I think what @ibaklan wants is that if you have overloaded methods, they could be seen as a single identifier with different potential function types. So if you had

def func(i: Int): Int = ???
def func(a: String, b: String): String = ???

You could express that func is “either Int=>Int or (String, String)=>String,” i.e. if you pass it a single Int parameter you get the overload that returns Int, but if you pass it two String parameters it would call the overload that returns String. The set of potential paths should be expressible using a single type. The discussion was about what type that would be.
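In other words, something like this (a purely hypothetical sketch; eta-expanding an overloaded method does not actually work like this today):

    val f = func _   // hypothetical: what single type would f have?
    f(1)             // would pick the Int overload and return an Int
    f("a", "b")      // would pick the String overload and return a String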

If you manage to construct such a value, it should be of type (Int => Int) & ((String, String) => String). Indeed, you can either use it as an Int => Int or as a (String, String) => String, which by the Liskov substitution principle (LSP) means its type should be a subtype of both, which is exactly what & models.
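Such a value can in fact be written by hand when the overloads have different arities (a minimal sketch; the two apply methods do not clash after erasure):

    // a single value usable under both overload shapes
    val f: (Int => Int) & ((String, String) => String) =
      new Function1[Int, Int] with Function2[String, String, String] {
        def apply(i: Int): Int = i + 1
        def apply(a: String, b: String): String = a + b
      }

    f(1)        // resolves to the Int => Int apply
    f("a", "b") // resolves to the (String, String) => String apply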

However, it is not, in general, possible to automatically construct this magic thing which dynamically dispatches to the appropriate overload. With overloads that have a different number of formal parameters, you can do it, but not with same-arity-but-different-types. For example, if you have

def func(x: Iterable[Int]): Int = ???
def func(x: Int => Boolean): Int = ???

its type could be (Iterable[Int] => Int) & ((Int => Boolean) => Int). You could call that with a Set[Int], which is both an Iterable[Int] and an Int => Boolean, and now at run-time you cannot know which overload needs to be called. Both are applicable.

In general, it is impossible to reproduce Scala’s compile-time overload resolution at run-time. I should know, it’s an important topic in the design of Scala.js.

So automatically constructing a combined function value of multiple overloads would be a lie. We must not do that.


No, I actually mean with (&) (close to what was explained by @nafg).
Constructions with | which could mimic overloading would rather look like (String | Int) => (String | Int), but that would in fact be more of a “runtime overloading” (dynamic dispatch) than true compile-time overloading. Interestingly, this kind of “runtime overloading” generally would not allow you to mimic overloading with different result types for different input arguments; however, with Dependent Function Types I could imagine something like

    // this may express a kind of "runtime overloading", close to `(Int => Int) & (String => String)`
    type TrickyFunction = (arg: OvlCase[Int,Int] | OvlCase[String,String]) => arg.Result

    class OvlCase[T,R](arg: T) {
      type Result = R
    }

But I am not sure whether this combination of | and Dependent Function Types would even compile.
(And anyway, it is not what I meant in the initial post.)

def map[A, B, C](h: Int :: String :: User :: HNil)(f: (Int => A) & (String => B) & (User => C))

Actually, exactly this “function intersection” notation was suggested in this article as motivational “meta code” for shapeless polymorphic function values. But my original motivation came not from that topic, but rather from some discussions with a friend of mine (who is a C# dev) and from some other early experiments with Java generics edge cases. Those thoughts are raw enough to be a little bit verbose, so I was trying to condense them as much as possible.

So when I was saying

I was generally thinking about multiple inheritance of the same generic interface with various parameterizations (which is basically impossible on the Java runtime, but still, some “workarounds” could be invented).

Meanwhile, in C# you can inherit the same generic interface multiple times (with different parameterizations).
You can easily define something like an overloaded function (Foo => R1) & (Bar => R2):

    interface Function1<in T1, out R> {
        R apply(T1 arg1);
    }

    interface Foo {}
    interface Bar {}

    // order of extension matters when cast with variance occurs - first match will be preferred
    interface OverloadedFooBarFunction :
        Function1<Bar, Tuple<object,object>>, Function1<Foo, Tuple<object,object,object>> {}
Complete working source code of the above example:
using System;
using System.Diagnostics;
using System.Collections.Generic;

namespace crafts
{
    interface Function1<in T1, out R> {
        R apply(T1 arg1);
    }

    // order of extension matters when cast with variance occurs - first match will be preferred
    interface Case1OverloadedFooBarFunction : 
        Function1<Bar, Tuple<object,object>>, Function1<Foo, Tuple<object,object,object>> {}
        
    // order of extension matters when cast with variance occurs - first match will be preferred
    interface Case2OverloadedFooBarFunction : 
        Function1<Foo, Tuple<object,object,object>>, Function1<Bar, Tuple<object,object>> {}
    
    interface Foo {}
    class FooImpl : Foo {}
    interface Bar {}
    class BarImpl : Bar {}

    interface FooBar : Foo, Bar {}
    class FooBarImpl : FooBar {}

    class OverloadingViaMultipleInheritanceTest
    {
        static void Main(string[] args)
        {
            Case1OverloadedFooBarFunction f1 = new Case1OverloadedFooBarFunctionImpl();
            Case2OverloadedFooBarFunction f2 = new Case2OverloadedFooBarFunctionImpl();
            Console.WriteLine("f1(Foo):" + f1.apply(new FooImpl()));
            Console.WriteLine("f1(Bar):" + f1.apply(new BarImpl()));
            // overloading works as expected
            Debug.Assert("apply(Foo):" == f1.apply(new FooImpl()).Item1);
            Debug.Assert("apply(Bar):" == f1.apply(new BarImpl()).Item1);
            // in cases where overloading fails, cast + variance just provides the first matching impl from the list of implemented interfaces
            // from `Case1OverloadedFooBarFunctionImpl` cast will automatically choose `Function1<Bar, Tuple<object,object>>`
            Function1<FooBar,object> f1_ = f1;
            Console.WriteLine("f1_(FooBar):" + f1_.apply(new FooBarImpl()));
            Debug.Assert(f1_.apply(new FooBarImpl()).ToString() == "(apply(Bar):, crafts.FooBarImpl)");
            // from `Case2OverloadedFooBarFunctionImpl` cast will automatically choose `Function1<Foo, Tuple<object,object,object>>`
            Function1<FooBar,object> f2_ = f2;
            Console.WriteLine("f2_(FooBar):" + f2_.apply(new FooBarImpl()));
            Debug.Assert(f2_.apply(new FooBarImpl()).ToString() == "(apply(Foo):, crafts.FooBarImpl, unicq)");
        }
    }

    class Case1OverloadedFooBarFunctionImpl : Case1OverloadedFooBarFunction {
        public Tuple<object,object,object> apply(Foo foo)
        {
            return new Tuple<object,object,object>("apply(Foo):", foo, "unicq");
        }
        public Tuple<object,object> apply(Bar bar)
        {
            return new Tuple<object,object>("apply(Bar):", bar);
        }
    }

    class Case2OverloadedFooBarFunctionImpl : Case2OverloadedFooBarFunction {
        public Tuple<object,object,object> apply(Foo foo)
        {
            return new Tuple<object,object,object>("apply(Foo):", foo, "unicq");
        }
        Tuple<object,object> Function1<Bar, Tuple<object,object>>.apply(Bar bar)
        {
            return new Tuple<object,object>("apply(Bar):", bar);
        }
    }
}

So, in other words, a construct like trait OverloadedFooBarFunction extends Function1[Int,Int] with Function1[String,String] is quite valid from the C# perspective (I think an important role is played by the fact that generics (parameterization) in C# are not stripped out at runtime but are fully available through reflection; anyway, that is not so critical as to make it impossible to do something “similar” in the Java runtime world).
From the other perspective,
trait OverloadedFunction1x2[T1,T2,R1,R2] extends Function1[T1,R1] with Function1[T2,R2]
would be invalid from the C# perspective as well: compile-time error CS0695 appears. (By the way, it would be nice if, in some verbose mode, the Scala compiler also provided links to a canonical error-description page, to avoid the extra googling step.)

Disregarding the fact that
trait OverloadedFunction1x2[T1,T2,R1,R2] extends Function1[T1,R1] with Function1[T2,R2]
would also be invalid in C# for that reason, a “direct” definition of a “2-times overloaded function” interface works fine in C#:

    // valid C# code - "2-times overloaded function" generic interface
    interface Function1x2<in T1, in T2, out R1, out R2> {
        R1 apply(T1 arg1);
        R2 apply(T2 arg1);
    }
Basically, an ever-growing chain of "overloaded function interfaces" could be defined easily:
    interface Function1x3<in T1, in T2, in T3, out R1, out R2, out R3> : Function1x2<T1,T2, R1,R2> {
        R1 apply(T1 arg1);
        R2 apply(T2 arg1);
        R3 apply(T3 arg1);
    }

    interface Function1x2<in T1, in T2, out R1, out R2> : Function1<T1,R1> {
        R1 apply(T1 arg1);
        R2 apply(T2 arg1);
    }

    interface Function1<in T1, out R> {
        R apply(T1 arg1);
    }

In fact, this sort of interface expressing a “2-times overloaded Function1” can be defined even on the Java runtime (applying some simple generics “trickery”). So

    // this is invalid Scala code: compile-time error "double definition: ...  and ... have same type after erasure: ..."
    // but an effectively identical trait can be compiled (see the workaround below)
    trait Function1x2[T1,T2, R1,R2] {
      def apply(v1: T1): R1
      def apply(v1: T2): R2
    }
is generally invalid, but it can be fixed by the following workaround; a complete working example could look like this:
    trait OverloadedFunction1_2[T1 <: AnyRefX, R1, T2 <: AnyRefX, R2] extends OverloadedFunctionBase1_2[T1, R1, T2, R2] {
      // def apply(v1: T1): R1
      // def apply(v1: T2): R2
    }

    trait OverloadedFunctionBase1_2[T1 <: AnyRef1Tag, R1, T2 <: AnyRef2Tag, R2] {
      def apply(v1: T1): R1
      def apply(v1: T2): R2
    }

    trait AnyRefX extends AnyRef1Tag with AnyRef2Tag
    trait AnyRef1Tag {}
    trait AnyRef2Tag {}

    // it can be shown that the above-mentioned OverloadedFunction1_2 trait is quite viable: it is possible to instantiate it
    trait Foo extends AnyRefX
    trait Bar extends AnyRefX

    // this actually models an instantiation of `(Foo => Foo) & (Bar => Bar)`
    object OverloadedFunctionImpl extends OverloadedFunction1_2[Foo, Foo, Bar, Bar] {
      def apply(v1: Foo): Foo = v1
      def apply(v1: Bar): Bar = v1
    }

    val overloadedId : OverloadedFunction1_2[Foo, Foo, Bar, Bar] = OverloadedFunctionImpl
    val foo = new Foo{}
    assert(foo == overloadedId(foo))
    val bar = new Bar{}
    assert(bar == overloadedId(bar))

Actually, this trick just “tags” some generic parameters (with the otherwise meaningless T1 <: AnyRef1Tag, T2 <: AnyRef2Tag, …, which do nothing except make the base methods’ erasures distinct, and which are then anyway narrowed to the unified T1 <: AnyRefX, T2 <: AnyRefX, …).

I am not sure how `|`-types are supposed to interact with erasure; if my assumption is correct, this "tagging" could probably be "implemented" (worked around) in the following fashion (assuming that `def apply[T <: SomeTag | T0](arg: T): R` is erased to `apply(SomeTag)`):
    // maybe this code is valid, if the `Foo | T` type is erased to `Foo` in the Java runtime representation
    trait OverloadedFunction1_2[T1, R1, T2, R2] extends OverloadedFunctionBase1_2[T1, T1, R1, T2, T2, R2] {
      // def apply(v1: T1): R1
      // def apply(v1: T2): R2
    }

    trait OverloadedFunctionBase1_2[T1, T1_ <: AnyRef1Tag | T1, R1, T2, T2_ <: AnyRef2Tag | T2, R2] {
      def apply(v1: T1_): R1
      def apply(v1: T2_): R2
    }
This turns out to be a bad idea, though:

Foo | T (in contrast to Foo & T) will be erased to the common super-class of Foo and T, so in general to Any. It seems that the only way to make it work with [T1 <: Any, T2 <: Any, ...] would be to introduce some special tagging for overloaded generic methods with erasure conflicts: if such specially tagged methods were “erased” specially, not to overloadedMethod(Any) but to something like overloadedMethod$arg1tagT1(Any), overloadedMethod$arg1tagT2(Any), …, that could resolve the erasure clashes “manually”. Essentially, the current “coherent” behaviour of overloading should be preserved anyway: for abstract methods, if T1 =:= T2 then one of R1 <: R2 or R2 <: R1 should hold, and the two methods effectively become a single method (T => R1 & R2); and in the case where both the (T1 => R1) and (T2 => R2) methods are already implemented in some class, there should be no way to instantiate that class with T1 =:= T2 parameters.
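As an aside, here is a sketch of the “erase to distinct names” idea in terms of Scala 3’s @targetName annotation, assuming it is accepted as a way to resolve such erasure clashes:

    import scala.annotation.targetName

    // the two apply methods would clash after erasure if T1 and T2 both erase to Object;
    // giving them distinct target (bytecode) names resolves the clash while keeping a
    // single source-level name (implementations must repeat the annotation on overrides)
    trait Function1x2[T1, T2, R1, R2] {
      @targetName("applyT1") def apply(v1: T1): R1
      @targetName("applyT2") def apply(v1: T2): R2
    }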

So, at least from this perspective, one way to define/represent an overloaded function type at runtime could be this sort of family of predefined traits Function1x1, …, Function1x22, …, Function22x1, …, Function22x22: not a very elegant solution, but straightforward enough to be easily implemented.


However, my initial thought was actually not so much about runtime modelling of arbitrary overloaded functions, but about structural generic types and inlining. So when I was saying:

I meant: what if we allowed some sort of generic/polymorphic methods (ideally inline generic methods)
that work rather like C++ templates, but with the constraint that they must be type-safe? So instead of allowing an unbounded generic parameter definition like <class T>, the parameter should be bounded, but its bound should be some structural type, so that the code can be type-checked correctly (and after template expansion that structural constraint bound should just go away: it should simply be replaced by the valid (non-structural) type which was passed to that template and met that structural constraint requirement).

If this kind of “meta-method” were allowed, overloaded generic structural functions could be used there in place of those structural bounds.

This could be demonstrated by the following simplistic example:

    // this code is invalid in current Scala; error:
    // "Parameter type in structural refinement may not refer to an abstract type defined outside that refinement ..."
    // this type would be really "problematic" if it were used at runtime "as is",
    // but it is definitely safe if it is used only as a constraint in (inline) generic method/template definitions
    type `structural=>`[T1, R] = {
      def apply(v1: T1): R
    }

    @inline def tupleMap[
      V1,V2, R1,R2,
      // type parameter with structural constraint
      Mapper <: (V1 `structural=>` R1) with (V2 `structural=>` R2)
    ](v: (V1,V2), mapper: Mapper) : (R1, R2) = (mapper(v._1), mapper(v._2))

    // test usage may look like this:

    class OverloadedFuncImpl {
      // this: (String `structural=>` String) with (Int `structural=>` Int) =>
      def apply(x: String): String = s"(${x}+100)"
      def apply(x: Int): Int = x + 100
    }

    val overloadedFunc = new OverloadedFuncImpl()
    assert (("(a+100)",107) == tupleMap(("a",7), overloadedFunc))
    // this should be just expanded to at least that code:
    // (without any mentioning of structural types - `Mapper` type fully replaced with `OverloadedFuncImpl`)
    assert (("(a+100)",107) == {val v = ("a",7); (overloadedFunc(v._1), overloadedFunc(v._2))})

Summarizing this idea: I cannot say how useful such generic/polymorphic/inline-template methods would be; the aforementioned example is really synthetic (and even in the form of some inline static extension method @inline def map[...](this v: (...), mapper: Mapper): (...)
it could be quite useless). However, what could be even more important here than a wide range of real-world applications is the didactic meaning of (T `structural=>` R): having such a construct work at least in some contexts may help in explaining how obj(args) works; it could then be said that it is available when obj.type <: (arg.type `structural=>` Any)
