Proposal to enhance Scala's OO capabilities


Scala’s OO capabilities are excellent; however, I think we can go much further. I would like to discuss how we can extend Scala’s OO features, particularly with respect to meta-classes.


Scala is a fusion of OO and FP. While much of the discussion and effort over the last few years has focused on the FP side, and hooray for that, we shouldn’t pretend that improvements to Scala’s OO aren’t possible or worthwhile.

For me the benchmark of OO is Smalltalk. Smalltalk models everything as an object. Computation is performed via message passing between objects. The patterns this model allows for are really quite powerful; however, nothing remotely similar is available (in a principled way) in Scala.

NOTE: Scala doesn’t need to be Smalltalk. This proposal is about extending Scala so that we can access a bit more of the semantic richness that Smalltalk supports.

Object Classes

I propose we add a new kind of class entity to Scala. That is, a new entity to join our classes, case classes, and objects. For now I’m going to call this new entity an object class. A better name should be proposed.

An object class allows one to dynamically, that is at runtime, compose units of behavior together. Behaviors come from:

  • the meta-class
  • other classes (via inheritance by the meta-class)
  • traits (this should be the primary source of behaviors).

Said another way, an object class:

  • Has a meta-class
  • At runtime, a new trait may be layered into the meta-class, potentially overriding methods
  • At runtime, a new trait may be layered into a specific instance, potentially overriding methods
  • Works like a regular class otherwise.

If this sounds familiar, it’s because the rules should be very similar to how Groovy models its classes, with the caveat that this feature is focused on composition of behaviors, not specifically Groovy-style meta-programming (though that should be supportable).

In fact, the implementation could mostly just be a straight copy of how Groovy classes work.

Note, object classes are much more constrained than the dynamism of Groovy or Clojure. The goal is to be able to express some of this dynamism in a safe, if somewhat more limited, way.

Example 1

Let’s look at an example. (Btw, I’m not too concerned with the syntax of how this feature works.)

trait Named { def name:String }
object class Person(name:String) extends Named {
    def sayHello = "Hello " + name
}

val mark = Person("Mark")
mark.sayHello // Hello Mark

trait Goodbye { self: Named =>
  def sayGoodbye = "Good bye " + name
}

// Add Goodbye trait to Person
Person extends Goodbye

mark.sayGoodbye // Good bye Mark

In this case we layer the Goodbye trait into the class Person. This has the effect of adding the method sayGoodbye to all instances of the class. Moreover, this could be done at any point in time.
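For comparison, the closest static approximation today is probably Scala 3 extension methods. This is only a sketch of what exists now, not the proposal: the method is added at compile time wherever the extension is in scope, with no runtime layering.

```scala
trait Named { def name: String }

// A plain class standing in for the proposed object class.
class Person(val name: String) extends Named {
  def sayHello: String = "Hello " + name
}

// Statically adds sayGoodbye to every Person, in scopes that see this extension.
extension (p: Person) def sayGoodbye: String = "Good bye " + p.name

val mark = Person("Mark") // Scala 3 creator application, no `new` needed
```

Unlike the proposal, the set of available methods here is fixed at compile time; nothing can be layered in after the fact.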

Example 2

Let’s look at another example.

object class Duck(quacks:Int)

val mallard = Duck(32)

trait Waddle {
  def waddle = ...
}

mallard.waddle  // would throw an exception if called here.

mallard extends Waddle

mallard.waddle  // works as expected

In this case we layer the Waddle trait into a specific instance of class Duck.

Current solutions

The above looks like it could be reasonably accomplished with the features the language has today. And that is true: we can instantiate a class and, using with syntax, layer in a separate trait.

val p = new Person with SomeTrait
// OR
class Personish extends Person with SomeTrait
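Filling in the snippet above, a complete version of today’s creation-time layering might look like this (reusing the Named/Goodbye shapes from Example 1, as plain Scala):

```scala
trait Named { def name: String }

class Person(val name: String) extends Named {
  def sayHello: String = "Hello " + name
}

trait Goodbye { self: Named =>
  def sayGoodbye: String = "Good bye " + name
}

// The trait must be chosen when the instance is created, not afterwards.
val mark = new Person("Mark") with Goodbye
```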

However, there are some drawbacks.

Regular layering doesn’t scale. For every combination of traits you might want to layer in, you now have to explicitly enumerate that someplace in your code. With N traits that ends up as 2^N combinations. That isn’t feasible.

Regular layering isn’t dynamic. You have to decide ahead of time which traits to extend. And then it’s fixed for every instance despite perhaps the need for some instances to have slightly different behavior.

Composition is type-level

While I’m not too concerned with syntax there is a side track here that I want to discuss.

I would want composition of behavior to be a type-level operation, not a value-level operation, with the run-time aspect inferred by the compiler. The syntax I’ve bikeshedded above demonstrates that.

The main reason is that, having worked in Python and Ruby, I’ve found that the compiler, and thus IDEs, having knowledge of the set of all possible compositions of behavior for a given class is essential to discoverability.

If composition were entirely a runtime affair, that is, if layering in a new trait just meant instantiating functions at runtime and adding them to some list, you would lose any help the compiler might offer. I’m not sure what could be gained by that, since the runtime aspects can be generated by the compiler, and you would certainly lose the help that could otherwise be offered.
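To make the trade-off concrete, here is a minimal sketch of that purely value-level alternative (DynamicObject is a hypothetical name; everything here is illustrative):

```scala
// Methods live in a runtime map keyed by name. Lookups can fail at runtime,
// and no compiler or IDE can enumerate what has been layered in.
class DynamicObject {
  private var methods = Map.empty[String, () => Any]

  def layerIn(name: String)(body: => Any): this.type = {
    methods = methods.updated(name, () => body)
    this
  }

  def invoke(name: String): Any =
    methods.getOrElse(name, throw new NoSuchMethodException(name))()
}

val mark = new DynamicObject().layerIn("sayGoodbye")("Good bye Mark")
```

This works, but `invoke("sayGoodbye")` is just a string to the compiler, which is exactly the discoverability loss described above.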

As to how this might limit the dynamism of the feature: well, I’m OK with that. But I also think it’s a minimal issue, since “anonymous” traits could be supported. E.g.

Person extends { self: SayGoodbye => def wave: String = sayGoodbye + " *waves*"}

Self types

That doesn’t mean that there isn’t a runtime aspect to behavior composition. In the example I just gave, the anonymous trait references a self-type. The meaning of this is that Person should be required to already have SayGoodbye layered in; otherwise an exception should be thrown.

Self types would allow both for an explicit dependency graph and for the sane management of which functions etc. are available in a given scope.
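Today’s traits already give the static half of this: the self-type makes the dependency explicit, and the compiler rejects any instantiation that doesn’t satisfy it. A sketch, with SayGoodbye and Wave as illustrative names:

```scala
trait SayGoodbye { def sayGoodbye: String }

// The self-type records that Wave requires SayGoodbye to already be present.
trait Wave { self: SayGoodbye =>
  def wave: String = sayGoodbye + " *waves*"
}

// This compiles only because SayGoodbye is mixed in alongside Wave.
val mark = new SayGoodbye with Wave {
  def sayGoodbye: String = "Good bye Mark"
}
```

Under the proposal, the same check would instead happen at layering time, throwing an exception if SayGoodbye had not been layered in first.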

Type safety

But what if I, for some reason, want to write type-safe code for an object class? That is, I want to write code that knows that Person has been extended by SayGoodbye. That should be as simple as

def foo(p: Person with SayGoodbye) = ...

However, to do that we need to keep track of which behaviors we have proven that an object class has baked in. The obvious way to do that is to instantiate a value that carries the proof of that fact.

The general mechanism for this should probably be via implicits, however, another more focused mechanism should be available.

I think extension should be an expression:

val ?? = Person extends { self: SayGoodbye => def wave: String = sayGoodbye + " *waves*"}

What gets returned? And what is the type of that value?

I propose that the value returned is the meta-class, with the type somehow encoding the fact that that meta-class implements the base class, the anonymous trait, as well as the SayGoodbye trait.

val mc: MetaClassOf[Person with {def wave:String} with SayGoodbye] = Person extends { self: SayGoodbye => def wave: String = sayGoodbye + " *waves*"}

If Scala is able to retain structural types in some form, then expressing the type above seems like it should be possible. Further, the value mc can then act as a proof for later code should type safety be required in certain methods.
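The type in question is in fact expressible in Scala 3 today as an intersection with a structural refinement. In this sketch, MetaClassOf is the hypothetical proof token from above, constructed by hand since the extends-expression doesn’t exist:

```scala
import scala.reflect.Selectable.reflectiveSelectable

trait SayGoodbye { def sayGoodbye: String }
class Person(val name: String)

// Hypothetical proof token that the extends-expression would return.
final class MetaClassOf[T]

// The type from the post, written with intersections and a refinement.
type ExtendedPerson = Person & SayGoodbye & { def wave: String }

val mc: MetaClassOf[ExtendedPerson] = new MetaClassOf

// An instance of the extended shape, built statically for illustration.
val mark: ExtendedPerson = new Person("Mark") with SayGoodbye {
  def sayGoodbye: String = "Good bye " + name
  def wave: String = sayGoodbye + " *waves*"
}
```

Note that calling the refinement member wave goes through reflective structural access, which hints at the kind of runtime machinery an object class would lean on.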

I do think there is a connection here with the new notion of structural types. I’m not sure what that should be.


Concerns

Soundness is likely a concern, but I’m not sure how this is any less safe than what we already have. Safe(r) subsets of the language can simply disallow object classes. For missing methods we can simply throw exceptions, which is already a possibility in regular code.

Surface area is increased in the language with more changes to absorb for new and old users alike. Given how self-contained this feature is, I’m not too concerned.

Java interop does suffer with object classes. My guess is that the Groovy model is again an acceptable one to emulate. Besides, calling Scala code from Java is already a pain. It should be noted too that on other platforms, especially JavaScript, there is a fairly direct encoding for object classes, unlike on the JVM.

Performance is obviously a concern, but with the opt-in nature of this feature I feel like folks will have had a chance to make that decision up-front. When performance is critical users should choose low-overhead abstractions regardless, and where circumstances allow they can use more expressive ones. Regardless, object classes should be opt-in, not opt-out.


Are you familiar with Akka Typed? While that’s library-level rather than language-level, and is a bit more Smalltalk-ish than Scala-ish in some ways, I think it shares some of the same goals you’re going for here, particularly in terms of behavior composition. (And already exists and is in active use in the community, as opposed to being a likely uphill fight to change the language in major ways…)

I’ve only used regular Akka actors, but from what I can tell Akka Typed doesn’t allow the kind of composition of behavior that I’m thinking of. Now regular Akka actors do, but that composition isn’t something that is easily discoverable, and that’s partly driven by how open the system is.

In my experience the pain of using a dynamically typed language like Python (which can do what I’ve proposed and more at the object level) is due to those languages eschewing the constraints that types bring. I see Akka as roughly in line with that, which is partly why Akka Typed exists. Frankly, it’s nice to know what messages an actor might process.

My thought is that maybe there is a place between Java classes, where the interfaces are fixed at compile time, and, say, Smalltalk, where even interfaces don’t exist and things are potentially always in flux.

Here is my thought process:

  • Types are really nice to have. It’s even better when language tooling knows about those types.
  • Therefore, behaviors should be represented at the type-level.
  • Scala has traits, and traits already do this job in a static way.
  • Therefore, maybe there’s a more dynamic way to compose traits?

I do think something could possibly be prototyped via macros, and maybe that’s a path worth pursuing. I was mostly hoping to just jump start a conversation.

If I understand you correctly you propose that source files can contain statements like

Person extends Goodbye

that, when executed, mean that globally, for the entire program, each new and existing instance of Person gets the Goodbye behaviour.

When are these statements executed, and how can other code know this statement has been executed at compile time, particularly given separate compilation and dynamic class loading?


What’s the distinction between Akka Untyped and Akka Typed that makes the latter incapable?

I think you could have a supertype e.g. DynamicActor that accepts messages:

sealed trait DynamicExtension[Current]

case class MixinTrait[Current, Provided](
  builder: ActorRef[Current] => Behavior[Provided],
  replyTo: ActorRef[ActorRef[DynamicExtension[Current | Provided]]]
) extends DynamicExtension[Current]

case class SendMessage[Current](message: Current) extends DynamicExtension[Current]

After creating a DynamicActor instance you would get ActorRef[DynamicExtension[Nothing]]. After sending e.g. MixinTrait[Nothing, Greet](mixinBuilder, actorRef) you would get ActorRef[DynamicExtension[Nothing | Greet]]. Then you could send a mixin builder that requires Greet support and get ActorRef[DynamicExtension[Nothing | Greet | RichGreet]].

I don’t know if the above scheme would work or be useful, but it’s a thought experiment, just like yours. :)

The above scheme also has a drawback: it allows extending behaviour by just one trait at a time. If you have traits like:

trait A { self: B =>
  def greeter: Greeter
}
trait B { self: A =>
  def greeter: Greeter
}

then it would be impossible to mix them in dynamically using the above scheme. But is that really needed? I don’t think so. You need to have a list of concrete traits that together form a complete type before using them as a dynamic extension. Therefore you can mix them before extension:

trait AB extends A with B

and then use the MixinTrait message that extends an actor by one trait at a time.
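The pre-combination trick can be checked in plain Scala without any actor machinery. A simplified sketch, with hypothetical fromA/fromB members standing in for the greeter members:

```scala
// Mutually dependent traits: neither can be instantiated on its own,
// because each self-type demands the presence of the other.
trait A { self: B => def fromA: String = "A sees " + fromB }
trait B { self: A => def fromB: String = "B" }

// Pre-combining them yields a complete, self-consistent unit that could
// then be layered in as a single dynamic extension.
trait AB extends A with B

val ab = new AB {}
```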