Synthesize constructor for opaque types

I’d tend to use this since:

val a2: A.Id2 = A.Id2("foo")
val bar: String = a2.tail

would be a bit weird. In my use cases, I have been a big fan of selectively adding back typeclasses from the underlying type via coercible.

For instance, I’d be very happy to have String's Show instance for most of my opaque types. Similarly, I would probably use the same encoding/decoding in most cases. I’m not sure concatenation would make sense in many of my use cases, though.

This is actually quite possible using value classes thanks to export clauses. I completely agree that it would be nice to have a way to do this for opaque types, but I don’t think the current solution has to change radically to support it. Having a way of exporting a method with a different return type would be a really nice feature to have, but I’m not sure what that would look like.
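To make that concrete, here is a sketch of the value-class-plus-export approach (the `Username` wrapper and the chosen methods are illustrative, not from the thread):

```scala
// A value class that selectively re-exposes methods of its underlying String.
// The export clause generates forwarders: def length: Int = value.length, etc.
class Username(val value: String) extends AnyVal {
  export value.{isEmpty, length}
}

val u = new Username("alice")
val n = u.length     // forwards to value.length
val e = u.isEmpty    // forwards to value.isEmpty
```

Note that the forwarders keep the underlying return types: a hypothetical exported concatenation would still return `String`, not `Username`, which is exactly the "exporting a method with a different return type" gap mentioned above.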

I meant more like automatically generated by the editor. I don’t like bloated languages like Java that force you to do that for every little thing, but in this case I think it’s better than introducing sugar for what is essentially a slightly more performant version of

case class Username(str: String) extends AnyVal
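For comparison, the opaque-type spelling of the same wrapper, written out by hand, might look like this (a sketch; the accessor name is illustrative):

```scala
object types {
  opaque type Username = String

  object Username {
    def apply(str: String): Username = str
  }

  // Outside `types`, Username is abstract, so the underlying String
  // is only reachable through this extension.
  extension (u: Username) def str: String = u
}

import types.*

val name: Username = Username("bob")
val raw: String = name.str
```

This is the boilerplate the paragraph below is talking about: several lines per wrapper, even when no validation is involved.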

I think the problem arises because there’s no way to mitigate the boilerplate, especially in the many cases where I don’t want to do any validation, but am using the opaque type only for tagging.

Yes, there is, and it was pointed out before.

// By Guillaume Martres
trait NewType[Wrapped] {
  opaque type Type = Wrapped
  def apply(w: Wrapped): Type = w
  extension (t: Type) {
    def unwrap: Wrapped = t
  }
}

// Which I presume is used like this:
object Username extends NewType[String]
object Token extends NewType[String]
object UserId extends NewType[String]

Nice — I also missed Guillaume’s post about that.

I expect most people will want to also do this, though:

type Username = Username.Type

making it two lines of boilerplate instead of one.
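Putting the pieces together, the whole pattern (trait, object, and the extra alias) might look like this:

```scala
trait NewType[Wrapped] {
  opaque type Type = Wrapped
  def apply(w: Wrapped): Type = w
  extension (t: Type) def unwrap: Wrapped = t
}

object Username extends NewType[String]
type Username = Username.Type  // the second line of boilerplate

val u: Username = Username("alice")
val s: String = u.unwrap
```

Outside the trait, `Username.Type` is abstract, so `u` cannot be passed where a plain `String` is expected without calling `unwrap`.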


When I think of an opaque type, I look at it more like a single-field struct.
If Scala supported structs, there wouldn’t be any need for opaque types.


struct UserName {
  val underlying: String
}

struct Token {
  val underlying: String
}

So IMO, opaque should just be a short way of writing this, implicitly referencing the single field, nothing more. So if in the future Scala does support struct, opaque types should just be a special case of it.

This is gold. I was hoping for something like this using the existing constructs. I just need a good way to coerce typeclasses from the underlying type to the newtype, and then I’m more than happy (feels very doable) :grin: Thanks for pointing this out!

Yeah, I get that - it seems like a different direction from the rest of the language, though. In the beginning, Scala prided itself on not having getters and setters, on having a proper equals on case classes, and on all sorts of synthesized mechanics, exactly to reduce boilerplate and reliance on editor support.

Since I’m not a part of the dev team, I wouldn’t know exactly where priorities are now. With the replies in this thread, though, I feel confident there’s a good enough solution using the current constructs.


What would be the purpose of nominal structs in a language that already has case classes?

There are countless use cases for opaque types which can’t be represented with structs.

What do you mean by “implicitly”?


further reading:

There’s an important caveat to this approach: it will box all usages of primitive newtypes, such as NewType[Int], every time you call apply or unapply. You can see it in the generated bytecode. Hopefully the JVM can get rid of this boxing at runtime, but it’s not guaranteed to happen.

This wouldn’t be a problem if we could make the apply and unapply methods inline, but we can’t, due to technicalities in the encoding of opaque types.
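To make the caveat concrete, here is a sketch of where the boxing happens (the `Port` example is assumed, not from the thread):

```scala
trait NewType[Wrapped] {
  opaque type Type = Wrapped
  def apply(w: Wrapped): Type = w            // erases to apply(Object): Object
  extension (t: Type) def unwrap: Wrapped = t
}

object Port extends NewType[Int]

// Because Wrapped is a generic type parameter, it erases to Object,
// so the primitive crosses a boxing boundary on every call:
val p: Port.Type = Port(8080)  // 8080 is boxed to java.lang.Integer here
val n: Int = p.unwrap          // and unboxed again here
```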


I presume this would not be the case if one creates specialized traits for each primitive? Like, for example,

trait NewIntType {
  opaque type Type = Int
  def apply(w: Int): Type = w
  extension (t: Type) {
    def unwrap: Int = t
  }
}

Or would it?


Yeah this would be fine. I think using @specialized on the polymorphic trait would even work to solve the problem while avoiding duplication.
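For reference, a usage sketch of the hand-specialized trait (the `Port` example is illustrative). Since the opaque `Type` is an alias of `Int`, it erases to `int` in bytecode, so `apply` and `unwrap` stay on the primitive path:

```scala
trait NewIntType {
  opaque type Type = Int
  def apply(w: Int): Type = w          // erases to apply(int): int -- no boxing
  extension (t: Type) def unwrap: Int = t
}

object Port extends NewIntType

val p: Port.Type = Port(8080)
val n: Int = p.unwrap
```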


This is an interesting situation. I want to understand it more.

  • 99% of people (from what I’m reading) seem to want to use this as value classes that don’t box
  • @smarter (and anyone else? @sjrd maybe?) strongly disagrees

The only reason for disagreement as far as I can see is

  • don’t think of opaque types as classes
  • isInstanceOf and patmat won’t work
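For readers unfamiliar with the second point, a small sketch (the `ids` object and `make` are illustrative) of why runtime type tests cannot see an opaque type:

```scala
object ids {
  opaque type Username = String
  def make(s: String): Username = s
}

import ids.*

val x: Any = make("alice")

// At runtime the opaque alias is erased: x is just a String, so this
// case matches, and a test like `case u: Username` would be unchecked.
val isString: Boolean = x match {
  case _: String => true
  case _         => false
}
```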

The result of the disagreement is that @smarter (and anyone else? sorry) really, really doesn’t want to:

  • synthesise anything like apply methods
  • support one-line declarations
  • include information in the docs about how people can reduce boilerplate on their own

Is that a fair and correct summary?

I’m not writing this to push the solution in either direction, and I won’t even state my opinion, but there is one thing that I think needs to be said and considered. No matter how potentially correct, thoughtful and awesome @smarter and co’s reasoning is, if 90%+ of people view opaque types a certain way, and want to use them a certain way, they will, and this conversation will continue for years and years. We can close PRs and issues on GitHub and they’ll likely just be raised again by new people (not maliciously, btw); this discussion here can wrap up and it will likely just pop up again and again with new people proposing the same suggestions; the community will start sharing their own solutions, which will become de-facto standards, and most people will take them as gospel, completely undeterred by technicalities like no patmat, and so on. Even the official doc is supportive of the newtype view with its Logarithm example.

I think the solution here is going to lie in documentation.

  1. If not the vast majority, then at least a very major use case is unboxed newtypes / nominally type-safe primitives. We either need to show how to accomplish it effectively with opaque types (and describe any pitfalls), or present very, very convincing arguments why opaque types shouldn’t be used for that purpose.
  2. If boilerplate reduction doesn’t get any compiler support, then people are going to try to find their own means. If we mention in the docs something like “hey here’s a super common usecase, here’s the best known solution, beware these pitfalls” then we cut down the amount of future discussion to a much more focused and concise subset.

On the other hand, things like dismissing the relationship between newtypes and opaque types, and trying to convince people they don’t want newtypes, will not work and will just guarantee a disconnect between nearly all (?) Scala users and a few on the inside.


It’s missing one crucial thing in my opinion: Project Valhalla is coming (slooooowly but surely, here’s a recent related JEP making some baby steps: ). It will be the correct way to represent value types on the JVM, and anything we try to emulate before it happens is at best a temporary hack.

I did not close any PR or issue, or shut down any discussion; I just offered my opinion. Others are free to disagree. There’s nothing wrong with talking about the newtype-like use case in the documentation, for example, but it needs to be done carefully to avoid encouraging premature optimization and complex code when simpler approaches exist (“use a case class if you can”), just as I would expect the documentation on while loops to encourage people to use for comprehensions instead where possible.


Another point I’d like to make: doing optimization work on the JVM is extremely hard and counter-intuitive; without measurements we simply cannot say what helps and what doesn’t ( gives a good idea of how subtle this can be). The only silver bullet I know of is GraalVM, which really seems to perform well on Scala code, as measured by benchmarks and actual users like Twitter, even with the fully open-source community edition. So if you care about performance you should definitely use it.

If someone really cares about performance and doesn’t know what to do, I’d like them to try running their code on Graal before they start replacing all their wrapper classes by opaque types and potentially introducing bugs in their codebase possibly without making it actually faster in the end.


Concrete example: we tried replacing all usages of List in dotty with a custom type where a singleton list is represented by the element itself and a list of several elements by an Array. Since :: is one of the most frequently allocated classes in the compiler, as measured by profilers, you’d expect that change to have a noticeable impact, and yet in our benchmarks it didn’t:

Similarly, a while ago we experimented with unboxed options, again it didn’t seem to produce noticeable results:

I’m not saying that trying to avoid excessive allocations is a bad idea (in fact, we have many PRs in Dotty driven by profiling that do just that and performance has trended upwards), just that it’s not a panacea.


It’s missing one crucial thing in my opinion: Project Valhalla is coming (slooooowly but surely, here’s a recent related JEP making some baby steps: ). It will be the correct way to represent value types on the JVM, and anything we try to emulate before it happens is at best a temporary hack.

We all agree it will be the correct solution. We WANT that so much. But it’s at least two years away (it’s not here, not in the next major version, likely not in the one after that, and then likely under a preview flag).

On the other hand, I need to support Java on Debian 9 / CentOS 7 and Debian 10 / CentOS 8 for the foreseeable future (2024 for CentOS). CentOS 9 may not have a version of Java with Valhalla.

A one-decade hack doesn’t seem like much of a hack. It’s less than the time during which Scala had triple-quoted strings and Java didn’t. (Yes, I understand it’s not a feature with the same impact, but perhaps that’s even more of an argument in its favor.)

And really, in any case: are you sure using Valhalla correctly won’t need a major iteration of Scala? The JVM people are doing an amazing job regarding backward compatibility on that project, much more than was expected. Still: will it be enough to be transparent? If not, maybe opaque types could just be evolved in a not-totally-compatible way at that moment?


If I don’t care about performance what’s the least-boilerplate way to make a newtype?


I believe this is fine if it gives me feature parity with the newtype library for Scala 2. The biggest issue I face with just using case classes is how much I can steal from the underlying type. I want toString to behave like the underlying type. I want to be able to use it with encoding/decoding libraries as a string in most cases without having to reimplement all the typeclass instances.

say I have:

case class Name(s: String)

Now I need some nice way to express that this is a dumb wrapper with the purpose of using nominal types to avoid passing it in the wrong places.

Basically, I want to be able to derive (on a per-typeclass-basis) the typeclass instances for Name based on the String instances easily.

If I could do something like

case class Name(s: String) using Show[String], Codec[String], Monoid[String]

and then have the compiler derive the instances that wrap/unwrap the value that’d be perfect for me.
In most cases it’d simply amount to unwrapping or wrapping when the value is in a contravariant or covariant position, respectively (at least that’s my intuition).
In this way, we could even reuse Monoid[String] while preserving the result type:

val stillAName: Name = Name("Bob") |+| Name("Dylan")
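Today, this derivation can be written out by hand. Here is a sketch with minimal stand-in typeclasses (the local `Show` and `Monoid` traits and the `|+|` extension imitate a library like cats and are purely illustrative):

```scala
// Minimal stand-ins for library typeclasses (illustrative only):
trait Show[A] { def show(a: A): String }
trait Monoid[A] { def empty: A; def combine(x: A, y: A): A }

given Show[String] = new Show[String] {
  def show(a: String) = a
}
given Monoid[String] = new Monoid[String] {
  def empty = ""
  def combine(x: String, y: String) = x + y
}

case class Name(s: String)

// Hand-derived instances that just wrap/unwrap the underlying String ones:
given (using S: Show[String]): Show[Name] = new Show[Name] {
  def show(n: Name) = S.show(n.s)
}
given (using M: Monoid[String]): Monoid[Name] = new Monoid[Name] {
  def empty = Name(M.empty)
  def combine(x: Name, y: Name) = Name(M.combine(x.s, y.s))
}

// Combine syntax for any Monoid, so the result type is preserved:
extension [A](x: A)(using M: Monoid[A]) def |+|(y: A): A = M.combine(x, y)

val stillAName: Name = Name("Bob") |+| Name("Dylan")
```

The wish in the post above is essentially for the compiler to generate the two `Name` instances from a declaration like `using Show[String], Monoid[String]`, instead of writing this wrapping by hand per typeclass.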