Looks like you want typeclass derivation.
My colleague just pointed out that it's more like deriving via from Haskell:
https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/deriving_via.html
I believe he is exactly right!
I'm wondering whether the popular use cases are better served with a different solution?
My understanding is that people are concerned that methods like these are not type-safe:
def login(user: String, password: String, host: String): Unit = ...
So, to prevent people from calling it like login(host, user, password) etc., people want to have something like:
def login(user: User, password: Password, host: Host): Unit = ...
But what are User, Password, Host? They can't be subtypes of String, because String is final. If they are aliases for String, the compiler will view them as interchangeable. If we wrap them, it is costly. Using special wrappers (value classes, etc.) just to have the compiler or the JVM unwrap them is inherently unreliable. Having type aliases that sometimes don't behave like type aliases is inherently brittle and confusing.
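To make the alias problem concrete, here is a minimal sketch (the names are made up for illustration) showing that plain type aliases offer no protection at all:

```scala
object AliasProblem {
  type User     = String
  type Password = String

  def login(user: User, password: Password): String =
    s"user=$user password=$password"

  // Arguments swapped, yet this compiles without complaint:
  // aliases are fully interchangeable with String and with each other.
  val oops: String = login("hunter2", "alice")
}
```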
Also, what if I want to keep String functionality? What if I want to be able to:
if(password.length < 8) { ... }
if(host.endsWith(".com")) { ... }
val email = user.toUpperCase + "@SOMEDOMAIN.COM"
And even if we don't need String functionality, it makes it easier to understand the code if we know these are simple Strings.
So, what we really want to say is that these are special Strings, or Strings tagged to be special. Let's introduce a new syntax that says "type with tag", for example String@User. The tag is erased at runtime, so there is no pattern matching on tags. You can assign a String@User to a String, but not the other way round. A String can be cast to a String@Password with explicit type ascription. For example:
val user: String@User = "hello" // illegal
val user: String@User = "hello": String@User // legal
So you can:
def generatePassword(): String@Password = Random.nextString(16): String@Password
def login(user: String@User, password: String@Password, host: String@Host): Unit = ...
Thoughts?
Actually, that would be quite feasible in the scope of the current proposal.
This example already works:
package newtype

import scala.annotation.targetName

trait Monoid[A] {
  extension (x: A) @targetName("mappend") def <> (y: A): A
  def mempty: A
}

extension [T](xs: List[T])(using m: Monoid[T])
  def foldM: T = xs.fold(m.mempty)(_ <> _)

object newtypes {
  opaque type Sum[A] = A
  object Sum {
    def apply[T](x: T): Sum[T] = x
    def unapply[T](w: Sum[T]): Some[T] = Some(w)
  }

  opaque type Prod[A] = A
  object Prod {
    def apply[T](x: T): Prod[T] = x
    def unapply[T](w: Prod[T]): Some[T] = Some(w)
  }

  opaque type Logarithm = Double
  object Logarithm {
    def apply(d: Double): Logarithm = math.log(d)
    def unapply(l: Logarithm): Some[Double] = Some(math.exp(l))
  }

  given Monoid[Prod[Double]] with {
    extension (x: Prod[Double]) @targetName("mappend") def <> (y: Prod[Double]): Prod[Double] = x * y
    def mempty: Prod[Double] = 1
  }

  given Monoid[Sum[Double]] with {
    extension (x: Sum[Double]) @targetName("mappend") def <> (y: Sum[Double]): Sum[Double] = x + y
    def mempty: Sum[Double] = 0
  }

  given (using m: Monoid[Sum[Double]]): Monoid[Prod[Logarithm]] = m
}
Now running this:
import newtype._
import newtypes._
object Main {
  def main(args: Array[String]): Unit = {
    val dProd: Prod[Double] = List(1.0, 2.0, 3.0, 4.0).map(x => Prod(x)).foldM
    val lProd: Prod[Logarithm] = List(1.0, 2.0, 3.0, 4.0).map(x => Prod(Logarithm(x))).foldM
    println(s"Regular Product: ${dProd}")
    println(s"Logarithm Product: log(${lProd})")
    lProd match { case Prod(Logarithm(d)) => println(s"Logarithm Product: ${d}") }
  }
}
Prints this:
Regular Product: 24.0
Logarithm Product: log(3.1780538303479453)
Logarithm Product: 23.999999999999993
Currently, you can only do this when you have all the opaques in scope, but your example could be implemented using something like this:
opaque type Name = String

given (using m: Show[String]): Show[Name] = m
given (using m: Codec[String]): Codec[Name] = m
given (using m: Monoid[String]): Monoid[Name] = m
Maybe this sugar wouldn't be such a bad fit:
opaque type Name = String derives Show, Codec, Monoid
This would essentially be equivalent to Haskell's GeneralisedNewtypeDeriving.
If you want to generalize this further into something like DerivingVia, then maybe this would not be a bad idea:
opaque type Name = String derives Show using Show[String], Codec using Codec[String], Monoid using Monoid[String]
But I think the first would be much easier to get people on board with.
Sure they can, just like "hello there" is a subtype of String, and 3 is a subtype of Int. Final classes can't have subclasses, but they always have subtypes.
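This can be checked directly with literal singleton types, a small sketch:

```scala
object Singletons {
  val greeting: "hello there" = "hello there" // literal singleton type
  val s: String = greeting                    // "hello there" <: String
  val three: 3 = 3
  val i: Int = three                          // 3 <: Int
}
```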
I find this really weird: you want to introduce a new role for the type operator in the form of weak type casts?
What is a "special string" if not a subtype of String? That's possible with the current proposal, and if you want an implicit conversion String => SpecialString that does some quick sanity checking, that's currently possible. If you want that cast to be explicit, then that's also possible.
If you want SpecialString to be disjoint from regular Strings, then why can't you have an API that exports the functionality you need?
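As a sketch of the first point: an opaque type with an upper bound gives a "special string" that is still a subtype of String (so all String methods remain available), while construction stays explicit. SpecialString, fromString, and the non-empty check are made-up names for illustration:

```scala
object Special {
  // Still a subtype of String, so String methods work on it,
  // but outside this object the only way in is the explicit factory.
  opaque type SpecialString <: String = String

  object SpecialString {
    def fromString(s: String): Option[SpecialString] =
      if s.nonEmpty then Some(s) else None // hypothetical sanity check
  }
}
```

Since SpecialString <: String, something like Special.SpecialString.fromString("abc").map(_.length) works outside the defining scope without any unwrapping.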
I'm liking this a lot!
It uses the existing features and it's very flexible (you choose what you want from the underlying type).
I wouldn't say that I really, really don't want to synthesize anything. However, I really, really don't want to synthesize anything by default.
Currently, the design of opaque type aliases is "maximally minimal". The language feature's specification and implementation are as minimal as possible while providing the fundamental expressive power that we need to build stuff on top of it.
Having a minimal feature is good when it comes to something that we don't have real experience with. We can ship this minimal design, and gain experience of the kinds of things that we want to build upon it. It might require boilerplate for some (most) use cases, yes. But until we really gain experience with the feature, we won't know with enough certainty which use cases are actually valid and widespread enough to require more sugar.
We could add, in the future, an annotation, or some other modifier, that would synthesize certain members, if we discover through experience that a specific set of members is very often required for a valid use case.
Independently, there is another point of view: if we don't synthesize members, we can write them by hand when we want them. If we do synthesize members, there is no way to prevent that when we don't want them. So if we have to choose only one spec, it should be the one that does not synthesize members.
I agree with most of your comment, but I feel that part of the argument is largely weakened if 1/ part of the feature's perceived goal is about reducing boilerplate (see my comment on that above: Synthesize constructor for opaque types - #44 by fanf) and 2/ users don't have the tooling to do it by themselves (which is, IIUC, the case in the absence of the necessary macros).
Also: if the feature's soft spot is too far away from the path of least resistance, there's a risk that far fewer people than necessary would use it (or at least not enough to remove some availability bias).
All that being said, I still agree with you: given the uncertainty and this whole thread, yes, it seems that we need a final version of the feature to go out into the wild and see what people do with it, and what people complain about.
Completely agree with this. There's no reason to rule out sugar in the future, but I strongly oppose letting one particular use case dictate what the general case should look like at the expense of other use cases that don't fit.
A completely feasible sugar could be something like:
case opaque type Name(n: String) >: Lower <: Upper
Let's say that's sugar for:
opaque type Name >: Lower <: Upper = String

object Name {
  def apply(n: String): Name = n
  def unapply(n: Name): Some[String] = Some(n)
}
(Also, let's assume that the Some is optimized away somehow, because otherwise what's the point of all this?)
The problem here is that I would expect this to have the same semantics as newtype in Haskell. But Scala is not Haskell. What makes newtype work well in Haskell is that it behaves almost exactly like data. The only difference has to do with laziness, and that's really not relevant here.
One of the many differences between Haskell and Scala is how pattern matching works in general. In Scala I can pattern match on anything:
(any: Any) match {
  case Name(str) => Some(str)
  case _         => None
}
This is not legal in Haskell:
newtype Name = Name String

anyToName :: forall a. a -> Maybe String
anyToName x = case x of
  Name str -> Just str
  _        -> Nothing
The price we pay for this is that Scala will gladly let us do this:
scala> def badMatch(s: String): Name = s match {case n @ Name(_) => n}
def badMatch(s: String): o.Name
scala> badMatch("hey")
val res0: o.Name = hey
I was surprised the compiler let me do this, because if Name had been a case class instead, this would not even compile. But even if that case were made illegal, the first snippet would still match on any String.
What I'm trying to say here is that while I think most people could understand why typed patterns should be avoided, it's harder to understand why extractor patterns are unsafe.
There are solutions to this. For example, some problems would go away if extractor patterns were restricted so that the type of the scrutinee must always conform to the argument type of the unapply method. It's not trivial to see what such a change would break.
The actual semantics of opaque types integrate very well into Scala. Something like Haskell's newtype simply doesn't. This is why people like @smarter are right to be careful about pushing this pattern.
Opaque types will always erase to the same type their right-hand side erases to, because all type members behave that way and because this is fundamental for some uses, like writing facades for Scala.js. So it is not possible to evolve opaque types into something that compiles to future JVM value classes.
But we do have a feature that already matches up with JVM value classes pretty well: the existing Scala value classes! Of course they will have to be tweaked to match whatever the JVM ends up with (at the very least, we'll be able to remove the restriction that a value class can only contain one field), but fundamentally they will still be classes, and will respect the semantics of classes at runtime like existing Scala value classes do.
A case class.
Just stumbled on this discussion, so just giving my 2¢ …
I personally don't want a synthesized apply, because I want to be able to define my own, maybe because I want validation:
opaque type Email = String

object Email {
  def apply(email: String): Either[ParseException, Email] = ???
}
In this sample I want it to be illegal to build Email values without validation.
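A fleshed-out version of that sketch might look like this; the regex and the String error type are my own placeholders, not from the original post:

```scala
object Emails {
  opaque type Email = String

  object Email {
    // Placeholder pattern: something looking like name@host.tld.
    private val pattern = ".+@.+\\..+".r

    // Validation is the only way to construct an Email.
    def apply(email: String): Either[String, Email] =
      if pattern.matches(email) then Right(email)
      else Left(s"invalid email: $email")
  }

  extension (e: Email) def value: String = e
}
```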
And maybe there are use cases in which the input isn't the aliased type:
opaque type ParsedNumber = Long

object ParsedNumber:
  // no apply(Long)
  def apply(n: String): Either[ParseException, ParsedNumber] = ???
Note that we used scala-newtype too, until we dropped it and replaced it with our own traits, since being in control of validation, and of which type class instances get derived automatically, is good.
I don't mind boilerplate. I wouldn't mind some annotation that synthesizes some defaults either, but the current default is fine as it is.
Speaking of scala-newtype, here's what I meant when I said that we "replaced it with our own traits":
trait NewType[Src] {
  opaque type Type = Src

  def apply(value: Src): Type =
    value.asInstanceOf[Type]

  extension (self: Type)
    def value: Src = self.asInstanceOf[Src]
}
// Sample
object Email extends NewType[String]

val email = Email("[email protected]")
val unwrapped: String = email.value
Or in Scala 2 if you want:
trait NewType[Src] { companion =>
  type Repr = Src
  type Base = Any { type NewType$base }
  trait Tag extends Any
  type Type <: Base with Tag

  @inline def apply(x: Src): Type = x.asInstanceOf[Type]
  @inline def value(x: Type): Src = x.asInstanceOf[Src]

  implicit final class Ops(val self: Type) {
    @inline def value: Src = companion.value(self)
  }
}
Cheers,
The cast isn't needed with opaque types, since they're not opaque in the scope where they're defined.
I've written earlier in this thread about why this is unsafe and shouldn't be used; in particular, don't be surprised if it crashes at runtime when compiled with Scala 3.
My take on this whole discussion is that some people want to fit a round peg into a square hole.
Are opaque types the same as Haskell's newtype? No.
Can we have Haskell's newtype? Not as long as the JVM does not support it. You can create a case class, but oh, it boxes. You can create a value class, but that seems to be objectionable as well somehow. Well, opaque types can't solve the problem seamlessly either, because the JVM does not support it.
Opaque types are a powerful feature that can be used for many different things that have been discussed in this thread, and synthesizing methods would be severely detrimental to them.
And if that trait NewType[T] is particularly useful, you can rest assured there will be a library with it, and it need not be in either the docs or the standard library.
And if the issue is confusion about what opaque types are, the solution is the same as for any language feature: books, presentations, blog posts, etc. As well as, to be honest, no longer trying to fit the round peg into the square hole.
And, by the way, it's not just round pegs I've seen. There are at least two other polygons in this discussion alone.
I actually don't understand why some people say value types can replace opaque types. I want opaque types mostly to hide implementations and control implicit dispatch without additional wrapping costs.
That is 100%, afaict, something that is done at compile time. At runtime, I am happy for it to all be stripped away.
Why do value types solve this problem better? And why should we wait years more (maybe 10 before they are widely deployed) if we have an implementation now in Scala 3 that can solve this issue?
I see the win of value types mostly in defining small tuples and avoiding GC pressure. I don't see how a value type that has just a pointer to a reference type is a win, especially when the Scala compiler could have removed it entirely.
I would really welcome a careful explanation of what I am missing for the use cases I mentioned (hiding implementation, and controlling implicit/given dispatch).
Not sure what you mean when you say we can't have Haskell's newtype, as opaque types are perfectly adequate for that.
Otherwise I agree, we can define libraries for helpers.
One thing that is a bit unfortunate is that, due to the lack of macro annotations in Dotty, we can't really write libraries that add newtype-like functionality in the same succinct way we can in Scala 2.
It's a rare case where I think migrating to Scala 3 will make the experience worse, if you were previously using libraries like scala-newtype.
A dedicated compiler plugin for @newtype is possible.
Perhaps; I hadn't thought of that, although I don't know how well that would interplay with IDEs.
There are a number of differences. Some might call it splitting hairs, but…
Haskell's newtype defines a new type which is disjoint from the wrapped type in all contexts. You must always convert between them explicitly. There is no type that is inhabited by both values of the newtype and values of the underlying type. This means that Haskell's newtype is much safer to use, because you can't convert between them using pattern matching.
This, however, is not strictly true. Compiled Haskell code knows nothing about newtypes. A "real" newtype à la Haskell is entirely possible on the JVM; Frege has it. A newtype that works interoperably with Java in an opaque manner, however, will not be possible unless Java adopts the exact same solution in the exact same way.
An opaque type implementation with the same semantics as Haskell's newtype, in the sense of completely unrelated types, is possible, but would require some radical changes to Scala.
Something like the Parametric Top proposal would be needed. Why? The join of an opaque type and its representation must have no members, and performing downcasts or pattern matching on that type must be illegal.
Without that, the only way to prevent unsound type conversions is some kind of boxing. The JVM can help by making that boxing local on the stack, but it would still be boxing.