Actually, I found a way to preserve binary compatibility in 2.13, at the price of slightly increased memory usage.
I’m changing the `BigInt` class definition to:

```scala
import java.math.BigInteger

/** An arbitrary integer type; wraps `java.math.BigInteger`, with optimization for small values that can
 * be encoded in a `Long`.
 */
final class BigInt private (_bigInteger: BigInteger, _long: Long) {
  // Class invariant: if the number fits in a Long, then _bigInteger is null and the number
  // is stored in _long; otherwise, _bigInteger stores the number and _long = 0L.
  def this(_bigInteger: BigInteger) = this(
    if (_bigInteger.bitLength <= 63) null else _bigInteger,
    if (_bigInteger.bitLength <= 63) _bigInteger.longValue else 0L
  )
  // ... methods elided ...
}
```
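To make the fast path concrete, here is a minimal standalone sketch of how an operation can branch on that invariant. This is my own illustration, not the actual patch; `OptBigInt`, `fromLong`, and `fromBigInteger` are hypothetical names:

```scala
import java.math.BigInteger

// Hypothetical standalone model of the single-class design above.
final class OptBigInt private (val big: BigInteger, val small: Long) {
  // Invariant: big == null iff the value fits in a Long (then stored in small).
  def isValidLong: Boolean = big eq null

  def toBigInteger: BigInteger =
    if (isValidLong) BigInteger.valueOf(small) else big

  def +(that: OptBigInt): OptBigInt =
    if (this.isValidLong && that.isValidLong)
      try OptBigInt.fromLong(Math.addExact(this.small, that.small)) // fast path: no BigInteger allocated
      catch { case _: ArithmeticException =>                        // Long overflow: take the slow path
        OptBigInt.fromBigInteger(this.toBigInteger.add(that.toBigInteger))
      }
    else OptBigInt.fromBigInteger(this.toBigInteger.add(that.toBigInteger))
}

object OptBigInt {
  def fromLong(l: Long): OptBigInt = new OptBigInt(null, l)
  def fromBigInteger(b: BigInteger): OptBigInt =
    if (b.bitLength <= 63) new OptBigInt(null, b.longValue) else new OptBigInt(b, 0L)
}
```

The point is that arithmetic on two small values never allocates a `BigInteger` at all.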
`BigInt` stays final, and we can have optimized logic for small values.
Compared to the subclassing proposal:

- in the case of a long-sized `BigInt`, we save the price of the `BigInteger` instance, so the 4/8 bytes of the reference are not an issue;
- in the case of a larger `BigInt`, we now store an unneeded `Long` field, which implies a maximal memory overhead of around 10%; that maximal overhead is computed for a `BigInt` that barely doesn’t fit in a `Long` (see the back-of-envelope below).
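A rough back-of-envelope for that 10% figure, assuming a 64-bit HotSpot JVM with compressed oops (exact sizes vary by JVM and settings): the `BigInt` itself is about 24 bytes (12-byte header + 4-byte reference + 8-byte `_long`), the wrapped `BigInteger` about 40 bytes, and its two-element `int[]` magnitude array about 24 bytes, for a total of roughly 88 bytes; the unneeded `_long` accounts for 8 of those, i.e. about 10%.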
I conjecture that `BigInt` has two major use cases in the Scala world:

1. use as a generic integer where most of the instances fit in a `Long`; `BigInt` is used to avoid problems related to overflow; that’s how we use `BigInt` (or the optimized `SafeLong` equivalent in Spire),
2. use for cryptographic purposes.
In use case 1, we’ll have a net win. In use case 2, the overhead is likely to be small, as the numbers will be much larger than a `Long`.
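To illustrate use case 1 with a toy example (standard Scala, nothing specific to this proposal):

```scala
// Long arithmetic silently wraps around on overflow:
val wrapped: Long = Long.MaxValue + 1          // -9223372036854775808
// BigInt grows as needed, so the same computation stays correct:
val exact: BigInt = BigInt(Long.MaxValue) + 1  //  9223372036854775808
```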
I propose to optimize `BigInt` while preserving binary compatibility in 2.13, running it against the community build as an additional safety measure. Then we can split the logic using subclasses in 2.14 (see the sketch below).
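For contrast, the 2.14 subclass split could look roughly like this; this is a sketch of the idea only, and all names here are assumptions, not a concrete design:

```scala
import java.math.BigInteger

// Hypothetical shape of the subclass-based design: each representation
// carries only the field it needs, at the price of a virtual dispatch.
sealed abstract class BigInt {
  def toBigInteger: BigInteger
}
final class SmallBigInt(val underlying: Long) extends BigInt {
  def toBigInteger: BigInteger = BigInteger.valueOf(underlying)
}
final class LargeBigInt(val underlying: BigInteger) extends BigInt {
  def toBigInteger: BigInteger = underlying
}
```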
What do you think?