BigDecimal literal


We need BigDecimal for financial calculations. For example:

    double a = 0.02;
    double b = 0.03;
    double c = b - a;

    BigDecimal _a = new BigDecimal("0.02");
    BigDecimal _b = new BigDecimal("0.03");
    BigDecimal _c = _b.subtract(_a);

Program output:

    c  = 0.009999999999999998
    _c = 0.01

Unfortunately, Scala does not support BigDecimal literals. Sometimes this is very inconvenient; for example, in pattern matching we have to convert values to strings.

    // Nn, NNull are extractor objects
    val vals = None.nn :: 10.nn :: 20.nn :: "20.5000".nn :: "20.5".nn :: Nil
    for (nCode <- vals) {
      nCode match {
        case NNull()    =>
        case Nn("20.5") =>
        case x          =>
          println(s"else: ${x}")
      }
    }

It is inconvenient, and there is a performance cost.
Is there any plan to add a BigDecimal literal in Scala?
For example:
20b – scala.math.BigDecimal
With such literal it will be possible to write:

    nCode.get match {
      case null  =>
      case 20.5b =>
      case x     =>
        println(s"else: ${x}")
    }

To tell the truth, when we write ERP modules there is only one question: why hasn't it been done yet? :))

For example, in Oracle SQL:

    SELECT 4.5 FROM dual

returns a decimal.

String interpolation in pattern matching
Better number literals

The expectation is that string interpolators satisfy this use case. (For example, XML literals were retired in favor of an interpolated solution.)

Propensive’s Contextual intends to make it easy to write such interpolators and use them in pattern-matching expressions as though they were literals.


We do need to be a bit careful with this since macros are being changed. I wonder if Contextual could be ported to Dotty using the new macro system (or soon to be implemented elements of it).


You can give numeric literals a type ascription of BigDecimal. Of course that would create an Int, Double, etc. and run the implicit conversion, but that covers a lot of cases.

    val a: BigDecimal = 5.7

String interpolators don’t require macros (although compile-time number validation would certainly be nice).

Here’s one:

    implicit class BigDecimalInterpolator(sc: StringContext) {
      def bd(): BigDecimal = BigDecimal(sc.parts.mkString)
    }

    val a = bd"5.7"


Using Double (or worse, Float) as an intermediate type is generally a bad idea, due to lack of precision. That’s why OP used the constructor which takes a String.
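
To make that concrete, here is a quick sketch using plain `java.math.BigDecimal` (the same API as in the original example): the Double constructor captures the binary approximation of 0.1 exactly, which is not the decimal value anyone intends.

```scala
import java.math.{BigDecimal => JBigDecimal}

// The Double constructor preserves the binary approximation of 0.1,
// so the "noise" becomes visible in the decimal expansion.
val fromDouble = new JBigDecimal(0.1)
// The String constructor keeps the decimal value exactly as written.
val fromString = new JBigDecimal("0.1")

println(fromDouble) // 0.1000000000000000055511151231257827021181583404541015625
println(fromString) // 0.1
```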


As a point of genuine interest, does a Long of some suitable fraction (e.g. a Long of 1/100 $) not suffice? That would be big enough to represent $92,233,720,368,547,758.
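
For what it’s worth, a minimal sketch of that idea, with a hypothetical `Cents` wrapper: addition and subtraction stay exact because they are plain Long arithmetic.

```scala
// Hypothetical fixed-point sketch: amounts stored as whole Long cents.
final case class Cents(value: Long) extends AnyVal {
  def +(other: Cents): Cents = Cents(value + other.value)
  def -(other: Cents): Cents = Cents(value - other.value)
}

// $0.03 - $0.02 is exact integer arithmetic: 3 cents - 2 cents = 1 cent.
val diff = Cents(3) - Cents(2)
```

Multiplication and division would still need explicit rounding decisions, which is exactly where BigDecimal's rounding control earns its keep.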


To be fair, BigDecimal also loses precision. I think proper financial calculations should use BigInt (or FixedPoint from libraries such as Spire).


I think the important thing about BigDecimal is that it: a) allows very precise control of rounding; b) does rounding in decimal instead of binary, so it always matches what a human would do.
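
A small illustration of both points, using plain `java.math.BigDecimal`: 2.675 rounds to 2.68 in decimal, while the same rounding applied to the Double value gives 2.67, because the Double is really 2.67499….

```scala
import java.math.{BigDecimal => JBigDecimal, RoundingMode}

// Rounding in decimal: 2.675 rounds HALF_UP to 2.68, as a human expects.
val dec = new JBigDecimal("2.675").setScale(2, RoundingMode.HALF_UP)

// Rounding the binary Double value: 2.675 is stored as 2.67499...,
// so HALF_UP rounds it down to 2.67.
val bin = new JBigDecimal(2.675).setScale(2, RoundingMode.HALF_UP)

println(dec) // 2.68
println(bin) // 2.67
```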


Note that it’s possible to implement statically-checked BigDecimal “literals” with Contextual. I just implemented a basic example in a few lines of code, here:

This gives you the ability to write d"3.1415926" and get a BigDecimal instance, but will give you a compile-time error if you accidentally write an invalid BigDecimal such as d"3.141592b".

There’s no support for extractors with Contextual yet, though it’s possible to write your own as a macro, if you think it’s worth it. Kaleidoscope is just such a macro, which I wrote for pattern-matching strings with regular expressions. It does quite a lot more than you would need for matching BigDecimals, so you could probably spend some time understanding the source code, then remove most of it to leave a BigDecimal matcher which would allow matches like,

    myBigDecimal match { case d"3.14159" => "π"; case d => d.toString }

The code for Kaleidoscope is here.

If I get time later today, I’ll do it myself - it seems like it would be useful.



OK, I thought it would be fun, so I went ahead and implemented it. Pattern matching on BigDecimal literals is now in Kaleidoscope. But if you just want to use it in your own project, the relevant code to copy and paste is here:



@propensive being amazingly amazing :slight_smile:


Generally, the problem happens when you do several calculations involving cumulative percentages (like when you calculate futures or credit) and you need many more than two digits after the decimal point. Especially if you are a big bank and you deal with huge amounts of money :slight_smile:


That’s definitely correct, but still things like this

    val a: BigDecimal = 0.02 - 0.03

would give

    a: BigDecimal = -0.009999999999999998

because the calculation is done in Doubles.

Maybe it’s okay, since you don’t write non-trivial constant expressions that often. But I personally really love Haskell’s approach to literals, where

    1   :: Num p => p
    1.0 :: Fractional p => p

In Haskell, number literals are treated somewhat like functions, i.e. the result type depends on the required type (like methods with path-dependent return types in Scala), including all operations. It means that when you write

    y :: Float -- or Double or BigDecimal or whatever
    y = 0.02 - 0.03

subtraction is performed in terms of Float (or Double, or BigDecimal, or whatever), thus giving correct and precise results.

This approach may not work as nicely in Scala because of the different type-inference paradigm (sometimes it can require too many type ascriptions), but I believe it can be harmonized (especially since we have scala.math.Numeric, scala.math.Fractional, etc.).
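
A rough analogue of that idea in today's Scala, sketched with `scala.math.Fractional` rather than true polymorphic literals: the subtraction below is performed in whatever numeric type the caller picks, so BigDecimal stays exact while Double does not (the `diff` helper is hypothetical).

```scala
// Hypothetical helper: subtraction abstracted over the numeric type,
// so the arithmetic happens in the caller's chosen type.
def diff[A](a: A, b: A)(implicit frac: Fractional[A]): A = frac.minus(a, b)

val exact   = diff(BigDecimal("0.03"), BigDecimal("0.02")) // exactly 0.01
val inexact = diff(0.03, 0.02)                             // Double: 0.009999999999999998
```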

Better number literals

BigDecimal can have unlimited precision (with java.math.MathContext.UNLIMITED)
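
For example (plain `java.math` API): with `MathContext.UNLIMITED` every exact result is kept in full, and a non-terminating expansion such as 1/3 throws instead of silently rounding.

```scala
import java.math.{BigDecimal => JBigDecimal, MathContext}

// 1/8 terminates, so UNLIMITED precision yields the exact quotient 0.125.
val eighth = new JBigDecimal("1").divide(new JBigDecimal("8"), MathContext.UNLIMITED)

// 1/3 has a non-terminating decimal expansion, so UNLIMITED division throws
// ArithmeticException rather than rounding behind your back.
val nonTerminating =
  try { new JBigDecimal("1").divide(new JBigDecimal("3"), MathContext.UNLIMITED); false }
  catch { case _: ArithmeticException => true }
```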


I think it depends on the country’s laws.
We need BigDecimal in our software requirements.
We use a special algorithm to spread the “precision loss” between document positions.
I have given an example from SQL. In SQL, decimal numbers are standard.


If I understood everything correctly, Kaleidoscope uses whitebox macros.
That is an experimental feature, and it will not be supported in the future.

So we can’t use it in production.


Yes, both Kaleidoscope and Contextual use whitebox macros.

So, in defence of whitebox macros, I’m not sure there’s a particularly strong relationship between a feature being experimental and not being supported. Whitebox macros will be around for as long as Scala 2.x is, which will be at least five years, and they will be replaced by a variety of other features in Scala 3.0, to which there’s likely to be a straightforward upgrade path.

Most people are probably already using libraries in Scala which have a much shorter expected longevity than whitebox macros, either directly or indirectly.

Note that there is a rather easy exit strategy for this: ScalaFix could rewrite every instance of a "BigDecimal literal" back to the old applied form, if necessary. It would be a few lines of code to implement.



But the scheme also restricts the kind of macros that can be expressed: macros will be blackbox.
This means that a macro expansion cannot influence the type of the expanded expression as seen from the typechecker.

Migrating ERP modules is a nightmare :)).
“Acceptance tests” are very expensive.
Unit tests are often impossible.

So ScalaFix is not a silver bullet at all.


I don’t want to harp on about this too much, but it seems like you’re passing up a good opportunity to get a useful feature of Scala which is available right now.

Whitebox macros will continue to exist for as long as Scala 2.x exists, or to put that another way, there will never be another release of Scala 2.x which does not support whitebox macros.

They will, however, not exist in Scala 3 in their current form, but neither will blackbox macros. It should be mostly possible to convert blackbox macros to Scala 3, but there will be no direct replacement for whitebox macros in Scala 3. However, there may be some new features in Scala 3 which provide some of the same functionality as certain whitebox macros.

Note that the dependency on “whitebox macros” is also only a compile-time dependency. The macros expand to code which has no runtime dependency on macros. It’s also the case that any blackbox macro implementation which uses quasiquotes will itself have a compile-time dependency on whitebox macros, because that’s how quasiquotes are implemented. So if you’re already using quasiquotes, you’re already depending on whitebox macros.

Anyway, migration of any non-trivial codebase from Scala 2.x to Scala 3 will require work, including (most likely) some manual rewrites of code, and some automatic rewrites.

Rewriting BigDecimal literals as method applications and extractors is the most mechanical rewrite there’s likely to be. In the grand scheme of a migration from Scala 2.x to Scala 3, it would have negligible impact on the difficulty of the task.

So, unless I’ve misjudged something here, there’s really no situation in which choosing not to use whitebox macros is saving you any trouble later on. At worst, it might encourage familiarity with a nicer syntax which gets dropped in Scala 3. By the way, I didn’t really understand what’s special about ERP modules, so let me know if I’m missing something specific here…


I think my previous words were a bit incorrect.
We have two departments. One writes ERPs, the other writes frameworks.
And they don’t understand each other at all :slight_smile:

For one, the best language is PL/SQL; for the other, the best language is Java/Kotlin (not Scala: Scala is too difficult).

If I tried to describe their differences, I would need a separate topic and much better language skills :))

To tell the truth, we need PL/SQL, but there is no PL/SQL on the JVM. We chose Scala because it allows us to extend its type system. You can write:

    if (a + b === c) {
      val l = a :: b :: c :: Nil
      l.sortBy(x => x.nullsLast)
    }

And ‘a’ and ‘b’ will be custom types.

I don’t know how to explain that the users of the framework will have different qualifications than the readers of this site.
Someone can always say: you must always improve your skills :slight_smile:

In short, there are reasons why whitebox macros are deprecated. We don’t use them for the same reasons.

Here I hope we are trying to find out how to improve the language.
And I think that, since whitebox macros are deprecated, the suggestion to use them is not a language improvement.