Tags: currency, floating-point

Why not use Double or Float to represent currency?


I’ve always been told never to represent money with double or float types, and this time I pose the question to you: why?

I’m sure there is a very good reason, I simply do not know what it is.


  • 5

    See this SO question: Rounding Errors?

    Sep 16, 2010 at 19:31

  • 97

    Just to be clear, they shouldn’t be used for anything that requires accuracy — not just currency.

    – Jeff

    Sep 16, 2010 at 19:59

  • 184

    They shouldn’t be used for anything that requires exactness. But double’s 53 significant bits (~16 decimal digits) are usually good enough for things that merely require accuracy.

    – dan04

    Sep 17, 2010 at 19:23

  • 36

    @jeff Your comment completely misrepresents what binary floating-point is good for and what it isn’t good for. Read the answer by zneak below, and please delete your misleading comment.

    Aug 3, 2014 at 21:08

  • And to be clear, by “exactness” (or “precision”) you mean in decimal.

    – Jack Leow

    Aug 20, 2021 at 16:38


Because floats and doubles cannot exactly represent most of the base-10 amounts that we use for money. This isn’t just a Java issue; it applies to any programming language that uses base-2 floating-point types.

In base 10, you can write 10.25 as 1025 * 10^-2 (an integer times a power of 10). IEEE-754 floating-point numbers are different, but a very simple way to think about them is to multiply by a power of two instead. For instance, you could be looking at 164 * 2^-4 (an integer times a power of two), which is also equal to 10.25. That’s not how the numbers are represented in memory, but the mathematical implications are the same.

Even in base 10, this notation cannot exactly represent most simple fractions. For instance, you can’t represent 1/3: the decimal representation is repeating (0.3333…), so there is no finite integer that you can multiply by a power of 10 to get 1/3. You could settle on a long sequence of 3’s and a small exponent, like 3333333333 * 10^-10, but it is not exact: if you multiply that by 3, you won’t get 1.

However, for the purpose of counting money, at least for countries whose money is valued within an order of magnitude of the US dollar, usually all you need is to be able to store multiples of 10^-2, so it doesn’t really matter that 1/3 can’t be represented.

The problem with floats and doubles is that the vast majority of money-like numbers don’t have an exact representation as an integer times a power of 2. In fact, the only multiples of 0.01 between 0 and 1 (which are significant when dealing with money because they’re integer cents) that can be represented exactly as an IEEE-754 binary floating-point number are 0, 0.25, 0.5, 0.75 and 1. All the others are off by a small amount. As an analogy to the 0.333333 example: the double closest to 0.1 is not one tenth but 0.1000000000000000055511151231257827…, and the double closest to 0.01 is likewise slightly off, so every cent amount you store already carries a tiny error.
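The claim about which cent values survive exactly can be checked directly. Here is a small sketch (my own illustration, not part of the original answer; it assumes nothing beyond the JDK) that round-trips every multiple of 0.01 between 0 and 1 through a double:

```java
public class ExactCents {
    public static void main(String[] args) {
        for (int cents = 0; cents <= 100; cents++) {
            double d = cents / 100.0;
            // new BigDecimal(double) exposes the double's exact stored value,
            // so comparing it to the intended decimal detects any error
            java.math.BigDecimal stored = new java.math.BigDecimal(d);
            java.math.BigDecimal intended =
                    new java.math.BigDecimal(cents).movePointLeft(2);
            if (stored.compareTo(intended) == 0) {
                System.out.println(d); // prints only 0.0, 0.25, 0.5, 0.75, 1.0
            }
        }
    }
}
```

Only the values whose fractional part is a sum of powers of two (0, 1/4, 1/2, 3/4, 1) make it through unchanged.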

Representing money as a double or float will probably look good at first as the software rounds off the tiny errors, but as you perform more additions, subtractions, multiplications and divisions on inexact numbers, errors will compound and you’ll end up with values that are visibly not accurate. This makes floats and doubles inadequate for dealing with money, where perfect accuracy for multiples of base 10 powers is required.
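The drift is easy to demonstrate. This sketch (illustrative only) adds a dime ten times, which should make exactly one dollar:

```java
public class DriftDemo {
    public static void main(String[] args) {
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.1; // each stored 0.1 is slightly off; the errors accumulate
        }
        System.out.println(total);        // 0.9999999999999999
        System.out.println(total == 1.0); // false
    }
}
```

With ten additions the error is already visible in the printed value; across thousands of transactions it can cross a rounding boundary and change a displayed amount.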

A solution that works in just about any language is to use integers instead, and count cents. For instance, 1025 would be $10.25. Several languages also have built-in types to deal with money. Among others, Java has the BigDecimal class, and C# has the decimal type.
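Both suggestions can be sketched in a few lines of Java (the variable names are illustrative):

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // Counting cents with a long: $10.25 is stored as 1025
        long priceCents = 1025;
        long taxCents = 42;
        long totalCents = priceCents + taxCents; // exact integer arithmetic
        // Format non-negative cents back into dollars for display
        System.out.printf("$%d.%02d%n", totalCents / 100, totalCents % 100);

        // The same with BigDecimal; the String constructor keeps values exact
        BigDecimal price = new BigDecimal("10.25");
        BigDecimal tax = new BigDecimal("0.42");
        System.out.println(price.add(tax)); // 10.67
    }
}
```

The integer approach is fast and simple as long as every amount is a whole number of the smallest unit; BigDecimal handles arbitrary scales and explicit rounding modes at the cost of more verbose code.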


  • 5

    @Fran You will get rounding errors and in some cases where large quantities of currency are being used, interest rate computations can be grossly off

    Sep 16, 2010 at 19:29

  • 6

    …most base 10 fractions, that is. For example, 0.1 has no exact binary floating-point representation. So, 1.0 / 10 * 10 may not be the same as 1.0.

    Sep 16, 2010 at 19:30

  • 8

    @linuxuser27 I think Fran was trying to be funny. Anyway, zneak’s answer is the best I’ve seen, better even than the classic version by Bloch.

    Oct 8, 2012 at 20:28

  • 5

    Of course if you know the precision, you can always round the result and thus avoid the whole issue. This is much faster and simpler than using BigDecimal. Another alternative is to use fixed precision int or long.

    Feb 24, 2013 at 12:12

  • 8

    @JoL You are right, the statement that float(0.1) * 10 ≠ 1 is wrong. In a double-precision float, 0.1 is represented as 0b0.00011001100110011001100110011001100110011001100110011010 and 10 as 0b1010. If you multiply these two binary numbers, you get 1.0000000000000000000000000000000000000000000000000000010, and after that has been rounded to the available 53 binary digits, you have exactly 1. The problem with floats is not that they always go wrong, but that they sometimes do – as with the example of 0.1 + 0.2 ≠ 0.3.

    Dec 15, 2018 at 17:59


From Bloch, J., Effective Java (2nd ed., Item 48; 3rd ed., Item 60):

The float and double types are particularly ill-suited for monetary calculations because it is impossible to represent 0.1 (or any other negative power of ten) as a float or double exactly.

For example, suppose you have $1.03 and you spend 42c. How much money do you have left?

System.out.println(1.03 - .42);

prints out 0.6100000000000001.

The right way to solve this problem is to use BigDecimal, int, or long for monetary calculations.

Though BigDecimal has some caveats of its own (see the currently accepted answer).
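For comparison, a minimal sketch of the fix Bloch recommends: BigDecimal with the String constructor (the double constructor, new BigDecimal(0.42), would inherit the double’s error):

```java
import java.math.BigDecimal;

public class Change {
    public static void main(String[] args) {
        System.out.println(1.03 - .42);             // 0.6100000000000001
        System.out.println(new BigDecimal("1.03")
                .subtract(new BigDecimal("0.42"))); // 0.61
    }
}
```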


  • 8

    I’m a little confused by the recommendation to use int or long for monetary calculations. How do you represent 1.03 as an int or long? I’ve tried “long a = 1.04;” and “long a = 104/100;” to no avail.

    – Peter

    Mar 15, 2014 at 10:32

  • 63

    @Peter, you use long a = 104 and count in cents instead of dollars.

    – zneak

    Mar 17, 2014 at 1:49

  • 4

    @zneak What about when a percentage needs to be applied like compounding interest or similar?

    – trusktr

    Mar 6, 2016 at 3:41

  • 5

    @trusktr, I’d go with your platform’s decimal type. In Java, that’s BigDecimal.

    – zneak

    Mar 6, 2016 at 19:42

  • 15

    @maaartinus …and you don’t think using double for such things is error-prone? I’ve seen the float rounding issue hit real systems hard. Even in banking. Please don’t recommend it, or if you do, provide that as a separate answer (so we can downvote it 😛 )

    – eis

    Feb 16, 2017 at 10:57


This is not a matter of accuracy, nor is it a matter of precision. It is a matter of meeting the expectations of humans who use base 10 for calculations instead of base 2. For example, using doubles for financial calculations does not produce answers that are “wrong” in a mathematical sense, but it can produce answers that are not what is expected in a financial sense.

Even if you round off your results at the last minute before output, you can still occasionally get a result using doubles that does not match expectations.

Using a calculator, or calculating results by hand, 1.40 * 165 = 231 exactly. However, internally using doubles, on my compiler / operating system environment, it is stored as a binary number close to 230.99999… so if you truncate the number, you get 230 instead of 231. You may reason that rounding instead of truncating would have given the desired result of 231. That is true, but rounding always involves truncation. Whatever rounding technique you use, there are still boundary conditions like this one that will round down when you expect it to round up. They are rare enough that they often will not be found through casual testing or observation. You may have to write some code to search for examples that illustrate outcomes that do not behave as expected.

Assume you want to round something to the nearest penny. So you take your final result, multiply by 100, add 0.5, truncate, then divide the result by 100 to get back to pennies. If the internal number you stored was 3.46499999… instead of 3.465, you are going to get 3.46 instead of 3.47 when you round the number to the nearest penny. But your base 10 calculations may have indicated that the answer should be 3.465 exactly, which clearly should round up to 3.47, not down to 3.46. These kinds of things happen occasionally in real life when you use doubles for financial calculations. It is rare, so it often goes unnoticed as an issue, but it happens.
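Both boundary cases above are easy to reproduce. Whether 3.465 itself misbehaves depends on its nearest double on your platform, but 1.005 is a well-known value whose stored double sits just below the written decimal, so the multiply-add-truncate recipe rounds it the “wrong” way (a sketch, not a definitive test suite):

```java
public class PennyRounding {
    public static void main(String[] args) {
        double d = 1.40 * 165;
        System.out.println(d);             // 230.99999999999997
        System.out.println((long) d);      // truncation gives 230, not 231
        System.out.println(Math.round(d)); // rounding recovers 231 here...

        // ...but rounding has boundary cases of its own. The double nearest
        // to 1.005 is slightly below it, so rounding to the nearest cent
        // gives 1.00 where base-10 arithmetic says 1.01:
        double x = 1.005;
        System.out.println(Math.floor(x * 100 + 0.5) / 100); // 1.0
    }
}
```

As the answer says, such values are rare enough to slip past casual testing, which is exactly what makes them dangerous in financial code.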

If you use base 10 for your internal calculations instead of doubles, the answers are always exactly what is expected by humans, assuming no other bugs in your code.


  • 5

Related, interesting: in my Chrome JS console, Math.round(.4999999999999999) gives 0, but Math.round(.49999999999999999) gives 1.

    Jun 8, 2015 at 17:26

  • 22

    This answer is misleading. 1.40 * 165 = 231. Any number other than exactly 231 is wrong in a mathematical sense (and all other senses).

    – Karu

    Jun 18, 2015 at 9:37

  • 3

    @Karu I think that’s why Randy says floats are bad… My Chrome JS console shows 230.99999999999997 as the result. That is wrong, which is the point made in the answer.

    – trusktr

    Mar 6, 2016 at 3:48

  • 6

@Karu: Imho the answer is not mathematically wrong. It’s just that there are two questions, and the one being answered is not the one being asked. The question your compiler answers is 1.39999999 * 164.99999999 and so on, which, mathematically correctly, equals 230.99999…. Obviously that’s not the question that was asked in the first place…

    – markus

    Mar 15, 2016 at 9:36

  • 3

@CurtisYallop because the closest double value to 0.49999999999999999 is 0.5. See: Why does Math.round(0.49999999999999994) return 1?

    – phuclv

    Feb 21, 2018 at 8:56