What is the difference between decimal, float and double in .NET?
When would someone use one of these?
float and double are floating binary point types (float is 32-bit; double is 64-bit). In other words, they represent a number like this:
10001.10010110011
The binary number and the location of the binary point are both encoded within the value.
decimal is a floating decimal point type. In other words, it represents a number like this:
12345.65789
Again, the number and the location of the decimal point are both encoded within the value – that’s what makes decimal still a floating point type instead of a fixed point type.
The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you’ll actually get an approximation to 0.1. You’ll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can’t be exactly represented, for example.
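To make that concrete, here is a small illustrative C# snippet (not from the original answer; the class name is arbitrary) showing the 0.1 approximation in binary floating point and the 1/3 approximation in decimal:

```csharp
using System;

class ApproximationDemo
{
    static void Main()
    {
        // 0.1 has no exact binary representation, so double stores the nearest value.
        double d = 0.1;
        Console.WriteLine(d.ToString("G17"));  // "G17" round-trips, exposing the approximation

        // decimal stores 0.1 exactly, because it works in base 10...
        Console.WriteLine(0.1m);               // 0.1

        // ...but 1/3 terminates in neither base, so decimal must approximate it too.
        Console.WriteLine(1m / 3m);            // 0.3333333333333333333333333333
    }
}
```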
As for what to use when:

- For values which are “naturally exact decimals” it’s good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.
- For values which are more artefacts of nature which can’t really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won’t be “decimally accurate” to start with, so it’s not important for the expected results to maintain the “decimal accuracy”. Floating binary point types are much faster to work with than decimals.
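A classic illustration of why money belongs in decimal – summing ten 0.1s drifts in binary floating point but stays exact in decimal (a minimal sketch, class name arbitrary):

```csharp
using System;

class MoneyDemo
{
    static void Main()
    {
        double dSum = 0;
        decimal mSum = 0;
        for (int i = 0; i < 10; i++)
        {
            dSum += 0.1;   // each addition accumulates a tiny binary rounding error
            mSum += 0.1m;  // 0.1 is exact in base 10, so no error accumulates
        }

        Console.WriteLine(dSum == 1.0);   // False
        Console.WriteLine(mSum == 1.0m);  // True
    }
}
```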
float/double usually do not represent numbers as 101.101110; normally a value is represented as something like 1101010 * 2^(01010010) – with an exponent.
Aug 13, 2014 at 21:50
@Hazzard: That’s what the “and the location of the binary point” part of the answer means.
Aug 13, 2014 at 21:57
I’m surprised it hasn’t been said already: float is a C# alias keyword and isn’t a .NET type. It’s System.Single. Single and double are floating binary point types.
Feb 3, 2015 at 15:48
@BKSpurgeon: Well, only in the same way that you can say that everything is a binary type, at which point it becomes a fairly useless definition. Decimal is a decimal type in that it’s a number represented as an integer significand and a scale, such that the result is significand * 10^scale, whereas float and double are significand * 2^scale. You take a number written in decimal, and move the decimal point far enough to the right that you’ve got an integer to work out the significand and the scale. For float/double you’d start with a number written in binary.
Nov 26, 2015 at 7:20
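The significand/scale split described above can be inspected directly: decimal.GetBits is a real framework method that returns the 96-bit significand in the first three ints and the sign and scale in the fourth (the class name below is just for this sketch):

```csharp
using System;

class DecimalBitsDemo
{
    static void Main()
    {
        // 123.4500m is stored as the integer significand 1234500 with scale 4,
        // i.e. 1234500 * 10^-4. The scale lives in bits 16-23 of bits[3].
        int[] bits = decimal.GetBits(123.4500m);
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine($"significand={bits[0]}, scale={scale}");  // significand=1234500, scale=4
    }
}
```

Note that the scale of 4 (not 2) is why decimal can preserve trailing zeros.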
Precision is the main difference.

Float – 7 digits (32 bit)
Double – 15-16 digits (64 bit)
Decimal – 28-29 significant digits (128 bit)

Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Decimals are much slower (up to 20x in some tests) than a double/float.

Decimals and Floats/Doubles cannot be compared without a cast whereas Floats and Doubles can. Decimals also allow the encoding of trailing zeros.
float flt = 1F/3;
double dbl = 1D/3;
decimal dcm = 1M/3;
Console.WriteLine("float: {0} double: {1} decimal: {2}", flt, dbl, dcm);
Result:
float: 0.3333333
double: 0.333333333333333
decimal: 0.3333333333333333333333333333
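The “encoding of trailing zeros” point can be demonstrated with a short sketch (illustrative, not from the original answer) – decimal carries its scale through arithmetic, while double has no notion of trailing zeros at all:

```csharp
using System;

class TrailingZerosDemo
{
    static void Main()
    {
        // Decimal addition keeps the larger scale of the two operands:
        Console.WriteLine(1.0m + 2.00m);  // 3.00
        Console.WriteLine(1.50m);         // 1.50

        // double stores only a binary value; 1.50 and 1.5 are the same bit pattern:
        Console.WriteLine(1.50);          // 1.5
    }
}
```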
@Thecrocodilehunter: sorry, but no. Decimal can represent all numbers that can be represented in decimal notation, but not 1/3 for example. 1.0m / 3.0m will evaluate to 0.33333333… with a large but finite number of 3s at the end. Multiplying it by 3 will not return an exact 1.0.
– Erik P. Nov 29, 2011 at 21:14
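The claim in this comment is easy to verify with a small C# sketch (class name arbitrary):

```csharp
using System;

class ThirdDemo
{
    static void Main()
    {
        // 1/3 does not terminate in base 10, so decimal truncates it at 28 digits...
        decimal third = 1.0m / 3.0m;
        Console.WriteLine(third);              // 0.3333333333333333333333333333

        // ...and multiplying back by 3 does not recover an exact 1.0.
        Console.WriteLine(third * 3);          // 0.9999999999999999999999999999
        Console.WriteLine(third * 3 == 1.0m);  // False
    }
}
```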
@Thecrocodilehunter: I think you’re confusing accuracy and precision. They are different things in this context. Precision is the number of digits available to represent a number. The more precision, the less you need to round. No data type has infinite precision.
Jan 6, 2012 at 17:42
@Thecrocodilehunter: You’re assuming that the value that is being measured is exactly 0.1 — that is rarely the case in the real world! Any finite storage format will conflate an infinite number of possible values to a finite number of bit patterns. For example, float will conflate 0.1 and 0.1 + 1e-8, while decimal will conflate 0.1 and 0.1 + 1e-29. Sure, within a given range, certain values can be represented in any format with zero loss of accuracy (e.g. float can store any integer up to 1.6e7 with zero loss of accuracy) — but that’s still not infinite accuracy.
Jan 10, 2012 at 1:49
@Thecrocodilehunter: You missed my point. 0.1 is not a special value! The only thing that makes 0.1 “better” than 0.10000001 is because human beings like base 10. And even with a float value, if you initialize two values with 0.1 the same way, they will both be the same value. It’s just that that value won’t be exactly 0.1 — it will be the closest value to 0.1 that can be exactly represented as a float. Sure, with binary floats, (1.0 / 10) * 10 != 1.0, but with decimal floats, (1.0 / 3) * 3 != 1.0 either. Neither is perfectly precise.
Jan 10, 2012 at 18:27
@Thecrocodilehunter: You still don’t understand. I don’t know how to say this any more plainly: In C, if you do double a = 0.1; double b = 0.1; then a == b will be true. It’s just that a and b will both not exactly equal 0.1. In C#, if you do decimal a = 1.0m / 3.0m; decimal b = 1.0m / 3.0m; then a == b will also be true. But in that case, neither of a nor b will exactly equal 1/3 — they will both equal 0.3333.... In both cases, some accuracy is lost due to representation. You stubbornly say that decimal has “infinite” precision, which is false.
Jan 10, 2012 at 19:29
+---------+----------------+---------+----------+---------------------------------------------------------+
| C#      | .Net Framework | Signed? | Bytes    | Possible Values                                         |
| Type    | (System) type  |         | Occupied |                                                         |
+---------+----------------+---------+----------+---------------------------------------------------------+
| sbyte   | System.SByte   | Yes     | 1        | -128 to 127                                             |
| short   | System.Int16   | Yes     | 2        | -32,768 to 32,767                                       |
| int     | System.Int32   | Yes     | 4        | -2,147,483,648 to 2,147,483,647                         |
| long    | System.Int64   | Yes     | 8        | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| byte    | System.Byte    | No      | 1        | 0 to 255                                                |
| ushort  | System.UInt16  | No      | 2        | 0 to 65,535                                             |
| uint    | System.UInt32  | No      | 4        | 0 to 4,294,967,295                                      |
| ulong   | System.UInt64  | No      | 8        | 0 to 18,446,744,073,709,551,615                         |
| float   | System.Single  | Yes     | 4        | Approximately ±1.5e-45 to ±3.4e38                       |
|         |                |         |          | with ~6-9 significant figures                           |
| double  | System.Double  | Yes     | 8        | Approximately ±5.0e-324 to ±1.7e308                     |
|         |                |         |          | with ~15-17 significant figures                         |
| decimal | System.Decimal | Yes     | 16       | Approximately ±1.0e-28 to ±7.9e28                       |
|         |                |         |          | with 28-29 significant figures                          |
| char    | System.Char    | N/A     | 2        | Any Unicode character (16 bit)                          |
| bool    | System.Boolean | N/A     | 1 / 2    | true or false                                           |
+---------+----------------+---------+----------+---------------------------------------------------------+
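The float/double/decimal rows of the table can be sanity-checked against the framework’s own constants; a small illustrative snippet (class name arbitrary):

```csharp
using System;

class RangeCheck
{
    static void Main()
    {
        // Full ranges come straight from MinValue/MaxValue; sizeof is safe for
        // the predefined types, including decimal.
        Console.WriteLine($"float:   {float.MinValue:E1} to {float.MaxValue:E1}, {sizeof(float)} bytes");
        Console.WriteLine($"double:  {double.MinValue:E1} to {double.MaxValue:E1}, {sizeof(double)} bytes");
        Console.WriteLine($"decimal: {decimal.MinValue:E1} to {decimal.MaxValue:E1}, {sizeof(decimal)} bytes");

        // The ~1.5e-45 / ~5.0e-324 figures are the smallest positive (subnormal) values:
        Console.WriteLine($"float.Epsilon: {float.Epsilon}, double.Epsilon: {double.Epsilon}");
    }
}
```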
You left out the biggest difference, which is the base used for the decimal type (decimal is stored as base 10, all other numeric types listed are base 2).
Mar 14, 2015 at 22:55
The value ranges for the Single and Double are not depicted correctly in the above image or the source forum post. Since we can’t easily superscript the text here, use the caret character: Single should be 10^-45 and 10^38, and Double should be 10^-324 and 10^308. Also, MSDN has the float with a range of -3.4×10^38 to +3.4×10^38. Search MSDN for System.Single and System.Double in case of link changes. Single: msdn.microsoft.com/en-us/library/b1e65aza.aspx Double: msdn.microsoft.com/en-us/library/678hzkk9.aspx
– deegee Jun 22, 2015 at 19:18
interesting article zetcode.com/lang/csharp/datatypes
Mar 1, 2014 at 14:20
You cannot use decimal to interop with native code since it is a .NET-specific implementation, while float and double numbers can be processed by CPUs directly.
Mar 6, 2021 at 10:55
