I am learning primitive data types and I came across double, so would someone explain what double actually does? By double I mean:

    double foo = 3.14;

Or, what's the difference between float and double? I got a little confused and I would really appreciate it if you could lend a hand.

    float foo = 3.14f;
The main difference between the two is the size of the numbers they can hold. A float gives you up to about 7 significant digits; a double gives you up to about 15-16. To be more precise, here are the official ranges:

    float:  1.5 × 10^-45 to 3.4 × 10^38
    double: 5.0 × 10^-324 to 1.7 × 10^308
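A quick way to see the precision difference for yourself, as a minimal C# sketch (the literal below is just an arbitrary value with more digits than a float can hold):

    using System;

    class PrecisionDemo
    {
        static void Main()
        {
            // Same literal stored in both types: float keeps roughly 7
            // significant digits, double keeps roughly 15-16.
            float  f = 0.123456789123456789f;
            double d = 0.123456789123456789;

            Console.WriteLine(f.ToString("G9"));   // prints 0.123456791 - wrong after ~7 digits
            Console.WriteLine(d.ToString("G17"));  // prints 0.12345678912345678
        }
    }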
float: .NET type is System.Single, 4 bytes, values approx. ±1.5 × 10^-45 to ±3.4 × 10^38, with 7 significant figures.
double: .NET type is System.Double, 8 bytes, values approx. ±5.0 × 10^-324 to ±1.7 × 10^308, with 15 or 16 significant figures.
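You can verify the byte sizes directly in C# (a minimal sketch; sizeof works on the built-in numeric types in safe code):

    using System;

    class SizeDemo
    {
        static void Main()
        {
            Console.WriteLine(sizeof(float));   // 4 bytes
            Console.WriteLine(sizeof(double));  // 8 bytes
        }
    }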
1) A double holds a wider range of values than a float.
2) A double gives you 15-16 significant digits, whereas a float gives you roughly 7.
3) If you want to check the maximum and minimum values a double can hold, write this code in a console application:

    Console.WriteLine("Max value = {0}", Double.MaxValue);
    Console.WriteLine("Min value = {0}", Double.MinValue);

4) You can check the same with float.
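Put together as a complete program covering point 4 as well (a minimal sketch; Single is the underlying .NET type for float):

    using System;

    class RangeDemo
    {
        static void Main()
        {
            Console.WriteLine("double: {0} to {1}", Double.MinValue, Double.MaxValue);
            Console.WriteLine("float:  {0} to {1}", Single.MinValue, Single.MaxValue);
        }
    }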
That's a very... convoluted way to distinguish FLOAT and DOUBLE. They do exactly the same thing; it's just that DOUBLE is exactly that - double precision. I recommend just reading the specs, for instance here: http://dev.mysql.com/doc/refman/5.6/en/floating-point-types.html and here: http://code.rohitink.com/2013/06/12/mysql-integer-float-decimal-data-types-differences/ Neither is good for financial data (for that you should use DECIMAL), and both FLOAT and DOUBLE can be set up, when creating the column, with how many digits should be allowed before and after the decimal point.
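On the financial point, the standard illustration of why binary floating point misbehaves with money is that most decimal fractions have no exact binary representation, while a decimal type stores them exactly. A minimal sketch in C# (its decimal type plays the same role as SQL's DECIMAL here):

    using System;

    class MoneyDemo
    {
        static void Main()
        {
            double  d = 0.1 + 0.2;
            decimal m = 0.1m + 0.2m;

            Console.WriteLine(d == 0.3);   // False - 0.1 and 0.2 have no exact binary form
            Console.WriteLine(m == 0.3m);  // True  - decimal stores base-10 digits exactly
        }
    }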
Actually @PoPSiCLe, decimal is HORRIBLE for tracking financials if taxes are involved -- see "rounding bug"... NOT that floats are much better...

I find it a laugh when C syntax languages say "float" when they mean "single". It's like going into a restaurant where they only have two sizes, medium and large. It's also a little funny how few programmers even grasp how it works, and still THINK in decimal even when the computer doesn't. It comes down to how many BITS are assigned to what task: https://standards.ieee.org/findstds/standard/754-1985.html

It's pretty simple:

Single (per IEEE 754-1985): 1 bit sign, 8 bits exponent, 23 bits data
Double (per IEEE 754-1985): 1 bit sign, 11 bits exponent, 52 bits data

Another common one is "extended" -- it's 80 bits, was created for internal use on the 8087, but is accepted as a standard for high precision: 1 bit sign, 15 bits exponent, 64 bits data.

Of course it's REAL fun when you start switching between languages, where some use a 6-byte float (called a "REAL"), or processors of different families that went and used their own proprietary formats. Or arbitrary-precision numbers, where a 16, 32 or 64 bit number is used for the exponent, separate from a data storage that could be ANY length -- quite often done using BCD instead of byte-packed binary. Some languages have it built in, like LISP, Smalltalk and Haskell. Java has BigDecimal and BigInteger so you can go any length as either... laughably, C# only has BigInteger with no floating-point equivalent.

Then there's the REAL fun when computers store numbers as fractions. See the 384-bit system used on some Cray supercomputers: 128 bits of integer, 128 bits of numerator and 128 bits of denominator, with a hardware reducer for 2, 3, 5, 7 and 11. I've been a little surprised that we don't see more fractional systems used on today's hardware; integer math is so much simpler than floating point to implement on silicon that, say, 128 bits of 64:32:32 would blow floating point out of the water on speed whilst still having many MANY times higher precision, with many "irrational" numbers able to be stated absolutely. See 0::1/3 or 0::2/7 -- try stating those as absolutes with floating point. (In most fractional computer systems you use a double colon to separate the integer component from the fraction.)

Nope, I've NEVER programmed in Ada... that's my story and I'm shticking to it!
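To make those bit layouts concrete, here's a minimal C# sketch that extracts the sign, exponent and mantissa fields from a double, following the IEEE 754-1985 layout above:

    using System;

    class Ieee754Demo
    {
        static void Main()
        {
            double value = 3.14;
            long bits = BitConverter.DoubleToInt64Bits(value);

            // IEEE 754 double: 1 sign bit, 11 exponent bits, 52 mantissa bits.
            long sign     = (bits >> 63) & 0x1;
            long exponent = (bits >> 52) & 0x7FF;
            long mantissa = bits & 0xFFFFFFFFFFFFFL;

            Console.WriteLine($"sign = {sign}, exponent = {exponent} (biased), mantissa = 0x{mantissa:X}");
        }
    }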
A float may be enough if you're entering values like percentages, while a double is for storing larger, more precise values and takes more memory. If you want to keep your programs small, keep in mind that bigger data types can make a program run slower on the CPU when you're working on real-world programs and developing software for Mac/PC etc.