Data Precision & Representation
Explore why 0.1 + 0.2 ≠ 0.3 in computers, and understand the fundamental limits of number representation.
The Famous 0.1 + 0.2 Problem
Try this in any programming language's console:
0.1 + 0.2 = 0.30000000000000004
This is NOT a bug. It's a fundamental limitation of how computers represent decimal numbers in binary.
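You can see the mismatch directly. A minimal sketch for Node.js or any browser console:

```javascript
// The sum is not the double closest to 0.3 — it lands one step above it.
const sum = 0.1 + 0.2;

console.log(sum);         // 0.30000000000000004
console.log(sum === 0.3); // false
console.log(sum - 0.3);   // a tiny but real error, around 5.5e-17
```

The same behavior appears in Python, Java, C, and virtually every language that uses standard floating-point hardware.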
Why Does This Happen?
The Binary Problem
Computers use binary (base-2). Just as 1/3 = 0.333... never terminates in decimal, many simple decimal fractions have binary expansions that repeat forever.
0.1 in decimal = 0.0001100110011... in binary (repeating forever)
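You can generate that expansion yourself by repeated doubling: multiplying by 2 shifts the binary point right, and the integer part that pops out is the next bit. A small sketch (the helper name `binaryFractionDigits` is just for illustration):

```javascript
// Extract the first `count` binary digits of a fraction 0 <= x < 1.
function binaryFractionDigits(x, count) {
  let bits = "";
  for (let i = 0; i < count; i++) {
    x *= 2;                              // shift binary point one place right
    if (x >= 1) { bits += "1"; x -= 1; } // integer part is the next bit
    else        { bits += "0"; }
  }
  return bits;
}

console.log(binaryFractionDigits(0.5, 4));   // "1000" — terminates
console.log(binaryFractionDigits(0.1, 12));  // "000110011001" — 0011 repeats
```

For 0.1 the loop never reaches zero: the 0011 pattern cycles forever, which is exactly why it cannot be stored in a finite number of bits.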
Finite Storage
Computers have limited memory. They must truncate these infinite sequences, causing small errors.
A standard 64-bit floating-point number (an IEEE 754 double) has about 15-17 significant decimal digits of precision.
Which Numbers Are Exact?
A decimal number is stored exactly only if it can be written as a fraction whose denominator is a power of 2:
- 0.5 = 1/2 ✓
- 0.25 = 1/4 ✓
- 0.125 = 1/8 ✓
- 0.1 = 1/10 ✗ (10 = 2×5, has factor of 5)
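Arithmetic on the power-of-2 fractions stays exact, while anything involving 1/10 drifts. A quick check:

```javascript
// Powers-of-two fractions combine without error:
console.log(0.5 + 0.25 === 0.75); // true
console.log(0.125 * 8 === 1);     // true

// Fractions with a factor of 5 in the denominator do not:
console.log(0.1 + 0.2 === 0.3);   // false
console.log(0.1 * 3 === 0.3);     // false
```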
Precision vs Accuracy
Precision
How many digits can be stored. More precision = more decimal places.
Accuracy
How close to the true value. High precision doesn't guarantee accuracy.
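A classic illustration of the difference: 22/7 is stored with a double's full ~16 digits of precision, yet it only matches π to two decimal places.

```javascript
// High precision, low accuracy: every digit below is faithfully stored,
// but most of them are wrong as an approximation of pi.
const piApprox = 22 / 7;

console.log(piApprox);                     // 3.142857142857143
console.log(Math.PI);                      // 3.141592653589793
console.log(Math.abs(piApprox - Math.PI)); // about 0.00126
```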
Practical Implications
Never compare floating-point numbers for equality!
// BAD: this condition is false — the sum is 0.30000000000000004
if (0.1 + 0.2 === 0.3) { ... }
// GOOD: compare against a small tolerance (epsilon)
if (Math.abs((0.1 + 0.2) - 0.3) < 0.0001) { ... }
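A fixed epsilon like 0.0001 is too loose for small numbers and too strict for large ones. One common refinement (the helper name `nearlyEqual` is just for illustration, not a standard API) scales the tolerance to the magnitudes involved:

```javascript
// Relative comparison: the allowed gap grows with the size of the inputs,
// with a floor of 1 so values near zero still get an absolute tolerance.
function nearlyEqual(a, b, relTol = 1e-9) {
  const diff = Math.abs(a - b);
  return diff <= relTol * Math.max(1, Math.abs(a), Math.abs(b));
}

console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
console.log(nearlyEqual(0.3, 0.4));       // false
console.log(nearlyEqual(1e15 + 0.125, 1e15)); // true — 0.125 is noise at this scale
```

JavaScript also exposes `Number.EPSILON` (the gap between 1 and the next representable double) as a principled base unit for tolerances.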
For money, use integers!
Store cents (integers) instead of dollars (decimals). $19.99 → 1999 cents.
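In practice that means converting at the boundary and doing all arithmetic on integer cents. A minimal sketch (the helper names are illustrative):

```javascript
// Convert at the edges; compute in integer cents in between.
const toCents = (dollars) => Math.round(dollars * 100);
const toDollars = (cents) => (cents / 100).toFixed(2);

// 10 items at $19.99 — exact in cents:
const totalCents = 10 * toCents(19.99);
console.log(totalCents);            // 19990
console.log(toDollars(totalCents)); // "199.90"

// The same sum done in floats drifts:
console.log(19.99 * 10);            // 199.89999999999998
```

Integers up to 2^53 - 1 are exact in a double, so cent arithmetic is safe far beyond any realistic ledger; for amounts beyond that, arbitrary-precision decimal types exist.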
Scientific calculations accumulate error
Each operation can introduce tiny errors. Over millions of calculations, this adds up.
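You can watch the accumulation happen with nothing more than a loop:

```javascript
// Add 0.1 a thousand times. Each addition rounds, and the rounding
// errors pile up instead of cancelling out.
let total = 0;
for (let i = 0; i < 1000; i++) {
  total += 0.1;
}

console.log(total);                 // close to 100, but not exactly 100
console.log(total === 100);         // false
console.log(Math.abs(total - 100)); // the accumulated error
```

Numerical computing has whole families of techniques (such as Kahan summation) devoted to keeping this drift under control.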