Why is 0.1 + 0.2 not 0.3 in JavaScript?

In JavaScript, as in many other programming languages, adding two decimal numbers does not always give exactly the result you expect, because of the way the computer represents decimal numbers in binary. This can lead to seemingly strange behavior, such as 0.1 + 0.2 not being exactly equal to 0.3.

Sounds impossible? Try it yourself.
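The two lines below can be pasted into the browser console or any other JavaScript environment:

console.log(0.1 + 0.2);         // prints 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // prints false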

Let's now look at why this happens.

One way to think about it is that the computer has a limited amount of space to store each decimal number, so it can't always represent it with perfect accuracy. For instance, between 0.1 and 0.2 there are infinitely many numbers, and there is no way the computer can represent an infinite set. Decimal numbers are therefore stored with limited precision, which means the result of adding two numbers together may be slightly off from the expected result.

A more technical explanation is the following: JavaScript represents decimal numbers using the standard IEEE 754 binary floating-point format (64-bit double precision). This format can represent numbers with fractional parts, as well as very large and very small numbers, but only with limited precision, which can lead to rounding errors in some cases.
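One way to see this limited precision is to print the numbers with more significant digits than JavaScript normally displays. The digits below are what a typical engine reports for 64-bit doubles; the key point is simply that none of the three values is stored exactly:

// Ask for 20 significant digits to reveal the stored approximations
console.log((0.1).toPrecision(20)); // 0.10000000000000000555
console.log((0.2).toPrecision(20)); // 0.20000000000000001110
console.log((0.3).toPrecision(20)); // 0.29999999999999998890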

In the case of 0.1 + 0.2, the computer calculates the result to be 0.30000000000000004, which is not exactly equal to 0.3. This is just an artifact of the way that numbers are represented in the computer, and doesn't reflect any error in the JavaScript code.

How should you work with floating-point numbers in code?

Code that compares two floating-point numbers needs to take into account the potential for rounding errors in the binary floating-point format. Because of this, it is not always sufficient to simply use the === operator to compare two floating-point numbers; relying on strict equality can introduce subtle bugs.

In the following example, the body of the if statement is never executed. Try it:
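A sketch of such a check:

if (0.1 + 0.2 === 0.3) {
    // Never reached: the sum is actually 0.30000000000000004
    console.log("Equal!");
} else {
    console.log("Not equal!"); // This is what actually runs
}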

If you need to compare the result of a decimal calculation to a specific number, it's usually best to use a tolerance value when doing the comparison. This allows for a small amount of error in the result, and ensures that your code will work as expected even if the result of the calculation is not exactly equal to the expected value.

For example, if you wanted to check whether 0.1 + 0.2 is equal to 0.3, you could use a comparison like the one below.
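The sketch uses Number.EPSILON as one possible tolerance value; any suitably small value works just as well:

const result = 0.1 + 0.2;
const expected = 0.3;
const tolerance = Number.EPSILON; // roughly 2.22e-16

if (Math.abs(result - expected) < tolerance) {
    console.log("The values are equal (within the tolerance)");
}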

In the code, we use the Math.abs() function to calculate the absolute value of the difference between the result of the calculation and the expected value, and then we compare that value to the tolerance. If the difference is less than the tolerance, we consider the numbers to be equal. This allows us to account for any small error in the calculation, and ensures that our code will work correctly.
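If you compare numbers this way in several places, you can wrap the pattern in a small helper function. The name approximatelyEqual below is just an illustration, not a built-in:

// Returns true when a and b differ by less than the given tolerance
function approximatelyEqual(a, b, tolerance = Number.EPSILON) {
    return Math.abs(a - b) < tolerance;
}

console.log(approximatelyEqual(0.1 + 0.2, 0.3)); // true
console.log(0.1 + 0.2 === 0.3);                  // false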

