An interesting thing about computers: they work in binary, while we tend to think in decimal. Binary has some limitations that we don't notice in our decimal world.
For example, we can't write 2/3 exactly as a decimal because it is a repeating decimal (0.66666666667 is a close approximation).
Now let's look at the other side of the fence: if I write 0.1 in decimal, it's not a problem. Just one digit! However, if we want to write it in binary, it looks like this:
0.00011001100110011001100110011... It's an unavoidable fact that we'll get small rounding errors when we convert between the two systems.
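You can see that error directly by asking C++ to print more digits than a double can really hold. This is just a small sketch to make the point; the exact digits you see may vary slightly by platform, but the stored value will not be exactly 0.1:

#include <iomanip>
#include <iostream>

int main() {
    double tenth = 0.1;  // stored as the nearest binary fraction, not exactly 0.1
    // Ask for 20 significant digits so the conversion error becomes visible.
    std::cout << std::setprecision(20) << tenth << '\n';
    // Prints something like 0.10000000000000000555
}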
How do we minimize the effects of this? C++ gives us some tools for setting precision. If we use them to round appropriately, these effects will not be noticeable.
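Here is a minimal sketch of that idea using std::fixed and std::setprecision from <iomanip>. Adding 0.1 ten times accumulates a tiny binary error, but rounding the output to a couple of decimal places hides it:

#include <iomanip>
#include <iostream>

int main() {
    double sum = 0.0;
    for (int i = 0; i < 10; ++i) {
        sum += 0.1;  // each addition carries a tiny binary representation error
    }

    // Full precision exposes the error:
    std::cout << std::setprecision(17) << sum << '\n';   // 0.99999999999999989

    // Rounding to two decimal places hides it:
    std::cout << std::fixed << std::setprecision(2) << sum << '\n';  // 1.00
}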