The computer does not care what kind of number you give it; it simply does what it is instructed to do. Decimal numbers (floats) are first converted to scientific notation and normalized. The problem with decimals is accuracy. 0.1 in base 10 is clearly a rational number, and it is still rational in binary (base 2). BUT its binary expansion is non-terminating, and that is where the problem lies! Let's say we truncate (cut off) at 4 binary digits, so we get 0.0001. This equals 0.0625, not 0.1! If we go out 5 places, we get 0.00011 in base 2, which equals 0.09375 (much better)!
Going out to 9 places, we get 0.000110011 in base 2, which equals 0.099609375 (better still, but not quite).
You can keep on going, but you will never get exactly 0.1!
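If you want to check this for yourself, here is a small sketch (in Python, my own illustration rather than anything from the original explanation) that truncates the binary expansion of 1/10 at a given number of places and converts the result back to decimal. It reproduces the 0.0625, 0.09375, and 0.099609375 values above, and shows that even 53 bits (the precision of a double) is still not exactly 0.1.

```python
from fractions import Fraction

def truncated_tenth(bits):
    """Value of 1/10's binary expansion truncated (not rounded) to `bits` places."""
    x = Fraction(1, 10)
    result = Fraction(0)
    for i in range(1, bits + 1):
        x *= 2                           # shift the next binary digit to the left of the point
        if x >= 1:
            result += Fraction(1, 2**i)  # that digit is a 1: add its place value
            x -= 1
    return result

for bits in (4, 5, 9, 24, 53):
    print(f"{bits:>2} bits: {float(truncated_tenth(bits)):.17f}")

#  4 bits: 0.06250000000000000
#  5 bits: 0.09375000000000000
#  9 bits: 0.09960937500000000
# ... each extra bit gets closer, but no finite number of bits is ever exactly 0.1
```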