I like to remind people who are worried about this that the whole purpose of a hard drive is to store information. Drives would be pretty useless if writing a 0 and then a 1 meant that, after enough reads, the last bit you wrote suddenly leaked through. Imagine a drive that flipped bits depending on how many times you read or wrote them. Ugh, your data would be garbage.
Indeed, but here is the "kicker":
Writing 00s to a drive doesn't REALLY write 00s.
The fact of the matter is that the data is interpreted by the read heads, which detect magnetic flux changes on the surface. Basically, a reversal in polarity equals a 1, and no change equals a 0.
This means that no specific polarity equals a 1 or a 0; it is the change, or lack of change, that represents the binary.
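If you want to see what "transitions, not levels" looks like, here is a tiny Python sketch of NRZI-style recording. This is a simplified toy model for illustration, not the exact scheme any particular drive uses: a 1 bit flips the write polarity, a 0 bit leaves it alone.

```python
# Toy model of transition-based recording (NRZI-style).
# A 1 bit = flip the magnetic polarity, a 0 bit = keep it.
# Simplified for illustration, not a real drive's exact scheme.

def to_flux(bits: str) -> list[int]:
    """Turn a bit string into a sequence of magnetic polarities (+1/-1)."""
    polarity = 1
    flux = []
    for b in bits:
        if b == "1":
            polarity = -polarity  # flux reversal -> reads back as a 1
        flux.append(polarity)     # no reversal  -> reads back as a 0
    return flux

print(to_flux("1010"))  # [-1, -1, 1, 1]
print(to_flux("0000"))  # [1, 1, 1, 1] -- no transitions at all,
                        # so nothing for the read clock to sync on
```

Notice the second example: a long run of 0s produces a featureless stretch of constant polarity, which is exactly the "read head loses its way" problem described below.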
A problem arises because the read heads run into two serious issues. If a string of "no change" is too long, the read heads cannot properly interpret the data; basically the read clock loses its way. And if there are too many flux reversals packed into one area, the tightly spaced magnetic domains interfere with each other, which is obviously BAD for data storage.
But software commonly has long strings of 1s or 0s in its coding. So the hard drive actually re-encodes the data into something it can safely use, then decodes it on the fly when it is read off the platter.
This is where RLL (Run Length Limited) encoding comes in. HDDs still use an evolved form of RLL, which basically means the drive uses its own binary, built on the rule that the 1s (flux reversals) can't be too close together, and you can't have too many 0s in a row between them either. In classic RLL(2,7), for example, there must be at least 2 and at most 7 zeros between any two 1s.
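Here is a sketch of that idea using the classic RLL(2,7) code table. Modern drives use more elaborate descendants, but the principle is identical: chunks of user bits are swapped for longer "channel bit" code words designed so the runs always stay within safe limits.

```python
# Sketch of classic RLL(2,7) encoding. Modern drives use fancier
# descendants, but the principle is the same: re-encode user bits so
# runs of "no transition" (0s) stay within safe limits on the platter.

RLL_2_7 = {          # user bits -> channel bits (twice as many)
    "10":   "0100",
    "11":   "1000",
    "000":  "000100",
    "010":  "100100",
    "011":  "001000",
    "0010": "00100100",
    "0011": "00001000",
}

def rll_encode(bits: str) -> str:
    """Greedily parse the input against the (prefix-free) RLL(2,7) table."""
    out, i = [], 0
    while i < len(bits):
        for length in (2, 3, 4):
            chunk = bits[i:i + length]
            if chunk in RLL_2_7:
                out.append(RLL_2_7[chunk])
                i += length
                break
        else:
            raise ValueError("input ends mid-codeword; real encoders pad")
    return "".join(out)

def runs_ok(channel: str, d: int = 2, k: int = 7) -> bool:
    """Check every gap between consecutive 1s has between d and k zeros."""
    runs = channel.split("1")
    return all(d <= len(r) <= k for r in runs[1:-1])

encoded = rll_encode("1011000")
print(encoded)           # 01001000000100
print(runs_ok(encoded))  # True -- every gap between 1s is 2..7 zeros
```

Note the trade-off: the channel stream is twice as long as the user data, but the drive can pack channel bits much more densely precisely because the reversals are guaranteed to be spaced out, so it comes out ahead overall.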
So when you write 00s to a drive, it is actually not writing all 00s at all.