ZFS mitigates some failure modes of RAID 5 better than plain RAID 5, but it does nothing for drive failure and UREs.
I agree that it does nothing extra versus RAID5 to cope with disk failure.
But UREs? Of course ZFS protects against those. That is the whole point!
Even on a single disk without any redundancy (no mirror/RAID-Z), ZFS partly protects against UREs because of its redundant metadata: any URE that hits a sector used by metadata is instantly corrected by ZFS. The same applies to data stored with copies=2. So you do not need volume redundancy to protect against UREs. But copies=2 on your data can be very 'expensive', and it protects only against corruption/bad sectors (UREs), not against disk failure.
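To make the copies=2 idea concrete, here is a minimal sketch in Python. The structures and function names are made up for illustration (this is not ZFS's actual on-disk format): each block is stored twice with a checksum, and a read that finds one copy corrupt heals it from the good copy.

```python
import hashlib

def write_block(data: bytes) -> list:
    """Store two independent copies of the block, each with a checksum."""
    checksum = hashlib.sha256(data).hexdigest()
    # In ZFS the two copies would be placed far apart on the disk.
    return [{"data": data, "sum": checksum},
            {"data": bytes(data), "sum": checksum}]

def read_block(copies: list) -> bytes:
    """Return good data, healing any corrupt copy from a verified one."""
    for copy in copies:
        if hashlib.sha256(copy["data"]).hexdigest() == copy["sum"]:
            # Self-heal: rewrite every copy that fails its checksum.
            for other in copies:
                if hashlib.sha256(other["data"]).hexdigest() != other["sum"]:
                    other["data"] = copy["data"]
            return copy["data"]
    raise IOError("all copies corrupt: detected, but not correctable")

# A URE on one copy is caught by the checksum and repaired from the other:
copies = write_block(b"family photo bytes")
copies[0]["data"] = b"garbled by a bad sector!"   # simulate a URE
assert read_block(copies) == b"family photo bytes"
assert copies[0]["data"] == b"family photo bytes"  # first copy healed
```

Note the failure mode in the last line of read_block: with both copies bad, the corruption is still detected (the checksum fails), it just cannot be corrected any more.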
With redundancy, like RAID-Z, you are basically immune to UREs. With one disk failed, you revert to the single-disk behavior described above.
RAID-Z2 can tolerate one failed drive while retaining full protection against UREs, and even with two failed drives it still has full protection against metadata corruption from UREs - so ZFS itself will survive, even if some files do not.
A traditional RAID5 can fail with just two bad sectors, or one missing drive plus one bad sector. This is what Robin Harris' article is all about: after a disk failure you begin rebuilding your array, and because this takes a long time and stresses the disks, any remaining disk with a bad sector might cause the entire array to be considered failed by the RAID controller. Unless you have TLER disks, the disk with the bad sector will also be dropped. Now you are in the middle of rebuilding a RAID5 while one of the member disks is missing, which means that without expert help to bring that disk back online, you cannot access your data. While recovery is possible, many home users simply forfeit their data, re-create the RAID array and the filesystem, and start from scratch.
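The numbers behind that argument are worth seeing. This is a back-of-the-envelope sketch with assumed figures (a consumer disk specced at one URE per 1e14 bits read, and a 4x4TB array as the example); rebuilding a degraded RAID5 must read every sector of every surviving disk, so the odds of hitting at least one URE climb quickly with array size.

```python
URE_RATE = 1e-14   # unrecoverable read errors per bit read (spec-sheet figure)

def p_ure_during_rebuild(bytes_to_read: float) -> float:
    """Probability of at least one URE while reading this many bytes."""
    bits = bytes_to_read * 8
    # P(at least one URE) = 1 - P(no URE on any single bit read)
    return 1 - (1 - URE_RATE) ** bits

# Rebuilding after losing one disk of a 4x4TB RAID5 means reading ~12 TB
# from the three survivors:
p = p_ure_during_rebuild(12e12)
print(f"{p:.0%}")   # roughly a 60% chance of hitting a URE mid-rebuild
```

With ZFS that URE costs you, at worst, one block of one file; with a dumb RAID5 controller it can cost you the whole rebuild.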
"Backups", I hear you say. But backing up hundreds of gigabytes of not-so-important data is not very feasible for home users. They just want their one storage vault to be reliable.
While ZFS' ability to automatically repair corruption (self-healing) is often praised, I find that simply the detection of corruption is invaluable. For many users, losing one or two files in the worst realistic case is not that bad - if only they KNOW which files. They can re-download them or just consider them lost. But not knowing what you lost is bad, and discovering after all this time that your wedding photos are corrupted is even worse. You may have backed those up a million times, but silent corruption can affect the backups as well. So my point is that data integrity starts with error detection and is followed by error correction.
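Even without ZFS, the detection half can be sketched in a few lines. This is a hypothetical example, not any particular tool: keep a manifest of file hashes, and later scan for files that no longer match, so you at least know which files silent corruption has eaten.

```python
import hashlib
import pathlib

def make_manifest(root: pathlib.Path) -> dict:
    """Record a SHA-256 digest for every file under root."""
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()}

def scan(root: pathlib.Path, manifest: dict) -> list:
    """Return the files whose current contents no longer match the manifest."""
    return [name for name, digest in manifest.items()
            if hashlib.sha256((root / name).read_bytes()).hexdigest() != digest]
```

Unlike ZFS, which checksums every block and verifies on every read, this only notices corruption when you run the scan - but the principle is the same: detect first, then you know exactly what to restore or re-download.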