That data is very interesting. But some information is missing, and it is not clear how it extrapolates to other systems.
To begin with, many of the resilver times are reasonable. Most modern drives can sustain 100-200 Mbyte/s on large sequential reads and writes. The drives used in that test are old 1 TB drives, so they're probably at the lower end of that spectrum. The bottleneck in a simple (single failed drive) resilver should be writing to the new (target) drive, which can proceed continuously and mostly sequentially. A full-capacity pass over that drive should therefore take about 5,000 to 10,000 seconds, which is about 80 to 170 minutes. Given that the file system is only 25% or 50% full, the resilver should take a quarter or half of that, which brings the range to 20-42 minutes for 25% full and 42-83 minutes for 50% full. And most of the times reported fit nicely in that range, at the slower end.
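As a quick sanity check, here is that arithmetic as a small Python sketch; the drive speeds and fill levels are the round numbers assumed above, not measurements from the test:

```python
def resilver_minutes(capacity_tb, write_mb_per_s, fill_fraction):
    """Estimate resilver time in minutes, assuming the rebuild is
    bottlenecked by mostly sequential writes to the replacement drive."""
    capacity_mb = capacity_tb * 1_000_000      # 1 TB = 10^6 Mbyte (decimal units)
    seconds = (capacity_mb * fill_fraction) / write_mb_per_s
    return seconds / 60

# The old 1 TB drives from the test, at 100-200 Mbyte/s sustained:
for rate in (100, 200):
    for fill in (0.25, 0.50):
        print(f"{rate} Mbyte/s, {fill:.0%} full: "
              f"{resilver_minutes(1, rate, fill):.0f} min")
# -> roughly 21-42 min at 25% full and 42-83 min at 50% full
```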
But for the high-redundancy systems (RAID-Z2 and -Z3) with large vdevs, the resilver times get much worse, reaching as much as 252 minutes for something that should take at most 83. Where is the bottleneck? Is ZFS not using enough threads (or not generating enough I/O) to keep the source disks busy? Is the CPU out of horsepower for checksum verification and "parity" (encoding) calculation? Is it a design flaw in ZFS that prevents it from exploiting the parallelism of the source drives? How would this scale to modern disks (similar sequential bandwidth and IOPS, but much larger capacity) and modern CPUs (far more cores for parallelism, and more integer speed)?
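For a rough sense of the scaling question (the figures below are my own assumptions, not from the test): a hypothetical 16 TB drive sustaining ~250 Mbyte/s would already need around nine hours for the best-case sequential rebuild at 50% full, so any per-vdev inefficiency that tripled the 1 TB times would hurt far more at modern capacities.

```python
# Same arithmetic for a hypothetical modern drive (assumed figures, not from
# the test): 16 TB capacity, ~250 Mbyte/s sustained writes, pool 50% full.
seconds = (16 * 1_000_000 * 0.50) / 250
print(f"{seconds / 60:.0f} min ({seconds / 3600:.1f} h)")  # ~533 min, about 9 hours
```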