How often will a drive die (or develop read errors serious enough to crash a process)? Very rarely.
What fraction of all the processes on your computer would cause a crash or require a reboot if they stopped? Only a small fraction; most would not. And swap isn't even used much on most systems, so even if the swap disk dies, most processes will live.
How often do you reboot your computer voluntarily? It depends; my laptop and rack-mounted servers roughly weekly, my home server roughly monthly (mostly because of power outages), YMMV.
How often does your computer crash involuntarily? Mine hardly ever does; my FreeBSD server crashes perhaps once every year or two.
Put in reasonable estimates, and you will find that the improvement in availability from mirrored swap is small.
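To make that concrete, here is a rough back-of-the-envelope calculation in Python. Every number in it (the disk failure rate, the chance a failure actually hits swap and kills something important, how often the machine goes down for other reasons) is a made-up assumption purely for illustration; plug in your own estimates.

```python
# Back-of-the-envelope comparison of how much downtime mirrored swap can
# actually prevent. All numbers are illustrative assumptions.

disk_failures_per_year = 0.03    # assumed annual failure rate of a single disk
p_failure_kills_process = 0.3    # assumed chance a dying swap disk crashes something important
voluntary_reboots_per_year = 50  # roughly weekly, as in the estimates above
other_crashes_per_year = 1       # unrelated involuntary crashes

# Outages per year that mirrored swap could prevent, versus those it cannot.
preventable = disk_failures_per_year * p_failure_kills_process
unavoidable = voluntary_reboots_per_year + other_crashes_per_year

print(f"outages mirrored swap could prevent: {preventable:.3f} per year")
print(f"outages it cannot prevent:           {unavoidable} per year")
print(f"share of all outages it addresses:   {preventable / (preventable + unavoidable):.4%}")
```

With numbers anywhere in that ballpark, the outages mirrored swap prevents are dwarfed by the reboots and crashes that happen for other reasons.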
Now for the other side: what is the cost of mirrored swap, if you already have two disks? A little more disk space, which is irrelevant for most real-world uses (swap is tiny on the scale of modern disks). A tiny bit of work to set it up. A tiny loss of performance (you have to wait for two writes that happen in parallel, which is not much slower than waiting for one of them). Compare that tiny, nearly nonexistent cost to the small benefit, and it is probably still a win. So go do it if you want.
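The performance point (waiting for the slower of two parallel writes is barely worse than waiting for one) can be sanity-checked with a tiny simulation. The latency distribution below is an arbitrary assumption, chosen only to show the shape of the argument.

```python
import random

# Model a mirrored write as finishing when the slower of two parallel
# writes finishes, i.e. its latency is max(X, Y). The Gaussian latency
# model below is an arbitrary assumption for illustration only.
random.seed(1)
N = 100_000
BASE_MS, JITTER_MS = 5.0, 1.0   # assumed typical write latency and spread

def write_latency():
    return max(0.1, random.gauss(BASE_MS, JITTER_MS))

single = [write_latency() for _ in range(N)]
mirrored = [max(write_latency(), write_latency()) for _ in range(N)]

print(f"mean single write:   {sum(single) / N:.2f} ms")
print(f"mean mirrored write: {sum(mirrored) / N:.2f} ms")
```

With any reasonably tight latency distribution, the mean of the maximum of two writes is only slightly above the mean of a single write, which is why mirroring swap costs so little in practice.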
On the question of auto restart: for the most part, no. The System-V init system (used by many Linux distributions before systemd) has the option to auto-restart services that fail (the respawn action in /etc/inittab). This was originally intended for getty, the program that reads the username from a terminal and passes it to login, and which needs to be restarted after the user logs out. The BSD-style rc system does not, in general, restart a service after it crashes. There are also daemons that restart some of their subprocesses (I have one of those at home, but I wrote that code myself).
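For illustration, here is a minimal sketch of such a restart-on-crash watchdog, in the spirit of init's respawn handling. It is not my daemon's actual code, and the command it supervises is a hypothetical placeholder.

```python
#!/usr/bin/env python3
# Minimal restart-on-crash supervisor: run a command, wait for it to exit,
# then start it again after a short pause (crude protection against a tight
# respawn loop, similar in spirit to init's "respawning too fast" check).

import subprocess
import time

COMMAND = ["/usr/sbin/mydaemon", "--foreground"]  # hypothetical service to supervise
BACKOFF_SECONDS = 5

def supervise():
    while True:
        child = subprocess.Popen(COMMAND)
        status = child.wait()  # block until the child exits or crashes
        print(f"child exited with status {status}; restarting in {BACKOFF_SECONDS}s")
        time.sleep(BACKOFF_SECONDS)

if __name__ == "__main__":
    supervise()
```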
Anecdote from long ago: at a job many years back (when dinosaurs roamed the earth), we used an early 4-processor Unix machine as the departmental server for a computer science research group of roughly a dozen people. One day we noticed that nobody had received any mail for over a day. A quick look at the mail log showed that no mail message had entered or left our server (and therefore our group) for a day and a half. That tells you this was a time when e-mail wasn't all that important yet (today I would notice within roughly 15 minutes).

So we looked at the server logs and discovered that the OS had detected the complete failure of one of the four CPUs and, for safety, shut that CPU down. The process that was running on that CPU was killed, and it happened to be sendmail, which explains the absence of mail. But the rest of the computer continued functioning so normally that none of us had even noticed! I guess we weren't very performance-constrained, and nobody happened to look at the blinking yellow light on the front panel.

We simply restarted sendmail and kept running on a 3-processor machine. In the meantime we contacted field service, who told us that a replacement CPU was a "customer-replaceable part", and they mailed us a new one, which arrived by courier about 24 hours later. In those days a "CPU" was a whole PC board, about the size of an A4 sheet of paper. The amazing thing was the instructions: it was not actually necessary to turn the computer off, or even reboot it, to take the broken CPU out and put the spare in; it could be hot-swapped and then enabled with a command. We decided we were too chicken to try that, shut the machine down, put the spare in, and rebooted. How is that for availability and uptime?
Same site, same machine, about a year earlier: when I interviewed there for the job, the manager of the group gave me a little tour of the computer room, including the group's main server (see above). They had one of those new-fangled RAID arrays with many disks (probably 10 or 20, which was considered a heck of a lot in those days). And to show it off, he pulled a disk physically out of the array and handed it to me, still warm; I handed it back (a little astonished), he plugged it back in, and everything kept functioning. In his defense, the disk was not pulled from the production array connected to the main server, but from a second RAID array used on an experimental system. Still, having the guts to pull a disk from a running system amazed me.
This was about 20 or 25 years ago. Even back then, machines with really good OSes and really good availability already existed.