I still wonder whether it wouldn't be relatively easy to tune such periodic scripts to play nicer, or to run fewer of them in parallel, even if they then take longer in elapsed time? But I'm out of time to test ...
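For what it's worth, a minimal sketch of that idea (the job commands below are placeholders, not real script names): lower the jobs' CPU priority and run them serially instead of in parallel.

```python
# Hypothetical sketch: make periodic batch jobs "play nicer" by lowering
# their CPU priority and running them one after another instead of in
# parallel (longer elapsed time, but less competition at any instant).
import os
import subprocess

def run_nicely(cmd):
    """Run one batch job at lowest CPU priority, waiting for it to finish."""
    def demote():
        os.nice(19)  # add 19 to the niceness (lowest CPU priority);
                     # on Linux, wrapping the command in `ionice -c3`
                     # would additionally drop its I/O priority
    return subprocess.run(cmd, preexec_fn=demote)

# Serialize the jobs instead of letting them all start at once:
for job in [["echo", "updatedb would run here"],
            ["echo", "nightly backup would run here"]]:
    run_nicely(job)
```

Lowering CPU priority alone won't fix a disk-bound job, of course; the I/O-priority part (and not running the jobs concurrently) is what would matter here.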
A typical find process (such as locate, or backup software, or virus scanning or intrusion detection) is very IO intensive. It traverses the directory tree, reading each directory, then reading the metadata for each file (size, mtime, ownership), and often reading each file as well. It uses very little CPU time but keeps the disk very busy. The usual implementation is single-threaded and issues one IO at a time to the disk. Most of these IOs are small reads (a few kB at a time) and mostly random, so the disk is very inefficient, spending most of its time seeking rather than transferring data; on a spinning disk, each such IO typically takes around 10ms.
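The access pattern can be sketched like this (a simplified single-threaded scanner, not any particular tool's implementation):

```python
# Simplified single-threaded scanner with the same I/O pattern as a
# find/locate/backup pass: one readdir per directory, one stat per file,
# and optionally the file contents in small (few-kB) reads.
import os

def scan(root, read_files=False, chunk=4096):
    """Serially traverse `root`; return (n_dirs, n_files, bytes_read)."""
    n_dirs = n_files = bytes_read = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        n_dirs += 1                       # reading the directory itself
        for name in filenames:
            path = os.path.join(dirpath, name)
            os.stat(path)                 # metadata: size, mtime, owner
            n_files += 1
            if read_files:
                with open(path, "rb") as f:
                    while True:
                        buf = f.read(chunk)   # small read: on a spinning
                        if not buf:           # disk, mostly seek time
                            break
                        bytes_read += len(buf)
    return n_dirs, n_files, bytes_read
```

Every call here maps to one small disk IO at a time; nothing is batched or issued concurrently, which is why the disk stays 100% busy yet delivers very little throughput.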
When a foreground task (like a human trying to get some work done) now competes with this workload, it will typically get about 50% of the disk's time. If the foreground workload is small sequential IOs (which are normally served in ~1ms), each of them may now wait behind a ~10ms random IO from the scanner; latency increased roughly tenfold feels very slow.
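Rough arithmetic under those assumed numbers (~1ms sequential foreground IO, ~10ms random background IO, fair alternation between the two):

```python
# Back-of-envelope latency estimate; the numbers are illustrative
# assumptions, not measurements.
fg_io_ms = 1.0    # small sequential foreground read
bg_io_ms = 10.0   # random background read (seek-dominated)

# With strict alternation, each foreground IO first waits for one
# in-flight background IO to complete:
fg_effective_ms = fg_io_ms + bg_io_ms
slowdown = fg_effective_ms / fg_io_ms
print(slowdown)   # 11.0 -- roughly a tenfold latency increase
```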
Note that none of this has to do with swapping: there is no excessive memory usage, and no memory has to be paged to or from disk. But the symptoms feel the same: operations that used to be really fast are suddenly slow, and the disk is 100% busy. The OS has several mechanisms that help here, for example prefetching in the file system and IO scheduling. But ultimately, the system has to find a compromise between two conflicting goals: making background batch workloads (such as the locate/find/backup/... software) run efficiently with high throughput, while giving foreground workloads (user tasks) low latency. That compromise is particularly hard when both batch and foreground are low intensity (only one IO in flight at a time).