Sigh. This is an extremely complicated topic. In the following, I will just use Ananas and Banana as fictitious brand names for disk drive makers, since I have too many friends who work for Seagate, WD, Hitachi and so on, and I don't want to insult any of them.
First aspect: Reliability, dependability, durability, availability, all that stuff. You make the argument that having disks whose failures are not correlated with each other will improve things. Good argument. It also demonstrates that you have understood that correlated failures or multiple faults are the death of RAID. Fabulous. BUT: in a small RAID system (a handful of disks), the statistics just aren't good enough to make it matter.
Think of correlated failures as a manufacturing defect that shows itself at a specific time.

Example 1: You have a 1000-disk RAID array, made up of 500 Ananas and 500 Bananas. The array is configured so it can handle 10 simultaneous faults. Due to a known correlation, you know that on Monday, the 54th of Octuary (a day that only exists in Covid-leap-years), every Banana disk has a 1% chance of failing (the normal failure probability for a good-quality disk is much lower, so this is highly significant). So you expect 5 Bananas to fail that Monday. It won't be a good day (you will make 5 trips into the cold and windy data center to replace disk drives, and your RAID will be doing massive rebuilds), but you and your data will survive.

Example 2: Now you have a 4-disk RAID array (two Ananas, two Bananas) that can tolerate one fault. On that Monday, there is a ~2% chance that at least one of the two Bananas will fail (which will be survivable, but you will be wetting your pants; hope you have no single-sector error on the other 3 disks), and a 0.01% chance that both fail (and you lose your file system, and probably your job). Honestly, this is really not much worse than any other day; 0.01% is not significant.

Example 3: We're back to the 1000-drive array, but you stupidly configured it so it can only handle 2 faults. That Monday, 5 Bananas fail at once, and you get fired. Honestly, you deserved that, because configuring a 1000-disk array to handle only 2 faults was dumb to begin with.
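For what it's worth, the back-of-envelope numbers in these examples are easy to check. Here's a minimal Python sketch (the 1% per-disk failure probability is the made-up figure from the examples, not a real-world number):

```python
# Back-of-envelope check of the example numbers above.
# p is the assumed per-disk failure probability on that unlucky Monday.
p = 0.01

# Example 1: 500 Banana disks, each failing independently with probability p.
expected_failures = 500 * p            # 5 expected failures that day

# Example 2: a 4-disk array containing two Banana disks.
at_least_one_fails = 1 - (1 - p) ** 2  # ~2% chance (survivable)
both_fail = p ** 2                     # 0.01% chance (array lost)

print(expected_failures, at_least_one_fails, both_fail)
```

The point of the arithmetic: the "scary" correlated event in a small array adds a 0.01% chance of data loss, which disappears into the noise of ordinary failure rates.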
Now let's do an extreme example. Instead of a 1% failure probability, let's really screw up the works. On Thursday you come to work, and find in your e-mail that there is a firmware update for your Ananas disks. Unfortunately, you don't check the e-mail address carefully: it is actually from the Elbonian spy agency, and that firmware update will brick all your Ananas disks (for reasons that can be found in a Dilbert cartoon, the Elbonian software engineers really hate your company). You apply the forged firmware, and lose half your disk drives. Very likely, your RAID is now dead - dee ee dee (that's a joke from some comedy movie) - since no practical high-capacity RAID system can tolerate the loss of half its drives. The fact that the Bananas are alive won't save you. Sucks to be you. Now, if you had bought disks from more models and manufacturers (Cherry, Date, Elderberry, Fig and Grape), and only lost 14% of the disk drives when the Elbonians hacked your Ananas, you might be alive ... but there are no 7 independent disk drive makers in the world any longer (I think there are 3 left, and I'm not sure how independent Toshiba is).
So: In reality, the problem of correlated failures really makes no difference, since massive correlations are deadly anyway, and small correlations are irrelevant compared to normal failure rates. In reality, it is much more likely that an unexpected correlation gets you. For example (real-world experience from a previous job): When shipping a disk system with ~3000 drives, we experienced a failure rate of several percent in the first week or so. Several percent of 3000 is a relatively big number (hundreds), so field service and spare-parts logistics became overwhelmed, the system collapsed, and the customer was not perfectly happy. The root cause was a combination of a manufacturing defect, quality control failing twice (at two different companies), and, for budgetary reasons, the system being sold and shipped on December 31st, even though the usual supply of (well quality-tested) disk drives was exhausted. This was a disaster, and fixing it ended up costing thousands of man-hours and millions of dollars. The common thread: all 3000 disks in the system came from one manufacturing run, one shipment, just a few pallet loads of drives, all of which had skipped quality control together (it didn't help that there was a company VP standing there screaming that this thing needs to get shipped TODAY or else we won't make our revenue numbers for this fiscal year). That's the kind of correlated failure that really burns people. And this was from a respectable and conscientious vendor! I have other horror stories: like the big shipment of disks that was stored in a tropical climate in a non-air-conditioned warehouse in a city that's infamous for the sulfur smell in the air, for half a year during the monsoon season, and those disks were never reliable afterwards.
Or the disks that got deep-frozen when the field service technician in Canada got caught in a snow storm, slid off the highway, and had to be rescued (probably by the Royal Canadian Mounted Police on horses), while his van and the disks spent the night stuck in a snow bank (strangely, they did fine after thawing). You want to work around this kind of effect? Buy a little bit from each possible vendor and each possible model, from different sources that were not on the same truck together, or were not baked in the same warehouse. Even better: buy two disks every week for half a year, so you spread the risk over manufacturing weeks. In practice, this kind of countermeasure is so impractical at the individual level that you should just forget it. Buy good-quality drives that are intended for your usage, and get on with life.
Second aspect: Performance. I will deliberately ignore what you said about vibration and noise, since I know little about it. I don't think that mixing drives will cause any of them to fail faster. But it will have an effect on performance. Most RAID implementations are mostly synchronous on an IO-by-IO basis. If your RAID is for example RAID-6 with an array size of 8 (meaning 8 data disks and two redundancy disks), then the RAID code will usually issue 8 or 10 IOs at the same time (8 for reads, 10 for writes), and wait until they're done. That is: wait until the slowest of the 8 or 10 is done. This is known as the convoy effect ... a convoy moves at the speed of the slowest ship in the convoy. So you should not build a RAID array made of 99 fast and expensive disks and 1 slow and cheap disk ... the $$$ you spent on getting the fast disks is wasted. As a matter of fact, since workloads are variable, you should not even mix disks that have the same average performance. For example, if your Ananas are really good on sequential throughput, and your Bananas good on random seeks, and both suck on the other workload (making both of them mediocre on average), then at any given workload, your performance will suck. Nice theory. In practice, if you use disks of similar generations (same number of platters, same spindle RPM, similar sequential MB/s and similar seek time), then the difference between brands will likely be 10-20%, which is about similar to the difference between individual drives, and the change in performance of individual drives over weeks or months. So just don't worry about it; your RAID system performance is unpredictable by 10 or 20% anyway.
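The convoy effect is easy to see in a toy model (this is a sketch, not real RAID code; the latency numbers are invented for illustration):

```python
# Toy model of the convoy effect: a full stripe of parallel IOs is only
# done when the slowest member disk is done (e.g. the 10 IOs of a
# RAID-6 full-stripe write with 8 data + 2 redundancy disks).
def stripe_latency(per_disk_latencies_ms):
    """The stripe completes when the slowest of the parallel IOs completes."""
    return max(per_disk_latencies_ms)

# Nine fast disks (~5 ms per IO) plus one slow disk (~20 ms per IO):
nine_fast_one_slow = [5.0] * 9 + [20.0]
print(stripe_latency(nine_fast_one_slow))  # the slow disk sets the pace
```

Even though 90% of the disks are fast, every synchronous stripe operation runs at the slow disk's speed, which is exactly why mixing one cheap disk into an expensive array wastes the money spent on the fast ones.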
Side remark: There are RAID systems that will measure the performance of individual disks, and either steer more IOs to faster disks, or put less data on slower disks, or ask the user to replace disks that are so slow that they cause problems, or deliberately attempt extra IOs ahead of time, if they can foresee that some disks will be slow. With these tricks, you can squeeze good performance out of arrays of quite dissimilar drives, even over the long term, as disks age and change. But those technologies are not available in free software or consumer RAID systems.
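As a sketch of what such an adaptive system might do (hypothetical logic, not any particular product's algorithm), one simple approach is to give each disk a share of new IOs proportional to the inverse of its measured latency:

```python
# Hypothetical sketch: steer proportionally fewer IOs to slower disks,
# based on measured average latency (disk names and numbers are invented).
measured_latency_ms = {"disk0": 5.0, "disk1": 5.0, "disk2": 10.0}

# Weight each disk by the inverse of its latency, then normalize
# so the weights become each disk's share of incoming IOs.
weights = {d: 1.0 / ms for d, ms in measured_latency_ms.items()}
total = sum(weights.values())
io_share = {d: w / total for d, w in weights.items()}

print(io_share)  # disk2 is half as fast, so it gets half the share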
My advice: Do whatever is convenient. And keep good backups, and configure your RAID array to handle one extra fault (one more than you expect statistically, because Murphy). Buy good-quality disk drives that are appropriate for your workload, following the advice from the disk vendor (don't use consumer drives in a NAS, don't use NAS drives in a supercomputer, don't use supercomputer drives in a consumer desktop, and so on). And keep good backups. Did I mention that backups need to be part of your durability strategy?