The problem isn't the BIOS; it is the second power supply in the second computer. ATX power supplies don't have a simple on/off switch any more (unlike the old AT power supplies); instead they are turned on by a signal from the motherboard, which arrives through the 20-pin connector. Unfortunately, in your case that motherboard connector is already occupied by the first power supply. Fortunately, it is easy to override this: there is a dedicated wire on the 20-pin connector (the one you will use in the second computer), the green PS_ON# wire, which simply needs to be connected to ground (any black wire) to turn the power supply on. Look on the web; there are lots of instructions for how to connect a simple switch there, and a quick reference sketch follows below.
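For reference, the relevant pins on a standard ATX 20-pin connector are these (double-check against your supply's wire colors, since a few manufacturers deviate from the standard):

```
Pin 14          PS_ON#  (green)   - pull to ground to turn the supply on
Pins 15, 16, 17 COM     (black)   - ground, conveniently right next to pin 14
```

A toggle switch wired between pin 14 and any COM pin gives you a manual power switch; the classic quick test is to bridge the two with a bent paperclip.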
To your question: a normal motherboard BIOS doesn't "recognize" power supplies at all. It simply closes a circuit (pulling that PS_ON# wire to ground) to turn the power supply on, and opens it to turn it off. The "chicken and egg" question is: how does the BIOS do that, i.e. where does it get power to make that decision (it is a complicated decision, which may require a small CPU)? It turns out normal ATX supplies have a very small power supply built in, the standby rail, which permanently delivers a little power to the motherboard, enough to run a small service processor that manages turning the main power supply on when needed. But the BIOS or the motherboard typically doesn't know the wattage or the serial number or the model of the power supply; it has exactly 1 bit of control: off and on. A conceptual sketch of that logic follows below.
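To make the one-bit control concrete, here is a deliberately simplified Python model of what the standby-powered service processor does. This is an illustration of the logic only, not real firmware; the event and function names are made up for the example:

```python
# Conceptual model of the always-on service processor on an ATX board.
# It runs off the standby rail, so it is alive even while the main
# supply is "off", and its only control over the supply is one signal:
# PS_ON# (active low: grounded = supply on, floating = supply off).

class StandbyController:
    def __init__(self):
        self.ps_on_asserted = False  # False = main supply off

    def on_event(self, event: str) -> None:
        # Wake sources that should power the machine up.
        if event in ("power_button", "wake_on_lan", "rtc_alarm"):
            self.ps_on_asserted = True   # pull PS_ON# to ground
        # Requests that should power it down (e.g. ACPI soft-off).
        elif event in ("soft_off_request", "long_button_press"):
            self.ps_on_asserted = False  # release PS_ON#

    def ps_on_line(self) -> str:
        # The single bit the motherboard presents to the supply.
        return "grounded (supply ON)" if self.ps_on_asserted else "floating (supply OFF)"


ctrl = StandbyController()
ctrl.on_event("power_button")
print(ctrl.ps_on_line())   # grounded (supply ON)
ctrl.on_event("soft_off_request")
print(ctrl.ps_on_line())   # floating (supply OFF)
```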
With expensive servers, this gets much more complicated. Those typically have two power supplies built into the case, for redundancy (so one power supply can be removed and replaced with a spare while the computer keeps running). They typically also have hot-swappable fans. The two power supplies are designed so that any one of them can handle the whole load; in normal operation they are both lightly loaded, which is good because it keeps things cool and makes them live a long time (heat is the enemy of capacitors and moving parts). The two power supplies are typically connected to separate power distribution grids; this allows electricians to turn power off to one side and work on repairs and upgrades without disrupting server operations.
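As a worked example (the numbers are made up for illustration): a server that peaks at 500 W might be fitted with two 750 W supplies. With both present, each carries about 250 W, a third of its rating, so both run cool; if one fails or is pulled, the survivor takes the full 500 W, still comfortably within its 750 W rating.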
High-end servers typically also have connections to "expansion boxes" (such as disk enclosures, which is what you are really building here), though there are many other kinds of expansion boxes. In the old days, dedicated cables ran from the server to those extra devices, which allowed the server to communicate with the expansions, turn them on and off, monitor temperatures and fans, and so on. Today that is typically done by connecting everything to a completely separate management network: each device in the data center has an extra ethernet port, which is connected to a separate ethernet fabric, and this is used by centralized management infrastructure. If implemented correctly, this can make for extremely reliable servers, because the management system delivers power, cooling, and networking to the servers at all times, using redundant infrastructure.
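As a small illustration of what that management network is used for, here is a Python sketch that asks a server's baseboard management controller (reachable only on the management fabric) for its power state, using the standard ipmitool utility. The hostname and credentials are placeholders; adjust them for your environment:

```python
# Query a server's power state over the out-of-band management network.
# Requires ipmitool to be installed; the BMC address and credentials
# below are placeholders for this example.
import subprocess

def chassis_power_status(bmc_host: str, user: str, password: str) -> str:
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", user, "-P", password, "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # e.g. "Chassis Power is on"

if __name__ == "__main__":
    print(chassis_power_status("bmc-rack12-node3.mgmt.example.com",
                               "admin", "secret"))
```

The point is that this query works even when the server's main supplies are off, for exactly the same reason the ATX standby rail works: the management controller has its own small, always-on power source.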
Note that these techniques are typically used for commercial computing, with high-availability servers and services; you would see this kind of hardware in the data center of a bank. The big internet companies (Facebook, Apple, Google, Amazon, Alibaba) run their machines very differently: instead of trying to make each "computer" (server and external enclosures) as reliable as possible, they simply replicate services world-wide and accept that individual machines, or whole groups of machines, are inherently unreliable. There is a story of a whole giant data center in Europe going offline for several days (due to an animal getting into the power distribution system and exploding spectacularly), but customers never noticed anything other than a sub-second delay as work was routed to other locations.
For your purposes, just read up on the web about how to put a switch on the second power supply, and you're done. Search for "ATX power supply switch external" or some combination of such words.