Wrong. Or perhaps a little bit right. But in computer security, "a little bit" doesn't cut it.
Your first step needs to be: think about what you want to accomplish. How important is your data? How big is the damage if it gets revealed? Will the economic cost to you be $100, $1M, or $10B? If the answer is "small", then stop. Don't waste your (and our) time trying to implement something that will most likely have more holes than Swiss cheese. If your answer is "big", then hire experts to do it. Your best bet at that point is probably to outsource your computing, or at least its management, to an experienced and trustworthy provider. That will cost you something, but if your data is worth a few billion, then managing it is worth some money. In the middle is a painful grey zone, where it is desirable to protect your data, but the data isn't valuable enough to justify spending lots of money protecting it.
Second step: think about likely attack vectors. Who is after your data? If you have made enemies of a major state actor (the US, Russian, or Chinese government, for example), then stop. Nothing an amateur can do will protect you against a concerted attack by their law-enforcement agencies. Similarly, if you hold secrets of national-security importance, you can safely assume that a non-existent agency with a name like [CN][IRS][AO] or similar alphabet soup is using "national technical means" (that's a term of art) to find out what you're doing. What this leaves to guard against is commercial theft of data (usually by pretty clueless thieves) and preventing embarrassing mistakes. So: are you worried about someone coming and grabbing your server? About someone hacking into it? Spying on the network connection to it (for example via your unsecured wireless)? Or are you only worried about a disk going into the trash?
Encrypting disks helps against a very small number of attack vectors. It uses either hardware encryption (commonly called SED, for self-encrypting drives, or T10, after the INCITS committee that develops the SCSI standards) or software encryption (geli is one example). It helps when a disk has been separated from the system and is found powered down. It does not help while the disk is powered up and in use, because then the channel is open, and an attack through the server is the easiest path. It also only helps if the attacker is guaranteed not to get their hands on the key. So you need physical security against an attacker reaching your running server, or stealing it together with whatever thing holds the key.
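To see why "powered up and in use" is the weak moment, here is a toy sketch (Python, stdlib only; the SHA-256 keystream is a deliberately simplified stand-in for real disk ciphers like AES-XTS, and the sample data is made up — do not use this construction for anything real):

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 of key || counter. Illustration only --
    # real disk encryption (geli, SEDs) uses AES-XTS, not this.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

key = os.urandom(32)                 # held in RAM while the volume is attached
sector = b"confidential payroll data 2024!!"
on_disk = xor(sector, keystream(key, len(sector)))

# Powered down, key absent: the thief sees only noise.
assert on_disk != sector
# Powered up, key in memory: the channel is open; whoever owns the
# running server gets the plaintext for free.
assert xor(on_disk, keystream(key, len(on_disk))) == sector
```

The point of the two assertions: encryption at rest changes nothing about a compromise of the live server, because the server itself holds the key.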
But: using disk encryption only to guard against people reading your disk after you throw it in the trash is difficult and inefficient. There is a much simpler way to protect against that: take a hammer to the disk before putting it in the trash. So for the rest of this discussion, let's assume that security of discarded media is not relevant.
The security of a storage encryption system depends crucially on the security of the key. If someone can get the key, the game is over. I used to work on a system that was able to encrypt its disks, and our joke was that using hardware-encrypting drives is like closing the bathroom window in the barn after the horse has left. Most horses vanish because the barn door is open, or because a horse thief opens the barn door, which isn't locked. Protecting against the bizarre attack where the thief crawls through the bathroom window and opens the barn door from the inside only makes sense after you have secured everything else; in the case of storage, that means physical and network security.

What we conclude from this is: never store the key in an insecure location. Putting it on a USB stick that's physically in the server at all times is just dumb. It's like hanging the key to the barn door on a nail next to the door, outside. The correct place to store the key is on a system that's intrinsically more secure than the thing you are protecting. For small systems, the best option is a human who has to enter the key, where the key is used once to unlock and then discarded. That's somewhat secure, but inconvenient. To make it more secure, use multiple factors (say, a text password plus a touch key or fingerprint reader), and encrypt the key itself in transit (give the human an encrypted key store; those can be bought commercially).

For large systems, the only practicable solution is a key management system that is itself physically secure and produces keys when needed, over secure communication channels. To see how that is done, read about Kerberos, and about how one physically protects Kerberos servers (the original MIT ones were in special steel cages in the data center).
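The small-system pattern above ("human enters the key, use it once, discard it") can be sketched like this (Python, stdlib only; the salt value and iteration count are illustrative assumptions, not recommendations, and in real use the passphrase would come from `getpass.getpass()` rather than a literal):

```python
import hashlib

def unlock_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # Derive the actual volume key from the typed passphrase. The passphrase
    # never touches disk, and the derived key lives only in RAM.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

# At boot: ask the human, unlock, then forget.
salt = bytes.fromhex("d0e1f2a3b4c5d6e7")   # stored on the machine; need not be secret
key = unlock_key("correct horse battery staple", salt)  # getpass.getpass() in real use
# ... attach the encrypted volume using `key` here ...
key = None  # discard; never write the key anywhere
```

The salt only ensures that two machines with the same passphrase get different keys; the slow iterated hash is what makes brute-forcing the typed passphrase expensive.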
Key management for storage is big business (there is a handful of specialized companies that do it, plus groups in big companies like EMC, IBM, Google, HP and so on); this is not something for amateurs. By the way, I would never implement something like that myself either: while I have worked in such groups, it's way too complex and intense for one person to do on their own.
Your last question was: is it OK to use the same key on multiple disks? In theory, no. Security people always advise exposing a key as little as possible, to keep the attack surface small. Every time the key travels or is used, there is a risk of it being exposed. In practice, compared to all the other risks, the exposure of the key itself is minor, and sharing it among disks is the least of your problems. Personally, my advice would be: use geli, keep the key in your brain, do not store it in cleartext anywhere (not on a USB stick, not on a yellow sticky note attached to the computer), and type it in every time the system boots. Or look into the various hardware security devices (YubiKey, RSA tokens, Titan keys, ...) to handle the key. Or just put your data with a cloud provider, where it will be much more secure than anything you can do yourself.
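If you want one memorized secret but are uneasy about putting the literally identical key on every disk, a common compromise is to derive a distinct per-disk key from one master secret. A minimal sketch (Python, stdlib HMAC; the disk labels, salt, and "disk-key:" prefix are made-up illustrations):

```python
import hashlib
import hmac

def disk_key(master: bytes, disk_id: str) -> bytes:
    # One master secret, one derived key per disk. A key recovered from
    # one disk tells an attacker nothing about its siblings' keys, and
    # the master never needs to leave the machine deriving them.
    return hmac.new(master, ("disk-key:" + disk_id).encode(), hashlib.sha256).digest()

# Master secret derived from the typed passphrase (see the boot-time pattern above).
master = hashlib.pbkdf2_hmac("sha256", b"typed passphrase", b"salt01", 200_000)
k1 = disk_key(master, "ada1")
k2 = disk_key(master, "ada2")
assert k1 != k2                         # per-disk keys differ
assert k1 == disk_key(master, "ada1")   # but are reproducible at every boot
```

You still only remember one passphrase, yet no two disks share a key, which keeps each key's exposure as small as the advice above asks for.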