Does each rack-mounted server have an OS?

Does every server in a rack system have an OS installed locally? Like the rack systems you see in businesses like data centers and such.

I guess I'm trying to ask if you can have a computer (a box) without an OS on it that is being controlled by a remote computer.

Sorry if this is a dumb question.
Thanks!
 
Does every server in a rack system have an OS installed locally? Like the rack systems you see in businesses like data centers and such.
Usually, yes.

I guess I'm trying to ask if you can have a computer (a box) without an OS on it that is being controlled by a remote computer.
If it doesn't have an OS on it, it's just a piece of dead metal. Even if it's being controlled remotely, there still needs to be something on the receiving end to understand what to do with those commands. How that OS is started is another matter: it can be on a locally attached disk (hard disk, SSD, or even a USB stick), or the machine can be 'diskless', meaning it boots off the network using PXE and NFS, for example.
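Just to make that concrete, here is a toy Python sketch of what "something on the receiving end" minimally means. Nothing here is production-grade: the port, the one-command "protocol", and the blind use of shell=True are all made up for illustration. Without an OS there would be nothing to run even this:

# Toy "agent": the receiving end has to be running *something* that can
# understand and act on remote commands. Port and protocol are made up.
import socket
import subprocess

HOST, PORT = "0.0.0.0", 9999   # hypothetical listening address

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    with conn:
        cmd = conn.recv(1024).decode().strip()   # e.g. "uptime"
        # Insecure by design -- this only illustrates that some software
        # must exist locally to interpret and execute the command.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        conn.sendall(result.stdout.encode())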
 
I can't help but wonder how Google's computer systems work... It amazes me that I can get a reply to a Google search in a fraction of the time it takes me to find a file on my local system. I wonder if Google's computer ever comes up with Disk Full error msg on the operator's console... :)
 
I can't help but wonder how Google's computer systems work... It amazes me that I can get a reply to a Google search in a fraction of the time it takes me to find a file on my local system. I wonder if Google's computer ever comes up with Disk Full error msg on the operator's console...
Unfortunately I've never looked "behind the scenes" (would love to have a look) but I imagine they use clusters of machines, in which case a single machine can break without interfering with operations.
 
You can have a server-class BIOS installed which allows remote control for the purpose of accessing the console (KVM) and other BIOS functions.
 
You can have a server-class BIOS installed which allows remote control for the purpose of accessing the console (KVM) and other BIOS functions.
You mean IPMI, DRAC, iLO and a few more? Those really only allow you to remotely control a machine, but if that machine doesn't have an OS you can't do anything besides change some UEFI/BIOS parameters. The machine still requires an OS to be functional.
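For what it's worth, here is a rough sketch of what that out-of-band access looks like in practice, driving the real ipmitool CLI from a Python wrapper. The BMC address and credentials below are placeholders; the point is that power control works even with no OS on the host, but it can't make the host do anything useful:

# Hedged sketch: out-of-band power control via ipmitool. The BMC is its own
# little computer, so this works with no OS installed on the host at all.
import subprocess

BMC_HOST = "10.0.0.50"      # hypothetical BMC/IPMI address
BMC_USER = "admin"          # placeholder credentials
BMC_PASS = "secret"

def ipmi(*args):
    """Run an ipmitool command against the BMC over the LAN interface."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(ipmi("chassis", "power", "status"))
# ipmi("chassis", "power", "on")   # still needs an OS to boot into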
 
Why not have 1 computer with 1 OS, but that has hundreds of CPUs & disks, instead of the other way around? Seems funny to have hundreds of OS's running all in the same data center.
 
Why not have 1 computer with 1 OS, but that has hundreds of CPUs & disks, instead of the other way around? Seems funny to have hundreds of OS's running all in the same data center.

We call that a "mainframe".

If you start as a small company, you don't need a mainframe. So you start with Linux/FreeBSD/UNIX servers and begin clustering them. The high-end clustering software is sufficiently advanced that you end up with about the same capabilities for about the same cost as a mainframe, but you were able to grow to get there instead of starting out with a $10,000,000.00 computer.
 
Why not have 1 computer with 1 OS, but that has hundreds of CPUs & disks, instead of the other way around?
Because one computer can break down, taking everything with it. With hundreds of computers you can have dozens that are down while the service itself keeps working. So it's usually better to set up, say, four load-balanced machines instead of one single machine that handles everything. With the four machines I can take one or more offline for updates, for example, or one can have a hardware failure, while the other machines take over the load.
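A minimal sketch of that idea in Python, assuming four hypothetical backends with a /health endpoint (both the hostnames and the endpoint are made up): pick the next machine in round-robin order and simply skip any that don't answer, so the service stays up while individual boxes are down:

# Sketch of four load-balanced backends: round-robin, skipping dead ones.
import urllib.request
from itertools import cycle

BACKENDS = ["http://app1.example:8080", "http://app2.example:8080",
            "http://app3.example:8080", "http://app4.example:8080"]
_pool = cycle(BACKENDS)

def healthy(url):
    """Naive health check: does the backend answer at all?"""
    try:
        urllib.request.urlopen(url + "/health", timeout=1)
        return True
    except OSError:
        return False

def pick_backend():
    """Return the next healthy backend; one machine being down is fine."""
    for _ in range(len(BACKENDS)):
        url = next(_pool)
        if healthy(url):
            return url
    raise RuntimeError("all backends are down")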
 
Does every server in a rack system have an OS installed locally?
Usually, one or more: Today, many physical boxes (a sheet metal box with a motherboard and a few daughter cards, and the motherboard has CPUs and memory) run a hypervisor (such as VMware or KVM) and run multiple guest operating systems in there. A few years ago, the record was about 30,000 guest operating systems on a single machine (but it was an awfully big machine, a multi-million-$ monster, and not powered by Intel CPUs).

I guess I'm trying to ask if you can have a computer (a box) without an OS on it that is being controlled by a remote computer.
In nearly all cases, every box (sheetmetal/motherboard/cards/...) will have at least one OS.

In some rare instances (I only know about it in supercomputers), many of the boxes will run an ultra-lightweight OS, which does not have normal facilities (no IO, no login, no user/system split), and they are used as pure compute servers. Actually, I don't know for sure that any such machines are still available for sale, but I think some are still running.

There are rare exceptions; in high-end supercomputers you can have one "OS image" (single node, single kernel running) that is split over multiple boxes (each with CPU chips, memory, and networking gear). This kind of thing is only available in multi-million-$ supercomputers, but I've seen 4 physical boxes, each with 8 motherboards, each with 4 CPU chips (that works out to 128 chips), that form a single computer.

But again, the very rare exceptions I'm quoting are mostly there to prove the rule: Nearly always, a box (meaning CPU + memory + sheetmetal + ...) will have an OS.

Sorry if this is a dumb question.
There are no dumb questions, unless you ask the same question twice (then the second one is dumb).
 
You mean IPMI, DRAC, iLO and a few more? Those really only allow you to remotely control a machine, but if that machine doesn't have an OS you can't do anything besides change some UEFI/BIOS parameters. The machine still requires an OS to be functional.
Some of them also allow you to insert media into virtual devices, which the machine can boot from. And then there is network boot. So many possibilities!
 
Some of them also allow you to insert media into virtual devices, which the machine can boot from. And then there is network boot. So many possibilities!
Absolutely. But the point was, the machine needs an OS to be functional. How you start that OS or where you start it from is irrelevant.
 
Usually, yes.

Except when the "server" in the rack is just a chassis full of disks. :) In which case, that box doesn't have an OS installed on it; there are just cables going from that box to another one that actually has an OS on it, or cables going to a storage switch, which then goes off to the servers.

We have several servers where there's a 2U "head" unit with the OS installed and a bunch of HBAs plugged in. Those HBAs are then attached via external cables to either 2x or 4x JBOD chassis, each with 45 hard drives in them.

There are datacentres where entire racks are nothing but disk chassis, without any local OS installed.

But, yes, usually you need to have an OS installed on a server in order for it to be useful.
 
The latest server I configured allowed an iSCSI volume to be mounted during the BIOS phase and booted from. No need for a local disk if that suits your application.
 
Great thread. I've been thinking about this as I've been installing several flavours of FreeBSD onto several types of machines. This type of optimization could be key to the adoption rate of an OS.

The vision I had was that "servers" (from rack mounts to workstations to items like Raspberry Pi) could somehow be integrated with minimal instructions. Much like how drives are hot-swappable, servers might be able to do the same, considering that all of them are essentially on a LAN. A $300 older server can saturate a LAN channel, so these older units can easily be part of the aforementioned cluster for local OS management and distribution.

It matters because of the man-hours and troubleshooting involved. Much like how ports are managed with default configurations, directives could be recorded for OS_version-machine_model matchups. Scripting that, with some kind of installation error reporting, would be quite the tool. If anything, it could be automated with automatic erasure of tested installations, and reports generated for maintainers.

MacOS has/had (?) something called NetBoot. Granted, every Mac shipped has an OS installed, but I'm thinking that the BIOS could/should have some kind of protocol where it looks to 10.x.x.x for a negotiated OS installer system of some sort. If an installation requires static IP assignments, those could be stored in a database, and servers could be fully databased for DNS tables, maintenance, maybe SNMP applications, etc. But that would take collaboration with hardware providers... I'm assuming the motherboard manufacturers are responsible for the BIOS design. I'm surprised that this hasn't already been accomplished: a server taking instructions from its network environment as to how it should be flashed with an OS, and who it becomes on the LAN.

So a server becomes more of a robot than anything else. Fresh, right out of the box.
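Purely as a sketch of that vision (the provisioning URL, the response format, and the idea of keying on the MAC address are all invented here, not an existing protocol), a freshly racked box could ask a LAN service who it should become, along these lines:

# Hypothetical "who should I become?" handshake against an imaginary
# LAN provisioning service. URL and response fields are made up.
import json
import urllib.request
import uuid

def my_mac():
    """Return this machine's MAC address as aa:bb:cc:dd:ee:ff."""
    mac = uuid.getnode()
    return ":".join(f"{(mac >> s) & 0xff:02x}" for s in range(40, -1, -8))

PROVISION_URL = "http://10.0.0.1/provision"   # imaginary LAN service

with urllib.request.urlopen(f"{PROVISION_URL}?mac={my_mac()}") as resp:
    plan = json.load(resp)

# e.g. {"hostname": "web07", "os_image": "freebsd-14.iso", "static_ip": "10.0.0.57"}
print(f"I should become {plan['hostname']} running {plan['os_image']}")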

Ya, great thread.
 
Why not have 1 computer with 1 OS, but that has hundreds of CPUs & disks, instead of the other way around? Seems funny to have hundreds of OS's running all in the same data center.

Well, of course there are pros to a true multiprocessor system, but when you go to scale (say, millions of CPUs or more) it's not practical or even feasible to build such a single system. Even if it were redundant and modular, it would be impossible to build.

Another problem is the amount of memory that is needed. Today a regular server can have about 4 TB of RAM, but what if you need 5 PB of RAM?

The solution to this is, like people have said before, to interconnect many servers into big clusters with thousands of machines, and then add a very fast interconnect like Intel Omni-Path or InfiniBand for RDMA (Remote Direct Memory Access) over the network. It really boils down to what is practical and economically favourable.
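Just to put numbers on the RAM example (using the figures from the post, and taking 1 PB = 1024 TB):

ram_per_server_tb = 4                 # ~4 TB in a big commodity server
needed_tb = 5 * 1024                  # 5 PB expressed in TB
servers = needed_tb / ram_per_server_tb
print(servers)                        # 1280.0 servers, before any redundancy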
 
Great thread. I've been thinking about this as I've been installing several flavours of FreeBSD onto several types of machines. This type of optimization could be key to the adoption rate of an OS.

The vision I had was that "servers" (from rack mounts to workstations to items like Raspberry Pi) could somehow be integrated with minimal instructions. Much like how drives are hot-swappable, servers might be able to do the same, considering that all of them are essentially on a LAN.

I manage clusters at work and I think you should look at some automated installation procedure (maybe Cobbler) and then some configuration management or orchestration tool. There are many out there: Chef, Puppet, and the new star right now, Ansible. Ansible is more like "scripts on speed", but with the ability to orchestrate, so it's very popular among "DevOps".
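As a taste of what those tools automate, here is a bare-bones "scripts on speed" sketch in Python: fan one command out to a few hosts over ssh in parallel. The hostnames are placeholders, key-based ssh access is assumed, and real tools like Ansible add inventories, idempotent modules, templating and much more on top of this idea:

# Bare-bones orchestration sketch: run one command on several hosts via ssh.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["node01", "node02", "node03"]      # hypothetical inventory
COMMAND = "uname -r"                        # something harmless to run

def run_on(host):
    """Run COMMAND on one host and return (host, output)."""
    result = subprocess.run(["ssh", host, COMMAND],
                            capture_output=True, text=True, timeout=30)
    return host, result.stdout.strip()

with ThreadPoolExecutor(max_workers=10) as pool:
    for host, output in pool.map(run_on, HOSTS):
        print(f"{host}: {output}")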
 
Well, of course there are pros to a true multiprocessor system, but when you go to scale (say, millions of CPUs or more) it's not practical or even feasible to build such a single system. Even if it were redundant and modular, it would be impossible to build.

Another problem is the amount of memory that is needed. Today a regular server can have about 4 TB of RAM, but what if you need 5 PB of RAM?

The solution to this is, like people have said before, to interconnect many servers into big clusters with thousands of machines, and then add a very fast interconnect like Intel Omni-Path or InfiniBand for RDMA (Remote Direct Memory Access) over the network. It really boils down to what is practical and economically favourable.
OK. I think I can now kind of wrap my head around what people are saying. I'm imagining it being like having 1 guy with an epic brain that can control thousands of dudes that have no brain. But if the guy gets hurt or whatever, then the thousands of dudes stop in their tracks? Is this a fair analogy?

[edited to correct spelling]
 
I'm imagining it being like having 1 guy with an epic brain that can control thousands of dudes that have no brain. But if the guy gets hurt or whatever, then the thousands of dudes stop in their tracks? Is this a fair analogy?
Yes, that's close enough. The other side would be a couple of hundred guys with decent intelligence. If one of those guys gets sick, the others can easily take over.
 
I manage clusters at work and I think you should look at some automated installation procedure (maybe Cobbler) and then some configuration management or orchestration tool. There are many out there: Chef, Puppet, and the new star right now, Ansible. Ansible is more like "scripts on speed", but with the ability to orchestrate, so it's very popular among "DevOps".

OK, so there is an approach already. Do the servers need to be flashed initially with something before these tools can take control? The challenge I see in any application is the very introduction of a "black server" (fresh, right out of the box) to the point of control over the LAN.
 