Today we are announcing the general availability of the world’s first commercial cloud computer

Wow, uninformed joking??? And this sheer stupidity netted them $44 million USD???

Now that's a real gem by drhowarddrfine ...

What happened to Amazon EC2? Or Google, for that matter? Those guys have offered a commercial cloud that's generally available for rent since before 2020, and now this moronic startup bilked investors of $44,000,000 claiming to be the first on the scene??? What about Microsoft Azure, another player in the arena???
 
Bullet point #2 addresses your question.
I read it. Sounds like hardware parts for rackmount servers are being offered for sale to the general public. That's old hat, too - Dell has been offering those since before 2010, and even these days, they can be easily sourced from Amazon, brand-new. Expensive, yeah. Not everything is quite as compatible with each other as consumer-grade aftermarket PC components - yeah, there's that, too. But it's old hat; it's been around LONG before those blogger clowns.
 
Looks super cool (and expensive).
It's actually quite boring. The machines they're building are pretty much run-of-the-mill x86-based servers with a medium amount of memory and average networking. They give them a handful of U.2 disk slots, which is common in storage appliances (although they don't provide a solution for disk-based storage). Their mechanical mounting seems to be very close to the normal 19" rackmount or OpenCompute form factor; the only unusual thing is blind-mating power and network connectors, so you don't need to manually plug in connectors when you add or remove a server. Their per-rack switches and power supplies, while bespoke and custom-built, have completely normal performance characteristics. In a nutshell, they're selling a standard 19" rack with standard half-width 2U rackmount servers and shared power/network infrastructure. One could buy effectively the same thing by getting a standard 19" rack from a cabling supply house, a big order of SuperMicro servers, and a Cisco or Juniper switch.

On the software side, their stack seems to be nothing more than a control plane for power/networking, and a mechanism to deploy VMs on the machines (something that's available off the shelf from places like RedHat or VMware).
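To put it concretely, here's roughly what "a mechanism to deploy VMs" amounts to from the operator's side: describe an instance, hand it to a control-plane API, and let the control plane pick a machine. The endpoint and field names in this sketch are made up for illustration and are not Oxide's actual API (assumes the reqwest and serde_json crates):

```rust
// Rough sketch of driving a hypothetical VM-provisioning control plane.
// Endpoint and field names are illustrative, not Oxide's actual API.
// Assumes reqwest (with "blocking" and "json" features) and serde_json.
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::blocking::Client::new();

    // Describe the instance we want: CPU, memory, boot image, network.
    let request = json!({
        "name": "web-01",
        "vcpus": 4,
        "memory_gib": 16,
        "boot_image": "freebsd-14",
        "network": { "vpc": "default", "ipv6": true }
    });

    // The control plane picks a server with spare capacity and starts the VM
    // there; that scheduling step is the part you'd otherwise get off the
    // shelf from Red Hat or VMware.
    let response = client
        .post("https://rack.example.internal/v1/instances")
        .json(&request)
        .send()?
        .error_for_status()?;

    println!("instance created: {}", response.text()?);
    Ok(())
}
```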

For comparison, it's interesting to read the web pages of the OpenCompute foundation.

Still, kudos for providing a FreeBSD version. Given Bryan Cantrill's history, that makes sense, and I like it.
 
It is amazing to see how many people posting in this thread did not read the announcement. Although I would agree that this is not completely new: there have been some offerings in the private (on-prem) cloud segment before. It appears (also judging by some posters here) that private cloud is still not well understood.
 
It is amazing to see how many people posting in this thread did not read the announcement. Although I would agree that this is not completely new: there have been some offerings in the private (on-prem) cloud segment before. It appears (also judging by some posters here) that private cloud is still not well understood.
Unsurprising... For a small-time shop, the hardware components (that you kind of have to have if you want to offer your own cloud services, rather than rent) are prohibitively expensive. And the stuff a small shop can afford just doesn't have the specs to scale appropriately. This is why small shops rent. This is why places like GitHub and Amazon EC2 exist in the first place - they have the hardware and capacity that they can rent out to the small guys.

It's kind of like trying to build your own contraption to fly to space instead of renting space on a NASA mission. As a recent example, Johns Hopkins University rented space on the International Space Station to conduct medical experiments. My point is, on its own, even a major university doesn't have the money or expertise to actually build something space-worthy. "Private cloud" is like "private space travel" - in both cases, you have to bear prohibitively high expenses just to have hardware that is up to the task.

recluce: Double-check the date, and you'll see what I'm talking about:
  • Yes, I did read that post, all the way down.
[attached screenshot]
 
Wow, uninformed joking??? And this sheer stupidity netted them $44 million USD???

Now that's a real gem by drhowarddrfine ...

What happened to Amazon EC2? Or Google, for that matter? Those guys have offered a commercial cloud that's generally available for rent since before 2020, and now this moronic startup bilked investors of $44,000,000 claiming to be the first on the scene??? What about Microsoft Azure, another player in the arena???

Where can I get an on-prem Amazon or Google system?
 
Where can I get an on-prem Amazon or Google system?
You start by figuring out where you can contact them to rent. Then talk to whoever is doing the renting, and ask THEM about building an on-premises Amazon or Google datacenter. You'll end up with a quote to the tune of at least $10 million USD.
 
I've seen quite a few users on these Forums who install FreeBSD on decommissioned rack-mount servers from big shops. But building a rack-mount server from brand-new aftermarket parts that you source from Amazon's general consumer market (as opposed to the manufacturer-direct enterprise market) is not impossible. One still needs to know how to research compatibility and prices of different components, and how to use a spreadsheet to keep track. Not that different from building a PC from aftermarket parts. It can be dirt cheap (and not very powerful) or out-of-hand expensive.
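For what it's worth, the spreadsheet part is just bookkeeping: a price per part plus a few compatibility keys (CPU socket, DIMM type, form factor) that have to line up. A toy illustration, with the parts and prices made up:

```rust
// Toy illustration of the bookkeeping involved in speccing a server from
// retail parts: track a price and a crude compatibility key per component.
// Parts and prices here are made up for the example.
struct Part {
    name: &'static str,
    price_usd: f64,
    socket: &'static str, // crude compatibility key: CPU socket, DIMM type, etc.
}

fn main() {
    let build = [
        Part { name: "EPYC 7302P",        price_usd: 900.0,  socket: "SP3" },
        Part { name: "Supermicro H12SSL", price_usd: 650.0,  socket: "SP3" },
        Part { name: "8x 32 GB DDR4 ECC", price_usd: 1200.0, socket: "DDR4-RDIMM" },
    ];

    for p in &build {
        println!("{:<20} ${:>8.2}  [{}]", p.name, p.price_usd, p.socket);
    }
    let total: f64 = build.iter().map(|p| p.price_usd).sum();
    println!("{:<20} ${:>8.2}", "total", total);

    // The "research" part is making sure keys that must match actually do,
    // e.g. CPU socket vs. motherboard socket.
    assert_eq!(build[0].socket, build[1].socket, "CPU and motherboard socket mismatch");
}
```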
 
I think the target market here is well-funded shops that have significant baseline computing needs (not bursty / in need of multiple orders-of-magnitude scaling), have already migrated to the cloud, and would like to add an on-premises option for disaster recovery, security/privacy compliance, or cost savings on non-bursty workloads.

In that context, this is some pretty cool kit for being able to order a plug-in “cloud in a box” solution. (Both hardware and software; it’s not just the physical part that is of interest. They also have their own firmware / root of trust architecture.)

If you’ve gone all in on cloud technologies, but want to bring some back in-house / hybrid, it looks interesting. Also for anyone spooked by the security state of current BMCs.
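On the root-of-trust point, the underlying idea is simple: hash ("measure") each boot stage and compare it against a value recorded at provisioning time before letting it run. A toy sketch of that idea, not Oxide's actual firmware design (assumes the sha2 crate):

```rust
// Toy illustration of measured boot: each stage is hashed ("measured") and
// checked against a value recorded at provisioning time before control is
// handed over. Not Oxide's actual firmware design; assumes the sha2 crate.
use sha2::{Digest, Sha256};

/// Hash ("measure") a firmware image.
fn measure(image: &[u8]) -> Vec<u8> {
    Sha256::digest(image).to_vec()
}

fn main() {
    // Stand-ins for images that would normally live in flash.
    let bootloader = b"bootloader image bytes";
    let host_os = b"host OS image bytes";

    // At provisioning time, the root of trust records expected measurements.
    let expected_bootloader = measure(bootloader);
    let expected_host_os = measure(host_os);

    // At boot, each stage is measured before it is allowed to run; a flipped
    // bit anywhere in an image changes the hash and halts the boot.
    assert_eq!(measure(bootloader), expected_bootloader, "bootloader was tampered with");
    assert_eq!(measure(host_os), expected_host_os, "host OS was tampered with");

    println!("measured boot chain verified");
}
```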
 
Oxide reimagines private cloud as... a 2,500-pound blade server? LLNL as a customer, not bad for "this moronic startup".
Must have been Shopify's showcasing.
Now it's starting to make sense... so Oxide's selling point is this (as per that theregister.com article):
integrated backplane that provides not just power but 12.8Tbps of switching capacity to boot

This is the first time I'm seeing this info.

So, to make sense of what Oxide is saying:
"We brought this previously unavailable integrated-backplane hardware to the commercial/private market. To showcase its potential, we built this rack server that is cheaper and faster than the stuff Lawrence Livermore currently has. This thing can run www/nextcloud and be viable commercially, or it can do compute tasks for Lawrence Livermore, and be competitive."

Yeah, not everybody can grasp a message like that... it's far easier to make an outlandish, splashy, and incorrect claim of "World's First Commercial Cloud Computer". 🤣 That kind of marketing will reel in the money, though...
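For a sense of scale, 12.8 Tbps is roughly one current-generation switch ASIC's worth of capacity. A quick sanity check of how that divides into ports (the port mix below is illustrative, not a claim about the actual front-panel layout):

```rust
// Quick sanity check on the quoted 12.8 Tbps switching figure: how many ports
// it amounts to at common Ethernet speeds. The port mix is illustrative only.
fn main() {
    let capacity_gbps = 12_800.0_f64;
    for port_speed_gbps in [100.0_f64, 200.0, 400.0] {
        let ports = (capacity_gbps / port_speed_gbps).round() as u32;
        println!("{} x {} GbE", ports, port_speed_gbps);
    }
}
```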
 
Using a very large PC board (probably half the rack height) as a backplane would be interesting. It has been done before, and leads to interesting tradeoffs. Lots of mechanical engineering is required to make sure the backplane connectors are reliable. But when it works, it makes for a system that's much easier to assemble and maintain.
 
Each gimlet (= compute node sled) is mated to an individual physical backplane that connects to the outgoing cabling at the back of the rack; this is intended to last the rack's lifetime, comes pre-installed, and is not a normally user-serviceable part. That backplane is not a giant PC board, but consists mostly of three pass-through connectors; two backplanes in the rack connect their gimlet, via an additional fourth connector providing a PCIe control lane, to each of the two sidecar switches; as these are "Sidecar-adjacent Gimlets", they are sometimes referred to as scrimlets. All connectors are cabled out to their destination. To allow for physical frame fabrication tolerances of a gimlet sled, two mechanical guiding pins have been added; the final mating of a gimlet is also protected and guided by means of two hardware pins in the front lever.

Of those connectors, one carries the DC power, cabling out to the 54 V DC bus bar. Two other identical connectors each connect high-speed (100 GbE) and low-speed (ARM service processor) IPv6 Ethernet to each switch. The gimlet design is demonstrated in Introduction to Oxide's Compute Sled - Gimlet, Oxide Sled Fan Assembly Tour - Gimlet and Cabling the Backplane.
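As an aside on that 54 V DC bus bar: rectifying AC once per rack and distributing DC means the bus bar carries the whole rack's current. A back-of-the-envelope figure, where the 15 kW rack load is an assumption for illustration rather than an Oxide spec:

```rust
// Back-of-the-envelope: current on a 54 V DC bus bar for an assumed rack load.
// The 15 kW figure is an assumption for illustration, not an Oxide spec.
fn main() {
    let rack_load_w = 15_000.0_f64; // assumed fully loaded rack
    let bus_voltage_v = 54.0_f64;
    let current_a = rack_load_w / bus_voltage_v;
    println!("~{:.0} A on the bus bar at {:.0} V", current_a, bus_voltage_v);
    // Roughly 280 A total: practical on a solid copper bus bar, not over
    // ordinary cabling, which is one reason rectification happens once per
    // rack in shared power shelves rather than in a PSU inside every server.
}
```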

A good video overview can be found at Oxide Computer Company Presents at Cloud Field Day 20; this will likely clear up some of the information/opinions here. StorageReview also had a look: This SERVER Boots AMD EPYC CPU's, WITHOUT BIOS!. Oxide's website contains a lot more information, including RFDs. Their own Rust-based OS design for the ARM service processor is discussed in Steve Klabnik's Oxidize Conference talk: How Rust makes Oxide possible. For their programmable network design, Ryan Goodfellow explains: Building a Rack-Scale Computer with P4 at the Core; Rack-scale Networking.

I see their strong point as the vertical integration from the hardware up into the software stack, including virtualisation; considering VMware's takeover by Broadcom, it seems likely that, even for well-funded customers, that factor has gained in importance: VMware GUTS Customers with 10x Price Increases.
 
What I enjoy about Oxide is their open-source Rust software stack; they're following in the footsteps of TalosOS (which I don't use, but have installed). The more open source, the better, as far as I'm concerned!!! 🔥
On the hardware side, I'd rather not comment, as I'm a tiny bit biased.
 