Though these two videos are from 2014, they're fun and interesting to watch on YouTube:
600 million unsuspecting FreeBSD users
That's 'Billion' with a 'B'
"I always struggle to have enough time to talk about what we have to do to Erlang to scale, but the bottom line with FreeBSD is ... we don't have to do much."
This came about from a posting on Hacker News where (I think) he responded to someone's post with this:
About half the early server team had used FreeBSD at Yahoo, and zero had any experience with Erlang.
It helped that the SoftLayer servers (mostly SuperMicro, although some Lenovo post IBM acquisition) were very stable. This fed into the stability of FreeBSD and the operability of Erlang. Whenever we shut down chat servers, we'd find some clients with chat connections open for 45 days (mostly Nokia S60, which had a stable networking stack, but no push services, so we had to stay connected). Occasionally we'd need to do BEAM updates or FreeBSD kernel updates to address issues, but mostly servers were running for months or years uninterrupted. This is only possible with quality software and hardware.

Both FreeBSD and BEAM/Erlang/OTP are quite approachable for local patching as well. They don't have a lot of churn, so patches don't need a lot of changes between releases, and things are well organized. We didn't have a ton of patches, but we did run things towards the limits. Not quite the same limits that Netflix explores, though. We never did more than 2x10G at SoftLayer, but chat didn't need that much bandwidth (we ran out of CPU first), and for the most part MMS would be really close to disk bandwidth limits before network limits; more MMS servers gave us larger storage capacity as well as more network and more CPU (TLS isn't free).

Edit: I've heard from SoftLayer that by not using most of their services (including their load balancers, ugh), we stayed in the sweet spot of stability. We did have issues with LAN stability from time to time, though; I used to joke that we were their network monitoring team. They did beef things up there towards the end of our time at SoftLayer; by then I was getting their incident alerts before we noticed and reported problems.
We didn't use any sort of service orchestration until Facebook. I'm old and grumpy now, but I hate all these layers of stuff that hides things. We never needed to split a physical server into multiple jobs, so just running FreeBSD on bare metal was good enough. Erlang's dist and pg2 with some augmentation here and there worked for finding the current active servers.
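For anyone curious, here's a minimal sketch of what that dist + pg2 discovery pattern can look like. This is not WhatsApp's actual code: the module and group names are made up, and pg2 has since been removed from OTP (in OTP 24) in favor of pg.

```erlang
%% Minimal sketch of pg2-based discovery: each server process joins a
%% named group, and any node in the cluster can list the live members.
-module(chat_discovery).
-export([join/0, active_servers/0]).

%% Called by each server process on startup.
join() ->
    ok = pg2:create(chat_servers),        %% idempotent: ok if group exists
    ok = pg2:join(chat_servers, self()).  %% advertise this process

%% Callable from any connected node; pg2 replicates membership over
%% ordinary Erlang distribution and drops members on nodes that go down.
active_servers() ->
    pg2:get_members(chat_servers).
```

Since membership rides on ordinary Erlang distribution, there's no external registry to run, which matches the point above about not needing extra orchestration layers.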
And there's this 2014 article, too:

The WhatsApp Architecture Facebook Bought For $19 Billion - High Scalability (highscalability.com)
Rick Reed in an upcoming talk in March titled That's 'Billion' with a 'B': Scaling to the next ...
