Apparently they used those DNS servers for their internal authentication and authorization too: I heard you couldn't even enter the buildings, because the access gates no longer accepted employees' passes.
I heard the same story through the Silicon Valley rumor mill. The version I heard is that even the entrance door security system at Facebook's data centers uses Facebook's internal network and DNS. So when all of their networks went down, it became impossible to even get into the building holding the servers that needed to be restarted or reconfigured. According to the rumors, the problem was eventually solved by using a sledgehammer on a door, and bringing someone and something inside.
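The failure mode described above, physical access control that transitively depends on the production network, can be sketched abstractly. Everything below (the hostnames, the cached-fallback idea) is a hypothetical illustration of the dependency chain, not a claim about how Facebook's badge readers actually work:

```python
# Hypothetical sketch: a badge reader whose auth check depends on
# resolving an internal hostname. If DNS is down, access fails too,
# unless the reader has cached an address from an earlier success.

def resolve(hostname, dns_available, cache):
    """Toy resolver: consults 'DNS' when it's up, else a local cache."""
    if dns_available:
        addr = {"auth.corp.example": "10.0.0.5"}[hostname]  # pretend lookup
        cache[hostname] = addr  # refresh the cache on success
        return addr
    return cache.get(hostname)  # may be None if never cached

def badge_check(badge_id, dns_available, cache):
    addr = resolve("auth.corp.example", dns_available, cache)
    if addr is None:
        return "door stays locked"  # total outage, no fallback known
    return "access granted" if badge_id == "valid" else "denied"

cache = {}
print(badge_check("valid", dns_available=False, cache=cache))  # door stays locked
print(badge_check("valid", dns_available=True, cache=cache))   # access granted
print(badge_check("valid", dns_available=False, cache=cache))  # cached: access granted
```

The point of the sketch is the last line: a reader that degrades to cached credentials survives a network outage, while one that hard-depends on live DNS becomes part of the outage.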
The other part of the rumor is that the fix was done by starting with the data center nearest to Facebook's main engineering facilities (which are in Menlo Park); I've heard both Redwood City and Santa Clara mentioned. From there, it became possible to restart networking infrastructure at other sites remotely. That probably used dedicated network links; all the big hyper-scalers have their own fiber networks which they control end to end (not rented bandwidth, but unshared dark fiber). The funny thing about this is that I didn't know any of the big hyper-scalers had data centers in Silicon Valley. With the insanely high cost of real estate and electricity here, there are few data centers in the immediate vicinity, and the few that exist are run by wholesale colocation operators (like Equinix), usually serving smaller customers. Companies large enough to build dedicated data centers (and Facebook is definitely in that class) typically build them where real estate, electricity, and cooling are cheaper, but not so remote that labor is unavailable.
Or maybe they did have something in place, but it just took a long time to get hold of those stored keys/passwords (things can get messy if you can't even enter the building, or open the safe, to access them; a bit of a chicken-and-egg situation).
I've heard stories that some of the most fundamental security keys (like the ultimate root password to all of Amazon AWS, just as a hypothetical example) are stored in a physical safe (a big steel box with thick walls) near the CEO's office, using a standalone security device. That safe uses a traditional mechanical lock (the thing with a dial). I've also heard stories that some of those security devices rely on being unlocked by a pass phrase which is memorized by a small number of humans, but not recorded otherwise (not on a piece of hardware). Part of the long delay in getting Facebook back online might have been caused by the need for one of those humans to be brought to the correct location. If someone has some spare time, they could track what flights Facebook's corporate aircraft took yesterday; it might give us a clue.
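The standard cryptographic technique for "a small number of humans each hold part of the master secret" is Shamir's secret sharing, where any k of n share-holders can reconstruct the key but fewer than k learn nothing. Whether Facebook or Amazon actually use it is pure speculation on my part, but a minimal sketch over a prime field looks like this:

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a short key

def split(secret, n, k):
    """Split `secret` (an int < PRIME) into n shares; any k reconstruct it.

    The secret is the constant term of a random degree-(k-1) polynomial;
    each share is a point (x, f(x)) on that polynomial.
    """
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation, highest term first
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total
```

With, say, split(secret, n=5, k=3), any three of the five key-holders can recover the master secret together, which is exactly the property you want when one of them might be stuck on the wrong side of the country during an outage.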
Whatever actually caused this mess, you can be sure they will be having quite a few really long meetings trying to figure out a way to prevent this from ever happening again. And those aren't going to be fun meetings.
And everyone else in the industry will also have long meetings, to make sure that "this can't happen to us". Those meetings won't be quite as painful, but by no means amusing.