PMc: I'm not disagreeing with you. From an engineering and financial viewpoint, Unix (in all its variations) is a great success. As a matter of fact, I often go months using no operating system other than various Unixes (a category that includes Linux, macOS, and Android).
Rob Pike's viewpoint comes from a different direction. He was a member of one of the two premier systems-software research organizations of the 1970s through the 1990s, namely Bell Labs and Berkeley. They created great stuff - today we call it Unix, and in particular (relevant to this forum) the BSD flavor of Unix. But after Unix, there was very little new research. Berkeley had Sprite, Bell Labs had Plan 9, Tanenbaum had Amoeba, and little came from them in terms of new operating systems. Sure, they created spinoff technologies ... for example, Sprite begat both the Tcl programming language and the idea of log-structured file systems. But since then, nobody has proposed and seriously followed through on a new paradigm of how to run a computer. For a researcher (not an implementor, engineer, marketer, or finance person) like Rob Pike, this is an important observation.
The question is not just about a rewrite. Linux did that: its developers wrote a completely new kernel, without using a single line of either Bell Labs or Berkeley code. Similarly, much of the userland has been rewritten multiple times; the GNU compiler/linker and LLVM/Clang have nothing to do with Ken Thompson's original Unix C compiler (and are sadly lacking the ability to hack themselves into the system, at least as far as we know). But that is just an internal rewrite: it leaves the interfaces at the outside surface the same and copies many of the ideas about how to implement them. For example, Linux's VFS layer (the part of the kernel that takes file-system IO requests and distributes them over multiple internal file systems) is mostly the same design as in AT&T and Berkeley Unix, even though no lines of code were stolen (the sketch below shows the dispatch idea). What has been missing are new concepts, new approaches, new capabilities.
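To make the VFS point concrete, here is a minimal sketch of the dispatch idea - a table of function pointers per file system, in the spirit of (but not copied from) the kernel's file_operations; all names here are illustrative:

    #include <stdio.h>

    /* One table of operations per file system, filled in by that
       file system's implementation. */
    struct fs_ops {
        long (*read)(const char *path, char *buf, long n);
    };

    static long ext_read(const char *path, char *buf, long n) {
        printf("ext: reading %s from local disk\n", path);  /* stand-in for block IO */
        return 0;
    }

    static long nfs_read(const char *path, char *buf, long n) {
        printf("nfs: fetching %s over the network\n", path);
        return 0;
    }

    /* The generic layer just forwards the request; callers never
       see which implementation runs. */
    static long vfs_read(const struct fs_ops *mnt, const char *path,
                         char *buf, long n) {
        return mnt->read(path, buf, n);
    }

    int main(void) {
        const struct fs_ops ext = { ext_read }, nfs = { nfs_read };
        char buf[64];
        vfs_read(&ext, "/home/pmc/notes", buf, sizeof buf);
        vfs_read(&nfs, "/net/server/data", buf, sizeof buf);
        return 0;
    }

That indirection scheme is essentially unchanged since the 1980s kernels: the rewrites swapped the bodies of the functions, not the shape of the table.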
You ask above: what intellectual breakthrough would I expect? Let me give you one example. At work, I use "Unix machines", which are typically clusters of 10^2 to 10^n individual computers (n is relatively large), with as many OSes, kernels, network stacks, local file systems, and so on. There is a huge collection of ad-hoc tools for getting state (like files or executable programs) and resources (like processing power or RAM) distributed among these machines. Every major user of computer clusters, clouds, or supercomputers has a different set of ad-hoc tools, and thousands of engineers around the world work hard on them. Yet it is all very pedestrian, manual, inefficient, error-prone, and annoying (the sketch at the end of this comment caricatures the pattern).

In the 1990s, various research groups (in particular the Berkeley group) had the vision of making a large number of potentially disparate machines into a single entity that a user could use flexibly. Cluster computing at large scale remains a perpetual construction site. Various companies (such as Sun and Apollo) tried to make a go of it, and to make profits off it. Sun mostly succeeded as a business, but it ended up selling individual workstations and servers and never delivered on the promise of a real cluster computer; Apollo eventually failed (it was absorbed by HP and its technology digested). I think the closest we ever got to a homogeneous compute environment was actually not in the Unix ecosystem at all: it was Digital's VAXcluster (which died together with Digital, the company, and with the VAX itself).

Yes, we have little bits and pieces of the technology, which make clusters tolerable but typically don't scale to large installations: NFS as a network file system, LSF for batch scheduling, and so on. But even for something as simple as "parallel distributed compilation/linking", there is no universal or generally accepted solution; Bazel might be the closest approximation, but it has a tiny market share. For the harder general problem of a programming language that can be used for a multi-threaded problem and works from small CPUs to million-node clusters, there is nothing in sight. Much research is needed here, and that research is just not getting done. And not research into little spot solutions for spot problems (Bazel as an answer to the parallel/distributed make problem, say), but an organic approach to making a set of heterogeneous machines feel and work like a single computer. I think that's the kind of problem Rob Pike has been bemoaning.
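To show concretely what I mean by "pedestrian and ad-hoc", here is a deliberate caricature (with hypothetical host names) of the fan-out loop that gets reinvented at every site; real cluster tools layer scheduling, retries, and monitoring on top of essentially this:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Hypothetical node names; every site maintains a list like this. */
        const char *hosts[] = { "node001", "node002", "node003" };
        char cmd[512];

        for (unsigned i = 0; i < sizeof hosts / sizeof hosts[0]; i++) {
            /* Push the binary to the node, then run it there. */
            snprintf(cmd, sizeof cmd,
                     "scp ./job %s:/tmp/job && ssh %s /tmp/job",
                     hosts[i], hosts[i]);
            if (system(cmd) != 0)
                fprintf(stderr, "failed on %s (and nothing recovers)\n", hosts[i]);
        }
        return 0;
    }

Nothing in that loop knows about load, locality, failures, or data placement. On a real single-system-image machine, it would be as unnecessary as hand-picking which CPU core runs your shell.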