Network Admins to Decline Relative to Network Architects in the Coming Decades

Background:
Please first read this post on LinkedIn - https://www.linkedin.com/posts/dani...interesting-activity-7267440219300257792-5txK .

Cross Comment:
Reading it brought up some thoughts about Systems Engineering and the place of Open Source.

Specifically, the post was about a decline in network admins against a sharp rise in network architects over the coming decades. I cannot help but think of the BSD systems' role in the tooling as a primary actor in that radical shift.

I looked at the examples they gave - Ansible, Python, Kubernetes/Chef, etc. - and saw that essential orchestration tools from the BSD community were missing. I am not too convinced those are the best tools today, although in the commercial environment of today's Private/Hybrid/Public Cloud they probably would be, particularly considering the virtualisation technologies used in situ - VEEAM, ESXi, VMWare, etc.

And I was wondering about the BSD community's position on the post. I think BSD brings a new dimension to the field. Will network admins be fewer than network architects? Possibly the architects will take over the role of the admins over time - thinking of SD-WAN, SDN, etc. BSD and other Open Source products and systems also dictate the future direction, although they are rarely used in the big businesses - Mining, FMG, Oil and Gas, etc.
 
Would network admins be fewer than network architects?
Both positions seem like they would be, ultimately, obsolescent.

The former because administration is far less taxing than it was decades ago -- and likely to become even less so as tools get more powerful and flexible. Systems (and the applications they support) are becoming considerably more "vanilla". Hardware is more and more ubiquitous, to the point where hardware selection is almost a checkoff item.

Network architecting is a step behind that (in time/evolution), and for many of the same reasons.

Add AI to the mix, and it's easy to see a "wizard" making the architecture choices and an "agent" administering those choices, once reified.

There is little to prevent this sort of change from happening sooner rather than later.
 
Both positions seem like they would be, ultimately, obsolescent.
Thanks Don Y. Now that's another perspective!
 
So is the AI going to be self-learning: "ooh, network loops are hurting performance" or "let me do a deeper dive into that user's packets"?
Seems like the movie Terminator may have been a prophecy, not Sci-Fi
 
It is part of a general trend. In the old days, every computer was administered by hand, with all administration and configuration actions (commands) executed by typing them in. At the amateur level, that's still true. Example: I have TWO FreeBSD servers in production (one at home, one in the cloud), and a handful of Linux machines. I do not use any "automation", except that (a) I meticulously record what I do, so reproducing it later or on another node is just cutting and pasting from my log files, and (b) for scheduled tasks (such as OS upgrades), I open multiple windows and perform things in parallel.

At the mass-production level, this is not at all how systems are administered. Large operators today have millions of servers. Servers are racked and unracked by robots (they take the server from a depot to the rack and slide it into the correct place), and disk-drive replacement is done by robots (they pull out the server, lift out the disk, and put a new one in). On the software side, everything is automated: a blank new server boots from the network, the stuff that is "installed" (a copy kept on the local disk) is copied and updated automatically, configuring the network for new servers or new racks is driven by a recipe, and so on. The last bits of manual labor that remain are, for example, unloading pallets of servers or disks from a delivery truck with a forklift and unpacking the pallets for the automated systems, plus facilities maintenance for the plant (fire sprinklers, diesel generators).
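
To make the "recipe" idea concrete, here is a minimal Python sketch of the usual shape of such automation. The step names and the in-memory "state" are invented purely for illustration (real fleets use tools like Ansible or custom software): each step probes the current state of the machine and only acts when it has drifted from the desired state, so the same recipe can be applied to a freshly netbooted server or re-run harmlessly on an existing one.

Code:
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    is_satisfied: Callable[[], bool]   # probe the current state of the machine
    apply: Callable[[], None]          # converge toward the desired state

def converge(steps: list[Step]) -> None:
    """Run every step, skipping the ones whose state is already correct."""
    for step in steps:
        if step.is_satisfied():
            print(f"ok     {step.name}")
        else:
            print(f"apply  {step.name}")
            step.apply()

# Stand-in for the machine's real state (installed packages, network config, ...).
installed: set[str] = set()

recipe = [
    Step("base image present",
         lambda: "base" in installed,
         lambda: installed.add("base")),
    Step("monitoring agent installed",
         lambda: "agent" in installed,
         lambda: installed.add("agent")),
]

converge(recipe)   # first run applies both steps
converge(recipe)   # second run reports everything as "ok"

The point is only the shape: describe the desired end state once, and let the tooling, not a person at a keyboard, do the typing.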

What we see here is that manual and repetitive labor of a system or network administrator is replaced by automation, and that automation then needs to be designed, engineered, and maintained. Instead of having 10,000 admins for a million computers, we have hundreds of mechanical and software engineers who design the automation (including robots and automatable interfaces), hundreds of people who architect the compute clusters, and hundreds of people who do the last necessary bit of manual labor. In that sense, the manual labor of systems- and network administrators has been replaced by engineering. Calling this "IaC" is sensible, since infrastructure has become code, but it is also misleading, since some of the code is for example in the CAD/CAM files that are used to manufacture specialized racks that are prepared for robots. Another name for it is SRE: Instead of a SA = Systems Administrator, we now have a (somewhat misnamed) Site Reliability Engineer, who designs, implements and monitors automated tools that install and maintain software artifacts that perform system administration tasks (such as installing an OS, upgrading packages, and configuring networks).

In a nutshell, this is taking manual and repetitive labor and replacing it with a (partially) automated assembly line. What IaC and SRE are doing is very much like Henry Ford's revolution in how the Model T was built.
 
The question is: who programs the systems that translate those custom languages into the individual commands they ultimately become?

And why are those languages not Lisp-based? Fools.
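
For what it's worth, the translation step itself is usually unglamorous. A toy sketch in Python (the schema and the generated commands are invented for illustration, not any particular tool's format):

Code:
# A tiny "custom language" -- here just a dict -- describing the desired state.
desired = {
    "hostname": "web01",
    "packages": ["nginx"],
    "interfaces": {"em0": "192.0.2.10/24"},
}

def to_commands(spec: dict) -> list[str]:
    """Flatten the declarative description into individual commands."""
    cmds = [f"hostname {spec['hostname']}"]
    cmds += [f"pkg install -y {pkg}" for pkg in spec["packages"]]
    cmds += [f"ifconfig {ifc} inet {addr}"
             for ifc, addr in spec["interfaces"].items()]
    return cmds

for cmd in to_commands(desired):
    print(cmd)   # a real tool would execute these (idempotently), not print them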
 
So is the AI going to be self-learning: "ooh, network loops are hurting performance" or "let me do a deeper dive into that user's packets"?
Don't confuse the LLM hype with the general case for AI. Even rule-based designs can give the appearance of intelligence.

E.g., it would be relatively trivial to design a wizard to configure a custom kernel for any bit of hardware presented: elide all devices that don't probe() as present, look at the binaries intended to run on it to determine what support libraries/interfaces are needed, etc.

Or, alternatively, to configure a single kernel that can address the needs of a set of machines (assuming this is possible) by addressing the union of their requirements.
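
A rough Python sketch of that logic, just to show the shape (the driver names and probe results are made up; a real wizard would read the actual probe output and the kernel configuration):

Code:
ALL_DRIVERS = {"em", "igb", "ahci", "nvme", "snd_hda", "usb"}

def custom_kernel(present: set[str]) -> set[str]:
    """Kernel for one machine: keep only the drivers whose hardware probes as present."""
    return ALL_DRIVERS & present

def shared_kernel(machines: list[set[str]]) -> set[str]:
    """One kernel for a whole set of machines: the union of their requirements."""
    needed: set[str] = set()
    for present in machines:
        needed |= custom_kernel(present)
    return needed

web01 = {"em", "ahci"}                # pretend probe() results
db01 = {"igb", "nvme"}

print(custom_kernel(web01))           # just what web01 needs
print(shared_kernel([web01, db01]))   # the union covering both machines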

A human could perform these operations -- but, at considerable cost (time, lost opportunity). When you have a fleet of servers (or even a small collection in a $HOMELAB), the advantage of automating the task (time/labor saving and accuracy!) becomes significant.
 
In that sense, the manual labor of systems- and network administrators has been replaced by engineering.
This is misleading as the "replacement" happens once and replaces ALL such efforts thereafter.

The time/effort/technology involved in designing a hammer replaces all future searches for (and uses of) rocks for said purpose.

E.g., I designed my first computer with a databook, a pencil, and a pad of paper, and built it with an assortment of components and a wire-wrap gun. A second unit would leverage the design effort but repeat the construction effort. Laying out a circuit board economizes on that. Farming the job out to a manufacturer replaces it with writing a check!

At each level, the efforts from the previous one are leveraged to ever increasing extents.
 
Partially correct: the replacement is built and implemented once. But after that, it will need constant maintenance. When building software, I used to estimate that just doing maintenance on it, without introducing new features, will use 10% of the original build effort per year of support (so if it takes 100 person-years -- a 33-person team for 3 years -- to build it, you'll need a 10-person team in perpetuity to maintain it).
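
Spelled out, the rule of thumb is just this (the 10%/year figure is the assumption):

Code:
build_effort = 100        # person-years to build (roughly a 33-person team for 3 years)
maintenance_rate = 0.10   # assumed fraction of the build effort needed every year

print(build_effort * maintenance_rate)   # 10.0 person-years/year -> a ~10-person standing team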

And, just like with hammers, there will also be continuous improvement and re-engineering. We're a long way from the rocks the Neanderthals used. For fun, look at a modern Estwing hammer (solid single-piece steel, yet reasonable vibration absorption), a Stiletto, or a Vaughan (I use their "California Framer"). There has been lots of trial and error, and plenty of better ideas.

[Attached image: Far Side cartoon, January 17, 1986 - prehistoric man working on a wheel curses all tools for looking the ...]
 
But after that, it will need constant maintenance.
As would a human-administered system.

And, just like with hammers, there will also be continuous improvement and re-engineering
As would human-administered systems.

doing maintenance on it without introducing new features will use 10% of the effort per year of support that it took to originally build it
That's true for poorly engineered systems.

Many industries don't like you dicking with your "finished product" after the sale. Some even make it so expensive that you wouldn't want to attempt such a change. (pharma, medical, aerospace, gaming, etc.)

For folks used to writing software in a desktop environment -- where upgrades have long been an accepted admission of poor quality -- the door to downloading an update and having the user install it is now propped fully open ("C'mon in! The more the merrier!")

When was the last time the software in your microwave oven was updated? Furnace? Stove? Refrigerator? Washing machine? TV? CD player/DVD player/HiFi/mouse/etc? How often have you checked the status of the firmware in your disk drives, optical drives, etc.? These are difficult to update (though if the NEED for updates arose, they could surely be accommodated) -- but, their designers have realized that updates are costly so they have decided to design things "right" the first time.

[Be careful not to dismiss these as trivial pieces of code; I'd wager you couldn't write the control loop for the servo in a disk drive and be so sure of it that you would allow manufacturing to commit to producing hundreds of thousands of units with it!]

How often do you think the software/firmware in medical instruments is updated? Flight control systems on aircraft? These are easy to update in that there are paid staff available to perform those updates. But, the regulatory processes that precede an update being "made available" are VERY costly. YOU don't get to say when the update is ready -- third parties (agencies) make those decisions to ensure your update doesn't compromise existing systems.

Sadly, the software world has embraced the idea of "you EXPECT it to have bugs" -- something that was once a woeful complaint of its USERS!
 
I read the website, articles, and revelations of Andy Lapteff. Very informative reading. Everything is predictable, expected, and obvious.
I also read Andy Lapteff's resume and portfolio. Likewise fascinating, bright, stylish, youthful, sparkling.
This is a person with a large bag of skills and abilities in the field of "how to make money" under the brands of quasi-proprietary solutions.
Briefly, my personal opinion of Andy Lapteff's note: a cheap commissioned article lobbying for corporate proprietary solutions, projects, and sides. The paradigm is the same: "We are large integrators making decisions FOR YOU!", "There are 10 of us and we can do it!", "Give our architects your headache and we will solve your problems for a subscription fee!", "Why do you need your local 'administrator-craftsmen' when there are proven corporate solutions in cloud computing and virtualization (in reliable data centers)?"
From the spam on the author's resources, pick out the trademarks: who stands behind whom, and who works for whom. Everything falls into place. The usual powerful propaganda from the large players, aimed at collecting donations, recurring payments, and subscriptions for a virtual "cell" in a data center. For this, Cisco, RedHat, and other corporate players hire their watchdogs.
A typical "Facebook" with users, only in open-source terminology. So it turns out: the assets will be ours - your data, accounts, personal data, confidential data, etc. And the virtualization assets, data centers, disk quotas, subscriptions, etc. are also OURS. Cool, very cool. Just like Klaus Schwab - the concept of living on rent for today's and future generations. When necessary, we will turn off this, this, and this for you. Aka today's managed social networks and platforms.
Read about how much headache your "clouds" cause in the field of video surveillance! Read a couple of very narrow threads about how people have been turned into a commodity: companies like "Hikvision" and "Dahua" simply take the access rights to the video recorder away from the local artisan admins. This is just a small aside.
Admins, your administration rights are simply being taken away, with "administration" replaced by "virtualization systems in cloud computing". This is typical, traditional "vendor lock-in", only at the level of network-centric architectures involving millions of users. Yes, it makes sense - admins are no longer needed!
Admins, you are turning into users! But people like Andy Lapteff serve it up with the sauce: "'Architects' delegate authority to themselves and will solve all your problems!" Do you know my problems, their depth, quality, properties? No.
On topic: I will disappoint you, corporate boys on the payroll of "Cisco-RedHat" - the niche of traditional small "artisans" will not die for a very long time. Notes like this are aimed only at squeezing out and ruining small private businesses. This is a "civilized" form of total subordination to the large players.
Andy Lapteff is typical. And in the chain of such typical people there are tens to hundreds of lawyers, marketers, gurus, and coaches.
 
the niche of traditional small “artisans” will not die for a very long time.
I think that's wishful thinking. EVERYONE is concerned about their own "bottom line".

If Google comes to you and says they can handle your email FOR YOU, for very little (or zero!) money -- "Just point your MX records at us" -- how long before someone counting pennies opts to make that switch? "What HARM could it do?"

"Let us keep your data safe in our cloud service! We'll ensure that you never have to pay someone to do/restore backups, ever again!"

If I have to spend $X to hire someone to maintain my business's infrastructure, that's $X that I can't spend on hiring a CREATIVE person to innovate and give me an edge in the market. A guy handling email, web server, etc. doesn't buy me ANYTHING in the market!

When your competitors (or others in your market) adopt a cost-saving measure, there is huge pressure on you to do likewise; THEY now have a financial advantage over you (their profit margins can be higher -- or their sell prices lower... either way, YOU are the loser).

This is directly applicable to FOSS development efforts. Change for the sake of change is usually NOT a good thing. It increases the cost of using your "product" over the cost of using someone else's product that doesn't change (as often) -- even if the other product is inferior!

[And THAT sums up my stand on Linux!]
 