Where you are right: in the short term, a Linux made harder to use by systemd means more support workload for Red Hat (meaning more expenses) but also more revenue, as more customers renew their support contracts because running Linux without professional support is beyond their skills or wishes. In the long run, making Linux hard to use is not a good business strategy. Remember who Red Hat's real competitors are: not AIX, HP-UX, and Solaris (which are in various shades of dead), and certainly not FreeBSD (which is mostly a hobbyist operating system), but Windows, SUSE, and CentOS. CentOS in particular, because it gives you everything RHEL does, just without the costly support contracts and the annoying license management. For customers with staff skilled enough to run it, CentOS is a good alternative: good for the customer, horrible for Red Hat.
Red Hat does not 'fight' with CentOS, Red Hat and CentOS recently joined forces:
https://community.redhat.com/centos-faq/ said:
Red Hat is taking an active role in the CentOS Project to accelerate the development and broaden the reach of projects such as OpenStack by expanding our base of community-oriented users to include those engaged with CentOS now and in the future.
By working with the CentOS Project, we can reach beyond those actively engaged in platform innovation through Fedora to projects and people in need of a community Linux distribution that’s open to selective modification while remaining relatively stable.
Red Hat support is, same as their Linux, sh!t. We don't use their support for easy problems, because the solution is already known or easily found on the Internet, so opening a service request is pointless. If something is hard (like some nuances of Red Hat Cluster Suite, or some 'internals'), then the only thing Red Hat support does well is pass your service request across time zones to different support people; and every time it lands on a new guy, he will probably ask you the same questions all over again ... You may ask why we have Red Hat support at all? Well, the so-called 'business' wants and pays for so-called support just to have support, because they already paid for the database with support, for the application servers with support, etc.
We often 'split' these 'business' demands: Red Hat with support for production, and CentOS or Oracle Linux (which is also a RHEL clone) for test/dev.
They are all (or at least most of them) Red Hat employees, and Red Hat lives off support services. It is very much in Red Hat's interest for Linux to become fragile and difficult to understand, because that means more profit for --> Red Hat. The same can be said about SUSE.
Aside from Red Hat and SUSE, there is no other serious support business available worldwide, neither for Linux nor for *BSD.
You can also buy the same/similar support from Oracle for their Oracle Linux, which is, apart from logos and colors, the same kind of RHEL clone as CentOS.
Administering Linux before systemd wasn't trivial either. The system configuration (both parameters of the kernel and interfaces, and which services to start when and how) was a huge mess beforehand. Systemd has not removed the mess (it really can't, there are too many moving parts), just arranged it differently.
CentOS/RHEL/Oracle Linux 6.x are easier (or at least more predictable) to administer than the 7.x series with systemd. And it's not only about systemd, no sir. Even the installer in the 7.x series is totally fscked up. For example, if in that 7.x installer you create a pretty standard 'enterprise' setup with two physical network interfaces coupled into a highly available interface bond0 (lagg0 on FreeBSD), and then you put a VLAN tag and an IP address on that VLAN, you get a total mess which looks like this:
Code:
# cat /etc/sysconfig/network
# (empty file)
# pwd
/etc/sysconfig/network-scripts
# ls -1 ifcfg-*
ifcfg-Bond_connection_1
ifcfg-eno49
ifcfg-eno49-1
ifcfg-eno50
ifcfg-eno50-1
ifcfg-VLAN_connection_1
# tail -n 9999999 ifcfg-*
==> ifcfg-Bond_connection_1 <==
DEVICE=bond0
BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=active-backup"
TYPE=Bond
BONDING_MASTER=yes
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="Bond connection 1"
UUID=ca85417f-8852-43bf-96ee-5bd3f0f83648
ONBOOT=yes
==> ifcfg-eno49 <==
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno49
UUID=2f60f50b-38ad-492a-b90a-ba736acf6792
DEVICE=eno49
ONBOOT=no
==> ifcfg-eno49-1 <==
HWADDR=xx:xx:xx:xx:xx:xx
TYPE=Ethernet
NAME=eno49
UUID=342b8494-126d-4f3a-b749-694c8c922aa1
DEVICE=eno49
ONBOOT=yes
MASTER=bond0
SLAVE=yes
==> ifcfg-eno50 <==
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=eno50
UUID=4fd36e24-1c6d-4a65-a316-7a14e9a92965
DEVICE=eno50
ONBOOT=no
==> ifcfg-eno50-1 <==
HWADDR=xx:xx:xx:xx:xx:xx
TYPE=Ethernet
NAME=eno50
UUID=a429b697-73c2-404d-9379-472cb3c35e06
DEVICE=eno50
ONBOOT=yes
MASTER=bond0
SLAVE=yes
==> ifcfg-VLAN_connection_1 <==
VLAN=yes
TYPE=Vlan
PHYSDEV=ca85417f-8852-43bf-96ee-5bd3f0f83648
VLAN_ID=601
REORDER_HDR=yes
GVRP=no
MVRP=no
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
IPADDR=10.20.30.40
PREFIX=24
GATEWAY=10.20.30.1
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_PRIVACY=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME="VLAN connection 1"
UUID=90f7a9bb-1443-4adf-a3eb-86a03b23ecfb
ONBOOT=yes
For the record, I chose a 'STATIC' IPv4 address, but the installer configured these interfaces to use DHCP AND that static address ... enterprise ...
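If you inherit a box installed this way, a throwaway script can at least surface the leftover DHCP entries quickly. This is a hypothetical helper of my own, not any RHEL tool; the sample file it creates for the demo is invented:

```shell
#!/bin/sh
# Hypothetical helper: list BOOTPROTO per ifcfg-* file so stray dhcp
# entries (like the installer-generated ones above) stand out.
check_ifcfg() {
    # $1 = directory holding ifcfg-* files, e.g. /etc/sysconfig/network-scripts
    for f in "$1"/ifcfg-*; do
        [ -f "$f" ] || continue
        proto=$(sed -n 's/^BOOTPROTO=//p' "$f")
        case "$proto" in
            dhcp) echo "$(basename "$f"): BOOTPROTO=dhcp (check this one)" ;;
            *)    echo "$(basename "$f"): BOOTPROTO=${proto:-unset}" ;;
        esac
    done
}

# Demo against a scratch directory with one fake file:
d=$(mktemp -d)
printf 'DEVICE=bond0\nBOOTPROTO=dhcp\n' > "$d/ifcfg-bond0"
check_ifcfg "$d"    # prints: ifcfg-bond0: BOOTPROTO=dhcp (check this one)
rm -rf "$d"
```

Pointing it at /etc/sysconfig/network-scripts on the freshly installed machine shows at a glance which connections the installer left on DHCP.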
After manual fixing with vi(1), this is how it is supposed to look ...
Code:
# cat /etc/sysconfig/network
GATEWAY=10.20.30.1
NOZEROCONF=yes
# ls -1 ifcfg-*
ifcfg-bond0
ifcfg-bond0.601
ifcfg-eno49
ifcfg-eno50
# tail -n 9999999 ifcfg-*
==> ifcfg-bond0 <==
DEVICE=bond0
BONDING_OPTS="miimon=1 updelay=0 downdelay=0 mode=active-backup"
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
IPV4_FAILURE_FATAL=no
IPV6INIT=no
ONBOOT=yes
==> ifcfg-bond0.601 <==
VLAN=yes
TYPE=Vlan
VLAN_ID=601
DEVICE=bond0.601
REORDER_HDR=yes
GVRP=no
MVRP=no
BOOTPROTO=none
IPADDR=10.20.30.40
PREFIX=24
IPV4_FAILURE_FATAL=no
IPV6INIT=no
ONBOOT=yes
==> ifcfg-eno49 <==
BOOTPROTO=none
IPV4_FAILURE_FATAL=no
IPV6INIT=no
TYPE=Ethernet
NAME=eno49
DEVICE=eno49
ONBOOT=yes
MASTER=bond0
SLAVE=yes
==> ifcfg-eno50 <==
BOOTPROTO=none
IPV4_FAILURE_FATAL=no
IPV6INIT=no
TYPE=Ethernet
NAME=eno50
DEVICE=eno50
ONBOOT=yes
MASTER=bond0
SLAVE=yes
Not to mention that the same configuration on FreeBSD would be 7 lines in the /etc/rc.conf file:
Code:
ifconfig_fxp0="up"
ifconfig_fxp1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto failover laggport fxp0 laggport fxp1"
vlans_lagg0="601"
ifconfig_lagg0_601="inet 10.20.30.40/24"
defaultrouter="10.20.30.1"
Another thing ... if you want to have something executed at boot on 6.x, you put it as /etc/init.d/NAME, then put a link in the needed runlevel directory (rc3.d most of the time) and voila! With systemd you need to create the script, then create NAME.service, and then systemd creates links to that NAME.service so it will actually, eventually, run that script ... surely a great improvement :ASD
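For completeness, the NAME.service part is a small declarative file, and the symlinks are created by `systemctl enable`. A minimal sketch (the unit name and script path here are invented for illustration):

```
# /etc/systemd/system/myscript.service -- hypothetical example
[Unit]
Description=Run my boot-time script
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/myscript.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable myscript.service` creates the symlink (under /etc/systemd/system/multi-user.target.wants/), which is the systemd counterpart of linking an init script into rc3.d.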
And while I have not had to do any serious admin work on a Linux machine since systemd has been in use (matter-of-fact, I have done very little admin stuff on Linux in the last 10 years, with the exception of one Raspberry Pi running Raspbian, which does have systemd), I have friends who administer very complex servers (clusters, with high-end networking, extreme hardware, strange and powerful software), and they say that systemd doesn't really hurt.
Generally, administering Linux is not a pleasant activity; I work with Linux only because I am paid for it, and with RHEL/CentOS 7.x it's even more of a PITA because of systemd.