Solved Segmentation fault error after python 3.6 to 3.7 update

You did not miss it. I have not really posted it. It is just a Python message that came up several other times while troubleshooting the previous problems (the lack of rebuilding after updating Python 3.6 to 3.7).

At the end of my listing of jails, I get the following error message.
The command is:
iocage list
jails listed ....
Code:
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name=1 mode='w' encoding='utf8'>
sys:1: ResourceWarning: unclosed file <_io.TextIOWrapper name=2 mode='w' encoding='utf8'>

I am currently still rebuilding though. I hope it goes away!


I have tried to set up VMs on my Windows machines, but have yet to get them working. In all honesty, I was trying to set up Ubuntu rather than FreeBSD. At any rate, I have not gotten that far. The folks at OpenEMR like Ubuntu and Docker a lot. I just happen to find FreeBSD to be lighter (maybe), or maybe I am just used to it since it is what I started with. Back then there was no Ubuntu or Docker. I just don't want to learn a different system. So networking the VMs from within Windows was what had me a bit stuck, plus I spend a lot of time figuring out the actual OpenEMR software.
I will switch to pkg rather than ports based on the recommendation here.

I will be setting up the new machines once I understand the one problem I am getting here, which is:
mail kernel: swap_pager: indefinite wait buffer: bufobj: 0, blkno: 525570, size: 4096
When this happens the machine really slows down. If this has to do with the HDD configuration, then I would like some advice before I configure it. Yes, I can redo it several times, but my current machine can be my future test machine, and the new machine I set up should be the new production server.
 
You do seem to make things complicated. If you've got Windows, use VirtualBox, boot off a FreeBSD 12.1 ISO as a virtual CD, and play away and learn.

Google says the unclosed file warning is Python closing a file that seems to have been forgotten, but there's no point in looking at that until you've finished whatever you are doing on the machine that is throwing the message.

And for your latest issue, Google points here: https://forums.freebsd.org/threads/help-needed-kernel-swap_pager-indefinite-wait-buffer.67840/
 
A couple of suggestions based on what I have read of your predicament so far:

You're already familiar with jails, so keep the base OS as minimal as possible and do everything in jails. Doing this also makes it easier to test out new configurations (just snapshot the filesystems for a jail and create a new one) as well as confining any mess to a smaller environment. Generally I run one major service per jail (mx, www, db, etc.) with a local virtual network that's unrouted and filtered via pf to provide isolation while letting the services that need to interconnect do so (e.g. services that need to reach the DB can, but those that don't can't, and the DB is absolutely unreachable from outside the network).
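
Something like this, as a rough sketch of the pf side (the interface names, subnet and jail addresses are just examples; adjust for your own network):
Code:
# Example macros - replace with your real external interface, jail interface and addresses
ext_if   = "em0"
jail_if  = "lo1"
jail_net = "10.10.10.0/24"
www_jail = "10.10.10.10"
db_jail  = "10.10.10.20"

# Default deny on the jail network
block drop on $jail_if all

# Only the web jail may talk to the database jail, and only on the DB port
pass on $jail_if proto tcp from $www_jail to $db_jail port 5432

# The jail network never leaves the box
block drop in  on $ext_if from any to $jail_net
block drop out on $ext_if from $jail_net to any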

As others have said, use pkg except where you need something custom. It's faster and less error prone.

Build custom packages (or packages you need to get to the latest version faster, such as for security vulnerabilities) using poudriere rather than portmaster. It builds the port in a separate jail (and thus a clean environment), which helps with inter-package dependencies as well as general sanity (your non-build environments aren't cluttered with build dependencies, for instance). It also outputs packages, so you can roll back to an earlier version by simply doing a pkg install of the older version. The following script will work (or you can use a build environment such as Jenkins or GoCD).
Code:
#!/bin/sh

if [ $# -ne 1 ]; then
  echo "USAGE: $0 port_name" >&2
  exit 1
fi

# Update the ports so we have the latest version before each build
sudo poudriere ports -uv

# Build the port ($1 is the port)
# Replace $poudriere_jail_name with the name of your Poudriere jail (or assign the variable further up)
sudo poudriere bulk -j "$poudriere_jail_name" "$1"
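
If you haven't used poudriere before, the one-time setup is roughly this (the jail name, FreeBSD version and ports tree name are examples; see poudriere(8)):
Code:
# Create the build jail and the default ports tree once
sudo poudriere jail -c -j 121amd64 -v 12.1-RELEASE
sudo poudriere ports -c -p default

# Check what exists
sudo poudriere jail -l
sudo poudriere ports -l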

Force cores to be dropped to a known location. This way they won't clutter the disk (by landing in whatever the current directory of the process happens to be) and won't fill your disk (if you put a quota on, which I recommend).
Code:
zfs create zroot/var/coredumps

# Change 2G to whatever size is appropriate to your setup
zfs set quota=2G zroot/var/coredumps

# Make root the only one to access core dumps
chmod 0700 /var/coredumps

cat >> /etc/sysctl.conf <<EOF
# Store all cores in /var/coredumps/
# See core(5) for details of variables
kern.corefile=/var/coredumps/%H_%N.%P.%U.%I
# Compress cores
kern.compress_user_cores=1
EOF

# Reload /etc/sysctl.conf
sysctl -f /etc/sysctl.conf

PHP is a security nightmare. Make sure you are on the security mailing lists for PHP itself as well as any and all software that uses it. I recommend removing PHP from your environment if practical, especially for security conscious services such as medical records. Each instance of PHP software should go in its own jail.

As VladiBG said, you can get to root via "su". You can also do so with "sudo -s", which has you typing your own password instead of root's. This is generally considered preferable. It is also generally a good practice to do all root operations via sudo so that you don't accidentally do something in a root shell (lots of mistakes leading to a reinstall happen this way).

I don't understand. The server must be available at all times.

This is not practical or financially viable. You need to work out how much downtime you're willing to accept and then design around that requirement. The closer you get to 100%, the more money you'll throw at it (going from 99.9% to 99.99% is often an order of magnitude difference in cost). The SLA (service level agreement) you provide to customers will involve financial penalties for too much unscheduled downtime, so you really need to be aware of what is practical for your configuration. You'll also need monitoring to tell you when things are down and when you're getting close to your SLAs (you should have SLOs, service level objectives, which are tighter than your SLA so that you fix things before your customers' lawyers start scrutinizing your SLAs).

A reasonable production SLA is 99.9% uptime. This is approximately 43 minutes of downtime per month. Depending on what your product is and what your customers' hours are, you may be able to build in service windows or limit availability to business hours. This will give you more time for maintenance. A good architecture will also help with making changes without maintenance windows.
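
For reference, the arithmetic behind that 43 minutes:
Code:
# 30-day month = 30 * 24 * 60 = 43200 minutes; a 99.9% SLA leaves 0.1% of that as downtime budget
echo "scale=1; 30 * 24 * 60 * (100 - 99.9) / 100" | bc
# -> 43.2 minutes per month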

Remember that availability is not just the machines; it's everything between you and the customer. You'll need highly available local networking, redundant ISP connections, etc.

Yes, I have two boxes with 6 hot-swappable drive bays.

Your ZFS architecture depends on what your reliability and throughput/latency requirements are. I generally recommend putting the OS on a separate pair of mirrored disks and then everything else on the remaining disks. Unfortunately, this only leaves you with 4 disks, so your choices are (a command-line sketch follows the list):
  • Mirrored stripes (RAID 1+0). This means that you can safely lose 2 disks, one from each stripe, without losing data. If you lose two disks from the same stripe you will lose data. The storage you'll be left with is 50% of your disk size. Throughput is 4x the individual disk throughput for reads and 2x for writes (throughput numbers, as with those below, really depend on your workload so they are approximations and could be off by quite a bit).
  • RAIDZ (3 data + 1 parity). This means you can safely lose any 1 disk without losing data. The storage space you'll have is 75% of your disk size. Throughput is approximately 3x your individual disk throughput for reads and writes. Due to the size of modern disks and chance of losing a second disk during a rebuild, it is not recommended for non-HA configurations. Reads and writes are slowed down by having to compute parity.
  • RAIDZ2 (2 data + 2 parity). You can safely lose any 2 disks without losing data. The storage you'll be left with is 50% of your disk size. Throughput is approximately 2x the individual disk throughput for reads and writes. Reads and writes are slowed down by having to compute parity.
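
As a command-line sketch of those layouts (pool and device names are examples only, and the installer normally creates the OS mirror for you):
Code:
# OS: two-disk mirror (zroot), normally set up by the installer
zpool create zroot mirror ada0 ada1

# Data: striped mirrors (RAID 1+0) across the remaining four disks
zpool create data mirror ada2 ada3 mirror ada4 ada5

# Alternative: RAIDZ2 across the same four disks
# zpool create data raidz2 ada2 ada3 ada4 ada5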
Personally, for extremely sensitive data, I go with RAID 1+0 using three-disk mirrors. I have had double failures within a single mirror before, where the three-way replication saved me. These are typically larger installations using tens or hundreds of disks, though, and the hardware you have won't work for this.

The reason for putting the OS on separate disks is that the workloads and requirements are different. For example, you might need to encrypt your data disks (for medical data, you definitely do), but you can't boot off encrypted ZFS or GEOM. Native ZFS encryption is only available in HEAD (FreeBSD 13).

The folks at OpenEMR like Ubuntu and Docker a lot.
New hotness; all the cool kids are doing it. Docker is like a less secure version of jails, but it's great for getting demo environments up really fast. Production is a whole different ball game, with Docker containers being the script kiddie's wet dream.

Other things which have not yet come up which you definitely want to think about (I'm guessing you might not have considered some or all of these based on your assumption of 100% uptime):
  • Monitoring. Everything needs to be monitored both internally and externally, and someone needs to be oncall (i.e. be pageable) all the time. Operationally, you need two people minimum (and more if possible), as 24/7 oncall leads to burnout and people need to be able to take holidays. A reasonable number of people for proper production oncall is 6 people in each of two timezones (so 12 total, preferably in two different economic zones such as the USA and EU), although most places get by with fewer staff and take a hit on project work and burnout rate. Oncall staff will also need to be paid an oncall bonus because they're effectively working 24 hours a day for their oncall period. You'll skip a lot of this in the early days as you just struggle to get your operation going, but keep it in mind as something that needs fixing sooner rather than later. Prometheus with Grafana makes a good local monitoring starting point. Pingdom is a good starting point for remote monitoring. Remember that monitoring too much is as bad as monitoring too little. It's easy to get swamped by things which are not impacting your customers and so are not actually relevant.
  • Configuration management. Is every change checked into a source code repository and then automatically deployed? How are changes deployed to production so that all of your machines providing a given sub-service all look identical? Which configuration management tool will you use: Ansible, SaltStack, Puppet or Chef? How do you upgrade your production software? Can you do it live or do you need to do it during a maintenance window? If you're aiming for live, are you using blue/green or configuration flags or some other process? How do you name your machines/jails? Are those names visible in DNS and how do they get there? Do you need service discovery for your components?
  • Local scripts and software: /opt/? /usr/local/? Packaged or deployed via configuration management copying? I recommend running your own package repo and packaging all local software/scripts so they deploy the same way as everything else.
  • Backups. All the data you store needs to be backed up securely offsite. As it's healthcare data, it needs to be backed up in a HIPAA-compliant way. I recommend restic to your favorite cloud provider, as it encrypts and de-duplicates (a short restic sketch follows after this list). I use Backblaze B2, but they aren't HIPAA compliant, so you'll likely want SpiderOak, Carbonite or someone else willing to sign a BAA.
  • High availability. What can fail without causing an outage? How quickly can you fail over if a critical component goes down (e.g. motherboard failure or a datacenter power outage)? Is that automated or not? How is your customer data (most likely, your database) replicated to its partners?
  • Performance and scalability. How many customers of what size can your hardware support? How do you tell when you're hitting a limit (e.g. if your average response time slows down or disks are filling up)? How do you expand that? It sounds like you're starting with everything on a single tier. How are you going to split things up to be multiple tiers when the applications no longer fit on a single machine? How do you scale each tier?
  • SLAs, SLOs and SLIs. As mentioned above, you'll need to settle on a service level that's appropriate to your customers. You'll want internal objectives which are tighter than the SLA and SLI (service level indicators) which tell you about service critical path failures. If you have to go to backups to recover data (e.g. during a data corruption event), how long does that take to restore and get your system running again (this is the mean time to recover or MTTR)?
  • Security and privacy. How are you going to ensure the security of your service? Are you HIPAA compliant? How about COPA, GDPR and COPPA? Are you part of the financial chain and do you also need SOC 2 or SSAE 18?
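
On the backups point above, a minimal restic sketch (the repository URL, password file and paths are placeholders, and check that whichever provider you use will sign a BAA):
Code:
# Point restic at an offsite repository (B2/S3-style URL is a placeholder)
export RESTIC_REPOSITORY="b2:example-bucket:/backups"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

# One time: initialise the encrypted repository
restic init

# Nightly: back up the database dumps and expire old snapshots
restic backup /var/db/dumps
restic forget --keep-daily 14 --keep-weekly 8 --keep-monthly 12 --prune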
 
I have tried to set up VMs on my Windows machines, but have yet to get them working. In all honesty, I was trying to set up Ubuntu rather than FreeBSD. At any rate, I have not gotten that far. The folks at OpenEMR like Ubuntu and Docker a lot. I just happen to find FreeBSD to be lighter (maybe), or maybe I am just used to it since it is what I started with. Back then there was no Ubuntu or Docker. I just don't want to learn a different system. So networking the VMs from within Windows was what had me a bit stuck, plus I spend a lot of time figuring out the actual OpenEMR software.
I will switch to pkg rather than ports based on the recommendation here.

Oracle's VirtualBox is free and it does work with Windows and FreeBSD, unless you're using a different VM emulator. Stay away from Ubuntu, Debian or other Linux variants, as they're not the same as FreeBSD. Of course... stay far, far away from Docker, as it's designed for sysadmins who don't want to learn how to manage their servers.
 
PHP is a security nightmare. Make sure you are on the security mailing lists for PHP itself as well as any and all software that uses it. I recommend removing PHP from your environment if practical, especially for security conscious services such as medical records.
Cough. Can you point me in the direction of the resources that prove this point specifically about PHP 7.x - the core PHP, not things like WordPress?

Agree about being on mailing lists, patching and keeping up to date, and installing as few modules/add-ons/frameworks as possible, but that is true of any programming environment exposed to the internet.

A bit OT from the OP's request, but I would like to see/read why you've made this "security nightmare" statement. Most vulnerabilities seem to be caused by C's foibles, and if we stopped using anything that used C or was built on C, the internet would be very different!
 
Like this?
Code:
root@kg-core1# tail /var/log/messages
Sep 26 00:18:48 kg-core1 pkg-static: qtchooser-66_4 installed
Sep 26 03:15:12 kg-core1 pkg: qt5-webengine-5.15.0_2 deinstalled
Sep 26 03:15:15 kg-core1 pkg-static: qt5-webengine-5.15.0_3 installed
Sep 26 16:54:40 kg-core1 pkg: gpu-firmware-kmod-g20200503 deinstalled
Sep 26 16:54:41 kg-core1 pkg-static: gpu-firmware-kmod-g20200920 installed
Sep 26 17:12:53 kg-core1 pkg: intel-graphics-compiler-1.0.4879 deinstalled
Sep 26 17:12:54 kg-core1 pkg-static: intel-graphics-compiler-1.0.5064 installed
Sep 26 22:15:43 kg-core1 pkg: node-14.10.0_1 deinstalled
Sep 26 22:16:06 kg-core1 pkg: rust-cbindgen-0.14.3 deinstalled
Sep 26 22:16:19 kg-core1 pkg: rust-1.44.1_1 deinstalled
 
Oracle's VirtualBox is free and it does work with Windows and FreeBSD, unless you're using a different VM emulator. Stay away from Ubuntu, Debian or other Linux variants, as they're not the same as FreeBSD. Of course... stay far, far away from Docker, as it's designed for sysadmins who don't want to learn how to manage their servers.

Most definitely. Docker is not meant for production. At least not the free version.
 
Hmm, my business plan.
Yes, I am HIPAA compliant. I was planning on 2 datacenters in the USA this year, a 3rd next year. All in different states. Had not thought of the EU.
Lots of stuff, thanks.
I was planning on Ansible.
I already had a few of these planned, but not most of the other things. I will get back to this.
OpenEMR does automated backups. I will be using it for medical data.
The plan is to hire sysadmins like you mentioned. Too much for me. My plan is to design and understand. That is all. After all, only I am responsible for the data. No one else.
 
It won't be a popular opinion here, but if that is what they are developing on and recommend you use for running their application, then it may be best to use Ubuntu. Generally there isn't much difference between "Unix" systems; once you know one, you're fine. Some commands may be different, or how they do networking, but it is trivial.

Although it may run on FreeBSD, if there is an issue with the application and it is due to FreeBSD, say a library they expect is missing or behaves differently than on Linux, the developers probably aren't going to do much of anything to help. You'll be in a sink or swim position.

Yes, I find myself like that most of the time. Hence trying to set up Ubuntu for comparison.
Use what is best. With OpenEMR it sounds like Ubuntu might be the better choice. And if they have OpenEMR in Docker, then by all means, use it! It won't run on FreeBSD, but who cares. It will run where it is best to run, and for health care you would certainly want it to be where they recommend. Not a popular opinion here, but I am not a zealot for any operating system. It serves no purpose to always hear "Don't use Linux." There is nothing wrong with it. Given the limited amount of information I have from what you said, if I were your paid consultant giving you advice, I'd tell you to use Ubuntu and, if there is a Docker image, to go that route.
OK, I understand.
My concern had been security.
 
I don't think it's wrong to say the "best" thing to do is what works for you.

Not every job is a nail, not every tool is a hammer.

Whether you use Linux, Mac, Windows, or a BSD - you've got to keep it up-to-date, patched, understand your risks and how to mitigate them (but there's always a degree of luck.) You need mirror systems to play, test, and develop on.

I use FreeBSD, OpenBSD, Mac, Windows, and Linux. MySQL mostly, some MariaDB; I've had a look at PostgreSQL and want to try my next project using it. Apache for the web server, but I know nginx and OpenBSD's httpd are there as options. PHP does for my programming language - it's not perfect, none are. And plenty of other programming languages. 😍

There are no silver bullets, no magic solutions. Things that are easier to set up and appear not to need a lot of knowledge initially will bite you hard when you have a real issue and you have no understanding of how it actually worked, so you don't know where to start.

You can run a web server using Microsoft Windows, and .NET and SQL Server. Or Oracle. Or Linux. Or OpenBSD. Or FreeBSD. Apache, nginx, lighttpd, etc. You've got to know what you are doing and you've got to look after it. There's no checklist that you can tick boxes on and then walk away.

And you might do everything right and be hit by a firmware issue, a bug in UEFI, human error. Life is messy!
 
OpenEMR does automated backups. I will be using it for medical data.

What does it mean to do automated backups? Are these just dumps of the current DB state to local disk? This is not the same as an offsite backup. It's critically important to get the data to an offsite location on a regular interval (at least daily) in a storage format that is tamper resistant (not necessarily in a security sense, although that is a nice bonus and it's important not to neglect the security of your backups, so much as in a fat-finger sense). If your backups are local and the machine disappears, so have your backups.

The other thing to keep in mind about backups is that backups are irrelevant; restores are what matter. Can you get back all your data, uncorrupted, quickly, and get the service back into production? How long does that take? What do you do about the data that was added between your last backup and the outage? That's both a sysadmin and a business problem.
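
A sketch of what a periodic restore test might look like, continuing the restic example from earlier (the target path is a placeholder):
Code:
# Verify the repository, then restore the newest snapshot somewhere harmless
restic check
restic restore latest --target /tmp/restore-test
# Then load the restored dump into a scratch database and run the application's
# own consistency checks before declaring the restore good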

I should also correct something I wrote the first time (don't write long replies at 02:00). RAID 1+0 is striped mirrors not mirrored stripes. Do the mirroring first and then stripe across the mirrors. Doing it the other way will cost you your data very quickly.
 
Cough. Can you point me in the direction of the resources that prove this point specifically about PHP 7.x - the core PHP, not things like Wordpress?

I will admit to not having studied PHP 7.X in depth. I gave up on it in 5.X after the core developers had several critical security holes and their response was to add flags to disable the fixes and recommend that everyone enable those flags because the fixes were otherwise a breaking change. Security is a process and mindset, and the PHP team did not even recognize its existence.

Prompted by your question, I did look around and see that there were considerable updates to PHP security in 7.1. The language, however, still contains a myriad of ways in which it's easier to shoot yourself in the foot than do the right thing (this could be said of most languages, but I still contend that PHP is plagued by this to a greater degree than others, as one can see in silliness such as TRUE == -1 being true while TRUE == 0 is not). As noted in the Wikipedia security entry, 11% of security vulnerabilities last year were PHP-linked, and there's still no taint checking or input validation.

Am I strongly biased against it after a number of years of chasing down weird bugs and fighting with the core developers about bone-headed security flaws due to a lack of understanding of the very concept of security? Yes, absolutely. Could it have improved in the meantime? Quite possibly, but there's still quite a bit of recent data suggesting the improvements have a very long way to go.
 
If Linux was so bad it wouldn't be running the world.

I'm going to nitpick here. Windows was absolutely that bad all through the 1990s and early 2000s, and it ran the corporate world. While it's actually good today (not just improved, it's good), there was the better part of two decades where Windows ran the world and was absolutely horrible. I can say the same about BIND 4 and BIND 8 (both of which literally ran the world, as they were the primary DNS servers of their times).

The rest of your post I completely agree with. Linux is fine, just not my first choice (this is a FreeBSD forum, after all). It's a UNIX variant, and that's much easier to run in production than Windows or macOS (although Windows has caught up a lot in the last couple of years, so I'm not entirely sure how much longer that'll be the case).

Stay away from Ubuntu, Debian or other Linux variants

I would personally take Debian over the other Linux distributions, as they are very particular about their stability and security (the latter being an area FreeBSD could learn from: for the most part, security flaws in existing packages are fixed in hours or a small number of days, which is considerably faster than most fixes to the ports system, especially if you wait for a package to be built). Ubuntu is a Debian derivative that focused on updating packages faster and being more corporate friendly (i.e. having a throat to choke). It isn't generally as secure or stable, but there's a trade-off there that different organizations will fall on different sides of.

As said by others, there's some value in doing what the developers do. There's also some value in working with the system you are comfortable with as, at the end of the day, you have to keep it secure and maintained, which is harder if you don't know what you're doing or dislike what you're working with.

As I suspect is true for most people on this forum, I get more personal value out of running FreeBSD in production and having to port software and put up with its various flaws than I do out of running other systems and putting up with their flaws. I've ported a ridiculous number of pieces of software between various platforms, even between individual Linux distributions (nothing like finding that the developer not only assumed Linux, but Red Hat Linux specifically), and it's just part of the job that you don't get away from by picking the "right" platform. There will always be something that is weird about the platform you're on and thus needs working around or accounting for.
 
Am I strongly biased against it after a number of years of chasing down weird bugs and fighting with the core developers about bone-headed security flaws due to a lack of understanding of the very concept of security? Yes, absolutely. Could it have improved in the meantime? Quite possibly, but there's still quite a bit of recent data suggesting the improvements have a very long way to go.

PHP is quite secure, and so are Java, Perl, C, Python, etc. It all comes down to how well the applications are designed on top of those frameworks. If it's a poorly designed application, then expect problems and security issues. Most of those frameworks are open source and have been checked hundreds of times for vulnerabilities. If there are known vulnerabilities in the framework, there's always a workaround to address those issues.
 
As VladiBG said, you can get to root via "su". You can also do so with "sudo -s", which has you typing your own password instead of root's. This is generally considered preferable. It is also generally a good practice to do all root operations via sudo so that you don't accidentally do something in a root shell (lots of mistakes leading to a reinstall happen this way).
I don't see any benefit of "sudo -s" over "su -" - only the disadvantage of staying in the user's environment (!).

I switch to root for as long as I'm doing admin stuff. Using "Ubuntu's sudo way" instead of "su -" means a hacker only needs to crack one password instead of two - I don't agree that using sudo is to be preferred. And especially when it comes to servers / security: as long as I don't need a package, I won't install it.

The "sudo thing" became popular as Ubuntu decided to disable the root account. At that time it was popular by newbies to run the whole X session as root - because the target group was a) used to do their daily work on the admin account on another OS, and b) they felt it is uncomfortable to handle more than one account. So for that use case it was a benefit.

But the intention of sudo is not just to avoid root's command line, but to allow defined users to run defined commands as another user. And as far as I remember, a default installation of sudo doesn't allow a user to do any administration stuff.
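
For example, a sudoers entry in that spirit might look like this (the group name and commands are only examples):
Code:
# /usr/local/etc/sudoers.d/operators
# Members of "operators" may restart nginx and follow its error log - nothing else
%operators ALL = (root) /usr/sbin/service nginx restart, /usr/bin/tail -F /var/log/nginx/error.log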

And why is sudo to be preferred when there are also doas or super? Some tutorials nowadays assume that any Unix-like OS has sudo available - and of course already pre-configured the way Ubuntu does it. And that's not true - even on Linux. I always see a lack of background in these instructions and recommendations. A general manual shouldn't contain "sudo", but nowadays it often does…

Looking through my shell histories tells me: running the commands I use as a user accidentally as root wouldn't do any harm - I'm not typing the "read mail real fast" command (rm -rf /) even as a user. And to avoid such confusion, a simple white-on-red colorized root prompt can also help ;) The clear separation of the root account, by having to switch to it, is IMO to be preferred over just typing "sudo" in front of a command. Nowadays I see users first switching to a /usr/… directory to do some sudo commands, and then failing because wget cannot save their files anymore… A clear separation is IMO better - even for the lazy, dilettante end user.

Sorry for posting OT…
 
jmos I agree with you and I can only add this to your post:

CVE-2020-7954
CVE-2020-7254
CVE-2020-14342
CVE-2020-14162
CVE-2020-13695
CVE-2020-13694
CVE-2020-12850
CVE-2020-11108
CVE-2020-11069
CVE-2020-10589
CVE-2020-10588
CVE-2020-10587
CVE-2020-10286
CVE-2020-10277
CVE-2020-10255
CVE-2019-9891
CVE-2019-8320
CVE-2019-19234
CVE-2019-19232
CVE-2019-18684
CVE-2019-18634
CVE-2019-15949
CVE-2019-14287
CVE-2019-12775
CVE-2019-12147
CVE-2019-11526
CVE-2018-3263
CVE-2018-20052
CVE-2018-1903
CVE-2018-18556
CVE-2018-15359
CVE-2018-13341
CVE-2018-1203
CVE-2018-11194
CVE-2018-11193
CVE-2018-11192
CVE-2018-11191
CVE-2018-11190
CVE-2018-11189
CVE-2018-11134
CVE-2018-10852
CVE-2018-0493
CVE-2017-7642
CVE-2017-5198
CVE-2017-2381
CVE-2017-16777
CVE-2017-13707
CVE-2017-11741
CVE-2017-1000368
CVE-2017-1000367
CVE-2016-7091
CVE-2016-7076
CVE-2016-7032
CVE-2016-3643
CVE-2016-0920
CVE-2016-0905
CVE-2015-8559
CVE-2015-8239
CVE-2015-5692
CVE-2015-5602
CVE-2015-4685
CVE-2014-9680
CVE-2014-4870
CVE-2014-2886
CVE-2014-10070
CVE-2014-0106
CVE-2013-6831
CVE-2013-6433
CVE-2013-4984
CVE-2013-2777
CVE-2013-2776
CVE-2013-1776
CVE-2013-1775
CVE-2013-1068
CVE-2013-1052
CVE-2012-6140
CVE-2012-5536
CVE-2012-3440
CVE-2012-2337
CVE-2012-2053
CVE-2012-0809
CVE-2011-5275
CVE-2011-2473
CVE-2011-2472
CVE-2011-2471
CVE-2011-1760
CVE-2011-0010
CVE-2011-0008
CVE-2010-3856
CVE-2010-3853
CVE-2010-3847
CVE-2010-2956
CVE-2010-2757
CVE-2010-1938
CVE-2010-1646
CVE-2010-1163
CVE-2010-0427
CVE-2010-0426
CVE-2010-0212
CVE-2010-0211
CVE-2009-4648
CVE-2009-1185
CVE-2009-0037
CVE-2009-0034
CVE-2008-3825
CVE-2008-3067
CVE-2008-2516
CVE-2007-4305
CVE-2007-3149
CVE-2007-0475
CVE-2006-1079
CVE-2006-1078
CVE-2006-0576
CVE-2006-0151
CVE-2005-4890
CVE-2005-4158
CVE-2005-3629
CVE-2005-2959
CVE-2005-1993
CVE-2005-1831
CVE-2005-1387
CVE-2005-1119
CVE-2004-1689
CVE-2004-1051
CVE-2002-0184
CVE-2002-0043
CVE-2001-1240
CVE-2001-1169
CVE-2001-0279
CVE-1999-1496
CVE-1999-0958

Back to the OP: is your issue with certbot resolved? If yes, mark it as such.
Also try not to mix different topics in one thread, as it gets very messy to read.
 
Did you ever get certbot to work?
I did actually, and my system was up and running for like a day. But yes, the problem was the inappropriate update from Python 3.6 to 3.7 (meaning the lack of rebuilding the ports). I figured out each of the error messages by looking at the port dependencies on freshports.org and rebuilding the required dependencies, then rebuilding the problem port. Eventually, I was able to rebuild all of the Python modules and the related ports.
I was also able to start the update to the recommended Perl version.
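
For anyone hitting the same thing, the bulk of that rebuild can be done roughly like this, assuming portmaster (port names will differ on your system, and /usr/ports/UPDATING has the authoritative instructions):
Code:
# Rebuild the new Python itself, then every installed port that depends on it
portmaster lang/python37
portmaster -r python37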
 
Did you ever get certbot to work?

Back to the OP: is your issue with certbot resolved? If yes, mark it as such.

I did, I posted it several messages ago:

So, with rebuilding everything (Python 3.7 and Perl 5.32), my EHR is back up and running. My problem was that I updated Python to 3.7 without following the instructions.
iocage-devel does work with Python 3.7.
I just keep getting the unclosed file warning, that is all.

I have also specified that the problem was a badly handled Python 3.6 to 3.7 update, and the title may need changing. It really was not a certbot problem.
I don't know how to mark something as solved. If you point me to where I do it, I will.

Also try not to mix different topics in one thread, as it gets very messy to read.

I try and will continue to try. My brain is not typical. It causes me a lot of problems.
Thanks for all your help.
 