===> Configuring for py37-MarkupSafe-1.1.1
setup.py:1: ResourceWarning: unclosed file <_io.BufferedReader name='setup.py'>
from __future__ import print_function
ResourceWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "setup.py", line 13, in <module>
    from setuptools.command.build_ext import build_ext
  File "/usr/local/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 26, in <module>
    __import__('Cython.Compiler.Main')
  File "/usr/local/lib/python3.7/site-packages/Cython/Compiler/Main.py", line 28, in <module>
    from .Scanning import PyrexScanner, FileSourceDescriptor
SystemError: init function of Scanning returned uninitialized object
*** Error code 1
Stop.
make: stopped in /usr/ports/textproc/py-MarkupSafe
Yes, I agree with you. I can't help it.

I think you are in search of the perfect set-up - with jails, ports, make.conf tweaks, the perfect ZFS set-up, etc. from the get-go.
It feels like you've tried to sprinkle in lots of "best" or "better" practices and ended up with a bit of a monster set-up that you don't quite understand, that is fragile, and that is too hard to update.
No offense taken at all. I actually appreciate the help.

I really don't mean that rudely, so please don't take it that way!
OK, but I will use 1 TB and leave 6 out? :-(

Slow down, try a few things on the spare server. Learn what works and what doesn't, and document those things. Take the time to learn. Don't worry about getting ZFS set up "properly" on the test server; just take the defaults. You will probably make some mistakes and end up having to scrub the server and start from scratch again. But that's great - you'll learn what can go wrong, how it manifests itself, and get used to fixing things and rolling back.
Keep it simple; only make it as complicated as it needs to be. No one can tell you what you need or what is best for you and your set-up - that's what you have to find out.
Start simple - take the default options for the OS install, and use binary packages (pkg) for everything. Try to get basic copies of your website(s) etc. working on the spare server. Do you need jails? Do you need to use ports? Or can you keep it very simple to start with?
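For example, the MarkupSafe port that failed to build above should be available as a prebuilt binary package; the exact package name below is an assumption based on the port name, so verify it first:

```
# Verify the package name first (assumed here from the port name)
pkg search markupsafe

# Then install the binary package instead of building textproc/py-MarkupSafe
pkg install py37-markupsafe
```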
Then you can use that experiment to fix your main server, but keep the spare server going, and test all your upgrades on that machine - giving you the confidence that it should work on the main machine when you want to upgrade that.
lol, yes I think too much. Also, I need access to my charts. But yes, I will take all the defaults. It's just so much work. It takes me hours. I have to research (practically) everything, but yes, I will do it.

Oh, look, it's getting complicated again.
I'd really suggest you try installing FreeBSD on the spare machine, take all the defaults (if that uses one drive or 15, who cares - this is just a test install), install packages, copy across your site(s) and see if you can get things working on there - not 100%, but enough to prove you can set up a working configuration. Leave it a few days, then try freebsd-update and pkg update/upgrade and get confidence that it all works.
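That routine can be sketched as a small script. This is illustrative, not a tested procedure - the tools are standard FreeBSD ones, but run it on the test box only. It defaults to a dry run that just prints the commands; set DRY_RUN=no and run as root to actually apply them.

```shell
#!/bin/sh
# Periodic update routine for the test box (sketch).
# Defaults to a dry run; set DRY_RUN=no to actually execute.
: "${DRY_RUN:=yes}"

run() {
    if [ "$DRY_RUN" = "yes" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run freebsd-update fetch install   # patch the base system
run pkg update                     # refresh the package catalogue
run pkg upgrade -y                 # upgrade installed packages
```

If an upgrade breaks something here, you find out on the machine that doesn't matter.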
As part of this process you'll find out if packages work for you or if you really need to use ports.
Then scrub the machine, use ZFS options, make jails, etc. Go to town - at least you've got a baseline you can retreat to if you get lost.
If you break anything, clear it down, try again.
It doesn't even have to be real hardware - if you've got a Windows, Mac, Linux, or FreeBSD desktop machine, install VirtualBox and play on there. Do an install in the VM, freebsd-update it, and copy the VM file (for a faster restore to a good starting state).
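If you go the VirtualBox route, snapshots are even quicker than copying the VM file. A rough sketch (the VM name "freebsd-test" is made up):

```
# Take a snapshot of the clean install (VM name is hypothetical)
VBoxManage snapshot "freebsd-test" take "clean-install"

# Later, after breaking something, roll back to it:
VBoxManage controlvm "freebsd-test" poweroff
VBoxManage snapshot "freebsd-test" restore "clean-install"
VBoxManage startvm "freebsd-test"
```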
You are still trying to run before you can walk, and tripping up over your shoe laces (or is that mixed metaphors?!)
What is your availability requirement? 5 minutes of downtime a year, 5 hours a year, or does it not matter? ZFS isn't replicated storage like in an enterprise, where 2 TB of SAN will come to about $10,000 and you can have a remote copy on another server. ZFS is intended to live on an individual server.
Yes, I have two boxes with 6 hot-swappable drive bays. I have 100 1 TB drives for those bays. I've been planning for this for years. Since 2008. These boxes are in addition to the box which has the broken system. The latter has 8 small form factor hot-swappable HDD bays.

There is an option to use ZFS+HAST+CARP, but it involves diverging from the KISS principle. There is also FreeNAS. I'd be partial to SmartOS for NAS since it boots from USB and all of the disk is left for storage.
What would be optimal is to have a primary server and a second server used as the failover. If something happens to the primary, your secondary becomes the active server. A third server would be your development server; it would, or should, be as nearly identical to prod as possible. Then, if you need to upgrade FreeBSD and/or packages, your development server is the guinea pig. If something goes wrong, you fix it there and take notes so you're aware for prod. When all updates are done on development, you then do the secondary server, which validates your development steps. Then, when you move to production, there should be no surprises.
nginx can be set up for high availability in either an active-passive or active-active setup. I don't know anything about openmr and how it stores data. GlusterFS is a distributed filesystem that may be an option to store the openmr data. I'm not sure if FreeBSD has AoE (ATA over Ethernet), which could be a low-cost SAN, or maybe use iSCSI. Brantley Coile, the founder of Coraid, is the inventor of AoE; he also developed the PIX firewall (on Plan 9). AoE has to be inside your LAN and is not routable outside of it.
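For the active-passive side on FreeBSD, CARP is the usual way to give the pair a shared virtual IP that fails over automatically. A rough /etc/rc.conf sketch - the interface name, vhid, password, and address are all placeholders, and the carp module needs to be loaded:

```
# /etc/rc.conf on the primary (lower advskew = preferred master)
ifconfig_em0_alias0="inet vhid 1 advskew 0 pass examplepass alias 192.0.2.10/32"

# /etc/rc.conf on the standby
ifconfig_em0_alias0="inet vhid 1 advskew 100 pass examplepass alias 192.0.2.10/32"
```

nginx on both boxes then listens on the shared address, and the standby takes over the IP if the master stops advertising.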
At work we have systems tiered. Tier 0 are critical. Tier 4 are heh, okay (when we get it back up). We no longer use any HA clustering software like Veritas Cluster Server or Sun Cluster, but it is on the application teams to design HA into their applications. If Hadoop has an NVMe drive failure then it is replaced and data synced from other servers in the cluster. Another highly critical app has servers over two data centers and dozens of apps on hundreds of servers which are in their own clusters of usually 8 servers. We have over 30,000 servers in 4 data centers and other remote locations (not Facebook or Google or an Internet-related industry).
It all comes down to "how reliable does it need to be?"
US healthcare. It needs to be reliable.
OMG, it just keeps getting worse. I am working on updating the dependencies (the ones that will update) and working with packages. I received specific instructions to follow but couldn't read them - my screen was not bright enough, and I stupidly rebooted my laptop to move to another location. Is there a pkg log?
You can add to the ALIAS section of /usr/local/etc/pkg.conf, read through all the messages using

Code:
message: "query '[%C/%n] %M'",

and apply the requested settings...

Code:
pkg message | less
All you need is an old laptop that can run VirtualBox. None of the learning needs to be on a real server machine (until you want to learn about real RAID and real performance etc., but that is a way down the trail!)

Until recently I only had one server.
I skimmed the last 3 pages but can't see that message - I have probably missed it.

I just keep getting the unclosed file warning, that is all.