System upgrade, bhyve and the clueless looking for a clue

Hello all,

It's that time where I've decided I need to do a hardware upgrade on my FreeBSD box. Its assorted chores are fairly low-key in that it's the household file server (ZFS RAID-10ish sort of thing, 4+1 3½" SATA drives) as well as email (Postfix & Cyrus), web cache/proxy, print server, local webserver and assorted other gubbins. Usual sort of stuff.

I have also waffled: a lot. tl;dr version is at the bottom!

What I'm using currently:
The hardware is a bit long in the tooth, and actually was when I got it, as I was upgrading its previous incarnation on a budget: so a mid-range Core 2, 8 GB RAM (non-ECC! :eek: ) blah etc.

For assorted reasons I've decided to update it: partly because it needs better memory, and partly because I'm trying to keep its packages up to date and like compiling my own stuff (yeah I know... even I'm wondering why I'm making my life difficult: I have at least bitten the bullet and moved from -CURRENT to -STABLE, where I'm much happier!) but I'm impatient.

Bhyve: "Do you think we should?" "No. Let's do it."
The slightly random ingredient is bhyve: that caught my attention and I found myself thinking, "wouldn't it be cool if my slightly noisy and also underpowered Linux desktop was hosted as a guest on the fancy new computer?" Which may be right up there with my other bad ideas like "wouldn't sticking with -CURRENT be awesome?" which I'm finally disabused of, but I think it's worth pursuing. I can't test out that theory because the Core 2 is too much of an antique to support bhyve.

But if that isn't a completely terrible idea (and Linux is a better option for my desktop as, from albeit now quite distant experience, there can be compatibility issues) obviously I need to make it work. And do so sensibly, and I do not always do sensible. First off is that I'm looking at a Ryzen 2-based system. "How many cores?!" was one of my initial reactions, but this'd be a nice way of using some of them. I also predict it would need in the region of 32GB (of ECC!!!) to give it a bit of headroom: currently the FBSD box has 8GB, as mentioned, which is "about enough", and the Linux box has 12GB, which isn't enough, so the absolute minimum would be 24GB. May as well round it up.

Possibly making life difficult for myself as I have a poor grasp of how bhyve works, I figured I could continue to use my KVM arrangement and possibly dedicate a USB port to the Linux VM, and also a DVI port on a dual-head card. But I don't know if that's at all feasible, or if it's even remotely the best answer. Something tells me I shouldn't be running an X11 server on the FBSD box though: I want easy access to the console if something goes wrong, and an unresponsive X11 server combined with a LAN problem would render me unable to communicate with it, which is why I like the KVM idea, as much as it's a PITA in some respects.

And the last thing, and something which also dictates more modern (and supported!) hardware as well as CPU cycles, is that I use external USB-connected SATA drives with geli-encrypted ZFS to do my backups. Something a bit quicker than the pedestrian 30-40MB/s I get at present would be nice. A faster USB-3 PCIe card did give somewhat better transfer speeds (before it promptly died), but I wonder if they were constrained more by the CPU horsepower than by the HDD speed. Probably a close-run thing.
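For context, the backup ritual amounts to roughly this; the disk, key and pool names are illustrative placeholders rather than my actual ones, so treat it as a sketch:
Code:
# attach the geli-encrypted backup disk and import the pool that lives on it
geli attach -k /root/backup.key /dev/da5
zpool import backup

# replicate a snapshot of the main pool onto it
zfs snapshot -r tank@weekly
zfs send -R tank@weekly | zfs receive -Fdu backup

# and put it away again
zpool export backup
geli detach da5.eli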

"'Blah blah blah-de-blah blah': get on with it, Vom!"
Er, yeah. So, hardware: aaaages back I was definitely on the Opteron bandwagon, but since the Core 2 and especially since the "i" series processors, I've been much more fangirly about Intel stuff. They've certainly enjoyed a per-core performance advantage for the longest time, though I'm also thinking that the price tag matches it. AMD have been (and may still be) lagging in that regard but have gone down the "but we have more cores!" route. That's usually left me feeling sceptical of marketing gimmickry, thinking back to Ye Olden Dayes when it was really hard to find enough work to keep all the cores of an MVME188 brick busy (er, that'll be a prehistoric Motorola M88K quad-cpu contrivance, briefly in vogue as a RISC replacement for their actually still viable MVME1[467]x M68K-based boards: "everybody else is doing RISC, so we should too!") Plus the corporate mainframe only had three processors so obviously that was enough for anybody.

And while 57 cores might be slight overkill for a "someone just wake me up...?" type server, running VMs may at least give them something to do. Yeah, I know I don't need that many cores to run VMs (the aforementioned three-legged IBM 3090 running VM/ESA was certainly more than happy to run way more subsystems than it had CPUs, obvs.), but it'd be a use for them.

"Yes, yes, this ancient history is all very interesting, but please, get to the bloody point!"
Ahem. Okay. Ryzen 2 processor with "some" cores. Prefer an ASUS board. Definitely want on-board USB-3. PCIe slots: one for the current SATA adapter that the ZFS drives plug into, another for a dual-head low/mid-spec GPU of some sort. 32GB of ECC RAM, unless I don't run bhyve VMs, in which case half should do. Suggestions of good, compatible hardware for FBSD 12-STABLE?

By-the-by, probably not relevant, but just for completeness: another thing which instigated this is that the local power supply, whilst mostly there, is not always 100% reliable. I would at least like enough juice to allow a graceful shutdown, as well as being able to surf through brown-outs without incident, which I suspect may become more common in future as I live in the UK, which hasn't had a coherent energy policy since the 1980s. The intent is to rack-mount the new server (so a new case, probably a fairly ordinary 4U server case), and the games PC may also join it, along with as much other gubbins as I can get off my desk (hi-fi amp, networking gribblies, maybe the smallish laser printer too).

Enough! tl;dr version, please.
Vom is clueless and wants to update her FBSD server box. Something modern but not exotic.
  • Multi-core CPU supporting bhyve (AMD Ryzen-2, maybe?)
  • Motherboard: likes ASUS, doesn't like Gigabyte or MSI. Pref. with USB 3.
  • Memory: 32GB ECC is probably sensible.
  • Some means of separating VGA console from X11 server, possibly via KVM
  • Preferably under £500 in total, if possible (not including rack, UPS etc).
 
Something modern but not exotic.
Have you thought about getting a second-hand server and cannibalizing it? There's plenty of server hardware that's being written off and can be bought cheaply. For a home situation those servers can still be extremely useful.

About a year ago I was given a dual Xeon E5620 (2.40GHz) with 48GB of ECC memory, an LSI 8-port SAS/SATA controller and a couple of 600GB 10,000RPM 2.5" disks. It was an old "pizzabox" server that was decommissioned. Those server pizzaboxes are extremely noisy, so not good for my home. But I bought a refurbished mainboard for 150,- euro that fitted in a "standard" ATX case (which I already had). Transplanted the CPUs, memory and controller. Bought another set of 48GB ECC memory (exactly the same type), also for about 150,- euro. So for around 300,- euro I now have a really nice box to play with:
Code:
CPU: Intel(R) Xeon(R) CPU           E5620  @ 2.40GHz (2400.05-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x206c2  Family=0x6  Model=0x2c  Stepping=2
  Features=0xbfebfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,DTS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE>
  Features2=0x29ee3ff<SSE3,PCLMULQDQ,DTES64,MON,DS_CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,POPCNT,AESNI>
  AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM>
  AMD Features2=0x1<LAHF>
  VT-x: PAT,HLT,MTF,PAUSE,EPT,UG,VPID
  TSC: P-state invariant, performance statistics
real memory  = 103079215104 (98304 MB)
avail memory = 100380839936 (95730 MB)

And I'm using it to run a bunch of VMs on them for experimentation:
Code:
root@hosaka:~ # vm list
NAME            DATASTORE  LOADER     CPU  MEMORY  VNC           AUTOSTART  STATE
case            default    bhyveload  4    4096M   -             Yes [3]    Running (2559)
jenkins         default    bhyveload  4    4096M   -             Yes [5]    Running (20751)
kdc             default    uefi       2    2048M   0.0.0.0:5901  Yes [2]    Running (2211)
gitlab          stor10k    bhyveload  4    6144M   -             Yes [9]    Running (4461)
gitlab-runner   stor10k    bhyveload  4    4096M   -             Yes [10]   Running (4481)
kibana          stor10k    bhyveload  4    6144M   -             Yes [1]    Running (94958)
lady3jane       stor10k    uefi       4    4096M   -             No         Stopped
plex            stor10k    bhyveload  4    4096M   -             Yes [6]    Running (3892)
sdgame01        stor10k    grub       2    4096M   -             No         Stopped
tessierashpool  stor10k    bhyveload  4    32768M  -             Yes [4]    Running (3081)
wintermute      stor10k    bhyveload  4    4096M   -             Yes [8]    Running (4441)
 
I like what @SirDice gets at. Maybe you could bump it up one generation to Socket LGA2011.
I like the underdog in AMD, but server parts cost a good bit so I need to be sure they work.
Hence my penchant for Intel Xeons.
I think LGA2011 V1 is still viable. It is Sandy Bridge. Many of the same motherboards will also take V2 Ivy Bridge chips.
I just completed a new open-box Tyan S7052 build. I mounted dual E5-2650L V2s, which offer 20 cores at around 60W each.
With Cooljag 2U coolers and DDR3 ECC-Registered RAM being so cheap, I bet I don't have much in it. Some used parts.
Mobo=$100, 2 CPUs=$60, 64GB RAM=$70.
I added some SAS2 controllers for $30 and you have a heck of a start.
This is for a secondary 2U Chenbro 24bay storage box I bought for $60.

So as you can see if you can assemble parts you can do this cheaply.
Who cares if my chassis has stickers all over it and one rack ear is missing its LEDs. I can live with that.

If you want something newer it really gets tricky.
Xeon 2011-V3/V4 boards still carry a hefty price tag, even used. Almost to the point where you would be better off buying the newest Xeon LGA3647 for only a few hundred more.
For example: I built out a Supermicro X10SRL with 2650L V3s. The CPUs cost me $250 each, DDR4 RAM is sky-high and the mobo was $250. So I have maybe $1K into that build with some used parts.
Question is: is it much faster than the Ivy Bridge 2650L V2? The answer is no.
I just needed to start on LGA2011 somewhere as I skipped LGA1366.

So you need to decide if you want single CPU or Dual. There are advantages to single if you want something smaller.
Many dual socketed boards are the larger EATX/SSI standard.

LGA2011 may be overkill for your needs.
FreeBSD/SMP: Multiprocessor System Detected: 48 CPUs
 
surf through brown-outs without incident
A UPS is best. A generator with an automatic transfer switch or a solar array with batteries would work too.
Sizing a UPS for your needs is important.
I use APC SU1400s and they are underpowered for my needs, so I had to add external batteries.
So figure your total power needed: not just the computer, but the monitor, modem and router too.
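As a rough worked example (the wattages are invented; check the labels on your own kit):
Code:
# server ~300 W + monitor ~30 W + modem/router ~20 W = ~350 W of load
# divide by a typical 0.7 power factor to get the VA rating to look for
echo $(( (300 + 30 + 20) * 10 / 7 )) VA    # -> 500 VA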
 
Something tells me I shouldn't be running an X11 server on the FBSD box
I have not shied away. I like doing headless Xorg with X forwarding, passing xfe through for a graphical file manager.
In fact I just installed Virt-Manager/Xorg on my bhyve hypervisor. The only problem I have with Virt-Manager is that it does not detect bhyve VMs started from the command prompt. But it can make bhyve VMs from the GUI, which is whizbang.
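For anyone unfamiliar with the terminology, that's just stock SSH X forwarding; something along these lines, where the user and host names are placeholders:
Code:
# on the server, in /etc/ssh/sshd_config:
X11Forwarding yes

# from the client: open a forwarded session and run the app on the server,
# displaying locally
ssh -X admin@hypervisor xfe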

I am just getting started with bhyve but I have the need. My first standup VM is going to be a Poudriere package builder.
Still doing dumb newbie things to see what is possible.

First I tried FreeBSD virtualization using Xen DOM0 but my VM's disk throughput was horrendous compared to bhyve.
Currently I am using a single M.2 NVMe Toshiba XG3 and I see 950MB/s on bare metal and 900MB/s in a VM.
On Xen it was <100MB/s and you could feel it. Xen testing is where I discovered the GUI virt-manager.

Bhyve manual startup is confusing but I am getting the hang of it. I am still using legacy bhyveload.
Soon I plan on installing the needed ports for the bhyve UEFI ROMs. Still trying to get the hang of it all.
I went with PCI passthrough of NICs and that has made it harder to figure out, but that is what I wanted.
Planning many pass-through connections for my VMs' NICs, with extra host interfaces.
I just bought a 48-port switch so I have plenty of empty holes to fill.
By the time I get it all wired I will probably figure out virtual switches... VALE, anyone?
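In case it helps anyone trying the same, the passthrough side boils down to something like this; the 2/0/0 selector is only an example, pciconf -lv shows the real bus/slot/function of the NIC:
Code:
# /boot/loader.conf -- load vmm and reserve the NIC for guests at boot
vmm_load="YES"
pptdevs="2/0/0"

# then hand it to a guest on the bhyve command line, e.g.
#   -s 6:0,passthru,2/0/0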
 
Just a little operational clue here:
We have great docs for bhyve. They are too obtuse for the really dense like me.

You really have to run bhyveload and bhyve together in the beginning.
The docs all mention that bhyveload has to be used, but the examples don't really show it.
Nor does the handbook; it uses a script from /usr/share/examples/bhyve and depends on that.
https://www.freebsd.org/doc/handbook/virtualization-host-bhyve.html
They should really show the complete manual method before jumping to UEFI booting.

I found this document the most helpful. It made me realize how you must run bhyveload and bhyve together.
http://robotdisco.github.io/2017/12/30/running_a_bhyve_vm_without_a_helper_wrapper.html
Be advised that lines 6 and 11 of his example need your VM name at the end of the command,
which is 'pfsense' in the example.
That's what the handbook shows: bhyveload(8) or it will manually add one named vmname.
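To make the pairing explicit, a minimal manual run looks something like the following; the image path, tap interface and the name 'guest0' are all placeholders:
Code:
# stage 1: bhyveload reads the FreeBSD kernel out of the guest's disk image
bhyveload -m 2G -d /vm/guest0/disk.img guest0

# stage 2: bhyve runs that same VM; the memory size and trailing name must match
bhyve -c 2 -m 2G -H -A -P \
    -s 0:0,hostbridge -s 1:0,lpc \
    -s 2:0,virtio-net,tap0 \
    -s 3:0,virtio-blk,/vm/guest0/disk.img \
    -l com1,stdio guest0

# when the guest shuts down, release its resources
bhyvectl --destroy --vm=guest0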
 
I just switched to ahci-hd from virtio-blk for my NVMe drive and now I am getting darn near bare metal speeds:
Transfer rates:
outside: 102400 kbytes in 0.120093 sec = 852673 kbytes/sec
middle: 102400 kbytes in 0.105646 sec = 969275 kbytes/sec
inside: 102400 kbytes in 0.108429 sec = 944397 kbytes/sec
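For context, the only thing that changed was the disk emulation string on the bhyve command line (the zvol path below is a placeholder); the numbers above are the kind of thing diskinfo -t reports from inside the guest:
Code:
# before: paravirtualised block device
#   -s 3:0,virtio-blk,/dev/zvol/zroot/vm/guest0/disk0
# after: emulated AHCI disk, which turned out faster here
#   -s 3:0,ahci-hd,/dev/zvol/zroot/vm/guest0/disk0

# measured inside the guest (the disk appears as ada0 under ahci-hd) with:
diskinfo -t /dev/ada0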
 
Thanks for the replies! I must admit I hadn't thought about looking at the used server end of things and wouldn't even know where to look. Normally I shy away from used stuff as it's often been run to death (and sometimes full of dust...) but it's a possibility. I think the main risk even with solid-state stuff is that I am The Nemesis Of Cooling Fans so they will probably die on me!

Intel Xeon is something I also haven't thought about, and while my knowledge of gaming-spec stuff has slipped in recent years, Xeons are something I know absolutely nothing about at all. I've occasionally seen the name crop up with the connotation "this is expensive". But of course a used server would likely offset that. I'd need to look into it; I don't need enormous top-of-the-line performance when there's just two of us using it, just something a bit more, erm, "contemporary". I shall look into the suggestions for the other bits: both Tyan and Supermicro are old favourites from previous incarnations of my server.

The stuff about bhyve is also interesting, though it still reads slightly as if it's written in Martian, as my knowledge is scant without having had a chance to try it out. But it contains important stuff like IO throughput being way better! Again, I'm not sure how that would normally be set up. As mentioned, the server runs ZFS, which I've heard is good, but I'm not sure if the client would get at it by mapping directly to ZFS-managed, er, thingies, whatever they're called (which seems sensible) or through NFS (which doesn't). Both have pros and cons. My home stuff is hosted on the server so I imagine that NFS is The Only Way, as I haven't heard whether ZFS handles any sort of "dual porting", but that sounds like a bad idea anyway. Unless there's another way of getting the client to directly access a directory on the host's filesystem.
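For what it's worth, the "thingies" seem to be ZFS volumes (zvols), and the usual pattern appears to be handing one to the guest as its virtual disk rather than going through NFS; roughly, with invented dataset names:
Code:
# on the host: carve out a volume for the guest to use as its disk
zfs create -V 40G zroot/vm/desktop/disk0

# then point the guest's disk emulation at the corresponding device node,
# e.g. on the bhyve command line:
#   -s 3:0,virtio-blk,/dev/zvol/zroot/vm/desktop/disk0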

The Xorg thing also contains some terminology I'm unfamiliar with so I'll need to read about that before commenting further!
 
Tyan does not seem as prominent as they once were. Supermicro definitely has the lead.
Tyan was predominantly an AMD builder and Supermicro an Intel outfit back in the day.
I just happened to stumble on that Tyan eBay deal. I also made a lowball offer.
Unfortunately when I installed Ivy Bridge CPUs it would not kick into the PCIe 3 bus. All I can pull is PCIe 2.
So that is probably why it got returned (and ended up on eBay).
The good boards (SM X9 series) use the PCIe 3 bus with the appropriate Ivy Bridge CPU. Sandy Bridge CPUs were only PCIe 2.

Regarding cooling fans, that is where it gets rough. The darn BIOS runs them full blast.
The server boards don't have the options for much fan control, even with PWM fans. It's a shame; it's all set in the BMC.
With 60W processors you really don't need full-blast airflow. I buy the LV chips so I don't have to.
 
SirDice, I tried vm list but get nothing. Are you running a front end?
Code:
root@VIRT:/vm/freebsd # ls /dev/vmm
freebsd2
root@VIRT:/vm/freebsd # vmm list
vmm: Command not found.
root@VIRT:/vm/freebsd # vm list
vm: Command not found.
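(The vm list output earlier in the thread looks like the vm(8) front end from the sysutils/vm-bhyve port rather than anything in the base system; if so, a minimal setup would be roughly:)
Code:
pkg install vm-bhyve
sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/vm"   # or a plain directory path
zfs create zroot/vm
vm init
vm list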
 
The server is now taking shape. Some advice was followed, some was not followed but all was considered!

For a number of reasons, in the end I went for a current AMD-based solution, which predictably ended up costing more than I'd intended, as these things always do! So I've settled on a Ryzen 2700 and an Asus Prime X470 Pro motherboard: the latter was chosen on the basis that I generally trust Asus as a MB maker and that particular motherboard has better voltage regulation than their supposedly "ruggedised" model. The CPU was recommended by a friend who's very much into lower power consumption hardware, and so far my experience is that it has excellent performance given its rather frugal 65W maximum power consumption. I've also turned on hyperthreading, something I usually don't do on the assumption that the OS knows more about how it's scheduling stuff than the CPU's hardware does, but having read about it, as long as the OS is aware of the arrangement of cores (virtual or otherwise), which FreeBSD is, it tends to improve performance a fair bit. I'm actually very impressed at how whizzy it is when running parallel builds and stuff, and the performance is nothing short of astonishing compared to the rather entry-level Core 2 it replaces.

Memory I agonised about for a long time. I know I should use ECC and I would have had ECC with a second-hand Xeon based motherboard but I had my reasons for going down this route. ECC RAM may work on this setup but it seems that every experiment I'd read about was inconclusive, and a 50% mark up for slower memory is a lot to pay for memory which may offer no advantage. So I know I'm a bad sysadmin, even though this is just our home machine, but I've gone for 32GB of high performance non-ECC. It's Crucial Ballistix and isn't OCed so it should be reliable. I've read some people saying "it's not necessary with ZFS because checksums!" which I think is not-very-subtly missing the point of the potential dangers though it does ameliorate the risks a little.

"Graphics don't belong in a server" etc, but unless I'm going to dig the vintage VT320 out of the garage the thing needs some means of talking to me. Experiments with an old ATI 6970 were not encouraging and the system was very unstable; as much as I'm usually an ATI girl for gaming stuff the recommendations for Unix seem to be almost universally to use Nvidia, and something that didn't come out of the early industrial era, so I got myself a GT1030. I know some gamers would say "lol" but I didn't get it to play the latest and greatest on, I bought it so I have something that can do desktop graphics and which is stable. And in this case that has low enough power consumption that it can use passive cooling. I'm very pleased with it: performance is likely as much as I'm ever going to need for its intended purpose and it's absolutely rock steady.

Storage: it's still undergoing testing and migration so I've hooked up a couple of old 400GB Hitachi Deskstars; the main storage is almost ready, being a mix of WD Red and Blue 4TB units that I'll assemble as a RAID-10 type array using ZFS. The choice may be a bit curious but again I'm trying not to blow the budget on this. Each mirror will contain one Red and one Blue, partly for cost, but also partly to avoid using exactly the same type and batch of HDD for both halves of a mirror, in case they share a common fault and both die at once. The original idea was a modest increase using 3TB drives as I already have one of them: I'll use that as the hot spare. "But it's not big enough!" The idea is still to use 3TB ZFS partitions, but I realised it was a false economy buying 3TB drives as there's only about a 10% price difference, and sometimes they're more expensive. If I want more then I'll grow the partitions and buy a new hot spare.
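For the record, the intended layout is roughly the following; the device names are placeholders, and in practice each disk would carry a 3TB partition rather than being handed to the pool whole:
Code:
# two mirrored pairs striped together ("RAID-10ish"), plus the old 3TB as a hot spare
zpool create tank \
    mirror da0p1 da1p1 \
    mirror da2p1 da3p1 \
    spare da4p1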

At present my HDD throughput is lamentable. I have absolutely no idea why. I did notice a significant downturn when I moved from 11.x to 12.0-STABLE, so I don't know if a glitch or a different algorithm has crept in, or if 11.x was doing something it shouldn't, like write caching. But this is much worse, with the HDDs seldom achieving much more than 10% of their throughput, chugging along at just 10-15 MB/s. I've read about other people having the same problem and, other than a chorus of "that really sucks lol", apparently no workable answers.

Well okay, my workable answer was always going to be to use an LSI-based SAS card in JBOD mode; I was going to just swipe the SAS1068E out of the existing server, but I want to run them in parallel until I'm completely happy, so I've ordered a 9211-8i which apparently should work well. If it doesn't I'll just have to use the rubbishy onboard SATA and retrieve the SAS1068E from the current server when I finally power it off. Whichever one I end up using, the games machine can have the other as it has lots of drives to manage. They also need to be rationalised, but trying to move system partitions on Windows always seems to be harder than it should be. But I digress in what is already a very long reply.

Anyway, hopefully the performance will be okay. The only potential hiccup right now is that when registering the new drives on WD's site, one of the Blues comes back as "no limited warranty", meaning that I've been sold an OEM drive. Apparently this happens often with the likes of Amazon and is usually sorted out by raising a ticket, though I am a little wary of putting data on a drive that may need to go back. I really need to look into proper drive-erasing solutions anyway, so I should get on with that.

My configuration and testing has brought up quite a number of issues, some resolved and some not.

Currently the main irritation is actually a minor problem, but one that's visually annoying: the mouse pointer tends to skip and jump quite a lot, particularly when hovering or moving in certain applications such as Firefox and Gimp. I haven't identified the cause, though I've always been of the impression that Linux is better at real-time desktop performance whereas FreeBSD is better at the heavy lifting, so it may just be a result of the respective design philosophies. I would really like to sort it out though. So far my assumptions are either FBSD regarding "trivial" IO as a lower priority, or that lamentable drive speed delaying e.g. the loading of different pointer shapes or something. It doesn't lose tracking as far as I can tell; it just frequently pauses and then jumps to the new location. But it's extremely annoying to use.

Bhyve has been a learning experience. I think my plan to use it to take over my desktop work was perhaps optimistic: the performance of X-Windows is surprisingly bad even on the same machine, and try as I might, I can't get GLX to work over an X11 connection. The debug messages suggest it should, but no deal, and it's one of those things that could end up taking a very significant effort to debug with no guarantees. VNC is little better, not helped by my so far not being able to find a good client that works on FBSD; but I just sort of object to using VNC on principle anyway. Fortunately, other than the skipping mouse, FBSD works well enough as a desktop.

My other bhyve application was to keep a virtualised copy of the old server running so I can get rid of the actual hardware. That was also a struggle: I wanted to use ZFS but found out the hard way that ZFS doesn't seem to like being virtualised inside another ZFS: my idea to use "a stripe set of one" to get all the ZFS features, but with the robustness handled by the host, just resulted in the guest's ZFS repeatedly becoming unresponsive. The next attempt was to use NFS, but it was excessively fiddly and the performance is terrible. I've always disliked NFS for that reason and have never been able to get it to perform as well as it should. As an aside, I'm not sure why FBSD clients on FBSD servers tend to leave stale .nfs cache files strewn around absolutely everywhere. But having been reminded of disliking it so much, I'm actually seriously wondering if Samba wouldn't be a better solution to drive sharing: even between Unix-based clients and servers it seems faster and less troublesome. But again, I digress: I've settled on UFS on a sparse volume, which works well enough. For my own entertainment, I may experiment with virtualised but non-nested ZFS using the temporary HDDs, by making them available to the guest, to see how that performs and if it works any better.
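Concretely, the sparse-volume arrangement is roughly this, with invented names and sizes:
Code:
# -s makes the zvol sparse, so it only consumes space as the guest writes to it
zfs create -s -V 500G zroot/vm/oldserver/disk0

# inside the guest the volume shows up as an ordinary disk (e.g. vtbd1 via
# virtio-blk) and gets a plain UFS filesystem:
#   newfs -U /dev/vtbd1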

Printing: argh. For a while, Ethernet-based printing got rid of a lot of the chaos, but it seems that the complexity of the drivers has returned us to the bad old days. I like my Brother MFC-9140CDN, but Brother likes closed-source proprietary drivers, which aren't available for FBSD. Fortunately they report any old PCL or PS driver will do the trick: and indeed it does. PCL works well enough: a little slower and less precise, but it's okay.

The big problem without the Brother software is scanning. I'd assumed Sane would do the job but apparently it never worked well with Avahi so support for the latter was stripped out: no networked scanners unless they're handled by another Sane server. Argh. Then I remembered that the MFC has its own built-in administration stuff: lo and behold, it can scan stuff and ftp it to another computer. A rather crude solution and one where I had to do a work-around using an unprivileged account but actually it's a better option for scanning as I don't have to come back upstairs every time I want to do a new page (the feeder is pretty unreliable). So that actually worked out better.

Pretty much everything has been like this: it's been an intense couple of weeks. Mainly because I changed from my original plan to simply plop the drive array into the new hardware and pick up where I left off, and eventually decided it would be better to start from scratch to clear out all the cruft and misconfiguration.

Oh, and one of the main reasons was for those nice USB-3 ports for my backups: they work and are very fast indeed. Yay.

tl;dr: new server mostly works except for the jumpy mouse pointer. Wish I could fix it.
 
Also a word of warning: probably not news but it turns out the suspicious-looking WD Blue HDD is indeed dodgy and had no business finding its way into the retail supply chain. As it was bought direct from Amazon (I mean actually sold by them, not FBA) I've asked them what's going on and told them it needs replacing urgently as it has no warranty: it's all very well to say "you have to use the manufacturer's warranty" except when the manufacturer says "this drive should've never been for sale, it has no warranty."

Though I'm also not very impressed with Western Digital's "not our problem. Sucks to be you lol" attitude. Given my less than stellar experience with Seagate that does reduce my options significantly as there isn't really enough competition in the HDD market.
 