Hey folks,
I've stumbled upon a strange issue on my server concerning a discrepancy in 'used' space between two ZFS volumes after a ZFS send/receive operation. I transferred data from basia-ssd/vm-118-disk-0 to basia-nvme/vm-118-disk-0. Initially, the source volume basia-ssd/vm-118-disk-0...
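A minimal way to compare the two sides, sketched on the assumption that both volumes are zvols and that compression or block size differs between the pools: pull the parseable numbers from each and look at those properties first, since they are the usual reason 'used' diverges after a send/receive.

# exact (parseable) space and compression figures for the source zvol
zfs get -p used,referenced,logicalused,compressratio,volblocksize basia-ssd/vm-118-disk-0
# the same figures for the received copy
zfs get -p used,referenced,logicalused,compressratio,volblocksize basia-nvme/vm-118-disk-0

If logicalused matches on both sides while used does not, the data transferred intact and the difference comes from how each pool stores it (compression, volblocksize, raidz overhead, refreservation).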
I initially had FreeBSD 13 on a 512 GB NVMe SSD (ZFS root and GELI encryption configured via the installer). Later, I bought two 8 TB hard disk drives. I reinstalled FreeBSD 13 but this time on the two 8 TB HDDs instead of the NVMe SSD. In the installer, I chose ZFS root on the two HDDs (2-disk...
Hello
FreeBSD 12.2
There were two pools: zroot on the ada0 vdev and zdata on the da0 vdev.
da0 is a hardware RAID.
Nobody changed anything in the ZFS pool configuration, and after an ordinary reboot of the server I got this view.
I don't know how the 13003137893086410532 pool appeared with the same name.
I can boot...
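A hedged sketch of how one might look at this, assuming the duplicate entry is a stale label rather than real data loss: list everything the system considers importable, then, if the extra copy needs to be examined, import it by its numeric ID under a temporary name so it does not clash with the existing pool.

# show every pool visible to the system, with its name, numeric id and state
zpool import
# import one specific copy by id under a temporary name to avoid the name clash
zpool import 13003137893086410532 zdata-old

The temporary name zdata-old is a placeholder; which of the two entries is the live pool should be confirmed (for example from the device names in the zpool import output) before importing anything.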
Hello,
I had the following partition scheme:
ada0:
ada0p1 freebsd-boot
ada0p2 freebsd-swap
ada0p3 freebsd-zfs
ada0p3 had a size of ~1TB, and after I resized this partition to 500GB using gpart, the boot loader cannot find my ZFS pool. What should I do to fix my boot loader and my ZFS pool...
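A cautious sketch, assuming nothing has been written past the new 500GB boundary since the resize: ZFS keeps two of its four labels at the end of the partition, so shrinking ada0p3 makes the pool look broken to the loader, and growing the partition back is the first thing to try.

# grow partition index 3 back over the free space the shrink left behind
gpart resize -i 3 ada0
# rewrite the protective MBR and the ZFS-aware GPT boot code into ada0p1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
# see whether the pool is visible again
zpool import

If the pool imports cleanly after that, a smaller partition can be arranged later by recreating the pool (for example via send/receive) rather than shrinking the partition underneath it.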
Can I create a RAID-Z1 pool laid out for five vdevs (each a physical disk) with only four of those disks attached, marking the fifth as missing, and then later add that disk so the pool goes online instead of staying degraded?
I'm trying to fit 9 disks into 8 slots, and without a hammer...
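One commonly described approach, sketched under the assumption that the four real disks are da0 through da3, the fifth will be da4 later, and the pool is called tank: stand in for the missing disk with a sparse file, offline it right away, and replace it with the real disk when a slot frees up.

# create a sparse file the same size as the real disks (hypothetical 8T here)
truncate -s 8T /tmp/placeholder.img
# build the raidz1 from four real disks plus the placeholder file vdev
zpool create tank raidz1 da0 da1 da2 da3 /tmp/placeholder.img
# take the placeholder offline immediately so nothing is ever written to it
zpool offline tank /tmp/placeholder.img
rm /tmp/placeholder.img
# later, when the real fifth disk is available:
zpool replace tank /tmp/placeholder.img da4

Note that the pool stays DEGRADED, not ONLINE, until the real disk is attached and resilvered, and a degraded raidz1 has no redundancy in the meantime.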
I'm new to NanoBSD, built with FreeBSD 12 (GENERIC kernel compiled with all modules), and everything runs as expected. Now I intend to build a ZFS pool, but with the root filesystem being read-only that is not possible out of the box. I'm thinking of a link to /var or /etc, but I'm not sure it would be the ideal way out. I also didn't find...
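One possible angle, sketched with a hypothetical pool name tank, a hypothetical disk da0 and an assumed writable mount at /cfg: the main thing ZFS wants to write to the root filesystem is the pool cache file, so pointing the cachefile property at a writable location (or dropping it and importing the pool from an rc script) may be enough, without symlinking /var or /etc around.

# keep the cache file on a writable partition instead of the read-only root
zpool create -o cachefile=/cfg/zpool.cache tank da0
# or keep no cache file at all and import the pool explicitly at boot
zpool import -o cachefile=none tank

On NanoBSD the usual writable places are the /cfg config slice or the memory-backed /var, so either could hold the cache file if one is wanted.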
Design of a huge future storage system with FreeBSD and ZFS
The goal is to improve this design with your help and recommendations...
Thanks in advance for your time and help!
I will update this content as design improvements are made...
Hi folks,
Recently we had network maintenance during which the switches connected to the node were rebooted. Because of this, one of the nodes in the cluster hit a CPU panic, crashed, and eventually rebooted. After the reboot we tried to bring up the services and found that we have some problem...