As we're running out of disk space, I've added another block storage device and made it a ZFS partition with gpart:
$ gpart show
=>      40  125829040  vtbd0  GPT  (60G)
        40       1024      1  freebsd-boot  (512K)
      1064    6291456      2  freebsd-zfs   (3.0G)
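A minimal sketch of turning that partition into a pool, assuming the new partition is vtbd0p2 and using a hypothetical pool name "tank" (both the partition index and the pool name are assumptions, not from the post):

```shell
# Sketch only: partition index 2 and pool name "tank" are assumptions.

# Optionally label the partition so the pool survives device renumbering
gpart modify -i 2 -l zfs0 vtbd0

# Create a single-disk pool on the labeled partition
zpool create tank /dev/gpt/zfs0

# Verify the new pool
zpool status tank
```

These commands need root; afterwards `zpool status` should show the new single-disk pool.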
I am a newbie to ZFS, but I've installed it on my laptop in order to learn about it (hey, it's a free country).
I am totally confused: what are the recommended poudriere.conf settings to use?
For example, there is the memory setting to use instead of TMPFS, but I configure TMPFS to...
...and no, this is not a long time ago in a galaxy far far away either...
For those of you who are following my experience with FreeBSD and a hard disk failure, this is system install v2.0. If you are new to my little story, you can catch up at the following links...
I have a machine with three 2TB drives mirroring the single zroot zpool. Is there a command sequence to pull one of those drives from the mirror, make the mirror forget it as a member of the pool and mark the drive as a spare rather than as an active pool member? A link to a page would be fine...
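For a three-way mirror, the usual sequence is detach-then-re-add-as-spare; a sketch assuming the pool "zroot" from the post and a hypothetical device name ada2p3:

```shell
# Sketch only: the device name ada2p3 is a hypothetical placeholder;
# check "zpool status" for the real member names first.

# Remove the third disk from the mirror (the pool keeps running
# as a two-way mirror)
zpool detach zroot ada2p3

# Re-add the same disk to the pool as a hot spare
zpool add zroot spare ada2p3

# Confirm: the disk should now appear under the "spares" section
zpool status zroot
```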
I've searched Google and this forum but still no success. I've created a file-backed filesystem (memory disk). What I want is to automatically mount the md device at boot. What would be the correct entries in rc.conf and fstab? Thank you for your input.
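One common approach, as a sketch (the backing file path /var/md.img and the mount point /mnt/md are hypothetical placeholders):

```shell
# /etc/rc.conf -- the mdconfig rc script creates md0 from the
# backing file at boot (see rc.conf(5))
mdconfig_md0="-t vnode -f /var/md.img"

# /etc/fstab -- the "late" option defers the mount until after
# the md device has been created
/dev/md0   /mnt/md   ufs   rw,late   0   2
```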
I've run 'tunefs -m 0' and 'tunefs -o space' on my UFS hard drive; however, I haven't seen any reclaimed space, even though the changes were written successfully.
It's a 4TB drive, so I expect 3.725 TB, but I'm getting 3.52 TB.
I'm looking for a solution.
We have a storage server with a 60x4TB ZFS pool split across three raidz2 vdevs, plus 256 GB RAM. We have to connect around 30 macOS clients for online video streaming (~7-8 Gbit/s) via NFSv3.
Sometimes users complain about freezing.
Hi there :)
Someone demonstrated a while back running OpenStack hosted on FreeBSD 11. Described right here
To me that sounds awesome. I would now like to have a similar setup with FreeBSD 11, Xen, and OpenStack hosted on Dom0. The minimum goal is to run FreeBSD as a Nova Compute node
I am an idiot.
I have no backup. Nothing critically important, but 6 TB of personal data I would sorely like to recover. I solemnly swear to create real backups first, before anything else, if I can get this working again.
I'm a newbie to FreeBSD. This new FreeBSD system is replacing an old...
I am a relative noob to the storage world in general and FreeBSD in particular. From what I have been learning of late, I have become somewhat familiar with concepts like disk queuing, IOPS, latencies, and the like
I am also reading the classic 'The Design and Implementation of the FreeBSD...
OS: FreeBSD 10.3
SAS HBA: LSI 9201-16e
SAS HBA FW/DRIVER: P19
DISK ENCLOSURE: Supermicro SC847E16-RJBOD1
CAPACITY-DISKS: Seagate ST6000NM024 6TB
FAST-DISKS: Samsung 850 Pro MZ-7KE1TOBW 1TB
I'm having an issue with my FreeBSD build for our homebrew NAS solution here at...
We are trying to use smartmontools (the smartctl command) to check the health of our file servers' disks.
We have an LSI MegaRaid (Dell branded) controller attached to a DAS array. Each of the 12 disks are a single-disk volume, presenting as mfidXX. I'm aware that there are /dev/passYY devices...
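A hedged sketch of the usual approach on FreeBSD via the pass-through devices (the pass3 number is a hypothetical placeholder; map disks to pass devices first):

```shell
# Map physical disks to their CAM pass-through devices
camcontrol devlist

# SAS disks behind the controller usually answer SMART queries directly:
smartctl -a /dev/pass3

# SATA disks behind a SAS controller often need the SAT translation layer:
smartctl -a -d sat /dev/pass3
```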
I am currently building my first physical FreeBSD box which should soon replace my Synology NAS.
I have put together all the pieces and was able to install FreeBSD just fine, but I have had some issues ever since I created the main storage pool where all the files are supposed...
I'm working on a FreeBSD-based dual-controller reliable storage system concept, with the aim of implementing ZFS and an in-memory cache. Feel free to discuss. The system needs to be tested, so any help would be great!
As we all know, there is no single command to show storage devices on a FreeBSD system. Some grep the dmesg command, some check the /var/run/dmesg.boot file, some try the camcontrol command, and so on... As I struggle to create any empathy for Linux systems, I really like the lsblk command and I always...
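For reference, a few base-system ways to enumerate disks on a stock FreeBSD install (no third-party tools assumed):

```shell
# The kernel's list of disk devices
sysctl -n kern.disks

# GEOM's view, including sizes and descriptions
geom disk list

# CAM-attached devices (ada/da/cd) and their pass devices
camcontrol devlist
```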
I want to know if FreeBSD / ZFS or UFS sends a flag to lower-level drivers regarding file system metadata, in a fashion similar to Linux, where the ext3/4, jfs, and other file systems use the
REQ_RW_META and BIO_RW_META flags in the struct request and struct bio structures respectively to tell lower...