Hi
Previously we could change the scrub priority by tuning these two parameters:
vfs.zfs.scrub_delay
vfs.zfs.resilver_delay
However, these two parameters have been removed.
Is there any replacement for them to lower the I/O priority during scrubbing or resilvering?
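So far the closest replacements I can find are the ZIO scheduler tunables. I assume something like this would throttle scrub and resilver I/O on an OpenZFS-based release (the sysctl names and values below are my guess, verify with sysctl -d on your version):
# Illustrative values only: cap concurrent scrub/resilver I/Os per vdev
sysctl vfs.zfs.vdev.scrub_min_active=1
sysctl vfs.zfs.vdev.scrub_max_active=1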
Too bad, it seems there is no way to change the recordsize of files with zfs send and receive.
The VM downtime when changing recordsize via rsync or copy is much longer; incremental zfs send and receive has a much shorter impact than either of those two methods.
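For reference, this is the incremental flow I mean, which keeps the VM offline only for the final delta (dataset and snapshot names below are placeholders); sadly the received files keep their original block sizes:
# Full copy while the VM keeps running
zfs snapshot vol00/vm@base
zfs send vol00/vm@base | zfs receive -u vol00/vm_new
# Shut the VM down, then send only the changes since @base
zfs snapshot vol00/vm@final
zfs send -i vol00/vm@base vol00/vm@final | zfs receive vol00/vm_new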
Hi
Recently I experienced low 4k-read benchmark performance in my VM, with reads at only 21-25MB/s, even though reads on the host saturated the SSD at 530MB/s during the benchmark.
After investigation, I realized the issue was due to read amplification with the 128k recordsize...
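Changing the property itself is simple (the dataset name here is just an example), but recordsize only applies to newly written blocks, so the existing VM images have to be rewritten somehow, which is what the send/receive question above is about:
zfs set recordsize=4k vol00/vm
zfs get recordsize vol00/vm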
My server has the following parameters in /etc/rc.conf
# NFS
nfsv4_server_enable="YES"       # enable the NFSv4 server
rpcbind_enable="YES"            # required for NFSv3 mounts
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 16"  # serve UDP and TCP with 16 nfsd threads
mountd_flags="-r"               # allow mount requests for regular files
rpc_lockd_enable="YES"          # NLM file locking
rpc_statd_enable="YES"          # NSM status monitoring for lockd
Hi
I have added the following lines to /etc/exports:
/vol/server/storage1_vol00 -maproot=root -alldirs
/vol/server/storage1_vol01 -maproot=root -alldirs
and zfs sharenfs is set to off on both filesystems.
However, after restarting nfsd, the volumes are still not mountable, failing with the error...
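To see what the server actually exports I check with showmount; also worth noting that mountd, not nfsd, reads /etc/exports, so it needs a reload after edits (both are standard base-system commands):
showmount -e localhost
service mountd reload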
I found the root cause: sharenfs was enabled on the destination zfs filesystem.
I suspect zfs re-generates /etc/zfs/exports and restarts the NFS service every time a zfs receive completes. During that time the mounted host becomes unreadable...
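So the fix on my side was along these lines, with the dataset name below standing in for the receive-side filesystem:
# Keep zfs receive from rewriting /etc/zfs/exports and kicking NFS
zfs set sharenfs=off backup/server
zfs get sharenfs backup/server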
The problem occurs when I'm running zfs send over ssh to this server, but through another network interface.
Increasing the NFS timeout doesn't help.
No idea why it happens on NFS only; could ssh be using a port that the NFS portmapper is using?
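To check that port-clash guess, something like this should show what is registered with rpcbind and what is actually listening (standard base tools):
rpcinfo -p localhost                    # ports registered with rpcbind
sockstat -l4 | egrep 'sshd|nfsd|rpc'    # sockets currently listening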
There is still 3.5TB free and the volume is 70% full, with no quota on it. The 9.5GB is just the data written during this test; sometimes it manages to write 20GB, sometimes less than 5GB.
A dd read has the same issue; it just dies randomly.
Hi
I'm having the following NFS problem: the mount point is writable, but it reports "permission denied" in the midst of reading or writing. There is no issue when writing with dd locally.
The system is FreeBSD 10.2-RELEASE-p24
root@storage:/mnt # mount -t nfs localhost:/vol/test /mnt/test/...
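The test I run against that mount looks like this (paths as above, sizes arbitrary), and the read or write dies partway through at random:
dd if=/dev/zero of=/mnt/test/ddtest bs=1m count=10240    # ~10GB write over NFS
dd if=/mnt/test/ddtest of=/dev/null bs=1m                # read it back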
Hi
I'm unable to import the pool with a missing log device; even zpool import -m didn't work. Any ideas?
FreeBSD 10.2-p18, ZFS pool version 5000
root@:~ # zpool import
   pool: vol
     id: 3396994461211423289
  state: DEGRADED
 status: One or more devices contains corrupted data.
 action: The...
FreeBSD 10.3 has another issue which does not occur on FreeBSD 10.2: the L2ARC device on a GPT partition goes missing after zpool import or a server reboot.
I'm sticking with FreeBSD 10.2 until they fix this issue.
I have since upgraded to FreeBSD 10.3-p2, but the issue still persists. Does that mean we won't get a fixed FreeBSD 10.3 until they make changes in the kernel? Probably FreeBSD 10.4 or 11?
Surprisingly, no one has mentioned this issue, but it definitely happens after a reboot or zpool import.
I'm holding off on moving to FreeBSD 10.3 because of this.
The issue can be reproduced with the following commands, assuming the L2ARC disk is da4:
gpart create -s gpt da4                                   # new GPT scheme on the disk
gpart add -t freebsd-zfs -b 2048 -a 4k -l l2arc_disk da4  # 4k-aligned partition with a GPT label
zpool add vol00 cache /dev/gpt/l2arc_disk                 # attach as L2ARC via the label
zpool export vol00
zpool import -d /dev/gpt vol00                            # after this import the label is gone
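To see whether the label itself survived, glabel shows the label layer and zpool status shows what the pool ended up with:
glabel status | grep l2arc_disk
zpool status vol00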
Has anyone encountered the same issue...
Found something today: a spare disk with a GPT label is also not presented correctly after zpool import.
And all cache devices with GPT labels go missing after import. I can confirm this issue only happens on FreeBSD 10.3; there is no issue on the same server running FreeBSD 10.2.
Hi,
I have no idea why this weird problem only occurs on FreeBSD 10.3: some of the disk labels are gone when importing my zpool or after rebooting the server.
NAME        STATE     READ WRITE CKSUM
vol...