ZFS: The data transfer rate is better if I use ext4/Linux than UFS or ZFS on FreeBSD. Why?

Hello to everyone.

Can someone explain why, when I copy data:

from ufs to zfs disks
from zfs to ext4 disks and vice versa
from ext4 disks to ntfs disks and vice versa

or even when I create the image of a disk with dd, the operation is much faster if I use a real installation of Linux with the ext4 fs instead of FreeBSD with ufs or zfs? This is the reason why I created a qemu vm to do the same:

Code:
qemu-system-x86_64 -name debian_fs -machine q35 -cpu kvm64,hv_relaxed,hv_time,hv_synic \
-m 1G -vga std -drive file=Debian-fs.img,format=raw -drive file=/dev/$vmdiskA,format=raw \
-drive file=/dev/$vmdiskB,format=raw -rtc base=localtime \
-device usb-ehci,id=usb,bus=pcie.0,addr=0x3 -device usb-tablet -device usb-kbd \
-smbios type=2 -nodefaults -netdev tap,id=mynet0,ifname=tap19,script=no,downscript=no \
-device e1000,netdev=mynet0,mac=52:55:00:d1:55:01 -device ich9-ahci,id=sata \
-drive if=pflash,format=raw,readonly=on,file=/usr/local/share/edk2-qemu/QEMU_UEFI_CODE-x86_64.fd \
-drive if=pflash,format=raw,file=/usr/local/share/edk2-qemu/QEMU_UEFI_VARS-x86_64.fd \
-nographic -serial none -monitor none &

but unfortunately I haven't reached the same speed that I get when running a physical installation of Linux. Instead, the data transfer speed is even worse than using FreeBSD with zfs or ufs. In terms of speed, is a native ext4 fs better than zfs or ufs under FreeBSD? Why is using a qemu vm a bad idea? Is using a bhyve vm better?
 
Don't use ext4 with FreeBSD, it's 10 times slower than using ZFS.

I had a USB drive formatted with ext4 and it was fine copying data from the drive to FreeBSD,
but copying data from FreeBSD to the drive with rsync was really, really slow.

Copying data to the ext4-formatted drive, the max speed I got was about 4 MB/s,
whereas copying data to the same drive formatted as ZFS I got 40 MB/s transfer speeds.

It makes a big difference if you are trying to copy gigs of data.
 
The comparison I made is from ext4 to zfs and vice versa, or even from ufs to zfs, on a physical Linux installation, and from ufs to zfs and vice versa on FreeBSD. Or from ext4 to zfs and vice versa, or from ufs to zfs, on a virtual Linux installation, and from ufs to zfs and vice versa on FreeBSD. In that last case I see the worst results.
 
I don't use the ext or ntfs driver for FreeBSD at all, because in the past I saw a lot of data corruption. This is the reason why I tried to copy the information from one disk to another within a Linux vm, using the ext4 fs as the main fs. But I saw that it didn't help at all. The best choice is to use Linux natively. I would like to understand why it is better than using a Linux vm.
 
Speed / performance as measured by?

There are many different workloads and metrics for filesystem performance. ZFS does a lot (integrity, checksums, crash-resistance) that other filesystems do not.

As to why a VM is slower? Layers. Layers necessarily add latency. Especially with small I/O sizes, latency will kill performance.
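
If you want a number instead of an impression, a quick sequential-write test on each filesystem gives you something comparable. A minimal sketch, assuming the filesystem under test is mounted at /mnt/target (a placeholder path; adjust path and size to your setup). Keep in mind that writing zeros to a ZFS dataset with compression enabled will inflate the result.

Code:
# Linux (GNU dd): write 1 GiB in 1 MiB blocks, flushing to disk before reporting
dd if=/dev/zero of=/mnt/target/testfile bs=1M count=1024 conv=fdatasync status=progress

# FreeBSD: if your dd lacks conv=fdatasync, time the write plus an explicit sync
time sh -c 'dd if=/dev/zero of=/mnt/target/testfile bs=1m count=1024 && sync'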
 
You want mathematics. I'm not so experienced. Speed / performance is measured by my eyes.
How can I reduce the latency of the VM?
 
You likely can’t. What you can do is use larger operations if running dd is actually your use case. (bs=1M, for example).
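
Something like the following, where the source device and output path are only placeholders for your own disk and destination:

Code:
# Image a whole disk with a large block size to amortize per-operation latency
# (status=progress works with GNU dd and recent FreeBSD dd; drop it if yours complains)
dd if=/dev/ada0 of=/backup/disk.img bs=1M status=progress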

But for things like rsync, you might look at mounting with noatime on FreeBSD or relatime on Linux. This will cut down on extra metadata updates.
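
For instance (the devices, mount points, and dataset names below are just placeholders):

Code:
# FreeBSD: remount a UFS filesystem without access-time updates
mount -u -o noatime /mnt/data
# For a ZFS dataset the same idea is a property:
zfs set atime=off pool/data

# Linux: remount an ext4 filesystem with relatime (often already the default)
mount -o remount,relatime /mnt/data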

Additionally, on zfs, you can use sync=disabled to allow it to coalesce transactions more efficiently. (I would avoid this in general, but during a specific task like populating a new drive with rsync, it’s benign so long as you don’t have a power outage.)
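
A sketch of that workflow, with a made-up dataset name:

Code:
# Disable synchronous writes only while seeding the new drive, then restore the default
zfs set sync=disabled pool/newdrive
rsync -a /source/data/ /pool/newdrive/
zfs set sync=standard pool/newdrive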
 
You want mathematics. I'm not so experienced. Speed / performance is measured by my eyes.
How can I reduce the latency of the VM?
In the performance business we call that a "seat of the pants" measurement. And seat of the pants measurement is unreliable compared to actual numbers. Users I've dealt with in my 50+ year career always say "it feels slower" or "it feels faster", but feeling slower or feeling faster is hardly a measurement to base tuning decisions on.
 
Man, I clearly saw with my own eyes something like 4 MB/s inside the VM and 50 MB/s using Linux physically.
 
A VM will never match the speed of a physical machine unless you're on an ESXi host (VMware) in an enterprise environment. If you're running vbox, qemu, bochs, bhyve, or kvm on your home machine, you'll get a fraction of the performance of a physical machine.

At home your VM will *always* perform much worse than a physical machine.

At $JOB we use VMware and KVM. In both cases the hosts are huge (multi-CPU and multi-terabyte). VMs on those compare closely to physical machines with fewer CPUs and gigabytes of RAM. You'll never get that performance with a VM on a computer at home, unless you have the $$$ to buy one of those beasts we have at $JOB.
 