Running FreeBSD as a virtual guest with ZFS (Proxmox)

junialter


Hi,

I've been running Proxmox 6 with ZFS as guest storage for a while now.
As guests I have a couple of FreeBSD systems, and I plan on expanding the list of FreeBSD guests, migrating away from Linux.
I once read that it would be counterproductive to use ZFS within the FreeBSD guest because ZFS is already used as the backend for the hypervisor.

Can someone please elaborate why that is and what would be a better option? Should I be using UFS instead?

Thank you very much.
 

covacat


The theory is that the host OS already provides the ZFS benefits (or part of them), such as data integrity and snapshots, and by using less complex file systems on the guests you go easier on CPU and memory.
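If one does run ZFS inside the guest anyway, a common mitigation for the double-caching memory cost is to cap the guest's ARC. A sketch (the value is an example, not a recommendation; on FreeBSD 13+ the tunable is spelled vfs.zfs.arc.max, older releases use vfs.zfs.arc_max):

```
# /boot/loader.conf inside the FreeBSD guest: cap the ZFS ARC at 512 MiB
vfs.zfs.arc_max="536870912"
```

The host's ARC already caches the zvol blocks, so a large guest-side ARC mostly duplicates data that is cached once already.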
 

Lamia


FreeBSD on ZFS runs smoothly on Proxmox, whether the host uses ZFS or not. Choosing ZFS for the host is preferred owing to its benefits.
Going by the above, the question then becomes which other, less complex file systems would be appropriate for the guest, noting that this depends on the purpose of the guest.
 

gpw928


Double file system overhead is inevitable. It's just a matter of choosing your poison.

Faced with your dilemma, the best compromise I could conjure to minimise file system overhead on the ZFS server was to create a zvol on the ZFS server to provision a block device.
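As a sketch, creating such a zvol on the ZFS host might look like this (pool and dataset names are examples):

```
# Create a 32 GiB zvol to back a guest disk (tank/vm/freebsd0-disk0 is an example name)
zfs create -V 32G -o volblocksize=16k tank/vm/freebsd0-disk0
# The block device then appears as /dev/zvol/tank/vm/freebsd0-disk0
```

The zvol sits below the file system layer, so the guest gets ZFS checksumming, snapshots and send/receive on the host side without a host file system in the I/O path.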

In my case, the ZFS server and the virtualisation server were actually on different hardware platforms. So I had the extra step of using iSCSI to present that block device to the virtualisation server.
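On a FreeBSD ZFS server, exporting the zvol over iSCSI can be done with ctld. A minimal sketch of /etc/ctl.conf (the IQN and path are examples):

```
# /etc/ctl.conf on the ZFS server
target iqn.2024-01.org.example:vm-disks {
        auth-group no-authentication
        portal-group default
        lun 0 {
                backend block
                path /dev/zvol/tank/vm/freebsd0-disk0
        }
}
```

The virtualisation server then logs in as an iSCSI initiator and sees the zvol as an ordinary block device.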

This worked quite well. On the virtualisation server, the zvols that presented as "disks" were then provisioned directly to the vm client, and managed just like physical disks on the client.

I don't use proxmox, but would expect that with the ZFS server and virtualisation server on the same physical host you could provision zvols directly to a local vm client (without iSCSI).
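That is indeed how Proxmox behaves with a ZFS-backed storage pool: it creates a zvol per virtual disk and attaches it raw. A sketch, assuming a storage named "local-zfs" and VM ID 100 (both examples):

```
# On the Proxmox host: add a 32 GiB disk to VM 100 from the ZFS storage pool
qm set 100 --scsi1 local-zfs:32
# Proxmox creates a zvol (e.g. rpool/data/vm-100-disk-1) and attaches it as a raw disk
```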
 

Lamia


gpw928 said: Faced with your dilemma, the best compromise I could conjure to minimise file system overhead on the ZFS server was to create a zvol on the ZFS server to provision a block device. (quoted above)
That is the standard practice regardless of where the VM manager runs. It works on Proxmox with zvols unless one wants something different. Migration of images between hosts is then easy to accomplish.
One would still be left with choosing the file system for the guest OS.
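For example, a lightweight choice inside the guest would be UFS with soft updates journaling (the device name is an example; the zvol-backed disk may appear under a different name):

```
# Inside the FreeBSD guest: format the attached disk with UFS,
# -j enables soft updates journaling (implies -U)
newfs -j /dev/da1
mount /dev/da1 /mnt
```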
 

gpw928


Lamia said: That's being the standard practice regardless of the location of the VM manager. (quoted above)
That's interesting, thank you.

Since the original poster was concerned about double file system overhead, I had assumed that (s)he was using something filesystem-resident like QCOW2.
 