ZFS Extremely slow disk I/O on OpenStack. Need help interpreting results.

I started going down this rabbit hole after upgrading three virtual machines from FreeBSD 14.3-RELEASE-p9 to 14.4-RELEASE. The VMs are hosted on an OpenStack cluster that I don't administer. This was a proof of concept to move some servers off VMware vSphere onto another group's OpenStack platform.

This is my first time using OpenStack, and for the most part it works fine. We built the three VMs from the official FreeBSD cloud image (FreeBSD-14.3-RELEASE-amd64-BASIC-CLOUDINIT-zfs.qcow2). From the beginning, part of me kept noticing "Hey, is this slower than I'm used to?" whenever I did anything. Something felt off compared to my vSphere VMs, but the workloads were seemingly running OK. I did have to adjust the PHP opcache settings to get acceptable results.

Fast forward to a few weeks ago, when I decided to upgrade to 14.4-RELEASE. That's when the slowness became impossible to ignore. The OpenStack VMs took an hour or more to run through the standard freebsd-update -r 14.4-RELEASE upgrade process, the subsequent install commands, and the reboots. On my vSphere VMs, I can complete the same process in roughly 15 minutes, depending on whether I have to resolve any conflicts. 15 minutes versus 60+ minutes is an easy comparison to make, and it was the same across all three VMs.

I know benchmarks and speed tests are not always reliable indicators of problems, but I needed something to confirm what I was seeing. I've never experienced this level of slowness on FreeBSD before.

The OpenStack VMs were created from the official cloud image (FreeBSD-14.3-RELEASE-amd64-BASIC-CLOUDINIT-zfs.qcow2) using all defaults. The vSphere VM was installed from the ISO image (FreeBSD-14.3-RELEASE-amd64-disc1.iso).

Code:
[~]$ freebsd-version -kru ; uname -aKU
14.4-RELEASE
14.4-RELEASE
14.4-RELEASE
FreeBSD <both VMs> 14.4-RELEASE FreeBSD 14.4-RELEASE releng/14.4-n273675-a456f852d145 GENERIC amd64 1404000 1404000

On an OpenStack VM
Code:
[root@openstack-vm ~]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

write: IOPS=484, BW=1940KiB/s (1986kB/s)(115MiB/60590msec); 0 zone resets
WRITE: bw=1940KiB/s (1986kB/s), 1940KiB/s-1940KiB/s (1986kB/s-1986kB/s), io=115MiB (120MB), run=60590-60590msec

[root@openstack-vm ~]# bonnie++ -d /mnt/test -u root -s 16G -n 256
Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
openstack-vm 16G  301k  95 27.7m   1 66.6m   4 1033k  99  590m  19  2815  53
Latency             58116us   16468ms    6927ms   16962us     170ms     179ms
Version  1.98       ------Sequential Create------ --------Random Create--------
openstack-vm -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                256 21175.718292  25 233652.842040  99 57980.991283  94 75808.902620  97 203705.724168  98 63950.805093  98
Latency              7535ms   10186us   12734us   43836us    1279us    4126us



On a vSphere VM
Code:
[root@vsphere-vm ~]# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

write: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(2621MiB/60260msec); 0 zone resets
WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=2621MiB (2749MB), run=60260-60260msec

[root@vsphere-vm ~]# bonnie++ -d /mnt/test -u root -s 16G -n 256
Version  1.98       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
vsphere-vm     16G  288k  99  1.5g  95  1.1g  95  798k  99  2.7g  99 +++++ +++
Latency             38474us    2748us    2908us   18932us     802us    1568us
Version  1.98       ------Sequential Create------ --------Random Create--------
vsphere-vm         -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                256 60087.271687  99 197630.015678  99 44886.226062  92 67944.266626  98 188127.384552  99 49042.758713  98
Latency              2310us     111us     188ms   25797us      78us    3201us



Similarities
  • Both run the same FreeBSD release
  • Both run ZFS
  • Both have 8 GB vRAM
  • Both have 40 GB disks
  • zroot primarycache=all on both
  • zroot atime=off on both

Differences (the obvious ones I can think of; let me know if there's something else to check)
  • Infrastructure hosting
  • vCPU count
    • OpenStack VM: 4 vCPU
    • vSphere VM: 2 vCPU
  • ZFS compression
    • OpenStack VM: off
    • vSphere VM: lz4
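
The compression difference is the one I can eliminate myself. Assuming the pool is named zroot on both machines (as the zpool list output shows), something like this should rule it out; a sketch, not something I've fully vetted yet:

```shell
# Compare the properties that most affect write benchmarks on both VMs
zfs get -o name,property,value,source compression,sync,recordsize,primarycache,atime zroot

# Match the vSphere VM by enabling lz4 on the OpenStack VM
# (only affects newly written blocks, so re-run the benchmarks afterwards)
zfs set compression=lz4 zroot
```
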
Code:
[root@openstack-vm ~]# gpart show -lp
=>      34  83886006    vtbd0  GPT  (40G)
        34       345  vtbd0p1  bootfs  (173K)
       379     66584  vtbd0p2  efiesp  (33M)
     66963   2097152  vtbd0p3  swapfs  (1G)
   2164115      2048  vtbd0p4  config-drive  (1M)
   2166163  81719877  vtbd0p5  rootfs  (39G)

[root@openstack-vm ~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  38.5G  19.5G  19.0G        -         -    16%    50%  1.00x    ONLINE  -

[root@openstack-vm ~]# zpool status
  pool: zroot
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      vtbd0p5   ONLINE       0     0     0

errors: No known data errors

[root@openstack-vm ~]# pciconf -lv
hostb0@pci0:0:0:0:    class=0x060000 rev=0x02 hdr=0x00 vendor=0x8086 device=0x1237 subvendor=0x1af4 subdevice=0x1100
    vendor     = 'Intel Corporation'
    device     = '440FX - 82441FX PMC [Natoma]'
    class      = bridge
    subclass   = HOST-PCI
isab0@pci0:0:1:0:    class=0x060100 rev=0x00 hdr=0x00 vendor=0x8086 device=0x7000 subvendor=0x1af4 subdevice=0x1100
    vendor     = 'Intel Corporation'
    device     = '82371SB PIIX3 ISA [Natoma/Triton II]'
    class      = bridge
    subclass   = PCI-ISA
atapci0@pci0:0:1:1:    class=0x010180 rev=0x00 hdr=0x00 vendor=0x8086 device=0x7010 subvendor=0x1af4 subdevice=0x1100
    vendor     = 'Intel Corporation'
    device     = '82371SB PIIX3 IDE [Natoma/Triton II]'
    class      = mass storage
    subclass   = ATA
intsmb0@pci0:0:1:3:    class=0x068000 rev=0x03 hdr=0x00 vendor=0x8086 device=0x7113 subvendor=0x1af4 subdevice=0x1100
    vendor     = 'Intel Corporation'
    device     = '82371AB/EB/MB PIIX4 ACPI'
    class      = bridge
vgapci0@pci0:0:2:0:    class=0x030000 rev=0x05 hdr=0x00 vendor=0x1b36 device=0x0100 subvendor=0x1af4 subdevice=0x1100
    vendor     = 'Red Hat, Inc.'
    device     = 'QXL paravirtual graphic card'
    class      = display
    subclass   = VGA
virtio_pci0@pci0:0:3:0:    class=0x020000 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1000 subvendor=0x1af4 subdevice=0x0001
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio network device'
    class      = network
    subclass   = ethernet
virtio_pci1@pci0:0:4:0:    class=0x078000 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1003 subvendor=0x1af4 subdevice=0x0003
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio console'
    class      = simple comms
virtio_pci2@pci0:0:5:0:    class=0x010000 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1001 subvendor=0x1af4 subdevice=0x0002
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio block device'
    class      = mass storage
    subclass   = SCSI
virtio_pci3@pci0:0:6:0:    class=0x00ff00 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1002 subvendor=0x1af4 subdevice=0x0005
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio memory balloon'
    class      = old
virtio_pci4@pci0:0:7:0:    class=0x00ff00 rev=0x00 hdr=0x00 vendor=0x1af4 device=0x1005 subvendor=0x1af4 subdevice=0x0004
    vendor     = 'Red Hat, Inc.'
    device     = 'Virtio RNG'
    class      = old

Code:
[root@vsphere-vm ~]# gpart show -lp
=>      40  83886000    da0  GPT  (40G)
        40      1024  da0p1  gptboot0  (512K)
      1064       984         - free -  (492K)
      2048   8388608  da0p2  swap0  (4.0G)
   8390656  75493376  da0p3  zfs0  (36G)
  83884032      2008         - free -  (1.0M)

[root@vsphere-vm ~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  35.5G  4.01G  31.5G        -         -    20%    11%  1.00x    ONLINE  -
[root@vsphere-vm ~]# zpool status
  pool: zroot
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      da0p3     ONLINE       0     0     0

errors: No known data errors
[root@vsphere-vm ~]# pciconf -lv
hostb0@pci0:0:0:0:  class=0x060000 rev=0x01 hdr=0x00 vendor=0x8086 device=0x7190 subvendor=0x15ad subdevice=0x1976
    vendor     = 'Intel Corporation'
    device     = '440BX/ZX/DX - 82443BX/ZX/DX Host bridge'
    class      = bridge
    subclass   = HOST-PCI
pcib1@pci0:0:1:0:   class=0x060400 rev=0x01 hdr=0x01 vendor=0x8086 device=0x7191 subvendor=0x0000 subdevice=0x0000
    vendor     = 'Intel Corporation'
    device     = '440BX/ZX/DX - 82443BX/ZX/DX AGP bridge'
    class      = bridge
    subclass   = PCI-PCI
isab0@pci0:0:7:0:   class=0x060100 rev=0x08 hdr=0x00 vendor=0x8086 device=0x7110 subvendor=0x15ad subdevice=0x1976
    vendor     = 'Intel Corporation'
    device     = '82371AB/EB/MB PIIX4 ISA'
    class      = bridge
    subclass   = PCI-ISA
atapci0@pci0:0:7:1: class=0x01018a rev=0x01 hdr=0x00 vendor=0x8086 device=0x7111 subvendor=0x15ad subdevice=0x1976
    vendor     = 'Intel Corporation'
    device     = '82371AB/EB/MB PIIX4 IDE'
    class      = mass storage
    subclass   = ATA
intsmb0@pci0:0:7:3: class=0x068000 rev=0x08 hdr=0x00 vendor=0x8086 device=0x7113 subvendor=0x15ad subdevice=0x1976
    vendor     = 'Intel Corporation'
    device     = '82371AB/EB/MB PIIX4 ACPI'
    class      = bridge
vmci0@pci0:0:7:7:   class=0x088000 rev=0x10 hdr=0x00 vendor=0x15ad device=0x0740 subvendor=0x15ad subdevice=0x0740
    vendor     = 'VMware'
    device     = 'Virtual Machine Communication Interface'
    class      = base peripheral
vgapci0@pci0:0:15:0:    class=0x030000 rev=0x00 hdr=0x00 vendor=0x15ad device=0x0405 subvendor=0x15ad subdevice=0x0405
    vendor     = 'VMware'
    device     = 'SVGA II Adapter'
    class      = display
    subclass   = VGA
pcib2@pci0:0:17:0:  class=0x060401 rev=0x02 hdr=0x01 vendor=0x15ad device=0x0790 subvendor=0x15ad subdevice=0x0790
    vendor     = 'VMware'
    device     = 'PCI bridge'
    class      = bridge
    subclass   = PCI-PCI
pcib3@pci0:0:21:0:  class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib4@pci0:0:21:1:  class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib5@pci0:0:21:2:  class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib6@pci0:0:21:3:  class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib7@pci0:0:21:4:  class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib8@pci0:0:21:5:  class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib9@pci0:0:21:6:  class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib10@pci0:0:21:7: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib11@pci0:0:22:0: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib12@pci0:0:22:1: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib13@pci0:0:22:2: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib14@pci0:0:22:3: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib15@pci0:0:22:4: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib16@pci0:0:22:5: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib17@pci0:0:22:6: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib18@pci0:0:22:7: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib19@pci0:0:23:0: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib20@pci0:0:23:1: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib21@pci0:0:23:2: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib22@pci0:0:23:3: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib23@pci0:0:23:4: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib24@pci0:0:23:5: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib25@pci0:0:23:6: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib26@pci0:0:23:7: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib27@pci0:0:24:0: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib28@pci0:0:24:1: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib29@pci0:0:24:2: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib30@pci0:0:24:3: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib31@pci0:0:24:4: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib32@pci0:0:24:5: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib33@pci0:0:24:6: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pcib34@pci0:0:24:7: class=0x060400 rev=0x01 hdr=0x01 vendor=0x15ad device=0x07a0 subvendor=0x15ad subdevice=0x07a0
    vendor     = 'VMware'
    device     = 'PCI Express Root Port'
    class      = bridge
    subclass   = PCI-PCI
pvscsi0@pci0:3:0:0: class=0x010700 rev=0x02 hdr=0x00 vendor=0x15ad device=0x07c0 subvendor=0x15ad subdevice=0x07c0
    vendor     = 'VMware'
    device     = 'PVSCSI SCSI Controller'
    class      = mass storage
    subclass   = SAS
vmx0@pci0:11:0:0:   class=0x020000 rev=0x01 hdr=0x00 vendor=0x15ad device=0x07b0 subvendor=0x15ad subdevice=0x07b0
    vendor     = 'VMware'
    device     = 'VMXNET3 Ethernet Controller'
    class      = network
    subclass   = ethernet

/boot/loader.conf
Code:
[root@openstack-vm ~]# cat /boot/loader.conf
autoboot_delay="-1"
beastie_disable="YES"
console="comconsole,vidconsole"
kern.geom.label.disk_ident.enable=0
loader_logo="none"
zfs_load=YES

[root@vsphere-vm ~]# cat /boot/loader.conf
cryptodev_load="YES"
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
zfs_load="YES"

/etc/sysctl.conf
Code:
[root@openstack-vm ~]# cat /etc/sysctl.conf
<file is empty>

[root@vsphere-vm ~]# cat /etc/sysctl.conf
vfs.zfs.min_auto_ashift=12
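
Since the two pools were created by different installers, the pool ashift is probably worth comparing too. Assuming the pool and device names shown above, a sketch of how to check:

```shell
# Print the ashift each vdev was created with (12 = 4 KiB sectors)
zdb -C zroot | grep ashift

# Sector and stripe sizes the guest sees on the virtio disk
diskinfo -v vtbd0
```
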

I don't know anything about the OpenStack hardware. Our vSphere cluster sits on top of a Dell PowerStore 3000T with all-flash storage and 25Gb interconnects to the vSphere hosts.

I need guidance on how to interpret these results and why there is such an obvious drop-off in disk performance on OpenStack. Is there anything I can do at the VM level to improve the OpenStack performance? Are there better fio or bonnie++ tests I can run?
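
For what it's worth, one test I'm considering next, to separate raw sync-write latency from throughput (assuming fio behaves the same on both VMs):

```shell
# Per-operation fsync latency: each 4k write is followed by an fsync,
# which on virtio-backed storage has to round-trip to the backend
fio --name=sync-lat --ioengine=sync --rw=write --bs=4k --size=256m \
    --fsync=1 --runtime=60 --time_based

# Watch per-device latency from another terminal while the test runs
gstat -p
```

If the fsync latencies are an order of magnitude apart between the two platforms, that would point at the storage backend rather than anything inside the guest.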
 
The numbers looked like it :) I think you are royally screwed.
Wow, that's good to know, and disheartening. I know nothing about Ceph other than that it exists. Do you know if this is something where FreeBSD specifically doesn't play nicely with Ceph, or is it inherent to Ceph itself? I'm wondering if I need to explore different operating systems for this workload if it stays on OpenStack, or stop considering OpenStack altogether and move it to Hyper-V. At the moment those are our only two centrally managed options, since it's looking like we won't be able to stay on vSphere.
 
No, Ceph is just slow. Placing a VM on it is suicide, similar to old NFS home directories.

You could confirm with a Linux VM, if that is an option.
 