Change from mfi to mrsas... disk performance is very bad... TRIM?

Hi All,

After I upgraded from 10.1 to 10.3, I get the errors below when FreeBSD boots. I did some research on the internet but had no luck.

The RAID card I'm using is a Dell H330, running in HBA mode.

Any idea about this?

Thanks

[Screenshot attached: 12.png, showing the boot errors]
 
I'm not sure if it'll help but have you tried switching to the mrsas(4) driver instead of mfi(4)? Keep in mind that if you do the drive names are going to change from mfidX to daX.
 
I'm not sure if it'll help but have you tried switching to the mrsas(4) driver instead of mfi(4)? Keep in mind that if you do the drive names are going to change from mfidX to daX.


Thanks for your reply.

The situation is that right now I'm using the H330 as an HBA card, and I created a RAIDZ2 pool with 6 disks on the default kernel:

Code:
        NAME             STATE     READ WRITE CKSUM
        zroot            ONLINE       0     0     0
          raidz2-0       ONLINE       0     0     0
            mfisyspd0p3  ONLINE       0     0     0
            mfisyspd1p3  ONLINE       0     0     0
            mfisyspd2p3  ONLINE       0     0     0
            mfisyspd3p3  ONLINE       0     0     0
            mfisyspd4p3  ONLINE       0     0     0
            mfisyspd5p3  ONLINE       0     0     0

My concern is that if I change from mfi to mrsas, the disk names will change and my zpool may run into problems.
 
With ZFS it might actually just work. As far as I know ZFS uses the drive's UUID internally and that shouldn't change.
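
If you want to double-check before switching, the identifiers ZFS cares about are the GUIDs stored in the on-disk vdev labels, and you can dump those from any current pool member (just a sketch, substitute one of your own providers):

Code:
# Dump the ZFS vdev label from one of the current pool members.
# The pool_guid/guid fields are what ZFS uses to reassemble the pool,
# no matter what the device node ends up being called after the driver change.
zdb -l /dev/mfisyspd0p3 | grep guid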
 
With ZFS it might actually just work. As far as I know ZFS uses the drive's UUID internally and that shouldn't change.
Thanks

But in case it doesn't boot (because the disk names changed), how can I get into rescue mode and modify loader.conf?
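
Worst case, I suppose I can boot the 10.3 install media, drop to the Live CD shell and fix it from there. Something like this, I think (just sketching it out, assuming the default zroot/ROOT/default layout):

Code:
# From the installer's "Live CD" shell:
zpool import -fN -R /mnt zroot       # import the pool without mounting datasets
zfs mount zroot/ROOT/default         # mount the root dataset (default boot environment name assumed)
vi /mnt/boot/loader.conf             # fix the driver settings
zpool export zroot
reboot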
 
I'm not sure if it'll help but have you tried switching to the mrsas(4) driver instead of mfi(4)? Keep in mind that if you do the drive names are going to change from mfidX to daX.
Hi SirDice,

I tried adding mrsas_load="YES" to loader.conf, but after a reboot FreeBSD is still using the mfi driver, not the new mrsas driver.

Based on https://www.freebsd.org/cgi/man.cgi?query=mrsas&sektion=4&manpath=freebsd-release-ports
it seems this new driver supports the Dell H330, but I'm using the H330 Mini. Even though the H330 and H330 Mini should be essentially the same, I don't know why the driver doesn't pick up the H330 Mini!
 
Issue fixed. After adding mrsas_load="YES", we also need to add
Code:
hw.mfi.mrsas_enable="1"
to the device hints, otherwise FreeBSD keeps using the old mfi driver.
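
So for anyone else with an H330 stuck on mfi, the combination that works here looks like this (stock file locations):

Code:
# /boot/loader.conf
mrsas_load="YES"            # load the mrsas(4) driver

# /boot/device.hints (it's a loader tunable, so loader.conf works as well)
hw.mfi.mrsas_enable="1"     # make mfi(4) leave the controller to mrsas(4)

After the reboot the pool members show up as daX instead of mfisyspdX, and ZFS finds them by their labels anyway.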
 
New question...
After switching from mfi to mrsas, my server is slow... and the disk I/O is pretty high, almost always at 100%.

If I run top, the nginx processes are waiting on disk I/O:


  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
11803 www         1  37   15   503M   119M zio->i 12   2:53   3.56% nginx
11805 www         1  37   15   503M   119M zio->i 14   2:53   3.47% nginx
11811 www         1  36   15   503M   119M zio->i 20   2:59   2.59% nginx
11798 www         1  36   15   499M   115M zio->i  7   2:37   2.49% nginx
12452 www         1  36   15   487M   101M zio->i  6   0:34   2.39% nginx


And here is the gstat output:

 L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
    0    113     67   1669  121.0     24    100  306.5    55.5| da0
   10     45     31    451  143.1     14     56    0.1    59.0| da1
    0     55     42    543    0.3     13     52    0.1     1.1| da2
    8     43     29    739  180.9     14     56   51.2    97.7| da3
   10     44     30    571  166.9     14     56   30.9    81.8| da4
    0     54     40    619    0.5     14     56    2.6     3.2| da5



And here are the vm.vmtotal and zfs-stats results:


vm.vmtotal:
System wide totals computed every five seconds: (values in kilobytes)
===============================================
Processes: (RUNQ: 6 Disk Wait: 0 Page Wait: 0 Sleep: 928)
Virtual Memory: (Total: 19457116K Active: 14391548K)
Real Memory: (Total: 6578188K Active: 6503740K)
Shared Virtual Memory: (Total: 2050276K Active: 15308K)
Shared Real Memory: (Total: 43656K Active: 12896K)
Free Memory: 1762140K
root@www:~ # zfs-stats -a

------------------------------------------------------------------------
ZFS Subsystem Report Mon May 16 12:16:35 2016
------------------------------------------------------------------------

System Information:

Kernel Version: 1003000 (osreldate)
Hardware Platform: amd64
Processor Architecture: amd64

ZFS Storage pool Version: 5000
ZFS Filesystem Version: 5

FreeBSD 10.3-RELEASE-p2 #0: Wed May 4 06:03:51 UTC 2016 root
12:16PM up 4:33, 3 users, load averages: 5.13, 4.03, 3.36

------------------------------------------------------------------------

System Memory:

2.00% 1.24 GiB Active, 6.91% 4.29 GiB Inact
87.94% 54.59 GiB Wired, 0.00% 332.00 KiB Cache
3.16% 1.96 GiB Free, 0.00% 0 Gap

Real Installed: 64.00 GiB
Real Available: 99.62% 63.76 GiB
Real Managed: 97.36% 62.08 GiB

Logical Total: 64.00 GiB
Logical Used: 90.24% 57.75 GiB
Logical Free: 9.76% 6.25 GiB

Kernel Memory: 782.46 MiB
Data: 96.48% 754.95 MiB
Text: 3.52% 27.51 MiB

Kernel Memory Map: 62.08 GiB
Size: 82.07% 50.95 GiB
Free: 17.93% 11.13 GiB

------------------------------------------------------------------------

ARC Summary: (HEALTHY)
Memory Throttle Count: 0

ARC Misc:
Deleted: 278.35k
Recycle Misses: 0
Mutex Misses: 964
Evict Skips: 2.70k

ARC Size: 84.02% 51.31 GiB
Target Size: (Adaptive) 84.01% 51.31 GiB
Min Size (Hard Limit): 12.50% 7.63 GiB
Max Size (High Water): 8:1 61.08 GiB

ARC Size Breakdown:
Recently Used Cache Size: 59.82% 30.70 GiB
Frequently Used Cache Size: 40.18% 20.62 GiB

ARC Hash Breakdown:
Elements Max: 2.20m
Elements Current: 100.00% 2.20m
Collisions: 647.35k
Chain Max: 5
Chains: 241.28k

------------------------------------------------------------------------

ARC Efficiency: 414.28m
Cache Hit Ratio: 99.16% 410.82m
Cache Miss Ratio: 0.84% 3.46m
Actual Hit Ratio: 99.15% 410.77m

Data Demand Efficiency: 99.39% 403.87m
Data Prefetch Efficiency: 56.75% 49.77k

CACHE HITS BY CACHE LIST:
Most Recently Used: 4.41% 18.11m
Most Frequently Used: 95.58% 392.67m
Most Recently Used Ghost: 0.10% 394.09k
Most Frequently Used Ghost: 0.12% 506.38k

CACHE HITS BY DATA TYPE:
Demand Data: 97.70% 401.38m
Prefetch Data: 0.01% 28.24k
Demand Metadata: 2.28% 9.38m
Prefetch Metadata: 0.01% 29.73k

CACHE MISSES BY DATA TYPE:
Demand Data: 71.79% 2.48m
Prefetch Data: 0.62% 21.53k
Demand Metadata: 27.03% 934.97k
Prefetch Metadata: 0.56% 19.40k

------------------------------------------------------------------------

L2ARC is disabled

------------------------------------------------------------------------


------------------------------------------------------------------------

VDEV cache is disabled

------------------------------------------------------------------------

ZFS Tunables (sysctl):
kern.maxusers 4416
vm.kmem_size 66653896704
vm.kmem_size_scale 1
vm.kmem_size_min 0
vm.kmem_size_max 1319413950874
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.enabled 1
vfs.zfs.vol.unmap_enabled 1
vfs.zfs.vol.mode 1
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.version.ioctl 5
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.exclude_metadata 0
vfs.zfs.zio.use_uma 1
vfs.zfs.cache_flush_disable 1
vfs.zfs.zil_replay_disable 0
vfs.zfs.min_auto_ashift 9
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.trim_max_pending 10000
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.trim_max_active 64
vfs.zfs.vdev.trim_min_active 1
vfs.zfs.vdev.scrub_max_active 2
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 10
vfs.zfs.vdev.async_write_min_active 1
vfs.zfs.vdev.async_read_max_active 3
vfs.zfs.vdev.async_read_min_active 1
vfs.zfs.vdev.sync_write_max_active 10
vfs.zfs.vdev.sync_write_min_active 10
vfs.zfs.vdev.sync_read_max_active 10
vfs.zfs.vdev.sync_read_min_active 10
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.async_write_active_max_dirty_percent 60
vfs.zfs.vdev.async_write_active_min_dirty_percent 30
vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
vfs.zfs.vdev.mirror.non_rotating_inc 0
vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 0
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.metaslabs_per_vdev 200
vfs.zfs.txg.timeout 5
vfs.zfs.space_map_blksz 4096
vfs.zfs.spa_slop_shift 5
vfs.zfs.spa_asize_inflation 24
vfs.zfs.deadman_enabled 1
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.recover 0
vfs.zfs.spa_load_verify_data 1
vfs.zfs.spa_load_verify_metadata 1
vfs.zfs.spa_load_verify_maxinflight 10000
vfs.zfs.check_hostid 1
vfs.zfs.mg_fragmentation_threshold 85
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.bias_enabled 1
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.metaslab.fragmentation_factor_enabled 1
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 33554432
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.fragmentation_threshold 70
vfs.zfs.metaslab.gang_bang 16777217
vfs.zfs.free_bpobj_enabled 1
vfs.zfs.free_max_blocks -1
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.max_distance 8388608
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 1
vfs.zfs.delay_scale 500000
vfs.zfs.delay_min_dirty_percent 60
vfs.zfs.dirty_data_sync 67108864
vfs.zfs.dirty_data_max_percent 10
vfs.zfs.dirty_data_max_max 4294967296
vfs.zfs.dirty_data_max 4294967296
vfs.zfs.max_recordsize 1048576
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.dedup.prefetch 1
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_lsize 24647285760
vfs.zfs.mfu_ghost_metadata_lsize 1333219328
vfs.zfs.mfu_ghost_size 25980505088
vfs.zfs.mfu_data_lsize 24993696768
vfs.zfs.mfu_metadata_lsize 74132992
vfs.zfs.mfu_size 26147871744
vfs.zfs.mru_ghost_data_lsize 23861542912
vfs.zfs.mru_ghost_metadata_lsize 3496402432
vfs.zfs.mru_ghost_size 27357945344
vfs.zfs.mru_data_lsize 24735312896
vfs.zfs.mru_metadata_lsize 6396928
vfs.zfs.mru_size 27587964928
vfs.zfs.anon_data_lsize 0
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_size 8735232
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 16395038720
vfs.zfs.arc_free_target 112860
vfs.zfs.arc_shrink_shift 7
vfs.zfs.arc_average_blocksize 8192
vfs.zfs.arc_min 8197519360
vfs.zfs.arc_max 65580154880

------------------------------------------------------------------------



zpool iostat


# zpool iostat 1
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zroot       2.76T  2.77T    196    226  5.96M  1.85M
zroot       2.76T  2.77T     74     30  1.54M   694K
zroot       2.76T  2.77T    193    594  6.40M  3.61M
zroot       2.76T  2.77T     61    125  2.75M   575K
zroot       2.76T  2.77T    119     52  2.75M   237K
zroot       2.76T  2.77T     67     30  2.56M   156K
zroot       2.76T  2.77T     45     78  1.82M   317K
zroot       2.76T  2.77T    111    273  3.54M  1.99M
zroot       2.76T  2.77T    162    708  2.94M  4.99M
zroot       2.76T  2.77T    135    118  4.43M   542K
zroot       2.76T  2.77T    105      3  2.53M  16.0K



All my disks are SSDs. In the past the pool was fast, but now it's very slow...

Can anyone help?
 
I have changed the driver from mrsas back to mfi, and now my server is back to normal.

The thing I noticed is that when I was using the mrsas driver, TRIM is supported (the RAID card can pass the TRIM command through to the disks), and in gstat the disks are almost always >90% busy.

# sysctl -a | grep _trim
kstat.zfs.misc.zio_trim.failed: 0
kstat.zfs.misc.zio_trim.unsupported: 0
kstat.zfs.misc.zio_trim.success: 108528912
kstat.zfs.misc.zio_trim.bytes: 4015158734784


And after I changed back to mfi, TRIM is unsupported by the RAID card, and in gstat the disks are all <10% busy.

kstat.zfs.misc.zio_trim.unsupported: 16089


I'm not sure whether this issue is related to TRIM. Any clues or suggestions?
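
One thing I may try next is to stay on mrsas but turn off ZFS TRIM and compare. If I remember right, vfs.zfs.trim.enabled (listed in the tunables above) is a boot-time tunable on 10.x, so it has to go into loader.conf:

Code:
# /boot/loader.conf -- disable ZFS TRIM to check whether the delete traffic
# is what keeps the SSDs near 100% busy under mrsas (needs a reboot)
vfs.zfs.trim.enabled="0"

Then I can watch gstat -d for a while; the -d flag adds the delete (BIO_DELETE) columns, so it should show whether TRIM was generating the load.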
 