Search results

  1. Poor transfer rates on SATA drives on LSI SAS3008 controller

    For sure this issue affects multiple vendors and models: in addition to the WD drives previously discussed, this disk gets 194.78MB/sec read via camdd, 193.35MB/sec write via camdd, 193.53MB/sec read via dd, 73.67MB/sec write via dd.
    pass0: <ST1000DM003-9YN162 CC4D> ATA8-ACS SATA 3.x device...
  2. Poor transfer rates on SATA drives on LSI SAS3008 controller

    Yes, the drives are all shucked. I don't remember reading anything about SMR on them, and I haven't noticed SMR's performance impacts. The 5400 RPM doesn't bother me in the slightest - these are bulk storage drives. That said, I haven't done a proper random-write test on one of these, either...
  3. Poor transfer rates on SATA drives on LSI SAS3008 controller

    Also does not appear during non-(pure)sequential write - Mount a filesystem on the disk and write to the filesystem, and the drive performs great. It's only sequential writes to the bare disk object that are incorrectly slow - the camdd program works as a workaround for this. It's been...
  4. Poor transfer rates on SATA drives on LSI SAS3008 controller

    I tried following the instructions for the mrsas driver, but it would not pick up this card. Either I screwed something up, or the SAS3008 is only supported by mpr in FreeBSD-14.
  5. Poor transfer rates on SATA drives on LSI SAS3008 controller

    Hrm. I wonder if this is the reason the camdd program exists.
    # camdd -i file=/dev/zero -o pass=/dev/da1 -m 10G
    10737418240 bytes read from /dev/zero
    10737418240 bytes written to pass1
    55.5901 seconds elapsed
    184.21 MB/sec
    # camcontrol tags da1 -v
    (pass1:mpr0:0:1:0): dev_openings 252...
  6. Poor transfer rates on SATA drives on LSI SAS3008 controller

    I disconnected one drive from the LSI controller and plugged it into the motherboard, where it was picked up as ada1.
    # dd if=/dev/zero of=/dev/ada1 bs=1m count=10240
    10240+0 records in
    10240+0 records out
    10737418240 bytes transferred in 52.340632 secs (205144987 bytes/sec)
    I ran camcontrol...
  7. Poor transfer rates on SATA drives on LSI SAS3008 controller

    > But the thing is, I thought that print was removed in 14.0. so that might indicate something isn't matching your expectation (or the FreeNAS-core is 13.x and still has the message). That print is on FreeBSD 14.0-RELEASE, not FreeNAS/TrueNAS-Core. > Are the multiple dd's to one drive or...
  8. Poor transfer rates on SATA drives on LSI SAS3008 controller

    So I've done a bit of testing in some other OSes, with interesting results:
    Linux - TrueNAS-SCALE - works fine, speeds as expected
    Linux - Ubuntu Desktop 23.10 - Prints a bunch of angry messages (a bunch of array out-of-bounds stuff, I think it was) on startup, works and speeds are as expected...
  9. Poor transfer rates on SATA drives on LSI SAS3008 controller

    This is a brand-new computer, so I don't know if this is an issue with this controller or some other kind of interaction within the system. Suggestions for troubleshooting are welcome. On a fresh install of FreeBSD 14.0-RELEASE, I'm only getting about 70MB/s while doing a dd if=/dev/zero...
  10. Solved Horrific ZFS performance on new ST4000DM004 drive?

    And I'd say that's pretty conclusive - the ST4000DM004 is an SMR drive. I apologize for these crude graphs; they're what comes out of the box with fio, and I don't care enough to spend time and calories making them prettier. Two-hour 128KB Random-Write test, with a little Random-Read for good measure.
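    A job along those lines can be reproduced with fio roughly as follows; the target device node, run length, and read/write mix are assumptions, not the poster's exact parameters:
    # fio --name=smr-128k --filename=/dev/ada4 --direct=1 \
          --ioengine=posixaio --rw=randrw --rwmixwrite=90 \
          --bs=128k --time_based --runtime=7200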
  11. Solved Horrific ZFS performance on new ST4000DM004 drive?

    Nah. My current backup regime is inadequate, and a reliable 4TB drive will go a ways towards fixing that. I actually think I've resolved the drive issue that caused me to replace the old ST4000DM000-1F2168 drive in the first place [Manufacturing defect on the controller board, fortunately...
  12. Solved Horrific ZFS performance on new ST4000DM004 drive?

    ralphbsz: I understand the situation the hard-drive business faces. Ultimately, Moore's Law on NAND chips will render spinning magnetic disks completely obsolete - it's only a question of how long they can drag it out, not if it is going to happen. I'm not even upset that they put SMR into a...
  13. Solved Horrific ZFS performance on new ST4000DM004 drive?

    I expected "reasonable" disk performance as well, given that there is zero marketing anywhere indicating this drive uses SMR, which is a substantial departure from traditional HDD performance characteristics. SMR drives have historically been marketed as "Archive" drives. I'm chasing cost/TB...
  14. Solved Horrific ZFS performance on new ST4000DM004 drive?

    Zpools are built on GELI providers using 4kb sector size, in addition to setting ashift at pool creation time. zdb indicates ashift 12 on these pools.
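    A minimal sketch of that kind of layout, assuming a hypothetical partition ada4p1 and omitting the geli key/passphrase options:
    # geli init -s 4096 /dev/ada4p1                # 4 KiB sectors on the GELI provider
    # geli attach /dev/ada4p1
    # zpool create -o ashift=12 tank raidz1 /dev/ada4p1.eli /dev/ada5p1.eli /dev/ada6p1.eli
    # zdb -C tank | grep ashift                    # should report ashift: 12
    On releases where zpool create does not accept ashift directly, setting the sysctl vfs.zfs.min_auto_ashift=12 before pool creation serves the same purpose.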
  15. Solved Horrific ZFS performance on new ST4000DM004 drive?

    You've got to be shitting me. I'm aware of SMR and its limitations, and if Seagate wants to put it in consumer drives, that's fine. But for the datasheet not to mention that fact? Absolutely unacceptable. Do you have a source to back this up?
  16. Solved Horrific ZFS performance on new ST4000DM004 drive?

    Resilvering is done, and was a week-long endeavor, mostly spent watching ada4 report >1000ms latency and bottlenecking the process. ZFS is running on GELIed partitions. I always instruct gpart to align to 4k, and I instruct geli to do 4k blocks as well. All the old drives look like this:
    =>...
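    For reference, the partitioning described there looks roughly like this; the device and label names are placeholders:
    # gpart create -s gpt ada4
    # gpart add -t freebsd-zfs -a 4k -l newdisk ada4
    # gpart show ada4                              # confirm the partition starts on a 4k boundary
    # geli init -s 4096 /dev/gpt/newdisk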
  17. Solved Horrific ZFS performance on new ST4000DM004 drive?

    To provide a bit more information, the drive seems almost-okay when reading or writing in straight lines. It's seeking that's absolutely brutally horrible relative to the old disks.
     L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
        0   2251   2251 124684    0.4      0...
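    Those columns are standard gstat output; an invocation along these lines (a guess at the flags, not the poster's exact command) refreshes them once a second for physical providers only:
    # gstat -p -I 1s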
  18. Solved Horrific ZFS performance on new ST4000DM004 drive?

    So I've got a geli raidz1 pool on a bunch of ST4000DM000-1F2168 drives, running great for about 2 years now. One of the disks has gone tits up, so I replaced it with a new ST4000DM004-2CV104, and I can't say the experience has been very good. gstat shows very high %busy and latency numbers...
  19. Solved Multiple zpools on boot, broke in 10.1

    Got it working. I unmounted /bootpart, and created /bootpart/boot/zfs/zpool.cache, then remounted /bootpart. So the kernel is able to find zpool.cache in the expected location (symlinked /boot -> /bootpart/boot/zfs/zpool.cache) at boot-time. Administratively this is a less than desirable...
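    The sequence described amounts to something like the following; the source of the copied cache file is an assumption (a previously saved copy), the paths otherwise follow the post:
    # umount /bootpart
    # mkdir -p /bootpart/boot/zfs
    # cp /tmp/zpool.cache /bootpart/boot/zfs/zpool.cache   # source path is hypothetical
    # mount /bootpart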
  20. Solved Multiple zpools on boot, broke in 10.1

    I'm pretty sure the issue isn't GELI per se, but rather the very odd partition structure that this use of GELI requires. The zpool.cache file lives on the small UFS partition in the expected location (boot/zfs/zpool.cache), and this worked in FreeBSD-9, but somehow FreeBSD-10 handles loading...