Solved: SATA hard disk was slow in FreeBSD 14. Can anyone help me speed it up? Thanks.

Dear all:
I have installed FreeBSD 14 on a SATA hard disk. The information is below. This machine is very slow. How can I speed it up? Thanks.
CPU: Xeon(R) E5-1650 v3
RAM: 32GB
Disk: Seagate 2TB
Code:
 # diskinfo -tv /dev/ada1p3
/dev/ada1p3
    512             # sectorsize
    1965765427200    # mediasize in bytes (1.8T)
    3839385600      # mediasize in sectors
    4096            # stripesize
    0               # stripeoffset
    3808914         # Cylinders according to firmware.
    16              # Heads according to firmware.
    63              # Sectors according to firmware.
    ST2000DM001-1ER164    # Disk descr.
    Z4Z50Q3Z        # Disk ident.
    ahcich5         # Attachment
    id1,enc@n3061686369656d31/type@0/slot@2/elmdesc@Slot_01/p3    # Physical path
    No              # TRIM/UNMAP support
    7200            # Rotation rate in RPM

Seek times:
    Full stroke:      250 iter in   5.623468 sec =   22.494 msec
    Half stroke:      250 iter in   5.151391 sec =   20.606 msec
    Quarter stroke:      500 iter in   7.810032 sec =   15.620 msec
    Short forward:      400 iter in   2.220425 sec =    5.551 msec
    Short backward:      400 iter in   2.622122 sec =    6.555 msec
    Seq outer:     2048 iter in   0.089199 sec =    0.044 msec
    Seq inner:     2048 iter in   0.280802 sec =    0.137 msec

Transfer rates:
    outside:       102400 kbytes in   0.494964 sec =   206884 kbytes/sec
    middle:        102400 kbytes in   0.586504 sec =   174594 kbytes/sec
    inside:        102400 kbytes in   1.037697 sec =    98680 kbytes/sec

# dmesg |grep "transfers"
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
cd0: 150.000MB/s transfers (SATA 1.x, UDMA6, ATAPI 12bytes, PIO 8192bytes)
 
It looks like the ST2000DM001 is a BarraCuda drive. https://www.seagate.com/files/staticfiles/docs/pdf/datasheet/disc/barracuda-ds1737-1-1111us.pdf
If so, bad news: the drive is SMR, not CMR, according to Seagate's CMR/SMR list. https://www.seagate.com/gb/en/products/cmr-smr-list/
The drive will be slow. A CMR drive would be better.
Dear tingo, thanks. Why is this hard disk so slow? The speed is worse than under Windows 7. I hope someone can help me improve the speed. Thanks.
 
It doesn't have many friends on the internet. For example:


"The results for this benchmarking of the Seagate Barracuda 2TB ST2000DM001 were very mixed. In some of the tests the ST2000DM001 was running lower than even the older HDDs running at SATA2 / 5400 RPM, but it did boast the highest storage capacity of the hard drives tested at 2TB. While the performance isn't a strong point, its 2TB storage capacity and coming in at just under $90 USD does make it a bit more worthwhile depending upon your particular needs."


"The drive’s performance in our file transfer benchmark tests wasn’t the best we’ve seen. We were never expecting this disk to rival the speeds of an SSD or SSHD hybrid drive, but its performance was even so short of other mechanical disks we’ve tested. In our large-file benchmark, the Desktop HDD wrote files at 168MB/s and read them back at 157MB/s for an average score of 162MB/s, making this one of the slower mechanical hard disks we’ve seen."

Emphasis in both quotes is mine.

I think you might be better off getting another drive, or an SSD, if you want better performance; you might be able to get this drive to go as fast under FreeBSD as it does under Windows 7, but it seems like it might not be worth the effort.
 
It is 206 MB/s only if you use normal metric math (1000 kB = 1 MB) instead of the computer convention of 1024 kB = 1 MB that is usually applied to bits, bytes, etc. Computer software often does this math in base 2, meaning you multiply/divide by 1024 to step through prefixes that normally differ by a factor of 1000. To convert from one unit to another, multiply by the unit you want and divide by the unit you have, with the relative quantities as numerator and denominator, so:
206884 kB/1 s * 1 MB/1024 kB = 202.035 MB/s
You can get more precision by taking the originally reported data quantity and time instead of the computed result it gave.
The same logic also converts time units, but seconds start in the denominator, so you do something like
... * (1/1 min)/(1/60 s), which simplifies as if the top and bottom were reversed without the fractions:
... * 60 s/1 min = data moved per minute
That's taught as (a/b)/(c/d) = (a*d)/(b*c) in algebra, but now you are using it to convert units, so it's slightly harder to follow, though very useful.
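
To sanity-check that arithmetic, here is the same conversion done with bc(1), using the numbers from the diskinfo output above:
Code:
 # 206884 kB/s expressed in binary megabytes per second
 $ echo "scale=3; 206884 / 1024" | bc
 202.035
 # and per minute, multiplying by 60 s/min
 $ echo "206884 * 60 / 1024" | bc
 12122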
 
OK, so around the 200 mark, which is better than the reviews I posted. I was trying to make the point that the performance seen is in line with the reviews' benchmarks.
 
Transfer rates:
outside: 102400 kbytes in 0.494964 sec = 206884 kbytes/sec
middle: 102400 kbytes in 0.586504 sec = 174594 kbytes/sec
inside: 102400 kbytes in 1.037697 sec = 98680 kbytes/sec

OK, so around the 200 mark, which is better than the reviews I posted. I was trying to make the point that the performance seen is in line with the reviews' benchmarks.
Dear richardtoohey2, I think the write speed is 98 MB/s and the read speed is 206 MB/s. That is not SATA speed. Thanks.
 
The diskinfo man page says:

"The diskinfo utility prints out information about a disk device, and optionally runs a naive performance test on the device."

I think outside/middle/inside refer to areas of the disk, not read/write speeds.

SATA is s-l-o-w.

But as cracauer@ already said, post some numbers: show the speeds you are getting from Windows 7, and then run the equivalent tests on FreeBSD. Such benchmarking is not easy (you must be absolutely sure you are comparing like with like).
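
For a rough number on the FreeBSD side, a simple sequential read test is one option; it is read-only, so it is safe, but the device name and sizes below are examples, so adjust them to your system:
Code:
 # raw sequential read straight off the disk, bypassing the filesystem;
 # dd reports the transfer rate when it finishes
 dd if=/dev/ada1 of=/dev/null bs=1m count=1024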
 
Transfer rates:
outside: 102400 kbytes in 0.494964 sec = 206884 kbytes/sec
middle: 102400 kbytes in 0.586504 sec = 174594 kbytes/sec
inside: 102400 kbytes in 1.037697 sec = 98680 kbytes/sec


Dear richardtoohey2, I think the write speed is 98 MB/s and the read speed is 206 MB/s. That is not SATA speed. Thanks.
SATA speed is irrelevant to this particular measurement, which is about the transfer rate to/from the platter (this is probably a non-destructive read test, and reads can be a little bit faster than writes). In nearly all cases, the speed at which the disk drive can get data from the platter is much slower than the SATA interface speed.

The specification for this disk drive calls for a maximum sustained data rate of 210 MB/s (and yes, that means decimal megabytes here, not binary mebibytes a.k.a. MiB), and you're measuring 207 MB/s. The specification calls for an average of 156 MB/s, and given that you're seeing 175 and 99 MB/s in the middle and inside, it seems plausible that the overall average across the whole drive would exceed 156 MB/s (in particular given that there is more data outside). So this test passes perfectly.

Yet, you say the machine is slow. Let me rephrase what cracauer and richardtoohey already asked: have you measured how fast the file system is? What is your workload? Are you doing transactional database workloads, data mining, giant compiles, image processing? What are your needs and expectations of speed?
 
For desktop use, sustained throughput numbers of 100-200 MB/s are likely not what matters; seeks will happen frequently, and numbers 1/10th to 1/200th of that may be more relevant for common desktop tasks such as reading a (probably fragmented) cache of many files, as browsers commonly do these days. Even if the drive is a slow drive, knowing what you are doing and which tasks are slow on FreeBSD vs. fast on Windows 7 would help.

If it is formatted with ZFS but nearly full, writes will likely be slower and more fragmented, which performs poorly. ZFS is also copy-on-write, which turns small edits to larger files into excessive fragmentation (slowing access to the file over time) and can cause write amplification, but such impacts are task dependent. If you installed KDE, and particularly if you brought in many data files, the Baloo indexing service may run up drive I/O excessively in the background. If you use Firefox with many tabs and windows open, you may notice excessive memory pressure, which in my experience reaches a point where the swap partition starts getting heavy use; it wouldn't be slow from the start, though. If you are just comparing benchmarks to say "fast vs. slow", then knowing which benchmarks were run on both sides, and with what results, helps too. Or if you are copying some known set of large/small/mixed files and not seeing expected performance...

Once we know what software and tasks are in use and how they perform relative to expectations, we can consider looking into related issues, tweaks, and what to look for with various benchmarks and monitoring tools (systat -v, top, etc.). Hardware specifications beyond the drive can be relevant too. Other setup details could matter as well: for example, if Windows 7 sits on the first half of the drive and FreeBSD uses a partition on the second half, that matters, because drives are always slower there. Partition layout (as seen with `gpart show`) and filesystem configuration could matter if you somehow ended up not aligned and sized for 4k sectors, or had no swap partition, but the installer should have taken care of doing things correctly if not overridden. I also presume this was a fresh 14 install rather than an upgrade from a previous version.
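
If you want to check those last points yourself, something like the following shows pool fullness/fragmentation and partition alignment; the device name is an example, adjust to your system:
Code:
 # how full and fragmented the pool is (see the CAP and FRAG columns)
 zpool list
 # partition layout; on a 4k-sector drive, starting offsets should be
 # multiples of 8 (512-byte sectors) to be aligned
 gpart show ada1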
 
SATA speed is irrelevant to this particular measurement, which is about the transfer rate to/from the platter (this is probably a non-destructive read test, and reads can be a little bit faster than writes). In nearly all cases, the speed at which the disk drive can get data from the platter is much slower than the SATA interface speed.

The specification for this disk drive calls for a maximum sustained data rate of 210 MB/s (and yes, that means decimal megabytes here, not binary mebibytes a.k.a. MiB), and you're measuring 207 MB/s. The specification calls for an average of 156 MB/s, and given that you're seeing 175 and 99 MB/s in the middle and inside, it seems plausible that the overall average across the whole drive would exceed 156 MB/s (in particular given that there is more data outside). So this test passes perfectly.

Yet, you say the machine is slow. Let me rephrase what cracauer and richardtoohey already asked: have you measured how fast the file system is? What is your workload? Are you doing transactional database workloads, data mining, giant compiles, image processing? What are your needs and expectations of speed?
Dear ralphbsz:
This machine runs FreeBSD 14 and is just a daily-work desktop, not a server. When I open an app, it spends more time. I just want to find a way to make apps like LibreOffice, Chromium, etc. open faster. Thanks for your advice.
 
For desktop use, sustained throughput numbers of 100-200 MB/s are likely not what matters; seeks will happen frequently, and numbers 1/10th to 1/200th of that may be more relevant for common desktop tasks such as reading a (probably fragmented) cache of many files, as browsers commonly do these days. Even if the drive is a slow drive, knowing what you are doing and which tasks are slow on FreeBSD vs. fast on Windows 7 would help.

If it is formatted with ZFS but nearly full, writes will likely be slower and more fragmented, which performs poorly. ZFS is also copy-on-write, which turns small edits to larger files into excessive fragmentation (slowing access to the file over time) and can cause write amplification, but such impacts are task dependent. If you installed KDE, and particularly if you brought in many data files, the Baloo indexing service may run up drive I/O excessively in the background. If you use Firefox with many tabs and windows open, you may notice excessive memory pressure, which in my experience reaches a point where the swap partition starts getting heavy use; it wouldn't be slow from the start, though. If you are just comparing benchmarks to say "fast vs. slow", then knowing which benchmarks were run on both sides, and with what results, helps too. Or if you are copying some known set of large/small/mixed files and not seeing expected performance...

Once we know what software and tasks are in use and how they perform relative to expectations, we can consider looking into related issues, tweaks, and what to look for with various benchmarks and monitoring tools (systat -v, top, etc.). Hardware specifications beyond the drive can be relevant too. Other setup details could matter as well: for example, if Windows 7 sits on the first half of the drive and FreeBSD uses a partition on the second half, that matters, because drives are always slower there. Partition layout (as seen with `gpart show`) and filesystem configuration could matter if you somehow ended up not aligned and sized for 4k sectors, or had no swap partition, but the installer should have taken care of doing things correctly if not overridden. I also presume this was a fresh 14 install rather than an upgrade from a previous version.
Dear mirror176:
It is a fresh FreeBSD 14 install with GNOME. This machine has a Xeon CPU, 32 GB RAM, ZFS, and swap. It is not a server. Do you have some way to optimize it for desktop use?
Thanks.
 
Dear ralphbsz:
This machine runs FreeBSD 14 and is just a daily-work desktop, not a server. When I open an app, it spends more time. I just want to find a way to make apps like LibreOffice, Chromium, etc. open faster. Thanks for your advice.

But you bought the slowest disk on the market...
 
When I open an app, it spends more time.
When you say "spends more time", you probably mean: "it's much slower on FreeBSD compared to doing the equivalent thing on Windows using the same hardware".

Assuming this is the correct interpretation, I'll give you a few answers, none of them particularly helpful:
  • Are you sure? Have you actually measured it? Or could it be that the way it acts and reacts makes it "feel" slower, but in reality it is nearly as fast?
  • Assuming you really have measured that it is objectively slower ...
  • In general, Unix variants have better caching in the file system. You may very well be right that opening something like LibreOffice takes much longer the first time after reboot compared to MS Word. But if you open it again 5 minutes later, it might be very fast.
  • If that's not true, it is a fact of life that Windows has been very carefully optimized and (re-)designed around the desktop use case. I even know a few places in the Windows file system code (visible in the file system SDK) where the goal of the implementation is clearly desktop machines. Unix, on the other hand, spent most of its 50-year history being either a server or a general-purpose machine. And today there just isn't a huge amount of manpower available to tune and craft things specifically for the desktop use case. This is particularly true for FreeBSD (compared to Linux), which has far fewer developers available. In a nutshell, I'm saying: this may just be a fact of life.
  • In which case, if it is still so slow that it annoys you, as cracauer and richardtoohey already said: buy a much faster disk, and it will feel better. It may still be slower than Windows on the same hardware, but you might be a happier (but poorer) person.
  • No, I don't know of any magic bullet to quickly tune the machine.
 
Windows also starts loading components like Office and Internet Explorer, etc., when you start it up, so running Microsoft applications feels a lot faster (because a lot of the code is already in memory, so it doesn't need to be loaded from disk).

Not sure if that's true in these days of Edge/Chrome, but it was certainly one of the things they used to do to make Windows feel a bit faster.
 
Windows also starts loading components like Office and Internet Explorer, etc., when you start it up, ...
One idea to help the OP: figure out which particular applications they usually like to run. Then observe which shared libraries or other programs are used when those applications start. Then, after the system has booted, or after login, start reading those libraries from disk, just so they are already in the cache.

Problem with this approach: To do it right, you need to first do a lot of measurements (there is a reason Patterson and Hennessy's book has "quantitative approach" in the title). And if done carelessly, it might make the system slower, by flushing more valuable stuff out of the cache.

A lot of this can be automated: observe what files are typically read at what times, and then prefetch them; measure what helps and what doesn't. A friend of mine turned that idea into his PhD thesis.
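
A minimal sketch of that idea, assuming you simply want to warm the cache for a couple of favourite applications after login (the application paths below are examples, not a recommendation):
Code:
 #!/bin/sh
 # Warm the page cache with the shared libraries some applications use,
 # so their first start after boot is faster. Paths are examples.
 for app in /usr/local/bin/firefox /usr/local/bin/soffice; do
     # ldd lists the shared objects each binary links against
     ldd "$app" 2>/dev/null | awk '/=>/ { print $3 }'
 done | sort -u | while read -r lib; do
     # reading a file pulls its blocks into the cache
     [ -f "$lib" ] && cat "$lib" > /dev/null
 done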
 
Windows also starts loading components like Office and Internet Explorer, etc., when you start it up, so running Microsoft applications feels a lot faster (because a lot of the code is already in memory, so it doesn't need to be loaded from disk).

Not sure if that's true in these days of Edge/Chrome, but it was certainly one of the things they used to do to make Windows feel a bit faster.

I hear that this is actually a general mechanism that preloads disk locations that were in heavy use before, i.e. not hardcoded to specific applications. Whether that's true seems to be hard to google.
 
The TL;DR is that there are no options you can change that benefit all users all of the time. Upgrading to a quality SSD will unquestionably make it better, as it always has with every OS used for general purposes. You can try smaller tweaks like killing Baloo if you use KDE and it is doing too much, or switching to a lighter UI and lighter programs. Finding problems to tweak, adjust, and troubleshoot requires more specifics; that even includes getting to the point where people can recommend loading a driver that was missing.

I thought that Windows uses prefetch files to log program file accesses at launch so that those files can be placed more sequentially when the program is defragmented (a trick I first saw third-party defragmenters using back in the Windows 95/98 days). You could probably imitate that on ZFS by tracing file open/close activity and then copying those files over themselves rapidly, in order. You need to make sure the benefits aren't partially undone by active atime (few programs use it, but some need it; it seems to cause a lot of fragmentation effects on ZFS, though I've needed to analyze that more), or by active block cloning (if you are not copying the files back from an external destination), and you may need to increase vfs.zfs.txg.timeout; but I'd be surprised if it is worth the trouble unless the program is reading a lot of small/fragmented blocks from disk. For magnetic drives, I do sometimes move data away and copy it back to improve its disk layout for things like a ccache repository, the /usr/ports tree (I'm actually testing whether an occasional `git gc` is an acceptable alternative), and Firefox/Thunderbird user profile folders.
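
As a purely hypothetical illustration of the copy-files-over-themselves idea for a single file on ZFS (the filename is made up; note that block cloning, if enabled, could turn the copy into a no-op, as mentioned above):
Code:
 # rewriting the file reallocates its blocks more sequentially
 cp places.sqlite places.sqlite.new && mv places.sqlite.new places.sqlite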

You could use the same notes on which files are read to manually build a load-and-cache step that runs at boot, or any time, as a separate command; just read the file contents with a pipe to /dev/null and you will see both the ZFS and UFS cache sizes in RAM go up. If you don't have enough RAM for your other tasks, those benefits go away; the same happens when you shut down. Running the task still takes time, but you choose when it happens: while loading the OS, at login, refreshed every night or over lunch, etc. Environments like KDE also have the option to restore previously open windows; KDE will relaunch those same programs itself on its next launch.

On Windows, some programs like Internet Explorer or Microsoft Office had much of their data preloaded along with the operating system, last I worked with them. Other programs, like OpenOffice, had an optional process to do the same for some of their data.

Modern browsers 1. use databases and 2. use many small cache files outside those databases. Some database use can benefit from ZFS tuning such as a different recordsize to minimize write-amplification issues, but a smaller recordsize means less compression of that content; the trade-off would be suboptimal for all the other files around it, so many datasets/sub-datasets would be needed, unless you symlink such files into a database-specific dataset. Many small files being written (and presumably destroyed) as a cache on a copy-on-write filesystem always sounds like fragmentation waiting to happen.
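
For the database recordsize point, a hedged sketch of what such a dedicated dataset might look like (the pool/dataset names are made up):
Code:
 # smaller records reduce read-modify-write amplification for databases,
 # at the cost of compression efficiency on that dataset
 zfs create -o recordsize=16K zroot/dbfiles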

Any SSD of reasonable quality will be enough of an improvement over every magnetic drive that any user will notice it, whatever the operating system. SSDs do slow down with fragmentation, but much less so; and with most SSD makers chasing cheap flash rather than durable flash (like Intel Optane), it's common for people to be terrified of wasting writes (a durability limitation of those cheaper SSDs) on defragmenting the drive or rewriting caches and other data, as I said I do with my magnetic disks. The fastest NVMe vs. the fastest SATA drive is an insignificant difference for the general desktop experience, compared to magnetic HDD vs. quality SSD, which is generally described as feeling like a new computer. There are also hybrid magnetic drives which include a small SSD and attempt to cache regularly accessed data, so they 'usually' feel like an SSD. Some laptops also offer adding a small Intel Optane stick (16-64GB) which, paired with Intel's or the laptop manufacturer's drivers, can be used as a cache for any magnetic drive; I'm not sure of the state of that on FreeBSD, but with ZFS you have options to use any SSD as a cache (some options are reliable; with some, you lose your pool if you lose the SSD), which may be beneficial to some workloads.
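
As an example of the 'safe' kind of ZFS SSD cache mentioned above, an L2ARC read cache can be added to an existing pool; losing the cache device does not lose the pool (pool and device names are examples):
Code:
 zpool add zroot cache /dev/ada2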

You could work around the slower parts of the hard drive by repartitioning it to keep data toward the faster parts only. This may help with sequential reading (FreeBSD actually should be trying to allocate data in the faster parts first), but random I/O is still completely killed by seek times and needs an SSD, as the drive or as a cache, to minimize. The slower area could be partitioned for infrequently used data. Partitioning has its own drawbacks: manual effort to set up, and resizing later loses the benefits and may require backing data up and restoring to the repartitioned disk, with possible performance issues after a resize. If the second half of the drive is kept blank, you can always use it as a place to copy to/from when rearranging other partitions, for defragmenting data on a filesystem that lacks such a tool, or for keeping extra copies in case of basic mistakes, so you don't have to reach for an external disk or an expensive(?)/slow cloud drive.
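
If you did want to try keeping data on the fast part of the disk, a sketch might look like this (destructive: it wipes the disk; the names and sizes are examples):
Code:
 # partition only the first (outer, faster) half of the 2TB disk
 gpart create -s gpt ada1
 gpart add -t freebsd-zfs -a 4k -s 900G -l fastzone ada1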

Having backups is expected of any good computing practice, with the one exception of data you don't care about; proper backups are never replaced by redundancy, i.e. multiple copies of data (or their parity) on the same filesystem or on different drives in a RAID configuration. An enclosure can turn your magnetic disk into a portable drive, which can be useful for moving files, backups, etc., so it is not just wasted/unused. Two backup drives you rotate between, or one drive plus cloud, would be even better, but you have to start somewhere.
 