FreeBSD 13.1 extremely slow

SMR is really that bad, yes. While probably not applicable in this case, head parking can also be a problem, one I have encountered even on enterprise disks - sysutils/parkverbot to the rescue.
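If you want to check for head parking yourself, camcontrol(8) can read and set the drive's APM level; a minimal sketch, assuming an ada(4) disk at ada0 (the device name is a placeholder, not every drive honours APM, and the setting may not survive a power cycle):
Code:
# Show whether the drive supports/uses Advanced Power Management
camcontrol identify ada0 | grep -i 'power management'

# Set APM to 254 (maximum performance, no aggressive head parking)
camcontrol apm ada0 -l 254
A Load_Cycle_Count climbing rapidly in smartctl -A output is the usual symptom of over-eager parking.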
 
So... this morning I did a zpool scrub root... it took around 3 hours and reported 0 errors.
Code:
scan: scrub repaired 0B in 02:57:50

The strange thing is that after this, the disks are no longer spiking from 0 to 100%. They have normal usage, between 10-80%, and the system is much faster. After a MariaDB import, which also finished much faster, the drives started to spike again, but not like before. I have another BSD machine with RAID 0 that I was playing with more than a year ago. I powered it on this morning and it was fast like a rocket: ZFS RAID 0 on 2x Toshiba CMR drives.
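(For anyone following along: one way to watch those 0-100% busy figures live is gstat(8); a minimal example, the refresh interval is arbitrary:)
Code:
# Watch physical disks only, refreshing every second;
# the %busy column is the 0-100% figure discussed above
gstat -p -I 1s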
Now... my questions regarding the drives are as follows:

1. I found a nice offer at a local store on some old Seagate Pipeline 2 TB 5900 rpm drives at $50/piece.
2. Toshiba DT01ACA200 2 TB (exactly like the one in the test machine, the only difference being that these are 2 TB) at $70/piece.
3. Put in some extra money and buy some SAS drives.
4. Refurbished SAS drives at around $100/piece (2 TB, 3.5-inch).

Thanks!

 
My 2 cents.
As to spinning rust platters: choose drives that are certified/meant for NAS and/or server applications. I have no up-to-date advice as to which current SATA drives are most favourable for NAS/server use. I used to use WD Red (non-Pro) CMR drives, but they seem less popular today, or so I heard.

Unless you have qualified reasons to use SAS drives and are willing to pay for them, choose SATA drives. Such reasons might be higher reliability, higher base throughput (higher rpm), or dual-ported data access; note also that a SAS host adapter is capable of hosting SATA drives, but not the other way around. With the transition from SMR to CMR (background: WD Red SMR vs CMR Tested Avoid Red SMR) you'll have taken the biggest drive hardware step forward. I'd stay away from refurbished drives for professional applications.

As to adding additional hardware in relation to your applications: you might benefit from adding a SLOG and L2ARC. However, under normal circumstances these are not the first go-to solutions for getting a (too) slow system up to speed; see e.g. What Makes a Good Time to Use OpenZFS Slog and When Should You Avoid It and OpenZFS: All about the cache vdev or L2ARC. After installing CMR drives, IMO your first attention should go to getting an appropriate amount of RAM for your setup; after that, manage/tune the ARC size in relation to the needs of your applications (especially your DBs) if required.
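For completeness, on FreeBSD 13 the ARC ceiling is a tunable; a minimal sketch, assuming you want to cap it at 8 GB to leave headroom for the databases (the figure is purely illustrative):
Code:
# /boot/loader.conf -- persistent across reboots (value in bytes)
vfs.zfs.arc_max="8589934592"

# Or adjust at runtime, and watch the current ARC size
sysctl vfs.zfs.arc_max=8589934592
sysctl kstat.zfs.misc.arcstats.size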
 
Some random thoughts:
  1. You really should turn off "advanced power management" in your existing drives!
  2. It's difficult to make disk recommendations without knowing your workload or risk profile.
  3. Disk drives marketed for "NAS" generally don't park their heads uninvited, and don't use SMR recording. But you need to check (see the smartctl sketch after this list)!
  4. WD Red CMR drives are at the low end of NAS performance. But, my personal experience is that their value for money equation is OK (50% of my 10 year old CMR Reds are still running). The WD Red SMR drives are easily mistaken for CMR drives.
  5. Any CMR disk doing random writing is going to run like a rocket when compared to an SMR drive.
  6. The Toshiba DT01ACA200 is a consumer grade disk, and it's not mentioned in the list of SMR drives published by Toshiba in 2020. They are available on Newegg for $50.
  7. These days 4 TB drives are often not significantly more expensive than 2 TB drives.
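On the "you need to check" point: drive-managed SMR is usually not detectable from software, so the practical check is to pull the exact model number with smartctl(8) and match it against the SMR lists the vendors published in 2020; a sketch, assuming the disk is /dev/ada0:
Code:
# Model, serial, firmware and rotation rate; match the model string
# against the WD/Seagate/Toshiba published SMR lists
smartctl -i /dev/ada0

# Host-aware/host-managed SMR drives (rare on the desktop) may also
# report a "Zoned Device" line here; drive-managed ones usually don't
smartctl -i /dev/ada0 | grep -Ei 'model|rotation|zoned'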
 
I use this server for the following:
1. Backing up my Mac files (Time Machine set up via Samba)
2. Samba for sharing and storing files, like photos from my holidays
3. Backups of various Windows machines
4. Nightly backups of my office computer.
5. iocage jail for Apache/MySQL/PHP development
6. iocage jail for a Django app
7. In the future I want to make a new pool for NFS and iSCSI.
8. This server syncs all the files to a standalone drive nightly as a second backup, excepting the RAID.
9. The holiday photos are also synced to a second location away from my home, which is an old Time Capsule.
 
I infer that it's a home system, not under sustained load, and that sensible pricing is desirable.

So choose CMR disks without "advanced power management" that fit in your budget.

The Toshiba drive you mention looks to have good performance but the failure rate reported on Newegg is a worry.

I really don't buy enough disks to make a strong recommendation on anything. You can examine most of the usual options at Newegg.
 
I've got several dumb questions... and rather than opening another thread, I will ask them here.

1. I want to migrate the zpool to new disks configured as RAID-Z1 (RAID 5). Is this doable? The current setup is ZFS RAID 10.
2. Over time, if I add new disks to the pool, can I expand the pool?
3. What would you choose between some 5-year-old Seagate Savvio SAS 10k rpm drives with a total of 1 GB of read/write data (they were hot spares in a server), some desktop 7200 rpm CMR drives, or some new Seagate Constellation ES.3 7200 rpm drives (the most expensive option)?
The cheapest option is to use the 5-year-old Savvios. I was thinking of an array with 8 disks, configured in RAID 6.
4. My SAS controller is an IBM M5015, so it cannot be flashed to IT mode. Can I put the controller in JBOD mode and create a ZFS RAID-Z2, or is that not recommended?
 
  1. Yes, you use zfs-send(8); a minimal send/receive sketch follows this list. Beware, RAID-Z is slower at writing than mirrors.
  2. Not traditionally, but that's changing.
  3. Used disks are a crap shoot. If they were extremely low cost, I might run them up and have a close look with smartctl(8). Consider the cost of replacement 10K SAS disks when they fail (and your cables will be SAS, so you may not be able to switch to SATA easily). Make sure you understand what sort of data cables you need (LSI controllers typically have custom 4-into-1 cables). Also look at higher capacity new drives, as their value equation may be better.
  4. I have no specific knowledge of the IBM M5015, but Google suggests it has no IT Mode, and no JBOD option, so not what you want for ZFS.
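On point 1, the move itself looks roughly like this; a sketch assuming the old pool is root, the new pool is newpool on disks da0-da2 (all placeholder names), and that both sets of disks can be attached at once:
Code:
# Build the new RAID-Z1 pool on the new disks
zpool create newpool raidz1 da0 da1 da2

# Take a recursive snapshot of everything on the old pool
zfs snapshot -r root@migrate

# Replicate all datasets, snapshots and properties to the new pool
zfs send -R root@migrate | zfs receive -F newpool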
Building a NAS server can provide a sense of achievement and fun, but you are planning to add a lot of noisy, power hungry, hot disks into your case. Come summer, you will probably be asking how to improve the cooling, or reduce the noise... Then there will be backups...
 
Hi,
Sorry, my fault... I already own the Savvio SAS disks, the controller, and also the cables.
The server is in my basement, where there is a constant temperature. I don't care about the noise of the disks... There is not much difference in dB between 3.5" SATA 7200 rpm and 2.5" SAS 10k rpm drives.

How will this BSD machine work using a hardware RAID array instead of a ZFS RAID array?


In the past I used, as a LAMP server, a machine with CentOS 8 + hardware RAID 10 (desktop drives) + LVM + dm-cache on NVMe. It was pretty fast compared to my iocage setup on FreeBSD 13. The FreeBSD machine is running ZFS RAID 10 on SMR drives with iocage for FEMP.

For what I'm using this machine for, I can use basically any OS... It's like taking photos in RAW format and then processing them... the results will be similar.

It's just a matter of flavour... That's why I want to use BSD.
 
I don't think you can use the IBM M5015 with ZFS.

You could use it with any OS in RAID mode -- it just presents virtual disks -- so Linux or FreeBSD should be fine with native file systems other than ZFS.

The Art of Server appears to sell genuine refurbished LSI controllers flashed to IT mode, if you want to go with ZFS.

Figuring out the best use for the NVMe SSD would take some serious study with ZFS. Separate ZIL, L2ARC, and "special vdev" are all candidates, but may or may not help performance or be safe, depending on context.
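For illustration, attaching the NVMe in each of those roles is a one-liner; a sketch assuming a pool named tank and GPT partitions nvd0p1/nvd0p2/nvd0p3 (all placeholders). Note the safety asymmetry: a dead L2ARC costs nothing, a dead single SLOG can cost recent sync writes, and a dead special vdev loses the pool, so the latter should be mirrored.
Code:
# L2ARC (read cache) -- safe to lose
zpool add tank cache nvd0p1

# Separate ZIL (SLOG) -- only helps synchronous writes
zpool add tank log nvd0p2

# Special vdev for metadata/small blocks -- mirror it,
# the pool depends on it
# zpool add tank special mirror nvd0p3 nvd1p3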

Personally, I'd settle in with my first FreeBSD system. Get comfortable, and then move on.
 
I put an NVMe in as a cache... The SMR drives are much better now... Infinitely better... The next step is to move from the onboard controller to a JBOD SAS controller like the IBM M5015...
 
To this day, I am still flabbergasted by how many people (suckers) actually spend their money on NEW hard drives! I don't think I own a hard drive that was manufactured after 2000. Even though I don't really know what the fuck goes on inside those metal blocks (magic?), and even though I don't normally believe in magic, I DO KNOW that I can trust these old things basically forever.
 
If I had the money, I would be on NVMe.
- Number of concurrent threads handled by the CPU.
- How many GB of memory.
These remain important settings.
Currently I use SSDs because of their speed compared to spinning drives.
I use spinning drives for larger data.
 
The re-writability of the spinning magnetic platters makes them ideal for lots of (re)writing. However, Mr. Burn-in has not yet transformed any of my flash drives into some type of useless plastic and sh*t...
 

Unfortunately, I have to correct myself -- I now recall one flash drive being unusually slow, which was likely due to burn-in. Please respect me and pretend you never read my previous (embarrassing) message, friends.
 
To this day, I am still flabbergasted by how many people (suckers) actually spend their money on NEW hard drives!
Well, it depends. Last month I bought a new disk, then found writing on the label: DOM: 28MAR2017.
But then, that's different from the new Ultrastars that come with an erased date on the label and zeroed SMART data.
 