bbzz wrote:Does ZFS support TRIM? Or just UFS?
bbzz wrote:Would it be wise to run [file]swap[/file], a swap-mounted [file]/tmp[/file], and [file]/var[/file] on SSD?
bbzz wrote:So with UFS+ZFS on SSD, you should align inside the BSD labels for best performance;
and then the ZFS partition that will be used for the pool should itself be aligned, correct?
bbzz wrote:Anyway, please feel free to share other intricacies you might have found.
# [color="Blue"]diskinfo -c -t -v ada0[/color]
512 # sectorsize
160041885696 # mediasize in bytes (149G)
312581808 # mediasize in sectors
0 # stripesize
0 # stripeoffset
310101 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
CVPO01160261160AGN # Disk ident.
I/O command overhead:
time to read 10MB block 0.046067 sec = 0.002 msec/sector
time to read 20480 sectors 1.900484 sec = 0.093 msec/sector
calculated command overhead = 0.091 msec/sector
Full stroke: 250 iter in 0.025751 sec = 0.103 msec
Half stroke: 250 iter in 0.026163 sec = 0.105 msec
Quarter stroke: 500 iter in 0.052073 sec = 0.104 msec
Short forward: 400 iter in 0.040653 sec = 0.102 msec
Short backward: 400 iter in 0.040956 sec = 0.102 msec
Seq outer: 2048 iter in 0.077597 sec = 0.038 msec
Seq inner: 2048 iter in 0.103460 sec = 0.051 msec
outside: 102400 kbytes in 0.444337 sec = 230456 kbytes/sec
middle: 102400 kbytes in 0.443060 sec = 231120 kbytes/sec
inside: 102400 kbytes in 0.438879 sec = 233322 kbytes/sec
# [color="#0000ff"]time find / 1> /dev/null 2> /dev/null[/color]
find / > /dev/null 2> /dev/null 0.44s user 3.83s system 52% cpu 8.158 total
# [color="#0000ff"]df -m[/color]
Filesystem 1M-blocks Used Avail Capacity Mounted on
/dev/label/root 495 71 424 14% /
storage/usr 144907 127316 17591 88% /usr
# [color="#0000ff"]blogbench -i 10 -d BLOG[/color]
Frequency = 10 secs
Scratch dir = [BLOG]
Spawning 3 writers...
Spawning 1 rewriters...
Spawning 5 commenters...
Spawning 100 readers...
Benchmarking for 10 iterations.
The test will run during 1 minutes.
Nb blogs R articles W articles R pictures W pictures R comments W comments
58 403642 2804 290140 3172 229064 6738
60 276701 121 195970 93 185351 5240
60 273753 11 193679 7 218322 4016
60 300775 21 212174 10 252008 2029
64 285494 246 202465 221 246846 1833
64 296025 17 206478 11 246857 2959
64 293819 19 207351 9 250991 2423
64 274489 9 193715 4 253598 4781
70 303173 327 215951 393 263001 2063
70 295326 22 157769 13 263545 2345
Final score for writes: 70
Final score for reads : 65929
bbzz wrote:If you have any experience with how SSDs work/behave on FreeBSD, please write it here. Specifically I'm interested in how they work with different filesystems. I know the popular approach is to use an SSD as a cache device for ZFS, but how would it perform if it actually held the ZFS pool? Does ZFS support TRIM? Or just UFS?
bbzz wrote:Thanks for all the posts!
I've been running the Intel SSDs as boot drives for a year now and I have noticed no degradation. Not sure this is a valid test, but anyway:
[CMD="#"]dd if=/dev/ada1 of=/dev/null bs=10m[/CMD]
3816+1 records in
3816+1 records out
40020664320 bytes transferred in 213.738392 secs (187241347 bytes/sec)
What's that, about 178MB/s (187241347 B/s ÷ 2^20 ≈ 178.6 MiB/s)? Over the spec of 170MB/s, anyway.
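For a rough write-side number you can go the other way; just a sketch, writing a scratch file rather than the raw device (note that filesystem compression, if enabled, will inflate the figure for /dev/zero):
[CMD="#"]dd if=/dev/zero of=/tmp/ddtest bs=10m count=400 && rm /tmp/ddtest[/CMD]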
Intel wrote:(TRIM) allows the operating system to inform the solid-state drive which data blocks (e.g. from deleted files) are no longer in use and can be wiped internally, allowing the controller to ensure compatibility, endurance, and performance.
Hence if your root mirror behaves in a normal way, there will not be much deleting going on, and there will be lots of spare space (basically half, not even counting swap, which on my system doesn't get used).
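On the TRIM question itself: UFS can issue TRIM with the stock tools. A sketch, assuming the filesystem sits on ada0p2 and a newfs(8)/tunefs(8) recent enough to have the -t flag:
# newfs -U -t /dev/ada0p2
# tunefs -t enable /dev/ada0p2
The first enables TRIM (along with soft updates) when creating a new filesystem; the second flips it on an existing one, to take effect the next time it is mounted.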
If you set it up like I do, you will have plenty of space left on your SSDs and they should not get full. See:
NAME USED AVAIL REFER MOUNTPOINT
storage 145G 148G 22K /storage
storage/distfiles 3.54G 148G 2.82G /usr/ports/distfiles
storage/home 77.8G 148G 56.0G /home
storage/packages 6.54G 148G 6.30G /usr/ports/packages
storage/zrootbackup 57.4G 148G 6.44G /storage/zrootbackup
storage/zrootbackup/zroot 51.0G 148G 1.05G /storage/zrootbackup/zroot
storage/zrootbackup/zroot/tmp 2.20M 148G 76.5K /storage/zrootbackup/zroot/tmp
storage/zrootbackup/zroot/usr 41.0G 148G 8.75G /storage/zrootbackup/zroot/usr
storage/zrootbackup/zroot/usr/ports 15.8G 148G 314M /storage/zrootbackup/zroot/usr/ports
storage/zrootbackup/zroot/usr/src 1.19G 148G 308M /storage/zrootbackup/zroot/usr/src
storage/zrootbackup/zroot/var 3.68G 148G 26.8M /storage/zrootbackup/zroot/var
storage/zrootbackup/zroot/var/crash 112K 148G 21.5K /storage/zrootbackup/zroot/var/crash
storage/zrootbackup/zroot/var/db 3.36G 148G 675M /storage/zrootbackup/zroot/var/db
storage/zrootbackup/zroot/var/db/pkg 144M 148G 24.9M /storage/zrootbackup/zroot/var/db/pkg
storage/zrootbackup/zroot/var/empty 96K 148G 20K /storage/zrootbackup/zroot/var/empty
storage/zrootbackup/zroot/var/log 52.0M 148G 1.21M /storage/zrootbackup/zroot/var/log
storage/zrootbackup/zroot/var/mail 510K 148G 23.5K /storage/zrootbackup/zroot/var/mail
storage/zrootbackup/zroot/var/run 1.74M 148G 117K /storage/zrootbackup/zroot/var/run
storage/zrootbackup/zroot/var/tmp 16.1M 148G 1.61M /storage/zrootbackup/zroot/var/tmp
zroot 12.3G 16.2G 1.06G legacy
zroot/tmp 4.43M 16.2G 74.5K /tmp
zroot/usr 10.2G 16.2G 8.60G /usr
zroot/usr/ports 1.05G 16.2G 321M /usr/ports
zroot/usr/src 309M 16.2G 309M /usr/src
zroot/var 1.02G 16.2G 26.8M /var
zroot/var/crash 20.5K 16.2G 20.5K /var/crash
zroot/var/db 910M 16.2G 698M /var/db
zroot/var/db/pkg 34.3M 16.2G 25.5M /var/db/pkg
zroot/var/empty 20K 16.2G 20K /var/empty
zroot/var/log 6.45M 16.2G 1.21M /var/log
zroot/var/mail 372K 16.2G 24.5K /var/mail
zroot/var/run 819K 16.2G 92.5K /var/run
zroot/var/tmp 8.54M 16.2G 1.97M /var/tmp
bbzz wrote:@carlton_draught
I'm going to set it up something like that. Things that would benefit from a faster disk, i.e. kernel/world builds, [FILE]/var[/FILE], [FILE]/tmp[/FILE], will go on the SSD.
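Concretely, that could look something like this; a sketch only, assuming the SSD pool is named zroot (dataset names and options are illustrative):
# zfs create -o mountpoint=/tmp -o setuid=off zroot/tmp
# zfs create -o mountpoint=/var zroot/var
For the build churn, the ports work area can be pointed at the SSD too, e.g. in /etc/make.conf:
WRKDIRPREFIX=/tmp/ports
so extracting and compiling ports happens on the fast disk.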
I have a set of articles on installing, backing up and restoring such a system, along with the scripts to do everything, already written and ready to post. All I'm waiting on is clarification of licensing for the install scripts, which are derived from the article linked in that post.
Yes, I heard that too. Basically, old-fashioned platters are still more reliable. In my case I don't care, since the important data is on the zpool; this would be just for the system. A serious server should mirror that, of course.
bbzz wrote:About disks and reliability. I think SSDs have the potential to be more reliable, but HDDs are the more mature technology.
bbzz wrote:What about SSDs, and how long can they store data? Static charges dissipate over time, right?
Galactic_Dominator wrote:The technology powering SSD's have been around for nearly 30 years and been in widespread use for over 20. How long do you need to feel comfortable?
bbzz wrote:I'm merely restating what other experts have said about this.
bbzz wrote:Interesting; any performance hit from running two zpools? I was under the impression that it would be silly to run two zpools.
bbzz wrote:About disks and reliability. I think SSDs have the potential to be more reliable, but HDDs are the more mature technology. So, I don't know; I think in the end time will tell.
What about SSDs, and how long can they store data? Static charges dissipate over time, right?
carlton_draught wrote:I'm not sure why there would be a performance hit, provided you aren't making poor choices in the process (e.g. not using sufficient redundancy, or putting the wrong things on the SSD). I certainly haven't detected any.
Galactic_Dominator wrote:One other tidbit not mentioned a lot is that SSDs actually have an exceptionally long write life span. Even one of the cheap MLC 10,000-write SSDs will outlast a normal laptop by decades. You can make it easier for your drive to operate effectively by reserving a portion of the disk, say 10%, and never utilizing it. This allows static wear leveling to achieve maximum efficiency. Using your SSD's maximum capacity is a bad idea.
lele wrote:Would you please expand on this? Do you mean that when partitioning, 10% of the space should be left unpartitioned?
mav@ wrote:He means space unused from the SSD's point of view, i.e. never written, or erased with TRIM or SECURITY ERASE. An SSD's wear-leveling algorithms reserve some amount of free space to equalize media wear. The more free space available, the better the algorithms should work, and the faster and more reliable the device should be. TRIM does this dynamically, but a static allocation may also help.
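In gpart(8) terms, that could look like the following; a sketch, using the 160GB drive from the diskinfo output above (sizes illustrative):
# gpart create -s gpt ada0
# gpart add -t freebsd-zfs -b 2048 -s 134g ada0
That is roughly 134GiB of the ~149GiB drive; the remaining ~10% is never written by the OS, so static wear leveling can always draw on it.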
wblock@ wrote:My unverified impression is that UFS does not write to every block during a [man=8]newfs[/man]. If true, those blocks are free for wear leveling, and unused filesystem space is as good a source of unused blocks as unpartitioned space. The difference is that it's easily available for use if you need it. Remember that UFS typically hides 8% of filesystem space anyway.
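That 8% is the minfree reserve, and tunefs(8) can show and change it; a sketch, assuming the filesystem lives on ada0p2:
# tunefs -p /dev/ada0p2
# tunefs -m 8 /dev/ada0p2
The first prints the current tunables, including the minimum free space threshold; the second sets the reserve to 8% (run it on an unmounted or read-only filesystem).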