ZFS ARC and SSHDs

I have a solid-state hybrid drive (SSHD) with a 32 GB NAND cache...

I set the ARC to 1 GB before knowing what it was at all, but can I just turn it off altogether since I already have that?
 
I think you are confusing the L2ARC with the ARC. By default there is no L2ARC configured so you shouldn’t have to do anything special for your pool. Unless you have a specific application need I would leave the ARC configured with a larger amount.
 
I'm not confusing them, I just don't like walking on eggshells... It's not like I'm trying to configure or do something to either; I'm trying to turn them off because I have a better use for my RAM than holding copies of data that's already cached somewhere else...
 
Sorry, I should have worded that better. The SSHD behaves like an L2ARC. Some SSHDs have drivers for the host OS that coordinate what's in the NAND cache, but I don't believe FreeBSD does. The NAND doesn't take the place of the work the ARC is doing. Disabling the ARC will make performance worse, not better. Treat the SSHD like a faster platter drive and let ZFS do its thing!
 
ZFS, by default, will allocate all but one GB of RAM to the ARC!
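If that default is too aggressive for your workload, you can cap the ARC instead of disabling it. A minimal sketch for FreeBSD (the 4 GB figure is just an example, not a recommendation; vfs.zfs.arc_max takes a size in bytes):

```shell
# Cap the ARC at 4 GB (example size; pick what fits your workload).
# vfs.zfs.arc_max is specified in bytes, so compute 4 * 2^30:
ARC_MAX=$((4 * 1024 * 1024 * 1024))
echo "vfs.zfs.arc_max=${ARC_MAX}"
# Put that line in /boot/loader.conf to apply it at boot, or set it
# at runtime with:  sysctl vfs.zfs.arc_max=${ARC_MAX}
```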

That's insane!

And also, I didn't know SSHDs had drivers... it would be awesome if my hard drive had one... !

And I know the NAND does the same thing as a SSD attached as a L2ARC... It's like a built-in L2ARC that works with any filesystem... And disabling the ARC ... I don't know how it works, exactly, and how big it should be... I'm not sure I want everything to be cached... Both were set as enabled on all filesystems because it's an inheritable setting...

The thing used to grow so quickly, leaving very little RAM for me to work with...

Right now I'm not too dissatisfied with performance, but I'll get used to this, and turn it back on, see exactly what it does... Maybe I'll give it a bit more than 1 GB, but definitely not 15 GB like it had...

That's more data than I use as a user... I'm not even sure I have that much data on my hard drive, if I exclude distfiles and compressed packages...
 
The ARC has a low priority in RAM, so if any program wants or needs to allocate memory, the ARC shrinks to make room.
If you constantly run into memory pressure (check with zfs-stats -A), put more memory in the system; your problem isn't ZFS but too little memory for your workloads.
On low-memory systems you absolutely don't want an L2ARC: firstly, it needs memory for its pointer maps, which is carved out of the ARC, and secondly, the L2ARC holds blocks that have fallen off the ARC, so there has to be an ARC to spill over from in the first place.
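To put a rough number on that first point: every block cached on an L2ARC device keeps a header in RAM. The figures below are assumptions for illustration only (the per-block header size and average block size both vary with OpenZFS version and workload), but they show why a 32 GB cache device isn't free on a low-memory box:

```shell
# Back-of-envelope L2ARC header overhead, assuming ~70 bytes of ARC
# memory per cached block (the exact size varies by OpenZFS version):
L2ARC_BYTES=$((32 * 1024 * 1024 * 1024))   # 32 GB cache device
AVG_BLOCK=$((8 * 1024))                    # assumed 8 KiB average block
HEADER_BYTES=70
OVERHEAD_MIB=$((L2ARC_BYTES / AVG_BLOCK * HEADER_BYTES / 1024 / 1024))
echo "${OVERHEAD_MIB} MiB of RAM just for L2ARC headers"
```

Smaller average blocks make this dramatically worse, since the header count scales with the number of blocks, not the device size.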

As for SSHDs: I've replaced all the various (Seagate) SSHDs here that came in laptops and client machines. The failure rate of these drives is absolutely insane: I still have 6 of them lying in the closet, 4 of which are refurbished RMA parts, and almost all of the other half I've already recycled were RMA'd drives too. I think the main reason is the constant spindown/spinup under mid-to-low workloads, just as was the case with WD Green drives. On the WDs you could at least adjust or disable the automatic spindown to prevent them from self-destructing...

Also: NO disk cache is even remotely as fast as the ARC in system memory, especially over slow, single-queued SATA links. We're talking differences of several orders of magnitude in latency, and probably bandwidth too. Completely disabling the ARC will definitely obliterate the performance of the pool.
 
What could be cached that could improve performance?

You seem like you could sell me the idea...

I just don't see what could be in the ARC that would improve performance on my system... everything I need is already in RAM, and I can't think of any other files taking up more than 1 GB that the ARC still has...

Like Firefox, X11, etc. are already in RAM, and my whole user profile and all the shell commands I use can easily fit in 1 GB... no point, really... no?
 