UFS gmirror + scrub/verify

What is the equivalent of a verify in 3ware or a scrub in ZFS for a plain old RAID1 gmirror of two SATA drives?

Thanks!
 
What are you asking for?
You need to know that a ZFS scrub checks the integrity of the data, while auto-verify (I guess) checks the integrity of the RAID array.
gmirror status should give you the status of your mirror, but it can't check the data integrity of a UFS filesystem, if that's what you expected. UFS has no mechanism to verify data integrity.
No matter what filesystem you use, the smartmontools port is handy to check your drives' SMART status and run self-tests on them.
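For example, something along these lines (a minimal sketch; the device name ada0 is an assumption, use your actual drives and run it against both disks of the mirror):

smartctl -H /dev/ada0           # overall health self-assessment
smartctl -t long /dev/ada0      # start a long self-test in the background
smartctl -l selftest /dev/ada0  # read the self-test log once it has finished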

(sorry for missing formatting, I'm on phone)
 
I'm looking for some way to forcibly ask the system to check and report that both drives in the mirror are properly mirrored/synced and that the mirror is healthy. I.e., whatever a scrub (ZFS) or a verify (3ware) command does for those mirrors, what is the equivalent for a gmirror-based mirror?
 
gmirror status << this will show array components and array status
gmirror list
https://www.freebsd.org/cgi/man.cgi?gmirror(8)
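On a healthy two-disk mirror the output of gmirror status looks roughly like this (the label gm0 is just an example):

      Name    Status  Components
mirror/gm0  COMPLETE  ada0 (ACTIVE)
                      ada1 (ACTIVE)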

gmirror will automatically rebuild an array if you pull out a drive and insert a new drive.
gmirror status will show you the rebuild percentage (resilvering in ZFS parlance).

Because it operates automatically, you have to be careful inserting a disk with prior data into a machine with a degraded mirror.
It does not warn; it overwrites the disk to rebuild the array. So you need to stay apprised of the array status before adding any disks.
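For reference, a rough sketch of replacing a failed disk by hand (the mirror name gm0 and new disk ada1 are assumptions):

gmirror status           # confirm which component is missing first
gmirror forget gm0       # drop components that are gone for good
gmirror insert gm0 ada1  # add the new disk; the rebuild starts automatically
gmirror status           # watch the synchronization percentage climb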
 
How about cmp /dev/adaX /dev/adaY?
Provided the contents of the free space are identical on both drives, this should work.
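Something like this (device names are assumptions; do it with the mirror idle, ideally from a rescue system, so in-flight writes can't cause false mismatches):

# gmirror keeps its metadata in the last sector of each provider,
# so expect a difference there even on a perfectly healthy mirror
cmp /dev/ada0 /dev/ada1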
 
UFS has no mechanism to verify data integrity.

What about gmirror, though? Does gmirror, or any of its associated binaries or libraries, have any equivalent to a 'zpool scrub'? It seems like something you'd really want to do from time to time, not just wait for 'gmirror status' or smartmontools to start throwing errors...
 
Short answer: no.
Like Crivens suggested, you could compare the drives, but if they don't match there is no way to tell which drive holds the good data and which the bad.
Theoretically, comparing file by file and then looking for read errors on both drives could work, I guess. A read error on a file, due to filesystem corruption, would help to tell a good file from a bad one. But so-called data rot does not necessarily damage the underlying filesystem, since it's more subtle: only single bits change. In addition, you can't compare the files as long as gmirror is active.
This boils down to why ZFS is so special: it's a filesystem and a volume manager in one, and it has control over all of it. Not to mention the checksums that provide data integrity.
 
Maybe you could mount both halves of the mirror separately (read-only!) and do a diff on the trees, and check the SMART error count before and after to see which drive hit some dead blocks doing that.
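A rough sketch of that idea (device and partition names are assumptions; gmirror should be stopped first, e.g. from a rescue system, since it holds its components open):

mkdir -p /mnt/a /mnt/b
mount -o ro /dev/ada0p2 /mnt/a   # one half of the former mirror
mount -o ro /dev/ada1p2 /mnt/b   # the other half
diff -rq /mnt/a /mnt/b           # list files that differ between the halves
# compare 'smartctl -A' output for both drives before and after the diff
# to see whether the read pass tripped over any bad sectors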
 