Some ZFS questions

I am having difficulty finding answers to these simple questions:

1. Does ZFS keep an error log by default or do you have to set it up? Where is the log file stored or how is the log set up?
2. I have a single disk and no RAID or mirror. Each pool is usually on its own GPT partition. Is there a way to copy (resilver) an entire pool from one slice to another, or to migrate the pool from one slice to the other?
3. Snapshots: Is a complete snapshot able to fully restore the pool or just incremental data? Does ZFS require a separate pool for snapshot storage?

Thanks.
 
1. Does ZFS keep an error log by default or do you have to set it up? Where is the log file stored or how is the log set up?
Error log of what? Maybe zpool history is what you are looking for?

Code:
# zpool history
History for 'storage':
2011-04-12.12:09:04 zpool create storage /dev/ad4s3
2011-04-12.12:09:22 zfs create storage/var
2011-04-12.12:09:25 zfs create storage/usr
2011-04-12.12:11:46 zfs set mountpoint=none storage
2011-04-12.12:12:27 zfs set mountpoint=/NEW/var storage/var
2011-04-12.12:12:34 zfs set mountpoint=/NEW/usr storage/usr
2011-04-12.12:15:34 zfs create -o mountpoint=/NEW/tmp storage/tmp
2011-04-12.12:21:44 zfs set mountpoint=/var storage/var
2011-04-12.12:21:49 zfs set mountpoint=/usr storage/usr
2011-04-12.12:21:54 zfs set mountpoint=/tmp storage/tmp
2011-04-12.12:23:34 zpool export storage
2011-04-12.12:24:52 zpool import storage
2011-06-09.08:52:59 zpool scrub storage
2011-07-28.09:01:13 zpool add storage cache da0
2011-07-28.09:09:04 zpool remove storage cache da0
2011-08-03.12:59:14 zpool scrub storage
2011-08-03.13:10:55 zpool upgrade storage
2011-08-26.08:02:41 zfs set dedup=on storage
2011-08-27.00:34:42 zfs set dedup=off storage

2. I have a single disk and no RAID or mirror. Each pool is usually on its own GPT partition. Is there a way to copy (resilver) an entire pool from one slice to another, or to migrate the pool from one slice to the other?

ZFS works best with WHOLE DISKS.

3. Snapshots: Is a complete snapshot able to fully restore the pool or just incremental data? Does ZFS require a separate pool for snapshot storage?
Snapshots are part of ZFS; no other tools are needed. There are, however, tools that automate the process for daily/weekly/monthly snapshots, such as sysutils/zfs-snapshot-mgmt.
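As a minimal sketch using the storage/var filesystem from the history above: a snapshot is created in place (no separate pool needed), and the filesystem can later be rolled back to it.

Code:
# zfs snapshot storage/var@backup
# zfs list -t snapshot
# zfs rollback storage/var@backup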

Also check these:
http://download.oracle.com/docs/cd/E19253-01/819-5461/index.html
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
http://docs.huihoo.com/opensolaris/solaris-zfs-administration-guide/html/
 
Error log of what?
1. A log of errors the ZFS system encounters during operation, whether due to hardware or other anomalies. A timely feed of this stream is better than periodic manual checking. I think ZFS feeds this to syslog, so this becomes a syslog setup question.
Code:
# zpool status
  ...              READ WRITE CKSUM
  ...                 0     0     0
errors: No known data errors
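Note that zpool status -x prints output only for pools with problems, which makes it convenient for scripting; on a healthy system it simply reports:

Code:
# zpool status -x
all pools are healthy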
2. A WHOLE DISK setup matters when your system is set up with mirror/RAID variants. If you are not doing any replication or fail-over protection, the WHOLE DISK criterion becomes unnecessary and a non-issue; you can use ZFS just like any other filesystem on a partition/slice basis (easier on GPT, of course).
The question is still open and related to #3:

3. I still cannot figure out how to restore a snapshot to a different slice on the same disk (I would rather have ZFS do this than use patch solutions like tar/untar, etc.). As far as I can tell, I need to (a) send and (b) receive the snapshot to the new slice, but most examples describe how to restore the snapshot to the originating pool.
 
Beeblebrox said:
I am having difficulty finding answers to these simple questions:

1. Does ZFS keep an error log by default or do you have to set it up? Where is the log file stored or how is the log set up?
...
1. A log of errors the ZFS system encounters during operation, whether due to hardware or other anomalies. A timely feed of this stream is better than periodic manual checking. I think ZFS feeds this to syslog, so this becomes a syslog setup question.

# zpool status
will give details of any faults ZFS has detected and whether it was able to repair them automatically.

Beeblebrox said:
2. I have a single disk and no RAID or mirror. Each pool is usually on its own GPT partition. Is there a way to copy (resilver) an entire pool from one slice to another, or to migrate the pool from one slice to the other?

If you want to make a one-off copy of your pool, you can create a second empty pool on your new slice (or disk) and use zfs send/receive to transfer your zfs datasets to the new pool.
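A minimal sketch, assuming the existing pool is named storage and the new slice is /dev/ad4s4 (the new pool name and device are hypothetical, adjust for your layout):

Code:
# zpool create newstorage /dev/ad4s4
# zfs snapshot -r storage@migrate
# zfs send -R storage@migrate | zfs recv -F newstorage

The -R flag replicates all child datasets and their properties. After verifying the copy, you could destroy the old pool and rename the new one to the old name with zpool export followed by zpool import newstorage storage.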

If you want to set up real-time mirroring, you can add a second disk to your existing pool.
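For example, using the device from the history above and a hypothetical second disk, attaching a second device to the existing single-device vdev turns it into a mirror and starts a resilver automatically:

Code:
# zpool attach storage ad4s3 ad6s3
# zpool status storage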



Beeblebrox said:
3. Snapshots: Is a complete snapshot able to fully restore the pool or just incremental data? Does ZFS require a separate pool for snapshot storage?

Snapshots must reside within the same pool as their parent filesystem, due to the way they work.
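You can see this with zfs list: snapshots show up under their parent dataset, and with the snapdir property set to visible they can be browsed read-only under the filesystem's .zfs/snapshot directory (using storage/var, mounted at /var, from the history above):

Code:
# zfs list -t snapshot
# zfs set snapdir=visible storage/var
# ls /var/.zfs/snapshot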
 
@ Crest: Can you elaborate a little? Let's call tanko the origin and tankd the destination. ssh is not necessary since the data is moving inside the same system - is this correct?
zfs send -R tanko@backup | zfs recv -F tankd

@ jem: I was trying to point out that manually running status could result in delays in becoming aware of errors. Better to have a feed into syslog or e-mail.
 
Beeblebrox said:
@ jem: I was trying to point out that manually running status could result in delays in becoming aware of errors. Better to have a feed into syslog or e-mail.

On the systems I look after at work, we have a 5-minute cron job grepping the output of zpool status and flagging an alert on our monitoring system if any error string is present.
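A rough sketch of such a check (the script path, mail address, and match string are illustrative, not the exact setup described above):

Code:
#!/bin/sh
# check_zpool.sh - run from cron, e.g. in /etc/crontab:
# */5 * * * * root /usr/local/bin/check_zpool.sh
# zpool status -x prints "all pools are healthy" when nothing is wrong
if ! zpool status -x | grep -q "all pools are healthy"; then
    zpool status | mail -s "ZFS error on $(hostname)" admin@example.com
fi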
 
jem said:
On the systems I look after at work, we have a 5-minute cron job grepping the output of zpool status and flagging an alert on our monitoring system if any error string is present.

You can use periodic(8) too. In /etc/periodic.conf:
Code:
daily_status_zfs_enable="YES"
daily_scrub_zfs_enable="YES"
 