Can't destroy a particular ZFS snapshot

Hi,

For some reason I can't destroy this snapshot. As you can see, the command doesn't give any errors, but the snapshot is still not destroyed. I'm using FreeBSD 9.0.

Any ideas? =)

Hugs,
Sandra


Code:
[root@nas3 ~]# zfs list -t snapshot
NAME                           USED  AVAIL  REFER  MOUNTPOINT
tank3/pro1@first              2.00K      -  55.9K  -
[root@nas3 ~]# zfs destroy -d tank3/pro1@first
[root@nas3 ~]# zfs list -t snapshot
NAME                           USED  AVAIL  REFER  MOUNTPOINT
tank3/pro1@first              2.00K      -  55.9K  -
[root@nas3 ~]# zfs destroy -rd tank3/pro1@first
[root@nas3 ~]# zfs destroy -Rd tank3/pro1@first
[root@nas3 ~]# zfs list -t snapshot
NAME                           USED  AVAIL  REFER  MOUNTPOINT
tank3/pro1@first              2.00K      -  55.9K  -
 
Code:
[root@nas3 ~]# zfs unmount tank3/pro1@first
cannot open 'tank3/pro1@first': operation not applicable to datasets of this type

It doesn't seem to be mounted, then. Other ideas? =)
 
@bbzz

Code:
[root@nas3 ~]# zfs destroy tank3/pro1
cannot destroy 'tank3/pro1': filesystem has children
use '-r' to destroy the following datasets:
tank3/pro1@first
tank3/pro1@auto130218-174224
[root@nas3 ~]# zfs destroy -r tank3/pro1
cannot destroy 'tank3/pro1@first': dataset is busy
[root@nas3 ~]# zfs destroy tank3/pro1@first
cannot destroy 'tank3/pro1@first': dataset is busy
[root@nas3 ~]# zfs destroy -d tank3/pro1@first
[root@nas3 ~]# zfs destroy -r tank3/pro1@first
cannot destroy 'tank3/pro1@first': dataset is busy
no snapshots destroyed
[root@nas3 ~]# zfs list -t snapshot
NAME                           USED  AVAIL  REFER  MOUNTPOINT
tank3/pro1@first              2.00K      -  55.9K  -
tank3/pro1@auto130218-174224      0      -  55.9K  -
[root@nas3 ~]# zfs destroy -R tank3/pro1@first
cannot destroy 'tank3/pro1@first': dataset is busy
no snapshots destroyed
[root@nas3 ~]# df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     13G    2.4G     10G    19%    /
devfs           1.0k    1.0k      0B   100%    /dev
tank3            63T     58k     63T     0%    /tank3
[root@nas3 ~]# zfs allow tank3
---- Permissions on tank3 --------------------------------------------
Local+Descendent permissions:
	user  create,destroy,hold,release,send,snapshot

Maybe the filesystem was created by a user other than root. I can't really remember, but I have since deleted that user.

I suppose there should have been a username right next to "user"...
 
Maybe you have a hidden dataset with a '%' in the name. You can find it using zdb(8), and then use zfs destroy as usual with the -f, -r, or -R options:

# zdb -d tank3 | grep %

Note that if the file system to be destroyed is busy and cannot be unmounted, the zfs destroy command fails. To destroy an active file system, use the -f option. Use this option with caution as it can unmount, unshare, and destroy active file systems, causing unexpected application behavior.
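For example, a sketch of how that could go if zdb did turn up a leftover hidden dataset (the %recv name and the numbers are purely hypothetical; such datasets are typically left behind by an interrupted zfs receive):

Code:
# zdb -d tank3 | grep %
Dataset tank3/pro1/%recv [ZPL], ID 50, cr_txg 100, 55.9K, 7 objects
# zfs destroy -r 'tank3/pro1/%recv'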

Read this to refresh your memory a bit: Destroying a ZFS File System ;)
 
@cpu82

I am getting this weird error

Code:
[root@nas3 ~]# zdb -d tank3
zdb: can't open 'tank3': Device not configured
 
@Sebulon

The other snapshot has the exact same problem, and I know I created that one as root and put a hold on it with "zfs hold".
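I can't recall the exact tag I used, but it would have been something like this (the keep tag is a guess from memory):

Code:
# zfs hold keep tank3/pro1@auto130218-174224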

Code:
[root@nas3 ~]# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank3        497K  63.9T  57.9K  /tank3
tank3/pro1  57.9K  63.9T  55.9K  /tank3/pro1
[root@nas3 ~]# 

[root@nas3 ~]# df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     13G    2.4G     10G    19%    /
devfs           1.0k    1.0k      0B   100%    /dev
tank3            63T     58k     63T     0%    /tank3

[root@nas3 ~]# zfs list -t snapshot
NAME                           USED  AVAIL  REFER  MOUNTPOINT
tank3/pro1@first              2.00K      -  55.9K  -
tank3/pro1@auto130218-174224      0      -  55.9K  -

[root@nas3 ~]# zfs destroy tank3/pro1@auto130218-174224
cannot destroy 'tank3/pro1@auto130218-174224': dataset is busy

[root@nas3 ~]# zfs destroy -d tank3/pro1@auto130218-174224

[root@nas3 ~]# zfs list -t snapshot
NAME                           USED  AVAIL  REFER  MOUNTPOINT
tank3/pro1@first              2.00K      -  55.9K  -
tank3/pro1@auto130218-174224      0      -  55.9K  -
 
littlesandra88 said:
@cpu82

I am getting this weird error

Code:
[root@nas3 ~]# zdb -d tank3
zdb: can't open 'tank3': Device not configured

Try the following steps to fix the issue and get all your data back:
Code:
# zpool import -F tank3
# zpool clear tank3
# zpool online tank3
# zpool status

Reboot your machine and try again.
 
@cpu82:

What can be concluded from this?

Code:
[root@nas3 ~]# zpool import -F tank3
cannot import 'tank3': a pool with that name is already created/imported,
and no additional pools with that name were found

[root@nas3 ~]# zpool clear tank3

[root@nas3 ~]# zpool online tank3
missing device name
usage:
	online <pool> <device> ...

[root@nas3 ~]# zpool status
  pool: tank3
 state: ONLINE
 scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank3       ONLINE       0     0     0
	  raidz2-0  ONLINE       0     0     0
	    da0     ONLINE       0     0     0
	    da8     ONLINE       0     0     0
	    da4     ONLINE       0     0     0
	    da9     ONLINE       0     0     0
	    da2     ONLINE       0     0     0
	    da3     ONLINE       0     0     0
	  raidz2-1  ONLINE       0     0     0
	    da6     ONLINE       0     0     0
	    da1     ONLINE       0     0     0
	    da5     ONLINE       0     0     0
	    da7     ONLINE       0     0     0
	    da10    ONLINE       0     0     0
	    da20    ONLINE       0     0     0
	  raidz2-2  ONLINE       0     0     0
	    da11    ONLINE       0     0     0
	    da14    ONLINE       0     0     0
	    da15    ONLINE       0     0     0
	    da21    ONLINE       0     0     0
	    da18    ONLINE       0     0     0
	    da16    ONLINE       0     0     0
	  raidz2-3  ONLINE       0     0     0
	    da22    ONLINE       0     0     0
	    da17    ONLINE       0     0     0
	    da19    ONLINE       0     0     0
	    da23    ONLINE       0     0     0
	    da13    ONLINE       0     0     0
	    da12    ONLINE       0     0     0
	  raidz2-4  ONLINE       0     0     0
	    da24    ONLINE       0     0     0
	    da26    ONLINE       0     0     0
	    da32    ONLINE       0     0     0
	    da33    ONLINE       0     0     0
	    da29    ONLINE       0     0     0
	    da28    ONLINE       0     0     0
	  raidz2-5  ONLINE       0     0     0
	    da30    ONLINE       0     0     0
	    da27    ONLINE       0     0     0
	    da25    ONLINE       0     0     0
	    da31    ONLINE       0     0     0
	    da34    ONLINE       0     0     0
	    da35    ONLINE       0     0     0

errors: No known data errors
 
littlesandra88 said:
The other snapshot have the exact same problem, and I know I created that one as root and with "zfs hold".

Ah, hold, that was a new one for me. Well, this is from the friendly manual:
Code:
     zfs holds [-r] snapshot ...

         Lists all existing user references for the given snapshot or snap-
         shots.

         -r      Lists the holds that are set on the named descendent snap-
                 shots, in addition to listing the holds on the named snap-
                 shot.

     zfs release [-r] tag snapshot ...

         Removes a single reference, named with the tag argument, from the
         specified snapshot or snapshots. The tag must already exist for each
         snapshot.

         -r      Recursively releases a hold with the given tag on the snap-
                 shots of all descendent file systems.

So perhaps like this:
# zfs holds -r tank3/pro1@auto130218-174224
# zfs release -r tag tank3/pro1@auto130218-174224
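For instance, if the listing were to show a hold tagged keep (an invented tag; use whatever yours actually prints), it would go:

Code:
# zfs holds -r tank3/pro1@auto130218-174224
NAME                          TAG   TIMESTAMP
tank3/pro1@auto130218-174224  keep  Mon Feb 18 17:42 2013
# zfs release -r keep tank3/pro1@auto130218-174224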

Sweet pool BTW :)

/Sebulon
 
You may have created a clone from that snapshot. In that case you should promote it or roll it back in order to be able to destroy that snapshot.
Try # zfs list -t filesystem
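For example, a quick way to spot a clone is to list the origin property; the tank3/clone1 name below is invented for illustration:

Code:
# zfs list -t filesystem -o name,origin
NAME          ORIGIN
tank3         -
tank3/pro1    -
tank3/clone1  tank3/pro1@first
# zfs promote tank3/clone1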
 
@Sebulon

Wow. That solved the problem =) Thanks =)

I had never noticed "zfs holds", but I had wondered how I could list the tags...

I was under the impression that -d

# zfs destroy -d tank3/pro1@first

would delete the snapshot and release any holds if required?

Sweet pool BTW

Thanks =)
 
@cpu82

Yes, still the same weird error.

Code:
[root@nas3 ~]# zdb -d tank3
zdb: can't open 'tank3': Device not configured

Should I be concerned?
 
Concerned? Not necessarily, if you can fix the dataset output of zdb(8). The following command rewrites all labels on the vdevs using the current settings:

# zpool reguid tank3

So you can reguid, then run zdb with the -d option to display basic dataset information: ID, create transaction, size, and object count. Please see the man page for more about this utility (display options and examples).
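If it works, the output should look roughly like this (the IDs, sizes, and counts below are made up for illustration):

Code:
# zdb -d tank3
Dataset mos [META], ID 0, cr_txg 4, 215K, 38 objects
Dataset tank3/pro1 [ZPL], ID 42, cr_txg 21, 55.9K, 7 objects
Dataset tank3 [ZPL], ID 21, cr_txg 1, 57.9K, 7 objects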

Show the following outputs:
Code:
# zdb
# zdb tank3 | grep ashift

Today most drives report 512B sectors (ashift=9) even if they internally use 4K sectors (ashift=12). When drives start commonly reporting 4K sectors, ashift will be set to 12 automatically.
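And should the grep show ashift: 9 on drives that are really 4K, the usual FreeBSD 9-era workaround is the gnop(8) trick, though it only helps at pool (or vdev) creation time, so this is only a sketch, with example device names:

Code:
# gnop create -S 4096 /dev/da0   # 4K-sector passthrough device
# zpool create tank3 raidz2 da0.nop da1 da2 da3 da4 da5
# zpool export tank3
# gnop destroy da0.nop           # pool keeps ashift=12 after this
# zpool import tank3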
 
@cpu82

It seems that ZFS in FreeBSD 9 is not new enough to have support for

# zpool reguid tank3

The command "reguid" doesn't exist.

Do you know of alternative methods? =)
 
@littlesandra88

Sorry for not pointing out the changes; the new ZFS features were merged from illumos in r229578. See Feature #1748: desire support for reguid in zfs (fixed in changeset 13514). It was committed in FreeBSD 9.1-STABLE; you can read the changes that affect the zpool(8) man page.

The zpool command has undergone changes in both /usr/src/cddl/contrib/opensolaris/cmd/zpool/zpool.8 and /usr/src/cddl/contrib/opensolaris/cmd/zpool/zpool_main.c to support the reguid subcommand.

For FreeBSD 9-STABLE, see the committed r243674. The patches can be downloaded here.
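Roughly, applying them means patching the source tree and rebuilding, a sketch only (the patch file name is hypothetical, and the canonical build steps are in /usr/src/UPDATING):

Code:
# cd /usr/src
# patch -p0 < /root/zfs-reguid.patch   # hypothetical file name
# make buildworld && make buildkernel
# make installkernel && make installworld
# shutdown -r now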

Hope that helps you update your ZFS features ;)
 
@cpu82

Ahhh. Very nice =)

I am quite new to FreeBSD, but I am used to Linux and have an idea of what happens when you add third-party package repositories.

But how does this work on FreeBSD? If I apply patches like the one you linked to, what does that mean for the officially supported version of ZFS?

Do I need to take special care of ZFS from this day on when I want to install the latest supported updates?
 
littlesandra88 said:
@cpu82

Ahhh. Very nice =)

I am quite new to FreeBSD, but I am used to Linux and have an idea of what happens when you add third-party package repositories.

But how does this work on FreeBSD? If I apply patches like the one you linked to, what does that mean for the officially supported version of ZFS?

Do I need to take special care of ZFS from this day on when I want to install the latest supported updates?

A "Good Tips": HOWTO: FreeBSD ZFS Madness should be compulsory reading for anyone who need a good start. All important announcements (TODO tasks) are frequently edited in wiki ZFS. To be current may check freebsd-fs mailing list. To stay quiet, instead install patches, wait MFC new ZFS version. Read /usr/src/UPDATING.
 