Root on ZFS migration between devices, stuck

Hi there guys!
I have a problem migrating my ZFS root from a USB stick (I suspected it was dying; actually it isn't, it just has poor performance, ~1 MB/s) to a SATA SSD. It started after many retries of zpool replace/attach zroot gptid/c09b56c8-d31c-11e3-89ae-002590dce2eb gptid/72ae90a9-4d3f-11e4-b255-002590dce2eb, and after I asked for help on freenode #freebsd some guy advised me to use zpool add. Ugh, I should have used my head (and read a lot) before doing that, but I trusted the guy, which is surely my fault. So now I'm stuck in this sticky situation (BTW, the whole replace/attach/detach procedure went really smoothly in my VM, where I tried it first; the errors I'm getting now are at https://bpaste.net/show/652fb62adb31):
Code:
[00:56:40] root@freebsd-home-nas:~> zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 0 in 3h40m with 0 errors on Sat Oct  4 06:40:12 2014
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  da0       ONLINE       0     0     0
	  da1       ONLINE       0     0     0
	  da2       ONLINE       0     0     0
	  da3       ONLINE       0     0     0
	  da4       ONLINE       0     0     0
	  da5       ONLINE       0     0     0
	  da6       ONLINE       0     0     0
	  da7       ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h4m with 0 errors on Tue Oct  7 00:55:09 2014
config:

	NAME                                          STATE     READ WRITE CKSUM
	zroot                                         ONLINE       0     0     0
	  gptid/c09b56c8-d31c-11e3-89ae-002590dce2eb  ONLINE       0     0     0
	  gptid/72ae90a9-4d3f-11e4-b255-002590dce2eb  ONLINE       0     0     0

errors: No known data errors
[01:33:39] root@freebsd-home-nas:~> gpart show
=>      34  15646653  da8  GPT  (7.5G)
        34      1024    1  freebsd-boot  (512K)
      1058  15645629    2  freebsd-zfs  (7.5G)

=>       34  175836461  ada0  GPT  (84G)
         34          6        - free -  (3.0K)
         40       1024     1  freebsd-boot  (512K)
       1064  175835424     2  freebsd-zfs  (84G)
  175836488          7        - free -  (3.5K)

[01:33:45] root@freebsd-home-nas:~> gpart list
Geom name: da8
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 15646686
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da8p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 17408
   Mode: r0w0e0
   rawuuid: c04628fa-d31c-11e3-89ae-002590dce2eb
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: gptboot0
   length: 524288
   offset: 17408
   type: freebsd-boot
   index: 1
   end: 1057
   start: 34
2. Name: da8p2
   Mediasize: 8010562048 (7.5G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 541696
   Mode: r1w1e2
   rawuuid: c09b56c8-d31c-11e3-89ae-002590dce2eb
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: zfs0
   length: 8010562048
   offset: 541696
   type: freebsd-zfs
   index: 2
   end: 15646686
   start: 1058
Consumers:
1. Name: da8
   Mediasize: 8011120640 (7.5G)
   Sectorsize: 512
   Mode: r1w1e3

Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 175836494
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   rawuuid: 49e49714-4d3f-11e4-b255-002590dce2eb
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: ssdboot
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: ada0p2
   Mediasize: 90027737088 (84G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   rawuuid: 72ae90a9-4d3f-11e4-b255-002590dce2eb
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 90027737088
   offset: 544768
   type: freebsd-zfs
   index: 2
   end: 175836487
   start: 1064
Consumers:
1. Name: ada0
   Mediasize: 90028302336 (84G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3
[00:11:42] root@freebsd-home-nas:~> zpool replace zroot gptid/c09b56c8-d31c-11e3-89ae-002590dce2eb gptid/72ae90a9-4d3f-11e4-b255-002590dce2eb
invalid vdev specification
use '-f' to override the following errors:
/dev/gptid/72ae90a9-4d3f-11e4-b255-002590dce2eb is part of active pool 'zroot'
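
For reference, in the VM the migration went fine with roughly this (from memory, with placeholders instead of the real gptids):
Code:
zpool attach zroot gptid/<old-usb-partition> gptid/<new-ssd-partition>
# wait for the resilver to finish, then
zpool detach zroot gptid/<old-usb-partition>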


So now I can't replace or detach, and I literally can't do anything to fix these mistakes, at least with my current level of knowledge. I did get some ideas (the guys at #zfs told me I could zfs send the data to the local tank pool, then destroy zroot, somehow put the data back, and boot from the SSD), but I don't know how to do that. :C
I also don't know why, whenever I do something with gpart(8), I end up with these free spaces:
Code:
=>       34  175836461  ada0  GPT  (84G)
         34          6        - free -  (3.0K)
         40       1024     1  freebsd-boot  (512K)
       1064  175835424     2  freebsd-zfs  (84G)
  175836488          7        - free -  (3.5K)
I'd be glad to hear any advice that would get me out of this dire situation. :C
 
If I understand correctly, you want two ZFS pools: tank and zroot. tank is fine, though I see you have it configured without any redundancy; at the moment, if one disk fails, you are very likely to lose data. You might want to consider reconfiguring this pool to use RAID-Z or mirroring to provide some fault tolerance.
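
For example, rebuilding tank as RAID-Z2 would be something like the sketch below (only an illustration: it destroys the existing pool, so everything on tank would have to be backed up elsewhere first; the disk names are taken from your zpool status output):
Code:
zpool destroy tank
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7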

zroot currently contains two providers: da8p2 (uuid c09b56c8-d31c-11e3-89ae-002590dce2eb) and ada0p2 (uuid 72ae90a9-4d3f-11e4-b255-002590dce2eb). The zpool replace zroot gptid/c09b56c8-d31c-11e3-89ae-002590dce2eb gptid/72ae90a9-4d3f-11e4-b255-002590dce2eb command you tried to run failed because the new provider you offered (ada0p2) is already part of the pool. I suspect what you are actually looking to do is remove da8p2 from the zroot pool. Unfortunately, I don't believe this is possible, since it is a non-redundant device. From the zpool(8) man page:
zpool remove pool device ...
Removes the specified device from the pool. This command currently
only supports removing hot spares, cache, and log devices. A mirrored
log device can be removed by specifying the top-level mirror for the
log. Non-log devices that are part of a mirrored configuration can be
removed using the "zpool detach" command. Non-redundant and raidz
devices cannot be removed from a pool.

This means you are going to have to save the datasets you need from zroot (by taking snapshots and using zfs send and zfs receive to put them elsewhere -- perhaps tank temporarily), then destroy and recreate zroot containing only ada0p2. You can then use zfs send and zfs receive to move your datasets back to your new pool. Since zroot contains your operating system, you will need to boot to another disk (your FreeBSD install media will be fine) to be able to destroy and recreate it.
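
If you use tank as the temporary home for the data, the send/receive pair would be roughly this (the dataset name tank/zroot-backup is just an example):
Code:
zfs snapshot -r zroot@migrate
zfs send -R zroot@migrate | zfs receive -u tank/zroot-backup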
 
Thanks for your answer!

I'll probably just dump the whole zroot, reinstall the system to the SSD, import the tank pool, and restore the dump. Sounds legit, doesn't it?
 
If you are new to ZFS, you might find the excellent ZFS section in the FreeBSD Handbook useful. I haven't tried using dump(8) and restore(8) with ZFS; as far as I know they are really aimed at UFS, so I'm not sure that will work. The approach I was suggesting with the ZFS tools would be something like the list below (a consolidated command sketch follows it):
  • Boot to a live CD or memory stick
  • Mount a disk that you can use to store your backup with something like mount /dev/da10p1 /mnt
  • Import the zroot pool without mounting the datasets with zpool import -f -N zroot
  • Take a recursive snapshot of your existing zroot with zfs snapshot -r zroot@migrate
  • Send (equivalent of dumping) the snapshot to your external disk as a flat file with zfs send -R zroot@migrate > /mnt/zroot-migrate.zfs
  • Check and remember the bootfs property with zpool get bootfs zroot
  • Destroy the zroot pool with zpool destroy zroot
  • Create a new zroot pool using just the device you want with zpool create zroot /dev/ada0p2
  • Receive (equivalent of restoring) the snapshot from your external disk without mounting the datasets with zfs receive -u -F zroot < /mnt/zroot-migrate.zfs [Edit following @kpa's post from: zfs receive -u zroot < /mnt/zroot-migrate.zfs]
  • Check you restored all the datasets with zfs list -r zroot
  • Set the bootfs property on the new pool with zpool set bootfs=<value you remembered> zroot
  • Cross your fingers and restart your system :)
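
Putting those steps together, the whole thing would look roughly like this (the backup device /dev/da10p1 and the snapshot/file names are only examples; substitute your own):
Code:
mount /dev/da10p1 /mnt
zpool import -f -N zroot
zfs snapshot -r zroot@migrate
zfs send -R zroot@migrate > /mnt/zroot-migrate.zfs
zpool get bootfs zroot
zpool destroy zroot
zpool create zroot /dev/ada0p2
zfs receive -u -F zroot < /mnt/zroot-migrate.zfs
zfs list -r zroot
zpool set bootfs=<value from the earlier get> zroot
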
Regarding your gpart(8) question, it might be that you created the partitions with the -a 4096 option so that they would be aligned at 4K boundaries (to match the physical disk sectors and give better performance). I wouldn't worry about the "lost" 6.5K out of 84G.
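
For reference, a 4K-aligned layout like the one on your ada0 would come from something like the following (purely illustrative, don't run it against a disk that is in use; the label ssdboot matches your gpart list output):
Code:
gpart create -s gpt ada0
gpart add -a 4k -s 512k -t freebsd-boot -l ssdboot ada0
gpart add -a 4k -t freebsd-zfs ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0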
 
It's a good idea to pipe the stream to gzip when doing zfs send ...; you can then immediately check with gzip -t that the resulting file is not corrupted (plus it takes up less space). This doesn't guard against corruption of the stream before it reaches gzip(1), but it's much better than no integrity check at all.

The sending step would become:

Code:
zfs send -R zroot@migrate | gzip > /mnt/zroot-migrate.zfs.gz
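
The quick integrity check would then just be:

Code:
gzip -t /mnt/zroot-migrate.zfs.gz && echo "archive OK"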

The restore with zfs receive would be:

Code:
gzcat /mnt/zroot-migrate.zfs.gz | zfs receive -Fuv zroot

I added the -F and -v flags because it's better to see what's happening (-v), and I think you need to force the overwrite of the dataset at the root of the new pool anyway (-F).
 
Thanks, @kpa: I forgot about the -F flag on the receive to overwrite the dataset at the root of the pool. I've edited my post.
 
Thank you, guys! I will try this at the end of the week. I hope it all goes smoothly and by the numbers. BTW, if it works out, I think others could end up in the same situation. Is there any chance of putting this up as an article on the FreeBSD wiki?
 