Possible to go from GPTZFSBoot to GPTZFSBoot/Mirror?

It's very easy.

If your second disk is identical to the first one, just replicate the GPT partition layout on the second disk, write the pmbr and bootcode, and then do:

# zpool add <poolname> mirror <original disk's zfs partition> <second disk's zfs partition>
e.g.
# zpool add pool0 mirror ada0p3 ada1p3
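For the replication step, a minimal sketch (assuming the disks are ada0 and ada1, and that your gpart(8) has the backup/restore verbs; otherwise recreate the partitions by hand with gpart add):

Code:
# gpart backup ada0 | gpart restore -F ada1
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1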


PS. Why was this moved into Ports & Packages?
 
jem said:
If your second disk is identical to the first one

My second disk is not identical. It is twice as big (160GB). Does this matter? How can I use the extra 80GB?
 
Well, it depends on what your future plans are.

If there's a chance you might upgrade the first disk to 160GB as well, then I'd make the freebsd-zfs partition on the new disk as large as I could. Once you upgrade the first disk, zfs will expand the pool to use the extra space. In the meantime, that 80GB of space won't be usable.
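Note that depending on your ZFS version the expansion may need a nudge rather than happening by itself; on versions that have the autoexpand pool property, something along these lines should do it (with ada0p3 standing in for the upgraded disk's zfs partition):

Code:
# zpool set autoexpand=on pool0
# zpool online -e pool0 ada0p3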

Alternatively, you could make your freebsd-zfs partition exactly the same size as on the first disk, then create another freebsd-zfs partition in the remaining space and make a single device pool out of that. Obviously, that one won't be mirrored.

Something like this:

Code:
ad4p1   64KB    freebsd-boot
ad4p2   4GB     freebsd-swap
ad4p3   76GB    freebsd-zfs (part of mirrored pool)

ad6p1   64KB    freebsd-boot
ad6p2   4GB     freebsd-swap
ad6p3   76GB    freebsd-zfs (part of mirrored pool)
ad6p4   80GB    freebsd-zfs (part of second, unmirrored pool)
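
Creating that second, unmirrored pool out of ad6p4 would then be a one-liner; something like this, with tank as a placeholder name:

Code:
# zpool create tank ad6p4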
 
Thanks for a really good answer to a (possibly) daft and not well thought through question.

I'm considering the benefits of switching to a mirrored setup:

  • Great as a fallback if one disk begins to fail
  • Maybe FreeBSD would run a little faster/cooler (?), since reads can be balanced over the two disks
  • I can put to use a disk that is seldom used elsewhere

Here are the disadvantages I am able to think of at the moment:

  • The system uses some watts extra
  • The airflow from the intake of the case is decreased (I use a CM Stacker)
  • The root system is not as important as the documents, music, photos and movies on the second array I am going to add, and the second disk's slot could instead have held an additional large-storage Barracuda-type disk in that array. After all, I believe it would be fairly easy to bring that `fileserver' array back up and running if the root disk breaks. This is not a high-availability server - just my private one.
 
!!

This happened:

Code:
# zpool add -n zroot mirror /dev/gpt/disk0 /dev/gpt/disk1
invalid vdev specification
use '-f' to override the following errors:
/dev/gpt/disk0 is part of active pool 'zroot'
# zpool detach zroot /dev/gpt/disk0
cannot detach /dev/gpt/disk0: only applicable to mirror and replacing vdevs

How can I solve this?
 
My apologies, my previous instruction was wrong. You need to use 'zpool attach', not 'zpool add':

# zpool attach zroot /dev/gpt/disk0 /dev/gpt/disk1
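
The difference matters: 'zpool add' creates a new top-level vdev, striping it with the existing one, while 'zpool attach' attaches a device to an existing one, forming a mirror. Roughly (tank, da0 and da1 are placeholder names):

Code:
# zpool add tank da1         <- new top-level vdev; the pool becomes a stripe
# zpool attach tank da0 da1  <- da1 becomes a mirror of the existing da0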

If you still have difficulties can you post the output from the following commands please?

# gpart show
# gpart show -l
# zpool status -v
 
Code:
# zpool attach zroot /dev/gpt/disk0 /dev/gpt/disk1
invalid vdev specification
use '-f' to override the following errors:
/dev/gpt/disk1 is part of potentially active pool 'zroot'
# gpart show
=>       34  312581741  ad0  GPT  (149G)
         34        128    1  freebsd-boot  (64K)
        162    8388608    2  freebsd-swap  (4.0G)
    8388770  147912685    3  freebsd-zfs  (71G)
  156301455  156280320       - free -  (75G)

=>       34  156301421  ad4  GPT  (75G)
         34        128    1  freebsd-boot  (64K)
        162    8388608    2  freebsd-swap  (4.0G)
    8388770  147912685    3  freebsd-zfs  (71G)

=>       34  156301421  ad8  GPT  (75G)
         34        128    1  freebsd-boot  (64K)
        162    8388608    2  freebsd-swap  (4.0G)
    8388770  147912685    3  freebsd-zfs  (71G)

=>       63  156301425  ad9  MBR  (75G)
         63    4096512    1  !11  (2.0G)
    4096575  152183745    2  !7  [active]  (73G)
  156280320      21168       - free -  (10M)

# gpart show -l
=>       34  312581741  ad0  GPT  (149G)
         34        128    1  (null)  (64K)
        162    8388608    2  swap1  (4.0G)
    8388770  147912685    3  disk1  (71G)
  156301455  156280320       - free -  (75G)

=>       34  156301421  ad4  GPT  (75G)
         34        128    1  (null)  (64K)
        162    8388608    2  swap1  (4.0G)
    8388770  147912685    3  disk1  (71G)

=>       34  156301421  ad8  GPT  (75G)
         34        128    1  (null)  (64K)
        162    8388608    2  swap0  (4.0G)
    8388770  147912685    3  disk0  (71G)

=>       63  156301425  ad9  MBR  (75G)
         63    4096512    1  (null)  (2.0G)
    4096575  152183745    2  (null)  [active]  (73G)
  156280320      21168       - free -  (10M)

# zpool status -v
  pool: zroot
 state: ONLINE
 scrub: none requested
config:

	NAME         STATE     READ WRITE CKSUM
	zroot        ONLINE       0     0     0
	  gpt/disk0  ONLINE       0     0     0

errors: No known data errors
 
rusma said:
Code:
# gpart show -l
=>       34  312581741  ad0  GPT  (149G)
         34        128    1  (null)  (64K)
        162    8388608    2  swap1  (4.0G)
    8388770  147912685    3  disk1  (71G)
  156301455  156280320       - free -  (75G)

=>       34  156301421  ad4  GPT  (75G)
         34        128    1  (null)  (64K)
        162    8388608    2  swap1  (4.0G)
    8388770  147912685    3  disk1  (71G)

Your problem is that you have given partitions on two different disks the same label. Try to change the labels on the ad4 partitions and retry the zpool attach command with the new label.

Just out of curiosity, how are all these disks connected? Some on motherboards ports, some on controller cards? What ports do you have available?

I see you have three 80GB disks installed. If I were you, I'd consider making my mirror from two of these and use the 160GB disk as a single disk pool for now.
 
Ah. I'll try destroying ... er ... ad0 or ad4. Probably ad4 (and try adding a bigger SATA drive in place of ad4).

ad0 is a 160GB IDE disk
ad4 is an 80GB SATA disk
ad8 is an 80GB IDE disk (I boot from this)
ad9 is an 80GB IDE disk (with WinXP, I think)
... in addition I have four 500GB Barracuda ES SATA disks

After disconnecting, rearranging the cables and reconnecting, the system would not boot. I do not know what happened; this has happened before. I am only able to boot with the four Barracudas disconnected, with just ad0, ad4, ad8 and ad9 attached.

I have ordered some low-end controller cards (SILxx21 and Promise TX4).
 
You don't need to destroy anything. Just use the 'gpart modify' command to relabel the partitions on one of the disks.
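
As a sketch, assuming you relabel ad4's swap and zfs partitions to swap2 and disk2 (they are indices 2 and 3 in your gpart output), the sequence would be:

Code:
# gpart modify -i 2 -l swap2 ad4
# gpart modify -i 3 -l disk2 ad4
# zpool attach zroot /dev/gpt/disk0 /dev/gpt/disk2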

Why did you change all your disk connections around? And why have you ordered more controllers?
 
OK, I'll try gpart modify soon.

I changed the disks around to group them in a more logical way. I also felt there was too much space between them. Now the disks that are going to be in the same pools are grouped together.

I have no previous experience with controllers, so I thought that if I'm going to run the Barracudas in raidz1 or raidz2 I'm going to need more disks (four is just enough for RAID-Z). I'm considering some good Samsung disks.
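
For reference, a four-disk RAID-Z pool would be created along these lines (tank and the device names are placeholders):

Code:
# zpool create tank raidz ad10 ad12 ad14 ad16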
 
A general rule with connecting SATA disks for use with ZFS is to use on-motherboard SATA ports first wherever possible. They provide the best bandwidth as in most cases they each have a dedicated path to the CPU and memory.

With a discrete controller card, all the disks will be sharing the bandwidth of that one slot. If it's a slower PCI slot, or PCIe 1x or 4x, it might not have enough bandwidth to let all the disks work at maximum speed.
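To put rough numbers on it: classic 32-bit/33MHz PCI tops out around 133MB/s shared across the whole bus, and a PCIe 1.x x1 slot at roughly 250MB/s, while a single modern disk can stream on the order of 80-100MB/s sequentially. Two or three busy disks behind such a card can therefore already saturate the link.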

What motherboard do you have?
 
I have an MSI 975X Platinum PowerUp Edition. I think a Supermicro board with PCI-X 133 would suit me better. On it I now have a Promise TX4 and two identical, fairly random Lycom 2-port SATA II PCIe x1 controllers.

I have to fix the MB HDD controller problem first. I will get back to this thread at a later stage.

I'm using this thread at forum-en.msi.
 