In theory: yes - with ZFS there is a possible way to do this.
In practice: no, because the implementation is broken.
Method:
We have a two-way mirror, and we add a third disk of the same size.
Now we can create a three-way raidz1 (ZFS's raid5) at half the intended size, and the data will exactly fit on it (a raidz1 over three half-size vdevs stores 2 × S/2 = S, the same as the mirror):
We break the mirror,
we leave the data on disk0,
we create a raidz from one half of disk1, the other half of disk1, and one half of disk2.
We copy the data onto that raidz. (Don't ask about the performance of that operation; it will probably be bad.)
Then we replace the vdev on the second half of disk1 with half of disk0 (this will also be quite slow).
Finally we grow the raidz to the intended size.
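The steps above could be sketched as zpool commands, roughly like the following. The pool names (tank, newpool) and device/partition names (ada0p1 etc.) are hypothetical, zfs send/recv is just one way to do the copy step, and the sketch only echoes each command (dry run) instead of executing it:

```shell
# Dry-run sketch of the mirror-to-raidz migration described above.
# Pool and device names are assumptions, not taken from the transcript below.
# run() only echoes; replace it with direct execution to actually apply a step.
run() { echo "+ $*"; }

run zpool detach tank ada1p1             # break the mirror; data stays on ada0p1
# (repartition ada1 into two half-size parts p1/p2, ada2 into p1 + free space)
run zpool create newpool raidz ada1p1 ada1p2 ada2p1   # three half-size vdevs
run zfs snapshot -r tank@move            # then copy the data over (slow)
echo "+ zfs send -R tank@move | zfs recv -F newpool"
run zpool destroy tank                   # free disk0, then repartition it
run zpool replace newpool ada1p2 ada0p1  # move the vdev off disk1's second half
# (gpart resize each partition, then expand the raidz to the intended size)
run zpool online -e newpool ada0p1
```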
I verified that ZFS can do this (FreeBSD 11.2):
Code:
root@edge:~ # gpart add -t freebsd-zfs -s 2097152 ada2
ada2p5 added
root@edge:~ # gpart add -t freebsd-zfs -s 2097152 -b 3882672936 ada2
ada2p6 added
root@edge:~ # gpart add -t freebsd-zfs -s 2097152 -b 3886867240 ada2
ada2p7 added
root@edge:~ # gpart show ada2
=>          40  5860533088  ada2  GPT  (2.7T)
    3878478632     2097152     5  freebsd-zfs  (1.0G)
    3880575784     2097152        - free -  (1.0G)
    3882672936     2097152     6  freebsd-zfs  (1.0G)
    3884770088     2097152        - free -  (1.0G)
    3886867240     2097152     7  freebsd-zfs  (1.0G)
    3888964392  1971568736        - free -  (940G)
root@edge:~ # zpool create xxx raidz ada2p5 ada2p6 ada2p7
root@edge:~ # zpool list xxx
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
xxx   2.75G   632K  2.75G        -         -     0%     0%  1.00x  ONLINE  -
root@edge:~ # gpart resize -i 5 -s 3145728 ada2
ada2p5 resized
root@edge:~ # gpart resize -i 6 -s 3145728 ada2
ada2p6 resized
root@edge:~ # gpart resize -i 7 -s 3145728 ada2
ada2p7 resized
root@edge:~ # gpart show ada2
=>          40  5860533088  ada2  GPT  (2.7T)
    3878478632     3145728     5  freebsd-zfs  (1.5G)
    3881624360     1048576        - free -  (512M)
    3882672936     3145728     6  freebsd-zfs  (1.5G)
    3885818664     1048576        - free -  (512M)
    3886867240     3145728     7  freebsd-zfs  (1.5G)
    3890012968  1970520160        - free -  (940G)
root@edge:~ # zpool set autoexpand=on xxx
root@edge:~ # zpool list xxx
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
xxx   2.75G   896K  2.75G        -         -     0%     0%  1.00x  ONLINE  -
root@edge:~ # zpool online xxx ada2p5
root@edge:~ # zpool list xxx
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
xxx   4.25G   992K  4.25G        -         -     0%     0%  1.00x  ONLINE  -
root@edge:~ # zpool status xxx
  pool: xxx
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        xxx         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada2p5  ONLINE       0     0     0
            ada2p6  ONLINE       0     0     0
            ada2p7  ONLINE       0     0     0

errors: No known data errors
Voilà: raidz created on the same disk, and expanded.
Now for the downside: I intentionally left half of the space free between the volumes. Had I not done that, i.e. had I made the volumes adjacent to each other, like so:
Code:
root@edge:~ # gpart show ada2
=>          40  5860533088  ada2  GPT  (2.7T)
    3878478632     4194304     5  freebsd-zfs  (2.0G)
    3882672936     4194304     6  freebsd-zfs  (2.0G)
    3886867240     4194304     7  freebsd-zfs  (2.0G)
    3891061544  1969471584        - free -  (939G)
and had I then tried to expand that raid, the ZFS would have become unreadable and I would have gotten a very reproducible kernel crash. (Reproducible with a SATA drive and with a USB stick, on both amd64 and i386. Actually, that was what I tried first, because I thought it would either work or not work. In fact, it could work, but the implementation is broken.)
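The offset arithmetic behind the safe (gapped) layout can be checked quickly. The numbers are taken from the gpart transcripts above: each partition starts one full final-size apart, but is created at half that size, leaving a half-size gap to grow into:

```shell
# Offsets from the transcripts above: partitions spaced FINAL sectors apart,
# each created at FINAL/2 so it can later be resized in place.
START=3878478632   # start of partition 5
FINAL=4194304      # intended final size in sectors (2.0G)
HALF=$((FINAL / 2))
for i in 0 1 2; do
  echo "partition $((i + 5)): start $((START + i * FINAL)), initial size $HALF, final size $FINAL"
done
```

This reproduces the starts 3878478632, 3882672936, and 3886867240 and the initial size 2097152 seen in the first gpart show output.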