Yes, it is possible to do this, though it may involve booting from a live CD and using a partition editing package, such as pmagic, since generally you need to get contiguous new space onto the end of the existing ZFS partition.
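If the free space does end up directly after the ZFS partition, the grow itself can be done from FreeBSD without a live CD. A rough sketch, where the pool name "tank", the disk ada0, and partition index 3 are all placeholders for your actual layout:

```shell
# Hypothetical layout: pool "tank" lives in ada0p3, with free space
# immediately following it on the disk.
gpart resize -i 3 ada0        # grow partition 3 into the adjacent free space
zpool online -e tank ada0p3   # tell ZFS to expand the vdev into the new space

# Alternatively, set autoexpand beforehand so the pool grows on its own:
# zpool set autoexpand=on tank
```

Without the `-e` (or `autoexpand=on`), ZFS keeps the old vdev size even after the partition grows.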
I know the general practice has been partitions for root pools and whole disks for non-root pools. And, since GRUB is mentioned, it sounds like the first is the case. You need partitioning to have a place for GRUB to reside on the disk; I've never done GRUB for booting FreeBSD/ZFS, so I'm not sure what its exact requirements are.
I also recall that in the early days with Solaris, the disk write cache was disabled if ZFS was run on partitions rather than whole disks. But the issue was later resolved.
Not familiar with PC-BSD either, or this GUI tool. My only GUI-type ZFS management has been with the Oracle 7420 ZFS Appliance, and even then for some of the heavier ZFS operations, I've gone rogue and gone directly into 'shell' (such as to delete a whole bunch of stale snapshots, some over a year old, recovering more than 25TB of lost storage). But, if it only deals with whole disks, it sounds like by design it's not for root pool manipulation.
OTOH, it also sounds like you have the root pool on a drive that is not the boot drive. Which is something I know GRUB can do, but mainly as an alternative to figuring out the right key to hit at the right moment to get the BIOS to boot from CD instead.
That said, I have a FreeBSD workstation at work that I had originally set up with 4 drives for two mirrored sets: two new 1TB drives and two 500GB pulls. The pulls came from an old failed disk array, and the remaining 'good' drives were likely to fail soon, but it seemed we could get some use out of them before they were ground into dust and disposed of. And, they did last longer than the new 1TB drives (with only 1-year warranties), with which I went through a number of DOAs or early failures before convincing my boss to give me a pair of different-model 2TB drives (with 3-year warranties; they both failed within 3 months of each other, shortly after the warranty expired. At least I was able to get replacements in time, unlike at home where I didn't...)
So, back to the situation: I originally had 3 partitions — gmirror(8) for the swap and root partitions, and a ZFS data pool for sysutils/backuppc. When I moved to just a pair of 2TB drives, I combined all three, and shortly after that I converted the root partition to ZFS. It had seemed like a good idea to keep the backup data separated.
Well, it turns out two zpools on the same disks is a very poor idea, made worse because ZFS schedules I/O thinking it has the whole disk. So, I went through a process of detaching mirrors, editing partitions with gpart(8), and zfs send/receives, to eventually have a single zpool for most of the drive. I still kept the swap partition, though I have since un-mirrored it to have more swap space, due to demands from using ports-mgmt/poudriere.
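The cycle above can be sketched roughly as follows. All pool and device names (oldpool, newpool, ada0, ada1, partition indexes) are placeholders for illustration, not my actual layout:

```shell
# Drop one side of the mirror to free a drive for repartitioning.
zpool detach oldpool ada1p4

# Rework the freed drive's partitions into one large ZFS partition.
gpart delete -i 4 ada1
gpart resize -i 3 ada1

# Build the new single pool on the reworked drive and copy everything over.
zpool create newpool ada1p3
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F newpool

# Then destroy oldpool, repartition ada0 the same way, and re-mirror:
# zpool attach newpool ada1p3 ada0p3
```

While the copy runs you're on a single disk with no redundancy, so it's worth having a current backup before starting.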
I had tried to do the whole thing with minimal downtime/interruption, but ended up needing to boot from a thumb drive. Next time I might remember to do things differently, like breaking the mirror into a single drive rather than a degraded mirror; it wouldn't expand as the latter. Plus, at some point I had switched the drives, so drive 0 is labeled disk 1 and vice-versa...
The Dreamer