ZFS Multipathing an existing ZFS Pool?

I have an existing ZFS pool consisting of 5 x RAIDZ2 VDEVs (6 drives per VDEV, 30 drives total). The pool is run off a single LSI 9211-8i HBA, connected to 2x Supermicro SAS2 backplanes. I'd like to add a second HBA, both for redundancy and to increase performance during scrubs and when moving large files between zvols.

The bare drives were formatted with:

gpart add -t freebsd-zfs -a 1m -l "<unique label>" da__

and added to the ZPool via that GPT partition label (i.e. the GPT partition itself, not the whole disk).
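For reference, creating a pool from GPT labels set up this way looks roughly like the following. This is a hedged sketch — the pool name, label names, and device numbers here are hypothetical, not taken from the actual system:

```shell
# Partition each disk with a 1 MiB-aligned freebsd-zfs partition
# and a unique GPT label (labels/devices are examples):
gpart add -t freebsd-zfs -a 1m -l bay01 da0
gpart add -t freebsd-zfs -a 1m -l bay02 da1

# Build the vdev from the GPT labels, not the raw da devices,
# so the pool survives device renumbering:
zpool create tank raidz2 gpt/bay01 gpt/bay02 # ... remaining labels

# Verify which label maps to which disk:
gpart show -l
```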

My question has to do with setting up multipath. From my reading, it seems like using gmultipath to add labels to each drive is the preferred way. However, is it possible to add multipath labels to the already existing drives, or will doing so overwrite part of the partition and corrupt the pool?

My understanding is that it's possible to write a script that manually sets up multipath on each boot, but that seems like a headache waiting to happen.
Are you certain you're limited by the HBA? IIRC gmultipath stores its metadata in the last sector of the drive. The clean way would be to back up your ZFS pool, relabel the drives, and restore from the backup; that requires enough spare storage and time to be a nuisance. If there is free space in the partition table behind the ZFS partition, you could label and repartition the drives in place, at your own risk.
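To see whether that last sector is actually free, you can inspect the partition layout of each disk. A minimal sketch, assuming the disks are da0 through da29 (device names are examples):

```shell
# gmultipath writes its label to the LAST sector of the provider.
# Check whether any free space remains after the freebsd-zfs
# partition on each disk:
for disk in da0 da1; do
    echo "== ${disk} =="
    gpart show "${disk}"
done
# If "gpart show" reports no free range after the ZFS partition,
# labeling the whole disk would clobber in-use sectors.
```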
Be careful with writing a "gmultipath label" directly to the disk: the label is written to the same location as the secondary (backup) GPT table. ZFS also keeps metadata in that region when the whole disk is used as a vdev. I tripped over this while testing an FC multipath setup with 10.3 a few months ago.
Setting up the multipath by hand (without on-disk labels), or writing the labels to the GPT partitions instead of the whole disks, worked without these collisions.
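The manual (label-less) approach writes no metadata at all, so nothing can collide with GPT or ZFS. A rough sketch — the device pairs and multipath name below are placeholders, and you must verify which providers are actually the same physical disk (e.g. by serial number) before pairing them:

```shell
# Confirm da0 and da15 are two paths to the same physical disk
# by comparing serial numbers:
camcontrol inquiry da0 -S
camcontrol inquiry da15 -S

# Create the multipath node by hand; unlike "gmultipath label",
# "create" keeps the configuration in memory only:
gmultipath create disk01 da0 da15

# The pool's GPT partition is then visible via the multipath
# device (e.g. /dev/multipath/disk01p1) and its gpt/ label.
gmultipath status
```

Because "gmultipath create" is not persistent, these commands would have to be re-run at boot (e.g. from an rc script) before the pool is imported — which is exactly the boot-time script the original poster was hoping to avoid.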