[Solved] Encrypted ZFS pool stuck offline

I have four disks in a RAIDZ with geli encryption, currently running FreeNAS 11.2. I'm posting here because their forum doesn't have a great reputation.
One of the disks has been having issues, so I decided to pull it and run a quick test to verify things.
Without thinking, I offlined the disk, pulled it out, and ran my test on a desktop computer.
Long story short, in my frustration I seem to have taken the wrong disk offline, and the bad disk is not responding, so I have insufficient replicas and the pool cannot be imported.

When I run zpool online -e pool disk, it says there is no such pool, and when I try to attach the disks via geli and import the pool, it fails because of the offline disk.
This is probably way more info than is needed when the answer is likely as simple as a one-line command; nevertheless, I would rather give you too much info than not enough.

root@freenas:~ # zpool online -e main 6091098960124744190
cannot open 'main': no such pool

root@freenas:~ # zpool import
   pool: main
     id: 6634276799256796139
  state: UNAVAIL
 status: One or more devices are missing from the system.
 action: The pool cannot be imported. Attach the missing
        devices and try again.

        main                                                UNAVAIL  insufficient replicas
          raidz1-0                                          UNAVAIL  insufficient replicas
            gptid/c6059365-d282-11e8-8f9f-408d5c225151.eli  ONLINE
            89534515526865739                               UNAVAIL  cannot open
            ada3                                            ONLINE
            6091098960124744190                             OFFLINE

This is /var/log/messages during an attempted mount:
Jan 21 20:30:35 freenas ZFS: vdev state changed, pool_guid=6634276799256796139 vdev_guid=122933954024557224
Jan 21 20:30:35 freenas ZFS: vdev state changed, pool_guid=6634276799256796139 vdev_guid=89534515526865739
Jan 21 20:30:35 freenas ZFS: vdev state changed, pool_guid=6634276799256796139 vdev_guid=17847965742998905670
Jan 21 20:30:35 freenas uwsgi: [middleware.notifier:2026] Importing main [6634276799256796139] failed with: cannot import 'main': one or more devices is currently unavailable
Jan 21 20:30:35 freenas uwsgi: [middleware.exceptions:36] [MiddlewareError: Volume could not be imported]
Jan 21 20:30:35 freenas ZFS: vdev state changed, pool_guid=6634276799256796139 vdev_guid=6091098960124744190
Jan 21 20:30:35 freenas GEOM_ELI: Device gptid/c6059365-d282-11e8-8f9f-408d5c225151.eli created.
Jan 21 20:30:35 freenas GEOM_ELI: Encryption: AES-XTS 256
Jan 21 20:30:35 freenas GEOM_ELI:     Crypto: hardware
You have an odd mix of encrypted and unencrypted devices in the pool. Your ada3 is unencrypted and gptid/c6059365-d282-11e8-8f9f-408d5c225151 is encrypted. Looking at the rest of the configuration, it looks like only one of the four disks is actually encrypted.
You know, that doesn't even surprise me at this point. I set up encryption quite a while ago through the FreeNAS web GUI, and I have been trying to figure out why the disks are all labeled like that. I very much look forward to ditching FreeNAS and moving to plain FreeBSD.
Maybe something got messed up during previous drive replacements? Forgetting to encrypt the new disk first and simply replacing a dead encrypted drive with an unencrypted one is probably the most plausible explanation I can think of. You can easily make that mistake on plain FreeBSD too.
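If you want to confirm which providers are actually encrypted, something like this should show it. This is just a sketch; the partition name in the last command is a guess based on your pool listing, so substitute your own devices.

```shell
# List active GELI providers -- only encrypted devices appear here:
geli status

# Show the gptid labels so you can match partitions to the names
# in the zpool output:
glabel status

# Dumping GELI metadata from a provider you suspect is unencrypted
# should fail, since no geli metadata was ever written to it
# (ada3 here is taken from your pool listing):
geli dump /dev/ada3
```

If geli dump succeeds on a device, it was initialized for encryption; if it errors out, that disk was added to the pool raw.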
OK, so after many man pages, more documentation reading, and talking this through with someone more creative than I am, I have a solution.
What I ended up doing was re-downloading all my encryption keys and such so I had a backup of my backup of my backup, and then I exported the pool.
I crossed my fingers and ran an import, told it about all the encryption, gave it the key, and it didn't throw an error. I mounted the pool and all my data was there.
I checked zpool status and everything was online except that one disk, ran the online command, and voila:

root@freenas:~ # zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:24 with 0 errors on Mon Jan 21 03:45:24 2019

        NAME        STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          ada1p2    ONLINE       0     0     0

errors: No known data errors

  pool: main
 state: ONLINE
  scan: resilvered 7.36M in 0 days 00:00:01 with 0 errors on Sat Jan 26 00:10:09 2019

        NAME                                                STATE     READ WRITE CKSUM
        main                                                ONLINE       0     0     0
          raidz1-0                                          ONLINE       0     0     0
            gptid/c6059365-d282-11e8-8f9f-408d5c225151.eli  ONLINE       0     0     0
            gptid/c712eee9-d282-11e8-8f9f-408d5c225151.eli  ONLINE       0     0     0
            ada3                                            ONLINE       0     0     0
            gptid/d752bb0e-d282-11e8-8f9f-408d5c225151.eli  ONLINE       0     0     0

errors: No known data errors

The only thing that jumps out at me as not working correctly is that my jails are complaining about their mount points. That will be simple to fix, but I thought I would leave this here for posterity's sake and not be the guy who asks for help, fixes it, and then just says "it's fixed" without ever posting the solution.
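For anyone finding this later, here is roughly the CLI equivalent of what I did through the GUI. The keyfile path is an example, not my real one, and this assumes a key-only geli setup with no passphrase (the -p flag); the gptid names and the vdev guid are taken from the output earlier in the thread.

```shell
# Export the half-imported pool first:
zpool export main

# Attach each encrypted provider with the pool's geli key
# (example keyfile path -- use wherever you saved your key):
geli attach -p -k /path/to/pool_main_encryption.key \
    gptid/c712eee9-d282-11e8-8f9f-408d5c225151
geli attach -p -k /path/to/pool_main_encryption.key \
    gptid/d752bb0e-d282-11e8-8f9f-408d5c225151

# Re-import the pool now that the .eli devices exist, then bring
# the disk I mistakenly offlined back into the pool:
zpool import main
zpool online main 6091098960124744190
```

The key point is ordering: geli attach has to create the .eli devices before zpool import can see enough replicas, and zpool online only works once the pool is actually imported, which is why my first attempt at zpool online said "no such pool".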