[Solved] Please help, server no longer boots

Hi All,

I have been running FreeBSD 11.1-STABLE with no problems for over a year now.
Today I experienced a strange problem. I had ssh'd into the server to move some files, then logged out afterwards. When I later tried to ssh in again I couldn't: I was repeatedly asked for the password, which wasn't accepted.
Hooked up my monitor and keyboard to the server, but the screen was blank.
I attempted a root login, then typed
Bash:
shutdown -h now
blind. The server didn't power off, so I was forced to do a hard power-off using the power button.

On reboot I got the following error:

Code:
Can't find /boot/zfsloader

FreeBSD/x86 boot
Default: zroot/Root/default:/boot/kernel/kernel
boot:
|
Can't find /boot/kernel/kernel
Default: zroot/Root/default:/boot/kernel/kernel
boot:

I have booted the server in single-user mode using a live USB with FreeBSD 11.1-STABLE on it.
Bash:
zpool import
shows

Code:
    pool: storage
    id: <.........>
    state: ONLINE
    action: <.........>
    config:
        storage
        mirror-0
            diskid/DISK-WD-WCC................    ONLINE
            diskid/DISK-WD-WCC................    ONLINE
        mirror-1
            diskid/DISK-WD-WCC................    ONLINE
            diskid/DISK-WD-WCC................    ONLINE
  
    pool: zroot
    id: <.........>
    state: ONLINE
    action: <.........>
    config:
        zroot                                ONLINE
        nvd0p3

nvd0 = Intel NVMe SSD -> zfs filesystem = zroot
WD-WCC= Western Digital HDDs -> mirrored "raid" holding data = storage

I have imported zroot without mounting it and then exported it again. I have also imported zroot with mounting, listed its datasets without error, and exported it again.

So, it would be great to fix the boot problem, but if not, my priority is to check storage is intact.

Question 1. How to get the server to boot?
Question 2. If Question 1 can't be solved, how can I check and scrub storage using the live USB, then export it so I can re-import it after reinstalling FreeBSD to the NVMe SSD?
Question 3. How can I mount zroot and copy files to storage (assuming storage has no problems)?

Hoping someone has some advice that can help me resolve the situation.
 
I have been running FreeBSD 11.1-STABLE with no problems for over a year now.
Or so you think; problems like these do not appear out of nowhere. Still: why insist on running STABLE (a development snapshot) instead of RELEASE, which is pretty much the officially supported version for production use?

For all I know your problem could be caused by a bug in the STABLE branch.

Today I experienced a strange problem. I had ssh'd into the server to move some files, then logged out afterwards. When I later tried to ssh in again I couldn't: I was repeatedly asked for the password, which wasn't accepted.
Time to check for rootkits I guess. security/rkhunter is pretty decent for that.

Question 1. How to get the server to boot?
What does lsdev tell you when used in the boot menu? Also, can you share the output of gpart list? If that becomes a huge list: do you even have a partition of type freebsd-boot at all?

As I said: problems like these do not "just" happen. Anyway, how to boot your server...

  • Boot using your rescue setup, press escape in the boot menu itself to drop down to the prompt, run lsdev and make sure it lists your ZFS pool(s) and your boot slice. If it doesn't, you've got an issue.
  • Use: unload to unload everything, then:
    • load /boot/kernel/kernel
    • load /boot/kernel/opensolaris.ko
    • load /boot/kernel/zfs.ko
  • Now that we've primed our kernel, determine the partition which contains your ZFS data (zroot): something like disk3p2 (I obviously don't know your exact layout). Then:
    • set currdev="disk3p2" (fill in your own correct setup).
    • set vfs.root.mountfrom="zfs:zroot"
  • This should be enough; you can then follow up with boot -s or boot.
When you've entered the right values (read: the right disk specification) the kernel should start doing its thing, mount your ZFS 'root pool' and continue as normal. So basically you've booted your system using an external kernel. Easy.
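To put those steps in one place, a rough sketch of the whole loader-prompt session might look like this (disk3p2 is only a placeholder; substitute whatever lsdev reports for your ZFS partition):

Code:
unload                               # drop everything the loader pre-loaded
load /boot/kernel/kernel             # the kernel itself
load /boot/kernel/opensolaris.ko     # ZFS needs this module first
load /boot/kernel/zfs.ko             # then the ZFS module
set currdev="disk3p2"                # placeholder: the partition holding zroot
set vfs.root.mountfrom="zfs:zroot"   # tell the kernel which pool to mount as root
boot -s                              # or plain 'boot' for a normal multi-user start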

Question 3. How can I mount zroot and copy files to storage (assuming storage has no problems)?
Same way you mounted it already? I thought you said you imported it, mounted it and... Oh well.

Easy: # zpool import -fR /mnt zroot. This imports zroot and sets the temporary mountpoint (altroot) to /mnt. You may get some error messages about the filesystem being read-only, but that isn't much of a problem; it doesn't stop you from mounting it.

Warning though: in order to cater to beadm, your ZFS root doesn't mount itself automatically. If you used the installer to set the whole thing up automatically then the canmount property is off, so you need to mount it manually. Not that hard; look into zfs mount.
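As a minimal sketch of that import-and-mount dance from a rescue shell (the dataset name zroot/ROOT/default is the installer's usual default and is an assumption here; adjust it to whatever zfs list shows):

Code:
zpool import -fR /mnt zroot                       # force-import with altroot set to /mnt
zfs list -r zroot                                 # confirm the datasets are all visible
zfs get canmount,mountpoint zroot/ROOT/default    # the root dataset won't mount by itself
zfs mount zroot/ROOT/default                      # so mount it manually
zfs mount -a                                      # then mount the remaining datasets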

Most of all: find out what caused this. Once again: problems like these don't "just" happen. And I'd definitely reconsider running STABLE if I were you.

The FreeBSD forums don't support STABLE, but since this doesn't seem STABLE-specific I'll bite..
That's incorrect. CURRENT is the 'ahead of time' version which isn't supported and for which questions are best asked on the mailing list; STABLE is allowed because it's less bleeding edge.
 
Hello,

@ShelLuser

Thank you for replying.

I run STABLE because 11.1 RELEASE did not support my network card properly and 11.1 STABLE provides networking that works for me.

I agree with you that these things don't happen without a reason. After I can access my files and data offline and have taken backups, I will take your advice, wipe the NVMe SSD, reinstall 11.2-RELEASE and see if my network functions as well as it did under 11.1-STABLE.

I am somewhat new to FreeBSD, so my goal is to access my WD-WCC disks and data safely. With this in mind I'll try to get the information you asked for and take slow, confirmed steps to make sure I run the commands correctly.

I rebooted using the 11.1-STABLE memstick.img and chose option 3, Escape to loader prompt.

lsdev output:

Code:
  disk devices:
    disk0 : BIOS drive C (......... X 512) -> probably usb rescue memstick.img
      disk0p1 : EFI
      disk0p2 : FreeBSD boot
      disk0p3 : FreeBSD UFS
      disk0p4 : swap
    disk1: BIOS drive D (......... X 512) -> probably Western Digital HDD
    disk2: BIOS drive E (......... X 512) -> probably Western Digital HDD
    disk3: BIOS drive F (......... X 512) -> probably Western Digital HDD
    disk4: BIOS drive G (......... X 512) -> probably Western Digital HDD
    disk5: BIOS drive H (......... X 512) -> probably NVMe SSD
      disk5p1 :  FreeBSD boot
      disk5p2 :  FreeBSD swap
      disk5p3 :  FreeBSD ZFS

Output of gpart list
Code:
gpart not found

So I'm guessing that disk5 is the NVMe SSD? This would mean after unloading kernel modules I would use your commands as follows?

set currdev="disk5p3"
set vfs.root.mountfrom="zfs:zroot"


Many thanks for all your help it is very much appreciated.
 
Server didn't poweroff.
With shutdown(8) there's a difference between -h and -p.

  • -h - halts the server. You'll see a message that the machine can be powered off, but it stays powered on, i.e. you have to physically turn it off.
  • -p - halts the server and issues an ACPI power-off command, i.e. the server powers itself off once the shutdown is completed.
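For example, all three forms below are standard shutdown(8) invocations:

Code:
shutdown -h now    # halt only; the machine stays powered on until you hit the switch
shutdown -p now    # halt and then power off via ACPI
shutdown -r now    # halt and reboot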
 
I see NVMe and wanted to let you know that FreeBSD 11.2 introduced problems for me.
I use XG3 Toshiba and it worked great.
But with FreeBSD 11.2 if I issue a reboot command the system reboots,
but the NVMe gets knocked offline and the BIOS cannot see it.
If I use shutdown -p now all is fine. I hit the button and fire it back up.
I also tried shutdown -r obviously.

Knocking the device out of the BIOS is pretty harsh; I have never seen that behavior before.
I am guessing the camcontrol level or perhaps the nvmecontrol stuff must have changed.
No data corruption on my filesystem so minor nuisance for me.
 
I see NVMe and wanted to let you know that FreeBSD 11.2 introduced problems for me.
I use XG3 Toshiba and it worked great.
But with FreeBSD 11.2 if I issue a reboot command the system reboots,
but the NVMe gets knocked offline and the BIOS cannot see it.
If I use shutdown -p now all is fine. I hit the button and fire it back up.
I also tried shutdown -r obviously.

Knocking the device out of the BIOS is pretty harsh; I have never seen that behavior before.
I am guessing the camcontrol level or perhaps the nvmecontrol stuff must have changed.
No data corruption on my filesystem so minor nuisance for me.

Once the SSD was knocked out of the BIOS, how did you fix that? A simple power cycle: power off using the button, then power on using the button?
 
Yes, a soft off; Ctrl+Alt+Del doesn't refresh it either. Hitting the switch on my box is a 2-minute affair because of the BMC.

I was just thinking I could probably use a firmware check.
Never flashed my NVMe before.

There are also issues with Samsung 961 series from other user reports.
 
I run STABLE because 11.1 RELEASE did not support my network card properly and 11.1 STABLE provides networking that works for me.
Makes perfect sense. Yeah, there are definitely situations where STABLE can be a necessity (which is probably also the reason why it's supported on these forums); just always heed the (possible) caveats.

I rebooted using the 11.1-STABLE memstick.img and chose option 3, Escape to loader prompt.

lsdev output:

Code:
  disk devices:
    disk0 : BIOS drive C (......... X 512) -> probably usb rescue memstick.img
      disk0p1 : EFI
      disk0p2 : FreeBSD boot
      disk0p3 : FreeBSD UFS
      disk0p4 : swap
    disk1: BIOS drive D (......... X 512) -> probably Western Digital HDD
    disk2: BIOS drive E (......... X 512) -> probably Western Digital HDD
    disk3: BIOS drive F (......... X 512) -> probably Western Digital HDD
    disk4: BIOS drive G (......... X 512) -> probably Western Digital HDD
    disk5: BIOS drive H (......... X 512) -> probably NVMe SSD
      disk5p1 :  FreeBSD boot
      disk5p2 :  FreeBSD swap
      disk5p3 :  FreeBSD ZFS

Output of gpart list
Code:
gpart not found
OK, I could have explained gpart better: that is not a boot menu command but a regular command available in FreeBSD. So save that for later; we might need it to bootstrap your setup (so that you can boot normally again).

First things first: disk5p3 is the magic key here ;) So those commands I mentioned above? Replace my disk3p2 placeholder with disk5p3 and you should be able to boot just fine. Warning: those load commands are mandatory, otherwise your kernel can't access your ZFS filesystem.

(Basically we're manually performing the tasks otherwise carried out through /boot/loader.conf.)

So I'm guessing that disk5 is the NVMe SSD? This would mean after unloading kernel modules I would use your commands as follows?

set currdev="disk5p3"
set vfs.root.mountfrom="zfs:zroot"
That is correct. Be careful here; don't be lazy and think that you can use spaces or such. Use the commands exactly as I wrote them, including the quotes and everything. That should do it.

After you've booted the server we can go to step 2: bootstrapping. For that we need the output of gpart, though you can start with # zpool status -v zroot; that should also give us the device names and some optional extra info.
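For reference, a sketch of what that bootstrapping step will roughly involve once you're booted (the device name nvd0 and the partition index 1 are assumptions based on lsdev showing disk5p1 as the FreeBSD boot partition; verify them with gpart first):

Code:
gpart show nvd0                                              # confirm the partition layout and index
zpool status -v zroot                                        # confirm which device the pool lives on
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 nvd0   # rewrite the GPT boot code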
 
ShelLuser

I have a problem. When I am at the command line in the loader prompt, I need to load a UK keymap as it looks like the default keymap is US.
I can't find the " (double quote) character. Is it possible to change the keyboard layout while in the loader prompt?

Edit: I can find a double quote character but it looks a bit strange to me: it looks like the US-International substitute for ", which is kind of slanted. Reminds me of the matching quotes you find in some books.

OK, I tried the following commands:
Code:
unload
load /boot/kernel/kernel
load /boot/kernel/opensolaris.ko
load /boot/kernel/zfs.ko
set currdev="disk5p3"
set vfs.root.mountfrom="zfs:zroot"
boot -s


I verified each variable using the show command; setting them with and without quotes returned the same contents for the currdev and vfs.root.mountfrom variables.

So the server boots, then after some lines it tries to mount ZFS, but the screen goes black, the server reboots, and I'm back to the original error message. Not sure what to do about this.
 
I have a problem. When I am at the command line in the loader prompt, I need to load a UK keymap as it looks like the default keymap is US.
That's correct. Unfortunately I have no idea how to change that; quite frankly I'm not even sure you actually can do this within the boot menu. /boot/defaults/loader.conf doesn't contain much information about it.

But there's always a picture of a US keyboard to be found on the Net.

So the server boots, then after some lines it tries to mount ZFS, but the screen goes black, the server reboots, and I'm back to the original error message.
Well, that means your problems are definitely bigger than they initially looked. It would appear that something seriously corrupted your ZFS pool somehow.

ZFS is a decently robust filesystem, but once it gets corrupted you'll have a hard time fixing it again, and that even assumes it can be fixed at all; sometimes it can't. Unfortunately ZFS doesn't have tools such as fsck (there is of course # zpool scrub).

First I'd suggest you check the validity of the pool. Start a rescue system and then try importing it: # zpool import -fR /mnt zroot. That will probably give an error message about a read-only filesystem, but check what it did using zfs list.

The error (if you got one) is triggered because your root filesystem doesn't mount automatically. They set that up to cater to beadm (which I still think is utterly stupid; it could have been set up much cleaner). As a result your root filesystem doesn't mount, while the other filesystems do get mounted automatically, but their mountpoints don't exist under /mnt. The error message is triggered because zfs will, by default, try to create the mountpoints automatically (which obviously fails).

So the next step: zfs mount zroot/ROOT/default (or something close enough; I don't rely on the automated installer and I don't really bother trying to remember the naming scheme it uses ;)).

Once that succeeds you can mount everything else with # zfs mount -a, and after that you should have full access to your system through /mnt.

From this point I'd recommend trying # zpool scrub zroot, but I'm not too confident it will do you much good. If the system isn't too big and you have enough room on storage then you might even want to consider doing a re-install: back up your system onto storage, wipe out the pool and re-create it, after which you restore everything.
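If it does come to a backup-and-reinstall, one possible way to stage a full copy onto the storage pool is a recursive snapshot plus send/receive (the snapshot and target dataset names here are arbitrary examples):

Code:
zfs snapshot -r zroot@rescue                                     # recursive snapshot of everything on zroot
zfs send -R zroot@rescue | zfs receive -u storage/zroot-backup   # replicate it into the storage pool
# ...or simply copy the directories you care about by hand once zroot is mounted under /mnt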

Still, one step at a time; let's see if you can actually access your pool.
 
ShelLuser

So I booted into Single User mode using a USB flash stick with 11.1 STABLE on it earlier this afternoon.

Here's what I accomplished:
I successfully mounted a FAT32 USB hard drive to /media then imported zroot.
I then scrubbed zroot which completed with no errors.
I then mounted the datasets using the same commands you provided above.
This allowed me to copy all the data from zroot including my iocage jails over to the external USB hard drive.

I am really only interested in configuration files from the borked system. I have checked the copies on the external USB drives and everything seems to be there. At this point I think I will wipe the NVMe SSD as I think it is more hassle to fix the boot problems than to install 11.2 RELEASE from scratch.

So, now this leads me to the main question. Should I perform any operations on the zpool storage using a live USB stick? At this point I have not imported storage in a live USB environment.
Is there any difference between importing a zpool in a rescue environment and importing it into a live system? Is one safer than the other?
Would it be useful to perform a scrub on storage?
I know a scrub will take a long time as there is around 3TB of data.

Any other advice or thoughts about this situation?
 
This allowed me to copy all the data from zroot including my iocage jails over to the external USB hard drive.

I am really only interested in configuration files from the borked system.
At minimum that would be /etc, /usr/local/etc, /var (I usually grab that entirely; better safe than sorry) and optionally any home directories you maintain.
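A quick way to grab those from the rescue environment, assuming zroot is mounted under /mnt and the USB drive on /media as described earlier (the archive name is arbitrary):

Code:
tar -czf /media/zroot-config-backup.tgz \
    -C /mnt etc usr/local/etc var      # add usr/home (or home) if you keep home directories there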

At this point I think I will wipe the NVMe SSD as I think it is more hassle to fix the boot problems than to install 11.2 RELEASE from scratch.
Smart thinking. I fully agree; it isn't even certain that this problem can be fixed. If you have the storage capacity then this is definitely the right idea.

Tip: test your network card on 11.2 before you proceed with the installation. So start the live system and see if it can use your NIC. I figure even running ifconfig on the command line should be enough to check for that.
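Something like this from the 11.2 live shell should tell you quickly enough (em0 is just a placeholder for whatever your NIC turns out to be):

Code:
ifconfig                 # does the card show up with a driver attached?
ifconfig em0 up          # em0 is a placeholder interface name
dhclient em0             # assumes DHCP on your LAN
ping -c 3 8.8.8.8        # confirm traffic actually flows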

So, now this leads me to the main question. Should I perform any operations on the zpool storage using a live USB stick? At this point I have not imported storage in a live USB environment.
I'd recommend against that for now. When you have your live system back you'll also have more tools at your disposal, but most of all you can perform certain tasks in the background. For example: while you're scrubbing storage you can easily perform other tasks at the same time.

So my suggestion would be to leave storage alone for now, concentrate on getting your system back up and running, and then focus your attention on storage. I definitely suggest scrubbing the pool at that point, just in case.

Is there any difference between importing a zpool in a rescue environment and importing it into a live system? Is one safer than the other?
That depends on the live system ;)

Technically there is no difference. Using a rescue system can be safer, but that heavily depends on your live system, hence my snippy comment above ;) See: if your live system is set up to actively use the pool (for example through a Samba share or such) then it would be safer ("cleaner") to perform rescue/analysis operations from a rescue system. Of course, if you're sure it's only Samba using the pool then you can just as easily turn Samba off for the time being (or remove the share; you get the idea).

So, focusing on storage: it won't make much difference whether you check it using your rescue setup or later on from your live system.

Would it be useful to perform a scrub on storage?
I know a scrub will take a long time as there is around 3TB of data.
Definitely.

Because something caused this mess in the first place, and right now you have no idea what that could have been. So yes, you should definitely check storage when you get the opportunity.

Any other advice or thoughts about this situation?
Once you have your live system set up, don't get over-excited; approach storage carefully. I obviously don't know where you normally mounted it, but for starters I'd use /mnt as a temporary space to avoid services trying to access it.

So after your server is back up and running: # zpool import -fR /mnt storage, then check the pool using zfs list -r storage, and when everything checks out (and you can access your data under /mnt) I'd start a scrub.
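Put together, that check could look something like this (all standard zpool/zfs commands, using /mnt as the temporary altroot as above):

Code:
zpool import -fR /mnt storage    # import with a temporary altroot
zfs list -r storage              # are all datasets and mountpoints there?
ls /mnt                          # spot-check the data itself
zpool scrub storage              # then kick off the scrub
zpool status storage             # re-run this now and then to watch the scrub's progress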

Hope this can help!
 
I will test my network card on 11.2 as you suggest.

If all goes well with that, I think I will disconnect the 4 Western Digital hard drives from the motherboard, then wipe the NVMe SSD and install 11.2-RELEASE to it. Do you think that is a good idea? Maybe it's safer to disconnect the HDDs for the reinstallation and reconnect them later after the new system is online? Would I need to ensure that each drive is reconnected to the same SATA port it was connected to before?

Originally I mounted storage at /. I will take your advice and mount the zpool on /mnt/storage once my system is back online.

Thanks for all your advice and help.
 
If all goes well with that, I think I will disconnect the 4 Western Digital hard drives from the motherboard, then wipe the NVMe SSD and install 11.2-RELEASE to it. Do you think that is a good idea?
Most definitely. That will ensure that neither you nor the installer can make a mistake and accidentally wipe out your storage pool. Smart thinking!

Would I need to ensure that each drive is reconnected to the same SATA port it was connected to before?
Good question, I'm not sure (hardware has never been my strongest point) ;)

So for what it's worth: as far as I know, disks are identified based on their GUID, so it should not make much difference which device node they end up on. But having said that... there's also such a thing as device ordering, so if possible it might be best to play it safe and mark them somehow.
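If you want to double-check which physical drive ended up on which device node after reconnecting, a few standard commands are enough (ada0 below is just an example device):

Code:
geom disk list                   # model and serial number ("ident") for every disk
zpool status storage             # which device nodes the pool picked up
zdb -l /dev/ada0 | grep -i guid  # the per-vdev GUID ZFS uses internally to identify the disk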

Do me a favor and let us know how this worked out for you. I'm really hoping for the best here!
 
You don't need to know what disk is plugged in where. On FreeBSD, you don't need to know any IDs either.
Which is nonsense, because as soon as a disk fails it will only be the ID which allows you to detach it from your pool. Another example: press escape in the boot menu, then enter the command lsdev. Heck, just check this post (I'm assuming you didn't bother to read the thread): notice how the boot loader uses disk5p3 to identify the boot device?

How do you think it got that sequence? From the 4 disks before this one? Because IDs matter.

(Edit) Of course the key is how those IDs get assigned. But that's the important detail you're leaving out of your story.
 
ShelLuser

Today I wiped the NVMe SSD and reinstalled FreeBSD 11.2-RELEASE, which works with my network card. The server boots and shuts down with no problems. I'll spend the rest of today configuring and securing my server. Tomorrow I'll look at storage and let you know what I find.
 
ShelLuser

Today I reconnected the 4 WD HDDs, booted the server and imported storage to /mnt as you suggested. After a quick look into /mnt it would seem that the data is present.

Here is the output of sudo zpool status storage:
Code:
  pool: storage
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 484K in 4h18m with 0 errors on Mon Feb 19 20:33:59 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

It would seem that a zpool upgrade is being recommended. Should this be done before a zpool scrub is performed?
What would your advice be?
 
First of all: thanks for sharing. It's always nice to see that help gets picked up and used appropriately, eventually leading to an actual solution (that's what it's all about, after all!).

Today I reconnected the 4 WD HDDs, booted the server and imported storage to /mnt as you suggested. After a quick look into /mnt it would seem that the data is present.
Nice! I wasn't really expecting otherwise, but it's always a relief to see confirmation.

It would seem that a zpool upgrade is being recommended. Should this be done before a zpool scrub is performed?
What would your advice be?
I wouldn't take chances. ZFS can easily use "lower-tiered" pools (ones with an older feature set) without problems, and there's still the risk that something could be at fault with the pool.

Now, upgrading the pool isn't normally that intensive; as the message says, it's merely about a few features. Even so my recommendation, just to be safe, would be to scrub first and upgrade afterwards: if there is still a hiccup somewhere then the scrub will find and fix it, and after that you can be sure that nothing can block or disrupt your pool upgrade.

So: scrub now, upgrade later.

For the record: I don't really foresee any issues, but it's best to be safe, especially with data you care about.
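In command form, that order of operations would be roughly:

Code:
zpool scrub storage     # start the scrub
zpool status storage    # watch the "scan:" line until it reports completion with 0 errors
zpool upgrade storage   # only then enable the newer pool features
zpool status storage    # the feature warning in the status output should now be gone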
 
So: scrub now, upgrade later.

For the record: I don't really foresee any issues, but it's best to be safe, especially with data you care about.

Sounds like good advice to me.
sudo zpool scrub storage initiated. Only ~500hrs to go!

I will continue reinstalling the software that was previously on the server. I will drop into this thread again (in 20 days!) to let you know how the scrub went.
I would like to thank you, ShelLuser, for your kind and informative help with this problem.
 
Wow, yikes, you should really scrub more often than this. If you have a disk die in those 500 hours there's a decent chance you're hosed. That's a common way to lose data with ZFS.

If you check how I have set up the storage pool you will see that I can lose one disk and still keep my data. I could even lose one disk from each mirror (two disks, in the right combination) and still have the pool; it would take losing both disks of the same mirror (two disks, in the worst combination) to lose the storage pool.

Update
Code:
  pool: storage
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub in progress since Sun Aug  5 21:45:26 2018
        1.18T scanned out of 3.91T at 195M/s, 4h5m to go
        256K repaired, 30.21% done
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

So hopefully it won't take much longer to scrub storage.
 
nihr43 I understand what you are saying and you are correct to point out the risks; it all comes down to risk.
I have 2 spare disks, 1 offsite hard copy of essential data and encrypted essential data stored in the cloud. FreeBSD is relatively new to me and I chose it specifically because of ZFS, as another technology which could help minimise the risk of data loss. You are certainly correct that the scrub should have been done more frequently, and I will investigate why this hasn't happened.
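For what it's worth, FreeBSD ships a periodic(8) job that can take care of regular scrubs; a sketch of the relevant /etc/periodic.conf knobs (the pool name and threshold are examples):

Code:
# /etc/periodic.conf
daily_scrub_zfs_enable="YES"            # let the daily periodic run start scrubs
daily_scrub_zfs_pools="storage"         # limit it to this pool (default: all imported pools)
daily_scrub_zfs_default_threshold="35"  # scrub when the last scrub is more than 35 days old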

ShelLuser
Happily, the scrub has finished:
Code:
pool: storage
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
scan: scrub repaired 256K in 6h29m with 0 errors on Mon Aug  6 04:15:23 2018
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: No known data errors

I will perform a zpool upgrade later today and report back on the outcome.
 
ShelLuser

After the scrub had completed I shut down the server, as it was very late for me. When I booted the server today I noticed that the storage pool had not been mounted automatically.

So I think my next tasks are to import the storage pool again at what will be its final location, ensure it is mounted automatically at boot, and then upgrade the storage pool.

So I have some questions:

1. I am thinking of mounting storage on /mnt as its permanent home. Is this a reasonable idea? I may use autofs later but I think it uses /media by default and not /mnt?
2. If I use only sudo zpool import storage, I assume storage will be mounted at its original mountpoint of / on the new installation?
3. What would be the commands to import and mount storage to /mnt so that it is automatically mounted on every boot?
# sudo zpool import -fR /mnt storage
# sudo zfs mount -a


I am being very careful and want to double-check that I am understanding things correctly, as I want to avoid mistakes.
 
1. I am thinking of mounting storage on /mnt as its permanent home. Is this a reasonable idea? I may use autofs later but I think it uses /media by default?
I'd steer clear of both. If you want something in the root, try using /storage. The problem is that /mnt is already somewhat 'dedicated': it's meant to be an empty mountpoint which you can use whenever you need it, which is especially useful in an emergency. You never have to think about "where shall I mount my stuff?" because /mnt is always available.

2. What would be the commands to import and mount storage to /mnt so that it is automatically mounted on every boot?
Something in the likes of:
  • # zpool import -fN storage.
  • # zfs set mountpoint=/storage storage
  • Optionally: # zfs mount storage
You probably don't need that last command, but if things don't get mounted then it will help. After this the filesystem has a 'mountpoint' property set, which will make sure it gets mounted automatically during the next boot; see the sketch below.
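Pulling that together with the bit that makes the pool come back at boot (zfs_enable in rc.conf is the standard knob; /storage is the mountpoint suggested above):

Code:
zpool import -fN storage              # import without mounting anything yet
zfs set mountpoint=/storage storage   # give the pool its permanent home
zfs mount -a                          # mount it (and any child datasets) now
sysrc zfs_enable="YES"                # have rc(8) mount ZFS filesystems on every boot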
 