[SOLVED] No data found after mounting zpool

I recently reinstalled Proxmox VE and lost access to our storage system. After re-importing the zpool, no data appears in its mount folder.

zfs list
Code:
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  9.36G  98.2G    96K  /rpool
rpool/ROOT              876M  98.2G    96K  /rpool/ROOT
rpool/ROOT/pve-1        876M  98.2G   876M  /
rpool/data               96K  98.2G    96K  /rpool/data
rpool/swap             8.50G   107G    64K  -
storage                27.2T  3.37T  5.29M  /storage
storage/vm-101-disk-1  56.7G  3.39T  36.5G  -
storage/vm-102-disk-1  33.0G  3.40T  4.31G  -
storage/vm-103-disk-1  33.0G  3.39T  15.5G  -
storage/vm-104-disk-1  33.0G  3.38T  27.2G  -
storage/vm-107-disk-1  33.0G  3.40T  4.66G  -
storage/vm-117-disk-1  33.0G  3.39T  14.3G  -
storage/vm-121-disk-1  33.0G  3.40T  1.13G  -
storage/vm-122-disk-1  33.0G  3.40T  3.56G  -
storage/vm-124-disk-1   206G  3.56T  11.1G  -
storage/vm-126-disk-1   207G  3.55T  18.0G  -
storage/vm-127-disk-1   264G  3.46T   172G  -
storage/vm-128-disk-1   206G  3.55T  17.6G  -
storage/vm-130-disk-1  60.7G  3.37T  60.7G  -
storage/vm-131-disk-1  66.0G  3.42T  18.0G  -
storage/vm-132-disk-1  51.6G  3.42T  3.97G  -
storage/vm-133-disk-1  51.6G  3.41T  9.67G  -
storage/vm-201-disk-1  33.0G  3.39T  8.34G  -
storage/vm-202-disk-1  33.0G  3.40T  5.44G  -
storage/vm-203-disk-1  33.0G  3.40T  5.64G  -
storage/vm-211-disk-1  33.0G  3.40T  1.83G  -
storage/vm-300-disk-1  33.0G  3.39T  9.33G  -
storage/vm-333-disk-1  33.0G  3.39T  9.34G  -
storage/vm-616-disk-1  33.0G  3.39T  8.83G  -

zpool list
Code:
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool     111G   878M   110G         -     0%     0%  1.00x  ONLINE  -
storage  43.5T  35.9T  7.56T        4M    17%    82%  1.00x  ONLINE  -

zpool status
Code:
  pool: rpool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       sdm2    ONLINE       0     0     0
       sdn2    ONLINE       0     0     0

errors: No known data errors

  pool: storage
 state: ONLINE
  scan: scrub canceled on Thu Jan 19 23:37:36 2017
config:

    NAME                                 STATE     READ WRITE CKSUM
    storage                              ONLINE       0     0     0
     raidz1-0                           ONLINE       0     0     0
       ata-ST4000DM000-1F2168_W300FARL  ONLINE       0     0     0
       ata-ST4000DM000-1F2168_W300FJ7H  ONLINE       0     0     0
       ata-ST4000DM000-1F2168_Z300KH87  ONLINE       0     0     0
       ata-ST4000DM000-1F2168_Z307A48Q  ONLINE       0     0     0
     raidz1-1                           ONLINE       0     0     0
       ata-ST4000DM000-1F2168_Z300KGEC  ONLINE       0     0     0
       ata-ST4000DM000-1F2168_S300CR7V  ONLINE       0     0     0
       ata-ST4000DM000-1F2168_S300D59T  ONLINE       0     0     0
       ata-ST4000DM000-1F2168_Z300KK7N  ONLINE       0     0     0
     raidz1-2                           ONLINE       0     0     0
       ata-ST4000VN000-1H4168_Z305YLY2  ONLINE       0     0     0
       ata-ST4000VN000-1H4168_Z3061WQX  ONLINE       0     0     0
       ata-ST4000VN000-1H4168_Z304CZX8  ONLINE       0     0     0
       ata-ST4000VN000-1H4168_Z304CXL3  ONLINE       0     0     0

errors: No known data errors

df -h
Code:
Filesystem        Size  Used Avail Use% Mounted on
udev               10M     0   10M   0% /dev
tmpfs              13G  9.8M   13G   1% /run
rpool/ROOT/pve-1  100G  903M   99G   1% /
tmpfs              32G   25M   32G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
tmpfs              32G     0   32G   0% /sys/fs/cgroup
rpool              99G  128K   99G   1% /rpool
rpool/ROOT         99G  128K   99G   1% /rpool/ROOT
rpool/data         99G  128K   99G   1% /rpool/data
storage           3.4T  5.3M  3.4T   1% /storage
/dev/fuse          30M   16K   30M   1% /etc/pve

The pool should be mounted at /storage, but ls in /storage shows no files, even though zfs list suggests the data still exists.
 
Do you mean that zfs list shows you
.....
storage 27.2T 3.37T 5.29M /storage
storage/vm-101-disk-1 56.7G 3.39T 36.5G -
.....
i.e. a VM disk of size 56.7G?
That is most likely a zvol (a virtual disk device) and can be found under /dev/zvol/storage, not as a file.
The ZFS filesystem itself DOES contain some data (5.29M).
Try ls -a /storage, or look for snapshots with zfs list -t snapshot storage.
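For example, as a quick sanity check (pool name taken from your output; nothing Proxmox-specific here):
Code:
# volumes (zvols) and filesystems can be told apart like this; snapshots filtered out
zfs get -r type storage | grep -v '@'
# the zvol block devices themselves live here
ls -l /dev/zvol/storage/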
 
Yes, zfs list shows 27.2T in use, meaning the files should be there.
ls -a /storage returns:
Code:
. ..

For zfs list -t snapshot -r storage:
Code:
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
storage@20140812_First                         56.8G      -  7.31T  -
storage@20140814-firstshare                     232K      -  7.26T  -
storage@20140823                               4.50G      -  8.27T  -
storage@20140901                                895M      -  8.75T  -
storage@20141009                               90.8M      -  8.38T  -
storage@addedvdev_10102014                     91.1M      -  8.39T  -
storage@UpgradedProxmoxVE_to_3.3               84.5G      -  8.87T  -
storage@2015-01-13                              108G      -  10.9T  -
storage@20150203                                137G      -  11.6T  -
storage@20150312                               58.9G      -  12.5T  -
storage@20150409                                186G      -  13.1T  -
storage@20150610                               68.5G      -  13.2T  -
storage@20160502                                152G      -  18.2T  -
storage@zfs-auto-snap_monthly-2016-07-01-1152   230G      -  19.6T  -
storage@07162016                                247G      -  19.7T  -
storage@zfs-auto-snap_monthly-2016-08-01-1152   252G      -  20.0T  -
storage@zfs-auto-snap_monthly-2016-09-01-1152  20.5G      -  20.0T  -
storage@zfs-auto-snap_monthly-2016-10-01-1152  15.5G      -  20.6T  -
storage@zfs-auto-snap_monthly-2016-11-01-1152   274G      -  20.3T  -
storage@zfs-auto-snap_monthly-2016-12-01-1252   746G      -  20.7T  -

zfs list -o space -r storage
Code:
NAME                   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
storage                3.37T  27.2T     25.6T   5.29M              0      1.60T
storage/vm-101-disk-1  3.39T  56.7G         0   36.5G          20.2G          0
storage/vm-102-disk-1  3.40T  33.0G         0   4.31G          28.7G          0
storage/vm-103-disk-1  3.39T  33.0G         0   15.5G          17.5G          0
storage/vm-104-disk-1  3.38T  33.0G         0   27.2G          5.84G          0
storage/vm-107-disk-1  3.40T  33.0G         0   4.66G          28.4G          0
storage/vm-117-disk-1  3.39T  33.0G         0   14.3G          18.7G          0
storage/vm-121-disk-1  3.40T  33.0G         0   1.13G          31.9G          0
storage/vm-122-disk-1  3.40T  33.0G         0   3.56G          29.4G          0
storage/vm-124-disk-1  3.56T   206G         0   11.1G           195G          0
storage/vm-126-disk-1  3.55T   207G         0   18.0G           189G          0
storage/vm-127-disk-1  3.46T   264G         0    172G          92.4G          0
storage/vm-128-disk-1  3.55T   206G         0   17.6G           189G          0
storage/vm-130-disk-1  3.37T  60.7G         0   60.7G              0          0
storage/vm-131-disk-1  3.42T  66.0G         0   18.0G          48.1G          0
storage/vm-132-disk-1  3.42T  51.6G         0   3.97G          47.6G          0
storage/vm-133-disk-1  3.41T  51.6G         0   9.67G          41.9G          0
storage/vm-201-disk-1  3.39T  33.0G         0   8.34G          24.7G          0
storage/vm-202-disk-1  3.40T  33.0G         0   5.44G          27.6G          0
storage/vm-203-disk-1  3.40T  33.0G         0   5.64G          27.4G          0
storage/vm-211-disk-1  3.40T  33.0G         0   1.83G          31.2G          0
storage/vm-300-disk-1  3.39T  33.0G         0   9.33G          23.7G          0
storage/vm-333-disk-1  3.39T  33.0G         0   9.34G          23.7G          0
storage/vm-616-disk-1  3.39T  33.0G         0   8.83G          24.2G          0
 
The point is that the vm-xxx-disk-1 entries are volumes (zvols) and will never show up as files. They should be accessible as block devices under /dev/zvol/<poolname>.
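If you only need to pull files out of one of them, a zvol can be inspected like any other block device. A sketch, assuming vm-101-disk-1 carries a partition table and a mountable Linux filesystem (the -part1 suffix is how ZFS on Linux usually exposes partitions on a zvol):
Code:
# the zvol and any partition nodes derived from it
ls -l /dev/zvol/storage/vm-101-disk-1*
# mount the first partition read-only for inspection
mkdir -p /mnt/vm101
mount -o ro /dev/zvol/storage/vm-101-disk-1-part1 /mnt/vm101
# ...copy out what you need, then...
umount /mnt/vm101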
 
Should I just revert to the snapshot?
I think not! Do you really want to revert your VMs to that point in time as well?
I recently reinstalled Proxmox VE
I don't know exactly what you did, but if you are simply looking for a set of files (and do NOT want to mess up the system any further), I suggest first taking a snapshot of the current state:
Code:
zfs snapshot storage@justafterbadreinstall
Then find somebody who knows Proxmox (not me, I had to google it myself) to help you decide which files should go where. A simple copy from a snapshot could be done with:
Code:
(cd /storage/.zfs/snapshot/zfs-auto-snap_monthly-2016-12-01-1252 && tar cf - .) | (cd /storage && tar xvf -)
But again, think carefully about what you are doing: reading ~20 TB back out of a snapshot will take several days!
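Before starting such a copy it is worth confirming that the data you expect is really in that snapshot. The .zfs directory is hidden by default (it does not even show up in ls -a), but it can be entered directly:
Code:
# snapshots are browsable read-only under the hidden .zfs directory
ls /storage/.zfs/snapshot/
# peek inside the one you intend to restore from
ls /storage/.zfs/snapshot/zfs-auto-snap_monthly-2016-12-01-1252/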

Edit: You might also want to read up on snapshots and split your dailies into VM-disk snapshots and everything else, and start cleaning up some old snapshots once this is over. Your free space is only ~18%, which will start hurting performance.
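A sketch of what that cleanup could look like once everything is safe again (snapshot name taken from your listing; -n is a dry run and -v reports what would be reclaimed):
Code:
# dry run: report what destroying the oldest snapshot would free
zfs destroy -nv storage@20140812_First
# only after double-checking, destroy it for real
zfs destroy storage@20140812_First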
 
I ended up rolling back to the most recent snapshot, and it worked. I lost some data from the past month, but most of the files are back.
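For the record, the rollback was essentially this (snapshot name as in the listing above; -r would only be needed to discard any newer snapshots created in the meantime):
Code:
zfs rollback storage@zfs-auto-snap_monthly-2016-12-01-1252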
 