Solved: /dev/zvol Does Not Exist

Hello:


I'm new to FreeBSD, and this is nearly my first time using BSD and ZFS; I previously used CentOS.
(I'm not a native English speaker, so sorry for any misunderstandings.)



What I want:

To configure a ZFS storage server and share storage over iSCSI to vSphere and Windows.


My steps:

I installed FreeBSD in VirtualBox, then added 12 HDDs and 2 SSDs.

But after I created the pool:

There is no /dev/zvol, and nothing like /dev/zd0 exists either.

(Every guide I found needs a block device to create the target. I also found some Oracle docs, but clearly that's not the same thing.)

My questions:

What happened?

What should I do if I want to share a volume?



My details:


Code:
root@test:~ # zpool create pool raidz3 ada1 ada2 ada3 ada4 raidz3 ada5 ada6 ada7 ada8 raidz3 ada9 ada10 ada11 ada12

root@test:~ # zpool add pool cache da0
root@test:~ # zpool add pool cache da1
root@test:~ # zfs create pool/vol01

root@test:~ # zpool status
  pool: pool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    pool        ONLINE       0     0     0
      raidz3-0  ONLINE       0     0     0
        ada1    ONLINE       0     0     0
        ada2    ONLINE       0     0     0
        ada3    ONLINE       0     0     0
        ada4    ONLINE       0     0     0
      raidz3-1  ONLINE       0     0     0
        ada5    ONLINE       0     0     0
        ada6    ONLINE       0     0     0
        ada7    ONLINE       0     0     0
        ada8    ONLINE       0     0     0
      raidz3-2  ONLINE       0     0     0
        ada9    ONLINE       0     0     0
        ada10   ONLINE       0     0     0
        ada11   ONLINE       0     0     0
        ada12   ONLINE       0     0     0
    cache
      da0       ONLINE       0     0     0
      da1       ONLINE       0     0     0

errors: No known data errors

root@test:~ # zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool         115K  5.77T    23K  /pool
pool/vol01    23K  5.77T    23K  /pool/vol01

root@test:~ # mount
/dev/ada0p2 on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
pool on /pool (zfs, local, nfsv4acls)
pool/vol01 on /pool/vol01 (zfs, local, nfsv4acls)


root@test:~ # ll /dev
total 3
crw-r--r--  1 root  wheel     0x27 Nov 20 20:43 acpi
crw-r-----  1 root  operator  0x4c Nov 20 20:43 ada0
crw-r-----  1 root  operator  0x4e Nov 20 20:43 ada0p1
crw-r-----  1 root  operator  0x4f Nov 20 20:43 ada0p2
crw-r-----  1 root  operator  0x50 Nov 20 20:43 ada0p3
crw-r-----  1 root  operator  0x4d Nov 20 20:43 ada1
crw-r-----  1 root  operator  0x79 Nov 20 20:43 ada10
crw-r-----  1 root  operator  0x7c Nov 20 20:43 ada11
crw-r-----  1 root  operator  0x7f Nov 20 20:43 ada12
crw-r-----  1 root  operator  0x53 Nov 20 20:43 ada2
crw-r-----  1 root  operator  0x54 Nov 20 20:43 ada3
crw-r-----  1 root  operator  0x5c Nov 20 20:43 ada4
crw-r-----  1 root  operator  0x5f Nov 20 20:43 ada5
crw-r-----  1 root  operator  0x74 Nov 20 20:43 ada6
crw-r-----  1 root  operator  0x76 Nov 20 20:43 ada7
crw-r-----  1 root  operator  0x77 Nov 20 20:43 ada8
crw-r-----  1 root  operator  0x78 Nov 20 20:43 ada9
crw-rw-r--  1 root  operator  0x29 Nov 20 20:43 apm
crw-rw----  1 root  operator  0x28 Nov 20 20:43 apmctl
crw-------  1 root  wheel     0x2e Nov 20 20:43 atkbd0
crw-------  1 root  kmem      0x18 Nov 20 20:43 audit
crw-------  1 root  wheel     0x17 Nov 20 20:43 auditpipe
crw-------  1 root  wheel      0xd Nov 20 20:43 bpf
lrwxr-xr-x  1 root  wheel        3 Nov 20 20:43 bpf0@ -> bpf
crw-rw-rw-  1 root  wheel     0x31 Nov 20 20:43 bpsm0
crw-r-----  1 root  operator  0x4b Nov 20 20:43 cd0
crw-------  1 root  wheel      0x5 Nov 20 20:44 console
crw-------  1 root  wheel      0xf Nov 20 20:43 consolectl
crw-rw-rw-  1 root  wheel      0xc Nov 20 20:43 ctty
crw-r-----  1 root  operator  0x49 Nov 20 20:43 da0
crw-r-----  1 root  operator  0x4a Nov 20 20:43 da1
crw-------  1 root  wheel      0x9 Nov 20 20:43 devctl
crw-------  1 root  wheel      0xa Nov 20 20:43 devctl2
cr--r--r--  1 root  wheel     0x38 Nov 20 20:43 devstat
dr-xr-xr-x  2 root  wheel      512 Nov 20 20:43 fd/
crw-------  1 root  wheel     0x11 Nov 20 20:43 fido
crw-rw-rw-  1 root  wheel     0x1a Nov 20 20:43 full
crw-r-----  1 root  operator   0x4 Nov 20 20:43 geom.ctl
dr-xr-xr-x  2 root  wheel      512 Nov 20 20:43 gptid/
crw-------  1 root  wheel     0x24 Nov 20 20:43 io
lrwxr-xr-x  1 root  wheel        6 Nov 20 20:43 kbd0@ -> atkbd0
lrwxr-xr-x  1 root  wheel        7 Nov 20 20:43 kbd1@ -> kbdmux0
crw-------  1 root  wheel     0x12 Nov 20 20:43 kbdmux0
crw-------  1 root  wheel     0x25 Nov 20 20:43 klog
crw-r-----  1 root  kmem      0x15 Nov 20 20:43 kmem
dr-xr-xr-x  2 root  wheel      512 Nov 20 20:43 led/
lrwxr-xr-x  1 root  wheel       12 Nov 20 20:44 log@ -> /var/run/log
crw-------  1 root  wheel      0xb Nov 20 20:43 mdctl
crw-r-----  1 root  kmem      0x14 Nov 20 20:43 mem
crw-rw-rw-  1 root  wheel     0x26 Nov 20 20:43 midistat
crw-r-----  1 root  operator  0x2d Nov 20 20:43 mpt0
crw-------  1 root  wheel     0x19 Nov 20 20:43 netmap
crw-------  1 root  kmem      0x16 Nov 20 20:43 nfslock
crw-rw-rw-  1 root  wheel     0x1b Nov 20 21:00 null
crw-------  1 root  operator  0x39 Nov 20 20:43 pass0
crw-------  1 root  operator  0x3a Nov 20 20:43 pass1
crw-------  1 root  operator  0x43 Nov 20 20:43 pass10
crw-------  1 root  operator  0x44 Nov 20 20:43 pass11
crw-------  1 root  operator  0x45 Nov 20 20:43 pass12
crw-------  1 root  operator  0x46 Nov 20 20:43 pass13
crw-------  1 root  operator  0x47 Nov 20 20:43 pass14
crw-------  1 root  operator  0x48 Nov 20 20:43 pass15
crw-------  1 root  operator  0x3b Nov 20 20:43 pass2
crw-------  1 root  operator  0x3c Nov 20 20:43 pass3
crw-------  1 root  operator  0x3d Nov 20 20:43 pass4
crw-------  1 root  operator  0x3e Nov 20 20:43 pass5
crw-------  1 root  operator  0x3f Nov 20 20:43 pass6
crw-------  1 root  operator  0x40 Nov 20 20:43 pass7
crw-------  1 root  operator  0x41 Nov 20 20:43 pass8
crw-------  1 root  operator  0x42 Nov 20 20:43 pass9
crw-r--r--  1 root  wheel     0x23 Nov 20 20:43 pci
crw-rw-rw-  1 root  wheel     0x30 Nov 20 20:43 psm0
dr-xr-xr-x  2 root  wheel      512 Nov 20 20:44 pts/
crw-r--r--  1 root  wheel      0x7 Nov 20 20:44 random
dr-xr-xr-x  2 root  wheel      512 Nov 20 20:43 reroot/
crw-r--r--  1 root  wheel      0x6 Nov 20 20:43 sndstat
lrwxr-xr-x  1 root  wheel        4 Nov 20 20:43 stderr@ -> fd/2
lrwxr-xr-x  1 root  wheel        4 Nov 20 20:43 stdin@ -> fd/0
lrwxr-xr-x  1 root  wheel        4 Nov 20 20:43 stdout@ -> fd/1
crw-------  1 root  wheel     0x10 Nov 20 20:43 sysmouse
crw-------  1 root  wheel     0x60 Nov 20 20:44 ttyv0
crw-------  1 root  wheel     0x61 Nov 20 20:44 ttyv1
crw-------  1 root  wheel     0x62 Nov 20 20:44 ttyv2
crw-------  1 root  wheel     0x63 Nov 20 20:44 ttyv3
crw-------  1 root  wheel     0x64 Nov 20 20:44 ttyv4
crw-------  1 root  wheel     0x65 Nov 20 20:44 ttyv5
crw-------  1 root  wheel     0x66 Nov 20 20:44 ttyv6
crw-------  1 root  wheel     0x67 Nov 20 20:44 ttyv7
crw-------  1 root  wheel     0x68 Nov 20 20:43 ttyv8
crw-------  1 root  wheel     0x69 Nov 20 20:43 ttyv9
crw-------  1 root  wheel     0x6a Nov 20 20:43 ttyva
crw-------  1 root  wheel     0x6b Nov 20 20:43 ttyvb
crw-------  1 root  wheel     0x32 Nov 20 20:43 ufssuspend
lrwxr-xr-x  1 root  wheel        9 Nov 20 20:43 ugen0.1@ -> usb/0.1.0
lrwxr-xr-x  1 root  wheel        6 Nov 20 20:43 urandom@ -> random
dr-xr-xr-x  2 root  wheel      512 Nov 20 20:43 usb/
crw-r--r--  1 root  operator  0x35 Nov 20 20:43 usbctl
crw-------  1 root  operator  0x36 Nov 20 20:43 xpt0
crw-rw-rw-  1 root  wheel     0x1c Nov 20 20:43 zero
crw-rw-rw-  1 root  operator  0x73 Nov 20 20:43 zfs
 
Code:
zfs create pool/vol01
This creates a normal ZFS file system, which as shown in your mount output is mounted at /pool/vol01.

If you want to use iSCSI then you need to create a ZFS volume.
Code:
zfs create -V 1T pool/vol01
This will create a block device under /dev/zvol/pool/vol01. You need to specify the size when you create a volume. I specified 1 terabyte above although you'll obviously want to specify whatever size you require for your use.
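One thing to note for your existing setup: pool/vol01 already exists as a file system, so you'd have to zfs destroy pool/vol01 first (or pick another name) before creating a volume under that name. To check that the device node actually appeared afterwards, something like this should work (a quick sketch, reusing the pool/vol01 name from above):

Code:
ls -l /dev/zvol/pool/
zfs get volsize,volblocksize pool/vol01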

You can also create a sparse volume that will only take up space as it is used. A sparse volume can actually be bigger than the pool.
Code:
zfs create -s -V 10T pool/vol01
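If you want to see the difference between the two, the refreservation property shows it: a regular volume reserves roughly its full size up front, while a sparse volume shows none and only the used figure grows as data is written. A quick check, again assuming the pool/vol01 name:

Code:
zfs get volsize,refreservation,used pool/vol01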
To export the volume via iSCSI you'll need to use ctld, which is configured using /etc/ctl.conf. See ctl.conf(5). Here's an example of an iSCSI export I'm using for VMware -

Code:
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 192.168.200.253
}

target iqn.2015-07.net.mydomain.vm:target1 {
        auth-group no-authentication
        portal-group pg0

        lun 0 {
                path /dev/zvol/data/vmfs1
                device-id ISCSI1_VMFS1
                serial ISCSI1_1_001
        }

}
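For ctld to serve this, it also needs to be enabled and started (or told to re-read its config if it's already running). A minimal sketch using the standard rc tools:

Code:
sysrc ctld_enable="YES"
service ctld start

Once it's running you can check which LUNs it is serving: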

Code:
 # ctladm devlist
LUN Backend       Size (Blocks)   BS Serial Number    Device ID
  0 block            2147483648  512 ISCSI1_1_001     ISCSI1_VMFS1
 
I see, thank you. BTW:

Will snapshots still work on the iSCSI device?

zfs snapshot pool/vol01@date-time
 
I see, thank you. BTW:

Will snapshots still work on the iSCSI device?

zfs snapshot pool/vol01@date-time

Yes. But realize that if the volume is being actively used, you are snapshotting the backing store of a live filesystem. The best snapshots are taken when whatever is using the volume is either disconnected or in a known good/idle state.
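As a concrete version of the pool/vol01@date-time example, a timestamped snapshot could look like this (the date format is just an illustration):

Code:
zfs snapshot pool/vol01@$(date +%Y-%m-%d-%H%M)
zfs list -t snapshot -r pool/vol01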
 
While this is obviously plausible, I doubt that decent file systems providing snapshots fail to take care of this.
Understanding that even file systems have bugs now and then, are there any problem reports showing that ZFS makes unreliable snapshots when the file system is used heavily?

This is not about taking a snapshot of a ZFS file system. This relates to taking a snapshot of a ZVOL, which is being actively used by a third-party application/system. When you snapshot a ZVOL exported to VMware via iSCSI, it will contain a VMFS file system. This could be in any state when the snapshot is taken (and as such the snapshot could have problems or even be completely unusable when you try to actually mount it), and ZFS has absolutely no control over that.

This isn't really a bug with ZFS and isn't something ZFS can fix. When the underlying storage supports snapshots, you ideally need to make sure the data on top is consistent when you take a snapshot. On a basic ZFS file system storing a database, as you say, you can dump the data first, or maybe lock the database for a few seconds; if you don't do that, then you have no guarantee the database will be 100% intact, as it could have been in the middle of writing data when the snapshot was taken. If ZFS is providing block storage for a third-party file system, then really you need some way of telling that file system to get itself consistent and wait while you take the snapshot.
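As a rough sketch of the "dump the data first" approach, assuming a hypothetical MySQL database living on a dataset called pool/db (the dataset, database name and mysqldump call are purely illustrative, not part of the setup above):

Code:
# dump a consistent copy into the dataset, then snapshot it
mysqldump --single-transaction mydb > /pool/db/mydb.sql
zfs snapshot pool/db@$(date +%Y-%m-%d-%H%M)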

If you just snapshot a ZVOL containing a VMFS file system and then try to use that snapshot, it will appear to VMware as if you're booting up after a power failure, with the system having lost power at the point the snapshot was taken. Most systems will recover from this sort of thing most of the time, but it's not perfect, and you could clearly lose data that was in flight when the snapshot was taken.

The enterprise-y backup solutions I've used for VMware take snapshots through VMware, but they'll usually talk to the guest (if supported) and tell it to "quiesce" its file system first. Without that, you could come to restore your critical SQL Server and find that it starts complaining about unclean databases and wanting to replay logs.
 
Code:
zfs create pool/vol01
This creates a normal ZFS file system, which as shown in your mount output is mounted at /pool/vol01.

If you want to use iSCSI then you need to create a ZFS volume.
Code:
zfs create -V 1T pool/vol01
This will create a block device under /dev/zvol/pool/vol01. You need to specify the size when you create a volume. I specified 1 terabyte above although you'll obviously want to specify whatever size you require for your use.

What's the difference? What's a block device? Why does he need one?

TL;DR: Whatchu guys doin'?
 
A block device shows up like another physical drive (ready to be formatted and put into use as a file system), while a file system can be accessed from the user layer to create directories and store files.

Most block devices will get formatted to provide a file system, but a frequent use of them is to provide the actual storage for a volume in a VM system, either running on the same machine or (via iSCSI) on another. (There they will be formatted inside the VM to provide a file system.)
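Purely to illustrate the "ready to be formatted" part on the FreeBSD host itself (not what you'd do for iSCSI, where the initiator formats it over the network), a zvol can be treated like any other disk. A small sketch, reusing the pool/vol01 volume from earlier in the thread:

Code:
# create a UFS file system on the zvol and mount it locally
newfs /dev/zvol/pool/vol01
mount /dev/zvol/pool/vol01 /mnt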
 
A block device shows up like another physical drive (ready to be formatted and put into use as a file system), while a file system can be accessed from the user layer to create directories and store files.

Most block devices will get formatted to provide a file system, but a frequent use of them is to provide the actual storage for a volume in a VM system, either running on the same machine or (via iSCSI) on another. (There they will be formatted inside the VM to provide a file system.)

Oh, OK... so /dev/ada0 is a block device, /dev/ada0p1 is a block device, and if I format /dev/ada0p1 that's going to be a file system. But why do they need a block device to use with iSCSI?
 
Technically, you can run iSCSI with a file as the backing store, but there are some nice management benefits from using a zvol -- for example, it's harder to accidentally delete or move a zvol than a file in a directory. You can explicitly set the volblocksize, snapshots are of the device and only the device (not of the file and everything else on the filesystem it happens to be on), and so on.
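For instance, a sparse volume with an explicit volblocksize could be created like this (pool/vmfs1 and the 64K value are just illustrative choices; volblocksize can only be set at creation time):

Code:
zfs create -s -V 1T -o volblocksize=64K pool/vmfs1
zfs get volblocksize pool/vmfs1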

So it's not a requirement, but it makes some tasks.... cleaner.

(And I suppose they aren't officially "block devices" on FreeBSD; but you get the idea. They are devices that show up in the /dev/ tree, and can be partitioned / formatted.)
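Just to illustrate the file-backed alternative, the path statement in ctl.conf can point at a regular file as well as a zvol. A minimal sketch with an illustrative file path:

Code:
# create a sparse 1 TB backing file (the path is just an example)
truncate -s 1T /storage/iscsi/lun0.img

The lun block would then reference that file instead of the zvol:

Code:
        lun 0 {
                path /storage/iscsi/lun0.img
        }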
 