ZFS Single Disk with ZFS best practices...

I want to use a disk with ZFS, following best practices...

Both for an SSD and for an HDD

The process:

Wipe the disk

# gpart destroy -F da0
# dd if=/dev/zero of=/dev/da0 bs=1m count=128


Prepare the disk

# gpart create -s GPT da0
# gpart add -t freebsd-zfs -l storage -a 1M da0
# zpool create -f storage da0
# zfs set mountpoint=/backup storage


See the changes we have made so far

# gpart show da0

Code:
=>        40  1953525088  da0  GPT  (932G)
          40        2008       - free -  (1.0M)
        2048  1953521664    1  freebsd-zfs  (932G)
  1953523712        1416       - free -  (708K)

Then you can create datasets inside with more options, for example

# zfs create storage/test
# zfs set mountpoint=/storage/test storage/test
# zfs set quota=100G storage/test



Questions:

As I understand, the command

# gpart add -t freebsd-zfs -l storage -a 1M da0

does the following:
a) Creates a partition.
b) Aligns the partition to 1 MB boundaries, which leaves a small gap at the beginning of the disk and supposedly improves performance (a quick check is sketched below).
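
A quick way to see why the alignment helps (this check is my addition, not part of the original procedure): diskinfo(8) reports the logical sector size and the stripe size, i.e. the physical sector size on 4K drives:

# diskinfo -v da0

A partition start aligned to 1M is divisible by 512, 4096 and common SSD erase-block sizes, so it is safe whatever the drive reports.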

Are the commands shown here correct?
Any other recommendations?

Thank you very much for your answers.
 
I want to use a disk with ZFS, following best practices...

Both for an SSD and for an HDD

The process:

Wipe the disk

# gpart destroy -F da0
# dd if=/dev/zero of=/dev/da0 bs=1m count=128


Prepare the disk

# gpart create -s GPT da0
# gpart add -t freebsd-zfs -a 1m -l storage da0
# zpool create -f storage da0 <-- ERROR!
# zfs set mountpoint=/backup storage

You made a very big mistake in the commands above. You created a partition table on the disk, created a partition on the disk, and then told ZFS to use the entire disk (not the partition) for the pool! Big no-no!

The correct command would be:
# zpool create storage da0p1

An even better command, since you labelled the partition, is to use the label (and you shouldn't need to force it; if you do, then you did something wrong!):
# zpool create storage gpt/storage
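
To confirm which device the pool actually ended up on (whole disk, partition, or label), you can always check the vdev list; after the command above you should see gpt/storage listed under the pool, not the bare da0:

# zpool status storage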
 
Other than that ... looks fine. I like your 1M alignment; that solves all questions of 512-vs-4K sectors and SSD block sizes, and is simple. My only little cosmetic complaint is this: you end up mounting the pool at /backup, but you call it storage. That's already a bit confusing; why not call the pool backup too? And in general, the name "storage" is pretty content-free: anything that involves disk drives is storage, so repeating that gives the user no information.
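
As a sketch of that suggestion (assuming you also give the partition a matching label, e.g. -l backup), zpool create's -m option lets you name the pool backup and set its mountpoint in one step:

# gpart add -t freebsd-zfs -a 1M -l backup da0
# zpool create -m /backup backup gpt/backup

That avoids the separate zfs set mountpoint step and keeps the pool name, label and mountpoint consistent.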

(The complete version of the joke above is: "Other than that Mrs. Lincoln, how was the play?")
 
It's okay friends...

Thank you very much for your advice!

The process goes like this (following all your recommendations):

Destroy existing partition table

# gpart destroy -F da0

Code:
da0 destroyed

Delete any previous information

# dd if=/dev/zero of=/dev/da0 bs=1m count=128

Code:
128+0 records in
128+0 records out
134217728 bytes transferred in 7.853006 secs (17091255 bytes/sec)
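
If the disk previously held a ZFS pool, an alternative worth knowing (not part of the original steps) is to clear just the ZFS labels instead of zeroing the first 128 MB:

# zpool labelclear -f da0

The dd approach above works too; it simply wipes more than strictly necessary.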

Create new GPT partition table

# gpart create -s GPT da0

Code:
da0 created

Add the partition for ZFS, with the label 'data' and proper sector alignment for 4K-sector drives or SSDs.

# gpart add -t freebsd-zfs -a 1M -l data da0

Code:
da0p1 added
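
To double-check the label and the aligned start before creating the pool, gpart can list the partitions with their labels:

# gpart show -l da0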

Create new zpool on that partition
Here are two options (using the label or the partition name):
a) Using the label name (recommended)

# zpool create data gpt/data


b) Or using the partition name

# zpool create data da0p1
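
One more knob worth mentioning for 4K-sector drives and SSDs (my addition, not covered above): make sure the pool is created with ashift=12 so ZFS issues 4K-aligned writes. Depending on the FreeBSD/OpenZFS version, one of these should do it:

# sysctl vfs.zfs.min_auto_ashift=12
# zpool create -o ashift=12 data gpt/data

You can check the result afterwards with zdb -C data | grep ashift.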


See the changes we have made so far

# gpart show da0

Code:
=>        40  1953525088  da0  GPT  (932G)
          40        2008       - free -  (1.0M)
        2048  1953521664    1  freebsd-zfs  (932G)

See the pool mounted

# df -h | egrep "Filesystem|data"

Code:
Filesystem                                    Size    Used   Avail Capacity  Mounted on
data                                       899G     88K    899G     0%    /data

Now you can start adding ZFS file systems as you wish

# zfs create data/test
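
Datasets are where per-purpose tuning usually goes. As a hedged example (the values are only illustrations), you could enable compression and cap the dataset's size, then read the properties back:

# zfs set compression=lz4 data/test
# zfs set quota=100G data/test
# zfs get compression,quota data/test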


See the pool mounted and the new ZFS file system

# df -h | egrep "Filesystem|data"

Code:
Filesystem                                    Size    Used   Avail Capacity  Mounted on
data                                       899G     88K    899G     0%    /data
data/test                                  899G     88K    899G     0%    /data/test

Export the pool to remove the disk from the computer

# zpool export data


Import the pool to use the disk on the computer

# zpool import data
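
If you ever forget the pool name after moving the disk to another machine, running zpool import with no arguments scans the attached disks and lists the pools available for import:

# zpool import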


Example to undo everything done so far

# zfs destroy data/test
# zpool destroy data
# gpart delete -i 1 da0
# gpart destroy da0



 
Since this is a ZFS thread I feel I must mention ECC RAM and its importance. More here: https://www.ixsystems.com/community/threads/ecc-vs-non-ecc-ram-and-zfs.15449/
Good read
Quotes
" ..you really should go a server grade board with ECC ram, if you are going to risk it on the ram then do a good burn in using memtest to verify that your ram is good.."
"Most people will recognize that RAM doesn’t fail that often. So choosing to trust that you won’t have bad RAM while using ZFS with non-ECC is a risk you will have to decide on for yourself if choosing to not use ECC RAM and appropriate hardware."
 
People keep saying that you need ECC when running ZFS. And they are wrong. Except they are right.

ZFS has checksums. Not quite end-to-end (since they don't go to userspace applications, nor over the network, but then ZFS is not a cluster or network file system), but at least VM buffer to disk and back. That means ZFS eliminates the single largest cause of data corruption, which is undetected errors in disks and disk IO. This is already true with a single disk. Once you have multiple disks (not the OP's situation, but a good idea in general), the reliability of the data with ZFS becomes very good. At that point, memory becomes the next largest source of unreliability (although network traffic is close behind: TCP/IP and Ethernet only use 32-bit checksums, and on a large, fast system you can get undetected errors there at appreciable rates). Therefore, ECC is a particularly good investment when using ZFS. That's why the argument is right.
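
A practical way to exercise those checksums (my addition, not from the post above) is a periodic scrub, which reads every block in the pool and verifies it against its checksum; any problems show up in the CKSUM column of zpool status:

# zpool scrub data
# zpool status data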

But: having ECC is not necessary when running ZFS. ZFS is not particularly memory hungry, nor does it leave unprotected (unwritten) data in memory particularly long. Other file systems will also use all free memory as buffer cache, so they are just as vulnerable to memory corruption. So ZFS does not need ECC any more than other file systems do. It's just that for other file systems, using ECC plugs a small hole, while with ZFS it plugs the largest remaining hole.
 
Hello,

Why do I get this message when I try to set up the zpool "data1" on the partition data1?
Code:
root@free:~ # zpool create data1 gpt/data1
invalid vdev specification
use '-f' to override the following errors:
/dev/gpt/data1 is part of potentially active pool 'data'

Below are my file systems for the two single HDDs:
Code:
root@free:~ # df -h | egrep "Filesystem|data"
Filesystem            Size    Used   Avail Capacity  Mounted on
data                  2.6T     88K    2.6T     0%    /data
data/1                2.6T     88K    2.6T     0%    /data/1
data1                 2.6T     88K    2.6T     0%    /data1
data1/1               2.6T     88K    2.6T     0%    /data1/1

What do we gain by creating these file systems inside the pools?
Code:
zfs create data/1
zfs create data1/1

Thank you in advance for your explanations.
 
Why do I get this message when I try to set up the zpool "data1" on the partition data1?
Because the disk (or partition) was already part of an existing data pool. You cannot add the same disk (or partition) to more than one pool. Post the output from zpool status.
 
Because the disk (or partition) was already part of an existing data pool. You cannot add the same disk (or partition) to more than one pool. Post the output from zpool status.
Here you are:
Code:
root@free:~ # zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          gpt/data  ONLINE       0     0     0

errors: No known data errors

  pool: data1
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        data1        ONLINE       0     0     0
          gpt/data1  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada2p3    ONLINE       0     0     0

errors: No known data errors
 
Because the disk (or partition) was already part of an existing data pool. You cannot add the same disk (or partition) to more than one pool. Post the output from zpool status.

Could you comment on this?
Code:
root@free:~ # zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          gpt/data  ONLINE       0     0     0

errors: No known data errors

  pool: data1
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        data1        ONLINE       0     0     0
          gpt/data1  ONLINE       0     0     0

errors: No known data errors

  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          ada2p3    ONLINE       0     0     0

errors: No known data errors
 
Look at the lines starting with "pool:". Do you notice that you have several of those, each with its own name?
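
A compact way to see the same thing (just a suggestion): zpool list prints one line per pool, which makes it obvious that data, data1 and zroot are three separate pools:

# zpool list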
 