Fatman vs ZFS – Who will win

Fatman

Member

Reaction score: 2
Messages: 84

Hello Everyone,

I have decided to switch my NAS over to FreeBSD from OpenSolaris now that ZFS has been ported. This sudden change comes from the fact that I finally have some spare time, and I noticed that I can't shut down my OpenSolaris system; it just seems to hang for hours.

Here’s what I’m working with:

Code:
CHASSIS:  NORCO RPC-4020 
Motherboard: ASUS M2N-LR NF PRO3600 R 	
Power Supply:  CORSAIR CMPSU-1000HX 1000W RT 	
CPU: AMD A64 X2 5050E 2.6G AM2 RT 	
Memory: 2 x 2G PATR PVS24G6400LLKNB R 	
Hard Drives:  6x 1T WD 32M WD10EADS, 1 40g IDE Seagate
I’m currently in the process of downloading the 8.0-RELEASE-amd64-disc1.iso and will most likely start the install tomorrow (possibly tonight). I’ve opted to try installing ZFS on the 40 GB IDE drive, following these wiki pages:

http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot
http://wiki.freebsd.org/ZFSTuningGuide

If I’ve read correctly from the OpenSolaris Bible, you need to use the export command to ready a pool to be moved. ZFS records the hostname and hostid of the system that owns the pool, so I need to boot back into OpenSolaris and execute:

Code:
#zfs export tank
#zfs status tank – checks to make sure I’ve released ownership
I’m also looking for a little bit of advice. Should I bother installing ZFS on the 40 GB IDE drive? I could always do a regular base install, make the necessary tuning and kernel changes, then load my old pools using:

Code:
#zpool import – finds pools available for import
#zpool import [myoldpoolname]
Any input would be much appreciated. I'm sure more questions will follow once the install process has started. :p

Sincerely,

Fatman
 

Voltar

Active Member

Reaction score: 24
Messages: 191

You don't have to export the pool on the old system; you can always force-import it (the command you're looking for is zpool btw, not zfs, on that one).
# zpool import -f poolname/id

As for installing on the 40 GB drive, that is totally up to you. Personally, I have an OS-only drive that contains the base OS.

There are a few really good threads about ZFS around these forums, might have a look at some of them.

Edit: You may not need to do all or any of the steps listed in the ZFS Tuning Guide. There have been a lot of improvements in the ZFS code since that was originally written.
 

Fatman

Member

Reaction score: 2
Messages: 84

Thanks Voltar.

Yeah, the original plan was to use the 40 GB drive for the OS and have the 20 drive bays used for ZFS. I thought it would be interesting to try booting from ZFS to have the same features.

So from what I gather, the following section does not apply to me:

amd64

FreeBSD 7.2+ has improved kernel memory allocation strategy and no tuning may be necessary on systems with more than 2 GB of RAM.

On systems using FreeBSD 7.0 and 7.1, kernel memory usage (vm.kmem_size) should be increased to around 1 GB and ARC size reduced:

vm.kmem_size_max="1024M"

vm.kmem_size="1024M"

vfs.zfs.arc_max="100M"

This might help if the machine is also loaded with other tasks, such as network activity (a file server), etc. Tuning KVA_PAGES is not required on amd64.

To increase performance, you may increase kern.maxvnodes (/etc/sysctl.conf) way up if you have the RAM for it (e.g. 400000 for a 2GB system). Keep an eye on vfs.numvnodes during production to see where it stabilizes. AMD64 uses direct mapping for vnodes, so you don't have to worry about address space for vnodes on this architecture (as opposed to i386).
Hmm... I'm starting to think twice about ZFS for the OS drive. I really don't care if I lose the 40 GB HD.
 

Voltar

Active Member

Reaction score: 24
Messages: 191

Correct, since you have more than 2GB of RAM, I would see how things go for you without tuning. My fileserver runs 7.2-STABLE (up to date as of 17-Nov, and 7-STABLE has ZFS v13 like 8.0 does), and I've been able to remove all the optimizations I previously had to use with the older versions of ZFS (I have a thread or two about ZFS issues somewhere).

As for the boot drive, I haven't really played around with ZFS boot/root. I know a few here are using it with great results though. However, I've been contemplating a ZFS boot when I upgrade that box to 8.0 and add another four drives, as ZFS is no longer considered experimental in 8.
 

Fatman

Member

Reaction score: 2
Messages: 84

Thanks again,

I'll do you the favor and give it a try, which will hopefully help you make your decision. However, I will not try adding it to my raidz2 (6 x 1 TB) for fear of losing the data that's on it. :\

When you mentioned there are some good ZFS threads, are you recommending using those over the wiki?

This question is a little off topic, but how much space should I allocate to swap? From what I've read, the double rule no longer applies:

http://www.cyberciti.biz/tips/linux-swap-space.html

Is 4 GB what I should be using for my setup?

Your input is very much appreciated considering that I'm a newb trying to find his way.

Fatman
 

Voltar

Active Member

Reaction score: 24
Messages: 191

The threads might have some helpful info in case you run into any problems, and show how a specific user may have set something up or fixed something. I would go by the wiki starting out.

As for swap, I've been using a straight 4GB in all my servers, and I've never seen it used either.
 

Blueprint

Member

Reaction score: 1
Messages: 37

I was running 8-RC3 with 4 GB RAM and root on ZFS, and everything was fine with no tuning until I decided to transfer about 250 GB over to my storage pool via NFS. It panicked halfway through the transfer. After adding some tuning it went through fine and hasn't panicked since. I do run two pools though, so I'm not sure if that requires more memory.

I don't think much would have changed for RELEASE, so I kept my tuning parameters. I have had no problems with RELEASE with root on ZFS thus far.
 

Fatman

Member

Reaction score: 2
Messages: 84

I noticed that you were unsuccessful using the following wiki:

http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot

Can you please provide your system specs? Are you using a 32- or 64-bit system?

Unless someone tells me otherwise, I will attempt to use this wiki since I feel someone should validate it. Hopefully any problems found can be resolved here and the wiki can be updated.

I definitely foresee many questions coming, since I am fairly new to FreeBSD but eager to learn.
 

Fatman

Member

Reaction score: 2
Messages: 84

Can someone shed some more light as to what's going on here?

# Create GPT Disk

Fixit# gpart create -s gpt ad0

# Create the boot, swap and zfs partitions

Fixit# gpart add -b 34 -s 128 -t freebsd-boot ad0
Fixit# gpart add -b 162 -s 8388608 -t freebsd-swap -l swap0 ad0
Fixit# gpart add -b 8388770 -s 125829120 -t freebsd-zfs -l disk0 ad0

This creates 3 partitions. The first partition contains the gptzfsboot loader which is able to recognize and load the loader from a ZFS partition. The second partition is a 4 GB swap partition. The third partition is the partition containing the zpool (60GB).

Note:

1. While a ZFS Swap Volume can be used instead of the freebsd-swap partition, crash dumps can't be created on the ZFS Swap Volume.
2. Sizes and offsets are specified in sectors (1 sector is typically 512 bytes).
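Those figures check out with a bit of shell arithmetic (assuming 512-byte sectors, per note 2):

```shell
# Convert the wiki's partition sizes from 512-byte sectors to GiB.
echo $(( 8388608 * 512 / 1024 / 1024 / 1024 ))      # swap partition -> 4
echo $(( 125829120 * 512 / 1024 / 1024 / 1024 ))    # zfs partition  -> 60
```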
The man pages really helped me, but I also had to research GPT on Wikipedia to understand the commands.

http://en.wikipedia.org/wiki/GUID_Partition_Table
http://www.freebsd.org/cgi/man.cgi?query=gpart&apropos=0&sektion=0&manpath=FreeBSD+8.0-RELEASE&format=html

Below creates GPT scheme:

Code:
Create GPT Disk
Fixit# gpart create -s gpt ad0
"create"
Create a new partitioning scheme on a provider given by provider. The -s scheme option determines the scheme to use. The kernel needs to have support for a particular scheme before that scheme can be used to partition a disk.
The above is still unclear to me.

I understand the part below and thought to share what I grasped from this section and how I'm applying it. Please correct me if I've made a mistake.

Code:
# 
Creating boot partition:
Fixit# gpart add -b 34 -s 128 -t freebsd-boot ad0
"add"
Adds a new partition to the partitioning scheme given by geom. The partition begins on the logical block address given by the -b start option. Its size is expressed in logical block numbers and given by the -s size option. The type of the partition is given by the -t type option.
Here we create the boot partition which starts at Sector 34.

In 64-bit Windows operating systems, 16,384 bytes, or 32 sectors, are reserved for the GPT, leaving LBA 34 as the first usable sector on the disk
The command also defines the size [in sectors (blocks of 512 bytes)] and the type option. So if I understand correctly, 128 sectors = 65536 bytes.
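To double-check myself, the offsets in the wiki's commands can be verified with shell arithmetic; each partition starts right where the previous one ends (all values in 512-byte sectors):

```shell
# Each gpart -b offset is the previous partition's start plus its size.
boot_start=34; boot_size=128
swap_start=$((boot_start + boot_size))    # 162, matches the wiki's -b 162
swap_size=8388608                         # 4 GiB worth of 512-byte sectors
zfs_start=$((swap_start + swap_size))     # 8388770, matches -b 8388770
echo "$swap_start $zfs_start $((boot_size * 512))"
```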

The next two commands create your swap and ZFS partitions. However, I don't quite understand what a label is, so I'll need to read up on it.

Code:
Fixit# gpart add -b 162 -s 8388608 -t freebsd-swap -l swap0 ad0
Fixit# gpart add -b 8388770 -s 125829120 -t freebsd-zfs -l disk0 ad0
-l label
The label attached to the partition. This option is only valid when used on partitioning schemes that support partition labels.
Since I have only 40 gigs for my drive, should I use the following:

Code:
Fixit# gpart add -b 34 -s 128 -t freebsd-boot ad0
Fixit# gpart add -b 162 -s 4294967296 -t freebsd-swap -l swap0 ad0
Fixit# gpart add -b 4294967458 -s 38654622720 -t freebsd-zfs -l disk0 ad0
Code:
42949672960 (40 * 1024 * 1024 * 1024)
4294967296 (4 * 1024 * 1024 * 1024)
82944 (162 * 512)

HD Space Available for ZFS = 38654622720 (35.99992275238037109375 GB)
I assume from the following that I should not use a ZFS Swap Volume...

1. While a ZFS Swap Volume can be used instead of the freebsd-swap partition, crash dumps can't be created on the ZFS Swap Volume.
More info on creating swap volume
Creating a swap partition on the ZFS Filesystem using a ZFS Volume:
Fixit# zfs create -V 2gb zroot/swap
Fixit# zfs set org.freebsd:swap=on zroot/swap
Fixit# zfs set checksum=off zroot/swap
Thanks again to all those helping me out. Hopefully what I've learned will help others.

Sincerely,

Fatman
 

Voltar

Active Member

Reaction score: 24
Messages: 191

Fatman said:
I noticed that you were unsuccessful using the following wiki:

http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot

Can you please provide your system specs? Are you using 32 or 64bit system?
Was that for me or Blueprint?


Fatman said:
Below creates GPT scheme:

Code:
Create GPT Disk
Fixit# gpart create -s gpt ad0

Hopefully I understand what you want here. You want an explanation of what this does?

We invoke gpart, and tell it to create a new partition table, of the type GPT, on disk ad0.

...I don't quite understand what a label is so i'll need to read up on it...
Labels make it easier to keep track of partitions and swap out whole drives in ZFS pools. For example, you could label a drive 'vdev1/disk0', which could make it easier to keep track of. See glabel(8) for more info.

The rest looks good, and personally I would go with a FreeBSD type swap partition just in case I need a crash dump.
 

Fatman

Member

Reaction score: 2
Messages: 84

Great, thanks again Voltar.

The message was directed to Blueprint, but it doesn't really matter. Do you think you could answer my question in the third post about how much space I should allocate to swap?

Is it still double the RAM, or has that been deprecated? I read that I just need to use 4 GB since that's what I have installed.

Do I also need to leave space towards the end of the drive or can I allocate the remainder as shown at the end of my previous post?

Are there any benefits from using Swap Volume over freebsd swap?

I feel really stupid asking these questions but I need to confirm that I'm doing things right the first time. Do not feel obligated to respond to them. I'm the type of guy that tries to know the ins/outs of everything before actually even applying it.

Thanks again,

Fatman
 

Voltar

Active Member

Reaction score: 24
Messages: 191

Fatman said:
Do you think you could answer my question in the third posting about how much space i should allocate to Swap?

Is it still double the RAM, or has that been deprecated? I read that I just need to use 4 GB since that's what I have installed.
Voltar said:
As for swap, I've been using a straight 4GB in all my servers, and I've never seen it used either.
It all depends on who you ask; I've never had an issue with 4 GB, as I said.


Do I also need to leave space towards the end of the drive or can I allocate the remainder as shown at the end of my previous post?
If this is going to be your OS only drive, I would go ahead and use it all up.


Are there any benefits from using Swap Volume over freebsd swap?
I read something months ago along the lines of: ZFS is memory-hungry, so if you hit a situation where you need swap and it is on a ZFS volume, that could lead to a panic. I don't honestly know how much of this still applies, if any.

I feel really stupid asking these questions but I need to confirm that I'm doing things right the first time. Do not feel obligated to respond to them. I'm the type of guy that tries to know the ins/outs of everything before actually even applying it.
The only stupid question is the one you don't ask. I have to say though, my dev machine in the corner is looking like a prime target to try this with.
 

Fatman

Member

Reaction score: 2
Messages: 84

Almost ready to start the install but I want to get an answer to my last question before proceeding.

Code:
Install the Protected MBR (pmbr) and gptzfsboot loader
Fixit# gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 ad0
The man page explains it, but I'm having a hard time understanding. I also did some research on the bootstrapping process and thought I'd share it:

http://en.wikipedia.org/wiki/Bootstrapping_(computing)
http://elearning.algonquincollege.com/coursemat/pincka/dat2343/lectures.f03/16-LMC-Bootstrap.htm

Who Loads the Loader? - If the Operating System provides the instructions necessary to load other programs, where do the instructions come from to load the Operating System? The traditional answer to this is that instructions to load the Operating System are contained in a ROM (Read Only Memory) which receives control on initial "power up". The instructions that perform this load of the Operating System (or of enough of the Operating System so that it can complete loading itself) is called a "bootstrap" program. Other possibilities (than ROM) exist, including a hardware facility to directly load values into memory locations through "toggle switches". The bootstrap program must be able to read data from a secondary storage device (typically from a non-file-structured area of a disk) into memory and "jump" to the area of memory where this data was stored.
bootcode
Embed bootstrap code into the partitioning scheme's metadata on the geom (using -b bootcode) or write bootstrap code into a partition (using -p partcode and -i index). Not all partitioning schemes have embedded bootstrap code, so the -b bootcode option is scheme-specific in nature. For the GPT scheme, embedded bootstrap code is supported. The bootstrap code is embedded in the protective MBR rather than the GPT. The -b bootcode option specifies a file that contains the bootstrap code. The contents and size of the file are determined by the partitioning scheme. For the MBR scheme, it's a 512 byte file of which the first 446 bytes are installed as bootstrap code. The -p partcode option specifies a file that contains the bootstrap code intended to be written to a partition. The partition is specified by the -i index option. The size of the file must be smaller than the size of the partition.
/mnt2/boot/pmbr = file that contains my bootstrap code
/mnt2/boot/gptzfsboot = file that contains the bootstrap code that will be written to the partition

1 ad0 = index option

What does the one stand for here? Is it referring to the first sector of the first partition?

The rest of the points in section one of the ZFS wiki are pretty straightforward. Section two is pretty much the same, but I was curious to know if it's possible to modify the script below to do a minimal install with just base + man (I think that's all I need). I normally get the ports collection via portsnap, so I don't see the need to install it.

Code:
Fixit# cd /dist/8.0-*
Fixit# export DESTDIR=/zroot
Fixit# for dir in base catpages dict doc games info lib32 manpages ports; \
          do (cd $dir ; ./install.sh) ; done
Fixit# cd src ; ./install.sh all
Fixit# cd ../kernels ; ./install.sh generic
Fixit# cd /zroot/boot ; cp -Rlp GENERIC/* /zroot/boot/kernel/

Can the script be changed for example to the following?

Fixit# for dir in base manpages; \
          do (cd $dir ; ./install.sh) ; done
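To be safe, the loop can be dry-run anywhere outside Fixit, with echo standing in for each set's install.sh (a mock only; nothing gets installed):

```shell
# Dry run of the distribution-set loop; echo stands in for ./install.sh
# so we can see which sets the loop would visit.
for dir in base manpages; do
  echo "would install $dir"
done
```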
Thanks,

Fatman
 

Voltar

Active Member

Reaction score: 24
Messages: 191

Fatman said:
1 ad0 = index option

What does the one stand for here? Is it referring to the first sector of the first partition?
The '-i 1' denotes the first partition in the GUID partition table, I believe (I could be very wrong, I just skimmed the man page again). The 'ad0' is the device.


The rest of the points in section one of the ZFS wiki are pretty straightforward. Section two is pretty much the same, but I was curious to know if it's possible to modify the script below to do a minimal install with just base + man (I think that's all I need). I normally get the ports collection via portsnap, so I don't see the need to install it.
At the very least you need base and the GENERIC kernel. I just go with the bare-bones install and compile from source to get updated and any features that I need.
 

Fatman

Member

Reaction score: 2
Messages: 84

So from what I understand, my modified version should work since you copy over the kernel after the script executes.

Code:
Fixit# cd /dist/8.0-*
Fixit# export DESTDIR=/zroot
Fixit# for dir in base manpages ports; \
          do (cd $dir ; ./install.sh) ; done
Fixit# cd src ; ./install.sh all
Fixit# cd ../kernels ; ./install.sh generic
Fixit# cd /zroot/boot ; cp -Rlp GENERIC/* /zroot/boot/kernel/
DVDs burnt; I'm starting the install now. :e
 

Voltar

Active Member

Reaction score: 24
Messages: 191

Fatman said:
So from what I understand, my modified version should work since you copy over the kernel after the script executes.

Code:
Fixit# cd /dist/8.0-*
Fixit# export DESTDIR=/zroot
Fixit# for dir in base manpages ports; \
          do (cd $dir ; ./install.sh) ; done
Fixit# cd src ; ./install.sh all
Fixit# cd ../kernels ; ./install.sh generic
Fixit# cd /zroot/boot ; cp -Rlp GENERIC/* /zroot/boot/kernel/
DVD's burnt, I'm starting the install now. :e
That should work, although personally I would go with portsnap for getting ports.

Let me know how it goes.
 

Fatman

Member

Reaction score: 2
Messages: 84

Voltar,

I booted up the DVD and selected fixit. I was experiencing some weird issues and it took me a while to figure out that num lock was causing it. x(

I have a feeling that I've missed something here. When I type the first command:

gpart create -s gpt ad0
I get "geom 'ad0': File exists". I'm getting this even after fdisking the partitions and using W to write and then Q to exit. When I attempt the second command I get:

gpart: start '34': invalid argument
Is this normal?
 

Voltar

Active Member

Reaction score: 24
Messages: 191

Well are you certain that your first disk (or the one you want to install to) is 'ad0'? Try something like # ls /dev | grep ad (I'm sure there is a better way to do that) to see what you have.
 

Fatman

Member

Reaction score: 2
Messages: 84

The command displays ad0 and when I go into fdisk, the disk name shows ad0.

Off to a great start... :( I think you're right though; I must be specifying the wrong drive. Let me take a look at the dmesg output.
 

Fatman

Member

Reaction score: 2
Messages: 84

Nope, my drive is showing up as ad0 in dmesg.

Code:
ad0: 38204MB <SAMSUNG SP0411N TW100-11> at ata0-master UDMA100
 

Voltar

Active Member

Reaction score: 24
Messages: 191

Alrighty, I just popped an unused SCSI disk into my dev machine and booted the 8.0 USB stick image. Got the same error. Had to zero the old partition table with dd, then it worked:

# dd if=/dev/zero of=/dev/ad0 bs=512 count=1
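If you want to see what that dd does before pointing it at a real disk, try it on a scratch file first (a temp file here, not your drive); it overwrites exactly one 512-byte sector with zeros:

```shell
# Safe demo: run the same dd invocation against a temp file instead of
# a real disk. The file stands in for the disk's first sector.
scratch=$(mktemp)
printf 'stale partition table' > "$scratch"   # pretend this is the old MBR
dd if=/dev/zero of="$scratch" bs=512 count=1 2>/dev/null
wc -c < "$scratch"                            # now exactly 512 zero bytes
rm -f "$scratch"
```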
 

Fatman

Member

Reaction score: 2
Messages: 84

That fixed it; however, I'm now getting this error:
Code:
gpart: size 4294967296 : invalid argument
when executing:
Code:
Fixit# gpart add -b 162 -s 4294967296 -t freebsd-swap -l swap0 ad0

I thought the -s was followed by the swap size in bytes. I have four gigs, so I came up with the following:

Code:
4294967296 (4 * 1024 * 1024 * 1024)
Am I to divide the total by 512 for block size? Does this make any sense?
 

Voltar

Active Member

Reaction score: 24
Messages: 191

The '-s' is for the size, in sectors.

http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot#line-1 said:
Note:

1.While a ZFS Swap Volume can be used instead of the freebsd-swap partition, crash dumps can't be created on the ZFS Swap Volume.
2. Sizes and offsets are specified in sectors (1 sector is typically 512 bytes).
So 8388608 sectors (8388608 * 512) = 4294967296 bytes (4 GB)
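If it helps, that bytes-to-sectors conversion can go in a tiny helper (made-up function name) for working out the rest of your layout:

```shell
# Hypothetical helper: convert a size in bytes to the 512-byte sector
# count that gpart's -s option expects.
bytes_to_sectors() { echo $(( $1 / 512 )); }

bytes_to_sectors 4294967296    # 4 GiB of swap -> 8388608 sectors
```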
 

Fatman

Member

Reaction score: 2
Messages: 84

Thanks, that solved that problem, but I hit another roadblock:

Code:
Fixit# gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 ad0
outputs: "ad0 has bootcode". Is that ok?

Step 1, points 6 and 7 went through fine.
 