UFS: trying to migrate to a RAID system

tony33

Member

Thanks: 2
Messages: 79

#1
Hi, I have a server running FreeBSD 11.2.

I have bought eight 10 TB hard drives, and I have one 920 GB drive.

The 920 GB drive is an old one that I installed when I built my first server.

I now want to set the system up as a RAID, so that the drives are in mirrored pairs (every 2 drives mirror each other).

So here's where I am at. I was told to back up the original hard drive to an external hard drive via USB, which I did.

I used dump(8).

Now, the issue is that I have never done a restore in my life. I always assumed I did the dumps correctly because I followed someone's instructions and it never wiped out my hard drive, but I never really checked whether the dumps were good or not.

So, right now I was told to do a backup of the original hard drive to the USB external drive, which I did.

I now have the original hard drive and one of the new drives connected to the server.

So da0 is the original drive, da1 is the new hard drive, and da2 is the external hard drive with the backup images.

I backed up the MBR and the partition table.
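I don't remember the exact commands off the top of my head, but it was roughly along these lines (the device names, the /mnt/usb mount point and the file names here are from memory, so they may differ):

Code:
     # copy of the MBR / partition table of the original drive
     dd if=/dev/da0 of=/mnt/usb/da0-mbr.bin bs=512 count=1
     gpart backup da0 > /mnt/usb/da0.gpart
     # level 0 dump of the root filesystem to a file on the external drive
     dump -0Laf /mnt/usb/root.dump /dev/da0s1a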

Now I want to do a restore.

However, I don't want to restore to the original drive, which is da0. I want to do the restore onto da1.

I see tutorials and instructions online for doing a restore, but it looks like restore is going to overwrite files on the original or main drive, which in my case is da0.

I don't want to restore onto the original device, da0. Instead I want to restore from the dump images on da2, the external hard drive, which uses the NTFS file system.

What is the best way to do this? I don't want to overwrite anything on the main hard drive, da0.

Right now I'm trying to practice using restore on my new hard drive, which is blank. So I need to know whether I can use restore (or something else) to take the dump files, restore or transfer all the files from the image onto a hard drive, and make that hard drive bootable.

The second thing is: what is the best way to set up a RAID system? I have 8 new 10 TB hard drives. I can only fit 8 drives in my server, and I plan to replace the original hard drive with these new ones. I just don't want to lose any data. The data on the original drive is valuable; it has about 11 years of work on it.
 

ShelLuser

Son of Beastie

Thanks: 1,590
Messages: 3,460

#2
Simple, also see the restore(8) manual page. Restoration is only done on a filesystem of your choosing. So mount the filesystem to which you want to restore, make that the active one, and then use something like # restore -rf <path to backup file>.

This will restore the filesystem onto the currently active filesystem (the one you cd'd into).

As to the RAID... it depends a bit on the server. If the server has plenty of memory I'd definitely recommend looking into ZFS and setting up a RAID-Z ('raidz') based pool. Also see zpool(8).
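A minimal sketch of what that could look like with four of the new drives (the pool name "tank" and the device names are just placeholders; check yours with camcontrol devlist):

Code:
     # create a raidz pool over four disks; one disk may fail without data loss
     zpool create tank raidz da1 da2 da3 da4
     # verify the layout
     zpool status tank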
 

tony33

Member

Thanks: 2
Messages: 79

#3
Simple, also see the restore(8) manual page. Restoration is only done on a filesystem of your choosing. So mount the filesystem to which you want to restore, make that the active one, and then use something like # restore -rf <path to backup file>.

This will restore the filesystem onto the currently active filesystem (the one you cd'd into).

As to the RAID... it depends a bit on the server. If the server has plenty of memory I'd definitely recommend looking into ZFS and setting up a RAID-Z ('raidz') based pool. Also see zpool(8).
What do you mean by active? Do you mean like cd /mount/point? Like you cd to the directory that you mounted the hard drive to?

Like mount /dev/da1s1 /mnt/point

After typing that, do you mean I would then have to do cd /mnt/point

and then run restore -rf <path to backup file>?

Just want to make sure I understood correctly before doing this.

I will look into ZFS and zpool. My server has built-in BIOS hardware RAID; I can boot into a RAID configuration utility at startup. It has a lot of RAM. Total hard drive capacity would be 80 TB.
 

ShelLuser

Son of Beastie

Thanks: 1,590
Messages: 3,460

#4
What do you mean by active? Do you mean like cd /mount/point? Like you cd to the directory that you mounted the hard drive to?
...
Just want to make sure I understood correctly before doing this.
That is why manual pages exist, you know ;)

If you had checked man restore (so: restore(8)) you'd have seen this:

Code:
     -r      Restore (rebuild a file system).  The target file system should
             be made pristine with newfs(8), mounted and the user cd(1)'d into
             the pristine file system before starting the restoration of the
             initial level 0 backup.
<cut>
            An example:
                   newfs /dev/da0s1a
                   mount /dev/da0s1a /mnt
                   cd /mnt

                   restore rf /dev/sa0
Despite restore being heavily tied into UFS, the restore command can also be used on other filesystems such as ZFS, so there's no reason to worry there. Of course, if you do use ZFS then you can't use any UFS-specific features, such as the ones that would let you change the block size.

This is also what makes restore such a powerful command and why I still favor dump/restore above anything else whenever I'm working with UFS.
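For your situation, restoring that level 0 dump onto the blank da1 would go roughly like this. The partition names, mount points and the dump file name below are assumptions, so adjust them to your actual layout; the NTFS external drive needs sysutils/fusefs-ntfs to be mountable on FreeBSD.

Code:
     # mount the NTFS backup drive (load the fuse kernel module first)
     ntfs-3g /dev/da2s1 /mnt/backup
     # assuming da1 has already been partitioned the same way as da0
     newfs -U /dev/da1s1a
     mount /dev/da1s1a /mnt
     cd /mnt
     restore -rf /mnt/backup/root.dump
     # to make da1 bootable you still need to write boot code, see gpart(8)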

I will look into ZFS and zpool. My server has built-in BIOS hardware RAID; I can boot into a RAID configuration utility at startup. It has a lot of RAM. Total hard drive capacity would be 80 TB.
Whatever you do: do not mix the two options. If you choose to use hardware RAID, then continue to rely on UFS for your system, because otherwise you're going to end up with a mess.

My 2 cents: I massively prefer software RAID over hardware RAID, especially when powered by ZFS. The main problem is that the way the RAID gets set up is usually specific to that one brand of RAID controller. In other words: should your RAID controller break for whatever reason, you're more or less forced to get one of the same brand, and to triple-check that it's actually compatible with your setup.

Note (small disclaimer): it is possible that many things have changed on the market in the meantime, but this has been my experience, and after a few nasty ordeals I gave up on hardware RAID entirely.

With software RAID, i.e. ZFS, this doesn't matter. As long as your OS supports your controller(s), ZFS can use them to maintain and restore its RAID setup.
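For example, moving a pool to a different controller or machine is just this (the pool name is only an example):

Code:
     # on the old hardware
     zpool export tank
     # after the disks have been connected to the new controller
     zpool import tank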

Maybe food for thought? :)
 

SirDice

Administrator
Staff member
Moderator

Thanks: 6,618
Messages: 28,160

#5
I would go with ZFS. Build up the system with two 10 TB drives first (mirrored). Just do a clean install. Once the system is booting, attach the old drive and copy any data from it (don't copy the OS or packages, just the data). Remove the old drive and add the rest of the 10 TB drives; you can easily add those to the existing pool as additional mirrors. Keep the old drive around and get an easy-to-use USB-to-SATA interface, so you can connect the drive over USB in case you forgot something.
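Roughly like this (pool and device names are only examples; the FreeBSD installer normally creates the initial mirrored pool for you):

Code:
     # after the clean install you have a mirrored root pool, e.g. "zroot"
     zpool status zroot
     # later, grow the pool by adding the remaining 10 TB drives as mirrored pairs
     zpool add zroot mirror da2 da3
     zpool add zroot mirror da4 da5
     zpool add zroot mirror da6 da7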
 