UFS trying to merge to a raid system

tony33

Member

Reaction score: 2
Messages: 87

Hi, I have a server running the current version of FreeBSD, 11.2.

I have bought eight 10 TB hard drives and have one 920 GB drive.

The 920 GB drive is an old one that I installed when I built my first server.

I now want to set the system up as a RAID, so that each pair of drives mirrors each other.

So here's where I am at. I was told to back up the original hard drive to an external hard drive via USB. I did this.

I used dump.

Now the issue is that I have never done a restore in my life. I always assumed I did the dumps correctly because I followed someone's instructions and it never wiped out my hard drive, but I never really checked whether the dumps were good or not.

So, right now I was told to do a backup of the original hard drive to a USB external drive, which I did.

I now have the original hard drive and one new drive connected to the server.

So da0 is the original drive, da1 is the new hard drive, and da2 is the external hard drive with the backup images.

I backed up the MBR and the partition.

Now I want to do a restore.

However, I don't want to restore to the original drive, which is da0. I want to restore to da1.

I see tutorials and instructions online for doing a restore, but it looks like restore is going to overwrite files on the original (main) drive, which in my case is da0.

I don't want to restore onto the original device, da0. Instead I want to restore from the dump images on da2, the external hard drive, which uses the NTFS file system.

What is the best way to do this? I don't want to overwrite anything on the main hard drive, da0.

Right now I am trying to practice using restore on my new hard drive, which is blank. So I need to know whether I can use restore, or something else, to take the dump files, restore all the files from the image to a hard drive, and make that hard drive bootable.

The second thing: what is the best way to set up a RAID system? I have eight new 10 TB hard drives; I can fit only eight in my server and plan to replace the original hard drive with the new ones. I just don't want to lose any data. The data on the original drive is valuable; it has about 11 years of work on it.
 

ShelLuser

Son of Beastie

Reaction score: 1,671
Messages: 3,512

Simple; also see the restore(8) manual page. Restoration is only done on a filesystem of your choosing. So mount the filesystem to which you want to restore, make it the current directory, and then use something like # restore -rf <path to backup file>.

This will restore the backup onto the currently active filesystem (the one you cd'd into).

As to the RAID... it depends a bit on the server. If the server has plenty of memory I'd definitely recommend looking into ZFS and setting up a RAID-Z ('raidz') based pool. Also see zpool(8).
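In practice the steps ShelLuser describes might look like the sketch below; the partition name da1s1a and the dump file path are assumptions for illustration (and newfs erases the target, so triple-check the device node):

```shell
# Wipe the target partition with a fresh UFS filesystem (destroys da1s1a!)
newfs /dev/da1s1a

# Mount it and cd into it -- restore rebuilds into the current directory
mount /dev/da1s1a /mnt
cd /mnt

# Rebuild the filesystem from the level 0 dump file on the external drive
restore -rf /path/to/backup/root.dump
```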
 
OP
tony33

Member

Reaction score: 2
Messages: 87

Simple; also see the restore(8) manual page. Restoration is only done on a filesystem of your choosing. So mount the filesystem to which you want to restore, make it the current directory, and then use something like # restore -rf <path to backup file>.

This will restore the backup onto the currently active filesystem (the one you cd'd into).

As to the RAID... it depends a bit on the server. If the server has plenty of memory I'd definitely recommend looking into ZFS and setting up a RAID-Z ('raidz') based pool. Also see zpool(8).
What do you mean by "active"? Do you mean like cd /mount/point, i.e. you cd into the directory that you mounted the hard drive on?

Like mount /dev/da1s1 /mnt/point

After typing that, would I then do cd /mnt/point

and then restore -rf <path to backup file>?

I just want to make sure I understood correctly before doing this.

I will look into ZFS and zpool. My server has built-in BIOS hardware RAID; I can boot into a BIOS RAID config screen at startup. It has a lot of RAM. Total hard drive capacity would be 80 TB.
 

ShelLuser

Son of Beastie

Reaction score: 1,671
Messages: 3,512

What do you mean by "active"? Do you mean like cd /mount/point, i.e. you cd into the directory that you mounted the hard drive on?
...
just want to make sure I understood correctly before doing this.
That is why manual pages exist, you know ;)

If you had checked man restore (so: restore(8)) you'd have seen this:

Code:
     -r      Restore (rebuild a file system).  The target file system should
             be made pristine with newfs(8), mounted and the user cd(1)'d into
             the pristine file system before starting the restoration of the
             initial level 0 backup.
<cut>
            An example:
                   newfs /dev/da0s1a
                   mount /dev/da0s1a /mnt
                   cd /mnt

                   restore rf /dev/sa0
Despite restore being heavily tied to UFS, the restore command can also be used on other filesystems such as ZFS, so there's no reason to worry there. Of course, if you do use ZFS then you can't use any ZFS-specific features, such as changing the block size.

This is also what makes restore such a powerful command, and why I still favor dump/restore above anything else whenever I'm working with UFS.

I will look into ZFS and zpool. My server has built-in BIOS hardware RAID; I can boot into a BIOS RAID config screen at startup. It has a lot of RAM. Total hard drive capacity would be 80 TB.
Whatever you do: do not mix the two options. If you choose hardware RAID then stick with UFS for your system, because otherwise you're going to build a mess.

My 2 cents: I massively prefer software RAID over hardware RAID, especially when powered by ZFS. See, the main problem is that the on-disk RAID layout is usually specific to that one brand of RAID controller. In other words: should your RAID controller break for whatever reason, you're more or less forced to get one of the same brand, and to make triple sure that it's actually compatible with your setup.

Note (small disclaimer): it is possible that many things have changed on the market in the meantime, but this has been my experience, and after a few nasty ordeals I gave up on hardware RAID entirely.

Yet with software RAID, i.e. ZFS, this doesn't matter. As long as your OS supports your controller(s), ZFS can use them to maintain / restore its RAID setup.

Maybe food for thought? :)
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 6,971
Messages: 28,968

I would go with ZFS. Build up the system with two 10 TB drives first (mirrored); just do a clean install. Once the system is booting, attach the old drive and copy any data from it (don't copy the OS or packages, just the data). Remove the old drive and add the rest of the 10 TB drives; you can easily add those to the existing mirror. Keep the old drive around and get an easy-to-use USB -> SATA interface, so you can connect the drive over USB in case you forgot something.
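As a rough sketch, the layout SirDice describes (start with one mirror, grow with more pairs later) could be built with zpool(8) like this; the pool name "tank" and the device nodes are placeholders:

```shell
# Start with a two-disk mirror (normally done by the installer's ZFS option)
zpool create tank mirror da1 da2

# Later, grow the pool by striping additional mirrored pairs onto it
zpool add tank mirror da3 da4
zpool add tank mirror da5 da6
zpool add tank mirror da7 da8

# Verify the resulting layout
zpool status tank
```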
 
OP
tony33

Member

Reaction score: 2
Messages: 87

I followed the above advice and tried to clone my hard drive to a second drive. I can see an exact copy on the second drive, but when I take the primary drive out and try to boot the machine with the new copy, it doesn't detect a bootable device. How do you make an exact copy and make the backup drive bootable?
 
OP
tony33

Member

Reaction score: 2
Messages: 87

No, but I am not doing a RAID right now. At the moment I am just trying to clone the original hard drive, so that I can take the old drive out, put the new drive in, and have it work as the original did. I am just trying to make a backup copy of the original boot drive before deciding to build a RAID system.
 

Lanakus

Active Member

Reaction score: 101
Messages: 158

Are you using UFS or ZFS now? I think SirDice's advice above was related to ZFS and the creation of RAID-Z, starting with a mirror vdev (which works like RAID-1).

If you just want to clone the old disk with UFS, it's easy:
Code:
 gpart backup da0 | gpart restore -F da2
To make your new disk bootable (on legacy hardware) you need to install a PMBR and boot loader:
Code:
 gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da2
Use the -i option to tell gpart(8) which partition receives the boot loader.

If you are booting via UEFI: FreeBSD provides an EFI partition image as /boot/boot1.efifat. You can copy it to your boot partition with dd(1).
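A minimal sketch of that dd(1) step, assuming the EFI partition on the new disk is da2p1 (verify the partition index with gpart show da2 first):

```shell
# Write the prebuilt FreeBSD EFI loader image into the efi partition
dd if=/boot/boot1.efifat of=/dev/da2p1
```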

But I would also recommend ZFS if you plan to build a RAID.
 
OP
tony33

Member

Reaction score: 2
Messages: 87

Are you using UFS or ZFS now? I think SirDice's advice above was related to ZFS and the creation of RAID-Z, starting with a mirror vdev (which works like RAID-1).

If you just want to clone the old disk with UFS, it's easy:
Code:
 gpart backup da0 | gpart restore -F da2
To make your new disk bootable (on legacy hardware) you need to install a PMBR and boot loader:
Code:
 gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da2
Use the -i option to tell gpart(8) which partition receives the boot loader.

If you are booting via UEFI: FreeBSD provides an EFI partition image as /boot/boot1.efifat. You can copy it to your boot partition with dd(1).

But I would also recommend ZFS if you plan to build a RAID.

I am using UFS for now. I just want to back up my original hard drive, make a duplicate, then take the original out, put the new copy in, and test whether it boots the system and loads an exact copy of the original system and its files. Once I do this I will know I have a good working backup.

Then I will work on making a RAID system using ZFS.

I am trying to use dump and restore to make an exact copy of an existing hard drive that uses the UFS file system. I need to get this working before I start converting my system to ZFS. My server uses the older boot system.
 

Lanakus

Active Member

Reaction score: 101
Messages: 158

Ok. I think dump(8) is largely obsolete these days; people use more advanced backup software for file-level backup, e.g. Bacula or Tarsnap. But you can use sysutils/clone or net/rsync to clone the contents of UFS filesystems.
Take a look at Thread 57377; that might be what you are looking for...
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 6,971
Messages: 28,968

I think dump(8) is largely obsolete these days
It's not exactly obsolete; it still has its uses. But I personally never bother with backing up the OS, because a complete, clean reinstall is typically faster than restoring a backup (a restore(8) can take several hours while a (re)install is done in less than 20 minutes). The only thing actually worth backing up is data, not the OS or the applications (since those can easily be reinstalled).
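The "back up the data, not the OS" approach can be practiced safely with tar(1) on scratch directories; on a real server you would archive directories such as /usr/home or /usr/local/etc instead of the /tmp paths used in this demonstration:

```shell
# Create a throwaway "data" directory standing in for the real thing
mkdir -p /tmp/demo/data
echo "important" > /tmp/demo/data/notes.txt

# Archive only the data, not the OS
tar -czf /tmp/demo-backup.tgz -C /tmp/demo data

# Always test-restore a backup before trusting it
mkdir -p /tmp/demo-restore
tar -xzf /tmp/demo-backup.tgz -C /tmp/demo-restore
cat /tmp/demo-restore/data/notes.txt
```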
 
OP
tony33

Member

Reaction score: 2
Messages: 87

It's not exactly obsolete; it still has its uses. But I personally never bother with backing up the OS, because a complete, clean reinstall is typically faster than restoring a backup (a restore(8) can take several hours while a (re)install is done in less than 20 minutes). The only thing actually worth backing up is data, not the OS or the applications (since those can easily be reinstalled).
So, how can I back up the drive and make the backup drive bootable? Not using RAID; using IDE.
 
OP
tony33

Member

Reaction score: 2
Messages: 87

My server uses the older MBR system. Right now I am just trying to clone a UFS drive to make an exact copy; the drive is a bootable drive. I am able to make an exact copy of both the MBR and the actual partition. The issue is that when I take the original drive out and swap in the newly copied drive, it does not boot at all. The server boots to a black screen saying no bootable device was found, even though when I look at the drive I can see all the files copied over exactly. Any ideas what I can try? Any commands I can use to see why this drive isn't being detected as bootable?
 

tingo

Daemon

Reaction score: 365
Messages: 1,939

Use gpart(8) to compare the two drives, like this: gpart show -p drive. Example:
Code:
tingo@kg-core1$ gpart show -p ada0
=>       34  250069613    ada0  GPT  (119G)
         34        128  ada0p1  freebsd-boot  (64K)
        162  119537664  ada0p2  freebsd-ufs  [bootme]  (57G)
  119537826    8388608  ada0p3  freebsd-swap  (4.0G)
  127926434  122143213  ada0p4  freebsd-ufs  [bootme]  (58G)
 
OP
tony33

Member

Reaction score: 2
Messages: 87

Should both drives have freebsd-boot? If one doesn't, how do you copy it over?
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 6,971
Messages: 28,968

Should both drives have freebsd-boot?
The whole point of a RAID setup is to allow the machine to continue functioning if one of the drives fails. So if your ada0 breaks, you'll want to be able to boot from ada1, for example. So yes, both disks need to have the boot partition.

If one doesn't, how do you copy it over?
gpart backup / gpart restore can be used to copy the partition tables. The boot code can be written with gpart bootcode; see gpart(8).
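Since this disk uses MBR partitioning rather than GPT, the boot code files differ from the GPT example earlier in the thread. A sketch of the whole sequence, with da1 standing in for the clone target (run nothing against da0, the original):

```shell
# Copy the MBR partition table from the old disk to the new one
gpart backup da0 | gpart restore -F da1

# Install the MBR boot code on the disk itself
gpart bootcode -b /boot/mbr da1

# Install the UFS boot blocks into the slice holding the root filesystem
gpart bootcode -b /boot/boot da1s1

# Mark slice 1 active so the BIOS knows to boot from it
gpart set -a active -i 1 da1
```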
 
OP
tony33

Member

Reaction score: 2
Messages: 87

The whole point of a RAID setup is to allow the machine to continue functioning if one of the drives fails. So if your ada0 breaks, you'll want to be able to boot from ada1, for example. So yes, both disks need to have the boot partition.


gpart backup / gpart restore can be used to copy the partition tables. The boot code can be written with gpart bootcode; see gpart(8).

I did gpart show -p device and I don't see freebsd-boot on any of the drives.

I see a freebsd-ufs on both drives, and a freebsd-swap on the original drive but not on the copy drive.

The original drive boots fine, and I do know there's boot code in its first 512 bytes. I mean, the original drive has no issues booting.

It's just the clone or copy that doesn't boot at all.

I will try to use gpart, but I will need a tutorial or some reading material to make sure I know how to use it without overwriting the original hard drive by mistake.
 
OP
tony33

Member

Reaction score: 2
Messages: 87

Then it's likely MBR partitioning. GPT and MBR boot a little differently.
Yes, it does have an MBR, and I have copied it to the second drive too. I can see this.

The first 512 bytes are the MBR; I used the dump and restore commands to back up those first 512 bytes and put them on the new drive.

Right now all I want to do is copy the main drive, make an exact clone, then turn the server off and start it again with the new drive, and have the new drive boot.
I want to do this because I have never done it before. I have decided not to do a RAID on this server; instead, my storage servers will be set up in a RAID, and future servers will be too.
I will do it this way until I learn exactly which files are important to back up, so that I can reinstall FreeBSD and just put back the config files for the server software.

I have been working on the server and website since 2008, and there are just too many important files that I cannot afford to lose.

Right now I cannot make an exact copy of the old drive and have it be bootable. I can clone it and have the exact files copied over, including the MBR, but when I turn the server off, swap out the old drive with the newly cloned drive, and boot the system, it shows a black screen stating that no bootable device was found.

If I turn it off, put back the old drive, and leave the new clone in the second slot, the old drive boots the system fine. I have no idea why. I copied the first 512 bytes to an external drive and then restored that image to the first 512 bytes of the new drive.

Now, I thought maybe the hard drive wasn't being detected by the system. The issue with that theory is that when I boot off the old drive, the OS shows during boot that it detects all the hard drives, including the external one. I can mount the new drive and the external drive and actually access the files, and the files on the new drive are identical to the old drive.

I just don't know if I have to do anything more than this. Does the drive have to be flagged as bootable? I do remember that when I first installed FreeBSD on the old drive years ago, I was asked whether this was a primary or a secondary drive when making the partition. I have no clue whether there are flags that get set to indicate the drive is bootable.

The server uses old hardware bought in 2008. I have now bought new servers and am in the process of building a data center; my business grew and I need to design the system to be scalable. So, right now I just need to copy the old drive exactly and make it bootable. It uses the old system with an MBR. I am guessing that the drive needs a flag to indicate it's bootable, i.e. active/primary.
 

Phishfry

Son of Beastie

Reaction score: 1,163
Messages: 3,341

Do you know how to run dd?
dd your old disk onto your new disk.
You can do this from a FreeBSD memstick in LiveCD mode. There are other ways too.
It will copy every block, including the empty ones and even fragmented ones.
Because of this it will take some time. It will also cause some wear and tear on the two drives.
It is so simple I had to throw it out there.
As long as the new drive is bigger you can use dd. (You will need to growfs the new disk.)
Make sure you use the correct device nodes, as dd is the "disk destroyer". There is no recovery mode.
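Phishfry's dd approach can be rehearsed with file-backed images before touching real disks; the /tmp paths below are just for the demonstration. On the real hardware the command would be along the lines of dd if=/dev/ada0 of=/dev/ada1 bs=1m conv=noerror,sync, with the device nodes triple-checked:

```shell
# Make a small scratch "disk" image
dd if=/dev/urandom of=/tmp/source.img bs=1k count=64 2>/dev/null

# Clone it block for block, exactly as you would clone a whole drive
dd if=/tmp/source.img of=/tmp/clone.img bs=1k conv=noerror,sync 2>/dev/null

# Verify the clone is bit-identical
cmp /tmp/source.img /tmp/clone.img && echo "clone is identical"
```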
 

Phishfry

Son of Beastie

Reaction score: 1,163
Messages: 3,341

I also wanted to advise you that what you want to do may not be possible with a disk clone via dd:
>> Clone an MBR disk to a 10 TB drive <<
MBR maxes out at 2 TB, I believe. That is the impetus for GPT, which works on drives over 2 TB.

Because GPT needs a separate freebsd-boot partition, this becomes more work.
So you would need to build the GPT/PMBR structure on your 10 TB drive and then restore the files.
You would also have to point /etc/fstab at the GPT partition name (/dev/ada0p2) instead of the MBR partition/slice name.

Similar question....
https://forums.freebsd.org/threads/converting-mbr-to-gpt.64463/
https://forums.freebsd.org/threads/mbr-gpt.63257/
 