UFS: trying to migrate to a RAID system

Hi, I have a server running the current version of FreeBSD, 11.2.

I have bought eight 10 TB hard drives and had one 920 GB drive.

The 920 GB drive is an old one that I installed when I built my first server.

I now want to set the system up as a RAID, so that every two drives mirror each other.

So here's where I am at: I was told to back up the original hard drive to an external hard drive via USB. I did this.

I used dump.

Now, the issue here is that I have never done a restore in my life. I always assumed I did the dumps correctly because I followed someone's instructions and it never wiped out my hard drive, but I never really checked whether the dumps were good or not.

So, right now I was told to do a backup of the original hard drive to a USB external drive, which I did.

I now have the original hard drive connected to the server and one new drive connected to the server.

So da0 is the original drive, da1 is the new hard drive, and da2 is the external hard drive with the backup images.

I backed up the MBR and the partition.

Now I want to do a restore.

However, I don't want to restore to the original drive, which is da0. I want to do the restore onto the da1 device.

I see tutorials and instructions online for doing a restore, but it looks like restore will overwrite files on the original (main) drive, which in my case is da0.

I don't want to do a restore onto the original device, da0. Instead I want to restore from the dump images on da2, the external hard drive, which uses the NTFS file system.

What is the best way to do this? I don't want to overwrite anything on the main hard drive, da0.

Right now I'm trying to practice using restore on my new hard drive, which is blank. So I need to know whether I can use restore, or something else, to take the dump files, restore/transfer all the files from the image onto a hard drive, and make that hard drive bootable.

The second thing is: what is the best way to set up a RAID system? I have eight new 10 TB hard drives. I can only use eight in my server and plan to replace the original hard drive with these new ones. I just don't want to lose any data. The data on the original drive is valuable; it has about 11 years of work on it.
 
Simple, also see the restore(8) manual page. Restoration is only done onto a filesystem of your choosing. So mount the filesystem you want to restore to, make that the active (current) directory, and then use something like # restore -rf <path to backup file>.

This will restore the dumped filesystem onto the currently active filesystem (the one you cd'd into).

As to the RAID... it depends a bit on the server. If the server has plenty of memory I'd definitely recommend looking into ZFS and setting up a RAID-Z ('raidz') based pool. Also see zpool(8).
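
For the mirrored-pairs layout you described, a pool of mirror vdevs would look roughly like this. This is only a minimal sketch: the pool name tank and the da1 through da8 device names are assumptions, so check your own device names first with gpart show or camcontrol devlist.
Code:
 # pool made of four mirrored pairs (striped mirrors); hypothetical device names
 zpool create tank mirror da1 da2 mirror da3 da4 mirror da5 da6 mirror da7 da8

 # or, a single RAID-Z2 vdev across all eight disks
 zpool create tank raidz2 da1 da2 da3 da4 da5 da6 da7 da8

 zpool status tank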
 
What do you mean by active? Do you mean like cd /mount/point, i.e. you cd to the directory that you mounted the hard drive on?

Like mount /dev/da1s1 /mnt/point

After typing that, do you mean I would have to do cd /mnt/point

and then do restore -rf <path to backup file>?

I just want to make sure I understood correctly before doing this.

I will look into ZFS and zpool. My server has built-in BIOS hardware RAID; I can boot into a BIOS RAID configuration screen at startup. It has a lot of RAM. The total hard drive capacity would be 80 TB.
 
That is why manual pages exist, you know ;)

If you had checked man restore (so: restore(8)) you'd have seen this:

Code:
     -r      Restore (rebuild a file system).  The target file system should
             be made pristine with newfs(8), mounted and the user cd(1)'d into
             the pristine file system before starting the restoration of the
             initial level 0 backup.
<cut>
            An example:
                   newfs /dev/da0s1a
                   mount /dev/da0s1a /mnt
                   cd /mnt

                   restore rf /dev/sa0
Despite restore being heavily tied into UFS, the restore command can also be used to restore onto other filesystems such as ZFS, so there's no reason to worry there. Of course, if you do restore onto ZFS you can't use the UFS-specific features, such as the ones that would allow you to change the block size.

This is also what makes restore such a powerful command and why I still favor dump/restore above anything else whenever I'm working with UFS.
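
As an illustration of that flexibility: restoring a UFS dump into a ZFS dataset looks just like restoring into a freshly newfs'd UFS filesystem. A small sketch, where the pool/dataset name tank/restore and the dump path are made up:
Code:
 # create and mount a dataset to receive the files, then restore into it
 zfs create -o mountpoint=/restore tank/restore
 cd /restore
 restore -rf /path/to/level0.dump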

I will look into ZFS and zpool. My server has built-in BIOS hardware RAID; I can boot into a BIOS RAID configuration screen at startup. It has a lot of RAM. The total hard drive capacity would be 80 TB.
Whatever you do: do not mix the two options. If you choose to use hardware RAID then continue to rely on UFS for your system, because otherwise you're going to build a mess.

My 2 cents: I massively prefer software RAID over hardware RAID, especially when powered by ZFS. See, the main problem is that the way the RAID gets set up is usually specific to that one brand of RAID controller. In other words: should your RAID controller break for whatever reason, you're more or less forced to get one of the same brand, and to make triple sure that it's actually compatible with your setup.

Note (small disclaimer): it is possible that many things have changed on the market in the meantime, but this has been my experience, and after a few nasty ordeals I gave up on hardware RAID entirely.

Yet with software RAID (ZFS) this doesn't matter. As long as your OS supports your controller(s), ZFS can use them to maintain/restore its RAID setup.

Maybe food for thought? :)
 
I would go with ZFS. Build the system up with two 10 TB drives first (mirrored). Just do a clean install. Once the system is booting, attach the old drive and copy any data from it (don't copy the OS or packages, just the data). Remove the old drive and add the rest of the 10 TB drives; you can easily add them to the existing pool as additional mirrors. Keep the old drive around and get an easy-to-use USB -> SATA interface. Use this interface to connect the drive via USB in case you forgot something.
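
A rough sketch of that grow-the-pool step, assuming the installer created a pool named zroot on the first mirrored pair and the remaining new disks show up as da3 through da8 (hypothetical names):
Code:
 # each 'zpool add ... mirror' attaches another mirrored pair (vdev) to the pool
 zpool add zroot mirror da3 da4
 zpool add zroot mirror da5 da6
 zpool add zroot mirror da7 da8
 zpool status zroot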
 
I followed the above advice and tried to clone my hard drive to a second drive. I can see an exact copy of the drive on the second drive, but when I take the primary drive out and try to boot the machine from the new copy, it doesn't detect a bootable device. How do you make an exact copy and make the backup drive bootable?
 
No, but I am not setting up a RAID right now. Right now I'm just trying to clone the original hard drive so I can take the old drive out, put the new drive in, and have it work as the original drive did. I am just trying to make a backup copy of the original boot drive before deciding to build a RAID system.
 
Are you using UFS or ZFS now? I think SirDice's advice above was related to ZFS and the creation of a RAID-Z, starting with a mirror vdev (which works like RAID-1).

If you just want to clone the old disk with UFS, it's easy:
Code:
 gpart backup da0 | gpart restore -F da2
To make your new disk bootable (on legacy hardware) you need to install a PMBR and a boot loader:
Code:
 gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da2
Use the -i option to tell gpart(8) which partition receives the boot loader.

If you are booting with UEFI: FreeBSD provides an EFI partition image as /boot/boot1.efifat. You can copy it to your EFI partition with dd(1).
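
For the UEFI case that could look something like this (a sketch only; the gpart add step is only needed if the copied partition table does not already contain an efi partition, and the da2p1 index is an assumption):
Code:
 gpart add -t efi -s 200M da2
 dd if=/boot/boot1.efifat of=/dev/da2p1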

But I would also recommend ZFS if you plan to build a RAID.
 
I am using UFS for now. I just want to back up my original hard drive and make a duplicate, then take the original out, put the new copy in, and test whether it boots the system and loads an exact copy of the original system with its files. Once I do this I'll know I have a good working backup.

After that I will work on making a RAID system using ZFS.

I am trying to use dump and restore to make an exact copy of an existing hard drive that uses the UFS file system. I need to get this working before I start converting my system to ZFS. My server uses the older boot system.
 
I think dump(8) is largely obsolete these days
It's not exactly obsolete, it still has its uses. But I personally never bother with backing up the OS because a complete, clean reinstall is typically faster than restoring a backup (a restore(8) can take several hours while a (re)install is done in less than 20 minutes). The only things that are actually worth backing up are data, not the OS or the applications (since those can easily be reinstalled).
 
So, how can I back up the drive and make it bootable? Not using RAID, just using IDE.
 
My server uses the older MBR system. Right now I am just trying to clone a UFS drive to make an exact copy. The drive is a bootable drive, and I am able to make an exact copy of both the MBR and the actual partition. The issue is that when I take the original drive out and swap it with the newly copied drive, it will not boot at all. The server boots to a black screen saying no bootable device was found, even though when I look at the drive I can see all the files copied over exactly. Any ideas what I can try? Are there any commands I can use to see why this drive isn't being detected as bootable?
 
Use gpart(8) to compare the two drives, like this: gpart show -p drive. Example:
Code:
tingo@kg-core1$ gpart show -p ada0
=>       34  250069613    ada0  GPT  (119G)
         34        128  ada0p1  freebsd-boot  (64K)
        162  119537664  ada0p2  freebsd-ufs  [bootme]  (57G)
  119537826    8388608  ada0p3  freebsd-swap  (4.0G)
  127926434  122143213  ada0p4  freebsd-ufs  [bootme]  (58G)
 
It should have freebsd-boot on both drives?
The whole point of a RAID setup is to allow the machine to continue to function if one of the drives fails. So if your ada0 breaks you'll want to be able to boot from ada1, for example. So both disks need to have the boot partition.

If it doesn't, how do you copy that over?
gpart backup / gpart restore can be used to copy the partition tables. The bootcode can be written with gpart bootcode, see gpart(8).
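
A hedged sketch of what that could look like for the cloned disk, once the partition table has been copied over; the da1 device name and the partition/slice indexes are assumptions, so double-check with gpart show -p before writing anything:
Code:
 # GPT disk: protective MBR on the disk, gptboot into the freebsd-boot partition
 gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da1

 # MBR disk: standard MBR on the disk, UFS boot blocks into the slice, slice marked active
 gpart bootcode -b /boot/mbr da1
 gpart bootcode -b /boot/boot da1s1
 gpart set -a active -i 1 da1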
 
I did gpart show -p device and I don't see freebsd-boot on either of the drives.

I see a freebsd-ufs on both drives, and a freebsd-swap on the original drive but not on the copy.

The original drive boots fine, and I do know there's boot code in the first 512 KB of the disk. I mean, the original drive has no issues booting.

It's just the clone or copy that doesn't boot at all.

I will try to use gpart, but I will need a tutorial or reading material to make sure I know how to use it without overwriting the original hard drive by mistake.
 
Then it's likely an MBR partitioning. GPT and MBR boot a little differently.
Yes, it does have an MBR, and I have copied it to the second drive too. I can see this.

The first 512 KB is the MBR; I used the dump and restore commands to back up that 512 KB and put it on the new drive.

Right now all I want to do is copy the main drive, make an exact clone, then turn the server off and start it again with the new drive, and have the new drive boot.
I just want to do this because I have never done it before. I have decided not to do a RAID on this server; instead my storage servers will be set up in a RAID, and future servers will be set up in a RAID.
I will do it this way until I learn exactly which files are important to back up, so that I can reinstall FreeBSD and just put back the config files for the server software.

I have been working on the server and website since 2008, and there are just too many important files that I cannot afford to lose.

Right now I cannot make an exact copy of the old drive and have it be bootable. I can clone it and have the exact files copied over, including the MBR record, but when I turn the server off, swap the old drive out for the newly cloned drive, and boot the system, it shows a black screen with text stating that no bootable device was found.

If I turn it off, put the old drive back, and leave the new clone in the second slot, the old drive boots the system fine. I have no idea why. I copied the first 512 KB and stored it on an external drive, then restored that image to the first 512 KB of the new drive.

Now, I thought maybe the hard drive wasn't being detected by the system. The issue with that theory is that when I boot off the old drive, the OS shows during boot that it detects all the hard drives, including the external hard drive. I can mount the new drive and the external drive and actually access the files, and the files on the new drive are identical to those on the old drive.

I just don't know if I have to do anything more than this. Does the drive have to be flagged as a bootable drive? I do remember that when I first installed FreeBSD on the old drive years ago, I was asked whether this was a primary or a secondary drive when making the partition. I have no clue whether there are flags that get set to indicate the drive is bootable.

The server uses old hardware that was bought in 2008. I have now bought new servers and am in the process of building a data center. My business grew and I need to design the system to be scalable when needed. So right now I just need to copy the old drive exactly and make it bootable. It uses the old scheme and has an MBR. I am guessing that the drive needs to have a flag to indicate it's a bootable (primary) drive.
 
I wanted to mention that what you want to do may not be possible with disk cloning via dd.
>>Clone an MBR disk to a 10 TB drive<<
MBR maxes out at 2 TB, I believe. That is the impetus for GPT, which works on drives over 2 TB.

Because GPT needs a separate freebsd-boot partition, this becomes more work.
So you would need to build the GPT/PMBR structure on your 10 TB drive and then restore the files.
You would also have to point /etc/fstab to the GPT partition names (e.g. /dev/ada0p2) instead of the MBR partition/slice names.
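
To illustrate that last point, the /etc/fstab entries would change from MBR slice names to GPT partition names; the device names and indexes below are hypothetical:
Code:
 # old MBR-style entries
 # /dev/ada0s1a   /       ufs     rw      1       1
 # /dev/ada0s1b   none    swap    sw      0       0
 # new GPT-style entries
 /dev/ada0p2      /       ufs     rw      1       1
 /dev/ada0p3      none    swap    sw      0       0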

Similar questions:
https://forums.freebsd.org/threads/converting-mbr-to-gpt.64463/
https://forums.freebsd.org/threads/mbr-gpt.63257/
 
How do I copy over the boot code or the boot loader? I copied the MBR, but was told that I also need to dump and restore the boot code or the boot loader. How can I do this?
 
When I perform a ports update on my server I always first check the update process in a Hyper-V virtual machine. To do this I make a backup with dump(8), then use restore(8) into the virtual machine, where I test the update process while taking notes to see if anything breaks, and after that I perform the same update process on the actual server.

Here is the entire backup/restore process that works for me under UFS.

To make a snapshot of the live system (dump -L), the soft updates journal must be disabled in advance. To do this I first reboot the server into single user mode and disable the journal with
# tunefs -j disable /
Note that this will slow down fsck after an improper shutdown.

After the reboot you can check whether the journal is disabled using mount.

Before disabling:
/dev/da0p2 on / (ufs, local, journaled soft-updates)
After disabling the journal:
/dev/da0p2 on / (ufs, local, soft-updates)

After that you can proceed with the backup using dump.
For each server I keep a printed copy of the output of the following commands (I hope I never have to use them):
# gpart show
# more /etc/fstab
# dmesg

The backup of the server must be:
1) full, 2) on removable media, 3) offline, 4) offsite, 5) tested with an actual restore...

I don't need to back up directories that I can easily restore from the Internet, or others that I don't need in my backup set, so I exclude them using the nodump flag:

# chflags nodump /usr/ports

The Ports Collection can easily be recreated after a restore using portsnap(8).
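
A quick way to confirm the flag is actually set (a sketch; note it is the -h0 in the dump command further down that makes dump honor nodump even on a level 0 dump):
Code:
 chflags nodump /usr/ports
 ls -lod /usr/ports    # the flags column should show 'nodump'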

My partition layout is the following:
Code:
% gpart show
=>        40  1953519536  raid/r0  GPT  (932G)
          40        1024        1  freebsd-boot  (512K)
        1064  1946156024        2  freebsd-ufs  (928G)
  1946157088     7362487        3  freebsd-swap  (3.5G)
  1953519575           1           - free -  (512B)

I'm backing up my system over SSH to a remote server using the following command
(I need only r0p2, which is the freebsd-ufs partition):

# dump -C16 -b64 -0Lau -h0 -f - / | gzip | ssh -p 22 user@backups.mydomain.com dd of=/home/user/03042019.dump.gz

If you have a locally attached hard disk for backups, you can mount it under /backups and use
dump -C16 -b64 -0uanL -h0 -f - / | /usr/bin/gzip > /backups/backupdate.dump.gz

The restore process

Depending on the boot mode, legacy BIOS (which under Hyper-V is Generation 1) or the newer UEFI boot (Hyper-V Generation 2), I create a different partition layout depending on which generation I decide to restore under.

First I download the backupdate.dump.gz file to the Hyper-V server using WinSCP, then create an ISO (UDF DVD) so I can easily mount it as a virtual CD-ROM drive and access it inside the virtual machine.

The restore for Hyper-V looks like this:
Create a Generation 2 virtual machine with a big enough VHDX disk and add two virtual CD-ROM drives (one for the FreeBSD bootonly ISO and a second for the backup ISO).

Disable Secure Boot for the virtual machine. Then start the virtual machine, boot from FreeBSD-12.0-RELEASE-amd64-bootonly.iso, and select Live CD.

Create the GPT disk scheme using
gpart create -s gpt da0

For Hyper-V Gen2 UEFI boot (skip those two steps if you are using BIOS boot)

Add the EFI partition, which I recommend making 200 MB to save yourself trouble if the EFI boot code grows in the future or other tools need to live in the EFI partition. The minimum size can be 800K, which is the current size of boot1.efifat.

gpart add -t efi -s 200M da0

Copy /boot/boot1.efifat into the newly created EFI partition using

dd if=/boot/boot1.efifat of=/dev/da0p1

For Hyper-V Gen1 BIOS boot (skip those two steps if you are using UEFI)

Add the boot partition, which can't be more than 512K due to a limitation of the current boot code.

gpart add -b 40 -s 512K -t freebsd-boot da0

Install the protective MBR and copy the gptboot bootstrap code into the newly created freebsd-boot partition using

gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0

Then your system is ready to boot, but first we need to restore the root partition from the backup. I need one big partition into which I can restore the backup, and a second partition where the restore process can store its temporary files.

gpart add -t freebsd-ufs -s 40G da0
gpart add -t freebsd-ufs -s 5G da0
gpart add -t freebsd-swap -s 2G da0

Create new file systems for the root and temp partitions:
gpart show
newfs -U -L FreeBSD /dev/da0p2
newfs -U /dev/da0p3

Because I'm using the Live CD, I can't mount its root partition read-write, and I need more than one mount point, so I will create two empty directories under /tmp which will act as my mount points.

cd /tmp
mkdir root
mkdir temp

mount -o rw /dev/da0p2 /tmp/root
mount -o rw /dev/da0p3 /tmp/temp

mount_cd9660 /dev/cd1 /media
or, if you are using a UDF DVD, use
mount_udf /dev/cd1 /media

Set TMPDIR to a location where restore has write permissions.
For csh use:
setenv TMPDIR /tmp/temp

or for other shells:
TMPDIR=/tmp/temp
export TMPDIR

Then we can proceed with the restore:
cd /tmp/root
zcat /media/backupdate.dump.gz | restore -rvf -
or, if you want an interactive restore, use
zcat /media/backupdate.dump.gz | restore -ivf -

After the restore you can remove the restoresymtable file:
rm restoresymtable

Before the reboot, edit your fstab, which must reflect the new drive (for example from ada0 to da0) and/or the new partition layout if you created more than one partition (for example, our swap partition is now on da0p4). Then edit rc.conf if you want to disable some services or change the network configuration before starting the virtual machine (like disabling the mail queue if there are some pending mails).
ee /tmp/root/etc/fstab
ee /tmp/root/etc/rc.conf

Then shut down and remove the virtual DVDs, or change the boot order to boot from the HDD.

Edit:
You can also move a live system on the fly. Attach your second shiny new big hard disk to the server, create the required boot partitions (efi for UEFI boot or freebsd-boot for BIOS boot), create the new partition layout with the desired new sizes, run newfs on it and mount it under /mnt; then you can restore the live system onto it.

Let's say your new disk is /dev/da1, it boots in legacy BIOS mode and needs freebsd-boot, and your current disk is /dev/da0 with root at index 2 (/dev/da0p2):

Code:
 gpart create -s gpt da1
 gpart add -b 40 -s 512K -t freebsd-boot da1
 gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da1
 gpart add -t freebsd-ufs -s 900G da1
 gpart add -t freebsd-swap -s 4G da1
 newfs -U /dev/da1p2
 mount /dev/da1p2 /mnt
 cd /mnt
 dump -0Lauf - /dev/da0p2 | restore -rf -
 