UFS Cloning an SSD to multiple HDD targets using dd

I haven't tried this, but I don't see why it shouldn't work. tee(1) looks the same in FreeBSD and Linux (it is specified by POSIX), redirection of stdout ('>') has been a well-established standard for many, many years, and pipes are standard too. sh(1) doesn't mention any restrictions on redirection that I could find.
The interesting part will be whether you hit any bottlenecks, or get good speed out of this.
Will you be doing this on one machine? Or across a network? If a network is involved, misc/buffer might be of help.
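To make the tee approach concrete, here is a minimal sketch of the fan-out, demonstrated with image files so it can be tried safely. For real cloning you would substitute device nodes (e.g. /dev/ada1, /dev/ada2 — hypothetical names, verify yours before writing anywhere):

```shell
#!/bin/sh
# Stand-in "source disk": a small random image file.
dd if=/dev/urandom of=/tmp/source.img bs=1k count=64 2>/dev/null

# tee copies its stdin to every file argument AND to stdout, so the
# final shell redirection catches the last copy.
dd if=/tmp/source.img bs=64k 2>/dev/null \
    | tee /tmp/clone1.img /tmp/clone2.img > /tmp/clone3.img

# Verify all three clones match the source.
cmp -s /tmp/source.img /tmp/clone1.img && cmp -s /tmp/source.img /tmp/clone2.img \
    && cmp -s /tmp/source.img /tmp/clone3.img && echo "clones verified"
```

With real disks the same one-liner becomes `dd if=/dev/ada0 bs=1m | tee /dev/ada1 /dev/ada2 > /dev/ada3`, at the cost of one reader feeding all writers through a single pipeline.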
 
Hi, I have prepared an SSD (500 GB) with a FreeBSD 10.2 install and am looking to clone this installation to multiple same-sized HDDs (10 at a time).
Bad idea! Do you want to clone ssh host keys and have 10 servers with the same ssh host key? That is what I thought :) So what is the solution?

  1. Do a custom installation on each of the 10 servers (very time consuming, but once upon a time a system admin was in charge of no more than 10 servers, so it was doable).
  2. Create a custom installation image. (You will find yourself updating those custom images all the time and ending up with servers that are out of sync.)
  3. Use PXE boot + Kickstart with post-installation scripts on RedHat. This is a must-have skill for any wannabe system admin. OpenBSD has essentially copied that feature from RedHat: PXE boot + site.tgz + autoinstall. It was even covered in one of the BSD Now episodes. Also carefully read section 4 of the OpenBSD FAQ. What about FreeBSD? I think I read once that the PC-BSD/TrueOS installer has this feature, but I have not used it, and I am not sure about the vanilla FreeBSD installer. I have a total of 5 FreeBSD servers and they are all custom done. I would really like to hear from people who use FreeBSD more than I do how this is done properly (including automatic installation of root on a ZFS mirror). I know that you like DF. Don't bother looking for something like this in DF; they just fixed the partition tool so that UFS and HAMMER installations look the same. It is a tiny project.
  4. Finally, even the previous method is inefficient in some sense, as it does not address maintenance. For that you will need orchestration software. Historically the first one, and the most scientific, is sysutils/cfengine. Personally I opted for something simpler with a gentler learning curve: sysutils/ansible. They just published a nice book, Ansible: Up and Running. I would not bother with Puppet or Chef unless your organization already uses those orchestration tools.
 
All disks will be SSDs? SSDs make things funny with their speed. If you have < master writer | writer | ... > last_copy, there's a lot of copying in and out between processes and the system. I suspect it won't be faster than parallel dd's. Please report how you did it, and time it!

Juha
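The parallel-dd alternative mentioned above can be sketched like this: one independent dd per target, all re-reading the same source (the SSD's read speed plus the buffer cache make the repeated reads cheap). Demonstrated here with files; for real disks substitute device nodes (hypothetical names — verify before writing anywhere):

```shell
#!/bin/sh
# Stand-in "master disk": a small random image file.
dd if=/dev/urandom of=/tmp/master.img bs=1k count=32 2>/dev/null

# One background dd per target; each runs independently of the others.
for tgt in /tmp/hdd1.img /tmp/hdd2.img /tmp/hdd3.img; do
    dd if=/tmp/master.img of="$tgt" bs=64k 2>/dev/null &
done
wait    # block until every background dd has finished

echo "parallel copies done"
```

Unlike the tee pipeline, each copy proceeds at its own disk's pace, so one slow HDD does not throttle the others.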
 
Bad idea! Do you want to clone ssh host keys and have 10 servers with the same ssh host key? That is what I thought :) So what is the solution?

  1. Do a custom installation on each of the 10 servers (very time consuming, but once upon a time a system admin was in charge of no more than 10 servers, so it was doable).
  2. Create a custom installation image. (You will find yourself updating those custom images all the time and ending up with servers that are out of sync.)
  3. Use PXE boot + Kickstart with post-installation scripts on RedHat. This is a must-have skill for any wannabe system admin. OpenBSD has essentially copied that feature from RedHat: PXE boot + site.tgz + autoinstall. It was even covered in one of the BSD Now episodes. Also carefully read section 4 of the OpenBSD FAQ. What about FreeBSD? I think I read once that the PC-BSD/TrueOS installer has this feature, but I have not used it, and I am not sure about the vanilla FreeBSD installer. I have a total of 5 FreeBSD servers and they are all custom done. I would really like to hear from people who use FreeBSD more than I do how this is done properly (including automatic installation of root on a ZFS mirror). I know that you like DF. Don't bother looking for something like this in DF; they just fixed the partition tool so that UFS and HAMMER installations look the same. It is a tiny project.
  4. Finally, even the previous method is inefficient in some sense, as it does not address maintenance. For that you will need orchestration software. Historically the first one, and the most scientific, is sysutils/cfengine. Personally I opted for something simpler with a gentler learning curve: sysutils/ansible. They just published a nice book, Ansible: Up and Running. I would not bother with Puppet or Chef unless your organization already uses those orchestration tools.

Extremely happy to have all these responses! Thanks!

Concerning the above: these would be very basic installs, i.e. a base system, a GUI, and some productivity tools. No SSH keys. I would be using the SSD essentially as the source disk so that I can leverage its read speed. Concerning the number of machines, it would be 10 to start with, and more machines in various locations that may or may not have networks or Internet access. Thus I really need an "offline" method of doing this, and cloning disks in a tower, then just delivering the disks to put in the machines, seems like the most expedient approach. The maintenance needs would be minimal once they are set up. When a disk dies in a machine, it just gets swapped out for a clone and rebooted.
 
All disks will be SSDs? SSDs make things funny with their speed. If you have < master writer | writer | ... > last_copy, there's a lot of copying in and out between processes and the system. I suspect it won't be faster than parallel dd's. Please report how you did it, and time it!

Juha

Hi! Only the source disk would be an SSD. I definitely will be reporting back.
 
I was wrong, both ways: system memory is still much, much faster than an SSD. Testing shows my humble SSD does 250 MB/s, while zeroing a 256 kB buffer a million times gives 20 GB/s.

Juha:oops:

Edit: but pipes do go a bit slower; a million 256 kB writes into | cat > /dev/null run at 2 GB/s.
 
I once toyed with the idea of having a USB boot image which would then treat the disk as a torrent, installing the system over the network from all the other systems currently being installed. That way a lot of machines could be transformed from prebuilt instruments of torment (say, $WINDOWS_INSTALL_BY_VENDOR) into what you wanted them to be without too much screwing around (pun intended).
 
Just to add to the discussion, I also commented in another thread about using ZFS and installing to a mirror of disks and then simply detaching the mirrors into separate bootable pools to use in the different machines. This would actually be faster as well as simpler (I've already tried it with a few disks), but I'd like to have an additional option just in case.
 
I know that you like DF. Don't bother looking for something like this in DF; they just fixed the partition tool so that UFS and HAMMER installations look the same. It is a tiny project.

I tried it, actually. One just needs to change /dev/serno/xx to /dev/da0 so that the newly cloned disks don't go looking for the serial number of the original source disk. Other than that, clones from a dd'd source drive work fine.
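A quick way to audit a clone for this before its first boot is to mount its root and search for leftover serial-number device paths. A sketch, assuming the clone's root filesystem is mounted at /mnt (a hypothetical mountpoint):

```shell
#!/bin/sh
# List fstab entries that still reference the source disk's
# serial-number path; each hit must be edited by hand to point at the
# generic device node (e.g. /dev/da0), as described above.
grep -n '/dev/serno/' /mnt/etc/fstab
```

grep exits non-zero when nothing matches, so a silent run means the clone's fstab is already free of serial-number references.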
 
Some numbers from writing 256 GB through | cat | ...

bzero       13 sec    20 GB/s  (a million 256 kB buffers)
1 pipe     128 sec     2 GB/s  (a million 256 kB writes)
2 pipes    200 sec   1.2 GB/s
5 pipes    362 sec   707 MB/s
7 pipes    442 sec   579 MB/s
10 pipes   628 sec   408 MB/s

A lowly laptop, but it shows the trend.
Juha
 
These would be very basic installs, i.e. a base system, a GUI and some productivity tools. No SSH keys.
So you are telling me that you are not going to ssh to those machines (an ssh server must have a unique host key, which on OpenBSD is generated on first boot). OK then. By the way, I fixed the "bed" spelling mistake; please change it to "bad idea" in your quote.
 
Just to add to the discussion, I also commented in another thread about using ZFS and installing to a mirror of disks and then simply detaching the mirrors into separate bootable pools to use in the different machines. This would actually be faster as well as simpler (I've already tried it with a few disks), but I'd like to have an additional option just in case.
You can mirror HAMMER to a slave disk, pull the slave disk out, and promote it to master. The only problem is that you would be missing the boot partition /boot, which has to be UFS.
 
So you are telling me that you are not going to ssh to those machines. OK then. By the way, I fixed the "bed" spelling mistake; please change it to "bad idea" in your quote.

No, no need to. It's a one-time, all-in-one configuration designed for workstations.
 
You can mirror HAMMER to a slave disk, pull the slave disk out, and promote it to master. The only problem is that you would be missing the boot partition /boot, which has to be UFS.

Yes, it requires more work, and simpler solutions are available. With the HAMMER slave option there is a lot of behind-the-scenes work that needs to be done.
 