HOWTO: Modern FreeBSD Install RELOADED (vermaden way)

All these years sysinstall(8) helped us install FreeBSD with most of the options we needed, but with new filesystems/features like GJournal/ZFS/GELI/gmirror/gstripe it is no longer up to the task, because it only supports installing onto a UFS filesystem with SoftUpdates turned ON or OFF.

In this guide you will learn how to set up a FreeBSD installation in a simple yet flexible layout based on read-only UFS (without SoftUpdates) for the 'base system' [1], some SWAP space, /tmp mounted on SWAP, and all the other filesystems (/var /usr ...) mounted on ZFS. It will not require rebuilding anything, just a simple setup on plain MBR partitions. I should also mention that we will be using AHCI mode for the disks. I also provide two versions: one for a system with a single hard disk, and one with three of them for a redundant setup.

Here is the layout of the system with 1 harddisk:
Code:
MBR SLICE 1 |    / | 512 MB | UFS/read-only 
            | SWAP |   2 GB |
            | /tmp | 512 MB | mounted on SWAP with[B] mdmfs(8)[/B]
------------+------+---------------------------------------
MBR SLICE 2 | /usr |   REST | ZFS dataset
            | /var |   REST | ZFS dataset

... and here is the layout of a single disk in the system with 3 disks:
Code:
MBR SLICE 1 |    / | 512 MB | UFS/read-only 
------------+------+--------+------------------------------
MBR SLICE 2 | SWAP |   1 GB |
            | /tmp | 512 MB | mounted on SWAP with [B]mdmfs(8)[/B]
------------+------+--------+------------------------------
MBR SLICE 3 | /usr |   REST | ZFS dataset
            | /var |   REST | ZFS dataset

Redundancy planning for system with 3 disks:
Code:
 [ [B]DISK0[/B] ]           [ [B]DISK1[/B] ]           [ [B]DISK2[/B] ]
 [   /   ] [color="Silver"]< RAID1 >[/color] [   /   ] [color="Silver"]< RAID1 >[/color] [   /   ]
 [ SWAP0 ]           [ SWAP1 ]           [ SWAP2 ]
 [   Z   ] [color="Silver"]< RAID5 >[/color] [   F   ] [color="Silver"]< RAID5 >[/color] [   S   ]

The FreeBSD core, the 'base system' [1], should remain almost unchanged/untouched on a daily basis, while you can mess with all the other filesystems; this ensures that when things go wrong, you will be able to fix anything while still having a working 'base system' [1].

You will need a *-dvd-* disc or a *-memstick-* image for this installation; *-disk1-* will not do, since it does not contain the livefs system.

Here is the procedure, described as simply as possible.

1.0. I assume that our disk for the installation is /dev/ad0 (/dev/ad0 /dev/ad1 /dev/ad2 for the system with 3 disks)

1.1. Boot the *-dvd-* from a DVD disc or the *-memstick-* image from a pendrive
Code:
Country Selection --> United States
Fixit --> CDROM/DVD ([file]*-dvd-*[/file]) or USB ([file]*-memstick-*[/file])

1.2. Create your temporary working environment
Code:
fixit# [color="Blue"]/mnt2/bin/csh[/color]
# [color="blue"]setenv PATH /mnt2/rescue:/mnt2/usr/bin:/mnt2/sbin[/color]
# [color="blue"]set filec[/color]
# [color="blue"]set autolist[/color]
# [color="blue"]set nobeep[/color]

1.3. Load needed modules
Code:
fixit# [color="#0000ff"]kldload /mnt2/boot/kernel/geom_mbr.ko[/color]
fixit# [color="blue"]kldload /mnt2/boot/kernel/opensolaris.ko[/color]
fixit# [color="blue"]kldload /mnt2/boot/kernel/zfs.ko[/color]

1.4. Create/mount needed filesystems
Code:
[B]DISKS: 3[/B]                                               | [B]DISKS: 1[/B]
# [color="Blue"]cat > part << __EOF__[/color]                                | # [color="Red"]cat > part << __EOF__[/color]
[color="Blue"]p 1 165 63  512M[/color]                                       | [color="Red"]p 1 165 63  2560M[/color]
[color="Blue"]p 2 165  * 1024M[/color]                                       | [color="Red"]p 2 159  *     *[/color]
[color="Blue"]p 3 159  *     *[/color]                                       | [color="Red"]p 3   0  0     0[/color]
[color="Blue"]p 4   0  0     0[/color]                                       | [color="Red"]p 4   0  0     0[/color]
[color="Blue"]a 1[/color]                                                    | [color="Red"]a 1[/color]
[color="Blue"]__EOF__[/color]                                                | [color="Red"]__EOF__[/color]
                                                       |
# [color="Blue"]fdisk -f part ad0[/color]                                    | # [color="Red"]fdisk -f part ad0[/color]
# [color="Blue"]fdisk -f part ad1[/color]                                    |
# [color="Blue"]fdisk -f part ad2[/color]                                    |
                                                       |
# [color="Blue"]kldload /mnt2/boot/kernel/geom_mirror.ko[/color]             |
# [color="Blue"]gmirror label  rootfs ad0s1[/color]                          |
# [color="Blue"]gmirror insert rootfs ad1s1[/color]                          |
# [color="Blue"]gmirror insert rootfs ad2s1[/color]                          |
                                                       |
# [color="Blue"]bsdlabel -B -w /dev/mirror/rootfs[/color]                    | # [color="Red"]cat > label << __EOF__[/color]
                                                       | [color="Red"]# /dev/ad0s1:[/color]
                                                       | [color="Red"]8 partitions:[/color]
                                                       | [color="Red"]  a: 512m  0 4.2BSD[/color]
                                                       | [color="Red"]  b: *     * swap[/color]
                                                       | [color="Red"]__EOF__[/color]
                                                       |
                                                       | # [color="Red"]bsdlabel -B -w ad0s1[/color]
                                                       | # [color="Red"]bsdlabel       ad0s1 | tail -1 >> label[/color]
                                                       | # [color="Red"]bsdlabel -R    ad0s1 label[/color]
                                                       |
# [color="Blue"]glabel label swap0 ad0s2[/color]                             | # [color="Red"]glabel label rootfs ad0s1a[/color]
# [color="Blue"]glabel label swap1 ad1s2[/color]                             | # [color="Red"]glabel label swap   ad0s1b[/color]
# [color="Blue"]glabel label swap2 ad2s2[/color]                             |
                                                       |
# [color="Blue"]newfs /dev/mirror/rootfsa[/color]                            | # [color="Red"]newfs /dev/label/rootfs[/color]
# [color="Blue"]zpool create basefs raidz ad0s3 ad1s3 ad2s3[/color]          | # [color="Red"]zpool create basefs ad0s2[/color]
# [color="Blue"]zfs create basefs/usr[/color]                                | # [color="Red"]zfs create basefs/usr[/color]
# [color="Blue"]zfs create basefs/var[/color]                                | # [color="Red"]zfs create basefs/var[/color]
# [color="Blue"]mkdir /NEWROOT[/color]                                       | # [color="Red"]mkdir /NEWROOT[/color]
# [color="Blue"]mount /dev/mirror/rootfsa /NEWROOT[/color]                   | # [color="Red"]mount /dev/label/rootfs /NEWROOT[/color]
# [color="Blue"]zfs set mountpoint=/NEWROOT/usr basefs/usr[/color]           | # [color="Red"]zfs set mountpoint=/NEWROOT/usr basefs/usr[/color]
# [color="Blue"]zfs set mountpoint=/NEWROOT/var basefs/var[/color]           | # [color="Red"]zfs set mountpoint=/NEWROOT/var basefs/var[/color]
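Before installing the sets, it is worth checking that everything from this step came up as expected. A quick sanity check could look like this (all standard commands; the exact output will of course differ between setups):
Code:
# zpool status basefs   (the pool and all its devices should be ONLINE)
# zfs list              (basefs/usr and basefs/var with /NEWROOT mountpoints)
# glabel status         (the label/* providers you created should be listed)
# df -h                 (the UFS root should be mounted on /NEWROOT)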

1.5. Actually install needed FreeBSD sets
Code:
# [color="blue"]setenv DESTDIR /NEWROOT[/color]
# [color="blue"]cd /dist/8.0-RELEASE[/color]

# [color="blue"]cd base[/color]
# [color="blue"]./install.sh[/color] (answer [I]'y'[/I] here)
# [color="blue"]cd ..[/color]

# [color="blue"]cd manpages[/color]
# [color="blue"]./install.sh[/color]
# [color="blue"]cd ..[/color]

# [color="Blue"]cd kernels[/color]
# [color="blue"]./install.sh generic[/color]
# [color="Blue"]cd ..[/color]

# [color="Blue"]cd /NEWROOT/boot[/color]
# [color="Blue"]rm -r kernel[/color]
# [color="Blue"]mv GENERIC kernel[/color]
 
1.6. Provide basic configuration needed to boot new system
1.6.1.
Code:
[B]DISKS: 3[/B]                                          | [B]DISKS: 1[/B]
# [color="Blue"]cat > /NEWROOT/etc/fstab << __EOF__[/color]             | # [color="Red"]cat > /NEWROOT/etc/fstab << __EOF__[/color]
[color="Blue"]#dev                #mount #fs  #opts #dump #pass[/color] | [color="Red"]#dev              #mount #fs  #opts #dump #pass[/color]
[color="Blue"]/dev/mirror/rootfsa /      ufs  rw    1     1[/color]     | [color="Red"]/dev/label/rootfs /      ufs  rw    1     1[/color]
[color="Blue"]/dev/label/swap0    none   swap sw    0     0[/color]     | [color="Red"]/dev/label/swap   none   swap sw    0     0[/color]
[color="Blue"]/dev/label/swap1    none   swap sw    0     0[/color]     | [color="Red"]__EOF__[/color]
[color="Blue"]/dev/label/swap2    none   swap sw    0     0[/color]     |
[color="Blue"]__EOF__[/color]                                           |
                                                  |
# [color="Blue"]cat > /NEWROOT/boot/loader.conf << __EOF__[/color]      | # [color="Red"]cat > /NEWROOT/boot/loader.conf << __EOF__[/color]
[color="Blue"]zfs_load="YES"[/color]                                    | [color="Red"]zfs_load="YES"[/color]
[color="Blue"]ahci_load="YES"[/color]                                   | [color="Red"]ahci_load="YES"[/color]
[color="Blue"]geom_mirror_load="YES"[/color]                            | [color="Red"]__EOF__[/color]
[color="Blue"]__EOF__[/color]                                           |

1.6.2.
Code:
# [color="Blue"]cat > /NEWROOT/etc/rc.conf << __EOF__
zfs_enable="YES"
__EOF__[/color]

1.7. Unmount filesystems and reboot
Code:
# [color="Blue"]cd /[/color]
# [color="Blue"]zfs umount -a[/color]
# [color="Blue"]umount /NEWROOT[/color]
# [color="Blue"]zfs set mountpoint=/usr basefs/usr[/color]
# [color="Blue"]zfs set mountpoint=/var basefs/var[/color]
# [color="Blue"]zpool export basefs[/color]
# [color="Blue"]reboot[/color]

Now let's talk about the things you will need to do after the reboot.

2.0. At the boot loader select booting into single user mode

[font="Courier New"]4. Boot FreeBSD in single user mode[/font]

Code:
Enter full pathname of shell or RETURN for /bin/sh: [color="Blue"]/bin/csh[/color]
% [color="Green"]/rescue/mount -w /[/color]
% [color="Green"]/rescue/zpool import -D || /rescue/zpool import -f basefs[/color]
% [color="Green"]exit[/color]

2.1. Login as root without password
Code:
login: [color="Green"]root[/color]
password: [color="Green"](just hit ENTER)[/color]

2.2. Set root password
Code:
# [color="Green"]passwd[/color]

2.3. Set hostname
Code:
# [color="Green"]echo hostname=\"HOSTNAME\" >> /etc/rc.conf[/color]

2.4. Set timezone and date/time
Code:
# [color="Green"]tzsetup[/color]
# [color="Green"]date 201001142240[/color]

2.5. Tune the ZFS filesystem (only for i386)
Code:
# [color="Green"]cat >> /boot/loader.conf << __EOF__
vfs.zfs.prefetch_disable=0      # enable prefetch
vfs.zfs.arc_max=134217728       # 128 MB
vm.kmem_size=536870912          # 512 MB
vm.kmem_size_max=536870912      # 512 MB
vfs.zfs.vdev.cache.size=8388608 #   8 MB
__EOF__[/color]

2.6. Mount /tmp on SWAP
Code:
# [color="Green"]cat >> /etc/rc.conf << __EOF__
tmpmfs="YES"
tmpsize="512m"
tmpmfs_flags="-m 0 -o async,noatime -S -p 1777"
__EOF__[/color]
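The tmpmfs knobs above are handled by the rc(8) startup scripts, which call mdmfs(8) for you. After the next reboot you can confirm that /tmp really lives on a swap-backed memory disk, for example:
Code:
# df -h /tmp        (should show a /dev/mdN memory disk of about 512 MB)
# mount | grep md   (the options should include async and noatime)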

2.7. Move termcap into /etc (by default /etc/termcap is just a link into /usr, which is useless after a crash when /usr cannot be mounted)
Code:
# [color="Green"]rm /etc/termcap[/color]
# [color="Green"]mv /usr/share/misc/termcap /etc[/color]
# [color="Green"]ln -s /etc/termcap /usr/share/misc/termcap[/color]

2.8. Add latest security patches
Code:
# [color="Green"]freebsd-update fetch[/color]
# [color="Green"]freebsd-update install[/color]

2.9. Make all changes to the configuration in /etc, then set / to be mounted read-only in /etc/fstab
Code:
[B]DISKS: 3[/B]                                           | [B]DISKS: 1[/B]
 #dev                #mount #fs  #opts #dump #pass |  #dev              #mount #fs  #opts #dump #pass
[color="Lime"]+/dev/mirror/rootfsa /      ufs  ro    1     1[/color]     | [color="Lime"]+/dev/label/rootfs /      ufs  ro    1     1[/color]
[color="Red"]-/dev/mirror/rootfsa /      ufs  rw    1     1[/color]     | [color="Red"]-/dev/label/rootfs /      ufs  rw    1     1[/color]
 /dev/label/swap0    none   swap sw    0     0     |  /dev/label/swap   none   swap sw    0     0
 /dev/label/swap1    none   swap sw    0     0     |
 /dev/label/swap2    none   swap sw    0     0     |
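Since / is now read-only, any later change in /etc requires remounting it read-write first; the usual mount(8) update flags do the trick:
Code:
# mount -uw /   (remount / read-write)
  ... make your changes in /etc ...
# mount -ur /   (back to read-only when you are done)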

2.10. Reboot and enjoy modern install of FreeBSD system
Code:
# [color="Green"]shutdown -r now[/color]
 
To summarise, this setup provides us with these things:
-- a bulletproof 'base system' [1] on UFS (w/o SU) mounted read-only
-- the /tmp filesystem mounted on SWAP
-- use of the new AHCI mode in FreeBSD
-- flexibility for all the other filesystems on ZFS
-- a fully working environment after a crash (/etc/termcap)
-- disks/filesystems mounted by label, so possible device name changes are harmless
-- RAID1 for / and RAID5 for all the other filesystems on the setup with 3 disks

[1] In the context of this setup I call 'base system' the most important core of FreeBSD: the / filesystem and its binaries/libraries/configuration
(thanks to phoenix for reminding me what the REAL base system is/means) ;)

CHANGELOG

1.0 / 2010-01-14 / initial version
1.1 / 2010-01-15 / simplified PATH
+fixit# setenv PATH /mnt2/rescue:/mnt2/usr/bin
-fixit# setenv PATH /mnt2/bin:/mnt2/sbin:/mnt2/usr/bin:/mnt2/usr/sbin
1.2 / 2010-01-15 / added link for termcap (instead of duplicate on /etc and /usr) [2.6.]
.# rm /etc/termcap
+# mv /usr/share/misc/termcap /etc
+# ln -s /etc/termcap /usr/share/misc/termcap

-# cp /usr/share/misc/termcap /etc
1.3 / 2010-01-21 / removed unneeded mount commands [2.0.]
-# zfs mount basefs/var
-# zfs mount basefs/usr
1.4 / 2010-03-08 / added setup for 3 disks + cleanup
too much to fit here, we can as well call this new version RELOADED ;)

MIRROR THREAD: http://daemonforums.org/showthread.php?t=4200
POLISH VERSION: http://bsdguru.org/dyskusja/viewtopic.php?t=19392

ADDED: 2010/10/21


After rethinking the setup from my HOWTO, and after phoenix's thoughts, I currently use the setup below for most FreeBSD installations that include ZFS.

LOGICAL SETUP

Code:
[SIZE="3"]UFS 512m /           ro
ZFS *    /home       rw | atime=off
RAM 128m /tmp        rw | async
UFS *    /usr        ro | softupdates (mounted r/w only for packages updates)
ZFS *    /usr/obj    rw | atime=off | checksum=off
ZFS *    /usr/ports  rw | atime=off
ZFS *    /usr/src    rw | atime=off
ZFS *    /var        rw
UFS 128m /var/db/pkg ro | softupdates (mounted r/w only for packages updates)[/SIZE]

PHYSICAL SETUP (LAPTOP w/ 1 DISK)

Code:
[SIZE="3"]p1 8g disk0s1a 512m UFS /           newfs -m 1    /dev/label/root
      disk0s1e 128m UFS /var/db/pkg newfs -m 1 -U /dev/label/pkg
      disk0s1f    * UFS /usr        newfs -m 1 -U /dev/label/usr

p2 *g disk0s2  ZFS/home             zfs create -o mountpoint=/home      pool/home
               ZFS/var              zfs create -o mountpoint=/var       pool/var
               ZFS/usr              zfs create -o mountpoint=none       pool/usr
               ZFS/usr/src          zfs create -o mountpoint=/usr/src   pool/usr/src
               ZFS/usr/obj          zfs create -o mountpoint=/usr/obj   pool/usr/obj
               ZFS/usr/ports        zfs create -o mountpoint=/usr/ports pool/usr/ports

               (if You need SWAP, omit on CF/Pendrive/SSD disks)
               ZFS/SWAP             zfs create -V 2g                    pool/swap

RAM/SWAP 128m  /tmp                 tmpmfs=YES --> /etc/rc.conf[/SIZE]
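Note that the 2 GB zvol above is not used as swap automatically; it still has to be activated with swapon(8), and added to /etc/fstab to survive reboots. A sketch, assuming the pool is really named pool as above:
Code:
# swapon /dev/zvol/pool/swap
# echo "/dev/zvol/pool/swap none swap sw 0 0" >> /etc/fstab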

PHYSICAL SETUP (CF + DISKS)

Code:
[SIZE="3"]8g CF    disk0s1a 512m UFS /           newfs -m 1    /dev/label/root
         disk0s1e 128m UFS /var/db/pkg newfs -m 1 -U /dev/label/pkg
         disk0s1f    * UFS /usr        newfs -m 1 -U /dev/label/usr

*g ZFS   ZFS/home                      zfs create -o mountpoint=/home      pool/home
         ZFS/var                       zfs create -o mountpoint=/var       pool/var
         ZFS/usr                       zfs create -o mountpoint=none       pool/usr
         ZFS/usr/src                   zfs create -o mountpoint=/usr/src   pool/usr/src
         ZFS/usr/obj                   zfs create -o mountpoint=/usr/obj   pool/usr/obj
         ZFS/usr/ports                 zfs create -o mountpoint=/usr/ports pool/usr/ports

         (if You need SWAP)
         ZFS/SWAP                      zfs create -V 2g                    pool/swap

128M RAM /tmp                          tmpmfs=YES --> /etc/rc.conf[/SIZE]

Of course for serious storage/backup servers it would be 'nice' to have that CF (or pendrive) mirrored via GEOM/mirror.
 
hmm, are you able to boot into single user mode with ZFS?
I can't for some reason, maybe because my HDDs are encrypted. But the .eli (decrypted) devices are there, weird
 
@killasmurf86

/ is on UFS (with bsdlabel), so there is no problem booting into single user mode. I haven't played with an encrypted / to check here; maybe I will in some free time in VirtualBox.
 
Couple of questions about implementation and migration

First off, thanks to Vermaden for posting this! It's a very slick way to take advantage of the best filesystems that FreeBSD offers. I'm considering migrating a home server to 8.0, and this seems like a great setup for me. I've got a couple of questions before I give it a try, though.

First, would including a fourth disk be as simple as it looks? I've got 7.2's ZFS spanning 3 disks at the moment, but I've got a fourth sitting around and figure that it might as well be in the server.

Second, do you have any recommendations for maintaining data integrity during the move? I've got external HDD's that can hold all of my stuff, but they're just FAT32 and lack the kind of checksum protection that ZFS is giving me on the current system. My plan would be to use all three of the current disks and a fourth in the new system, but that will require wiping out the current filesystem.

Thanks in advance!
 
@dewarrn1

First, would including a fourth disk be as simple as it looks? I've got 7.2's ZFS spanning 3 disks at the moment, but I've got a fourth sitting around and figure that it might as well be in the server.
It will fit well on 4 disks, but it will require recreating the ZFS pool; you will just create the raidz over 4 disks.

Second, do you have any recommendations for maintaining data integrity during the move? I've got external HDD's that can hold all of my stuff, but they're just FAT32 and lack the kind of checksum protection that ZFS is giving me on the current system. My plan would be to use all three of the current disks and a fourth in the new system, but that will require wiping out the current filesystem.
You can tar(1) and split(1) all your data onto that FAT32 filesystem (the parts need to be smaller than 4 GB). You may as well create TWO copies of your data there, in two folders, or just make a UFS filesystem there.
 
Very cool, I'll get that process underway. I'll probably generate some par2 data for those split tar files as well, just in case. Thanks!
 
This worked almost exactly as advertised! I ended up with 4x500GB HDDs with a 4-way mirrored base system, 4 GB swap spread across the disks, and ~1.3TB ZFS. My only hiccup was the "zpool import -D" bit, which for some reason didn't want to play nice. However, "zpool import basefs" did the trick, and now I'm getting things back onto ZFS. Nice work, V!
 
dewarrn1 said:
This worked almost exactly as advertised! I ended up with 4x500GB HDDs with a 4-way mirrored base system, 4 GB swap spread across the disks, and ~1.3TB ZFS. My only hiccup was the "zpool import -D" bit, which for some reason didn't want to play nice. However, "zpool import basefs" did the trick, and now I'm getting things back onto ZFS. Nice work, V!

Good to know that it also works for others ;)

The first version included zpool import basefs, but after messing with the 3 disks it imported with zpool import -D, so I changed the formula. I think I will include both just in case, thanks.
 
I've used your guide and had the same problem as dewarrn1 when trying to import the pool ([cmd=]zpool import -D[/cmd]), but [cmd=]zpool import basefs[/cmd] did the trick here as well :)

Oh, I installed the system on 2 drives and used raid1 for all the slices (/, /usr and /var).

I plan to add another disk in the future and I will keep you updated how the process of adding another disk to the zfs pool goes.

Anyway, great guide.
 
Kami said:
I plan to add another disk in the future and I will keep you updated how the process of adding another disk to the zfs pool goes.
But remember that You would have to destroy the current mirror and then create a RAIDZ, for example.

Kami said:
Anyway, great guide.
Thanks mate.
 
Dear vermaden or anybody,

would it be possible/prudent to modify the installation to have the entire / (USB) and /usr (ZFS) on one (mirrored) disk, in particular flash and having the rest of the file system, i.e., SWAP, /tmp, /var, /home, etc., on RAIDed hard drives using ZFS?

The motivation would be to further separate the OS/Application (fairly static on my system) from the data.

Thank you,

M
 
mefizto said:
Dear vermaden or anybody,

would it be possible/prudent to modify the installation to have the entire / (USB) and /usr (ZFS) on one (mirrored) disk, in particular flash and having the rest of the file system, i.e., SWAP, /tmp, /var, /home, etc., on RAIDed hard drives using ZFS?

The motivation would be to further separate the OS/Application (fairly static on my system) from the data.

Thank you,

M

I don't see any reason why this couldn't be done.

P.S.
Are you from Latvian Linux center?
 
@mefizto

If You want both / and /usr on separate disks/USB, then it would be better to create a RAID1 with gmirror on those USB drives, use UFS for /usr on that USB disk, and then put all the other filesystems, along with swap, on the remaining hard disks.
 
Dear killasmurf86,

thank you for the reply. And, no, I am not from Latvian Linux center.

Dear vermaden,

Code:
. . .it would be better, to create RAID 1 with gmirror on that USB drives, . . .

That was what I meant by "to have the entire / (USB) and /usr (ZFS) on one (mirrored) disk". Sorry for my imprecise English.

Code:
. . .use UFS for /usr on that USB disk. . .

What would be the advantage of using UFS instead of ZFS for /usr?

If it is not much bother, could you indicate, at least in general terms, which parts of your procedure have to be changed?

Thank you,

M
 
What would be the advantage of using UFS instead of ZFS for /usr?

If it is not much bother, could you indicate, at least in general terms, which parts of your procedure have to be changed?
Keep a small / and /usr (base system) on UFS (in case of any problems with ZFS) to have a fully working 'repair' environment, and put all the rest on the ZFS tank.

About the changes: You would have to create an additional UFS partition (e) for /usr, of course, in section 1.4:

Code:
# cat > label << __EOF__
# /dev/ad0s1:
8 partitions:
  a: 512m  0 4.2BSD
  e: *     * 4.2BSD
__EOF__
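That new e partition would still need a filesystem, a mount and an fstab entry; a sketch along the lines of section 1.4 (the label name usrfs is just my own pick here, and the basefs/usr dataset would then be skipped):
Code:
# glabel label usrfs ad0s1e
# newfs -U /dev/label/usrfs
# mount /dev/label/usrfs /NEWROOT/usr
(and in /NEWROOT/etc/fstab:)
/dev/label/usrfs /usr ufs rw 2 2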
 
Code:
DISKS: 3                                            
# cat > part << __EOF__                               
p 1 165 63  512M                                     
p 2 165  * 1024M                                      
p 3 159  *     *                                      
p 4   0  0     0                                    
a 1                                                
__EOF__                                               
                                                      
# fdisk -f part ad0                               
# fdisk -f part ad1                               
# fdisk -f part ad2

My disks are ad4, ad6, ad8 and ad10.

What would I change here? Because this is not working for me.

I understand fdisk -f part ad4 would be the right command, but when I do that I get this error:

Code:
******* Working on device /dev/ad4 *******
fdisk: invalid fdisk partition table found
fdisk: geom not found: "ad4"

thanks in advance..
 
So it looks like I was not using the full path when loading the kernel modules and was getting an error that I overlooked... hence why it could not find ad4; *blush* my bad.

So after that I was able to finish the install, but when I get to the 'reboot into single user mode' part I have some issues.

I have a USB keyboard and can not use the keyboard at boot time; odd.
When it boots I get the mountroot> prompt; also odd.

But when I look back through dmesg I do not see my disks as ad4, ad6, ad8, ad10, but rather as ada0, ada1, ada2, ada3.

I am assuming that has *something* to do with AHCI (something I've not used yet),

so what do I do with that?

Code:
# fdisk -f part ada0                                    
# fdisk -f part ada1                               
# fdisk -f part ada2
# fdisk -f part ada3

but do I need to load the ahci module beforehand?

thanks in advance.
 
I may be partially in the wrong place to ask this question. I have a sparc64 system, and before I set myself up for a lot of desk-to-face action, I was wondering if I could accomplish this sort of install using the livefs in combination with disk1?
 