Solved: Cannot destroy partition table

Greetings all,

Consulting gpart(8) on how to destroy a partition table:
-F Forced destroying of the partition table even if it
is not empty.
Code:
# gpart destroy -F /dev/nvd0
gpart: Device busy

O.K., let us delete the partitions:
Code:
# gpart delete -i 1 /dev/nvd0
gpart: /dev/nvd0p1 deleted
# gpart delete -i 2 /dev/nvd0
gpart: Device busy
O.K., perhaps /dev/nvd0p2 is mounted:
Code:
# cat /etc/fstab
Nope.

What else to try?

Kindest regards,

M
 
Hi msplsh,

thank you for trying to help. I tried your suggestion; /dev/nvd0p2 is not mounted.

Also tried:
Code:
# ls -l /dev/nvd0*
crw-r----- 1 root operator 0x6a Mar 3 12:30 /dev/nvd0
crw-r----- 1 root operator 0x6e Mar 3 12:30 /dev/nvd0p2
# fstyp /dev/nvd0p2
fstyp: /dev/nvd0p2: filesystem not recognized

Kindest regards,

M
 
What happens, now that you've removed index 1, if you try again with gpart destroy /dev/nvd0? Are you still getting device busy? I've run into this, and I vaguely remember being able to do it once I destroyed the indexes, but don't remember the exact circumstances.
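Roughly the sequence I have in mind (gpart show first, just to see what's still on the disk):
Code:
# gpart show nvd0
# gpart delete -i 2 nvd0
# gpart destroy nvd0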
 
Hi scottro,

thank you, but:
Code:
# gpart destroy -F /dev/nvd0
gpart: Device busy
I really do not feel like dd'ing 229 GB.
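(By which I mean zeroing the whole drive, something along the lines of:
Code:
# dd if=/dev/zero of=/dev/nvd0 bs=1m
which I would very much like to avoid.)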

Kindest regards,

M
 
Hi msplsh,

thank you for trying to help. Actually, the command you proposed alerted me to the problem; it returned:
Code:
gpart: geom 'nvd0': File exists
Hence my attempt to destroy it.

And, yes, it is an attempt to install zfs.

Kindest regards,

M
 
Device busy makes me think it's being managed by the ZFS subsystem, is in a zpool, and needs to be unmounted & dropped.
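You could check with something like:
Code:
# zpool status
# zpool import
zpool status shows pools that are already imported; zpool import without arguments scans attached disks for pools that could be imported.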
 
Hi msplsh,

thank you. This is an attempt to install FreeBSD via a script, on a machine that cannot boot from /dev/nvd0; the bootloader will live on a USB stick and hand off to /dev/nvd0. I already have the bootloader on the USB, so I tried the installation on /dev/nvd0.

I had a problem (typo) with the script, part of which is reproduced:
Code:
#!/bin/sh
# FreeBSD installation script 03/30/2021, no encryption, Beadm compatible

# Set installation disk:
DISK="/dev/nvd0"

echo "Destroying old partitions on the destination drive"
gpart destroy -F $DISK

echo "Configuring zfs for ashift=12"
# Force ZFS to use 4k blocks, i.e., ashift=12 before creating the pool
sysctl -i vfs.zfs.min_auto_ashift=12

# Create the gpt structure on the drives.
echo "Partitioning the destination drive using gpt"
gpart create -s gpt $DISK
gpart add -t freebsd-swap -l swap -a4k -s 4G $DISK
gpart add -t freebsd-zfs -l zfspool -a4k $DISK

# Create new ZFS root pool, mount it, and set properties
#(/mnt, /tmp and /var are writeable)
echo "Creating pool system"
zpool create -f -o altroot=/mnt -m none system "/dev/gpt/zfspool"
zfs set atime=off system
zfs set checksum=fletcher4 system
zfs set compression=lz4 system

echo "Configuring zfs filesystem"
# The parent filesystem for the boot environment.
# All filesystems underneath will be tied to a particular boot environment.
zfs create -o mountpoint=none system/BE
zfs create -o mountpoint=/ -o refreservation=2G system/BE/default

# Datasets excluded from the bootenvironment:

# home
zfs create -o mountpoint=/home -o compression=on -o setuid=off system/data
At this point the script stopped, complaining about a non-existing parent dataset. When I corrected the typo and re-ran the script, it complained that both partitions already exist, although, as you can see, the gpart destroy -F command should have destroyed the partition table.

Kindest regards,

M
 
Yeah, your destroy command isn't going to work if ZFS has control. Just do a zpool status / zfs list and get rid of anything in there that contains the drive.
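Something like this, for example (the pool name system is taken from your script; adjust to whatever zpool status / zfs list actually report):
Code:
# zpool status
# zpool destroy system
# zpool labelclear -f /dev/nvd0p2
If the pool isn't imported, the labelclear on the partition alone should be enough. After that, gpart destroy -F should go through.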
 
Thank you SirDice, msplsh, and Zirias, for your patience and help.
This was exactly the problem. How did you know that?

Thinking about it in view of your solution, I was able to destroy partition 1 because it is not mounted until defined in /etc/fstab, correct?

Kindest regards,
M
 
Device busy usually means a program has it. If it's not mounted, then nothing has it... unless... the ZFS subsystem doesn't show disks as mounted, only pools. Also, ZFS starts as a service... aka a program. You didn't show your mount results, so I had to guess.

If partition 1 had been, say, an EFI partition, ZFS wouldn't have cared about it and wouldn't have had a lock on it. In your case, partition 1 was FreeBSD swap, which was probably not turned on, so it wasn't under control of the kernel and ZFS didn't care. Partition 2 was under control of ZFS, which means the root device couldn't be modified without releasing that lock.

fstab doesn't have anything to do with it. IIRC, it's only for stuff that needs to be mounted on boot, not current mounts, which means mount is the only place to be looking for reliable answers.
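For example:
Code:
# mount
# swapinfo
mount lists current mounts and swapinfo lists active swap devices; together with the zpool status check above, that covers everything likely to be holding the disk.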
 
because it is not mounted until defined in /etc/fstab, correct?
Stop, that's already wrong. /etc/fstab predefines mounts, so you can mount just by giving the path (potentially even as a normal user), and, without the noauto option, it tells the system to mount these at startup. It will never tell you what is mounted right now. Just type mount (without any arguments) to find out. BTW, ZFS datasets have their own means of being mounted (automatically); you'll never find these in fstab.
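E.g.:
Code:
# mount
# zfs mount
zfs mount without arguments lists the ZFS datasets that are currently mounted.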

And then, mounting a filesystem isn't the only way you can "use" a partition. If it's part of an imported zpool, it's in use as well. Or if it's part of a GEOM mirror, for example.
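To rule those out, something like
Code:
# zpool status
# gmirror status
would show an imported pool or a GEOM mirror claiming the partition.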
 
Hi msplsh,

thank you for the explanation. Yes, since the script quit before swap would have been added to /etc/fstab later on, it was never enabled.

Hi Zirias,

you are, of course, correct; I was assuming it was understood that we were referring to the script, and, as noted in my reply to msplsh, swap was not yet enabled.

Kindest regards,

M
 