Question about creating a zvol-backed VM with vm-bhyve on FreeBSD 15

Dear all:
I have set up a FreeBSD 15 server with two zpools (system2026 for the FreeBSD system, vmdata for the vm-bhyve VMs). Below is the configuration.
Code:
Script started on Fri Apr 10 11:36:36 2026
root@yjjy002:~ # zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
system2026   764G  6.54G   757G        -         -     0%     0%  1.00x    ONLINE  -
vmdata      7.27T  3.84G  7.26T        -         -     0%     0%  1.00x    ONLINE  -

#zpool status
  pool: system2026
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    system2026  ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        da0p3   ONLINE       0     0     0
        da1p3   ONLINE       0     0     0

errors: No known data errors

  pool: vmdata
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    vmdata      ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        da2     ONLINE       0     0     0
        da3     ONLINE       0     0     0

errors: No known data errors
root@yjjy002:~ # zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
system2026               6.54G   734G    96K  /system2026
system2026/ROOT          6.54G   734G    96K  none
system2026/ROOT/default  6.54G   734G  6.54G  /
system2026/home            96K   734G    96K  /home
system2026/tmp            104K   734G   104K  /tmp
system2026/usr            288K   734G    96K  /usr
system2026/usr/ports       96K   734G    96K  /usr/ports
system2026/usr/src         96K   734G    96K  /usr/src
system2026/var            788K   734G    96K  /var
system2026/var/audit       96K   734G    96K  /var/audit
system2026/var/crash       96K   734G    96K  /var/crash
system2026/var/log        300K   734G   300K  /var/log
system2026/var/mail       104K   734G   104K  /var/mail
system2026/var/tmp         96K   734G    96K  /var/tmp
vmdata                   3.84G  7.14T    96K  /vmdata
vmdata/vm                3.84G  7.14T   248K  /vmdata/vm
vmdata/vm/fb15            238M  7.14T   120K  /vmdata/vm/fb15
vmdata/vm/fb15/disk0      237M  7.14T   236M  -
vmdata/vm/fb16           1.30M  7.14T   112K  /vmdata/vm/fb16
vmdata/vm/fb16/disk0     1.23M  7.14T   236M  -
vmdata/vm/hlw001         1.80G  7.14T  1.80G  /vmdata/vm/hlw001
vmdata/vm/hlw002         11.2M  7.14T  1.79G  /vmdata/vm/hlw002
vmdata/vm/hlw003         1.79G  7.14T  1.79G  /vmdata/vm/hlw003

root@yjjy002:~ # cat /etc/rc.conf
hostname="yjjy002.laifeng.xian"
ifconfig_igb0="inet 10.8.1.253 netmask 255.255.255.0"
defaultrouter="10.8.1.254"
sshd_enable="YES"
moused_nondefault_enable="NO"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
dumpdev="AUTO"
zfs_enable="YES"
vm_enable="yes"
vm_dir="zfs:vmdata/vm"

root@yjjy002:~ # vm switch list
NAME    TYPE      IFACE      ADDRESS  PRIVATE  MTU  VLAN  PORTS
public  standard  vm-public  -        no       -    -     igb0

root@yjjy002:~ # vm list
NAME    DATASTORE  LOADER     CPU  MEMORY  VNC              AUTO  STATE
fb15    default    bhyveload  1    2gb     -                No    Stopped
fb16    default    bhyveload  1    2gb     -                No    Stopped
hlw001  default    uefi       8    32gb    10.8.1.253:5900  No    Running (6308)
hlw002  default    uefi       8    32gb    10.8.1.253:5901  No    Running (6854)
hlw003  default    uefi       8    16gb    10.8.1.253:5902  No    Running (7246)
root@yjjy002:~ # exit
The question is shown in the picture below. When I run `vm create -t freebsd-zvol -m 2gb -s 20gb freebsd15`, I get the error message shown in the attached screenshot. Please help me. Thanks.
 

Attachments

  • image_2026-04-09_08-55-25.jpeg (176.4 KB)
Dear SirDice: morning. Below is the output. Yesterday I installed two VMs, and now I cannot power them off; it looks like all the VMs are locked by the system. Please help me. Thanks.
Somebody told me I can simply remove the run.lock file and then start the VM. Is that the right way? Thanks.
 

Attachments

  • image_2026-04-10_23-36-34.png (69.3 KB)
Dear SirDice:
Morning. Before I created the vmdata zpool, did I need to run the steps below?

1. gpart destroy -F /dev/da2
2. gpart destroy -F /dev/da3
3. reboot
4. gpart create -s gpt /dev/da2
5. gpart create -s gpt /dev/da3
6. zpool create vmdata mirror /dev/da2 /dev/da3

or:

1. gpart destroy -F /dev/da2
2. gpart destroy -F /dev/da3
3. reboot
4. zpool create vmdata mirror /dev/da2 /dev/da3

Which one is right? Thanks.
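(For what it's worth, both sequences can produce a working pool: ZFS is happy with whole disks, and the `gpart create` step only matters if you want GPT partitions, e.g. for labels or to leave slack space for replacement disks. A sketch of the whole-disk path, assuming da2/da3 are the disks shown in `zpool status` above and that any old pool metadata should be cleared first:

```shell
# Wipe any old partition table and stale ZFS labels (destructive!):
gpart destroy -F da2
gpart destroy -F da3
zpool labelclear -f /dev/da2
zpool labelclear -f /dev/da3

# No reboot is normally needed; create the mirror directly on the whole disks:
zpool create vmdata mirror da2 da3
```

A reboot between the destroy and create steps is generally unnecessary unless the kernel still holds the old partitions open.)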
 
Another question: a fresh FreeBSD 15 boot shows this vm-bhyve message:
vm-public: warning: adding member interface igb0 which has an ip address assigned is deprecated and will be unsupported in a future release.
Dear SirDice: if I have only the one physical network card igb0, what is the right configuration for the vm-bhyve bridge? Thanks.
 

Attachments

  • image_2026-04-10_23-48-19.jpg (262.5 KB)