bhyve - CPU, vCPU, cores and threads

IPTRACE

Well-Known Member

Reaction score: 21
Messages: 314

Hello!

Can someone explain how these work?
I mean the following kernel tunables and their optimal configuration.

hw.vmm.topology.cores_per_package
hw.vmm.topology.threads_per_core
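
For context, these appear to be settable with sysctl(8) on the host before starting a guest (the values below are just placeholders, not a recommendation):

```shell
# set globally on the host before starting the guest (placeholder values)
sysctl hw.vmm.topology.cores_per_package=2
sysctl hw.vmm.topology.threads_per_core=2
```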


I've got 2 physical CPUs, each with 10 cores and HT.
So FreeBSD shows 40 logical CPUs: CPU0-CPU39.
I mostly run FreeBSD VMs, but I have one Windows 10 VM as well.
What are the differences between the following configurations?

A
hw.vmm.topology.cores_per_package: 1
hw.vmm.topology.threads_per_core: 1

B
hw.vmm.topology.cores_per_package: 2
hw.vmm.topology.threads_per_core: 1

C
hw.vmm.topology.cores_per_package: 1
hw.vmm.topology.threads_per_core: 2

D
hw.vmm.topology.cores_per_package: 2
hw.vmm.topology.threads_per_core: 2

E
hw.vmm.topology.cores_per_package: 4
hw.vmm.topology.threads_per_core: 8



Is the following correct?
C: FreeBSD = vCPU: 1, cores: 1, threads: 2, logical: 2; Windows = vCPU: 1, threads: 1, logical: 1
D: FreeBSD = vCPU: 2, cores: 4, threads: 2, logical: 8; Windows = vCPU: 2, threads: 2, logical: 4
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 7,763
Messages: 30,893

I've got 2 physical CPU with 10 cores with HT.
That would be 2 packages (the physical CPUs); each package has 10 cores and 20 threads, giving a total of 40 threads (logical CPUs).

A) Is a single core CPU without hyper-threading; 1 logical
B) Is a dual-core CPU without hyper-threading; 2 logical
C) Is a single core CPU with hyper-threading; 2 logical
D) Is a dual-core CPU with hyper-threading; 4 logical
E) Is weird. You can't have 8 threads per core. HT only allows for 2.
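
SirDice's mapping can be sketched as simple arithmetic. A toy shell sketch (my own illustration, not bhyve code), assuming logical CPUs seen per package = cores_per_package × threads_per_core:

```shell
#!/bin/sh
# Toy illustration: logical CPUs a guest sees per package,
# given the two hw.vmm.topology.* values.
logical() {
    cores_per_package=$1
    threads_per_core=$2
    echo $((cores_per_package * threads_per_core))
}

logical 1 1   # A) single core, no HT  -> 1
logical 2 1   # B) dual core, no HT    -> 2
logical 1 2   # C) single core with HT -> 2
logical 2 2   # D) dual core with HT   -> 4
```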
 
OP

IPTRACE

Well-Known Member

Reaction score: 21
Messages: 314

Thanks, I'll try to set correct values for my environment.
 

abishai

Aspiring Daemon

Reaction score: 170
Messages: 735

The values don't matter much; they are useful for licensing, as Windows limits the number of physical CPUs it will use depending on the edition.
 
OP

IPTRACE

Well-Known Member

Reaction score: 21
Messages: 314

What values did you end up going with? I'm also running 2x 10-core CPUs with 20 threads each.
As abishai wrote, it depends on the Windows license. Windows 10 64-bit supports up to 2 physical CPUs.
My config is A) from the first post. I haven't changed it yet, because that configuration affects the other VMs as well.
If you want 2 CPUs with HT (4 logical CPUs), use example D).
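
As a concrete sketch, a Windows guest started under configuration D) with 4 vCPUs should then be reported as a dual-core CPU with HT (4 logical CPUs). The VM name win10 here is made up and most bhyve arguments are omitted:

```shell
# global topology (affects all guests started afterwards)
sysctl hw.vmm.topology.cores_per_package=2
sysctl hw.vmm.topology.threads_per_core=2

# then give the guest 4 vCPUs (other bhyve arguments omitted)
bhyve -c 4 ... win10
```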
 

kedar

New Member


Messages: 4

As abishai wrote, it depends on the Windows license. Windows 10 64-bit supports up to 2 physical CPUs.
My config is A) from the first post. I haven't changed it yet, because that configuration affects the other VMs as well.
If you want 2 CPUs with HT (4 logical CPUs), use example D).
Yeah, I was just curious. I'm using Windows 10, and with 2-4 cores it's smooth, but with more cores Windows 10 feels a bit laggy. I thought maybe my tunables were wrong.
hw.vmm.topology.cores_per_package: 10
hw.vmm.topology.threads_per_core: 2
 

jbulow

New Member


Messages: 1

Yeah, I was just curious. I'm using Windows 10, and with 2-4 cores it's smooth, but with more cores Windows 10 feels a bit laggy. I thought maybe my tunables were wrong.
Did you find a way to solve the laggy behavior of Windows 10 when using multiple cores? I have the same problem. I tried different topologies, but it seems that bhyve runs all guest cores on one physical core. I don't have this problem with Windows 7.

The output of top on the FreeBSD host shows CPU usage between 25% and 100% when running a heavy load on Windows 10 with 8 cores. The same load on Windows 7 showed CPU usage between ~200% and ~800%.
 

kedar

New Member


Messages: 4

Nope, I've tried everything I can think of. Just sticking to VMs with 2 cores at the moment.
 

OP

IPTRACE

Well-Known Member

Reaction score: 21
Messages: 314

I've set the values below without any visible issue on bhyve (FreeBSD) VMs, but for Windows I see 1 CPU socket with HT (bhyve -c 2).

hw.vmm.topology.cores_per_package: 2
hw.vmm.topology.threads_per_core: 2


I mean the number of vCPUs in the FreeBSD VM. I set bhyve -c 1 and saw 1 vCPU; I expected to see 4, i.e. 2x2 (2 cores with HT).
Have I missed something?

Or does it mean that the VM uses 4 host logical CPUs per vCPU defined on the bhyve command line?

Then I changed to the following.

hw.vmm.topology.cores_per_package=1
hw.vmm.topology.threads_per_core=2


Windows shows the same CPU as with the previous settings.
But when I set bhyve -c 4 with this configuration, Windows showed 2 CPUs with HT (2 sockets and 4 logical CPUs).

And again, I changed as below.

hw.vmm.topology.cores_per_package=2
hw.vmm.topology.threads_per_core=2


I set bhyve -c 8 for Windows and got 8 logical CPUs with HT (2 sockets, 2 cores per socket, and HT).

Conclusions:
1. The hw.vmm.topology.* settings don't affect the number of vCPUs in bhyve FreeBSD guests; only bhyve -c controls that.
2. The hw.vmm.topology.* settings do affect the number of logical CPUs Windows sees. I suppose the equation is similar to this:

Y = sockets * hw.vmm.topology.threads_per_core * hw.vmm.topology.cores_per_package

where Y is the value given on the command line as bhyve -c Y, and bhyve decides how to distribute the CPUs across sockets, cores, and threads. The number of sockets for Windows client editions (7, 10, etc., not the server editions) is at most 2, as is HT (hw.vmm.topology.threads_per_core).
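
The conjectured relation can be checked against the experiments above. A toy sketch (my own arithmetic, assuming the formula holds and Y divides evenly):

```shell
#!/bin/sh
# Conjecture from this thread: Y = sockets * cores_per_package * threads_per_core,
# so the number of sockets the guest sees = Y / (cores_per_package * threads_per_core).
sockets() {
    Y=$1; cores=$2; threads=$3
    echo $((Y / (cores * threads)))
}

sockets 4 1 2   # -c 4, cores=1, threads=2 -> 2 sockets with HT (as observed)
sockets 8 2 2   # -c 8, cores=2, threads=2 -> 2 sockets, 2 cores each, HT (as observed)
```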
 

grehan@

Member
Developer

Reaction score: 83
Messages: 84

Your conclusions are correct. The total number of CPUs is given with the -c <num> option to bhyve. The topology options only change how these are reported to the guest.
 