Impact of mixing PCIe 2.0 & 2.1 2-port copper NICs in a PCIe 3.0 Intel C600 series motherboard with 2x 8-core Xeon CPUs

Hi FreeBSD Gurus!

We have FreeBSD 12 (planning to update to 13) installed on an old server (built in 2014):
a PCIe 3.0 Intel C600 series motherboard with two 8-core Xeon CPUs
and six PCIe 3.0 slots:
four x8 slots (dedicated to CPU 1)
two x16 slots (dedicated to CPU 2)

Both CPU are:
Xeon E5-2670 8C/16T 2.60GHz 20MB 8.00GT/s 1600MHz 115W

How does mixing these dual-port NICs:
Intel® I350-AM2 (Powerville based) - 1 Gb/s - PCIe 2.1
ftp://ftp.kontron.com/Products/Accessories/EoL_Components/LAN_WLAN/DS_F5000-L004_Dual-LAN_D3035.pdf
Intel® 82576EB (Kawela based) - 1 Gb/s - PCIe 2.0

affect overall network performance for delay/jitter-sensitive applications like video/voice streaming?

Does CPU load increase because of the higher number of IRQs from the NICs, etc.?

Thank you all for detailed answers!
 
When you have a dual-CPU machine I feel it is best to group related functions together.
All my 10G NICs are on one CPU and all my NVMe drives are on the other CPU.
NUMA describes how memory and devices are shared between the CPUs: a remote access has to cross the QPI link between the sockets, which is slower than a local access.
So my thought is that if you can keep tasks from jumping between CPUs you save latency.
For example, a gmirror of NVMe drives should not span different CPUs.
Crossing NUMA domains is a slight performance bottleneck.
Likewise, a LAGG spanning both CPUs' buses would not be desirable.
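If you want to see and enforce that locality from the shell, here is a minimal sketch for FreeBSD 12 or newer; the CPU list and the IRQ number are only example values, take the real ones from cpuset and vmstat -ai:
Code:
# how many NUMA domains the kernel sees
sysctl vm.ndomains
# which CPUs belong to domain 0 (the first socket)
cpuset -g -d 0
# pin one NIC interrupt to the CPUs of the socket its slot is wired to
# (IRQ 67 and the CPU list 0-7 are example values)
cpuset -l 0-7 -x 67
That way the NIC's interrupt handlers stay on the socket that owns the slot instead of bouncing traffic across QPI.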

I can't speak to your interrupt problem.
Study your vmstat -ai output.

Here is my LAGG0 on my router.
Code:
[SNIP]
irq67: t5nex0:0a9                 257129          1
stray irq67                            0          0
irq68: t5nex0:1a0                 218233          0
stray irq68                            0          0
irq69: t5nex0:1a1                 245691          1
stray irq69                            0          0
irq70: t5nex0:1a2                 286330          1
stray irq70                            0          0
irq71: t5nex0:1a3                 239806          0
stray irq71                            0          0
irq72: t5nex0:1a4                 195039          0
stray irq72                            0          0
irq73: t5nex0:1a5                 186268          0
stray irq73                            0          0
irq74: t5nex0:1a6                 211110          0
stray irq74                            0          0
irq75: t5nex0:1a7                 222689          0
stray irq75                            0          0
irq76: t5nex0:1a8                 174816          0
stray irq76                            0          0
irq77: t5nex0:1a9                 153821          0

My motherboard Ethernet port, used as WAN:
Code:
irq183: em0:rxq1                 2565309          5
stray irq183                           0          0
 
How does mixing the dual-port NICs
affect overall network performance for delay/jitter-sensitive applications like video/voice streaming?

Does CPU load increase because of the higher number of IRQs from the NICs, etc.?
I am having a hard time with your question.

It seems from your title that you are concerned about two NICs using different bus protocols.
That should not be an issue at all. Gigabit Ethernet will not top out your bus, whether it is PCIe 2.0 or PCIe 2.1.
So mixing those cards should not be a problem at all.
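Rough numbers to put it in perspective: a single PCIe 2.x lane runs at 5 GT/s, which after 8b/10b encoding is about 500 MB/s per direction, while 1 Gb/s Ethernet is only about 125 MB/s. Even a dual-port 1G card saturating both ports uses a fraction of an x4 slot, let alone x8.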
 
How does mixing the [different] dual-port NICs
affect overall network performance?
Let's just chop the question up.

Without knowing your network this is impossible to answer. Do you have switches in your network?
Are you connecting to clients directly?

The older NIC will use more interrupts. It will also run hotter.
Nothing that will affect performance, though.
 
I am having a hard time with your question.
Thank you for your attention and patience! ;)
It seems from your title that you are concerned about two NICs using different bus protocols.
That should not be an issue at all.
My question can be split in two:
- do PCIe 2.0 / 2.1 NICs installed in a PCIe 3.0+ motherboard have a negative impact on the server's overall network performance?
- does mixing NICs (some of them PCIe 2.0/2.1, the rest PCIe 3.0) on a PCIe 3.0 motherboard have a negative impact on the server's overall network performance?

And the total number of NICs is 6-8.

Gigabit Ethernet will not top out your bus, whether it is PCIe 2.0 or PCIe 2.1.
So mixing those cards should not be a problem at all.
So… what about 10-20G NICs?
(Of course from good-quality brands like Mellanox, etc…)
 
Without knowing your network this is impossible to answer. Do you have switches in your network?
Yes, the NICs from the server (firewall/router) are connected to Layer 3 switches.
And some switches (mostly in the management networks) are tuned to prioritize certain kinds of time-sensitive traffic.

Are you connecting to clients directly?
This is a data-center site, so there is a small office/admin room and a lot of remote connections from outside.

Could you please explain this question of yours?

The older NIC will use more interrupts. It will also run hotter.
Nothing that will affect performance, though.
Thank you.
 
Q: Are you connecting to clients directly?
Q: Could you please explain this question of yours?
I was asking whether you are connecting directly, like SFP to SFP without a switch, using DAC cables.


My question can be split in two:
- do PCIe 2.0 / 2.1 NICs installed in a PCIe 3.0+ motherboard have a negative impact on the server's overall network performance?
Ethernet knows nothing about the PCIe lanes, right? Look at the bus as the on-ramp.
Theoretically a Chelsio T4 10G card (PCIe 2.x) should perform the same as a Chelsio T5 (PCIe 3.x), albeit with more heat, simply because of the die shrink from the T4 to the T5 ASIC.
Airflow over the Ethernet cards is a stated requirement for Chelsio. They need it as they are fanless.
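The on-ramp math holds for 10G too: a PCIe 2.0 x8 link gives roughly 8 x 500 MB/s = 4 GB/s per direction, while a 10 Gb/s port needs about 1.25 GB/s, so even a dual-port 10G card on PCIe 2.x has bandwidth to spare.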


- does mixing NICs (some of them PCIe 2.0/2.1, the rest PCIe 3.0) on a PCIe 3.0 motherboard have a negative impact on the server's overall network performance?
Once again, the Ethernet protocol, whether 10G or 100G, knows nothing of the on-ramp.

Knowing how your motherboard assigns PCIe lanes and types is essential to tuning your server.
For example, an x8 slot might only deliver x4 lanes electrically.
Use pciconf -lvc and check that the negotiated link width is what you are expecting.
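As a rough illustration (igb0 and the exact field values are just examples, the real output depends on the card, slot and driver), the negotiated link shows up in the PCI-Express capability line:
Code:
# pciconf -lvc igb0
...
    cap 10[a0] = PCI-Express 2 endpoint ... link x4(x4) speed 5.0(5.0)
...
The first number is what was actually negotiated and the one in parentheses is the maximum the device advertises; an x8 card showing link x4(x8) is sitting in a slot that is only wired for four lanes.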
 
Intel® I350-AM2 (Powerville based) - 1 Gb/s - PCIe 2.1
One thing that would be interesting to see is SR-IOV.
The I350 is the only 1G Ethernet chipset to support SR-IOV on FreeBSD.
I would ensure all virtualization features are enabled in the BIOS and check whether SR-IOV is available.
The I350 might need to be in a PCIe 3.x slot for the feature to work, if it works on a PCIe 2.x card at all.
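A quick way to check is to look for the SR-IOV extended capability on the device; igb0 and the VF count below are only example values:
Code:
# does the NIC expose an SR-IOV extended capability?
pciconf -lc igb0 | grep -i SR-IOV
# if it does, VFs are created with iovctl(8) from a small config file,
# roughly:  PF { device : "igb0"; num_vfs : 4; }
iovctl -C -f /etc/iov/igb0.conf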
 
Q: Are you connecting to clients directly?
Q: Could you please explain this question of yours?
I was asking whether you are connecting directly, like SFP to SFP without a switch, using DAC cables.
Only one LAN link is connected directly to an internal server; the other LAN links go to internal Layer 3 switches.

For now all WANs are connected over copper Ethernet to ISP equipment, but in the near future we plan to remove that equipment from the chain and connect the fibers directly to SFPs on the server.

Is that what you were asking about?
(Sorry if this is a bit of a silly question… ;)

Ethernet knows nothing about the PCIe lanes, right? Look at the bus as the on-ramp.
Theoretically a Chelsio T4 10G card (PCIe 2.x) should perform the same as a Chelsio T5 (PCIe 3.x), albeit with more heat, simply because of the die shrink from the T4 to the T5 ASIC.
Airflow over the Ethernet cards is a stated requirement for Chelsio. They need it as they are fanless.
Understood. Thank you!

Once again, the Ethernet protocol, whether 10G or 100G, knows nothing of the on-ramp.

Knowing how your motherboard assigns PCIe lanes and types is essential to tuning your server.
For example, an x8 slot might only deliver x4 lanes electrically.
Use pciconf -lvc and check that the negotiated link width is what you are expecting.
I would be very thankful if you could explain in more detail how to determine this:
how your motherboard assigns PCIe lanes and types
 
I would ensure all virtualization features are enabled in the BIOS and check whether SR-IOV is available.
The I350 might need to be in a PCIe 3.x slot for the feature to work, if it works on a PCIe 2.x card at all.
Hm. Interesting… Until now I had no time to dive that deep (because we practically do not use virtualization at this site) ;) Well…

Right now ALL VIRTUALIZATION functions and ALL POWER CONTROL are DISABLED in the BIOS: that means everything is tuned (and the rest disabled) to give AS MUCH PERFORMANCE AS POSSIBLE at the price of power consumption.

So….
I carefully read the link to the markmcb blog; it is a really interesting feature for cases where you use jails, containers, etc…

I still think that this NIC virtualization adds a small overhead that would be more noticeable under high load (10, 20, 40… 50… 100 Gb/s), and especially with short-lived TCP connections.
But for light loads like a small/mid-size company server, this overhead would be practically nothing…

Am I wrong?

Please explain to me in detail how enabling this may impact overall network performance (or reduce interrupt calls and CPU load) in my exact case.

Thank you once again for your patience!
 