Tiny servers for multi-protocol traffic generation for lab tests (Lenovo ThinkCentre M92p vs. Dell OptiPlex 9020)

Dear FreeBSD Gurus!

Please give your suggestions about a tiny station to use as a multi-protocol traffic generator for lab tests:
Lenovo ThinkCentre M92p
vs.
Dell OptiPlex 9020

The main GOAL is to generate different types of traffic with Seagull from a small bunch (10 pcs) of these tiny servers towards application servers behind the WAN of pfSense.
(For test purposes the WAN of pfSense would be connected to a high-load switch, and all of these tiny servers would also be connected to that switch.)
Each tiny server would have a specific setup & working script for Seagull to imitate the real behavior of real application end-users (a rough sketch of such a wrapper is below).
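For illustration only, a minimal sketch of what such a per-node wrapper could look like, assuming Seagull's usual -conf/-dico/-scen/-log/-llevel options and purely hypothetical file and host names:

```python
#!/usr/bin/env python3
"""Per-node wrapper that starts a Seagull client scenario.
File names below are hypothetical; the -conf/-dico/-scen/-log/-llevel
options follow Seagull's documented command line, but verify them
against the version you actually install."""
import socket
import subprocess
from pathlib import Path

BASE = Path("/usr/local/share/seagull")   # assumed install prefix
NODE = socket.gethostname()               # e.g. "tgen-03"

cmd = [
    "seagull",
    "-conf", str(BASE / "config" / "conf.client.xml"),
    "-dico", str(BASE / "config" / "base_diameter.xml"),
    "-scen", str(BASE / "scenario" / f"{NODE}.client.xml"),  # per-node scenario
    "-log",  f"/var/log/seagull/{NODE}.log",
    "-llevel", "ET",                      # log errors + traffic events
]

subprocess.run(cmd, check=True)           # fail loudly if the scenario aborts
```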

Let's also note that a discrete Intel-based NIC may be added as an option. (I am not sure that either the i5-3470 or the i5-4590 is able to handle 10G traffic, even if the NIC itself is 10G.)

Thank You for Your time and opinion!
 
Packet/traffic generation isn't usually very demanding on the hardware, so instead of configuring and maintaining a horde of 'small-ish' but still relatively expensive fully-fledged machines, I'd go for one system with decent uplink capabilities. If the system you'd like to stress-test only has a 1 GBit uplink there isn't really a point in generating 10 GBit of traffic - you will only test the congestion-handling capabilities of the switch, which will slow your packet generation down to the bandwidth of the smallest link in the path.
If you really want/need to oversize, give that box a 10 GBit uplink; but you won't be able to get any meaningful results for anything over the uplink capacity of the tested system... E.g. many Xeon-D systems already come with 2x 10 GBit NICs (and varying numbers of 1 GBit ports); we are running our routers and gateways on Supermicro Xeon-D 15xx systems with 2x 10G / 6x 1G NICs.

If you really want a bunch of discrete, small systems and the burden of maintaining all of them, I'd suggest something that runs off an SD card, so you could at least provision them relatively quickly and cheaply via a pre-configured image and quickly swap out the SD cards to upgrade or change testing scenarios etc... Depending on the amount of traffic you *really* need for the system(s) you are going to test, something like a small OrangePi or similar that can even be powered via PoE is sufficient.
E.g. I'm using their smallest variant (OrangePi Zero) combined with a PoE splitter whenever I need some small "throwaway" system somewhere, e.g. for testing, as a short-term network sensor or to control something (GPIO) etc. They are cheap, low-maintenance (just replace the SD card) and only need either a PoE network jack or a USB charger (and connect via wifi).
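FWIW, provisioning can be as dumb as cloning one master image card by card; a rough sketch (image path and device name are placeholders - double-check the device before letting dd loose on it):

```python
#!/usr/bin/env python3
"""Clone one pre-configured master image onto several SD cards, one per
test node.  Paths and the device name are placeholders; dd will happily
overwrite whatever device you point it at, so check twice."""
import subprocess

MASTER = "/srv/images/tgen-master.img"   # hypothetical master image
CARD   = "/dev/da0"                      # SD card reader on the admin box

def flash(node: str) -> None:
    """Write the master image to the card currently in the reader."""
    input(f"Insert the card for {node} and press Enter...")
    subprocess.run(
        ["dd", f"if={MASTER}", f"of={CARD}", "bs=1m", "conv=sync"],  # bs=1M with GNU dd
        check=True,
    )
    print(f"{node}: done, remove the card.")

for n in range(1, 11):                    # ten generator nodes
    flash(f"tgen-{n:02d}")
```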
 
> Packet/traffic generation isn't usually very demanding on the hardware, so instead of configuring and maintaining a horde of 'small-ish' but still relatively expensive fully-fledged machines, I'd go for one system with decent uplink capabilities. If the system you'd like to stress-test only has a 1 GBit uplink there isn't really a point in generating 10 GBit of traffic - you will only test the congestion-handling capabilities of the switch, which will slow your packet generation down to the bandwidth of the smallest link in the path.
1. The aftermarket cost of an M92p is $50-70, so $500-700 is not such a huge budget for a really flexible system that is able to generate (and automate by scripts) multi-protocol traffic;

2. The servers/appliances that are intended to be tested have 10G (and faster) uplinks. ;)

> If you really want/need to oversize, give that box a 10 GBit uplink; but you won't be able to get any meaningful results for anything over the uplink capacity of the tested system... E.g. many Xeon-D systems already come with 2x 10 GBit NICs (and varying numbers of 1 GBit ports); we are running our routers and gateways on Supermicro Xeon-D 15xx systems with 2x 10G / 6x 1G NICs.
With a bunch (10) of small systems, in addition to automating the whole testing process by scripting, we get:
- redundancy (no problem if 1-2-3 nodes just die for some reason - testing goes forward and no time is wasted);
- each sub-group of nodes (for example 4+3+3) may be tuned for a certain type of testing.

> If you really want a bunch of discrete, small systems and the burden of maintaining all of them,
There is no extra burden beyond what we already have for lab testing. Anyway, all 10 would be identical, so less configuration & maintenance work.

> I'd suggest something that runs off an SD card, so you could at least provision them relatively quickly and cheaply via a pre-configured image and quickly swap out the SD cards to upgrade or change testing scenarios etc... Depending on the amount of traffic you *really* need for the system(s) you are going to test, something like a small OrangePi or similar that can even be powered via PoE is sufficient.
> E.g. I'm using their smallest variant (OrangePi Zero) combined with a PoE splitter whenever I need some small "throwaway" system somewhere, e.g. for testing, as a short-term network sensor or to control something (GPIO) etc. They are cheap, low-maintenance (just replace the SD card) and only need either a PoE network jack or a USB charger (and connect via wifi).
I am not sure that small systems like a Raspberry/Orange Pi are able to generate such an amount of traffic (I mean 1G).
Are you sure these systems are suitable for high-load testing? A quick measurement like the sketch below would settle it.
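(I suppose one could just measure it; a minimal sketch with iperf3 against a lab sink - the peer address is a placeholder, and the JSON fields assume iperf3's standard -J output:)

```python
#!/usr/bin/env python3
"""Check whether this node can actually push ~1 Gbit/s with iperf3.
The peer address is a placeholder; iperf3 -s must already be running there."""
import json
import subprocess

PEER = "192.0.2.10"        # iperf3 server on the other side of the switch
TARGET_BPS = 0.9e9         # call it "saturated" at >= 0.9 Gbit/s

out = subprocess.run(
    ["iperf3", "-c", PEER, "-t", "10", "-J"],   # 10 s TCP test, JSON output
    capture_output=True, text=True, check=True,
).stdout

bps = json.loads(out)["end"]["sum_sent"]["bits_per_second"]
print(f"sent {bps/1e9:.2f} Gbit/s -> {'OK' if bps >= TARGET_BPS else 'too slow'}")
```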
 
> - redundancy (no problem if 1-2-3 nodes just die for some reason - testing goes forward and no time is wasted);
That's 100% docker/kubernetes thinking there... "we know our stuff is crap and dies all the time, so we just run more of it and optimize for fast deployment of even more of that crap". Of course without ever looking into the reason WHY something is failing...
 
> That's 100% docker/kubernetes thinking there... "we know our stuff is crap and dies all the time, so we just run more of it and optimize for fast deployment of even more of that crap". Of course without ever looking into the reason WHY something is failing...
Heh! :)
Thank you for making me smile!

But anyway, this M92p model is **very robust** according to Googling (and my own experience). It does not overheat as badly as the 9020.

I wrote *1-2-3 nodes die for some reason*, but in real life I have never seen this model die - no overheating, hangs, memory or CPU errors, nothing.
Need to say more: we ran a stress test 24/7 for 3 days straight, and... not a single problem.
*(Possibly usable as a coffee-cup / royal-sandwich warmer during the tests :) Joke...*
 
I hope I'm skirting the pfSense/FreeBSD line here, but one could always look at the specs on the NetGate devices to see their hardware and what their testing has shown (they seem to do some good capacity testing).
That said, the network config has a lot to do with the results. Let's say you have a source that can saturate a 10G link, but you are terminating that link into a switch. If that switch can't handle the 10G into it, you automatically get reduced results.
If the switch can handle the 10G into it, it also needs 10G of capacity out (for example, 10 ports at 1G).

If the system under test has a 10G uplink but the other side can't saturate it (you would need e.g. 10 ports at 1G, or 4 at 2.5G), then that uplink will be mostly idle.
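(A back-of-the-envelope way to sanity-check a planned topology: the load that can actually reach the device under test is bounded by the narrowest aggregate stage along the path. A tiny sketch with made-up link speeds:)

```python
#!/usr/bin/env python3
"""Back-of-the-envelope check: achievable load on the device under test
is limited by the narrowest stage of the path.  All numbers are examples."""

generators    = 10 * 1.0   # ten generator nodes at 1 Gbit/s each
switch_uplink = 10.0       # switch-to-DUT link, Gbit/s
dut_uplink    = 10.0       # device-under-test uplink, Gbit/s

achievable = min(generators, switch_uplink, dut_uplink)
print(f"max offered load at the DUT: {achievable:.1f} Gbit/s")

# Swap in 4 generators at 2.5 Gbit/s, or a 1 Gbit/s DUT uplink,
# to see where the bottleneck moves.
```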
 
> That's 100% docker/kubernetes thinking there... "we know our stuff is crap and dies all the time, so we just run more of it and optimize for fast deployment of even more of that crap". Of course without ever looking into the reason WHY something is failing...
This post really gets my personal "most useless post I have read today" award, but yeah, modern resilient distributed software architecture has not reached everyone...

However, have you tried building Ostinato? I do not know whether it runs on FreeBSD, though; I have just dabbled with it a few years ago...
 
> I hope I'm skirting the pfSense/FreeBSD line here, but one could always look at the specs on the NetGate devices to see their hardware and what their testing has shown (they seem to do some good capacity testing).
> That said, the network config has a lot to do with the results. Let's say you have a source that can saturate a 10G link, but you are terminating that link into a switch. If that switch can't handle the 10G into it, you automatically get reduced results.
> If the switch can handle the 10G into it, it also needs 10G of capacity out (for example, 10 ports at 1G).
>
> If the system under test has a 10G uplink but the other side can't saturate it (you would need e.g. 10 ports at 1G, or 4 at 2.5G), then that uplink will be mostly idle.
Primarily I am looking for a flexible & powerful toolset for testing TCP/IP-based protocols, with scripting and results-analysis capability.
First for testing network & hardware settings inside-from-inside, then inside-from-outside (from another geolocation).

So, after all the settings have been "polished to maximum performance & stability" during inside-from-inside testing, only a few corrections to the settings should be needed once the inside-from-outside testing is done. (Something along the lines of the analysis sketch below is what I have in mind for the results part.)
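For example, a hedged sketch of post-run analysis, assuming the generator has been configured to dump per-interval counters to a CSV file (Seagull can write periodic statistics; the column names here are hypothetical, so map them to whatever your version actually produces):

```python
#!/usr/bin/env python3
"""Summarise a per-interval stats CSV from a traffic-generator run.
The column names (calls_per_s, failed_calls) are hypothetical --
map them to whatever your generator actually writes."""
import csv
import statistics
import sys

rates, failures = [], 0
with open(sys.argv[1], newline="") as fh:
    for row in csv.DictReader(fh):
        rates.append(float(row["calls_per_s"]))
        failures += int(row["failed_calls"])

print(f"intervals: {len(rates)}")
print(f"mean rate: {statistics.mean(rates):.1f} calls/s")
print(f"peak rate: {max(rates):.1f} calls/s")
print(f"failed calls total: {failures}")
```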
 
So basically: try to saturate the up and down links, and make sure you know the reasons for any drops, e.g. saturated 10G into 1G drops 90%. Fair enough.
Also (as I noted before), application settings tuning is an important thing, even more so than hardware or FreeBSD settings.
 
If I understand your questions correctly, it comes down to: You have a certain workload, and you are asking us whether a certain Lenovo model or a certain Dell model is better.

Nowhere do you tell us what the workload is, other than the generic term "traffic generation", and hinting that there may be a 10G NIC involved.

Nowhere do you tell us what the specs of the two machines are; you expect us to do all that research.

Is the traffic generation a userspace ping, with a single packet of a few dozen bytes, one every second? Are you trying to saturate the 10G Ethernet link? What protocol are you trying to generate traffic for? There is a huge difference between something inherently simple (like ICMP packets, jumbo frames, with thousands of bytes of all zeroes in them) and something exceedingly complex that requires lots of computation to generate the packets. Do you need to analyze the packets that are returned? Will the traffic generators have to keep internal state? What characteristics of the traffic are you going to measure? (The sketch below shows what the trivial end of that spectrum looks like; anything stateful or protocol-aware costs far more CPU per packet.)
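To make the low end concrete, a hedged sketch: a few lines of stateless, fixed-content UDP generation that puts almost no load on the sender beyond the syscall rate; the target address is a placeholder for a lab sink, not a real service.

```python
#!/usr/bin/env python3
"""The 'inherently simple' end of traffic generation: stateless UDP
datagrams full of zeroes, sent as fast as the socket allows.
Target address/port are placeholders for a lab sink only."""
import socket

TARGET = ("192.0.2.20", 9000)   # lab sink, placeholder address
PAYLOAD = bytes(1400)           # ~MTU-sized datagram, all zeroes
COUNT = 1_000_000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(COUNT):
    sock.sendto(PAYLOAD, TARGET)
sock.close()
print(f"sent {COUNT} datagrams of {len(PAYLOAD)} bytes")
```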
 