bhyve Advice on Firewall Deployment Architecture

Dear friend,
No, there is no enterprise customer. I am the enterprise organization considering upgrading my security infrastructure. That's why I'm raising this issue in this community.
 
In my opinion bridges, epairs, and taps suck the throughput out of devices. If you insist on going virtual, use PCI passthrough (ppt) and real interfaces.
That is virtualization networking basics, in my opinion.
 

I generally agree with this, especially for firewall and edge-network use cases. Using bridges, taps, or epairs adds extra layers to the networking path, which can introduce unnecessary overhead and latency under load.

For a production firewall, PCI passthrough with direct access to physical NICs is clearly the preferred approach if you care about predictable performance and throughput. In that context, minimizing the virtual networking stack is simply good design.
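As a rough sketch of what that looks like on FreeBSD (the PCI address 2/0/0, the VM name fw, and the disk path are placeholders; find your device's bus/slot/function with pciconf -lv):

```sh
# /boot/loader.conf: reserve the NIC for passthrough before the host driver claims it
# (2/0/0 is a placeholder bus/slot/function)
pptdevs="2/0/0"
vmm_load="YES"

# Then hand the reserved device to the guest as a passthru slot:
bhyve -c 2 -m 2G -H \
  -s 0,hostbridge \
  -s 4,passthru,2/0/0 \
  -s 5,virtio-blk,/vm/fw/disk0.img \
  -s 31,lpc -l com1,stdio \
  fw
```

With this, the guest drives the NIC hardware directly and no bridge/tap sits in the data path; the trade-off is that the host loses the interface entirely while it is reserved.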

That said, bridges and virtual interfaces still have their place for less latency-sensitive workloads, but for a WAN-facing firewall, passthrough is the right tool.
 
One exception might be high-end cards with SR-IOV virtual functions (VFs). Chelsio has it down pat, and it sometimes works on Intel.
HP and Dell servers tend to build in crappy onboard adapters (Broadcom and others). You can use those to serve slower boxes.
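On FreeBSD, SR-IOV VFs are carved out with iovctl(8); a minimal sketch for a Chelsio port (the device name cxl0 and the VF count are assumptions for illustration):

```sh
# /etc/iov/cxl0.conf: create 4 VFs on the physical function cxl0
PF {
        device : "cxl0";
        num_vfs : 4;
}
DEFAULT {
        passthrough : true;   # reserve the VFs for bhyve passthrough
}

# Apply it now, and at boot via rc.conf:
#   iovctl -C -f /etc/iov/cxl0.conf
#   sysrc iovctl_files="/etc/iov/cxl0.conf"
```

Each VF then shows up as its own PCI function that can be handed to a guest, so several VMs share one physical port without a software bridge.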

You are showing em0 in your script.
Those work fine for your internet connection; 1G is common at home. You need to start with a good network-facing interface.
You don't want an internet-facing interface with a shared BMC. That is a hacking device waiting to go off.

Think about your expansion capabilities. You want a switch at the top of the rack, then have your FreeBSD gateway/firewall controlling the flow.
10G minimum to the top-of-rack switch is a good way to think. The used market is selling 40G adapters for $50. So what is your next step? Think ahead.

Cisco makes good top-of-rack switches. I bought a used one and outgrew it, then went to a better used Cisco Nexus. Their used gear is worthy, and the documentation is outstanding.
You have to learn their command-line syntax. Every switch has its own OS you must learn.
 
I'll show this community how to move forward in this technology
I don't believe you have to show this community how to use technology in the future.

All members here have good knowledge of running secure networks, now and in the future.

Me, for example: I work for a company that sells OpenBSD firewalls, and we do a good job selling hardware firewalls. Believe me, experience tells us that customers want firewalls like this on bare metal.

We also have firewalls for the cloud, but for a company the front firewall is always on bare metal.
 
The statement "front firewall is always on bare metal" is incorrect.
Key clarification:
  • Firewalls are not required to run on bare metal (physical hardware). They can be deployed as virtual machines (VMs) in cloud or virtualized environments.
  • Virtual firewalls are standard in modern infrastructure (e.g., AWS Firewall Manager, VMware NSX, Palo Alto VM-Series).
  • Deployment choice depends on factors like security policies, scalability, and infrastructure (e.g., cloud-native setups favor VMs; air-gapped systems might use bare metal for isolation).
Should it be on bare metal?
Not necessarily. It is not a universal requirement. Virtual deployments are common and often preferred for flexibility. Bare metal might be chosen for specific high-security scenarios, but it is not mandatory.
 
If you're looking for a resilient solution for an enterprise environment, get OPNSense from Deciso with a proper support contract from them and put that product on two separate physical nodes.
Or just use vanilla FreeBSD and configure carp + pfsync *A LOT* faster than over some clunky GUI.
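A minimal sketch of what that configuration looks like in rc.conf on the master node (interface names, addresses, vhid, and password are all example values):

```sh
# /etc/rc.conf fragment: CARP shared address on em0, pf state sync over em1
ifconfig_em0="inet 192.0.2.2/24"
ifconfig_em0_alias0="inet vhid 1 pass changeme alias 192.0.2.1/32"
pf_enable="YES"
pfsync_enable="YES"
pfsync_syncdev="em1"

# carp(4) must be loaded, e.g. in /boot/loader.conf:
#   carp_load="YES"
# The backup node uses the same vhid/pass with a higher advskew,
# e.g.: "inet vhid 1 advskew 100 pass changeme alias 192.0.2.1/32"
```

Clients point at the shared 192.0.2.1 address; pfsync keeps the state tables mirrored so an automatic failover does not drop established connections.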
And still: putting the actual firewall/gateway into a jail gives even more isolation (and flexibility) than bare-metal. I've been running this for many years now and never want to switch back. Forklift upgrade? just set up a new jail and switch over. Want to test some configuration? Clone the jail, test on some other network segment/VLAN and never touch the production gateway(s).
You don't even need to run ssh in that jail, so your firewall is de facto only reachable via the host from inside your network (and/or maybe a hardened jumphost also running in a jail)
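A minimal jail.conf sketch of that pattern (the jail name, paths, and igb1 are assumptions); vnet.interface moves the physical NIC wholesale into the jail's own network stack:

```sh
# /etc/jail.conf: firewall jail with its own vnet and a dedicated NIC
fw {
        path = "/jails/fw";
        host.hostname = "fw.internal";
        vnet;
        vnet.interface = "igb1";   # igb1 leaves the host while the jail runs
        exec.start = "/bin/sh /etc/rc";
        exec.stop  = "/bin/sh /etc/rc.shutdown";
        exec.clean;
        mount.devfs;
}
```

The jail then runs its own pf and routing table against the real hardware, which is where the "basically bare metal" performance claim comes from; the host reaches it via jexec rather than sshd.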
I had to administer an OPNSense appliance for ~2 years and while they are nice for the typical "prosumer" that needs a GUI and video tutorials even for lacing their shoes, it is extremely restrictive and biased in every regard and it takes insane amounts of time to get stuff configured that would only touch 2-3 config files and some entries in rc.conf and could be done in a mere 5 minutes. Therefore, if someone is asking in the FreeBSD forums, I'd *always* recommend proper, vanilla FreeBSD. In forums like STH where you mainly find such "prosumers" asking those questions: sure, go with the next best GUI thingy...

For a production firewall, PCI passthrough with direct access to physical NICs is clearly the preferred approach if you care about predictable performance and throughput.
No. If we are talking about FreeBSD based solutions, jails with interfaces passed through via vnet are preferred as they have zero overhead and are basically bare metal. But sure, go with whatever your hallucinating chatbot is telling you...
We're doing pretty much all the heavy lifting within our network that way on passed-through 25G and 40G interfaces as well as combinations of vlan/bridge/epair (apart from some policy-based routing on our core switch for some of the *very* heavy lifting between some VLANs that can be dealt with with rather simple rulesets). Only the edge routers are OpenBSD VMs (on bhyve, no pcie passthrough) for various other reasons - the virtualization penalty doesn't matter with only symmetrical 1G uplinks and even 10G should be OK with decent host hardware. Our branch VPN/edge routers are also OpenBSD (bare metal), but anything that routes to/from/between internal networks is PF in jails and around 1/3 of them runs as redundant pairs via carp+pfsync.

I'll show this community how to move forward in this technology
Sure. Then why do you even bother asking for advice here *from people that actually work with FreeBSD* (some even on a daily basis), if you then insult them by insisting that your chatbot knows better?
 
You refuse to answer the most basic question here: why use a hypervisor on top of a single node to run a single instance of OPNSense?

All that does is introduce a pile of unnecessary complexity to your stack. It adds moving parts (like your vibe-coded hardening script that *YOU* have to maintain into the future), and moving parts break. That's a law of nature. You haven't explained what benefit your virtualized setup gives you to offset this disadvantage.

Personally I agree with sko that plain vanilla FreeBSD with jails would be a faster and more resilient setup in a real pro environment that you slim down to just the required parts that your business really needs. That does require someone at the wheel who knows exactly what they're doing and doesn't need an LLM or a community forum when stuff breaks in the middle of the night!

FreeBSD itself is plenty capable of pushing packets across very fast links and jails are zero-cost abstractions that help your operational work immensely. You, however, are not running a sufficiently mature operation right now, which I gather from the way you reason about these things and blindly outsource the hardening of your host environment to an LLM. Two big red flags right here in this single forum thread.

My assessment? You are out of your depth here. Hire someone to implement this for your business and to get you up to speed on this subject. There is no shame in not knowing everything. OPNSense could be a very realistic choice in your scenario. If you do pick OPNSense, give Deciso a call and get them to help you implement this so that your company ends up with a supported stack. If you really are running a business on this (you saying "enterprise" to me means a big business with >100 people depending on your competence), winging it yourself at this stage would be penny wise pound foolish. You have been warned.

Also note that OPNSense itself is off-topic on this forum, it has a community of its own.

Phrases like these:
I'll show this community how to move forward in this technology
..are not very convincing or helpful coming from you right now.
 
Thanks for taking the time to explain your setup in detail — I appreciate the insight and the real-world experience you shared.
I now have a much better understanding of the FreeBSD + jails (vnet) approach and why it can be preferable to full virtualization in certain environments, especially when performance and flexibility are the primary goals. The way you describe using jails for routing and firewalling makes a lot of sense.
In my specific case, this system is directly tied to a production business, so for now I’m leaning toward a more conservative and predictable setup. That said, your explanation clarified a lot, and it’s definitely an approach I want to explore more deeply in a non-production environment.
Thanks again for sharing your experience.
Thanks for taking the time to write such a detailed response. I understand your point about unnecessary complexity and the risks that come with adding extra layers, especially in a single-node setup.
Your comments helped clarify an important gap in my own reasoning: I hadn’t articulated clearly enough what concrete benefit a hypervisor would provide in this specific scenario, and that’s a fair criticism. From a purely technical standpoint, I agree that a slim, well-understood FreeBSD setup can be both faster and more resilient when operated by someone with deep, hands-on experience.

In my case, this system is directly tied to a running business, which is why I’m now leaning toward a more conservative and predictable approach, even if it’s not the most elegant from a purist’s perspective. Your feedback reinforced the importance of reducing moving parts at this stage rather than optimizing prematurely.
I appreciate the warning and the candid advice. It gave me a clearer view of the operational risks involved, and it’s something I’ll take seriously going forward.
 
Back when I was a small lad, my father caught me speeding my bicycle down a steep public road.

I got into a lot of trouble for that, mainly because I was standing on the handlebars at the time! Back then, misdemeanors generally called for the lawyer cane, but this was a special case, and the leather razor strop was deployed.

I mention this incident because it illustrates that the difference between what we can do and what is sensible can sometimes only be appreciated through the accumulation of experience.

The dangers of virtualising core services generally materialise as edge conditions in a complex environment, i.e. during unusual events where your operational survival is already at risk. And those dangers can metastasize over time, as change control (assuming it exists) fails to prevent changes which create additional unintended inter-dependencies and failure modes. Complexity is the enemy of reliability.

This perspective does not come from LLMs (which, as far as I can see, are not generally good at appreciating edge conditions). It comes from decades of hard-lived experience. I have heard people with titles like "Microsoft Certified Consultant" claim that there is "nothing you can not virtualise." I have seen "experts" virtualise core services. And I have seen those exact virtualised services abjectly fail because the levels of expertise and attention required to professionally maintain complex inter-dependencies over time were not sustained, or at least not focused in the right places (even in an environment where paying top dollar for capable people was completely the norm).

My advice is exactly the same as my father would have given. Keep your posterior close to the seat, and your hands in reach of the brake levers, at all times.
 

Thanks for the detailed explanation. After reflecting on your points about complexity and operational risk, I’ve decided not to pursue virtualization on FreeBSD for this setup.
Your comments helped clarify the difference between what’s technically possible and what’s sensible in a production environment.
 