Multiple public network interfaces not accessible on Amazon EC2/VPC

Is it possible to have an Amazon EC2 instance attach to multiple VPC network interfaces?

I currently have 3 VPC interfaces, each with a public IP address, attached to a FreeBSD 10.3-RELEASE instance. The configuration looks good and appears to be quite flexible. However, I can only ping the public address of the default interface.

All interfaces are attached and the network security group is wide open. On closer inspection, it looks like each interface that is magically created is set up to route through the loopback interface. I am unsure how this is intended to work, but it seems to me that adding and removing interfaces should just work?

Any clues on how to make this work? I have tried adding routes to no avail.

# netstat -r
Routing tables

Internet:
Destination        Gateway            Flags      Netif Expire
default                               UGS         xn0
localhost          link#1             UH          lo0
                   link#2             U           xn0
                   link#2             UHS         lo0
                   link#3             UHS         lo0
                   link#4             UHS         lo0

Internet6:
Destination        Gateway            Flags      Netif Expire
::                 localhost          UGRS        lo0
localhost          link#1             UH          lo0
::ffff:     localhost          UGRS        lo0
fe80::             localhost          UGRS        lo0
fe80::%lo0         link#1             U           lo0
fe80::1%lo0        link#1             UHS         lo0
ff01::%lo0         localhost          U           lo0
ff02::             localhost          UGRS        lo0
ff02::%lo0         localhost          U           lo0

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
    inet6 ::1 prefixlen 128
    inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
    inet netmask 0xff000000
xn0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    ether 02:37:83:5f:33:13
    inet netmask 0xffffff00 broadcast
    media: Ethernet manual
    status: active
xn1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    ether 02:11:45:d7:f4:69
    inet netmask 0xffffff00 broadcast
    media: Ethernet manual
    status: active
xn2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    ether 02:93:b8:78:92:d9
    inet netmask 0xffffff00 broadcast
    media: Ethernet manual
    status: active
If you need outgoing packets to route via a different interface based on their source address, that can't be done with a single traditional routing table, as it routes strictly by destination address. You have to implement what Cisco would call "policy routing", or similar. Exactly how you do it depends on what you are doing with those addresses and what type of firewall you use:

- With any firewall config (including no firewall at all), multiple FIBs (routing tables) enable per-process routing using setfib(1) (FIB-aware code can even do it per socket).
- With IPFW, you can match the source address and add a setfib action.
- With PF, you can use the route-to and reply-to options (and not bother with FIBs).
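As a rough sketch of the IPFW variant (my illustration, not from the thread: the rule numbers are arbitrary, the addresses stand in for the private IPs of xn1/xn2, and it assumes FIBs 1 and 2 already hold default routes out those interfaces):

```
# Hypothetical IPFW rules: select the routing table by source address.
# and stand in for the private IPs bound to xn1 and xn2.
ipfw add 100 setfib 1 ip from to any
ipfw add 110 setfib 2 ip from to any
```

Note that setfib is a non-terminating action, so rule processing continues and any allow/deny rules further down still apply.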

The interface addresses having a route on lo0 is perfectly normal and has nothing to do with outgoing traffic from those addresses.

If you explain the big picture, particularly why you feel you need for multiple addresses and what you intend to do with them, people may be able to offer better advice.
Murph, I am thinking there is something missing from the AMI's network configuration. If I attach 3 interfaces to an instance with the vanilla configuration, I should be able to ping them.

(Attached screenshot: Screen Shot 2016-07-28 at 12.40.07 PM.png)

As you can see, each public IP is routed to an internal IP. The instance doesn't have to concern itself with the public IPs; as far as the instance is concerned, it only deals with the private IPs. The security groups are permissive and the network devices are attached. I am at a loss and think it is a bug in the AMI configuration.
It depends on the configuration of the underlying network infrastructure. If they have a very strict implementation of BCP 38 (such as a default Cisco uRPF config), then only the first address in your type of config would be expected to work without fully configuring the host side of it. I.e. the network infrastructure config may require you to send packets through the correct interface and the network will drop sent packets which do not match the interface.

At this point, I'd probably do a quick check with `tcpdump -nv -i xn1` and `tcpdump -nv -i xn2` to see if the inbound packets are arriving as expected on those interfaces. If they are, then it is likely that you are simply not permitted to send from those secondary addresses via xn0 (assuming that none of your security/firewall config is getting in the way).

N.B. It is largely a waste of an IP address to have a single system configured as both primary and secondary MX. It complicates mail configuration for little to no benefit.
Murph, thanks. `tcpdump` shows packets coming in to `xn1` and out of `xn0`.

I have tried adding static routes such as:

route add -iface xn1

which only returns:

route: writing to routing socket: File exists
add net gateway xn1 fib 0: route already in table

Neither pf nor ipf is running. I am thinking this is a bug in the implementation of the ec2-net-utils package.

P.S. I have my reasons (not related to this issue) for running the primary MX, secondary MX, IMAP, and webmail instances on the same machine.
You can't fix this just by adding routes, if my suspicion that the outbound traffic is being blocked by the infrastructure is correct. The standard/traditional Unix routing model does not allow for this type of configuration. You have to either use multiple FIBs or a firewall config which forcibly overrides the normal routing decisions. Without that special handling, all outbound traffic will always go through a single interface, and the source address of the outgoing packets will be completely ignored for routing decisions.
To add to that, on FreeBSD you can't redirect locally originating outgoing traffic using a packet filter such as pf(4). You need FIBs for that. If your FreeBSD system was the router/firewall for a separate LAN network you could do what is known as "policy routing" by matching traffic incoming on the LAN interface and instructing PF with the route-to keyword to take a different routing decision for the traffic matched.
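A minimal pf.conf sketch of that router scenario (purely illustrative: the interface names and addresses are made up, and this applies to forwarded LAN traffic, not traffic the firewall itself originates):

```
# Hypothetical: this FreeBSD box routes for a LAN on em1, with the
# default uplink on em0 and a second uplink on em2.
isp2_gw = ""      # example gateway on the second uplink
special = ""   # example LAN host that should use uplink 2

# Match the host's traffic as it arrives on the LAN interface and
# override the normal routing decision, sending it out em2 instead.
pass in on em1 from $special to any route-to (em2 $isp2_gw)
```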


I am thinking this is a bug with the implementation of the ec2-net-utils package.
It's not. It's entirely expected behavior. You will get the exact same result using physical hardware.
I use the FreeBSD AMI everywhere I can on AWS and haven't seen any problems. If you're looking to run an application on different addresses, then aliasing the interface works. I think the problem here is that the address of the secondary NIC is in the same subnet as the primary, which makes routing difficult since both NICs live on the same host, i.e.:
xn0 inet netmask 0xffffff00 broadcast
xn1 inet netmask 0xffffff00 broadcast
The netmask/broadcast are identical. You can attempt to prioritize interfaces, but... If it's a case of a nested VM or a back-end database involving multiple hosts, etc., then provision the primary interfaces of the hosts in one subnet and the secondary interfaces in a different one. Example: if using the default VPC, slice the network into 3 subnets, put all the primary interfaces in one subnet, and the secondary interfaces in another. Or you could create a custom VPC and slice up the network as you like. In either case the primaries won't be able to talk to the secondaries and vice versa. The only caveat is the need to tailor the security group(s) to the subnets.
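To illustrate that layout with the AWS CLI (all IDs and CIDRs here are made-up examples, and this assumes a configured awscli):

```
# Hypothetical: carve two subnets out of an existing VPC, one for
# primary interfaces and one for secondaries.
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block

# Create a secondary ENI in the second subnet and attach it as device 1.
aws ec2 create-network-interface --subnet-id subnet-0aaa111122223333 \
    --groups sg-0bbb444455556666
aws ec2 attach-network-interface --network-interface-id eni-0ccc777788889999 \
    --instance-id i-0ddd000011112222 --device-index 1
```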

It is possible to route multiple Elastic IPs using FIBs.
For example, we allocated 3 subnets and assigned them to independent interfaces.

1. add to /boot/loader.conf

2. add to /etc/rc.conf
static_routes="ena1 ena2"
route_ena1="default -fib 1"
route_ena2="default -fib 2"

3. create /etc/rc.local
ifconfig ena1 fib 1
ifconfig ena2 fib 2

chmod +x /etc/rc.local
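Pulling those three steps together, here is what I believe the elided pieces look like (the gateway addresses are placeholders for your subnets' routers, and the net.fibs tunable is my assumption about what step 1 adds, since the original value was lost):

```
# /boot/loader.conf -- step 1: number of routing tables (assumed value)
net.fibs=3

# /etc/rc.conf -- step 2: per-FIB default routes (gateways are examples)
static_routes="ena1 ena2"
route_ena1="default -fib 1"   # ena1's subnet gateway
route_ena2="default -fib 2"   # ena2's subnet gateway

# /etc/rc.local -- step 3: bind each interface to its FIB
ifconfig ena1 fib 1
ifconfig ena2 fib 2
```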


Now you can ping all 3 IPs. If you want to use, for example, ena1's IP:
setfib 1 csh
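setfib(1) can also wrap a single command instead of a whole shell; for example (the ping target is just an illustration):

```
# Run one command under routing table 1 (ena1's FIB)
setfib 1 ping -c 3

# Inspect that FIB's routing table
setfib 1 netstat -rn
```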

For IPv6 you can also add to /etc/rc.conf :
ipv6_static_routes="ena1 ena2"
ipv6_route_ena1="default fe80::xx:xxxx:xxxx:xxxx%ena1 -fib 1"
ipv6_route_ena2="default fe80::xx:xxxx:xxxx:xxxx%ena2 -fib 2"

With these settings I have 3 Elastic IPs, accessible from the Internet via IPv4 and IPv6, each assigned to a different interface.

P.S. All subnets must be assigned to the same routing table in AWS VPC.