My Introduction to OpenBSD

Cover photo by Stelio Puccinelli on Unsplash

In this post, I'd like to share why I consider OpenBSD a viable but often overlooked platform for building routers and firewalls. I've tried to highlight the networking features of OpenBSD that, in my opinion, make this OS stand out.

Please treat this post as a collection of my personal notes and findings. It doesn't claim to be objective or comprehensive.

Preface

Recently I was looking for an open-source routing solution to build site-to-site VPN gateways and stateful firewalls. My criteria included ease of support and automation, feature richness, and a low resource footprint. I dismissed options such as VyOS and pfSense quite early because I enjoy building things myself and prefer to have more control over the system. So I continued my search among Linux distributions. But as much as I love Linux, its "batteries not included" approach still seemed to require too much effort. I was looking for more balance between freedom of choice and the convenience of ready-made functionality. This made me look around for other open-source alternatives and eventually led me to OpenBSD.

First impressions

The first thing I noticed when I started working with OpenBSD was the sense of a complete and well-designed system. All my humble networking needs were covered by the base system: BGP, OSPF, IPSec, and a packet filter. The daemons implementing these protocols share a very similar configuration syntax and philosophy, which made me feel comfortable with the system quickly.

My next discovery was that OpenBSD has everything needed to build a high-availability appliance out of the box. There is CARP, which fills the FHRP role, pfsync to keep packet filter state tables synchronized across nodes, and even sasyncd to synchronize IPSec SAs (although I didn't try the latter).
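
As a rough sketch of what such a pair can look like (interface names, addresses, and the password are made up; the backup node would use a higher advskew), the primary node only needs two extra hostname files:

/etc/hostname.carp0:
inet 192.0.2.1 255.255.255.0 192.0.2.255 vhid 1 carpdev em0 pass examplepass advskew 0

/etc/hostname.pfsync0:
up syncdev em1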

And since I've touched on CARP and pfsync, I can't help but mention the use of pseudo-interfaces in OpenBSD. Many things in OpenBSD are done via interfaces. For instance, you don't need a daemon to export flow data: you just configure a pflow interface with ifconfig and that's it. Or if you need to access firewall logs, you just point tcpdump at a pflog interface. How cool is that?
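
For example, exporting flow data and reading firewall logs boils down to something like this (the collector address and port are placeholders):

# create a pflow interface exporting IPFIX to a collector
$ doas ifconfig pflow0 create
$ doas ifconfig pflow0 flowsrc 192.0.2.1 flowdst 192.0.2.50:9995 pflowproto 10

# read packets matched by pf rules that carry the "log" keyword
$ doas tcpdump -n -e -ttt -i pflog0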

Simplicity and coherence

To me, systemd and netplan feel like overkill when it comes to managing the network interfaces of a server, especially a router. OpenBSD keeps each network interface config in a separate /etc/hostname.<if> file, which simply contains parameters for ifconfig. And if you need to make sure that a config is always applied to the right interface, you can use /etc/hostname.<lladdr> to bind it to the link-layer address (e.g., /etc/hostname.00:00:5e:00:53:af).
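
A minimal interface config might look like this (addresses are made up), and re-running the netstart script applies it:

/etc/hostname.em0:
description uplink
inet 192.0.2.10 255.255.255.0
inet6 autoconf
up

$ doas sh /etc/netstart em0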

The same goes for service management. The most complex case of service management on a router that comes to my mind is running multiple instances of the same daemon in different VRFs (1). That can easily be achieved with the rcctl tool, even for daemons not natively aware of rdomains, by passing the rtable option. More on that later.

  1. From here on, I will refer to VRFs as routing domains, or rdomains, which is the more idiomatic OpenBSD term.

I also like how OpenBSD network daemons follow the low coupling and high cohesion design principles. A good example of that would be the utilization of kernel features such as route labels and packet tags.

I was puzzled at first about how to configure route redistribution between BGP and OSPF because I couldn't find a redistribute bgp command in the ospfd.conf man page. But soon I realized I could mark BGP routes with an arbitrary label in bgpd.conf and then match that label in the ospfd config.
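
If my memory of the config grammar serves, it boils down to two lines like these (the label name is arbitrary):

# /etc/bgpd.conf: tag routes learned via BGP with a route label
match from any set rtlabel "from-bgp"

# /etc/ospfd.conf: redistribute kernel routes carrying that label
redistribute rtlabel "from-bgp"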

Packet filter

A big part of the OpenBSD networking stack is, of course, PF, the packet filter. I don't want to dive too deep into technical details here. There are plenty of articles on the Internet and even a book dedicated to PF.

In terms of traffic filtering functionality, PF is on par with iptables. Although its syntax is very straightforward and human-friendly, it takes some time to wrap your head around its operating principles if you come from iptables or traditional firewalls. PF is the only firewall I know of where the last matching rule determines the outcome. This forces you to start with broad, common rules, such as block all, and then add more and more specific ones.
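
A minimal pf.conf sketch of that style (interface and networks are placeholders) could be:

# /etc/pf.conf
set skip on lo

# broad rule first; it only wins if nothing more specific matches later
block log all

# more specific rules placed later override it, since the last match wins
pass out on em0 inet keep state
pass in on em0 inet proto tcp to port 22 keep state

# "quick" short-circuits evaluation: no later rule can override this one
block in quick on em0 from 192.0.2.0/24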

As already mentioned, OpenBSD features the pfsync protocol. It enables you to build highly available firewall clusters, similar to those offered by the big vendors.

Routing

OpenBSD supports OSPF and BGP out of the box with OpenOSPFD and OpenBGPD respectively. FRRouting and BIRD can also be installed as external packages, but I haven't tried them yet.

Both BGPD and OSPFD have accompanying CLI tools, namely bgpctl and ospfctl, which allow you to extract operational data (i.e., the equivalent of show commands) and make runtime changes such as clearing neighbors.
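
A few commands of this kind that I reach for regularly (the peer address is a placeholder):

$ doas bgpctl show summary                  # BGP session overview
$ doas bgpctl show rib neighbor 192.0.2.1   # RIB entries learned from a peer
$ doas bgpctl neighbor 192.0.2.1 clear      # reset a session
$ doas ospfctl show neighbor                # OSPF adjacencies
$ doas ospfctl show fib                     # routes installed in the kernel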

As I've mentioned before, OpenBSD supports virtual routing with rtables and rdomains. Since you can assign multiple rtables only to the default rdomain, the two terms are often used interchangeably, but they should not be confused. Separate rtables in the default rdomain can be used for policy-based routing: you match packets with pf and send them to a specific rtable where the route lookup should happen. Rdomains, on the other hand, have interfaces assigned to them and are the OpenBSD equivalent of VRFs. I recommend this article to learn more about this topic.
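
To make the distinction more concrete, here is a rough sketch (interface names, table IDs, addresses, and the choice of ntpd as an example daemon are all mine):

# put an interface (and its routes) into rdomain 1
$ doas ifconfig em1 rdomain 1
$ doas ifconfig em1 inet 10.1.0.1/24

# run commands and inspect routes inside that rdomain
$ doas route -T 1 exec ping 10.1.0.2
$ route -T 1 show

# run a daemon inside rdomain 1, even if it knows nothing about rdomains
$ doas rcctl set ntpd rtable 1

# policy routing in the default rdomain: a pf.conf rule forces the
# route lookup for matching packets into a separate rtable
pass in on em0 from 10.0.0.0/24 rtable 2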

IPSec

IPSec support is provided by iked for IKEv2 and isakmpd for IKEv1. Unfortunately, you can't run both on the same machine because they listen on the same UDP ports (500 and 4500). Perhaps this can be circumvented by running iked and isakmpd in different rdomains, but you'll still need two public IPs for that.

Both iked and isakmpd need only one config file to describe both Phase 1 and Phase 2, though I find it a bit confusing that while the iked config is called /etc/iked.conf, it's /etc/ipsec.conf for isakmpd.
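
For iked, a minimal site-to-site policy fits in a few lines, something like the sketch below (the name, networks, addresses, and PSK are all placeholders):

# /etc/iked.conf
ikev2 "site-b" active esp \
        from 10.0.1.0/24 to 10.0.2.0/24 \
        local 192.0.2.1 peer 198.51.100.1 \
        psk "use-a-long-random-key"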

The OpenBSD IPSec stack utilizes a special enc pseudo-interface. It allows you to apply pf rules to IPSec-encapsulated traffic and to watch traffic going to or from an IPSec tunnel, before encryption and after decryption, with tcpdump. In my opinion, this makes troubleshooting significantly easier.
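
For example (networks are placeholders):

# cleartext view of traffic entering and leaving IPSec tunnels
$ doas tcpdump -n -i enc0

# pf.conf: filter tunnelled traffic after decapsulation
pass in on enc0 from 10.0.2.0/24 to 10.0.1.0/24 keep state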

With the recent 7.4 release, OpenBSD got support for route-based IPSec, which looks very promising, but I haven't had a chance to try it yet.

Monitoring

I used Zabbix to monitor OpenBSD boxes by installing zabbix-agent from packages. There is an official OpenBSD template that can be used as a starting point.

There is also node_exporter available in packages if you prefer Prometheus.
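
Installing and enabling either agent is the usual pkg_add plus rcctl routine; for node_exporter it goes roughly like this (the exact package and rc script names may vary between releases, so check with pkg_info):

$ doas pkg_add node_exporter
$ doas rcctl enable node_exporter
$ doas rcctl start node_exporter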

Automation

OpenBSD is supported by all major configuration management systems, such as Puppet, Chef, Ansible, and Salt, although ready-made modules are not as abundant as they are for the various Linux distributions.

In my case, I used Puppet to automate almost all aspects of system configuration. I relied on the bsd module to manage network interfaces and the pf module to manage PF rules. For daemons like BGPD, IKED, and OSPFD, I created my own modules.

Documentation

Compared to Linux, there are relatively few articles and blog posts about OpenBSD. This is expected given the comparatively small user base. However, it is compensated for by the quality and completeness of the manual pages.

Performance

Up to this point, it may all seem too good to be true. However, when I was testing IPSec I found network performance to be considerably lower than that of Linux.

I was passing iperf traffic through two OpenBSD VMs running on the same KVM host. There was a GRE over IPSec tunnel between them with AES-256-GCM encryption, which is hardware accelerated on both OpenBSD and Linux. A separate KVM host, connected via a 1 Gbit/s network, was running two VMs with iperf that generated the traffic. This wasn't the best test topology, since the physical gigabit connection between the hosts created a bottleneck. But unfortunately, OpenBSD didn't even hit that bottleneck, as you can see in the results below.

$ iperf3 -B 172.16.2.11 -c 172.16.2.10
Connecting to host 172.16.2.10, port 5201
[  5] local 172.16.2.11 port 42727 connected to 172.16.2.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  69.4 MBytes   582 Mbits/sec   35    189 KBytes
[  5]   1.00-2.00   sec  67.5 MBytes   566 Mbits/sec   43    189 KBytes
[  5]   2.00-3.00   sec  63.6 MBytes   533 Mbits/sec   48    178 KBytes
[  5]   3.00-4.00   sec  63.0 MBytes   529 Mbits/sec   38    195 KBytes
[  5]   4.00-5.00   sec  64.0 MBytes   536 Mbits/sec   17    245 KBytes
[  5]   5.00-6.00   sec  64.3 MBytes   540 Mbits/sec   48    218 KBytes
[  5]   6.00-7.00   sec  60.9 MBytes   511 Mbits/sec   32    253 KBytes
[  5]   7.00-8.00   sec  61.7 MBytes   518 Mbits/sec   32    280 KBytes
[  5]   8.00-9.00   sec  60.0 MBytes   503 Mbits/sec   48    218 KBytes
[  5]   9.00-10.00  sec  61.1 MBytes   513 Mbits/sec   43    245 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   636 MBytes   533 Mbits/sec  384             sender
[  5]   0.00-10.00  sec   634 MBytes   531 Mbits/sec                  receiver

iperf Done.
$ iperf3 -B 172.16.2.11 -c 172.16.2.10
Connecting to host 172.16.2.10, port 5201
[  5] local 172.16.2.11 port 58431 connected to 172.16.2.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   112 MBytes   937 Mbits/sec  107    346 KBytes
[  5]   1.00-2.00   sec   110 MBytes   923 Mbits/sec    6    324 KBytes
[  5]   2.00-3.00   sec   109 MBytes   912 Mbits/sec   12    233 KBytes
[  5]   3.00-4.00   sec   109 MBytes   912 Mbits/sec    9    250 KBytes
[  5]   4.00-5.00   sec   109 MBytes   912 Mbits/sec   36    376 KBytes
[  5]   5.00-6.00   sec   109 MBytes   912 Mbits/sec    5    254 KBytes
[  5]   6.00-7.00   sec   109 MBytes   912 Mbits/sec   25    282 KBytes
[  5]   7.00-8.00   sec   109 MBytes   912 Mbits/sec   11    326 KBytes
[  5]   8.00-9.00   sec   106 MBytes   892 Mbits/sec   13    292 KBytes
[  5]   9.00-10.00  sec   108 MBytes   902 Mbits/sec    9    319 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.06 GBytes   913 Mbits/sec  233             sender
[  5]   0.00-10.00  sec  1.06 GBytes   910 Mbits/sec                  receiver

iperf Done.
┌────────────────────────────────────────────┐
│ KVM01                                      │
│     ┌────────────┐        ┌────────────┐   │
│     │openbsd01   │        │openbsd02   │   │
│     │            │        │            │   │
│     │          vio1──────vio1          │   │
│     │            │        │            │   │
│     │            │        │            │   │
│     └────vio2────┘        └────vio2────┘   │
│           │                     │          │
│           └────────bridge───────┘          │
│                      │                     │
│                      │                     │
└─────────────────────eth0───────────────────┘
                  ┌────┴─────┐
                  │  switch  │ 1 Gbit/s
                  └────┬─────┘
┌─────────────────────eth0───────────────────┐
│                      │                     │
│                      │                     │
│           ┌────────bridge───────┐          │
│           │                     │          │
│     ┌────eth0────┐        ┌────eth0────┐   │
│     │            │        │            │   │
│     │lo0:        │        │lo0:        │   │
│     │172.16.2.11 │        │172.16.2.10 │   │
│     │            │        │            │   │
│     │iperf vm    │        │iperf vm    │   │
│     └────────────┘        └────────────┘   │
│ KVM02                                      │
└────────────────────────────────────────────┘

This was a quick test without any kernel tuning, so maybe I missed something. This level of performance was sufficient for my use case, though for many it may be a deal breaker.

Cloud and virtualization support

I run my OpenBSD boxes on KVM hosts and use cloud-init for initial provisioning. Cloud-init is not part of the base system, but, fortunately, there are prebuilt OpenBSD images with cloud-init preinstalled available for download.

The only issue I still have with my setup is that I can't get the QEMU guest agent to communicate with the virtualization host. Because of this, VMs can't be shut down gracefully, which can be quite inconvenient. There is a workaround for Proxmox setups, but I couldn't adapt it to pure KVM.

Running OpenBSD in the public cloud might also present some challenges, from what I've gathered. You will need to build your own images if you want to run OpenBSD on AWS, Azure, or GCP. However, some smaller hosting providers, such as Vultr and openbsd.amsterdam, offer native OpenBSD support. The latter even donates a small amount from each VM to the OpenBSD Foundation.

Conclusion

I hope this post will encourage more network engineers familiar with Linux to try OpenBSD. I believe it's a very strong candidate for the role of a network gateway or a highly available firewall. It brings diversity to the Linux-dominated open-source landscape and gives a different perspective on how things can be done.
