OpenBSD Power Management


OpenBSD's power management features are powerful and plentiful. My current setup floats the battery at 80% charged, to reduce battery wear during the work week, and it doesn't suspend when I close the lid as long as it's plugged in. On battery power, it adjusts the CPU speed under load to optimize battery life without sacrificing performance when I need it, and it will automatically suspend at 5% battery to save me from the system powering off unexpectedly. When I plan to head out, I can use a quick alias to allow the battery to charge all the way, which takes about 20 minutes from 80%. We'll go through how I have it set up.

Long-time readers of HiR will not be surprised I'm running OpenBSD as my primary general-purpose operating system on my ThinkPad X1 Carbon. Over on Instagram, people do a double-take. One of the surprises was the fact that in a NeoFetch screenshot, it was noted that my CPU was 400 MHz because I was on battery power without anything heavy running. Power management on OpenBSD also blew some minds.

Out of the box, power management is disabled on OpenBSD. It's one of the few things that do not "just work" by default. The vast majority of OpenBSD systems do not need power management features, but it's a must for laptops.

Understanding apmd

apmd is the Advanced Power Management daemon. If you just set apmd to start with no options, on a well-supported laptop like the 8th Generation Lenovo ThinkPad X1 Carbon I'm using, your laptop will probably suspend when you close the lid or use "zzz" on the command line. And it will wake up when you open the lid or mess with the keyboard. apmd will also enable automatic performance adjustment mode -- clocking down the CPU when there's not much load.  That's a pretty good start. There will likely be no warning when your battery is close to dying, and there won't be anything there to stop it from just turning off abruptly when it hits 0%. That's not optimal. Also, when you close the laptop lid and it's plugged in, you might want the system to remain active. You can do that by tweaking machdep.lidaction with sysctl, but there's a much better way.

Looking at the manual page for apmd, we have a lot of useful options.

We can start apmd in high-performance mode (-H) to get the most processing power out of the system, low-performance mode (-L) to extend battery life at the expense of CPU speed, or force automatic performance adjustment mode (-A) which happens to be the default.

The -a option (lowercase) will block incoming BIOS suspend requests, such as those coming from closing the lid, if the system is plugged in. You can still manually suspend through your window manager or the command line zzz utility.

The -z [percent] option will automatically suspend the system if it is not plugged in and the battery is at or below the threshold percentage.

Enable and configure apmd. I assume that you've configured doas. I've covered this on several pages, like my OpenBSD webserver article. I explicitly set automatic performance mode (-A), blocking suspend when plugged in (-a), and set it to automatically suspend at 5% battery (-z 5). Feel free to change this however you please.

doas rcctl enable apmd
doas rcctl set apmd flags -A -a -z 5

The apm command line utility

Simply running the apm utility will display the current battery and charge status:

Battery state: high, 79% remaining, 153 minutes life estimate
AC adapter state: not connected
Performance adjustment mode: auto (400 MHz)

There are a number of display flags that you can pass to apm to get specific details. For instance, apm -m will display the number of minutes of estimated battery life (or time to achieve a full charge). See the apm man page for all the details. This is useful if you are writing scripts that report power management status, such as tmux/powerline segments or custom status bar scripts.
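As an example, a minimal status-segment helper might look like this. The function itself is my own hypothetical sketch; on OpenBSD its arguments would come from apm -l (battery percent), apm -m (minutes remaining), and apm -a (AC status, 0 or 1):

```shell
# Hypothetical helper for a tmux/powerline or status bar segment.
# On OpenBSD the values would come from:
#   pct=$(apm -l)   # battery percent remaining
#   min=$(apm -m)   # minutes remaining (or minutes to full charge)
#   ac=$(apm -a)    # 0 = on battery, 1 = AC connected
battery_status() {
  pct=$1 min=$2 ac=$3
  if [ "$ac" -eq 1 ]; then
    printf 'AC %s%%\n' "$pct"
  else
    printf 'BAT %s%% (%smin)\n' "$pct" "$min"
  fi
}
```

Something like `battery_status "$(apm -l)" "$(apm -m)" "$(apm -a)"` could then feed your status line.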

You can also adjust the performance mode on-the-fly, using apm -H, apm -L or apm -A to enable high performance, low-performance or automatic performance modes respectively, without restarting apmd.

sysctl, power management and sensors

This will dump out everything from the hw.sensors tree in sysctl:
sysctl hw.sensors

From here, we can see fan speeds, temperatures, the number of battery charge cycles and even things like the battery's factory design capacity and last fully-charged capacity in Watt-hours. 

By dividing the last full capacity by the design capacity, you can see how far your battery has deteriorated below its rated capacity, which is, in a way, a measure of battery health. For example, my battery's design capacity is 51 Wh, but my last full charge was 39.76 Wh. 39.76 / 51 is about 0.78, so my "full" capacity is about 78% of what it was when new. That's not bad for a 4-year-old laptop on the original battery with about 500 discharge cycles.
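That calculation is easy to script. This is a hypothetical helper of my own; on my laptop the two inputs come from hw.sensors.acpibat0.watthour0 (last full capacity) and hw.sensors.acpibat0.watthour4 (design capacity), though the sensor names can differ from machine to machine:

```shell
# Hypothetical helper: battery health as a percentage of design capacity.
# On OpenBSD the two numbers might be pulled like so (sensor names vary):
#   last_full=$(sysctl -n hw.sensors.acpibat0.watthour0 | awk '{print $1}')
#   design=$(sysctl -n hw.sensors.acpibat0.watthour4 | awk '{print $1}')
battery_health() {
  # usage: battery_health <last_full_Wh> <design_Wh>
  awk -v full="$1" -v design="$2" 'BEGIN { printf "%.0f\n", full / design * 100 }'
}

battery_health 39.76 51   # prints 78
```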

Extending battery health with charging optimization

Many devices are now limiting the battery charge to about 80% when the system spends a lot of time plugged in. The MacBook my employer issued to me does this using some kind of magic algorithm that determines if it hasn't been running on battery power much lately. Leaving your battery slightly discharged like this actually extends the life of the battery substantially if you use it while plugged in most of the time.

OpenBSD can do the same thing, to an extent. The maximum charge level can be set with sysctl, so you can place the following line in /etc/sysctl.conf so that it's set immediately when booting up:

hw.battery.chargestop=80
Then, manually set it with sysctl, or reboot:

doas sysctl hw.battery.chargestop=80

If your battery is fully charged, it won't do anything, but the next time you run the battery down below 80% and plug it back in, it will stop charging the battery once it hits 80%. As far as I know, the battery will still charge to 100% if you turn the system off, though.

You can set or adjust the maximum charge level from the command line as well. For instance, if you know you're going to be on the go later today and want to actually charge the battery to 100%, running this command will take care of things:

doas sysctl hw.battery.chargestop=100
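The "quick alias" I mentioned at the top is nothing fancy. These names are my own invention; put something like this in ~/.profile and call them whatever you like:

```shell
# Hypothetical aliases for ~/.profile
alias battfull='doas sysctl hw.battery.chargestop=100'   # charge all the way before heading out
alias battsave='doas sysctl hw.battery.chargestop=80'    # back to floating at 80%
```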

There are also additional hw.battery.chargemode options and an hw.battery.chargestart variable for advanced use cases that I haven't needed. See the hw.battery section of the sysctl(2) manual page to read more about these settings.

Finally, I've been using XFCE lately, and the package "xfce4-battery" does a decent job, allowing me to place a battery widget in any of the XFCE panels. There may be additional widgets or plugins for your GUI of choice.


Running your own Wireguard VPN server and Travel Router

If you travel, or work from the road a lot, you probably have a good reason to set up a travel router and VPN. Travel routers let you create a private network for all of your personal devices. Paired with a VPN, you can obscure the nature of your activity from the local network, and evade IP address or geographical restrictions. 

The good use cases for “privacy” focused VPN services are vanishing. Improved encryption and protocols prevent many of the ways a casual attacker can spy on you with wifi. On top of that, many such providers have been caught selling user data to third parties and turning over information to authorities under subpoena, making them possibly worse than any attacker you’re sharing the hotel wifi with.

Running your own cloud VPN is easy and affordable. Once you know how to set it up, you can run it on most hosting providers anywhere in the world, or set it up at home so that you can virtually hop on your home network while you’re out and about. Actually installing Wireguard is the main part that’s different between operating systems. 

OpenBSD Server

It's probably no surprise that I run Wireguard on my OpenBSD Servers. OpenBSD has had full kernel support for Wireguard for years, so it's just a matter of installing the userland tools, and setting up the interface.

doas pkg_add wireguard-tools


Then create /etc/hostname.wg0 with the following contents so the interface is configured at boot:

inet NONE
!/usr/local/bin/wg setconf wg0 /etc/wireguard/wg0.conf

Amazon Linux

Amazon Linux is just one easy example I found of a Red Hat-based system. These steps should work similarly on others like Rocky or Alma. 

sudo wget -O /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo

sudo yum upgrade

sudo yum clean all

sudo yum install wireguard-tools wireguard-dkms iptables-services

Debian Linux

Debian is the root of many other distributions, like Ubuntu and RaspiOS, so it made sense to cover it here; these instructions will likely work on many of its derivatives as well.

sudo apt update

sudo apt install wireguard


Generating Public and Private Keys

Most of the travel routers I've seen don't have a way to generate Wireguard keys on the device if you're manually configuring it. These can be generated on your VPN server and imported. We're changing the umask here to ensure the key files are not world- or group-readable. We're going to be editing files as root, so just use sudo -i (Linux) or doas -s (OpenBSD).

sudo -i

umask 077 
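If you want to convince yourself the umask is doing its job, any file created from here on should come out mode 600 (owner read/write only):

```shell
# Quick sanity check: with umask 077, new files are owner-only.
umask 077
tmpdir=$(mktemp -d)
touch "$tmpdir/demo-private.key"
ls -l "$tmpdir/demo-private.key"   # permissions show as -rw-------
rm -r "$tmpdir"
```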

Create the client keys:

wg genkey | tee client-private.key | wg pubkey > client-public.key

And then server keys:
cd /etc/wireguard
wg genkey | tee private.key | wg pubkey > public.key

Figure out your main network interface:

ip a

In Amazon AWS EC2, the interface was enX0 but it may very well be eth0 or something ridiculous like enp37s8lmaowtf depending on your configuration. You'll need this interface name for your iptables rules.
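If you'd rather script it than eyeball the output, the interface name can be pulled from the default route. This parsing helper is my own sketch against iproute2-style output:

```shell
# Hypothetical helper: extract the interface carrying the default route
# from "ip route" output fed on stdin.
default_iface() {
  awk '/^default/ { for (i = 1; i < NF; i++) if ($i == "dev") { print $(i + 1); exit } }'
}

# usage: ip route | default_iface
```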

Using this example skeleton configuration as a template, paste it into /etc/wireguard/wg0.conf, edit the interface name, and fill in the appropriate public and private keys. You can pick any port number you wish; there is no standardized port for Wireguard.


[Interface]
PrivateKey = [the contents of /etc/wireguard/private.key]
ListenPort = 57609
Address = [the server's VPN tunnel address, e.g. 10.10.10.1/24]
PostUp = iptables -t nat -I POSTROUTING -o [interface name] -j MASQUERADE
PostUp = ip6tables -t nat -I POSTROUTING -o [interface name] -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o [interface name] -j MASQUERADE
PreDown = ip6tables -t nat -D POSTROUTING -o [interface name] -j MASQUERADE

[Peer]
PublicKey = [the contents of client-public.key]
AllowedIPs = [the client's VPN tunnel address, e.g. 10.10.10.2/32]

Final Setup and starting the server


For OpenBSD, you won't need the Address or iptables entries in wg0.conf above. You'll need to tell PF to NAT traffic for wg0, though. Again, you'll need the primary interface name, which you can find with ifconfig. Place the following lines into /etc/pf.conf after the "pass" rules and before any block rules at the end of the file, then reload pf.

pass in on wg0
pass in inet proto udp from any to any port 57609
pass out on egress inet from (wg0:network) nat-to ([interface name]:0)

doas pfctl -f /etc/pf.conf

Enable IP forwarding by adding these lines to /etc/sysctl.conf:

net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1
To start Wireguard, run the following commands, or reboot:

doas sysctl net.inet.ip.forwarding=1

doas sysctl net.inet6.ip6.forwarding=1

doas sh /etc/netstart wg0


For Amazon Linux or Debian, it's similar. Add these lines to /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

Reload sysctl:

sudo sysctl -p

Enable and start the Wireguard service with systemctl

sudo systemctl enable wg-quick@wg0.service

sudo systemctl start wg-quick@wg0.service

Travel Router Configuration

I've been using GL.iNet routers with Wireguard for about 3 years. The example screenshots are from my GL-SFT1200 "Opal" travel router. Manually configure the Wireguard client and set these values:


IP Address: the "peer" address from the Wireguard server config (the address you put in AllowedIPs)

Private key: Contents of client-private.key file we generated earlier


Public Key: Contents of /etc/wireguard/public.key from the wireguard server

Endpoint host: the public IP address and port of your Wireguard server

Allowed IPs: 0.0.0.0/0 (all traffic is routed through the Wireguard server)
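If you set up more than one router, it may be easier to assemble the client configuration as a file and import it. This is a sketch of my own; every value is a placeholder you must replace with your own keys, addresses, and endpoint:

```shell
#!/bin/sh
# Sketch: assemble a client-side wg0.conf from the values listed above.
# All of these are placeholders -- substitute your own values.
CLIENT_PRIVATE_KEY="[contents of client-private.key]"
SERVER_PUBLIC_KEY="[contents of /etc/wireguard/public.key]"
CLIENT_ADDRESS="[your peer address]"
ENDPOINT="[server IP]:[port]"

cat > client-wg0.conf <<EOF
[Interface]
PrivateKey = $CLIENT_PRIVATE_KEY
Address = $CLIENT_ADDRESS

[Peer]
PublicKey = $SERVER_PUBLIC_KEY
Endpoint = $ENDPOINT
AllowedIPs = 0.0.0.0/0
EOF
```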



Once you have configured the Wireguard client, you can connect to the VPN. Browse to an IP address checking site like whatismyip.com to verify you're coming from the VPN server's IP address.

Many travel routers have a mode switch on the side that allows you to easily change how the router works. I set up my Opal router so that the mode switch enables or disables Wireguard on the fly so I have more flexibility without worrying about having to log into the admin control panel and change settings. 


November 2023 SecKC Presentation: Mobile SDR

Thanks to all who showed up and asked questions!


Introducing NEMO for the M5Stick C Plus

I've been working on this project for a couple of weeks, and it's pretty close to finished. I've been trying to build some more skills in the embedded systems, microcontroller and Internet of Things realm, and when I decided it was time to expand my experience to ESP32, I wanted a dev kit with a little bit of everything built in. I already have breadboards, displays, servos, sensors, LEDs and accessories galore. I just wanted something cute that'd keep my interest for a while. Enter the M5Stack M5Stick C Plus. Powered by an ESP32, featuring an AXP192 power management unit, accelerometer, IR and red LEDs, a 100mAh battery, microphone, speaker, display, a few buttons and plenty of exposed GPIO pins, it seemed like a good place to start.

My usual method of learning involves sketching out a rough plan for demonstrating mastery of core concepts, so my first few projects were about getting the ESP-IDF and arduino environments working with simple programs. I also ported CircuitPython to it for some of my early projects. I focused on the WiFi stack and designing user interfaces at first, then using UART, SPI and I2C via the GPIO pins.

With most of the tech community excited about the Flipper Zero, I started thinking about what sorts of high-tech pranks one could get away with on a platform like this. The end result is NEMO, named after the titular character in Finding Nemo, in contrast to some other high-tech toy named after a fictional dolphin.

The Stick C Plus has no IR sensor, but it does have a transmitter. Infrared replay attacks might work if you plugged an IR receiver into the GPIO, but I'm not worried about that. I settled for an implementation of TV-B-Gone, relying on previous work by Ken Shirriff and a local hacker, MrARM. I had previously messed with similar projects in both CircuitPython, and at the source-code level, way back in 2008 with the DefCon 16 badge, which also featured an infrared TV killer mode.

Right about the time I was starting to work on this, DefCon 31 was wrapping up, and a ton of folks were commenting on the bizarre behavior of their iOS devices at the conference, seemingly always displaying pop-ups offering to connect AirPods or other accessories. This became known as the "AppleJuice" attack, and it relies on Bluetooth Low Energy beacon advertisements and iOS's user experience that tries to make device pairing easier. I found a very bare-bones implementation for ESP32 that was somewhat broken. I fixed it and gave it a decent two-button user interface as well.

I rounded out the pranks with WiFi Spamming, using a list of funny WiFi SSIDs, the now-popular "RickRoll" SSIDs and a mode that spams hundreds of randomly-named SSIDs per minute.

It defaults to a "watch" mode with a 24-hour clock backed by the on-board real-time-clock. There's a few kilobytes of non-volatile EEPROM storage on board, of which I'm using a few bytes to keep settings like left/right hand rotation, brightness, auto-dimming timer and TV-B-Gone region settings persistent through deep sleep or power off mode. All in all, it's a few existing projects just kind of glued together in a novel way that's easy to use. Those who've known me for a while would say that's on-brand. 

A few people have asked me if it's for sale. I have no plans to sell anything, such as M5Stick units pre-flashed with NEMO. This is open-source software I put together for fun, and anyone can use it and extend it. You can buy the device and learn how to load my code on it, but I'd be more excited to hear about people being inspired to build their own cool projects on it.

At $20-$30 depending on the site and accessories you get with the M5Stick C Plus, it has a lot of capabilities. Here's an Amazon Affiliate Link to buy a version with a watch strap, lego mounting and wall-mounting options. The project source code and pre-compiled binaries are up on the m5stick-NEMO GitHub repository, and I am keeping the project up to date in the M5Burner app. You can see a quick walk-through reel on my Instagram as well.



Running a Kubernetes Cluster with OpenBSD VMM

Kubernetes relies on Linux containers and cgroups, so you can't run Kubernetes or even docker containers directly on OpenBSD, but Alpine Linux runs great under OpenBSD's VMM hypervisor. Alpine shares a lot of the same ideologies as OpenBSD, and it has become a favorite in the Linux container ecosystem. 


What's the point? 

Kubernetes is a system for deploying containerized applications at scale, in a clustered environment. This lets developers create microservices that run in a mesh configuration, or large, monolithic apps that run in Docker. These docker containers can then be deployed to a kubernetes cluster for testing and production use. In the modern enterprise world, it's becoming far less common to build and provision web servers and run apps on them. More often than not, the infrastructure is virtual, software-defined, and deployed in containers.

It would be far faster and more efficient to follow my OpenBSD HTTPD guide and then install WordPress directly, so this is less about getting WordPress running on OpenBSD than it is about getting one's feet wet in clustered workloads and applying modern DevOps principles in an OpenBSD-centric home lab. Once this is up and running, you can use kubectl, kustomize, or helm to deploy all kinds of things to the cluster with relative ease. I'm gonna try OWASP Juice Shop next.

Caveat Emptor

This is not a good project for someone who’s never used OpenBSD before. I make some assumptions that you’re generally familiar with OpenBSD and performing routine system administration and housekeeping tasks.

This is not a supported configuration. I’ve been running kubernetes on OpenBSD’s VMM hypervisor in my home lab for about 7 months as of the time of writing. I wanted to play with Kubernetes at home, and at the time, my Dell PowerEdge R410 running OpenBSD was the only thing I had available, so I decided to try this out. Occasionally, the network or nodes lag a little, but it’s been pretty stable for the most part.

There are a few limitations to this setup. VMM currently has no way to allocate more than one CPU core to a VM. This causes two problems for Kubernetes:

First is that Kubernetes’ master node minimum requirement is for 2 cores. We can override it, and it seems to run with just one core, but it’s not optimal.

Second is that some workloads may request more than 1 CPU worth of computing resources for a pod. The master node will be unable to find a node that can support that pod, because these worker nodes will only have one core, or 1000 milli-cores, of CPU capacity each. If you see errors like “Unschedulable” and “Insufficient cpu,” this is one potential reason.
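Kubernetes CPU quantities come in a couple of spellings ("500m", "1", "0.5"), and the scheduler compares them all in millicores. A hypothetical converter makes the arithmetic concrete:

```shell
# Hypothetical helper: convert a Kubernetes CPU quantity to millicores,
# the unit the scheduler compares against a node's 1000m-per-core capacity.
cpu_to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;                                      # already millicores, e.g. "500m"
    *)  awk -v v="$1" 'BEGIN { printf "%d\n", v * 1000 }' ;;  # whole or fractional cores
  esac
}
```

On these single-core VMs, any pod requesting more than 1000 millicores will sit Unschedulable.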

VMM Vs. The World

I’ve been running production websites on OpenBSD VMM guests for years without any problems, but I won’t pretend that VMM’s relative immaturity isn’t a deal-breaker for clustered workloads like this. This project was my way of better understanding how kubernetes works under the hood, and an excuse to play UNIX nerd.

Since late last year, I have also been running a similar K8S cluster based on Alpine Linux VMs, but hosting it on slightly better hardware running the ProxMox VE hypervisor. I’m still using the NFS server on my R410, though. ProxMox allows me to allocate 4 cores to each node, so I can run heavier applications. This setup is production-ready and I could justify operating a business on a ProxMox-based cluster like this if I were on a tight budget.

There are commercial solutions that take a lot of the work out of on-premise kubernetes, and affordable options for managed cluster workloads (in the clouds or whatever). Do what makes sense.



OpenBSD Host Considerations

Minimum System Requirements

  • 8GB RAM

  • 4-core CPU that has VMM/EPT support (most i5 or i7 CPUs should work)

  • 50GB of storage

For this example, we’ll run 3 Alpine Linux VMs – one master node and two worker nodes. Each node will get 2GB of RAM and be set up for 10GB of storage, but qcow2 images only allocate storage when it’s used.

Installing OpenBSD

I make some assumptions about your OpenBSD environment:

  • A default OpenBSD install with all installsets

  • /etc/doas.conf is configured so you can use it from a user account you created during the installation. The below is a minimalist doas.conf:

    • permit persist :wheel

  • Enough storage for 3 Alpine Linux virtual machines. 

    • I use a 100GB filesystem mounted at /vmm for this

    • The QCOW2 files in my example lab as of the time of writing are only using about 7GB of storage

  • Enough storage for the NFS server to support persistent storage for some applications. 

    • I use a 100GB filesystem mounted at /k8s

    • A full WordPress/MySQL install in k8s is using less than 1GB

  • You can change these example directories around as you need to, but you’ll see these paths in the documentation. 

Networking the OpenBSD host and the VMM Guests

We are going to bridge our guest VMs to the LAN. This is easier if you have multiple ethernet interfaces on your OpenBSD host, but we can work around it if you only have one.

Two ethernet interfaces

On a system with more than one ethernet interface, use a secondary interface to bridge the VMs to the network. In this example, I allowed the installer to configure the ethernet interface bnx0 with dhcp, but I also connected the second ethernet interface bnx1 to the network. Set the second interface to come up with no IP address by creating /etc/hostname.bnx1 containing a single line:

up

Then bring it up:

doas sh /etc/netstart bnx1

– or – Single ethernet interface

If we use a DHCP client on the OpenBSD host, it will intercept all DHCP traffic on the interface and keep the guest VMs from obtaining DHCP leases. 

Set up a DHCP reservation for the MAC address of your OpenBSD host. This is only so your router doesn’t issue your IP address to some other system. Configure the OpenBSD host to use the reserved IP address in a static configuration.

An example where my ethernet interface is em0. I have marked the static IP address of this interface as reserved on my home router.



! route add default

Your static IP details will likely differ from the above. Set appropriately. You may also need to manually configure /etc/resolv.conf

VM Bridge

Add the interface that you'll be using to bridge0, as in the example below. For my lab machine, I have two interfaces and I'm using bnx1 to bridge my VMs. Put the following in /etc/hostname.bridge0:

add bnx1

Make sure you bring up the bridge interface

doas sh /etc/netstart bridge0

Set up NFS

Kubernetes requires some kind of network-attached storage that all of the containers can access regardless of which worker node they get allocated to. NFS is a popular option, and we’ll eventually install an automated NFS Client storage provisioner on the cluster. We might as well run this on the VMM server itself, since OpenBSD comes with an NFS server in the base install. 

Export the /k8s filesystem (with enough storage, as mentioned above) via /etc/exports. You must ensure the network and netmask are appropriate for your LAN.


/k8s -alldirs -network=192.168.1 -mask= -mapall=root

Each Persistent Volume will invoke a new NFS thread, so we’ll max it out with 20 server threads for NFS and enable both TCP and UDP modes. mountd, portmap, statd and lockd are all part of the RPC and NFS system. Let’s enable them and start the services.

doas rcctl enable mountd

doas rcctl enable statd

doas rcctl enable lockd

doas rcctl enable portmap

doas rcctl enable nfsd

doas rcctl set nfsd flags -t -u -n 20

doas rcctl start mountd statd lockd portmap nfsd

Create Template VM

Directory setup

As mentioned above, this should have plenty of space, at least 30GB. We make it group writable so our user level account (which should be in the wheel group) can use it easily. 

doas mkdir /vmm

doas chmod 770 /vmm

cd /vmm

Configure VMD’s virtual switch

Create a very bare-bones /etc/vm.conf to set up our bridge network. We'll add to this later.


switch "bridged" {
     interface bridge0
}

Enable and start vmd

doas rcctl enable vmd

doas rcctl start vmd

Download alpine-virt ISO image

ftp https://dl-cdn.alpinelinux.org/alpine/v3.17/releases/x86_64/alpine-virt-3.17.2-x86_64.iso

Create template VM disk image

vmctl create -s 10G alpine-template.qcow2

Boot the template vm

This will start the VM, boot from the iso, and attach your template VM disk image.

doas vmctl start -c -d alpine-virt-3.17.2-x86_64.iso \

     -d alpine-template.qcow2 -m 2G -n bridged alpine-template

Install Alpine Linux

Kubernetes works best when there’s no swap, so we’ll disable it with a variable we pass to setup-alpine. Mostly, just follow the prompts.

  • Log in as root and run the below command 

  • SWAP_SIZE=0 setup-alpine

  • Specify a hostname (I usually use ‘kube’ but we’ll change it later)

  • Configure network (DHCP is fine)

  • Busybox for cron

  • Create a user-level account

  • Use Openssh server

  • Use the vdb disk as “sys” volume

  • halt

Once the installer finishes and the system halts, hit enter. vmctl will exit back to the OpenBSD shell with an [EOT].

Reboot into the template VM

Same command as above but without the installer ISO attached

doas vmctl start -c \

      -d alpine-template.qcow2 -m 2G -n bridged alpine-template

Set up the basic software

Log in and immediately elevate to root

su -

Edit /etc/apk/repositories. The default Alpine Linux install only includes vi. If you must, you can use “apk add nano” first. 

We only want these 3 repos enabled. You can leave the # in front of the others in the file.





Update the package index and install some software we need:

apk update

apk add doas kubernetes kubeadm docker cni-plugins kubelet kubectl nfs-utils cni-plugin-flannel flannel flannel-contrib-cni uuidgen git

We’ll use doas from this point forward. The default configuration of doas should be fine for our purposes (allows wheel-group users to do anything, caches the password for the session). Exit your root shell.


Test doas – this will also cache the password so the rest of the commands run all at once if you just paste them in. 

doas id

Paste the below into your terminal if you want. Or type it. I’m not the boss of you. 

doas rm /etc/ssh/*key*

printf "#!/bin/sh\nmount --make-rshared /\n" |\

doas tee /etc/local.d/sharemetrics

doas chmod +x /etc/local.d/sharemetrics

doas rc-update add local

doas rc-update add docker default

doas rc-update add containerd default

doas rc-update add kubelet default

doas rc-update add rpc.statd default

doas rc-update add ntpd default

echo "net.bridge.bridge-nf-call-iptables=1" | doas tee -a /etc/sysctl.conf

doas apk add 'kubelet=~1.26'

doas apk add 'kubeadm=~1.26'

doas apk add 'kubectl=~1.26'

The last 3 commands ensure we don’t let Kubernetes components get upgraded past the minor version we specify. Replace 1.26 with the current minor version (1.22.3-r1 would be 1.22). As of writing, 1.26 is the latest. This keeps the cluster nodes from inadvertently updating to a higher version with incompatibilities.
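The version mapping described above is just the first two dot-separated fields, if you ever want to script the pin:

```shell
# Strip a full Alpine package version like "1.22.3-r1" down to the
# minor version "1.22" suitable for an apk "=~" pin.
minor_version() {
  echo "$1" | cut -d. -f1,2
}

minor_version 1.22.3-r1   # prints 1.22
```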

Shut down the system. This has most of what we need for all the kubernetes nodes.

doas halt

Once vmctl exits and you are back at the OpenBSD shell, make this template disk image read-only. This will be our base image for the 3 Alpine Linux VMs, and if something changes in this file, it’ll break the derivative images.

doas chmod 400 alpine-template.qcow2

Clone Template VM

Create the derivative images for the master node and two workers.

doas vmctl create -b alpine-template.qcow2 kube-master.qcow2

doas vmctl create -b alpine-template.qcow2 kube-w1.qcow2

doas vmctl create -b alpine-template.qcow2 kube-w2.qcow2

Add them to /etc/vm.conf. We specify MAC addresses inside the interface clause because if we don’t, they will get a different random MAC address (and probably IP address) at runtime.  My /etc/vm.conf now looks like this:


switch "bridged" {
  interface bridge0
}

vm "kube-master" {
  memory 2048M
  disk "/vmm/kube-master.qcow2"
  interface {
    switch "bridged"
    lladdr fe:e1:ba:d9:fb:c0
  }
}

vm "kube-w1" {
  memory 2048M
  disk "/vmm/kube-w1.qcow2"
  interface {
    switch "bridged"
    lladdr fe:e1:ba:d9:fb:c1
  }
}

vm "kube-w2" {
  memory 2048M
  disk "/vmm/kube-w2.qcow2"
  interface {
    switch "bridged"
    lladdr fe:e1:ba:d9:fb:c2
  }
}

Before we continue, it’s a good idea to set up a DHCP reservation for the MAC address you use for the master node. If the master node IP changes, the workers can’t find it, and fixing the cluster is not a very straightforward task. This isn’t too important if you’re just playing around with kubernetes for fun, though.

If you have more than 8GB of RAM, I’d allocate about 25% of your RAM to each VM. 16GB of RAM? 4096M per node. 32GB of RAM? Give each node 8192M. 

If you have a system with more than 4 cores, and more than 32GB of RAM, consider adding more worker nodes and tuning the RAM on them. 2GB per node is an absolute minimum and will lag a lot.
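The 25% rule of thumb works out like this (a trivial helper: megabytes per node, given total RAM in GB):

```shell
# Allocate roughly 25% of total system RAM to each of 4 VMs,
# expressed in megabytes for vm.conf's "memory" setting.
mem_per_node() {
  awk -v gb="$1" 'BEGIN { printf "%d\n", gb * 1024 / 4 }'
}

mem_per_node 16   # prints 4096
mem_per_node 32   # prints 8192
```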

Reload the configuration and start up the master node with a console

doas vmctl reload

doas vmctl start -c kube-master

Bootstrap the master node

Login to the console or SSH into the master node.

Set the host name and machine-id

These are important for the k8s networking components. 

echo "master" | doas tee /etc/hostname

doas hostname -F /etc/hostname

uuidgen | doas tee /etc/machine-id

Install k8s management tools

doas apk add k9s helm curl

Initialize Kubernetes

Here, we have to work-around the requirement for two CPU cores.

doas kubeadm init --ignore-preflight-errors=NumCPU \

     --pod-network-cidr=10.244.0.0/16 --node-name=master

(10.244.0.0/16 is the default pod network in the stock flannel manifest we apply below; if you change it, you'll need to change the flannel configuration to match.)

This could take some time to pull images and generate crypto keys. It will eventually spit out a bunch of stuff including the below (slightly modded). 

Execute these with your user-level account on master. 

    mkdir -p $HOME/.kube

    doas cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

    doas chown $(id -u):$(id -g) $HOME/.kube/config

The initialization will also show the command to join nodes to the cluster...

    kubeadm join 192.168.1.xxx:6443 --token foobar.foobarfoobarfoob \

    --discovery-token-ca-cert-hash sha256: \


Save this kubeadm join command. You'll need it for all your workers. I create a file on master and save it just so it’s easy to find.

Install Flannel CNI plugin.

This is technically the first package you manually deploy to your cluster. It creates a daemonset of containers (pods) that will self-replicate across all nodes as they join the cluster.

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Worker Nodes

Boot up the first worker node. On your OpenBSD VMM host, run:

doas vmctl start -c kube-w1

Login to the console or SSH into this worker node.

Set the host name and machine-id

Similar to how we set up the master node, each worker needs a unique name and machine-id.

echo "kube-w1" | doas tee /etc/hostname

doas hostname -F /etc/hostname

uuidgen | doas tee /etc/machine-id

Run the command to join the cluster as provided when initializing the master node. Don’t forget to use doas.

doas kubeadm join 192.168.1.xxx:6443 --token foobar.foobarfoobarfoob \

     --discovery-token-ca-cert-hash sha256: \


Kubernetes tokens are only valid for 24 hours. If you want to join a new worker to the cluster, you can generate and print a new join token at any time. You must execute this on the MASTER node, then run the output of this command on your new worker node.

kubeadm token create --print-join-command

That’s it. Log off. You're done with this worker node.

You can hit enter and then use the “tilde period” escape sequence ~. to break out of the vmctl console while the VM remains running. If connected to your VM server over SSH, note that ~. is also the sequence to terminate your SSH session, so use ~~. instead.

Rinse, Repeat

Repeat the above process for kube-w2 (make sure you change the hostname in the commands above) before you continue. You can repeat this process to make as many worker nodes as you have the resources for. You’ll need to make new drive images and VMs in /etc/vm.conf to support them.

Behold, your Kubernetes cluster

Login to the master node again. 

Basic kubectl commands

kubectl get nodes

kubectl get pods --all-namespaces

You should see stuff running. 

K9S Navigation

K9S is a curses-based Kubernetes UI. Just run “k9s”.

If k9s exits with an error message, it may be due to your termcap being unsupported. Try the below command, then run k9s again. If that works, consider adding this to the .profile file so it runs every time you log in.

export TERM=xterm-256color
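If that fixes it, you can make the change permanent with a guarded append to .profile (a small sketch; the grep keeps it from adding a duplicate line on repeat runs):

```shell
# Append the TERM override to ~/.profile once, so every login gets it.
# grep -qs exits nonzero if the line (or the file) is missing.
grep -qs 'export TERM=xterm-256color' "$HOME/.profile" || \
    echo 'export TERM=xterm-256color' >> "$HOME/.profile"
```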

":" will let you jump to the various pages. Some pages worth knowing:

  • node - all connected nodes

  • deploy - deployments, which configure pods and replicaSets

  • daemonset - groups of pods that should run on many nodes

  • statefulset - stateful applications

  • pods - individual containers

  • namespace - namespaces in the cluster

On each page, note the hotkey menu at the top. You can also use "/" to search the current contents of the page. You can edit a deployment's manifest on the fly, for example, or delete a pod to evict it (this usually results in a restart, which may or may not land on the same node). Also pay attention to the number hotkeys, which let you filter by namespace. Often, you'll want 0 for ALL namespaces.

Troubleshooting with K9S

Cursor over an asset (like a pod, daemonset, or deployment) and use “d” to describe it. You can usually see information about what’s going on. This is useful for figuring out why a pod is in a CrashLoopBackOff state.

Use “l” (lowercase L) to fetch logs. You may need to use the hotkeys to view the head or tail of the logs, adjust line wrapping, etc.

Use “e” to edit a manifest on-the-fly. You can often “hack” quick fixes in if something is typoed or you want to scale up a deployment manually.

Use “s” on a pod to try to get a shell on the container. Some containers do not support shells. Containers that are crashing will not remain up long enough to do anything meaningful in a shell.

Provision NFS Storage

SSH back into your MASTER node. 

When an application requires storage, it makes a “Persistent Volume Claim.” These will lurk around forever and stop the application from deploying until a persistent volume is created that meets the criteria for the claim. In a managed cluster, this happens automatically, so we’re going to deploy our own magic process to create these volumes on the fly, using storage we provide via NFS.
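For illustration, a claim looks roughly like this. The name and size here are hypothetical; the point is that once the provisioner is running, any claim referencing the nfs-client storage class gets a volume carved out of the NFS share automatically:

```yaml
# Hypothetical example claim - the name and size are made up
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```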

For this, we will use nfs-subdir-external-provisioner, and set up Kustomize to tweak it for our environment. Kustomize can modify and patch stock manifests so you don’t have to edit them directly – and as you’ll see here, sometimes you don’t even need to download them, you can just reference the repository. 

Make a directory

Generally speaking, we work in a directory for a given task. We’ll make a new directory for our storage provisioning.

mkdir storage; cd storage

Create configuration files

Create the below files inside the storage directory with the contents as shown. YAML is extremely picky, and your browser might not copy/paste these correctly. You can snag all of the YAML files I use in this example from my GitHub: n0xa/k8s-playground


namespace.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: nfs-provisioner

Be sure to modify the NFS server address and export path values in the patch file to match your NFS server’s details.


patch_nfs_details.yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: nfs-client-provisioner
      name: nfs-client-provisioner
    spec:
      template:
        spec:
          containers:
            - name: nfs-client-provisioner
              env:
                - name: NFS_SERVER
                  value: 192.168.1.xxx
                - name: NFS_PATH
                  value: /path/to/nfs/share
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.1.xxx
                path: /path/to/nfs/share


kustomization.yaml:

    namespace: nfs-provisioner
    resources:
      - github.com/kubernetes-sigs/nfs-subdir-external-provisioner//deploy
      - namespace.yaml
    patchesStrategicMerge:
      - patch_nfs_details.yaml

Deploy with Kubectl

This will use the kustomization.yaml to patch the manifest files, and apply the resulting assets to your cluster. 

kubectl apply -k ./

Deploy MetalLB Load Balancer

Now we need a load balancer to expose services on an external set of IP addresses. We will use MetalLB for this. We are going to arbitrarily use some LAN IP addresses here. In my home lab, the range I chose is outside the normal DHCP scope. You may wish to set aside a range of addresses that aren’t managed by DHCP at all.

Make a directory

mkdir ~/metallb; cd ~/metallb

Install MetalLB with kubectl

kubectl apply -f  https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

Configure Load Balancer IP Addresses

Create a configuration YAML for the IP addresses you’re reserving for the load balancer. You can specify IP address ranges, CIDR blocks, or single IPs, and you can add as many lines as you wish. You can see I defined two IP address pools, one named “production” and the other (which is technically one IP address) named “wordpress”.


address.yaml (substitute the addresses you reserved):

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: production
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.1.xxx-192.168.1.xxx
    ---
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: wordpress
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.1.xxx/32
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: l2-advert
      namespace: metallb-system

Apply the configuration

Note that the API extensions this manifest references won’t be available until after the MetalLB resources are all up and running.

kubectl apply -f address.yaml

By default, any new service that gets deployed will get an IP address from the “production” pool. If we add the proper annotations to any new service that gets deployed, we can force it to use the wordpress IP address we specified. 
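As a sketch, that annotation sits in a Service manifest like this (the service name and ports here are hypothetical; the annotation key is MetalLB's standard one):

```yaml
# Hypothetical service pinned to the "wordpress" address pool
apiVersion: v1
kind: Service
metadata:
  name: example-web
  annotations:
    metallb.universe.tf/address-pool: wordpress
spec:
  type: LoadBalancer
  selector:
    app: example-web
  ports:
    - port: 80
      targetPort: 8080
```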

Deploy WordPress

Let’s deploy a simple WordPress blog using Helm.

Add the Bitnami Helm Repository

helm repo add bitnami https://charts.bitnami.com/bitnami

Configure Wordpress

Here, I’ve put together a YAML file with just enough details to get it up and running. We’ll give it a title, a username, and details about our storage and load balancer service annotations. You should change any of these details as you need.


wp-values.yaml:

    wordpressUsername: myuser
    wordpressEmail: ax0n@example.com
    wordpressFirstName: Noah
    wordpressLastName: Axon
    wordpressBlogName: "Ax0n's Blog"
    global:
      storageClass: "nfs-client"
    service:
      annotations:
        metallb.universe.tf/address-pool: wordpress

Install Wordpress with Helm

We’ll make a new “wordpress” namespace in k8s for this installation.

helm install wordpress -f wp-values.yaml --create-namespace \

    --namespace wordpress bitnami/wordpress

This will spit out a whole bunch of text explaining how to find the randomly-generated password, the service IP address and more. Of course, we know it should pick up the IP address we defined for Wordpress in the MetalLB configuration. Grab the password via the commands provided.

Helm will create a whole bunch of resources, including a statefulset for MariaDB, a deployment for the Wordpress webserver, volumes for the database and wordpress web application itself, a secret to hold the generated password, and a number of other resources. This can take several minutes to complete. Feel free to open k9s again and watch the logs as MariaDB initializes, Wordpress waits to connect, and the entire provisioning process unfolds.

Once it’s complete, browse to the IP address you set up for Wordpress.


Use the admin URL and log in.


I did run into a well-documented issue with the installation: the health checks (readiness and liveness probes) caused the WordPress container to restart before it finished setting everything up. I had to manually create a “themes” directory inside my NFS server under the directory provisioned for WordPress, then manually download and set a theme from the admin page. This seems to happen on lower-performance clusters sometimes.
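If you hit the same thing, one workaround to try is relaxing the probe timing in wp-values.yaml before installing. This is a sketch assuming the Bitnami chart's standard probe settings; check the exact keys against the chart's values.yaml for your chart version, and treat the delays as guesses to tune:

```yaml
# Give WordPress more time to finish its first-run setup before
# the probes can kill the container (assumed keys, guessed values)
livenessProbe:
  initialDelaySeconds: 600
readinessProbe:
  initialDelaySeconds: 120
```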