2017-04-29

OpenBSD vmm Hypervisor Part 2: Installation and Networking

* Updated November 1, 2018. Thanks to some commenters below!
This is going to be a (likely long-running, infrequently-appended) series of posts as I poke around in vmm.  A few months ago, I demonstrated some basic use of the vmm hypervisor as it existed in OpenBSD 6.0-CURRENT around late October, 2016. We'll call that video Part 1.

As good as the OpenBSD documentation is (and vmm/vmd/vmctl are no exception), I had to do a lot of fumbling around and asking on the misc@ mailing list to really get to the point where I understand this stuff as well as I do. These are my notes so far.

Quite a bit of development was done on vmm before 6.1-RELEASE, and it's worth noting that some new features made their way in. Work continues, of course, and I can only imagine the hypervisor technology will mature plenty for the next release. As it stands, this is the first release of OpenBSD with a native hypervisor shipped in the base install, and that's exciting news in and of itself. In this article, we assume a few things:

  1. You are using OpenBSD 6.4-RELEASE or -STABLE  (or newer)
  2. You're running the amd64 architecture version of OpenBSD
  3. The system has an Intel CPU with a microarchitecture from the Nehalem family, or newer. (e.g. Westmere, Sandy Bridge, Ivy Bridge). Basically, if you've got an i5 or i7, you should be good to go.
  4. Your user-level account can perform actions as root via doas. See the man page for doas.conf(5) for details.
 Notes on formatting:
  1. Lines in blue are text inside configuration files.
  2. Lines in green are shell commands to be executed on the command line.

Basic network setup

To get our virtual machines onto the network, we have to spend some time setting up a virtual ethernet interface. We'll run a DHCP server on that, and it'll be the default route for our virtual machines. We'll keep all the VMs on a private network segment (via a bridge interface), and use NAT to allow them to reach the outside network. There is a way to directly bridge VMs to the network in some situations, but I won't be covering that today. (Hint: If you bridge to an interface with dhclient running in the host OS, no DHCP packets will reach the VMs, so it's best to static-assign your VM host's LAN port, or use a different interface entirely.)

Enable IP forwarding in sysctl. Add the following to /etc/sysctl.conf:

net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1

You can reboot to enable these, or run these commands manually:

doas sysctl net.inet.ip.forwarding=1
doas sysctl net.inet6.ip6.forwarding=1


We're going to be using "vether0" as the VM interface, and in my current configuration, "athn0" as the external interface. The below block of pf configuration was taken partially from /etc/examples/pf.conf and the PF NAT example in the OpenBSD FAQ. Change ext_if as needed, and save this as /etc/pf.conf:

ext_if="athn0"
vmd_if="vether0"

set skip on lo

block return
pass

match out on $ext_if inet from $vmd_if:network to any nat-to ($ext_if)

Next, you'll need to specify an IP and subnet for vether0. I am going with 10.13.37.1 (and a 255.255.255.0 netmask for a /24 network) in my example. Place something like this in /etc/hostname.vether0:

inet 10.13.37.1 255.255.255.0
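As an aside, hostname.if(5) lines are handed to ifconfig, which also accepts CIDR notation, so this one-liner should be equivalent (a minimal sketch of the same address and netmask as above):

```
inet 10.13.37.1/24
```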

We need to create a bridge interface for the guest VMs to attach to, and bridge vether0 to it. Place the following in /etc/hostname.bridge0:

add vether0

Bring vether0 and bridge0 online by running the following commands:

doas sh /etc/netstart vether0
doas sh /etc/netstart bridge0

Reload the pf configuration (h/t to voutilad for the heads up in the comments about moving this step down a few spots):

doas pfctl -f /etc/pf.conf

Create a basic DHCP server configuration file that matches the vether0 configuration. I've specified 10.13.37.32-127 as the default DHCP pool range, and set 10.13.37.1 as the default route. Save this in /etc/dhcpd.conf:

option  domain-name "vmm.openbsd.local";
option  domain-name-servers 8.8.8.8, 8.8.4.4;

subnet 10.13.37.0 netmask 255.255.255.0 {
        option routers 10.13.37.1;

        range 10.13.37.32 10.13.37.127;

}

Configure a switch for vmm, so the VMs have connectivity. Put the following in /etc/vm.conf:

switch "local" {
        interface bridge0
}


Enable and start the DHCP server. We also need to set the flags on dhcpd so that it only listens on vether0. Otherwise, you'd end up running a rogue DHCP server on your primary network, which could keep others from getting on the network properly. Your network administrators and fellow users will not appreciate that.

doas rcctl enable dhcpd
doas rcctl set dhcpd flags vether0
doas rcctl start dhcpd
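Under the hood, rcctl stores these settings in /etc/rc.conf.local. If you'd rather edit that file directly, the equivalent entry should look something like this (assuming no other service overrides are already present):

```
dhcpd_flags=vether0
```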

Enable vmd, then start it as well. For good measure, re-run fw_update to make sure you have the proper vmm-bios package.

doas rcctl enable vmd
doas rcctl start vmd
doas fw_update

If you check ifconfig now, the bridge0 interface we created earlier should be up and running.

VM Creation and installation

Create an empty disk image for your new VM. I'd recommend 1.5GB to play with at first. You can do this without doas or root if you want your user account to be able to start the VM later. I made a "vmm" directory inside my home directory to store VM disk images in. You might have a different partition you wish to store these large files in. Adjust as needed.

mkdir ~/vmm
vmctl create ~/vmm/test.img -s 1500M

Boot up a brand new vm instance. You'll have to do this as root or with doas. You can download a -CURRENT install kernel/ramdisk (bsd.rd) from an OpenBSD mirror, or you can simply use the one that's on your existing system (/bsd.rd) like I'll do here.

The below command will start a VM named "test.vm", display the console at startup, use /bsd.rd (from our host environment) as the boot image, allocate 512MB of memory, attach the first network interface to the switch called "local" we defined earlier in /etc/vm.conf, and use the test image we just created as the first disk drive.

doas vmctl start "test.vm" -c -b /bsd.rd -m 512M -n "local" -d ~/vmm/test.img

You should see the boot message log followed by the OpenBSD installer. Enter "I" at the prompt to install OpenBSD. I won't walk through how to install OpenBSD. If you've gotten this far, you've done this at least once before. I can say that the vm console works by emulating a serial port, so you should leave these values at their defaults (just press enter):
Change the default console to com0? [yes]
Available speeds are: 9600 19200 38400 57600 115200.
Which speed should com0 use? (or 'done') [9600]
You will also likely need to install packages without a prefetch area. Answer yes to this.
Cannot determine prefetch area. Continue without verification? [no] yes
After installation and before rebooting, I'd recommend taking note of the vio0 interface's generated MAC address (lladdr) so you can specify a static lease in dhcpd later. If you don't specify one, it seems to randomize the last few bytes of the address.
CONGRATULATIONS! Your OpenBSD install has been successfully completed!
To boot the new system, enter 'reboot' at the command prompt.
When you login to your new system the first time, please read your mail
using the 'mail' command.

# ifconfig vio0                                                               
vio0: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
        lladdr fe:e1:bb:d1:23:c4
        llprio 3
        groups: dhcp egress
        media: Ethernet autoselect
        status: active
        inet 10.13.37.39 netmask 0xffffff00 broadcast 10.13.37.255
Shut down the VM with "halt -p" and then press enter a few extra times. Enter the characters "~." (tilde, period) to exit the VM console. Show the VM status, and stop the VM. I've been starting and stopping VMs all morning, so your VM ID will probably be 1 instead of 13.
# halt -p

(I hit ~. here...)
[EOT]
$ vmctl status
   ID   PID VCPUS  MAXMEM  CURMEM     TTY        OWNER NAME
   13 96542     1    512M   52.6M   ttyp3         root test.vm
$ doas vmctl stop 13
vmctl: terminated vm 13 successfully

VM Configuration

Now that the VM disk image file has a full installation of OpenBSD on it, build a VM configuration around it by adding the below block of configuration (with modifications as needed for owner, path and lladdr) to /etc/vm.conf.

# H-i-R.net Test VM
vm "test.vm" {
        disable
        owner axon
        memory 512M
        disk "/home/axon/vmm/test.img"
        interface {
                switch "local"
                lladdr fe:e1:bb:d1:23:c4
        }
}


I've noticed that VMs with much less than 256MB of RAM allocated tend to be a little unstable for me. You'll also note that in the "interface" clause, I hard-coded the lladdr that was generated for it earlier. By specifying "disable" in vm.conf, the VM will show up in a stopped state that the owner of the VM (that's you!) can manually start without root access.
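For reference, with the switch definition from earlier and this new block both in place, the complete /etc/vm.conf should now look like this:

```
switch "local" {
        interface bridge0
}

# H-i-R.net Test VM
vm "test.vm" {
        disable
        owner axon
        memory 512M
        disk "/home/axon/vmm/test.img"
        interface {
                switch "local"
                lladdr fe:e1:bb:d1:23:c4
        }
}
```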

Go ahead and reload the VM configuration with "doas vmctl reload", then look at the status again.
$ doas vmctl reload
$ vmctl status
   ID   PID VCPUS  MAXMEM  CURMEM     TTY        OWNER NAME
    -     -     1    512M       -       -         axon test.vm

DHCP Reservation (Optional)

I opted to set up a DHCP reservation for my VM so that I had a single IP address I knew I could SSH to from my VM host. This, to me, seems easier than using the VM console for everything.

Add this clause (again, modified to match the MAC address you noted earlier) to /etc/dhcpd.conf. It goes inside the subnet block, just above the closing curly-brace.

        host static-client {
                hardware ethernet fe:e1:bb:d1:23:c4;
                fixed-address 10.13.37.203;
        }
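To make the placement concrete, the whole /etc/dhcpd.conf should now read as follows, with the host clause nested inside the subnet block:

```
option  domain-name "vmm.openbsd.local";
option  domain-name-servers 8.8.8.8, 8.8.4.4;

subnet 10.13.37.0 netmask 255.255.255.0 {
        option routers 10.13.37.1;

        range 10.13.37.32 10.13.37.127;

        host static-client {
                hardware ethernet fe:e1:bb:d1:23:c4;
                fixed-address 10.13.37.203;
        }
}
```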

Restart dhcpd.

doas rcctl restart dhcpd

Go ahead and fire up the VM, and attach to the console.

vmctl start test.vm -c

You'll see the SeaBIOS messages, the boot> prompt, and eventually all the kernel messages and the login console. Log in and have a look around. You're in a VM! Make sure the interface got the IP address you expected. Ping some stuff and make sure NAT works.
$ ifconfig vio0
vio0: flags=8b43<UP,BROADCAST,RUNNING,PROMISC,ALLMULTI,SIMPLEX,MULTICAST> mtu 1500
        lladdr fe:e1:bb:d1:23:c4
        index 1 priority 0 llprio 3
        groups: egress
        media: Ethernet autoselect
        status: active
        inet 10.13.37.203 netmask 0xffffff00 broadcast 10.13.37.255
$ ping -c4 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=54 time=9.166 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=7.295 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=9.178 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=25.935 ms

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 7.295/12.893/25.935/7.568 ms

Again, you can use <enter>~. to exit the VM console. You probably want to log off before you do that.  Go ahead and SSH in as well.

$ ssh axon@10.13.37.203
The authenticity of host '10.13.37.203 (10.13.37.203)' can't be established.
ECDSA key fingerprint is SHA256:f39GrAZouHQ3L+3/Qy3FHBma5K+eOe84B9QoFwqNpXo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.13.37.203' (ECDSA) to the list of known hosts.
axon@10.13.37.203's password:
Last login: Sat Apr 29 14:21:54 2017
OpenBSD 6.1 (GENERIC) #19: Sat Apr  1 13:42:46 MDT 2017

Welcome to OpenBSD: The proactively secure Unix-like operating system.

Please use the sendbug(1) utility to report bugs in the system.
Before reporting a bug, please try to reproduce it with the latest
version of the code.  With bug reports, please try to ensure that
enough information to reproduce the problem is enclosed, and if a
known fix for it exists, include that as well.

$

Forward ports through the NAT (optional)

One last thing I did was to port-forward SSH and HTTP (on high ports) to my VM so that I could get to the VM from anywhere on my home network.

Add these lines to /etc/pf.conf directly under the vmd NAT rule we added earlier.

pass in on $ext_if proto tcp from any to any port 2200 rdr-to 10.13.37.203 port 22
pass in on $ext_if proto tcp from any to any port 8000 rdr-to 10.13.37.203 port 80
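Putting it all together, the full /etc/pf.conf from this article now looks like the sketch below, with the rdr-to rules sitting directly under the NAT match rule:

```
ext_if="athn0"
vmd_if="vether0"

set skip on lo

block return
pass

match out on $ext_if inet from $vmd_if:network to any nat-to ($ext_if)
pass in on $ext_if proto tcp from any to any port 2200 rdr-to 10.13.37.203 port 22
pass in on $ext_if proto tcp from any to any port 8000 rdr-to 10.13.37.203 port 80
```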

Reload the pf configuration

doas pfctl -f /etc/pf.conf

You should now be able to ssh to port 2200 on your VM host's IP. Obviously, browsing to port 8000 won't work until you've set up a web server on the VM, but this was how I started out when working on the PHP/MySQL guides.

2017-04-15

PHP/MySQL Walk-Throughs updated for OpenBSD 6.1

I've stopped maintaining the page for nginx because it hasn't received many views since I updated it about six months ago. I suspect that nginx on OpenBSD is not all that popular because the httpd that ships with OpenBSD provides almost the same functionality as a basic installation of nginx. The pages for httpd in base and Apache2 now cover PHP7 and both configurations appear to work really well.  

PHP/MySQL on OpenBSD's relayd-based httpd 
HiR's Secure OpenBSD/Apache/MySQL/PHP Guide

Feel free to comment on this post if you find any discrepancies or have suggestions. Comments are disabled on the walk-through pages.

(2017-04-23) Edited to add: I went ahead and re-worked the nginx guide, based on the httpd guide with new formatting.
PHP/MySQL with nginx on OpenBSD

2017-04-11

OpenBSD 6.1 Released

I just saw a subtle hint from beck@ that OpenBSD 6.1 is live on the mirrors. Judging from the timestamps on the files,  it looks like the OpenBSD developers stealthily dropped 6.1-Release onto their FTP servers on April Fool's day and didn't tell anyone...

Go get it.

Edited to add: The OpenBSD 6.1 release page has been changed. It's official.