2017-05-18

Logical Domains on SunFire T2000 with OpenBSD/sparc64

I've been on a bit of a virtualization kick. This is about OpenBSD, but not vmm.

A couple of years ago, I picked up a Sun Fire T2000. This is a 2U rack-mount server. Mine came with four 146GB SAS drives, an 8-core (32-thread) UltraSPARC T1 CPU and 32GB of RAM. Standard hardware includes redundant power supplies, network and serial Advanced Lights Out Management (ALOM), four gigabit ethernet interfaces and a number of USB ports. PCIe and PCI-X slots are available for adding things like RAID controllers, additional network adapters or what-have-you. This system is a decade old, give or take a year or two.


Sun Microsystems incorporated Logical Domains (LDOMs) on this class of hardware. You don't often need 32 threads and 32GB of RAM in a single server. LDOMs are a kind of virtualization technology that's a bit closer to bare metal than vmm, Hyper-V, VirtualBox or even Xen. It works a bit like Xen, though. You can allocate processor, memory, storage and other resources to virtual servers on-board, with a blend of firmware that handles the hardware allocation and some software in userland (on the so-called primary or control domain, similar to Xen's dom0) to control it.

LDOMs are similar to what IBM calls Logical Partitions (LPARs) on its Mainframe and POWER series computers. My day job from 2006-2010 involved working with both of these virtualization technologies, and I've kind of missed it.

While upgrading OpenBSD to 6.1 on my T2000, I decided to delve into LDOM support under OpenBSD. This was pretty easy to do, but let's walk through it.
Once you get comfortable with the fact that there's a tiny computer inside (the ALOM), powered by VxWorks, acting as the management system and console (there's no screen or keyboard/mouse input), installing OpenBSD on the base server is pretty straightforward. The serial console is an RJ-45 jack, and, yes, the ubiquitous blue serial console cables you find for certain kinds of popular routers will work fine. The networked part of the ALOM, if enabled, will probably get a DHCP address and listen for an SSH connection. "admin/changeme" is the default login. Resetting it isn't too hard, if you need to; the Internet can help you if you know how to search.
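
If it helps to picture the workflow, getting from a workstation to the host console over the network goes roughly like this (the IP address is a made-up example; "poweron" and "console" are standard ALOM commands, and "#." drops you back to the sc> prompt):
$ ssh admin@10.0.0.42
sc> poweron
sc> console -f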

If you have seven minutes to spare to watch the ALOM and the T2000 itself boot up, watch the below video. Put on your earplugs (not your headphones), because this sucker is *loud*. Otherwise, just skip past it.



If there isn't an operating system on the box, the "ok" prompt should show up in the console eventually. If there is an operating system, you can use the "break" command to get to "ok". From there, CD booting is as easy as:
setenv boot-device cdrom
boot
OpenBSD installs quite easily, with the same installer you find on amd64 and i386. I chose to install to /dev/sd0, the first SAS drive only, leaving the others unused. It's possible to set them up in a hardware RAID configuration using tools available only under Solaris, or use softraid(4) on OpenBSD, but I didn't do this.
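
For what it's worth, mirroring a couple of the spare drives with softraid(4) after installation would look roughly like the following. I didn't do this, and the disk names and partition letters below are assumptions, so treat it as a sketch only:
# give sd1 and sd2 each an "a" partition of type RAID with disklabel(8), then:
doas bioctl -c 1 -l /dev/sd1a,/dev/sd2a softraid0
# a new sdN device appears; newfs and mount it like any other disk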

Enable ldomd:
doas rcctl enable ldomd

I set up the primary LDOM to use the first ethernet port, em0. I decided I wanted to bridge the logical domains to the second ethernet port. You could also use a bridge and vether interface, with pf and dhcpd to create a NAT environment, similar to how I networked the vmm(4) systems.

Cable the second ethernet port to the desired network segment. I'm assuming you got the first ethernet port configured by now. This second port doesn't need its own IP address from the primary LDOM. Configure it by putting this in /etc/hostname.em1:
-inet6
up
While you're at it, put those same two lines in /etc/hostname.vnet0. Create additional hostname.vnetN files for as many LDOMs as you plan to set up.

Create a bridge interface configuration with this chunk of text in /etc/hostname.bridge0. Add the rest of your vnet interfaces if you made more than one.
-inet6
up
add em1
add vnet0
Fetch the sparc64 minirootXX.fs file from an OpenBSD mirror. Make a blank 4GB disk image for the VM, then write the miniroot to the beginning of it. This will be the easiest way to boot to the OpenBSD installer inside the VM console.
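
For example, using the base system's ftp(1) client (substitute your preferred mirror and the current release number):
ftp https://ftp.openbsd.org/pub/OpenBSD/6.1/sparc64/miniroot61.fs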

mkdir ~/vm
dd if=/dev/zero of=~/vm/ldom1 bs=1m count=4000 
dd if=miniroot61.fs of=~/vm/ldom1 bs=64k conv=notrunc
 
Create an LDOM configuration file. You can put this anywhere that's convenient. All of this stuff was in a "vm" subdirectory of my home. I called it ldom.conf:
domain primary {
    vcpu 8
    memory 8G
}
domain puffy {
    vcpu 8
    memory 4G
    vdisk "/home/axon/vm/ldom1"
    vnet
}

Make as many disk images as you want, and make as many additional domain clauses as you wish. Be mindful of system resources. I couldn't actually allocate a full 32GB of RAM across all the LDOMs. I ended up allocating 31.
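
As an illustration, a second guest would just be another clause in the same file following the same pattern (the name, sizes and disk path below are arbitrary examples):
domain puffy2 {
    vcpu 4
    memory 3G
    vdisk "/home/axon/vm/ldom2"
    vnet
}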

We're going to dump the factory default LDOM configuration (that is, everything assigned to the primary LDOM, with no actual VMs). We need to copy this as a template, so that we can modify it and send it back to the firmware, telling it to re-allocate hardware assets to an LDOM.

# dump the factory
mkdir ~/vm/default 
cd ~/vm/default 
doas ldomctl dump 

# copy the configuration to the "myldom" directory
cd .. 
cp -r default myldom
cd myldom

# Modify the configuration using the ldom.conf we created
doas ldomctl init-system ~/vm/ldom.conf

# Push ("download") the new "myldom" configuration to the system controller.
cd ..
doas ldomctl download myldom

At this point, you can list the configurations and see that your new one will boot next.

$ doas ldomctl list
factory-default [current]
myldom [next]

Halt the system.
doas halt

Exit the console with "#." and at the system controller prompt, run "reset -c"


Once the system boots up and you log in, check out the running LDOMs.

$ doas ldomctl status
primary           running           OpenBSD running                     0%
ldom1              running           OpenBSD running                     0%

Connect to the console of ldom1 (ttyV0). Additional LDOMs will have consoles at ttyV1, ttyV2, etc. Voila. OpenBSD installer! Assuming your network configuration and cabling is right, it will show up as a machine directly on the LAN.

$ doas cu -l ttyV0
Connected to /dev/ttyV0 (speed 9600)

(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? I
At any prompt except password prompts you can escape to a shell by
typing '!'. Default answers are shown in []'s and are selected by
pressing RETURN.  You can exit this program at any time by pressing
Control-C, but this can leave your system in an inconsistent state.

System hostname? (short form, e.g. 'foo')
...

I eventually provisioned seven LDOMs (in addition to the primary) on the T2000, each with 3GB of RAM and 4 vcpu cores. If you get creative with use of network interfaces, virtual ethernet, bridges and pf rules, you can run a pretty complex environment on a single chassis, with services that are only exposed to other VMs, a DMZ segment, and the internal LAN.
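
I won't reproduce my whole setup here, but to sketch the idea: assume (hypothetically) that the DMZ guests hang off their own vether1/bridge1 pair with the primary domain doing NAT for them, much like the vmm setup in the next post below, and that the internal LAN is 192.168.1.0/24. A few pf.conf lines on the primary domain are then enough to keep the DMZ guests off the internal LAN while still letting them reach the outside world:
ext_if="em0"
dmz_if="vether1"
lan_net="192.168.1.0/24"
# DMZ guests may not talk to the internal LAN; anything else is passed and NATed out
block in quick on $dmz_if from any to $lan_net
pass in on $dmz_if
match out on $ext_if inet from $dmz_if:network to any nat-to ($ext_if)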


If, like me, you end up with an unbootable LDOM configuration, you can always revert to the factory default, or switch between stored configurations, from the system controller. At the sc> prompt, use:

bootmode config="factory-default"
reset -c

Any configuration that you've downloaded to the controller can be used; each one will show up by name in the output of "ldomctl list".

2017-04-29

OpenBSD vmm hypervisor: Part 2

This is going to be a (likely long-running, infrequently-appended) series of posts as I poke around in vmm.  A few months ago, I demonstrated some basic use of the vmm hypervisor as it existed in OpenBSD 6.0-CURRENT around late October, 2016. We'll call that video Part 1.

As good as the OpenBSD documentation is (and vmm/vmd/vmctl are no exception), I had to do a lot of fumbling around, and a fair amount of asking on the misc@ mailing list, to get to the point where I understand this stuff as well as I do. These are my notes so far.

Quite a bit of development was done on vmm before 6.1-RELEASE, and it's worth noting that some new features made their way in. Work continues, of course, and I can only imagine the hypervisor technology will mature plenty for the next release. As it stands, this is the first release of OpenBSD with a native hypervisor shipped in the base install, and that's exciting news in and of itself. In this article, we assume a few things:

  1. You are using OpenBSD 6.1-RELEASE or -STABLE (I'll likely start exploring -CURRENT again in a month or two)
  2. You're running the amd64 architecture version of OpenBSD
  3. The system has an Intel CPU with a microarchitecture from the Nehalem family or newer (e.g. Westmere, Sandy Bridge, Ivy Bridge). Basically, if you've got an i5 or i7, you should be good to go; see the quick check after this list.
  4. Your user-level account can perform actions as root via doas. See the man page for doas.conf(5) for details.
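
One quick sanity check for requirement 3: on a supported CPU, the vmm(4) device attaches and shows up in the kernel messages. The exact wording varies from machine to machine, but it looks roughly like this:
$ dmesg | grep -i vmm
vmm0 at mainbus0: VMX/EPT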

Basic network setup

To get our virtual machines onto the network, we have to spend some time setting up a virtual ethernet interface. We'll run a DHCP server on that, and it'll be the default route for our virtual machines. We'll keep all the VMs on a private network segment, and use NAT to allow them to get to the rest of the network. There is a way to directly bridge VMs to the network in some situations, but I won't be covering that today. (Hint: if you bridge to an interface with dhclient running in the host OS, no DHCP packets will reach the VMs, so it's best to statically assign your VM host's LAN port, or use a different interface entirely.)

Enable IP forwarding in sysctl. Add the following to /etc/sysctl.conf:

net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1

You can reboot to enable these, or run these commands manually:

doas sysctl net.inet.ip.forwarding=1
doas sysctl net.inet6.ip6.forwarding=1


We're going to be using "vether0" as the VM interface, and in my current configuration, "athn0" as the external interface. The below block of pf configuration was taken partially from /etc/example/pf.conf and some of the PF NAT example from the OpenBSD FAQ. Change your ext_if as needed, and save this as /etc/pf.conf:

ext_if="athn0"
vmd_if="vether0"

set skip on lo

block return
pass

match out on $ext_if inet from $vmd_if:network to any nat-to ($ext_if)

Next, you'll need to specify an IP and subnet for vether0. I am going with 10.13.37.1 (and a 255.255.255.0 netmask for a /24 network) in my example. Place something like this in /etc/hostname.vether0:

inet 10.13.37.1 255.255.255.0

Bring vether0 online by running the following command:

doas sh /etc/netstart vether0

Reload the pf configuration (h/t to voutilad for the heads up in the comments about moving this step down a few spots):

doas pfctl -f /etc/pf.conf


Create a basic DHCP server configuration file that matches the vether0 configuration. I've specified 10.13.37.32-127 as the default DHCP pool range, and set 10.13.37.1 as the default route. Save this in /etc/dhcpd.conf:

option  domain-name "vmm.openbsd.local";
option  domain-name-servers 8.8.8.8, 8.8.4.4;

subnet 10.13.37.0 netmask 255.255.255.0 {
        option routers 10.13.37.1;

        range 10.13.37.32 10.13.37.127;

}

Configure a switch for vmm, so the VMs have connectivity. Put the following in /etc/vm.conf:

switch "local" {
        add vether0
        up
}


Enable and start the DHCP server. We also need to set the flags on dhcpd so that it only listens on vether0. Otherwise, you'll end up with a rogue DHCP server on your primary network. Your network administrators and other users on your network will not appreciate this because it could potentially keep others from getting on the network properly.

doas rcctl enable dhcpd
doas rcctl set dhcpd flags vether0
doas rcctl start dhcpd

Enable vmd, then start it as well. For good measure, re-run fw_update to make sure you have the proper vmm-bios package.

doas rcctl enable vmd
doas rcctl start vmd
doas fw_update

You should notice a new interface, likely named "bridge0" in ifconfig now.
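
If you want to confirm that, the following should show the new bridge with vether0 listed as a member (output omitted here, since it varies from system to system):
ifconfig bridge0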

VM Creation and installation

Create an empty disk image for your new VM. I'd recommend 1.5GB to play with at first. You can do this without doas or root if you want your user account to be able to start the VM later. I made a "vmm" directory inside my home directory to store VM disk images in. You might have a different partition you wish to store these large files in. Adjust as needed.

mkdir ~/vmm
vmctl create ~/vmm/test.img -s 1500M

Boot up a brand new vm instance. You'll have to do this as root or with doas. You can download a -CURRENT install kernel/ramdisk (bsd.rd) from an OpenBSD mirror, or you can simply use the one that's on your existing system (/bsd.rd) like I'll do here.

The below command will start a VM named "test.vm", display the console at startup, use /bsd.rd (from our host environment) as the boot image, allocate 256MB of memory, attach the first network interface to the switch called "local" we defined earlier in /etc/vm.conf, and use the test image we just created as the first disk drive. 

doas vmctl start "test.vm" -c -b /bsd.rd -m 256M -n "local" -d ~/vmm/test.img

You should see the boot message log followed by the OpenBSD installer. Enter "I" at the prompt to install OpenBSD. I won't walk through how to install OpenBSD. If you've gotten this far, you've done this at least once before. I can say that the vm console works by emulating a serial port, so you should leave these values at their defaults (just press enter):
Change the default console to com0? [yes]
Available speeds are: 9600 19200 38400 57600 115200.
Which speed should com0 use? (or 'done') [9600]
You will also likely need to let the installer continue without verification, since it can't determine a prefetch area. Answer yes to this.
Cannot determine prefetch area. Continue without verification? [no] yes
After installation and before rebooting, I'd recommend taking note of the vio0 interface's generated MAC address (lladdr) so you can specify a static lease in dhcpd later. If you don't specify one, it seems to randomize the last few bytes of the address.
CONGRATULATIONS! Your OpenBSD install has been successfully completed!
To boot the new system, enter 'reboot' at the command prompt.
When you login to your new system the first time, please read your mail
using the 'mail' command.

# ifconfig vio0                                                               
vio0: flags=8b43 mtu 1500
        lladdr fe:e1:bb:d1:23:c4
        llprio 3
        groups: dhcp egress
        media: Ethernet autoselect
        status: active
        inet 10.13.37.39 netmask 0xffffff00 broadcast 10.13.37.255
Shut down the vm with "halt -p" and then press enter a few extra times. Enter the characters "~." (tilde period) to exit the VM console. Show the VM status, and stop the VM. I've been starting and stopping VMs all morning, so your VM ID will probably be 1 instead of 13.
# halt -p

(I hit ~. here...)
[EOT]
$ vmctl status
   ID   PID VCPUS  MAXMEM  CURMEM     TTY        OWNER NAME
   13 96542     1    256M   52.6M   ttyp3         root test.vm
$ doas vmctl stop 13
vmctl: terminated vm 13 successfully

VM Configuration

Now that the VM disk image file has a full installation of OpenBSD on it, build a VM configuration around it by adding the below block of configuration (with modifications as needed for owner, path and lladdr) to /etc/vm.conf.

# H-i-R.net Test VM
vm "test.vm" {
        disable
        owner axon
        memory 256M
        disk "/home/axon/vmm/test.img"
        interface {
                switch "local"
                lladdr fe:e1:bb:d1:23:c4
        }
}


I've noticed that VMs with much less than 256MB of RAM allocated tend to be a little unstable for me. You'll also note that in the "interface" clause, I hard-coded the lladdr that was generated for it earlier. By specifying "disable" in vm.conf, the VM will show up in a stopped state that the owner of the VM (that's you!) can manually start without root access.

Go ahead and reload the VM configuration with "doas vmctl reload", then look at the status again.
$ doas vmctl reload
$ vmctl status
   ID   PID VCPUS  MAXMEM  CURMEM     TTY        OWNER NAME
    -     -     1    256M       -       -         axon test.vm

DHCP Reservation (Optional)

I opted to set up a DHCP reservation for my VM so that I had a single IP address I knew I could SSH to from my VM host. This, to me, seems easier than using the VM console for everything.

Add this clause (again, modified to match the MAC address you noted earlier) to /etc/dhcpd.conf. It needs to go inside the subnet declaration, above the closing curly-brace.

        host static-client {
                hardware ethernet fe:e1:bb:d1:23:c4;
                fixed-address 10.13.37.203;
        }

Restart dhcpd.

doas rcctl restart dhcpd

Go ahead and fire up the VM, and attach to the console.

vmctl start test.vm -c

You'll see the SeaBIOS messages, the boot> prompt, and eventually all the kernel messages and the login console. Log in and have a look around. You're in a VM! Make sure the interface got the IP address you expected. Ping some stuff and make sure NAT works.
$ ifconfig vio0
vio0: flags=8b43 mtu 1500
        lladdr fe:e1:bb:d1:23:c4
        index 1 priority 0 llprio 3
        groups: egress
        media: Ethernet autoselect
        status: active
        inet 10.13.37.203 netmask 0xffffff00 broadcast 10.13.37.255
$ ping -c4 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=54 time=9.166 ms
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=7.295 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=9.178 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=25.935 ms

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/std-dev = 7.295/12.893/25.935/7.568 ms

Again, you can use <enter>~. to exit the VM console. You probably want to log off before you do that.  Go ahead and SSH in as well.

$ ssh axon@10.13.37.203
The authenticity of host '10.13.37.203 (10.13.37.203)' can't be established.
ECDSA key fingerprint is SHA256:f39GrAZouHQ3L+3/Qy3FHBma5K+eOe84B9QoFwqNpXo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.13.37.203' (ECDSA) to the list of known hosts.
axon@10.13.37.203's password:
Last login: Sat Apr 29 14:21:54 2017
OpenBSD 6.1 (GENERIC) #19: Sat Apr  1 13:42:46 MDT 2017

Welcome to OpenBSD: The proactively secure Unix-like operating system.

Please use the sendbug(1) utility to report bugs in the system.
Before reporting a bug, please try to reproduce it with the latest
version of the code.  With bug reports, please try to ensure that
enough information to reproduce the problem is enclosed, and if a
known fix for it exists, include that as well.

$

Forward ports through the NAT (optional)

One last thing I did was to port-forward SSH and HTTP (on high ports) to my VM so that I could get to the VM from anywhere on my home network.

Add these lines to /etc/pf.conf directly under the vmd NAT rule we added earlier.

pass in on $ext_if proto tcp from any to any port 2200 rdr-to 10.13.37.203 port 22
pass in on $ext_if proto tcp from any to any port 8000 rdr-to 10.13.37.203 port 80

Reload the pf configuration

doas pfctl -f /etc/pf.conf

You should now be able to ssh to port 2200 on your VM host's IP. Obviously, browsing to port 8000 won't work until you've set up a web server on your VM, but this was how I started out when working on the PHP/MySQL guides.
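
From another machine on the LAN, that looks something like this (192.168.1.50 standing in for whatever address your VM host actually has):
ssh -p 2200 axon@192.168.1.50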

2017-04-15

PHP/MySQL Walk-Throughs updated for OpenBSD 6.1

I've stopped maintaining the page for nginx because it hasn't received many views since I updated it about six months ago. I suspect that nginx on OpenBSD is not all that popular because the httpd that ships with OpenBSD provides almost the same functionality as a basic installation of nginx. The pages for httpd in base and Apache2 now cover PHP7 and both configurations appear to work really well.  

PHP/MySQL on OpenBSD's relayd-based httpd 
HiR's Secure OpenBSD/Apache/MySQL/PHP Guide

Feel free to comment on this post if you find any discrepancies or have suggestions. Comments are disabled on the walk-through pages.

(2017-04-23) Edited to add: I went ahead and re-worked the nginx guide, based on the httpd guide with new formatting.
PHP/MySQL with nginx on OpenBSD

2017-04-11

OpenBSD 6.1 Released

I just saw a subtle hint from beck@ that OpenBSD 6.1 is live on the mirrors. Judging from the timestamps on the files, it looks like the OpenBSD developers stealthily dropped 6.1-RELEASE onto their FTP servers on April Fools' Day and didn't tell anyone...

Go get it.

Edited to add: The OpenBSD 6.1 release page has been changed. It's official.