Former student pleads guilty in "USB Killer" case

A few weeks old, from the Department of Justice website, comes the first mention I've heard of a "USB Killer" being used nefariously at scale:

Akuthota admitted that on February 14, 2019, he inserted a “USB Killer” device into 66 computers, as well as numerous computer monitors and computer-enhanced podiums, owned by the college in Albany.  The “USB Killer” device, when inserted into a computer’s USB port, sends a command causing the computer’s on-board capacitors to rapidly charge and then discharge repeatedly, thereby overloading and physically destroying the computer’s USB port and electrical system.

Akuthota admitted that he intentionally destroyed the computers, and recorded himself doing so using his iPhone, including making statements such as “I’m going to kill this guy” before inserting the USB Killer into a computer’s USB port.  Akuthota also admitted that his actions caused $58,471 in damage, and has agreed to pay restitution in that amount to the College.

This is the predominant threat model that came to mind when the USB Killer hype kicked in about a year and a half ago: someone repeatedly using one to attack unattended computers. USB Killer devices are no longer one-offs, and they have achieved a sort of "commercial viability," but the kind that look convincing enough for a random person to plug into their own PC cost more than $60 USD. That's a lot of cash to spend on potentially destroying a stranger's device by leaving one lying around. Cheaper ones that are chunky (or have no case at all, or have cases emblazoned with menacing logos) are easier to come by, but obviously look more suspicious.
This is a pretty "clean" way for someone to destroy a computer they have physical access to, but ultimately, "physical access is total access" as the saying goes.


OpenBSD 6.5 released early

A few days late posting this, but OpenBSD 6.5 hit the wire last week, ahead of the May 1 target release date. Our OpenBSD Web Server Guide -- using the built-in httpd -- has been updated. And the PHP-FPM quirks from OpenBSD 6.4 got ironed out.

As far as installation and daily use go, you probably won't notice much has changed in OpenBSD 6.5. A ton of work went into hardware support and network-stack enhancements.

If your console supports it, you may notice a new default console font (called "Spleen"). I've seen this on my OpenBSD-Current laptop for a few months. At first, I didn't really like it, but it's quite readable and has grown on me when working in text-only mode. I'm considering setting it as my default xterm font as well.

If you use OpenBSD-CURRENT with snapshots, however, there's already some fun stuff unfolding, with sysupgrade(8) chief among it. It makes in-place upgrades a breeze. While it's not available in OpenBSD 6.5, upgrading from one release to the next should get a lot easier in about a year's time: 6.6 to 6.7 will be the first supported release-to-release upgrade with this tool, unless it gets backported to 6.5 with an errata patch -- unlikely, indeed...
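For the curious, the interface (as it stands in -current right now, so subject to change) is about as simple as it gets:

```
# fetch, verify, and install the next release's sets, then reboot
doas sysupgrade

# or track snapshots instead
doas sysupgrade -s
```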


OpenBSD VMM Hypervisor Part 4: Running Ubuntu (and possibly other distros)

TL;DR: you cheat.

I've been trying for almost a year to figure out how to get the cloud-init meta-data service to work with the Ubuntu Cloud image. I've asked on misc@ and other OpenBSD groups, and no one has an answer. The documentation is vague. If anyone ever figures out how to configure meta-data, let me know. I'd still like to give it a shot.

Last week, I rescued a server from a pile of computers destined to be scrapped and recycled. For me, it's the perfect setup for getting serious with OpenBSD VMM in my home lab. Two older Xeon E5-2620 CPUs and 128 GB of RAM. No hard drives, but it came with enough empty drive trays for getting started. I threw a pair of old SAS drives into it.

No surprise, OpenBSD just worked. This renewed my fervor for replicating a bunch of my cloud instances at home, and there's a lot of Ubuntu in use.

I decided to bite the bullet and just use qemu to do the installation and configuration of Ubuntu. Install qemu from packages:

doas pkg_add qemu

Download Ubuntu Server. I've actually used both 18.04 LTS and 16.04 LTS. I'm focusing on 16.04 for this because that's what I'm running on most of my EC2 instances.

Create a disk image.

vmctl create qcow2:ubuntu16lts.qcow2 -s 20G

Boot the Ubuntu ISO and attach the new Ubuntu disk image to qemu:

qemu-system-x86_64 -boot d -cdrom ~/Downloads/ubuntu-16.04.5-server-amd64.iso -drive file=ubuntu16lts.qcow2,media=disk -m 640

Install Ubuntu as usual. I didn't bother adding anything other than the SSH server during installation. qemu is really slow on OpenBSD, but it works... eventually. When the install is done, shut down and then restart qemu without the installation ISO attached.

qemu-system-x86_64 -drive file=ubuntu16lts.qcow2,media=disk -m 640

Log in with the user-level account you created. There are only two things to tweak before it's ready to run in vmm: the serial console and the network interface.

Under qemu, Ubuntu sees "ens3" as the network interface. Under vmm, the network interface is "enp0s3". If you're using 16.04, change "ens3" to "enp0s3" in /etc/network/interfaces. On Ubuntu 18.04, make the same change (ens3 to enp0s3) in the "netplan" config file, /etc/netplan/50-cloud-init.yaml.
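If it helps to see it spelled out, the 16.04 stanza ends up looking something like this (a sketch assuming a simple DHCP setup; adjust to taste):

```
# /etc/network/interfaces -- the interface vmm presents is enp0s3
auto enp0s3
iface enp0s3 inet dhcp
```

Under 18.04's netplan, it's the interface key under "ethernets:" in 50-cloud-init.yaml that gets renamed from ens3 to enp0s3.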

To configure the serial console, edit /etc/default/grub and set the GRUB_CMDLINE_LINUX line to:

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"

then run

sudo update-grub

Shut down qemu again. Your disk image is basically ready to go under vmm.

To save the trouble of having to mess with qemu again, I recommend creating derivative images of the one you just created, and using those for vmm.

vmctl create qcow2:ubuntu16lts-1.qcow2 -b ubuntu16lts.qcow2

Add the new disk image to a configuration clause in /etc/vm.conf on your OpenBSD host system. Mine looks like this:

vm "Ubuntu16.04" {
        owner axon
        memory 4096M
        disk "/home/axon/vmm/ubuntu16lts-1.qcow2"
        interface {
                switch "local"
                lladdr fe:e1:ba:f0:eb:b0
        }
}

For more information about setting up switches and networks in vmm, see Part 2 of my VMM series.

Voila! Ubuntu in VMM!

The same approach may very well work for other text-mode-only distributions, although the configuration files you must edit to make it work will vary.

I actually didn't need to use qemu to get Arch Linux installed in vmm, but doing it entirely in vmm was somewhat tedious, and it took me a few tries to get it right. Qemu might have been easier.


OpenBSD vmm Hypervisor Part 3: qcow2 and derived disk images

With OpenBSD 6.4, the VMM hypervisor got support for qcow2 disk images. This format is used by QEMU, and it has several features that make it a better choice than raw image files. Images are dynamically allocated, so the disk image file grows as you use more space instead of consuming its full size up front when the image is created. It won't ever shrink, though. "Derived images" are also supported. While VMM doesn't officially support snapshots yet, you can kind of get away with using derived images to do something similar. I'll cover that toward the end of this article.

You will probably want to have the networking set up on your OpenBSD VM host before you continue. That information is covered in Part 2 of my VMM series.

To create a qcow2 image, prefix the image file name with qcow2:

vmctl create qcow2:obsd64-base.qcow2 -s 10G


You can also use the qemu-img utility (from qemu in the package repository) to convert an existing raw image to qcow2 format, if you've already been using VMM before OpenBSD 6.4 was released. This image file will not be dynamically sized, but it can serve as a base image for derivatives:

qemu-img convert -f raw -O qcow2 obsd64.img obsd64-base.qcow2

Start the VM using your bsd.rd as the boot image, then follow the installer prompts. 

doas vmctl start obsd64-base -n local -m 512m -d obsd64-base.qcow2 -b /bsd.rd -c

When the install is done, rebooting will just bring the installer back. Exit to shell instead, type "halt -p" and use the ~. command sequence to exit the VM. Anything else you press will probably reboot the system (back into the installer). Now you have a pristine, freshly-installed OpenBSD image to start from.

To create a derived image, select your base image with the -b option to vmctl create:

vmctl create qcow2:obsd64-test1.qcow2 -b obsd64-base.qcow2

WARNING: If you make any changes to the base image, all derived images based on it will become corrupt and unusable. You can remove write access to the base image if you want; VMs relying on derived images will run fine.

chmod 400 obsd64-base.qcow2

Now, create a VM in /etc/vm.conf with the new obsd64-test1.qcow2 image file. All changes will be stored in this new image file. The original filesystem image will remain unchanged, and you can make as many derived images as you want from it.

# bridge0 for VMs, NAT and dhcpd (required for networking in this example)
switch "local" {
        interface bridge0
}

# OpenBSD Stable
vm "test.vm" {
        owner axon
        memory 512M
        disk "/home/axon/vmm/obsd64-test1.qcow2"
        interface {
                switch "local"
                lladdr fe:e1:ba:d0:eb:ab
        }
}

Reload vmm's configuration:
doas vmctl reload

Then go ahead and boot it up with the console attached:
vmctl start test.vm -c

For snapshot-like functionality, you can make a copy of your derived image and save it with another file name in the same directory. You should shut down the VM before you do this, though. To restore, just copy it back over the derived image, or create a new vm clause in /etc/vm.conf pointing to your saved derived image file.

cp obsd64-test1.qcow2 snapshot-2018-11-01_obsd64-test1.qcow2

You can run multiple VMs at the same time, each with a different derived image from the same base. If I create a new derived image file and add a vm clause for it, both VMs can run simultaneously.

vmctl create qcow2:obsd64-test2.qcow2 -b obsd64-base.qcow2

I added this to /etc/vm.conf:
# OpenBSD test2
vm "test2.vm" {
        owner axon
        memory 512M
        disk "/home/axon/vmm/obsd64-test2.qcow2"
        interface {
                switch "local"
                lladdr fe:e1:ba:d0:eb:ac
        }
}

Reload vmm, and start up your VMs!
doas vmctl reload
vmctl start test.vm
vmctl start test2.vm

You can attach to the consoles of each to see that they're running. Remember that you can use the [RETURN]~. key sequence to exit the console.

vmctl console test.vm
vmctl console test2.vm


New OpenBSD FAQ: Virtualization

OpenBSD has, arguably, some of the best officially-maintained documentation of any modern operating system. Solene Rapenne added a new FAQ section for Virtualization that covers getting OpenBSD's VMM hypervisor off the ground, and it gets the basics out of the way pretty well.

The FAQ kind of glosses over the more elaborate network configuration schemes, one of which I covered in Part 2 of my VMM article a while ago, though if you poke around between the FAQ and man pages, you can find pretty much all you need.

There are some new features to VMM which I plan on writing about soon. Stay tuned!

Via Undeadly


Windows Defender can now run in a sandbox

Via the Microsoft Security Blog:

Windows Defender Antivirus has hit a new milestone: the built-in antivirus capabilities on Windows can now run within a sandbox. With this new development, Windows Defender Antivirus becomes the first complete antivirus solution to have this capability and continues to lead the industry in raising the bar for security.
Sandboxes isolate processes to keep them from causing systemic harm. Because of the way modern antivirus engines parse untrusted input, many of them have proven vulnerable to targeted arbitrary code execution attacks -- that's right, proof-of-concept malware exists that exploits the antivirus suite itself! This is a major step toward improving the security of the Windows platform, and as far as I can tell, Defender is the first in its class to adopt this sort of fortification.

Right now, it's not enabled by default. I'd imagine that may change in the near future.
Users can also force the sandboxing implementation to be enabled by setting a machine-wide environment variable (setx /M MP_FORCE_USE_SANDBOX 1) and restarting the machine. This is currently supported on Windows 10, version 1703 or later.


Small TFT displays for Kali on the Raspberry Pi

Earlier this week, I saw this hot tip from Hack A Day about a high-performance driver for SPI-driven displays on the Pi. That article was published just as I had been digging into getting my Adafruit 3.5" PiTFT display working under Kali so I can run FruityWiFi and other tools with a super-portable kit.

I've had the PiTFT working under Raspbian for years, but Kali isn't Raspbian, and I remember that getting it working the way I wanted, even with the Adafruit helper tool, was somewhat of an ordeal.

Although a number of folks (e.g. re4son) have published unofficial Kali images for the Pi, some of which claim to work with various add-on displays, I tried a few and failed to get them to work properly, if at all, even without the display attached. I started with a fresh official Kali Linux 2018.3 RaspberryPi 2 and 3 image.

The fbcp-ili9341 driver doesn't work out-of-the-box on Kali, either, but getting it up and running wasn't too hard. It doesn't support touch input yet, but for me, Kali requires at least a keyboard, and my trusty Logitech K400r (affiliate link) is always nearby. One thing I like about framebuffer copy (fbcp) is that you can have the Pi plugged into HDMI (or not...) and the video is mirrored to the TFT, but needless to say, you'll have to start with the Pi plugged into an external monitor until you get the TFT working.
To get the driver to compile on Kali, I had to download libbcm_host.so, libvchiq_arm.so and libvcos.so from the opt/vc/lib directory of the RaspberryPi Git repository (or you could copy them from a running raspbian host or SD card).

I put the library files in /opt/vc/lib (I had to create this directory) and then added it to the library path by creating a file called /etc/ld.so.conf.d/vc.conf containing that path:
#vc libs
/opt/vc/lib

Run ldconfig to reload the library cache.

I'll be going through the steps specific to the Adafruit PiTFT 3.5; however, it looks like a lot of generic displays have been tested to work, as long as you pass the right options to cmake when you build it. The README in the repository has a lot of helpful tips on cmake options, but the basic "get it compiled" instructions are pretty simple:
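The build is the usual cmake routine (sketched from the project's README; the clone URL is upstream's):

```
git clone https://github.com/juj/fbcp-ili9341.git
cd fbcp-ili9341
mkdir build && cd build
cmake ..        # display-specific -D options go on this line
make -j
```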

For the Adafruit PiTFT 3.5 display, this was the magic sauce for the cmake command, though you may wish to mess with the Clock Divisor timing:
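I won't swear these are the exact flags, but per the fbcp-ili9341 README, the Adafruit 3.5" PiTFT uses the HX8357D controller, which would make the invocation something like:

```
# smaller SPI_BUS_CLOCK_DIVISOR = faster SPI bus; raise it if you see glitches
cmake -DADAFRUIT_HX8357D_PITFT=ON -DSPI_BUS_CLOCK_DIVISOR=6 ..
```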


Before I ran "make -j" per the above, I edited config.h to make some tweaks, uncommenting two options: one to get the orientation of the display the way I wanted it (so the Raspberry Pi's power plug sticks out of the top when the screen is upright), and one to fill more of the display:
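The two toggles in config.h look roughly like this (option names may drift between revisions of the source, so grep for "ROTATE" and "ASPECT" in your checkout):

```
// uncommented in config.h before running make:
#define DISPLAY_ROTATE_180_DEGREES
#define DISPLAY_BREAK_ASPECT_RATIO_WHEN_SCALING
```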


Once I got it running nicely, I copied the binary to /usr/local/bin/fbcp (because I can't remember "fbcp-ili9341").

Next, I edited /boot/config.txt and experimented with the various video modes to find one that was both legible on a tiny screen and filled as much of the display as possible. I ended up with a 480p 16:9 mode that, combined with the BREAK_ASPECT_RATIO fbcp build option, looks about as good as I can get it on this display. You'll have to tinker with these options to find what works best on yours. I added this to the end of /boot/config.txt:
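A plain 480p 16:9 CEA mode would look like the following (hdmi_mode=3 is 720x480 at 60 Hz in the 16:9 variant, per the Raspberry Pi video mode tables; treat this as a starting point rather than gospel):

```
# force a 480p 16:9 HDMI mode so the scaled image fills the TFT
hdmi_group=1
hdmi_mode=3
```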


If you're cool with running at full brightness, you can skip this next part. If you want variable brightness on the backlight, we have to configure GPIO. This display uses GPIO pin 18 for the backlight LEDs; other TFTs might not support PWM brightness control, or may use a different pin. By default the backlight runs at 100%, but if you tweak the GPIO configuration in the bootloader, you can use PWM to modulate the brightness. I added the following line to /boot/config.txt:
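The standard way to do this is the pwm device tree overlay from the Raspberry Pi firmware, which routes hardware PWM channel 0 to pin 18:

```
# hardware PWM on GPIO 18 (the PiTFT backlight pin)
dtoverlay=pwm,pin=18,func=2
```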


This change will kill the power to the backlight at boot (the PWM mode defaults to no output), so you'll need to initialize and power up the GPIO at boot if you want the display to be usable. To do this, I created an "rc.local" file in /etc (which systemd will run at boot) to launch the fbcp driver, initialize the GPIO, and set the display to 50% brightness. I'm running a really high frequency on the GPIO because lower frequencies created a very audible high-pitched whine, and very low values (e.g. periods of 255 to 10000) gave no real granularity in the backlight brightness. /etc/rc.local:

#!/bin/sh
# launch the framebuffer-copy driver
/usr/local/bin/fbcp &
# initialize PWM channel 0 and set a 50% duty cycle (half brightness)
echo 0 > /sys/class/pwm/pwmchip0/export
echo 10000000 > /sys/class/pwm/pwm0/period
echo 5000000 > /sys/class/pwm/pwm0/duty_cycle
echo 1 > /sys/class/pwm/pwm0/enable

Make sure it's executable:
chmod 755 /etc/rc.local

I opted to enable automatic login (as root) on this since it's basically a plug-in-and-go appliance. I followed this quick guide.

Reboot to test it out. You should see a white screen (or whatever was on the screen before rebooting) for a few seconds, then the backlight should go out until rc.local is executed right before it goes into GUI mode.

Finally, I created a "backlight.sh" script that handles setting the brightness. You'll need to make this executable, too. Syntax is basically "./backlight.sh (percentage)", where 0 is off, 1 is very dim, and 100 is full brightness.

#!/bin/sh
# scale 0-100 to a duty_cycle of 0-10000000 (the period set in rc.local)
if [ -n "$1" ]; then
    echo "${1}00000" > /sys/class/pwm/pwm0/duty_cycle
fi