2025-09-17

Raspberry Pi Home-Lab IDS with Suricata and Wazuh

I recently set up a Suricata IDS in my home lab again as part of a rebuild.
You'll need a Raspberry Pi 3, 4 or 5 and an inexpensive smart switch that can mirror traffic from your home lab environment.


I opted for the TP-Link TL-SG105E and TL-SG108E switches for my home lab, with five and eight 1 Gbps ports, respectively. I've been using these switches for years, and they seem to be popular in the homelab community.

I think the 4GB Raspberry Pi 4 is probably a good balance of affordability and resources. This setup was just a little sluggish on the Pi 3, but it worked fine once it was up and running. On 32-bit platforms like the Raspberry Pi 2, only older versions of Suricata seem to be available.

I would avoid buying Raspberry Pi boards from Amazon, as they're usually overpriced, fulfilled by sketchy resellers, or only sold as part of cost-ineffective bundles by companies that deal primarily in hobby electronics accessories. In North America, Adafruit is probably the most reliable place to buy one online, if you don't have a retail storefront that sells them locally. 

Flash the latest Raspberry Pi OS Bookworm Lite image to an SD card. Once it's flashed, set it up for remote SSH access. You can do this 100% headless by preparing the SD card: if you're on Linux or macOS, open the boot partition of the SD card and run these commands to auto-provision your account and enable SSH on first boot. Obviously, choose a different username and password than this:

# run these from the root of the SD card's boot partition
echo myusername:$(echo 'mypassword' | openssl passwd -6 -stdin) > userconf.txt

# an empty file named "ssh" enables the SSH server on first boot
touch ssh

Next, log in to your smart switch and set up port mirroring. I mirrored only the port for my target lab machine (port 1) to my Suricata Raspberry Pi (port 2). Generally, you should mirror only a single port to the Pi, and be careful about mirroring the uplink if there's a lot going on in your lab. Under most conditions, you should be able to use the single Ethernet interface on your Raspberry Pi for both management and IDS sniffing.
 


Make sure the OS is up to date, then install suricata, tcpdump and jq. 

sudo apt update && sudo apt -y upgrade

sudo apt -y install suricata tcpdump jq 
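
Before touching the Suricata config, it's worth confirming the mirror is actually delivering packets to the Pi. Filtering out your own SSH session makes the mirrored traffic easy to spot (this sketch assumes the Pi's built-in interface is eth0):

# grab 20 packets off the wire, excluding our own SSH session
sudo tcpdump -ni eth0 -c 20 not port 22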

We need to edit the configuration slightly. You may want to adjust $HOME_NET to focus only on the "target" part of your home lab, and we definitely need to fix the rule path to align with the rule set we're installing, because the default rules won't catch anything useful.

Edit /etc/suricata/suricata.yaml and change

default-rule-path: /etc/suricata/rules

to

default-rule-path: /var/lib/suricata/rules

If you plan on using Suricata to detect attacks that happen entirely within your LAN, you should update HOME_NET to a list of your target systems. For example, my home lab target is 192.168.1.135, so HOME_NET = "[192.168.1.135/32]"
However, if you're watching all of your NAT targets for attacks involving the public internet, the default list is fine, and covers all RFC1918 addresses.
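
In suricata.yaml, HOME_NET lives in the vars section near the top of the file. A minimal sketch for my single-target example:

vars:
  address-groups:
    HOME_NET: "[192.168.1.135/32]"
    EXTERNAL_NET: "!$HOME_NET"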

If you have a large SD card and want the option to deeply examine the raw packet data for identified attacks, enable pcap-log in /etc/suricata/suricata.yaml. The default settings will likely eat up many gigabytes of space. Mine looks more like this, which caps usage at roughly 10 GB (ten 1000 MB files):

  - pcap-log:
      enabled: yes
      filename: log.pcap
      limit: 1000mb
      max-files: 10
      compression: none
      mode: normal
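
Suricata appends a Unix timestamp to each pcap file it writes, so you'll end up with files like log.pcap.1726500000 (a hypothetical name); you can read them back with tcpdump:

sudo tcpdump -nr /var/log/suricata/log.pcap.1726500000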


Add the Emerging-All rule source and run suricata-update to install them.

sudo suricata-update add-source et-all https://rules.emergingthreats.net/open/suricata-6.0/emerging-all.rules.tar.gz

sudo suricata-update -v

I had to stop and start suricata to get the new rules to load. A simple "restart" didn't work for some reason.

sudo systemctl stop suricata
sudo systemctl start suricata
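
You can also have Suricata validate the configuration and rule set without actually starting the daemon; -T runs a test pass and exits:

sudo suricata -T -c /etc/suricata/suricata.yaml -v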


You can use jq to parse the event log, looking for alerts:

jq '. | select(.event_type=="alert")' /var/log/suricata/eve.json
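
eve.json is newline-delimited JSON, so jq handles one event per line. A slightly fancier filter pulls out just the interesting fields from each alert (field names as they appear in my eve.json):

jq 'select(.event_type=="alert") | {timestamp, src_ip, dest_ip, signature: .alert.signature}' /var/log/suricata/eve.json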

and it's not too hard to set up the Wazuh agent to send these to your home lab SIEM. Once you have installed wazuh-agent on your Raspberry Pi, you can add various log files to monitor by editing /var/ossec/etc/ossec.conf and adding this block near the end of the file. 

  <localfile>
    <log_format>json</log_format>
    <location>/var/log/suricata/eve.json</location>
  </localfile>

Restart wazuh to pick up the changes.

sudo systemctl restart wazuh-agent 

As long as you're getting alert events in eve.json (which you should be able to check with the jq command above), then the events should also start funneling into your Wazuh instance. You will probably want to refresh the wazuh-alerts-* index from the Dashboard Management menu in Wazuh after Suricata alerts start coming in, so that the new fields are searchable.

2025-09-16

Build your home-lab SIEM with Wazuh


To land that SOC role, you need SIEM experience. How do you get it without the infosec job? Wazuh is an open-source SIEM you can set up in minutes. It has some surprisingly huge production deployments, so it's not just a toy for the home lab. I've been using Wazuh and its predecessor, OSSEC, at home for close to twenty years, but I recently rebuilt my home lab security monitoring stack.
 
I started with a Debian 13 VM on Proxmox and followed the instructions for a single-node install. Mind the system requirements: 4 cores, 8 GB RAM and 50 GB of storage are recommended at minimum. You could run it on a laptop or a small home server as well. The version numbers and instructions are subject to change, so I'd recommend following the official procedure rather than me copying and pasting steps here.

I ran into one snag during installation that caused a bunch of errors on the main dashboard and kept some stats from loading. Buried in the GitHub issues for Wazuh, I found a command that I had to run from inside the single-node Docker Compose directory to initialize wazuh-modulesd:
 
sudo docker exec single-node-wazuh.manager-1 /var/ossec/bin/wazuh-modulesd 

I rebooted my Wazuh server, but you could probably just restart the containers:

docker-compose down
docker-compose up -d
 
After you start the Docker containers, wait a few minutes, then visit https://<your IP>/ and accept the self-signed certificate. The default credentials are admin:SecretPassword; you should change those ASAP.

The "Endpoints" page has a "Deploy new agent" link that will help you generate a small script to run on your Windows, Mac and Linux machines to install, enable and start the agent. You'll have to run it manually on the endpoint, either on the console or through a remote session (like RDP, VNC, or SSH).
 
Then you can get attack alerts, watch the logs, check security benchmarks, and start building in-demand cybersecurity skills at home, or just use it for monitoring your fleet of computers. 
 
The main dashboard will show you a summary of all the agents and alerts (or a bunch of errors if you ran into the snag I ran into and haven't run the work-around yet). And the "Discover" app inside Wazuh gives you a robust event log search. 

I've found that, especially as new logs start coming in from various operating systems, you should refresh the field lists for the wazuh-alerts index. From the main menu on the upper left, select "Dashboards Management" near the bottom of the menu, click "Index Patterns", then "wazuh-alerts-*", and near the upper right, click the refresh icon next to the trash can icon. This will allow you to search on new fields in the Discover app.

In my next post, I'll cover setting up a Suricata IDS on a Raspberry Pi, and integrating Suricata network IDS alerts into Wazuh, too. 

2025-08-19

Modernizing The HoneyNet: Bringing Community Honey Network Into 2025

Eight years ago, I wrote about building a honeypot army with Raspberry Pi, EC2, and Modern Honey Network. Back then, the Modern Honey Network (MHN) project was the gold standard for deploying a large fleet of honeypots, but it was already showing its age as Python 2 deprecation loomed. In 2020, some folks forked the work and got it running on Python 3, but haven't maintained it since. To the surprise of everyone who remembers me introducing the SecKC MHN effort and the stunning WebGL dashboard Wintel and I worked on together, my old MHN server was still running in 2025, dutifully grinding away on Ubuntu 16.04 LTS and Python 2.7.12. For the past 2 years, folks have asked me how to build their own, and I've had to say "First, time-travel back to 2017..."

I'd actually set out to do this work over my vacation a few months ago, but I was not in a good headspace and really just needed to disconnect. I did get familiar with CHN back in May. It ran fine in Docker but it was still dated, and it was going to need some work to integrate with the WebGL Dashboard. In many ways, I felt like some kind of archaeologist, looking at the mostly-abandoned CHN work, the few-and-infrequent MHN updates prior to 2018, and the fossils of purpose-built honeypots, many untouched since before 2015.

What's left of a once-vibrant honeypot ecosystem is a bit of a shame. The original MHN stack had become a museum of dependencies that couldn't be installed on modern systems. Even CHN featured Ubuntu 18.04 base images, archaic PyMongo trying to talk to MongoDB 8.x (spoiler: it can't), and many other outdated Python packages. I also realized that if anything happened to my precious EC2 instance, the entire stack was going to be offline indefinitely. I have backups of everything, but trying to untangle everything to restore a fragile artifact like this wasn't my idea of fun, and I'd probably just let it fade into obscurity. At the same time, I also envisioned getting the whole thing running in Python virtual environments on one of my OpenBSD systems, instead of relying on Docker on a cloud node somewhere. Docker is great, but it should be a deployment consideration, not a hard requirement.

Claude Code -- or -- vibe-technical-debt-repayment

This was the first seriously large project I've used Claude Code to help with. I used it mostly for documentation and test automation. I started by having it unpack how all of the pieces work together and creating a Mermaid syntax diagram (left) and description in Markdown to help better understand the data flow between the honeypots, hpfeeds, mnemosyne, MongoDB, SQLite and the core CHN-Server itself. I also had Claude help me keep track of the project plan, broken up into phases, and when I was getting tired and had to take a break, it could summarize the commit logs, diffs, and create a quick checklist of next-steps that I could come back to the next day to get my head back into the game quicker. When I found a class of problems in the code, Claude highlighted other places the same patterns were found. That wasn't anything I couldn't do in a modern IDE, but it was a nice touch, since I was working in vim the entire time. If you actually try to have Claude write code, it's kind of a mixed bag. Many of its decisions are based on popular and harmful anti-patterns propagated through StackOverflow, Quora and Reddit. You'll have to babysit it and tell it no quite often.

The Modernization Journey

The initial scope seemed straightforward enough: upgrade some dependencies, fix some syntax errors and APIs, clean up a few bugs, get it all running in an OpenBSD VMM instance, then focus on rolling some new Docker containers. What started as a simple refresh turned into a journey through eight years of accumulated technical debt. I started with just the CHN-Server repository, but it had dependencies on the hpfeeds3 repository, and updating that meant messing with the mnemosyne repository. Several projects install my updated hpfeeds3 package directly from GitHub in requirements.txt or the Dockerfile. There were layers of compatibility issues forcing me to take a phased, incremental approach to modernization, and a few gotchas that bit me several times, particularly syntax changes around PyMongo. Some bugs weren't obvious until I tried end-to-end testing with real honeypots in the lab. Honestly, I'm sure I missed a few obvious ones, but none that impact basic functionality so far. I threw hundreds of events per hour at the system using an infinite loop of telnet attempts; it didn't even notice, and the modernized stack is much less resource-intensive than the old one.

The front-end needed some work, too. I parameterized the base API URL inside the JavaScript. Rob Scanlon's original work was built with Grunt, and Wintel's middleware used an ancient Python 2 WebSocket library that is incompatible with anything in modern JavaScript. After a few hours of trying to remove all of the deprecated bits from the front-end, I decided to get back to basics: gut the WebSocket code from the middleware entirely and add a REST API to poll the honeypot attacks. I tweaked the JavaScript as minimally as possible; I'm really not that comfortable with front-end code.

End-to-end testing also revealed a few things that'll need attention soon. The deployment scripts all reference Docker containers with architecture tags (-amd64, -arm, etc.) that don't exist, for example. Hpfeeds-logger and hpfeeds-cif start up and appear functional, but they need more comprehensive testing than just "the container starts and doesn't immediately crash." I do my own log collection with a custom HPFeeds scraper I wrote in Go last year, replacing the PHP junk I had written back in 2017. I haven't used CIF since 2017 either, and don't really have much need for it. That's future work, or maybe even someone else's problem. I'm working on a plan to submit my improvements upstream in a practical way that doesn't require the current maintainers to review massive, monolithic pull requests (if they're even paying attention to PRs). I may end up having to maintain this fork myself for a while; if so, I'll come up with a more sustainable strategy to keep it up to date.

Getting CHN deployed in 2025 is easier than it was in 2017, assuming you can navigate the initial setup. Once I had verified all of the pieces kind of worked on their own, I forked the chn-quickstart project. The guided Docker Compose generator handles most of the complexity, and the whole CHN stack comes up with a simple docker-compose up -d. It's missing the attack-map and custom middleware for the time being, but they're not too hard to deploy next to CHN once it's up and running. I did manage to dockerize the middleware. As of last night, I cut over the DNS records from my old EC2 instance to my new VPS. This thing's actually running in production! nginx handles routing the web requests to the right places, and hosting the static assets of the animated front-end.

The OpenBSD deployment in my lab environment has been rock-solid, with Cowrie honeypots running on Raspberry Pi devices feeding data back to the central server. Watching the attack data flow in real-time through the web interface brings back that same sense of satisfaction I felt eight years ago - except now it's running on modern infrastructure. I need to clean up the OpenBSD init scripts and write some documentation around that part. I don't think it'll be as easy to deploy on bare metal (in Python Virtual Environments with uWSGI) as I'd like.

During the 2020 CHN effort, Duke University's STINGAR team built a bunch of dockerized honeypots that are still functional today, but similarly dated. Modernizing those honeypot projects would be nice. If we concede that the honeypot deployment strategy is just docker images on single board computers, home lab VMs and cloud nodes, it's not really that urgent. Cowrie is actively maintained, and a bare-metal deployment script is on my list of things to build soon.


2024-06-29

OpenBSD Power Management

OpenBSD's power management features are powerful and plenty. My current setup floats the battery at 80% charged, to reduce battery wear during the work week, and it doesn't suspend when I close the lid as long as it's plugged in. On battery power, it adjusts the CPU speed under load to optimize battery life without sacrificing performance when I need it, and it will automatically suspend at 5% battery to save me from the system powering off unexpectedly. When I plan to head out, I can use a quick alias to allow the battery to charge all the way, which takes about 20 minutes from 80%. We'll go through how I have it set up.

Long-time readers of HiR will not be surprised that I'm running OpenBSD as my primary general-purpose operating system on my ThinkPad X1 Carbon. Over on Instagram, people do a double-take. One of the surprises was a NeoFetch screenshot showing my CPU at 400 MHz, simply because I was on battery power without anything heavy running. Power management on OpenBSD also blew some minds.

Out of the box, power management is disabled on OpenBSD. It's one of the few things that do not "just work" by default. The vast majority of OpenBSD systems do not need power management features, but it's a must for laptops.

Understanding apmd

apmd is the Advanced Power Management daemon. If you just set apmd to start with no options, on a well-supported laptop like the 8th Generation Lenovo ThinkPad X1 Carbon I'm using, your laptop will probably suspend when you close the lid or use "zzz" on the command line. And it will wake up when you open the lid or mess with the keyboard. apmd will also enable automatic performance adjustment mode -- clocking down the CPU when there's not much load.  That's a pretty good start. There will likely be no warning when your battery is close to dying, and there won't be anything there to stop it from just turning off abruptly when it hits 0%. That's not optimal. Also, when you close the laptop lid and it's plugged in, you might want the system to remain active. You can do that by tweaking machdep.lidaction with sysctl, but there's a much better way.

Looking at the manual page for apmd, we have a lot of useful options.

We can start apmd in high-performance mode (-H) to get the most processing power out of the system, low-performance mode (-L) to extend battery life at the expense of CPU speed, or force automatic performance adjustment mode (-A) which happens to be the default.

The -a option (lowercase) will block incoming BIOS suspend requests, such as those coming from closing the lid, if the system is plugged in. You can still manually suspend through your window manager or the command line zzz utility.

The -z [percent] option will automatically suspend the system if it is not plugged in and the battery is at or below the threshold percentage.

Enable and configure apmd. I assume that you've configured doas. I've covered this on several pages, like my OpenBSD webserver article. I explicitly set automatic performance mode (-A), blocking suspend when plugged in (-a), and set it to automatically suspend at 5% battery (-z 5). Feel free to change this however you please.

doas rcctl enable apmd
doas rcctl set apmd flags -A -a -z 5

The apm command line utility

Simply running the apm utility will provide battery and charge status:

apm
Battery state: high, 79% remaining, 153 minutes life estimate
AC adapter state: not connected
Performance adjustment mode: auto (400 MHz)


There are a number of display flags that you can pass to apm to get specific details; for instance, apm -m will display the number of minutes of estimated battery life (or time to achieve a full charge). See the man page for apm for all the details. This is useful if you're writing scripts to report power management status, such as for tmux/powerline or custom status bar scripts.
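
For example, here's a minimal sketch of a status-line snippet, using apm(8)'s display flags: -l prints percent, -m prints minutes, and -a prints the AC adapter state (1 = connected):

#!/bin/sh
# compact battery status for a tmux status line or panel widget
pct=$(apm -l)                    # estimated battery life, percent
min=$(apm -m)                    # minutes remaining (or to full charge)
if [ "$(apm -a)" = "1" ]; then   # 1 = AC adapter connected
  printf 'AC %s%%\n' "$pct"
else
  printf 'BAT %s%% (%sm)\n' "$pct" "$min"
fi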

You can also adjust the performance mode on-the-fly, using apm -H, apm -L or apm -A to enable high performance, low-performance or automatic performance modes respectively, without restarting apmd.

sysctl, power management and sensors

This will dump out everything from the hw.sensors tree in sysctl:
sysctl hw.sensors

From here, we can see fan speeds, temperatures, the number of battery charge cycles and even things like the battery's factory design capacity and last fully-charged capacity in Watt-hours. 



By dividing the last full capacity by the design capacity, you can see how far your battery has deteriorated below its rated capacity, a rough measure of battery health. For example, my battery's design capacity is 51 Wh, but my last full charge was 39.76 Wh. 39.76 / 51 is about 0.78, so my "full" capacity is about 78% of what it was when new. That's not bad for a 4-year-old laptop on the original battery with about 500 discharge cycles.
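
You can script that arithmetic, too. On my machine the acpibat(4) sensors expose last full capacity as watthour0 and design capacity as watthour4, but check the output of sysctl hw.sensors.acpibat0 first, since the sensor layout can vary:

# battery health = last full capacity / design capacity
# (watthour0 and watthour4 are the indices on my machine; yours may differ)
full=$(sysctl -n hw.sensors.acpibat0.watthour0 | cut -d' ' -f1)
design=$(sysctl -n hw.sensors.acpibat0.watthour4 | cut -d' ' -f1)
echo "scale=2; $full / $design" | bc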

Extending battery health with charging optimization

Many devices are now limiting the battery charge to about 80% when the system spends a lot of time plugged in. The MacBook my employer issued to me does this using some kind of magic algorithm that determines if it hasn't been running on battery power much lately. Leaving your battery slightly discharged like this actually extends the life of the battery substantially if you use it while plugged in most of the time.

OpenBSD can do the same thing, to an extent. The maximum charge level can be set with sysctl, so you can place the following line in /etc/sysctl.conf so that it's set immediately when booting up:

hw.battery.chargestop=80

Then, manually set it with sysctl, or reboot:

doas sysctl hw.battery.chargestop=80

If your battery is fully charged, it won't do anything, but the next time you run the battery down below 80% and plug it back in, it will stop charging the battery once it hits 80%. As far as I know, the battery will still charge to 100% if you turn the system off, though.

You can set or adjust the maximum charge level from the command line as well. For instance, if you know you're going to be on the go later today and want to actually charge the battery to 100%, running this command will take care of things:

doas sysctl hw.battery.chargestop=100

There are also additional hw.battery.chargemode options and an hw.battery.chargestart variable, for advanced use cases that I haven't needed. You can reference the hw.battery section of the detailed manual page for the sysctl API to read more about these settings.

Finally, I've been using XFCE lately, and the package "xfce4-battery" does a decent job, allowing me to place a battery widget in any of the XFCE panels. There may be additional widgets or plugins for your GUI of choice.

2024-05-04

Running your own Wireguard VPN server and Travel Router

If you travel, or work from the road a lot, you probably have a good reason to set up a travel router and VPN. Travel routers let you create a private network for all of your personal devices. Paired with a VPN, you can obscure the nature of your activity from the local network, and evade IP address or geographical restrictions. 

The good use cases for “privacy” focused VPN services are vanishing. Improved encryption and protocols prevent many of the ways a casual attacker can spy on you with wifi. On top of that, many such providers have been caught selling user data to third parties and turning over information to authorities under subpoena, making them possibly worse than any attacker you’re sharing the hotel wifi with.

Running your own cloud VPN is easy and affordable. Once you know how to set it up, you can run it on most hosting providers anywhere in the world, or set it up at home so that you can virtually hop on your home network while you’re out and about. Actually installing Wireguard is the main part that’s different between operating systems. 

OpenBSD Server

It's probably no surprise that I run Wireguard on my OpenBSD Servers. OpenBSD has had full kernel support for Wireguard for years, so it's just a matter of installing the userland tools, and setting up the interface.

doas pkg_add wireguard-tools

/etc/hostname.wg0:

inet 10.0.0.1 255.255.255.0 NONE
up
!/usr/local/bin/wg setconf wg0 /etc/wireguard/wg0.conf

Amazon Linux

Amazon Linux is just one easy example I found of a Red Hat-based system. These steps should work similarly on others like Rocky or Alma. 

sudo wget -O /etc/yum.repos.d/wireguard.repo https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/repo/epel-7/jdoss-wireguard-epel-7.repo

sudo yum upgrade

sudo yum clean all

sudo yum install wireguard-tools wireguard-dkms iptables-services


Debian Linux


Debian is the root of many other distributions, like Ubuntu and Raspberry Pi OS, so these instructions will likely work on many of them.

sudo apt update

sudo apt install wireguard

 

Generating Public and Private Keys

Most of the travel routers I've seen don't have a way to generate Wireguard keys on the device if you're manually configuring it, so we'll generate them on the VPN server and import them. We're changing the umask here to ensure the files are not world- or group-readable. We're going to be editing files as root, so just use sudo -i (Linux) or doas -s (OpenBSD).

sudo -i

umask 077 

Create the client keys:

wg genkey | tee client-private.key | wg pubkey > client-public.key

And then server keys:
cd /etc/wireguard
wg genkey | tee private.key | wg pubkey > public.key

Figure out your main network interface:

ip a

In Amazon AWS EC2, the interface was enX0 but it may very well be eth0 or something ridiculous like enp37s8lmaowtf depending on your configuration. You'll need this interface name for your iptables rules.

Using this example skeleton configuration as a template, paste it into /etc/wireguard/wg0.conf, fill in the appropriate public and private keys, and replace [main interface] with the interface name you found above. You can pick any port number you wish; there is no standardized port for Wireguard.

/etc/wireguard/wg0.conf

[Interface]
PrivateKey = [the contents of /etc/wireguard/private.key]
ListenPort = 57609
Address = 10.0.0.1/24
PostUp = iptables -t nat -I POSTROUTING -o [main interface] -j MASQUERADE
PostUp = ip6tables -t nat -I POSTROUTING -o [main interface] -j MASQUERADE
PreDown = iptables -t nat -D POSTROUTING -o [main interface] -j MASQUERADE
PreDown = ip6tables -t nat -D POSTROUTING -o [main interface] -j MASQUERADE

[Peer]
PublicKey = [the contents of client-public.key]
AllowedIPs = 10.0.0.2/32

Final Setup and starting the server

OpenBSD

For OpenBSD, you won't need the Address or iptables entries in wg0.conf above. You'll need to tell PF to NAT traffic for wg0, though. Again, you'll need the primary interface name, which you can find with ifconfig. Place the following lines into /etc/pf.conf AFTER the "pass" rules and before the block commands at the end of the file, then reload pf. Note that the UDP port must match the ListenPort from wg0.conf:

pass in on wg0
pass in inet proto udp from any to any port 57609
pass out on egress inet from (wg0:network) nat-to ([main interface]:0)

doas pfctl -f /etc/pf.conf

Enable IP Forwarding by adding these lines to /etc/sysctl.conf:

net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1 

To start Wireguard, run the following commands, or reboot:

doas sysctl net.inet.ip.forwarding=1

doas sysctl net.inet6.ip6.forwarding=1

doas sh /etc/netstart wg0

Linux

For Amazon Linux or Debian, it's also similar. Add these to /etc/sysctl.conf:

net.ipv4.ip_forward=1

net.ipv6.conf.all.forwarding=1 

Reload sysctl:

sudo sysctl -p

Enable and start the Wireguard service with systemctl

sudo systemctl enable wg-quick@wg0.service

sudo systemctl start wg-quick@wg0.service
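
Once the interface is up, you can verify it and watch for peer handshakes with the wg utility (use doas instead of sudo on OpenBSD):

sudo wg show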

Travel Router Configuration

I've been using GL.iNet routers with Wireguard for about 3 years. The example screenshots are from my GL-SFT1200 "Opal" travel router. Manually configure the Wireguard client and set these values:

Interface

IP Address: 10.0.0.2 (or your "peer" address from the Wireguard server config)

Private key: Contents of client-private.key file we generated earlier

Peer

Public Key: Contents of /etc/wireguard/public.key from the wireguard server

Endpoint host: IP address and port of your wireguard server (e.g. 3.45.67.89:57609)

Allowed IPs: 0.0.0.0/0 (i.e., route all traffic through the Wireguard server)
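
For reference, the same client settings expressed as a wg-quick-style config file look roughly like this; PersistentKeepalive is optional, but helps keep the tunnel alive behind NAT:

[Interface]
PrivateKey = [the contents of client-private.key]
Address = 10.0.0.2/32

[Peer]
PublicKey = [the contents of /etc/wireguard/public.key]
Endpoint = 3.45.67.89:57609
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25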

Once you have configured the Wireguard client, you can connect to the VPN. Browse to an IP address checking site like whatismyip.com to verify you're coming from the VPN server's IP address.

Many travel routers have a mode switch on the side that allows you to easily change how the router works. I set up my Opal router so that the mode switch enables or disables Wireguard on the fly so I have more flexibility without worrying about having to log into the admin control panel and change settings. 



2023-11-14

November 2023 SecKC Presentation: Mobile SDR

Thanks to all who showed up and asked questions!

2023-10-01

Introducing NEMO for the M5Stick C Plus

I've been working on this project for a couple of weeks, and it's pretty close to finished. I've been trying to build some more skills in the embedded systems, microcontroller and Internet of Things realm, and when I decided it was time to expand my experience to ESP32, I wanted a dev kit with a little bit of everything built in. I already have breadboards, displays, servos, sensors, LEDs and accessories galore. I just wanted something cute that'd keep my interest for a while. Enter the M5Stack M5Stick C Plus. Powered by an ESP32, featuring an AXP192 power management unit, accelerometer, IR and red LEDs, a 100mAh battery, microphone, speaker, display, a few buttons and plenty of exposed GPIO pins, it seemed like a good place to start.

My usual method of learning involves sketching out a rough plan for demonstrating mastery of core concepts, so my first few projects were about getting the ESP-IDF and Arduino environments working with simple programs. I also ported CircuitPython to it for some of my early projects. I focused on the WiFi stack and designing user interfaces at first, then on using UART, SPI and I2C via the GPIO pins.

With most of the tech community excited about the Flipper Zero, I started thinking about what sorts of high-tech pranks one could get away with on a platform like this. The end result is NEMO, named after the titular character in Finding Nemo, in contrast to some other high-tech toy named after a fictional dolphin.

The Stick C Plus has no IR sensor, but it does have a transmitter. Infrared replay attacks might work if you plugged an IR receiver into the GPIO, but I'm not worried about that. I settled for an implementation of TV-B-Gone, relying on previous work by Ken Shirriff and a local hacker, MrARM. I had previously messed with similar projects in both CircuitPython, and at the source-code level, way back in 2008 with the DefCon 16 badge, which also featured an infrared TV killer mode.

 
Right about the time I was starting to work on this, DefCon 31 was wrapping up, and a ton of folks were commenting on the bizarre behavior of their iOS devices at the conference, seemingly always displaying pop-ups trying to connect AirPods or other accessories. This became known as the "AppleJuice" attack, and it relies on Bluetooth Low Energy beacon advertisements and iOS's user experience that tries to make device pairing easier. I found a very bare-bones implementation for ESP32 that was somewhat broken. I fixed it and gave it a decent two-button user interface as well.

I rounded out the pranks with WiFi Spamming, using a list of funny WiFi SSIDs, the now-popular "RickRoll" SSIDs and a mode that spams hundreds of randomly-named SSIDs per minute.

It defaults to a "watch" mode with a 24-hour clock backed by the on-board real-time clock. There are a few kilobytes of non-volatile EEPROM storage on board, of which I'm using a few bytes to keep settings like left/right-hand rotation, brightness, auto-dimming timer and TV-B-Gone region persistent through deep sleep or power-off. All in all, it's a few existing projects just kind of glued together in a novel way that's easy to use. Those who've known me for a while would say that's on-brand.

A few people have asked me if it's for sale. I have no plans to sell anything, such as M5Stick units pre-flashed with NEMO. This is open-source software I put together for fun, and anyone can use it and extend it. You can buy the device and learn how to load my code on it, but I'd be more excited to hear about people being inspired to build their own cool projects on it.

At $20-$30 depending on the site and accessories you get with the M5Stick C Plus, it has a lot of capabilities. Here's an Amazon Affiliate Link to buy a version with a watch strap, lego mounting and wall-mounting options. The project source code and pre-compiled binaries are up on the m5stick-NEMO GitHub repository, and I am keeping the project up to date in the M5Burner app. You can see a quick walk-through reel on my Instagram as well.