2025-09-17

Raspberry Pi Home-Lab IDS with Suricata and Wazuh

I recently set up the Suricata IDS in my home lab again as part of a rebuild.
You'll need a Raspberry Pi 3, 4, or 5 and an inexpensive smart switch that can mirror traffic from your home lab environment.


I opted for the TP-Link TL-SG105E and TL-SG108E switches for my home lab, with five and eight 1 Gbps ports, respectively. I've been using these switches for years, and they seem to be popular in the homelab community.

I think the 4GB Raspberry Pi 4 is probably a good balance of affordability and resources. This setup was just a little sluggish on the Pi 3, but it worked fine once it was up and running. On 32-bit platforms like the Raspberry Pi 2, only older versions of Suricata seem to be available.

I would avoid buying Raspberry Pi boards from Amazon, as they're usually overpriced, fulfilled by sketchy resellers, or only sold as part of cost-ineffective bundles by companies that deal primarily in hobby electronics accessories. In North America, Adafruit is probably the most reliable place to buy one online, if you don't have a retail storefront that sells them locally. 

Flash the latest Raspberry Pi OS (Bookworm) Lite image to an SD card. Once it's flashed, set it up for remote SSH access. You can do this 100% headless by preparing the SD card: on Linux or macOS, open the boot partition of the SD card and run these commands to auto-provision your account and enable SSH on first boot. Obviously, choose a different username and password than these:

echo myusername:$(echo 'mypassword' | openssl passwd -6 -stdin) > userconf.txt

touch ssh

Next, log in to your smart switch and set up port mirroring. I mirrored only the port for my target lab machine (port 1) to my Suricata Raspberry Pi (port 2). Generally, you should mirror only a single port to the Pi, and be careful about mirroring the uplink if there's a lot going on in your lab. Under most conditions, you can use the single Ethernet interface on your Raspberry Pi for both management and IDS sniffing.
 
Make sure the OS is up to date, then install suricata, tcpdump and jq. 

sudo apt update && sudo apt -y upgrade

sudo apt -y install suricata tcpdump jq 
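
Before going any further, it's worth sanity-checking that mirrored traffic is actually reaching the Pi. Here's a quick check, assuming the Pi's wired interface is eth0 and your SSH session is on port 22 so it's filtered out of the noise:

# assumes the Pi's wired interface is eth0; adjust if yours differs
sudo tcpdump -n -i eth0 -c 20 not port 22

If you generate some traffic on the target machine and see packets with its address scroll by, the mirror is working.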

We need to edit the configuration slightly. You may want to adjust HOME_NET to focus only on the "target" part of your home lab, and we definitely need to fix the default rule path so it points at the rules that suricata-update installs, because the stock rule set won't catch anything useful.

Edit /etc/suricata/suricata.yaml and change

default-rule-path: /etc/suricata/rules

to

default-rule-path: /var/lib/suricata/rules

If you plan on using Suricata to detect attacks that happen entirely within your LAN, update HOME_NET to a list of your target systems. For example, my home lab target is 192.168.1.135, so I set HOME_NET to "[192.168.1.135/32]".
However, if you're watching all of your NAT targets for attacks involving the public internet, the default list is fine; it covers all of the RFC 1918 addresses.
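
For reference, that setting lives under vars / address-groups in /etc/suricata/suricata.yaml. A minimal sketch, using my lab target address as the example:

vars:
  address-groups:
    # assumes a single lab target at 192.168.1.135; adjust to your network
    HOME_NET: "[192.168.1.135/32]"
    EXTERNAL_NET: "!$HOME_NET"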

If you have a large SD card and want the option to dig into the raw packet data behind identified attacks, enable pcap-log in /etc/suricata/suricata.yaml. The default settings can eat up many gigabytes of space, so cap the file size and count. Mine looks more like this:

  - pcap-log:
      enabled: yes
      filename: log.pcap
      limit: 1000mb
      max-files: 10
      compression: none
      mode: normal
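
With those settings, Suricata rotates through roughly 10 GB of capture files under /var/log/suricata. When an alert fires, you can pull the matching traffic back out with tcpdump; Suricata appends a timestamp to each pcap filename, so something like this reads the newest one (adjust the host filter to your own target):

# read the most recent pcap-log file; assumes the default log directory
sudo tcpdump -n -r "$(sudo ls -t /var/log/suricata/log.pcap.* | head -1)" host 192.168.1.135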


Add the Emerging Threats "emerging-all" rule source and run suricata-update to install the rules.

sudo suricata-update add-source et-all https://rules.emergingthreats.net/open/suricata-6.0/emerging-all.rules.tar.gz

sudo suricata-update -v
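
Before restarting, you can have Suricata validate the configuration and the new rule path with its built-in test mode:

sudo suricata -T -c /etc/suricata/suricata.yaml -v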

I had to stop and start suricata to get the new rules to load. A simple "restart" didn't work for some reason.

sudo systemctl stop suricata
sudo systemctl start suricata
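
You can confirm the service came back up and that the Emerging Threats rules actually loaded by checking the Suricata log (the exact wording of the message varies a bit between versions):

sudo systemctl status suricata --no-pager
sudo grep -i "rules successfully loaded" /var/log/suricata/suricata.log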


You can use jq to parse the event log, looking for alerts:

jq '. | select(.event_type=="alert")' /var/log/suricata/eve.json
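
Since eve.json is one JSON object per line, jq handles it natively. A slightly fancier query pulls out just the fields you usually care about (these field names are standard for alert events, but double-check against your own output):

jq -c 'select(.event_type=="alert") | {time: .timestamp, src: .src_ip, dest: .dest_ip, sig: .alert.signature}' /var/log/suricata/eve.json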

It's also not too hard to set up the Wazuh agent to send these to your home lab SIEM. Once you've installed wazuh-agent on your Raspberry Pi, you can add log files to monitor by editing /var/ossec/etc/ossec.conf and adding this block near the end of the file, inside the <ossec_config> element.

  <localfile>
    <log_format>json</log_format>
    <location>/var/log/suricata/eve.json</location>
  </localfile>

Restart the Wazuh agent to pick up the changes.

sudo systemctl restart wazuh-agent 
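
You can confirm the agent picked up the new log location by checking the agent's own log; logcollector notes each file it starts analyzing (the message wording may differ slightly between versions):

sudo grep -i "suricata/eve.json" /var/ossec/logs/ossec.log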

As long as you're getting alert events in eve.json (which you can check with the jq command above), the events should also start funneling into your Wazuh instance. You will probably want to refresh the wazuh-alerts-* index pattern from the Dashboards Management menu in Wazuh after Suricata alerts start coming in, so that the new fields are searchable.
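
If you want to force a test alert through the whole pipeline, the classic trick is to fetch the testmynids.org page from the monitored target machine; it returns a string that matches a well-known ATTACK_RESPONSE signature, assuming the mirrored host has outbound internet access and the ET open rules are loaded:

# run this on the lab target whose port is being mirrored, not on the Pi
curl -s http://testmynids.org/uid/index.html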

2025-09-16

Build your home-lab SIEM with Wazuh


To land that SOC role, you need SIEM experience. How do you get it without the infosec job? Wazuh is an open-source SIEM you can set up in minutes. It has some surprisingly huge production deployments, so it's not just a toy for the home lab. I've been using Wazuh and its predecessor, OSSEC, at home for close to twenty years, but I recently rebuilt my home lab security monitoring stack.
 
I started with a Debian 13 VM on Proxmox and followed the instructions for a single-node install. Mind the system requirements: 4 cores, 8 GB of RAM, and 50 GB of storage are the recommended minimum. You could run it on a laptop or a small home server as well. The version numbers and instructions are subject to change, so I'd recommend following the official procedure rather than copying and pasting steps from here.
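
For reference, the official single-node Docker deployment follows roughly this shape; treat it as a sketch, since the release tag and compose file names can change between versions, and defer to the current docs:

# clone the wazuh-docker repo at the release tag you want (placeholder tag here)
git clone https://github.com/wazuh/wazuh-docker.git -b v4.x.x
cd wazuh-docker/single-node
# generate the self-signed indexer certificates, then bring the stack up
docker-compose -f generate-indexer-certs.yml run --rm generator
docker-compose up -d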

I ran into one snag during installation that caused a bunch of errors on the main dashboard and kept some stats from loading. Buried in the GitHub issues for Wazuh, I found a command that I had to run from inside the single-node Docker Compose directory to initialize wazuh-modulesd:
 
sudo docker exec single-node-wazuh.manager-1 /var/ossec/bin/wazuh-modulesd 

I rebooted my Wazuh server, but you could probably just restart the containers with docker-compose down; docker-compose up -d
 
After you start the Docker containers, wait a few minutes, then visit https://<your IP>/ and accept the self-signed certificate. The default credentials are admin:SecretPassword, and you should change those ASAP.

The "Endpoints" page has a "Deploy new agent" link that will help you generate a small script to run on your Windows, Mac and Linux machines to install, enable and start the agent. You'll have to run it manually on the endpoint, either on the console or through a remote session (like RDP, VNC, or SSH).
 
Then you can get attack alerts, watch the logs, check security benchmarks, and start building in-demand cybersecurity skills at home, or just use it for monitoring your fleet of computers. 
 
The main dashboard will show you a summary of all the agents and alerts (or a bunch of errors, if you hit the snag I described above and haven't run the workaround yet). And the "Discover" app inside Wazuh gives you a robust event log search.

I've found that, especially as new logs start coming in from various operating systems, you should refresh the field list for the wazuh-alerts index. From the main menu on the upper left, select "Dashboards Management" near the bottom of the menu, click "Index Patterns", then "wazuh-alerts-*", and near the upper right, click the refresh icon next to the trash can icon. This will allow you to search on new fields in the Discover app.

In my next post, I'll cover setting up a Suricata IDS on a Raspberry Pi, and integrating Suricata network IDS alerts into Wazuh, too. 

2025-08-19

Modernizing The HoneyNet: Bringing Community Honey Network Into 2025

Eight years ago, I wrote about building a honeypot army with Raspberry Pi, EC2, and Modern Honey Network. Back then, the Modern Honey Network (MHN) project was the gold standard for deploying a large fleet of honeypots, but it was already showing its age as Python 2 deprecation loomed. In 2020, some folks forked the work and got it running on Python 3, but haven't maintained it since. To the surprise of everyone who remembers me introducing the SecKC MHN effort and the stunning WebGL dashboard Wintel and I worked on together, my old MHN server was still running in 2025, dutifully grinding away on Ubuntu 16.04 LTS and Python 2.7.12. For the past two years, folks have asked me how to build their own, and I've had to say "First, time-travel back to 2017..."

I'd actually set out to do this work over my vacation a few months ago, but I was not in a good headspace and really just needed to disconnect. I did get familiar with CHN back in May. It ran fine in Docker but it was still dated, and it was going to need some work to integrate with the WebGL Dashboard. In many ways, I felt like some kind of archaeologist, looking at the mostly-abandoned CHN work, the few-and-infrequent MHN updates prior to 2018, and the fossils of purpose-built honeypots, many untouched since before 2015.

What's left of a once-vibrant honeypot ecosystem is a bit of a shame. The original MHN stack had become a museum of dependencies that couldn't be installed on modern systems. Even CHN featured Ubuntu 18.04 base images, archaic PyMongo trying to talk to MongoDB 8.x (spoiler: it can't), and many other outdated Python packages. I also realized that if anything happened to my precious EC2 instance, the entire stack was going to be offline indefinitely. I have backups of everything, but trying to untangle everything to restore a fragile artifact like this wasn't my idea of fun, and I'd probably just let it fade into obscurity. At the same time, I also envisioned getting the whole thing running in Python virtual environments on one of my OpenBSD systems, instead of relying on Docker on a cloud node somewhere. Docker is great, but it should be a deployment consideration, not a hard requirement.

Claude Code -- or -- vibe-technical-debt-repayment

This was the first seriously large project I've used Claude Code to help with. I used it mostly for documentation and test automation. I started by having it unpack how all of the pieces work together and create a Mermaid diagram (left) and a description in Markdown to help me better understand the data flow between the honeypots, hpfeeds, mnemosyne, MongoDB, SQLite, and the core CHN-Server itself. I also had Claude help me keep track of the project plan, broken up into phases, and when I was getting tired and had to take a break, it could summarize the commit logs and diffs and create a quick checklist of next steps that I could come back to the next day to get my head back into the game quicker. When I found a class of problems in the code, Claude highlighted other places the same patterns appeared. That wasn't anything I couldn't do in a modern IDE, but it was a nice touch, since I was working in vim the entire time. If you actually try to have Claude write code, it's kind of a mixed bag. Many of its decisions are based on popular and harmful anti-patterns propagated through StackOverflow, Quora, and Reddit. You'll have to babysit it and tell it no quite often.

The Modernization Journey

The initial scope seemed straightforward enough: upgrade some dependencies, fix some syntax errors and APIs, clean up a few bugs, get it all running in an OpenBSD VMM instance, then focus on rolling some new Docker containers. What started as a simple refresh turned into a journey through eight years of accumulated technical debt. I started with just the CHN-Server repository, but it had dependencies on the hpfeeds3 repository, and updating that meant messing with the mnemosyne repository. Several projects install my updated hpfeeds3 package directly from GitHub in requirements.txt or the Dockerfile. There were layers of compatibility issues forcing me to take a phased, incremental approach to modernization, and a few gotchas that bit me several times, particularly syntax changes around PyMongo. Some bugs weren't obvious until I tried end-to-end testing with real honeypots in the lab. Honestly, I'm sure I missed a few obvious ones, but none that impact basic functionality so far. I threw hundreds of events per hour at the system using an infinite loop of telnet attempts, and the system didn't even notice. It's much less resource-intensive than the old stack.

The front-end needed some work, too. I parameterized the base API URL inside the JavaScript. Rob Scanlon's original work was built with Grunt, and Wintel's middleware used an ancient Python 2 WebSocket library that is incompatible with anything in modern JavaScript. After a few hours of trying to remove all of the deprecated bits from the front-end, I decided to get back to basics: gut the WebSocket code from the middleware entirely and add a REST API to poll the honeypot attacks. I tweaked the JavaScript as minimally as possible. I'm really not that comfortable with front-end code.

End-to-end testing also revealed a few things that'll need attention soon. The deployment scripts all reference Docker containers with architecture tags (-amd64, -arm, etc.) that don't exist, for example. Hpfeeds-logger and hpfeeds-cif start up and appear functional, but they need more comprehensive testing than just "the container starts and doesn't immediately crash." I do my own log collection with a custom HPFeeds scraper I wrote in Go last year, replacing the PHP junk I had written back in 2017. I haven't used CIF since 2017 either, and don't really have much need for it. That's future work, or maybe even someone else's problem. I'm working on a plan to submit my improvements upstream in a practical way that doesn't require the current maintainers to review massive, monolithic pull requests (if they're even paying attention to PRs). I may end up having to just maintain this fork myself for a while. If it looks like that's how it will be, I'll come up with a more sustainable strategy to keep it up to date.

Getting CHN deployed in 2025 is easier than it was in 2017, assuming you can navigate the initial setup. Once I had verified all of the pieces kind of worked on their own, I forked the chn-quickstart project. The guided Docker Compose generator handles most of the complexity, and the whole CHN stack comes up with a simple docker-compose up -d. It's missing the attack-map and custom middleware for the time being, but they're not too hard to deploy next to CHN once it's up and running. I did manage to dockerize the middleware. As of last night, I cut over the DNS records from my old EC2 instance to my new VPS. This thing's actually running in production! nginx handles routing the web requests to the right places, and hosting the static assets of the animated front-end.

The OpenBSD deployment in my lab environment has been rock-solid, with Cowrie honeypots running on Raspberry Pi devices feeding data back to the central server. Watching the attack data flow in real-time through the web interface brings back that same sense of satisfaction I felt eight years ago - except now it's running on modern infrastructure. I need to clean up the OpenBSD init scripts and write some documentation around that part. I don't think it'll be as easy to deploy on bare metal (in Python Virtual Environments with uWSGI) as I'd like.

During the 2020 CHN effort, Duke University's STINGAR team built a bunch of dockerized honeypots that are still functional today, but similarly dated. Modernizing those honeypot projects would be nice. If we concede that the honeypot deployment strategy is just docker images on single board computers, home lab VMs and cloud nodes, it's not really that urgent. Cowrie is actively maintained, and a bare-metal deployment script is on my list of things to build soon.

Links