
2018-12-02

OpenBSD VMM Hypervisor Part 4: Running Ubuntu (and possibly other distros)

TL;DR: you cheat.

I've been trying for almost a year to figure out how to get the cloud-init meta-data service to work with the Ubuntu Cloud image. I've asked on misc@ and other OpenBSD groups, and no one has an answer. The documentation is vague. If anyone ever figures out how to configure meta-data, let me know. I'd still like to give it a shot.

Last week, I rescued a server from a pile of computers destined to be scrapped and recycled. For me, it's the perfect setup for getting serious with OpenBSD VMM in my home lab. Two older Xeon E5-2620 CPUs and 128 GB of RAM. No hard drives, but it came with enough empty drive trays for getting started. I threw a pair of old SAS drives into it.



No surprise, OpenBSD just worked. This renewed my fervor for replicating a bunch of my cloud instances at home, and there's a lot of Ubuntu in use.

I decided to bite the bullet and just use qemu to do the installation and configuration of Ubuntu. Install qemu from packages:

doas pkg_add qemu

Download Ubuntu Server. I've actually used both 18.04 LTS and 16.04 LTS. I'm focusing on 16.04 for this because that's what I'm running on most of my EC2 instances.


Create a disk image.

vmctl create qcow2:ubuntu16lts.qcow2 -s 20G

Boot the Ubuntu ISO and attach the new Ubuntu disk image to qemu:

qemu-system-x86_64 -boot d -cdrom ~/Downloads/ubuntu-16.04.5-server-amd64.iso -drive file=ubuntu16lts.qcow2,media=disk -m 640

Install Ubuntu as usual. I didn't bother adding anything other than the SSH server during installation. qemu is really slow on OpenBSD, but it works... eventually. When the install is done, shut down and then restart qemu without the installation ISO attached.

qemu-system-x86_64 -drive file=ubuntu16lts.qcow2,media=disk -m 640

Log in with the user-level account you created. There are only two things to tweak before it's ready to run in vmm: the serial console and the network interface.

Under qemu, Ubuntu sees "ens3" as the network interface. Under vmm, the network interface is "enp0s3". Change "ens3" to "enp0s3" in /etc/network/interfaces if you're using 16.04. On Ubuntu 18.04, you must instead change the "netplan" config file in /etc/netplan/50-cloud-init.yaml with the same kind of change, ens3 to enp0s3.
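For reference, here's roughly what the end state might look like. This is a minimal sketch assuming a simple DHCP setup; your addressing will likely differ.

On 16.04, /etc/network/interfaces:

auto enp0s3
iface enp0s3 inet dhcp

On 18.04, the relevant part of /etc/netplan/50-cloud-init.yaml:

network:
    version: 2
    ethernets:
        enp0s3:
            dhcp4: true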

To configure the serial console, edit /etc/default/grub and change this line:

GRUB_CMDLINE_LINUX=""

to

GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"

then run

sudo update-grub

Shut down qemu again. Your disk image is basically ready to go under vmm.

To save the trouble of having to mess with qemu again, I recommend creating derivative images of the one you just created, and using those for vmm.

vmctl create qcow2:ubuntu16lts-1.qcow2 -b ubuntu16lts.qcow2

Add the new disk image to a configuration clause in /etc/vm.conf on your OpenBSD host system. Mine looks like this:

vm "Ubuntu16.04" {
        disable
        owner axon
        memory 4096M
        disk "/home/axon/vmm/ubuntu16lts-1.qcow2"
        interface {
                switch "local"
                lladdr fe:e1:ba:f0:eb:b0
        }
}


For more information about setting up switches and networks in vmm, see Part 2 of my VMM series.
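Since the vm.conf entry above is marked "disable", the VM won't start on its own. Assuming the configuration above, something like this should load the new config, boot the VM, and attach to its serial console (a quick sketch):

doas vmctl reload
doas vmctl start Ubuntu16.04
doas vmctl console Ubuntu16.04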

Voila! Ubuntu in VMM!



Although the configuration files you must edit will vary, the same approach may very well work for other text-mode-only distributions.

I actually didn't need to use qemu to get Arch Linux installed in vmm, but it was somewhat tedious to do entirely in vmm, and it took me a few tries to get it right. Qemu might have been easier.

2011-03-29

OpenVAS & Greenbone Security Assistant Basics

This is the second part of a series on OpenVAS, the open-source vulnerability scanner. In my last post, I walked you through compiling the various pieces of OpenVAS and getting it up and running. Now it's time to talk about the fundamentals. For this and future posts, we'll be using the web front-end to OpenVAS, called Greenbone Security Assistant, and we'll assume it's running on your local machine.


Why bother with OpenVAS, or vulnerability scanning in general?
Vulnerability scanners are not "hacking tools!" They're very noisy. They're ungainly. They lack finesse. They're riddled with false positives (findings that turn out to be non-existent when you try to verify them manually) and false negatives (vulnerabilities the scanner doesn't know about or can't easily detect, and therefore misses). With so many weaknesses, why would you even bother?

Simply put, running frequent vulnerability scans on your network gives you a good baseline complete with the ability to notice a change from one week to the next. At the very least, you get a good feel for the "low-hanging fruit" -- the obvious and easy targets on your network. Additionally, many vulnerability scanners including OpenVAS have the ability to use a scanner agent installed on systems, and login credentials to inspect the local security of your servers, workstations and infrastructure. In this way, you can identify software that's out of date and security settings that are out of compliance. This can be a huge asset to your IT security stance once you have the scanner configured properly and running smoothly. That's easier said than done, unfortunately.

If you'll be using this system as a vulnerability scanner regularly, I recommend a few things:

Make sure the openvas services start at boot. I just added this stuff to /etc/rc.local on Ubuntu server:
echo "Starting OpenVAS Scanner Daemon..."
/usr/local/sbin/openvassd && echo [ OK ]
echo "Starting OpenVAS Manager Daemon..."
/usr/local/sbin/openvasmd && echo [ OK ]
echo "Starting OpenVAS Administrator Daemon..."
/usr/local/sbin/openvasad && echo [ OK ]
echo "Starting Greenbone Security Assistant Web Interface..."
/usr/local/sbin/gsad --http-only && echo [ OK ]
echo "Downloading NVT Updates..."
/usr/local/sbin/openvas-nvt-sync && echo [ OK ]
Make sure you have nightly NVT Updates. I put this in root's crontab to run at 4:00AM each day:
0 4 * * * /usr/local/sbin/openvas-nvt-sync
And there you have it.

When you navigate to the web interface (usually http://localhost) and log in, you'll see the task screen, which I had shown you previously. Take note of the options on the left pane, as we'll be going through most of them.

[Screenshot: the task screen]

One of the first things you'll want to do if you didn't set up daily updates is to hit the "NVT Feed" link (not shown above) and update the NVT database.

[Screenshot: NVT feed sync]

With that out of the way, our first stop is with scan configurations. OpenVAS comes with five template configurations, each of which might do something useful for you.
[Screenshot: the list of scan configs]


You don't need to create a custom scan config to get started with OpenVAS, but if you decide to create a new Scan Config, you'll have the ability to edit it (the wrench icon will not be greyed out)
[Screenshot: creating a new scan config]

and you'll be faced with a huge assortment of scanning options allowing you to fine-tune your scan. You'll also see options for so-called NASL Wrappers, which are scripts that help OpenVAS utilize third-party tools such as nmap, nikto, w3af and others. Tuning your scan parameters is important, but complicated enough that it's beyond the scope of this series. Most vulnerability scanners I've used (Nessus, ISS, etc.) have a configuration section like this, and it's always a very, very deep rabbit hole. Mastering it is a bit of an art, but I usually break the enterprise up into "classes" so that like systems are scanned with relevant checks, and I'm not throwing 5,000 futile Windows checks at the Linux servers in the DMZ, for example. Feel free to leave me a comment if you want me to discuss this kind of classification setup in more detail.

When building custom configs, I recommend using the existing scan configs as a template, and tweaking things from there to get your bearings. Try the "Full and very deep" scan first if you have any doubts. It's unlikely to knock anything off the network, but be careful! The "Trend" radio button selects whether this scan config will grow and import new NVT plugins or remain static with only the plugins you selected for that particular plugin family. If you start using OpenVAS frequently, you'll probably want to become familiar with tuning scan configs to get rid of false positives or enable more features.

[Screenshot: scan config options]

Schedules are triggers for one-time or recurring scans. It's not uncommon to schedule a network vulnerability scan to happen after business hours, so this option helps you there. I usually run weekly scans so that I can compare my security stance from one week to the next. Here, I've created a weekly trigger that runs at midnight (central time) every Tuesday. You can create as many schedules as you want, but none of them will actually do anything until you assign the schedule to a task. By the way, OpenVAS uses UTC for its clock. Keep that in mind.
[Screenshot: creating a new schedule]

In the introduction, I had mentioned using credentials or agents to run local security checks. OpenVAS is pretty flexible here, so experiment with the credential options. Create credentials in Greenbone Security Assistant, and make sure that they match an account on the target system. I recommend creating a dedicated account with the bare minimum privileges needed to run the local security checks. In a Windows environment, consider using an active directory service account on the domain. Authenticated scans and local checks open up some of the most powerful features of many vulnerability scanners. I may cover the use of Agents later, but for now, they're beyond the "basics" scope of this post.
[Screenshot: creating a new credential]

Escalator is a funny word for this feature, but this robust option gives you the ability to trigger events based on the completion of a scan. Here, I'm just configuring it to send an email to me when a scan has finished running. Note: you will probably have to install the "mailutils" package or some equivalent on Ubuntu for this to work.
[Screenshot: creating a new escalator]

We can finally start picking what hosts or networks we want to scan with the "Targets" option. The target hosts can be single IP addresses, IP address ranges (192.168.0.1-192.168.0.23 or 192.168.0.1-23), CIDR networks like the example below, DNS names, or any combination of them separated by commas. I had mentioned setting up "classes" of scans earlier. Here, you may just insert a comma-separated list of similar servers, for example. The comment is optional, and the port range can also be a comma-separated list of individual port numbers or ranges. "default" uses all of the ports found in /usr/local/share/openvas/openvas-services, which contains over 8,000 ports, a far cry from 65,535. YMMV here. If you wish to use credentials, select them now.
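To make the "classes" idea concrete, a hypothetical "DMZ web servers" target (all names and addresses here are made up) might look like:

Hosts: 192.168.0.0/24, 10.0.5.1-10.0.5.25, www1.example.com
Port Range: 22,80,443,8000-8100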

[Screenshot: creating a new target]

The moment you've probably been waiting for. Create a new task. This is where you'll get to put it all together and start scanning! Here, I assigned a weekly scan schedule. This will run on its own, using the schedule I defined earlier.
[Screenshot: creating a new scheduled task]

If you don't define a scan schedule, you'll end up with an item on the task list, but it won't run on its own until you hit the "Play" icon (Green triangle). I added a manual scan to the task list as well. You can see both the scheduled and manual scans waiting to run here:
[Screenshot: the task status list]

Clicking the spyglass icon on a task will show you a list of summaries from each time you've run the task. This weekly scan has only run one time, though, so you only see one summary here.
[Screenshot: list of scan summaries]

And clicking the spyglass on a scan summary pulls up the detailed results, which you can filter in a number of ways. This page goes on and on, containing every item that was noted in the scan. You can also export the results in several formats.
[Screenshot: detailed scan results]

One thing that I like about OpenVAS is the fact that the web UI allows you to make remarks about the scan findings, assign arbitrary severity levels (including "false positive") and tune things so that future scans can take your professional opinion into account, if you so desire. You can perform these overrides or add notes to a single instance of a vulnerability or make sure that it applies to other hosts in the same scan. This can make OpenVAS extremely versatile.

Anyway, that's the basics of the OpenVAS scanner and Greenbone Security Assistant. Should be enough to get you started playing around in your own lab environments, or perhaps in a small office environment.

If you get serious about using OpenVAS, you may consider going with the Greenbone's Professional NVT Feed, which operates on a similar model to Tenable Security's Nessus ProfessionalFeed. Again, it's hard to compare OpenVAS and Nessus side by side, but they both try to fill the same niche. I've used both (and several other competing products) and I still can't say any one is actually better than another. The Greenbone Security Assistant Web UI seems like one of the best vulnerability scanner interfaces I've seen, though.

2011-03-21

OpenVAS on Ubuntu 10.10 Maverick Meerkat Install Notes

When Tenable took Nessus through a code re-write and closed its source, the old code was forked a few times. As far as I can tell, OpenVAS is the strongest surviving variant. Most Linux distributions' package repositories still carry only the old, out-of-date 2.x version.


I wanted to get the new version up and running. It turns out that compiling it for the first time was a gigantic clustercoitus of library dependencies and unnecessary branches in the OpenVAS subversion repository. So, I did what I usually do when I meet a challenge worth dissecting: I set up a VM, took some snapshots, and documented the process.

There are four components to OpenVAS: the scanner, the manager, the administrator, and a client program. There are three clients to choose from:
  • Greenbone Security Desktop, which looks a lot like the older Nessus GUI
  • Greenbone Security Assistant, a clean web UI similar to the new Nessus, except more feature-rich
  • OpenVAS-cli, a tool that's good for lightweight scheduled scanning
There are well over 100 dependencies to get OpenVAS installed, but this big pile knocked them all out on both Ubuntu 10.10 server and desktop versions:
sudo apt-get install build-essential libpcap-dev subversion cmake libgpgme11-dev libglib2.0-dev uuid-dev doxygen libgnutls-dev libmicrohttpd-dev bison xmltoman libsqlite3-dev sqlfairy libxslt-dev texlive-latex-extra xsltproc

One last thing: if you really want to use the Greenbone Security Desktop GUI, there's a whole lot more you'll need, but they're all dependencies of libqt4-dev. I have grown to really like the web GUI, so you may want to play with that first before you decide to go with GSD.

sudo apt-get install libqt4-dev

If you pull up the SVN repository, you'll see the following branches. You do not need all of them, and some of them are absolutely massive. It's a big waste of bandwidth, drive space and time to check out everything.

# bindings/
# doc/
# gsa/
# gsd/
# image-packages/
# openvas-administrator/
# openvas-cli/
# openvas-client/
# openvas-compendium/
# openvas-libraries/
# openvas-manager/
# openvas-packaging/
# openvas-plugins/
# openvas-scanner/
# sladinstaller/
# tools/
# winslad/

We only want openvas-libraries, openvas-scanner, openvas-manager, openvas-administrator, openvas-cli, gsa and gsd. When you first run subversion, you'll have to accept the SSL certificate from OpenVAS.

mkdir openvas-source
cd openvas-source
svn checkout https://svn.wald.intevation.org/svn/openvas/trunk/openvas-libraries openvas-libraries
svn checkout https://svn.wald.intevation.org/svn/openvas/trunk/openvas-scanner openvas-scanner
svn checkout https://svn.wald.intevation.org/svn/openvas/trunk/openvas-manager openvas-manager
svn checkout https://svn.wald.intevation.org/svn/openvas/trunk/openvas-administrator openvas-administrator
svn checkout https://svn.wald.intevation.org/svn/openvas/trunk/openvas-cli openvas-cli
svn checkout https://svn.wald.intevation.org/svn/openvas/trunk/gsa gsa
svn checkout https://svn.wald.intevation.org/svn/openvas/trunk/gsd gsd

OpenVAS uses cmake, which is actually pretty slick as long as your dependencies are in order. Simply go into each of the directories above, and run the following commands to compile and install. I'll use openvas-libraries as an example:

cd openvas-libraries
cmake .
make
sudo make install
cd ..

One thing to keep in mind is that several libraries are deployed with the openvas-libraries package, and those are needed for the other packages. Make sure you run ldconfig to update the library cache before compiling the other packages.

sudo ldconfig

Do the same for openvas-scanner, openvas-manager, openvas-administrator, openvas-cli, gsa and (if you want to use the native gui), gsd.
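If you'd rather not repeat that by hand, a short shell loop covers all of them. This is just a sketch, assuming the checkout directories from above:

for dir in openvas-libraries openvas-scanner openvas-manager openvas-administrator openvas-cli gsa gsd; do
    (cd "$dir" && cmake . && make && sudo make install)
    sudo ldconfig
done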

Once everything is installed, you need to do a few quick things to set everything up. First, start the OpenVAS Scanner Daemon:

sudo openvassd

Update the plugins. This takes a long time the first time you run it.

sudo openvas-nvt-sync

Create a CA (walk through the prompts):

sudo openvas-mkcert 

Create a client certificate for OpenVAS Manager (om):

sudo openvas-mkcert-client -n om -i

Rebuild the OpenVAS Manager database, then start OpenVAS Manager

sudo openvasmd --rebuild
sudo openvasmd

Start OpenVAS Administrator, then create an administrator account for yourself:

sudo openvasad
sudo openvasad -c 'add_user' -n Admin

(Use whatever username you like in place of "Admin"; it will prompt you for the details.)

Launch a client tool. I noticed that on Ubuntu, libmicrohttpd (a library the web UI uses) had some issues with SSL. I'm generally averse to running over plain HTTP, but if you make sure you run it locally or through a tunnel, you should be fine. I had to start Greenbone Security Assistant in http-only mode:

sudo gsad --http-only

Point your browser at http://localhost/ - It looks like this, if you have everything working properly. Here, I'm in the middle of a test scan.

[Screenshot: Greenbone Security Assistant in the middle of a test scan]

Alternatively, you can run GSD:

gsd

It looks a bit like this. You use the tabs to navigate it, export reports and all that.

[Screenshot: Greenbone Security Desktop]

I had trouble getting either GSD or GSA to export the report in PDF format. There may be a library or CLI tool that I'm missing. The HTML export works like a champ.

Update: Poking through the errors I found in /tmp, I discovered that I needed some files provided by LaTeX. Installing texlive-latex-extra and its dependencies got PDF export working, thus I've included it in the list of packages to install with apt-get at the beginning of this post.

In summary, OpenVAS works, and it's come a long way since the original fork of Nessus. It's difficult (and honestly, pointless) to compare OpenVAS to Nessus in their current states. They're not the same, and they likely have different strengths. I've spent quite a bit of time working with the latest versions of Nessus, so OpenVAS is new territory for me. Now that I have it up and running, I look forward to putting it through its paces.

I'll be talking about OpenVAS more in the coming days (or weeks, if things stay as busy as they have been lately). There are some interesting aspects of OpenVAS' architecture I'm playing with.

2010-02-08

Wrapping insecure web apps with Apache

Sometimes a web service, for one reason or another, cannot or should not be exposed directly on the web. Apache has several wonderful modules which allow such services to be wrapped so that they behave like a web app should (working SSL certificates, forced encryption, authentication, and so on).

In this article I will discuss and show some examples of how to create an authenticated reverse proxy with mod_authnz, mod_proxy, mod_rewrite and mod_security.

-=-=-=-=-=-=-=-ToC-=-=-=-=-=-=-=-
1. Prerequisites
2. Installation of Apache
3. Configuration of Apache
4. Configuration of mod_rewrite
5. Configuration of mod_proxy
6. Configuration of mod_authnz (optional)
7. Configuration of mod_security
8. Summary
9. Informative Resources
-=-=-=-=-=-=-=--=-=-=-=-=-=-=-=-=-
1. Prerequisites

In this example you will need:

  • Ubuntu Linux
  • An LDAP-compatible server with a valid SSL certificate
  • Apache2
  • A wildcard SSL certificate, or valid certificates for each service published
  • Apache mod_rewrite
  • Apache mod_proxy
  • Apache mod_authnz
  • Apache mod_security
2. Installation of Apache
Install Apache2 by any of your favorite package managers or at the prompt:
sudo apt-get install apache2
3. Configuration of Apache
Then create a new config file for each of your new relays.
Inside of the virtual host tag:
UseCanonicalName Off
LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon
# In case you have a self-signed certificate on the LDAP server
LDAPVerifyServerCert off
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/generic/example.com.crt
SSLCertificateKeyFile /etc/apache2/ssl/generic/example.com.key
ServerName weirdone_wrapped.example.com
CustomLog /var/log/apache2/access_log.relay-weird.vhost vcommon

4. Configuration of mod_rewrite
(mod_rewrite is included with apache2)
To enable mod_rewrite:
a2enmod rewrite
Then add the following virtual host entry to redirect http traffic:
RewriteEngine On

#Force HTTPS
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*) https://%{SERVER_NAME}/$1 [R,L]

5. Configuration of mod_proxy
First install additional mod_proxy:
sudo apt-get install libapache2-mod-proxy-html
Then enable the modules:
a2enmod proxy proxy_connect proxy_html proxy_http
Insert the proxy section and commands into the SSL (port 443) vhost section:
Order deny,allow
ProxyPreserveHost On
ProxyPass / http://weirdapp.example.com:50281/
ProxyPassReverse / http://weirdapp.example.com:50281/
6. Configuration of mod_authnz (optional)
First install mod_authnz:
apt-get install libapache2-mod-authnz-external
Then insert the following into the proxy block for LDAP authentication of the connection:
AuthType Basic
AuthBasicProvider ldap
AuthName "Please authenticate your connection using your network login."
# Some LDAP servers will reject un-encrypted simple authentication, plus this is
# just a good idea anyway.
AuthLDAPURL "ldaps://1.2.3.4/?cn" SSL
AuthzLDAPAuthoritative on
AuthLDAPBindDN cn=authbot,ou=users,o=org
AuthLDAPBindPassword password
AuthLDAPRemoteUserAttribute uid
AuthLDAPRemoteUserIsDN on
AuthLDAPGroupAttributeIsDN on
AuthLDAPGroupAttribute member
Require ldap-group cn=Staff,ou=groups,o=org
Satisfy All
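
Putting the pieces from sections 3 through 6 together, a complete relay vhost might look roughly like this. It's a sketch using the placeholder names from the examples above, not a drop-in config:

<VirtualHost *:443>
    ServerName weirdone_wrapped.example.com
    UseCanonicalName Off
    CustomLog /var/log/apache2/access_log.relay-weird.vhost vcommon

    SSLEngine On
    SSLCertificateFile /etc/apache2/ssl/generic/example.com.crt
    SSLCertificateKeyFile /etc/apache2/ssl/generic/example.com.key

    ProxyPreserveHost On
    ProxyPass / http://weirdapp.example.com:50281/
    ProxyPassReverse / http://weirdapp.example.com:50281/

    <Location />
        Order deny,allow
        AuthType Basic
        AuthBasicProvider ldap
        AuthName "Please authenticate your connection using your network login."
        AuthLDAPURL "ldaps://1.2.3.4/?cn" SSL
        AuthLDAPBindDN cn=authbot,ou=users,o=org
        AuthLDAPBindPassword password
        Require ldap-group cn=Staff,ou=groups,o=org
        Satisfy All
    </Location>
</VirtualHost>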

7. Configuration of mod_security
First install mod_security:
apt-get install libapache-mod-security
Then enable it:
a2enmod mod-security
Mod_security is fairly tricky. I am using a default configuration, but I am only logging errors, not preventing them. Configuration beyond this is outside the scope of this article.

Edit /etc/apache2/mods-available/mod_security.conf and use the configuration example in
"/usr/share/doc/mod-security-common/examples/" as a template.

If it proves to be too restrictive, you can switch the part which says:

SecRuleEngine On

to

SecRuleEngine DetectionOnly

8. Summary
So, after this is installed, Apache will listen on a static IP, then relay a website to the end user over SSL after authenticating the connection against an LDAP server. And if anything fishy happens, it will be logged (or blocked) by mod_security.

This is not a 100% silver-bullet solution. Apache HTTP authentication is generally a bad idea, especially over an unencrypted session. In this example that is partially mitigated with mod_rewrite, but at this time Apache does not natively support any modern authentication technologies with hooks for LDAP or any other authentication service. If you can avoid the need to do this at all, make it so.

The best way is to do it right the first time and write into your web application (or specify in the RFQ) the correct security measures.

9. Informative Resources

Breach Security. "ModSecurity home page." (Accessed April 2009)
http://www.modsecurity.org

The Apache Software Foundation. "Apache webserver website." (Accessed Jan 2010)
http://httpd.apache.org/

See also :
Asmodian X's Securing php web applications:
http://www.h-i-r.net/2009/05/securing-php-web-applications.html

Ax0n's OAMP (Apache, Mysql, PHP on OpenBSD) Article:
http://www.h-i-r.net/2008/12/sysadmin-sunday-amp-on-openbsd-44.html

Asmodian X's Name based hosting mini-howto:
http://www.h-i-r.net/2008/10/sysadmin-sunday-apache-name-based.html

Asmodian X's Workbench - Suhosin :
http://www.h-i-r.net/2008/12/asmodians-workbench-suhosin-hardened.html

2009-12-09

How to better fix the GDM "face browser" login issue

It's really not that hard. I went poking through the documentation for gdm-simple-greeter and found an option outlined called disable_user_list. It took me a bit to figure out how to disable the feature, and I broke gdm a bunch of times before googling it and finding a great post by [daten] on the Fedora forums that outlines it.

So first, if you followed my angrily-penned directions from last night, undo that with these steps:

In a terminal window, execute:
$ sudo dpkg-reconfigure gdm
(select gdm instead of xdm at the dialog box)


$ sudo /etc/init.d/xdm stop
(X11 will bail. Go ahead and log in at the console prompt)

Continue as below, starting with the gconftool-2 command. You don't have to stop gdm, obviously. You can just start it.

If you didn't switch to xdm first, just start here:


Now, we can simply tell gdm to disable the user list with a lengthy gconftool-2 command. Make sure you scroll to see the whole thing:

$ sudo gconftool-2 --direct --config-source xml:readwrite:/etc/gconf/gconf.xml.defaults --type bool --set /apps/gdm/simple-greeter/disable_user_list true

Log off. The change may not take effect until you stop and start gdm. If you still see the user list, press ctrl-alt-F1 to get to the console, log in and run the following commands:

$ sudo /etc/init.d/gdm stop
$ sudo /etc/init.d/gdm start

At that point, you should have a new, still squishy and pretty login screen without the face browser of doom.


FYI, "axon" wasn't filled in automatically, I had to type it. This is much better!

2009-12-08

Fixing Ubuntu's broken excuse for a login screen

This is fscking unacceptable. Yah, it's slick. All windows-esque. Whatever. I hate it. I'd like to be able to type my user name in, and not have a freaking list of enumerated accounts sitting there on my damn login window. Now get off my lawn *shakes cane*


Today, it finally annoyed me enough that I'd be willing to do whatever was needed to fix it. How about a real display manager?

In a terminal window, run:
$ sudo apt-get install xdm

You'll get a prompt. Select xdm.

Then, log off from your workstation, and hit Ctrl-alt-F1 to go to the text console. Log in with your user account and run the following commands to shut down gdm and start our new, tasty xdm.

$ sudo /etc/init.d/gdm stop
$ sudo /etc/init.d/xdm start

New, ugly but functional login screen. Yay.

By the way, the link to the Debian logo is buried in the xdm configuration file /etc/X11/xdm/Xresources. If you really want to change it, you can edit this config file and/or bring out the GIMP and start crack-a-lacking.

fin.

2009-07-05

Sysadmin Sunday: Guard against file corruption with PAR

Introduction:
Bit rot, file corruption, partial file transfers, call it what you will: digital transmission media sometimes fail, and you are left with a corrupted fragment of data, if anything at all. In the case of large files, where re-transmission would take hours or days, this is a tough situation.

PAR uses a RAID-like technique to salvage corrupted files, in most cases needing only recovery files that are a fraction of the size of the original file.

This article is intended for people with a basic to intermediate understanding of a un*x-style operating system.

-=-=-=-=-=-=-=-=-=-=-=-=-=-

Table of contents:
1. PAR and the Reed-Solomon error correction algorithm
2. Available applications based off of PAR
3. Examples
4. Informative resources

-=-=-=-=-=-=-=-=-=-=-=-=-=-
1. PAR and the Reed-Solomon error correction algorithm

The Reed-Solomon algorithm was developed in 1960 by Irving S. Reed and Gustave Solomon. It is used in many technologies such as CDs, Blu-ray, DSL modems, RAID 6 and more. This method of error correction is used to protect against certain forms of media defects or data transmission errors.

The PAR utility was developed by Tobias Rieper and Stefan Wehlus for the purpose of recovering corrupted files and file fragments from Usenet posts without needing to download the file all over again. Later, to compensate for some limitations of PAR, the PAR2 specification was developed by Michael Nahas and Peter Clements. Clements then wrote some of the first PAR2 applications.

A simple way of explaining what PAR does: it takes the original source files and applies a mathematical algorithm to them, producing a sort of processed description of what each file looks like. Say you send someone a file, but the transmission fails midway through. All the recipient needs to do is download the results of that mathematical operation (which are significantly smaller than the original file) and run the par utility to apply the math to the file fragment. Par can fill in the blanks using the algorithm and restore the file.

2. Available applications based off of PAR
There is, of course, the aforementioned open-source application written by Peter Clements et al. There is a slew of other PAR clients for Mac OS 9 and 10, Windows, Linux, BSD and more. Though the PAR1 specification is incompatible with the PAR2 specification, most clients support both formats side by side. For a detailed list of PAR-compliant projects, check out the Parchive SourceForge website. If you are using Linux, you can either download a Linux RPM or source tarball from the SourceForge site, or use a package system such as apt-get to download it from your distribution's package archives.

3. Examples
In this example I am using Ubuntu Linux.

  1. This will require the Ubuntu Universe repository. You can uncomment this in "/etc/apt/sources.list" using "sudo vi /etc/apt/sources.list".
  2. Then update your sources using "sudo apt-get update".
  3. Finally get the par2 package using "sudo apt-get install par2" .
Now let's test par2 to see if it can recover a file (a consolidated sketch of these steps follows the list):
  1. Using dd, create a 10MB test data file from /dev/zero: "dd if=/dev/zero of=/tmp/testdata.bin bs=1024 count=10240"
  2. Then create our par2 file and recovery blocks: "par2 create testdata.par2 testdata.bin"
  3. Now I'm going to copy the original data to a different name, then make some changes to it.
  4. Then I run "par2 verify testdata.par2 testdata.bin"
  5. par2 tells me that I need one recovery block to repair the file. (During the create process, par2 created several repair blocks. Since par2 over-samples, I can use either the largest repair file or a combination of the smaller files for the same effect.) In this case I just need to have the repair block file called testdata.vol000+01.par2 in the same directory.
  6. I then type in "par2 repair testdata.par2 testdata.bin", and it reports that the file has been repaired.
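Here's the whole round trip as a single shell session. This is a sketch assuming the file names above, and it simulates the corruption step by clobbering one block in place instead of copying to a new name:

cd /tmp
dd if=/dev/zero of=testdata.bin bs=1024 count=10240
par2 create testdata.par2 testdata.bin
# simulate corruption: overwrite 1KB in the middle of the file
dd if=/dev/urandom of=testdata.bin bs=1024 count=1 seek=5000 conv=notrunc
par2 verify testdata.par2 testdata.bin
par2 repair testdata.par2 testdata.bin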
4. Informative resources
Clements, Peter; Gallagher, Ryan; Nahas, Mike; et al. "Parity Archive Volume Set: File Specification, Clients, and Related Resources." (Accessed July 2009)
http://parchive.sourceforge.net/

Wikipedia.org "Reed-Solomon Error Correction" (Accessed July 2009).
http://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction

Wikipedia.org "Parchive" (Accessed July 2009).
http://en.wikipedia.org/wiki/Parchive

2009-05-07

Introduction to Snort IDS

Snort is a software package which monitors a network for suspicious traffic and provides advance warning of an attack. Snort can also be useful in security failure-mode analysis, where it can provide a log of network-wide events over a period of time. Snort is open-source software under the GPL license, which means it is free to distribute provided the source is made available.

This article is intended for network administrators and requires an intermediate functional knowledge of server administration and networking skills in a Linux environment.

======ToC======
1. Introduction
2. Installation
3. Implementation
4. Monitoring
5. Informative Resources
===============

1. Introduction
The trouble with managing a network of any size is that we only know about a breach of security after it happens. Most servers have logging, but so much is being logged that it's impractical to keep up with it all. Many attacks manipulate the logs, or leave signs of intrusion so cryptic that they blend in with the everyday noise of doing business.

Firewalls and anti-virus only detect a small portion of network security issues. Enter the next piece of the puzzle: the Intrusion Detection System. An IDS sits at the top level of the network and checks the traffic for patterns of known attacks, logs them, and can be configured to provide advance warning of an attack in progress.

SNORT is an IDS and is free open source software (free as in beer) which can be configured to fit almost any IDS role. SNORT is not the end-all, be-all security technology; it is just another security tool to be used in conjunction with other tools and practices to keep your network safer. Like all pattern-recognition-based security, it must be updated regularly to be able to detect new threats.

Most security vendors are moving towards Unified Threat Management systems, which pull firewall, VPN, IDS, and antivirus/anti-malware into one centrally maintained appliance available by subscription.

2. Installation
For this example we will be using Ubuntu Linux Server Edition on a computer with 2 or more network adapters. Since Snort will be performing a great deal of logging, the more disk space you make available, the better off it will be.

$ sudo apt-get install snort

The package manager will download all of the dependencies and install them for you.
It will then ask you for the network range you will be monitoring (e.g. 192.168.1.0/24).
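If you need to change that answer later, re-running the package configuration should prompt you again (assuming the Debian-packaged snort):

$ sudo dpkg-reconfigure snort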

Snort will begin logging traffic it sees in /var/log/snort/alert.

Syslog is the system log daemon which manages the various reports and logs produced by the services running on your machine. Should you need to report the information to a central server or log-management database (like Cisco MARS), you can create a custom local log by:

1. Edit snort.conf and add the line "output alert_syslog: LOG_LOCAL4 LOG_ALERT"
2. Edit syslog.conf and add "local4.alert @ww.xx.yy.zz" (where ww.xx.yy.zz is the IP address or DNS name of your logging server; note the "@", which tells syslogd to forward to a remote host). A sketch of both edits follows this list.
3. Restart Snort and syslogd
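Assuming a stock Ubuntu layout, those two edits might look like this (the log server name is a placeholder):

# /etc/snort/snort.conf -- emit alerts to the local4 syslog facility
output alert_syslog: LOG_LOCAL4 LOG_ALERT

# /etc/syslog.conf -- forward local4 alerts to the central log server
local4.alert    @loghost.example.com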

3. Implementation

Most networks are switched, which means traffic not destined for your port on the switch doesn't go there. An intelligent switch can be configured to copy all traffic to your port in addition to its intended destination, and that is the ideal solution. With Gigabit Ethernet, the only other option for sniffing traffic is an active bridge or hardware Ethernet tap between the top-level switch and the rest of the network: Gigabit Ethernet uses all of the pairs in a cable for receiving and transmitting, so creating a passive tap would significantly change the electrical properties of the cable and cause significant degradation of the signal. 10/100 Ethernet, however, only uses two pairs to transmit and receive, so it's possible to create a passive Ethernet tap where the sending and receiving pairs are each read by a NIC on your sniffing machine. This is where the requirement for two or more NICs comes in: you have to use one NIC to read the transmit pair and one NIC to read the receive pair.

4. Monitoring

The information from the Snort sensor is normally captured in a logfile on that sensor; we configured it to send the log information to a central syslog server instead. Snort also has plug-ins for MySQL and PostgreSQL so the information can be accessed from a database, which also allows for the use of a web front-end. SnortCenter, SAM and ACID are examples of web-based Snort data viewers.

There are also stand-alone applications such as RazorBack which can display Snort logs. Snort also has an iptables firewall plugin called SnortSam which can modify the firewall settings on the fly if prevention functionality is needed.

5. Informative Resources

Cisco Systems, Inc. "Device Configuration Guide for Cisco Security MARS, Release 6.x ." (Accessed May 2009)
http://www.cisco.com/en/US/docs/security/security_management/cs-mars/6.0/device/configuration/guide/chSnort.html (September 2008)

Danyliw, Roman "Analysis Console for Intrusion Databases." (Accessed May 2009)
http://www.andrew.cmu.edu/user/rdanyliw/snort/snortacid.html (Last Update 3/9/2003)

Freiberg, Sam "Snort Alert Monitor." (Accessed May 2009)
http://projects.darkaslight.com/projects/show/sam

InterSect Alliance. "RazorBack: The SNORT GUI for displaying events." (Accessed May, 2009)
http://www.intersectalliance.com/projects/RazorBack/index.html

Knobbe, Frank "SnortSam." (Accessed May 2009)
http://www.snortsam.net/

The SNORT Team. "Snort - the de facto standard for intrusion detection/prevention." (Accessed May 2009)
http://www.snort.org



2009-05-02

Holy War: BSD Vs. Linux

Ah, holy wars. vi vs. emacs. Mac vs. Windows. Marmite vs. starving to death. Who doesn't love a good, old-fashioned battle royale? Today, we're pitting BSD vs. Linux.

Background
Whilst in college, I was living in a bachelor pad with two other hackers. I'd been running Red Hat Linux 5.2 on my new PC for a few months when one of my roomies introduced me to FreeBSD 2.2.8. This single event sparked my love for BSD in general. Later, I'd come to really settle on OpenBSD. Over the last 15 years, I've written quite a bit about various operating systems including the BSDs. I by no means hate Linux. I still have to use it for some things. I simply have my gripes about it.

Leading up to the release of OpenBSD 4.5, I got in a few debates -- holy wars, kind of.

Wednesday, I got into a Linux/BSD debate with Mubix.

Then Ben, the instigator that he is, brought up a decent point in the public info-sec fora that is Twitter:
"... Why should I try [OpenBSD]? What advantages does it have over Linux?"

I, always ready to inject semantics to prove a point, started with the obvious: Linux is a kernel, not an operating system. I also quickly pointed out that Holy Wars are hard to do on Twitter. So here I am. Ben really wanted a comparison of OpenBSD vs. his current solution of Debian Linux.

Really, though, semantics have a lot to do with it. Linux is not a complete operating system.

Lineage of Linux
The Linux kernel itself is maintained by a core of kernel developers. Almost all Linux distributions come with the GNU system -- the so-called "userland environment" -- which was itself designed to replace the proprietary UNIX userland in the 1980s. The GNU system and the Linux kernel are developed independently of one another. In fact, Linus Torvalds was completing work on the Linux Kernel around the same time as The Free Software Foundation was putting the finishing touches on GNU. With these two free software components combined, a truly free operating system could be rolled out. This is, of course, why The Free Software Foundation prefers that people use "GNU/Linux" when talking about Linux as an operating system, rather than simply Linux as a kernel. Debian led the charge in adopting the GNU/Linux name.

This was all unfolding in the early 1990s, with the first distributions accessible to the masses around 1992 and 1993 with the popularity of dial-up Internet in the home and CD-ROM drives and media becoming less expensive and widely used.

Linux Distributions
While GNU and Linux combined make a bare-bones operating system with just enough tools to log in and compile software, it's not enough to be useful to the average person. To that end, groups package the GNU system, the Linux kernel and sometimes up to thousands of third-party packages into distributions. These distributions are complete operating systems: many of them are somewhat secure, stable, and usable for their given purpose.

Ubuntu, one of the more popular distributions, gathers praise for being one of the easiest for non-technical people to use. It also gets criticized by many technical folks who prefer something more svelte and minimalist. Those technical folks often choose Linux distributions that fit their needs: Arch Linux, Debian, or Gentoo. Likewise, corporations often spring for enterprise-supported distributions like SUSE Linux Enterprise Server or Red Hat Enterprise Linux. There are literally hundreds of active distributions, all of which loosely fall under the Linux umbrella. I do not have time to list them all, however I've touched on some of the more popular ones.

Configuration and package management
Package management systems, configuration tools, and other details vary widely between them. A sysadmin who uses SLES at work, for example, will probably have to spend some time figuring out how things work on Arch Linux or Debian GNU/Linux. Most Linux distros use a System V-style init based on runlevels. Configuring services and daemons usually involves messing with files and subdirectories in /etc/init.d/. The automated tools to do this, however, differ between families of Linux distributions.

Popular Linux package-management systems
RedHat Package Manager (Red Hat, SUSE, Fedora)
Debian Package (Debian, Ubuntu)
PacMan (Arch, Frugalware)
BSD-Derived Ports-like systems (Arch Build System, Gentoo)

Lineage of the BSDs

Berkeley Software Distribution (BSD) started as an additional package to go with Bell Labs' Unix Version 6. By the end of 1979, 3BSD was a complete operating system (kernel and userland) designed to run on DEC VAX systems. By late 1983, BSD had implemented TCP/IP. Legal troubles surrounding copyright of the source code held back BSD's development in the early 1990s, but by 1994, a portable, free operating system (4.4BSD-Lite) existed: a kernel and userland wrought from a very mature code-base written by a comparatively small group of developers. Development of BSD at Berkeley ended in 1995.

A more mature and unified kernel / userland code-base, and smaller development community are two major things that separate BSD-derived operating systems from Linux distributions. All BSD operating systems still package many other open-source tools such as X.org, Apache Web Server and perl. Many of the BSDs come with some or all of the above included by default. To that end, even BSD flavors are similar to Linux Distributions in that the release team can pick and choose what gets rolled in with the base operating system.

BSD Flavors
During the legal battle encumbering official development on BSD, a team of developers ran with some existing free software from the official 4.3BSD release, 386BSD and some GNU code as well. The result was FreeBSD. FreeBSD now focuses on cutting-edge hardware support, performance and scalability. More "liberal" than the other BSDs, FreeBSD isn't vehemently against closed-source binary drivers and allowing developers to sign Non-Disclosure Agreements with hardware vendors in the name of functionality -- practices that Linux developers regularly partake in.

Around the same time, NetBSD was also underway. Today, NetBSD focuses on clean kernel code that is extremely portable and easy to compile across almost every 32-bit computing platform. If your kitchen sink had a CPU, it could probably run NetBSD.

OpenBSD forked from NetBSD shortly after NetBSD's 1.0 release, mostly due to a falling out between Theo de Raadt and the rest of the NetBSD developers. OpenBSD's primary focus has always been on security and freedom of code. Strict code audits, re-writing open-source replacements for proprietary services, and refusal to use closed-source binary "blob" drivers or sign NDAs are some key factors.

Configuration
FreeBSD and NetBSD have a somewhat "hybrid" init for services and daemons. For the most part, "easy" system configuration tools are only found in the installation tools and scripts. Configuration is typically done by modifying human-readable files and scripts in /etc that are well-documented with comment lines. The syntax of the system tools often varies slightly from the GNU equivalents found in Linux distributions.

Package Management
Binary packages are handled nearly identically across all three major BSD platforms; NetBSD and OpenBSD borrowed the functionality from FreeBSD.

The Ports Tree is a staple in BSD derivatives. It is a skeletal directory of patches that can automatically fetch, build, and install source code including all dependencies. NetBSD refers to this functionality as "Source packages" because it uses the term "Ports" to describe porting the entire operating system to different architectures.
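For instance, building and installing a third-party package from the ports tree looks something like this on FreeBSD. It's a sketch, since exact paths and port names vary by flavor:

cd /usr/ports/www/nginx
make install clean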

In praise of Linux
No one had really heard of GNU until the Linux kernel came along. It was the last piece of a huge puzzle. That puzzle was a free operating system that beat BSD to the target market by almost 3 years. It took the Internet by storm, engaging a new wave of passionate coders. As a catalyst, Linux has probably done more for the Free and Open Source Software movement than anything else to date. It also happens to be that Linux's threading is quite efficient, and the kernel scales fabulously from old 386 computers up to bleeding-edge supercomputers. For things where symmetric multi-processing and threading matters, such as databases, Linux can be a very hard competitor to beat.

My Linux gripes
I feel like there are too many cooks in the kitchen sometimes. Updates to the kernel and GNU sources happen fast and frequently from a very, very diverse and loose pool of developers. It's both good and bad. It also seems like every budding techno-junkie has thought that it would be a good idea to learn how to craft their own Linux distribution. There are too many to be useful. Fortunately, there's a relatively small group of distributions that really matter out here in the real world. Still, one has to experience many of them in order to be what I'd consider a Linux expert. When hiring a sysadmin with 3 years of Linux experience, you really don't know, without asking, whether they will have any idea what to do with the flavor you've got deployed. I also dislike the ominous verbiage and forced open source of the GNU Public License, under which most of Linux and all of GNU is licensed. The GPL forces you to share the source code of anything you derive from GPL-licensed work. While it sounds noble, it's actually a restriction on what you can and cannot do. The license itself is incompatible with some other popular licenses, so you may not be able to use code from two different projects if you plan on releasing the end result to the masses.

In praise of BSD
I'm an OpenBSD fanboy, but I like NetBSD and FreeBSD as well. If you've used one, you will probably be comfortable using the others. They are fast and come installed with a fairly minimal set of tools, but it's very easy to install the things you want and need in order to build your system up the way you want it. I don't know anyone who's tried BSD coming from another UNIX-like operating system background and not at least liked it. The BSD license has fewer restrictions on what you can do with the code. While a smaller core of developers generally means the BSDs have less support than Linux for bleeding-edge hardware, I like the fact that the BSD flavors are more mindful of what is allowed into the base operating system. In the case of NetBSD and OpenBSD, I see a lot of benefits that come from a strict code-auditing framework. Recently, FreeBSD has been working on scaling CPU performance, but it's taken them a long time to catch up to Linux on enterprise server-class hardware.

My BSD gripes
With few exceptions, BSD is usually slow to the game for adding exciting new features and hardware support. Because of this, there are still places where the BSD kernel lacks the performance of Linux. BSD is therefore often playing catch-up with Linux on performance, while Linux is busy adding new features. The BSDs are often a pain in the ass to patch, too. And just like Linux, OpenBSD releases patches as soon as they fix a problem. They don't release binary patches, though, so you have to have a kernel and userland source tree available, manually patch and re-compile components, and move them into place. This is an arduous procedure that's arisen because of the portability of the source code. Still, I wish that official binary patches were released for popular architectures, such as x86. See: OpenBSD FAQ on Patching.

To answer Ben's original questions:
Why should I use OpenBSD?
If you are the kind of person who likes a lean environment for your desktop or servers, you will probably like any of the BSDs. I'd recommend starting with FreeBSD, or if you're a die-hard command-line commando, OpenBSD. If you're serious about security and stability, OpenBSD is a good choice. BSD isn't for everyone, and there are some things that are simply harder to do on BSD than on Linux. Running Mozilla with Flashplayer, for example. I honestly don't miss having Flash. It's an annoyance to me, most of the time. Exception: when someone sends me a really funny video on YouTube.

What advantages does it have over Linux?
I think I've made plenty of points and counterpoints regarding the technical advantages of Linux and BSD. It's difficult to compare Linux (in general) and all the BSD flavors side-by-side. So my initial comment stands: "Seriously, more geeks should give this operating system a try!" You might just like it!

OpenBSD's philosophy and ease of use are what keep me coming back. Are those advantages over Linux? No. It's about personal preference.

2008-10-31

OpenBSD 4.4 is hitting the mirrors now!

OpenBSD 4.4 is scheduled to be officially released November 1, 2008 (that would be tomorrow as of writing). It's already on some of the FTP mirror sites, though.

I am installing this TONIGHT. I may try Ubuntu Intrepid Ibex that was released this week as well, but I'm really more excited about OpenBSD. I'm a little bit of a fanboy, if you can't tell.

2007-12-01

Upgrade your mobo BIOS without Windows or DOS.

Sometimes you find a nifty piece of hardware that you just can't let go into disuse. This time around it was a Tualatin Pentium 3-S 1266MHz CPU, new-old stock, new-in-box. I got it some time ago to upgrade a PC for family that, it turns out, just upgraded the whole system instead. Thus it sat around in the box until I ran across a mobo to drop it in. Recently I found a system at my favorite shopping destination (Surplus Exchange) that had a Tualatin-capable mobo; the DFI CM33-TL just so happens to max out at the 1.26GHz P3-S I already had. Even nicer is that it is the Rev C board, which with the newer BIOS updates can boot from USB and can do 48-bit ATA addressing. Alas, no AGP slot.

So why all the love for an old P3 server chip? The later P3-S could outperform the early P4 chips and use half the wattage!

So what do we do when all that we have to boot the system with is a non-Microsoft OS, and most BIOS update utilities run in Windows or use disk-creation software that runs in Windows/DOS? Luckily, it is possible to update some mobos without having to resort to using an unwanted OS. DFI has made the CM33-TL able to boot from a floppy, run a program under Windows to flash the BIOS, or enter an update mode that simply reads the flash utility and BIOS file from a floppy. It turns out that it is a good thing they enabled all three. Under a fairly standard Ubuntu Linux install I was able to create a floppy that the DFI board could update from, by combining the BIOS update features in a way DFI didn't document.

Several steps that worked for me:
1. Nab the BIOS update of choice for your mobo & revision. Be sure your file is correct - close doesn't cut it with a BIOS. It's either an exact match or something won't work right. In my case I could nab the smaller download intended for a Windows-based update utility.
2. Extract the .zip file containing the utility and BIOS image. Many of the .exe files manufacturers supply are programs meant to run under DOS or a DOS shell to create a disk image. By having the .zip we can get around that.
3. Copy the extracted files to a freshly formatted and tested floppy (basic FAT12/MS-DOS format is fine; see the sketch after this list for one way to do this under Linux). Having a good floppy is very key to a successful flash. GIGO is an important point to consider when doing something that can brick a system.
4. Reboot the system and be ready to press the BIOS flash key(s) when prompted. On the CM33-TL you press Alt-F2 just after the RAM test and floppy seek.
5. The BIOS will then enter the flash update mode and read the floppy. If it determines the BIOS image is compatible, it will begin to flash it to the BIOS chip.
6. Once it's done, enter the BIOS setup and "Load Safe Defaults". This resets any settings that might cause the system to fail to boot. Go through the menus and set things as you need.
7. Test boot to be sure it works as before. Test boot again using the new features and marvel at the sudden uses that have opened up.
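For steps 2 and 3, the Linux side might look like this. It's a sketch assuming a first floppy drive at /dev/fd0, and the update archive name is hypothetical:

unzip cm33tl_bios.zip -d /tmp/bios       # hypothetical file name from DFI
sudo mkfs.msdos /dev/fd0                 # fresh FAT12 format on a 1.44MB floppy
sudo mount /dev/fd0 /mnt
sudo cp /tmp/bios/* /mnt/
sudo umount /mnt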

I had been concerned about having to make a bootable floppy for the update but the BIOS option to enter the update mode does not need a fully bootable floppy to operate.

With a system like this it is possible to operate a NAS with large drives on a chip that boots from a USB thumb drive, runs on older, cheap RAM, and uses little power. Having a system that boots from USB allows you to configure the server to spin down drives that are idle and save more power; an OS on a USB device will not need to spin up the main/RAID drives to write logs, etc. Smart choices of hardware can make a cobbled-together server operate more efficiently.

2007-11-07

ArsTechnica reviews Ubuntu Gutsy Gibbon

Finally, there's a really good technical overview of Gutsy, thanks to Ryan at Ars. Read on to get the skinny on the next iteration of the so-called perfect desktop Linux distribution. 


http://arstechnica.com/reviews/os/ubuntu-gutsy-gibbon-review.ars

2007-11-02

Operating systems out the wazoo!

In a matter of two weeks, we've seen a plethora of new OS releases: Ubuntu 7.10 (Gutsy Gibbon), Mac OS X 10.5 (Leopard), and OpenBSD 4.2.

I am currently playing with all three:

I'm currently working with a fresh, clean install of Gutsy Server, building an end-all, be-all shared host for a client of mine who wishes to give dozens of end-users their own web space and e-mail domains. I haven't messed with Gutsy on the desktop yet. In due time.

I did an in-place upgrade to Leopard on my MacBook, and it's everything I expected and then some. There are a few minor annoyances, but I'll chalk them up to Apple making an attempt to match and/or exceed Vista's user-interface flair. Unfortunately, I feel that the UI changes in Leopard traded friendliness and clarity for sex appeal. It looks slick, but the graphical changes are skin deep. Functionally, Leopard is still lean and mean. I don't feel like it took a performance hit, and there are boat-loads of new features, some of them long overdue (like Spaces, and QuickLook, which I'm already a fan of). Things I'm looking forward to testing out: ZFS support (which requires a developer download to fully implement on desktop Leopard), Time Machine, and the new "Firewall."

I also did an in-place upgrade to OpenBSD on the virtual machine that I use most often. At first glance, it's the same deal as usual. More hardware support, more robust drivers for certain devices, and some new functionality. I haven't gotten to test it yet, but I'm eager to see the new features in pkg_add, which has never, ever worked the way I would like -- so much so that I actually wrote (and released) a set of scripts to make installing software a breeze in OpenBSD. Finally, I'm interested in seeing how sensorsd works in its new zero-configuration mode on my 1U servers, which have always given OpenBSD's sensorsd some trouble.

I'm sure that HiR will revisit some of these in more detail after really giving them a good shake down.

If you're in or around Kansas City, come join us at the 2600 meeting tonight, Friday, November 2nd, 2007, in the Food Court at Oak Park Mall - half a mile east of I-35 on 95th Street. The "official" start time is 5:00 PM, but people generally show up as their schedules allow. Look for laptops. That will be us.

2007-10-12

Linux: Ready only for the geek desktop

I do almost everything within either OpenBSD, Solaris or Mac OS X. All of them required me to install quite a few extra pieces of software to work just the way I like, but at the end of the day, they're great for the things I do, with some exceptions for Solaris noted below. I spend the majority of my time doing web stuff (surfing, forums, blogging), listening to music, writing e-mail, word processing, performing systems administration, and tinkering with encryption and information security. Occasionally, I may goof around with my own music or graphical art. Solaris lacks easily-installed free or bundled graphics, MIDI, and audio editing software.

Enter Linux. "Linux" is a pretty broad brush to be painting with these days. Linux is a kernel; it's also a highly generalized term for any operating environment with Linux at its core. The end result is quite confusing. As part of my job, I take care of a bunch of Red Hat Enterprise Linux servers, and I've been familiar with Red Hat for quite some time. While I don't particularly like how Red Hat approaches certain things, I am quite good at installing, patching, managing, and tweaking Red Hat Linux servers simply because I've been doing it for so long. When I went to play with a totally different flavor of Linux on a spare server at home, however, my first instinct to use the command line for everything ran into a few problems: many of the tools and programs that Red Hat provides me with are nowhere to be found. Only because of my familiarity with Linux and UNIX flavors in general (okay, and my ability to read documentation) was I able to figure out how certain things were set up. For those who care, it was ArchLinux, but I had similar issues with SME Server as well, despite its being loosely derived from Red Hat Enterprise Linux.
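To illustrate the kind of divergence I mean, even the simple question "is Apache installed?" looks completely different between the two. A hedged sketch; the grep patterns are just examples:

# Red Hat: query the RPM database
rpm -qa | grep -i httpd

# ArchLinux: same question, different tool entirely
pacman -Q | grep -i apache

Neither command is hard, but nothing about knowing one teaches you the other.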

Right now, the big push in the Linux world is getting Linux onto the desktop. Linux for everyone. Break free from your commercial operating system hell! Linux is here to save the day! Ubuntu is the big name that gets thrown around most often. Self-described as "Linux for human beings", Ubuntu aims to be the final answer to the Linux desktop quandary. After trying Ubuntu Desktop, Ubuntu Server, and Kubuntu Desktop, I can say that "Linux" has come quite a way in its quest for desktop domination.

Ubuntu Desktop is based on the Gnome desktop environment. Asmodian X pointed out to me that Gnome feels an awful lot like Mac OS 9, and I wouldn't have quoted him on that unless I agreed. Part of the clunky feel is the fact that Linux is still bound by the X Window System. Essentially, all graphics go through a network or local socket. Windows and MacOS X don't suffer the same fate, and their interfaces simply feel more responsive. I can deal with a sluggish display, though. There are bigger fish to fry. All flavors of Ubuntu install quickly and ask a very minimal set of questions during installation. As long as the hardware is supported, pretty much anyone can get any of the Ubuntu flavors installed in minutes.
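To be fair, that socket-based design is also what gives X its network transparency. A quick hedged example (the user and hostname are placeholders, and it assumes X11 forwarding is enabled on the remote end):

ssh -X user@remotehost xclock

The clock runs on the remote machine but draws on your local display.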

Ubuntu Server is everything you'd expect in an open-source LAMP server distribution released by a company that believes in ease of installation. Much like Ubuntu Desktop, only a small set of options is available during installation. The end result is a server distro that is neither lean and mean nor bloated. It's pretty damned generic, and it's up to the user to install and configure anything beyond a basic web-application and database server.

Kubuntu Desktop replaces Gnome with the K Desktop Environment (KDE) and a different set of bundled applications; for the most part, KDE-based apps are chosen over the competing software packages where available. Konqueror is the default web browser instead of Firefox, while Kontact and Kopete handle mail/scheduling and instant messaging, respectively. The list goes on and on. If Gnome is the MacOS Classic of Linux desktops, then on a user-interface level KDE feels a bit like Windows Vista with most of the snazzy features turned off, only a little more "Fisher Price." It kind of feels like a toy, but it gets the job done nicely.

Keep in mind that my impression of the two desktop environments is based only on Ubuntu; prior to this, I hadn't used KDE or Gnome in several years. Right now, I'd say I favor KDE over Gnome, at least in the configurations provided by Canonical (the company behind Ubuntu). There are other variants of Ubuntu that I have not yet tried, so they aren't being reviewed here.

After you get one of the desktop flavors of Ubuntu up and running, keeping the system secure and up to date is a breeze: the system checks for upgraded packages and alerts you to their presence. It's genuinely easier to keep Ubuntu up to date than it is to do the same on Windows. Installing other software can be nearly as painless. Ubuntu provides a graphical application installer that lets you choose programs from a list (or search the list for what you want), and the system handles all the downloading and installation on its own, including any other packages required by the software you selected.

BSD has been doing package management like this for years without the graphical installation wizard. You still need to know what you want, and have to look through the list manually. Ubuntu is based on the Debian package system, and Debian has also had similar functionality for many years. This stuff isn't new, but combined with the other aspects of Ubuntu, it makes for a system that's pretty user-friendly.
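For the curious, the command-line machinery underneath is plain apt. A minimal sketch; the search terms and package name are just examples:

# Refresh the package list, then apply any available upgrades
sudo apt-get update
sudo apt-get upgrade

# Find and install a new package
apt-cache search audio editor
sudo apt-get install audacity

The graphical installer is essentially a friendly front-end over these same operations.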

  • Installation is a breeze.
  • The applications you need to get going are installed by default.
  • Patching and upgrading software is automated.
  • Installing new software is as easy as picking it from a list.


Almost anyone can install and use Ubuntu without much of a fuss. What more could you ask for? Quite a bit, actually. Compared to Windows or Mac OS X (still the two heaviest hitters in the desktop operating system market), all Linux flavors are left wanting. Configuration of anything but the most rudimentary options requires the use of the command-line, which is not an environment that many people are comfortable in. For me? I live and die by the CLI and don't mind it one bit. If there's a software package that you read about for Linux and it's not on the list of stuff that Ubuntu provides, then there's no easy way to install it. Someone like me could download and unpack it, and compile it if needed. Most people are used to double-clicking on the installer or dragging the application (seemingly one file) to their hard drive. Don't get me started on the difficulty of installing certain drivers under Ubuntu.
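By "download and unpack it, and compile it if needed," I mean the classic source-install dance. A hedged sketch; the tarball name is a placeholder:

tar xzf some-package-1.0.tar.gz
cd some-package-1.0
./configure
make
sudo make install

Perfectly routine for a UNIX veteran, and utterly alien to someone whose mental model of "installing" is double-clicking an icon.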

The other innovation the Linux desktop has brought to the table is the "Live" distribution: an operating environment that boots from removable media such as a CD-ROM or flash drive, providing an instantly functional system that doesn't rely on a hard drive to operate. The Ubuntu and Kubuntu Desktop installation CDs initially launch in this mode, so you truly get to try it before you install it. Things tend to load very slowly from CD, though, so the whole operating system seems sluggish when run this way.
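Getting a Live CD of your own is just a download and a burn. A hedged sketch, assuming a Linux box with wodim installed; the device path is a placeholder for your burner:

wodim -v dev=/dev/cdrw ubuntu-7.10-desktop-i386.iso

From there, boot the disc and you're in the live environment.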

There are dozens of popular Live distributions you could check out. Back|Track, the end result of Whax and Auditor joining forces, is my favorite so far: it's built for hackers, geeks, auditors, and security professionals alike. Upon booting, you get a clean, functional desktop platform from which to launch any number of tests and exploits.

Truly, Ubuntu is only good for end users who are happy using it pretty much as it comes from a default installation. The Live version is sluggish and not recommended as a replacement for Windows; think of it as a preview. Other Live distros are great for tinkerers and nerds. For geeks and hackers, a full install of Debian or ArchLinux would be considerably more flexible than Ubuntu if you wish to stick with the Linux kernel.

In closing, I'll say that the biggest remaining hurdle for Linux on the way to end-user desktops is that the command line is still not optional, despite the best efforts of the Linux community. A command line should only be required as a last-ditch interface to the operating system, for recovering from some earth-shattering catastrophic failure. Windows has been at that point for years; OS X has as well. For some reason, Linux is lollygagging. It would also help if everyone could agree on one package-distribution model and stick with it. So far, I think Debian's system holds the most promise for the desktop and for enterprise workstations.