
2009-09-23

Booting Linux and Windows on separate drives

Normally, installing Windows isn't something I'd do. Not for friends. Not for family, and not for myself. My wife dual-boots Ubuntu and Vista on her laptop -- Vista because that's what shipped with it, and World Of Warcraft runs fine under it. She's plenty competent to keep it cleaned up, secure, and able to restore her stuff from backups if something goes wrong. She's probably better at Windows (at least Vista) than I am, and certainly doesn't need my help very often. As for me, I just didn't think I NEEDED Windows for much...

That is, until I found out how much better my employer's VPN works from Windows. It doesn't work well from MacOS, barely works under Ubuntu, and oddly, works okay under Solaris 10, but it's far from perfect. A few days ago, I logged into the VPN from the Corporate-mandated Windows XP Work PC in the office and was kind of in awe. We're talking an order of magnitude better, on a logarithmic scale. Figures, right? With all the after-hours remote work I'm finding myself doing more and more often these days, it looks like I'm installing Windows!

As a self-proclaimed Operating System Junkie, I suppose it wouldn't hurt to dabble in Windows just a little. After all, my wife's already running a game server on Win2k. What can it hurt?

The only machine I have lying around that I felt would do Windows justice is an old Dell PowerEdge tower server, which spends most of its time running Ubuntu. I didn't feel like re-partitioning or re-installing everything, so I unplugged the Ubuntu hard drive, scared up an old 20GB drive for Windows, bolted it into place, then went to town installing Windows. My goal was to move the Windows hard drive to the secondary IDE controller once installed, then figure out how to get GRUB to boot Windows.

From here, I'm assuming that:

  • You have a Linux distro installed on the first hard drive booting with GRUB
  • You have swapped the Linux hard drive out for a fresh one (also the first hard drive) and installed Windows to it.
  • Afterward, you have installed both hard drives, with Linux as the Master on the Primary IDE controller (or the first SATA drive) and Windows on the Secondary.

First, I wanted to make sure that the BIOS saw all my hardware. At this point, my setup was like this:

hd0 - Primary Master: 80GB HDD, Linux
hd1 - Primary Slave: Optical drive (DVD±RW, etc)
hd2 - Secondary Master: 20GB HDD, Windows

Next, I made certain that Linux booted properly. This, as expected, worked just fine. I rebooted, paused GRUB's boot process, and entered CLI mode to try to boot Windows. Initially, I tried this, which I thought should work:
grub> rootnoverify (hd2,0)  # Select partition, don't mount it
grub> chainloader +1 # Calls the first sector, should be Windows loader
grub> boot # What do you think?

Starting up ...

Yeah, right. It locks up. Doesn't even try.

Reading up on the GRUB documentation, I found the map command. Score! It swaps the BIOS drive mappings around, so the OS you chainload thinks it's booting from the first drive.
grub> map (hd0) (hd2)       # Maps hd2 (as above) to hd0
grub> map (hd2) (hd0) # ... and vice versa ...
grub> rootnoverify (hd2,0)
grub> chainloader +1
grub> boot
Amazingly, map did the trick and Windows started booting. It thinks it's running on C: and that Linux is on the secondary Master. Now, to take this and make a "Windows" option in the GRUB menu. Boot into Linux and add these lines to the end of /boot/grub/menu.lst:
title          Windows
map (hd0) (hd2)
map (hd2) (hd0)
rootnoverify (hd2,0)
chainloader +1
While you're in there, you may want to look for the Timeout line as well, and increase it. I chose not to, because I'll be booting to Windows very rarely.
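For reference, the line in question sits near the top of menu.lst and looks something like this (the value is the number of seconds GRUB waits before booting the default entry):
timeout         10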

Then, update GRUB's configuration. On Debian-based systems, update-grub regenerates the automatically managed sections of menu.lst and leaves manual entries like ours alone:
$ sudo update-grub
Now, give it a reboot and make sure that both Windows and Linux boot from GRUB as expected. This little project actually went easier than I'd expected, mostly thanks to GRUB's documentation. While extensive and technical, it is well-organized.

By the way, I tested the VPN for about 9 hours today and it was rock solid the whole time. Better than I can say for the other operating systems I've tried it with. At least I got some benefit from using Windows. If only I had awesome coffee, an IBM Model M and my MX Revolution mouse at the office every day. And if I could work in my pajamas.


Now, if you'll excuse me, I need to go take a shower with concentrated chlorine bleach and a cheese grater to get rid of all this Microsoft residue.

2008-10-19

Sysadmin Sunday: Apache Name Based Hosting mini-howto

Apache Name Based Hosting configuration
by Asmodian X

Contents
1. Description
2. Getting started
3. Base Filesystem Layout
4. Base Configuration
5. Name based hosting configuration (WWW only)
6. Name based hosting configuration (SSL single site)
7. Implementing the configuration

1. Description

This covers Apache name-based hosting configuration on Debian or Ubuntu Linux Server Edition, and is intended for intermediate Linux/UN*X administrators. You will need the Apache mod_vhost_alias module, along with apache2, openssl and whatever other Apache modules you want.

2. Getting started

If you have not already installed apache ...

At the Ubuntu/Debian Linux prompt:


$ sudo apt-get install apache2
$ sudo a2enmod vhost_alias
$ sudo a2enmod ssl

3. Base Filesystem Layout
htdocs layout:

/data/sites
• ssl
  ⁃ symlink to the site folder in www
• www
  ⁃ site_url
    ⁃ htdocs
    ⁃ cgi-bin


This could easily be adapted to SuSE's standard of /srv/www/sites/www, etc. The site_url folder needs to be named exactly what the end user will type as the DNS name, so there needs to be a folder called host.example.com as well as www.host.example.com. This is easily accomplished with symlinks in Linux.
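A rough sketch of creating that layout from the shell (host.example.com is just an example name; use your own hostnames):

$ sudo mkdir -p /data/sites/www/host.example.com/htdocs
$ sudo mkdir -p /data/sites/www/host.example.com/cgi-bin
$ cd /data/sites/www
$ sudo ln -s host.example.com www.host.example.com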

Config layout: (based on the Ubuntu/Debian standard)

/etc/apache2
• sites-available
• sites-enabled
• mods-available
• mods-enabled
• ssl
  ⁃ sitename
    ⁃ certificate file
The ssl directory could easily be in /etc/ssl but this is up to you.

4. Base Configuration
This is the default Debian/Ubuntu apache2.conf file. No changes were made here.

ServerRoot "/etc/apache2"
LockFile /var/lock/apache2/accept.lock
PidFile ${APACHE_PID_FILE}
Timeout 300
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15
<IfModule mpm_prefork_module>
StartServers 5
MinSpareServers 5
MaxSpareServers 10
MaxClients 150
MaxRequestsPerChild 0
</IfModule>
<IfModule mpm_worker_module>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}
AccessFileName .htaccess
<Files ~ "^\.ht">
Order allow,deny
Deny from all
</Files>
DefaultType text/plain
HostnameLookups Off
ErrorLog /var/log/apache2/error.log
LogLevel warn
Include /etc/apache2/mods-enabled/*.load
Include /etc/apache2/mods-enabled/*.conf
Include /etc/apache2/httpd.conf
Include /etc/apache2/ports.conf
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
ServerTokens Full
ServerSignature On
Include /etc/apache2/conf.d/
Include /etc/apache2/sites-enabled/
Listen 80
Listen 443

5. Name based hosting configuration (WWW only)

UseCanonicalName Off
LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon
DirectoryIndex index.html index.shtml index.php index.htm
<Directory /data/sites/www>
Options FollowSymLinks
AllowOverride All
</Directory>
<VirtualHost *:80>
ServerName host.example.com
CustomLog /var/log/apache2/access_log.host.vhost vcommon
VirtualDocumentRoot /data/sites/www/%0/htdocs/
VirtualScriptAlias /data/sites/www/%0/cgi-bin/
</VirtualHost>

WWW name-based hosting requires the mod_vhost_alias Apache 2 module. On any interface Apache listens on, the hostname the client requested is matched against a directory name in /data/sites/www/.
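A quick way to check that the mapping works, assuming you have dropped an index.html into /data/sites/www/host.example.com/htdocs/ (again, an example name), is to fake the Host header with curl:

$ curl -H 'Host: host.example.com' http://127.0.0.1/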

6. Name based hosting configuration (SSL single site)

UseCanonicalName Off
LogFormat "%V %h %l %u %t \"%r\" %s %b" vcommon
DirectoryIndex index.html index.shtml index.php index.htm
<Directory /data/sites/ssl>
Options FollowSymLinks
AllowOverride All
</Directory>
<VirtualHost 1.2.3.4:443>
SSLEngine On
SSLCertificateFile /etc/apache2/ssl/generic/generic.crt
ServerName host.example.com
CustomLog /var/log/apache2/access_log.host.vhost vcommon
VirtualDocumentRoot /data/sites/ssl/host.example.com/htdocs/
VirtualScriptAlias /data/sites/ssl/host.example.com/cgi-bin/
</VirtualHost>

Alternatively, you can add another virtual host for port 80 in case you want to exclude this site from the name-based section above.

SSL wants a static port, a static IP, or both. It's easier to have a static IP, but either will do. Also, you will need a dedicated SSL certificate for each site (lest you get an SSL error message on the client side), or you need to get a wildcard SSL certificate for your domain. This assumes you are assigning sites under the example.com domain, such as site1.example.com, site2.example.com, etc.

If you are dealing with different DNS names for each site then individual certificates are needed.
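If you just need something to test with before real certificates arrive, a self-signed certificate will do (clients will still warn about it). A rough sketch, with paths matching the SSLCertificateFile line above:

$ sudo mkdir -p /etc/apache2/ssl/generic
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /etc/apache2/ssl/generic/generic.key \
    -out /etc/apache2/ssl/generic/generic.crt

Either append the private key to the .crt file or add a matching SSLCertificateKeyFile line to the virtual host so Apache can find the key.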

7. Implementing the configuration
When installing the configuration, take these steps (a rough command sketch follows the list):

1. Remove the /etc/apache2/sites-enabled/default configuration symlink.
2. Save the name-based hosting configurations shown above as files in the /etc/apache2/sites-available folder.
3. Create symlinks from the sites-available configuration files into the sites-enabled folder.
4. Restart apache.
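Assuming the two configurations above were saved as sites-available/namevhosts and sites-available/sslvhost (the file names are just examples), the Debian helper scripts handle the symlinking:

$ sudo a2dissite default
$ sudo a2ensite namevhosts
$ sudo a2ensite sslvhost
$ sudo apache2ctl configtest
$ sudo /etc/init.d/apache2 restart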

2008-02-10

Sysadmin Sunday: Pure-ftpd configuration on Ubuntu Server Edition

0. Introduction

There are tons of FTP servers out there. Some are leftovers from the stone age; others are fairly up to date, with SSL capability and virtual user support. In this case I have chosen Ubuntu Server and Pure-FTPd.

Why: Ubuntu comes out of the box unencumbered by the unnecessary bloat that many server editions are forced to install through a spider web of software dependencies. At the same time, it has a very mature package management system which allows for easy software updates.

I'll stop there, because I don't intend this to be a flame war over which Linux is best. An important difference between the Ubuntu installation and Pure-FTPd's native configuration is that Ubuntu uses a configuration wrapper instead of options in a start-up script, which is why I am writing this article.

This article assumes that you have an intermediate or advanced knowledge of command line based UNIX operating systems.

-=-=-=-=-=-=-=-
Table of contents:
-=-=-=-=-=-=-=-
0......Introduction
1......System setup and overview of what needs to be done
2......Package installation
3......Wrapper configuration with virtual users and SSL
4......Access control via virtual users, iptables and/or Pure-FTPd
5......Informative resources
-=-=-=-=-=-=-=-


1. System setup and overview of what needs to be done

-=-=-=-=-=-=-=-
We will need an Ubuntu Server Edition installation, preferably with a firewall in place.
In this case I have a pile of users who need to use Dreamweaver to edit their web sites. Preferably, they should each have their own login, and those logins need to be chrooted to their own directories. Secondly, the users have no earthly reason to have a system user account.
For security's sake, I want the option for them to use encryption (once we can get licenses for the newer version of Dreamweaver, which supports it). Lastly, these virtual users should have disk space quotas so some idiot doesn't upload a ton of family pictures that were taken with a 10-megapixel camera and are like 20 MB apiece, causing the storage array to !#*& itself.

Pure-Ftpd can use arbitrary file paths for virtual user home directories. You can assign a local system user account and group for each virtual user or group of virtual users.

You can make a system user and group called webadmin who owns the "sites" folder under "/data/sites". From a system standpoint, all of the virtual users are doing business as "webadmin". Pure-FTPd handles access control and permissions on its end and keeps people in their home folders.
-=-=-=-=-=-=-=-
2. Package installation
-=-=-=-=-=-=-=-
Run the command:

apt-get install pure-ftpd
There is a GUI configuration tool, but by default Ubuntu Server does not have a GUI, so I leave that to you.

-=-=-=-=-=-=-=-
3. Wrapper configuration with virtual users and SSL
-=-=-=-=-=-=-=-
Once installed, the wrapper configuration folder is "/etc/pure-ftpd". The folder contains a structure like so:

root@stage:/etc/pure-ftpd# ls -al
total 32
drwxr-xr-x   5 root root  4096 2008-02-20 01:32 .
drwxr-xr-x 133 root root 12288 2008-02-20 01:32 ..
drwxr-xr-x   2 root root  4096 2008-02-20 01:32 auth
drwxr-xr-x   2 root root  4096 2008-02-20 01:32 conf
drwxr-xr-x   2 root root  4096 2007-06-21 19:01 db
-rw-r--r--   1 root root   230 2007-06-21 19:01 pureftpd-dir-aliases

Under auth you will find the following symlinks:
root@stage:/etc/pure-ftpd/auth# ls -alF
total 8
drwxr-xr-x 2 root root 4096 2008-02-20 01:32 ./
drwxr-xr-x 5 root root 4096 2008-02-20 01:32 ../
lrwxrwxrwx 1 root root 26 2008-02-20 01:32 65unix -> ../conf/UnixAuthentication
lrwxrwxrwx 1 root root 25 2008-02-20 01:32 70pam -> ../conf/PAMAuthentication
Delete all of these and make a symlink to "../conf/PureDB":
root@stage:/etc/pure-ftpd/auth# ln -s ../conf/PureDB
Go to the conf directory and edit the "PAMAuthentication" file to say NO instead of YES.
Add a new file called "ChrootEveryone" containing the word "YES".
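Roughly the same edits can be made from a root shell; the stock conf files use lowercase yes/no, so that's what this sketch writes:

cd /etc/pure-ftpd/conf
echo no > PAMAuthentication
echo yes > ChrootEveryone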

Now let's make a user!
pure-pw useradd test -u webadmin -g webadmin -d /data/sites/localhost/ -N 25
pure-pw mkdb
Where -N is a 25 MB quota, -u and -g are the user and group of the corresponding system account, and -d is the folder that the user is chrooted into (useradd will prompt you to set the new user's password). The "mkdb" command compiles the binary password database.
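To double-check the account you just created, pure-pw can print it back:

pure-pw show test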

Now that we have our chrooted FTP environment finished, we just need to configure the SSL option.
It is possible to FORCE users to use FTP over SSL, but that's outside the scope of this article.
(from the pure-ftpd documentation)
mkdir -p /etc/ssl/private
openssl req -x509 -nodes -newkey rsa:1024 -keyout \
/etc/ssl/private/pure-ftpd.pem \
-out /etc/ssl/private/pure-ftpd.pem

chmod 600 /etc/ssl/private/*.pem
Then go into "/etc/pure-ftpd/conf" and edit the file named "TLS" and add the number "1".
(0 disables encryption, 1 makes it optional and 2 makes it mandatory).
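Or, as a one-liner from a root shell:

echo 1 > /etc/pure-ftpd/conf/TLS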

Now restart the service:
/etc/init.d/pure-ftpd restart

And log in to your new FTP server!
-=-=-=-=-=-
4. Access control via virtual users, iptables and/or Pure-FTPd
-=-=-=-=-=-
It pisses me off to read the logs and see all of the automated Interweb exploit scripted attacks. So here are some suggestions to keep the automated attacks down.

1. hosts.deny doesn't work; use iptables filtering (see the sketch after this list).
2. Grab a blackhole IP list (BBL) from http://www.unixhub.com/block.html.
3. Determine your scope of service: if your users work only within the continental United States, then blocking APNIC and any other non-local IP ranges in their entirety would be a good idea.
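A rough iptables sketch for suggestion 1, where 203.0.113.0/24 stands in for whatever range you want to drop:

iptables -A INPUT -p tcp --dport 21 -s 203.0.113.0/24 -j DROP   # keep the range off the FTP control port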

If you want to tie it down on a per-user basis, try this:
pure-pw usermod testuser \
-r IP-ADDRESS-RANGE \
-R IP-ADDRESS-RANGE
where -r is the allowed IP range and -R is the denied range (example: -r 192.168.0.0/16 -R 200.0.0.0/8)

-=-=-=-=-=-
5. Informative resources
-=-=-=-=-=-
Barnes, Robert. "Bad IP addresses/Bob's Block List (BBL)", (Accessed Feb, 2008)
http://www.unixhub.com/block.html

Denis, Frank. "Pure ftpd", (Accessed Feb, 2008)
http://www.pureftpd.org
http://download.pureftpd.org/pub/pure-ftpd/doc/README.TLS

Hornburg, Stefan (Racke). "Debian pure-ftpd-wrapper man page", (Accessed Feb, 2008)
http://www.penguin-soft.com/penguin/man/8/pure-ftpd-wrapper.html

2007-12-01

Upgrade your mobo BIOS without Windows or DOS.

Sometimes you find a nifty piece of hardware that you just can't let go into disuse. This time around it was a Tualatin Pentium 3-S 1266MHz CPU, new-old stock, new-in-box. I got it some time ago to upgrade a PC for family that, it turns out, just upgraded the whole system instead. Thus it sat around in the box until I ran across a mobo to drop it in. Recently I found a system at my favorite shopping destination (Surplus Exchange) that had a Tualatin-capable mobo; the DFI CM33-TL just so happens to max out at the 1.26GHz P3-S I already had. Even nicer is that it is the Rev C board, which with the newer BIOS updates can boot from USB and can do 48-bit ATA addressing. Alas, no AGP slot. So why all the love for an old P3 server chip? The later P3-S could outperform the early P4 chips and use half the wattage!

So what do we do when all we have to boot the system with is a non-Microsoft OS, and most BIOS update utilities run in Windows or use disk creation software that runs in Windows/DOS? Luckily, it is possible to update some mobos without having to resort to using an unwanted OS. DFI has made the CM33-TL able to boot from a floppy, run a program under Windows to flash the BIOS, or enter an update mode that simply reads the flash utility and BIOS file from a floppy. It turns out that it is a good thing they enabled all three. Under a fairly standard Ubuntu Linux install I was able to create a floppy that the DFI board could update from, by combining the BIOS update features in a way DFI didn't document.

Several steps that worked for me:
1. Nab the BIOS update of choice for your mobo & revision. Be sure your file is correct - close doesn't cut it with a BIOS. It's either an exact match or something won't work right. In my case I could nab the smaller download intended for a Windows-based update utility.
2. Extract the .zip file containing the utility and BIOS image. Many of the .exe files manufacturers supply are programs meant to run under DOS or a DOS shell to create a disk image. By having the .zip we can get around that.
3. Copy the extracted files to a freshly formatted and tested floppy (basic FAT12/MS-DOS format is fine); see the sketch after this list. Having a good floppy is key to a successful flash. GIGO is an important point to consider when doing something that can brick a system.
4. Reboot the system and be ready to press the BIOS flash key(s) when prompted. On the CM33-TL you press Alt-F2 just after the RAM test and floppy seek.
5. The BIOS will then enter the flash update mode and read the floppy. If it determines the BIOS image is compatible, it will begin to flash it to the BIOS chip.
6. Once it's done, enter the BIOS setup and "Load Safe Defaults". This resets any settings that might cause the system to fail to boot. Go through the menus and set things as you need.
7. Test boot to be sure it works as before. Test boot again using the new features and marvel at the sudden uses that have opened up.
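For steps 2 and 3, the Linux side might look roughly like this, assuming the floppy drive is /dev/fd0 and the download was saved as bios_update.zip (both placeholders):

$ unzip bios_update.zip -d ~/biosfiles
$ sudo mkfs.msdos /dev/fd0          # plain FAT format; no boot sector needed
$ sudo mount /dev/fd0 /mnt
$ sudo cp ~/biosfiles/* /mnt/
$ sudo umount /mnt                  # make sure everything is flushed to the disk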

I had been concerned about having to make a bootable floppy for the update but the BIOS option to enter the update mode does not need a fully bootable floppy to operate.

With a system like this it is possible to operate a NAS with large drives on a board that boots from a USB thumb drive, runs on older, cheap RAM, and uses little power. Having a system that boots from USB allows you to configure the server to spin down drives that are idle and save more power; an OS on a USB device will not need to spin up the main/RAID drives to write logs, etc. Smart choices of hardware can make a cobbled-together server operate more efficiently.

2007-04-15

Solaris

So, you've just received your gratis Solaris 10 DVD set, and you already know that your hardware works and has basic drivers because you used the
Hardware Check Tool ISO. However, the DVD boots but ends up complaining -- ERROR: The disc you inserted is not a Solaris OS CD/DVD?! Try setting the DVD drive as the slave drive on the main ATA channel with the HDD. It should boot and install fine then.

Target: ECS/PC Chips M963GV mobo w/ SiS 551GX/964L chipset and a 2.8GHz HT P4.

EDIT/UPDATE:

Now that it's installed and booted, you want to move the drive back to the secondary ATA channel, so each drive can have a channel to itself. The problem is that Solaris maintains a hard-set device map. Once booted into the install with the drive back on the secondary channel, a quick run of prtconf from a root terminal shows ide, instance #1 (driver not attached). Drat! With a bit of help from Google and a good blog at blogs.sun.com called PotstickerGuru, we get the devfsadm command, which lets us rebuild the device map. So issue devfsadm -r / from that root session, run prtconf again, and the second channel should now be recognized, showing ide, instance #1 / sd, instance #1. Reboot, and the drive will be recognized and functional. Now we can stick in a CD-RW with the sfe driver for the SiS900 Fast Ethernet chip and finally get the system online.
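Roughly, the sequence from a root shell looks like this (a sketch of the steps above, not a transcript):

prtconf          # look for "ide, instance #1 (driver not attached)"
devfsadm -r /    # rebuild the device map, as described above
prtconf          # the ide and sd instances should now show as attached
reboot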