2010-03-30

Egon is not amused.

CERN Scientists crossed the proton streams again.

Dr. Egon Spengler: There's something very important I forgot to tell you.
Dr. Peter Venkman: What?
Dr. Egon Spengler: Don't cross the streams.
Dr. Peter Venkman: Why?
Dr. Egon Spengler: It would be bad.
Dr. Peter Venkman: I'm fuzzy on the whole good/bad thing. What do you mean, "bad"?
Dr. Egon Spengler: Try to imagine all life as you know it stopping instantaneously and every molecule in your body exploding at the speed of light.
Dr. Ray Stantz: Total protonic reversal.
Dr. Peter Venkman: Right. That's bad. Okay. All right. Important safety tip. Thanks, Egon.
All kidding aside, the most recent particle collisions at 7 TeV are the most powerful proton collisions ever made, and the Large Hadron Collider is still running at only 50% of its intended capacity.

But mostly, I just wanted to interject with some Ghostbusters humor.

2010-03-24

Guest post: Pulling audio from YouTube with PHP and ffmpeg

Editorial note:
Patrick Teague is a colleague of mine from my first job in the mid-1990s. When I heard of the project he was working on, I asked if he wanted to share it here, and he obliged.

I got this working instantly on OpenBSD. All I needed to do was:
sudo pkg_add pear php5-curl ffmpeg (and then copy the sample php5-curl config into place)
sudo pear install Console_CommandLine

I also noticed that I occasionally needed to run the script twice: the first run set the proper cookies for curl, and after that it worked fine.

Enjoy!


The attached script requires the following. I've noted the Ubuntu packages, which should match Debian; other Linux and/or BSD distros should be similar. On Windows, the MSI installer should just be a matter of ticking off the options.

  • PHP 5.2+ CLI (php5-cli) with cURL installed (php5-curl)
  • PEAR's (php-pear) Console/CommandLine
  • ffmpeg binary should be in your environment $PATH and should have access to libmp3lame

I have no idea why people enjoy listening to music as videos with a static image, but YouTube is full of these, as well as music videos and people playing their own remixes or original music. If I hear something I like, I then try to get it from eMusic, iTunes, directly from the artist at a show, or somewhere else. Sometimes the music just isn't available yet (no official CD/mp3 release), or the tunes are available directly off the musician's website but there's a better/live remix on YouTube...

I found myself listening to something the other night and was able to find plenty of tracks from the artist on eMusic and iTunes, but not that particular track. In one of the comments somebody asked where they could get a copy of the track, and a reply mentioned mp3ify. I googled it and found http://www.mp3ify.com/, which I initially used to grab the mp3. After using it two or three times it screamed at me for not making a donation, so I figured I could work out how they did it and stop wasting somebody else's bandwidth and processing time. Not to mention, the more control I have over the process, the more control I have over my results.

I have FlashGot installed, and it allows me to download the flv (flash) files on YouTube pages, but it wasn't letting me see what the URL was, and the only flv I could easily find in the HTML code was the player itself. FlashGot does allow you to add downloaders and send the downloader various options, including the URL, cookie data, referer URL, etc. I wrote a quick shell script that would dump this information to a text file and used it in my various attempts to grab the flv from YouTube.

Note: I must have really needed sleep, because I completely forgot that I could have had an easier time grabbing the URL and cookie data from Firebug.

I haven't done extensive testing, but the YouTube CDN seems to require a combination of several settings. I found that I couldn't grab the flv without certain cookie data, a referer URL, a "valid" User-Agent string, and a particular URL. It may not actually care about the User-Agent (I was rushing my testing), but I couldn't seem to get wget or curl to download the flv without setting a Firefox UA string. It also seems to want certain cookie data, though I'm not sure how much of the cookie data is relevant versus the generated timestamp in the flv's URL - again, I was rushing this.
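Roughly, the manual fetch that eventually worked looked something like the following. The UA string, referer, cookie string, and flv URL here are all placeholders - substitute whatever you pull out of your own session (via Firebug or a FlashGot dump):

curl -o video.flv \
  -A "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2) Gecko/20100115 Firefox/3.6" \
  -e "http://www.youtube.com/watch?v=VIDEO_ID" \
  -b "cookie_name=cookie_value; another_cookie=value" \
  "http://the-flv-url-you-extracted"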

From what I can tell, YouTube creates a javascript object named yt and adds settings to it via the yt.setConfig() function. yt.setConfig() can accept single key/value pairs [i.e. yt.setConfig('LOGGED_IN', true);] or large JSON objects with several key/value pairs. In one of these large objects, the value for VIDEO_TITLE is set (needed for modifying the URL), and the flv URL is listed within the SWF_ARGS key, which is a property list; the particular value I found the URL in is fmt_url_map.

The value for fmt_url_map is URL encoded, with a pipe separating three values. The first value is a number, the second value is the URL that I used for the flv, and the third value is another URL (I'm not sure what that one is for). The second value is mostly correct as-is, but can't be used without modification: at the very end of the URL is a comma followed by a couple of characters, and these need to be trimmed off the end and replaced with &#/ + a modified version of the VIDEO_TITLE + _video.flv.

I've only seen a limited set of translated characters in the modified VIDEO_TITLE so far: space ( ), dash (-), period (.), and ampersand (&) are converted to an underscore (_), and square brackets (both [ and ]) are removed. I'm guessing there's a larger set of translated characters, but I'm not sure what they are... I'm also not sure how important it is to get these exactly right, as at least one of my attempts had an incorrect name but worked anyway (I would still prefer to use a correct name, just so it doesn't raise any red flags).
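For illustration, here's my best guess at that munging as a shell one-liner; the character set is only what I've observed, so it's almost certainly incomplete:

title="Some Artist - Track [Live].Remix"
echo "$title" | sed -e 's/[][]//g' -e 's/[ .&-]/_/g'
# prints: Some_Artist___Track_Live_Remix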

Honestly, I'd prefer to run the html page through some shell utility (possibly rhino) that could process javascript and output the variables needed, so I could let YouTube's code do its own work. That would also make it more forward compatible if YouTube decided to change how they structure their URLs or something. In the meantime I used a combination of grep and sed to initially pull out the values I needed (exchanged for preg_grep() and preg_replace() in the PHP code), roughly as sketched below.
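Purely as a sketch of that grep/sed approach: something along these lines pulls fmt_url_map out of the saved watch page, URL-decodes it, and splits it on the pipes. The exact pattern (key name, quoting) is a guess at the current markup and will break whenever YouTube changes the page:

grep -o '"fmt_url_map": *"[^"]*"' watch.html \
  | sed -e 's/^.*: *"//' -e 's/"$//' \
  | perl -pe 's/%([0-9A-Fa-f]{2})/chr(hex($1))/ge' \
  | tr '|' '\n'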

The final part was to convert the flv to a different format. I've used ffmpeg in the past for converting between media formats, but this was the first time I'd used it to convert an flv. There are several nifty things ffmpeg can do - merge raw video files with audio files, transcode decrypted VOBs into a video+audio format, audio and/or video conversions (i.e. wav into mp3), etc. I googled for ffmpeg convert flv to mp3 to get some quick solutions... One of the solutions I found suggested using -acodec mp3, but I couldn't get that to work. Initially I dropped back to mp2, but later discovered I needed to use -acodec libmp3lame instead. I should probably also state that -acodec copy is valid, but it could cause problems if the embedded audio is in a format that isn't acceptable in an mp3 wrapper.

The other switches on ffmpeg I'm using are -ac 2 (setting the audio channels to 2), -ab 128k (the audio bitrate), -vn (disable video recording - don't need the video for an mp3), and -y (overwrite output files). There are many other switches available and I've thought about adding the following (possibly via some switches set up in Console/CommandLine) - -title string, -author string, -copyright string, -comment string, -album string, -track number, and -year number as these should populate the fields in the id3 tag.
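Putting those pieces together, the conversion the script runs boils down to something like this (filenames are placeholders; if you want the id3 data, tack the metadata switches mentioned above onto the end, assuming your ffmpeg build accepts them):

ffmpeg -y -i input.flv -vn -acodec libmp3lame -ac 2 -ab 128k output.mp3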

The final bit is more of a pet peeve than anything that's really needed. My preference is to set the date on files to match what's on the server - otherwise, how do you know if it's been modified? If you don't want or need this, feel free to comment out the if statement surrounding curl_getinfo( $ch, CURLINFO_FILETIME ) as well as the if ( !touch( $file_mp3, $GLOBALS['server_filetime'] ) ) { section at the very end, right after the passthru( $cmd );.
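As an aside, if you're grabbing the flv by hand with curl rather than through the script, curl's -R (--remote-time) flag gets you the same effect by setting the downloaded file's mtime from the server's timestamp:

curl -R -o video.flv "http://the-flv-url-you-extracted"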

Download:

get-youtube (syntax highlighted)

get-youtube (plain text source code)

DNS Tunneling Part 4: Honorable mention

DNS Tunneling has been around since at least 2005. At least, that's when I first heard about it. In the last 5 years, many tools have been written to leverage this infrastructure vulnerability, and the ones I touched on are but a few. I wanted the ability to demonstrate a working SSH tunnel via DNS from all major operating systems, but I couldn't find any one tool that worked great on all of my favorites. For example, OzymanDNS won't run on OpenBSD without completely re-compiling perl in an arguably insecure configuration. It just so happens that dns2tcp works on pretty much everything except Windows. Still, there are others out there that might be worth looking at. Here are a few:

Heyoka is a Windows-only tool, supposedly with some interesting stealth technology. The binary is both the server and the client, and it can tunnel any TCP connection to localhost (a listening VNC, RDP, Squid, or COPSSH server, for example). I tested it without using any of its advanced features, and it works. I used WinXP home in my lab as the server, and Windows 7 on my Macbook as the client. I had to spawn an administrator command shell on Win7 to get it to run. YMMV.

DNSCat is a nifty, minimal tool that acts kind of like netcat, only over DNS. For some reason, I couldn't get OS X to play nicely with it, despite the fact that it looked like it wanted to work. Also, the "server" would occasionally bail out to the shell, so I often found myself wrapping it in a "while true" one-liner shell script loop on the server end. Using the client on OpenBSD seemed to work great, as seen in this screen shot I took. The "server" activity is in the window on the right. Ron gets props for some other fun stuff, too, such as weaponizing DNSCat as a Metasploit Framework payload. This tool, combined with netcat or stunnel, could prove to be quite flexible, I think.
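For what it's worth, the wrapper was nothing fancy - something along these lines, though the dnscat arguments here are from memory, so check --help on your copy before trusting them:

$ while true; do ./dnscat --listen; sleep 2; done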

I didn't get to play with NSTX, but it looks like a linux-only affair. I don't run Linux on my laptop, so it'd have been difficult to test in the field.

Have any others you've used with good results?

Shodan gets monetized

... well, kind of.


Achillean has added even more to the Shodan index, and for a small fee, you can export up to 1,000 records at a time. An export will cost you anywhere from $2.50 each if you buy export credits in bulk, all the way up to $5.00 for a one-time export.

As HD Moore noted, that means that you could, in theory, buy all of the HTTP data (~22 million records worth) for around $55,000. I'm sure you could negotiate a reduced rate for 20,000 credits, though, if you asked nicely.

Also, I don't know about you, but my time is worth money and if I wanted 22 million records like this, it would take more than a mere $50k of my billable time to scrape it all together. We'll see how this attempt at monetization works out. It'd be a shame to see all this data get scrapped because its owner can't afford to keep it online.

2010-03-23

DNS Tunneling Part 3: Linux, Mac OS X and BSD clients

In the third part of this series, I'll cover using DNS2TCP on unix-like operating systems. DNS2TCP is not written in perl. It's a set of small C programs. It can tunnel multiple types of traffic, as opposed to OzymanDNS which is designed to be used as a Proxy Command for SSH.

Compiling is straight-forward once you download it. This worked on all platforms I tried it on, which includes Ubuntu Linux, OpenBSD, and Mac OS X.
$ tar xvfz dns2tcp-0.4.3.tar.gz
$ cd dns2tcp-0.4.3
$ ./configure
$ make
$ sudo make install

Assuming that you have a valid subdomain nameserver (as outlined in Part 1), you just need to edit the configuration file for the "dns2tcpd" server. There's an example file "server/dns2tcpdrc" in the archive that I've modified. Of particular note, make sure to change the "listen" line to 0.0.0.0 or your ethernet interface's IP address; the default configuration will not work because it listens only on localhost. Also, make sure that the "domain" line matches your subdomain. Finally, you must make sure that the chroot directory exists - this is where dns2tcp caches its data. Note that the "ressources" keyword really is spelled that way in the config file (the author's English spelling is a bit off), so don't "correct" it. "ressources" lines have the format:

ressources = [resname]:[ip]:[port], ...

My config looks like this, for Squid and SSH:

# config file

listen = 0.0.0.0
port = 53
user=nobody
chroot = /var/empty/dns2tcp/
domain = l33t.h-i-r.net
ressources = ssh:127.0.0.1:22 , proxy:127.0.0.1:3128

Then, run it as follows:
$ sudo dns2tcpd -f /path/to/dns2tcpdrc
or, if you wish to also run it in the foreground for use in a screen session, add the -F flag:
$ sudo dns2tcpd -F -f /path/to/dns2tcpdrc

That's it for the server side.

Now, on the client end, compile and install dns2tcp as well. Configure the "dns2tcprc" file. Unfortunately, it can only be configured with one "ressource" at a time. I am going to use SSH with dynamic proxy again.

#
# configuration :
#

domain = l33t.h-i-r.net
ressource = ssh
local_port = 2222
debug_level=1

I've found that this tunnel software can take a while to fully work. Sometimes up to five minutes. Once it catches on for the first time, though, it seems much more stable and quick than OzymanDNS on the same platforms. Launch it like this:

$ dns2tcpc -f /path/to/dns2tcprc [DNS Server]

Where [DNS Server] is a DNS server you can reach - probably the one you were issued by DHCP.
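On most unix-likes, the resolver DHCP handed you is sitting in /etc/resolv.conf, so the whole thing looks something like this (192.168.1.1 is just an example address):

$ grep nameserver /etc/resolv.conf
nameserver 192.168.1.1
$ dns2tcpc -f /path/to/dns2tcprc 192.168.1.1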

Activate the SSH tunnel from the CLI:
$ ssh -C -p 2222 -D8080 user@localhost

In the screen shot below, you can see both the SSH session and the dns2tcp client window open.


Again, configure Firefox to use the dynamic port you specified above as the proxy on localhost.


After this, you should be in action!

DNS Tunneling Series:
Part 1: Intro and Nameserver setup
Part 2: Windows Clients (using ozymandns)
Part 3: Linux, BSD and Mac OS X clients (using DNS2TCP)

DNS Tunneling Part 2: Windows Clients

Of all the tools I tried to get working, Dan Kaminsky's OzymanDNS was the only one I could find that actually works for Windows. Maybe there are others out there (link to them in the comments!) but I didn't find any at a glance. Also, Doxpara seems to be down, so here's a mirror of the source package for Linux/Unix/BSD/OS X.

This is generally okay, because OzymanDNS is a fine solution in and of itself, even if it hasn't been updated in five years or so. OzymanDNS server runs fine on Mac OS X, Linux, and BSD. It's all in perl, and heck, it might even work under cygwin on Windows. I haven't bothered trying. I'm using Linux as my server for ozymanDNS.

I did have to perform the following actions before OzymanDNS would run:

sudo perl -MCPAN -e 'install Net::DNS'
sudo perl -MCPAN -e 'install MIME::Base32'

This installs the DNS and Base32 perl modules that Kaminsky's scripts need.

Next, keep in mind the name you chose for your subdomain name server if you followed along in Part 1. You'll need that here. SSH to your server and start ozymanDNS. Keep in mind you'll need to leave this process running while you're on the road. I launched it inside a GNU Screen session so that it could run in the background and I could re-attach to it when I want to. The syntax is:
sudo ./nomde.pl -i [your external-facing IP] [your subdomain name]
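Since I keep it in a screen session, the whole routine looks roughly like this (the IP and subdomain below are placeholders - use your own from Part 1):

$ screen -S dnstunnel
$ sudo ./nomde.pl -i 203.0.113.10 tunnel.example.com

Detach with Ctrl-a d, and re-attach later with "screen -r dnstunnel".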



Someone made executables of these tools for Windows. You can download the Windows version of OzymanDNS (as well as putty and some DLLs) here. I recommend copying the DLLs and droute.exe into your path somewhere, like C:\Windows\System32 for example.

Once you're on the road and need to tunnel, configure putty. Click the screen shots below for full size.

Under Connection/Proxy, select the "Local" radio button, check the "Consider proxying local host connections" box, and enter "droute -r [DNS Server] sshdns.[your subdomain]" as shown below. The DNS server should probably be whatever DNS server you were assigned via DHCP (use "ipconfig /all" from a command window). I really don't know why you need something prefixed to your subdomain for ozymanDNS to work, but I always use "sshdns".

Switch to the Connection/SSH option in the configuration tree and enable compression.


Next, set up a Dynamic tunnel on port 8080 (or whatever you want) as displayed below. Then, finally, go back up to "Session" and connect to localhost port 22. Since this is a lot of work, I'd advise you to type something like "tunnel" into the "Saved Sessions" box and save it. This will save you a lot of hassle down the line.


If all goes well, you'll be prompted to verify the SSH key for the connection, and then be allowed to log in. You'll also have a working dynamic SOCKS tunnel thanks to this session. Again, I should remind you that this method of tunneling can be slow by nature of how DNS works, and tunneling more traffic over it via SSH will be even slower. We try to mitigate that with compression above, but it only helps so much.



Now, configure Firefox to use the dynamic proxy. Tools/Options, Advanced, Network, Connection Settings. Use localhost for the SOCKS proxy host, and set the port to the one you configured in Putty.


The final test is to make sure that we are actually going through the tunnel. I chose the old standby WhatIsMyIP.org.


I won't cover using ozymandns under Linux or BSD, but it works well enough. Use this on the client end to get a dynamic SOCKS proxy on port 8080.

ssh -D 8080 -o ProxyCommand="/path/to/droute.pl -r [DNS Server] sshdns.[your subdomain]" user@localhost

DNS Tunneling Series:
Part 1: Intro and Nameserver setup
Part 2: Windows Clients (using ozymandns)
Part 3: Linux, BSD and Mac OS X clients (using DNS2TCP)

DNS Tunneling Part 1: Intro and Nameserver setup

I know it's not new, but the fact is that DNS tunneling just works when a lot of other egress methods fall flat on their faces. I've been playing with DNS tunneling in earnest for the past few weeks, since I don't really have much better to do.

So, why does DNS tunneling work, anyway? First, let's look at the two main environments where you'd be likely to use DNS tunneling: Captive portals and enterprise web filters.

Most captive portal systems (as found in coffee shops, hotels, and airports) block all IP traffic to external hosts until you've paid, accepted the terms of service, or entered a valid code that a customer service rep gives you. They often employ a transparent HTTP proxy to redirect you to the captive portal's main interface, via a meta refresh or an HTTP 3xx redirect.

Most enterprise web-filters work by providing a SOCKS or HTTP proxy, and not allowing direct HTTP or HTTPS connections out from employees' workstations. If the content isn't allowed by the filter, the proxy returns an error message to the users.

Some hospitals I've visited use a hybrid of these technologies: employing both transparent proxies and web filtering. Generally, captive portal operators have very little recourse aside from banning you, if you get caught tunneling. People get fired for pulling these tricks with employers. As always: use this stuff wisely.

Now comes the fun part: DNS still works, more often than not, in both of these situations. To test it, try doing an nslookup on a popular domain. If it doesn't return a "private" RFC 1918 IP address (192.168.0.0/16, 172.16.0.0/12, 10.0.0.0/8), then there's a good chance DNS tunneling will work.
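For example (the second query assumes you've already delegated a tunnel subdomain, as described below, and test123 is just a throwaway label):

$ nslookup www.google.com               # should come back with a real, public address
$ nslookup test123.tunnel.example.com   # should eventually reach your own nameserver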

While you can't connect via UDP to arbitrary external servers, your computer can usually make requests against an internal DNS server all day long. When you tunnel via DNS, you are using a client program that encodes data into DNS requests for names under your tunnel subdomain; the internal DNS server recursively forwards those requests until they reach the nameserver you control (which we'll set up below).


The flow of data looks like this:
1) Your SSH client (or other application) sends data to the listening TCP port for the tunnel program
2) The tunnel program makes a DNS request for your tunnel subdomain to the private DNS server.
3) The private DNS server asks a root server for the authoritative NS record
4) The root server replies with the home server's address
5) The Private DNS server passes your DNS request to the home server
6) The home server acts on the data, tunneling traffic
7) The home server receives the TCP responses
8) The home server encodes the response data in a DNS reply packet
9) The DNS reply is sent to the private DNS server
10) The private DNS server passes the response to your DNS tunneling client
11) The tunneling client decodes this data and passes it to your client application


Using many, many strange-looking DNS requests and responses, it is possible to have a completely DNS-encapsulated TCP session. A keen-eyed admin will notice the unusual amount of DNS requests. This is NOT a stealthy way to tunnel, and it can be easy to detect. I should note that this can make things very, very slow, but I was getting surprisingly fast speeds at the local Starbucks Coffee last week:


The first and hardest part for any of us is getting our "server" (at home, co-located, or what have you) set up as an authoritative name server for a subdomain. It's also necessary that DNS traffic (UDP port 53) can get to your server. You'll have to configure that in your own firewall (at home) or check it with your provider. In my case, I'm running this in my DMZ with external-facing IP addresses and no firewall rules running on the router itself. I'm using no-ip.org for dynamic DNS to my home environment.

To add a subdomain name server, you first have to buy a domain and have primary nameservers for it. Most registrars allow you to fiddle with your DNS. For example, GoDaddy's "Total DNS Control" panel allows you to easily add a sub-domain name server. In this example, I'm editing kc-2600.com, adding a subdomain of "tunnel", and pointing its name server record at my dynamic DNS hostname. If you have a static IP, you can specify that as well.


I did something similar on ZoneEdit:


This really is the hardest part. Once you do this, you have to wait a while for the record to propagate to the root servers. But then, you're home-free. All of the tools I'm about to introduce you to will work just fine once you have your own authoritative subdomain nameserver record.
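An easy way to check on propagation is to query for the NS record yourself; swap in whatever subdomain you actually delegated:

$ dig NS tunnel.example.com
$ dig +trace NS tunnel.example.com    # follows the delegation down from the root servers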

DNS Tunneling Series:
Part 1: Intro and Nameserver setup
Part 2: Windows Clients (using ozymandns)
Part 3: Linux, BSD and Mac OS X clients (using DNS2TCP)

2010-03-11

Teaser: Cory Doctorow's new novel, For The Win

It was an odd bit of coincidence this morning. Frogman had shared a story with me from Boing Boing: 200 advance-release promo copies of Cory Doctorow's newest novel, For The Win, are being given away to teenage gamers to review. I should mention that I am thankful to have a few friends who skim Boing Boing for the really awesome and interesting stuff. There's far too much content for me to wade through in my RSS reader, so folks like Frogman are my Boing Boing filter.


Thinking back to an email conversation I had with Cory back in late November (in which I requested a preview copy for HiR), I was wondering if we were going to actually get one of them. Not five minutes later, UPS knocked on my door and dropped off a special package:

At any rate, we're excited to be able to get a sneak peek of Cory's latest work. Frogman will be doing the review. Look for it in the coming weeks!

The end of an era: SecurityFocus

SecurityFocus announced in a memo yesterday that it would, for the most part, cease operations.


It is survived by Bugtraq (which SecurityFocus picked up more than a decade ago) and a few other high-volume mailing lists. In turn, SecurityFocus was picked up by Symantec in 2002. This is a sad day, indeed.

2010-03-06

Ye-olde tech: Slide Rule

Even though this virtual slide rule is full of actual win, you can buy a real (if chintzy) slide rule for cheap from ThinkGeek.


I suck at math, and I'll readily admit it. You still need to be okay at math to use a slide rule: namely, you have to keep track of the order of magnitude of your calculations in your head or on paper. It's no secret that I dig old tech, though. Plus, analog calculators are fun and you can't exactly stuff a Babbage difference engine into your pocket.


Related: Upgrade!

2010-03-01

0x0d - Happy Birthday, HiR!

HiR ca. Late 1997

March 1st, 1997. That's the day I uploaded the first volume of HiR e-Zine (then called "Hackers Information Report") to a few local BBSes. I also pushed it out to a small inter-BBS forum with world-wide reach, and it found its way onto some "hacking" web sites within a matter of weeks.

The first issue was penned by me alone, and almost entirely on a road-trip with my parents, on my old NEC Versa 550D while sitting in the back seat of a powder-blue 1989 Ford Aerostar for hours on end. I was just a kid. Going back and reading some of my older stuff is sometimes embarrassing.

A few months later, I had friends from local BBSes submitting content - Frogman being one of them. Soon thereafter, I'd run into a reader of HiR in a college class -- Asmodian X, who also had plenty of fascinating things to add. Contributors came and went, and I'd answer some e-mail questions. I still keep in touch with some of the past contributors and commenters.

From 2001-2007, HiR faded in and out and was mostly dormant. I got busy. We all got busy. The core contributors all grew up, in one way or another. We'd come up with one or two interesting articles, and think maybe we should put together a new "issue" of HiR.

Even though we all live fairly close, that never happened.

In early 2007, we decided to go with a "blog" format. I think it works better. We write when we get a chance, and the comments section gets us closer to our readers. We changed the name, mostly to shed the "hacker" from it -- I think most of us have long given up hope of completely reclaiming "hacker" as a good word in all use cases. Now it's just a recursive acronym: HiR Information Report. Why the lower-case i? Partially to encourage HiR to be spelled out (like H. i. R.), and partially as a throwback to the uBiQuiToUS LoWeRVoWeLiNG of the 1990s. Can we leave that part behind us?

Some months, we really hammer out the content and muster up a post nearly every single day. Other months, we're all but silent. We're still busy, but we are still passionate. Also, in the last year, we've had a pair of really great guest posts. We hope to have more of these, and maybe even land a few more regular contributors.

At any rate, this is me saying "thanks," on behalf of the entire HiR crew, to those who've contributed to our journey. The co-writers. The guest posters. The commenters. The friends who have been around with us for what seems like an eternity in Internet years -- tossing us link-love even back in the 90s (lookin' at you, HNNCast). The folks who still archive our mess of old text files (yes, you can find us there; no, I'm not linking to it). The folks who have kicked it with us at meetups, cons, and user groups. And, of course, the readers, without whom we'd probably have given up on this little project long ago.