Thursday, December 18, 2008

WPA Cracking

In Yorkshire on holiday with the extended family. Touch of man flu!

It's been a while since my last post as life has been flat out. Just a week back I taught the first LE-only wireless attack course. I taught it at the Defford SB facility, which was perfect as, apart from a bunch of huge radio telescopes, there is no wireless interference at all.

What was interesting was the vast difference made by different antennas. I guess this is obvious but I had the chance to really test the differences between the omni-directional and directional antennas I had available. The out-and-out winner was the 12 dBi directional 'can' antenna which took us to the edge of the facility, at least 100 meters from the Access Point, with plenty of power left over. Having returned to the office I thought I would invest in a parabolic mesh antenna billed as 24 dBi. I bought 2, one for me and one for an operation I'm working on with a Police force. When they arrived they were HUGE! When put together the dish was at least 70cm square, not terribly useful in a covert setting. When hooked up, though, the coverage was astonishing; I reckon that 1km could be possible with clear line of sight.

As WPA cracking is very reliant on a dictionary attack, it is interesting to note that Elcomsoft are releasing a WPA-specific cracking tool that combines a dictionary attack with GPU acceleration, which is very exciting. They have offered me a beta copy and I will let you know how it goes.
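
For context, the classic offline dictionary attack against a captured WPA handshake is done with aircrack-ng. Something like the following, where the capture file, wordlist and BSSID are all made-up examples:-

aircrack-ng -w wordlist.txt -b 00:11:22:33:44:55 wpa-capture.cap

Each candidate passphrase has to be run through thousands of rounds of hashing before it can be tested, which is exactly why GPU acceleration makes such a difference.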

The company already has a brute-force WPA passphrase cracking capability with GPU acceleration, which the press have been having a field day over, saying WPA is dead. In reality a box with 2 super-fast NVIDIA GTX 280 cards in it will still take around 3 months to break an 8-character password. I think the new dictionary version will be much faster.

We shall see...

Thursday, October 16, 2008

Just take what you need!

Sat in Brussels airport, flight delayed for 2 hours, 10pm :(

I’ve been presenting today at the European Network Forensics and Security Conference in Holland. It is not a big event but there were some very interesting people in attendance including Laura Chappell from Wireshark University and James Lyle from NIST. I had not met either before but look forward to communicating more with them in the future.

I was presenting today on the subject of extracting just the information we perceive we need from a case, rather than always imaging an entire drive or, more commonly now, a gaggle, bunch, collection (what is the collective noun for drives?) which can easily exceed a TB. Now I know the purists amongst you will shout foul, the whole drive is best evidence, and I do not disagree with you; but when dealing with, for example, a fraud case where the predominant evidence will be found in email, an accounting partition and chat logs, why 'initially' image vast amounts of data when we know where to start? It is very straightforward to image out just a .pst file or take just a partition, and this can reduce processing and searching times tremendously. This does not mean that you never image the drive; however, when we have multiple machines to look at, why initially image them all when the pertinent data might be available in key containers?

A number of Police Forces in the UK, and I'm led to believe ACPO too, are looking at a methodology of pre-imaging triage to try and reduce workloads and backlogs, and I am in general agreement with this.

There are a bunch of ways of extracting what you need. On a live machine you can simply write your own script to search the machine and extract just the files you want. For example, open Notepad and just enter:-

xcopy "%systemdrive%\documents and settings\*.pst" /h /s

..save the text file as a batch file (myprog.bat) and put it on a USB key or external drive. When you plug the drive into a machine and run the batch file it will search all folders under Documents and Settings and copy back any .pst file it finds. Easy as that! You could make a couple of subtle changes and it would find and copy back all the thumbs.db files instead, which you could parse out in EnCase, FTK or Vinetto and have a pretty good idea of what images were on the machine. Quite handy.

xcopy "%systemdrive%\documents and settings\*bs.db" /h /s

If you want things to feel a bit more ‘forensic’ then use dd on the target system to extract what you need:-

dd if="c:\documents and settings\<user>\local settings\application data\microsoft\outlook\outlook.pst" of=e:\harvest\outlook.dd conv=noerror

You could use this method with Helix and use either the Windows terminal on a live machine or boot to the swanky new Ubuntu Linux side and do it there. You can then MD5 the file and off you go.

md5sum outlook.dd > md5.txt

The argument is even more compelling with live servers in a corporate environment. Tell a sysadmin that you are going to shut down his email server for 8 hours while you image it and he will go a rather nasty colour. Do a live response and just take the pertinent files, the .edb or whatever, and everyone is happy and you likely have all you need. The same argument can be made when looking at a RAID array. The 'Financial Director' under investigation will rarely, if ever, have access to the RAID controller to hide any data anywhere clever on the array disks. So in that situation, do a live response on his machine, figure out what disk partitions/folders he has access to and just go and get those. Imaging the appropriate partition on a RAID will give you everything you need and saves a shed-load of time trying to figure out the striping pattern.
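
As a sketch of that server-side grab: the path below is just the old Exchange default and will vary, and note that a mounted store is locked, so the store needs dismounting (or a VSS snapshot taking) before a straight copy will work:-

rem Exchange 2003-era default store location; adjust to the real path
xcopy "c:\program files\exchsrvr\mdbdata\priv1.*" e:\harvest\ /h /c

..which picks up the priv1.edb database and its companion priv1.stm streaming file together.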

I appreciate this blog entry is overly simplistic and all these decisions should be made on a case-by-case basis with full comprehension of what is potentially being missed; however, the modern investigator should be aware of these techniques and use them where appropriate.

Thursday, August 28, 2008

Backtrack 3 on the Asus EEE (that rhymes!)

I mentioned a few posts ago the wonders of the tiny Asus EEE. I've just had the latest 901 version delivered, with 8 hours of battery life and an Intel Atom processor. One of the coolest things I've been doing is booting the machine to an alternative OS on an SD card. Perhaps the most useful of these is the ability to boot the Backtrack distro. It means that you have your tiny portable machine totally ready to carry out sysadmin tasks and even wireless cracking using the inbuilt Atheros wireless chipset.

However, getting Backtrack 3 to boot on the EEE has been a problem and a number of forums have questions about it. When you download the bootable USB version (http://www.remote-exploit.org/cgi-bin/fileget?version=bt3-usb) there is a helpful text file telling you which files to copy to the USB key or SD card. Then get a command shell up in the EEE's Xandros Linux distro (just hold down CTRL-SHIFT-T), browse to the 'boot' folder on the card and run the ./bootinst.sh script. After that, as if by magic, you can boot to Backtrack by simply holding down the ESC key at boot time.

However, a number of people have noted that it seems impossible to run the shell script; you simply get an error message. The solution is very simple. If you look at the permissions for the script (ls -la) you will note that the files on the SD card do not have execute permissions. If you try to change the permissions:-

chmod 777 bootinst.sh

..it pretends to work, but another look at ls -la shows that it hasn't.

The problem is to do with the mount options for the device as a whole. If you execute the 'mount' command you will see that the device is mounted with the noexec flag set, and that is what is messing things up! With no other keys or devices plugged in it seems to always mount at /media/D:, so simply unmount the device:-

umount /media/D:

then remount with the following:-

mount -o rw /dev/sdc1 /media/D:

Remounting without the noexec flag allows files on the device to be executed. Now just browse back to the right directory:-

cd /media/D:/boot

then execute the shell script:-

./bootinst.sh

That’s it, now you can reboot to BT3. Have fun.

Wednesday, July 16, 2008

The way Linux pages data

I've been spending some more time looking at why the sectors missed in the NIST tests (see last post) fell in the middle of a read run. In their conclusions they state that:-

"Up to seven accessible sectors adjacent to a faulty sector may be missed when imaged with dd based tools in the Linux environment directly from the ATA interface."

This doesn't seem to make any sense if we are saying that some sectors are skipped when a bad sector is encountered; surely it would always be the bad sector first, with later sectors skipped? This explanation seems to go some of the way towards finding a solution.

When dd requests a block, the mapping layer figures out the position of the data on the disk via its logical block number. The kernel issues the read operation and the generic block layer kicks off the I/O operation to copy the data. Each transfer of data involves not just the block in question but also blocks that are adjacent to it. Hence a 4096-byte 'page' transferred from the device to the block buffering layer in the kernel (typically a page of RAM) will contain the bad block and adjacent 'good' blocks.

If you have a 4096-byte page with a single bad 512-byte block you will have, wait for it, 7 good 512-byte blocks in that page. This fits with the observation by NIST that up to 7 sectors may be missed; obviously something bad is happening to the entire 4096-byte page.

They then go on to conclude that:-

"For imaging with dd over the firewire interface, the length of runs of missed sectors associated with a single, isolated faulty sector was a multiple of eight sectors."

This makes perfect sense: as the kernel pages the data in 4096-byte units, each containing 1 bad and up to 7 good sectors, any 'loss' of data by the block buffering layer would be in whole pages, i.e. multiples of 8 sectors. Am I making any sense?

Hence, I'm reasoning that when dd hits a bad block, something is happening at the block buffering layer to overwrite, clear or otherwise remove some or all of the buffered pages. Differences in the speed of moving blocks to and from different media, such as ATA rather than firewire, may help to explain the different numbers of lost pages, e.g. there is physically more or less data in the buffer when it gets deleted/wiped/overwritten.
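
One way to poke at this when I'm back at the lab would be to image the same known-bad drive through both the buffered and O_DIRECT read paths and compare the results. A sketch, with the device name assumed:-

dd if=/dev/sdb of=buffered.dd bs=512 conv=noerror,sync
dd if=/dev/sdb of=direct.dd bs=512 conv=noerror,sync iflag=direct
md5sum buffered.dd direct.dd

If the hashes differ, diffing the two images should show exactly which sectors the buffered path lost.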

I now need to look at why the buffer is possibly being affected. Any comments are welcomed!

Tuesday, June 24, 2008

Link to the NIST research on dd issues

I've written a couple of simple overviews of the issues surrounding dd and the seemingly lost sectors when bad blocks are encountered. I neglected in my previous posts to include a link to the research by NIST at http://dfrws.org/2007/proceedings/p13-lyle.pdf.

Speaking to both Drew Fahey and Barry Grundy, the feeling is that there is no reason to overreact; virtually every tool we use has some flaw or another. However, further research is needed to be clear about the issue and how to circumvent it.

I'm off to present at the ACPO conference tomorrow and I'm sure the subject will come up, I'll post any interesting comments.

Wednesday, June 11, 2008

Norway

I'm teaching this week at the National Police University in Norway and have met some very interesting and talented investigators from various services. What is very interesting is the almost total lack of organised defense experts. It is quite fascinating that most cases with computer evidence rely almost totally upon the prosecution expert with no counter from an alternative position.

As I do both prosecution and defense work I can see the pros and cons from both sides, and although I do not doubt the integrity of the officers here I do believe that a sound defense requires experts giving testimony from both sides. Even though, with the best will in the world, the reports should be the same, we both look at the same data, we all know that things get missed and some issues and elements can be explained in more ways than one.

It does seem that some officers are now beginning to leave the service and set up on their own, so I suppose we will begin to see that change. In the UK, of course, we have many defense experts and although one has to wonder about the competence and even integrity of one or two, at least a defendant can be assured of a second set of eyes on the data. Don't get me started on the need for regulation of the industry, I can go on all day. Doesn't mean I know how to solve the problem though!

I guess setting up in Norway could be a good thing for someone?

Linux dd issues part 2

I spoke in the last few posts about the issues with dd, both in Windows and Linux. Having recommended in a previous post that you use dd_rescue with the -d flag added to enable direct disk access, I have since found that when running it from the Helix distro it appears to work but instead creates a 0-byte file. I can't get my head around why it would do this.

However, following more research it appears that with GNU dd in Linux you can use the iflag=direct argument. This opens the device with O_DIRECT and so avoids the apparent buffering issues. Testing this against a drive with no errors, it acquired the drive as expected and produced the right hash, so at least it doesn't mess things up.

Interestingly, I emailed Barry Grundy about it and he had been following the same line of research and testing. Both of us are away from our labs for a week or so and will not be able to test against a drive with bad sectors until we return, but I will post again when we have.

If you wish to try it the syntax is simple; note the added conv=sync, which pads any unreadable blocks with zeros so that the offsets in the image stay correct:-

dd if=/dev/(drive) of=(where you save it) conv=noerror,sync iflag=direct
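
To satisfy yourself that nothing is being mangled on a good drive, hash the source and the image afterwards and compare; with sda imaged to a file on sdb1 as before, that would be:-

md5sum /dev/sda
md5sum /media/sdb1/image.dd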

If you get any interesting results please don't hesitate to contact me.


Tuesday, June 3, 2008

...and FAU-dd issues

Having just posted about DCFLDD, my good friend Jim also pointed out that I had ignored the issues with FAU-dd from George Garner. Helix uses this dd version on the Windows side, specifically because it supports the \\.\PhysicalMemory device to grab RAM. It has been noted that even if the block size is set to 512 bytes, FAU-dd still copies data in 4096-byte chunks to increase speed. However, if it encounters a bad block it will skip the whole 4096 bytes.

The latest version from George steps back from 4096 bytes to 512 bytes when a bad block is found to minimize lost data, but unfortunately support for \\.\PhysicalMemory was removed in that version. This is only an issue if bad blocks are found. Removing the noerror switch will stop dd if errors are found and enable you to use a different tool if you are concerned about this. (Do not remove the noerror switch when imaging RAM though; it will stop almost immediately.)
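
For reference, a typical RAM grab with the older FAU-dd from the Helix Windows side looks something like this, the destination drive letter and filename being just examples:-

dd if=\\.\PhysicalMemory of=e:\memdump.img bs=4096 conv=noerror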

Also, to get around this, FTK Imager is installed on the Windows side and there are no reported problems of this type with that tool. However, running a GUI will leave a greater footprint on a live system.

DCFLDD problems

A number of concerns have been raised recently about certain Linux dd implementations such as DCFLDD. You can read about it at http://www.forensicfocus.com/index.php?name=Forums&file=viewtopic&t=2557 and http://tech.groups.yahoo.com/group/ForensicAnalysis/message/82

In simple terms the problems revolve around how dd treats a bad sector. With the noerror flag set one would hope that dd would jump the bad sector, zero it and move on. However, it would seem that a number of sectors are being missed when a bad block is found. Research by Barry Grundy and others indicates that this is down to the way the Linux kernel buffers data coming from the device being imaged. The buffering is a good thing as it speeds things up, but it also seems to allow good sectors to be skipped when a bad one is encountered.

This affects one of my favourite tools, Helix. Helix uses DCFLDD as the basis for the Adepto GUI on the Linux side. In the meantime, if you are using Helix you can make use of dd_rescue, making sure that the -d flag is set, which enables direct disk access to the device. If you were planning to image the disk sda to an attached drive sdb1 this would look something like:-

dd_rescue -d -v /dev/sda /media/sdb1/image.dd

The release of Helix Pro later this year will deal with this issue.

Saturday, May 31, 2008

SMTP woes

I've recently enjoyed a holiday in France and, frighteningly, one of the first questions I asked my brother, who booked the house, was about Internet availability. He had already asked, and WiFi was available in the house. It meant my hands could stop shaking with the stress of possibly being disconnected for 2 weeks. Well, in reality my Vodafone dongle would have taken a hammering.

We rocked up to the house (beautiful place by the way) and 20 minutes after unpacking the cars there were 2 MacBook Pros glowing silently on the dining room table. In fact we had 3 notebooks between us, as I had also taken my Asus EEE as mentioned in the previous post. Sad, eh? But even my wife doesn't moan anymore as long as emails are answered, blogs are written etc. at appropriate times.

In fact the laptops came in useful on a number of occasions: looking up the weather, finding a local kart track, finding a good restaurant and route-finding to a chateau. Even the parents and in-laws were on board.

Later that day a number of emails arrived but, as I've found with a number of ISPs, my normal SMTP details were blocked. There are a bunch of ways around this, but for your information: I used http://whatismyip.com to get the IP address assigned to the router, then did a lookup on SamSpade to find out who owned the IP. This turned out to be France Telecom, i.e. Orange, and a quick Google search found the details smtp.orange.fr, which then worked perfectly with no authentication.
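
If you would rather script the same lookup from a shell, something like this works, assuming dig and whois are installed (the OpenDNS resolver trick is just one way of learning your public IP):-

dig +short myip.opendns.com @resolver1.opendns.com
whois (the IP address returned above)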

If you travel a lot there is a paid option at www.smtp.com; for about $10 a month for 50 emails a day you can send email through any ISP without the hassle of changing details.

You can of course just switch to webmail but I like my Mac Mail.

As an aside, I cracked the WEP key on the house's router in 4 minutes 37 seconds - AAAAAAAAAAAAAFFFFFFFFFFFFF. I love my EEE!

Tuesday, May 27, 2008

EEE'up its good

A number of us have been working on the new Asus EEE PC 900. If you haven't heard of it, it's a small form-factor PC which is still very useable. The new 900 has a 20GB solid-state HD and a larger screen than its predecessor. (I've got the black version, which I think looks nicer than the 'iPod'-esque white one.)

The rather cool element of the EEE is the in-built Atheros WiFi chipset, which supports monitor mode and packet injection. I'm not going to write a detailed explanation of why this is a good thing, but any user of Aircrack-ng, Kismet or other such tools will be delighted.
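
As a quick sketch: on the madwifi driver that Backtrack uses for Atheros cards the base device is usually wifi0, so dropping into monitor mode is a one-liner:-

airmon-ng start wifi0

..after which airodump-ng and friends can listen on the newly created athX monitor interface.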

The default OS is a Xandros Linux environment which is quite cool for day-to-day browsing use; however, you are also able to boot from the internal SD slot. With a little fiddling you can install Backtrack on an SD card, make it bootable (check the readme in the Backtrack download) and, just by holding down the ESC key at boot time, fire up a full Backtrack environment. I managed to get up and working in about 10 minutes and even had a USB Ralink WiFi adapter up and working too. Its tiny size makes it perfect for WiFi activities when out and about, and at around £300 it would be rude not to!

Kicking off!

There are lots of computer forensic blogs out on the interweb, some superb and others rather less useful. This aspires to be in the latter category. However, as I work with, and have the privilege to train, some excellent computer forensic professionals both here and abroad, I often hear about great pieces of research, new tools and other movements within the industry. Where appropriate I will try and post them here.

If you tell me about an idea I promise to check with you before I post it here, and I will never name law enforcement persons unless express permission is gained. As you can tell, this is already an exceptionally boring blog.

If you want to contact me (only about computer forensic topics please), don't hesitate to do so, either by phone or via the form you can find at the web addresses in the right column.

That'll do for starters