A number of concerns have been raised recently about certain Linux dd implementations, such as dcfldd. You can read about the discussion at http://www.forensicfocus.com/index.php?name=Forums&file=viewtopic&t=2557 and http://tech.groups.yahoo.com/group/ForensicAnalysis/message/82
In simple terms, the problem revolves around how dd treats a bad sector. With the noerror flag set, one would hope that dd would skip the bad sector, zero it, and move on. However, it seems that a number of good sectors are also being missed when a bad block is found. Research by Barry Grundy and others indicates that this is due to the way the Linux kernel buffers data coming from the device being imaged. The buffering is normally a good thing because it speeds reads up, but it also appears to cause good sectors to be skipped when a bad one is encountered.
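For context, this is the conventional imaging invocation whose bad-sector handling is under discussion; a minimal sketch, with /dev/sdX and evidence.dd as placeholder names for the source device and output image:

```shell
# Placeholder device and image names; run only against a device you intend to image.
# conv=noerror -> do not abort when a read error occurs
# conv=sync    -> pad each short or failed read to the full block size with NULs,
#                 so data after the bad area stays at the correct offset
# bs=512       -> read one sector at a time, limiting the data lost per error
dd if=/dev/sdX of=evidence.dd bs=512 conv=noerror,sync
```

The zero-padding from conv=sync is what should keep the image the same size as the source; the reported problem is that kernel buffering can cause more than just the bad sector to drop out.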
This affects one of my favourite tools, Helix. Helix uses dcfldd as the basis for the Adepto GUI on the Linux side. In the meantime, if you are using Helix you can use dd_rescue instead, making sure the -d flag is set to enable direct (unbuffered) access to the device. If you were planning to image the disk sda to an attached drive mounted at /media/sdb1, the command would look something like this:
dd_rescue -d -v /dev/sda /media/sdb1/image.dd
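Whichever tool you use, it is worth verifying the result; a common practice is to hash the source device and the finished image and compare the digests (they can only match when every sector of the source was readable). The device and image paths below are the same placeholders as above:

```shell
# Hash the source device and the image; identical digests confirm a faithful copy.
md5sum /dev/sda /media/sdb1/image.dd
```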
The release of Helix Pro later this year will deal with this issue.