
AsciiDoc: My new favorite document format

I recently attended an awesome Python course taught by Dave Beazley and noticed that his class exercises were generated into HTML from something called ‘AsciiDoc’. Given my experimentation with LaTeX in academic publishing, reST/Sphinx for automated forensic report generation, and MarkDown on StackExchange and other random places on the web, I was surprised that I hadn’t learned about this format yet (particularly since it’s been around for a while).

I was happy to see that AsciiDoc has a Python base, but I quickly found a toolchain called Asciidoctor that makes it easier to generate nice-looking HTML5, PDF, and other output from this document format. The downside of Asciidoctor is that it’s written in Ruby—but I like the HTML output a lot better than the default AsciiDoc output.

I like the AsciiDoc syntax much better than reST, and it’s certainly easier than LaTeX (although LaTeX will always have a special place in my heart). It’s similar enough to MarkDown to make it easy to learn and use, yet extensive enough to offer things I want in documents that MarkDown simply doesn’t support well (like nice tables). Perhaps I’ve found a new forensic report format—only time will tell.
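As a quick taste of what I mean about tables, here is a minimal AsciiDoc table (the contents are invented purely for illustration):

```asciidoc
.Example evidence summary
|===
|Artifact |Location

|Browser history |WebCacheV01.dat
|Prefetch files |C:\Windows\Prefetch
|===
```

Asciidoctor renders this as a proper styled HTML table, something original MarkDown has no native syntax for.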

4n68r? Huh?

In case you were wondering:

4n6 = forensics (/fərɛnsɪks/), specifically the field of digital/cyber forensics

4n68r = forensicator (/fərɛnsɪketər/), a practitioner of digital/cyber forensics

It’s OK not to share some home-grown tools

I just deleted my Github project, 4n6aux, for integrating all of my digital forensic report-writing scripts into one tool. I use LaTeX for my forensic reports, and it’s a (heavily) modified version of the Army Corps of Engineers report template. I have a bunch of Python scripts I’ve written that generate my initial report template. It’s a tedious process but the final result looks awesome (at least I think so). My reporting tools have evolved from being a highly-customized version of the Sphinx documentation generator with custom LaTeX output to now being all standalone Python scripts, including a script that automatically generates my glossary based on keywords actually used in the report body.

I was originally going to work on it through the Purdue Cyber Forensics club, but after some discussion with others, I’ve decided not to. The reality is that there just aren’t that many forensic examiners using LaTeX, especially outside of academia. I found this to be true in the field. I would show my reports to folks and get the obligatory “ooh’s” and “ahh’s”, but after walking them through the myriad command-line tools and other setup knowledge required (including converting fonts and learning LaTeX), most 4n68rs decided it was too difficult for them.

When something is this heavily customized and works just fine for my own purposes, why spend the time integrating all of it when no one else will use it (or only a very small handful of people)?

Importing a private key into iPGMail using gpg in Windows PowerShell

I generally use gpg on a Mac and had never run into this issue before. Recently, however, I was using gpg inside PowerShell on Windows to do what should have been a simple operation: exporting a private key. I used the following (standard) command to export the private key with ASCII armor:

gpg --export-secret-key --armor 3C4A7E82 > C:\Users\4n68r\temp\keys.asc

But when I went to import this key file into iPGMail (an iPhone/iPad app for sending and decrypting PGP encoded messages), it gave the error, “No PGP messages found” and wouldn’t find the keys. I reached out to iPGMail support on Twitter and Wyllys Ingersoll responded immediately, continuing the conversation via email. He had me send him a test private key file, and he realized that, for whatever reason, it was not encoded in UTF-8 or ASCII. I observed that it was encoded as UCS-2 little endian.


Why Windows PowerShell kicked this file out in UCS-2 little endian is beyond me. But at least it was an easy fix, and iPGMail support was awesome. Anyway, I had no luck when Googling this issue before contacting support, so hopefully anyone else who runs into it finds this post and is able to fix the problem simply by changing the encoding on the exported keys file.
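For anyone who hits the same problem: in Windows PowerShell, the `>` redirection operator goes through Out-File, which defaults to UTF-16 LE (“Unicode”) encoding, and that explains the UCS-2 LE file. One way to fix an already-exported key file is to re-encode it with iconv (the filenames here are placeholders):

```shell
# The PowerShell-exported file starts with a UTF-16 LE byte-order mark;
# reading it as UTF-16 lets iconv detect the byte order from the BOM and
# strip it, producing a plain ASCII file that iPGMail will accept.
iconv -f UTF-16 -t ASCII keys.asc > keys_fixed.asc
```

Alternatively, piping gpg’s output through `Out-File -Encoding ascii` instead of using `>` should avoid the problem in the first place.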

And for anyone curious, the key shown in the image is the garbage test key I created—knock yourself out trying to use it for something useful. The hex sequence in the command line is not part of my actual public key, either. Sorry for spoiling your fun.

Spammer Strategy

I saw a tweet from Spaf recommending this article, which is briefly quoted below:

I learned something in MBA school: as an industry matures, competition moves along five frontiers:

  • functionality (can we get the damn thing to work at all?)
  • reliability (will the damn thing please stop crashing?)
  • convenience (let’s shrink it so i can take it with me.)
  • price (if it’s a commodity, give me the cheapest)
  • fashion (indigo or graphite? hey, maybe key lime.)

Only after one frontier is crossed does a market focus on dimensions relevant to the next….

As a spammer, you can exploit this. How? …

… You must have patience. You must wait, wait, wait. Wait until the medium has moved through the frontiers, until the industry is competing on price. They must be made to ignore security until it’s too late….

Read more….

Keeping track of screws when disassembling Apple products

I recently saw a post from someone who had an extra screw left over after reassembling an iMac. I thought I’d share how I keep track of this. First, I go to iFixit and read the instructions for the disassembly (or I find a YouTube video). While going through the instructions, I make a screw template sheet for my use, like I did with my friend Mike’s MacBook Pro laptop:


That’s how I keep track of screws. I’ve also seen someone put a magnetic backing under the paper to keep them from moving, but I’m not a fan of magnetizing all of the screws. How do you do it?

Batch Restore Files Using Sleuthkit and Bash Scripting

I was performing data recovery for my brother-in-law on a failing drive and needed to recover as much data as possible. After spending a few minutes restoring files one by one (on a disk where the only tool I could get to recognize the partitions was Sleuthkit), I determined that a bash script was needed to batch restore the files (because repetitive work is boring—and time consuming).

I found a script for mass-restoring files using Sleuthkit, but it didn’t work for me. I kept getting errors related to how the cut command was being used (cut’s -d option takes a single character, not a string). Aleksey Zapparov (the creator of the original script) informed me that the delimiter was supposed to be a horizontal tab character (\x09) rather than blank spaces, but I decided to get it to work without that command.

The methodology is as follows:

  • Create an image of the drive (or you could work off the original, but this isn’t recommended).
  • Create a list of files to be restored and format the file list for easier processing using

    fls -f ext2 -p -r ./sdb-data 8650754 | grep -v '^..-' | grep -v '^... \*' > files.lst

    (keeping in mind that I was not all that interested in deleted files since I was recovering data).

  • Run the script to restore the list.

The original script written by Aleksey Zapparov was:

HT=`printf '\x09'`

cat $LIST | while read line; do
    filetype=`echo "$line" | awk {'print $1'}`
    filenode=`echo "$line" | awk {'print $2'}`
    filename=`echo "$line" | cut -f 2 -d "$HT"`

    if [ $filetype == "r/r" ]; then
        echo "$filename"
        mkdir -p "`dirname "$DEST/$filename"`"
    icat -f ext2 -r -s $IMAGE "$filenode" > "$DEST/$filename"
    fi
done

However, I could not get this to work and didn’t want to use a cut command dependent on a tab character. Here is the final working code that I used to parse the file list:


while IFS=$' \t:' read filetype filenode filename; do
    if [ "$filetype" = "r/r" ]; then
        echo "$filename"
        mkdir -p "`dirname "$DEST/$filename"`"
        icat -f ntfs -o 409600 -r -s $IMAGE "$filenode" > "$DEST/$filename"
    fi
done < $LIST
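To see what the IFS trick is doing, here is how read splits a typical fls output line (the inode address and path below are made up for illustration):

```shell
# Adding ':' and tab to IFS lets read peel off the type and inode fields
# and drop the trailing colon, leaving the file path in the last variable.
line=$'r/r 12345-128-1:\tDocuments/report.doc'
IFS=$' \t:' read filetype filenode filename <<< "$line"
echo "$filetype $filenode $filename"
```

This is why the filename no longer needs to be extracted with cut at all.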

I hope one of these saves you lots of time some day if you ever find yourself using Sleuthkit to batch restore files.

Finding the login password in plain text in RAM on Mac OS X 10.8 Mountain Lion

The login password is stored in plain text in memory on Mac OS X 10.8 Mountain Lion. To find it, use the following command:

strings ram_dump | grep -C 6 -i longname | grep -C 6 -i password

First I ran the ram_dump through strings to strip out non-ASCII data, then piped that output to grep with six lines of context around ‘longname’ (I made the search case insensitive, though the string appears in lowercase so this isn’t strictly necessary), and finally piped that to another grep searching for ‘password’ with the same parameters. This produces output similar to the following:


I’m not sure if Mac OS X 10.9 Mavericks still stores the password in plain text in RAM or not, but Mountain Lion certainly does (therefore presumably also previous versions).