This is a continuation of my previous post about carving AT jobs. The end result is a Python script for carving any .JOB file: job_files_carver.py
As a reminder, the methodology for carving AT jobs relied on a specific string that remained the same across all AT jobs, which made it fairly efficient. Once we had a hit on the string, all we needed to do was identify the beginning of the job file and carve an arbitrary amount of data, large enough not to truncate the data structure.
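That earlier approach can be sketched in a few lines. This is a minimal illustration, not the real script: the marker string, the offset back to the file start, and the chunk size below are all placeholder values.

```python
# Sketch of the fixed-string approach: find a constant marker, step back a
# fixed offset to the start of the JOB header, and carve a generous chunk.
# Marker, offset and chunk size are illustrative placeholders.
def carve_fixed(data: bytes, marker: bytes, offset_to_start: int, chunk_size: int):
    """Yield (start, chunk) for every occurrence of marker in data."""
    pos = data.find(marker)
    while pos != -1:
        start = max(pos - offset_to_start, 0)
        yield start, data[start:start + chunk_size]
        pos = data.find(marker, pos + 1)

# Example against an in-memory buffer:
buf = b"\x00" * 16 + b"HDR" + b"At1.job\x00" + b"\x00" * 64
for start, chunk in carve_fixed(buf, b"At1.job", 3, 32):
    print(hex(start), len(chunk))  # 0x10 32
```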
This time I decided to look for JOB files by searching for the Fixed Length data section using a regular expression. Once that’s identified, the code performs sanity checks on the section, determines where the Variable Length section finishes and runs extra sanity checks on it. Then it’s just a matter of writing both sections into a file.
After studying each field in the Fixed Length data section and testing my theories against various memory images, I determined which fields should remain constant or have predictable values. Constructing a regular expression that looks for data matching these requirements was relatively easy; for example, the two bytes that represent the month can only have values between 1 and 12, so the corresponding regex is “[\x01-\x0c]\x00” (little-endian format). In the back of my head I kept the goal of making it efficient while keeping the number of false positives low. I therefore went with an approach of simplifying the matching regex and being more relaxed on some fields, and, once it yields a hit, performing extra verification. The end result is two extra regular expressions that verify the values in the RunDate and Priority fields.
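The two-stage idea looks roughly like this. The patterns below are simplified stand-ins: the real script's verification regexes target the RunDate and Priority fields, while here I just check a hypothetical layout of little-endian month, weekday and day words.

```python
import re

# Stage 1: a deliberately loose regex on one predictable field, e.g. a
# little-endian month word whose value can only be 1-12.
MONTH_LE = re.compile(rb"[\x01-\x0c]\x00")

# Stage 2: extra checks run only on stage-1 hits, e.g. a weekday in 0-6
# and a day-of-month in 1-31, both little-endian words. The field order
# here is hypothetical, chosen just to illustrate the technique.
WEEKDAY_LE = re.compile(rb"[\x00-\x06]\x00")
DAY_LE = re.compile(rb"[\x01-\x1f]\x00")

def find_candidates(data: bytes):
    """Yield offsets that pass both the loose match and the verification."""
    for m in MONTH_LE.finditer(data):
        off = m.start()
        if WEEKDAY_LE.fullmatch(data, off + 2, off + 4) and \
           DAY_LE.fullmatch(data, off + 4, off + 6):
            yield off
```

Keeping stage 1 cheap and pushing the stricter checks into stage 2 is what keeps the scan fast without letting too many false positives through.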
Variable Length Data Section
Once the Fixed Length data section has been found, I could have taken a generic approach and carved an arbitrary amount of data following it, because tools like Gleeda’s Job Parser ignore the excess bytes. That’s also the approach I took when developing the AT Jobs carver. This time, however, I decided to parse the fields to determine where the section ends and carve out only the bytes that belong to the JOB file. This approach made it possible to run further sanity checks, reducing the number of false positives even further.
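Walking the Variable Length section to find its end can be sketched as below. This is a hedged approximation, assuming the documented .JOB layout (running instance count, five length-prefixed Unicode strings, user data, reserved data, then the trigger array); the real script performs its sanity checks along the way.

```python
import struct

def variable_section_end(data: bytes, start: int) -> int:
    """Return the offset just past the Variable Length section that
    begins at `start`, assuming the documented .JOB field order."""
    pos = start + 2                        # running instance count (word)
    for _ in range(5):                     # app name, parameters, working
        nchars, = struct.unpack_from("<H", data, pos)  # dir, author, comment
        pos += 2 + nchars * 2              # count word + UTF-16 characters
    size, = struct.unpack_from("<H", data, pos)
    pos += 2 + size                        # user data (size in bytes)
    size, = struct.unpack_from("<H", data, pos)
    pos += 2 + size                        # reserved data (size in bytes)
    ntriggers, = struct.unpack_from("<H", data, pos)
    pos += 2 + ntriggers * 48              # each trigger struct is 48 bytes
    return pos
```

Knowing the exact end offset means the carver writes out only the bytes that belong to the JOB file, and any structurally impossible length field can be treated as a failed verification.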
D:\Tools\job carver> job_files_carver.py "memory.img" carved_files
[+] Found hit: 0x20d47310
[+] Found hit: 0x24e1f8a0
[+] Found hit: 0x5ab997a8
[+] Found hit: 0x5e55d000
[+] Found hit: 0x649a9d07
[-] Failed verification
[+] Found hit: 0x86aff1d8
[+] Found hit: 0xde52a0f8
D:\Tools\job carver> jobparser.py -d carved_files > job_files_analysis.txt
Have fun fighting evil and let me know if you encounter any problems!
PS. It would be great if I found time to incorporate the two parsers into a Volatility plug-in…