Monday, 28 September 2015

Carving Scheduled Tasks (*.JOB)

This is a continuation of my previous post about carving AT jobs. The end result is a Python script for carving any .JOB file:

As a reminder, the methodology for carving AT jobs relied on a specific string that remained the same across all AT jobs, and was therefore fairly efficient. Once we had a hit on the string, all we needed to do was identify the beginning of the job file and carve an arbitrary amount of data that was large enough not to truncate the data structure.

This time I decided to look for JOB files by searching for the Fixed Length data section using a regular expression. Once that’s identified, the code performs sanity checks on the section, determines where the Variable Length section finishes and runs extra sanity checks on it. Then it’s just a matter of writing both sections into a file.

The Regex
After studying each field in the Fixed Length data section and testing my theories against various memory images, I determined which fields should remain constant or have predictable values. Constructing a regular expression that looks for data matching these requirements was relatively easy, e.g. two bytes that represent the month can only have values between 1 and 12, so the regex would be “[\x01-\x0c]\x00” (little-endian format). In the back of my head I kept the idea of making it efficient while keeping the number of false positives low. I therefore went with an approach that simplifies the matching regex and is more relaxed on some fields, and once it yields a hit, performs extra verification. The end result is two extra regular expressions that verify the values in the RunDate and Priority fields.
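The relaxed first-pass match can be sketched as follows. Note the field pair and offsets here are illustrative assumptions for the sake of the example, not the exact expression the script uses:

```python
import re

# Illustrative sketch: match adjacent fields with predictable values, e.g. a
# little-endian month (1-12) followed by a little-endian day (1-31). The
# real carver keys on more fields from the Fixed Length section.
CANDIDATE = re.compile(
    rb"[\x01-\x0c]\x00"   # month: 1..12 as little-endian uint16
    rb"[\x01-\x1f]\x00"   # day:   1..31 as little-endian uint16
)

def find_candidates(blob):
    """Return offsets of byte pairs that look like a month/day field pair."""
    return [m.start() for m in CANDIDATE.finditer(blob)]

# Hypothetical blob: month=9, day=28 at offset 2
blob = b"\xff\xff\x09\x00\x1c\x00\x00\x00"
print(find_candidates(blob))  # [2]
```

Every hit found this way would then go through the stricter RunDate and Priority checks before being carved.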

Variable Length Data Section
Once the Fixed Length data section has been found, I could have taken a generic approach and carved an arbitrary amount of data that follows it, because tools like Gleeda’s Job Parser ignore the excess bytes. That’s also the approach I took when developing the AT Jobs carver. This time, however, I decided to parse the fields to determine where the section ends and only carve out the bytes that belong to the JOB file. This approach made further sanity checks possible, reducing the number of false positives even further.
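Walking the Variable Length section mostly comes down to reading length-prefixed Unicode strings one after another. A minimal sketch of that primitive (layout per MSDN’s “JOB File Format”; the sample buffer is made up):

```python
import struct

def read_unicode_string(buf, off):
    """Read one JOB-style Unicode string: a uint16 character count
    (including the terminating NUL when non-zero) followed by UTF-16LE
    data. Returns the decoded string and the offset just past it."""
    (nchars,) = struct.unpack_from("<H", buf, off)
    off += 2
    raw = buf[off : off + nchars * 2]
    off += nchars * 2
    return raw.decode("utf-16-le").rstrip("\x00"), off

# Hypothetical buffer holding an Application Name field: "cmd.exe" + NUL
buf = struct.pack("<H", 8) + "cmd.exe\x00".encode("utf-16-le")
name, end = read_unicode_string(buf, 0)
print(name, end)  # cmd.exe 18
```

Repeating this for each string field (Application Name, Parameters, Working Directory, and so on) walks the section to its end, which is where the carve stops.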

D:\Tools\job carver> "memory.img" carved_files
[*] Searching...
[+] Found hit: 0x20d47310
 Written: test\carved_1.job
[+] Found hit: 0x24e1f8a0
 Written: test\carved_2.job
[+] Found hit: 0x5ab997a8
 Written: test\carved_3.job
[+] Found hit: 0x5e55d000
 Written: test\carved_4.job
[+] Found hit: 0x649a9d07
[-] Failed verification
[+] Found hit: 0x86aff1d8
 Written: test\carved_5.job
[+] Found hit: 0xde52a0f8
 Written: test\carved_21.job
[*] Done

D:\Tools\job carver> -d carved_files > job_files_analysis.txt

D:\Tools\job carver>

Have fun fighting evil and let me know if you encounter any problems!

PS. It would be great if I found time to incorporate the two parsers into a Volatility plug-in…

Tuesday, 8 September 2015

Carving AT Job Files

This post explains why it’s important that an incident responder checks scheduled tasks for signs of lateral movement and, most importantly, how to carve unnamed scheduled tasks (aka “At jobs”) from a blob of data, such as a physical memory image or a page file. The Python script for carving those JOB files is available here:

When attackers move laterally across the network, it’s common practice for them to use built-in utilities to stay under the radar, e.g. “makecab” and “expand” instead of WinRAR (for more examples, see Patrick Olsen’s post). It’s therefore frequent to see attackers drop their tools on a remote system via a network share and then use the command-line utility “at” to schedule tasks that will execute them. Some attackers clean up their tools, and others take it a step further and try to remove the artefacts left in the system – event logs, entries in “schedlgu.txt”, and At#.job files, to name a few. What follows is a short story that illustrates when carving job files from a memory image can prove useful.

When investigating a Windows system recently, I discovered some JOB files that confirmed the attackers’ activity, but I suspected some of the older JOB files might be missing. Since the system hadn’t been shut down for over 100 days, I decided to take an image of the physical memory and see if I could dig them up.

I remembered that the At1.job file that already existed on the box contained Unicode text (“Created by NetScheduleJobAdd”) that appeared very unique. Luckily for me, searching for it across the memory image yielded hits, and soon I discovered new commands that had been executed on the system. I didn’t even have to carve the job files because the commands were also in Unicode and closely preceded the comment I used for searching.

The only piece of the puzzle that was missing were the timestamps the scheduled jobs were created and executed on. For this, and other metadata, I relied on Gleeda’s JOB file parser. She also very kindly wrote a blog post describing the file’s data structure and referencing MSDN’s “JOB File Format”. Once I had tinkered around, I figured that I needed to grab 0x48 bytes before the command (most of the time it was “cmd.exe”) and an arbitrary amount following it (but enough so the structure doesn’t get truncated), as the parser will ignore the excess bytes.
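That manual recipe can be sketched in a few lines of Python. The 0x48 header offset is from the method described above; the tail size and the helper names are my assumptions:

```python
# UTF-16LE comment that the "at" utility writes into every job it creates
COMMENT = "Created by NetScheduleJobAdd".encode("utf-16-le")
HEADER_PAD = 0x48  # bytes between the start of the JOB file and the command
TAIL = 0x200       # arbitrary tail; parsers ignore the excess bytes (assumed)

def carve_around_comment(blob, command=b"c\x00m\x00d\x00.\x00e\x00x\x00e\x00"):
    """Carve a candidate JOB file around each comment hit (sketch)."""
    jobs, start = [], 0
    while (pos := blob.find(COMMENT, start)) != -1:
        cmd_pos = blob.rfind(command, 0, pos)  # command precedes the comment
        if cmd_pos >= HEADER_PAD:
            jobs.append(blob[cmd_pos - HEADER_PAD : pos + len(COMMENT) + TAIL])
        start = pos + 1
    return jobs

# Hypothetical blob: 0x48 header bytes, the command, filler, then the comment
blob = b"\x00" * 0x48 + "cmd.exe".encode("utf-16-le") + b"\x00" * 4 + COMMENT
print(len(carve_around_comment(blob)))  # 1
```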

After having recovered 5 out of the 10 JOB files I found, I started wondering how feasible it would be to automate the carving process, making sure it works even if the command isn’t “cmd.exe”. A quick Google for carving JOB files turned up only a mention of the manual carving process on the SANS blog, so I decided to give it a go myself. The main obstacle was identifying where the data structure starts, and this was overcome by the observation that two sequential fields that are always at the same offset – Error Code and Status – have fixed values. This led me to the idea that we can identify the starting position by finding bytes that match the values of these two fields, precede the comment and are in close proximity to it.
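That observation translates into a second check: look for fixed Error Code/Status bytes within a short window before the comment. The exact byte values and window size below are my assumptions for illustration (SCHED_S_TASK_* status codes fall in the 0x000413xx range), not necessarily the script’s pattern:

```python
import re

# Assumed marker: an Exit Code of 0 followed by a Status in the
# SCHED_S_TASK_* range (0x000413xx), both little-endian.
START_MARKER = re.compile(rb"\x00\x00\x00\x00[\x00-\x06]\x13\x04\x00")

def candidate_starts(blob, comment_pos, window=0x200):
    """Return marker offsets in close proximity before the comment."""
    lo = max(0, comment_pos - window)
    return [lo + m.start() for m in START_MARKER.finditer(blob[lo:comment_pos])]

# Hypothetical data: marker at offset 16, comment at the end of the blob
blob = b"\xaa" * 16 + b"\x00\x00\x00\x00\x03\x13\x04\x00" + b"\xbb" * 40
print(candidate_starts(blob, len(blob)))  # [16]
```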

This resulted in a carver that is available on GitHub and worked well when tested on raw memory images of Windows 2003 SP2 x86 and Windows Server 2008 SP2 x64. I don’t suspect the comment has changed between Windows versions but please do let me know if that’s not the case. Below is an example of running the script:

** Memory image of a clean VM **
bart:~ bart.inglot$ python before.raw ~/jobs
[*] Created output folder: /Users/bart.inglot/carved_jobs
[*] Searching...
[*] Done

** Memory image of a VM after creating an AT job **
bart:~ bart.inglot$ python after.raw ~/jobs
[*] Searching...
[+] Found hit: 0x3a95d000
[-] Failed verification
[+] Found hit: 0x7a5ad518
[*] Done
bart:~ bart.inglot$ cat ~/jobs/carved_1.job
I�GW� M����
       !cmd.exeZ/c "\\WIN-TESTBOX\c$\Windows\System32\netstat.exe > \\WIN-TESTBOX\c$\netstat.txt"SYSTEMCreated by NetScheduleJobAdd0�

As mentioned earlier, the carving process relies on finding the comment that is unique to JOB files created by the “at” utility, and therefore it won’t find JOB files created by other means (e.g. GUI or “schtasks.exe”).

Have fun and find moar evil!

Saturday, 15 August 2015

Sysinternals Autoruns - offline analysis

This post discusses how to use Sysinternals Autoruns in the offline mode if all you have are registry hives.

1.      Create the following file structure for the system registry hives to fool Autoruns into thinking you’re giving it the Windows folder.
a.      <FOLDER>\System32\ntdll.dll [can be an empty file]
b.      <FOLDER>\System32\Config\SAM
c.      <FOLDER>\System32\Config\SYSTEM
d.      <FOLDER>\System32\Config\SOFTWARE
e.      <FOLDER>\System32\Config\SECURITY
2.      Recreate the user’s hives
a.      <FOLDER>\NTUSER.DAT
b.      [on Vista+]
<FOLDER>\Local Settings\Application Data\Microsoft\Windows\UsrClass.dat

Now just point your Autoruns at these two folders and you’re ready to go!
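The decoy folder layout from step 1 can be staged with a few lines of Python. The helper name and the assumption that your extracted hives are named SAM/SYSTEM/SOFTWARE/SECURITY are mine, not anything Autoruns requires:

```python
import shutil
from pathlib import Path

def stage_system_hives(dest, hive_dir):
    """Lay out extracted hives so Autoruns treats `dest` as a Windows folder.

    `hive_dir` is assumed to hold copies named SAM/SYSTEM/SOFTWARE/SECURITY.
    """
    config = Path(dest) / "System32" / "Config"
    config.mkdir(parents=True, exist_ok=True)
    (config.parent / "ntdll.dll").touch()  # can be an empty file
    for hive in ("SAM", "SYSTEM", "SOFTWARE", "SECURITY"):
        shutil.copy(Path(hive_dir) / hive, config / hive)
```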

It’s worth noting that the latest version of Autoruns doesn’t seem to perform any tests on the files from extracted entries when in Offline mode (e.g. to tell you whether the file exists). If your version does, you could use that to your advantage: run Autoruns on a fresh copy of the same operating system as the one the hives come from and turn on the options “Hide Windows Entries” and “Hide Microsoft Entries”. Doing so should reduce the number of entries you need to look at by ignoring files that would occur on a legitimate system. Of course, it discounts the possibility that the system the hives come from had some of its system files replaced – unlikely, but perfectly plausible.

Troubleshooting: if you managed to load the hives in Autoruns the first time you tried but it complains when you try again, that’s most likely because the hives remain loaded by the system. To fix that, open the Command Prompt as administrator and type the following commands:

  • reg unload HKLM\autoruns.system
  • reg unload HKLM\autoruns.user

Sunday, 26 July 2015

SSH Fingerprint from a PCAP

This post covers how to generate an SSH public key fingerprint from captured network traffic. There’s also a Python script that you can download that will go over a binary file, searching for the public key that a typical SSH server presents, and generate its fingerprint.

What is an SSH Fingerprint?

When you try to connect to an SSH server for the first time with a client like PuTTY, it will present you with a fingerprint that looks something like “43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8” and ask if you’d like to accept it. Most people instruct their SSH client to remember the key so that they’re not warned the next time they connect to the server. The idea is to prevent a Man-in-the-Middle (MITM) attack by verifying that the server’s public key is what we expect to get.

The Scenario

I recently worked on an incident response (IR) case where the attacker was tunnelling their connections using malware they had implanted on some of the end nodes. When investigating a PCAP with tunnelled connections, I came across an SSH session. The natural thing was to determine what host the attackers were accessing. However, based on the PCAP alone we couldn’t tell what the end-point system was.
Wireshark confirmed my suspicion that the SSH server needs to send its public key to the client before the encrypted communication can start. If we could produce the SSH fingerprint from the captured data, we could compare it to the ones presented by the servers on the network and therefore pinpoint which systems the attackers were accessing.

The Solution

Googling to see if someone had had a similar problem yielded Didier Stevens’s blog post about calculating an SSH fingerprint from a (Cisco) public key. Studying his code quickly revealed that the “mysterious” fingerprint is nothing but the MD5 of the public key, presented in a rather peculiar form; perhaps the byte separation was meant to help folks swiftly perform a visual inspection of whether it matches what they expect to see.
Looking at the PCAP in Wireshark, I quickly realised that the public key contains a recognisable keyword, “ssh-rsa”, which is preceded by the byte sequence “\x00\x00\x00\x07”, with the key size before that. So all the code needs to do is look for the keyword, read the size, grab the data and calculate the MD5 of it.
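The whole extraction boils down to a few lines (the length-prefixed layout matches RFC 4253’s string encoding; the sample key below is fabricated and far too small to be a real RSA key):

```python
import hashlib
import struct

MARKER = b"\x00\x00\x00\x07ssh-rsa"  # length-prefixed "ssh-rsa" keyword

def fingerprint_from_blob(blob):
    """Locate an ssh-rsa host key and return its MD5 fingerprint (sketch)."""
    pos = blob.find(MARKER)
    if pos < 4:
        return None
    (size,) = struct.unpack_from(">I", blob, pos - 4)  # key size before marker
    key = blob[pos : pos + size]                       # the key blob itself
    digest = hashlib.md5(key).hexdigest()
    return ":".join(digest[i : i + 2] for i in range(0, 32, 2))

# Fabricated key blob: keyword + tiny exponent/modulus, wrapped with its size
key = MARKER + b"\x00\x00\x00\x01\x03" + b"\x00\x00\x00\x03\x01\x00\x01"
blob = b"junk" + struct.pack(">I", len(key)) + key + b"tail"
print(fingerprint_from_blob(blob))
```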
While testing the tool, I was surprised to learn that there were in fact two SSH public keys, referred to by Wireshark as KEX DH host key and KEX DH H signature; you’re only interested in the host key, as this is what the SSH client takes as the fingerprint.

SSH Servers Scanning

Say you’ve extracted the SSH fingerprint from the PCAP and need to find the server it belongs to – nothing easier! As it turns out, nmap has an NSE script called ssh-hostkey that will do that for you. Example usage:

nmap [IP_range] --script ssh-hostkey --script-args ssh_hostkey=full -oX output.xml

All that's left now is to grep the XML file or write a parser. Have fun!
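A quick parser sketch for that XML follows. The element layout mirrors what I’d expect from nmap’s structured script output, so verify it against your nmap version; the sample XML is fabricated:

```python
import xml.etree.ElementTree as ET

def hostkeys_from_nmap_xml(xml_text):
    """Map each host address to the ssh-hostkey fingerprints in nmap XML.

    Assumes fingerprints appear as <elem key="fingerprint"> under the
    ssh-hostkey <script> element; adjust for your nmap version if needed.
    """
    results = {}
    for host in ET.fromstring(xml_text).iter("host"):
        addr = host.find("address").get("addr")
        fps = [
            elem.text
            for script in host.iter("script")
            if script.get("id") == "ssh-hostkey"
            for elem in script.iter("elem")
            if elem.get("key") == "fingerprint"
        ]
        if fps:
            results[addr] = fps
    return results

# Fabricated, minimal nmap-style XML
xml_text = (
    '<nmaprun><host><address addr="10.0.0.5"/>'
    '<ports><port portid="22"><script id="ssh-hostkey">'
    '<table><elem key="fingerprint">43514...6673a8</elem></table>'
    '</script></port></ports></host></nmaprun>'
)
print(hostkeys_from_nmap_xml(xml_text))
```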

Thursday, 31 May 2012

Chip's Backdoor and Chinese: Fake Accusation

There's been a lot of hype recently about a backdoor in a "military grade" chip, which has allegedly been planted by the Chinese. There is a great article by Robert Graham which explains (almost) exactly what mistakes were made by the paper's authors that led them to mistake debugging functionality for a backdoor. I do recommend reading it, if only for the sake of familiarising yourself with the hardware side of information security.

Tuesday, 29 May 2012

Final Year and Links

I know it has been ages since I last wrote, but finishing university kept me extremely busy. Speaking of which, I should publish my dissertation soon and will then post a link to it.

While still on the topic of university, I have published some of the essays/reports that I wrote while studying at the University of Derby. One can probably notice that they improved over time, so if you are going to read anything, I highly recommend starting with the latest ones. The details of each are available on my LinkedIn profile.

This is a list of them:

Sunday, 27 November 2011

Cryptoscan: Fixed Windows Vista+ support

There’s been a minor update to the batch script that I provided with Cryptoscan, as the previous version only worked with versions of Windows supported by Volatility 1.3 beta. I know that to truly solve the problem I should port the module to Volatility 2.0, but I’ve already tried that and miserably failed. Anyone want to help? ;>

The new batch script runs Cryptoscan using version 1.3 and then the other two modules (i.e. Strings and Modules) using version 2.0.

To get it working, here are 3 simple steps:
1.       Extract 'Volatility-1.3_beta'.
2.       Extract 'Volatility-2.0.standalone' to the same folder as before.
3.       Extract 'Cryptoscan' to the same folder too (overwrite if asked).

You can run ‘Cryptoscan.cmd’ now and enjoy! ;)

If you have some problems getting it working then check my previous post or leave a comment.

PS. There's been a small change to the provided binaries. Instead of GnuWin32, the batch script now uses UnxUtils, since they do exactly the same job and are smaller in size.