Not all data sticks around, and some data stays around longer than others. In this video, you’ll learn about the order of data volatility and which data should be gathered more urgently than others.
A big part of incident response is dealing with intrusions and incidents, and specifically how you handle those at a forensics level. Forensics is the collection and protection of the information you gather when one of these incidents occurs. There are data sources in many different places: not just on a computer, not just on the network, not just in the notes that you take. It’s a combination of many different places you go to gather this information, and there are different things you can do to help protect your network and your organization should one of these incidents occur.
If you’d like a nice overview of some of these forensics methodologies, there’s RFC 3227. Google that. It’s called Guidelines for Evidence Collection and Archiving, and it’s a good set of best practices: a very high-level look at some of the things you need to keep in mind when you’re collecting this type of evidence after an incident has occurred.
There is a standard process for digital forensics. Digital forensics could really be an entirely separate training course; there’s that much involved. But the basic process is that you acquire, you analyze, and you report. Those three things are the watchwords for digital forensics. Those are the things that you keep in mind.
We’re going to talk about acquisition, analysis, and reporting in this and the next video as we talk about forensics. The details of forensics are very important. You need to get in and look for anything and everything, you need to know how to look for this information and what to look for, and you have to be someone who takes a lot of very detailed notes. Sometimes the things you write down and the information you gather may not even seem that important at the time, but later on, when you start piecing everything together, you’ll find that those notes may be very, very important to putting everything together.
In forensics, there’s the concept of the volatility of data, and when you’re collecting evidence there is an order of volatility that you want to follow. The volatility of data refers to how long the data is going to stick around: how long is this information going to be here before it’s no longer available for us to see? That’s one of the challenges with digital forensics: these bits and bytes are just electrical signals. In some cases they may be gone in a matter of nanoseconds; in other cases they may be around for a much longer time frame.
So the idea is that you gather the most volatile data first: the data with the greatest potential to disappear is what you want to gather the very first thing. The data that could be around for a longer period of time gives you at least a little time to wait before you have to gather it. So this order of volatility becomes very important.
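To make that idea concrete, here’s a minimal sketch in Python of sorting pending collection tasks so the most volatile evidence is gathered first. The tier names and rank values are illustrative, loosely following the RFC 3227 ordering, not part of any standard tool:

```python
# Lower rank = more volatile = collect first.
# Tier names and ranks are illustrative, loosely following RFC 3227.
VOLATILITY_RANK = {
    "cpu_registers_and_cache": 0,
    "memory_and_kernel_state": 1,   # routing table, ARP cache, RAM
    "temporary_file_systems": 2,
    "disk": 3,
    "remote_logs_and_monitoring": 4,
    "physical_configuration_and_topology": 5,
    "archival_media": 6,
}

def collection_order(tasks):
    """Sort pending collection tasks so the most volatile run first."""
    return sorted(tasks, key=lambda task: VOLATILITY_RANK[task])

pending = ["disk", "memory_and_kernel_state", "archival_media"]
for task in collection_order(pending):
    print("collect:", task)
```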
So what’s volatile and what isn’t? Information in the registers or in the processor cache of your computer is around for a matter of nanoseconds. These registers are changing all the time, so that would certainly be very volatile data. If we could take a snapshot of our registers and of our cache, that snapshot is going to be different nanoseconds later. So that’s data that is extremely volatile.
Next on our list (these are some examples; this is obviously not a comprehensive list) are things like the routing table, the ARP cache, kernel statistics, and information that’s in the normal memory of your computer. Those are a little less volatile than things that are in your registers.
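As a hedged example, a sketch like the following could snapshot some of that semi-volatile network state before it changes. It assumes a Unix-like host where the arp and netstat commands are available; on other platforms the commands differ:

```python
# A sketch of capturing semi-volatile network state before it changes.
# Assumes `arp` and `netstat` exist on the host (typical on Unix-like
# systems); adjust the commands for other platforms.
import subprocess
from datetime import datetime, timezone

def snapshot(command, outfile):
    """Run a command and save its output with a UTC capture timestamp."""
    result = subprocess.run(command, capture_output=True, text=True)
    with open(outfile, "w") as f:
        f.write(f"# captured {datetime.now(timezone.utc).isoformat()}\n")
        f.write(result.stdout)

snapshot(["arp", "-a"], "arp_cache.txt")           # ARP cache
snapshot(["netstat", "-rn"], "routing_table.txt")  # routing table
```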
Next down are temporary file systems. Those tend to be around for a little bit of time, but being temporary, they tend to be written over eventually. Sometimes that’s seconds later; sometimes that’s minutes later.
Next is disk. When we store something to disk, that’s generally going to be there for a while. Unfortunately, of course, things could come along and erase or write over that data, so there is still a volatility associated with it. But if we catch it at a certain point, there’s a pretty good chance we’re going to be able to see what’s there.
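One common way to document disk evidence at acquisition time is to hash it, so later analysis can show the data hasn’t changed. A minimal sketch, with “evidence.img” as a placeholder file name:

```python
# Hash an acquired disk image so its integrity can be verified later.
# "evidence.img" is a placeholder for whatever image you acquired.
import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    """Compute a SHA-256 digest without loading the file into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print("sha256:", sha256_of("evidence.img"))
```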
Then there’s remote logging and monitoring data. If information went through a firewall, or there are logs on a router or a switch, all of those logs may be written somewhere. The problem is that on most of these systems, the logs eventually overwrite themselves. Sometimes that’s a day later, sometimes a week later, sometimes an hour later. But generally we think of those as being less volatile than something that might be on someone’s hard drive.
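If you can reach those logs before they rotate, preserving a timestamped copy is a simple safeguard. A sketch, with a hypothetical log path; in practice you would pull the logs from the firewall, router, or syslog collector itself:

```python
# Preserve a copy of a log file before it rotates or overwrites itself.
# The source path and destination directory are hypothetical.
import os
import shutil
from datetime import datetime, timezone

def preserve(log_path, dest_dir="preserved_logs"):
    """Copy a log file (with its metadata) under a timestamped name."""
    os.makedirs(dest_dir, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dest = os.path.join(dest_dir, stamp + "_" + os.path.basename(log_path))
    shutil.copy2(log_path, dest)  # copy2 keeps file timestamps

preserve("/var/log/firewall.log")  # hypothetical path
```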
Then comes the network topology and physical configuration of a system. That, again, is a little less volatile than the logs you might have. And at the bottom is archival media. A DVD-ROM, a CD-ROM, something that’s stored on tape, archived, and sent somewhere else is probably one of the least volatile data sources you can find, because it’s unlikely that that particular digital information is going to change any time in the near future.