swannysec Musings on InfoSec

Starting Small with Threat Intel - Pt. 2



In part one of this series we looked at the basics of threat intelligence and how you can begin to absorb and apply it without any technological investment or barrier to entry. For the second installment, I had originally planned to write at length about some low-effort and low-investment methods of automating the ingestion, processing, and application of freely available threat intelligence sources. However, I’ve decided to take a bit of a detour, because there are some prerequisites for this type of intelligence automation and I believe they are worth examining in detail.

Knowledge

While I’ve previously discussed the fact that you don’t need a fully mature information security program to begin working with threat intelligence, it is helpful to have at least a few things in place to make the most efficient use of threat intelligence data. To begin with, you’ll need one or more human beings capable of digesting threat intelligence data as I outlined in the first part of this series. No automated system will make any amount of threat intelligence data magically useful without a human being making informed decisions about how that information relates to the security and risk posture of the organization. Once you have people ready to understand intelligence data and make decisions based on it and the other data available to you, you can move on to the next prerequisite.

Tools

Threat intelligence indicator feeds, regardless of whether any processing or filtering has been applied, will generate more data than a human can ingest and process via traditional means such as spreadsheets or simple graphs. Therefore, tools capable of accepting, parsing, and manipulating large data feeds are essential. The quickest route to this capability for many organizations will be a flexible SIEM or SIEM-like tool such as Splunk, ELK, or Graylog. These tools will take just about any type of log-oriented data and allow you to parse it and store it as you see fit. Another option is a SIEM that’s designed specifically to accept these feeds; Splunk’s Enterprise Security product will do so, as will LogRhythm, ArcSight, AlienVault, and others. In the next part of this series I’ll also look at some non-traditional, non-SIEM options for handling large intelligence data feeds.
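To make the "accepting and parsing" step concrete, here is a minimal sketch of what any of these tools does under the hood: turning a raw indicator feed into structured, queryable data. The feed excerpt and column names below are hypothetical; real free feeds vary, but CSV with an indicator column plus metadata is a common shape.

```python
import csv
import io

# Hypothetical feed excerpt -- many free intel feeds ship as CSV with an
# indicator column plus metadata. Column names here are illustrative only.
feed_csv = """indicator,type,first_seen
198.51.100.23,ip,2016-01-10
203.0.113.77,ip,2016-01-12
evil-domain.example,domain,2016-01-15
"""

def parse_feed(text):
    """Parse a CSV indicator feed into a dict of {indicator_type: set_of_indicators}."""
    indicators = {}
    for row in csv.DictReader(io.StringIO(text)):
        indicators.setdefault(row["type"], set()).add(row["indicator"])
    return indicators

feed = parse_feed(feed_csv)
print(sorted(feed["ip"]))      # the IP indicators from the feed
print(sorted(feed["domain"]))  # the domain indicators from the feed
```

A SIEM does far more (normalization, retention, search), but the core transformation is the same: unstructured feed in, sets of typed indicators out, ready to be matched against your own logs.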


Once you have a tool in place to help you process and understand large amounts of intelligence data in a meaningful way, you need to operationalize it in some manner. I split this capability into two levels of maturity, which I’ll delve into more in the next post, but which can be roughly defined as visibility-only and enforcement. In order to implement visibility-only, you’ll need one or more security devices or systems capable of outputting useful log data that can be cross-checked against intelligence data for evidence of malicious or suspicious activity. Possible sources of data ideal for this correlation include firewalls, endpoint logs, an IDS such as Snort, Suricata, or Bro (see Security Onion), web proxies, or forensic artifact collectors like Google’s GRR, Mozilla’s MIG, or Facebook’s osquery. In a more mature program, some of those same systems, should they be capable, can be used to actively enforce decisions on intelligence-provided indicators when provided a correctly processed and formatted feed in which the analyst has high confidence. Palo Alto’s dynamic block lists, Symantec Endpoint Protection’s hash blocking, and even Microsoft’s own Group Policy hash rules are all examples of possible enforcement avenues. Again, more detail is coming in the next installment, but let’s move on to the final prerequisite for now.
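The visibility-only level of maturity boils down to a simple operation: check observables in your logs against your indicator set and flag the matches. A hedged sketch of that correlation follows; the log format, field names, and indicator values are all hypothetical and stand in for whatever your firewall or proxy actually emits.

```python
import re

# Hypothetical high-confidence indicator set, e.g. parsed from an intel feed.
bad_ips = {"198.51.100.23", "203.0.113.77"}

# Hypothetical firewall log lines; the key=value layout is illustrative.
firewall_log = [
    "2016-02-01T10:02:11 ALLOW src=10.0.0.5 dst=93.184.216.34 dport=443",
    "2016-02-01T10:02:14 ALLOW src=10.0.0.9 dst=198.51.100.23 dport=80",
    "2016-02-01T10:02:19 DENY  src=10.0.0.5 dst=203.0.113.77 dport=4444",
]

def match_indicators(lines, indicators):
    """Yield (line, matched_ip) for any log line whose destination IP is an indicator."""
    for line in lines:
        m = re.search(r"dst=(\S+)", line)
        if m and m.group(1) in indicators:
            yield line, m.group(1)

hits = list(match_indicators(firewall_log, bad_ips))
for line, ip in hits:
    print(f"HIT {ip}: {line}")
```

In practice a SIEM lookup or correlation rule does this at scale, and the enforcement level simply inverts the flow: instead of flagging matches after the fact, you push the same vetted indicator set out to a device (e.g. as a block list) that acts on it.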

Self-Awareness

Finally, before you even consider external sources of intel data, you need to be mining your own internal data sources for actionable intelligence. What can possibly be more relevant than data on what is actually happening on your network? (Author’s note: I wrote the preceding sentence before re-reading the links that follow in a table below. I feel dirty for unintentionally taking the words right out of Rick Holland’s mouth. Sorry Rick!) If you have any relatively sophisticated border controls (firewalls, IDS/IPS, proxy) or endpoint detection suites, they will likely be capable of producing basic reporting on threats seen in your environment. If you have a SIEM, you can glean even more from the data produced by those systems. Additionally, are you mining your incident/malware response process? There is valuable information about the “what, where, when, and how” of threats directed at your organization inside the logs and incident reports produced therein. The “why” and the “who” may be a little harder to glean, and are largely beyond the scope of this discussion, but all of that is possible with internal threat intelligence.

There is a ton of great work by people far smarter than myself that speaks to the value and methods of internal threat intelligence including things like hunting, which represents a maturity level far outside the scope of this series. If you’d like to know more, read here:

Know More!

Maximizing Your Investment in Cyberthreat Intelligence Providers: From Rick Holland, formerly of Forrester, now VP at Digital Shadows. Covers intel more broadly (and well), but speaks to the value of internal threat intelligence. Rick’s writings on threat intel are a great starting point for anyone interested.
Threat Intelligence Awakens: Rick’s recent presentation for the SANS CTI Summit. Fun, extremely relevant, and highlights some important points on internal intel.
Internal Threat Intelligence - What Hunters Do: From Raffael Marty at pixlcloud. Discusses the use of internal data for hunting.
APT Threat Analytics - Part 1 and Part 2: From Nigel Willson at AT&T. Slightly older material, still extremely relevant, with excellent information about internal (and external) intelligence gathering and use.
On Internally-sourced Threat Intelligence: From Anton Chuvakin at Gartner. Talks about a variety of internal intel collection activities, some of which are appropriate for this discussion and some of which are more advanced.

Remember, however, that this doesn’t have to be rocket science; we’re starting small. Generate top ten lists of exploits, malware, brute-force attempts, etc. and start to observe trends in those reports. Is a particular exploit targeting a particular host? Are you seeing an uptick in a particular strain of malware? Is one IP the source of many alerts all of a sudden or consistently? Dig a little deeper by looking at account activity. Failed logins, privilege elevations, password changes, and logins from unusual geographic locations all offer value for internal context. Also make sure you’re looking at your vulnerabilities and you have a reasonable inventory of the assets you’re defending; it’s critical that you understand the attack surface available to bad actors. Think about how all of this internal data speaks to @DavidJBianco’s Pyramid of Pain; can you start to discern actors and TTPs based on what you’re observing and put those findings to use in your decision making? If nothing else, begin learning the natural rhythms of your network; you’ll notice when things stand out! In short, make sure you’re looking at what’s happening on the inside before you begin adding external information to the picture.

As I noted in the first part of this series, however, once you have both internal and external data, put them together to form a holistic view of the threats to your environment; this broad view will enable better decision making and a more effective defense. Hopefully, you’re now equipped with a better understanding of the tools and techniques (TTPs anyone?) you need to have in place before you begin ingesting and operationalizing external threat intelligence data. In the next part, we’ll talk about the nuts and bolts of doing just that, as well as the caveats (external threat intel feeds are not magic; human analysts not included). Please reach out to me @swannysec, I’d love to hear your feedback. Thanks for reading!

Note: I will be taking a brief break in this series to run a few analysis pieces in the coming weeks.