swannysec Musings on InfoSec

Changing Things Up


If you’re reading this, you’ve likely noticed that this blog and my Twitter account have been quiet of late. Summer is often a busy time, but this year my silence has a different cause. Over the last few months I’ve been engaged in a lengthy recruitment and interview process, and I’m really excited to share that after a decade in public higher education, I’ll be joining Scott Roberts and the other fine folks at GitHub next week! I’ll be working in some form of DFIR role, but I’m not exactly certain what it will entail over the long run; GitHub is still a growing company, after all. In the near term, I intend to play Robin to Scott’s Batman, or perhaps serve as “Bad Guy Catcher Minion,” while I learn as much as I can and find my “sea legs.”

Sidebar: When I set this blog up almost a year ago, I chose this theme completely unaware that Scott had done the same and didn’t realize it until months later. Great minds think alike? Either that or I just rode his coattails all the way to GitHub.

While I won’t go into great detail on the matter, I do want to take a moment to discuss GitHub’s recruitment process. All of my interview experience prior to GitHub (both as an interviewee and as an interviewer) was extremely formal and restrictive, as might be expected of a state agency. GitHub’s process couldn’t have been more different; it was refreshingly open, honest, and relaxed. This shouldn’t, however, be confused with an easy interview process. It involved multiple video interviews, phone calls, hands-on exercises, and a marathon (for someone new to the private sector) in-person interview with both technical and non-technical personnel. None of these steps was a cakewalk, though I enjoyed every one and learned a lot along the way.

The most fascinating part of the process for me was that each conversation was a two-way street. Not only were my interviewers genuinely interested in my input on challenges they faced at GitHub, but I was able to share some of my own challenges and receive meaningful input in return. I walked away from many of the conversations with valuable lessons learned and as a better professional, no matter the outcome. That’s a neat feeling, and the further I got into the process, the more I realized that sort of welcoming openness is endemic to GitHub’s culture. Everyone I’ve met so far has been wonderful, and despite the length of the process and the inherent stress of any interview situation, I enjoyed it enormously. Fortunately, I walked away with more than just lessons learned!

So where do we go from here? I’m presently suffering from an enormous case of imposter syndrome. The professional challenges involved in moving to GitHub are not insignificant. The environment is a complete 180 from the one I’ve just spent a decade operating in, save some philosophical similarities. Additionally, I’m going from a very broad infosec role that included engineering, architecture, policy and compliance work, and only some IR work, to a more specialized role that will primarily handle IR. I will need to learn, or re-learn, a lot of new things both technically and in terms of business process. The near term will be dedicated to getting to know GitHub, building or rebuilding DFIR-specific skills, and moving back up Burch’s hierarchy of competence in an effort to defeat imposter syndrome and be a more effective incident responder.

[Image: Burch’s hierarchy of competence]

I will continue to blog and tweet, though I expect my focus will shift somewhat from threat intelligence to DFIR matters, as I tend to use the blog, and to a lesser extent Twitter, to flesh out and reinforce what I’m learning or working on. I will likely contribute less frequently, however; I have a lot to absorb as I onboard at GitHub, and there are some personal goals for the coming year I’d like to devote time to:

  • Increase the quantity and quality of reading I do.
    • My master’s degree sort of killed my desire to read a few years ago, which is a shame. I was a voracious reader before that experience, and I believe I need to read more to further my personal development. To this end, I am pushing more of my reading, including blogs and RSS, to my Kindle and trying to read away from a PC. This has the side benefit of better-quality sleep, as I tend to read in the evening and less screen time will help.
  • Tackle Python and eventually tinker with Go.
    • I will likely never be a great writer of code; it simply doesn’t come naturally to this liberal arts major. I have to work really hard at it, and I honestly don’t enjoy it all that much. That said, I want to reach a point where I establish a reasonable level of fluency and I’m capable of better communicating with those who do write code well.
  • Exercise more.
    • Duh. I’m thirty now and I need to be in better shape. I’ve got a couple of awesome kids to be healthy for and I want to feel better too. I’d like to ride a bike a couple times a week and also go back to some weightlifting.
  • Spend more time with my kids.
    • I was seriously burnt out over the last couple of years. My kids are growing up fast and I want to enjoy this time. My oldest is getting into computers and gaming, and shares my love of military history (win!). My youngest is a stout-hearted wild child who brings me equal parts joy and trepidation by way of a risk-taking sense of adventure. Both are bright, curious, adorable, and deserve more of Dad’s time.
  • Begin speaking at conferences.
    • I originally planned to begin speaking this fall or winter, but in light of the new challenges I’m taking on, I’ve decided to spend a little more time absorbing/observing and begin speaking next spring/summer. Nonetheless, it’s on my agenda.

Thanks for tagging along for this wild ride. I look forward to sharing more of my journey as I take on new challenges at GitHub. For now, off to GitHub HQ! As always, feel free to reach out to me @swannysec with your feedback!

Talking Point - The Whiz-Bang Intel Bias


I recently had an interesting conversation with a couple of people from the threat intelligence community around the idea of adversary innovation. Essentially, someone linked to a Twitter blurb from a recent convention or trade show where the speaker argued that we, as defenders, need to innovate faster because our adversaries are innovating every day.

The immediate reaction, from a couple of very smart people whom I consider mentors, as well as my own reaction, was that the idea was hogwash; attackers are lazy like the rest of us and only innovate when forced to do so. This makes sense, right? Humans, as a species, are inherently lazy, and I know for a fact that most of those involved in technical fields loathe extraneous effort. So this idea that attacker methods are constantly evolving, and that we must rise to meet that challenge, is surely patently false, correct?

Upon further reflection, I decided that our initial instinct was wrong, and that it may in fact indicate a bias on our part. While there are certainly attackers who innovate only when absolutely forced to do so, and many who never do at all, I think they may represent a smaller share of the total population than we realize. Those working in or otherwise involved in the serious study of threat intelligence tend to dwell in the land of the APT. We eat, sleep, and breathe cyber-espionage, state-sponsored actors, and super-sophisticated financial crime syndicates. We do this because it’s our job, because it’s what keeps the lights on, because it’s fascinating, or maybe just because we all secretly imagine ourselves to be Jack Ryan.

[Image: Jack Ryan]

The reality, however, is that these kinds of threats probably represent a fraction of what the majority of the world contends with every day. In fact, for most organizations, the biggest threat comes from financially motivated commodity malware. The ransomware industry (and it certainly qualifies as an industry at this point) and banking trojans are likely responsible for far more damage to the worldwide economy than APTs and other sophisticated attacks will ever cause. This year’s Verizon DBIR supports this conclusion, as summarized perfectly by Rick Holland below:

[Image: Rick Holland’s tweet summarizing the Verizon DBIR]

The shocking part of this realization, for me, came when I reflected on just how much innovation actually occurs in the malware industry. Take a look at any of the major ransomware or exploit kit campaigns over the last six months. The rate of change is astounding! I probably read two or three reports every week about how the actors behind Angler EK, Locky, TeslaCrypt, or CryptXXX have changed something in their delivery method, in their infrastructure, or their anti-detection measures. Here are a few examples:

• How the EITest Campaign’s Path to ANGLER EK Evolved Over Time: Excellent overview of EITest and the payload and URL scheme changes it has seen since 2014.
• Your Package Has Been Successfully Encrypted: In-depth examination of a relatively new variant of the oft-iterated TeslaCrypt ransomware. Includes a great graphic that shows the rapid fragmentation of the ransomware industry.
• Angler EK from 82.146.46[.]242 – New URI Pattern: Analysis of Angler EK traffic from a particular host, showing a brand new URI pattern.

It makes sense that commodity malware operators have to innovate more often. Their methods are simpler, more visible, and ultimately easier for defenders and their technology to defeat. Signature generation and alerting for commodity malware is likely automated or semi-automated by many, raising the stakes for those peddling that malware. If they fail to innovate, they stop making money as signatures and patch management invalidate their methods. APT groups or those engaging in cyber-espionage? Their methods are more complex and sophisticated, and they carry the added burden of maintaining persistence inside an organization without being detected.

[Image: David Bianco’s Pyramid of Pain]

At the end of the day, this is a problem most simply demonstrated by David Bianco’s Pyramid of Pain. In essence, the Pyramid of Pain illustrates that the majority of indicators (IPs, hashes, domains) are cheap for an attacker to replace and comparatively easy for a defender to act on, while the more challenging, complex indicators like tools and TTPs are both hard to defend against effectively and costly for an adversary to replace. Commodity malware authors and distributors simply need to change their executable or packaging or their URL-redirect/gate scheme and push the change out across their delivery network, without regard for the noise they make while doing so. These changes live primarily at the bottom of the pyramid; they’re not terribly costly for the adversary. In the case of APTs and other sophisticated intrusions, however, most of the methods involved sit higher up the pyramid. Accordingly, the cost of changing those methods, particularly while avoiding detection, is quite high. No wonder more sophisticated adversaries innovate only when they absolutely must.

Now, armed with an understanding that the speaker we criticized initially was probably more correct than we gave him credit for, where does the bias lie and how can we combat it? Personally, I have to remember to pinch myself now and then and be mindful of the fact that the world really isn’t as full of Lotus Blossoms, PLATINUMs, and APT6s as it seems.  While those types of things are amazing brain fodder for me, my own day-to-day job is a prime example of the broader reality, which is far less exciting.

[Image: Wizard]

Image courtesy floodllama, provided under a Creative Commons license.

That reality is that commodity malware is my organization’s single largest threat (and it can be just as damaging as some APTs, if not more so). Further, the unfortunate truth is that those threats do iterate quickly. So, sometimes, it’s important to take off my whiz-bang intel wizard hat and survey the landscape from ground level; the perspective that brings is critical. I suggest we all take that step back and try to take in a little perspective now and then.

As always, I’d love your feedback; please reach out @swannysec.

Uncovering a New Angler-Bedep Actor


Some time ago, I did some analysis that linked a fairly run-of-the-mill Torrentlocker distribution network to actors and infrastructure delivering Pony. I promised some follow-up on that, and it’s still coming, but it’s proving to be a bit of a rabbit hole. I need some more time to dot my i’s and cross my t’s, but I hope to have something to share soon.

In the meantime, I wanted to take some time to write up another piece of research I recently completed on a group of well-known Angler EK and Bedep actors. If you’ve followed along with Angler and Bedep over the last year or so, you’ll no doubt be familiar with [email protected], [email protected], and [email protected]. These accounts are responsible for the registration of large numbers of domains associated with the distribution of Angler EK and Bedep, as well as some other unpleasant creatures such as Kazy and Symmi. For more information, check out this great write-up from Nick Biasini over at Talos. An Alienvault OTX pulse with all the goodies, likely from Alex Pinto at Niddel, is available here.

Now that you’re familiar with the campaign in question, let’s take a deep dive. For this analysis, I used Paterva’s Maltego loaded with transforms from two fantastic sources: PassiveTotal and ThreatCrowd. Both tools have free options that can get you started on some great analysis, so give them a try!

To begin, I entered the three well-known actors referenced above as e-mail entities in Maltego:

[Screenshot: the three starting actor e-mail entities in Maltego]

Once they were entered, I started by using PassiveTotal to return all known domains registered by these addresses, as shown in the screenshot below (do note that you could manually import these from the Alienvault IOCs provided above as well). The results follow in the second image; that’s a lot of domains!

[Screenshot: running the PassiveTotal whois transform]

[Screenshot: the graph after the first expansion, showing domains for all three registrants]
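As an aside, if you’d rather script this step than click through Maltego, the same lookup can be made against PassiveTotal’s API directly. Here’s a minimal Python sketch, assuming PassiveTotal’s v2 WHOIS search endpoint; the credentials and actor addresses below are placeholders, so check the current API docs before leaning on the exact field names:

```python
import requests

# Minimal sketch: find all domains whose WHOIS registrant e-mail matches a
# known actor address, via PassiveTotal's v2 API. The credentials and actor
# addresses below are placeholders, not real values.
AUTH = ("you@example.com", "YOUR_API_KEY")  # PassiveTotal uses HTTP basic auth
BASE = "https://api.passivetotal.org/v2"

def domains_for_registrant(email):
    """Return domains registered with the given e-mail, per PassiveTotal."""
    resp = requests.get(
        f"{BASE}/whois/search",
        auth=AUTH,
        params={"query": email, "field": "email"},
    )
    resp.raise_for_status()
    return sorted({r.get("domain", "") for r in resp.json().get("results", [])})

for actor in ("actor1@example.com", "actor2@example.com", "actor3@example.com"):
    print(actor, domains_for_registrant(actor))
```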

Enter ThreatCrowd. Let’s go ahead and enrich each of these domains with any available information ThreatCrowd has to offer (sorry for the API load, Chris!). Select all the domains as follows:

[Screenshot: selecting all domains in the graph]

Once you have all the domains selected, use the following transform from ThreatCrowd. The results are below.

[Screenshot: the ThreatCrowd domain enrichment transform]

[Screenshot: the enriched graph, clustered around each registrant e-mail]

As you can see above, I’ve re-arranged the graph into the Organic layout in order to make the clustering around each registrant e-mail (in red) apparent. Below, observe a zoomed view of the links indicating domains from each cluster sharing IP infrastructure. The links are hard to see, so I circled them in red:

[Screenshot: cross-cluster links indicating shared IP infrastructure, circled in red]

At this point, we have clear overlap between these three actors, as they’re utilizing some of the same hosting providers and individual hosts to serve malicious domains. In order to go one step further, I expanded the graph again, this time by enriching all IP addresses with ThreatCrowd. (A note of caution here: this can return a large number of domains if an IP you choose to expand belongs to a large webhost, so take care to double-check whether returned entities are relevant.) Here’s what the graph looks like after one round of domain enrichment and one round of IP enrichment:

[Screenshot: the graph after IP enrichment]
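For the script-inclined, here’s a rough Python sketch of the same pivot done above in Maltego: resolve each domain to its IPs with ThreatCrowd, then resolve those IPs back to co-hosted domains. The endpoints reflect ThreatCrowd’s public v2 API as I understand it, the seed domain is a placeholder, and mind the rate limit (sorry again, Chris!):

```python
import time
import requests

TC = "https://www.threatcrowd.org/searchApi/v2"

def domain_ips(domain):
    """IPs the domain has resolved to, per ThreatCrowd's passive DNS."""
    r = requests.get(f"{TC}/domain/report/", params={"domain": domain})
    return {res["ip_address"] for res in r.json().get("resolutions", [])}

def ip_domains(ip):
    """Domains that have resolved to the IP, per ThreatCrowd."""
    r = requests.get(f"{TC}/ip/report/", params={"ip": ip})
    return {res["domain"] for res in r.json().get("resolutions", [])}

seed_domains = ["malicious-example.com"]  # placeholder; use the graph's domains
for domain in seed_domains:
    time.sleep(10)  # ThreatCrowd asks for at most one request every 10 seconds
    for ip in domain_ips(domain):
        time.sleep(10)
        cohosted = ip_domains(ip) - {domain}
        if cohosted:
            print(f"{domain} shares {ip} with: {', '.join(sorted(cohosted))}")
```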

From here, I began working through new clusters of domains looking for new leads by checking whois records with PassiveTotal and looking for malware and other associated infrastructure with ThreatCrowd. After hunting around for a while, I discovered the following indicator, with new domains discovered from it circled in red:

[Screenshot: the pivotal IP, with newly discovered domains circled in red]

This indicator uncovered something new. Below is a fresh graph, for clarity, containing the new domains discovered from that IP, followed by their enrichment via PassiveTotal’s whois details (scrubbed of all but registrant name, e-mail, and address):

[Screenshot: a fresh graph of the newly discovered domains]

[Screenshot: whois details for the new domains]
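If you want to pull those whois details programmatically rather than through the transform, a hypothetical sketch against PassiveTotal’s v2 whois lookup follows; the domain is a placeholder and the response keys may differ from what I show here, so inspect the raw JSON yourself:

```python
import requests

# Hypothetical sketch of the whois pivot: pull the whois record for one of
# the newly discovered domains and eyeball the registrant details. The domain
# below is a placeholder, and the response keys printed are assumptions.
AUTH = ("you@example.com", "YOUR_API_KEY")

record = requests.get(
    "https://api.passivetotal.org/v2/whois",
    auth=AUTH,
    params={"query": "junky-new-domain.example"},  # placeholder domain
).json()

for key in ("registrant", "registrar", "registered", "contactEmail"):
    print(key, "->", record.get(key))
```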

Who is Sara Marsh, why is she registering obviously junky (and potentially DGA-generated) domains, and why is she sharing infrastructure with the likes of the actors we started with? At this point, I almost hit a dead end. Most of my usual publicly available sources had no information of significance on Sara Marsh, her e-mails, or the domains she registered. ThreatCrowd showed her domains as adjacent to, but not directly hosting, malware. Alienvault OTX had no information on her or her domains, and neither did most of the other sources I usually check. However, good old-fashioned Google came to the rescue. A quick search of the new e-mail address revealed a Pastebin paste from an anonymous source that referenced [email protected]:

[Screenshot: the Pastebin paste]

Post-compromise Bedep traffic observed to destination domains bokoretanom()net, op23jhsoaspo()in, koewasoul()com, and dertasolope7com()com.

Observed referers (forged - machines never actually browsed to the referers):  loervites()com, newblackfridayads()com, alkalinerooms()net, new-april-discount()net, violatantati()com, nicedicecools()net, books-origins-dooms()net, adsforbussiness-new()com

Observed traffic patterns:
/ads.php?sid=1923
/advertising.html
/ads.js
/media/ads.js
/r.php?key=a5ec17eed153654469be424b96891e79

Summary:
Bedep immediately opens a backdoor on the target machine; it also generates click-fraud traffic, and can be used to load further malware.  Bedep was written by the authors of the Angler Exploit Kit, and as such, AnglerEK is the primary distribution method for this malware.

All observed domains are registered to Sara Marsh ([email protected]) and Gennadiy Borisov ([email protected]) through Domain Context.  These are certainly fake names and email addresses, but appear to be used often.  As such, they are reliable indicators, for the time being, that a domain is malicious.

While I don’t usually rely on anonymous sources, this simply confirmed what was already fairly apparent. It was backed up by the presence of [email protected] on malekal.com’s malwaredb, sharing an IP with a domain from none other than [email protected].

At this stage, I added [email protected] back to our original graph, and used PassiveTotal to return all domains registered to that address. The result is below:

[Screenshot: the original graph with the fourth actor’s domains added]

Here’s an additional representation using bubble view. This view adjusts the size of the entities based, in this case, on the number of links associated with them. Again, the actor e-mails are in red:

[Screenshot: bubble view, with actor e-mails in red]

By now, it is readily apparent that we’ve uncovered an additional actor in this Angler EK/Bedep campaign. In order to further demonstrate some of the relationships between these actors, I selected four related domains from the graph above, moved them to a fresh graph, and enriched them with both ThreatCrowd and PassiveTotal (displaying only relevant results):

[Screenshot: four related domains enriched with both ThreatCrowd and PassiveTotal]

The above image displays in a nutshell the close relationship between these actors. Nick Biasini did some fine work in uncovering the first three actors; now a fourth is apparent as well. A list of domains registered to [email protected] follows, and the same data is available as an Alienvault OTX pulse embedded below. The list and the Maltego graph are also available on my GitHub repo.

As always, I appreciate any feedback; give me a shout @swannysec.

Likely Angler EK/Bedep Domains Registered by [email protected]:

qwmpo347xmnopw[.]in
alkalinerooms[.]net
swimming-shower[.]com
guy-doctor-eye[.]com
dertasolope7com[.]com
abronmalowporetam[.]in
j3u3poolre[.]in
joomboomrats[.]com
7opncxoiep-jek[.]in
violatantati[.]com
xvuxemuhdusxqfyt[.]com
lsaopajipwlo-sopqkmo[.]in
fl4o5i58kdbss[.]in
aneiuayte9k0o[.]in
term-spread-medicine[.]com
betterstaffprofit[.]com
ldsfo409salkopsh[.]in
bokoretanom[.]net
geraldfrousers[.]net
boometread[.]in
80poloertamo[.]in
newblackfridayads[.]com
books-origins-dooms[.]net
shareeffect-affair[.]com
loervites[.]com
axenndnyotxkohhf69[.]com
art-spite-tune[.]com
vewassorthenha[.]in
ruributgot[.]in
1000mahbatterys[.]com
xcmno54pjasghg[.]in
trusteer-tech[.]com
adsforbussiness-new[.]com
nicedicecools[.]net
xbvioep4naop[.]in
taxrain-bottom[.]com
dspo4-skdfhahsyr[.]in
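A quick note for anyone operationalizing this list: the domains are defanged with “[.]” so nobody clicks one by accident. Here’s a minimal refang sketch, assuming you’ve saved the list to a local text file (the file name is made up):

```python
# Minimal sketch: refang the defanged list above (swap "[.]" for ".") so it
# can feed a blocklist or an enrichment script. The input file name is an
# assumption; save the list above to that file first.
with open("bedep_domains_defanged.txt") as f:
    domains = [line.strip().replace("[.]", ".") for line in f if line.strip()]

print("\n".join(domains))
```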

Talking Point - On Attribution


Over the next week or so, I’m going to cover, in short form, a couple of topics that have been rattling around my brain for a while now as I continue learning and growing more comfortable working with and thinking about threat intelligence. We are fortunate to have a wonderful community built up around the discipline and I’ve had the opportunity to interact with a lot of amazing people who provide immeasurable wisdom and perspective. Exposure to the community and my own work related to the field inevitably leads me to draw some conclusions and formulate some strong opinions, so here we go.

On Attribution

Whodunnit? Stop it! Stop right there. Before you ask that question, or allow it to be asked of you by management, you need to ask a different question first. Does it matter? While I believe attribution has its place in the analytical process associated with generating threat intelligence, I’m not of the belief it’s always relevant to an organization’s aims in producing that intelligence. While I’m certain that your execs would love to hear that “China did it,” does that matter to your organization? Can you actually do anything with that information?

[Image: Electronic Dragon]

Image courtesy Charis Tsevis, provided under a Creative Commons license.

Attribution, as with any other element of good threat intelligence, needs to be actionable for it to be relevant. See Richard Bejtlich’s post on attribution here for more on the value of attribution used properly. If you can successfully make a strategic, operational, or tactical shift on the basis of knowing who your adversary is, then by all means, attribute! However, I would imagine very few organizations possess the operational and intelligence maturity to respond meaningfully to knowing which specific cybercriminal, activist, or nation-state actor is targeting them. My personal belief is that all the time, effort, and hot air spent over attribution in our industry is largely wasteful. I will say that there is one key exception to this, however; attribution as an element of analysis (as opposed to an end-goal) may give valuable context to an analyst if they can pivot on some element of that attribution and use it to discover additional items of interest related to the actor. Be careful though, because attribution can also introduce cognitive bias! Robert M. Lee, a man much smarter than I, covers this topic and how attribution applies to the various levels of intelligence here.

Attribution is flashy. Attribution makes it sound like you really know your stuff. Attribution might even give your shareholders someone else to blame. However, if you’re producing intelligence with a stated goal of determining attribution, ask yourself if and how that is relevant to your requirements. If you’re able to translate attribution to meaningful action designed to prevent or respond to a threat, bravo, please continue! If you can’t figure out how to make attribution work for you, either as a component of a finished intelligence product or as an analytical tool, re-assess your goals and requirements and re-direct your efforts to more meaningful analysis that will produce real return on investment for your organization. Threat intelligence is hard enough to master and demonstrate the value of without wasting time pointing fingers to no end, so please, think before you attribute.

Your feedback is important, please head on over to @swannysec and share your thoughts!

Starting Small with Threat Intel - Pt. 2


In part one of this series we looked at the basics of threat intelligence and how you can begin to absorb and apply it without any technological investment or barrier to entry. For the second installment, I had originally planned to write at length about some low-effort and low-investment methods of automating the ingestion, processing, and application of freely available threat intelligence sources. However, I’ve decided to take a bit of a detour because there are some pre-requisites for this type of intelligence automation and I believe they are worth looking at in detail.

Knowledge

While I’ve previously discussed the fact that you don’t need a fully mature information security program to begin working with threat intelligence, it is helpful to have at least a few things in place to make the most efficient use of threat intelligence data. To begin with, you’ll need one or more human beings capable of digesting threat intelligence data as I outlined in the first part of this series. No automated system is going to make any amount of threat intelligence data magically useful without a human being making informed decisions about the information contained therein as it relates to the security and risk posture of the organization. Once you’re ready to understand intelligence data and make decisions based on it and other data available to you, let’s move on to the next prerequisite.

Tools

Threat intelligence indicator feeds, regardless of whether any processing or filtering has been applied, will generate more data than a human can ingest and process via traditional means such as spreadsheets or simple graphs. Therefore, tools capable of accepting, parsing, and manipulating large data feeds are essential. The quickest way to this capability for many organizations will be a flexible SIEM or SIEM-like tool such as Splunk, ELK, or Graylog. These tools will take just about any type of log-oriented data and allow you to parse it and store it as you see fit. Another option is a SIEM designed specifically to accept these feeds: Splunk’s Enterprise Security product will do so, as will LogRhythm, ArcSight, Alienvault, and others. In the next part of this series I’ll also look at some non-traditional, non-SIEM options for handling large intelligence data feeds.

[Image: Jammed Up]

Photo Credit: Julie Rybarczyk

Once you have a tool in place to help you process and understand large amounts of intelligence data in a meaningful way, you need to operationalize it in some manner. I split this capability into two levels of maturity, which I’ll delve into more in the next post, but which can be roughly defined as visibility-only and enforcement. In order to implement visibility-only, you’ll need one or more security devices or systems capable of outputting useful log data that can be cross-checked against intelligence data for evidence of malicious or suspicious activity. Possible sources of data ideal for this correlation include firewalls, endpoint logs, an IDS such as Snort, Suricata, or Bro (see Security Onion), web proxies, or forensic artifact collectors like Google’s GRR, Mozilla’s MIG, or Facebook’s osquery. In a more mature program, some of those same systems, should they be capable, can be used to actively enforce decisions on intelligence-provided indicators when provided a correctly processed and formatted feed in which the analyst has high confidence. Palo Alto’s dynamic block lists, Symantec Endpoint Protection’s hash blocking, and even Microsoft’s own Group Policy hash rules are all examples of possible enforcement avenues. Again, more detail is coming in the next installment, but let’s move on to the final prerequisite for now.
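To make the visibility-only level concrete, here’s a minimal, hypothetical sketch that cross-checks outbound firewall logs against a simple one-indicator-per-line feed of known-bad IPs. The file names and log columns are assumptions for illustration, not a reference implementation:

```python
import csv

# A minimal, hypothetical sketch of the visibility-only level: cross-check
# outbound firewall logs against a one-indicator-per-line feed of known-bad
# IPs. File names and log columns are assumptions for illustration only.

def load_feed(path):
    """Load one indicator per line, skipping blanks and '#' comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

bad_ips = load_feed("intel_feed_ips.txt")

with open("firewall_log.csv") as f:
    # assumed columns: timestamp, src_ip, dst_ip
    for row in csv.DictReader(f):
        if row["dst_ip"] in bad_ips:
            print(f"HIT {row['timestamp']}: {row['src_ip']} -> {row['dst_ip']}")
```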

Self-Awareness

Finally, before you even consider external sources of intel data, you need to be mining your own internal data sources for actionable intelligence. What can possibly be more relevant than data on what is actually happening on your network? (Author’s note: I wrote the preceding sentence before re-reading the links that follow below. I feel dirty for unintentionally taking the words right out of Rick Holland’s mouth. Sorry Rick!) If you have any relatively sophisticated border controls (firewalls, IDS/IPS, proxy) or endpoint detection suites, they will likely be capable of producing basic reporting on threats seen in your environment. If you have a SIEM, you can glean even more from the data produced by those systems. Additionally, are you mining your incident/malware response process? There is valuable information about the “what, where, when, and how” of threats directed at your organization inside the logs and incident reports produced therein. The “why and the who” may be a little harder to glean, and are largely beyond the scope of this discussion, but all of that is possible with internal threat intelligence.

There is a ton of great work by people far smarter than myself that speaks to the value and methods of internal threat intelligence including things like hunting, which represents a maturity level far outside the scope of this series. If you’d like to know more, read here:

Know More!

• Maximizing Your Investment in Cyberthreat Intelligence Providers: From Rick Holland, formerly of Forrester, now VP at Digital Shadows. Covers intel more broadly (and well), but speaks to the value of internal threat intelligence. Rick’s writings on threat intel are a great starting point for anyone interested.
• Threat Intelligence Awakens: Rick’s recent presentation for the SANS CTI Summit. Fun, extremely relevant, and highlights some important points on internal intel.
• Internal Threat Intelligence - What Hunters Do: From Raffael Marty at pixlcloud. Discusses the use of internal data for hunting.
• APT Threat Analytics - Part 1 and Part 2: From Nigel Willson at AT&T. Slightly older material, still extremely relevant, with excellent information about internal (and external) intelligence gathering and use.
• On Internally-sourced Threat Intelligence: From Anton Chuvakin at Gartner. Talks about a variety of internal intel collection activities, some of which are appropriate for this discussion and some of which are more advanced.

Remember, however, that this doesn’t have to be rocket science; we’re starting small. Generate top ten lists of exploits, malware, brute-force attempts, etc. and start to observe trends in those reports. Is a particular exploit targeting a particular host? Are you seeing an uptick in a particular strain of malware? Is one IP the source of many alerts all of a sudden or consistently? Dig a little deeper by looking at account activity. Failed logins, privilege elevations, password changes, and logins from unusual geographic locations all offer value for internal context. Also make sure you’re looking at your vulnerabilities and you have a reasonable inventory of the assets you’re defending; it’s critical that you understand the attack surface available to bad actors. Think about how all of this internal data speaks to @DavidJBianco’s Pyramid of Pain; can you start to discern actors and TTPs based on what you’re observing and put those findings to use in your decision making? If nothing else, begin learning the natural rhythms of your network; you’ll notice when things stand out! In short, make sure you’re looking at what’s happening on the inside before you begin adding external information to the picture.
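To show just how small “small” can be, here’s a short sketch of a top-ten report built from IDS alert logs; the file name and columns are assumptions, so adapt it to whatever your gear actually exports:

```python
import csv
from collections import Counter

# A tiny sketch of "starting small": a top-ten report of IDS signatures by
# source IP. The file name and column names are assumptions; adapt them to
# whatever your IDS or firewall actually exports.
counts = Counter()
with open("ids_alerts.csv") as f:
    for row in csv.DictReader(f):  # assumed columns: signature, src_ip
        counts[(row["signature"], row["src_ip"])] += 1

for (signature, src_ip), n in counts.most_common(10):
    print(f"{n:5d}  {signature}  from {src_ip}")
```

Run weekly and diffed against last week’s output, even something this crude starts to surface the trends described above.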

As I noted in the first part of this series, however, once you have both internal and external data, put them together to form a holistic view of the threats to your environment; this broad view will enable better decision making and a more effective defense. Hopefully, you’re now equipped with a better understanding of the tools and techniques (TTPs anyone?) you need to have in place before you begin ingesting and operationalizing external threat intelligence data. In the next part, we’ll talk about the nuts and bolts of doing just that, as well as the caveats (external threat intel feeds are not magic; human analysts not included). Please reach out to me @swannysec, I’d love to hear your feedback. Thanks for reading!

Note: I will be taking a brief break in this series to run a few analysis pieces in the coming weeks.