Threat Hunting with OpenCTI + OpenAEV + Splunk ESCU
Defining the scope for a threat hunt is challenging. Teams must prioritize resources while understanding the behavior and impact of potential threats and avoiding scope creep.
Following MITRE Threat-Informed Defense principles, effective threat hunting is driven by cyber threat intelligence and validated against defensive measures. This article shows how to:
- Use Priority Intelligence Requirements (PIRs) in OpenCTI to focus hunts
- Identify relevant Splunk ESCU detections mapped to ATT&CK techniques
- Leverage OpenAEV to simulate threats and validate detections to close gaps
TL;DR
- Prioritize CTI with PIRs to drive hypothesis-based hunts
- Build an APT38/Sapphire Sleet dashboard in OpenCTI to scope hunts
- Ingest Splunk ESCU detections and map them to ATT&CK patterns
- Narrow scope via recent campaigns and ClickFix behaviors
- Hunt with SPL and validate detections through OpenAEV scenarios
- Turn results into improvements across people, process, and technology
Threat-Informed Hunting: Threat Hunting With MITRE Threat-Informed Defense
Threat hunting is more than searching for anomalies — it is the deliberate, proactive pursuit of adversary behavior before an intrusion fully unfolds. And like any disciplined investigation, the hunt is only as good as the framework that guides it. To stay focused, repeatable, and aligned with adversary tradecraft, we lean on structured models such as Splunk SURGE’s PEAK Threat Hunting Framework.
In this 5-Act walkthrough, we will reenact an intelligence-driven hunt, end-to-end — from defining a testable hypothesis, to mapping intelligence through MITRE ATT&CK, to validating exposure via OpenAEV.
Act 1: Structuring a Hunt: Testable Hypothesis, MITRE, and PEAK
Testable Hypothesis
Threat hunting encompasses multiple hunt types: hypothesis-driven, baseline-driven, and model-assisted. In this use case, we will focus on a hypothesis-driven hunt — the most aligned with Threat‑Informed Defense — which starts with a testable statement.
By “testable” statement, we mean a clear, specific, evidence‑based claim about potential adversary behavior that can be validated or disproven. For example:
- “Lazarus wants to steal my data.” → Not testable
- “Lazarus uses a connection proxy to route traffic between internal hosts and C2.” → Testable
With this second statement, we can scope a hunt because it is testable. This is where MITRE Threat‑Informed Defense brings structure:
MITRE Threat-Informed Defense
Threat-informed defense is a continuous process in which defenders and adversaries are constantly learning and evolving.
When prioritizing which hypotheses to pursue, MITRE Threat-Informed Defense helps connect CTI to Testing & Evaluation, which can then be used for Defensive Measures.

As shown above, Threat-Informed Defense emphasizes a three-phase cycle: identify relevant CTI, test and evaluate behaviors in your environment, and address defensive gaps, then repeat.
For threat hunting, this translates into:
- Identify and prioritize CTI relevant to your sector, region, and assets
- Extract details such as attack patterns, indicators, and malware to scope hypotheses
- Test behaviors in your data and environment, including simulation when appropriate
- Review detections, preventions, and process gaps
PEAK Framework
Now that we have a hypothesis and our process defined, let’s ensure we have structure with a framework.
Cybersecurity hunting frameworks help to scope and refine our hunting processes in a structured manner with proper phases and steps to follow. Popular options include Sqrrl and TaHiTI. In this case, we will use the PEAK Framework from Splunk SURGE, which stands for:
- Prepare: PIRs, dashboarding, and hypothesis definition
- Execute: scoped hunts with clear observables and decision points
- Act: validate detections, update SOPs, and track improvements
- Knowledge: the domain expertise underpinning every phase; here, knowledge of Windows and macOS scripting used in APT38 campaigns
[For more on PEAK, see the Splunk paper: PEAK Threat Hunting Framework.]
At this point, it is worth noting that threat hunting is not only about proving a hypothesis. It should also be strategically leveraged to improve SecOps outcomes such as detection coverage, response clarity, and team readiness.
Act 2: Starting a Hunt: Using PIRs to Prioritize CTI
Let’s bring the structure and process described above to life through a concrete example: an end‑to‑end hunt for threats targeting the Singapore education sector, powered by OpenCTI, Splunk ESCU, and OpenAEV.
Defining Priority Intelligence Requirements (PIRs)
To start hunting, it is key to prioritize intelligence with Priority Intelligence Requirements (PIRs).
Let’s define the following PIR in the OpenCTI PIR manager:
“Threats targeting the education sector in Singapore in the last 90 days.”

OpenCTI immediately pivots this requirement into a multi-dimensional intelligence view: threat maps, trending intrusion sets, campaign timelines, and victimology summaries.
We can use this to drill down into our initial PIR for specific information and visuals.

Threat Discovery
In this case, the “Sapphire Sleet” intrusion set clearly stands out on the Threat Map as a high-priority threat, based on regional alignment, high relevance, and recent activity.

Let’s focus on Sapphire Sleet as our focal threat actor.
Identifying CTI from PIR for Threat Hunting
Moving back to the PIR view in OpenCTI, we can jump to granular details about this specific “Sapphire Sleet” threat actor. This includes:
- Threat aliases (BlueNoroff/APT38)
- Targeted geographies (including Singapore)
- Known campaigns
- Associated malware
- ATT&CK techniques
- References and reporting

Alongside a range of other contextual details, we can learn that Sapphire Sleet is associated with APT38/BlueNoroff in public reporting. Victimology includes Singapore and university targets — which aligns well with our initial PIR.

To scope out this hunt, we can examine behaviors by drilling down into different attack patterns and malware.

Scrolling through the campaign and TTP views reveals high volumes of historical activity. To avoid running an unfocused hunt, we narrow the scope to recent campaigns only.
Act 3: Setting up a CTI-driven operational cockpit
Contextual Threat Hunting Dashboard
OpenCTI provides pre-built dashboards, as well as the ability to configure your own. In this case, let’s build a custom dashboard around Sapphire Sleet (APT38/BlueNoroff) to support our specific hypothesis creation and scoping.
Let’s give our dashboard four sections for both reporting and actionable insights: Overview and High‑Level Indicators, Threat Overview, Recent Activities, and Suggested Mitigations and Detections.
Overview and High‑Level Indicators
Gain insight into the dashboard’s purpose and view high‑level indicators such as counts of malware, reports, campaigns, attack patterns, and relevant SIEM detections (Splunk ESCU).
This helps decide whether to scope around specific malware, reports, campaigns, or attack patterns.

Threat Overview
Highlights the top TTPs and malware used by Sapphire Sleet, along with a map of targeted countries. This can be used to prioritize hypotheses by technique or malware, filtered by region.

Recent Activities
Lists all campaigns and reports from ingested intelligence.
Use this to decide whether to scope on the overall intrusion set or focus on a recent campaign/report with richer context.

Suggested Mitigations and Detections
This section shows course of action (COA) objects with the most ATT&CK coverage and the most recent COAs by attack pattern. Here, COAs represent Splunk ESCU detections imported from Splunk Enterprise Security. Treat these as starting points that must be validated in your environment.

Ingesting Splunk ESCU into OpenCTI and Mapping to ATT&CK Patterns
To operationalize CTI, we integrate Splunk ESCU detections into OpenCTI as COAs.
Note that to do so, we require the following prerequisites:
- Access to Splunk Enterprise Security and the DA-ESS-ContentUpdate app (ESCU)
- Permission to run REST searches
- OpenCTI access and CSV Mapper capability
Export ESCU Detections via REST
We can export the ESCU content from Splunk ES, focusing on the DA-ESS-ContentUpdate app (ESCU detections only). Key fields include detection title, description, SPL, ATT&CK technique ID, and technique name.

Here is an example SPL to extract relevant fields:
| rest /services/saved/searches splunk_server=local
| search eai:acl.app="DA-ESS-ContentUpdate" is_visible=1 disabled=0
| table title description search annotations.mitre_technique_id annotations.mitre_technique
A sample output would contain the following columns: title, description, search, mitre_technique_id, mitre_technique.
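The REST export can also be scripted end to end. The sketch below is a minimal, hypothetical example using only the Python standard library: the base URL, token, and output path are placeholders, and the annotation field names mirror the SPL above (adjust them to your ESCU version).

```python
import csv
import json
import urllib.request

def escu_rows(entries):
    """Filter /services/saved/searches entries down to enabled, visible
    ESCU detections and extract the fields used for the CSV import."""
    rows = []
    for entry in entries:
        content = entry.get("content", {})
        if entry.get("acl", {}).get("app") != "DA-ESS-ContentUpdate":
            continue
        if content.get("disabled") or not content.get("is_visible", True):
            continue
        rows.append({
            "title": entry.get("name", ""),
            "description": content.get("description", ""),
            "search": content.get("search", ""),
            # Annotation keys mirror the SPL fields above; adjust per ESCU version.
            "mitre_technique_id": content.get("annotations.mitre_technique_id", ""),
            "mitre_technique": content.get("annotations.mitre_technique", ""),
        })
    return rows

def export_escu(base_url, token, out_path):
    """Pull saved searches over the Splunk management port and write the
    ESCU subset to a CSV ready for the OpenCTI CSV mapper."""
    req = urllib.request.Request(
        f"{base_url}/services/saved/searches?output_mode=json&count=0",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        entries = json.load(resp)["entry"]
    rows = escu_rows(entries)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]) if rows else [])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

A call such as `export_escu("https://splunk.example.com:8089", token, "escu.csv")` would produce the CSV consumed in the next step.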

Map detections to ATT&CK IDs
For simplicity, the detections are exported as CSV and ingested into OpenCTI via a configured CSV mapper.

Mapping logic:
- Create each detection as a STIX 2.1 Course of Action (COA) with name and description+search content
- Map mitre_technique_id and mitre_technique to ATT&CK Attack Pattern SDOs
- Create COA → Attack Pattern relationships of type “mitigates”
- Label all imported detections with “siem-splunk-escu-detection” for easy filtering and deduplication
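The mapping logic can be sketched as a small transform from exported rows to STIX 2.1 objects. This is a hand-rolled illustration using plain dicts (no stix2 dependency), and `make_coa`/`make_mitigates` are hypothetical helpers; in practice the OpenCTI CSV mapper performs this step.

```python
import uuid
from datetime import datetime, timezone

LABEL = "siem-splunk-escu-detection"

def _now():
    # STIX 2.1 timestamp format with millisecond precision.
    return datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

def make_coa(row):
    """Build a STIX 2.1 Course of Action from one exported ESCU row."""
    return {
        "type": "course-of-action",
        "spec_version": "2.1",
        "id": f"course-of-action--{uuid.uuid4()}",
        "created": _now(),
        "modified": _now(),
        "name": row["title"],
        # Keep the SPL with the description so hunters can pivot straight to Splunk.
        "description": f'{row["description"]}\n\nSPL:\n{row["search"]}',
        "labels": [LABEL],
    }

def make_mitigates(coa, attack_pattern_id):
    """Relate the detection to the ATT&CK technique it covers."""
    return {
        "type": "relationship",
        "spec_version": "2.1",
        "id": f"relationship--{uuid.uuid4()}",
        "created": _now(),
        "modified": _now(),
        "relationship_type": "mitigates",
        "source_ref": coa["id"],
        "target_ref": attack_pattern_id,
    }
```

The shared label makes later filtering and deduplication of imported detections a one-line query in OpenCTI.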
Note: This ingest was manually performed for testing. It can be automated by extending the Splunk connector. (GitHub issue or repo)
Now, we can return to OpenCTI and see the resulting objects:


This transforms OpenCTI into a CTI-driven detection catalog, revealing which ESCU detections cover which ATT&CK techniques — and whether those techniques matter for our chosen threat actor.
Act 4: Executing the Hunt
Narrowing the Hunt Scope
Broad, unfocused hunts drain resources and lead to rabbit holes, so it is important to narrow the hunt to specific elements.
Returning to the Recent Activities section of our dashboard, two campaigns stand out: BlueNoroff “GhostCall” and “GhostHire”.

Inside these campaigns, a key observation appears:
Victims are socially engineered to “update Zoom,” triggering a ClickFix script that downloads ZIP-based payloads. The attack surface spans macOS and Windows.
We can now derive a crisp testable statement:
“Sapphire Sleet uses ClickFix scripts on macOS and Windows to download and execute multi-stage artifacts.”
With this in mind, we can test specifically for ClickFix behavior in OpenCTI’s campaign ATT&CK view.
There, the closest technique is Command and Scripting Interpreter → AppleScript, indicating script-based execution.

Instantly, we can see that related objects show links between AppleScript and ClickFix.

Mitigations show two relevant Splunk ESCU detections (one Windows‑focused, one macOS‑focused).

For example, “Windows PowerShell FakeCAPTCHA Clipboard Execution” notes potential FakeCAPTCHA/ClickFix clipboard hijacking and includes a reference SPL search.

Now that we have this insight, we can use these detections as a starting point.
Hunting Against Our Scope
In Splunk ESCU, much of the SPL behind the detections runs tstats against Splunk’s best-practice data models, the Common Information Model (CIM), for efficiency and resource savings.
We can use the tstats-based SPL behind the ESCU detection as our starting point. This requires proper CIM mapping, which can be achieved with Technology Add-ons (TAs) downloadable from Splunkbase for most well-known data sources.

If CIM is not in place, we can pivot to raw events while maintaining the same logic. Example Windows‑focused hunt (simplified):
index=windows (EventCode=4104 OR SourceName="PowerShell")
| search (Clipboard OR Set-Clipboard OR Get-Clipboard OR FromBase64String)
| stats count min(_time) as first_seen max(_time) as last_seen by host user ProcessName ScriptBlockText
If no hits are found, public research on ClickFix can be used to enhance the SPL searches.
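Before committing new keywords to SPL, the matching logic can be prototyped offline against exported script blocks. The sketch below is a hypothetical triage helper, not part of ESCU; the keyword list mirrors and slightly extends the raw-event search above and should be tuned against public ClickFix reporting.

```python
import re

# Keywords mirroring the raw-event SPL above, extended with common
# ClickFix indicators from public reporting (assumption, tune per report).
CLICKFIX_PATTERNS = [
    r"Set-Clipboard",
    r"Get-Clipboard",
    r"FromBase64String",
    r"mshta",
    r"-EncodedCommand",
]
_RX = re.compile("|".join(CLICKFIX_PATTERNS), re.IGNORECASE)

def score_script_block(text):
    """Return the distinct ClickFix-style keywords found in one script block."""
    return sorted({m.group(0).lower() for m in _RX.finditer(text)})

def triage(events):
    """Keep only events whose ScriptBlockText matches at least one keyword."""
    hits = []
    for ev in events:
        matched = score_script_block(ev.get("ScriptBlockText", ""))
        if matched:
            hits.append({**ev, "matched": matched})
    return hits
```

Keywords that prove noisy offline can be dropped before they ever reach the production SPL.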
In our case, no matching events were found. That does not mean the hunt failed — it just suggests either absence of activity or log source/detection coverage gaps. Both are valuable outcomes.
This brings us to the next phase.
Act 5: Identifying and Improving Gaps With Threat Hunting Outcomes
Asking the right questions
When a hypothesis cannot be confirmed, treat findings as improvement opportunities across people, process, and technology. For example, we can take a look at:
- Data sources: Which endpoints, EDR managers, or logs are missing? Plan onboarding.
- Detections: Which ESCU detections need tuning or new custom detections? Improve Detection Engineering.
- Process: Are SOPs clear for triage and incident declaration? Update runbooks and RACI.
We can also ask ourselves a key question to help bring out further outcomes:
Did the threat not occur and the hypothesis is negative? If so, how do we prepare for future attempts?
Leveraging Exposure Management with OpenAEV
In this case, rather than waiting for the real APT38 to strike, it is wiser to validate the hypothesis proactively via a simulation, enabling us to evaluate our current exposure to this threat.
Using Threat-Informed Defense, we have intelligence on BlueNoroff campaigns and ClickFix with AppleScript. From OpenAEV, we can build an attack scenario to safely simulate this threat campaign’s attack patterns and behaviors in our environment. This will allow us to validate detections, processes, and human responses to this threat.
Creating a simulation scenario
Let’s take a look at how we can design a scenario to simulate the attack in our environment using the following design:
- Objective: validate detection and response for ClickFix behaviors
- Techniques: AppleScript execution
- Assets: selected endpoints with EDR agents
- Injects: technical simulations and process drills
- Success criteria: expected Splunk notable events, analyst acknowledgments, SOP adherence
- Data collection: Splunk notable index, EDR logs, OpenAEV findings
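The success criteria in the design above can be made machine-checkable. The sketch below assumes a hypothetical results payload (notable names collected from Splunk, acknowledgment and SOP flags from the process injects); OpenAEV computes its own scoring, so this is only one way to reason about pass/fail per criterion.

```python
def evaluate_scenario(expected_notables, results):
    """Compare expected outcomes against collected results, per criterion.

    expected_notables: names of ESCU detections we expect to fire.
    results: {"notables": [...], "acknowledged": bool, "sop_followed": bool}
    (hypothetical shape for this sketch).
    """
    fired = set(results.get("notables", []))
    missing = sorted(set(expected_notables) - fired)
    return {
        "detection": not missing,         # every expected notable fired
        "missing_notables": missing,      # feed these to detection engineering
        "human_response": bool(results.get("acknowledged")),
        "process": bool(results.get("sop_followed")),
    }
```

Each failed criterion maps directly to a people, process, or technology follow-up action.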
With this design, we can easily generate a scenario from the AppleScript-related technique. We can then place all relevant injects on a timeline, representing the escalation over time.

We can add different types of injects. In this case, we’ll add:
- Technical injects for payload execution
- Process/human injects that can simulate email notifications, internal challenges, or media pressure
This enables us to test both the technical and human/process security controls for this exposure.
For the technical payload, we are simulating the AppleScript/ClickFix behavior with a controlled payload:

Then we select target assets for executions:

Note: Ensure the EDR manager forwards telemetry to Splunk and integrate Splunk as a Collector in OpenAEV so notable findings feed back into OpenAEV.
For the people-oriented injects, we can notify SOC and Detection Engineering that the scenario has begun, and ask them to validate whether Splunk Enterprise Security produced expected findings and from which detections:

Let’s also coordinate a response action and verify SOP clarity via an additional inject.

Reviewing results
OpenAEV has a wide integration library to leverage various EDRs, collectors, and injectors.
With EDR and Splunk integrated, OpenAEV reports prevention, detection, vulnerability, and human response outcomes.
For example, in the screenshot below, none of the prevention controls stopped the injected payloads, only a small percentage of detections fired on the injects, and the players involved in the tabletop exercise did not meet expectations for human response.
These results can be drilled down into exactly which injects and attack patterns succeeded or failed, and fed back into the original security coverage and threat report in OpenCTI to measure actual coverage against the CTI report in question.

With these metrics and security coverages in both OpenCTI and OpenAEV, you can identify and close gaps. Example metrics include detection coverage by technique, time to alert, and SOP adherence.
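A metric like detection coverage by technique can be derived directly from the COA → attack-pattern mapping built earlier. The sketch below assumes two hypothetical inputs: the technique IDs attributed to the actor (e.g. from the OpenCTI dashboard) and a map of techniques to the imported detections that cover them.

```python
def coverage_by_technique(actor_techniques, detection_map):
    """Percentage of an actor's ATT&CK techniques covered by >=1 detection.

    actor_techniques: iterable of technique IDs attributed to the actor.
    detection_map: {technique_id: [detection names]} from the ESCU import.
    """
    wanted = set(actor_techniques)
    covered = {t for t in wanted if detection_map.get(t)}
    gaps = sorted(wanted - covered)
    pct = round(100 * len(covered) / len(wanted), 1) if wanted else 0.0
    return {"coverage_pct": pct, "covered": sorted(covered), "gaps": gaps}
```

The `gaps` list is a ready-made backlog for detection engineering, and the percentage can be trended across hunts.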
Conclusion
In conclusion, the value of threat hunting extends beyond proving a hypothesis — it is a key process in detecting, testing, validating, and remediating potential exposures.
Using this Threat-Informed Hunting process, we were able to complete the following workflow:
- Started with PIR-driven intelligence in OpenCTI and used it as the basis for the threat-hunt hypothesis and scope.
- Leveraged Splunk ESCU detection logic as the basis to operationalize our hunt.
- Validated our threat-hunt scope via an OpenAEV scenario simulating hunt behaviors, proactively identifying people, process, and technology gaps.
- Fed OpenAEV metrics back into OpenCTI as security coverages to inform detection engineering and future threat-hunting scopes.
With Threat-Informed Hunting, you can measurably improve your threat hunts to gain valuable outcomes and in turn improve your overall security program.

Extended Threat Management for Threat Hunting
This use case provides an example of what Filigran’s XTM platform users and threat hunters can accomplish by combining threat intelligence with exposure validation capabilities — all from a single, open-source platform.
