
Operationalize ESCU Detections Featuring Onboarding Assistant

The Splunk Enterprise Security Content Update (ESCU) app is a powerful resource developed by the Splunk Threat Research Team. It provides out-of-the-box detection analytics mapped to the MITRE ATT&CK framework and tailored to various platforms such as Windows, Linux, and cloud environments. While installing ESCU is straightforward, operationalizing the content — meaning tuning, enabling, and maintaining it for real-world use — requires a few deliberate steps.

In this blog, we'll walk through the process of operationalizing ESCU content in Splunk Enterprise Security (ES), starting with setup and culminating in tuning for reduced false positives.

Getting Your Environment Ready

Before diving into ESCU content, ensure your Splunk environment is correctly configured to support these detection analytics.

✅ Required Apps and Add-ons

  1. Splunk Enterprise Security (v8.0.0 or later recommended)
  2. Splunk Common Information Model (CIM) Add-on - essential, since many ESCU detections rely on CIM-compliant data.
  3. Splunk Enterprise Security Content Update (ESCU) - install the latest version from Splunkbase.
  4. Up-to-date Technology Add-ons (TAs) for your data sources (e.g., Sysmon, Okta, AWS).

✅ Data Ingestion

Ensure that the relevant logs are being ingested and properly normalized:

  1. Validate data availability using this query to view what data lives in which index:
    | tstats count where index=* by index, source, sourcetype
    | sort - count
    
  2. Validate that data models are being populated correctly. The following query shows how the Authentication data model is populated in your Splunk environment; to check other data models, swap out the name in the query:
    | tstats count from datamodel=Authentication by nodename, sourcetype, index, host
    | sort - count
    
  3. Validate that the input macros associated with each detection point to the correct indexes so that the detections work out of the box.

    ⚠️ Input Macros

    Many of these detections are written directly against a sourcetype and ship with an input macro; make sure those input macros are configured before you rely on the detections.
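    For illustration, a local macros.conf override for an input macro might look like the sketch below. This is a hypothetical example: the macro name, sourcetype, and index are placeholders, so substitute the macro named in your detection and the index where your data actually lives. You can make the same change in the UI under Settings > Advanced Search > Search Macros.

    # local/macros.conf (hypothetical override; adjust to your environment)
    [cisco_secure_firewall]
    definition = sourcetype="cisco:sfw:estreamer" index="network_firewall"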

Exploring Analytic Stories with the Use Case Library

Once your environment is ready, the best way to start operationalizing ESCU content is via the Use Case Library in Splunk Enterprise Security.

Navigating the Use Case Library

  1. From the Splunk ES menu: Security Content > Security Use Case Library
  2. Filter by Frameworks, Data Models, or Use Case Categories (e.g., Cloud Security, Abuse, Adversary Tactics), or search by keywords like “Cisco,” “ransomware,” or “credential theft” to find relevant Analytic Stories.

Reviewing an Analytic Story

Click into a story such as Cisco Secure Firewall Threat Defense Analytics to view its detailed narrative and description, along with the analytics it includes: detection searches, the MITRE ATT&CK techniques covered, known false positives, and guidance on how to implement and tune the searches.

Test the Detection Analytic

Before deploying a detection to production, it's important to test each analytic search in your environment. To do this:

  1. Open the Use Case Library in Splunk Enterprise Security.
  2. Locate the detection under its corresponding analytic story.
  3. Run the detection manually by adjusting the time picker to cover a reasonable test window (e.g., last 24 hours or 7 days).

    ⚠️ Note:

    Each detection is annotated with implementation guidance, including the data models and input macro it relies on. Before running a detection, make sure that:

    • The relevant data models are accelerated and populated with data.
    • The input macro (e.g., `cisco_secure_firewall` ) is correctly configured to point to the right index(es) where your data lives. Read more about configuring macros.
    • You have reviewed how the search is meant to be run, including any specific time windows and whether a baseline is needed for the detection to work.
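    As a quick sanity check that a data model is accelerated and populated, you can run a minimal tstats search like the one below (Endpoint.Processes is used purely as an example; substitute whichever data model your detection lists):

    | tstats summariesonly=true count from datamodel=Endpoint.Processes

    If this returns zero while the same search with summariesonly=false returns results, the data is present but the acceleration summary has not been built yet (or is lagging behind).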

Now that you've explored a few detections, run the searches, and seen what kind of results they return in your environment, you're in a good spot to start enabling them and letting them run on a schedule.

The Onboarding Assistant and Its Tales

The Onboarding Assistant isn't just flipping switches randomly — it actually does some smart background work to figure out which detections are relevant to your environment.

Here's how it works at a high level:

  • Scans Your Data: It looks at what source types, indexes, and event volumes you have in your Splunk environment. This helps determine if your data coverage matches what ESCU detections require (e.g., XmlWinEventLog, aws:cloudtrail, etc.).
  • Checks Installed Add-ons: It reviews the Splunk apps and TAs you've installed to make sure the necessary CIM mappings are available. This is important because ESCU detections rely on those data models.
  • Reviews Available Detections: It filters down the detection library to highlight only those detections that are categorized as TTP or Anomaly - the types most likely to generate useful intermediate findings or findings in Enterprise Security.
  • Matches Detections to Your Data Sources: Using a built-in lookup, it aligns each detection with the data sources it depends on. If the logs or normalized fields aren't present in your environment, that detection won't be recommended for enablement.
  • Organizes by Analytic Story: Lastly, it pulls together all of this information and groups the content by Analytic Story, so you can enable content in logical bundles tied to specific threat scenarios or use cases.

The end result? You get a curated, context-aware list of detection rules that actually work with your environment — no guesswork needed.

This makes the Assistant an ideal tool for teams who want to get up and running fast with high-quality detections, without spending hours combing through YAML files or savedsearches.conf manually.
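If you want a rough sense of the inventory the Assistant performs, the searches below approximate the first two checks manually. This is only an illustrative sketch, not the Assistant's actual logic:

| metadata type=sourcetypes index=*
| sort - totalCount

| rest /services/apps/local splunk_server=local
| search disabled=0
| table title, version

The first search lists the sourcetypes present across your indexes and their event counts; the second lists the apps and add-ons currently installed and enabled.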

⚠️ Preview Feature

This feature is in preview. It may not identify all relevant content for your environment and might not be fully compatible with Detection Versioning.

Steps to Enable Content at Scale:

  1. Navigate to ES Content Update app > Onboarding (Preview).
  2. Select the desired Analytic Stories; the number associated with each story indicates how many detections can potentially be enabled in your environment.
  3. Check the box on the detections you want to activate as correlation searches.
  4. Wait for the detections to be enabled and configured to run on a schedule (typically every hour).

    Step 1. Select the stories you want to enable detections for; the number alongside each story indicates how many detections have relevant data in your Splunk environment.



    Step 2. Select the detections you want to enable in your environment.

    Note: To avoid consuming your entire search quota, please refrain from enabling too many detections simultaneously or enabling detections that have no data or TA available. The red X means that the required data/TA is not available.



    Step 3. Use the suggested default values for the time range and cron schedule for detection analytics and click Enable.

    It is highly recommended to use the suggested default values for the time range and cron schedule when enabling detection analytics, as these defaults are carefully chosen to align with the expected event latency and frequency of each detection. These values work in conjunction with other configurations shipped in the savedsearches.conf stanza, such as allow_skew, which is not visible or configurable on this page but plays a key role in ensuring reliable detection performance. For more details, refer to the allow_skew documentation. An illustrative savedsearches.conf stanza is sketched after these steps.

    Step 4. Make sure that you have configured the macros to point to the right indexes and that the relevant data models are accelerated.


    This process enables all the detections from an analytic story that generate either Findings or Intermediate Findings depending on the detection type.
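    For context, the scheduling settings mentioned in Step 3 live in each detection's savedsearches.conf stanza. The stanza below is an illustrative sketch only; the stanza name and values are examples, not ESCU defaults:

    # savedsearches.conf (illustrative values only)
    [ESCU - Example Detection - Rule]
    cron_schedule = 0 * * * *
    dispatch.earliest_time = -70m@m
    dispatch.latest_time = -10m@m
    allow_skew = 100%

    After enabling content, you can confirm which ESCU correlation searches are active and how they are scheduled. ESCU saved searches are typically prefixed with "ESCU - ", but verify the naming in your environment:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search title="ESCU - *" disabled=0
    | table title, cron_schedule, dispatch.earliest_time, dispatch.latest_time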

Tuning Detections with Filter Macros

After enabling detections, you'll likely encounter false positives — this is a normal part of operationalizing security content, especially at scale. As environments grow in complexity, legitimate activity may resemble malicious behavior, causing detections to trigger even when no threat is present. Tuning is not optional; it's essential for maintaining a high signal-to-noise ratio in any detection platform.

In ESCU, we use filter macros to provide a consistent and easily configurable way to refine detections without modifying core logic.

Example Detection with Filter Macro

Each detection search typically ends with a filter macro, such as:

| `access_lsass_memory_for_dump_creation_filter`

More generally, each detection's filter macro is named after the detection and ends in `_filter`.

Steps to Tune:

  1. Go to Settings > Advanced Search > Search Macros in Splunk.
  2. Locate the relevant macro by name (copied from the detection search).
  3. Edit the macro by adding filters to exclude known benign behavior (see the sketch after the tip below), such as:
    • dest != values for trusted systems or systems known to exhibit this behavior
    • user != values for known administrative accounts using seemingly malicious scripts for legitimate purposes
    • process_path != or process_name != for approved applications or scripts
  4. Save the macro.
  5. Retest the detection to ensure it now focuses on suspicious, actionable behavior.

    Tip: Use historical search results or Incident Review context to identify what needs filtering. Start conservatively and revisit filters as the environment evolves.
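    As a hypothetical sketch of what a tuned filter macro might look like (ESCU filter macros typically default to `search *`; the excluded hosts and account below are placeholders):

    # local/macros.conf (hypothetical tuning; values are placeholders)
    [access_lsass_memory_for_dump_creation_filter]
    definition = search NOT dest IN ("backup01.example.com", "edr-mgmt02.example.com") NOT user="svc_vuln_scanner"

    This is equivalent to editing the macro definition in the UI as described in the steps above.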

This approach lets you adapt out-of-the-box content to your unique environment without forking or rewriting the SPL. Macros remain useful for tuning even in ES 8+ with detection versioning: although improved version reconciliation means content updates won't always override your custom changes, macros still offer a simpler path for customization, especially for users less familiar with SPL.

Investigate in Risk and Notable Indexes

Once detections are live and tuned:

  • Investigate results in index=notable and index=risk
  • Use dashboards like Incident Review, Risk Analysis, or custom risk-based dashboards
  • Prioritize tuning based on volume and severity of alerts

Sample Risk data model query to explore risk:

| from datamodel Risk.All_Risk | stats count by risk_object, risk_score, source
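Similarly, to see which enabled detections are generating the most findings in the notable index (field names such as search_name and urgency are typical for Enterprise Security, but verify them in your environment):

index=notable
| stats count by search_name, urgency
| sort - count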


Conclusion

Operationalizing ESCU content isn't just about installation — it's about making detection analytics actionable, relevant, and low-noise for your environment. By leveraging the Use Case Library, the Onboarding Assistant, enabling detections at scale, and then tuning them with filter macros, you can build a threat detection system that is both powerful and manageable.

Keep in mind, there is no single process or tool that works for every security operation program. Each environment has unique data sources, priorities, and tolerance for noise. If you’re new to deploying Splunk detections in Enterprise Security, we recommend giving the Onboarding Assistant a try — it provides a structured, scalable way to start activating detections and building confidence in your security operations journey.
