
Cisco Secure Firewall: SnortML at Black Hat USA 2025

Additional Contributors: David Keller

The technical training sessions at Black Hat USA offer a unique monitoring opportunity, as they give attendees hands-on practice attempting new attacks. Many training sessions use a cloud resource owned by the trainer, which attendees connect to directly from the training room. The result is a traffic path in which end users connect to a wireless access point (AP) and their traffic is routed along an inspected path out to the cloud. Our role in the SOC within the Black Hat NOC is to ensure stable connectivity for the APs and the traffic path to the internet, and to verify that the attack traffic coming out of the classrooms is destined for the approved training resources and is not launched against other targets.

We had a lot of traditional intrusion rules fire on attack training traffic, and as we saw at Black Hat Asia in Singapore, SnortML (short for Snort Machine Learning) provided another layer of detection that picked up attacks that didn't always match our traditional ruleset. Most impressively, the fidelity of SnortML alerts was very high: with over 29 TB of wireless data at the conference, we saw only two recurring false positive patterns from SnortML, while over 100 attacks were accurately identified. The full event breakdown looks like this:

  • 29.8 TB of network traffic
  • 133 SnortML events
  • Over 100 true positives
  • 21 false positives related to Chocolatey (a software management program)
  • 8 false positives related to Microsoft downloads
Fig. 1: SnortML potential threat message

As anyone who has performed analysis of intrusion events can attest, dealing with high false positive rates is one of the biggest challenges. Having an event set with such a high rate of true positives (104 of the 133 events, roughly 78%) was a huge benefit.

SnortML False Positives

What tripped up SnortML at Black Hat? The first false positive was a very long string related to a Microsoft file download.

Fig. 2: SnortML false positive

The end of the above string in a larger font:

Fig. 3: SnortML false positive, enlarged

In particular, the %3d%3d at the end (which URL-decodes to ==, a typical Base64 padding suffix) stood out as encoding that likely tripped the detection. The other string that generated false positives was related to Chocolatey (put on your reading glasses):

Fig. 4: SnortML Chocolatey false positive

Decoding the above yields the following output:

Fig. 5: SnortML Chocolatey false positive

While this isn't malicious, it has multiple characteristics that look an awful lot like SQL injection, including very generous use of single quotes. The 'tolower' command is another element the model associated with likely malicious activity.
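
For readers less familiar with why quote-heavy strings raise suspicion, here is a minimal sketch of the pattern the model is trained to worry about. It uses a throwaway in-memory database with hypothetical table and column names, not the Chocolatey string itself:

```python
import sqlite3

# Throwaway in-memory database with a single hypothetical user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

# Classic injection: the single quote closes the string literal early,
# and OR '1'='1' makes the WHERE clause match every row.
user_input = "nobody' OR '1'='1"
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(query).fetchall())      # leaks alice's row

# Parameterized query: the quotes stay inside the bound value, so the
# injected condition is never interpreted as SQL.
print(conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall())                              # returns []
```

A benign installer script full of single quotes shares the surface features of the first query, which is why it can look like injection to a model even when nothing malicious is happening.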

While both of the above are false positives, it's understandable that SnortML flagged them as malicious, particularly the Chocolatey string. Our SOC at Black Hat brought in the lead developer for SnortML to review the events so that the SnortML models can be tuned to avoid these false positives.

SnortML True Positives

SnortML currently has detection models for both SQL injection and command injection, with more models planned for future software releases. We saw many different attack permutations for these two event types at Black Hat. SnortML also proved very accurate at detecting path traversals and attempts to access sensitive files, such as /etc/passwd and /etc/hosts. The screenshot below shows the payloads from a set of SnortML events, with the alerting packets downloaded into Wireshark.

Fig. 6: SnortML event payloads

The above are true positive attacks, but they are also acceptable for the Black Hat network: all of them originated from technical training rooms and were targeted at resources owned by the trainers.

SnortML also picked up multiple flavors of command injection, ranging from students experimenting with script strings like 'hello' and 'Hacked!' to injecting commands like 'whoami' and 'ls'.

Fig. 7: Command injection, captured in SnortML
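
To show why even short strings like 'whoami' or 'ls' matter once they reach a shell, here is a minimal sketch of the vulnerable pattern and a safer equivalent. The filename parameter is hypothetical and unrelated to the training labs, and the example assumes a Unix-like system where cat and whoami exist:

```python
import subprocess

# A hypothetical value taken straight from an HTTP request parameter.
user_supplied = "report.txt; whoami"

# Vulnerable pattern: the string is handed to a shell, so the ';'
# terminates the intended command and 'whoami' runs as a second command.
subprocess.run(f"cat {user_supplied}", shell=True)

# Safer pattern: the value is passed as a single argument, so the shell
# never interprets the ';' and the whole string is treated as one
# (non-existent) filename.
subprocess.run(["cat", user_supplied])
```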

Security Tools Detected by SnortML

Given that all of Black Hat's technical trainings involved security in some way, it wasn't surprising to see multiple tools pop up, including the famous, deliberately insecure WebGoat server and a 'notsosecureapp' website devoted to teaching cybersecurity. Below is a full event screenshot showing a path traversal attempt against the notsosecureapp server.

Fig. 8: WebGoat full event

We saw a lot of events involving WebGoat, including path traversals that introduced encoding.

Fig. 9: Path traversals
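
As a generic illustration of the encoding involved (the payloads below are examples, not the captured packets), percent-encoded and double-encoded traversals resolve back to '../' sequences once decoded:

```python
from urllib.parse import unquote

# Single-encoded traversal: %2e is '.', %2f is '/'.
encoded = "%2e%2e%2f%2e%2e%2fetc%2fpasswd"
print(unquote(encoded))                   # ../../etc/passwd

# Double encoding hides the traversal behind %25 (the '%' character),
# so one decoding pass still looks relatively harmless.
double_encoded = "%252e%252e%252fetc%252fpasswd"
print(unquote(double_encoded))            # %2e%2e%2fetc%2fpasswd
print(unquote(unquote(double_encoded)))   # ../etc/passwd
```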

And attempts to traverse sensitive Windows files.

Fig. 10: Attempts to traverse sensitive Windows files

The above decodes to the following:

Fig. 11: Decoded attempts to traverse sensitive Windows files

Other WebGoat attacks included attempts to insert scripts using basic command injection.

Fig. 12: Attempts to insert scripts using basic command injection

The above decodes to a simple command injection that causes an alert popup.

Fig. 13: Script insertions, decoded
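
For context on how a reflected payload ends up as an alert popup, here is a minimal, intentionally vulnerable sketch. The Flask route and parameter name are hypothetical and are not how WebGoat itself is implemented:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/greet")
def greet():
    # Vulnerable: the 'name' parameter is echoed back unescaped, so a
    # request like /greet?name=<script>alert('Hacked!')</script>
    # returns HTML whose injected script runs as an alert popup.
    name = request.args.get("name", "")
    return f"<h1>Hello {name}</h1>"

if __name__ == "__main__":
    app.run()
```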

More advanced attacks were also captured:

Fig. 14: More advanced attacks captured by SnortML

The above decodes to:

Fig. 15: More advanced attacks, decoded

Injecting the sleep command can be an easy way to confirm a successful attack: if the injection succeeds and the sleep command isn't run by a background process, the webpage is returned only after the specified delay.
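
As a rough sketch of that timing check (the host, endpoint, and parameter below are hypothetical), an analyst could compare the response time of a clean request against one carrying an injected sleep:

```python
import time
import urllib.request

# Hypothetical endpoint whose 'host' parameter is assumed to reach a shell;
# '%3B%20sleep%205' is the URL-encoded form of '; sleep 5'.
baseline_url = "http://target.example/ping?host=127.0.0.1"
injected_url = "http://target.example/ping?host=127.0.0.1%3B%20sleep%205"

def timed_get(url):
    start = time.monotonic()
    urllib.request.urlopen(url, timeout=30).read()
    return time.monotonic() - start

# If the injected request consistently takes about five seconds longer
# than the baseline, the injected sleep almost certainly executed.
print("baseline:", timed_get(baseline_url))
print("injected:", timed_get(injected_url))
```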

SnortML also picked up multiple attempts to insert files:

Fig. 16: SnortML registering attempts to insert files
Fig. 17: SnortML registering attempts to insert files

The above decodes to:

Fig. 18: Attempted file insertion, decoded

Closing Thoughts

SnortML isn't a replacement for a robust intrusion ruleset: our traditional ruleset picked up important attacks that SnortML isn't trained to detect, including inbound attacks against public-facing Black Hat servers that attempted to exploit recent CVEs. Nonetheless, the incredible accuracy of SnortML at Black Hat USA 2025 (a true positive rate of over 75%) made it an extremely valuable, high-fidelity supplement to our traditional intrusion ruleset. We look forward to rolling out new detection models for SnortML at future conferences.

About Black Hat

Black Hat is the cybersecurity industry's most established and in-depth security event series. Founded in 1997, these annual, multi-day events provide attendees with the latest in cybersecurity research, development, and trends. Driven by the needs of the community, Black Hat events showcase content directly from the community through Briefings presentations, Trainings courses, Summits, and more. As the event series where all career levels and academic disciplines convene to collaborate, network, and discuss the cybersecurity topics that matter most to them, attendees can find Black Hat events in the United States, Canada, Europe, Middle East and Africa, and Asia. For more information, please visit the Black Hat website.

We'd love to hear what you think! Ask a question and stay connected with Cisco Security on social media.


