
How does SecOps feel about AI? Part 2: Data protection

AI generates a lot of feelings. Some believe it is the next in a long line of fads, soon to join the ranks of NFTs and 3D TVs. Others are building bunkers in preparation for self-aware, malevolent AGI overlords. Amid all the hyperbole, one reality can be stated with certainty: AI is connected to a lot of data.

There's a lot of hype around AI that gets talking heads excited, scared, or skeptical, but at F5, we're interested in how everyday practitioners feel about it. To understand the reality of current challenges and concerns, we conducted a comprehensive sentiment analysis of the Internet's largest community of security professionals, Reddit's r/cybersecurity. Shawn Wormke's Part 1 blog, "How does SecOps feel about AI?", summarized the study's overall findings. All quotes come directly from security practitioner comments posted between July 2024 and June 2025. Here, we take a deeper dive into the top AI-related concern of the year: data security.
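For illustration only, and not the actual methodology behind this study, a first pass at tagging exported community comments by theme might look something like the keyword-based sketch below. The theme names, keywords, and sample comments are hypothetical:

    # Illustrative sketch only: a simple keyword-based tagger for practitioner comments.
    from collections import Counter

    THEMES = {
        "data_security": ["sensitive", "leak", "exposure", "prompt"],
        "shadow_ai": ["shadow ai", "unsanctioned", "wrapper", "whack"],
        "compliance": ["gdpr", "eu ai act", "audit", "regulation"],
    }

    def tag_comment(text):
        """Return every theme whose keywords appear in the comment text."""
        lowered = text.lower()
        return [theme for theme, words in THEMES.items()
                if any(word in lowered for word in words)]

    comments = [
        "Everyone's dropping sensitive info into prompts.",
        "Blocking LLM domains is whack-a-mole; users just find another wrapper.",
    ]

    theme_counts = Counter(t for c in comments for t in tag_comment(c))
    print(theme_counts.most_common())

A real analysis would obviously go well beyond keyword matching, but even a toy pass like this shows how themes such as data security and shadow AI can be counted and ranked across thousands of comments.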

Data security entered 2025 as the top AI-related concern, and January's DeepSeek attack only accelerated that trend.

Concerns surfaced as sensitive disclosures, shadow AI, and compliance

Many envision an AI threat landscape of bad guys leveraging AI to execute intricate social engineering attacks and unleash hordes of intelligent bots. Those threats are legitimate, but security professionals paint a picture of threats that are significantly more naïve, just as detrimental, and far more widespread. In fact, SecOps concerns about internal AI misuse surfaced 2.3x more frequently than concerns about malicious abuse.

This cuts to the heart of the first issue: sensitive disclosures. As one practitioner framed it succinctly, "Let's be real, everyone's using LLMs at work and dropping all kinds of sensitive info into prompts." As models gain larger context windows and more file types become usable with retrieval-augmented generation (RAG), employees have learned that the quickest path to an informed output is giving the LLM all the information it might need. That stands in direct contradiction to the principle of least privilege, an essential pillar of zero trust. Simply put, "there is always tension between security and capability."
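A minimal sketch of the kind of guardrail that tension calls for is redacting obviously sensitive patterns before a prompt ever leaves the organization. The regex patterns and the send_to_llm() stub below are illustrative assumptions, not a description of any specific product:

    # Minimal sketch of a pre-prompt redaction step. The patterns and the
    # send_to_llm() stub are illustrative assumptions only.
    import re

    PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    }

    def redact(prompt):
        """Replace matches of each known sensitive pattern with a placeholder."""
        for label, pattern in PATTERNS.items():
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
        return prompt

    def send_to_llm(prompt):
        # Stand-in for whatever model endpoint the organization has sanctioned.
        print("Sending:", prompt)

    send_to_llm(redact("Summarize the contract for jane.doe@example.com, key sk-abcdef1234567890abcd"))

Pattern matching alone will never catch every disclosure, which is exactly why the tension between security and capability persists.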

Traditional strategies for policy enforcement are not working

The logical step most organizations take to secure AI is an acceptable use policy (AUP). Strategies vary widely, but the consensus is that traditional deterrents and restriction methods are insufficient.

As one user describes, traditional tools like web application firewalls (WAFs) and DNS filtering merely delay the inevitable: "By blocking them you're essentially forcing your data into these free services. It will always be a game of whack a mole dealing with blacklisting." This introduces one of the most discussed challenges of the past year: shadow AI. New models are released daily, and wrappers of those models are vibe-coded hourly. Users will always find ways around policies they see as roadblocks to getting their jobs done.
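The whack-a-mole dynamic is easy to see in a sketch: a blocklist only catches services it already knows about, while a brand-new wrapper domain sails through. The domains below are made up for illustration:

    # Illustrative sketch of why domain blocklisting is "whack-a-mole":
    # the list only catches services it already knows about.
    BLOCKED_AI_DOMAINS = {"chat.example-llm.com", "free-gpt-wrapper.app"}

    def is_blocked(hostname):
        """Return True only if the hostname is already on the blocklist."""
        return hostname.lower() in BLOCKED_AI_DOMAINS

    for host in ("chat.example-llm.com", "brand-new-llm-wrapper.io"):
        verdict = "blocked" if is_blocked(host) else "allowed (slips through)"
        print(f"{host}: {verdict}")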

These top two concerns, shadow AI and sensitive data disclosures, combine to create a worst-case environment for security teams: rampant exposure with zero visibility. Users might turn to mainstream LLMs to cut down on reading time, possibly uploading confidential documents in the process. With a solution that surfaces shadow AI, the SecOps team could observe those interactions and choose among multiple risk mitigations: watch that individual's future interactions more closely, or gate critical resources until behaviors change. Without such a solution, traditional countermeasures like firewalls and DNS blocking merely push users toward obscure wrappers of the same foundation models, erasing visibility into the form, fashion, and location of risky behavior.
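To make the visibility point concrete, here is a rough sketch of flagging risky interactions from whatever AI usage telemetry an organization can collect. The event fields and the sanctioned-service list are assumptions for illustration, not a specific product capability:

    # Sketch of closing the visibility gap: given interaction logs, flag uploads
    # and use of unsanctioned AI services. Log format and names are assumptions.
    from dataclasses import dataclass
    from typing import Optional

    SANCTIONED_SERVICES = {"approved-enterprise-llm"}

    @dataclass
    class AIInteraction:
        user: str
        service: str
        uploaded_file: Optional[str] = None

    def is_risky(event):
        """Flag any file upload, or any use of a service outside the sanctioned list."""
        return event.service not in SANCTIONED_SERVICES or event.uploaded_file is not None

    events = [
        AIInteraction("alice", "approved-enterprise-llm"),
        AIInteraction("bob", "random-llm-wrapper", uploaded_file="q3_financials.pdf"),
    ]
    for event in events:
        if is_risky(event):
            print(f"Review: {event.user} used {event.service}, uploaded {event.uploaded_file}")

The hard part, of course, is collecting that telemetry in the first place, which is precisely what blocking-only strategies give up.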

Compliance is where all exposures culminate

With mounting compliance standards like the EU AI Act and the General Data Protection Regulation (GDPR) layered atop existing industry-specific regulations, organizations without proper AI data governance risk punitive fines, legal liability, and erosion of public trust.

Security professionals have seen their share of technologies for which enthusiasm and the desire for competitive parity outpaced security considerations. Cloud computing followed a path much like the one AI is on today: rapid adoption and anticipation of exciting new possibilities, followed by widespread misconfigurations, excessive access, and failures of the shared responsibility model. Sound familiar? The key difference is that the cloud had a far smaller pool of parties capable of contributing to the overall risk. The new frontier of AI security expands the immediate focus beyond cloud architects and engineers to anyone with access to sensitive data, including the models themselves.

Practitioners understand the assignment

There has never been a technology that did not introduce some level of risk, and never one wherein the world collectively said, "Too risky, let's all stop immediately." Security practitioners understand they have an important and challenging road ahead of them.

Ensuring that AI interactions with data have effective guardrails and continuous observability is a challenging endeavor, but a necessity if AI adoption continues at its current pace.

F5 is already taking significant action to address these challenges, and we will continue to rely on SecOps voices to steer our priorities. Learn more here.
