F5 Inc.


How Does SecOps Feel About AI?

With new technology, it's easy to get excited. Everyone remembers their first LLM prompt, their first time using a web browser, the crazy idea that their data now lives in a "cloud". But behind every new innovation is a groaning team of security professionals tasked with figuring out how the rest of us can use these things safely. It's easy to get caught up in headlines and future possibilities, of which AI has plenty, but as we define the role F5 will play in AI security, we thought it was time to take a step back and ask: how does the security community feel about AI?

Because we obsess over our customers here at F5, we analyzed every AI-related comment from the past year (July 2024 - June 2025) across the top users in the internet's largest community of security professionals, Reddit's r/cybersecurity. We classified each comment and user into sentiment buckets and extracted the expressed or underlying pain points from each.
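
For the curious, here's a minimal sketch of that kind of sentiment bucketing. It's not our production pipeline: the keyword heuristic is a toy stand-in for the actual classifier, and the sample comments are drawn from the quotes below.

```python
from collections import Counter

# Sentiment buckets from the analysis. The keyword rules below are a
# toy stand-in for a real classifier, kept only so the sketch runs.
BUCKETS = ("optimistic", "skeptical", "fearful")

def classify(comment: str) -> str:
    """Assign a comment to a sentiment bucket (illustrative heuristic)."""
    text = comment.lower()
    if any(w in text for w in ("amazed", "helper", "here to stay")):
        return "optimistic"
    if any(w in text for w in ("horrible", "attack", "breach", "fear")):
        return "fearful"
    return "skeptical"

comments = [
    "I'm amazed at how proficient it is in recognizing new intrusion patterns.",
    "AI is here to stay, use it as your supplement, as your helper.",
    "Applications we have very little understanding of, built on top of "
    "applications we already have done a horrible job securing for decades",
]

counts = Counter(classify(c) for c in comments)
for bucket in BUCKETS:
    print(f"{bucket}: {counts[bucket] / len(comments):.0%}")
```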

SecOps is split 50/50 on AI

48% of practitioners feel optimistic and are already using AI in their stacks, while the rest remain skeptical or fearful of what the future holds.

Half of SecOps is optimistic about AI. They're deploying AI-powered tools in their stacks, automating repetitive tasks, and using AI assistants to prioritize alerts.

"I'm amazed at how proficient it is in recognizing new intrusion patterns."

"AI is here to stay, use it as your supplement, as your helper."

The other half is split between fear, rooted in the challenging task of securing AI systems, and skepticism that AI will progress much more than it already has.

"AI = Applications we have very little understanding of, built ontop of applications we already have done a horrible job securing for decades"

Shadow AI and Data Security

The tension between AI's data demands and cybersecurity's guiding principles has placed data security at the center of debate. It started the year as security professionals' top concern and has only intensified as businesses accelerate their AI adoption.

"What makes AI effective is data. The more data it has access to the more effective it is. This model is in direct contradiction to securing access to data through access controls, network segmentation, and principle of least privilege. Anyone adopting AI at scale has to effectively remove many security controls that would prevent AI model access to data. The current choices for business are do they want effective AI OR do they want security."

The most prominent pain point in this domain is shadow AI: unauthorized user interaction with AI systems. The issue persists regardless of how early or mature an organization is in its AI adoption.
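
As a rough illustration of how teams go looking for shadow AI, here's a sketch that flags proxy traffic to known AI endpoints from users outside an approved list. The log fields, domain list, and approved users are assumptions for the example, not any particular product's schema.

```python
# Flag proxy log entries where an unapproved user reaches a known AI
# endpoint. Real deployments would pull from a SIEM and maintain a far
# larger domain list; everything here is illustrative.
KNOWN_AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}
APPROVED_AI_USERS = {"alice"}  # users cleared for sanctioned AI tools

proxy_logs = [
    {"user": "alice", "host": "chat.openai.com"},  # sanctioned use
    {"user": "bob", "host": "claude.ai"},          # shadow AI
    {"user": "bob", "host": "example.com"},        # unrelated traffic
]

for entry in proxy_logs:
    if entry["host"] in KNOWN_AI_DOMAINS and entry["user"] not in APPROVED_AI_USERS:
        print(f"shadow AI hit: {entry['user']} -> {entry['host']}")
```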

Observability is a Foundational Requirement, not a Feature

Data security may dominate the conversation, but observability and anomaly detection are the next strongest AI concerns across SecOps. As vendors make claims about what AI can do to support security workflows, practitioners stress the need for balance: "There is only so much AI Security agents can do, and always with a human in the loop." One analyst shared how they used AI to automate L1 triage of EDR alerts, cutting mean time to triage (MTTT) from 45 minutes to under two, but added the caveat, "this didn't come without guardrails, guardrails, and more guardrails." The ask is consistent: build continuous visibility and traceability across AI interactions, automate the repetitive work, and keep strategic judgment calls with humans.
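
To make that pattern concrete, here's a minimal sketch of guardrailed triage in the same spirit: the AI may only auto-close what the guardrails explicitly allow, and everything else escalates to a human. The severity values, confidence threshold, and field names are illustrative assumptions, not the analyst's actual setup.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    severity: str         # "low" | "medium" | "high" | "critical"
    ai_verdict: str       # e.g. "benign" or "malicious"
    ai_confidence: float  # 0.0 - 1.0

# Guardrails: AI may only auto-close low-severity alerts it is highly
# confident are benign; everything else stays with a human analyst.
AUTO_CLOSE_SEVERITIES = {"low"}
MIN_CONFIDENCE = 0.95

def triage(alert: Alert) -> str:
    if (alert.severity in AUTO_CLOSE_SEVERITIES
            and alert.ai_verdict == "benign"
            and alert.ai_confidence >= MIN_CONFIDENCE):
        return "auto-closed"      # still logged for audit
    return "escalated-to-human"   # strategic judgment stays human

print(triage(Alert("A-1", "low", "benign", 0.98)))   # auto-closed
print(triage(Alert("A-2", "high", "benign", 0.99)))  # escalated-to-human
```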

Attacker Playbooks are Changing

"The best way to think about AI is to imagine augmenting your attackers with tens, dozens, or hundreds of employees…Today I'd estimate the top 5% of attackers have boosted their effectiveness by 50-300%." This isn't a theoretical threat, it's a force multiplier that raises the floor for opportunists and raises the ceiling for sophisticated threat actors. In practice, these changes are discussed in two forms: new adversarial AI techniques like prompt injection or jailbreak attacks targeting AI systems, and a malicious democratization of social engineering attacks like phishing and deepfakes. Concerns around the latter have only grown as models and agents connect into more tools and data.

Model Behavior Needs Guardrails

Some 12% of the pain points we catalogued called out model behavior and output quality as a security risk in itself. Top concerns included hallucinations, accuracy gaps, and harmful outputs, but foremost among them was privilege escalation: AI accessing data or executing tasks for which it lacks permission. This is where we see SecOps pushing for practical guardrails: content moderation tuned to business risk, policy alignment, and clear permissions for models and agents.
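
Here's a hedged sketch of what "clear permissions for models and agents" can look like in practice: a deny-by-default check in front of every agent action, so an agent can never act beyond its explicit grant. The agent name and permission strings are hypothetical.

```python
# Deny-by-default authorization for AI agent actions. Each agent
# identity carries an explicit permission set; anything not granted
# is blocked, which is the privilege-escalation case SecOps flags.
AGENT_PERMISSIONS = {
    "triage-assistant": {"read:alerts", "read:tickets"},
}

def authorize(agent: str, action: str) -> bool:
    return action in AGENT_PERMISSIONS.get(agent, set())

def execute_tool(agent: str, action: str) -> str:
    if not authorize(agent, action):
        return f"DENIED {agent} -> {action}"  # block and log for review
    return f"OK {agent} -> {action}"

print(execute_tool("triage-assistant", "read:alerts"))     # OK
print(execute_tool("triage-assistant", "delete:tickets"))  # DENIED
```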

What SecOps Wants from AI Security

Reading between the lines, and sometimes directly from them, security teams expect vendors to meet a higher standard.

  • Put data security first - DLP for AI, policy enforcement, and access controls that reflect least privilege (see the sketch after this list)
  • No "AI-washing" - back security and compliance claims with proof points, not snake oil
  • Make observability audit-ready and usable - integrate simply and effectively with existing SIEM/SOAR workflows and embed visibility across all interactions
  • Flexibility for humans in the loop - design to prioritize impact, not just speed
  • Adversarial resilience - the AI attack surface is constantly changing, and solutions need to evolve with it as new threats arise
  • Simplify policy management - align with enterprise-wide policies and regulatory frameworks without manual overhead
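
As a small example of the first ask, here's a toy DLP gate that inspects a prompt before it reaches a model and blocks on a match. The two patterns are illustrative, not a complete policy.

```python
import re

# Scan an outbound prompt for sensitive patterns before it reaches a
# model. Real DLP-for-AI policies cover far more than two regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

hits = inspect_prompt("Summarize this ticket for 123-45-6789")
print("blocked:" if hits else "allowed", hits)  # blocked: ['ssn']
```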

The mandate is clear: prioritize data protection, adapt to emerging adversarial threats, make observability across interactions foundational, and design with responsible AI governance in mind. We'll continue to share insights from real-world testing, publish practical guidance that security teams can put to work, and lean into transparency around how we reduce risk. Recalling the comment about "secure OR effective AI", we look forward to swapping that 'or' for an 'and'.

We heard the call. Here's our answer.
