As AI adoption surges and cyber threats grow in sophistication, 80% of IT security leaders agree that their current security practices require transformation, according to Salesforce’s latest State of IT report. While 100% of respondents identified areas where AI agents could improve security outcomes, serious concerns remain around data readiness, regulatory compliance, and trust.
Despite the optimism, implementation barriers loom large. Nearly half (48%) say their data foundation is not ready to support agentic AI, and 55% lack confidence in their current guardrails for deploying such tools.
“Trusted AI agents are built on trusted data,” said Alice Steinglass, EVP & GM of Salesforce Platform, Integration and Automation. “Security teams must prioritize data governance to safely and effectively scale AI capabilities.”
AI Security Potential Meets Implementation Anxiety
Agentic AI—autonomous agents designed to enhance and automate tasks—holds promise to reduce manual workloads and improve threat detection. But successful deployment demands robust data infrastructure, ethical guardrails, and regulatory compliance—areas where many organizations still lag.
“Organizations in the Middle East are particularly focused on ensuring data readiness and ethical deployment of AI agents,” added Mohammed Alkhotani, SVP & GM, Salesforce Middle East.
Top Concerns: Data Poisoning, Compliance Complexity, and Trust Gaps
Security leaders now cite data poisoning—the corruption of training data by malicious actors—as a top concern alongside familiar risks like phishing and malware. As a result, 75% plan to increase security budgets in the next year.
Meanwhile, the regulatory landscape presents both opportunity and complication:
- 79% say AI agents present compliance challenges, despite 80% seeing potential for improved regulatory adherence.
- Just 47% are confident in deploying AI agents in compliance with current standards.
- A staggering 83% have not fully automated their compliance processes, leaving room for manual error.
Public Trust in AI Drops; Internal Confidence Also Wavers
Public trust is slipping: only 42% of consumers trust companies to use AI ethically, down from 58% in 2023. Internally, 57% of security leaders doubt the accuracy or explainability of their AI systems, and 60% report a lack of transparency into how their AI uses data.
Data Governance Is Crucial to Scaling Agentic AI
According to Salesforce, nearly half of all IT security leaders feel unprepared to deploy agentic AI, citing low data quality, inadequate permissions, and missing policies. Encouragingly, a recent CIO survey found that enterprises now allocate four times more budget to data infrastructure than to AI itself, signaling that they recognize data as foundational.
Agent Adoption Grows Rapidly
Today, 41% of security teams use AI agents; that number is expected to rise to 75% within two years. Anticipated benefits include:
- Enhanced threat detection
- Automated compliance auditing
- Increased operational efficiency
Yet only 47% believe their current security practices are truly ready for agentic deployment.
Case in Point: Arizona State University
ASU is among the first organizations to implement Agentforce, Salesforce’s digital labor platform. By integrating Own’s backup, recovery, and archiving tools, ASU has built a strong data governance framework that supports both compliance and AI innovation.
“Data relevancy and resilience are key to ASU’s AI transformation,” said a university spokesperson, citing the role of Salesforce’s ecosystem in helping meet regulatory and innovation demands.
As AI agents move from hype to reality, the message is clear: Trust begins with transparency, and transformation starts with data.