
AI safety assurance in enterprises remains questionable, with just 44% sure of secure AI operation, according to Delinea survey.

A majority of corporations claim their AI operations are securely protected, yet only a quarter can substantiate that claim.

AI safety during operation assured in only 44% of companies

In the rapidly evolving world of artificial intelligence (AI), companies are increasingly adapting their security controls to address new risks, according to a comprehensive study conducted by PwC Germany. The study, announced on September 5, 2025, introduces the "Assurance for AI" service, offering structured auditing and governance frameworks for AI systems to ensure transparency, responsibility, and auditability.

The research reveals several significant AI-based security risks that companies need to address. Opaque AI access processes and unauthorized access to agentic AI systems are identified as major concerns. The lack of oversight in AI usage makes companies vulnerable to unauthorized AI tool use and poorly managed AI systems.

The study found that over half (60%) of companies use generative or agentic AI, with 40% of these companies using AI agents to improve security operations. However, only 55% of companies use access controls for AI agents, leaving a substantial gap in security measures. Similarly, only 57% of companies have acceptable usage policies for AI tools.
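The access-control gap described above can be illustrated with a minimal sketch: before an AI agent invokes a tool, a policy check confirms the agent holds an explicit grant for that action, and everything else is denied by default. The permission table, agent names, and function below are hypothetical, for illustration only; they do not come from the study or any specific product.

```python
# Hypothetical allow-list policy for AI agents: each agent may invoke
# only the tools it has been explicitly granted (default deny).
AGENT_PERMISSIONS = {
    "support-bot": {"search_kb", "create_ticket"},
    "reporting-agent": {"run_report"},
}

def is_authorized(agent_id: str, tool: str) -> bool:
    """Return True only if the agent has an explicit grant for the tool."""
    return tool in AGENT_PERMISSIONS.get(agent_id, set())

print(is_authorized("support-bot", "create_ticket"))  # True
print(is_authorized("support-bot", "run_report"))     # False
print(is_authorized("unknown-agent", "search_kb"))    # False: default deny
```

The key design choice in this sketch is default deny: an agent absent from the table has no permissions at all, rather than inheriting some implicit baseline.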

The report also highlights the issue of shadow AI, with a third (32%) of respondents saying they encounter shadow AI issues more frequently than before. Shadow AI refers to AI systems deployed without the knowledge or approval of IT departments, creating potential security risks and inefficiencies. Over half (56%) of respondents report facing shadow AI issues at least monthly.
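One common way shadow AI surfaces in practice is by comparing AI service domains observed in network or proxy logs against an approved list and flagging the rest. The domains and the approved list in this sketch are hypothetical examples, not data from the report.

```python
# Hypothetical example: flag AI service domains seen in proxy logs
# that are not on the company's approved list.
APPROVED_AI_DOMAINS = {"api.openai.com"}

observed_domains = [
    "api.openai.com",
    "some-unvetted-ai.example",  # hypothetical unapproved service
    "api.openai.com",
]

# Deduplicate and keep only domains missing from the approved list.
shadow = sorted({d for d in observed_domains if d not in APPROVED_AI_DOMAINS})
print(shadow)  # ['some-unvetted-ai.example']
```

Such a check only catches traffic that passes monitored egress points; tools used on unmanaged devices would still go unnoticed, which is part of why shadow AI is hard to eliminate.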

AI-assisted theft of login credentials is another identified risk, underscoring the need for robust identity governance as AI transforms IT and security processes. Nearly half (47%) of companies are confident that their planned security initiatives will make AI use safer within the next two years.

As companies continue to integrate AI into their operations, establishing a comprehensive AI governance model is crucial for safely and effectively using AI technology. PwC Germany's Assurance for AI service aims to provide a solution to this challenge, offering companies a structured approach to managing their AI systems and mitigating the associated risks.
