- 95% of organizations are using GenAI tools in some guise
- 89% consider GenAI tools to be a potential security risk
- 23% of those using GenAI tools admit to having no monitoring in place
- IT teams, not general employees, emerge as the overwhelming force behind usage
New research from Zscaler, Inc. (NASDAQ: ZS), the leader in cloud security, suggests that organizations are feeling pressure to rush into generative AI (GenAI) tool adoption despite significant security concerns. According to its latest survey of more than 900 global IT decision makers, "All eyes on securing GenAI," although 89% of organizations consider GenAI tools like ChatGPT to be a potential security risk, 95% are already using them in some guise within their businesses.
More worryingly still, 23% of this user group aren't monitoring usage at all, and 33% have yet to implement any additional GenAI-related security measures, though many have them on their roadmap. The situation is particularly pronounced among smaller businesses (500-999 employees): the same proportion are using GenAI tools (95%), yet an even higher share, 94%, recognize the risk of doing so.