Organizations are either already adopting GenAI solutions, evaluating strategies for integrating these tools into their business plans, or both. To drive informed decision-making and effective planning, the availability of hard data is essential, yet such data remains surprisingly scarce.
The “Enterprise GenAI Data Security Report 2025” by LayerX delivers unprecedented insights into the practical application of AI tools in the workplace, while highlighting critical vulnerabilities. Drawing on real-world telemetry from LayerX’s enterprise clients, this report is one of the few reliable sources that details actual employee use of GenAI.
For example, it reveals that nearly 90% of enterprise AI usage happens outside the visibility of IT, exposing organizations to significant risks such as data leakage and unauthorized access.
Below, we bring some of the report’s key findings. Read the full report to refine and enhance your security strategies, leverage data-driven decision-making for risk management, and evangelize for the resources needed to strengthen GenAI data protection measures.
To register for a webinar that will cover the key findings of this report, click here.
Use of GenAI in the Enterprise Is Casual at Best (for Now)
While the GenAI hype may make it seem like the entire workforce has transitioned their office operations to GenAI, LayerX finds the actual use a tad more lukewarm. Roughly 15% of users access GenAI tools daily. This is not a percentage to be ignored, but it is not the majority.
Yet. Here at The New Stack, we concur with LayerX’s assessment and predict this trend will accelerate rapidly, especially since 50% of users currently use GenAI every other week.
In addition, they find that 39% of regular GenAI tool users are software developers. This means the highest potential for data leakage through GenAI involves source and proprietary code, along with the risk of incorporating risky code into your codebase.
How Is GenAI Being Used? Who Knows?
Since LayerX sits in the browser, the tool has visibility into the use of Shadow SaaS. This means it can see employees using tools that were not approved by the organization’s IT, or that are accessed through non-corporate accounts.
And while GenAI tools like ChatGPT are used for work purposes, nearly 72% of employees access them through their personal accounts. Among employees who do access them through corporate accounts, only about 12% of access is done with SSO. As a result, nearly 90% of GenAI usage is invisible to the organization. This leaves organizations blind to ‘shadow AI’ applications and the unsanctioned sharing of corporate information on AI tools.
50% of Pasting Activity into GenAI Includes Corporate Data
Remember the Pareto principle? In this case, while not all users use GenAI daily, the users who do paste into GenAI applications do so frequently, and with potentially confidential information.
LayerX found that pasting of corporate data occurs almost 4 times a day, on average, among users who submit data to GenAI tools. This could include business information, customer data, financial plans, source code, and more.
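To make the risk concrete, here is a minimal, hypothetical sketch of how paste activity might be screened for sensitive content. The pattern names and regexes below are illustrative assumptions only, not LayerX's actual detection logic; real DLP products use far more sophisticated classifiers.

```python
import re

# Hypothetical detection patterns for illustration only; production
# DLP engines use richer classifiers, validation, and context.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def classify_paste(text: str) -> list[str]:
    """Return the categories of sensitive data detected in pasted text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

For example, `classify_paste("contact jane.doe@acme.com re: invoice")` flags the paste as containing an email address, while plain text returns an empty list.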
How to Plan for GenAI Usage: What Enterprises Must Do Now
The findings in the report signal an urgent need for new security strategies to manage GenAI risk. Traditional security tools fail to address the modern AI-driven workplace, where applications are browser-based. They lack the ability to detect, control, and secure AI interactions at the source: the browser.
Browser-based security provides visibility into access to AI SaaS applications, unknown AI applications beyond ChatGPT, AI-enabled browser extensions, and more. This visibility can be used to deploy DLP solutions for GenAI, allowing enterprises to safely include GenAI in their plans and future-proof their business.
To access more data on how GenAI is being used, read the full report.