Generative AI presents organizations of all sizes with opportunities to increase efficiency and drive innovation. With this opportunity comes a new set of cybersecurity requirements, particularly focused on data, that has begun to reshape the responsibilities of data security teams. The 2024 Microsoft Data Security Index focuses on key statistics and actionable insights to help you secure the data used and referenced by your generative AI applications.
84% of surveyed organizations want to feel more confident about managing and discovering data input into AI apps and tools. This report includes research that provides actionable, industry-agnostic insights and guidance to better secure the data used by your generative AI applications.
Microsoft Data Security Index
Gain deeper insights about generative AI and its impact on data security.
In 2023, we commissioned our first independent research study, which surveyed more than 800 data security professionals to help business leaders develop their data security strategies. This year, we expanded the survey to 1,300 security professionals to uncover new learnings on data security and AI practices.
Some of the top-level insights from our expanded research are:
- The data security landscape remains fractured across traditional and new risks caused by AI.
- User adoption of generative AI increases the risk and exposure of sensitive data.
- Decision-makers are optimistic about AI's potential to boost their data security effectiveness.
The data security landscape remains fractured across traditional and new risks
On average, organizations are juggling 12 different data security solutions, creating complexity that increases their vulnerability. This is especially true for the largest organizations: on average, medium enterprises use 9 tools, large enterprises use 11, and extra-large enterprises use 14. In addition, 21% of decision-makers cite the lack of consolidated and comprehensive visibility caused by disparate tools as their biggest challenge and risk.
Fragmented solutions make it difficult to understand data security posture, since data is isolated and disparate workflows can limit comprehensive visibility into potential risks. When tools don't integrate, data security teams must build processes to correlate data and establish a cohesive view of risks, which can lead to blind spots and make it challenging to detect and mitigate risks effectively.
As a result, the data also shows a strong correlation between the number of data security tools used and the frequency of data security incidents. In 2024, organizations using more data security tools (11 or more) experienced an average of 202 data security incidents, compared to 139 incidents for those with 10 or fewer tools.
In addition, a growing area of concern is the rise in data security incidents stemming from the use of AI applications, which nearly doubled from 27% in 2023 to 40% in 2024. Attacks involving AI apps not only expose sensitive data but also compromise the functionality of the AI systems themselves, further complicating an already fractured data security landscape.
In short, there is an increasingly urgent need for more integrated and cohesive data security strategies that can address both traditional and emerging risks linked to the use of AI tools.
Adoption of generative AI increases the risk and exposure of sensitive data
User adoption of generative AI increases the risk and exposure of sensitive data. As AI becomes more embedded in daily operations, organizations recognize the need for stronger security. 96% of companies surveyed admitted that they harbored some level of reservation about employee use of generative AI. However, 93% of companies also reported that they had taken proactive action and were at some stage of either developing or implementing new controls around employee use of generative AI.
Unauthorized AI applications can access and misuse data, leading to potential breaches. Use of these unauthorized AI applications often involves employees logging in with personal credentials or using personal devices for work-related tasks. On average, 65% of organizations admit that their employees are using unsanctioned AI apps.
Given these concerns, it is important for organizations to implement the appropriate data security controls to mitigate these risks and ensure that AI tools are used responsibly. Currently, 43% of companies are focused on preventing sensitive data from being uploaded into AI apps, while another 42% are logging all activities and content within these apps for potential investigations or incident response. Similarly, 42% are blocking user access to unauthorized tools, and an equal share are investing in employee training on secure AI use.
To implement the appropriate data security controls, customers need to increase visibility into their AI application usage as well as the data flowing through those applications. In addition, they need a way to assess the risk levels of emerging generative AI applications and to apply conditional access policies to those applications based on a user's risk level.
Finally, they need to be able to access audit logs and generate reports that help them assess their overall risk levels and provide transparency and reporting for regulatory compliance.
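To make the pattern concrete, here is a minimal sketch of risk-based conditional access for AI apps. The app catalog, risk tiers, and decision values are illustrative assumptions, not any vendor's API; real platforms evaluate far richer signals, but the core logic is the same: combine the app's assessed risk with the user's risk level to decide whether to allow, allow with logging, or block.

```python
from dataclasses import dataclass

# Hypothetical app risk catalog; in practice a cloud access security
# broker or similar service maintains and updates these assessments.
APP_RISK = {
    "sanctioned-assistant": "low",
    "unreviewed-chatbot": "high",
}

@dataclass
class AccessRequest:
    user_risk: str  # "low", "medium", or "high", e.g. from identity signals
    app_name: str

def decide(request: AccessRequest) -> str:
    """Return an access decision: 'allow', 'allow-with-logging', or 'block'."""
    # Unknown or unreviewed apps default to high risk.
    app_risk = APP_RISK.get(request.app_name, "high")
    if app_risk == "high" or request.user_risk == "high":
        return "block"
    if app_risk == "medium" or request.user_risk == "medium":
        # Permit use, but capture session activity for audit and investigation.
        return "allow-with-logging"
    return "allow"
```

The "deny by default" lookup for unknown apps mirrors the visibility point above: you cannot apply a meaningful policy to an app you have not discovered and assessed.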
AI's potential to boost data security effectiveness
Traditional data security measures often struggle to keep up with the sheer volume of data generated in today's digital landscape. AI, however, can sift through this data, identifying patterns and anomalies that might indicate a security threat. Regardless of where they are in their generative AI adoption journeys, organizations that have implemented AI-enabled data security solutions typically gain both increased visibility across their digital estates and increased capacity to process and analyze incidents as they are detected.
77% of organizations believe that AI will accelerate their ability to discover unprotected sensitive data, detect anomalous activity, and automatically protect at-risk data. 76% believe AI will improve the accuracy of their data security strategies, and an overwhelming 93% are at least planning to use AI for data security.
Organizations already using AI as part of their data security operations also report fewer alerts. On average, organizations using AI security tools receive 47 alerts per day, compared to an average of 79 alerts among those that have yet to implement comparable AI solutions.
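The anomaly detection described above can be illustrated with a toy statistical baseline. This sketch flags days whose event volume deviates sharply from the series mean; the function name, data, and threshold are assumptions for illustration, and production AI-enabled tools apply far more sophisticated models over many more signals.

```python
import statistics

def anomalous_days(daily_event_counts: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of days whose event count deviates from the mean
    by more than `threshold` standard deviations -- a toy stand-in for
    the statistical baselining that AI-enabled security tools perform."""
    mean = statistics.mean(daily_event_counts)
    stdev = statistics.pstdev(daily_event_counts)
    if stdev == 0:
        # A perfectly flat baseline has nothing anomalous in it.
        return []
    return [
        i for i, count in enumerate(daily_event_counts)
        if abs(count - mean) / stdev > threshold
    ]
```

Surfacing only statistically unusual days, rather than every raw event, is one simplified way such tooling can reduce daily alert volume for analysts.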
AI's ability to analyze vast amounts of data, detect anomalies, and respond to threats in real time offers a promising avenue for strengthening data security. This optimism is also driving investments in AI-powered data security solutions, which are expected to play a pivotal role in future security strategies.
Looking ahead, customers are seeking ways to streamline how they discover and label sensitive data, generate more effective and accurate alerts, simplify investigations, receive recommendations to better secure their data environments, and ultimately reduce the number of data security incidents.
Final thoughts
So, what can be made of this new generative AI revolution, especially as it pertains to data security? For those beginning their adoption roadmap or seeking ways to improve, here are three broadly applicable recommendations:
- Hedge against data security incidents by adopting an integrated platform.
- Adopt controls for employee use of generative AI that won't impact productivity.
- Uplevel your data security strategy with help from AI.
Gain deeper insights about generative AI and its impact on data security by exploring Data Security Index: Trends, insights, and strategies to keep your data secure and navigate generative AI. There you'll also find in-depth sentiment analysis from participating data security professionals, providing even more insight into common thought processes around generative AI adoption. For further reading, you can also check out the Data Security as a Foundation for Secure AI Adoption white paper.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.