Mind the Gap: Securing generative AI in the enterprise
Generative AI is transforming how we work, but left unmanaged it poses a hidden data security risk. Can you ‘Mind the Gap’ and harness its full potential? Read on to discover the dangers and how to bridge them securely.
The generative AI revolution
Generative AI is rapidly transforming the way we work, offering a myriad of benefits. From everyday tasks such as content creation and note-taking to breakthroughs in customer onboarding and even cybersecurity analysis, its potential to unlock new levels of productivity and competitive advantage is immense. However, much like the arrival of the railways in the Industrial Revolution, this powerful technology presents a new frontier fraught with hidden dangers, particularly around data security and governance.
The railways revolutionised transportation, creating a network that moved goods and people at unprecedented scale, but that progress came with challenges: managing vast traffic volumes on new infrastructure required careful planning and safety measures. Similarly, the flow of data through generative AI applications presents a new challenge for businesses, opening a potential new gap in security.
Uncontrolled data flow
Generative AI applications are data-hungry. To reap the benefits, businesses must feed them data, often vast amounts. This creates a significant challenge: ensuring that data is used appropriately and securely. A single misstep can have serious consequences, as when a Samsung employee inadvertently leaked trade secrets by entering them into a public generative AI chatbot, a stark illustration of the risks of uncontrolled data flow.
The sheer number of AI tools amplifies the problem: with hundreds of generative AI applications readily available, keeping track of employee usage is difficult. In fact, according to a recent Cisco report that polled 2,600 privacy and security professionals, more than one in four companies have banned the use of generative AI tools at work over data security concerns.
At the heart of these restrictions lies the fear of inadvertent data leaks. Employees who upload confidential information to a public generative AI platform for tasks like summarising meeting notes risk exposing sensitive data. Once data reaches these platforms, it may be retained, used to train models, and potentially surfaced to other users who query the tool. Companies such as Amazon and Microsoft, and regulators such as the Italian Data Protection Authority (GPDP), have implemented restrictions, issued warnings, or outright banned the use of generative AI.
Bridging the gap
Companies are left with a difficult decision: completely block access and miss out on potential benefits, bury their heads in the sand and hope for the best, or embrace generative AI tools while implementing robust security measures.
One solution to consider is Netskope, whose Netskope One platform empowers businesses to bridge the generative AI gap.
Here's how:
- Visibility: It provides a clear view of generative AI usage within your organisation, identifying which applications are being used, by whom, and for what purpose (queries, uploads, downloads).
- Risk Assessment: Netskope classifies generative AI applications and assigns risk ratings based on security controls, data protection practices, and access controls. These ratings allow businesses to make informed decisions about which tools to adopt.
- Policy Enforcement: Implement security policies to govern generative AI usage, blocking access to unapproved applications or restricting the type and amount of data that can be uploaded or downloaded (see the illustrative sketch after this list).
- User Coaching: Deliver real-time user coaching that guides and educates users on how to use generative AI while adhering to corporate policies and safeguards.
- Safeguards: For approved tools, establish safeguards such as content controls or designated corporate accounts to prevent sensitive data leaks.
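To make the policy-enforcement idea concrete, here is a minimal sketch of the kind of decision logic such a control might apply to an attempted upload. This is purely illustrative and is not Netskope's actual API or policy syntax; the hostnames, risk ratings, and detection patterns below are hypothetical.

```python
# Illustrative sketch only -- not Netskope's API or policy format.
# Hostnames, risk ratings, and patterns are hypothetical examples.
import re

# A hypothetical risk rating per generative AI application (by hostname)
APP_RISK = {
    "approved-genai.example.com": "low",
    "public-chatbot.example.com": "high",
}

# Hypothetical patterns a DLP engine might treat as sensitive content
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-style identifier
    re.compile(r"(?i)\bconfidential\b"),   # classification marker
]

def evaluate_upload(app_host: str, text: str) -> str:
    """Return 'allow', 'coach', or 'block' for an attempted upload."""
    risk = APP_RISK.get(app_host, "unknown")
    if risk in ("high", "unknown"):
        return "block"  # unapproved or unknown app: block outright
    if any(p.search(text) for p in SENSITIVE_PATTERNS):
        return "coach"  # approved app, risky content: warn and educate
    return "allow"

if __name__ == "__main__":
    print(evaluate_upload("public-chatbot.example.com", "meeting notes"))      # block
    print(evaluate_upload("approved-genai.example.com", "CONFIDENTIAL plan"))  # coach
```

In practice, a SASE platform applies this kind of check inline on live traffic rather than in application code, but the allow/coach/block decision flow is the same idea.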
In summary
Generative AI is a game-changer, offering vast potential for innovation across various industries. However, this power comes with a hidden danger: a security gap that needs careful management.
IT and security leaders play a crucial role in helping their organisations bridge this generative AI gap. By implementing a comprehensive security strategy that includes visibility, risk assessment, policy enforcement, and safeguards, they can ensure the secure adoption of AI tools and unlock the technology's full potential without compromising data integrity.
Just as the railways powered the Industrial Revolution, GenAI is a cornerstone of the Fourth Industrial Revolution. But to truly harness its potential, businesses must 'Mind the Gap': the security gap that threatens to derail progress. It's not just a catchphrase; it's a call to action.
This op-ed on securing generative AI is a joint effort between Netskope, a leading SASE provider, and its partner Dynamo6. Together, we provide comprehensive solutions for adopting and securing generative AI in the enterprise.
Sources
- Mashable, ‘Whoops, Samsung workers accidentally leaked trade secrets via ChatGPT’, April 2023.
- Cisco, ‘Cisco 2024 Data Privacy Benchmark Study’, January 2024.
- Deccan Herald, ‘Explained | Why Amazon is restricting its employees from using generative AI tools like ChatGPT’, February 2024.
- Times Now, ‘Microsoft Blocks Employees From Using Perplexity AI Chatbot, Here's Why’, April 2024.
- GPDP Italia, ‘Artificial intelligence: the Guarantor blocks ChatGPT. Illicit collection of personal data; absence of systems for verifying the age of minors’, March 2023.
- World Economic Forum, ‘The Fourth Industrial Revolution: what it means and how to respond’, January 2016.