Blog

Fasoo Showcased Data Security for AI at Gartner Security and Risk Management Summit 2023

Fasoo protects sensitive data from being uploaded into AI LLMs

One theme at this year’s Gartner Security & Risk Management Summit was debunking myths around security.  I’m not sure if the MythBusters people will ask for a fee, but since Adam Savage was one of the keynote speakers, I think they are covered.

The opening keynote talked about a minimum effective mindset for security.  The speakers debunked widely held beliefs that more is necessarily better, such as the ideas that more analysis means better protection or that more tools equal better protection.

In fact, the opposite is true.  More analysis and more tools require more people and don’t really help you stay secure as your business grows.  There is also a huge shortage of security professionals, so you will be hard-pressed to find experts to manage an ever-growing toolkit.

Corporate environments are only getting more complex as organizations try to innovate.  Embracing AI and other advanced technologies to develop and deliver products and services faster puts more strain on security and privacy.  The need to collect and share large amounts of user and customer data also stretches security, because you have to guard against privacy violations.

By now most organizations have tried generative AI, like ChatGPT and Google Bard, to create marketing messages, generate prospecting emails, or even write software code.  Some are going further by integrating these technologies into their core business, analyzing financial portfolios, accelerating drug development, or doing market analysis.  As the technologies mature, more core capabilities will use them.

There are security challenges in using these tools, since you want to ensure your employees don’t add sensitive data to public large language models (LLMs).  Copying PII, PFI, or PHI into ChatGPT may violate privacy regulations and expose you to fines, lawsuits, or reputational damage.  If a developer or analyst copies your IP into the tool to improve their code or analysis, your data may show up in the next person’s answer.

Zero trust, cloud security, and mesh security architectures came up throughout the conference as organizations try to mitigate the risks of using AI as well as the constant increase in cyberattacks.  Generative AI lets you create far more data, and the threats to your sensitive data are growing exponentially.  Many analysts talked about Data Security Posture Management (DSPM) as organizations try to assess, control, protect, and monitor their sensitive data.  As everyone settles into the new reality of hybrid work, these discussions are ramping up.

At the Fasoo booth, a lot of people talked about the challenges of combining different technologies to address data security in the cloud, in the office, working at home, and sharing with partners and customers.  Companies are looking to consolidate capabilities with fewer tools and focus on more of a platform approach to address their needs.  A constant problem is setting different policies in many tools that still focus more on protecting the location of data rather than the data itself. 

One executive from a manufacturing company talked about how difficult it is to manage all the systems to protect identity and data in so many places.  She has one set of rules for her DLP system that alerts when sensitive documents are shared outside the company.  Her business units want to use generative AI and she is looking into enabling that, but she needs to ensure sensitive data doesn’t leave the company.  DLP doesn’t help because users are copying data from documents into ChatGPT. 

She also worries about CASB policies to manage cloud access and another set of policies for partner access to data repositories.  While each of these has its place, none of them really protects the data, since once a user has access, they can do whatever they want with it.

Challenges and Solutions of Privacy and Security Issues in the AI Era

On Monday, June 5, 2023, Jamie Holcombe, CIO at the US Patent & Trademark Office (USPTO), and Tad Mielnicki, Co-Founder of Overwatch, joined Ron Arden, Executive Vice President, CTO, and COO of Fasoo, Inc., in a discussion on how to mitigate privacy and security concerns in the AI era.  Ron framed the conversation with how much has changed since the introduction of ChatGPT in 2022.  Everyone was just getting used to coping with the hybrid workplace and changing security architectures to a zero trust posture, when this new tool and potential threat emerged.

Fasoo Data Security Platform Tackling AI Risks

A telling comment from the discussion was “AI will not replace you.  A person or company using AI will.”

Just like any new technology, we need to embrace the good and manage the bad.  Generative AI is already changing business, but we need to put guardrails in place to mitigate risks.  Jamie mentioned the USPTO already has guidelines for patent examiners on using these tools.  Generative output is largely unsupervised, so it’s important to review what the models produce.

Tad talked about preventing your sensitive data from getting into the models in the first place.  By using a discovery engine to surface sensitive data, you can encrypt it with security policies that allow users to work but limit copying into ChatGPT or other systems.  If you use an internal LLM, you can encrypt all IP and privacy-regulated data.  This ensures the data never makes it into the model, since you can prevent uploading or copying it.
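To make the idea concrete, here is a minimal sketch in Python.  It is illustrative only and uses hypothetical names (PII_PATTERNS, discover_sensitive, guard_llm_upload) rather than any actual product API; a real discovery engine and rights-management layer are far more sophisticated, but the flow is the same: find the sensitive data first, then stop it from ever reaching an external LLM.

```python
import re

# Hypothetical detection patterns for illustration only; a production discovery
# engine uses much richer detectors (dictionaries, validators, ML classifiers).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}


def discover_sensitive(text: str) -> set:
    """Return the sensitive-data categories found in the text."""
    return {name for name, pattern in PII_PATTERNS.items() if pattern.search(text)}


def guard_llm_upload(text: str) -> str:
    """Let text reach an external LLM only if no sensitive data is detected."""
    findings = discover_sensitive(text)
    if findings:
        # In a rights-management platform the document would already be encrypted
        # and the copy/upload action denied by policy; here we simply refuse.
        raise PermissionError(
            "Blocked upload: sensitive data detected (" + ", ".join(sorted(findings)) + ")"
        )
    return text


if __name__ == "__main__":
    try:
        guard_llm_upload("Customer John Doe, SSN 123-45-6789, asked about pricing.")
    except PermissionError as err:
        print(err)
```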

Ron talked about the capabilities of a true zero-trust platform that enables universal control of data at rest, in transit, and especially in use, while continuously validating that a user should have access to that data every time they use it.  This protects your sensitive data from unauthorized access.  Since you can limit the copying of the data into an AI tool, you limit exposure and liability.
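Here is an equally simplified sketch of that continuous-validation idea, again with hypothetical names (AccessRequest, POLICY, evaluate) rather than any actual product interface.  The point is that policy is re-evaluated on every action, so access granted yesterday does not imply the right to copy data into an AI tool today.

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    user_role: str        # e.g. "analyst"
    document_label: str   # e.g. "public", "confidential"
    action: str           # e.g. "view", "edit", "copy_to_external_ai"


# Hypothetical policy table: which roles may take which actions on which labels.
POLICY = {
    ("analyst", "confidential"): {"view", "edit"},
    ("analyst", "public"): {"view", "edit", "copy_to_external_ai"},
}


def evaluate(request: AccessRequest) -> bool:
    """Re-check policy on every use of the document, not just at first open."""
    allowed = POLICY.get((request.user_role, request.document_label), set())
    return request.action in allowed


if __name__ == "__main__":
    print(evaluate(AccessRequest("analyst", "confidential", "view")))                 # True
    print(evaluate(AccessRequest("analyst", "confidential", "copy_to_external_ai")))  # False
```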

Rather than focusing on pieces of a solution, the Fasoo Data Security Platform helps organizations discover, classify, manage, protect, share, audit, monitor, and analyze sensitive data.  Since the fundamental principle is to protect first by encrypting and controlling the use of the data, it removes many of the concerns of protecting every location the data travels.

Click here for a replay of the panel discussion.

The Fasoo Approach Minimizes Your Risk

During the summit, many attendees and analysts came to the Fasoo booth to understand how Fasoo’s Zero Trust Data Security can minimize the risks of AI, meet security and privacy regulations, and protect sensitive data from both internal and external threats.

One IT manager wanted an easy way to protect IP from going out the door when employees left the company and also needed to share sensitive information securely with customers.  He liked how the Fasoo Data Security Platform could help with both in one solution.

A number of visitors commented that Fasoo technology is very robust, balances security with usability, and integrates with an organization’s existing infrastructure.  A common strategy is to make the technology almost invisible to users unless they try to violate a security policy.  I remember one person saying, “I was a little skeptical during your presentation, but convinced once I saw it in action.” 
