Internet infrastructure and security company Cloudflare today unveiled Cloudflare One for AI, its latest suite of zero-trust security controls. The tools enable companies to securely use the latest generative AI tools while protecting intellectual property and customer data. The company says the suite gives organizations a simple, fast, and secure way to deploy generative AI without compromising performance or security.
“Cloudflare One empowers teams of all sizes to use the best tools available on the web without management headaches or performance challenges. Plus, it allows organizations to monitor and review the AI tools their team members have started using,” Sam Rhea, VP of product at Cloudflare, told VentureBeat. “Security teams can then limit usage to only approved tools and, within the approved tools, monitor and control how data is shared with those tools using policies built around [their organization’s] sensitive and unique data.”
Cloudflare One for AI provides enterprises with comprehensive AI security through features such as visibility and measurement of AI tool usage, data loss prevention, and integration management.
With Cloudflare Gateway, organizations can track how many employees are experimenting with AI services. This provides context for budgeting and business licensing plans. Service tokens also give administrators a clear log of API requests and control over specific services that access AI training data.
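The kind of usage tracking described above amounts to counting distinct users per AI destination in gateway traffic logs. As a minimal sketch (the log records, field names, and destinations here are hypothetical, not Cloudflare's actual log schema):

```python
from collections import defaultdict

# Hypothetical gateway log records: (user_email, destination_host).
# In a real deployment these would come from a secure web gateway's
# HTTP traffic logs rather than a hard-coded list.
log_records = [
    ("alice@example.com", "chat.openai.com"),
    ("bob@example.com", "chat.openai.com"),
    ("alice@example.com", "bard.google.com"),
    ("carol@example.com", "chat.openai.com"),
]

def users_per_ai_service(records):
    """Count distinct users seen per AI service destination."""
    seen = defaultdict(set)
    for user, host in records:
        seen[host].add(user)
    return {host: len(users) for host, users in seen.items()}
```

With the sample records above, `users_per_ai_service(log_records)` reports three users on `chat.openai.com` and one on `bard.google.com` — the sort of per-service headcount an administrator could feed into budgeting and licensing decisions.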
Cloudflare Tunnel provides an encrypted outbound connection to Cloudflare’s network, while the Data Loss Prevention (DLP) service helps close the human gap in how employees share data by scanning it for sensitive content.
“AI is incredibly promising, but without the right guardrails it can pose significant business risks. Cloudflare’s zero trust products are the first to provide guardrails for AI tools so companies can take advantage of the opportunities AI unlocks while ensuring only the data they want to expose is shared,” said Matthew Prince, co-founder and CEO of Cloudflare, in a written statement.
Mitigating generative AI risks through zero trust
Organizations are increasingly adopting generative AI technology to improve productivity and innovation. But the technology also poses significant security risks. For example, large companies have banned popular generative AI chat apps due to sensitive data leaks. In a recent survey by KPMG US, 81% of US executives expressed cybersecurity concerns around generative AI, while 78% expressed concerns about data privacy.
According to Cloudflare’s Rhea, customers have expressed heightened concerns about inputs to generative AI tools, fearing that individual users might inadvertently upload sensitive data. Organizations have also raised concerns about training these models, which risks granting overly broad access to datasets that are not allowed to leave the organization. By opening up data to these models to learn from, organizations could inadvertently compromise the security of their data.
“The main concern for CISOs and CIOs [with] AI services is overdoing it — the risk that individual users, understandably excited about the tools, end up inadvertently leaking sensitive corporate data to those tools,” Rhea told VentureBeat. “Cloudflare One for AI gives those organizations a comprehensive filter, without slowing down users, to ensure the data shared is allowed and the unauthorized use of unapproved tools is blocked.”
The company claims that Cloudflare One for AI equips teams with the necessary tools to thwart such threats. For example, by scanning data that is being shared, Cloudflare One can prevent data from being uploaded to a service.
In addition, Cloudflare One facilitates the creation of secure paths for sharing data with external services, which can record and filter how that data is accessed, reducing the risk of data breaches.
“Cloudflare One for AI gives companies the ability to monitor every interaction their employees have with these tools or that these tools have with their sensitive data. Customers can effortlessly begin cataloging which AI tools their employees are using by relying on our pre-built analytics,” explained Rhea. “With just a few clicks, they can block or control which tools their team members use.”
The company claims Cloudflare One for AI is the first to provide guardrails around AI tools so organizations can take advantage of AI while ensuring they share only the data they want exposed, without compromising their intellectual property and customer data.
Keeping your data private
Cloudflare’s DLP service scans content as it leaves employees’ devices to detect potentially sensitive data during upload. Administrators can use pre-provided templates, such as social security numbers or credit card numbers, or define terms or phrases for sensitive data. When users try to upload data that contains one or more samples of that type, Cloudflare’s network blocks the action before the data reaches its destination.
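The scan-and-block behavior described above can be sketched as pattern matching against DLP profiles. The profiles and thresholds below are illustrative stand-ins, loosely modeled on the built-in templates the article mentions (U.S. social security and credit card numbers), not Cloudflare's actual detection rules:

```python
import re

# Illustrative DLP profiles: each maps a profile name to a regex that
# detects one class of sensitive data.
DLP_PROFILES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_upload(payload: str, min_matches: int = 1):
    """Scan outbound content and decide whether to block the upload.

    Returns (blocked, hits), where `hits` counts how many samples of
    each profile were found. The upload is blocked when any profile
    matches at least `min_matches` samples, mirroring the policy
    described in the article.
    """
    hits = {name: len(rx.findall(payload))
            for name, rx in DLP_PROFILES.items()}
    blocked = any(count >= min_matches for count in hits.values())
    return blocked, hits
```

For example, `scan_upload("my SSN is 123-45-6789")` blocks the upload because the SSN profile matched, while ordinary text passes through. A real gateway would apply checks like this inline, before the request leaves the network.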
“Customers can tell Cloudflare what types of data and intellectual property they control and [that] can never leave their organization, as Cloudflare scans every interaction their corporate devices have with an AI service on the internet to filter and prevent that data from leaving their organization,” explained Rhea.
Rhea said organizations are concerned about external services having access to all the data they provide when an AI model needs to connect to training data. They want to make sure that the AI model is the only service that gets access to the data.
“Service tokens provide a kind of authentication model for automated systems, just as passwords and second factors provide validation for human users,” said Rhea. “Cloudflare’s network can create service tokens that can be delivered to an external service, such as an AI model, and then act as a bouncer that checks each request to reach internal training data for the presence of that service token.”
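The "bouncer" check Rhea describes can be sketched as validating a client ID and secret carried in request headers. Cloudflare Access service tokens are presented via the `CF-Access-Client-Id` and `CF-Access-Client-Secret` headers; the token store and validation function below are hypothetical, and a real deployment enforces this at the network edge rather than in application code:

```python
import hmac

# Hypothetical store of issued service tokens (client id -> secret).
ISSUED_TOKENS = {
    "ai-model-client-id": "s3cr3t-value",
}

def request_allowed(headers: dict) -> bool:
    """Admit a request to internal training data only if it carries a
    valid service token in its headers."""
    client_id = headers.get("CF-Access-Client-Id")
    secret = headers.get("CF-Access-Client-Secret", "")
    expected = ISSUED_TOKENS.get(client_id)
    # Constant-time comparison avoids leaking the secret via timing.
    return expected is not None and hmac.compare_digest(secret, expected)
```

A request from the AI model carrying a valid token pair is admitted; requests with a missing or wrong secret are rejected, just as a password check gates a human user.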
What’s next for Cloudflare?
According to the company, Cloudflare’s cloud access security broker (CASB), a security enforcement point between a cloud service provider and its customers, will soon be able to scan the AI tools companies use and detect misconfiguration and abuse. The company believes its platform approach to security will enable companies around the world to adopt the productivity improvements offered by evolving technology and new tools and plug-ins without creating bottlenecks. In addition, the platform approach ensures that companies comply with the latest regulations.
“Cloudflare CASB scans the software-as-a-service (SaaS) applications where organizations store their data and complete some of their most critical business activities for potential misuse,” said Rhea. “As part of Cloudflare One for AI, we plan to create new integrations with popular AI tools to automatically scan for abuse or misconfigured defaults to help administrators feel confident that individual users won’t accidentally open doors to their workspaces.”
He said Cloudflare, like many organizations, anticipates learning how users will use these tools as they become more popular in the enterprise, and is willing to adapt to challenges as they arise.
“One area we’ve been particularly concerned about is the retention of data from these tools in regions where data sovereignty obligations require greater oversight,” said Rhea. “Cloudflare’s network of data centers in more than 285 cities around the world gives us a unique advantage in helping customers control where their data is stored and how it goes to remote destinations.”