Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success. Learn more
The promised AI revolution has arrived. ChatGPT from OpenAI set a new record for the fastest growing user base and the wave of generative AI has spread to other platforms, creating a massive shift in the technology world.
It’s also dramatically changing the threat landscape – and we’re starting to see some of these risks materializing.
Attackers use AI to improve phishing And fraud. Meta’s language model with 65 billion parameters has been leaked, which will undoubtedly lead to new and improved phishing attacks. We see new rapid injection attacks daily.
Users often put business-sensitive data into AI/ML-based services, prompting security teams to scramble to support and monitor the use of these services. For example, the engineers of Samsung put own code in ChatGPT to get help with debugging, leaking sensitive data. An investigation through Fishbowl showed that 68% of people who use ChatGPT for work don’t tell their bosses.
Abuse of AI concerns more and more consumers, companies and even the government. That has been announced by the White House new investments in AI research and upcoming public reviews and policies. The AI revolution is moving fast and has led to four major types of problems.
Asymmetry in the attacker-defender dynamics
Attackers are likely to adopt and develop AI faster than defenders, giving them a distinct advantage. They will be able to perform advanced attacks powered by AI/ML at incredible scale and at a low cost.
Social engineering attacks are the first to take advantage of synthetic text, speech, and images. Many of these attacks that require some manual effort, such as phishing attempts masquerading as IRS or real estate agents and tricking victims into transferring money, will become automated.
Attackers can use these technologies to create better malicious code and launch new, more effective attacks at scale. For example, they will be able to quickly generate polymorphic code for malware that evades detection by signature-based systems.
One of AI’s pioneers, Geoffrey Hinton, recently made headlines when he told the New York Times that he sorry for what he helped build because “it’s hard to see how you can keep the bad actors from using it for bad things.”
Security and AI: further erosion of social trust
We’ve seen how quickly misinformation can spread thanks to social media. That’s according to a Pearson Institute/AP-NORC Poll from the University of Chicago 91% of adults across the political spectrum believe misinformation is a problem and nearly half fear they have spread it. Put a machine behind it and social trust can erode cheaper and faster.
Today’s AI/ML systems based on large language models (LLMs) are inherently limited in their knowledge, and if they don’t know how to answer, they make things up. This is often referred to as “hallucinating,” an unintended consequence of this emerging technology. When looking for legitimate answers, a lack of accuracy is a huge problem.
This damages people’s trust and creates dramatic mistakes with dramatic consequences. A mayor in Australia, for example, says he can sue Open AI for defamation after ChatGPT falsely identified him as a prisoner for bribery when he was actually the whistleblower in a case.
New attacks
In the next decade, we will see a new generation of attacks against AI/ML systems.
Attackers influence the classifications systems use to bias models and verify outputs. They create malicious models that are indistinguishable from the real ones, which can do real damage depending on how they are used. Rapid injection attacks will also become more frequent. Just a day after Microsoft introduced Bing Chat, a Stanford University student convinced the model to be internal guidelines.
Attackers are starting an arms race with hostile ML tools that trick AI systems in various ways, poison the data they use, or extract sensitive data from the model.
As more of our software code is generated by AI systems, attackers can exploit inherent vulnerabilities that these systems have inadvertently introduced to compromise applications at scale.
Scale externalities
The cost of building and operating large-scale models will create monopolies and barriers to entry that will lead to externalities that we may not yet be able to predict.
Ultimately, this will have a negative impact on citizens and consumers. Misinformation will be rampant, while social engineering attacks will hit consumers on a massive scale who will have no means to protect themselves.
The federal government’s announcement that governance is coming is a great start, but there’s so much to make up for to stay ahead of this AI race.
AI and security: what comes next
The non-profit organization Future of Life Institute published an open letter calling for a pause in AI innovation. It got a lot of press coverage, with Elon Musk joining the crowd of concerned parties, but hitting the pause button just isn’t feasible. Even Musk knows this – he seems to have changed course and started for himself AI company compete.
It was always disingenuous to suggest that innovation should be suppressed. Attackers will certainly not honor that request. We need more innovation and more action to ensure that AI is used responsibly and ethically.
The silver lining is that this also creates opportunities for innovative approaches to security that use AI. We will see improvements in threat hunting and behavioral analysis, but these innovations will take time and investment. Every new technology creates a paradigm shift and it always gets worse before it gets better. We’ve gotten a taste of the dystopian possibilities when AI is used by the wrong people, but we need to act now so that security professionals can strategize and respond when large-scale problems arise.
At this point, we are hopelessly unprepared for the future of AI.
Aakash Shah is CTO and co-founder of oak9.