Such is the cycle of cybersecurity: every new technology creates both a new landscape of threats and new tools to defend against them. Generative AI is no exception.
“Gen AI makes things easier for both the defenders and the attackers,” said Subha Tatavarti, chief technology officer at Wipro Limited, at a panel on cybersecurity threats in the AI age at Fortune’s Brainstorm AI conference in San Francisco this week.
Generative AI is making phishing attacks more convincing, and large language models in particular have opened up a massively exposed attack surface. At the same time, malicious actors are selling hacker-targeted, ChatGPT-like chatbots on the dark web that will spin up attacks as quickly as OpenAI’s product will answer questions or summarize text. What’s especially challenging about generative AI’s impact on cybersecurity, though, is the whiplash speed at which it has hit the market (including the black market). Companies across sectors are now scrambling not only to understand emerging generative AI-enabled attacks and build new defense tools, but also to deal with fast-moving questions around internal usage of these tools, policy, and compliance. As a result, the CISO role is being turned on its head.
“I feel for the CISOs of today,” said Tatavarti, adding that it will be critical for CISOs to innovate quickly, including building their own solutions rather than relying solely on what’s available on the market.
Tatavarti spoke alongside Check Point chief strategy officer Itai Greenberg and Rodrigo Madanes, global AI innovation leader at EY, during a strategy session exploring how AI is reshaping the cybersecurity landscape. Amid the discussion of the new kinds of threats generative AI makes possible, the changing CISO role emerged as a clear touchpoint.
“The CISO’s role is incredibly challenging and evolving quickly,” said Madanes. “I think right now, what’s happening is that they have been enforcing existing policies on data and protection, but as they move into the realm of shouldering the responsibility of protecting against injection attacks on the conversational interfaces that are being deployed, that requires a different skill set, a different set of tools that haven’t even been developed, that are mostly homegrown right now.”
Similarly, Greenberg said CISOs should be thinking about which tools they are using and what data they are uploading to those tools, especially public ones. That also means carefully laying out guardrails, including deciding who can remove data from these systems.
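To make the idea concrete, here is a minimal, purely illustrative sketch (not a tool any of the panelists described) of the kind of homegrown guardrail a security team might put in front of a public generative AI service: outbound prompts are screened against a handful of sensitive-data patterns before anything is sent. The pattern names, the regular expressions, and the screen_prompt helper are hypothetical placeholders, not a real product's API.

```python
import re

# Hypothetical examples of patterns a company might refuse to send to a public AI tool.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w\-]+\.corp\.example\.com\b"),
}


def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt bound for an external AI service."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
    return (not findings, findings)


if __name__ == "__main__":
    allowed, findings = screen_prompt("Summarize last night's outage on db01.corp.example.com")
    if allowed:
        print("OK to forward to the external tool")
    else:
        print("Blocked; prompt appears to contain: " + ", ".join(findings))
```

As Greenberg’s later point suggests, a check like this would only ever be one layer among several (logging, access controls, data-loss prevention), not a complete answer on its own.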
To many, this looks like a different kind of role than that of the CISOs of yesterday, who focused more narrowly on technical matters such as IT outsourcing rather than on major policy decisions. The point sparked a lively discussion among participants, who commented on the growing risks of being a CISO and speculated that the role may actually split in two: one more operational, the other more governance-oriented.
Pointing to the fact that CISOs are now being held personally, even criminally, liable for their handling of attacks on their companies, one participant, Ross Camp of data security and protection firm Commvault, asked whether we should worry about a shortage of CISOs in the near future. Just last month, SolarWinds CISO Timothy Brown was charged by the Securities and Exchange Commission with defrauding investors by failing to disclose known security risks that led to the massive supply-chain attack on the company, and analysts and legal professionals believe such cases will become much more common.
Fighting generative AI attacks with generative AI is still a work in progress, but Madanes said that in 2024 the industry will be off to the races building solutions.
“I think we’re only starting to see people realize how the attack vectors that are going to come into agents that are exposed to the outside world — what shape those are going to have, and what are going to be the commercial solutions they need to put in place. But I don’t think we’re there yet,” Madanes said. “I think we’re rushing to build commercial solutions, assess them, and deploy them.”
Greenberg, who provided much of the insight into the new types of attacks taking shape, such as next-level phishing and the availability of applications like FraudGPT, stressed the importance of multiple lines of defense and cautioned against believing any one tool can do the job.
“I think it’s important for us to understand that it’s not one system, not one product that can deal with this,” he said.