Generative AI has been hitting the headlines since the release of ChatGPT at the end of 2022. And while it is a useful tool for certain tasks, it also poses a threat to companies, as hackers are using it to craft more realistic social engineering attacks. In fact, according to recent Darktrace research, the number of ‘novel social engineering attacks’ rose by 135% in the first two months of 2023.
What are the dangers of generative AI?
Whilst the mode of social engineering attacks isn’t changing – phishing emails remain one of the main delivery methods – generative AI is enabling cybercriminals to create more sophisticated and believable content. Long gone are the days of badly written emails from foreign princes in need of money. Now, convincing, original emails that reference relevant, personalised details can be produced with very little effort.
Generative AI tools such as ChatGPT have also made this technology far more accessible, as they are free and easy to use. They can also write code, which makes it possible to develop malicious software, such as ransomware, without any coding skills. This lowers the barrier to entry for cybercrime, so we are likely to see a rise in these sorts of attacks.
At a more abstract level, the increased use of generative AI has the potential to erode trust between employees and employers over time. The traditional way to limit social engineering attacks is to train people to spot them. However, if malicious communications are now indistinguishable from real ones, employees will learn to mistrust all communications. This isn’t conducive to good business.
How can organisations tackle this threat?
Given that training employees to spot AI-generated phishing emails is now almost futile, it’s crucial to find new ways to limit the threat of these attacks. And one way to do this is to fight fire with fire: using AI algorithms to determine whether a communication is malicious or benign, taking the responsibility away from employees.
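As a rough illustration of what this could look like, the sketch below trains a toy text classifier that scores incoming emails as malicious or benign. The choice of Python with scikit-learn, the sample messages and the features are all assumptions for illustration only; real email-security products combine far richer signals (sender reputation, link analysis, historical communication patterns) and much larger labelled datasets.

```python
# Minimal sketch (illustrative only): a simple text classifier that flags
# suspicious emails, so the malicious/benign decision is made by a model
# rather than by the employee.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: email bodies labelled 1 (malicious) or 0 (benign).
emails = [
    "Your mailbox is full. Verify your password here to avoid suspension.",
    "Hi team, attaching the minutes from this morning's project call.",
    "Urgent: the CEO needs you to buy gift cards and send the codes today.",
    "Reminder: the quarterly planning meeting has moved to Thursday at 10am.",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score a new message: probability that it is malicious.
incoming = ["Please confirm your bank details so we can process your refund."]
print(model.predict_proba(incoming)[0][1])
```

In practice, a score like this would feed into an email gateway or security platform that quarantines or flags high-risk messages automatically, rather than relying on each recipient to make the call.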