ChatGPT is about to light a fire under an already simmering cyber threat landscape, helping scammers write engaging, convincing and grammatically correct phishing emails, or working malware code, in seconds.
You’re probably familiar with ChatGPT by now, or at least you’ve heard about it. It’s been said that ChatGPT is poised to replace jobs in fields such as programming, journalism and music.
Despite being available to the public for only a few months, this OpenAI chatbot has gained a massive following, astonishing users with its capacity to write diverse forms of content, from programming code to essays and song lyrics.
Proofpoint’s Executive Vice President for Cyber Security Strategy, Ryan Kalember, has warned that while cybercriminals could use AI tools like ChatGPT to conduct entire conversations with multiple victims, their existing strategies are advanced enough that they rarely need, or want, to.
Although ChatGPT isn’t yet making a groundbreaking impact, it’s still being utilised by threat actors. At present, it’s merely an easily accessible tool in their arsenal.
There is no shortage of ready-made malware in the world, and code generated by ChatGPT isn’t necessarily superior to the existing options. You can give ChatGPT specific coding instructions, but that doesn’t mean the resulting malware will be any better at evading endpoint detection and response (EDR) tools or infecting machines.
In instances like business email compromise (BEC), where tone and precision carry significant weight, the email itself is just a piece of the puzzle. Cybercriminals also require access to crucial data, such as payer and payee information, as well as other transaction particulars.
Typically, this information is acquired by infiltrating an inbox, enabling threat actors to mimic the writing style of an email that was already sent from the victim’s account. Rather than creating an email from scratch using AI, they can simply copy an existing one.
Kalember says even the most innovative forms of social engineering don’t lend themselves to ChatGPT abuse.
“Through research, we found a marked increase in sophisticated, multi-touch phishing campaigns engaging in longer conversations across multiple personas.

“While BEC actors will also play the long game, advanced persistent threat (APT) actors pioneered these types of attacks and haven’t needed to use AI to achieve their social engineering aims,” he said.
Although cybercriminals or other skilled adversaries could deploy ChatGPT as a chatbot to hold simultaneous conversations with numerous victims, the approach is rarely necessary or even desirable.
Generally, APT campaigns are tailored to specific objectives and involve extensive planning, thorough research, and precise targeting, making them unsuitable for mass deployment.
Moreover, cybercriminals already have access to millions of compromised credentials and endpoints, and they’ve even built their own CRM-style systems to prioritise targets for manual privilege escalation and lateral movement.
While ChatGPT is not yet a revolutionary force, threat actors are already incorporating it into their toolkits, and experts warn that it could eventually surpass the capabilities of existing tools and pose a significant threat to cybersecurity.