Darktrace researchers recently observed a 135% increase in novel social engineering attack emails in early 2023.
The company reported that the email attacks targeted many of its customers in the first two months of 2023, a rise it suggested tracks the widespread adoption of ChatGPT.
The attacks employed “sophisticated language techniques”, including increased text volume, longer sentences, and heavier use of punctuation in phishing emails.
At the same time, there was a rapid decrease in the number of malicious emails sent with a link or an attachment.
Darktrace suggested that this points to generative AI, such as ChatGPT, being used by malicious actors to construct targeted attacks.
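To make the linguistic signals above concrete, here is a minimal sketch of how a defender might measure them on an incoming message. The feature names and the sample email are illustrative assumptions, not Darktrace's actual detection logic:

```python
import string

def linguistic_features(body: str) -> dict:
    """Compute simple linguistic signals of the kind described above:
    text volume, sentence length, and punctuation density.
    These are illustrative heuristics, not a real detector."""
    # Rough sentence split on terminal punctuation
    sentences = [s for s in body.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    words = body.split()
    punctuation = sum(1 for ch in body if ch in string.punctuation)
    return {
        "text_volume": len(words),                                   # total word count
        "avg_sentence_length": len(words) / max(len(sentences), 1),  # words per sentence
        "punctuation_density": punctuation / max(len(body), 1),      # punctuation per char
    }

# Hypothetical phishing-style message
email = "Dear customer, your account requires urgent verification; please reply immediately."
print(linguistic_features(email))
```

An unusually high text volume or punctuation density relative to a sender's historical baseline is the kind of anomaly such systems flag.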
“Email is the key vulnerability for businesses today. Defenders are up against sophisticated generative AI attacks and entirely novel scams that use techniques and reference topics that we have never seen before,” said Max Heinemeyer, Chief Product Officer at Darktrace.
Let’s Understand More
Darktrace's research showed that 82% of people were worried about attackers using generative AI to create scam emails indistinguishable from genuine ones. It also found that 30% of respondents had fallen for a scam text or email.
Darktrace also asked people to name the top three features that suggest an email is a scam.
Defending it All!
Social engineering attacks have always been one of the main avenues through which attackers can breach an organization's systems, and AI-assisted variants make them harder to defend against.
An easy way to install any malware on a system is to embed malicious code within a Microsoft Office document, such as a Word file.
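Since macro code in modern Office Open XML files lives in a part named `vbaProject.bin` inside the document's zip container, a defender can cheaply flag macro-bearing attachments before they are opened. A minimal sketch using only the Python standard library (the function name and example path are my own):

```python
import zipfile

def has_vba_macros(path: str) -> bool:
    """Return True if an Office Open XML file (.docx/.docm) embeds a VBA
    macro project. In these formats, macro code is stored in a zip entry
    ending in vbaProject.bin."""
    try:
        with zipfile.ZipFile(path) as doc:
            return any(name.endswith("vbaProject.bin") for name in doc.namelist())
    except zipfile.BadZipFile:
        # Legacy binary .doc files are not zip containers and need other tooling
        return False

# Hypothetical usage: screen an attachment before opening it
# if has_vba_macros("invoice.docm"):
#     print("Attachment contains macros -- treat with caution")
```

This check only detects the presence of a macro project, not whether it is malicious; it is the sort of coarse signal that mail gateways combine with other evidence.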
Microsoft has recently taken several measures to curb the abuse of its software in scam attacks: in 2022 it disabled VBA macros by default, and more recently it decided to block emails received from potentially vulnerable Exchange servers.
The Vulnerability Continues
Microsoft Exchange servers have been abused by hackers for years to hijack email threads and send convincing email campaigns.
The threat of AI to cyber security is genuine and extends beyond just generative AI.
What do you think about it?