There’s no doubt about it: small organizations can enhance their operations using ChatGPT and other generative AI tools. However, while the technology itself is neutral, cybercriminals are using it with malicious intent to strengthen their attacks. That’s why organizations need to leverage robust cybersecurity tools, sophisticated knowledge, and security awareness training to stay secure in a world with advanced AI. But how exactly is it helping these bad actors?
How Cybercriminals Can Use Generative AI
Just like you or me, cybercriminals can boost their operations by taking advantage of generative AI tools like ChatGPT – which is alarming. Consider this:
You start every workday by sifting through the many emails you’ve collected since you left the day before. Today, you need to go through over 40. Thanks to your security awareness training, you diligently look for misspellings and awkward phrasing as you sift through them. Why? Because these are the most telling traits of phishing and scam emails.
When you finish, you realize you didn’t see any emails that appeared off. Normally, you spot at least one or two, but that doesn’t mean there were no phishing attempts in your inbox. In a world where natural language processing tools are at anyone’s fingertips, threats are far better disguised. So, let’s start here on our journey of exploring how cybercriminals can use AI to enhance their malicious threats.
Convincing Phishing Emails
Using a powerful AI tool like ChatGPT or Google Bard, bad actors can generate emails free of poor spelling and confusing grammar – enabling them to send convincing phishing emails that appear to come from legitimate organizations. This capability increases the likelihood of recipients falling for the scam and revealing confidential information, such as:
- Account numbers
- Credit card numbers
- Personal data
Tailored Email Content
If you are vaguely familiar with marketing approaches, you know that personalized messages are more effective in achieving a desired response. Whether you’re trying to increase social media engagement or get more people to sign up for your webinars via email, people are more likely to act when they receive messages tailored to them. Many bad actors use a similar approach.
Hackers can use generative AI platforms to maliciously tailor their content toward a specific audience. They can use AI to gather information from public online sources and use it to manipulate individuals or companies in an attack.
Here’s an example of a personalized phishing email that Bleeping Computer asked ChatGPT to write (for demonstration purposes):
Phishing email written by ChatGPT (Bleeping Computer)
As generative AI becomes more advanced, it becomes easier for hackers to create realistic, convincing emails that can trick even the most vigilant users. Unfortunately, this means the number of hard-to-identify phishing emails you receive in your inbox may increase significantly – posing a serious threat to your data security.
Generating Malicious Code
It may come as a surprise to some, but ChatGPT excels at writing impressive code. AI-adopting bad actors can use this strength to create malicious code or commands that exploit vulnerabilities in your organization. This capability lowers the barrier to entry, making it easier for cybercriminals to gain unauthorized access to systems and launch advanced attacks.
Popular generative AI tools do have restrictions in place to prevent malicious use (e.g., generating malicious code). As you might expect, however, these controls are not enough to stop determined cybercriminals – knowledgeable hackers can bypass them.
To avoid these limitations entirely, many cybercriminals are turning to tools they find on hacker forums. These AI platforms have no controls in place, which allows attackers to craft phishing and business email attacks with little to no resistance. WormGPT is one such tool; it has been trained on a diverse array of data sources, with a concentration on malware-related data.
ChatGPT, Cybersecurity, and BEC
It’s for good reason that cyberthreats powered by AI tools like ChatGPT are a growing concern for organizations of all sizes. They can evade traditional security measures more easily and cause significant damage. Business email compromise (BEC) is one threat that should be on the radar of every organization.
Over the last decade, BEC caused over $50 billion in losses – and that was before the advent of readily accessible and efficient generative AI. We now exist in an era where AI-enabled attacks thrive. More will need to be done to safeguard sensitive information, or more organizations will face even greater losses.
Any organization that lacks sophisticated cybersecurity is at a significantly higher risk of becoming a victim of cybercriminals. The repercussions of such attacks include substantial financial loss, from money paid to hackers to operational disruptions. Other impacts on organizations include:
- Lost customer trust leading to loss of business.
- Legal consequences due to non-compliance.
- Increased security costs to prevent future attacks.
- Changes in cyber insurance coverage, including premium adjustments.
Protecting Your Organization
To protect future success, organizations need to proactively mitigate risks. Given that 91% of cyberattacks start with email, it is crucial that all staff receive regular security awareness training. Staff members need to be aware of the threats they face and how their actions can protect sensitive information. We highly recommend using KnowBe4: organizations that implement KnowBe4’s security training reduce their phishing susceptibility by an average of 85% in one year.
Additionally, it is essential to invest in advanced cybersecurity tools and strategies to stay current with the latest security trends and to counter complex social engineering attacks effectively. Small and medium-sized businesses (SMBs), specifically, need access to a knowledgeable cybersecurity team to mitigate these risks.
Verizon’s 2022 Data Breach Investigations Report highlights the devastating impact of a data breach, with 60% of SMBs going out of business within six months of a cyberattack. Leaders can enhance their cyber resilience against advanced cyber threats, like AI-enhanced phishing attacks, by:
- Providing regular training to their employees to mitigate potential risks.
- Investing in advanced technology and knowledgeable cybersecurity staff.
- Enabling multi-factor authentication (MFA).
- Regularly implementing updates and patches.
Taking a proactive approach to cybersecurity is vital for organizations to safeguard valuable assets and maintain the trust of their customers.