3 AI-Powered Cyber Attacks Coming for Your Business

At the beginning of this month (#CybersecurityAwarenessMonth) we outlined the Top 5 Cyber Threats to Small Businesses this year. We're breaking down each one to give you a better understanding of how these prevalent attacks are debilitating businesses in 2024 and into the already encroaching new year. You can see the full list of breakdowns at the bottom of the blog above!

AI is everywhere and you've noticed. But discussions on a big scale can often make you scratch your heard, wondering, should I care? Is it a good thing or a bad thing? The answer, is as anything really is, both good and bad.

AI at it's basic level is good, but hackers and other threat actors are using AI to create attack codes and threats to all businesses, not just small ones. Unfortunately, though, small businesses are the easiest targets. And now threats are evolving faster than ever. 

Types of AI Attacks on Small Businesses

Automatically Finding Vulnerabilities

You can try it yourself. Open any large language model (LLM) and type in "what are the most common vulnerabilities in a cloud system?" - this is  basic question that opens up astronomical results. Just as easy as is it is for you to find information on the web, so too is it as easy for hackers. But it goes beyond that. 

Since LLMs are inherently built to digest vast amounts of knowledge at once, they are therefore incredibly easy to manipulate. Especially since the current LLMs like ChatGPT are advancing in a way that now sets a standard for cyber criminals to build off of. What's more, for the price of a ChatGPT Plus membership ($20 /month), you get access to "Code Interpreter", a new addition to ChatGPT 4.0. 365 DataScience say it well: Like a screen share example in a Teams call, "Code Interpreter runs the code for you and provides the ready-made output. Plus, it lets you see how it arrived at the result, so you can still run the code yourself... it makes specific solutions accessible to less tech-savvy individuals, empowering them to conduct tasks requiring programming without knowing how to write a single line of code." Did you catch that? That's just for less tech-savvy individuals.

It's even scarier if a company already integrates AI into their own systems. All it takes is a hacker to manipulate their own code and input it into your system - boom, easy vulnerability. 

Launching Sophisticated Phishing Campaigns

This builds on the previous issue. Because AI models are ever-evolving, this means the work that a threat actor has to do lowers. Pg. 8 of the "Spear Phishing with Large Language Model" study done at the Oxford Centre for the Governance of AI details how the future will be impacted by AI-driven phishing attacks: "cybercriminals will gain the ability to to automate increasingly sophisticated hacking and deception campaigns with little or no human involvement." 

Essentially, without having to work as hard, cyber criminals can and will generate phishing emails that mimic human touch without even working hard enough to produce content written by a human. ChatGPT 4.0's free version has advanced so much that a simple "generate a spear phishing email" (which is something you can input into any LLM by the way), can produce human-level quality output. The sophistication has nearly doubled from ChatGPT 3.0 to 4.0. See pages 5-6 of that Oxford study for an example. 

Bypassing Traditional Security Measures

LLMs, as we said before, are easily manipulated or "jail-broken" to act in any way the developer wishes. All they have to do is draft a prompt and input it in a way that break's the LLM's inherent safety precautions. DarkReading says "many LLMs are designed to not provide detailed instructions when prompted for how to make a bomb. They respond that they can't answer that prompt. But there are certain techniques that can be used to get around the guardrails. An LLM that has access to internal corporate user and HR data could conceivably be tricked into providing details and analysis about employee working hours, history, and the org chart to reveal information that could be used for phishing and other cyberattacks." 

At it's core, it's easy to bypass your security measures as long as the criminal has the right prompt and information to offer the LLM.

What To Do Next

Securing yourself first and foremost is the only proactive measure that can help you prevent AI-driven attacks. It's daunting and perhaps feels a little like fitting an ocean in a water bottle but we're here to help in any way we can. 

Educating yourself is step one, so you're in the right place. Step two is to set those security measures in place. If you need help, let us know. We'd be happy to come alongside you in that process. 

Originally published on October 22, 2024

Be a thought leader and share:

Subscribe to Our Blog

About the Author

Emily Kirk Emily Kirk

Creative content writer and producer for Centre Technologies. I joined Centre after 5 years in Education where I fostered my great love for making learning easier for everyone. While my background may not be in IT, I am driven to engage with others and build lasting relationships on multiple fronts. My greatest passions are helping and showing others that with commitment and a little spark, you can understand foundational concepts and grasp complex ideas no matter their application (because I get to do it every day!). I am a lifelong learner with a genuine zeal to educate, inspire, and motivate all I engage with. I value transparency and community so lean in with me—it’s a good day to start learning something new! Learn more about Emily Kirk »

Follow on LinkedIn »