What cybersecurity threats will emerge from the depths of the dark web and from the minds of threat actors in 2024? In this year’s video series, the WatchGuard Threat Lab shares their predictions for how cyberattackers might leverage AI, your favorite tech gadgets, and more, to bypass your defenses and access your private data.
1. Prompt Engineering Tricks Large Language Models
Large Language Models (LLMs) – AI/ML models that allow a computer to carry on a very convincing conversation with you and answer just about any question (though not always accurately) – have taken the world by storm. There’s a risk lurking underneath the fun surface, however. Threat actors and trolls love to turn benign emerging technologies into weapons for their own nefarious purposes and amusement. The same LLMs that might help you draft a paper could also help criminals write a very convincing social engineering email. While the creators of LLMs have slowly tried to add safeguards that prevent bad actors from abusing LLMs for malicious purposes, like all security, it’s a cat-and-mouse game. While not exactly traditional hacking, “prompt engineers” have been working diligently in the shadows to develop techniques that effectively nudge LLMs out of their “sandbox” and into more dangerous waters where they can chart a course of their own with greater potential to yield malicious results.
The potential scale of the problem gets scary when you consider that more and more organizations are trying to harness LLMs to improve their operational efficiency. But using a public LLM for tasks dependent on your proprietary or otherwise private data can put that data at risk. Many of them retain input data for training purposes, which means you’re trusting the LLM vendor to store and protect it. While a traditional breach that exposes that raw data is still possible, we believe threat actors may target the model itself to expose training data.
During 2024, we forecast that a smart prompt engineer—whether a criminal attacker or researcher — will crack the code and manipulate an LLM into leaking private data.
2. MSPs Double Security Services via Automated Platforms
The last full-year estimate pegged the global number of unfilled cybersecurity jobs at 3.4 million, a figure that surely grew substantially in 2023. Adding fuel to the fire, cybersecurity has a burnout problem (pun intended), which is why Gartner predicts nearly 50% of cybersecurity leaders will change jobs, contributing to a “great cybersecurity resignation.” With so many unfilled cybersecurity positions, how will the average small to midmarket company protect themselves?
The answer is managed service and security service providers (MSP/MSSPs). MSPs will enjoy significant growth in their managed detection and response (MDR) and security operations center (SOC) services IF they can build the team and infrastructure to support it. We expect the number of companies who look to outsource security to double due to both the challenging economy and difficulty in finding cybersecurity professionals. To support this spike in demand for managed security services, MSPs/MSSPs will turn to unified security platforms with heavy automation (AI/ML), to lower their cost of operations, and offset the difficulty they may also have in filling cybersecurity technician roles.
3. AI Spear Phishing Tool Sales Boom on the Dark Web
While AI/ML risks may still only account for a fraction of the attacks during 2024, we do expect to see threat actors really begin experimenting with AI attack tools and start to sell them on the underground. This prediction: 2024 will see a boom in an emerging market for automated spear phishing tools, or a combination of tools, on the dark web. Spear phishing is one of the most effective tools attackers have to breach networks. However, traditionally it has also required the most manual work to research and target victims. There are already publicly available tools for sale on the underground to send spam email, automatically craft convincing, targeted text when equipped with the right prompts, and scrape the Internet and social media for a particular target’s information and connections, but a lot of these tools are still manual and require attackers to target one user or group at a time. Well-formatted procedural tasks like these are perfect for automation via AI/ML. During 2024, we expect to see at least one AI/ML-based tool to help automated spear phishing show up for sale on the underground.
4. AI-Based Vishing Takes Off in 2024
Voice phishing (vishing) increased over 550% YoY between the first quarter of 2021 and Q1 2022. Vishing is when some scammer calls you pretending to be a reputable company or organization or even a co-worker (or someone’s boss) and tries to get you to do something they can monetize, such as buying gift cards or cryptocurrency on their behalf.
The only thing holding this attack back is its reliance on human power. While VoIP and automation technology make it easy to mass dial thousands of numbers and leave messages or redirect victims unlucky enough to answer, once they’ve been baited to get on the line, a human scammer must take over the call to reel them in (so to speak). Many of these vishing gangs end up being large call centers in particular areas of the world, very similar to support call centers, where many employees have fresh daily scripts that they follow to socially engineer you out of your money. This reliance on human capital is one of the few things limiting the scale of vishing operations.
We predict that the combination of convincing deepfake audio and large language models (LLMs) capable of carrying on conversations with unsuspecting victims will greatly increase the scale and volume of vishing calls we see in 2024. What’s more, they may not even require a human threat actor’s participation.
5. VR/MR Headsets Allow the Re-Creation of User Environments
Virtual and mixed reality (VR/MR) headsets are finally beginning to gain mass appeal. VR/MR headsets offer a ton of new and personal information for attackers to steal, monetize, and weaponize. Among that information is the actual layout of your house or play space.
To track your presence in a virtual environment properly, these headsets must track you in real space. They do so with various cameras and sensors that get many perspectives of the room or area you inhabit. Even when they only use 2D cameras, combining the multiple camera angles with photogrammetry could allow someone with access to that data to get the layout of the room you are in. More recently, the already popular Quest 3 headset added a depth sensor, which allows it to not only automatically get a more detailed layout of your real-life environment, but also of the furniture and objects within that environment. These headsets have also added “passthrough” and mixed reality features, which allow you to walk around your entire house with the headset on, all the while using that depth sensor to potentially 3D map the layout of your surroundings wherever you go.
So far, the creators of these headsets do not yet seem to be looking to store this data for their own purposes (yet being the operative word here). They also try to design safeguards to prevent software or malicious actors from gaining access. But it is there; and for those with the will, a way can always be found. In 2024, we predict either a researcher or malicious hacker will find a technique to gather some of the sensor data from VR/MR headsets to recreate the environment users are playing in.
6. Rampant QR Code Usage Results in a Headline Hack
While quick response (QR) codes – which provide a convenient way to follow a link with a device such as a mobile phone – have been around for decades, they have surged in popularity in recent years resulting in a mainstream explosion in usage. While five years ago most American households may not have known what a QR code was, now almost everyone uses them; at the very least, to look at a restaurant menu, as many were conditioned to do through and since the pandemic.
Unfortunately, the convenience of QR codes is training people to unthinkingly do the very thing that cybersecurity professionals say they should never do: click on random links without knowing where they go. Not only do QR codes encourage bad security practices, they obscure some of the techniques many would use to verify if a typical URL or hyperlink is safe to click on.
As they are readily available and increasingly posted and utilized in public spaces, it’s trivial for attackers to alter those codes. It could even be something as simple as a replacement sticker slapped on the original one directing to a malicious site instead of the menu at a local café, for instance.
So, unless you are sure you need to follow a link, you should never follow a QR code out of curiosity. More importantly, if you really need to get to the destination the QR code provides, one of the first things you should do before visiting the link is carefully check the full domain and URL to ensure it correctly directs to the place you expect. Unfortunately, turning text links into these graphics, though quick and convenient, makes it harder for people to verify the domain and full URL before visiting. All of this is why QR codes are such dangerous and fantastic obfuscation tools for attackers.
Despite these obvious risks, QR codes have proven too useful and convenient for the average person to ignore. If something is really good, people will do it despite potential problems. For that reason, we expect a big, headline-stealing breach or hack to start with an employee of the victim following a QR code – leading them to accidentally visit a malicious destination.