What risks will shape the cybersecurity landscape in 2025? In this year's predictions, the WatchGuard Threat Lab explores how threat actors will use multimodal AI to streamline attacks, target vulnerabilities in software supply chains, and exploit GenAI's growing capabilities to infiltrate networks and access sensitive information.
1. Malicious Use of Multimodal AI Will Create Entire Attack Chains
In 2025, threat actors will put multimodal AI to malicious use to craft entire attack chains. As multimodal AI systems gain the ability to integrate text, images, voice, and sophisticated coding, they will become available to threat actors, who will leverage them to streamline and automate the entire pipeline of a cyberattack. This includes profiling targets on social media; crafting and delivering realistic phishing content, including voice phishing (vishing); sometimes finding zero-day exploits; generating malware that can bypass endpoint detection and deploying the infrastructure to support it; automating lateral movement within compromised networks; and exfiltrating stolen data. This hands-off, seamless approach will democratize cyber threats even more radically than malware-as-a-service offerings have in recent years, enabling less-skilled threat actors to launch advanced attacks with minimal human intervention. As a result, organizations and security teams of every size will face an increase in highly tailored cyber threats that are difficult to detect and combat.
2. Threat Actors Move to the Long Con as Attacks Using Compromised Legitimate Software Become the Norm
In 2025, attackers will intensify their targeting of little-known but widely used third-party open-source libraries and dependencies to avoid detection and execute malicious attacks. They will also lean more heavily on a "long-con" approach, targeting the software supply chain over an extended period and building a false reputation as good-faith contributors rather than launching a single, one-off attack. This could even involve impersonating or compromising reputable maintainers to gain a foothold in the software supply chain. By quietly infiltrating these trusted components that so many applications depend on, attackers can push malware downstream, making the threat much more challenging for organizations and open-source ecosystems to detect and defend against.
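One countermeasure against this kind of tampering is verifying that the dependencies a build pulls in are exactly the artifacts a team has reviewed. The sketch below is a minimal illustration, assuming a hypothetical lockfile that maps artifact names to known-good SHA-256 digests; the filename and digest shown are placeholders, not real packages.

```python
# Minimal sketch of dependency integrity checking, assuming a hypothetical
# lockfile of known-good SHA-256 digests recorded when each dependency was
# last reviewed. Filenames and digests below are illustrative placeholders.
import hashlib
from pathlib import Path

# Hypothetical pins captured at review time (placeholder digest, not real).
PINNED_DIGESTS = {
    "somelib-1.4.2.tar.gz": "9f2c8e4b...",
}

def sha256_of(path: Path) -> str:
    """Stream the file so large artifacts do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: Path) -> bool:
    """Return True only if the artifact matches its pinned digest."""
    expected = PINNED_DIGESTS.get(path.name)
    return expected is not None and sha256_of(path) == expected

# Example: refuse to install anything that does not match its pin.
# for artifact in Path("downloads").glob("*.tar.gz"):
#     if not verify(artifact):
#         raise RuntimeError(f"Integrity check failed for {artifact.name}")
```

The same idea underlies built-in tooling such as pip's hash-checking mode and lockfiles in other ecosystems: a quietly swapped dependency no longer matches the digest the team reviewed, so the tampering surfaces before the code runs.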
3. As GenAI Enters Its Disillusionment Stage, an Opportunity for Bad Actors to Profit Opens
GenAI hasn’t yet found its footing in the business landscape: it has not delivered the transformative changes to organizations, or the returns on investment, that have so far been promised. Even if the broad impact hasn’t materialized, the technology has seen dramatic improvements in the audio and video generation used in deepfakes, though not without widely publicized gaffes. As the GenAI hype cycle crests and slides toward a trough of disillusionment over its practicality and potential, the sense that GenAI is not yet impressive downplays the complete picture of the harm it can cause. Whether or not GenAI continues to dominate mainstream headlines, the technology itself will keep improving exponentially in the background. Because people tend to remember the instances of bad deepfakes and other missteps, they may believe GenAI is a far-off promise that cannot fool them. That complacency will open up new attack vectors: bad actors can profit by combining GenAI with other sophisticated tactics to earn an organization’s trust while carrying out what the victim believes is a legitimate business transaction.
4. The Role of the CISO Will Become the Least Desirable in Business
The CISO role is a human-centric one: the biggest issues CISOs typically encounter are not technical problems but human and governance problems. As regulatory and policy demands grow, including requirements for the CISO to personally certify the cybersecurity integrity of their business, CISOs will face greater personal accountability and legal risk in 2025 and beyond. They are already facing increased burnout, not to mention the growing challenge of gaining support across departments while managing the actual security threats. The heightened pressure of ensuring internal corners are not being cut may increase turnover, shrinking the pool of qualified candidates willing to take on the role and widening the cybersecurity skills gap. Ultimately, the difficulty of filling essential CISO positions could delay responses to security risks until it’s too late and a breach or compliance issue forces reactive investment. However, there is a bright spot: the cybersecurity supply chain is becoming more attuned to the difficulties of modern-day CISOs. Technology providers and partners are operationalizing a platform approach to alleviate this burden and establish stronger trust and accountability across the ecosystem, and smaller businesses can outsource some of their CISO responsibilities to managed service and managed security service providers (MSPs/MSSPs) that offer CISO services.
5. Intelligence Agencies and Law Enforcement Disruption of Threat Actors Starts to Have a Meaningful Impact
Intelligence agencies and law enforcement are getting more sophisticated in their tactics for foiling and taking down bad actors. A new focus on disrupting cybercriminal activities, expanded international partnerships, and a raft of new interpretations of laws and policies to support these efforts are resulting in bigger takedowns, more often. Meaningful adoption of disruption tactics, from turning off botnets to cutting off profit avenues, combined with louder publicity about successes (such as takeovers of actors’ underground pages), is making it tougher and less rewarding for cybercriminals to stick their necks out. In particular, partnering with other nations and even private organizations in a "whole of world" approach is making it more difficult and expensive for threat actors to carry out their attacks. This is poised to have major downstream impacts, as rising costs are the primary barrier and deterrent for would-be bad actors looking to get into the hacking game.
6. To Secure Operational Technology, Organizations Will Rely on AI-Powered Anomaly Detection
When it comes to harnessing the power of AI in security, two can play that game. As threat actors increasingly look to leverage AI to both find and build vulnerabilities, cybersecurity professionals will also take advantage of AI’s capabilities to uncover and stymie their attempts. Additionally, as operational technology (OT) continues to converge with information technology (IT), defenders are poised to gain significantly improved controls, such as AI-powered anomaly detection, that proactively detect and respond to new threats in a technology-agnostic way. Cybersecurity teams will rely less on protocol- or application-specific defensive capabilities that are complex to set up and manage, and instead deploy more AI-powered anomaly detection controls that baseline “normal,” whatever “normal” looks like, and then alert on deviations.
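To make the baseline-then-alert idea concrete, here is a minimal sketch, assuming numeric telemetry (packet rates, packet sizes, peer counts) exported from an OT network. It uses scikit-learn's IsolationForest on synthetic data purely for illustration; the feature names and values are assumptions, not a description of any specific product.

```python
# Minimal sketch of "baseline normal, alert on deviations" anomaly detection.
# The telemetry features here are hypothetical examples for an OT network.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical "normal" telemetry: [packets_per_sec, mean_packet_size, distinct_peers]
baseline = np.column_stack([
    rng.normal(120, 10, 5000),   # packets per second
    rng.normal(512, 40, 5000),   # mean packet size in bytes
    rng.normal(8, 1, 5000),      # distinct peers contacted
])

# Learn what "normal" looks like, without any protocol-specific rules.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# New observations: one typical sample, one deviating sample (e.g., a burst
# of traffic to many new peers, which could indicate lateral movement).
new_samples = np.array([
    [118, 520, 8],      # consistent with the baseline
    [950, 64, 140],     # far outside the learned baseline
])

for sample, label in zip(new_samples, model.predict(new_samples)):
    status = "ALERT: deviation from baseline" if label == -1 else "normal"
    print(sample, "->", status)
```

The point of the design is that nothing in the model is tied to a particular protocol or application: the same approach can be applied to whatever telemetry defines “normal” for a given environment.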