Gen has released its top cyber threat predictions for 2025. The company's cybersecurity experts predict the next phase of AI and deepfakes, a shift from data theft towards full-scale identity theft, ultra-sophisticated scams and new tactics for financial theft.
“After a monumental year in AI – and a somewhat catastrophic year in breaches – we believe we’ll see significant shifts in both scams and digital identity risks in 2025,” said Siggi Stefnisson, Cyber Safety CTO at Gen. “Cybercriminals will capitalise on large breaches to either steal identities outright or utilise the information to create hyper-personalised and believable scams. AI will exacerbate the issues, not only helping criminals make their scams more sophisticated, but also forcing people to question how technology is shaping human thought. It’s sure to be a year of change, and it will be more important than ever for people to be protective of their digital lives.”
Gen’s Top Cyber Predictions for 2025:
1. AI will blur everyday reality: Large Language Models (LLMs) will begin to create hyper-personalised experiences as people work more with AI. In late 2024, over 200 million people used ChatGPT every week. While convenient, these technologies will likely begin shaping individual perceptions and reality, prompting ethical discussions on AI’s impact on human thought. As AI becomes more integrated into complex areas like parenting and education, ethical concerns about its role in society will grow. Expect more debate addressing technology’s boundaries and influence on personal development. The European Union and several US states have already introduced legislation to advance AI protections, and we expect increased activity across the US and around the world in the coming year. In New Zealand, while there is no standalone AI Act yet, the country’s strategic approach to AI has been outlined in a Cabinet Paper published on 26 June by the Minister of Science, Innovation and Technology, Judith Collins. The Minister intends to take a “light-touch, proportionate and risk-based approach to AI regulation”. This October, New Zealand further demonstrated its commitment to AI safety by joining the UK’s Bletchley Declaration on AI Safety.
2. Deepfakes will become unrecognisable: AI will become sophisticated enough that even experts may be unable to tell what’s authentic. People will have to ask themselves whenever they see an image or watch a video: is this real? Unfortunately, people with bad intentions will take advantage. This can be as personal as a scorned ex-partner spreading rumours via fake photos on social media or as extreme as governments manipulating entire populations by releasing videos that spread political misinformation. As deception becomes increasingly sophisticated, verifiable digital credentials – a combination of verifiable information used together as a digital authenticity signature – will evolve into powerful tools for proving what's real.
3. Data theft leads to a surge in identity theft: Following a year of consistent, large-scale breaches, we will continue to see a significant rise in identity theft. Criminals will stitch together personal information extracted from data breaches, publicly available sources, and information stolen from devices to build comprehensive profiles of individuals, putting them at higher risk of identity theft. This will fuel sophisticated extortion attempts and enable attackers to convincingly impersonate trusted companies, especially those previously compromised.
4. Scams enter the era of hyper-personalisation: Expect a shift towards hyper-personalised, human-centric methods that manipulate human behaviour rather than exploit traditional technological vulnerabilities. Armed with personal data from past breaches and dark web exchanges, attackers will develop hyper-targeted strategies to deceive their victims. This mirrors the sextortion campaign uncovered in the US and Canada in 2024, which used Google Street View images to startle victims. Combining psychological insight with social engineering, these schemes will disarm people, deploying convincing phishing and fraud tactics across platforms like social media and messaging apps. Hyper-personalised, human-targeted methods will make it incredibly difficult to distinguish between legitimate communications and scams.
5. Financial theft takes on new forms: Anticipate a notable surge in financial theft driven by increasingly sophisticated mobile banking threats and the growing popularity of cryptocurrencies. Fraudsters will employ advanced techniques such as deepfaked celebrities promising high ROI on fake investment platforms, universal income announcements from voice-cloned government officials or fake giveaways to deceive investors and traders alike. The CryptoCore campaign in 2024 showed signs of this future trend, taking over a million dollars from victims in just a few days by leveraging deepfakes of Elon Musk as a lure. Additionally, cybercrime and the physical world will collide, with more cases of street muggers forcing people to unlock their phones and provide access to financial apps so funds can be transferred to attacker-controlled accounts.