2024 Fraud Predictions: AI, ATO & Social Engineering

By Doriel Abrahams, Principal Technologist, Forter

In 2023, the news cycle was dominated by the continued breakout of generative AI, and it’s hard to argue with that dominance given the technology’s breakneck pace of evolution and its impact on every occupation and industry.

Fraud has been no exception, and I expect it to continue evolving in 2024. Here are five predictions for fraud in 2024 and what fraud fighters working to protect their companies must prepare for in the months ahead.


A Small Step for a Scam, a Giant Leap for Social Engineering

It seems slightly counterintuitive, but advances in artificial intelligence are exactly what fuel advances in attacking the most human part of online interactions. I expect that to take a giant leap forward in 2024.

The pressure toward social engineering comes, somewhat ironically, from the impressive efforts consumer technology companies have invested in making their apps and systems safer: encouraging multi-factor authentication, enabling biometric identification, and so on. The success of these protective measures means that criminals look elsewhere for vulnerabilities. Very often, the weakest link is the human one.

With the growth and sophistication of generative AI, and especially thanks to the intuitive interfaces made possible by things like ChatGPT and FraudGPT, it has become exceptionally easy for fraudsters to carry out highly convincing, multi-step social engineering scams at a scale that simply would not have been possible before. 

In the same way, deepfake technology has become incredibly convincing and practically ubiquitous among the criminal fraternity. Where once it was seen only where significant amounts of money were involved, as in scams impersonating company CEOs, it is now starting to be used in scams against ordinary people.

Fraudsters were refining their techniques in 2023; I expect to see these take off further in 2024. 


Remote Desktop Control Brings Fraud Close to Home

Remote desktop control (RDC) is when a fraudster takes over a victim’s device (laptop, desktop computer, etc.) and uses it to operate as the victim. To surface-level analysis, they look to a site or app exactly like the regular user: they’re on the normal device, with all the features and settings it usually has, apparently connecting from the location the real user typically uses.

This makes it challenging to identify the bad actor behind the facade, and easy for many fraudsters to change victims’ passwords, make purchases, apply for new credit cards, and more. They’re much less likely to experience friction because everything looks above board, and what friction they do encounter can sometimes be circumvented, depending on the depth of their access to the device and its information.
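To make the detection challenge concrete, here is a minimal sketch of how a heuristic risk scorer for remote-control sessions might look. The signal names, thresholds, and weights are illustrative assumptions on my part, not Forter’s method; a production system would derive such signals from behavioural telemetry and tune them against labelled data.

```python
# A minimal sketch of a heuristic RDC risk scorer. All signals and weights
# below are illustrative assumptions, not a vendor's actual method.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    input_latency_ms: float      # average delay between UI event and user input
    mouse_teleport_rate: float   # fraction of cursor moves with no travel path (0-1)
    screen_res_mismatch: bool    # rendered viewport differs from reported display
    new_sensitive_action: bool   # password/address change attempted this session

def rdc_risk_score(s: SessionSignals) -> float:
    """Combine weak secondary signals into a 0-1 remote-control risk score."""
    score = 0.0
    if s.input_latency_ms > 150:                     # remote links add round-trip lag
        score += 0.3
    score += 0.3 * min(s.mouse_teleport_rate / 0.5, 1.0)
    if s.screen_res_mismatch:
        score += 0.2
    if s.new_sensitive_action:
        score += 0.2
    return min(score, 1.0)

if __name__ == "__main__":
    session = SessionSignals(210.0, 0.4, True, True)
    print(f"RDC risk: {rdc_risk_score(session):.2f}")  # 0.94 -> step up authentication
```

The point is not the specific weights but the approach: because the device and location look legitimate, detection has to lean on weak secondary signals, such as the input lag a remote link introduces, combined into a single decision.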

RDC attacks are not new, having been a feature of online fraud for many years, especially in banking. However, they’re starting to tick up as another area of vulnerability left unprotected by the efforts consumer tech companies have made in recent years. Moreover, with the rise of IoT and smart devices, combined with generative AI in the form of AI assistants, there are simply more devices to take over, making this an area worth exploring for fraudsters.

I suspect 2023 was just the tip of the iceberg and that 2024 will take RDC attacks to another level.


ATO Onwards and Upwards

Continuing the theme of fraudsters looking for gaps in existing protections, account takeovers (ATOs) are also in my sights for 2024. An ATO is when a bad actor gains access to and takes over an online account using stolen or hacked credentials.

As with RDC, the problem is not the account or its information but that the person using the account is not the person to whom it belongs. It’s a legitimate account hijacked by a bad actor. Also, like RDC, this makes identification harder.

Unlike with RDC, however, there are certain signs that fraud fighters know to look out for, especially device- and location-related signals. But since fraudsters know those can betray their activity, they increasingly take active steps to obfuscate the reality.

One ingenious method that has just started appearing is fraudsters entering an account a few times and changing only the shipping address, taking no other action. They don’t attempt a purchase, which might set off more checks; instead, they hope that when the real customer makes a purchase, they won’t notice the new primary address, and the order will end up at the fraudster’s door without any further effort.

This same scenario can play out with even more problematic consequences in a marketplace setting if the fraudster manages to change the bank account details used to withdraw funds from the site. Although there is usually a higher bar for making a change of this nature, if the fraudster can manage it, it’s doubtful that the seller will notice in time, and substantial amounts of money may be involved. Making only one change is a subtle way to help fraudsters avoid detection. 
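As a rough illustration, this “one quiet change” pattern can be expressed as a simple rule: flag an order that ships to an address which was added recently, by a session that did nothing else, from a different device. The event fields and the 30-day window below are hypothetical; a real implementation would work from its own event logs and tuned thresholds.

```python
# A minimal sketch of the "one quiet change" ATO pattern: flag purchases that
# ship to an address added by a do-nothing-else session on another device.
# Event fields and the window are illustrative assumptions.
from datetime import datetime, timedelta

ADDRESS_CHANGE_WINDOW = timedelta(days=30)

def is_suspicious_order(order: dict, account_events: list[dict]) -> bool:
    """Return True if the order ships to an address changed recently by a
    session whose only action was that change, from a different device."""
    for ev in account_events:
        if (
            ev["type"] == "address_change"
            and ev["new_address"] == order["ship_to"]
            and order["placed_at"] - ev["at"] < ADDRESS_CHANGE_WINDOW
            and ev["session_action_count"] == 1      # the change was the only action
            and ev["device_id"] != order["device_id"]  # order placed on another device
        ):
            return True
    return False

events = [{
    "type": "address_change", "new_address": "12 Elm St",
    "at": datetime(2024, 1, 3), "session_action_count": 1, "device_id": "dev-X",
}]
order = {"ship_to": "12 Elm St", "placed_at": datetime(2024, 1, 20), "device_id": "dev-A"}
print(is_suspicious_order(order, events))  # True -> hold for manual review
```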

I expect to see this and other ATO variants expanding in 2024. 


AI Assistants Challenge Fraud Fighters

The more generative AI tools can be used to make consumers’ daily activities easier, the greater the surface area for fraud. A bot that can take care of shopping, gather information to plan a vacation, put together a meal plan, compare flight prices, and so on sounds like a great idea. It’s easy to see why consumers will likely jump on these attractive time-savers. As a fraud fighter, it’s also easy to see how a tool that can make purchases automatically using consumers’ credentials and payment information represents a vulnerability.

The challenge in store for fraud fighters is twofold. First, distinguishing between good bots like these and bad bots used by bad actors for scams or abuse. Once good bots come online, it’s no longer enough to detect that a bot is present; you must be able to tell good bots from bad ones and act accordingly. Second, there’s the challenge of ensuring that a good bot isn’t being taken over and misused by a bad actor.
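One plausible building block for the first challenge is verification of declared agents: a bot vendor registers with the merchant and signs each session, so anything claiming to be a good bot without a valid token is treated as unknown automation. The registry and HMAC scheme below are illustrative assumptions, not an existing industry standard.

```python
# A minimal sketch of separating declared "good" bots from unknown automation
# via per-vendor signed session tokens. The registry and scheme are
# illustrative assumptions, not an established protocol.
import hashlib
import hmac

# Hypothetical registry of shared secrets issued to approved shopping assistants.
TRUSTED_AGENT_KEYS = {"acme-shopper": b"shared-secret-from-onboarding"}

def classify_agent(agent_id: str, session_id: str, token: str) -> str:
    """Classify a self-declared bot session as verified or unverified."""
    key = TRUSTED_AGENT_KEYS.get(agent_id)
    if key is None:
        return "unverified_bot"   # unknown vendor: treat as risky automation
    expected = hmac.new(key, session_id.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison so a forged token also fails safely.
    return "verified_bot" if hmac.compare_digest(expected, token) else "unverified_bot"

sid = "sess-123"
good = hmac.new(b"shared-secret-from-onboarding", sid.encode(), hashlib.sha256).hexdigest()
print(classify_agent("acme-shopper", sid, good))    # verified_bot
print(classify_agent("acme-shopper", sid, "junk"))  # unverified_bot
```

Verified bots could then get their own policy (purchase limits, step-up checks on account changes), while unverified automation is handled by existing bot defences. The second challenge, a hijacked good bot, is essentially the ATO problem again, one layer up.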

This trend is only in its infancy, but given the speed at which generative AI evolved in 2023, it’s one that fraud fighters need to start thinking through sooner rather than later. Waiting until the technology matures might be too late to prevent significant loss to your business.


Fraud Prevention Empowerment

To end on a positive note, while it’s true that new technologies present challenges from the fraud perspective, opening up new possibilities for fraudsters, it’s important to remember that fraud fighters can and should also leverage these tools. 

Generative AI has enormous potential for helping fraud experts manage and analyse vast amounts of data by interacting with it in their own language, rather than fighting with systems, unnecessarily complicated data structures, and SQL. This opportunity cannot be overestimated. Fraud teams can use more information, more effectively, to get answers to urgent, essential questions, including questions they’ve been asking for years.
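As a sketch of what that interaction could look like, the loop below has a language model translate an analyst’s plain-language question into SQL against a known schema. The llm_complete function is a placeholder for whatever model API you use, and the schema and data are invented for the demo; generated SQL should always be reviewed before it touches production data.

```python
# A minimal sketch of a natural-language analytics loop: an LLM turns an
# analyst's question into SQL for a known schema. llm_complete is a
# placeholder; the schema and data are invented for the demo.
import sqlite3

SCHEMA = "orders(id, account_id, amount, ship_to, placed_at, chargeback)"

def llm_complete(prompt: str) -> str:
    # Placeholder: call your model of choice here. Hard-coded for the demo.
    return ("SELECT ship_to, COUNT(*) AS n FROM orders "
            "WHERE chargeback = 1 GROUP BY ship_to ORDER BY n DESC LIMIT 5;")

def ask(question: str, conn: sqlite3.Connection):
    sql = llm_complete(f"Schema: {SCHEMA}\nWrite SQLite SQL for: {question}")
    return conn.execute(sql).fetchall()  # review generated SQL before trusting it

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders(id, account_id, amount, ship_to, placed_at, chargeback)")
conn.execute("INSERT INTO orders VALUES (1, 'a1', 90, '12 Elm St', '2024-01-20', 1)")
print(ask("Which addresses attract the most chargebacks?", conn))
```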

The power this puts in our hands as a fraud community is tremendous. This, combined with the collaborative and creative energy of fraud fighters and their determination to solve mysteries and improve things, makes me feel alert, optimistic, and even excited as we head into 2024.
