The Rise of AI-Driven Scams: A New Era of Digital Deception
Sometime last year, Ian Lamont’s inbox began piling up with inquiries about a job listing. The Boston-based owner of a how-to guide company hadn’t opened any new positions, but when he logged onto LinkedIn, he found one for a “Data Entry Clerk” linked to his business’s name and logo.
The Scamming Landscape
Lamont soon realized his brand was being used in a scam, which he confirmed when he came across a profile purporting to be his company's "manager." The account had fewer than a dozen connections and an AI-generated face. He spent the next few days warning visitors to his company's site about the scam and convincing LinkedIn to take down the fake profile and listing. By then, more than twenty people had reached out to him directly about the job, and he suspects many more had applied.
The rise of generative AI has created a double-edged sword for businesses. While it holds the potential to add immense value—McKinsey estimates it could contribute more to the global economy than the entire GDP of the United Kingdom—its ability to produce authentic-seeming content at scale has also fueled a surge in scams. Since the launch of ChatGPT in 2022, the landscape of online fraud has transformed dramatically.
The Deepfake Economy
In the past year alone, AI-enabled scams have quadrupled, according to the scam reporting platform Chainabuse. A survey conducted by Nationwide found that a quarter of small business owners reported encountering at least one AI scam in the past year. Microsoft claims it now shuts down nearly 1.6 million bot-based signup attempts every hour. Renée DiResta, a researcher at Georgetown University, describes the generative AI boom as the "industrial revolution for scams," as it automates fraud, lowers barriers to entry, and increases access to targets.
The consequences of falling victim to an AI-manipulated scam can be devastating. A finance clerk at the engineering firm Arup was duped into approving overseas transfers exceeding $25 million after being convinced he was in a video call with his colleagues, all of whom were deepfake recreations.
The Struggle for Safety
Professionals across recruitment, graphic design, publishing, healthcare, and other industries are scrambling to protect themselves and their customers from these evolving threats. Many feel they are locked in an endless game of whack-a-mole, with the moles multiplying and growing more cunning.
For instance, last year, fraudsters created a French-language replica of the online Japanese knives store Oishya, sending automated scam offers to the company’s 10,000-plus Instagram followers. The fake company claimed customers had won a free knife, leading nearly 100 people to fall for the scam. Kamila Hankiewicz, who runs Oishya, only learned about the scam after several victims contacted her.
Adapting to New Threats
In response to these threats, Hankiewicz has ramped up her company’s cybersecurity measures and initiated campaigns to educate customers on identifying fake communications. While many customers were upset about being defrauded, Hankiewicz’s proactive approach ultimately strengthened her relationship with them.
Rob Duncan, VP of strategy at cybersecurity firm Netcraft, highlights that GenAI tools enable even novice scammers to clone a brand’s image and craft convincing scam messages in minutes. Attackers can easily spoof employees, fool customers, or impersonate partners across multiple channels.
While mainstream AI tools like ChatGPT include safeguards against copyright infringement, numerous free or inexpensive online services let users replicate a business's website with simple text prompts, making it easy for scammers to produce convincing copies of legitimate storefronts.
The Challenge of Verification
Text is just one front in the war against malicious AI use. A solo adversary with no technical expertise can now create a convincing fake job candidate for a video interview in as little as an hour. Tatiana Becker, a tech recruiter in New York, describes deepfake job candidates as an "epidemic," and she regularly has to reject scam applicants who use deepfake avatars to cheat during interviews.
Nicole Yelland, a PR executive, experienced such a scam firsthand when a fraudster impersonating a startup recruiter approached her via email. The scammer sent an enticing offer package, but during the video meeting, the "hiring manager" refused to speak, instead asking Yelland to type her responses. Alarm bells rang when she was asked to share private documents, including her driver's license.
The Need for Robust Solutions
While platforms like Zoom and Microsoft Teams are improving their ability to detect AI-generated accounts, experts warn that this could create a vicious cycle. The data collected to identify fakes may be used to train more sophisticated GenAI models, making it increasingly difficult to combat fraud.
To tackle this issue, some experts advocate for a shift towards authenticating a person’s identity rather than merely detecting fakes. Companies like Beyond Identity are developing tools that verify meeting participants through biometrics and location data, labeling any discrepancies as "unverified."
The Broader Implications
The challenges extend beyond fake job candidates; entrepreneurs must also contend with fraudulent versions of themselves. In late 2024, scammers ran ads on Facebook featuring a deepfake of Jonathan Shaw, a medical professional, promoting a dangerous dietary supplement. Patients, believing the video was genuine, reached out for advice on the supplement.
Moreover, the internet is inundated with low-quality, AI-generated content, making it increasingly difficult for users to discern what is real. Social platforms have been documented promoting malicious content, enabling scams built around nonexistent rental properties and products.
Conclusion: Vigilance is Key
As the landscape of online business continues to evolve, small business owners must remain vigilant: verify that the people they deal with are actual humans, and confirm that financial transactions are secure before approving them. While generative AI can enhance efficiency, it also empowers scammers to operate with unprecedented ease.
In this new era of digital deception, the stakes are higher than ever. As Ian Lamont and countless others have learned, the battle against AI-driven scams is ongoing, and staying informed is the first line of defense.
Shubham Agarwal is a freelance technology journalist whose work has appeared in Wired, The Verge, Fast Company, and more.