AI-driven scams are rapidly advancing as cybercriminals leverage new technologies to exploit victims, according to Microsoft’s latest Cyber Signals report.
Over the past year, Microsoft has thwarted $4 billion in fraud attempts and blocked around 1.6 million bot sign-up attempts every hour, underscoring the scale of this escalating threat.
The ninth edition of Microsoft’s Cyber Signals report, titled “AI-powered deception: Emerging fraud threats and countermeasures,” discusses how artificial intelligence has lowered technical barriers for cybercriminals, allowing even low-skilled individuals to create sophisticated scams quickly and easily.
Tasks that once took scammers days or weeks to accomplish can now be completed in minutes.
This shift in fraud capabilities marks a significant change in the criminal landscape, impacting consumers and businesses globally.
The progression of AI-driven cyber scams
Microsoft’s report emphasizes how AI tools can now scan and extract company information from the web, enabling cybercriminals to create detailed profiles of potential targets for highly convincing social engineering attacks.
Fraudsters can trick victims with AI-generated product reviews and fake storefronts, complete with fabricated business histories and customer testimonials.
Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft Security, highlights the growing threat, stating, “Cybercrime is a trillion-dollar issue, and it has been rising every year for the last 30 years.”
He suggests that there is an opportunity to leverage AI more rapidly to detect and mitigate threats, noting, “Now we have AI that can make a difference at scale and help us integrate security and fraud protections into our products faster.”
The Microsoft anti-fraud team reports that AI-powered fraud attacks are happening worldwide, with significant activity originating from China and Europe, especially Germany, due to its prominence in the EU’s e-commerce market.
The report further indicates that the larger the digital marketplace, the higher the likelihood of proportional fraud attempts.
E-commerce and job-related fraud on the rise
AI-driven fraud is growing in two particularly troubling areas: e-commerce and job recruitment scams. In e-commerce, fraudulent websites can now be quickly created with AI tools that require minimal technical skills. These fake sites often resemble legitimate businesses, using AI-generated product descriptions, images, and reviews to deceive consumers into thinking they are interacting with trusted merchants.
To further deceive, AI-powered chatbots can engage customers, delay chargebacks with scripted excuses, and manipulate complaints with AI-generated responses that make the fraudulent sites seem professional.
In the realm of job recruitment, generative AI has made it much easier for scammers to post fake job listings across various platforms. They create fraudulent profiles with stolen credentials, auto-generate job descriptions, and launch AI-powered phishing campaigns targeting job seekers.
AI-driven interviews and automated emails add a layer of authenticity, making these scams more difficult to spot. Fraudsters often request personal information, such as resumes or even bank details, under the pretext of verifying the applicant’s credentials.
Warning signs include unsolicited job offers, requests for payment, and communication through informal platforms like text messages or WhatsApp.
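The warning signs above lend themselves to a simple heuristic filter. The sketch below is illustrative only: the keyword patterns are assumptions chosen to match the red flags the report describes, not Microsoft's actual detection logic.

```python
import re

# Illustrative patterns for the warning signs named in the report:
# payment requests, urgency tactics, informal channels, and requests
# for sensitive data. A real system would use far richer signals.
SCAM_SIGNALS = {
    "payment_request": re.compile(r"\b(processing fee|pay upfront|wire transfer|gift card)\b", re.I),
    "urgency": re.compile(r"\b(act now|immediately|within 24 hours|limited slots)\b", re.I),
    "informal_channel": re.compile(r"\b(whatsapp|telegram|text me)\b", re.I),
    "sensitive_data": re.compile(r"\b(bank details|ssn|social security|account number)\b", re.I),
}

def scam_signals(message: str) -> list[str]:
    """Return the names of warning signs present in a job-offer message."""
    return [name for name, pattern in SCAM_SIGNALS.items() if pattern.search(message)]

offer = ("Congratulations! You are hired. Act now and send a $50 processing fee "
         "via gift card, then message us on WhatsApp with your bank details.")
print(scam_signals(offer))  # all four signal categories match this message
```

A message triggering several categories at once is a strong hint to verify the offer through the employer's official channels before responding.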
Microsoft’s strategies to combat AI fraud
To address emerging threats, Microsoft has adopted a comprehensive approach across its products and services. Microsoft Defender for Cloud offers protection for Azure resources, while Microsoft Edge includes features such as website typo and domain impersonation protection. The company highlights that Edge uses deep learning technology to help users avoid fraudulent sites.
Additionally, Microsoft has improved Windows Quick Assist by adding warning messages to alert users to potential tech support scams before allowing access to individuals claiming to be IT support. On average, Microsoft blocks 4,415 suspicious Quick Assist connection attempts each day.
As part of its Secure Future Initiative (SFI), Microsoft has introduced a new fraud prevention policy. Starting in January 2025, product teams are required to conduct fraud prevention assessments and implement fraud controls during the design phase, ensuring that products are “fraud-resistant by design.”
With the rise of AI-powered scams, consumer awareness remains critical. Microsoft recommends users be cautious of urgency tactics, verify website legitimacy before making purchases, and avoid sharing personal or financial details with unverified sources.
For businesses, adopting multi-factor authentication and utilizing deepfake detection algorithms can help reduce risks.