Generative AI: Hindering or helping synthetic fraud?
June 27, 2023
Brenda Sorenson, EVP Corporate Communications
Industry and individuals alike are embracing the potential of generative artificial intelligence (AI) solutions like ChatGPT, even as concern grows about the security risks they pose. AI is increasingly used to detect fraud: software providers offer solutions that analyze hundreds, even thousands, of data points for everything from verifying identity to detecting anomalies in user behavior. At the same time, AI has proven a highly useful tool for criminals committing fraud, including synthetic fraud – now the fastest-growing type of financial crime.
How artificial intelligence is making synthetic fraud easier.
Synthetic fraud involves the creation of new, fictitious identities using a combination of real and false information. Criminals use synthetic identities to open accounts, obtain loans, and commit numerous other types of fraud. Synthetics were the primary weapon of fraudsters who stole billions from US taxpayers through pandemic relief programs, including unemployment insurance and the Paycheck Protection Program. Now, AI is making synthetic fraud even easier.
Criminals are putting the technology to malicious use, turning AI into a tool for creating synthetic identities. Here’s how it’s being done.
• Data scraping – Fraudsters use AI-powered tools to scrape data from various sources, such as social media, public records, and online marketplaces – then use this data to create a synthetic identity that appears to be real.
• Generative adversarial networks (GANs) – GANs are AI algorithms used to generate new data that appears to be real. Fraudsters use GANs to create synthetic identities built from realistic-looking information, such as a name, date of birth, and Social Security number, that can bypass fraud detection models.
• Social engineering – AI-powered chatbots and other automated systems are used to conduct social engineering attacks on individuals to obtain their personal information, such as Social Security numbers, which is then combined with false information to create a synthetic identity.
• Patterns and trends – Generative AI can produce realistic and convincing synthetics by analyzing large amounts of data and creating new identities based on the patterns and trends it finds.
Using real and false data, criminals are able to generate impressive fake identity documents that are realistic enough to pass many standard compliance checks. Some fraudsters are even using generative AI tools to produce synthetic identity data for sale on the dark web rather than using it themselves.
The greatest threat lies in the quality of the data.
The common denominator for all of these tactics is data. AI models are dependent on the data they receive, and the greatest threat lies in the quality of that data. Garbage in, garbage out. The same AI models used to detect fraud can be easily overcome by AI’s own ability to commit it.
The same tool that enables fraud will not be effective in shutting it down. That requires security. The processes used to open accounts, conduct transactions, and access networks today were built to enable users, whether customers or employees – not to stop criminals. Data can be compromised, which means the AI models built to detect fraud can be undermined. Generative artificial intelligence holds great potential in numerous applications, but not as a tool to fight synthetic fraud.
Security, not more data.
TASCET’s approach to stopping synthetic fraud is groundbreaking. We enable organizations in industries from banking and healthcare to auto dealerships and wireless providers to deploy a security module that allows them to create and control the credentials used by customers. Our Cognition Engine is not driven by data, yet it ensures that each customer is unique within an enterprise and an industry – and linked to a single security credential that can’t be duplicated or stolen.
There is a human behind every synthetic. With our technology, you know the human, not the data they present. One customer – one PROOF. This is security. This is the means to end synthetic fraud.
Contact us to learn more.