ChatGPT caricature trend raises fresh ID fraud fears
Digital identity company Daon has warned that a social media trend encouraging users to ask ChatGPT to generate a caricature based on what it "knows" about them could increase exposure to identity fraud and social engineering.
The trend prompts people to request an AI-generated image of themselves and share it publicly, often implying the model should draw on personal details available online. Security specialists have long warned that public posts contribute to the pool of information criminals use to carry out account takeovers, impersonation, and targeted scams.
Bob Long, President, Americas at Daon, said the risk goes beyond sharing obvious sensitive identifiers. "Preventing identity fraud on the internet can be a serious challenge. Everyone knows that it's vital not to share high-value personal information like your social security number or credit card information, but that is just a start to truly protecting your identity," he said.
Social engineering
Criminals often rely on manipulation to persuade victims to disclose credentials or other access information. These methods can be more effective than purely technical attacks because the victim unknowingly participates in the compromise, Long said.
"There are multiple ways that bad actors take advantage of people in order to break into their accounts. Stealing your login information through a data breach is just the most visible method of attack. The most common is something most people don't even see until after their information is compromised: social engineering," he said.
Long described social engineering as a range of approaches, including phishing, that become more effective as attackers learn more about a target. "Social engineering is a broad term for a number of methods of luring people into handing over their login credentials willingly. Phishing is the most well known of these techniques, but there are many others. One thing they all have in common is the more a fraudster knows about their target, the easier it is to fool them," he said.
Security teams often treat publicly available personal information as a risk factor in its own right. Attackers can combine biographical details, location data, employment history, friends and family links, and personal interests to craft convincing messages that appear to come from service providers, colleagues, or family members. The same information can also help them answer knowledge-based authentication questions.
Caricature trend
Long said the caricature trend can both expand the information available to criminals and signal which people have a large public footprint. "That's where things like the new trend of having Generative AI create a caricature of you based on everything it knows about you move from being a fun exercise to a security threat," he said.
He said reposting the output turns a private curiosity into a public dossier. "By creating one of these images and posting it on social media, you are doing fraudsters' work for them: giving them a visual representation of who you are," he said.
Long compared it with earlier viral formats that encouraged people to share personal facts. "This is literally the modern version of the '40 things about me' posts that used to be popular on social channels, creating a quick access, public record of who you are so people with bad intentions can exploit it," he said.
He also questioned prompts that ask a model to include everything it knows about a person, arguing that the format could appeal to criminals who want victims to voluntarily reveal information. "The fact that it explicitly prompts AI to include everything it knows about you makes it sound like it was intentionally started by a fraudster looking to make their job easy. It not only tells them a lot about the person, but it tells them which people have a lot of accessible information and which don't," he said.
Authentication shift
Long linked the risk to continued reliance on passwords and knowledge-based checks across consumer services. These mechanisms remain widespread despite investment in alternatives such as device-based signals and biometric authentication. In many account recovery processes, personal information still acts as a gatekeeper.
"Until all businesses move away from passwords and other knowledge-based forms of authentication, people will need to remain vigilant about what information about them is publicly available," he said.
Daon sells digital identity tools for identity verification and authentication, including biometric and multi-factor approaches. The company says these methods can reduce reliance on information that can be guessed, stolen, or assembled from public sources.
AI data concerns
Long also raised a separate concern about providing images to generative AI services without clear information on how the data will be handled or used downstream. The issue has drawn scrutiny as consumer AI tools become embedded in social media, photo editing, and messaging products.
"Of course, the argument against giving your image to Generative AI also stands. Unless you know, for certain, what will be done with that image outside of providing the requested output, you are at risk of your image being used for anything from training AI image generators to populating less-than-legal tracking software. Sharing personal information, including your image, with AI should only be done when you know and trust the organization making the request," said Long.