Abu Trica and Network Allegedly Used AI to Create Fake Online Dating Profiles

Authorities have revealed that Abu Trica and his alleged criminal network used artificial intelligence (AI) technology to create fake identities on dating platforms, targeting unsuspecting individuals in sophisticated online scams.
Investigators say the network employed AI to generate realistic profile pictures, craft personalised messages, and even manipulate voices to mimic real human interaction. These tactics allowed the scammers to build trust with victims and convince them to send money or share sensitive personal information.
The use of AI made the operation particularly hard to detect. Experts explain that AI-generated images and interactions can closely resemble real people, enabling fraudsters to run elaborate scams across multiple countries without raising immediate suspicion.
Reports indicate that the network maintained prolonged contact with victims, continually adapting conversations to sidestep common fraud-detection cues. This combination of automation and personalisation significantly increased the scam's success rate.
Law enforcement agencies from Ghana and the United States have been collaborating on the case, which gained international attention after several high-profile arrests linked to romance scams targeting elderly victims abroad. The involvement of AI has added a new layer of complexity to the investigation, prompting authorities to develop new strategies to detect and prevent these crimes.
Cybersecurity experts warn that AI-enabled scams are an emerging threat, as traditional verification methods often struggle to distinguish real users from sophisticated fake profiles. They emphasise the importance of public awareness and of enhanced safeguards on digital platforms to prevent such fraud.
Authorities have urged internet users, particularly on dating and social networking sites, to exercise caution when interacting with unknown individuals. Reporting suspicious activity promptly can help curb these scams and protect vulnerable users from financial and emotional harm.
This case highlights the increasing intersection of advanced technology and crime, illustrating how AI can be exploited for fraudulent purposes. Law enforcement continues to adapt to these evolving threats to hold perpetrators accountable and safeguard online communities.
Source: Thepressradio.com
