AI startup blocks Nigeria after detecting 95% fraud in user data
A US-based artificial intelligence startup has blocked access to its platform in Nigeria after detecting what it described as an exceptionally high level of fraudulent activity, underscoring growing risks in the global race for AI training data.
- Kled AI has blocked Nigeria after detecting a 95% fraud rate in user uploads.
- The startup pays users for data used to train artificial intelligence models.
- Fake images and forged identity documents overwhelmed its systems.
- The company says the ban is temporary while it upgrades fraud controls.
Kled AI, a marketplace that pays users to upload photos, videos and documents used to train AI systems, said it removed its app from Nigeria and imposed an IP ban after months of monitoring user behaviour.
“We have removed Kled from the Nigerian app store and IP banned the entire region,” said founder Avi Patel.
The company, launched in 2025, connects individuals supplying data with AI firms that rely on large volumes of real-world content to build models.
It said it had processed more than one billion uploads and paid out hundreds of thousands of dollars to users globally within four months.
However, internal reviews showed that roughly 95 per cent of submissions from Nigeria were unusable. According to the company, uploads included blank images, duplicate files, internet-sourced content and AI-generated material submitted as original data.
The situation escalated over a weekend when Kled’s verification system was flooded with forged identity documents, including fake Japanese passports with altered photographs.
“That was the final straw,” Patel said, adding that the startup could not sustain the cost of filtering large volumes of fraudulent data.
Kled said fraud levels in Nigeria were significantly higher than in other key markets such as Malaysia, Indonesia and the Philippines, where it reported rates below 10 per cent despite larger user bases.
The company described the suspension as temporary and said it is upgrading its fraud detection systems before reconsidering a return.
The move has drawn mixed reactions. While some users acknowledged that platforms offering cash incentives often attract abuse, others questioned the company’s figures or dismissed the announcement as a publicity stunt.
Patel rejected that suggestion, saying the decision was driven by operational risk. He also warned of impersonation, noting that a fake Android version of the app had appeared, even though Kled currently operates only on iOS.
The incident highlights a broader challenge for the AI industry. As demand for high-quality training data accelerates, platforms that rely on crowdsourced contributions are increasingly exposed to manipulation, particularly in markets where economic pressures can incentivise system abuse.
For Nigeria, Africa’s largest digital economy, the development reflects a recurring tension. The country is a major hub for tech adoption and talent, yet concerns around online fraud continue to shape how global platforms assess risk.
Kled’s exit points to a wider shift in the tech sector, where the focus is moving beyond rapid growth to data integrity and trust, both critical to the future of artificial intelligence.