Dev 360 | AI Can Make Mistakes: India Must Learn Early | Patralekha Chatterjee

In the 2008 sci-fi film WALL-E, set centuries in the future, humans have become obese and completely passive, spending their lives in floating lounge chairs, staring at screens all day while robots do everything for them, unable to stand or act on their own.

India is nowhere near that dystopia, but the film prods us to reflect on the risks of a poorly planned AI rollout alongside its opportunities. In recent days we have heard a lot about potential job losses. Those fears are real, but there is another elephant in the room: blind trust in AI outpacing understanding of it, opening the door to a flood of scams and manipulation.

From government missions to startup ecosystems to the recent India AI Impact Summit in New Delhi, there is a huge buzz around artificial intelligence. Policymakers, entrepreneurs, and global leaders gathered in Delhi to celebrate its potential in healthcare, agriculture, and beyond.

Yet beneath this optimism lies a dangerous “trust paradox”: Indians exhibit exceptionally high confidence in online and AI-generated information, even as real awareness of the technology remains perilously low.

This vulnerability is supercharging deepfake-enabled financial cybercrime, turning synthetic media into weapons that erode trust, drain bank accounts, and threaten institutional integrity.

A January 2026 analysis by the Observer Research Foundation captures the scale. “A 2025 analysis of Artificial Intelligence scams in India reports that 47 per cent of Indian adults have either been victims of, or know someone who has been a victim of, an AI voice-cloning or deepfake scam, nearly double the global average of 25 per cent. The same report notes that 83 per cent of Indian victims of AI voice scams suffered monetary loss, with almost half losing over Rs 50,000, highlighting the rapid growth of deepfake-enabled fraud in the Indian financial ecosystem. Within this rapidly evolving threat landscape, one of the most concerning developments for financial institutions is the emergence and proliferation of deepfakes as tools for cyber-enabled financial crime,” it notes.

The Pew Research Center’s October 2025 global survey of 25 countries points out that only 14 per cent of Indian adults have “heard or read a lot” about AI, with another 32 per cent hearing “a little”: the lowest awareness level overall. Among 18-34-year-olds, awareness stands at just 19 per cent. Paradoxically, 89 per cent of Indians express high trust in their government’s ability to regulate AI: the highest globally.

Clearly, there is a big gap between AI as a buzzword and AI basics.

This “trust without knowledge” dynamic creates fertile ground for deception. Consider the January 2026 Bengaluru case: a young software engineer matched with “Ishani” on a dating app. The profile was an AI-generated deepfake. After shifting to WhatsApp, the fraudster lured him into a video call where he was coerced into stripping; the interaction was secretly recorded and used for sextortion. He transferred Rs 1.5 lakh across accounts before realising the deception. The police noted that victims routinely assume video callers are real: a fatal assumption turbocharged by accessible AI tools. Dating apps have become hotspots for such scams, blending romance, investment fraud, and extortion.

The consequences ripple far beyond individuals. In the judiciary, AI “hallucinations” are already contaminating legal processes. On February 17, a Supreme Court bench led by Chief Justice Surya Kant, with Justices B.V. Nagarathna and Joymalya Bagchi, flagged the “alarming” use of AI for drafting petitions. Justice Nagarathna recalled a phantom citation (Mercy vs Mankind), a case that does not exist, and recounted instances where real Supreme Court judgments were quoted with entirely invented paragraphs. The court stressed that the duty to verify cannot be delegated. The Kerala high court already mandates strict human oversight for all AI use in the state’s subordinate courts.

Even in high-stakes sectors like healthcare, blind adoption without localisation backfires. IBM Watson for Oncology promised to democratise expert cancer care but faltered in India, Thailand and South Korea because its training data, drawn from Memorial Sloan Kettering Cancer Center, carried a Western bias. A 2025 article in the International Research Journal of Innovations in Engineering and Technology said: “The generally affluent population treated at Memorial Sloan Kettering does not reflect the diversity of people around the world. The cases used to train Watson therefore do not consider the economic and social issues faced by patients in poorer countries.” This led to mismatches with local cancers (gastric cancer, for instance, is more common in Asia), drug availability and treatment variations, and reduced accuracy for elderly or resource-constrained patients.

Indian startups like Qure.ai (TB detection) and Niramai (breast cancer screening) show promise by prioritising homegrown, context-aware solutions; yet the Watson lesson endures: technology without local context and literacy fails people.

How is India dealing with these challenges?

As the ORF report pointed out, the Indian government is trying to address deepfake threats, including those in the financial sector, through enhanced cyber laws, advisories to digital platforms and institutional strengthening. Though technology-neutral in wording, these measures are meant to directly tackle AI-driven misinformation, impersonation, and identity theft that fuel financial scams and cyber-fraud.

Yet laws alone are insufficient for a population where low AI literacy meets high trust. Gullibility undermines every safeguard. When citizens cannot spot deepfakes, they cannot protect their savings, engage in ethical debates, or hold regulators accountable.

If scams are the symptom, blind trust is the disease, and the cure is scepticism taught early. AI literacy must begin in schools, not in adulthood, and it must drive home a simple truth: AI makes mistakes. Teaching children that even confident outputs can be wrong is the best inoculation against blind trust.

Finland stands out as the country that most explicitly and systematically teaches students scepticism and critical thinking to avoid blind trust in any information source, including AI outputs: hallucinations, deepfakes, and confident-but-wrong responses. A Brazilian programme, running since before 2025, mandates AI literacy in schools for over 90,000 students a year, arming citizens against hallucinations, deepfakes, and algorithmic deception through ethics-integrated curricula.

Indian initiatives like Yuva AI for All plan to introduce AI from Class 3 in 2026-27. But it must not become just another rote classroom subject.

The road ahead demands realism and awareness: a country can only optimally leverage AI if its people are equipped to cut through the clutter and understand what it can do, what it cannot, the risks and rewards. India needs early scepticism education teaching fallibility as a core lesson. Blind faith is dangerous — be it in God, in government, or in technology.

Disclaimer: This story is auto-aggregated by a computer programme and has not been created or edited by DOWNTHENEWS. Publisher: deccanchronicle.com