AI Models in India to Require Govt Approval; What are the Implications?

In Short
  • The Indian IT Ministry has issued a new advisory for large tech companies offering AI services and foundational AI models in India.
  • The new advisory requires large tech companies to seek permission from the government before deploying "untested" AI models.
  • The government has asked AI platforms to embed a permanent identifier in generated data for easier identification of the first originator.

India’s Ministry of Electronics and Information Technology (MeitY) recently issued an advisory asking tech platforms and intermediaries operating in India to comply with the regulations outlined under the IT Rules, 2021. The advisory asks companies like Google, OpenAI, and other technology firms to “undertake due diligence” and ensure compliance within the next 15 days.

Notably, the IT Ministry has asked tech companies to get explicit permission from the Government of India before deploying “untested” AI models (and software products built on such models) in India.

The advisory states, “The use of under-testing / unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated.”

Although the advisory is not legally binding on platforms and intermediaries, it has drawn criticism from technologists around the world, who argue it could stifle AI innovation in India. Aravind Srinivas, the CEO of Perplexity AI, called it a “bad move by India.”

To clarify the advisory, Rajeev Chandrasekhar, the Union Minister of State for Electronics and Information Technology, took to X to shed light on its key points. He said that seeking permission from the government applies only to large platforms, such as Google, OpenAI, and Microsoft, and that the advisory does not apply to startups. He also pointed out that the advisory is aimed at “untested” AI platforms.

It’s worth noting that India’s home-grown Ola recently released its Krutrim AI chatbot, marketing it as having “an innate sense of India[n] cultural sensibilities and relevance”. However, according to an Indian Express report, the Krutrim AI chatbot is highly prone to hallucinations.

Besides that, MeitY has asked AI companies to “not permit any bias or discrimination or threaten the integrity of the electoral process including via the use of Artificial Intelligence model(s)/ LLM/ Generative AI, software(s) or algorithm(s).”

The fresh advisory was issued against the backdrop of Google Gemini’s recent misfire, in which the AI model’s response to a politically sensitive question drew the ire of the Indian government. Ashwini Vaishnaw, India’s IT Minister, warned Google that “racial and other biases will not be tolerated.”

Google quickly addressed the issue and said, “Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news. This is something that we’re constantly working on improving.”

In the US, Google recently faced criticism after Gemini’s image generation model failed to produce images of white people, and users accused the company of anti-white bias. Following the incident, Google disabled the generation of images of people in Gemini and is working to improve the model.

Apart from that, the advisory warns that if platforms or their users don’t comply with these rules, they may face “potential penal consequences.”

The advisory reads, “It is reiterated that non-compliance to the provisions of the IT Act and/or IT Rules would result in potential penal consequences to the intermediaries or platforms or its users when identified, including but not limited to prosecution under IT Act and several other statutes of the criminal code.”

What Could Be the Implications?

While the advisory is not legally binding on tech companies, MeitY has asked intermediaries to submit an Action Taken-cum-Status report to the Ministry within 15 days. This could have wide ramifications for tech giants offering AI services in India, and it may also stifle AI adoption and overall technological progress in the country in the long term.

Many are concerned that the advisory will create more red tape, and that large companies may hesitate to release powerful new AI models in India for fear of regulatory overreach. So far, tech firms have released their latest and most advanced AI models in India at the same pace as in Western markets, where regulators themselves have been cautious about rules that might hinder progress.

Apart from that, experts say the advisory is “vague” and does not define what counts as “untested.” Companies like Google and OpenAI test extensively before releasing a model. However, because these models are trained on large corpora of data scraped from the web, they can still hallucinate and produce incorrect responses.

Nearly all AI chatbots already disclose this limitation on their homepages. How will the government decide which models are untested, and under what framework?

Interestingly, the advisory asks tech firms to label or embed a “permanent unique metadata or identifier” in AI-generated data (text, audio, visual, or audio-visual) to identify the first originator, creator, user, or intermediary. This brings us to traceability in AI.
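For illustration, here is a minimal sketch of what embedding such an identifier could look like for images, using Python and the Pillow library. The advisory does not specify any schema, so the originator_id field, its value, and the file names here are purely hypothetical:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical identifier fields; the advisory does not define a schema.
metadata = PngInfo()
metadata.add_text("originator_id", "user-12345")
metadata.add_text("generator", "example-model-v1")

# Attach the fields as PNG text chunks and save a tagged copy.
img = Image.open("generated.png")
img.save("generated_tagged.png", pnginfo=metadata)

# Read the identifier back from the tagged file.
tagged = Image.open("generated_tagged.png")
print(tagged.text)  # {'originator_id': 'user-12345', 'generator': 'example-model-v1'}
```

The catch is that metadata of this kind travels alongside the pixels rather than inside them, which is exactly what makes it fragile, as discussed below.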

Traceability is an evolving area of research in the AI field, and so far we have not seen any credible way to detect AI-written text, let alone identify the originator through embedded metadata.

OpenAI shut down its AI Classifier tool last year; the tool, which was meant to distinguish human-written text from AI-written text, produced too many false positives. To fight AI-generated misinformation, Adobe, Google, and OpenAI have recently adopted the C2PA (Coalition for Content Provenance and Authenticity) standard in their products, which adds a watermark and metadata to generated images. However, that metadata and watermark can be easily removed or edited using online tools and services.
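To see how fragile such provenance metadata is, consider this sketch, which strips every ancillary chunk from the hypothetical tagged file created above simply by re-encoding the pixels onto a fresh canvas:

```python
from PIL import Image

# Copying the pixel data onto a fresh canvas discards all ancillary
# metadata (EXIF, XMP, PNG text chunks) without any special tooling.
img = Image.open("generated_tagged.png")
clean = Image.new(img.mode, img.size)
clean.putdata(list(img.getdata()))
clean.save("stripped.png")

print(Image.open("stripped.png").text)  # {} — the identifier is gone
```

A screenshot achieves the same result with zero code, which is why any provenance scheme that lives outside the pixel values themselves cannot serve as a “permanent” identifier.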

Currently, there is no foolproof method to identify the originator or user through embedded metadata. So, MeitY’s request to embed a permanent identifier in synthetic data is untenable at this point.

So that is all about MeitY’s new advisory for tech companies offering AI models and services in India. What is your opinion on this subject? Let us know in the comments section below.
