"That creates a chilling effect on freedom of expression.”
Elon Musk’s Grok is causing controversy in India, with users repeatedly asking: “How long before Grok is banned in India?”
In February 2025, Elon Musk’s xAI announced that its Grok 3 AI chatbot would be free to use. Its rollout has been chaotic, mirroring the unpredictable nature of the billionaire-owned social media platform.
Grok’s responses have included profanity, Hindi slang, and misogynistic slurs.
Its answers to political questions about Prime Minister Narendra Modi, Congress leader Rahul Gandhi, and other public figures have drawn particular criticism.
Many users have tested Grok’s biases, despite warnings from AI experts against using chatbots for fact-finding.
The Union Ministry of Electronics and Information Technology has taken note. Officials said:
“We are in touch, we are talking to them (X) to find out why it is happening and what are the issues. They are engaging with us.”
Some experts warn against overregulation.
Pranesh Prakash, co-founder of the Centre for Internet and Society (CIS), said:
“The IT ministry does not exist to ensure that all Indians, or indeed that all machines use parliamentary language.
“This provides cause to be worried if companies start self-censoring legal speech just because governments object to it.
“That creates a chilling effect on freedom of expression.”
Grok’s case highlights concerns about AI-generated misinformation, content moderation, and legal accountability.
The controversy also recalls public criticism of the Indian government’s now-withdrawn AI advisory from 2024.
Musk promotes Grok as an ‘anti-woke’ alternative to ChatGPT and Google’s Gemini. He told conservative commentator Tucker Carlson that existing AI models have left-wing biases.
Musk said: “I’m worried about the fact that it’s being trained to be politically correct.”
Grok can search X for public posts to generate real-time responses. Users can tag Grok in posts to receive replies. A premium “unhinged” mode promises provocative and unpredictable answers, per the chatbot’s website.
Rohit Kumar, founding partner at public policy firm The Quantum Hub, sees this as risky:
“The biggest issue in the Grok case is not its output but its integration with X, which allows direct publishing onto a social media platform where content can spread unchecked, potentially leading to real-world harm, such as a riot.”
The legal framework for AI-generated speech remains unclear.
Meghna Bal, director of Esya Centre, said: “We have to consider, first, whether it comes within the teeth of permissible restrictions on speech under the Constitution, and then unbundle where, and how, it crosses the line under different laws.”
On whether Grok could face criminal liability, Bal pointed to precedents such as a case in Canada, where an airline was held responsible for false information provided by its AI chatbot.
The court treated the AI as a publisher, rejecting the airline’s claim that it was not responsible for the chatbot’s output.
Bal proposed creating safe-harbour protections for AI developers, similar to the rules shielding online platforms from liability for user content.
She said: “The safe harbour framework for AI companies could borrow from the end-user license agreements and user codes of conduct and content policies created by some companies for their large language models.”
Microsoft has warned that AI jailbreaks—techniques used to bypass guardrails in AI systems—are difficult to prevent.
Grok users have tested the chatbot on topics from cricket to politics, deliberately pushing boundaries.
Bal said: “Literature indicates that it is much easier to attack a generative AI service (through prompt engineering) than guard against such attacks.”
Kumar believes direct policing of AI chatbot outputs is the wrong approach:
“Instead, developers should be required to assess risks, be more transparent about the datasets used for training to ensure diversity, and conduct thorough red-teaming and stress testing to mitigate potential harms.”
For now, Grok remains operational in India. But as scrutiny intensifies, questions over its future persist.