The Centre Issues a Second Warning to Google Over Gemini's GenAI Bias

By Consultants Review Team Saturday, 24 February 2024

Google received a warning on Friday from the Ministry of Electronics and Information Technology (MeitY) over a biased response that its AI platform Gemini had produced about Prime Minister Narendra Modi. This is Google's second warning from the government in the last four months. The warning said that such instances of bias in content produced by a platform's algorithms, search engines, or AI models violate Rule 3(1)(b) of the IT Rules as well as several sections of the criminal code. As a result, the platforms will not be eligible for protection under the safe harbor clause of Section 79 of the IT Act.

The current issue concerns Gemini's answers to questions about whether Volodymyr Zelenskyy of Ukraine, Donald Trump, and Modi were fascists. According to screenshots posted by a user on X, Gemini's answer suggested Modi had been characterized as fascist, whereas for Trump it offered no substantive response beyond the statement that "elections are a complex topic with fast-changing information." According to the screenshots, it gave Zelenskyy a more guarded response.

In response to a complaint made by the user on X, Minister of State for Electronics and IT Rajeev Chandrasekhar stated, "These are violations of several provisions of the Criminal code as well as direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act."

Chandrasekhar tagged MeitY and Google in his post for further action. According to sources, Google is also expected to receive a show-cause notice from the government over the issue. When FE posed a similar query to Gemini, the platform's draft replies included arguments for and against Modi and Zelenskyy. Regarding Trump, it made a generic comment about elections but offered no answer on the question of fascism.

Recently, the government has also advised platforms, particularly those using generative AI such as OpenAI and Google Gemini, not to release experimental versions to the public with nothing more than a disclaimer. Currently, platforms like ChatGPT and Gemini warn users to verify the information their generative AI produces, as it may contain erroneous data, including personal information.

Officials have stated that these platforms should test on select users in a sandbox-like setting, authorized by a government agency or regulator, before making experimental content available to the general public with disclaimers. Although MeitY has stated that the Information Technology Act and other comparable laws will apply in the interim for all cases of user harm, including deepfakes, the ministry has been working on an omnibus Digital India Act to address such emerging challenges.

Additionally, Google said on Thursday that it was suspending Gemini's AI image-generation feature in response to concerns about "inaccuracies" in some images the model had produced. Companies such as Google favor a risk-based approach to AI regulation that depends on the technology's use case, rather than uniform restrictions for all AI applications. "Fundamentally, I believe you need to ask yourself: What type of prejudice are you worried about? There are already laws in existence that prohibit specific kinds of bigotry. That's the reason we are advocating for a risk-based strategy that is appropriate for a certain use case," Google Vice President of Search Pandu Nayak explained to FE in a December conversation.