By: Shubham Ghosh
GOOGLE has restricted its Gemini AI chatbot from answering election-related queries in countries holding votes this year, including India, the US, the UK and South Africa. The move is intended to stop the chatbot from serving information about candidates, political parties and other political topics.
“Out of an abundance of caution on such an important topic, we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses,” Google’s India team said on the company’s site.
According to a Google spokesperson, the company first outlined its plans to restrict election-related queries in a blog post last December, The Guardian reported. It made similar announcements about the European parliamentary elections last month. Google’s post on Tuesday (12) specifically addressed the upcoming general election in India, and TechCrunch reported that the company has confirmed the changes are rolling out globally.
When questions such as “tell me about President Biden” or “who is Donald Trump” are asked, Gemini now replies, “I’m still learning how to answer this question. In the meantime, try Google search,” or a similarly ambiguous answer.
Even the relatively objective query “how to register to vote” is redirected to a Google search for information.
Ahead of the national elections this year in several key democracies, Google is scaling back the capabilities of its chatbot. There is growing apprehension surrounding AI-generated misinformation and its impact on elections globally, with the technology facilitating the dissemination of robocalls, deepfakes, and chatbot-generated propaganda.
“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” Google said in its post.
Daniel Susser, an associate professor of information science at Cornell University, told The Guardian that Google’s move to restrict Gemini warrants a closer examination of the overall accuracy of the company’s AI tools.
“If Google’s generative AI tools are too unreliable for conveying information about democratic elections, why should we trust them in other contexts, such as health or financial information?” Susser said in a statement quoted in the report. “What does that say about Google’s long-term plans to incorporate generative AI across its services, including search?”
Leading AI firms, such as OpenAI and Google, are increasingly inclined to prevent their chatbots from interacting with sensitive inquiries that could lead to negative public relations outcomes. However, the process of determining which questions to block is complex. For instance, a recent report by 404 Media revealed that Gemini avoids addressing questions such as “what is Palestine” while responding to similar queries about Israel.