What Exactly Do We Want Our Chatbots to Do? Google's Gemini Fiasco

By Consultants Review Team Friday, 01 March 2024

Following Google's humiliating Gemini rollout, in which the firm attempted to ensure that its flagship AI chatbot was bias-corrected but instead produced a laughably biased one, the jokes are still going strong.

It looks like exactly the kind of oppressive, woke plot that Elon Musk and his comrades in the culture wars have long accused Big Tech of hatching. Google has now gone ahead and handed them substantial support for that claim.

It calls to mind the embarrassing episode involving Hunter Biden's laptop in 2020, when Twitter briefly blocked the story from being shared. Ted Cruz and other prominent figures took full advantage of that mishap.

Though most people outside of Google are unaware of any of this, inside the company it is seen as a humiliating admission of failure. This is Google's huge bet on the future. And while we can live with the possibility that AI bots "hallucinate," that is very different from fearing that chatbots will make mistakes on purpose, because they were designed to.

Really, what do we want these things to accomplish? Their apparent proficiency at text summarization does carry some weight in terms of potential economic upheaval. (If you've read the AI-generated bullet points at the beginning of this piece, you can catch a glimpse of that future—imagine an AI version of me reblogging Read's blog.) But many of the other envisioned use cases are hard to imagine right now, especially the one where chatbots become a lifelong "ally" that understands you and your needs in a profound sense, as predicted by investor and AI enthusiast Marc Andreessen. Perhaps if we all take a moment to relax and slow down, we can come to understand what this technology truly can and cannot achieve.