Rita El Khoury / Android Authority
TL;DR
- Google posted a GIF of its AI chatbot, Bard, in action.
- Astronomers on Twitter pointed out that Google’s new tool made a mistake.
- After news of the flub spread, shares of Alphabet dropped by 8%.
Yesterday, Microsoft held a surprise AI event where it performed a live demo of its AI-powered Bing search engine. Not to be outdone, Google posted a demo on Twitter of its own AI chatbot. However, things didn’t go exactly as planned.
It’s no secret that the overwhelming popularity of ChatGPT has Google worried. To compete against this threat to its business, Google revealed its own AI chatbot — Bard. The tool can produce intelligent responses to natural language queries by using information scraped from the internet.
A day after Microsoft’s AI-powered Bing live demo, Google posted a GIF on Twitter showing its new AI tool in action. The GIF shows Bard answering the question “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” Bard then provides a bulleted list of answers.
Bard is an experimental conversational AI service, powered by LaMDA. Built using our large language models and drawing on information from the web, it’s a launchpad for curiosity and can help simplify complex topics → https://t.co/fSp531xKy3 pic.twitter.com/JecHXVmt8l
In the last bullet point, it says “JWST took the very first pictures of a planet outside of our own solar system.” Astronomers on Twitter were quick to point out that this information was false. Astrophysicist Grant Tremblay later quote-tweeted the post, explaining that the first image of a planet outside our solar system was taken in 2004 — years before the James Webb Space Telescope launched.
Not to be a ~well, actually~ jerk, and I’m sure Bard will be impressive, but for the record: JWST did not take “the very first image of a planet outside our solar system”.
While this flub may be a funny moment for some on social media, Google is anything but laughing. Not long after word of the mistake spread, shares of Google’s parent company — Alphabet — reportedly dropped by 8%, according to Reuters. This wiped out more than $100 billion in market value.
It’s not uncommon for AI tools like Bard and ChatGPT to make factual errors, because they pull information from the internet rather than from a trusted database of verified data. The problem is that these systems present the information in an authoritative way that makes users believe the answers are true.
Microsoft has preemptively shielded itself from this kind of liability by placing the onus on the user. This is stated in its disclaimer, which says: “Bing is powered by AI, so surprises and mistakes are possible. Make sure to check the facts, and share feedback so we can learn and improve!”
In response to the incident, a spokesperson from Google told The Verge:
This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program. We’ll combine external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety and groundedness in real-world information.
In a follow-up tweet, Tremblay added: “But ChatGPT etc., while spooky impressive, are often *very confidently* wrong. Will be interesting to see a future where LLMs self error check.” Whether this incident will convince these companies to build such error checking into their quality assurance remains to be seen.