Google’s Bard had a rough launch, with a demo delivering inaccurate information about the James Webb Space Telescope (JWST).
At launch, Google tweeted a demo of the AI chat service in which the prompt read, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” Bard replied: “JWST took the very first pictures of a planet outside of our own solar system.” People quickly noticed that the response was factually incorrect.
“This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester program,” a Google spokesperson told ZDNET in a statement.
The chatbot’s actual performance also drew plenty of negative feedback. In ZDNET’s experience, Bard failed to answer basic questions, had longer wait times than its rivals, didn’t automatically include sources, and paled in comparison to more established competitors. Google CEO Sundar Pichai himself likened Bard to “a souped-up Civic” compared to ChatGPT and Bing Chat.
Before Bard was released, Google’s LaMDA came under fire as well. As ZDNET’s Tiernan Ray reports, shortly after LaMDA was announced, then-Google engineer Blake Lemoine released a document in which he claimed LaMDA might be “sentient.” The controversy faded after Google denied the claim, placed Lemoine on paid administrative leave, and eventually let him go from the company.
Google’s switch from LaMDA to PaLM 2 should help mitigate many of Bard’s current issues.