Google Bard: two employees wanted to stop AI chatbot launch
Apr 11, 2023
On the same day that Google launched the "Experimental Updates" section for Bard, designed to let anyone discover the latest features added to its chatbot, a very interesting report from the New York Times appeared online. According to the paper, some Google employees reportedly tried to stop Bard's launch by raising an alarm about the development of artificial intelligence.
Some employees' alarm over Google Bard
According to the report, two employees tasked with reviewing all of the Mountain View company's AI products reportedly tried to prevent the chatbot's release, citing concerns that it could generate dangerous or false statements. Following their risk assessment, the two reviewers recommended blocking Bard, believing the chatbot was not ready for widespread use.
In response, Jen Gennai, director of Google's Responsible Innovation group, reportedly modified the internally shared document to remove the recommendation and downplay the chatbot's risks. Gennai told the NYT that the reviewers were not supposed to decide whether Bard should debut at all and that, in any case, their comments were welcomed by the engineers in order to "correct inaccuracies" and launch the AI as an experiment limited to a small group of testers.
Given the race for generative AI (and beyond) among the tech giants, such concerns are broadly legitimate. Indeed, in recent weeks Elon Musk and thousands of other signatories signed an open letter calling for a halt to the development of AI systems beyond GPT-4 "for at least six months," so that all public actors can openly discuss their future.
________________
Source: NYT