It’s no secret that Microsoft’s push to integrate ChatGPT into its services prompted rivals like Google to build their own AI chatbots, with Google launching Bard. However, Google’s answer to ChatGPT has had a shaky start. According to a report from The Information, a former Google AI researcher named Jacob Devlin claims Google used ShareGPT, a platform where users share their conversations with ChatGPT, to train Bard.
Devlin reportedly left Google after raising concerns with Alphabet CEO Sundar Pichai and other executives, arguing that using ShareGPT’s data would violate OpenAI’s terms of service and make Bard’s responses too similar to ChatGPT’s. The report also states that Google stopped using ChatGPT data following Devlin’s warnings.
Google’s response
Although Google has denied the allegations and stated that it trained Bard on LaMDA, some remain sceptical about Bard’s rapid development following the release of ChatGPT. Google is also reportedly stepping up its efforts to make Bard competitive with ChatGPT through “Gemini,” a collaboration between Google Brain and DeepMind.
Bard’s rocky start was evident in its first demonstration, where it made a factual error about discoveries from the James Webb Space Telescope, leading Google parent Alphabet to lose roughly $100 billion in market value as its stock tumbled. Still, Google is clearly eager to improve its AI chatbot and is willing to collaborate across its divisions to do so. The battle between Google and Microsoft over AI will probably intensify following the integration of ChatGPT into Microsoft Bing, which reportedly has surpassed 100 million daily active users and is growing quickly.
However, using scraped data without permission raises serious ethical and legal questions, and responsible AI development should be a top priority for every company in the industry.