According to some Google employees, the Bard AI chatbot is just a charade


The artificial intelligence battle led to the birth of Google’s Bard AI chatbot, a rival to Bing’s ChatGPT-powered chatbot. But it seems some employees at Google don’t think as highly of Bard as their employer would like you to. What exactly is the reason behind the rejection Bard is getting from its own people?

Well, many might argue that the reports only cite a small number of Google workers, and that some of those speaking out against Bard are former Google staff. But regardless of the size of the opposition, it is worth taking their reasons for this aversion into account.

It boils down to how the Bard AI chatbot performs when people actually use it. The issues raised also apply to Bard’s greatest rival, the Bing AI chatbot. Notably, Google staff have labelled the platform “worse than useless”. That sounds harsh coming from Googlers, and it draws even more attention to Bard’s performance.

The misleading responses of Google’s Bard AI chatbot put it under intense scrutiny from its makers

Over the past few months, Microsoft has made headlines for integrating ChatGPT technology into its products. This brought the AI chatbot to Bing and Skype, but users have been able to spot some flaws: the Bing AI chatbot not knowing the limits of a conversation, threatening users, and falsifying information.

Shortly after Microsoft announced the integration of ChatGPT into Bing, Google was already preparing to announce the Bard AI chatbot. Now, following its arrival, Google employees seem to be pointing out noticeable flaws in the chatbot.

These flaws are quite similar to those that came to light with the ChatGPT chatbot. Bard provides false information to users and gives dangerous advice, putting user safety at risk. Google’s rush to launch may be the main cause of the issues its employees are pointing out.

Reports have it that the internal safety team had advised against launching the chatbot, recommending instead that these flaws be fixed first to deliver a more stable and user-friendly product. Now, Google will have to focus on fixing these issues so that people using the product can get safe and accurate responses to their searches.

Just like Microsoft’s Bing AI chatbot, Bard comes with some flaws. Many might argue that this is normal for any new AI product, but such issues are already pushing certain countries to ban AI chatbots. The AI chatbot industry needs further refinement before the public at large will regard it as safe to use.
