AI expert says tech companies are fearmongering for AI dominance


If you’re a major AI company and you want the world to know that your product could change it for the better, the last thing you’d want to do is tell people that it could wipe out humanity. Yet one prominent figure thinks that is exactly what’s happening. Google Brain co-founder Andrew Ng believes that AI companies are fearmongering to secure AI dominance, according to a report.

It seems like a counterintuitive ploy, like selling fire extinguishers while warning people that they could burn their houses down. Yet Ng believes these companies are doing exactly that, and his argument makes sense.

Fearmongering for AI dominance

Speaking to the Australian Financial Review, Ng described a deceptive practice he believes AI companies are engaging in: large AI-focused companies, he argues, are promoting the increasingly prevalent notion that AI will lead to the extinction of humanity.

By stoking that fear, these companies could trigger strict regulation around AI. In turn, that would make it harder for open-source models to flourish, dampening innovation and raising the barrier for smaller companies trying to enter the AI game.

Policies proposed by governmental bodies could also require companies to obtain licenses for AI models, which would have a massive impact on the open-source community.

Why open source is a threat to major companies

Why do people flock to GIMP when Photoshop exists? Why choose Blender when Maya is there? It comes down to accessibility. GIMP and Blender are free programs that are open to everyone, while Photoshop and Maya, amazing pieces of software though they are, are expensive. The former also have their source code available for the community to work on; the latter keep theirs locked away.

The same dynamic applies to LLMs: open-source models are far more accessible. They let smaller businesses incorporate AI into their operations without a multi-million-dollar investment, and their source code is open for the community to contribute to.

That means people are more likely to turn to open-source models instead of proprietary offerings like Bard and ChatGPT. Whether the push for regulation is a ploy by big companies to squash open-source AI remains to be seen.
