As of this writing, OpenAI is in the midst of a pretty big legal battle. A firm in California claims that OpenAI violated the privacy of its users while training its chatbot, ChatGPT. Given the claim, and the sheer amount of data the company gathered for its chatbot, the case could have major implications for OpenAI and other AI companies. The question is: is OpenAI in the wrong in this situation?
What’s the case about?
Once you look at what this case is about, it’s easy to grasp the gravity of the situation. AI is like a human brain: it needs to be trained, it needs to be taught. Companies feed massive amounts of data into their LLMs (Large Language Models) so that they can learn; the more an LLM knows, the more helpful it can be. Google is one of the leading AI companies because it has access to vast troves of data through Google Search, Android, etc.
In the case of ChatGPT, it too runs on an LLM that needs to be trained, and the company trained it on a ton of data. The issue lies in where some of that data came from. The company pulled a plethora of data from the internet, including plenty that people uploaded without ever consenting to its use in training an AI chatbot.
We’re talking about articles, blog posts, short stories, screenplays, poems, social media posts, and so on; the list goes on. Since ChatGPT’s training data reaches back years, there’s no telling how much of people’s content was used to train it. The lawsuit alleges that this information was obtained without the consent of the original posters.
This is why OpenAI is in hot water. The sheer scale of the information OpenAI fed into ChatGPT is gigantic. If the court finds the company at fault, it could mean bad news for AI in general.
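To make the scraping concern concrete, here’s a minimal sketch of how publicly posted text can be collected without ever asking its author. The URL, the tooling (requests, BeautifulSoup), and the corpus-building step are illustrative assumptions, not OpenAI’s actual pipeline, which operates at a vastly larger scale:

```python
# Hypothetical illustration: gathering public web text for a training corpus.
# Real crawls (e.g. Common Crawl-style pipelines) work at a far larger scale.
import requests
from bs4 import BeautifulSoup

def scrape_page_text(url: str) -> str:
    """Fetch a public page and return its visible paragraph text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Grab every <p> tag: blog posts, articles, and stories alike.
    return "\n".join(p.get_text(strip=True) for p in soup.find_all("p"))

# A hypothetical crawl list; nothing here asks the author for permission.
urls = ["https://example.com/some-public-blog-post"]
corpus = [scrape_page_text(u) for u in urls]
# From here, the text would be cleaned, deduplicated, and tokenized for training.
```

Nothing in a pipeline like this requires, or even prompts for, the original poster’s consent, which is exactly what the lawsuit takes issue with.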
So, is OpenAI in the wrong?
It’s the wild west out there for AI, as lawmakers are still figuring out how to regulate it. While they battle over the legality of AI, the average Joes are grappling with the morality of it. Is OpenAI in the wrong for using so much data to train ChatGPT?
One could argue yes.
A big part of this argument stems from the controversy over AI-generated art. There’s an ongoing protest against the technology because it lets people instantly create images with nothing but words. Just like ChatGPT, AI image generators need to be fed art made by human artists, and many of those artists don’t want their work used to train the image bots.
Now, think about people having their written content used to train ChatGPT. Just as image generators can put artists out of business, chatbots can put writers out of business. Chatbots can generate articles, short stories, scripts, blog posts, etc. in a matter of seconds. Why hire and pay a human writer when you can just have your content generated? Understandably, writers who spent years developing their craft don’t want their work used to train the very machine that could put them out of a job.
These are people who need to put their work out there to make a living. Journalists post their articles on public websites, writers create blog posts for companies’ public sites, and so on. They do this to share their craft with readers and, hopefully, inspire other writers. They didn’t sign on to have their work scraped and used to train AI chatbots.
Others could argue no
There are two sides to this argument, though. When artists, writers, and musicians post their work online, they run the risk of it being used for other purposes; it’s not much different from having your content stolen and plagiarized. Creators publish with the understanding that their work could be lifted and repurposed.
The conversation gets even hairier if the content isn’t protected by copyright. If your content sits freely on the internet for anyone to grab, then OpenAI could be within its rights to use it to train ChatGPT.
If OpenAI loses the case, it might be a good thing
Those of you using ChatGPT to write your essays and such are most likely rooting for OpenAI to win the case. However, if the company loses, it might not be the worst thing. Sure, OpenAI would take a financial hit, which isn’t great for the company, but a ruling against it could help push AI regulation forward.
As stated before, it’s the wild west out there for AI regulation. Lawmakers are scrambling to rein in AI development and curb its negative effects. People’s rights to their own content have never been more in jeopardy, with AI image generators and chatbots popping up all over the place. Content from across the internet is being scooped up and fed to these LLMs, and there’s currently no way to put a halt to it.
If OpenAI loses this case, it will bring the issue to the forefront and get people talking about how these LLMs obtain their information. People are crying out for AI regulation at this point; it’s already costing people their jobs. Independent creators are being one-upped by talentless folks who simply generate content and market it. Someone can easily whip up an AI-generated “song” and sell it on Spotify. Imagine pumping out an entire album in a day or a week and collecting income from it.
The same goes for AI “artists” and AI “authors”. It might already be too late for most people. We’re still just learning the potential of generative AI technology, and that includes its potential for harm.
Only time will tell
At this point, we’re still waiting on the outcome of the case, so it’s hard to tell what’s going to happen. We’ll have to see how it develops to know the future of the technology. A case like this could definitely have major ramifications for AI.