While AI has proven to be pretty capable so far, it's still not as smart as a human being. We're still far from that point, though some people believe it's closer than we think. According to Google's AI chief, there's a 50/50 shot that AI could reach the AGI level (Artificial General Intelligence), and it could happen sooner than we expect.
Right now, there's a lot of fear surrounding AI. Those who are afraid of today's AI probably don't want to know what AGI is. Basically, AGI is an AI that's just as smart as a human. Modern AI chatbots like ChatGPT are great at regurgitating information presented to them. Even when one generates a story (a piece of media that requires critical thinking and artistic ability), that still doesn't represent true human intelligence.
Google’s AI chief still believes that there’s a 50/50 shot at achieving AGI in the near future
Back in 2011, Google’s current AI chief and DeepMind co-founder Shane Legg publicly stated that there’s a 50% chance that AI will progress to AGI within the next couple of decades. Now, more than a decade later, he hasn’t budged from that stance.
He was recently a guest on Dwarkesh Patel’s podcast (via Futurism), where he shared his thoughts on AGI and when we could expect to reach that point of development. While he stated his stance back in 2011, his interest in AI goes back yet another decade, to when he read 1999’s “The Age of Spiritual Machines”. He pointed to two main ideas in the book that really spoke to him: “One is that computational power would grow exponentially for at least a few decades. And that the quantity of data in the world would grow exponentially for a few decades.”
The book was not wrong in that regard; the amount of data produced every day continues to grow.
Difficulties with achieving AGI
He predicted that AGI would be upon us by the end of this decade, but he later said that this might not be the case. One thing holding it back is that it’s hard to define what human intelligence actually is; it’s a vague concept. As he put it, “You’ll never have a complete set of everything that people can do.”
This is true; what separates your intelligence from the AI in the phone you’re reading this article on? That’s tough to pin down. Legg also talked about testing an AI to see if it’s as smart as a human being, saying that no single test could determine whether an AI is smart enough.
Another point Legg brought up is that we won’t be able to simply scale up AI models to push an AI into AGI territory. The AI models powering current chatbots are already gargantuan in size, but according to Legg, they’re not big enough to create a true AGI.
So, how will we achieve this technology?
Right now, Legg believes that we’re on track to being able to create an AGI; it’s a matter of what we train our models on. He thinks that companies should “start training models now with the scale of the data that is beyond what a human can experience in a lifetime”. That’s a tall ask, but he believes it’s achievable.
As we wait to see whether humanity will create the first AGI, companies are still working on making their AI smarter. AI already poses a threat to many parts of the tech industry, so let’s hope that AGI won’t be even worse.