When speaking about the world’s top AI companies, we typically talk about Google and OpenAI. These two companies famously bumped heads back in 2023 when ChatGPT went up against Google Bard (now called Gemini). Well, Anthropic, another leading AI company, just announced Claude 3, and the model is powerful enough to trade blows with GPT-4 and Gemini.
Anthropic has been doing a lot of work and research on its AI, but it hadn’t quite stood out amongst the competition. Now, it has debuted Claude 3, the company’s magnum opus.
Anthropic AI introduces Claude 3, its most powerful model
Any new AI model that pushes the envelope brings us closer to artificial general intelligence (AGI), and that’s what Anthropic has done here. According to the report, Claude 3 was able to best GPT-4 and Gemini in industry benchmarks including undergraduate-level knowledge, graduate-level reasoning, and basic mathematics.
Claude 3, just like Gemini, comes in three different sizes: Opus, Sonnet, and Haiku. Opus is the largest and most powerful of the models, and it’s the one that went through the benchmarks. Sonnet and Haiku are the smaller and more efficient models, and they will be offered at lower prices. Also, we’re not sure whether either of the two smaller models will be able to fit onto devices the way Gemini Nano does. If so, it would make on-device generative AI possible for more devices.
As for Claude 3’s capabilities, it’s got some serious chops. For example, Claude 3 can summarize up to 150,000 words, which gives it the ability to summarize entire novels. One example that CNBC gave was being able to summarize the entirety of Harry Potter and the Deathly Hallows. That is a massive book, in case you don’t know. That is double Claude 2’s 75,000 words, and many times larger than ChatGPT’s 3,000 words.
So, that’s more than enough real estate to summarize extensive business documents, court cases, books, articles, etc. If you are a business using Claude, you will have a powerful companion.
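For developers, that summarization capability is exposed through Anthropic’s Messages API. Below is a minimal sketch of how a long-document summary request could be put together with the Anthropic Python SDK. The model identifier and the helper function name are assumptions for illustration, not something taken from the article.

```python
# Hypothetical sketch: packaging a long document into a single
# summarization request for Claude 3 Opus via Anthropic's Messages API.
# The model name "claude-3-opus-20240229" and this request shape follow
# Anthropic's Python SDK conventions; treat exact values as assumptions.

def build_summary_request(document: str,
                          model: str = "claude-3-opus-20240229") -> dict:
    """Return the keyword arguments for a Messages API summary call."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": "Summarize the following document:\n\n" + document,
            }
        ],
    }

if __name__ == "__main__":
    request = build_summary_request("Full text of a long report goes here...")
    # Actually sending it requires an API key, e.g.:
    #   import anthropic
    #   client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    #   reply = client.messages.create(**request)
    print(request["model"])
```

Because Claude 3’s window covers roughly 150,000 words, a novel-length document can fit in that single `content` string rather than needing to be chunked across multiple calls.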
Risk and safety
One of the major aspects of generative AI has to do with bias and safety. Right now, artificial intelligence does not have a moral compass. So, guardrails have to be built into the models so that they don’t generate content that would get any YouTuber canceled. According to Anthropic co-founder Daniela Amodei, Claude 3 has a more nuanced understanding of certain prompts than Claude 2.
“In our quest to have a highly harmless model, Claude 2 would sometimes over-refuse,” Amodei told CNBC. “When somebody would kind of bump up against some of the spicier topics or the trust and safety guardrails, sometimes Claude 2 would trend a little bit conservative in responding to those questions.” It seems that Claude 3 would have a better understanding of dicey topics and be better able to navigate them.
Speaking of risk, Anthropic’s Claude 3 does not generate images. You are allowed to upload images as input, but it will not give you image outputs. The report referred to Google’s current controversy with Gemini as one reason why Claude 3 does not generate images.
The model is available to try now by going to the official Claude 3 website.