ChatGPT recently had a major malfunction in which the AI model began speaking complete gibberish to multiple users. Reports started appearing, especially on Reddit, of ChatGPT going off on completely nonsensical ramblings. There were also reports of the AI getting stuck in loops or mashing languages together into weird hybrids. OpenAI took notice, identified the issue, and had it fixed a few hours later.
ChatGPT malfunctions, gets stuck in loops & spouts gibberish
One report on Reddit showed ChatGPT suddenly beginning to talk incoherently. Random words, excerpts from specific training data, and even different languages were all being blurted out by the AI. In another malfunction report, a user shared multiple screenshots of ChatGPT fluctuating between hallucinating and being lucid. After talking nonsense, it got stuck in a loop before terminating its response. Once the user pointed this out, the AI apologized before talking gibberish once more. This repeated another time, at which point the screenshots were taken.
Another very interesting post showed ChatGPT getting stuck in a loop again. This time, however, the loop was a self-correcting one. When asked to name the biggest city in the world starting with the letter ‘A’, ChatGPT went on a very odd tangent. It would name a city, then say it wasn’t the correct answer and try again, ad infinitum. People in the comments tried this in their own chats and shared similar results. Yet another Reddit post about GPT-4, OpenAI’s premium subscription-based model, showed ChatGPT hallucinating. Once again, multiple users in the comments shared similar results.
OpenAI explains why this happened
OpenAI took note of the developing malfunction and quickly identified the issue. Within a few hours, the company reported that ChatGPT was operating normally on its status page. The next day, the company released an explanation for what had gone so wrong.
Apparently, a recently introduced optimization had broken things. OpenAI explained, in very simple terms, how ChatGPT works and what had gone awry.
“LLMs [Large Language Models] generate responses by randomly sampling words based in part on probabilities. Their “language” consists of numbers that map to tokens. In this case, the bug was in the step where the model chooses these numbers. Akin to being lost in translation, the model chose slightly wrong numbers, which produced word sequences that made no sense.”
In short, ChatGPT was selecting the wrong words and displaying abject gibberish. Think of it like throwing darts at a large dictionary and constructing a sentence out of the words the darts landed on.
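To make OpenAI’s explanation concrete, here is a minimal, hypothetical sketch (not OpenAI’s actual code, and with a toy five-word vocabulary) of the idea: a model’s “language” is numeric token IDs, and a decoding step maps those numbers back to text. If that step perturbs the chosen numbers even slightly, the output becomes word salad.

```python
# Toy illustration of token decoding. The vocabulary and IDs here are
# invented for demonstration; real models use vocabularies of ~100k tokens.
vocab = {0: "the", 1: "city", 2: "is", 3: "Tokyo", 4: "largest"}

def decode(token_ids, id_map=lambda i: i):
    # id_map stands in for the buggy step: it can nudge the chosen numbers.
    return " ".join(vocab[id_map(i) % len(vocab)] for i in token_ids)

sampled = [0, 4, 1, 2, 3]  # token IDs the model "chose"

print(decode(sampled))                   # correct mapping: "the largest city is Tokyo"
print(decode(sampled, lambda i: i + 2))  # slightly wrong numbers produce nonsense
```

The words themselves are all valid here; it is only the numbers that are off, which mirrors why the broken responses read like real language shuffled into gibberish.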