In the age of social media, it’s no secret that content moderation has become one of the most pressing issues for tech giants like Meta. As part of the ongoing AI revolution, however, OpenAI may have a solution: the company recently announced that it is building a new content moderation system with GPT-4.
How does the system work?
In a recent blog post, OpenAI described how the process works. Moderation guidelines are first given to GPT-4 as a policy, and the model is then tested against a small corpus of known problematic content. Once the initial results are in, human reviewers step in to analyze both the successes and the shortcomings of the AI, refining the system as they go. In cases where the model struggles, evaluators can ask GPT-4 to explain its decisions.
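To make the workflow concrete, here is a minimal sketch of the policy-as-prompt loop described above, assuming the standard OpenAI Chat Completions API in Python. The policy text, labels, and example post are illustrative placeholders, not OpenAI’s actual moderation policy.

```python
# Minimal sketch of policy-as-prompt moderation, per the workflow above.
# Assumptions: the openai Python library (v1+) and an OPENAI_API_KEY in the
# environment; POLICY and the labels below are invented for illustration.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

POLICY = """You are a content moderator. Classify the user's text under this policy:
- ALLOW: ordinary discussion, criticism, news reporting.
- FLAG: harassment, threats, or instructions for causing harm.
Reply with exactly one label (ALLOW or FLAG), then a one-sentence rationale."""

def moderate(text: str) -> str:
    """Ask GPT-4 to apply the policy to one piece of content."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output makes labels easier to audit
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# A human reviewer would compare this label against their own judgment and,
# for borderline cases, read the rationale to spot ambiguities in the policy.
print(moderate("Example post that a reviewer wants a label and rationale for."))
```

Keeping the temperature at zero makes the labels deterministic, which is what lets reviewers meaningfully compare the model’s judgments against their own and feed disagreements back into the policy text.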
Lilian Weng, OpenAI’s head of safety systems, highlighted the transformative potential of the new technology: “We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators.”
Furthermore, OpenAI emphasized that the new AI-driven content moderation tool will handle more than just text, with plans to extend its capabilities to images and videos.
Need for the new technology
While some may argue that AI will never match the judgment of human moderators, it is important to recognize the toll that manually reviewing distressing and graphic content takes on moderators’ mental well-being. A system that lifts this burden from human moderators could therefore mark a significant turning point in content moderation.
Furthermore, for corporations like Meta, such a system could yield substantial financial savings. According to a study conducted by the University of Zurich, using AI for content moderation can be up to 20 times more cost-effective than traditional human moderation.
Although the advantages of an AI system are evident, human involvement will remain crucial for the time being: human moderators would still retain control over the final outcome, since AI systems are not yet completely foolproof. “We can’t build a system that is 100% bulletproof from the ground up… But I’m pretty sure it will be good,” said Weng.