According to Ars Technica (via Platformer), Microsoft has axed its AI ethics team as part of the recent mass layoffs that affected 10,000 employees. The team was responsible for monitoring and reducing the social harms caused by Microsoft's AI products.
Microsoft has made headlines in recent months for its investment in OpenAI, the company behind ChatGPT, and for integrating the technology into its Bing search engine. While companies that incorporate AI into their products and services typically have a team to examine the possible associated dangers, Microsoft has now laid off its AI ethics team members.
The team reportedly developed a “responsible innovation toolkit” that helped Microsoft’s engineers predict and eliminate risks created by AI. Former team members say they played a key role in mitigating AI risks in Microsoft products.
Microsoft’s AI ethics team left the company in the recent layoffs
In response to the news, Microsoft announced it remains “committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this.”
The company also says it has focused on building its Office of Responsible AI over the past six years. That team remains in place and works with the Aether Committee and Responsible AI Strategy in Engineering to mitigate AI risks.
Microsoft’s decision to lay off its AI ethics team coincides with the launch of OpenAI’s most advanced model, GPT-4, which now powers Microsoft Bing and could drive even more interest toward Google’s rival.
Microsoft started forming its AI ethics team in 2017, and the team reportedly grew to around 30 members. Platformer reports that the company later scattered those members across different departments as AI competition with Google heated up, eventually downsizing the team to only seven people.
The former employees also claim Microsoft didn’t listen to their recommendations for AI-driven tools like Bing Image Creator, which copied artists’ work. The laid-off employees are now concerned about the dangers AI could pose to users when there’s no one at the company to say “no” to potentially irresponsible designs.