The White House initiative for developing responsible AI is getting new members. IBM, Nvidia, Adobe, and five other AI companies have joined the program, pledging to develop safer and more responsible AI services for users.
In July, the US administration launched a program encouraging companies to develop AI responsibly. Meta, Google, and OpenAI were among the first notable companies to join, agreeing to invest more in cybersecurity and to oversee their AI development plans with greater care. Now, several more tech companies have committed to acting more responsibly when developing and releasing AI tools.
Besides IBM, Nvidia, and Adobe, the other companies joining the White House AI program are Cohere, Palantir, Salesforce, Scale AI, and Stability AI. The commitment is voluntary, however, and violators will not be punished. The Biden administration said it consulted with several world leaders while developing the plan and its provisions.
IBM, Nvidia, and Adobe join the White House voluntary program for developing AI more responsibly
All companies that have joined the program must conduct internal and external testing of their AI systems before releasing them to the public. They have also committed to investing in security measures that protect model weights and to sharing information with peers to mitigate AI risks.
Other obligations include allowing third parties to report AI vulnerabilities, watermarking AI-generated content, publicly reporting the risks associated with their AI systems, and researching societal risks. Finally, the AI systems these companies develop must help address society's greatest challenges.
AI is becoming a national security matter for the United States and other world powers, so governments are rushing to regulate it and mitigate its potential societal risks. The US administration has introduced the AI Bill of Rights as a guideline for enacting new rules.
The European Union is also in contact with the United States as it develops its own AI regulations. However, many experts believe regulation cannot keep pace with AI development. Because of this, DeepMind co-founder and Inflection AI CEO Mustafa Suleyman has suggested the US government enforce AI rules through access to Nvidia chips.
Concerns about AI misuse have risen since Microsoft reported that Chinese actors are posting AI-generated photos on social media to mislead American voters. These images mostly attempt to widen racial divisions in the US.