Microsoft AI CEO considers public web content as “freeware”


The CEO of Microsoft AI says that publicly available information on the web is akin to “freeware.” Much of this data has already been used to train AI models. However, not everyone agrees with this view. Content creators continue to feel harmed by the potential use of their work for this purpose without receiving compensation.

Mustafa Suleyman, CEO of Microsoft AI, recently answered multiple questions about the current role of AI and its development. One of the issues that most concerns people in this regard is copyright. There is still no comprehensive AI legislation that addresses all the content involved, the parties affected, and fair compensation. Many gray areas remain that raise questions even about the ethical use of AI.

Microsoft AI CEO says that publicly available content on the internet is “freeware”

With this in mind, CNBC’s Andrew Ross Sorkin asked Suleyman about the topic. Sorkin raised the issue of intellectual property in the data used to train AI models, which encompasses all content publicly available on the Internet. More specifically, he asked who owns the IP in that content, who should be compensated for its use, and whether AI companies have “stolen” it.

To this, the CEO of Microsoft AI responded: “The social contract of that content since the 90’s has been…it is fair use. Anyone can copy it, recreate with it, reproduce with it. That has been freeware if you like.” This response has caused controversy, especially among content creators. They argue that treating all publicly available content on the Internet as “freeware” is dangerous. They fear that, under that view, AI companies will feel free to take content whenever they want.

“Gray area” cases should be dealt with in court, Suleyman claims

Suleyman does try to distinguish between types of content publicly available on the Internet. He mentions another category in which creators explicitly state that their content cannot be taken or used without consent. Suleyman considers these cases a “gray area” that must be resolved in court. He also notes that others have used that kind of content without authorization, and that creators may not even be aware of it, since such use is not easy to detect.

With no firm AI legislation currently in place, cases of this nature are being handled individually, with varying results. The legal boundary between fair use and “theft” of publicly available content remains unclear when it comes to training AI services.
