Ousted OpenAI CEO returns amid speculations

The controversial co-founder of a US-based artificial intelligence company returns four days after his surprise ousting as CEO on November 17, 2023.

Former and current CEO Sam Altman founded OpenAI, a software development company, with others in 2015. His co-founders are Elon Musk, Greg Brockman, Ilya Sutskever, John Schulman, and Wojciech Zaremba.

Altman was the driving force behind ChatGPT, an AI application that has generated excitement and fear in equal measure. He pitched the idea to Microsoft, persuading CEO Satya Nadella to commit resources to OpenAI to the tune of $10 billion. The tech giant has a 49 percent stake in OpenAI. Altman also led negotiations for a tender offer that would bump the company’s valuation to over $80 billion.

OpenAI will use the funds to fuel research and process massive amounts of data. The goal is to get closer to developing an artificial general intelligence (AGI) system. Sam Altman states on the site that the “mission is to ensure that artificial general intelligence [AGI]…benefits all of humanity.”

AGI is a theoretical framework for creating AI software that can autonomously solve complex problems across different fields of knowledge. It is considered “strong” AI, in contrast to the “weak” AI software that only functions within narrow parameters.

Industry observers also credit Altman for attracting leading AI engineers to the company with the promise of developing massively disruptive technology. His moonshot ethos struck a chord with tech talent eager to do something revolutionary and establish themselves in history.

Despite his significant contributions to OpenAI, his ousting as CEO did not surprise people in the know. Altman had reportedly been clashing with the board for some time, primarily over his “move fast and break things” approach.

Speculations

No one knows exactly what discussion behind closed doors prompted Altman’s dismissal. Neither the board nor anyone else in the know has confirmed the reasons behind the decision.

An OpenAI announcement offered one reason for the ousting. It claimed that Altman failed to be “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Co-founder Greg Brockman resigned in protest, though the notice merely mentioned his departure as board chairman. Brockman returned to OpenAI upon Altman’s reinstatement, but it is not yet clear in what capacity.

However, Altman’s lack of candor is unlikely to be the only reason for such a significant move by the board. An internal memo by interim CEO Emmett Shear denied that the dismissal was due to malfeasance or any disagreement over safety. He did not specify the rationale but pledged to investigate the matter.

The lack of information has led to much speculation about what really happened. Theories range from too much deference to Microsoft to excessive focus on side projects. Some AI experts believe the real issue behind the ejection was a failure of governance.

One is David Shrier, an AI professor at Imperial College Business School. He believes that the speed of adoption and the commercialization potential of ChatGPT sparked tensions between OpenAI’s nonprofit and for-profit entities. The for-profit company remains controlled by OpenAI Inc., a 501(c)(3) nonprofit organization. The unusual relationship culminated in a “capped profit” structure, under which any profit above the cap flows to the nonprofit.

On paper, the board has complete control over OpenAI. In practice, however, the scale of OpenAI’s operations and the influence of its CEO have essentially rendered the board ineffective, leaving the directors with little real governance power over the company, which Shrier considers alarming. He believes that AI technology could lead to an existential crisis, which makes robust governance critical in managing it.

The Letter

One of the more substantiated rumors is that the straw that broke the camel’s back was a letter. Before the board decided to oust Altman, they purportedly received a warning letter from staff researchers. It reportedly stated that OpenAI had made a discovery that was powerful enough to threaten humanity.

While the letter is unavailable for review, then-CTO Mira Murati sent internal communications informing staffers and the board about Project Q*. Without confirming media speculation about the nature of Q*, some OpenAI employees believe the Q* team made a breakthrough toward developing AGI.

The authors flagged the progress of an OpenAI team on optimizing AI models to increase their ability to reason and do scientific work. They pointed out the potential danger that highly intelligent machines could pose to humanity. The letter did not specify the precise nature of the threat.

The concerns that prompted the letter to the board appear to have been triggered by the software learning to solve math problems. While that might seem cause for celebration rather than alarm, AI researchers see it differently: the ability to reliably answer math problems, which have only one correct answer, marks a significant step toward AGI.

At the APEC CEO Summit 2023, Altman hinted at significant breakthroughs in AI technology during a discussion with other tech experts. In his remarks, Altman said in part that “we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime.”

The OpenAI board fired Altman the day after the summit.

Why this should cause concern may be encapsulated in what Altman articulated in his article on the site. He stated, “AGI would come with serious risk of misuse, drastic accidents, and societal disruption. [However,] because the upside of AGI is so great, we [should not] stop its development forever.” Altman suggested that AGI developers and society must figure out how to get it right.

Altman has previously brushed aside concerns over releasing products into the market without fully understanding the consequences. Many have raised concerns over security, bias, confidentiality, accuracy, and ethics with ChatGPT since its launch in November 2022. With AGI seemingly on the brink of creation, fears of what OpenAI will do under Altman’s leadership may seem reasonable.

The Walk Back

However, about 700 OpenAI employees thought otherwise. When the board relieved Altman of his CEO duties, they signed an open letter signaling their displeasure. The letter stated they might resign unless the board reinstated Altman and Brockman. It further demanded that the board step down, claiming its actions showed the directors “are incapable of overseeing OpenAI.”

After their departures, Microsoft immediately hired Altman and Brockman to head a new AI research team. It also opened its doors to any OpenAI employee who wanted to switch.

Investors also pressured the board to back down from its decision. Altman, who was once at the helm of Y Combinator, had a talent for raising funds and attracting top talent, and maintaining OpenAI’s various software platforms requires copious amounts of both. His absence could have created a big problem for the company in both financial and human resources terms.

As a result, the OpenAI board agreed to bring Altman back as CEO. It announced the decision on November 21 on X (formerly Twitter) after reaching an “agreement in principle” with him. The details of the agreement have not been made public.

Altman confirmed the reinstatement in an X post, stating that he was looking forward to returning to OpenAI and that Nadella fully supported his return under the new board. The Microsoft CEO called the changes an “essential step toward more stable, well-informed, and effective governance.”

The Aftermath

OpenAI also appointed a new set of board members, primarily luminaries in business and technology. The previous board consisted mainly of researchers and academics.

The revamped board includes former Salesforce CEO Bret Taylor, who serves as chair, and former Treasury Secretary Lawrence Summers. Quora CEO Adam D’Angelo is the only holdover from the original board; he led the negotiations with outgoing board members that eventually brought back Altman and Brockman.

The outgoing board members had wanted to stay on to keep Altman and Brockman in check in the event of a reinstatement. They were concerned that future board members would let Altman do whatever he wanted, and during negotiations they maintained that they stood by the decision to oust the pair.

However, it was to no avail. Helen Toner, who had openly criticized OpenAI in October 2023, stepped down to make way for a fresh start. RAND Corporation scientist Tasha McCauley also vacated her seat. OpenAI co-founder Ilya Sutskever likewise left the board but appears to have stayed on as Chief Scientist.

Plans are afoot to expand the board. Before his ousting, Altman had been pushing for months to add more experienced directors. The initial board is small but unlikely to stay that way for long. By all accounts, the final board will most likely be a diverse group that cleaves closer to the non-doom side of the AI debate. The composition of the current board suggests that OpenAI will shift toward resembling a typical Silicon Valley company.

It is unclear whether Microsoft will have any role or representation on the board due to the reshuffling. However, given the participation and influence of the tech giant, the directors will likely maintain close ties with Microsoft.

External Effects

The ongoing drama at OpenAI highlights global issues in AI regulation, and people will carefully watch what happens next. It prompted the US Federal Trade Commission to authorize staff investigations of companies selling AI-powered products and services. The authorization provides legal powers to gather evidence, including documents and testimony.

The aborted ousting at OpenAI also affected negotiations over the proposed AI law in the European Union. It occurred as member states were deep in debate over the approach to regulation, specifically aggressive oversight versus self-regulation. Some believe regulations should target foundational models, while others maintain they should focus on applications.

Three EU countries, France, Italy, and Germany, agreed to support self-regulation, indicating that they trusted companies like OpenAI to act responsibly. Other member states, however, see the OpenAI events differently, believing they prove that more robust protocols are necessary. In their view, relying on visionaries and disruptors in the AI space is bound to lead to a disastrous lack of governance.

The OpenAI board’s failure to stand by its decision under pressure highlights the power of stakeholders. It amplifies the case for more robust public oversight, since government can, in theory, curtail corporate activities that pose a threat to society. However, calls for government interference in private AI enterprises will probably not go down well with big tech.

The (Possible) Real Deal

The reasons behind Altman’s ousting and reinstatement brought the public’s attention to the views of fringe groups about AI risks. Additionally, the back-and-forth between the board and Altman brought to light the conflict between Altman’s actions and his stated beliefs.

Altman was one of the most vocal advocates of AI regulation when speaking before the US Senate. He stated that his worst fear was that the technology and industry would “cause significant harm to the world … in a lot of different ways.”

However, he focused increasingly on raising funds and forging alliances to advance AI technology that he admittedly did not fully understand. Some industry experts consider the disconnect disingenuous: his professed fear of a dystopian future directs attention away from what OpenAI is currently doing, while the company garners ever more influence and power as a commercial venture.

On the other hand, the desire for regulation and commercialization are not mutually exclusive. Disruptive technology tends to be very profitable but always goes through growing pains that are resolved over time.

OpenAI, as a for-profit venture and disruptor, will likely go through the same growing pains as the light bulb, the Internet, and digital banking. No one today wants to go back to life before these technologies, but many feared them at the time. In the 1990s, few would trust their money to anything other than a traditional bank; today, most people happily open retirement accounts online.

Technological advances will always profoundly affect society in good and bad ways. The only question is how people will manage them so that the good outweighs the bad.
