NEW YORK — Sam Altman is stepping down from his role as CEO of OpenAI, the company announced on Friday.
The departure follows a review process undertaken by the company's board of directors, said OpenAI, the maker of the popular chatbot ChatGPT.
"Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities," OpenAI said in a statement. "The board no longer has confidence in his ability to continue leading OpenAI."
The company's chief technology officer, Mira Murati, will take over the CEO role on an interim basis, OpenAI said.
Founded as a non-profit in 2015, OpenAI has risen to prominence since ChatGPT was made available to the public a year ago. The chatbot now boasts more than 100 million weekly users, Altman announced earlier this month.
Meanwhile, the company has grown dramatically. As of October, OpenAI was on pace to bring in more than $1 billion in revenue over a 12-month period through the sale of its artificial intelligence products, The Information reported.
In January, Microsoft announced it was investing $10 billion in OpenAI. The move deepened a longstanding relationship between Microsoft and OpenAI, which began with a $1 billion investment four years ago. Microsoft's search engine, Bing, offers users access to ChatGPT.
Speaking with ABC News' Rebecca Jarvis in March, Altman said AI holds the capacity to profoundly improve people's lives but also poses serious risks.
"We've got to be careful here," Altman said. "I think people should be happy that we are a little bit scared of this."
In May, Altman testified before Congress with a similarly sober message about AI products, including GPT-4, the latest version of the model behind ChatGPT. He called on lawmakers to impose regulations on AI.
"GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other widely deployed model of similar capability," Altman said.
"However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," he added, suggesting the adoption of licenses or safety requirements necessary for the operation of AI models."
This is a developing story. Please check back for updates.
Copyright © 2023, ABC Audio. All rights reserved.