The new model, known as Q*, allegedly heightened employees' concerns about safety, prompting a letter to the board of directors before CEO Sam Altman's dismissal.
Before Sam Altman was fired, OpenAI was allegedly working on a cutting-edge system so powerful that some employees at the company were alarmed by its safety implications.
According to Reuters, several OpenAI researchers were so concerned about the artificial intelligence model that they wrote to the board of directors before Altman's dismissal, warning that it could threaten humanity.
The tech news site The Information reported that the model, pronounced "Q-Star," was able to solve simple math problems it had never encountered before, and that its rapid progress had worried some safety researchers. The ability to solve unfamiliar math problems would be considered a major advance in artificial intelligence.
The reports came after several turbulent days at San Francisco-based OpenAI: Altman was fired by the board last Friday but reinstated on Tuesday night after almost all of the company's 750 employees threatened to quit unless he was brought back. Microsoft, OpenAI's largest investor, also backed Altman.
Artificial general intelligence (AGI) refers to a system that can perform a wide range of tasks at or above human levels of intelligence. Many experts worry that companies such as OpenAI are moving too fast towards AGI, which could, in theory, slip beyond human control.
According to Andrew Rogoyski of the University of Surrey’s Institute for People-Centred AI, the development of a large language model (LLM) that can solve mathematical problems would be revolutionary. “The intrinsic math prowess of LLMs is a significant advancement, opening up a whole new range of analytical possibilities for AIs,” he stated.
Speaking on Thursday last week, the day before his abrupt dismissal, Altman suggested that the company behind ChatGPT had reached another milestone.
“I’ve gotten to be in the room, four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” he said during an appearance at the Asia-Pacific Economic Cooperation (Apec) summit.
Initially established as a nonprofit, OpenAI is governed by a board of directors that oversees its for-profit subsidiary, which Altman leads. Microsoft is the largest investor in the for-profit company. Under the agreement in principle for Altman's return, OpenAI will have a new board led by Bret Taylor, a former co-chief executive of the software company Salesforce.
According to OpenAI, the creator of ChatGPT, the company was founded to build "safe and beneficial artificial general intelligence for the benefit of humanity," and the for-profit business is "legally bound to pursue the nonprofit's mission."
The nonprofit's emphasis on safety fueled speculation that Altman had been fired for jeopardizing that mission. But last week Emmett Shear, who briefly served as interim CEO after Altman's removal, said the board "did *not* remove Sam over any specific disagreement on safety."
We have reached out to OpenAI for comment.