An OpenAI spokesperson told Bloomberg that the company has created an internal scale to track the progress its large language models are making toward artificial general intelligence, or AI with human-like intelligence.
Today’s chatbots, such as ChatGPT, are at Level 1. OpenAI claims it is approaching Level 2, defined as a system that can solve basic problems as well as a person with a PhD. Level 3 refers to AI agents capable of taking actions on a user’s behalf. Level 4 involves AI that can create new innovations. Level 5, the final step to achieving AGI, is AI that can perform the work of an entire organization of people. OpenAI has previously defined AGI as “a highly autonomous system that outperforms humans at most economically valuable work.”
OpenAI’s unique structure is centered on its mission to achieve AGI, and how OpenAI defines AGI matters. The company has stated in its Charter that “if a value-aligned, safety-conscious project gets close to creating AGI before OpenAI,” it is committed to not competing with that project and to dropping everything to assist it. That phrasing is vague, leaving room for the judgment of the for-profit entity (which is governed by the nonprofit), but a scale on which OpenAI can test itself and its competitors could help dictate when AGI is clearly reached.
Still, AGI remains a long way off: reaching it, if it is possible at all, will require trillions of dollars’ worth of computing power. Timelines vary widely among experts, and even within OpenAI. In October 2023, OpenAI CEO Sam Altman said we are likely “five years, more or less” away from reaching AGI.
This new grading scale, though still under development, was introduced just a day after OpenAI announced its collaboration with Los Alamos National Laboratory, which aims to explore how advanced AI models like GPT-4o can safely assist in biological research. A program manager at Los Alamos, who is responsible for the national security biology portfolio and was instrumental in securing the OpenAI partnership, told The Verge that the goal is to test GPT-4o’s capabilities and to establish a set of safety and other factors for the US government. Eventually, public or private models can be tested against these factors so organizations can evaluate their own models.
In May, OpenAI disbanded its safety team after the group’s leader, OpenAI co-founder Ilya Sutskever, left the company. Jan Leike, a leading researcher at OpenAI, resigned shortly afterward, claiming in a post that “safety culture and processes have taken a back seat to shiny products” at the company. While OpenAI has denied that this is the case, some are concerned about what it will mean if the company does in fact reach AGI.
OpenAI has not provided details about how it allocates models to these internal levels (and declined to share them with The Verge). However, company leaders demonstrated a research project using the GPT-4 AI model during an all-hands meeting on Thursday, and, according to Bloomberg, they believe the project shows off some new skills that exhibit human-like reasoning abilities.
This scale could help provide a strict definition of progress, rather than leaving it up to interpretation. For instance, OpenAI CTO Mira Murati said in an interview in June that the models in its labs are not much better than those already available to the public. Meanwhile, at the end of last year, CEO Sam Altman said the company had recently “pushed back the veil of ignorance,” meaning its models are notably more intelligent.