OpenAI says it has begun training a new flagship AI model

OpenAI said Tuesday that it has begun training a new flagship artificial intelligence model that will succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT.

The San Francisco start-up, one of the world's leading artificial intelligence companies, said in a blog post that it expects the new model to bring “the next level of capability” as it strives to build “artificial general intelligence,” or AGI, a machine that can do everything the human brain can do. The new model would be an engine for artificial intelligence products including chatbots, digital assistants similar to Apple's Siri, search engines and image generators.

OpenAI also said it is creating a new safety and security committee to explore how to manage the risks posed by the new model and future technologies.

“While we are proud to build and release industry-leading models in both capability and safety, we welcome robust discussion at this important time,” the company said.

OpenAI aims to advance artificial intelligence technology faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread misinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of AI technologies for more than a decade, demonstrating a notable leap approximately every two or three years.

OpenAI's GPT-4, released in March 2023, allows chatbots and other software apps to answer questions, write emails, generate term papers and analyze data. An updated version of the technology, which was unveiled this month and is not yet widely available, can also generate images and respond to questions and commands in a highly conversational voice.

Days after OpenAI showed off the updated version – called GPT-4o – actress Scarlett Johansson said it used a voice that sounded “eerily similar to mine”. She said she rejected efforts by OpenAI CEO Sam Altman to license her voice for the product and that she had hired a lawyer and asked OpenAI to stop using her voice. The company said the voice was not that of Ms. Johansson.

Technologies like GPT-4o learn their capabilities by analyzing large amounts of digital data, including sounds, photos, videos, Wikipedia articles, books and news articles. The New York Times sued OpenAI and Microsoft in December, alleging copyright infringement of news content related to artificial intelligence systems.

Digitally “training” AI models can take months or even years. Once training is completed, AI companies typically spend several months testing the technology and refining it for public use.

That could mean that OpenAI's next model will not arrive for another nine months to a year or more.

As OpenAI trains its new model, its new safety and security committee will work to refine policies and processes for safeguarding the technology, the company said. The committee includes Mr. Altman, as well as OpenAI board members Bret Taylor, Adam D'Angelo and Nicole Seligman. The company said the new policies could be put in place in late summer or fall.

This month, OpenAI said that Ilya Sutskever, a co-founder and one of the leaders of its safety efforts, would be leaving the company. His departure raised concerns that OpenAI was not doing enough to address the dangers posed by artificial intelligence.

Dr. Sutskever had joined three other board members in November in removing Mr. Altman from OpenAI, saying he could no longer be trusted with the company's plan to create artificial general intelligence for the good of humanity. After a lobbying campaign by Mr. Altman's allies, he was reinstated five days later and has since reasserted control over the company.

Dr. Sutskever led what OpenAI called its Superalignment team, which explored ways to ensure that future AI models did not cause harm. Like others in the field, he became increasingly concerned that artificial intelligence posed a threat to humanity.

Jan Leike, who ran the Superalignment team with Dr. Sutskever, resigned from the company this month, leaving the team's future in doubt.

OpenAI has folded its long-term safety research into its broader efforts to ensure that its technologies are safe. That work will be led by John Schulman, another co-founder, who previously led the team that created ChatGPT. The new safety and security committee will oversee Dr. Schulman's research and provide guidance on how the company addresses technology risks.
