OpenAI, the San Francisco tech company that attracted global attention after releasing ChatGPT, said Tuesday it was unveiling a new version of its artificial intelligence software.
The software, called GPT-4, “can solve difficult problems more accurately, thanks to its broader general knowledge and problem-solving ability,” according to an announcement on OpenAI’s website.
In a video the company posted online, it said GPT-4 had a range of capabilities that the previous iteration of the technology lacked, including the ability to “reason” based on images users uploaded.
“GPT-4 is a large multimodal model (accepts image and text input, emits text output) that, while less capable than humans in many real-world scenarios, demonstrates human-level performance across a variety of professional and academic benchmarks,” OpenAI wrote on its website.
Andrej Karpathy, an OpenAI contributor, tweeted that the feature meant the AI could “see.”
The new technology is not available for free, at least for now. OpenAI said people could try out GPT-4 on its ChatGPT Plus subscription service, which costs $20 a month.
OpenAI and its ChatGPT chatbot have shaken up the tech world and made many people outside the industry aware of the possibilities of AI software, in part through the company’s partnership with Microsoft and Microsoft’s search engine, Bing.
But the pace of OpenAI’s releases has also raised concerns, as the largely untested technology has forced abrupt changes in fields from education to the arts. The rapid public development of ChatGPT and other generative AI programs has prompted some ethicists and industry leaders to demand guardrails for the technology.
Sam Altman, CEO of OpenAI, tweeted Monday that “we definitely need more regulation on AI.”
The company fleshed out GPT-4’s capabilities in a series of examples on its website: solving problems, such as scheduling a meeting among three busy people; scoring high on tests such as the Uniform Bar Exam; and learning a user’s creative writing style.
But the company also acknowledged limitations, such as social biases and “hallucinations,” in which the model asserts things with more confidence than its actual knowledge warrants.
Google, concerned that AI technology could reduce the market share of its search engine and of its cloud computing service, released its own software, known as Bard, in February.
OpenAI was launched in late 2015 with backing from tech billionaires including Elon Musk, Peter Thiel and Reid Hoffman, and its name reflected its status as a nonprofit that would follow the principles of open-source software freely shared online. In 2019, it moved to a “capped” for-profit model.
Now it has released GPT-4 with a degree of secrecy. In a 98-page document accompanying the announcement, company employees said they would keep many details close to the vest.
Notably, the paper said the underlying data on which the model was trained would not be publicly disclosed.
“Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar,” they wrote.
They added: “We plan to make further technical details available to additional third parties who can advise us on how to weigh the competitive and safety considerations above against the scientific value of further transparency.”
The release of GPT-4, the fourth iteration of OpenAI’s core system, had been rumored for months amid growing hype around the chatbot built on it.
In January, Altman tempered expectations of what GPT-4 could do by telling the podcast StrictlyVC, “People are begging to be disappointed and they will be.”
On Tuesday he asked for feedback.
“We’ve been doing the initial training of GPT-4 for quite some time, but it’s taken us a lot of time and a lot of work to feel ready to release it,” Altman said on Twitter. “We hope you enjoy it and we appreciate feedback on its shortcomings.”
Sarah Myers West, managing director of the AI Now Institute, a nonprofit organization that studies the effects of AI on society, said releasing such systems to the public without oversight “is essentially experimenting in the wild.”
“We have clear evidence that generative AI systems routinely produce error-prone, derogatory and discriminatory results,” she said in a text message. “We cannot rely solely on company claims that they will find technical solutions to these complex problems.”
OpenAI said it had scheduled an online demonstration on Google-owned video service YouTube for 1 p.m. PT (4 p.m. ET) Tuesday.