Rather than keep AI chatbot tech private, Meta has decided to open the floodgates and essentially give away the computer code for a new large language model.
Meta will open-source a new AI model, Llama 2, and make it free for research and commercial purposes. Meta wants entrepreneurs, startups, and developers to create an ecosystem around its AI tech at a time when the industry has gravitated toward OpenAI’s ChatGPT.
“Open source drives innovation because it enables many more developers to build with new technology,” Meta CEO Mark Zuckerberg wrote in a Facebook post, later adding: “I’m looking forward to seeing what you all build!”
“This is going to change the landscape of the LLM (large language model) market,” added Meta’s chief AI scientist, Yann LeCun.
It’s unclear how the technology compares to ChatGPT. But Meta says Llama 2 has been trained on 40% more data than the first Llama model, including over 1 million new human annotations. As a result, Llama 2 performs slightly better on various benchmarks involving reasoning, coding, proficiency, and knowledge.
The open-source approach could put pressure on OpenAI, which has been charging companies to use its own generative AI tech through an API. In contrast, Zuckerberg plans to let users download the Llama 2 source code from a company website, although users have to complete a form, which is then reviewed. “There is also an optimized version that you can run locally on Windows,” he added. Companies will also be able to access the AI model on Microsoft’s Azure cloud computing platform.
Still, the news raises questions over whether Llama 2 will fall into the wrong hands and end up fueling scams and other malicious acts. When the original Llama model was released to researchers in February, for example, copies of the code were leaked and quickly exploited to create an uncensored chatbot capable of disturbing responses.
Meta, however, argues that open-sourcing Llama 2 can make the technology safer.
“Opening access to today’s AI models means a generation of developers and researchers can stress test them, identifying and solving problems fast, as a community,” the company wrote in a separate blog post. “By seeing how these tools are used by others, our own teams can learn from them, improve those tools, and fix vulnerabilities.”
Meta executives also told The New York Times that people can already generate large amounts of misinformation and hate speech without using AI programs. Instead, releasing Llama 2 to the community could help the social network fight online toxicity, since the company is already trying to tap AI to crack down on rule-breaking content.
In addition, Meta says it’s been “red-teaming” Llama 2 to understand how people could misuse the large language model and to build guardrails in response. “These safety fine-tuning processes are iterative; we will continue to invest in safety through fine-tuning and benchmarking and plan to release updated fine-tuned models based on these efforts,” the company said.
Since Llama 2 is being open-sourced, users will be able to modify its code. However, Meta tells PCMag that Llama 2’s acceptable use policy includes language allowing the company to crack down on people found abusing the AI model. This can include initiating legal proceedings or terminating any agreements between Meta and a software developer.