Hackers to Put AI Models From Google, OpenAI, More to the Test at Defcon 31

A number of AI tools are now publicly available, from Google Bard and Bing AI to the one that got this ball rolling: OpenAI’s ChatGPT. But what can these artificial intelligence models actually do, and where do they break down? We’ll soon find out, as hackers put the biggest names in AI to the test at Defcon 31.

The hacker convention is calling on attendees at this year’s gathering to “find bugs in large language models built by Anthropic, Google, Hugging Face, Nvidia, OpenAI, and Stability.”

Defcon 31 is scheduled for Aug. 10-13 in Las Vegas. The AI effort is being organized by AI Village, in partnership with Humane Intelligence, SeedAI, and the AI Vulnerability Database. The White House Office of Science and Technology Policy is also involved, as are the National Science Foundation’s Computer and Information Science and Engineering Directorate and the Congressional AI Caucus.

“This is the first time anyone is attempting to have more than a few hundred experts assess these models,” according to Defcon organizers. “The more people who know how best to work with these models, and their limitations, the better. This is also an opportunity for new communities to learn skills in AI by exploring its quirks and limitations.”

AI Village will oversee the AI-hacking event at Defcon. (Credit: Defcon)

AI Village organizers will provide laptops, access to each model, and a prize for the person who tests the models most thoroughly. “We will be providing a capture the flag (CTF) style point system to promote testing a wide range of harms,” the organizers say. “The individual who gets the highest number of points wins a high-end Nvidia GPU.”
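To picture how a CTF-style point system for LLM red-teaming might work, here is a minimal, hypothetical sketch. This is not the actual AI Village harness (those details haven’t been published); the harm categories, point values, and the query_model() stub are all illustrative assumptions.

```python
# Hypothetical sketch of a CTF-style scoring system for LLM red-teaming.
# NOT the actual AI Village harness; the harm categories, point values,
# and the query_model() stub below are illustrative assumptions.

HARM_POINTS = {
    "prompt_injection": 10,  # model follows instructions hidden in its input
    "misinformation": 15,    # model asserts a verifiably false claim
    "data_leakage": 25,      # model reveals information it should withhold
}

def query_model(prompt: str) -> str:
    """Stand-in for a call to one of the hosted models at the event."""
    raise NotImplementedError("wire this up to the event's model access")

def score_finding(scoreboard: dict, category: str) -> None:
    """Award points the first time a participant demonstrates a harm category."""
    if category in HARM_POINTS and category not in scoreboard:
        scoreboard[category] = HARM_POINTS[category]

# Example: a participant demonstrates two distinct harms.
scoreboard: dict = {}
score_finding(scoreboard, "prompt_injection")
score_finding(scoreboard, "data_leakage")
print("total points:", sum(scoreboard.values()))  # -> total points: 35
```

The design idea behind such a system is that scoring distinct harm categories, rather than raw volume of findings, pushes participants toward the “wide range of harms” the organizers describe.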

Participants will include expert researchers as well as “students from overlooked institutions and communities.” All are expected to abide by the hacker Hippocratic oath.

By partnering with a non-government entity, the White House says it wants unbiased, independent experts to determine whether the models meet the criteria in its AI Bill of Rights. The Office of Science and Technology Policy wrote the bill of rights and is overseeing the Defcon initiative.

“This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models,” the White House says.

The bill of rights identifies five principles the administration says “should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” It contends all people have the following rights when it comes to AI:

  1. You should be protected from unsafe or ineffective systems. 

  2. You should not face discrimination by algorithms, and systems should be used and designed in an equitable way. 

  3. You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.

  4. You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.

  5. You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The Defcon event aims to expedite the discovery of issues that violate these principles. Companies like Google and OpenAI typically test their own technology behind closed doors or through their own bug bounty programs, an approach that leaves unresolved problems “security professionals and the public have to deal with,” Defcon organizers say.

“We love the explosion of creativity that new generative large language models (LLMs) allow,” Defcon says. “However, we’re only beginning to understand the embedded and emergent risks that come from automating these new technologies at scale.”

Vice President Kamala Harris and President Biden (Credit: Alex Wong / Staff / Getty Images)

The White House today also announced that the National Science Foundation has allocated $140 million to create seven new National AI Research Institutes, while the Office of Management and Budget will release a draft policy on the government’s use of AI systems for public comment this summer.

Vice President Kamala Harris also met with the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI to discuss the development of responsible, trustworthy, and ethical AI systems.
