The U.S. Department of Defense, the Pentagon has launched a “bias bounty” program offering up to $24,000 in rewards. This initiative aims to identify and address real-world examples of unfairness within AI models currently employed by the military.

The program specifically targets large language models (LLMs), a type of AI capable of generating text, translating languages, and even writing different kinds of creative content. The initial challenge focuses on Meta’s open-source LLM, LLaMA-2 70B, and tasks participants with eliciting demonstrably biased outputs through specific prompts and interactions.

While the bounty offers the largest prize of $10,000 for the most impactful submission, even approved participants who can showcase examples of bias will receive $250. Submissions will be judged based on several criteria, including the realism of the scenario, relevance to specific protected classes, supporting evidence, conciseness, and the number of prompts required to replicate the bias.

This program represents the first of two planned “bias bounties” by the Pentagon. It highlights the growing concerns surrounding potential biases in AI systems, particularly those used in critical domains like national security. Biases in AI algorithms can lead to discriminatory outcomes, such as unfair hiring practices or inaccurate facial recognition software disproportionately affecting certain groups.

By crowdsourcing the search for biased outputs, the Pentagon hopes to gain valuable insights into the limitations of current AI models and identify areas where improvement is needed. Additionally, the program aims to raise awareness about the ethical implications of AI development and encourage responsible practices within the field.

However, some experts caution that such bounty programs might have limitations. Critics argue that focusing on specific, pre-selected models might overlook broader systemic biases ingrained in AI development processes. Additionally, the potential for malicious actors to exploit the program for manipulation or misinformation raises concerns.

Shares: