Group Pushes for Mandatory AI Vetting in U.S. Government Contracts

Why voluntary rules fall short

Americans for Responsible Innovation argues that the voluntary guidelines some agencies have adopted give contractors an easy out: without a mandate, a company can skip thorough testing and still win work. The group contends that when a flawed AI system harms someone — denying benefits, flagging innocent people, making wrong predictions — the damage can't be undone, so pre-deployment checks are the only way to prevent those outcomes.

The group didn't name specific agencies or contracts. But it stressed that the sheer number of AI tools in government means even one bad model could affect millions of people.

What mandatory vetting would involve

Under the group's proposal, any AI model intended for use in a government contract would have to pass a standardized set of tests covering fairness, accuracy, security, and reliability. The group hasn't specified who would run those tests or what the pass-fail criteria would be; it is calling on the government to develop those details, either through an executive order or through legislation.