Biden Issues Nation’s First AI Executive Order

The Biden Administration is moving forward with a first-of-its-kind artificial intelligence executive order aimed at creating new standards for AI safety and setting up new guardrails to prevent potential misuse. Today’s executive order marks a key inflection point in AI regulation, as lawmakers in governments around the world grapple with how best to prevent harm from the emerging and unpredictable technology.


Biden’s order breaks down into eight categories:

- New Standards for AI Safety and Security
- Protecting Americans’ Privacy
- Advancing Equity and Civil Rights
- Standing Up for Consumers, Patients, and Students
- Supporting Workers
- Promoting Innovation and Competition
- Advancing American Leadership Abroad
- Ensuring Responsible and Effective Government Use of AI

On the standards front, the order will ask major AI companies to share their safety test results with the government and to develop new tools to ensure AI systems are safe and trustworthy. It also calls on AI developers to create a variety of new tools and standards to protect against all sorts of AI doomer catastrophe scenarios, from AI-generated bioweapons to AI-assisted fraud and cyberattacks.

Federal government agencies will work alongside private industry here. The National Institute of Standards and Technology (NIST), for example, will be responsible for developing standards for “red teaming” AI models before their release to the public. The Department of Energy and Department of Homeland Security, meanwhile, will look into the potential threats to infrastructure and other critical systems.

Elsewhere, the order focuses on efforts to prevent AI from being used to discriminate against people. It also calls for the development of new criminal justice standards and best practices to determine how AI is used in sentencing, parole, and pretrial release, as well as in surveillance and predictive policing. Notably, the text of the order does not call for outright bans on any of these use cases, which some privacy advocates had hoped for.


In addition to AI uses, Biden’s order also tries to address the data used to train increasingly powerful models. It specifically calls on government agencies to evaluate how they collect and use commercially available information, including data procured from data brokers.

The order builds off previous voluntary commitments from seven of the world’s leading AI firms around watermarking and testing requirements. Those “commitments” essentially amounted to self-policing on the part of the tech giants. This order, by contrast, carries the weight of the executive pen, though it’s unclear how far government agencies will go to punish firms deemed out of step with the new guidelines. Still, White House Deputy Chief of Staff Bruce Reed told CNBC he believes these and other guidelines included in the order mark “the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”

This is a developing story, and we will be updating this page with additional information.
