President Joe Biden is receiving wide applause from Black leadership for his executive order that seeks to ensure artificial intelligence (AI) remains within boundaries that respect civil rights and adhere to principles of democracy. But the question remains whether the executive order goes far enough to protect Black people, particularly from abusive law enforcement.

“We believe in the potential for AI to be a powerful tool to help advance our vision of opportunity and prosperity for Black and Brown people. But we cannot let the tools of the future reinforce the mistakes of the past. Guardrails must be implemented now to ensure that this emerging technology centers equity at every step of development and implementation,” said Damon Hewitt, president and executive director of the Lawyers’ Committee for Civil Rights Under Law (LCCR), in a statement issued following Biden’s signing of the executive order.


“This executive order is a critical step to help guard against algorithmic bias and discrimination. It can be the beginning of a pathway to a future where AI empowers instead of oppresses.”

Hewitt says the executive order prepares the federal government “to prevent and address bias and discrimination in new technologies; but more action is needed to fully address harmful AI uses by law enforcement.”


Tech experts have pointed out that abusive AI tactics have been racially biased, especially against Black people.

An article titled “Racial Discrimination in Face Recognition Technology,” written by Harvard University biotech consultant Alex Najibi, points out that face recognition technology, often used by police departments, in airport screening, and in employment and housing decisions, has been known to involve “significant racial bias, particularly against Black Americans.”

Najibi adds, “Even if accurate, face recognition empowers a law enforcement system with a long history of racist and anti-activist surveillance and can widen pre-existing inequalities.”

He writes that “despite widespread adoption, face recognition was recently banned for use by police and local agencies in several cities, including Boston and San Francisco,” because face recognition “is the least accurate” of biometric identification technologies such as fingerprinting.

While applauding the Administration for its initial steps to direct agencies to determine how AI is used in criminal justice, the LCCR says Biden’s executive order does not go far enough to address “harmful uses of AI by law enforcement agencies, such as the discriminatory use of facial recognition technologies.”

Former President Barack Obama, who also released a statement, pointed out that he asked his staff seven years ago to study “how artificial intelligence could play a growing role in the future of the United States.”

He pointed out additional problems that could occur, including national security threats.

“We don’t want anyone with an internet connection to be able to create a new strain of smallpox, access nuclear codes, or attack our critical infrastructure. And we have to make sure this technology doesn’t fall into the hands of people who want to use it to turbocharge things like cybercrime and fraud,” Obama states.

He credited organizations ranging from the Leadership Conference on Civil and Human Rights and Upturn to the Alignment Research Center for “tackling these questions, and making sure more people feel like their concerns are being heard and addressed.”

The Leadership Conference, led by its president, Maya Wiley, wrote a letter to Biden and Vice President Kamala Harris on August 4 urging the Administration to focus the executive order on “protecting the American public from the current and potential harms of this technology — including threats to people’s rights, civil liberties, opportunities, jobs, economic well-being, and access to critical resources and services.” That letter was co-signed by the LCCR, the NAACP, and the Center for American Progress, among others.

Among other provisions, the executive order will:

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
  • Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
  • Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.
  • Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
  • Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software.
  • Order the development of a National Security Memorandum that directs further actions on AI and security.

The executive order focuses primarily on ensuring a fair and safe future for the use of AI. But the LCCR insists the order needs more work and vows to continue working with the Administration to that end.

Hewitt concluded, “To make that future a reality, civil rights-focused protections must apply to every aspect of our lives touched by AI technology, including the harmful use of AI by law enforcement. We look forward to working with the Biden Administration on how we can address the full scope of this challenge and fully leverage the opportunity before us.”