

Next Steps on the U.S. AI Bill of Rights

by Dr. Lorraine Kisselburgh and Marc Rotenberg

Nov 2, 2021 | Politics, Technology

PHOTO CREDIT: Jonathan McIntosh

The President’s top science advisors, Dr. Eric Lander and Dr. Alondra Nelson, have called for a Bill of Rights for Artificial Intelligence. In a recent commentary for Wired, they drew powerful analogies with traditional areas of government regulation. “It’s unacceptable to create AI systems that will harm many people,” they wrote, “just as it’s unacceptable to create pharmaceuticals and other products—whether cars, children’s toys, or medical devices—that will harm many people.” Building on our country’s foundational principles, Lander and Nelson said, “in the 21st century, we need a ‘bill of rights’ to guard against the powerful technologies we have created.”

We could not agree more. In 2018, working with computer scientists and legal experts, we helped develop the Universal Guidelines for Artificial Intelligence (UGAI), the first human rights framework for artificial intelligence. Our aim was to outline a set of basic rights and responsibilities for the use of AI. In drafting these guidelines, we wrote that the growth of AI decision-making systems “implicates fundamental rights of fairness, accountability, and transparency.” We also explained that AI “produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing.” We asserted that to maximize the benefits and minimize the risks of AI, these rights must be incorporated into law, ethical codes, and standards for system design.

The UGAI set out a dozen principles: rights to fairness, accountability, transparency, and human determination; obligations of accuracy, security, and public safety; and prohibitions on practices such as secret profiling and social credit scoring, which enable industry and government to surveil individuals, collect their data, and discriminate against them. The Chinese social scoring system, for example, gathers real-time data on citizens, including biometrics and social behaviors, and generates a social score that can determine one’s employment, housing, and transportation options.

Most critically, we described these principles as rights and obligations for the use of AI. This means that people who are subject to AI-driven decisions should have certain rights, and those who design and deploy AI systems must assume certain responsibilities. The Universal Guidelines for AI look very much like the AI Bill of Rights the President’s top science advisors are proposing.

More than 300 experts and 60 organizations, including leading scientific and computing societies, endorsed the Universal Guidelines for AI. The UGAI has provided the basis for recommendations to national governments and international organizations developing AI strategies. We are especially pleased to see concrete proposals to prohibit social scoring in recent policy initiatives such as the AI Act of the European Union and the UNESCO Recommendation on the Ethics of AI.

Many of the principles contained in the UGAI anticipate the challenges set out by Dr. Lander and Dr. Nelson. For example, our first recommendation—a Right to Transparency—states: “All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome.” With so many opaque decisions today in education, hiring, employment, and criminal justice, it is vitally important to establish algorithmic transparency as a foundational principle.

Those who worked with us on the Universal Guidelines also believed strongly in the principle that humans must remain responsible for the systems they create. That is not simply the concept of a “human-in-the-loop” but also a requirement that institutions deploying AI systems maintain control of them. We believe there is an obligation to terminate a system if human control is no longer possible. That includes everything from power grid controllers to semi-autonomous weapons systems.

Much has happened in the AI policy world since we launched the UGAI. In 2019, the Organization for Economic Co-operation and Development (OECD) countries, which include the United States, established the first global framework for governmental AI policy (the OECD AI Principles). That same year, the G20 countries, including China and Russia, adopted their own framework (the G20 AI Principles). The European Union is currently developing a comprehensive legal framework for AI regulation, and UNESCO will soon adopt the first global ethical framework.

While the United States played a key role in the development of the OECD AI Principles, it has mostly stood on the sidelines as other countries have pursued AI legislation. Still, we were glad to see Secretary Blinken recently tell the OECD, “We must ensure that advances in technology are used to lift people up and advance human freedom—not suppress dissent, further entrench inequities, or target minority communities.” Earlier this year, National Security Advisor Jake Sullivan expressed support for the EU AI initiative. And the recent joint statement from the new U.S.-EU Trade and Technology Council suggested a possible transatlantic convergence on AI policy.

We are also glad to see the emergence of “democratic values” in the global discussions around AI policy. We do believe there are two AI futures—one that favors freedom, pluralism, privacy, and dignity, and another that points toward centralization and control. From a computing perspective, absent carefully crafted rules, there is good reason to believe the second outcome is more likely. Data naturally favors centralization and control. But political will, leadership, and collaboration among like-minded nations hold out the possibility that new technologies can be both trustworthy and human-centric.

Having developed the Universal Guidelines and followed recent AI policy developments closely, we have several suggestions for the President’s science advisors as this process moves forward:

  • First, aim for a small number of clear, powerful principles. Avoid unnecessary qualifiers, loopholes, and exceptions. If you intend to influence AI policy, the goals must be clearly stated.
  • Second, build on prior initiatives. The Universal Guidelines for AI, already widely endorsed by the AI community, provide a good starting point, but there is more to do. Look also at the recent international frameworks, and consider the words of European Commission President Ursula von der Leyen, who last year called for a transatlantic accord on AI, built on “human rights, and pluralism, inclusion and the protection of privacy.”
  • Third, proceed on a bipartisan basis. Lawmakers have already put in place the foundation for a bipartisan policy with the Congressional AI Caucus. And many of the policies currently pursued by the White House Office of Science and Technology Policy emerged through collaboration across administrations of both political parties. Eliminating bias, promoting fairness, and ensuring accountability and transparency for AI-based systems could also help align the political parties behind a common national purpose.
  • Fourth, do not delay. Many in the AI policy field have become frustrated with the explosion of AI ethics recommendations. Although well intentioned, those frameworks have done little to curb unfair business practices or government surveillance ambitions. Ethics may be a starting point for AI policy, but it is not the endpoint. There is an urgent need now to make automated hiring and performance decisions fairer and more transparent. The increasing use of AI in judicial decision-making (including predictive algorithms for criminal sentencing) requires much greater scrutiny by independent authorities.

As the leading developer of AI technologies, the United States carries a unique responsibility to get this right. The President’s science advisors have launched a critical initiative. Their recommendations should build on earlier work and lead to concrete outcomes.

 

Dr. Lorraine Kisselburgh is a social scientist and faculty fellow at Purdue University, and was Inaugural Chair of the Technology Policy Council of the Association for Computing Machinery.

Marc Rotenberg is the founder of the Center for AI and Digital Policy and the editor of the AI Policy Sourcebook. He teaches privacy law at Georgetown Law.

 
