Regulating innovative and novel technology always presents a challenge for lawmakers. If the regulations are too broad, they can stifle innovation and strangle a promising new industry in its infancy. If the regulations are too narrow, they can permit industry to accumulate dangerous levels of market power or develop products or practices that harm the interests of the public.
Since Artificial Intelligence (AI) is the most innovative field in technology today, it presents perhaps the greatest challenge to legislators. Lawmakers in the European Union and United States have risen to the challenge in very different ways.
The European Approach
In the EU, legislators have drafted a proposed AI Act expected to go into effect in January 2023. The EU AI Act will represent the first broad AI regulation by a major regulator anywhere. The EU AI Act presupposes that unregulated AI can adversely affect a number of fundamental rights enshrined in the EU Charter of Fundamental Rights; as such, it aims to “enhance and promote the protection of the rights protected by the Charter: the right to human dignity (Article 1), respect for private life and protection of personal data (Articles 7 and 8), nondiscrimination (Article 21) and equality between women and men (Article 23). It aims to prevent a chilling effect on the rights to freedom of expression (Article 11) and freedom of assembly (Article 12), to ensure protection of the right to an effective remedy and to a fair trial, the rights of defense and the presumption of innocence (Articles 47 and 48), as well as the general principle of good administration. Furthermore, as applicable in certain domains, the proposal will positively affect the rights of a number of special groups, such as the workers’ rights to fair and just working conditions (Article 31), a high level of consumer protection (Article 28), the rights of the child (Article 24) and the integration of persons with disabilities (Article 26).”
To accomplish these lofty goals, the AI Act puts in place a regulatory framework that defines AI systems by their level of risk, with riskier AI systems subject to stronger regulation. Among the important Articles of the AI Act are Article 5, which prohibits certain discriminatory or invasive AI practices; Article 9, which requires that high-risk AI systems implement risk management systems; Article 13, which requires that high-risk AI systems be transparent enough to enable users to interpret the system’s output and use it appropriately; Article 14, which requires that high-risk AI systems can be effectively overseen by human beings to minimize effects on health, safety, and rights; and Article 15, which requires that high-risk AI systems have high levels of accuracy and robustness.
The American Approach
In the US, the White House Office of Science and Technology Policy has issued a Blueprint for an AI Bill of Rights. According to the Blueprint, American citizens should:
- Be protected from unsafe and ineffective systems.
- Be protected from algorithmic discrimination, with systems used and designed in an equitable way.
- Be protected from abusive data practices and have agency over how data is used.
- Be notified that an automated system is being used and understand how and why it contributes to outcomes.
- Be able to opt out, where appropriate, and have access to a human being who can quickly consider and remedy problems.
These five principles are approximately analogous to the key articles in the EU AI Act. The first principle corresponds to Articles 9 and 15; the second principle corresponds to Article 5; the fourth principle corresponds to Article 13; and the fifth principle corresponds to Article 14. (The third principle is already covered by the EU’s General Data Protection Regulation.)
Despite this similarity, the AI Act and the Blueprint are fundamentally different. The EU AI Act establishes functional regulations that will protect existing rights held by EU citizens. The US Blueprint, on the other hand, describes new rights for American citizens but does not actually implement any regulations to protect them. In fact, it does even less than that: it suggests that we ought to have new rights, but does not even establish them. The Blueprint is just a “non-binding” white paper that “does not constitute U.S. government policy.” Indeed, “the Blueprint for an AI Bill of Rights is not intended to, and does not, create any legal right, benefit, or defense, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.”
As the leader in Understandable AI®, Diveplane has been at the forefront of activism for sensible regulation of AI systems. We have written elsewhere in support of the EU AI Act’s provisions, especially the all-important Article 13 requirement that users be able to understand and interpret AI. Understandable AI is the central mission of Diveplane, and regulatory requirements for auditable, explainable, interpretable technology are urgently needed to prevent unalignable “black box” neural networks from being deployed in socially critical systems. For similar reasons, we support the goals of the Blueprint. In particular, we see the fourth principle, which asserts that users should be able to understand the automated systems that affect their lives, as critically important to protecting the public from the real risk that uninterpretable AI will become unaligned AI.
However, like many others who have reviewed the Blueprint, we have some concerns about whether the Blueprint is the best path forward for AI regulation in the United States. We believe that a more robust regulatory structure will be required to promote enterprise and protect consumers.
When the Dotcom Revolution began, copyright law was well behind the technology it was meant to govern, and the early years of the internet era were marred by intense litigation targeting companies and consumers. Eventually, the comprehensive Digital Millennium Copyright Act (DMCA) was passed, which enabled internet businesses to thrive in the 2000s. We see a similar statutory framework (perhaps modeled on the EU AI Act) as necessary for AI companies and consumers. Companies need the reliability afforded by clear and objective regulations, which gives them the confidence to invest resources in innovative systems. Consumers need the security of legal protection for their data, privacy, and safety.
In short, both the AI industry and the American public would be better off if the United States created a comprehensive, balanced framework that both businesses and consumers could rely on. We hope that the next step in the development of the Blueprint for an AI Bill of Rights will move toward that goal.