For years, innovation in artificial intelligence (AI) has outstripped its regulation. Now the European Union intends to change that. The AI Act is a proposed European law that would regulate artificial intelligence systems, the first broad law on AI by a major regulator anywhere. Expected to go into effect in January 2023, the AI Act aims to create a common regulatory and legal framework for AI, covering its development, its use, and the legal consequences of failing to meet its requirements.
The European Union's regulators compiled a long list of concerns that the new law is intended to address. One of the most important appears in Section 3.5 of the Act's Explanatory Memorandum:
The use of AI with its specific characteristics (e.g., opacity, complexity, dependency on data, autonomous behaviour) can adversely affect a number of fundamental rights enshrined in the EU Charter of Fundamental Rights.
The adverse impact of opacity is emphasized again in Paragraph 47 of the Act’s Preamble:
To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately.
High-risk systems are defined in Annex III of the Act. They include critical infrastructure that could endanger a person's life (like self-driving cars); vocational training that impacts educational or career attainment (e.g., scoring of exams); safety of products (like robot-assisted surgery); employment, workers management and access to self-employment (e.g., resume or CV-sorting); essential services (e.g., credit scores); law enforcement (e.g., evidence evaluation); migration, asylum, and border control management (including evaluation of passport authenticity); justice and democratic processes (e.g., applying law to facts); and surveillance (including biometric monitoring and facial recognition).
Indeed, virtually every major AI application being developed by government and enterprise will qualify as a high-risk AI system under the AI Act, and those applications will need "a certain degree of transparency." The transparency requirement is formalized in Article 13 Para. 1 of the Act:
High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
Article 13 imposes a regulatory burden that most AI systems cannot meet. Most AI applications today are built on neural networks, a decades-old technology. A neural network approximates patterns in data with an aggregate of millions of functions: a "black box" that has proven fast, accurate, and utterly inscrutable. It conceals its decision-making within many layers of artificial neurons, each tuned by countless parameters. At best, a neural network can offer an after-the-fact estimate of why it produced a result, and such estimates never amount to more than an educated guess. Further, neural network developers have only indirect control over what the AI does, and little insight into why it does what it does.
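To make the opacity concrete, here is a minimal, illustrative sketch using ordinary scikit-learn (a generic example, not any particular vendor's system; the dataset and layer sizes are arbitrary). Even a modest network exposes nothing but raw weight matrices, and the best available "explanation" is a post-hoc estimate such as permutation importance, computed from the model's behavior rather than its reasoning:

```python
# Illustrative sketch only: a generic scikit-learn MLP.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# A synthetic tabular dataset and a small two-hidden-layer network.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)

# The model's only native "explanation" is its raw parameters:
# thousands of numbers with no individual meaning.
n_params = sum(w.size for w in clf.coefs_) + sum(b.size for b in clf.intercepts_)
print(f"Parameters to interpret: {n_params}")

# The best available account is after the fact: permutation importance
# shuffles each feature and measures the accuracy drop. It is an
# educated guess about the model's behavior, not its reasoning.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: estimated importance {imp:.3f}")
```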
Since neural networks are almost totally opaque, they will be ineligible for use in high-risk AI applications under the AI Act. Adapting to the Act's regulatory regime will require businesses and governments in the EU to adopt new technology that can provide transparent, interpretable output to their users.
Fortunately, that technology already exists. It's called Understandable AI®, and it's the lifeblood of Diveplane.
Our approach to AI is designed around the principles of Predict, Explain, and Show. It transparently reveals the exact features, data, and certainty driving each prediction, giving users confidence that operational decisions are built on a foundation of fairness and transparency.
To achieve that, our Understandable AI® uses non-parametric, instance-based learning. Instance-based algorithms have a rich history, but early implementations suffered from notable limitations in accuracy and scalability.
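The core idea of instance-based learning is that a prediction can be traced directly to the stored training instances that produced it. The sketch below illustrates that general technique with a plain k-nearest-neighbors model in scikit-learn; it is a generic example of the approach, not Diveplane's proprietary implementation:

```python
# Generic instance-based learning via k-nearest neighbors; an
# illustration of the technique, not Diveplane's implementation.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

query = X[:1]                      # a record to classify
pred = knn.predict(query)          # Predict

# Explain and Show: the model can point to the exact training
# instances, and their distances, that drove the prediction.
distances, indices = knn.kneighbors(query)
print(f"Prediction: {pred[0]}")
for d, i in zip(distances[0], indices[0]):
    print(f"  training instance {i} (label {y[i]}) at distance {d:.3f}")
```

Because the prediction is literally a function of identifiable training records, there is no black box to reverse-engineer: the evidence is the explanation.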
Diveplane has developed a fast query system and probability-space kernel that make instance-based learning practical for enterprise use. Our machine learning platform, Diveplane Reactor™, provides accuracy that exceeds that of neural networks, combined with transparency, interpretability, and flexibility unparalleled in the AI space.
Diveplane Reactor supports many use cases, including prediction, anomaly detection, anonymization, and the creation of synthetic data, all from a single model. And because its output is transparent and interpretable, the Diveplane Reactor platform is what you need to become compliant with Article 13 of the EU AI Act.
When it comes to high-risk AI applications in the EU, the only choice going forward is Understandable AI®.