
Regulation of AI is becoming increasingly important as the technology becomes more widely used and integrated into various sectors. Two approaches to regulating AI have emerged: horizontal and vertical regulation.

Horizontal regulation refers to the application of a single set of rules to all uses of AI across all sectors and applications. Regulatory authority for this approach typically lies with the central government, and the approach is characterized by its uniformity and stability. By creating one consistent, coherent framework that applies regardless of sector or application, horizontal regulation can provide citizens with a stable set of rights and assurances in their interactions with AI.

Examples of horizontal regulation include the EU AI Act, which proposes a horizontal regulatory framework for the EU in which AI systems across all sectors are subject to the same risk-assessment criteria and legal requirements. Another example is the US Algorithmic Accountability Act, which adopts a horizontal approach by directing the Federal Trade Commission (FTC) to mandate impact assessments of AI systems across sectors (subject to the size and reach of the enterprise).

However, this approach can lack the flexibility to address the specific needs and concerns of different sectors, as the regulations are designed to apply across the board. Additionally, it may be more difficult for the central government to develop regulations that can effectively address the unique challenges and opportunities presented by the various sectors.

Vertical regulation, on the other hand, refers to the application of regulations to only a specific application or sector of AI. Regulatory authority for this approach may be delegated to an industry body, which is responsible for developing regulations specific to its sector. This approach allows for greater flexibility, as regulations can be tailored to the specific concerns and opportunities of a given sector.

Examples of vertical regulation include the NYC bias audit mandate, which applies only to the use of automated employment decision tools and therefore regulates only the recruitment sector. Under this legislation, employers are required to commission a third-party audit of their systems to identify and mitigate bias. Another example is the Illinois Artificial Intelligence Video Interview Act, which governs the use of artificial intelligence to analyze video interviews, ensuring that the technology is used in a fair and unbiased manner in that specific context.

However, this approach can lead to a lack of standardization and coordination across sectors, which could be confusing for industry players and citizens alike. Furthermore, it can produce a multiplicity of responsible agencies and regulatory reporting requirements, leading to significant overlap and undue burden.

Both approaches have their own advantages and disadvantages and must be evaluated on a case-by-case basis to determine which approach is most appropriate for a given situation. It’s important to balance the need for flexibility and tailoring with the need for consistency and standardization in the regulation of AI.

As a company that specializes in artificial intelligence, Diveplane recognizes the importance of regulation in ensuring the safe and responsible use of AI. We believe that both horizontal and vertical approaches to regulation have their benefits and should be considered on a case-by-case basis.

On one hand, we support horizontal regulation as it provides consistency and stability for citizens and industry players alike. For example, the EU AI Act would create a consistent set of regulations for AI use across all sectors and applications, providing citizens with a stable set of rights and assurances in their interactions with AI. Similarly, the US Algorithmic Accountability Act is a step in the right direction to ensure that AI is used responsibly across all industries.

On the other hand, we also recognize the value of vertical regulation, which can be tailored to the specific concerns and opportunities of a given sector. For example, the NYC bias audit mandate helps to ensure that AI is used fairly and without bias in the hiring process. Similarly, the Illinois Artificial Intelligence Video Interview Act helps to protect the rights of job applicants in a specific use case of AI.

At Diveplane, we believe that it’s important to strike a balance between consistency and standardization through horizontal regulation and flexibility and tailoring through vertical regulation. By supporting both approaches, we can ensure that AI is developed, deployed, and regulated in a safe and responsible manner for all stakeholders involved.