As AI continues to improve and diffuse, it will likely have important long-term consequences for jobs, inequality, organisations, and competition. These developments may spur interest in regulation as a potential means to address the risks and possibilities of AI. Yet, very little is known about how different kinds of AI-related regulation – or even the prospect of regulation – might affect firm behaviour.
The findings suggest several potential implications for the design and analysis of AI-related regulation. First, where possible, regulators should adapt regulations to the needs and concerns arising in specific industries. Although policymakers sometimes find compelling rationales for adopting broad regulatory responses to major problems such as environmental protection and occupational safety, cross-cutting AI regulation such as the proposed Algorithmic Accountability Act may have enormously complex effects and make it harder to take potentially significant sector characteristics into account.
Second, policymakers will do a better job designing and communicating regulatory requirements if they retain a clear focus on regulatory goals. Given the impact of industry sector and firm size on responses, policymakers would do well to calibrate AI regulation carefully to different technological and industry-specific use cases. While certain legal requirements and policy goals – such as reducing impermissible bias in algorithms and enhancing data privacy and security – may apply across sectors, specific sectoral features may nonetheless require distinctive responses. For example, the use of AI-related technologies in autonomous driving systems must be responsive to a set of parameters that are likely to differ from those relevant to AI deployment in drug discovery or online advertising.
Third, given the level of concern among the constituencies and target groups of regulation, policymakers should bear in mind the full range of regulatory tools available in the AI context. These include continued reliance on existing bodies of law relevant to AI, such as tort law and employment discrimination law, which courts or administrators can gradually elaborate. Policymakers should also consider the merits of soft-law governance of AI, as well as the costs and benefits of reliance on AI industry standards.