The Center for Long-Term Cybersecurity has issued a new report that assesses how competitive pressures have affected the speed and character of artificial intelligence (AI) research and development in an industry with a history of extensive automation and an impressive safety record: aviation. *The Flight to Safety-Critical AI: Lessons in AI Safety from the Aviation Industry*, authored by Will Hunt, a graduate researcher at the AI Security Initiative, draws on interviews with a wide range of experts from across the industry and finds limited evidence of an AI “race to the bottom” and some evidence of a (long, slow) race to the top.
Rapid progress in the field of AI over the past decade has generated both enthusiasm and rising concern. The most sophisticated AI models are powerful — but also opaque, unpredictable, and accident-prone. Policymakers and AI researchers alike fear the prospect of a “race to the bottom” on AI safety, in which firms or states compromise on safety standards while trying to innovate faster than the competition.
But current discussions of an existing or future race to the bottom in AI overlook two important observations. First, different industries and regulatory domains experience a wide range of competitive dynamics, including races to the top and to the middle, and claims about races to the bottom often lack empirical support. Second, AI is a general-purpose technology with applications across every industry. We should therefore expect the competitive dynamics surrounding AI, and their consequences, to vary significantly from one industry to the next.
The report analyzes the competitive dynamics surrounding AI safety on an issue-by-issue and industry-by-industry basis. Rather than discussing the risk of “AI races” in the abstract, the research focuses on how AI safety has been navigated in a particular industry: commercial aviation, where safety is critically important and automation is commonplace.
Do the competitive dynamics shaping the aviation industry’s development and rollout of safety-critical AI systems and technical standards constitute a race to the bottom, a race to the top, or a different dynamic entirely? The answer has implications for policymakers, regulators, firms, and researchers seeking to maximize the upside of continued AI progress while minimizing its downside.
Among the key findings outlined in the report:
- In most industries, empirical evidence of races to the bottom is limited. Previous studies of races to the bottom on environmental, labor, and other standards suggest that race-to-the-top dynamics may be equally or more common. In the case of AI specifically, few researchers have attempted to evaluate the empirical evidence for a race to the bottom.
- In the aviation industry, the lack of AI-specific standards and regulations has so far prevented the adoption of safety-critical AI. Many modern AI systems have features, such as data intensity, opacity, and unpredictability, that pose serious challenges for traditional safety certification approaches. Technical safety standards for AI are only in the early stages of development, and standard-setting bodies have thus far focused on less safety-critical use cases, such as route planning, predictive maintenance, and decision support.
- There is some evidence that aviation is engaged in a race to the top in AI safety. Industry experts report that representatives from firms, regulatory bodies, and academia have engaged in a highly collaborative AI standard-setting process, focused on meeting rather than relaxing aviation’s high and rising safety standards. Meanwhile, firms and governments are investing in research on building certifiably safe AI systems.
- Extensive regulations, high regulatory capacity, and cooperation across regulators all make it hard for aviation firms either to cut corners or to make rapid progress on AI safety. Despite the doubts raised by the tragic Boeing 737 MAX crashes, regulatory standards for aviation are high and relatively hard to evade. The maintenance of high safety standards depends in part on regulators’ power to impose significant consequences on firms that do attempt to cut corners.
“A deep dive into the aviation industry provides grounds for optimism that, in at least one safety-critical domain, firms and regulators are approaching AI tentatively, with ample awareness of the risks these systems pose,” Hunt concludes. “[Aviation] experts attested to the importance of learning slowly about AI, experimenting first with the least safety-critical applications and investing time and money in improving understanding of these systems before moving toward valuable but more safety-critical applications. For now, there is little sign that competitive pressures are sufficient to overwhelm safety imperatives.”
Download the Report