Regulation of AI Sparks Debate as Companies Grapple with Risks and Rewards

A recent surge in the development of machine learning models, often referred to as “artificial intelligence,” has sparked discussion about the need for regulation. Executives at the companies building these AI products have been active participants in that debate. In testimony before US lawmakers in June, for instance, OpenAI co-founder and CEO Sam Altman emphasized the importance of government intervention to mitigate the risks posed by increasingly powerful models.

In a 2022 survey of “AI experts,” the median respondent estimated a 10 percent chance of an existential disaster caused by AI. There are also plenty of examples of non-existential risks from AI, such as lawyers citing fake legal cases or media outlets publishing inaccurate stories. In light of these concerns, MIT economics professor Daron Acemoglu and graduate student Todd Lensman have developed what they claim to be the first economic model of how to regulate transformative technologies.

The authors propose several assumptions about transformative technology: it can enhance productivity across various sectors, but it can also be misused, intentionally or unintentionally, with disastrous results. To explore how businesses would respond under these assumptions, the economists formulate a mathematical model. They conclude that deploying new transformative technologies slowly is preferable, because it allows a better understanding of both their potential benefits and their risks.
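The paper's full model is richer than this, but a stripped-down expected-welfare expression (our simplification and notation, not the authors') captures the basic trade-off:

W(x) = g·x − p(x)·D,   with p′(x) > 0,

where x is the share of sectors that adopt the technology, g is the productivity gain per adopting sector, p(x) is the probability of a disastrous misuse, which rises with adoption, and D is the damage if that disaster occurs. A planner would expand adoption only as long as the marginal gain g exceeds the marginal expected harm p′(x)·D, and because p is initially uncertain, moving slowly buys time to learn about it before many sectors are committed.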

Gradual deployment also provides flexibility to change course if unforeseen risks emerge before multiple industries become dependent on the technology.

The economists argue that some form of regulation is necessary because private firms bear only part of the costs of AI misuse and therefore have an incentive to adopt it faster than is socially optimal. To bring adoption down to an appropriate pace, they first consider tax schemes but find them ineffective in theory. Instead, they propose combining a tax on transformative technologies with sector-specific restrictions that initially confine their use to low-risk areas. This approach, known as a “regulatory sandbox,” is already commonly used for new technologies and could delay the adoption of machine learning in high-risk sectors until its implications are better understood.
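The externality is easy to state in the same stripped-down notation (again ours, not the paper's): a firm that captures the full gain g but bears only a fraction φ < 1 of the disaster cost keeps adopting until g = φ·p′(x)·D, which implies more adoption than the social optimum where g = p′(x)·D. A uniform tax can narrow that gap on average, but if riskiness varies widely across sectors, a single rate arguably cannot discourage the dangerous uses without also discouraging the safe ones, which gives one intuition for pairing a tax with sector-by-sector limits.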

While the authors present a strong case for slower adoption of transformative technology, they acknowledge that their assumptions may be flawed. They highlight the possibility that faster adoption could increase knowledge about the technology, thereby reducing risks. They suggest that future research should explore how experimentation in certain sectors can be conducted without significantly increasing overall risk.

Economist Tyler Cowen of George Mason University offers an alternative perspective that challenges the authors’ conclusions, raising the concern that rival nations, particularly China, could develop AI that is less safe or more threatening. Proponents of faster machine learning adoption argue that using AI in risky applications, such as weapons systems, is necessary to maintain a competitive edge. However, this argument still requires clear regulation to draw distinct boundaries between the US and other nations. Advocates for AI safety emphasize the importance of establishing laws to prevent potential misuse, such as mass surveillance, rather than embracing dystopian technologies in a race to be first.