How to legislate for AI in an age of uncertainty - FT中文网

How to legislate for AI in an age of uncertainty

We need laws that only kick in once we know the impact of the technology

The writer is professor of law at Penn State

We don’t know the future, nor how artificial intelligence will affect it. Some believe AI will lead to explosive economic growth, others are convinced it will immiserate all but a select few, while still others aver that its economic impact will be marginal.

So how do legislators do their job under such uncertainty? Currently, they don’t. Through inaction, they have abrogated their responsibility to promote the health, safety and welfare of voters, adopting a wait-and-see approach. If they delay too long, the technology may already have harmed society and minted new billionaires ready to capture future regulatory processes. Yet regulating too early also carries risks, inadvertently hampering innovation.

There is a better way to proceed that allows us to respond proactively to uncertainty. We need adaptive AI laws that detail how to react to each possible future harm or benefit but that don’t kick in until we see how AI is transforming society. Such adaptive AI laws would be passed now and take effect automatically when benchmarks are met to hedge social risks and distribute benefits.  

Adaptive AI laws could take many forms and borrow tools from elsewhere, such as the decision trees used in machine learning. For example, politicians could pass one set of laws now that takes effect if job losses mount, triggering policies such as supplemental unemployment benefits and increased taxes on the rich. Another set of acts could be triggered by job growth, such as improved sick leave and fewer corporate subsidies. Benefits and taxes could rise on sliding scales tied to job losses or income inequality. Some responses, such as instituting a universal wage, could activate under numerous scenarios, whether economic inequality got too bad or economic growth exploded.

Such adaptive AI legislation could be applied to a range of other fields. It was initially unknown how harmful social media would be for children. If it emerges that AI has similarly negative effects, adaptive laws could restrict children’s access to it. If AI instead advances mental health goals, more public resources could be allocated. 

It’s easy to assume that AI will enhance our learning and expertise, yet a recent study showed that oncologists were 20 per cent worse at detecting precancerous growths on their own after having relied on AI as a detection aid. Adaptive AI regulation could grapple with the uncertain effects of AI on education across disciplines and ages. 

Many benchmarks for triggering the activation of an adaptive law could draw from reliable governmental data, such as on income distributions, educational attainment and lifespans. Triggering measurements for other topics, such as mental health, would be more complex to gauge and would benefit from bipartisan legislative guidance and monitoring from designated agencies. If partisan disagreement surfaces about whether a triggering event has occurred, courts can fulfil their established role as legal interpreters. 

Adaptive AI laws would provide three main benefits. First, they would empower lawmakers to act now to avoid future problems, to be proactive instead of reactive. This would occur without sacrificing flexibility, because legislators could always change the adaptive regulation in response to technological developments or social change. Second, the structure of adaptive regulation would encourage lawmakers to think more deeply about the different possible paths that AI could take, which one hopes would lead to better policies. Third, adaptive AI regulations would provide a more stable regulatory framework for AI labs, creating legal clarity by informing labs how laws will automatically change depending on their actions. 

This wouldn’t be the first time that legislatures adopted laws that only activate under certain scenarios. For example, states have passed trigger laws contingent on developments related to abortion, Medicaid and rent control.

AI labs have already made voluntary if-then commitments, pledging to enhance safety measures once AI models reach a particular capability. Yet these commitments are nonbinding, not universal and touch only on safety considerations, not on how AI will affect society more broadly.

We can reasonably imagine the different effects AI might have on society, but we can’t predict which path it will take. Adaptive AI laws would allow us to think through how to regulate the technology now, without delaying until it’s too late or hurting innovation by having regulations kick in immediately. To manage the potential AI revolution, law needs one of its own.

