The Pentagon-Anthropic dispute is a test of control

Should private companies be able to set boundaries around the AI systems we integrate into our lives?
{"text":[[{"start":6.64,"text":"The writer is a senior fellow at the Foundation for American Innovation and was lead staff writer of the Trump administration’s AI Action Plan "}],[{"start":17.53,"text":"On March 4, the US Department of Defense took an unprecedented move against an American company: designating the frontier AI start-up Anthropic a “supply chain risk”. Typically, this designation is applied to technology from foreign-adversary countries. In this instance, it was invoked over a contract dispute."}],[{"start":40.13,"text":"The conflict, which was largely blocked by a judge in California last week, centred on the question of where control over AI should rest. Neither side had the answer quite right."}],[{"start":52.38,"text":"Trump administration officials sought to renegotiate the terms of the Pentagon’s contract to use Anthropic’s Claude — the only large language model certified for use in classified US military contexts — not because they intended to violate the company’s red line on lethal autonomous weapons and mass surveillance, they say, but because they believe only US law should limit the military’s use of technology."}],[{"start":78.92,"text":"The principle is reasonable enough. But, as Judge Rita Lin stated, the proposed punishment of Anthropic was “arbitrary and capricious”. The correct solution would have been to cancel the contract and pass laws concerning the government’s use of AI systems. "}],[{"start":96.76,"text":"The ruling is not a final decision and will probably be appealed by the Trump administration. But the issue raises a broader set of challenges that governments and citizens will grapple with for decades to come: where, precisely, should the locus of control over powerful AI systems rest? 
Should private companies be able to set ethical boundaries for the AI systems that may one day underpin our lives?"}],[{"start":125.86000000000001,"text":"Some liken advanced AI to nuclear weapons and conclude that no technology so powerful should rest in private hands. But there are crucial differences between the two. Early iterations of the atomic bomb did not provide the consumer and commercial benefits that many derive from today’s AI. "}],[{"start":145.3,"text":"The notion of a government passing laws that dictate the moral, ethical and philosophical values of AI systems therefore appears as a stark violation of the principle of free speech that underlies democratic nations. The prospect of nationalisation of AI labs — which is the logical endpoint of the “nuclear weapons” analogy — seems like a profound and radical act of tyranny."}],[{"start":173.28,"text":"But herein lies the central challenge of AI governance: the “nuclear weapons” analogists may not be correct but they are right to be concerned. Advanced AI systems really do pose serious risks to national security. Frontier models from the biggest US AI companies are classified, by the companies’ own admission, as having high risk for cyber attacks and assistance in the creation of bioweapons, for example. And as the US defence department’s own usage makes clear, AI — not some future version of the technology but the systems we have today — can assist in creating lethal outcomes."}],[{"start":215.91,"text":"The good news is that advanced capitalist societies have dealt with this sort of challenge before. Our civilisations rest upon successive generations of foundational technologies — the printing press, banking, the automobile, electricity and the computer itself. All of these are technologies without which it is hard to imagine a modern military, and thus are essential for national security. Yet none, at least in the US, have been nationalised. 
Instead, the technologies and industries are overseen by political, legal, regulatory and technical institutions — often a hybrid of public and private bodies."}],[{"start":258.96,"text":"Erecting something similar for AI is perilous. The Pentagon’s dispute with Anthropic shows that even an administration that brands itself as pro-AI can easily veer into regulatory over-reach. The chances of erring in a way that stifles innovation are high. Time, in this AI race, is a resource even more scarce than computing power. "}],[{"start":292.03,"text":""}]],"url":"https://audio.ftcn.net.cn/album/a_1774856645_6864.mp3"}

Copyright notice: This article is the copyright of FT中文网. Without permission, no organisation or individual may reproduce, copy or otherwise use this article in whole or in part; infringement will be pursued.
