Voice phishing is AI fraud in real time - FT中文网
Just as we learnt to treat emails with caution, we must now learn to doubt a human-sounding voice
The writer is an AI researcher at Bramble Intelligence and worked on the State of AI Report 2025

Until recently, building an artificial intelligence system that could hold a convincing phone conversation was a laborious task. You had to combine separate tools for speech recognition, language processing and speech synthesis, all linked through fragile telephony software.

This is no longer true. The arrival of real-time, speech-native AI models such as OpenAI's Realtime API, launched last year, means a system that once required multiple components can now be created in minutes.

Publicly available code can connect these models to a phone line. The AI model listens, "thinks" and responds in an instant. The result is a synthetic voice that can converse fluently, improvise naturally and sustain a dialogue in a way that feels human.

In the past year we have moved from the theoretical possibility of wide-scale AI-enabled voice phishing, or "vishing", scams to the reality. Last year, UK engineering firm Arup was defrauded of $25mn in a deepfake scam, while a vishing attack on Cisco succeeded in extracting information from a cloud-based customer relationship management system it used.

What once demanded expert knowledge is now available, pre-packaged, for anyone to exploit. Low-latency voice-native models have removed the final technical barriers to real-time AI voice fraud.

In testing, it took me only a few lines of instruction to make such a system act like an HR manager calling about the payroll or a fraud officer warning of suspicious activity. Because AI can reason and change strategy in real time, its manipulation is adaptive.

The technology itself has legitimate uses, such as healthcare follow-ups, customer service or language tutoring. But the same accessibility that enables innovation also enables harm. A single operator could in theory launch hundreds of thousands of fraudulent calls a day, each one tailored to its target.

This threat is compounded by the increasing realism and low cost of platforms such as ElevenLabs or Cartesia, which can facilitate voice cloning from very short audio samples.

In the case of public figures, it is possible, and relatively easy, to gather hours of audio and produce a compelling approximation of their voice without their knowledge. Public officials have already been impersonated in such attacks, according to the FBI, which has warned the public not to assume that messages claiming to be from a senior US official are authentic.

MIT's Risk Repository, a database of over 1,600 AI risks, shows that in the past five years the proportion of AI incidents associated with fraud has risen from around 9 per cent to around 48 per cent.

The scale of this cyber crime means voice-verification systems that identify customers by their speech patterns are now a liability. Sensitive requests and high-value transactions should require multi-factor verification that does not depend on how someone sounds.

For the rest of us, the lesson is simple: the voice on the other end of the line is no longer evidence of who is speaking. Just as we have learnt to treat emails with caution, we must now learn to doubt a human-sounding voice. In time, we may need to create vocal watermarks or digital signatures that verify speech as genuine.

Debates around AI are sometimes framed in existential terms. But it is the smaller risks that will reach us first.

Fraud and impersonation corrode trust in everyday communication. These supposedly mundane crimes are the front line of the AI transition. The same ingenuity that created the tools must be applied to securing them.

The real disruption of generative AI, the quiet, invisible kind, has already arrived. It will not announce itself with superhuman intelligence but with a phone call.

Copyright notice: This article is copyright of FT中文网. Without permission, no organisation or individual may reproduce, copy or otherwise use all or part of this article; infringement will be pursued.
