
White-collar workers are quietly rebelling against AI: 80% of employees resist mandated use

Nick Lichtenberg
2026-04-25

Workers are afraid of what happens when the AI tools work “too well.”


AI adoption is entering its “quiet quitting” phase. Image source: Getty Images


Translator: Liu Jinlong

Reviewer: Wang Hao


There was a moment, not long ago, when “shadow AI” felt like a good-news story. Workers were sneaking ChatGPT and Claude past the IT department, using personal accounts to do what used to take hours in minutes. An MIT study published last year found that employees at more than 90% of companies were using personal chatbot accounts for daily tasks — often without approval — even as only 40% of those same companies had official LLM subscriptions. The shadow economy was booming. Management called it a governance problem. The workers called it getting the job done.

Now the data tells a different story. The tool that workers once raced to adopt covertly has become, for a large and growing share of the workforce, the tool they’ve stopped using altogether. Not because it doesn’t work. Because they’re afraid of what happens when it works too well.

A new global survey of 3,750 executives and employees across 14 countries, conducted by SAP subsidiary WalkMe for its fifth annual State of Digital Adoption report, finds that more than 54% of workers bypassed their company’s AI tools in the past 30 days and completed the work manually instead. Another 33% haven’t used AI at all. Combined, roughly eight in 10 enterprise workers are either avoiding or actively rejecting the technology their employers are spending record sums to deploy. Average digital transformation budgets rose 38% year-over-year to $54.2 million — yet 40% of that spend has been underperforming due to adoption failures.
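The survey arithmetic above can be sanity-checked in a few lines. The percentages and budget figure come from the article; the variable names and the implied prior-year budget are back-of-the-envelope additions of my own:

```python
# Figures quoted from the WalkMe survey; variable names are illustrative.
bypassed = 0.54        # bypassed company AI tools in the past 30 days
never_used = 0.33      # have not used AI at all

disengaged = bypassed + never_used
print(f"avoiding or rejecting AI: {disengaged:.0%}")   # 87%, i.e. roughly eight in 10

budget = 54.2          # average digital-transformation budget, $M
growth = 0.38          # year-over-year increase
print(f"implied prior-year budget: ${budget / (1 + growth):.1f}M")
print(f"underperforming spend (40%): ${budget * 0.40:.1f}M")
```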

Executives are blind to how employees really feel

What the early enthusiasm obscured is now visible in the numbers. Only 9% of workers trust AI for complex, business-critical decisions, compared to 61% of executives — a 52-point trust chasm. Eighty-eight percent of executives say their employees have adequate tools; only 21% of workers agree — a 67-point gap on tool adequacy alone. Executives and their employees are, in the report’s language, “describing fundamentally different companies.”
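The two headline gaps are plain percentage-point subtractions; a minimal sketch using the article's figures:

```python
# Percentages quoted from the WalkMe report.
exec_trust, worker_trust = 61, 9     # trust AI for business-critical decisions
exec_tools, worker_tools = 88, 21    # say employees have adequate tools

print(exec_trust - worker_trust)     # 52-point trust chasm
print(exec_tools - worker_tools)     # 67-point gap on tool adequacy
```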

The skeptics have data on their side, too. Steve Hanke, the Johns Hopkins economist, has been through enough technology cycles to know what hype looks like from the inside. “AI didn’t deliver,” he told Fortune recently. “Welcome to the real world. Forget the AI bubble. You know, it didn’t deliver. You look at all the surveys and yeah, everybody’s using it a little bit, but you dig into it and it hasn’t done much.” Hanke’s bottom line: “Productivity, by the way, it was weak. If AI delivered, productivity would be way up. You listen to these Silicon Valley guys and they say we’re gonna have GDP going to 5% or 6%. Productivity is gonna go up to six. It’s just not happening.”

That skepticism is, in its own way, consistent with what the WalkMe data is finding. Dan Adika, CEO and co-founder of WalkMe, has been tracking this divergence from the front lines. He meets regularly with CIOs and asks them a simple question: how many of your people are actually using AI to do meaningful work? “The numbers are sub-10%,” he said.

Adika reached for a metaphor, one this particular editor favors as well: AI is a sports car, defined by its potential speed. His favorite analogy, he said, is buying every employee a sports car they don’t know how to drive; they lack the AI skills.

Part of the problem is structural, not behavioral. “You buy every employee that sports car, the Ferrari, but they don’t know how to drive,” Adika said. “They don’t have fuel sometimes, which is the context. Knowing how to drive is the prompting. And in some cases, there are not even enough roads — there’s no API or MCP server to actually do what you want to do.” What do you do when you have a Ferrari, but no driver, no fuel, and no roads? You don’t go very fast.

Brad Brown, Global Head of Tax Technology & Innovation for KPMG in the U.S., used almost exactly the same metaphor in a separate interview with Fortune. “It’s like an F1 car driver,” he said. “The F1 car is amazing. But if you don’t have a skilled and talented driver, that tool’s not gonna do much for you.” The fact that two veteran technologists — one a founder, one a Big Four partner — converged on the same description unprompted suggests they are describing something they’ve both seen firsthand, repeatedly, at scale.

The chasm is costing companies

The downstream cost of that undriven Ferrari is now quantifiable. The WalkMe report found that workers lose the equivalent of 51 working days per year to technology friction — nearly two full months — up 42% from 2025. That’s 7.9 hours per week. Goldman Sachs economists reported this week that AI saves workers who use it correctly an average of 40 to 60 minutes per day. The math is almost symmetrical: the productivity AI gives to people who use it well is almost exactly equal to the productivity it destroys for people who can’t get it to work.
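The weekly figure follows from the annual one. This sketch assumes an 8-hour workday and a 52-week year — my assumptions, not necessarily the report's, which likely explains the small rounding difference:

```python
days_lost_per_year = 51   # working days lost to technology friction (WalkMe)
hours_per_day = 8         # assumed workday length
weeks_per_year = 52       # assumed working year

hours_lost_per_week = days_lost_per_year * hours_per_day / weeks_per_year
print(f"{hours_lost_per_week:.1f} hours/week")   # 7.8, close to the reported 7.9
```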

The old shadow AI story is still alive beneath the surface. Seventy-eight percent of executives say they want to discipline shadow AI use — yet only 21% of workers report ever being warned about AI policy, and 34% don’t even know which tools their employer has approved. Executives are threatening punishment for behavior that they’ve never explained is prohibited. The contradiction runs so deep that 62% of those same executives privately concede that the risk of unsanctioned shadow AI is overstated compared to the risk of not leveraging AI at all.

“The use of shadow AI isn’t a behavior to penalize — rather, it’s an opportunity to address a systemic gap,” said Keith Kirkpatrick, Vice President and Research Director of Enterprise Software Digital Workflows at The Futurum Group. “When employees use unapproved AI tools, they’re compensating for performance or efficiency gaps left by sanctioned tools and unclear governance.”

AI disengagement

What’s new — and what the data is only beginning to capture — is the layer beneath shadow AI. Workers who aren’t sneaking around the rules. Workers who aren’t doing anything.

Adika was asked what he’d call this dynamic. He paused. “They have pride in what they do,” he said, about workers who are resisting AI adoption. “They won’t let some AI bot take over, and they will always find and show the flaws in that tool compared to them.” It sounds, unmistakably, like quiet quitting — the pandemic-era phenomenon in which workers stopped going above and beyond without formally resigning. It could also be a very understandable frustration with AI tools that just won’t stop hallucinating, wasting as much time as they promise to save.

“The organizations that get this right won’t be the ones that just automated the most tasks,” Adika said. “They’ll be the ones that figured out when the human should act, when the agent should act, and how the handoff between them works. That handoff is where trust lives. And right now, most companies haven’t even started thinking about it.” To this point, the MIT study found that 90% of workers still prefer humans for mission-critical work, a clear reluctance to dive into the deep end.

Oracle has announced layoffs of tens of thousands of workers, following a similar announcement from Block, although critics see this as “AI washing,” or disguising over-hiring with a convenient excuse that happens to boost the stock price. The logic is not lost on the rank and file. “We will be in a certain point of time when we will feel uncertainty, fear, we’ll see layoffs,” Adika said. “So I think it’s kind of a transition period that will happen over time. But again, at the end of the day, people are not using it yet.”

Adika was also clear that workers staying away from AI are not wrong to sense something real — they’re wrong about the conclusion. “You wouldn’t see any CEO of a bank or insurance company go tomorrow and lay off a lot of people, because who will do the work?” He said he sees a “big issue” coming to a head because claims that AI will replace everyone will have to confront the fact that “it’s just not happening right now.”

The skilled driver problem

Brown said he’s spending more time than ever thinking about what it actually takes to close the gap between the Ferrari and the driver. At KPMG, he has begun categorizing the workforce into what he calls builders, makers, and power users — distinct tiers of AI capability with explicit career paths attached. “Our focus right now is to craft incentives and career paths to get all our people to that level,” he said. “It’s time for the humans to catch up to where the tech is.”

The critical insight in that framing is that the problem isn’t intelligence, nor is it even training in the traditional sense. “I think with your sort of human skills that you bring to the table in terms of critical thinking and judgment,” Brown said, “that’s going to lend people into being makers” — workers who can leverage AI tools fluidly, including using them to build new tools themselves. The workers most at risk, in his view, are not the ones who lack technical skill. They’re the ones whose employers haven’t given them a safe space, a path, or an incentive to try.

A third of the enterprise workforce has never used AI tools at all — and they report the lowest levels of support, the least training, and the highest anxiety about disruption. They are not, the WalkMe report notes carefully, resisting AI. They have simply not been reached. As to whether the evolution of these tools is outpacing workers’ ability to catch up, Brown acknowledged that he definitely feels a gap.

Evolving is possible—and important

What brought Hanke back around was all the time saved, once he figured out what he wanted to use AI for. “AI to me is kind of like another research assistant,” he said, “and it saves a hell of a lot of time because if I had a research assistant doing this stuff, I’d have to send them to the library. They’d be screwing around over there for a week doing something I can do on AI in about an hour.” The caveat: “You have to know what they’re good for.” And, crucially, you have to know enough about the subject matter to catch the errors. “I know what to ask AI. I know how to structure what I want done,” Hanke said, pointing to his decades of domain expertise across economics, commodities, and international finance.

His own trajectory — from outright banning student use to cautious skepticism to daily reliance — tracks the arc many serious thinkers have traveled. He said he went from “‘no’, to ‘maybe’, to ‘this is great—but some of these tools suck.’” His verdict on the tools themselves is characteristically blunt: “There are all kinds of AI. And some of it’s really crap. It depends on what you need.”

Brown’s view is that this is ultimately an optimistic story — but only for those who move. “The winners are the ones where you have your workforce effectively leveraging the capabilities of AI,” he said. “A workforce that’s not leaning into AI is going to be challenged. And a work environment that is overly oriented to AI without the value of the human workforce is going to struggle.”

The intellectual property in content published on Fortune China is exclusively owned or held by Fortune Media IP Limited and/or the relevant rights holders. Unauthorized reproduction, excerpting, copying, mirroring, or any other use is prohibited.