More haste, less speed: Safety matters most in A.I.


François Candelon, Theodoros Evgeniou 2021-09-11
The sooner CEOs come to grips with the value-risk tradeoffs of using A.I.-driven systems, the better they will be able to cope with both regulations and societal expectations in an A.I.-driven world.

Image credit: Getty Images
 
Two years ago, before Apple officially launched the Apple Card, there was much discussion about how the no-fee credit card would help the tech giant storm into the financial services business. Today, however, when people discuss the Apple Card, it is often because of glitches in the artificial intelligence algorithms Apple uses to set prospective cardholders' credit limits.

In November 2019, a Danish man tweeted that although he and his wife had applied for the Apple Card with the same financial information, he was granted a credit limit fully 20 times higher than hers, even though, as he admitted, his wife had the higher credit score. Adding fuel to the fire, Apple cofounder Steve Wozniak claimed that the same thing had happened to his wife. The Apple Card launched in August 2019, and by early 2020 an estimated 3.1 million Americans held one, so the problem may well have affected tens of thousands of women. A spate of complaints prompted an investigation by the New York Department of Financial Services. Although the regulator recently cleared the digital giant of gender discrimination, that came only after Apple quietly raised wives' credit limits to match their husbands'.

As companies begin to deploy A.I. at scale, attention is shifting from using the technology to create and capture value toward the inherent risks that A.I.-based systems entail. Watchdog efforts such as the Artificial Intelligence Incident Database have already documented hundreds of A.I.-related complaints, ranging from questionable scoring of students' exams to the improper use of algorithms in recruiting and the differential treatment of patients by health care systems. As a result, companies will soon have to comply with new regulations in several countries designed to ensure that A.I. systems are trustworthy, safe, robust, and fair. Once again, the European Union is leading the way: it outlined a framework last year in its White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, and it proposed a legal framework in April 2021.

Companies must learn to manage A.I. risks not only because doing so will become a regulatory requirement, but because stakeholders will expect it of them. According to a recent Economist Intelligence Unit study, as many as 60% of executives reported that their organizations decided against working with A.I. service providers last year out of responsibility-related concerns. To manage A.I. effectively, companies must grasp the implications of regulations and social expectations for their use of the technology while keeping its unique characteristics in mind, a subject we recently explored at length in Harvard Business Review. Indeed, figuring out how to balance the rewards of using A.I. against the risks could well become a new, and sustainable, source of competitive advantage.

To learn, or not to learn?

From the start, A.I.'s much-vaunted ability to keep improving by learning from the data it studies has been the characteristic that makes it a unique technology. But that virtuous cycle can lead to behavior that cannot always be anticipated, as Microsoft's chatbot Tay showed in 2016, or to outcomes that raise fairness concerns, as Amazon's use of A.I. to screen résumés vividly demonstrated. An A.I. system can make one decision one day and, after learning from the data it is subsequently fed, arrive at a vastly different decision the next. That is why regulators such as the U.S. Food and Drug Administration have so far approved only algorithms that do not evolve during use.
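
To make the mechanism concrete, here is a minimal, hypothetical sketch, not taken from the article, of how a model that keeps learning can reverse a decision once it is updated on a new batch of data. It assumes scikit-learn's SGDClassifier and uses purely synthetic, illustrative data.

```python
# Hypothetical sketch (not from the article): a continuously learning model
# can reverse a decision after being updated on new data.
# Assumes scikit-learn; the data and feature meanings are purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Day 1: train an initial credit-approval model on one batch of data.
X_day1 = rng.normal(size=(500, 3))                      # e.g., income, utilization, tenure
y_day1 = (X_day1[:, 0] - X_day1[:, 1] > 0).astype(int)  # 1 = approve
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_day1, y_day1, classes=[0, 1])

applicant = np.array([[0.3, 0.1, 0.0]])
print("Day 1 decision:", model.predict(applicant))

# Day 2: the deployed model keeps learning from a new, shifted batch,
# so the same applicant may now receive a different decision.
X_day2 = rng.normal(loc=[0.0, 2.0, 0.0], size=(500, 3))
y_day2 = (X_day2[:, 1] - X_day2[:, 0] > 0).astype(int)  # the relationship itself has drifted
model.partial_fit(X_day2, y_day2)
print("Day 2 decision:", model.predict(applicant))       # may flip after the update
```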

Similarly, companies will need to decide whether to allow their own A.I. systems to learn in real time. Unfortunately, not allowing continuous learning forces a company to forgo one of the technology's key benefits: its ability, in some cases, to perform better over time. In other cases, companies will need to weigh risk levels against algorithmic accuracy, and that accuracy will suffer if continuous learning is not allowed.

Ever-evolving A.I. systems also add operational complexity, because the same A.I.-embedded product or service will behave differently in different countries. Subtle national differences in regulations and social expectations will compound these operational challenges. Companies will have to train their A.I. on local data and manage it according to local rules, which is bound to limit A.I.'s ability to scale.

In addition, companies will have to treat their A.I. as a portfolio of applications that needs careful management. They will have to develop "sentinel" processes to monitor that portfolio and continuously ensure that it operates fairly, safely, and robustly. Organizations will have to test the output of their A.I. systems frequently, which will add to costs. New York City, for example, passed a law in 2017 creating a task force to recommend how information about automated decision systems should be shared with the public and how public agencies should handle cases in which people may be harmed by such systems.
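
As an illustration, a "sentinel" check can be as simple as a scheduled job that recomputes a fairness metric over recently logged decisions and flags the system for human review when the metric drifts past a tolerance. The sketch below is a hypothetical example rather than any company's actual process; the group labels and the 5% threshold are assumptions.

```python
# Hypothetical "sentinel" check (illustrative only): recompute approval rates
# per group over recent decisions and flag the system for human review when
# the gap exceeds a tolerance.
from dataclasses import dataclass

@dataclass
class Decision:
    group: str       # e.g., a protected attribute logged only for monitoring
    approved: bool

def approval_rate(decisions, group):
    subset = [d for d in decisions if d.group == group]
    return sum(d.approved for d in subset) / len(subset) if subset else 0.0

def sentinel_check(decisions, groups=("A", "B"), max_gap=0.05):
    rates = {g: approval_rate(decisions, g) for g in groups}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 3), "needs_review": gap > max_gap}

# One day's worth of logged decisions (made-up numbers).
log = ([Decision("A", True)] * 80 + [Decision("A", False)] * 20
       + [Decision("B", True)] * 60 + [Decision("B", False)] * 40)
print(sentinel_check(log))   # gap = 0.20 -> needs_review is True
```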

Taking responsibility for A.I.'s decisions

Another key differentiator is A.I.'s ability to make complex decisions, such as which ads to serve to whom online or whether to grant access based on facial recognition. Responsibility goes hand in hand with that decision-making power. So far, companies and other organizations acting on the principles of responsible A.I. have focused on ensuring that A.I.-based decisions treat all stakeholders, whether consumers, employees, shareholders, or others, fairly. If A.I. algorithms treat people unfairly, companies face legal and reputational risks, as Apple did. They need to understand the impact their algorithms can have on people, and in some contexts they may even choose not to use A.I. at all. These concerns will intensify as A.I. systems scale: an algorithm may be fair on average yet still be unfair in a specific geographic context, because local consumer behavior and attitudes may not match the average and therefore may not be reflected in the algorithm's training.
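
A small, made-up numerical example shows how aggregate parity can mask a local disparity: overall approval rates for two groups can be identical even while they diverge sharply in one region.

```python
# Made-up numbers showing how an algorithm can be fair on average yet unfair
# in one region: overall approval rates match, but they diverge in the South.
decisions = [
    # (region, group, approvals, applications)
    ("North", "men",   800, 1000),
    ("North", "women", 900, 1000),
    ("South", "men",   900, 1000),
    ("South", "women", 800, 1000),
]

def rate(rows, group, region=None):
    rows = [r for r in rows if r[1] == group and (region is None or r[0] == region)]
    return sum(r[2] for r in rows) / sum(r[3] for r in rows)

print("Overall:", rate(decisions, "men"), "vs.", rate(decisions, "women"))                    # 0.85 vs. 0.85
print("South:  ", rate(decisions, "men", "South"), "vs.", rate(decisions, "women", "South"))  # 0.90 vs. 0.80
```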

Companies have no option but to develop the processes, roles, and functions needed to ensure that their A.I. systems are fair and accountable. Some organizations, such as the Federal Home Loan Mortgage Corporation (better known as Freddie Mac), have already appointed A.I. ethics officers and set up A.I. governance structures, along with processes such as traceability protocols and diversity training, to tackle the challenge. These are small steps in the right direction. These pioneers are also establishing auditing processes and developing monitoring tools to ensure that their A.I. systems operate fairly.

Accountability requires companies to explain why their algorithms make the decisions they do. This demand for "explainability" will force tradeoffs. Algorithms that are relatively easy to explain are usually less accurate than so-called black-box algorithms, so companies that use only the former will limit their A.I.'s capabilities and quality. Because executives will have to trade explainability against accuracy, and because market regulations and social expectations differ from country to country, this is bound to create an unequal playing field across the globe.
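
The tradeoff can be seen in miniature by comparing an interpretable linear model, whose weights can be read off directly, with a more flexible ensemble trained on the same data. The sketch below is illustrative only: it assumes scikit-learn, uses synthetic data, and the size of any accuracy gap in practice depends entirely on the problem.

```python
# Illustrative sketch of the explainability/accuracy tradeoff on synthetic data
# (assumes scikit-learn): a linear model whose weights can be inspected directly
# vs. a gradient-boosted ensemble that is often more accurate but harder to explain.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

explainable = LogisticRegression(max_iter=1000).fit(X_train, y_train)
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("Logistic regression accuracy:", round(explainable.score(X_test, y_test), 3))
print("Gradient boosting accuracy:  ", round(black_box.score(X_test, y_test), 3))

# The linear model's per-feature weights are a direct, auditable explanation;
# the ensemble needs post hoc tools (e.g., feature attributions) to approximate one.
print("Linear model weights:", explainable.coef_[0].round(2))
```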

By way of illustration, Ant Financial combines thousands of inputs from data sources across the Alibaba ecosystem to develop credit ratings for borrowers in China. That process makes it difficult for anyone, even regulators, to understand how the algorithms reach their decisions. Although Alibaba's system lets the company approve loans within minutes, it may not be able to use the same system outside China, particularly in economies where both regulations and social expectations demand a higher degree of explainability. Which markets A.I.-driven companies can target will therefore depend heavily on regulation, with major implications for corporate strategy. Indeed, a few companies, such as game developer Uber Entertainment, chose to stay away from the EU after the General Data Protection Regulation (GDPR) took effect in 2018.

As more governments unveil rules for the use of A.I., companies will need to weigh some key questions before deploying it. They must ask themselves:

* To what extent should we differentiate our products or services to accommodate regional differences in A.I. regulations and market expectations?

* After accounting for the new regulatory landscape, should we still aim to serve every market worldwide?

* If decentralizing our A.I. operations is inevitable, should we set up a central organization to lead, or at least to connect and share, data, algorithms, and best practices?

* Given A.I. regulations and market expectations, what new roles and organizational capabilities will we need to keep our strategy and execution aligned, and how will we hire or reskill talent to build them?

* Is our strategic horizon appropriate for combining short-term responses to a constantly changing technology and regulatory environment with our long-term A.I. vision?

As the use of A.I. in companies' internal and external processes becomes more pervasive and stakeholders' expectations of fair, safe, and trustworthy A.I. keep rising, companies are bound to run headlong into clashes between humans and machines. The sooner CEOs come to grips with the value-risk tradeoffs of using A.I.-driven systems, the better they will be able to cope with both regulations and societal expectations in an A.I.-driven world. (Fortune China)

François Candelon is a managing director and senior partner at Boston Consulting Group and the global director of the BCG Henderson Institute. Theodoros Evgeniou is a professor of decision sciences and technology management at INSEAD, specializing in A.I. and data analytics for business.

Translator: Ren Wenke (任文科)


