Google's "chief decision scientist" reveals why she left

Rachyl Jones 2023-09-09
Kozyrkov spent 10 years at Google, five of them as chief decision scientist. Among her responsibilities, she guided company leaders to make informed and responsible decisions regarding AI.

Cassie Kozyrkov spent 10 years at Google. Image credit: COURTESY OF PERSONA PR



Google’s chief decision scientist is out.

Cassie Kozyrkov, who has served as the internet company’s chief decision scientist and helped pioneer the field of decision intelligence, is going solo and working on projects to help business leaders navigate the tricky waters of artificial intelligence.

As AI becomes more powerful and more prevalent across industries, Kozyrkov will launch her first LinkedIn course, publish a book, and give keynote speeches about how to make informed decisions. Her goal is to give leaders the tools to think about how they deploy AI, and to help the public hold AI decision-makers accountable for the choices that impact millions of people, she told Fortune.

She spent 10 years at Google, five of which as chief decision scientist. Among her responsibilities, she guided company leaders to make informed and responsible decisions regarding AI.

“I’ve always believed Google’s heart is in the right place,” Kozyrkov said. But it is a large company, and outsiders sometimes equated her personal opinions with Google’s stance on a topic. In her new role, she won’t have to worry about how her advocacy impacts a company she represents, she told Fortune.

AI is undergoing a massive period of growth, which has caused anxieties about the future for some. Top minds in the AI space recently warned it could end humanity as we know it. This point in time feels like an inflection point in the world of tech. It is essential to have leaders in place that are educated in decision-making and consumers that can hold them accountable, according to Kozyrkov.

Kozyrkov, who grew up in South Africa, received a bachelor’s degree in economics from the University of Chicago. She also has a master’s degree in mathematical statistics from North Carolina State University and a partially completed PhD in psychology and neuroscience from Duke University. Prior to working at Google, she spent 10 years as an independent data science consultant.

During Kozyrkov’s time as chief decision scientist, which began in 2018, Google’s AI division grew substantially. CEO Sundar Pichai unveiled Duplex, an add-on to Google Assistant that can make phone calls on behalf of a user, intended to help schedule appointments, restaurant reservations, and other engagements. Google has made leaps in generating text, images, and videos from prompts, and it is developing robots that can write their own code. It also released Bard, its large language model rivaling ChatGPT. Many of Google’s developments have raised ethical questions from employees and academics, which isn’t unlike what’s happening at other AI companies. Google didn’t respond to requests for comment.

Kozyrkov would not comment on decisions she helped make at Google because of her nondisclosure agreement, but it’s not difficult to think of areas where the company has faced difficult choices when it comes to AI. In building Bard, Google had to decide whether to scrape copyrighted information to train the AI model. A lawsuit filed against Google in July accuses the company of doing so. Google also had to decide at what point to release the technology to remain competitive with ChatGPT but not damage its reputation. It came under fire right after it published the Bard demo video in which the chatbot gave an incorrect answer.

Kozyrkov’s work revolves around the idea that individuals can make choices that affect a lot of people, and those at the top aren’t necessarily educated in the practice of decision-making. “It is easy to think of technology as autonomous,” she said. “But there are people behind that technology making very subjective decisions, with or without skill, to affect millions of lives.”

The best way to make a decision is something humans have long grappled with, and which continues to evolve. There’s Benjamin Franklin’s three-century-old pro/con model, but there are also more advanced ways to answer important questions, Kozyrkov said. While she is targeting business leaders, her methods can also be used to make other important life decisions, like where to go to college or whether to start a family.

Decision-makers should ask themselves: What would it take to change my mind? They should also use data, but prior to seeing it, set criteria for what they will do based on what the data says. This helps decision-makers avoid confirmation bias, or using data to confirm an opinion they already have. It is also helpful to document the process of coming to an important decision—including the information available at the time—to evaluate the quality of a choice after it is made, according to Kozyrkov.
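The pre-commitment step Kozyrkov describes can be sketched in code: fix the mapping from possible data outcomes to actions *before* the data arrives, then only look the action up afterwards, while logging the question and evidence for later review. This is a minimal illustrative sketch; the class, field names, outcome labels, and the 2% threshold are assumptions for the example, not anything specified in the article.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class DecisionRecord:
    """A documented decision with criteria fixed before seeing the data."""
    question: str
    criteria: dict                 # outcome label -> action, set in advance
    decided_on: date = field(default_factory=date.today)
    evidence: list = field(default_factory=list)

    def decide(self, observed_outcome: str) -> str:
        # The action is looked up from pre-committed criteria, not chosen
        # after seeing the data, which guards against confirmation bias.
        self.evidence.append(observed_outcome)
        return self.criteria[observed_outcome]


# Hypothetical example: deciding whether to launch a chatbot.
record = DecisionRecord(
    question="Launch the chatbot this quarter?",
    criteria={
        "error_rate_below_2pct": "launch",
        "error_rate_above_2pct": "delay and retrain",
    },
)
action = record.decide("error_rate_above_2pct")
print(action)  # delay and retrain
```

Because the record keeps the question, the pre-set criteria, the date, and the observed evidence together, the quality of the decision process can be evaluated later, independently of how the outcome happened to turn out.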
