DISCRIMINATING SYSTEMS
Gender, Race, and Power in AI

Sarah Myers West, AI Now Institute, New York University
Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research

APRIL 2019

Cite as: West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.html.

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

CONTENTS

RESEARCH FINDINGS
RECOMMENDATIONS
INTRODUCTION
WHICH HUMANS ARE IN THE LOOP? HOW WORKFORCES AND AI SYSTEMS INTERACT
WHO MAKES AI? Diversity Statistics in the AI Industry: Knowns and Unknowns
FROM WORKFORCES TO AI SYSTEMS: THE DISCRIMINATION FEEDBACK LOOP
CORPORATE DIVERSITY: BEYOND THE PIPELINE PROBLEM
Core Themes in Pipeline Research
Limitations of Pipeline Research
Pipeline Dreams: After Years of Research, The Picture Worsens
WORKER-LED INITIATIVES
THE PUSHBACK AGAINST DIVERSITY
CONCLUSION

RESEARCH FINDINGS

There is a diversity crisis in the AI sector across gender and race. Recent studies found only 18% of authors at leading AI conferences are women,i and more than 80% of AI professors are men.ii This disparity is extreme in the AI industry:iii women comprise only 15% of AI research staff at Facebook and 10% at Google. There is no public data on trans workers or other gender minorities. For black workers, the picture is even worse. For example, only 2.5% of Google's workforce is black, while Facebook and Microsoft are each at 4%. Given decades of concern and investment to redress this imbalance, the current state of the field is alarming.

The AI sector needs a profound shift in how it addresses the current diversity crisis. The AI industry needs to acknowledge the gravity of its diversity problem, and admit that existing methods have failed to contend with the uneven distribution of power, and the means by which AI can reinforce such inequality. Further, many researchers have shown that bias in AI systems reflects historical patterns of discrimination. These are two manifestations of the same problem, and they must be addressed together.

The overwhelming focus on "women in tech" is too narrow and likely to privilege white women over others. We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people's experiences with AI. The vast majority of AI studies assume gender is binary, and commonly assign people as "male" or "female" based on physical appearance and stereotypical assumptions, erasing all other forms of gender identity.

Fixing the "pipeline" won't fix AI's diversity problems. Despite many decades of "pipeline studies" that assess the flow of diverse job candidates from school to industry, there has been no substantial progress in diversity in the AI industry. The focus on the pipeline has not addressed deeper issues with workplace cultures, power asymmetries, harassment, exclusionary hiring practices, unfair compensation, and tokenization that are causing people to leave or avoid working in the AI sector altogether.

The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation. The histories of "race science" are a grim reminder that race and gender classification based on appearance is scientifically flawed and easily abused. Systems that use physical appearance as a proxy for character or interior states are deeply suspect, including AI tools that claim to detect sexuality from headshots,iv predict "criminality" based on facial features,v or assess worker competence via "micro-expressions."vi Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality. The commercial deployment of these tools is cause for deep concern.

i. Element AI. (2019). Global AI Talent Report 2019. Retrieved from https://jfgagne.ai/talent-2019/.
ii. AI Index 2018. (2018). Artificial Intelligence Index 2018. Retrieved from http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf.
iii. Simonite, T. (2018). AI is the future - but where are the women? WIRED.
iv. Wang, Y. and Kosinski, M. (2018). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology.
v. Wu, X. and Zhang, X. (2016). Automated Inference on Criminality using Face Images. Retrieved from https://arxiv.org/pdf/1611.04135v2.pdf.
vi. Rhue, L. (2018). Racial Influence on Automated Perceptions of Emotions.

RECOMMENDATIONS

Recommendations for Improving Workplace Diversity

1. Publish compensation levels, including bonuses and equity, across all roles and job categories, broken down by race and gender.
2. End pay and opportunity inequality, and set pay and benefit equity goals that include contract workers, temps, and vendors.
3. Publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted, and actions taken.
4. Change hiring practices to maximize diversity: include targeted recruitment beyond elite universities, ensure more equitable focus on under-represented groups, and create more pathways for contractors, temps, and vendors to become full-time employees.
5. Commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated, and promoted.
6. Increase the number of people of color, women, and other under-represented groups at senior leadership levels of AI companies across all departments.
7. Ensure executive incentive structures are tied to increases in hiring and retention of under-represented groups.
8. For academic workplaces, ensure greater diversity in all spaces where AI research is conducted, including AI-related departments and conference committees.

Recommendations for Addressing Bias and Discrimination in AI Systems

9. Remedying bias in AI systems is almost impossible when these systems are opaque. Transparency is essential, and begins with tracking and publicizing where AI systems are used, and for what purpose.
10. Rigorous testing should be required across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms.
11. The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise.
12. The methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.

INTRODUCTION

There is a diversity crisis in the AI industry, and a moment of reckoning is underway. Over the past few months, employees have been protesting across the tech industry where AI products are created. In April 2019, Microsoft employees met with CEO Satya Nadella to discuss issues of harassment, discrimination, unfair compensation, and lack of promotion for women at the company.1 There are claims that sexual harassment complaints have not been taken seriously enough by HR across the industry.2 And at Google, there was an historic global walkout in November 2018 of 20,000 employees over a culture of inequity and sexual harassment inside the company, triggered by revelations