Project Workshop Report

Generation AI: Establishing Global Standards for Children and AI

June 2019

World Economic Forum
91-93 route de la Capite
CH-1223 Cologny/Geneva
Switzerland
Tel.: +41 (0)22 869 1212
Fax: +41 (0)22 786 2744
Email: contact@weforum.org
www.weforum.org

© 2019 World Economic Forum. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, including photocopying and recording, or by any information storage and retrieval system.

Contents

Background and context
Discussion
Child rights
Plenary overview: Flash talks
Breakout group discussions
Privacy
Algorithms for children
Agency
Corporate governance
Plenary overview: Moderated panel conversation
Breakout group discussions
Internal processes
Public education
Consumer protection
Assessment and evaluation
Public policy
Plenary overview: Panel presentations
Breakout group discussions
Laws and regulation protecting children
Government protection
Science to policy
Policy guidance for AI and child rights
Outcomes
Workshop participants
World Economic Forum contact

Background and context

Artificial intelligence (AI) carries with it the promise of enhancing human potential and improving upon social outcomes where existing systems have fallen short. Numerous risks and uncertainties, however, must be addressed as AI continues to evolve and integrate into public and private decision-making systems that define the world and, in particular, the world of opportunity for the people born to it. As digital natives, perhaps no group will be more affected by AI than children. It thus warrants special care to ensure that AI is built to uphold children's rights and maximize their developmental growth.

On 6-7 May 2019, the World Economic Forum Centre for the Fourth Industrial Revolution and its partners UNICEF and the Canadian Institute for Advanced Research (CIFAR) hosted a
workshop in San Francisco on the joint "Generation AI" initiative. Comprising key stakeholders from business, academia, government and civil society, the Generation AI community is committed to driving multistakeholder policy solutions that enable the opportunities of AI for children while minimizing its potential harms.

This workshop identified deliverables in two key areas:

1. A set of public policy guidelines that direct countries on creating new laws focused on children and AI
2. A corporate governance charter that guides companies leveraging AI to design their products and services with children in mind.

To inform the development of these two deliverables, the workshop was divided into three main sections: child rights, corporate governance and public policy. Each section began with a plenary overview canvassing relevant issues and leadership perspectives from the variety of experts in the room. Following the plenary, the workshop participants divided into breakout groups focused on a more granular subset of issues under the session's main rubric. The goal at each stage was to consider the issues through an interdisciplinary lens, evaluating cutting-edge perspectives from media experts and developmental psychologists, for example, alongside insight from business and legal practitioners. The result was a rich foundation of material on which the Forum and its Generation AI partners can begin to tangibly structure the governance mechanisms that the project has identified as current priorities.

Discussion

This section captures the conversation that took place during each of the three main sections: child rights, corporate governance and public policy.

Child rights

Plenary overview: Flash talks

Short presentations or "flash talks" from participants prompted the group to think about child rights. A summary of the key issues discussed during these flash talks in advance of the issue-specific
breakout group sessions follows.

Flash talk presenters:
Ronald Dahl, Director, Institute of Human Development, University of California, Berkeley
Mizuko Ito, Director, Connected Learning Lab, University of California, Irvine
Erica Kochi, Co-Founder, UNICEF Innovation, United Nations Children's Fund, New York

Leveraging developmental windows of opportunity

When it comes to children, no one-size-fits-all policy is possible because their vulnerabilities and opportunities for growth and development vary at different ages. Using a developmental science model allows delving deeper into how the identifying features of each developmental stage can be leveraged to inform positive outcomes with respect to AI-enabled toys and products.

The transition from childhood to adolescence is particularly defined by notable changes in emotional response levels and motivational goals. Understanding these changes can help to identify the natural attractors for adolescent learning, which can be leveraged to inform design and policy measures governing AI that impacts this group. Further, due to the intensity of changes that happen at the adolescent stage, adolescence offers a crucial window of opportunity to influence the development of children in their second decade of life. The potential for the positive impact of AI that has been mindfully designed to support and nurture growth is thus heightened during adolescence and should be evaluated with care.

AI amplifies both risks and opportunities; looking forward, it is beneficial to strive to leverage the unique opportunities at each stage of childhood development to inform AI policy and design. This proposed approach can be contrasted with that taken by the EU in its General Data Protection Regulation (GDPR), which considered issues concerning children's data from a singular perspective. This frames the reality that sophisticated thinking on developmental science is not currently in the lexicon of most policy-makers, as well as the importance of finding ways to
translate these concepts for decision-makers.

Contextualizing UNICEF and Generation AI

Certain AI applications that could have a positive impact, such as using facial recognition technology to evaluate whether a child is malnourished, cannot move forward in the context of UNICEF's focus on ending child hunger because of issues concerning privacy. This highlights a larger theme of the workshop: the priority given to privacy rights in any situation concerning children and AI, when weighed against other rights (in this instance, the right to health).

UNICEF is advancing its work in this area by developing AI literacy among its experts and field practitioners. It is working with states to promote the equitable representation of children in the data sets being used to train AI that will impact them, and developing policy guidance for countries and companies to ensure that child rights are protected. UNICEF is focused on developing AI and child rights policy guidance for national policy-makers, corporations and the UN system, to help put child rights on the policy agenda.

Influencing cultural norms

The intentions of those who design technology are not the same as those of the people using it. Regulating technology will not address the underlying issues that contribute to behavioural problems stemming from culture and society. To address these issues, consideration should be given to influencing cultural norm-setting and designing community-based interventions. Such interventions are distributed and difficult to control, but necessary from the perspective of meeting people where they are and within their circumstances.

Breakout group discussions

Privacy

Key points:

Balancing trade-offs. Context is key when evaluating privacy issues, and a binary, all-or-nothing approach will no longer work. When are other interests considered a priority? It is important to be aware of both sides to effectively consider the trade-offs. Large and unbiased data sets are needed to train fair algorithms and create actionable insight to address
major social challenges. It is indispensable to meet the challenge of robust data collection while managing privacy concerns. Collecting data from children may in some cases enhance development, but in others it may support structures that oppress their potential for growth and their ability to thrive later in life. Should companies consider children's data sensitive by default? How can the benefits of behavioural analytics be captured without chilling freedom of expression in children?

Considering parents and data privacy. Parents present a third actor in the relationship with data. Considering the role of parents makes cases involving children categorically different from the rest of the discussion on data ownership and privacy. It may not always be in the interest of children for their parents to have access to their raw data past a certain age or developmental stage. Toys that speak with children might hear reports of abuse or other situations that are harmful to the child. This introduces the question of when a toy has a duty to report potential harm, which raises issues about the data collection and surveillance practices that would support such a reporting structure in accordance with privacy goals. Should a parent be allowed to sell a child's data and, if so, at what age should the child recover such agency? How should the value be kept in trust? The exposure that children would face in such a situation is likely to be divided along lines of privilege and parental engagement, which raises concerns about equality under a framework for child rights.

Limiting exposure to commercial advertising. What are the responsibilities of technology platforms regarding privacy rights? Advertisers can work around rules to deliver ads to children through online platforms. For example, although child-focused platforms limit forms of paid advertising, they still support entire channels devoted to brands, which could be
considered a hypercommercialized form of advertising masquerading as content. Tech companies are primarily incentivized to make money from advertising; what mechanisms can be created to prioritize the interests of children, given the dominance of market-based incentives? To create momentum for policy, it could be useful to define the harms that advocates are attempting to protect against by limiting children's exposure to advertising.

Algorithms for children

Key points:

Optimizing algorithms for learning. One opportunity is to focus on building algorithms that are optimized for learning objectives and steer users in positive ways towards pro-social outcomes, much like books, curricula and other legacy forms of educational programming. Any situation in which a child is directed towards an outcome raises issues of algorithmic manipulation and agency. However, implementing algorithmic "nudges" towards long-term goals over short-term motivations can work to enhance autonomy. The key issue here is to identify and promote healthy goals for algorithmic exposure when children are involved. Teachers and other adults need to entice children a bit in the short term to engage in behaviour that develops creativity, character and critical learning skills. The group discussed the viability of applying this concept to purposeful algorithmic design.

Modelling algorithms after learning. How can better algorithms be created by modelling them after how children learn? The discussion focused on framing how algorithms are limited with respect to certain phases and aspects of learning. Children demonstrate particular qualities of learning and absorbing the world, especially curiosity and exploration. These qualities could be used to build more effective algorithms if it were possible to determine how to recreate these processes through AI. Children are data-efficient, extracting and processing the most relevant data to learn from just a few examples, whereas AI needs to be fed millions of examples to
render accurate judgements. The challenges in this area are not entirely technical. One way to support child-like learning potential in algorithms is to create more awareness of learning models (including developmental vulnerabilities and opportunities), which can filter up to influence design. The development of learning models that integrate the "salience features" that children demonstrate would also help to mitigate data concerns.

Introducing new market forces. In the current market system, it is unclear how to meaningfully introduce products for children that are optimized for responsibility over revenue. Is it possible to work through markets and policy to transcend the dominance of big tech in this area? An analogy can be drawn to the history of television broadcasting and the introduction of public broadcasting: publicly funded programming that aims to be fun, engaging, development-oriented and educational. Is it possible to create a "public option" for digital technology that can motivate young users in the same way? Establishing the social expectations of companies (through media and other channels) helps to create the business case for responsibility in algorithmic product design. This approach does not strive to work beyond commercial objectives but rather within them. If the notion that companies ultimately cannot look beyond profit is accepted, the focus should be on elevating new movers, such as socially focused companies, media voices and non-governmental organizations, into a space where they can operate from a more neutral perspective.

Remedy. How can a process of remedy be supported when algorithms get it wrong and make decisions that have a negative impact? This question, and the important issue of remedy, requires further consideration.

Agency

Key points:

Championing children. The group considered different forms of authority and implementation that ultimately framed the importance of bringing children into the policy-making
process by accounting for their views on the policies that affect them. No one stakeholder in the life of children, whether business, government or parents, can be relied upon to entirely represent the best interests of the child. Thus, capturing and incorporating the perspectives of children is needed. Are there ways that certain stakeholders can ensure that a child's agency is protected against potential abuse of authority or illicit control by another? This question becomes particularly difficult when children do not have present, reliable and loving guardians. What responsibility do governments and companies have to protect children's long-term interests against oppressive guardianship? This issue is not new, but AI could have new implications for perpetuating inequality and limiting opportunities to develop and thrive.

Considering parents and data agency. To what extent should parents and guardians have control over their children's data? The subject of how the relationship between parents and their children's data