
Children’s right to participation in AI: Exploring transnational co-creative approaches to foster child-inclusive AI policy and practice


The right to participation in matters affecting them is a fundamental right of every child. AI systems are emerging in all contexts of children’s lives, both in the US and the EU, yet children’s voices are often ignored in AI policy and practice, particularly those of children from historically marginalised communities. This article explores the policy and practice gaps in Europe and the US and the lack of children’s participation in AI at the global level. Current AI policies and practices discount the transnational implications of AI for children’s lives and for their rights under the United Nations Convention on the Rights of the Child (UNCRC). The article calls for co-creative approaches, implemented transnationally, to realise the benefits of including children in AI policy and practice – not only as users but also as contributors and innovators. Such an approach yields AI systems and policies that are more inclusive and child-friendly. By granting children more agency as contributors and innovators, rather than confining them to the role of users, the balance of transnational power dynamics in AI policy and practice could be redressed. Ultimately, children’s increased agency as innovators in shaping AI practices can offer mutual benefits for children’s individual and social development, inclusive AI policy, and innovation practice.


1.Introduction

Children’s right to participation is a fundamental right of every child globally (Article 12 of the United Nations Convention on the Rights of the Child – UNCRC, 1989). Since February 2021, General Comment 25 of the UNCRC has specified that children’s fundamental rights apply to their online interactions as much as to their offline ones (Committee on the Rights of the Child, 2021). The US is the major provider of AI products and services, yet it is the only country in the world that has not ratified the UNCRC. Across the globe, children actively use AI-driven emerging technology products for their education, health, and entertainment. However, children scarcely participate in shaping AI policy and practice beyond being users (Dignum et al., 2021).

The World Economic Forum (WEF) forecasts that Internet of Things (IoT) devices with AI systems will only proliferate in citizens’ lives (WEF, 2021), yet safety, security, privacy, and trust remain significant challenges for their governance. The perceived benefits of AI tools, such as personalised services, recommendations, and fast, comprehensive prognoses of individual and societal needs, are vast. Nonetheless, AI systems have been shown to cause social inequalities (European Commission – EC, 2020), unjustifiable biases, unjust risk scoring, and addictive algorithms that aggravate mental health harms to young netizens (Karim et al., 2020). These implications can be even more profound for young users than for adults across the world, a phenomenon that has elsewhere been coined “generational unfairness” (McStay & Rosener, 2021). This is because children pass through developmental stages, and their earliest experiences, including AI-mediated ones, can affect them throughout adulthood (Epps-Darling, 2020). Moreover, the number of AI systems in children’s everyday education, healthcare, entertainment, and socialisation (from basic web searches to critical health assessments) is increasing exponentially both in the US and in the Netherlands (Statista, 2022).

Child participation in AI design in the US is limited and not inclusive (Psihogios et al., 2022), an obstacle perpetuated in part by the lack of federal policy and practice guidelines. California is the only state in the US to have introduced a relevant policy (CCPA; Assembly Bill No. 2273) for age-appropriate AI design for children. The Netherlands, while a signatory of the UNCRC, also lags in child participation in decision-making and in AI system design practices and policies (WRR, 2021). Tech policies formed without young people’s voices have proven ineffective and incomplete, owing to the lack of a framework for inclusive policies and practices at the different levels of AI design (Gasser, 2019). Introducing AI to children is a sound strategy for preparing an AI-capable generation (Kahn et al., 2018). Including children from diverse backgrounds and lived experiences in AI design-oriented, co-creational processes is all the more vital now, given the increased access to AI products and services and the multiple effects AI systems have on children’s daily lives and personal development.

In this article, we highlight the challenges of child participation in AI policy and practice. As part of these challenges, we explore how US and European AI policy gaps and related AI practice gaps impede children’s participation. To address the policy and participation gaps, we argue for co-creative approaches with children that engage with the power dynamic between adults and children in AI systems. We propose a three-role, child-centred participation model in AI policy and practice, grounded in co-creative approaches, to ensure inclusiveness. Children’s participation and increased agency in AI development are not only about protecting them; we call for children’s empowerment in AI towards a just digital future.

2.Challenges in child participation in AI policy and practice

Children’s safe and inclusive participation in Artificial Intelligence (AI) has become essential. Yet it is challenged by gaps in theory, policy, and practice. These gaps concern how to increase children’s agency and what that increase would mean for AI policy and practice. From a theoretical perspective, Roger Hart (1992) introduced a ladder model of children’s participation, which depicts a gradual increase in children’s agency. While this model paves a path for children’s agency and citizenship in the AI-driven digital society, its limitations could be addressed by defining an AI-specific model of how children can gradually exercise more agency and ultimately reach their fullest agency as innovators of AI policy and practice. Models for children’s meaningful participation in policy-related decision-making processes could inspire such work: Bouma et al. (2018) demonstrate how children’s growing agency in decision-making processes within the Dutch welfare system can have beneficial effects. Challenges to child participation in AI emerge from gaps in national policies and from the lack of inclusive AI practice frameworks.

3.Transnational AI policy gaps

3.1United States

In 2021, the United States accounted for 34.7% of global tech revenues, the largest portion of which stemmed from AI-driven systems (Sava, 2022). The US adopts global policy neither on children’s rights and participation nor on AI: it has not ratified the UNCRC and did not sign the recent framework for the ethics of AI from the United Nations Educational, Scientific and Cultural Organisation (UNESCO, 2020). Likewise, the US has no federal laws relating to children’s participation in AI. While the recently proposed Algorithmic Accountability Bill (117th US Congress, 2022) briefly discusses the participation of impacted groups, it does not specifically address children’s participation in AI design, development, and deployment. The Children’s Online Privacy Protection Act (COPPA, 1998) provides data privacy and protection up to the age of 13, but there are no policy regulations to safeguard 14–18-year-olds and no opportunities for them to participate in AI systems. Moreover, this policy gap further expands the social exclusions of the digital divide across the US and aggravates the effects of children’s non-participation in AI practices. Nearly half of Americans without at-home internet were in Black and Hispanic households (Chakravorti, 2022), and the digital divide is known to diminish opportunities for children from lower-income families, people of colour (Benjamin, 2019), and foreign-born migrants, further exacerbating their economic, social, and political marginalisation. Access to AI has been framed as a human rights matter (AccessNow, 2020); we argue that this requires including the most vulnerable impacted groups, such as children from historically marginalised communities (Kalluri, 2020). Digital rights are human rights, and human rights are children’s rights. A child-centred approach would pave the way for child-friendly AI policy and practice that humanises AI systems.


3.2Europe

The Council of Europe’s Children’s Rights Strategy outlines the misalignment between children’s developmental needs and current AI design practices (CoE, 2022). While the strategy provides recommendations for child participation in relation to digital systems, a policy gap remains around how AI-related regulations are developed within the European Union. There were promising European developments in May 2022, when the European Strategy for a Better Internet for Kids (BIK+) was introduced to empower children (EC, 2022c). The BIK+ strategy rests on four main pillars: 1) safe digital experiences to protect children; 2) digital empowerment; 3) active participation; and 4) respecting children. Together, these pillars map out how child participation figures in policy-making concerning children’s online behaviour. Although the strategy offers promising directions for child participation in online interactions, it remains to be seen how these pillars will be enacted for children’s interactions with AI systems.

One of the main objectives behind a series of upcoming EU regulations, such as the Digital Services Act (EC, 2022) and the Digital Markets Act (EC, 2022), is to curb the monopolistic power of platforms by categorising such companies as “gatekeepers”. The goal is to frame them as major accountability nodes in the network between global (including European) advertisers and other entities on platforms. These regulations aim to improve smaller companies’ ability to compete with big tech and to foster interoperability between large and smaller AI technology innovators. Like the General Data Protection Regulation (GDPR), they will also have extraterritorial effects on US companies, such as Apple, Microsoft, Google, Meta, and IBM, while fostering AI innovation. These companies’ AI products are fuelled by personal data and algorithmic assessments that have a severe impact on children (La Fors & Larsen, 2022). Bringing more control back into the hands of data subjects in Europe already provided the impetus for the GDPR; although the new regulations likewise aim to rebalance power dynamics towards more influence for smaller AI companies and more autonomy for the data subject, their effectiveness is yet to be seen. Meanwhile, the emergence of a large portion of AI systems can have significant implications for vulnerable groups such as children. Civil society organisations are also concerned about how the proposed EU regulations will respect human rights while promoting responsible AI innovation (APC, 2021). For example, the 5Rights Foundation’s report (2022) brought to light a wide diversity of online harms to children. Another main objective behind the upcoming European regulations, particularly the proposed European AI Act (EAIA), is to prevent the innovation of AI systems that would perpetuate “reasonably foreseeable misuse” (EC, 2021/0106).
The EAIA establishes a novel assessment for detecting AI-system-mediated harms. The Act separates systems according to their harmfulness into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. Given that children’s data also constitutes “raw material” for AI systems (Zuboff, 2021), and although the relevant GDPR prescriptions for children remain applicable when AI systems are used, the AI Act and its risk-based assessment scheme would benefit from children’s perspectives. This seems urgent, as the proposed AI Act remains focused mainly on the safety of AI systems and less on the safety of the humans subjected to them (Alan Turing Institute, 2021). Consequently, child participation in closing transnational policy gaps regarding AI systems, and in shaping more inclusive and child-friendly US and European AI policies and practices, remains urgent to address.

4.AI practice gaps

Child-participation-oriented tools and guidance mechanisms lay bare the gaps in AI practice. The nine basic requirements for meaningful child participation were set out as international standards in 2009 by General Comment 12 of the UNCRC (UNCRC, 2009). These principles hold that child participation needs to be: “1) transparent and informative; 2) voluntary; 3) respectful; 4) relevant; 5) child-friendly; 6) inclusive; 7) supported by training; 8) safe and sensitive to risks; and 9) accountable.” UNICEF’s Policy Guidance on AI for Children stresses the need for child participation in AI system design but neglects to offer models for what the increased agency of involved children could mean for AI systems and policies (Dignum et al., 2021). The World Economic Forum’s AI for Children Toolkit offers FIRST (Fair, Inclusive, Responsible, Safe, and Transparent) principles for ethical AI product development that fosters children’s healthy development (WEF, 2022). The Children’s Online Safety Toolkit of the 5Rights Foundation explains a set of online harms that children can encounter (5Rights Foundation, 2022). Yet neither UNICEF’s guidance nor the WEF’s or the 5Rights Foundation’s toolkit advocates for meaningful child participation in AI policy and practice. There is a clear practice gap and no child participation framework to promote inclusive children’s participation in AI policy and practice. In what follows, we explore co-creative approaches with children through a design justice lens to address these child participation challenges in AI.

5.Co-creative approaches with children

Children are often controlled by adults, and it is challenging to establish a flat power structure between them. Co-creative processes are structured to be insightful learning experiences: they offer opportunities to empower and equip children to hold opinions and make decisions on all matters related to them (Hansen, 2017). Children’s inventive capacity, and even their life-saving societal impact, has already been demonstrated by a variety of technological inventions (Vaden, 2017). These skills and competencies could be extended into AI policy and practice to address pressing issues in children’s respective communities (Hansen, 2017). Co-creation with children (Borum et al., 2015) helps adult stakeholders to understand and interact with children on a deeper, more equal, and respectful level; to be sensitive and ethically responsible in balancing diverse interests with children’s perspectives; and, at the same time, to ensure that no harm comes to children (CocPlayfulMinds, 2022). Co-creative processes recognise children’s competence and offer methods of self-expression that promote comfort and creativity in AI design (Hansen, 2017). Applying participatory and design justice approaches could thus benefit AI policy and practice with children. Co-creative design experts have already identified four roles for children in the design process – user, tester, informant, and design partner – to ensure their participation (Borum et al., 2015). Iversen et al. (2017) examined the power dynamics of the co-creative design process and introduced a further role for children as protagonists in participatory design, to ensure equal power with adult designers. These roles describe the distribution of power between children and adult stakeholders, which in turn can change the objectives, process, and outcome measures of the design process.

Experiments in co-creative design with children have shown how children’s specific perspectives can offer refreshing insights that differ from adult perspectives. The City of Children project acknowledges that co-design with children adds fresh perspectives and competencies for technology developers and administrators (Biosca, 2017). Similarly, the Estonian Academy of Arts explored co-creation with children in the AI design process, enabling shared decision-making and engaging children as equal design partners (Kubinyi et al., 2021). The utopian agenda for Child-Computer Interaction (CCI) positioned democracy, skilfulness, and emancipation as guiding values for the already established co-creative participatory design approach (Iversen & Didler, 2013). These past experiments suggest a framework for how differing degrees of agency for children could help implement the right to children’s participation in more just AI policy and practice. Beyond these co-design experiments, inclusive approaches have supported the adoption of design justice models to promote social inclusion and equity in the design process, achieved through the meaningful participation of historically marginalised children grounded in local, social, and cultural contexts (Costanza-Chock, 2020). Moreover, AI co-design experiments could serve to prevent the widening of AI-mediated divides and prejudices between marginalised and privileged communities through their meaningful and inclusive participation in AI design. Each of these experiments demonstrates that the co-creative process raises critical questions of who participates, who holds power, and who controls the process. Allowing the roles of participants and controllers to change by involving a broader diversity of children would bring more balance into the power dynamics by democratising the process.

6.Meaningful child participation in AI

Children’s meaningful participation in shaping AI is important for closing the policy and practice gaps for two major reasons: 1) enabling children’s healthy development and 2) preventing harm to children. Not involving children meaningfully in AI practices and policies would impede both the prevention of AI-mediated harms and children’s healthy (biological and civic) development when interacting with AI. We argue for children’s meaningful participation at all levels of AI policy and practice. By focusing on three participatory roles for children, we offer perspectives on how children’s participatory influence and the impact of their agency could be scaled up in AI-system-related policies and practices. These three roles are: 1) users, as consumers of AI; 2) contributors to AI development and deployment; and, in the most positive scenario, 3) active innovators capable of influencing AI system design. We discuss these roles within a co-creative approach spanning a six-step AI development process: 1) planning, 2) data collection, 3) data access, 4) use of algorithms, 5) deployment, and 6) reporting and dissemination. Describing how these steps relate to the three participatory roles sheds light on how increasing agency can transform the power imbalances that expose children to online harms, moving toward more child-friendly AI system development and policy-making practices in the US and the Netherlands.

6.1Reducing and preventing AI harms

1) Reduce algorithmic bias and prevent harm: Inclusive participation enables children to question AI-mediated decisions about themselves. If AI systems distribute unfair judgments about children, this can be perceived as punishment or “unforgiveness toward children” (La Fors, 2020). Digitised welfare services increasingly use AI tools; however, due to biased algorithms, less privileged groups in US society have limited access to housing, jobs, and other welfare programs (Sisson, 2019; Eubanks, 2019). The Federal Trade Commission’s report likewise underlined that online harms such as algorithmic bias require a holistic approach and cannot be addressed by AI system design alone (FTC, 2022). Children can experience long-term detrimental effects from AI-mediated biases; inclusive and holistic participation would allow children from diverse backgrounds to explain and liberate themselves from unfair judgments and to be perceived in their full humanity (La Fors, 2020). Their participation could further enable children to bring in more nuanced socio-cultural context, hyper-local languages, and equal representation. All of this can improve the mitigation and prevention of biased predictions and recommendations, contributing to fairer AI-mediated decision-making processes.

2) Avoid addictive algorithms: There are currently no age-verification requirements or age-appropriate social media policy regulations to counter existing addictive algorithms. Children’s participation in policy formation and design could strengthen their agency to make more informed decisions and could make AI tools more appropriate to their age and cultural contexts. As things stand, addictive algorithms and the lack of comprehensive policies and practices keep children’s participation in AI development and deployment unattainable.

3) AI-mediated cybercrime and child abuse prevention: Children’s participation in AI policy and practice can help victims cope better with online crimes against children. Crimes such as child pornography, unauthorised live streaming, and online grooming are often mediated by AI systems. Child participation in this sense would contribute to dismantling harmful practices of online victimisation (Turton, 2022). It would also raise awareness among parents and guardians about the mediating effects of AI systems in offline environments that lead to cybercrime and online exploitation of children, such as cyberbullying and child-sex trafficking (ChildHub, 2022). Beyond addressing harms, broad and structural child participation could facilitate AI innovations that serve children’s healthy development.

6.2Healthy development through children-AI interactions

1) Child development matters: Contemporary AI tools are “exploiting the behavioural surplus” of children as if they were adult consumers (Zuboff, 2019). Despite stricter data protection policies for children in Europe and in certain US states, children remain mere users in the AI market. Currently, children can only be users of AI systems chosen by the adults in their lives, which can cause developmental issues across every dimension of children’s identity development (Livingstone & Blum-Ross, 2020).

2) Understand AI systems: AI systems are already difficult to explain to adults, and even more so to children. Yet children are digital natives who spend a significant amount of their time interacting with AI systems in diverse contexts of their lives, so their understanding of these systems is vital. Addressing this knowledge gap will provide long-term benefits in ensuring a just digital future. Diverse children’s participation in AI design, development, and deployment would increase their understanding of AI systems and help them make informed choices about their digital footprints.

3) Child friendliness: Child participation in AI will promote child-friendly AI products and services. Child-friendly AI tools are also more accessible to every social group, helping expand AI services to a larger audience.

4) Be diversity-aware and inclusive: The participation of diverse children, particularly from historically marginalised communities, alongside so-called socially privileged children throughout the end-to-end process of AI policy and practice would sensitise them toward each other’s differences. Facilitating interactions around AI systems among children from different backgrounds would render both AI policies and systems more inclusive and equitable. Inclusive AI will lead to just tools for pressing social needs.

7.Child participation models in AI

In this section, we present our arguments for implementing a three-role child participation model in the AI design, development, and deployment process, not only to address AI-mediated harms but also to foster children’s healthy development. A broader set of online harms has been raised by the Age-Appropriate Design Breaches report of the 5Rights Foundation (2022), which demonstrates that children as users experience a vast amount of online harm, with the use of their data creating exploitative “content, conduct and contact risks” toward themselves. To prevent such harms and to promote AI for children’s good, their participation in AI policy and design is essential. Digital Access to Scholarship at Harvard (DASH) has raised critical questions about ensuring young people’s participation in AI systems in health, education, entertainment, and beyond (Cortesi et al., 2021). Child-Centred AI case studies from UNICEF (SomeBuddy, 2021) offer examples of child participation in the AI development process.

The three-role child participation model in AI that we developed considers children’s engagement, role, and power to make decisions in AI policies and practices. We introduce three roles: User, Contributor, and Innovator of AI.


7.1User

We acknowledge that children as users participate with the least agency in AI and have the least power to prevent and mitigate AI-mediated risks. As users, children are often unaware of the consequences of their interactions with AI and continue to consume AI-mediated content, directly and indirectly, without knowledge. They are often victimised and criminalised while engaging with AI systems. Conventional tech companies such as Google, Meta, TikTok, IBM, and Microsoft follow user research models (Chakravorti, 2021) to gather children’s input in the product development process; however, the participants may not be aware of the product development cycle, the deployment strategies, or how their data will be used. Children primarily use products as market-driven consumers and have no further opportunity to contribute to product development and deployment, which often leads to bias and harm. Current STEM education is preparing children to become more skilled users of AI tools, but this learning opportunity is not accessible and affordable for most historically marginalised groups.


7.2Contributor

Children’s agency as contributors is greater than their agency as users. It can manifest in children participating in the AI design process with content and with input to policy and practice, developing products together with stakeholders. Yet contributors have limited knowledge of where and how a system will be deployed in social settings, and no power to make decisions in any of the processes. UNICEF’s Policy Guidance on AI for Children seeks children’s contributions to the AI development process. The World Economic Forum’s Generation AI Toolkit (WEF, 2021) perceives children as AI consumers and offers AI system developers principles in line with children’s developmental needs. These policy and practice discussions highlight AI and children but do not address how children’s contributions to AI, with their social inclusion in all contexts, would make a difference. Without diverse children’s participation in AI policy and practice, there is a risk of misrepresentation and tokenism in reflecting on AI systems.


7.3Innovator

Children’s agency as innovators grows beyond that of contributors. As innovators, children shape end-to-end AI design processes as designers, developers, and deployers. The ethical use of data would foster more informed decisions and more child-friendly AI products and services. Children are already coming up with innovative solutions for pressing social and environmental challenges (Ramnani, 2021). The 2020 winner of the Children’s Peace Prize aims to address cyberbullying through AI-driven applications (Rahman, 2020). This invention shows that children can both innovate AI technology and prevent AI-mediated harm. Children can serve as innovators in AI practice and can translate their knowledge into child-friendly AI policy (Webster, 2022). That children’s AI innovations have been taken up by society demonstrates that children’s agency can bring about different forms of societal change. Such change cannot be achieved if children remain only users of, or contributors to, AI practices. Table 1 summarises the child participation models in the AI development process, showcasing children’s roles and the power dynamics of each role.

8.Benefits of child participation models in addressing AI-mediated harms

Algorithmic bias and harm can most effectively be prevented if children become innovators. As innovators, children could contribute to the planning of data collection, the distribution of data, and how algorithms would need to access data. Becoming part of this process informs children about the effects of data and of their use by algorithms. By participating in designing algorithms, children would come to know what they are actively planning to design and what kinds of AI implications need to be anticipated (such as human rights implications, technical aspects, and how the purpose of the algorithm is achieved). Children would also gain more skill in understanding and distinguishing what bias is, how bias emerges through data collection, and what could potentially be interpreted as bias when providing access to certain information. This understanding would include an increased power of control regarding the deployment of algorithms and the reporting and dissemination of data.

Table 1

Child participation models in the AI development process

Children’s role | Planning | Data collection | Data access | Use of algorithms | Deployment | Reporting and dissemination
User | No power | No other commercially viable option to use services but by agreeing to and sharing data | Free access for big tech and no power to the user | No control, no knowledge, and no power | No understanding of targeted audience and consequences | No control of representation
Contributor | Limited engagement of stakeholders and power | Tokenism/limited power to share opinions regarding data collection | Engagement is embraced to the extent to which it provides economic surplus | Children’s coding is embraced to the extent to which it results in economic surplus | No power but more knowledge of the consequences of data processing and algorithms | Possible misrepresentation/tokenism
Innovator (designer, coder, UX developer) | Children gain power through meaningful engagement | Understanding and more assertiveness to shape how/whose data is collected | More assertiveness to shape how/whose data is accessed and under what conditions it is shared | More understanding of how the algorithm works | More power to control where and when it is applied | Offer a better representation of social diversity

Addictive algorithms could be minimised if children became actively involved as innovators, not only learning what makes addictive algorithms addictive but also shaping the code behind such algorithms to make them less so. As the 5Rights Foundation (2022) also underlined, attention-grabbing algorithms are often ingrained in the business model of the big technology platforms frequently used by children. Child innovators could consequently reshape addictive algorithms to be less addictive.

AI-mediated cybercrime could also be addressed effectively if children became aware of how their data are collected and shared. Children would be better able to avoid AI-mediated cybercrime if they could shape how data are accessed and learn how algorithms can be misused in ways that compromise cybersecurity. Furthermore, innovator children would have more control over how algorithms are deployed and where weak spots for cybercrime might be found in a network. To some degree, children as innovators would be enabled to shape how cybersecurity functions, such as firewalls in AI-mediated systems, are built in or used. The inclusion of diverse children would also offer a better representation of social diversity in shaping cybersecurity features. The latter requires access to cybersecurity education for the most marginalised groups, and potentially to courses on AI ethics and children's rights, so that they can innovate in line with rights impact assessment requirements such as those laid down by the EU High-Level Expert Group on Trustworthy AI (EC, 2019).

Child abuse prevention could gain multiple benefits from involving children as innovators in shaping algorithmically mediated processes (Turton, 2022). Giving children the room and skills to shape such algorithms could develop several skill sets: a) learning what counts as abusive behaviour online and offline; b) learning what to share, and what not to share, online regarding abusive content; c) coding algorithms that curb, rather than amplify, the spread of abusive content online or offline. The example of the International Children's Peace Prize winner who developed a cyberbullying-prevention app shows how one form of online child abuse can be prevented through young people's active involvement in the innovation process.


Children's right to participation in AI at all levels has become cardinal for ethical AI innovation for present and future generations. We have explored theoretical frameworks, policy gaps, and closely related practical challenges that impede children's participation, and we have argued for child participation models that could address them. Co-creation with children has proven to be an effective technique for bridging the gap between theory, policy, and practice (Bouma et al., 2018). Research, toolkits, and policy briefs focusing on online child safety have multiplied (5Rights Foundation, 2022), but children's meaningful participation in AI policy and practice remains underexplored. Children's meaningful participation needs to be reviewed from the perspective of transnational communities and in line with the guiding principles for child participation in General Comment 12 of the UNCRC (UNCRC, 2009). Until now, however, no clear reflections were available on what children's meaningful participation in AI policy and practice would look like if children could step out of their role as mere users and become contributors and innovators. With our exploration, we have shown that children in such roles could thrive, achieve their fullest potential, foster their healthy development, reduce harm, and promote safety in AI products and services.

We have explored how co-creative approaches built on child participation models could support inclusive AI policy and practice. Successful co-creation with, and meaningful participation of, children in AI policy and practice requires the aligned multi-stakeholder engagement of AI developers, policymakers, parents, and children. Participatory, co-creative AI design justice approaches have high potential to provide equal access to children from all levels of the social pyramid. The meaningful participation of children as contributors in shaping AI policy and practice is currently very limited in both the US and the Netherlands; the chances for children from historically marginalised communities to emerge as innovators are even lower. We have shown that different degrees of child participation can help prevent and mitigate the harmful effects of algorithmic bias, addictive and exploitative algorithms, AI-mediated cybercrime, and online child abuse. Our models make traceable what children's more active roles as contributors or innovators would mean for data processing and for algorithm design, development, and deployment. We acknowledge that this process could consume more time and resources, but it leads to a safe and sustainable future for children and AI. We call on transnational policymakers in the respective governments, and on tech companies, to incorporate meaningful child participation models and appropriate roles for children into their AI policy and practice. We conclude that engaging children at all levels of the AI system process would help achieve a more child-friendly AI policy and practice. Our child participation models in the AI development process can inspire more meaningful and practical participation of children in the end-to-end process of AI design, development, and deployment.
In future research, co-creative design justice frameworks could benefit from implementing child participation models in AI policy, practice, tools, products, and services. In such frameworks, children's power in co-creative approaches would become concrete and more visible at the level of the design, data, algorithms, development, and deployment of AI systems.



References

5Rights Foundation. (2022, May). Child Online Safety Toolkit.

AccessNow. (2018). Human Rights in the Age of Artificial Intelligence.

Alan Turing Institute. (2021). Architecting our Future: Insights from the Inaugural Trustworthy Digital Identity Conference.

Algorithmic Accountability Act of 2022, H.R. 6580, 117th Congress, 2nd Session. (2022).

Baroness Kidron. (2022). Systemic breaches of the Age Appropriate Design Code. https://5rightsfoundation.com/uploads/Letter_5RightsFoundation-BreachesoftheAgeAppropriateDesignCode.pdf

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity.

Biosca, O. (2017, July 25). Co-creation with children for designing tomorrow’s cities and mobility. https://mobilitybehaviour.eu/2017/07/25/co-creation-with-children-for-designing-tomorrows-cities-and-mobility/

Bouma, H., López, M., Knorth, E.J., & Grietens, H. (2018). Meaningful participation for children in the Dutch child protection system: A critical analysis of relevant provisions in policy documents. Child Abuse & Neglect, 82(8). doi: 10.1016/j.chiabu.2018.02.016.

California Legislature Regular Session. (2022, April). AB-2273 The California Age-Appropriate Design Code Act (No. 2273). California Legislative Information.

California Consumer Privacy Act (CCPA). (2018). State of California Department of Justice.

Chakravorti, B. (2021, July 20). How to close the digital divide in the U.S. Harvard Business Review. https://hbr.org/2021/07/how-to-close-the-digital-divide-in-the-u-s

ChildHub. (2022).

Coc Playful Minds. (2022). Retrieved May 13, 2022.

Costanza-Chock, S. (2022, February 27). Design Practices: “Nothing about Us without Us.”

Council of Europe. (2022). Council of Europe Strategy for the Rights of the Child (2022–2027).

Cortesi, S., Hasse, A., & Gasser, U. (2021). Youth participation in a digital world: Designing and implementing spaces, programs, and methodologies. Youth and Media, Berkman Klein Center for Internet & Society.

Dignum, V., Penagos, M., Pigmans, K., & Vosloo, S. (2021, November). UNICEF Policy Guidance on AI for Children: Version 2.0. Recommendations for building AI policies and systems that uphold child rights. UNICEF.

DSA Alliance. (2022, April). Digital Services Act Human Rights Alliance: Don’t compromise on the protection of fundamental rights in the ongoing negotiations. APC.

Epps-Darling, A. (2022, October 24). How the racism baked into technology hurts teens. The Atlantic.

EU White Paper on Artificial Intelligence. (2022).

Eubanks, V. (2019). Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. Macmillan.

European Commission. (2021, April). Regulation of the European Parliament and the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2021/0106(COD)).

European Commission. (2022a). Digital Services Act: Commission welcomes political agreement on rules ensuring a safe and accountable online environment.

European Commission. (2022b, April). Digital Markets Act: Commission welcomes political agreement on rules to ensure fair and open digital markets.

European Commission. (2022c, May). European Strategy for a Better Internet for Kids (BIK+).

Statista. (2022, March). Global market share of the information and communication technology (ICT) market from 2013 to 2022, by selected country.

FTC-Federal Trade Commission. (2022, June). Combating Online Harms Through Innovation.

Hansen, A.S. (2017). Co-design with children: How to best communicate with and encourage children during a design process. Retrieved June 7, 2022.

Iversen, O.S., & Dindler, C. (2013). A utopian agenda in child-computer interaction. International Journal of Child-Computer Interaction, 1(1), 24–29.

Iversen, O., Smith, R.C., & Dindler, C. (2017). Child as Protagonist: Expanding the Role of Children in Participatory Design. IDC ’17: Proceedings of the 2017 Conference on Interaction Design and Children. doi: 10.1145/3078072.

Kahn, K., Megasari, R., Piantari, E., & Junaeti, E. (2018). AI programming by children using Snap! block programming in a developing country. Thirteenth European Conference on Technology Enhanced Learning, 11082.

Kalluri, P. (2020, July 7). Don’t ask if artificial intelligence is good or fair, ask how it shifts power. Nature.

Karim, F., Oyewande, A., Abdalla, L.F., Ehsanullah, R.C., & Khan, S. (2022). Social Media Use and Its Connection to Mental Health: A Systematic Review. National Library of Medicine.

Kubinyi, E.L., Naydenova, V., & Kuusk, K. (2020). Children and design students practicing playful co-creation in a youth creativity lab. https://wiki.aalto.fi/download/attachments/191500264/Kubinyi_Naydenova_Kuusk.pdf. Retrieved May 12, 2022.

La Fors, K., & Larsen, B. (2022). Why AI companies should develop child-friendly toys and how to incentivize them. World Economic Forum.

Lee, N.T. (2022). Closing the digital and economic divides in rural America. Brookings.

Livingstone, S., & Blum-Ross, A. (2020). Parenting for a Digital Future: How Hopes and Fears About Technology Shape Children’s Lives. Oxford University Press.

McStay, A., & Rosner, G. (2021). Emotional artificial intelligence in children’s toys and devices: Ethics, governance and practical remedies. Big Data & Society, 8(1).

Psihogios, A.M. (2022, April 11). Adolescents Are Still Waiting on a Digital Health Revolution: Accelerating Research-to-Practice Translation Through Design for Implementation. JAMA Pediatrics. Retrieved May 12, 2022.

Raamnani, M. (2021, November 26). Young Innovators & Whizz-kids That Made A Mark In 2021. https://analyticsindiamag.com/young-innovators-whizz-kids-that-made-a-mark-in-2021/

Rahman, S. (2020). Sadat Rahman (17) from Bangladesh wins International Children’s Peace Prize 2020. https://www.kidsrights.org/news/sadat-rahman-17-from-bangladesh-wins-international-childrens-peace-prize-2020/

Hart, R.A. (1992). Children’s Participation: From Tokenism to Citizenship. Innocenti Essays No. 4.

Sisson, P. (2019, December 17). Housing algorithm discrimination goes high-tech. https://archive.curbed.com/2019/12/17/21026311/mortgage-apartment-housing-algorithm-discrimination. Retrieved April 10, 2022.

SomeBuddy. (2021, August). SomeBuddy. UNICEF.

Statista. (2022, July). Global toy market: total revenue 2007–2020.

Turton, W. (2022, April 26). Tech Giants Duped Into Giving Up Data Used to Sexually Extort Minors. Bloomberg. https://www.bloomberg.com/news/articles/2022-04-26/tech-giants-duped-by-forged-requests-in-sexual-extortion-scheme. Retrieved May 16, 2022.

UNESCO. (2020). Outcome document: first draft of the Recommendation on the Ethics of Artificial Intelligence (SHS/BIO/AHEG-AI/2020/4 REV.2).

UNCRC – United Nations Committee on the Rights of the Child. (2009, July). General comment No. 12 (2009): The right of the child to be heard (CRC/C/GC/12).

Vaden, A. (2017, July 12). 5 Kid Inventors Saving Lives With Their Famous Inventions.

Webster, N. (2022, March 30). World’s youngest computer programmer wants to help children tap job market. The National News. Retrieved May 12, 2022.

WEF-World Economic Forum. (2021). Future of the Connected World Report (No. 2021).

WEF-World Economic Forum. (2022b, March). Artificial Intelligence for Children Toolkit.

WRR. (2021, November). Opgave AI. De nieuwe systeemtechnologie [Mission AI: The new system technology] (No. 105). Wetenschappelijke Raad voor het Regeringsbeleid.

Yang, B., Wei, L., & Pu, Z. (2020). Measuring and improving user experience through artificial intelligence-aided design. Frontiers in Psychology, 11. doi: 10.3389/fpsyg.2020.595374.

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.