Design It For Us AI Policy

About us: Design It For Us is a youth-led coalition advocating for safer online platforms and social media. We aim to drive and achieve key policy reforms to protect kids, teens, and young adults online through the mobilization of youth activists, leaders, and voices.

INTRODUCTION

Artificial intelligence (AI) is not new. But the rise of generative AI has put powerful tools within everyone's reach that currently go almost entirely unchecked. As a generation of digital natives, we see how AI tools offer impressive opportunities for growth, innovation, creativity, and learning. But if we are not careful and measured in how these products are designed, the same patterns of harm we’ve seen on social media will repeat themselves with AI tools.

With social media, companies consistently told regulators to “trust us,” but failed to live up to their promises, leaving a generation completely hooked on products that became more addictive, harmful, and unsafe over time. What started as an exciting place to foster our closest connections quickly became a desolate environment where we clung to the benefits while visibly experiencing the harms. Many of us started our journey on social media excited by the possibilities: the opportunity to connect with classmates, make friends, and find community. But we also found ourselves subjected to myriad harms, from eating disorder content to risky social challenges, self-harm content, and more.

We cannot make the same mistake with AI. We have seen with social media that trying to undo harms in real time is next to impossible without clear and thoughtful regulations. As much as AI presents unique opportunities, it also presents tremendous risks, especially if regulators do not move quickly.

VISION

Like our policy platform, our AI policy puts forth key principles that we strive to uphold in our support for legislation and policy meant to best protect kids, teens, and young people online. While this technology may be new, the principles we wish to see governing it are not. As technology and the AI ecosystem continue to evolve, we will update these principles to meet the moment.

To achieve that vision, Design It For Us supports AI policies that adhere to 7 key principles:

  1. The responsibility for safety rests on AI companies
  2. Give young people a meaningful seat at the table
  3. Address the business model
  4. Adhere to strict transparency and reporting requirements
  5. Promote suitable use of AI without preventing or blocking its benefits
  6. Provide access to AI technologies in the classroom, with clear, strict restrictions and guidance
  7. Ensure AI technologies are not being used to replace human skills


1. The responsibility for safety rests on AI companies. AI companies are the designers, creators, and manufacturers of product features that may be harmful to kids, teens, and young adults. Tech policy should put the responsibility for safety on AI companies – not kids or parents – to make their products safer.

  1. Policies that hold AI companies accountable should require platforms to uphold the highest safety standards by design and by default. Just as car manufacturers must build in seatbelts, platforms should be required to be safe upon entry, so that young users can safely experience the services the platform provides. For example, chatbots should ensure they serve age-appropriate content.
  2. Policies that hold AI companies accountable should restrict the testing, training, and rollout of AI products without adequate third-party safety testing. Companies and platforms should not have an unmitigated ability to subject young users to potentially harmful applications or features that are not yet refined. Without guardrails, platforms are already designing and implementing features that harm younger users. For example, Snapchat’s My AI offered a 13-year-old tips on sex with a 31-year-old. One study found that ChatGPT engages in toxic dialogue and propagates incorrect stereotypes about countries, religions, and races, among others.1


2. Give young people a meaningful seat at the table. Prioritize providing young people with meaningful and intentional seats at the table to discuss effective regulation for our generation.

  1. Policies and policymaking that meaningfully include young people should ensure that young people are present in regulatory decision-making processes pertaining to AI, apportioning space for young leaders on expert agency panels.
  2. Policies and policymaking that meaningfully include young people should ensure that there are young people present in legislative decision-making processes pertaining to AI, creating space for young people to inform legislation.
  3. Policies and policymaking that meaningfully include young people should establish youth involvement that is sustainable and not symbolic. Young people should be invited to provide their experiential expertise and perspective as early as possible and should be continuously included and represented in the policymaking process from ideation to execution and implementation.

3. Address the business model. Proactively and directly regulate the surveillance capitalism model used by AI companies to exploit users’ information to compete, oftentimes unfairly. Make privacy a fundamental and essential requirement, embedded throughout the lifecycle of AI products.

  1. Policies that prioritize privacy should minimize the amount of personal data an AI platform can collect from users or feed into its models at any point. Companies should be restricted from collecting and using interoperable personal data to train and deploy models. Critically, Big Tech companies like Google should be restricted from using data from their other businesses to train and improve their new AI models.
  2. Policies should recognize that younger users of AI products deserve heightened, stricter privacy settings that prevent the abuse of their personal information and safeguard the use of their data. Products should provide appropriate privacy settings that consider the well-being of all users, especially younger users; these should be on by default, with the ability to be customized. Explicitly, this does not call for age requirements.
  3. Policies that address the surveillance advertising business model should eliminate the mechanisms that target youth attention and engagement to drive profit. Meta, for example, is pushing AI products to retain younger users.2

4. Adhere to strict transparency and reporting requirements. Companies should be responsible for enacting policies that clearly and publicly outline and disclose how tools are created and what they are used for. This must include a clear delineation and publication of a tool’s voluntary/opt-in and involuntary/default use, application, and impact.

  1. Policies that uphold strict transparency and reporting standards should require AI companies to explain publicly how their AI platform(s):
    1. Makes decisions
    2. Allows third parties, academics, and government entities and agencies access to data and information
    3. Tests for safety and privacy compliance standards
    4. Ensures prevention of dangers and biases
    5. Engages human moderators
  2. Policies that uphold strict transparency and reporting standards should provide heightened mechanisms for young users to report harmful, unsafe, and/or dangerous content to the AI platform so that human moderators can review both the content and the platform.
  3. Policies that uphold strict transparency and reporting standards should prevent the creation and proliferation of misinformation and disinformation by requiring the use of labels and/or disclosures that denote the use of AI in the creation of political content (see Appendix for definition of political content). Polling has found that teenagers (13-17) are more susceptible to online conspiracy theories,3 which can be promoted via the undisclosed use of AI in political interactions online.

5. Promote suitable use of AI without preventing or blocking its benefits. Proposals across the U.S. banning access to social media for teenagers do little to address problems related to social media. The same goes for AI technologies. Policies should seek to allow for access while mitigating risk and ensuring safety.

  1. Policies that maintain access should not use bans or other purely restrictive options in an attempt to create safer spaces. Bans do not make tools safer – rather, they tempt young users to find ways around them, exposing them to harms that remain unaddressed when we simply try to keep young people away. Younger users who have grown up alongside these tools have found, and will continue to find, ways around barriers, which can make unsafe tools all the more enticing.
  2. Policies that maintain access should prioritize safety and privacy at the point of product ideation, design, and production. The onus should be on companies, designers, and engineers, who have the capacity to build in safety and privacy as default product features before their products reach any user.
  3. Policies that maintain access should preserve rather than inhibit a user’s ability to actively find resources and community spaces. Policies must uphold their responsibility to safeguard, rather than undo, vital community spaces by intentionally and thoroughly examining and understanding how any new policy may have adverse effects on vulnerable communities. Companies embarking upon policy implementation should be responsible for reaching out to communities to meaningfully and holistically understand any potential risks prior to implementation.

6. Provide access to AI technologies in the classroom, with clear, strict restrictions and guidance. As AI plays a greater role in education, policies in this area should balance reaping the benefits of available technologies without presuming they are a panacea for all educational needs. Telling students not to use ChatGPT and other LLMs does little to address the nuance of their impacts. Strict guidelines should govern their usage while allowing educators and students to exercise educational creativity and learn how to use these technologies effectively and safely.

  1. Policies around the use of AI in educational spaces should create clear guidance for educators, allowing them to write curricula that specify when AI products are suitable for assignments, examinations, classroom activities, and other modes of learning. As with the introduction of the calculator to classrooms, there should be clear guidance that allows teachers and educators to decide when its use is and is not appropriate for educational purposes.
  2. Policies around the use of AI in educational spaces should prioritize understanding of the benefits and drawbacks of these technologies, equipping teachers and educators with guides, training, and further professional development to learn how to use these technologies safely and productively to educate students and allow for their appropriate use in educational spaces. Policies should bolster equal access to this kind of education to minimize disparities.
  3. Policies around the use of AI in educational spaces should require strict, continuously updated guidance on which AI products are sufficiently privacy-preserving for use in educational environments, recognizing that “edtech” products often capitalize on collecting information from young users, employing deceptive consent mechanisms, giving students and their parents or guardians little choice over data collection, and providing few protections for young users’ data.4
  4. Policies around the use of AI in educational spaces should create guidance that restricts the use of AI writing checkers and similar features until further research and evidence show that these technologies produce accurate and unbiased outputs. Early tools like Turnitin’s AI Writing Detection5 have been incorrectly applied to check students’ work in educational environments, and reliance on them raises equity concerns.6 7

7. Ensure AI technologies are not being used to replace human skills. AI appears most frightening and threatening when it suggests that humans and our skills will become obsolete. It is critical that AI advancement augment human productivity rather than replace it; otherwise, the potential for job loss threatens both young people’s futures and their immediate economic security. We must continue to see AI as a tool that can amplify and facilitate our existing responsibilities, and innovate toward productivity that makes our lives better.

  1. Policies should support skill-building, career development, and AI literacy programs aimed at equipping young people with the tools to navigate an increasingly automated world. This starts with integrating AI literacy topics into K-12 computer science education and extends to creating career counseling programs that allow young people to understand how their job pursuits will be disrupted by AI and plan accordingly.
  2. Policies should encourage AI innovation designed to uplift humans in critical areas like healthcare and education, rather than simply displace humans through task automation. AI R&D agendas are often directed towards skill replacement as opposed to skill augmentation; this must change.8
  3. Policies that amplify human capacity should ensure that human creators’ copyright is protected. Copyright protections allow small creators, not just large companies, to be protected and to benefit from their work.

CONCLUSION

We’ve learned a valuable lesson from the fight to reform social media that must be at the forefront of all responsible tech policy: we cannot trust that Big Tech will self-regulate against its own profit motives. Doing so puts all of our safety, health, and well-being at risk. It is the responsibility of policymakers and implementers on Capitol Hill, behind computer screens, and in boardrooms to treat AI as a second chance to get it right. As youth advocates and as the future of our society, we deserve an opportunity to thrive even, and especially, as technology rapidly advances. Centering our voices and our priorities can improve online experiences for all.


  1. https://arxiv.org/pdf/2304.05335.pdf
  2. https://www.wsj.com/tech/ai/meta-ai-chatbot-younger-users-dab6cb32?ns=prod/accounts-wsj
  3. https://www.theguardian.com/us-news/2023/aug/16/teens-online-conspiracies-study
  4. https://news.uchicago.edu/story/uchicago-nyu-team-find-online-education-tools-pose-privacy-risks
  5. https://www.turnitin.com/solutions/topics/ai-writing/
  6. https://www.washingtonpost.com/technology/2023/06/02/turnitin-ai-cheating-detector-accuracy/
  7. https://www.bloomberg.com/news/newsletters/2023-09-21/universities-rethink-using-ai-writing-detectors-to-vet-students-work
  8. https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it
