Design It For Us 2025 AI Policy Platform

About us: Design It For Us is a youth-led coalition advocating for safer online platforms and social media. We aim to drive and achieve key policy reforms to protect kids, teens, and young adults online through the mobilization of youth activists, leaders, and voices.

Our AI Policy Platform is constantly evolving. To view our 2024 AI Policy Platform, click here.


INTRODUCTION

Artificial intelligence (AI) itself is not new, but the recent rise of generative AI places powerful, largely unregulated tools into everyday use. As a generation that has grown up in digital spaces, we see how AI can spark creativity and bolster learning. Yet, without clear guardrails, the same harmful patterns we witnessed with social media—unchecked expansion, exploitative design, and systemic failures to protect young people—will repeat themselves with AI.

In the case of social media, companies pledged to self-regulate and act responsibly, but those promises fell short. What began as platforms to strengthen our social connections ultimately evolved into environments where harmful, addictive features thrived. Many of us initially embraced social media for its sense of community and the ability to connect with friends, yet we soon faced the very real dangers of distorted content, high-risk viral trends, and pervasive self-harm material.

We cannot afford to make the same mistakes again. The history of social media shows how difficult it is to undo deeply entrenched harms once they become embedded in users’ lives. AI, while presenting enormous potential, also carries serious risks—especially if left solely to corporate self-regulation. Now is the time for thoughtful policy, meaningful accountability, and proactive governance to ensure that AI’s benefits do not come at the expense of our well-being.

VISION

Like our initial Policy Platform, our AI Policy Platform puts forth key principles that we strive to uphold in our support for legislation and policy meant to best protect young people online. While this technology may still be developing, the principles we wish to see governing it are derived from the years we have spent navigating an unregulated digital ecosystem. As AI continues to evolve, we will update these principles to meet the moment.

To achieve that vision, Design It For Us supports AI policies that adhere to six key principles:

  1. AI Companies Must Prioritize Safety
  2. Address the Business Model
  3. Adhere to Strict Transparency and Reporting Requirements
  4. Foster AI Literacy in Education While Preparing for Profound Economic Transformation
  5. Advance Human-AI Collaboration While Safeguarding Meaningful Work
  6. Allow Young People to Shape the Future of AI


1. AI Companies Must Prioritize Safety. Young people today did not choose to grow up in a world shaped by artificial intelligence. We did not consent to an online environment where AI-driven systems decide what is relevant, what is real, or what we deserve to feel. We deserve better than to inherit Silicon Valley’s failures.

  1. Make AI companies fully accountable for the harms their systems cause. Companies that build and deploy AI models too often evade responsibility when their systems cause harm. Key anticipated harms include psychological damage to individuals, CBRN (Chemical, Biological, Radiological, and Nuclear) risks via misuse, and potentially harmful persuasion at scale. AI firms must implement safety protocols and demonstrate through independent auditing that their systems mitigate these risks before deployment.
  2. AI must meet rigorous safety standards before deployment. We demand a world where safety is integrated from the start—where AI systems must prove they will not harm young people before being introduced into our lives. That proof should come through independent risk assessments, comprehensive red-teaming, public disclosure of model capabilities (e.g., model cards), and standardized safety evaluations.
  3. AI safety must remain an ongoing obligation, not a one-time check. It is the responsibility of AI companies to ensure safety doesn’t end at launch. Continuous oversight and rapid-response mechanisms are essential since harms can evolve over time as AI is updated or used in new contexts. Companies profiting from AI must commit to long-term monitoring and mitigation of risks to users. Protections should adapt alongside AI models, not stop once a product hits the market.


2. Address the Business Model. AI companies are exploring monetization strategies that, without oversight, may prioritize profit through dependency, engagement, and targeting young users. Rather than upholding responsible production standards, emerging business models treat people as exploitable revenue sources. AI policy must prevent companies from profiting through harmful reliance, behavioral manipulation, or circumventing youth protections to expand their market reach.

  1. Prevent AI from creating dependency and engagement traps. As AI chatbots and virtual assistants become central to daily life, companies will be tempted to maximize user engagement and retention at all costs. Policies should prevent AI providers from profiting from unhealthy psychological dependencies, parasocial relationships, or exploitative feedback loops that keep young users hooked in damaging ways. AI tools should be designed to serve our needs, not to manipulate us into endless use.
  2. AI companies must not unilaterally influence regulations governing youth online engagement. The question of where and when young people can independently consent to data collection, algorithmic profiling, and AI interactions should be determined through democratic processes that prioritize youth safety and well-being. Industry representatives have financial incentives to expand their user base and data gathering capabilities. All policy development in this domain must center protective standards rather than commercial interests, with substantive participation from young people, families, educators, and child development experts.

3. Adhere to Strict Transparency and Reporting Requirements. AI is being deployed at scale with limited public visibility into how models are trained or how they operate. AI companies should not be allowed to hide behind opaque systems that make it difficult to understand their safeguards. The lessons learned from social media’s hidden algorithms must inform an insistence on clear, enforceable disclosure.

  1. AI companies must disclose how their products are trained, tuned, and tested. Effective regulation depends on transparent disclosure of model safety attributes, in line with commitments that frontier AI developers have already made voluntarily. Companies should be required by law to publish comprehensive model cards upon first release, provide safety frameworks and documentation to bodies like the AI Safety Institute (AISI), and document risk thresholds and mitigation strategies internally.
  2. AI moderation should only replace human moderation when demonstrably safer. Human content moderation often exposes workers to traumatic material like child abuse, violence, and graphic content, causing documented psychological harm and creating unsustainable working conditions with high turnover and burnout. AI moderation offers a way to reduce these human costs, but systems should demonstrate adequate accuracy, fairness, and effectiveness before replacing human oversight completely.
  3. Whistleblower protections are essential to AI accountability. The most dangerous failures of AI—whether biased outputs, unsafe deployments, or efforts to evade regulation—are often most visible to those inside the industry. Workers who speak out must be shielded from corporate retaliation. Young professionals entering the AI workforce must be allowed to champion ethical practices without jeopardizing their careers.

4. Foster AI Literacy in Education While Preparing for Profound Economic Transformation. Education must do more than create AI-literate students—it must prepare them for significant shifts in how skills are valued and work is structured. As digital natives facing unprecedented technological change, we need education that builds adaptability for a rapidly evolving economic landscape.

  1. Educational systems must prepare for the possibility that many of today’s skills become economically obsolete. Large AI labs explicitly aim to build systems that outperform humans at most economically valuable work. Educational institutions must foster comprehensive AI literacy through technical skills, critical thinking, and ethical reasoning. Access to these resources and opportunities should be made available to all students regardless of socioeconomic status. Policymakers should incentivize bold experimentation in education to ensure no young person is left unprepared for an economy transformed by AI.
  2. Strict privacy and security guidelines must be enforced for AI tools in education. It is paramount that educational AI systems do not exploit student data or introduce undue risks to the classroom. AI tools should be subject to rigorous pedagogical and ethical reviews prior to adoption, ensuring they contribute positively to student learning without undermining trust in educational services.
  3. Educational frameworks should focus on outcomes rather than preserving traditional models. We need rigorous and unbiased evidence-based assessments for teaching methods that combine AI tools with human-led instruction. This will ensure young people acquire both the technical fluency and the distinctively human skills that remain crucial in what will likely be a more automated future.

5. Advance Human-AI Collaboration While Safeguarding Meaningful Work. AI presents an unprecedented opportunity to enhance our capabilities and quality of life. The challenge is to leverage these benefits while preserving meaningful human input and agency. As digital natives who will spend our careers interacting with AI, we should be allowed to help shape its integration rather than merely adapt to it.

  1. Career development must empower youth to guide AI integration in the workforce. Rather than focusing only on preserving existing jobs, policies should encourage continuous skill-building and create paths for young people to participate in innovative work across all sectors. This ensures that youth have a role in defining how AI is implemented, rather than simply reacting to changes.
  2. Intellectual property frameworks must protect young creators and innovators. Young people should benefit from AI-assisted creation without losing control over their creative outputs. Updating intellectual property law to safeguard individual contributions and fairly compensate those whose work trains or inspires AI system outputs will prevent a consolidation of creative power in the hands of AI model developers.

6. Allow Young People to Shape the Future of AI. The decisions being made about AI today will define the world we inherit tomorrow. We cannot afford to be excluded. Those who build and regulate AI systems must be accountable to the generation that will live with these consequences the longest.

  1. AI governance must create clear pathways for young people to develop expertise. True influence requires more than a seat at the table. Young people need equitable access to education, mentorship, and policymaking experience so that they can engage meaningfully in AI governance. Public investments in AI research, training programs, and structured oversight should empower young people to become leaders in determining AI’s trajectory.
  2. AI governance must prioritize transparency and public accountability. AI continues to be shaped by private interests behind closed doors. Policymakers should ensure that AI governance remains open, equitable, and directly responsive to the public, including the needs of young people. This means enabling independent research, rigorous investigation, and clear legal channels to challenge AI systems that pose serious risks to society.

CONCLUSION

We’ve learned a valuable lesson from the fight to reform social media that must be at the forefront of all technology policy: Big Tech is not capable of self-regulation. To believe otherwise is to put our safety, health, and well-being at risk. It is the responsibility of policymakers and implementers, on Capitol Hill, behind computer screens, and in boardrooms, to consider AI as a second chance to get technology policy right. As youth advocates and as the future of our AI-driven society, we deserve an opportunity to thrive as this new and uncertain technology rapidly advances. Centering our voices and our priorities can and will improve our collective future.

