Design It For Us Unveils Youth AI Policy Framework

Design It For Us today unveiled a policy framework on the regulation of artificial intelligence. The framework includes key principles that will inform the coalition’s advocacy efforts related to the rapid deployment of AI technology and reinforces the critical need for youth voices in tech regulation. Aimed at countering the threats of AI ahead of the 2024 U.S. elections and beyond, Design It For Us’s AI policy framework serves as a guide for long-term, sustainable, and responsible AI regulation.

“Having grown up in this digital era, I’ve witnessed how social media companies have failed to live up to their promises to protect my generation,” said Neha Shukla, core team member of Design It For Us and Chair of the World Economic Forum’s Generation AI Youth Council. “AI cannot be left unchecked if we hope to prevent the same patterns of harm we’ve seen on social media from occurring with AI tools. With a commitment to keeping up with the ever-evolving technology and AI ecosystem, our new AI policy platform prioritizes the protection of kids, teens, and young people online.” 

The framework, which can be read here, includes seven key policy priorities:

  1. The responsibility for safety rests on AI companies. 
  2. Give young people a meaningful seat at the table. 
  3. Address the business model. 
  4. Adhere to strict transparency and reporting requirements. 
  5. Promote suitable use of AI without preventing or blocking its benefits. 
  6. Provide access to AI technologies in the classroom, with clear, strict restrictions and guidance. 
  7. Ensure AI technologies are not being used to replace human skills.

Design It For Us also partnered with Data For Progress on new polling that reveals the urgency of addressing the rapid deployment of AI technology by tech companies and ensuring their products are safe for public use. Toplines from the poll include: 

  • 86% of voters strongly agree that AI companies should enact policies that clearly and publicly outline how they develop their products and what they are used for. 
  • 67% of voters also support AI tools like “My AI” undergoing rigorous third-party testing and safety checks before being launched to the general public. 
  • After hearing about new AI tools that are being rolled out in classrooms in the upcoming Fall semester, nearly 9 in 10 voters want these tools to undergo safety checks and third-party testing before they are used by students. 
  • Both voters who are parents of minors (85%) and adult voters overall (92%) believe these tools should undergo third-party testing before being used by students this Fall. 

###
