Responsible AI
Responsible AI has been a key focus for All Tech Is Human since our founding in 2018, when the first sets of principles and guidelines were being drawn up by pioneering NGOs and early-adopting tech giants.
We maintain a diverse network of AI experts who contribute to the development of Responsible AI (RAI) guidelines, conduct bias audits of AI systems, offer insights on the societal impacts of AI deployment, and more, as part of their work with Big Tech companies, non-profits, startups, and governments around the world.
Through our multifaceted initiatives, All Tech Is Human aims to create a vibrant ecosystem where Responsible AI professionals can collaborate, share knowledge, and work towards ensuring that AI systems are designed, developed, and deployed in ways that benefit society as a whole.
Interested in collaborating with us on a Responsible AI workshop, report, or other initiative?
The Latest
Responsible AI Impact Report (2025)
REPORT
Our inaugural Responsible AI Impact Report is a crucial roadmap of the most urgent risks, emerging safeguards, and public-interest solutions shaping how AI will impact society in the year ahead.
This report provides an assessment of the global RAI landscape, highlighting the most influential civil society contributions that shape AI governance, assurance, safety, and public-interest infrastructure. It documents an RAI field in transition and looks to 2026 with clear priorities.
AI Assurance Ecosystem Workshop Summary Report (2025)
SUMMARY REPORT
This is a summary of the AI Assurance Ecosystem workshop co-hosted by All Tech Is Human, techUK, and Trilligent in London. The document captures insights and perspectives shared by presenters and participants during the workshop’s breakout discussions, along with a summary of participant engagement throughout the event.
ATIH x IBM Research: AI Governance Workshop Summary
This report is a summary of the AI Governance: AI Assessments, Audits, and Public Oversight Methods workshop that All Tech Is Human organized with sponsorship from IBM Research at Betaworks in New York City on September 20th, 2024. Download Report
ATIH x TechBetter: AI Governance Maturity Model Hackathon
We brought diverse perspectives into the development of the model and provided AI ethics education. Participants practiced AI ethics skills and used the maturity model to evaluate either their own company using internal information or any other company using public information. Download Report
ATIH x Thorn: AI Industry Commitments & White Paper
We partnered with Thorn to guard against AI-generated child sexual abuse material, co-authoring a White Paper to serve as a resource for a set of co-defined Safety by Design principles and mitigations for generative AI in the context of child sexual abuse. Download Report
Gatherings, Livestreams & More
On May 21, 2025, All Tech Is Human gathered at the Finnish Consulate in Manhattan for a convening focused on Strengthening Multistakeholder Collaboration in Responsible AI, followed by a reception at the Finnish Residence.
The afternoon began with welcome and opening remarks by All Tech Is Human’s Founder & President David Ryan Polgar and Finnish Ambassador Jarmo Sareva before exploring the topic with two panels.
In a dynamic and thought-provoking discussion, Panel 1, The Role of Civil Society and Research in Shaping Responsible AI Policy, moderated by All Tech Is Human Executive Director Rebekah Tweed, explored the role of civil society organizations in shaping the trajectory of artificial intelligence amid political upheaval and technological acceleration. Panelists included Nabiha Syed, Executive Director of Mozilla Foundation; Serena Oduro, Policy Manager at Data & Society Research Institute; and Miranda Bogen, Director of the Center for Democracy & Technology’s AI Governance Lab.
In the second panel, a nuanced and globally minded discussion on Aligning Transatlantic AI Governance in a Shifting Regulatory Landscape, moderated by Tatiana Rice, Director of U.S. AI Legislation at Future of Privacy Forum, speakers explored the challenges of developing inclusive, effective AI governance amid shifting legal, political, and technological landscapes. Panelists included Mala Kumar, Head of Impact at Humane Intelligence; Shahla Naimi, Policy Director for AI & Human Rights at Salesforce; and Lucia Velasco, Senior Advisor to the United Nations Under-Secretary-General and Special Envoy on Technology. Read More
If you have an idea for a topic or an event, or would like to recommend a guest speaker, please email us at [email protected].
FINNISH CONSULATE | NEW YORK CITY
Strengthening Multistakeholder Collaboration in Responsible AI
Hacking Algorithms Around Factuality, Bias, and Misdirection
We collaborated with Humane Intelligence to present Hacking Algorithms Around Factuality, Bias, and Misdirection. With the support of Google.org, Humane Intelligence is building themed programs that aim to build community and professionalize the practice of algorithmic assessment. Read More
Livestream with Renée Cummings on Policy, Best Practices, & What’s Next
All Tech Is Human regularly engages with Responsible AI leaders for “This Month in Responsible AI,” a livestream series led by Rebekah Tweed; watch this livestream with Renée Cummings (All Tech Is Human Senior Fellow for AI, Data, & Public Policy) on Policy, Best Practices, & What’s Next. Watch Now
Livestream with Karen Hao, author of the New York Times bestseller Empire of AI
Hao’s book is the culmination of over 300 interviews and 7 years of wrestling with questions around Responsible AI. It is a call to action that offers the inside story of OpenAI as a lens for understanding this moment: the tech elite's extraordinary seizure of power and its threat to democracy. See Event
AI Companions & Chatbots
This guide's creation was led by All Tech Is Human’s Princeton University GradFUTURES Social Impact Fellow, Rose Guingrich, in collaboration with our Associate Director Sandra Khalil.
AI companions and chatbots raise a rapidly evolving set of issues touching on design, law, psychology, ethics, sociology, and more. Unpacking the values, tensions, tradeoffs, and best practices around AI companions and chatbots is an ongoing initiative at All Tech Is Human, so please reach out to us with your ideas for collaboration.
Responsible AI Course Series
This series of courses equips participants with the foundational knowledge needed to begin operationalizing Responsible AI and AI governance programs within organizations.
Rooted in practical insights and real-world applications, the series, which can be completed in just a few hours, covers the principles, history, and evolution of Responsible AI as a field, along with current roles, industry best practices, and the evolving governance landscape.
Courses include:
Principles of Responsible AI: Identifying, Understanding, and Mitigating Risks
The History of Responsible AI: From Principles to Professionalization
Operationalizing AI Governance: Strategy & Foundations
Operationalizing AI Governance: Execution & Scaling
Governing Agentic AI Systems
All courses are made freely available thanks to the support of the Patrick J. McGovern Foundation.
What is Responsible AI?
Responsible AI is revolutionizing the tech landscape by prioritizing the ethical development and deployment of artificial intelligence systems. As AI becomes increasingly integrated into our daily lives, it's crucial to address concerns about transparency, fairness, and accountability.
Industry leaders and researchers are collaborating to establish guidelines that ensure AI technologies benefit society while minimizing potential risks. Practitioners focus on identifying and mitigating bias in algorithms, protecting user privacy, and maintaining human oversight in critical decision-making processes.
By embracing responsible AI practices, companies can build trust with consumers, comply with emerging regulations, and contribute to a sustainable digital future. As we navigate the complexities of AI advancement, the principles of responsible AI serve as a compass, guiding us towards innovative solutions that are both powerful and ethically sound.
Learn More About Responsible AI
Safety by Design for Generative AI: Preventing Child Sexual Abuse (Thorn & ATIH): In collaboration with Thorn and All Tech Is Human, Amazon, Anthropic, Civitai, Google, Invoke, Meta, Metaphysic, Microsoft, Mistral AI, OpenAI, and Stability AI have publicly committed to Safety by Design principles. These principles guard against the creation and spread of AI-generated child sexual abuse material (AIG-CSAM) and other sexual harms against children.
The 2023 Responsible Tech Guide: Our annual comprehensive guide to the people, organizations, and ideas of the Responsible Tech ecosystem and actionable ways to get involved.
The Business Case for AI Ethics: Interviews with 28 leaders in Responsible AI about ways to operationalize AI Ethics and move from theory to action.
AI and Human Rights Report: This report outlines key issues and opportunities at the intersection of AI and human rights, across key areas: Automated Decision Making Systems and Civil Rights; Data Privacy; Synthetic Media and Information Integrity; Content Moderation; Healthcare; Surveillance Technology; Predictive Technology and Criminal Justice; and Cybersecurity and Autonomous Weapons.
Co-Creating a Better Tech Future: We explore community visions for what an ideal tech future looks like and what we would need to do to make that a reality. The report draws on dozens of new perspectives, along with the insight our non-profit has gathered from previous reports, our large global Slack community, and the numerous summits and mixers we have held.
Responsible AI Governance Maturity Model Hackathon Report (TechBetter & ATIH): An interdisciplinary research project to develop a Responsible AI Governance Maturity Model that is rigorous, based on widely accepted resources, and empirically informed.
Video: “What are the problems with considering the existential risks of #AI? w/ Dr. Joy Buolamwini”
Video: Kashmir Hill in conversation with Rebekah Tweed | Your Face Belongs To Us Book Tour
AI Incident Database (PAI): The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
AI Risk Management Framework (NIST): In collaboration with the private and public sectors, NIST has developed a framework to better manage risks to individuals, organizations, and society associated with artificial intelligence (AI). The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
EU AI Act (European Parliament): The use of artificial intelligence in the EU will be regulated by the AI Act, the world’s first comprehensive AI law.
Generative AI Red Teaming Challenge: Transparency Report (Humane Intelligence): Humane Intelligence, Seed AI and AI Village partnered to hold the first public red teaming event for closed source API models at DEF CON 2023. This paper provides some insights into the potential and promise of public red teaming, framed around the Generative AI Red Teaming Challenge conducted at AI Village within DEF CON 31.
Global AI Legislation Tracker (IAPP): Countries worldwide are designing and implementing AI governance legislation and policies commensurate to the velocity and variety of proliferating AI-powered technologies. This tracker identifies legislative or policy developments or both in a subset of jurisdictions.
OECD AI Principles (OECD): Countries use the OECD AI Principles and related tools to shape policies and create AI risk frameworks, building a foundation for global interoperability between jurisdictions. The European Union, the Council of Europe, the United States, the United Nations, and other jurisdictions use the OECD’s definition of an AI system and lifecycle in their legislative and regulatory frameworks and guidance.
AI Risk Repository (MIT): The AI Risk Repository has three parts: The AI Risk Database captures 700+ risks extracted from 43 existing frameworks, with quotes and page numbers. The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur. The Domain Taxonomy of AI Risks classifies these risks into seven domains.
AI Vulnerability Database (AVID): Open-source knowledge base of failure modes for AI models, datasets, and systems. Includes a taxonomy of the different avenues through which an AI system can fail, and a database of evaluation examples.
Blueprint for an AI Bill of Rights (White House OSTP): White paper published by the White House Office of Science and Technology Policy. It is intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems.
A large and growing number of impactful organizations are focused on Responsible AI. These organizations research, advocate, monitor, and evaluate AI systems to mitigate their known and potential harms, working to ensure that these tools are aligned with the public interest. If you're interested in responsible AI, here are a few organizations you should know:
Ada Lovelace Institute: An independent research institute with a mission to ensure that data and AI work for people and society. They believe that a world where data and AI work for people and society is a world in which the opportunities, benefits and privileges generated by data and AI are justly and equitably distributed and experienced.
AI Risk and Vulnerability Alliance (ARVA): Their mission is to empower communities to recognize, diagnose, and manage vulnerabilities in AI that affect them. Their vision is to work across disciplines to make AI safer for everyone by exposing vulnerabilities in the AI ecosystem. They build tools for practitioners and communities to interrogate the growing number of models capable of producing material harm to our world. Together they operationalize AI risk management to protect the future of our world.
Alan Turing Institute: The Alan Turing Institute, headquartered in the British Library, London, was created as the national institute for data science in 2015. In 2017, as a result of a government recommendation, they added artificial intelligence to their remit.
Algorithmic Justice League: AJL’s mission is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms.
Center for AI and Digital Policy: The Center for AI and Digital Policy aims to ensure that artificial intelligence and digital policies promote a better society, more fair, more just, and more accountable – a world where technology promotes broad social inclusion based on fundamental rights, democratic institutions, and the rule of law. As an independent non-profit research organization, they assess national AI policies and practices, train AI policy leaders, and promote democratic values for AI.
Data & Society Research Institute: Data & Society is an independent nonprofit research organization. They study the social implications of data, automation, and AI, producing original research to ground informed public debate about emerging technology. Data & Society believes that technology policy must be grounded in empirical evidence, account for tech’s real-world impacts, and serve the public.
Distributed AI Research Institute (DAIR): An interdisciplinary and globally distributed AI research institute rooted in the belief that AI is not inevitable, its harms are preventable, and, when its production and deployment include diverse perspectives and deliberate processes, it can be beneficial. Their research reflects their lived experiences and centers their communities.
ForHumanity: Their purpose is to examine and analyze the downside risks associated with the ubiquitous advance of AI & Automation. With the assistance of the ForHumanity Fellows, they developed Independent Audit and Governance of Contact Tracing. They are developing the larger work of Independent Audit of AI Systems to build an infrastructure of trust for the world's autonomous systems.
Humane Intelligence: Humane Intelligence is a tech nonprofit building a community of practice around algorithmic evaluations. The organization, led by Dr. Rumman Chowdhury, is building a programming platform for model evaluators and individuals seeking to learn more about model evaluations.
International Association of Algorithmic Auditors (IAAA): The IAAA is a community of practice that aims to advance and organize the algorithmic auditing profession, promote AI auditing standards, certify best practices and contribute to the emergence of Responsible AI.
Partnership on AI: Partnership on AI develops tools, recommendations, and other resources by inviting voices from across the AI community and beyond to share insights that can be synthesized into actionable guidance. They then work to drive adoption in practice, inform public policy, and advance public understanding. Through dialogue, research, and education, PAI is addressing the most important and difficult questions concerning the future of AI.
RAI Institute: The Responsible AI Institute (RAI Institute) is a global and member-driven non-profit dedicated to enabling successful responsible AI efforts in organizations. The RAI Institute’s conformity assessments and certifications for AI systems support practitioners as they navigate the complex landscape of creating, selling or buying AI products.
The Future Society: The Future Society (TFS) is an independent 501(c)(3) nonprofit organization based in the United States and Europe. Their mission is to align artificial intelligence through better governance. They develop, advocate for, and facilitate the implementation of AI governance mechanisms, ranging from laws and regulations to voluntary frameworks such as global principles, norms, standards, and corporate policies.
Women in AI Ethics: Women in AI Ethics™ (WAIE) is a fiscally sponsored project of Social Good Fund, a California nonprofit corporation and registered 501(c)(3) organization with a mission to increase recognition, representation, and empowerment of brilliant women in this space who are working hard to save humanity from the dark side of AI.
See a full list of organizations: http://www.alltechishuman.org/responsible-tech-organizations
Apply to join All Tech Is Human’s Slack with over 10K members across 100+ countries!
Join the Responsible AI Community
The technical and social complexity of AI systems requires a multi-voice effort to explore what AI can do, what it should do, and what it could do in the future. Want to get involved?
We have a large (and growing) community of Responsible AI practitioners at All Tech Is Human. Here are some ways you can meet other practitioners, learn more, or work on something:
Join our Responsible Tech Mentorship Program
Join the Responsible Tech Conversation and Build Your Network on ATIH’s Slack Community
Check out our RAI Working Group
Find Your Next Role In Responsible AI
Working in responsible AI is a dynamic and intellectually stimulating field that combines technical expertise with ethical considerations. Professionals in this area often find themselves at the forefront of technological innovation while grappling with complex moral dilemmas. Daily tasks may include developing algorithms that minimize bias, conducting rigorous testing to ensure fairness in AI systems, and collaborating with diverse teams to address potential societal impacts.
The work requires a unique blend of skills, including strong technical knowledge, critical thinking, and an understanding of social sciences and ethics. Practitioners often feel a sense of purpose, knowing their efforts contribute to the development of AI technologies that benefit society while mitigating potential risks. The field is rapidly evolving, offering continuous learning opportunities and the chance to shape the future of AI in a responsible manner. See our Job Board
According to ATIH’s Job Board, most Responsible AI roles posted require 5-6 years (35%), 7-9 years (32%), or 10+ years (23%) of experience. Here are some recent job titles for Responsible AI practitioners:
AI Ethics & Safety Manager
Sr. Director, Responsible AI Testing, Evaluation, and Alignment
Technical AI Ethicist/AI Red Teamer
Director, Responsible AI
Research Scientist, Model Safety
Applied Scientist, Responsible AI
AI/ML Privacy Engineer
Software Developer/Engineer
Data Scientist, Responsible AI
Research Analyst/Associate/Researcher, Trustworthy ML
Research Engineer, Trustworthy AI