Last reviewed: May 2026
Every AI tool has an ethics profile, an ecological footprint, and a governance structure. Most hide all three. This directory rates 47 of them transparently, so you know what you are actually choosing.
How to read the ratings below
Suggest a tool, flag a score, or share anything else. Your input shapes how this directory grows.
Submissions go through a short Google Form. Sara reviews every submission and updates the directory when evidence warrants it.
Two AI systems reviewed this directory independently. Here is what they said and how we responded.
Ratings are based on publicly available information, published policies, academic research, and applied sustainability frameworks.
Evaluates whether the tool is built and governed responsibly. Draws on AI ethics literature, UNESCO AI Ethics principles, and published governance disclosures.
Grounded in the Framework for Strategic Sustainable Development (Blekinge Institute of Technology). Asks whether the tool supports a society that can meet everyone's needs within planetary boundaries, now and in the future.
Based on Regenerative AI Cultures principles. Goes beyond "do no harm." Asks whether a tool actively gives back to people and planet more than it takes: centering marginalized voices, preserving cultural and biological diversity, supporting human dignity, and operating as a participant in community rather than as a pure utility.
Framework: Regenerative AI Cultures (SRAGI). See also: Doughnut Economics Action Lab, Kate Raworth.
Energy and water use ratings reflect the estimated footprint of using each tool at a typical organizational scale. Both are shown as color-coded dots on every card. Water use refers to data center cooling water consumption, which is a material but often invisible resource cost of AI inference.
Source: Published data center WUE figures, Google Environmental Reports, Microsoft Sustainability Reports, AWS infrastructure disclosures, and academic literature on AI inference water consumption (Luccioni et al., 2023; Li et al., 2023).
Data sovereignty asks who controls data about a community, who profits from it, and who decides how it is used. In AI, this is most acute for Indigenous communities whose language, cultural knowledge, and identity data have historically been taken without consent and used to train commercial systems that return no benefit to the source community.
Framework: CARE Principles for Indigenous Data Governance. Reference implementation: Te Hiku Media Kaitiakitanga License (Aotearoa New Zealand).
All ratings draw on: published terms of service and privacy policies, founder statements and investor disclosures, peer-reviewed academic research, investigative journalism, organizational sustainability reports, and direct product testing. Sources are cross-referenced where possible.
Ratings reflect information available as of May 2026. AI companies change policies frequently; always verify current documentation before organizational deployment.
The AI Compass rates 47 AI tools on three dimensions that most directories ignore: ethics, sustainability, and regenerative impact. Use it to make more values-aligned decisions about which tools you adopt, recommend, or fund.
Use the search bar to find tools by name, company, or function. Filter by category to focus on a specific type of tool. Filter by risk level to surface tools that meet your organization's risk threshold.
Each tool gets three scores from 0 to 5. Ethics covers training data consent, safety guardrails, and labor practices. Sustainability is grounded in the FSSD framework and covers energy and water footprint. Regenerative asks whether the tool gives back more than it takes. Click a score dimension in the top badge row to read the full rubric.
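The three-score scheme above could be modeled as a simple record. This is an illustrative sketch only; the field names and validation are assumptions, not the directory's actual schema:

```python
from dataclasses import dataclass


@dataclass
class ToolRating:
    """Illustrative record for one directory card (hypothetical schema)."""
    name: str
    ethics: int          # 0-5: training data consent, guardrails, labor practices
    sustainability: int  # 0-5: FSSD-grounded energy and water footprint
    regenerative: int    # 0-5: does the tool give back more than it takes?

    def __post_init__(self):
        # Enforce the 0-5 range each dimension uses
        for field in ("ethics", "sustainability", "regenerative"):
            score = getattr(self, field)
            if not 0 <= score <= 5:
                raise ValueError(f"{field} must be 0-5, got {score}")


card = ToolRating("ExampleTool", ethics=4, sustainability=3, regenerative=2)
```

A record like this makes the rubric's bounds explicit: any score outside 0-5 is rejected at construction time.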
Sort by any score column to find the best performers in your area of concern. Click Details on any card to see the full profile including founder ethos, security practices, and FSSD and regenerative assessments. Click the tool name to visit its website directly.
Tools with the most favorable ethics-to-footprint ratio in their category carry a Least Extractive badge. These are the tools that deliver the most value with the smallest extractive footprint.
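One plausible way to pick a "Least Extractive" tool within a category is to maximize the ratio of ethics score to footprint cost. The exact formula here is an assumption for illustration, not the directory's published method:

```python
def least_extractive(tools):
    """Return the tool with the highest ethics-to-footprint ratio.

    Each tool is (name, ethics 0-5, footprint cost 1-5, lower is better).
    The ratio formula is a hypothetical sketch, not the directory's
    actual badge criterion.
    """
    def ratio(tool):
        name, ethics, footprint = tool
        return ethics / max(footprint, 1)  # guard against a zero footprint
    return max(tools, key=ratio)


category = [
    ("ToolA", 4, 2),
    ("ToolB", 5, 4),
    ("ToolC", 3, 1),
]
winner = least_extractive(category)  # ToolC wins: ratio 3/1 = 3.0
```

Under this sketch, a tool with a middling ethics score but a very small footprint can out-rank a higher-scoring but more extractive competitor, which matches the badge's intent.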
AI companies change policies and practices constantly. Use the Give Feedback button to flag anything that looks outdated or missing. Your input shapes future updates.