The more I reflect on ethics statements from companies entering the K-16 space, the more fascinated I am by the idea of “ethicswashing.” It’s easy to suspect that many “ethics” conversations coming from businesses, which measure their ethics implementation by how much money it is likely to make, involve ethicswashing. Real ethics changes how AI is built and used in an organization.
What Is It?
Ethicswashing is when companies pretend to care about ethics but don’t actually change how they operate. Claude 4.0 Opus defines it in this way:
Think of it like this:
• A company says “We follow ethical AI guidelines!”
• But they only do the bare minimum
• They use fancy ethics talk to avoid real rules
• Nothing actually changes in how they build AI

Red flags to watch for:
• Lots of ethics committees but no real power
• Beautiful ethics documents that gather dust
• Big announcements with zero follow-through
• Using “ethics” to dodge government regulations

Why it matters:
• Makes people trust companies that shouldn’t be trusted
• Blocks real solutions to AI problems
• Wastes time on fake fixes
• Lets harmful AI practices continue
What would a checklist look like for K-16-oriented GenAI companies?
Ethicswashing Checklist for K-16 GenAI Companies
✓ Ethics without student data protection: Do they talk ethics but collect/sell student data without clear consent?
✓ Vague promises, no transparency: Can teachers/parents actually see how the AI makes decisions about students?
✓ Committee theater: Is their “ethics board” just advisors with no power to change products?
✓ Equity words, biased results: Do they claim to reduce bias while their AI consistently disadvantages certain student groups?
✓ Self-regulation shield: Do they wave their “ethics guidelines” to avoid education-specific regulations and audits?
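For a district that wanted to operationalize the checklist, here is a minimal sketch of what it could look like as a reusable data structure. Everything in it (the `ChecklistItem` and `VendorAssessment` names, the risk labels, the numeric scale) is hypothetical and meant only to illustrate turning the five questions into something auditable, not any vendor’s or standard body’s actual scheme.

```python
from dataclasses import dataclass, field

# Hypothetical mapping from risk labels to numeric scores.
# "unclear" is scored like "medium" because an unanswered
# question is itself a yellow flag.
RISK_SCALE = {"low": 1, "medium": 2, "unclear": 2, "higher": 3}

@dataclass
class ChecklistItem:
    question: str          # the red-flag question to ask the vendor
    risk: str = "unclear"  # "low", "medium", "higher", or "unclear"
    evidence: str = ""     # what you actually observed, in writing

@dataclass
class VendorAssessment:
    vendor: str
    items: list = field(default_factory=list)

    def total_risk(self) -> int:
        """Sum numeric risk across all checklist items (higher = worse)."""
        return sum(RISK_SCALE[item.risk] for item in self.items)

# The five checklist questions from above.
CHECKLIST = [
    "Ethics without student data protection?",
    "Vague promises, no transparency?",
    "Committee theater?",
    "Equity words, biased results?",
    "Self-regulation shield?",
]
```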
Applying the Checklist
What might this checklist look like if applied to three current vendors in the GenAI education space? I’ll leave them anonymous, but see if you can identify them by their practices.
Ethicswashing Assessment Framework: K-16 GenAI Platforms
These are three quite popular companies that target the K-16 education space. One caveat: the data for each product is only as good as what GenAI could develop or put together.
| Checklist Item | Product A | Product B | Product C |
|---|---|---|---|
| 1. Ethics without data protection? | Low Risk: Designed for institutional control; user data is not used for training models and is protected under standard privacy agreements (FERPA, COPPA). | Low Risk: Publicly states commitment to student data privacy, FERPA/COPPA compliance, and not selling user data. | Low Risk: Emphasizes a “walled garden” approach, also stating commitment to FERPA/COPPA and not selling student data. |
| 2. Vague promises, no transparency? | Lower Risk: High transparency in model choice. Users can select and see exactly which AI model (e.g., GPT-4.1, Claude 4, Gemini 2.5 Pro) is processing their request. | Higher Risk: Less transparency. The platform is built on top of models (primarily OpenAI), but users typically don’t choose or see the specific underlying model, making the process more of a “black box.” | Medium Risk: Offers transparency by using a school’s own data for context, but the core AI model’s decision-making process is not transparent to the end-user. |
| 3. Committee theater? | Low Risk: Focus is on providing platform-level controls and features (like model choice and knowledge sandboxing) rather than a public-facing ethics committee. | Unclear: Does not prominently feature a public, independent ethics board. The ethical stance is communicated through its policies and partnerships. | Unclear: Like others, an independent, empowered ethics board is not a central part of its public-facing materials. |
| 4. Equity words, biased results? | Lower Risk: Mitigates bias by allowing users to switch between different AI models, enabling them to compare outputs and challenge a single model’s perspective. | Medium Risk: Offers tools for differentiation and translation, but users are reliant on a single underlying model, which may carry inherent biases that are harder to check. | Medium Risk: Aims to reduce bias by using local school data, but this could also risk amplifying any existing biases present within that specific school’s curriculum or culture. |
| 5. Self-regulation shield? | Low Risk: Adheres to external education-specific regulations (FERPA, etc.) as a core part of its enterprise offering, rather than relying solely on its own internal guidelines. | Low Risk: Actively seeks and advertises compliance with external standards and state-level data privacy agreements, indicating a commitment beyond internal policies. | Low Risk: Compliance with external educational regulations is a key part of its value proposition for districts. |
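Continuing the hypothetical sketch from above, a district could transcribe the table’s risk labels and tally them into a rough comparison. The scores below simply mirror the labels in the table (with “Lower Risk” treated as “low”); this illustrates record-keeping over time, not a validated metric.

```python
# Risk labels transcribed from the assessment table, one per checklist item.
# "Lower Risk" is treated as "low" for scoring purposes.
ratings = {
    "Product A": ["low", "low", "low", "low", "low"],
    "Product B": ["low", "higher", "unclear", "medium", "low"],
    "Product C": ["low", "medium", "unclear", "medium", "low"],
}

for vendor, risks in ratings.items():
    assessment = VendorAssessment(
        vendor=vendor,
        items=[ChecklistItem(question=q, risk=r)
               for q, r in zip(CHECKLIST, risks)],
    )
    print(f"{vendor}: total risk {assessment.total_risk()} out of 15")
```

Tallied this way, Product A scores lowest risk overall, which is consistent with the takeaways below.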
Product A is best if your priority is transparency, user agency, and the ability to audit AI outputs.
Product B and Product C are best if your priority is a highly simplified, curated experience with strong, straightforward privacy guardrails.