FERPA-compliant #GenAI, No Training Data #claude #chatgpt #ai #BoodleBox

Who has your back when it comes to data privacy? Not one of the major Gen AI providers (OpenAI, Anthropic, Google Gemini, Microsoft Copilot, and the rest) safeguards consumer data by default. It’s a simple expectation, isn’t it? You pay $20 a month for these consumer Gen AI tiers, and you expect them not to use your data for training.

John Dolman kicks off a powerful conversation on LinkedIn:

So the last bastion of not training on your data as a default falls.

Whilst I understand the use of user interaction to refine the model and I trust Anthropic further than the OpenAI cowboys or the Google corporate sharks, I’m under no illusions that, despite their high-minded ideals, stated goals, constitutional AI model etc., Anthropic is a commercial entity and works hand in glove with big businesses and governments who may not always have the best of intentions towards our data, privacy and IP.

This is not paranoia (I genuinely don’t think the CIA or CCP are spying on me through my devices), but I know enough to know that my data is part of a system that filters information into the hands of those who use it in an aggregated form for their own benefit and not necessarily mine.

So I have actively turned off the ‘improve the model for all’ setting on my Claude account – to be fair, they literally put this choice directly in front of you when you log in; it’s not hidden away for you to work out for yourself.

As educators using AI we all need to make sure we do this or we run the risk of breaking a number of guidelines and GDPR rules (not that I think anyone in the UK education system at any level understands anywhere near enough to enforce the labyrinthine ‘rules’ that exist, least of all those who created them and arguably should be applying them).

So it’s with a bit of a sigh that I have to make the choice to turn off the ‘improve the model for all’ in my Anthropic account.

But these Gen AI providers, even Anthropic’s Claude, which purported to be about ethics and protecting data, can’t help themselves. They’re gigantic vacuum cleaners, hoovering up your personal information and content. They need it to train their models. Somehow, that’s the only way forward.

What if there were a safe, $20 a month alternative? There is!

What is FERPA compliance?

Although FERPA itself is dated, many vendors ethics-wash their policies to appear FERPA-compliant. Let’s revisit what FERPA actually is:

FERPA (Family Educational Rights and Privacy Act) is a federal law that protects the privacy of student education records. Being “FERPA-compliant” means an organization follows specific requirements to safeguard student information.

While no official “FERPA certification” exists, organizations can:

  • Obtain third-party security audits (SOC 2, ISO 27001)
  • Complete FERPA compliance assessments from specialized firms
  • Maintain signed agreements with schools acknowledging FERPA obligations
  • Document regular compliance reviews and updates

For Educational Technology Vendors:

  • Sign Data Privacy Agreements (DPAs) with schools
  • Provide transparency reports on data handling
  • Demonstrate “school official” exception qualification
  • Show data minimization practices

And that’s what one Gen AI vendor has done. There are others. When you can’t trust the frontier model providers, who CAN you trust to protect your data, your mental health, and more? Instead of buying a ChatGPT Plus account for my college-age niece, I opted for BoodleBox. I pay for it myself.

BoodleBox Speaks

On LinkedIn, Devin Chaloux asks:

what I’m asking for is detailing out how they’re achieving FERPA compliance. It’s not clear and typically tech companies for compliance have detailed breakdowns of how their tech protects this sensitive information.

BoodleBox Unlimited offers more data privacy for $20 a month (or $16 for EDU) than Claude Pro or ChatGPT Plus/Teams. In fact, you have to pay for the Enterprise or Education versions of those tools to get the same data privacy benefits that $20 buys through BoodleBox.

What’s more, Zach Kinzler from BoodleBox says, “Yes, 100%. The free version of BoodleBox is also GDPR, FERPA and HIPAA compliant!” Given that anything you submit to consumer Gen AI services is often used for training, putting data privacy at risk for you and your organization, it simply doesn’t make sense to use ANY solution that falls short of official Education and Enterprise versions, except BoodleBox Unlimited. Those enterprise tiers can cost a LOT of money and have to be an organizational decision.

Bottom line: if you are an educator in K-16 and want to safeguard your personally identifiable information (PII), medical information, or school records, BoodleBox Unlimited is THE only choice at this time at the $20 price point. That’s a staggering realization.

Zach Speaks

What’s more, BoodleBox’s Zach Kinzler responds to the questions that go to the heart of data privacy:

On FERPA specifically: you’re right that there isn’t a formal third-party audit process the way there is for SOC-2 or ISO, so platforms like Vanta don’t have a “FERPA button.” What matters is how a system is designed.

BoodleBox was built on principles of data minimization and layered protections: pseudonymizing prompts, obfuscating identifying information, and handling context/document management outside of the model providers (OpenAI, Anthropic, etc.).

That design keeps student data isolated, controlled, and FERPA-aligned — and we’ve published details in our Trust Center to make that transparent.
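To make the design concrete, pseudonymization of the kind Zach describes can be sketched in a few lines of Python. This is a hypothetical illustration, not BoodleBox’s actual implementation: the email pattern and the hard-coded name pattern stand in for a real PII-detection step (a production system would use named-entity recognition rather than regexes).

```python
import re
import uuid

# Hypothetical sketch of prompt pseudonymization: swap identifying strings
# for opaque tokens before a prompt leaves for the model provider, keep the
# token-to-value mapping locally, and restore the values in the reply.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
NAMES = re.compile(r"\bAlice Johnson\b")  # stand-in for a real NER step

def pseudonymize(prompt: str) -> tuple[str, dict[str, str]]:
    """Return the redacted prompt and the original-value -> token mapping."""
    mapping: dict[str, str] = {}

    def swap(match: re.Match) -> str:
        original = match.group(0)
        if original not in mapping:
            mapping[original] = f"<PII_{uuid.uuid4().hex[:8]}>"
        return mapping[original]

    redacted = EMAIL.sub(swap, prompt)
    redacted = NAMES.sub(swap, redacted)
    return redacted, mapping

def restore(reply: str, mapping: dict[str, str]) -> str:
    """Map any tokens the model echoed back to their original values."""
    for original, token in mapping.items():
        reply = reply.replace(token, original)
    return reply
```

Under this design, the model provider only ever sees opaque tokens; the mapping needed to restore the original values never leaves the local system.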

Safeguarding Mental Health

And what about my question on safeguarding mental health? I asked, “What mental health safeguards does BoodleBox have in place that would prevent the kind of issue Adam Raine ran into with ChatGPT?”

Zach’s response is:

From the start, BoodleBox has prioritized equitable access—never putting vulnerable users in a position to be preyed upon by AI.

Coach Mode by Default – Every new chat starts in Coach Mode. If unsafe behavior is detected, the system redirects toward healthier paths.

No Addictive Memory Loops – Unlike ChatGPT, Gemini, or Claude, BoodleBox avoids memory designs that foster unhealthy attachment. As we add memory, we test it to ensure it builds learning value without enabling harmful patterns.

Built for Collaboration – BoodleBox is designed for use with teachers, classmates, and multiple models—not as a replacement for human connection.

Unlike native model providers that profit from maximizing attention, we build every feature around one guiding question: how do we teach and practice responsible, effective AI use safely?

Don’t settle for the latest and greatest only to become the product. Get BoodleBox, which offers ChatGPT 5, Claude 4.1, Gemini 2.5 Pro, and a host of other models in one place, a lineup no single-vendor solution can beat.


Discover more from Another Think Coming
