Built for people who work.
If your job involves reading, writing, or evaluating other people's output—this is for you.
How this started
OAF started with a management problem: people on my team were submitting work that was clearly AI-generated. Not because it was bad—sometimes it was fine—but because it was obvious they'd skipped the thinking part. Copy, paste, submit.
The conversations that followed were awkward. Not because using AI is wrong, but because we had no shared language for it. No framework for "I used AI to brainstorm, but wrote this myself" versus "I asked ChatGPT and hit send."
I use AI constantly—to learn new concepts, to build things faster, to explore ideas I wouldn't have time for otherwise. I actually prefer when people use it to learn and improve. What I don't want is people outsourcing the thinking entirely.
"AI is like a calculator or a car: it has plenty of downsides, but it gets us there faster. The question is whether 'there' is worth going to."
What we believe
We need to keep what makes us human: asking better questions (that's prompting), thinking critically about what we read (that's evaluating AI output), and being honest about when machines are helping us—and crucially, how they're helping.
Honesty Over Detection
An "82% AI" score from a detector is a guess, not a verdict. We believe in voluntary disclosure—telling people how you worked, not waiting for an algorithm to accuse you.
Privacy by Default
Your work stays on your device. Our tools run locally in your browser. We don't train on your data, store your documents, or sell your information. We use privacy-focused analytics (no cookies).
Human + Machine
We're not anti-AI. We're pro-thinking. Use AI to learn faster and build more—just don't outsource the thinking itself. Augmentation, not replacement.
Transparency as Standard
Disclosing AI assistance shouldn't be shameful. It should be normal—like citing a source or crediting a collaborator. We make that easy.
The framework
Simple, standardized, professional: a shared language for disclosing how AI helped with your work.
A note on AI models
OAF is not affiliated with, endorsed by, or licensed by OpenAI, Anthropic, Google, Meta, or any other AI provider. Our benchmark comparisons, detection tools, and commentary are provided for educational purposes under fair use. We test publicly available models to help professionals understand the landscape—not to make claims about any specific model's capabilities or limitations. For complete details, see our Terms of Service.
Ready to be honest about AI?
Start with a label. Prove it with a receipt. Keep thinking for yourself.