Hi HN, I’m Tina.
We all know LLM hallucinations can be funny, but also frustrating and risky. Instead of ignoring them, I thought: what if we could collect them and make them useful?
That’s why I’m building CompareGPT.io. The platform does two things:
Multi-model comparison to reduce hallucinations in real workflows (ChatGPT, Gemini, Claude, Grok, and more); there's a rough sketch of the idea after this list.
A new campaign: users can submit hallucination cases → earn credits → unlock API usage → even convert credits into real cash rewards.
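To make the comparison part concrete, here's a minimal sketch of what multi-model consistency checking can look like. This is an illustration only, not our production code: the model clients are plain callables you'd swap for real API calls, and the agreement threshold is an arbitrary placeholder.

    # Sketch: ask several models the same question and flag the answer
    # as low-confidence when they disagree. No vendor SDK is assumed;
    # each "model" is just a callable that maps a prompt to a string.
    from collections import Counter
    from typing import Callable, Dict

    def normalize(answer: str) -> str:
        """Crude normalization so trivially different phrasings still match."""
        return " ".join(answer.lower().split())

    def cross_check(question: str,
                    models: Dict[str, Callable[[str], str]],
                    min_agreement: float = 0.75) -> dict:
        """Query every model and report whether a consensus answer exists."""
        answers = {name: ask(question) for name, ask in models.items()}
        counts = Counter(normalize(a) for a in answers.values())
        top_answer, top_votes = counts.most_common(1)[0]
        agreement = top_votes / len(answers)
        return {
            "answers": answers,
            "consensus": top_answer if agreement >= min_agreement else None,
            # Below the threshold we treat the result as a possible hallucination.
            "agreement": agreement,
        }

    if __name__ == "__main__":
        # Toy stand-ins for real model clients (ChatGPT, Gemini, Claude, Grok, ...).
        fake_models = {
            "model_a": lambda q: "Paris",
            "model_b": lambda q: "Paris",
            "model_c": lambda q: "Lyon",
        }
        print(cross_check("What is the capital of France?", fake_models))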
The goal is to make AI more trustworthy, while giving users a fun and rewarding way to participate.
We just opened the waitlist, with referral links so anyone can share it and earn credits faster.
I’d love feedback from this community:
Would you consider multi-model consistency + crowdsourced hallucination collection a useful guardrail?
Do you see a future for credit-based incentives in AI platforms?
Thanks!