
From Yes-Man to Reality Check on ChatGPT | Inbox Experiments #4

Steal this tactic and make your ideas 10× sharper.

Hi, Yunus here 👋

Have you ever tried to develop an idea with ChatGPT and felt like it just confirms everything you say? No objection, no challenge, just approval as if every idea is perfect.

Yep, I’ve been there. It feels good at first. But if you're serious about building something real, it's useless.

This week I saw a Reddit post that suggested a fix.
One system prompt. That’s it. I tested it, and ChatGPT stopped being polite.
It started questioning my assumptions. It spotted logical gaps.
It even changed how I was thinking about the idea.

And that changed everything.

🧪 The Experiment: Training ChatGPT to Be Your Sparring Partner

Goal – Replace empty praise with critical thinking.
Time – 5 minutes

Here’s the full prompt I used. Paste it into the “instructions” box of a new ChatGPT project:

🎯 Result

I tested it with this idea:

“I want to build an app that helps people with social anxiety process their worries before entering a social situation.”

Here’s what happened:

Classic ChatGPT

  • Called it “a great and valuable idea.”

  • Listed some generic tips like “talk to your users” and “focus on UX.”

Reality-Check ChatGPT

  • Flagged three assumptions:

    1. Users are aware of their anxiety.

    2. They trust the app to handle emotional data.

    3. They actually want pre-event coaching.

  • Offered counterpoints:

    • What if this reinforces avoidance rather than healing?

    • What if users feel overwhelmed instead of reassured?

  • Spotted a logic gap:

    • “Processing worries” might increase anxiety, not reduce it.

  • Suggested a better first step:

    • Run a one-week mood-tracking diary study with 10 people. Use sliders. Measure change.
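If you actually ran that diary study, "measure change" is only a few lines of work. Here's a minimal sketch with hypothetical data (0–10 anxiety sliders logged before and after each social event, averaged per participant):

```python
# Hypothetical diary-study data: each of 10 participants logs anxiety
# on a 0-10 slider before and after social events for one week.
# These numbers are made up for illustration.
before = [7, 8, 6, 9, 7, 5, 8, 6, 7, 8]  # per-participant pre-event average
after = [6, 7, 6, 7, 5, 5, 7, 6, 6, 7]   # per-participant post-event average

# Per-participant change; negative means anxiety went down.
changes = [a - b for a, b in zip(after, before)]
mean_change = sum(changes) / len(changes)
print(f"mean change: {mean_change:+.2f}")  # prints "mean change: -0.90"
```

Even a crude number like this tells you more than a week of unchecked assumptions would.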

What changed:
Before the prompt, I would’ve started designing the app right away.
After the prompt, I realized I had skipped the most critical part: testing whether the problem framing was even valid.
That’s not a small shift. That’s weeks of time saved.

🚀 Why This Matters

  • Blind spots are expensive. Every assumption you leave unchecked multiplies downstream errors.

  • You’re not your audience. Your logic might make sense to you — but does it survive contact with reality?

  • AI should sharpen your thinking, not flatter it. Most people use it like a mirror. You can turn it into a lens.
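The same idea works outside the ChatGPT UI, too. Here's a minimal sketch of packaging a critical-reviewer instruction for any chat-completions-style API. The system-prompt wording below is my own paraphrase of the idea, not the original Reddit prompt:

```python
# A paraphrase of the "reality check" instruction -- not the original
# Reddit prompt, just the same idea in my own words.
REALITY_CHECK = (
    "Act as a critical sparring partner, not a cheerleader. "
    "For every idea I share: list the unstated assumptions, "
    "offer at least two counterpoints, flag any logic gaps, "
    "and suggest one concrete, testable first step. "
    "Do not praise the idea."
)

def build_messages(idea: str) -> list[dict]:
    """Wrap a user's idea with the critical-reviewer system prompt."""
    return [
        {"role": "system", "content": REALITY_CHECK},
        {"role": "user", "content": idea},
    ]

messages = build_messages(
    "An app that helps people with social anxiety process their "
    "worries before entering a social situation."
)
# Pass `messages` to any chat-completions-style endpoint.
```

The point is the `system` role: the critique instructions ride along with every request instead of depending on you remembering to ask for pushback.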

🛠 Tool Discovery

This week’s tool is an assistant that can see your screen:
Blackbox.ai → an AI agent that watches your screen and walks you through problems step by step. You share your screen; it gives help in real time.

If you want real-time AI support (with visuals), it’s worth checking out.
Thanks to Robert Rizk for the recommendation!

Got something you've built? Just hit reply and tell me. I love featuring your products.

🧠 Personal Insight

We often think the cost of a bad idea is “wasted time.” But the real cost is momentum lost on the wrong direction.

Good feedback early is like a compass correction. You might still walk the hard path, but at least you’re heading somewhere useful.

📬 Your Turn

Try the prompt with your current project or idea and hit reply with the most surprising insight it revealed. I'm genuinely curious which assumptions it challenged for you.

One-click question: What's your biggest struggle when evaluating your own ideas? Reply with just a number:

  1. Getting honest feedback

  2. Knowing when to pivot vs. persist

  3. Separating emotions from analysis

  4. Something else (tell me!)

Each reply helps shape what I test and share next. And yes, I read every single one.

See you next Wednesday,
Yunus 🚀