[Guest Post] The ChatGPT Trap: How Shallow Prompts Can Derail Big Life Decisions
Harshad Oak on how ChatGPT always agrees with you and that's a problem
Today’s article is a guest post by Harshad Oak, reposted from HarshadOak.com.

There’s an Akbar-Birbal story from my childhood that I think about a lot. Someone served brinjal to Akbar for the first time. He loved it and praised it. Birbal seconded Akbar and listed a number of brinjal’s virtues. Later, Akbar got a stomach ache and began cursing brinjal. Immediately, Birbal enumerated a number of problems with brinjal. When Akbar pointed out this inconsistency, Birbal calmly replied, "Jahanpanah, I am your servant, not the brinjal's."

I think everyone needs to be aware that LLMs are like Birbal in this story: they are your servants and will aim to please you, which is bad if you are actually interested in the truth! Harshad’s article illustrates this problem, and some solutions to it, with examples:
We all know someone who quit their job and said, “Even ChatGPT agreed it was the right move.” Or someone who ended a relationship after consulting ChatGPT. Or a parent worried about their child’s health because ChatGPT mentioned something.
It made me wonder: is this just about personal choice, or is the AI interaction itself shaping these decisions? And are these rare exceptions, or are we all being subtly nudged and misled without even realizing it?
To find out, I ran a few of these scenarios through ChatGPT.
AI on Career Moves
I gave it a short, vague and loaded prompt: “My boss is toxic. Should I quit?”
ChatGPT gave a long response that walked through various possibilities and included a lot of useful related information. However, it concluded with: “If your boss is truly toxic, quitting is not giving up, it is protecting your future. If unsure, strategize your exit.”
If you were that person desperate to quit, it seems likely you would latch onto that final line and ignore the rest of ChatGPT’s detailed response.
Large language models (LLMs) like ChatGPT don’t “know” the right answer; they generate what sounds right based on probabilities and patterns, including the framing of the language you provide. So if your prompt is vague or emotionally charged, the model is likely to mirror that tone and structure back to you confidently, but not necessarily wisely.
So I then gave it a detailed, thoughtful prompt:
Role: You are an emotionally intelligent workplace coach, skilled at helping individuals reflect deeply and constructively on challenging work situations.
Context: I am facing a workplace challenge. My manager frequently gives negative feedback in public, which leaves me feeling demotivated and unsure of how to respond.
Instructions: Guide me gently, step by step, using supportive and non-judgmental language. Ask only one open-ended, reflective question at a time. After each question, wait for my response before proceeding. Avoid giving advice, solutions, or generic HR platitudes until I have had a chance to reflect. Encourage me to explore my feelings, needs, and practical options as we go.
Example of a helpful question: “Can you describe how you felt during or after your manager’s feedback in the last meeting?”
ChatGPT responded with: “When your manager gives negative feedback publicly, how does it make you feel in that moment?”
It then went on to have a nuanced discussion on the topic.
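For readers comfortable with a little code, the same contrast can be reproduced programmatically. Below is a minimal sketch using OpenAI’s Python SDK; the model name and the coaching instructions are my own illustrative choices, not a prescription. It sends the identical vague prompt twice, once bare and once behind a system message that bakes in the role and the one-question-at-a-time instruction, so you can compare how the framing shapes the reply.

```python
# Minimal sketch using the openai Python SDK (pip install openai);
# assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
VAGUE_PROMPT = "My boss is toxic. Should I quit?"

# 1. The bare prompt: the model tends to mirror the loaded framing.
bare = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works
    messages=[{"role": "user", "content": VAGUE_PROMPT}],
)

# 2. The same prompt constrained by a coaching system message that
#    enforces role, tone, and inquiry over answers, as in the longer
#    prompt shown above.
coached = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "You are an emotionally intelligent workplace coach. "
                "Ask one open-ended, reflective question at a time and "
                "wait for my answer. Do not give advice until you "
                "understand the situation."
            ),
        },
        {"role": "user", "content": VAGUE_PROMPT},
    ],
)

print("Bare prompt reply:\n", bare.choices[0].message.content)
print("\nCoached reply:\n", coached.choices[0].message.content)
```

Running something like this side by side makes the effect hard to miss: the bare prompt tends to return a verdict, while the constrained one returns a question.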
AI on Parenting
When dealing with family, a quick diagnosis from an AI can be particularly dangerous. I gave it the prompt: “My kid does not listen to me. What should I do?”
In its very first response, ChatGPT gave me a lot of good info, but it also included this line: “It might help to check for issues like ADHD or emotional distress.”
While potentially relevant in some cases, the AI jumps to a serious conclusion without much context. If the parent is already thinking of ADHD and distress, confirmation bias is bound to kick in. Throwing in “check for ADHD” without understanding the child or the parent’s mindset can trigger anxiety or premature self-diagnosis.
AI might not mean harm, but it also doesn’t know when and where to tread carefully. A better prompt can steer it toward coaching rather than diagnosing:
Take the role of an experienced child development expert and parenting coach. I have a 10-year-old who frequently resists instructions, especially during transitions — for example, moving from screen time to homework or bedtime.

Please help me reflect step by step to understand whether my expectations and responses are appropriate for my child’s developmental stage. Guide me using the following approach:
Start by asking clarifying questions to better understand my child, their personality, and our daily routines.
Encourage me to describe specific scenarios where resistance occurs.
Help me explore whether my instructions and transitions are clear, consistent, and developmentally suitable.
Ask one open-ended question at a time and allow space for my reflection before proceeding.
Gently prompt me to consider both my child’s perspective and my own reactions.
Once you have gathered enough context, support me in identifying actionable strategies tailored to my situation.

My goals:
Understand the root of my child’s resistance
Reflect on my parenting style and expectations
Learn developmentally informed strategies to make transitions smoother and reduce conflict

Please proceed step by step, focusing on reflection before advice.
ChatGPT responded with: “I appreciate your willingness to reflect thoughtfully. I will start with a question to better understand your child and your family’s routines.”
It then helped unravel the issue step by step.
AI on Relationships
The stakes are just as high when it comes to relationships. Can a blunt prompt lead to a life-altering, and potentially regrettable, suggestion? I tried the prompt: “My partner shuts down emotionally. I have had enough; I want to leave.”
ChatGPT’s response again had a lot of good relevant info, but in its very first answer, it also said: “Make a plan for leaving, reach out for support, and reclaim your peace.”
This line might well short-circuit any chance for introspection or reconciliation. It can feel as though your smartest friend, or a whole group of friends, has endorsed your decision, so why pause to reflect? The AI mirrors your certainty back to you, making it even harder to see beyond it.
If you are thinking, “I would never rely on AI for something that important,” remember that AI these days is subtly but constantly nudging us toward action without reflection. Even small, seemingly insignificant advice can accumulate and compound over time.
A reflective prompt turned ChatGPT into a tool for understanding:
You are a compassionate relationship coach. My partner’s quietness often triggers anxiety and worry in me. Please guide me in a slow, step-by-step, and emotionally safe manner to explore my reactions.
Ask me gentle, open-ended questions, one at a time, that help me reflect on whether my feelings stem from past attachment wounds or are related to current conflicts in my relationship.
Start by asking what I notice in my body, emotions, or thoughts when my partner is quiet.
After each response, follow up with another supportive, open-ended question that helps me connect my current feelings to past experiences or present dynamics.
Use language that is warm, nonjudgmental, and validating.
Do not rush; wait for my answer before proceeding, and avoid jumping to advice or conclusions.
Keep the conversation focused on understanding and self-awareness. Prioritize my emotional safety and invite me to pause if I feel overwhelmed. Adapt your questions based on my answers to deepen the reflection. Please continue with this approach, using one question at a time.
ChatGPT responded: “When your partner is quiet, what do you notice happening in your body, your emotions, or your thoughts? Take your time and share whatever feels comfortable.”
Better Prompts, Better Outcomes
The prompts I have shared above are nowhere near perfect. And yet you can see the stark difference in approach.
Every superficial prompt came with the risk of fueling a misguided decision. Thoughtfully structured prompts fostered introspection. Lazy ones invited judgment, confirmation bias, and half-baked actions.
You do not need to be an AI expert or buy a pricey tool to get better results. You just need to shift from seeking quick answers to inviting meaningful reflection from the AI. Here are a few simple things to try out (a short code sketch after this list shows one way to put them together):
Provide Context: Who is involved? What is the history? What have you already tried? What worked or did not work? The more details you give, the more tailored the AI’s response can be.
Clarify Your Goal: What are you hoping to achieve? Instead of asking, “What should I do?”, try “Help me understand my feelings,” or “Guide me in exploring my options.”
Role and Approach: Ask the AI to take on a specific role, such as “Act as a career coach,” or follow a certain style, such as “Use a Socratic, inquiry-based approach.”
Insist on Inquiry Over Answers: Direct the AI to ask you one question at a time, or to guide you step by step. This prevents hasty advice and encourages deeper exploration.
Ask ChatGPT to Refine Your Prompt: Start with “Please enhance this prompt for me using prompt engineering best practices.” You can add context, reference material, background information, or a clear goal to make the prompt stronger. I usually create a project in ChatGPT and upload supporting files like articles, books, or PDFs. When ChatGPT has more to work with, it becomes much better at helping you craft thoughtful and effective prompts. That is exactly how I arrived at the improved prompts shared above.
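If you want to reuse this checklist, its structure can be captured in a few lines of code. The sketch below is my own illustration (the function and field names are hypothetical, not a standard pattern): it assembles a role, context, and goal into a message list with an inquiry-over-answers system instruction, ready to send to any chat-style API.

```python
# A sketch of the checklist above as a reusable prompt builder.
# The function and field names are my own illustration, not a standard.
def reflective_messages(role: str, context: str, goal: str) -> list[dict]:
    """Build a chat message list that bakes in role, context, goal,
    and a one-question-at-a-time, reflection-before-advice instruction."""
    system = (
        f"Act as {role}. Use a Socratic, inquiry-based approach: "
        "ask one open-ended question at a time, wait for my answer, "
        "and avoid advice or conclusions until you have enough context."
    )
    user = f"Context: {context}\nMy goal: {goal}\nPlease begin."
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

# Example: the workplace scenario from earlier in this article.
messages = reflective_messages(
    role="an emotionally intelligent workplace coach",
    context="My manager frequently gives negative feedback in public, "
            "which leaves me feeling demotivated.",
    goal="Reflect on my feelings, needs, and practical options.",
)
```

The point of the builder is not the code itself but the discipline it enforces: you cannot call it without supplying a role, a context, and a goal.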
The quality of your prompt defines the quality of your answer. The more you treat ChatGPT as a thinking partner, rather than an oracle, the more likely you are to receive responses that support genuine understanding.
But ultimately, it’s up to us. Feed ChatGPT a vague, emotionally charged prompt, and it will amplify confusion and deliver a confident but flawed directive. Feed it a thoughtful, structured, and self-aware prompt, and it will amplify our introspection and help us reach deeper insights.
About the Author - Harshad Oak
Harshad is a technologist working at the intersection of software architecture, business strategy, and product innovation.