Building Real Things with AI Has Made Me Better at Self-Reflection
The mirror doesn't have an ego. That changes everything.
I'm building two products with Claude Code right now. Both are nearly ready for launch. Most sessions are productive. Some aren't.
When a session isn't going well, I've learned to pause and ask a question I used to avoid: What am I doing to cause this?
The answer is always something. My prompts are vague. I'm jumping between contexts too quickly. I'm not providing enough background. I'm solving the wrong problem.
The AI doesn't have personality flaws. No ego. No personal agenda. No bad day carrying over from yesterday. No resentment about the last project. It's a mirror, reflecting whatever I bring to the interaction.
When the mirror shows chaos, the chaos is coming from me.
The Uncomfortable Discovery
Early on, I blamed the tools. The AI doesn't understand my codebase. It's hallucinating. It's going in circles. It's not as smart as people claim.
Sometimes those things are true. But more often, the problem was upstream. I was the one being unclear. I was the one changing direction mid-thought. I was the one failing to provide context that would have made the task obvious.
There's a humbling moment when you ask Claude Code directly: "This session isn't going well. What could I do differently?"
The answer comes back simple and honest. Something like: "You've been switching between three different features without completing any of them. Try focusing on one thing until it works."
No defensiveness. No spin. Just an observation.
The first time that happened, I sat with it for a minute. Because the AI was right. And because the same pattern shows up when I work with people.
What the Mirror Reveals
McKinsey's research on AI and human skills confirms what anyone paying attention already senses: the skills that matter most in an AI-augmented world are social and emotional. Interpersonal conflict resolution. Contextual understanding. Nuanced judgment.
Here's what I've been discovering, and I don't know if it's universally true or just true for me: AI tools might help develop those skills. Not by replacing human interaction, but by giving you a consequence-free environment to notice your own patterns.
When I'm scattered with an AI tool, the output suffers. When I'm clear, focused, and specific, the output improves dramatically. The cause and effect is immediate and obvious.
With people, the same dynamics exist but they're obscured. If a meeting goes poorly, there are a dozen possible explanations. Maybe they were having a bad day. Maybe the politics were complicated. Maybe the ask was unreasonable. The feedback loop is muddy.
With AI, the feedback loop is clean. The only variable is you.
The Transfer
I've been practicing something. When a conversation with someone isn't going well, I now ask the same question I ask with AI: What am I doing to cause this?
Not "what are they doing wrong." Not "why don't they understand." Just: what's my contribution to this dysfunction?
The answers come easier now. I'm not listening. I'm interrupting. I'm making it about being right instead of solving the problem. I'm bringing frustration from something unrelated. I'm not providing enough context for what I'm asking.
These aren't comfortable realizations. But they're actionable. I can't control someone else's ego or agenda or bad day. I can control my own clarity, patience, and presence.
The results have been noticeable. Conversations that would have spiraled into frustration now resolve faster. People who seemed difficult to work with turn out to be responding reasonably to unclear requests. Relationships that felt stuck have started moving.
I'm not saying this is a magic fix. I'm saying the practice has helped. And the AI sessions are where I first noticed the pattern.
The Ego Problem
Most of us protect our egos in small, invisible ways. When something goes wrong, we reach first for explanations that preserve our self-image. The other person was being unreasonable. The situation was impossible. Anyone would have struggled.
Maybe. Sometimes those things are true.
But AI interactions have trained me to check the mirror first. Because with AI, the ego-protecting explanations don't hold up. The tool doesn't have bad days. It doesn't play politics. It doesn't misunderstand out of spite.
If the output is bad, something went wrong with the input. Full stop.
That habit of checking the input first transfers directly to human interactions. It's not about self-blame or taking responsibility for things you didn't cause. It's about looking at the one variable you actually control before spending energy on variables you don't.
Try This: The Mirror Check
Before you write this off as something only relevant to people building software, try this exercise. It works whether you use AI tools daily or barely at all.
Think of an interaction that didn't go well recently. Then answer these questions honestly: What am I doing to cause this? What context am I failing to provide? What assumptions am I making that I haven't stated? Am I listening, or waiting to be right?
The point isn't to blame yourself for everything. It's to develop the reflex of checking your contribution first. That reflex is useful everywhere.
The Practical Version
Here's what this looks like in practice:
Before a difficult conversation: Think about context the same way you'd think about giving instructions to someone who knows nothing about your situation. What does this person need to understand before my request makes sense? What assumptions am I making that I should state explicitly?
When a conversation goes sideways: Pause and ask internally: "If this interaction were failing because of something I'm doing, what would it be?" Usually something surfaces. Sometimes it's nothing, and the problem really is external. But ask first.
When you're frustrated: Treat the frustration as signal, not justification. It means something in the interaction isn't working. The question is whether you're contributing to the breakdown.
Research on AI coaching systems describes AI as creating "a non-judgmental environment that encourages self-disclosure." That's exactly right. There's no performance anxiety with AI. No worrying about how you're being perceived. Just you and the task.
That psychological safety makes it easier to notice your own patterns. And once you notice them in any context, you can start noticing them everywhere.
What I'm Not Saying
I'm not saying AI tools are replacing human connection. They're not. The depth of real relationship, the complexity of genuine collaboration, the meaning that comes from working through hard things with another person: none of that gets automated.
I'm also not saying I've figured this out. I'm still practicing. Some days I slip back into blaming external factors when something isn't working. The ego-protecting instinct doesn't disappear. (If you're still skeptical about working with AI at all, I wrote about why resistance is futile and what the real opportunity looks like.)
But I have a new reflex now. When sessions with AI aren't productive, I ask what I'm contributing. When conversations with people aren't productive, I ask the same thing.
The question doesn't always produce an answer. Sometimes the other person really is being unreasonable. Sometimes the situation really is impossible. Sometimes the AI really is misbehaving.
But asking the question first has changed how I show up. Less defensive. Less certain that the problem is always somewhere else. More willing to adjust my approach before concluding that everyone else needs to change.
Your First Step
This week, pick one interaction that feels stuck. Could be with a colleague, a family member, a client. Anyone where communication isn't flowing the way you want.
Before your next conversation with them, ask yourself: "If I were contributing to this problem, what would I be doing?"
Write down whatever surfaces. Don't filter it. Don't defend yourself against your own observations.
Then try adjusting that one thing in your next interaction. See what shifts.
You might find that the "difficult person" responds differently when you show up differently. Or you might find that the problem really was external. Either way, you'll have more information than you had before.
That's the practice. Small, specific, repeatable. Same thing I learned from AI sessions. Same thing that's made the human interactions better.
FAQ
Does this mean I should blame myself for everything that goes wrong?
No. This is about checking your contribution first, not assuming you're the sole cause. Some situations genuinely are difficult. Some people genuinely are being unreasonable. But developing the habit of examining your own input before analyzing everyone else's failures gives you agency. You can adjust what you control. You can't adjust what others bring.
How do I ask an AI tool for honest feedback on my approach?
Just ask directly: "This session isn't going as well as I hoped. What could I do differently to help you help me?" The answer is usually specific and useful. No judgment, no spin. You can then apply the same directness to trusted colleagues: "I feel like this conversation isn't working. What would help from my side?"
What if I don't use AI tools regularly?
The principle applies anywhere the feedback loop is clean. Any situation where your input directly shapes the output can become a mirror. Cooking from a recipe. Following a workout program. Writing that gets read by others. The point isn't that AI is special. The point is that fast-feedback environments are great places to notice patterns you bring everywhere.

John Vyhlidal
Founder & Principal Consultant
Air Force, PwC, Nike. 20+ years building systems that turn strategy into results. Now helping mid-market executives navigate complexity.