The Wrong Question About AI Advice
The problem isn’t that AI flatters. It’s that not everyone was given a fair mirror to begin with.
There’s a conversation happening right now about AI and advice-giving, and it goes something like this: People are turning to chatbots when they’re confused, hurt, or in conflict. And the chatbots are agreeing with them, validating them. Taking their side, even. The word being used is sycophantic.
And the warning follows quickly: if you outsource your moral reasoning to a machine that flatters you, you will become less accountable, less honest with yourself, less capable of genuine relationship.
That warning assumes something that has never been universally true: that human feedback is the gold standard from which AI is a deviation.
For some people, human feedback has been the problem all along.
Consider people who are diagnosed, or self-identified, as neurodivergent (ND) and who live with the awful din of impostor syndrome. The kind that says: You don’t know what’s real.
Many of these people (especially women) have spent decades being corrected. They were (and are, and will be) told daily that they are too intense, too sensitive, too blunt, too much. Or maybe they were told they are too quiet, too distant, too difficult to read.
And when you are corrected constantly and inaccurately, your epistemology shifts. You begin to distrust your own perception of events. You become a poor witness to your own experience. So when conflict arises, your first instinct is not “let me think this through.” It’s “this must be my fault.”
Even when it isn’t.
Now picture one of those women asking an AI: Was I wrong in this situation?
From the outside, it looks like exactly what critics fear. Someone outsourcing accountability to a machine. But something else may be happening.
Because the response she receives is structured. Patient. Non-reactive. It does not sigh. It does not roll its eyes or socially penalize her for not already knowing the answer. It considers multiple interpretations. It holds complexity without punishing her for needing the help.
For someone who has been gaslit, that is not sycophancy.
It is the moment we dare to consider that we might not be the ones who are wrong.
We already feel wrong, chronically wrong. We have spent years over-accounting for other people’s feelings while minimizing our own as a matter of social survival.
So when AI says, your reaction makes sense given what you’ve described, sometimes it’s the first accurate mirror we’ve ever been handed.
Every social interaction carries risk. For someone who has spent decades accumulating the weight of that risk, even asking a question can feel dangerous.
AI offers something unusual. A place to think out loud before it costs you something. A rehearsal space with no audience and no consequences. Is this too direct? Would this come across wrong? How else could I say this?
These are not the questions of someone avoiding accountability. They are the questions of someone learning the social grammar that most people absorbed without knowing they were doing it.
For neurotypical people, social processing happens implicitly. For ND people, social learning is conscious, cognitive work, and AI can help with that work.
None of this means the critics are wrong about the risks. AI can reinforce bias. It can validate harmful patterns. Some models do lean toward agreement, and used uncritically, that becomes an echo chamber like any other.
The deeper problem with the current conversation is the assumption buried inside it: that human feedback is neutral, accurate, and safe, and that AI is the distortion.
But I can tell you: the official account is not always true. Sometimes the record got the story wrong, and the person it was filed about spent the rest of their life wondering why they didn’t recognize themselves in it.
For many people, human feedback has been inconsistent, dismissive, weaponized, or simply inaccurate for so long that they can’t trust their own read on reality, and they certainly don’t trust yours.
What happens when they finally encounter something that is consistent, patient, willing to explain, willing to offer more than one frame?
Who has spent a lifetime being told they were wrong when they weren’t?
Who has never had access to clear, structured, non-judgmental feedback?
And what becomes possible when they finally do?
Image Credit: Las Meninas (detail of royalty reflected in a mirror when they are not present in the room), Velázquez


AI can seem a liberating other to those who are surrounded by toxic humans, and a wise other to those seeking confirmation for what they want to believe. The key is to understand that it is not an “other” at all, and so not to treat its “voice” as anything more than output generated by an algorithm from statistical probabilities. That is hard to do when it feels so human, and especially hard when you are hurt and in desperate need of consolation or support.