We spend more effort learning how to prompt LLMs than we do learning how to communicate with other humans.
People complain that AI is bad at understanding humans, but in truth, humans are rediscovering how bad we are at explaining ourselves.
Prompting works because you’re forced to do interface discovery consciously. You test phrasing, observe outputs, refine assumptions. With humans, we think discovery is rude or inefficient, so we skip it. Then we’re shocked when the call fails.
That said, communicating with humans is emotionally expensive. You hedge, soften, read the room, manage egos, dodge landmines. Prompting an LLM is socially consequence-free. No feelings get hurt. No Slack thread explodes. You can iterate brutally and instantly. Of course people gravitate toward the system that rewards clarity instead of punishing it.
But shouldn’t we learn from this? How do we learn to communicate more effectively, and respect each other’s undocumented APIs? And what will it mean for the future of this world if we can’t?
Food for thought.
This post was originally published on LinkedIn.
