
Prompting LLMs and ignoring the human API
We spend more effort learning how to prompt LLMs than we spend learning how to communicate with other humans. People complain that AI is bad at understanding humans, but the truth runs the other way: humans are rediscovering how bad we are at explaining ourselves. Prompting works because it forces you to do interface discovery consciously. You test phrasing, observe outputs, and refine your assumptions. With humans, we treat that kind of discovery as rude or inefficient, so we skip it. Then we're shocked when the call fails.
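The discovery loop described above, test a phrasing, observe the output, refine, can be sketched in a few lines. Everything here is hypothetical for illustration: `call_model` is a stub standing in for any LLM API, and the acceptance check is one you would write for your own task.

```python
# Sketch of conscious interface discovery: try phrasings until one works.

def call_model(prompt: str) -> str:
    # Hypothetical stub for an LLM API call. This toy "model" only honors
    # requests that state the desired format explicitly.
    if "as a bulleted list" in prompt:
        return "- point one\n- point two"
    return "Point one and point two."

def refine_until(prompt_variants, accept):
    """Try phrasings in order until one produces an acceptable output."""
    for prompt in prompt_variants:
        output = call_model(prompt)
        if accept(output):
            return prompt, output
    return None, None

variants = [
    "Summarize the report.",                     # vague: interface not yet discovered
    "Summarize the report as a bulleted list.",  # refined after observing the output
]
prompt, output = refine_until(variants, accept=lambda o: o.startswith("- "))
print(prompt)  # the phrasing that finally matched the interface
```

The point is not the code but the posture: each failed variant is data about the interface, not a verdict on the model, and the same posture applies when the listener is a person.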
