A sailor leaves the bar. The ship is at the end of the pier. The pier is narrow. The water is on both sides. The sailor is drunk.
He knows where the ship is. He fixates on it. Drunks are good at fixation — give them one thing to lock onto and they will lurch toward it with surprising determination, stumbling and recovering, drifting left and catching themselves, drifting right and lurching back. The fixation is the only thing keeping them on the pier. Take it away and they wander into the water inside of three steps.
He takes a step. Then another. Each step is its own small decision, pulled by the fixation but not controlled by it. He drifts. He recovers. He drifts again. Sometimes he makes it. Sometimes the water gets him first.
This is what a large language model does, word by word. It has read an incomprehensible amount of human writing — books, arguments, conversations, technical manuals, everything — and from all that reading it developed a very good sense of what words tend to follow other words in any given situation. Not rules. Not understanding. Pattern. Your prompt sets the fixation. Each word lurches toward it. The walk is wobbly. The sailor is always drunk.
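The walk can be sketched in a few lines of code. At each step, sample the next word from a probability table shaped by what came before. Everything below is invented for illustration: a real model conditions on the entire context using billions of parameters, not a tiny lookup keyed on the last word. But the mechanism is the same lurch.

```python
import random

# Toy next-word probabilities, invented for illustration.
# A real model computes these from everything in the context,
# not just the previous word.
next_word = {
    "the": {"capital": 0.5, "sailor": 0.3, "ship": 0.2},
    "capital": {"of": 1.0},
    "of": {"France": 0.6, "Australia": 0.4},
    "France": {"is": 1.0},
    "Australia": {"is": 1.0},
    # The same table serves both countries, so this toy model can
    # confidently produce a wrong answer by the same process that
    # produces a right one.
    "is": {"Paris": 0.7, "Sydney": 0.3},
}

def step(word, rng):
    """One lurch: sample the next word from the learned pattern."""
    options = next_word.get(word)
    if options is None:
        return None  # end of the pier
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def walk(prompt, rng):
    """Continue the story word by word until the pattern runs out."""
    words = prompt.split()
    while (nxt := step(words[-1], rng)) is not None:
        words.append(nxt)
    return " ".join(words)

rng = random.Random()
print(walk("the capital", rng))  # e.g. "the capital of France is Paris"
```

Run it a few times and you will see different walks from the same prompt, each one fluent, each one generated with identical confidence, not all of them true.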
This should be a relief, not a threat. Once you know the sailor is drunk, you stop blaming the sailor. You learn to point him toward the ship.
The most common frustration with AI is the feeling that it almost works. You give it careful instructions and it does something reasonable but wrong. You correct it and it agrees completely before doing the same wrong thing again. You give it a style guide to ignore. You tell it not to do the thing; it does the thing.
Every one of these failures has the same cause: the sailor can only walk toward ships. He cannot walk away from things.
"Write like Lord Byron" works. The ship glows with a specific color — Byron's cadence, his vocabulary, his rhythmic compression. Every word steps toward that attractor because the walk has encountered thousands of examples of what Lord Byron wrote. It is continuing a story it already knows.
"Don't use em-dashes" fails. There is no away-from weight. The sailor does not see the absence of em-dashes. He walks toward the destination, and if the path passes through em-dashes, he uses em-dashes. You have described a prohibition, and the walk does not navigate by prohibition.
The fix is to show rather than describe. "Write with short declarative sentences and active verbs" is better than "don't be verbose" — it points toward something rather than away from something — but the sailor has encountered too many definitions of "short" and "declarative" to know exactly which ship you mean. Write the first sentence yourself. Write two if you want to be sure. Hand him a story already in motion. He will continue it in the voice you established because that is what the story is doing. The example leans toward the ship. The instruction is just a compass heading that may or may not help the sailor along.
The second failure comes from trying to give the sailor a route rather than a destination.
"Write the introduction, then develop the argument, then summarize the key points" sounds sensible. It is a pier with two bends. The sailor builds momentum toward the introduction, then must reorient toward the argument, then reorient again toward the summary. He stumbles at each turn. Every pivot is a new pier, and the sailor starts drunk at each one.
The fix is to walk one pier at a time: one prompt for the introduction, one for the argument, one for the summary. Short piers are where the sailor survives. The fewer steps between the bar and the ship, the less time there is to fall in the water.
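The arithmetic behind short piers is plain compounding: if each step carries some independent chance of a fall, survival decays exponentially with pier length. The 2% per-step figure below is invented purely for illustration, not measured from any model.

```python
# Chance of surviving an n-step pier when each step independently
# carries a small chance of falling in. The 2% figure is invented.
p_fall = 0.02

for steps in (5, 20, 80):
    survive = (1 - p_fall) ** steps
    print(f"{steps:3d} steps: {survive:.0%} chance of reaching the ship")

# Prints:
#   5 steps: 90% chance of reaching the ship
#  20 steps: 67% chance of reaching the ship
#  80 steps: 20% chance of reaching the ship
```

A small per-step risk you would barely notice on a short pier becomes the dominant outcome on a long one. That is the whole case for one prompt per pier.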
Every "also" in a prompt is a second ship. Every "while you're doing that" is a new thing to remember. Every "and make sure to" is a distraction. The drunk with conflicting goals is not torn — he is just lost. He staggers into the middle of the pier, confused, and the water takes him from the side he wasn't watching. Identify the ship. Walk the pier. Review. Then start the next walk.
There is a more subtle version of this problem, and it is the one that trips up people who have been using AI long enough to think they understand it.
They invest in detailed guidance: a comprehensive system prompt, a style guide, a tone document, a list of things to avoid. Pages of it. Carefully written. Thoughtfully organized.
The guidance gets read. The sailor encounters it partway down the pier. He processes it as a thing that happened in the story — one paragraph among many — and keeps walking in the direction the accumulated context has already established. The story was going somewhere before the guidance appeared. Stories have momentum.
This is why examples outperform guidelines, and why that matters more than almost anything else in practical AI use. A style guide says "be concise." An example of concise writing makes the sailor's feet feel what concise means. If you want the AI to write in a particular voice, start the response yourself and let it continue. Write the first two sentences. The sailor picks up the story and keeps walking in that direction because that is what the story is doing. He is an improv partner, yes-anding whatever you hand him. Give him something worth continuing.
Fewer guidelines, more examples. This feels backward. We are accustomed to communicating intent through description. But the sailor cannot read intent. He can only feel the pull of the ship.
There is one more thing worth knowing, and it is the strangest part.
You cannot tell the difference between a hallucination and a correct answer during the walk itself. The sailor doesn't know where he'll wind up. The walk that produced "the capital of France is Paris" and the walk that produced "the capital of Australia is Sydney" felt identical from the inside. Same process, same confidence, same fluency, same goal. We call one of them a hallucination only after we see where the sailor wound up. The label is ours, applied after the fact, to a walk that was simply doing what it always does and happened to end in the water.
The AI does not know what is true. It guesses at what comes next in the story. Most of the time, what comes next is also what is true, because the story was built from reality. But sometimes it is not, and the walk generates the next word with identical confidence either way. Until you check the output against reality, it looks exactly like everything else.
The appropriate response is not distrust. It is calibration. For tasks where errors are costly and verifiable, verify. For tasks where the worst outcome is a mediocre draft, let the sailor walk and edit what he brings back. The AI is a multiplier. If you bring careful judgment, it amplifies careful judgment. If you bring sloppiness, it amplifies that instead.
Here is the practical test for any prompt you write. One ship: the thing you actually want. Ask whether the ship is vivid enough to walk toward. Ask whether any of your instructions describe an absence rather than a destination. Ask whether you have given examples of what you want rather than descriptions of what you don't.
If the output drifts consistently in one direction, the fixation needs adjusting. If the output is randomly wrong, the pier is too long. If the output ignores your instructions, the instructions were probably waypoints rather than a destination.
The sailor will always be drunk. The steps will always be wobbly. The question is never how to make him sober. The question is whether the ship is glowing.
Interactive: The Drunkard's Walk
Adjust the direction weights. Run a batch of 1,000 trials. Watch the statistics.
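If you cannot run the interactive, the same experiment fits in a short script. This sketch assumes a pier of fixed length and width, a random lateral stagger, and a tunable pull back toward the centerline standing in for the fixation; every number is an invented knob, not anything measured.

```python
import random

def trial(rng, length=20, half_width=2, pull=0.6):
    """One drunken walk down the pier.

    Each loop iteration is one step forward. Lateral position
    staggers left or right, except that with probability `pull`
    the sailor corrects back toward the centerline (the fixation).
    Returns True if he reaches the ship, False if he hits water.
    """
    x = 0  # lateral position; beyond +/-half_width is water
    for _ in range(length):
        if x != 0 and rng.random() < pull:
            x += -1 if x > 0 else 1   # lurch back toward the center
        else:
            x += rng.choice((-1, 1))  # random stagger
        if abs(x) > half_width:
            return False              # in the water
    return True

def batch(trials=1000, seed=0, **kw):
    """Run a batch of walks and return the fraction that arrive."""
    rng = random.Random(seed)
    made_it = sum(trial(rng, **kw) for _ in range(trials))
    return made_it / trials

# Stronger fixation (higher pull) should mean more arrivals.
for pull in (0.2, 0.5, 0.8):
    print(f"pull={pull}: {batch(pull=pull):.1%} reach the ship")
```

Turn the `pull` knob down and watch the arrival rate collapse; stretch `length` and watch it erode. The statistics make the essay's two claims concrete: fixation keeps the sailor on the pier, and shorter piers drown fewer sailors.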