
Ten Steps Behind, Ten Steps Ahead

On empathy, humility, and AI status signalling.

“In programming, if someone tells you you're overcomplicating it, they're either 10 steps behind you or 10 steps ahead of you.”
@acdlite, June 2018.

I first saw this tweet in 2018 and it has stayed with me. What makes it so good is that it invites empathy. Not “who is right” but “what are they seeing that I am not?” And the other way around - “what am I seeing that they might not be?”

It happens everywhere. Between engineers at different experience levels. Between product and engineering. Between someone who has been deep in a problem for months and someone seeing it for the first time. Different backgrounds, different contexts, different depths of understanding, different blind spots. AI has widened the range - more people are capable across more domains than ever before, and the lines between disciplines are blurring fast. But it has also made it harder to tell who has depth and who has surface familiarity.

Tom DeMarco and Timothy Lister wrote in Peopleware (1987) that the major problems of our work are not so much technological as sociological in nature. People problems. That was true then. It is true now. The confidence and competence gaps. The HiPPO in the room shaping the decision. The quiet engineer who sees the flaw but does not have the political capital to say it. The confident voice is rarely the most informed. More capability has not changed this. It has added a new layer. Open LinkedIn on any given morning and you can see it.

AI Status Signalling: the act of publicly aligning oneself with AI to project competence, relevance, or forward-thinking - often prioritising visibility over substance, and signalling participation in the trend regardless of depth of understanding or meaningful impact.

Or, in more colloquial terms across Britain and Ireland: to waffle about AI.

The tell is volume. The people doing the most interesting work with AI tend to be the quietest about it. The loudest advocates are rarely the ones who have to make it actually work - they dismiss the details, skip the nuance, and move on with more confidence than the situation earned. Someone else inherits the complexity, the failure modes, and the Friday night debugging session. The hardest part is that you cannot tell when you are the one doing it.

Michael Polanyi put it simply: “We can know more than we can tell.” The best teams operate implicitly - shared context, shorthand, decisions that do not need explaining because everyone already understands the reasoning. That is what makes them fast. A new CTO who wants to rewrite everything with microservices before understanding why it was built the way it was. A product leader who assumes the team is overcomplicating things. A CEO who cannot understand why it takes two weeks to put a button in the app - and maybe they are right. None of them is wrong to ask the question. But the team cannot hand over what they know - it was never written down, never made explicit. And the team itself may be protecting assumptions that stopped being true long ago. Neither side has the full picture.

Context shifts. People move ahead or fall behind. And because the knowledge is implicit, nobody can easily tell where the gaps are. Fred Brooks called this the distinction between accidental and essential complexity. AI is stripping away the explicit kind - the ceremony, the boilerplate, the friction you can see and name. The implicit kind remains. And it is easier than ever to mistake the removal of one for the resolution of the other.

Marty Cagan argues that the best products come from the tension between product, design, and engineering - distinct disciplines, each bringing perspective the others lack. That separation exists because “should we” is a harder question than “can we.” When everyone can do a bit of everything, the counterintuitive conclusion is that specialists matter more, not less. But expertise can also become its own blind spot. The question is whether new capability comes with the awareness of what you still do not know.

When everything can be built - and built by anyone - the challenge is deciding what to build and how deep to go. Des Traynor at Intercom put it well: “Good product owners let in very few dud features. Great ones kill them on sight.” The discipline is in what you choose not to build. And that choice depends on seeing the full picture - which nobody does alone.

What has changed is the speed - and the confidence. More capability, but also sometimes more certainty than the situation warrants.

Am I ten steps behind, or ten steps ahead? What if we are both a little bit wrong and a little bit right?