People Management Skills Are AI Skills

The professionals transforming their work with AI and the ones shrugging it off are doing the same thing: treating it like the best new hire straight out of undergrad. So why does one group multiply their efficiency and identify and fill organizational gaps in days, not months, while the other complains about junk writing and worthless output that missed the point?

The difference is that one group practices good people management while the other does what bad managers have always done: blame the employee for what are really the manager's failures as a leader.

The overlap exists because both AI and talented people require the same thing from the person in charge: clear context before execution, rigor after, and enough patience to build something repeatable.

The Human Equivalent

A brilliant 21-year-old, unmanaged, will produce work that is confident, capable, and wrong in ways that take longer to fix than the original task would have taken. The failure lies with whoever handed them work without a real brief, never confirmed understanding before they ran with it, and never reviewed the output with any rigor. We've all seen this play out. At some point, we've probably all been both the confident 21-year-old and the frustrated manager.

The people who report that AI produces junk are, in my observation, describing the same situation. The people who report that it changed how they work are describing what happens when a capable person is managed well.

What the Overlap Looks Like

Briefing

Good managers give a goal, constraints, context, and a desired format. They explain what matters about a task and why. They articulate what success looks like before work begins, and they create space for questions.

Bad managers walk by a new employee's desk, drop an article, a 10-K, or a complaint, say, "Look into this," and move along, only to be surprised when the employee misses the mark. Without context about what's important, who it's for, and why it matters, a capable person given a vague directive will make their best guess and execute it confidently.

The same thing happens with AI.

Confirmation

Good managers don't send someone off to do three days of work without confirming mutual understanding first. A sharp new hire who misheard the brief will complete the wrong task flawlessly, then hand it back looking proud of themselves. Two minutes of alignment at the front end is one of the cheapest investments in management.

Same rule, same reason.

Review

Good managers review a new employee's first draft. They look at the reasoning, check the accuracy, and ask whether the result fits the need. They don't skim for obvious errors and forward it up the chain because it looked plausible. Rubber-stamping capable work is how confident errors become your errors.

That was true long before AI came along.

Weakness

Good managers learn their people's consistent weaknesses early and structure work accordingly, not out of distrust, but out of competence. There are known patterns here too: overconfidence on specifics, a tendency to produce plausible-sounding information that doesn't hold up to scrutiny, hedging where clarity is needed. Knowing these patterns changes how you assign work and what you verify. This is not a critique of the tool any more than knowing a talented analyst struggles with executive communication is a critique of the analyst.

The Ceiling Test

Managers who say their team isn't capable have often, if they're being honest, never actually tested the ceiling. They assigned the same narrow work on repeat, got adequate results, and stopped there. The people who discover what AI can actually do for their work tend to be the same people who kept pushing: new problems, harder questions, more ambiguity. Capability reveals itself under pressure, not under repetition.

Complementary Teams

Good managers don't build teams by stacking identical capabilities. If you have a brilliant strategist, you find people who can execute. If execution is strong but direction is unclear, you find someone who can define and orchestrate strategy. Bad managers do the opposite: they hire skills that mirror their own, building a mono-skilled team constrained by a manager unable to move beyond a single point of strength. The skill is knowing what you have, being clear-eyed about what's missing, and hiring deliberately into the gap rather than defaulting to more of what already works.

The same principle applies here. AI is remarkably strong at synthesis, structure, first-draft generation, and pattern recognition across large amounts of information. It is not a substitute for domain judgment, institutional context, or accountability. The people getting real mileage out of it aren't using it to replace their own thinking. They're using it to fill the gaps around their thinking: the parts of the work that consume time without requiring the judgment only they can supply. They're using it to expand their team's capabilities and augment their team's strengths by offering support, critiques, and assessments without consequence. They've assessed what they have, identified what slows them down, and staffed accordingly.

AI has generalist capabilities, but the more specific the skill or domain, the more intentional the context you provide must be. It's the same shift a person faces: someone with decades of legal marketing experience who moves into consumer goods marketing needs time to acclimate to the new industry. The people who try to use AI as a generalist replacement for everything tend to find it mediocre at everything. Which is, again, exactly what happens when you hire without intention.

The Onboarding Investment

This is where the "it's faster if I just do it myself" conclusion does the most damage, because it is also the most understandable one. One time, for one task, doing it yourself probably is faster. You know what you want. You don't have to explain it. You don't have to review it.

But that logic only holds if you intend to do this task exactly once. If this is something you expect to do five times or fifty times, the time you spend building toward a repeatable result is not overhead. It's leverage. Every subsequent execution costs less. Every iteration gets closer to what you actually need. The managers who treat every interaction as a single transaction never build anything. The ones who treat interactions as training compound that investment every time the task recurs, and every time something similar comes up. That is how you build capability within a team, and it's what enables growth and scale.

The "faster to do it myself" conclusion is also, if you're being rigorous about it, a description of your management, not a verdict on the capability of the person in front of you.

What Happens When You Scale?

One new hire is a management challenge. Ten new hires is a systems challenge. The same is true here.

If you're the only person working this way, you can hold the context in your head: you know what matters, how you communicate, what good output looks like, what to watch for. But the moment you want consistent results across a team, or want to stop re-explaining the same things every time you start a new task, you need to do what good organizations do when they scale people: write it down.

High-functioning teams don't onboard every new hire through informal osmosis. They build onboarding materials. They document communication standards. They codify what the organization values, how it speaks, what it will and won't do, and what good work looks like in practice. They create those materials precisely because they don't want results to depend on who happens to be in the room.

The same discipline applies here. The organizations getting consistent, scalable results from AI are the ones that have documented what they'd otherwise explain repeatedly: the purpose and context of the work, the tone and format expected, the constraints that always apply, the things that matter most and why. Every new task starts from that shared foundation rather than from zero.

This is not a technical undertaking. It's a documentation undertaking, and it's one that good managers already know how to do. The question is whether you treat your AI workflows the same way you'd treat any other scaled people process, or whether you keep managing each interaction like a one-off.

Be Honest With Yourself

None of this is technical. It doesn't require you to care about AI or follow the discourse or have an opinion on which model is best. It requires you to manage well: set clear expectations, confirm understanding, review rigorously, know what you're working with, and invest in repeatability over the convenience of just doing it yourself.

If you've led people well, you already know how to do all of that. The only question is whether you've applied it here.

And if you consider yourself a strong people manager but have consistently struggled to get usable results from AI, that's worth sitting with. Either you haven't transferred the skills you have to this context, which is a simple correction, or it's an invitation to examine whether your people management practice is as strong as you believe it to be. Only you can answer that. But one of those two things is true.