Companies automate their org charts and call it transformation. The failure mode is organizational, not technical: a capable enough model pointed at the wrong target. The company automated an artifact of its own political history and mistook that artifact for the work.
Conway’s Law says that systems reflect the communication structures of the organizations that build them. This is usually read as advice for software architects: watch your team boundaries, because they will appear in your APIs. But there is a more unsettling version of the same observation, and we do not need to wait for a transformation to fail to see it. The structure of the org already tells us what the system will inherit. Organizations themselves are systems, and their structures reflect the history of how work was negotiated, not the nature of the work itself.
Org charts are archaeological records. A role exists because someone won an argument about who should own something. The boundary between two teams exists because the boundary was contested and then settled. The company shipped its politics, and called it structure. Roles are defined by their interfaces to other roles, by what they are permitted to know, permitted to touch, permitted to decide. Their relationship to the underlying work is incidental. When we automate a role, we automate those permissions. The work was never really the thing we captured.
I keep seeing the same architecture wherever I look. A front-of-house function that owns the customer relationship, and a back-of-house function that owns the data and resolution. Support and operations. Intake and investigation. Triage and treatment. The split exists for reasons that feel sensible at the time the boundary is drawn. Front-line staff shouldn’t need deep access to account data to do their jobs, so they don’t have it. The handoff between front and back is where context degrades, where cases get misrouted, where the customer repeats themselves three times to three different people who each see a different slice of the same problem.
This is a known cost of the structure. Organizations absorb it through human intuition. The support agent who knows to ask the right questions to get routing right, the operations analyst who reads between the lines of a badly-formed ticket. The humans work around the information architecture. When we automate the front-of-house role, we inherit its information constraints along with its responsibilities. The system sees what the role saw. It makes decisions with what the role was permitted to know. And the gap that humans were working around becomes, suddenly, a gap that nothing is working around. The misrouting rate climbs. Cases stall. Customers have worse experiences than they did before the transformation.
The regulatory version of this constraint is just a sharp formalization of something softer that exists almost everywhere. The information boundary between front and back office isn’t always written into a compliance framework. Often it’s just a norm, a habit, a vestige of a hiring decision or a reorg that nobody fully remembers. But the effect is the same: a role that cannot see enough to do the job it is nominally responsible for, with improvisation filling the gaps at every seam. The seams are not automatable. We cannot automate the judgment that fills a structural gap, because automating it requires seeing both sides of the gap, which is what the role structure prevents. The capability was there. What failed was the imagination about what to point it at.
The familiar advice is to not pave the cowpath: don’t automate a broken process, fix it first. This is correct but insufficient. It tells us to improve what we automate without questioning what the unit of automation should be. A well-designed role is still a role. It is still defined by its permissions and interfaces. Automating it still captures the shell. The cowpath argument is about the quality of the process. This is an argument about the wrong level of abstraction entirely.
The right unit of automation is a task with complete ownership of a unit of data. What I mean by this is something close to a database transaction: complete ownership of read, transformation, and write, with nothing exported halfway through. A task is well-defined when it can be specified entirely in terms of its relationship to data: what it takes in, what it produces, what it is allowed to know. But most roles aren’t defined that way. The most important thing about them is who they aren’t allowed to talk to and what they aren’t allowed to see.
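To make the contrast concrete, here is a minimal sketch. Everything in it is invented for illustration (the case record, the `role_shaped` and `task_shaped` functions, the billing scenario); it is not drawn from any real system. The point is structural: a role-shaped automation can only see the slice its permissions allow, so its output is a handoff, while a task-shaped automation owns read, transformation, and write as one unit.

```python
# A toy case: a customer message plus the back-office account data
# that the front-of-house role is not permitted to see.
CASE = {
    "customer_msg": "charged twice for order 1234",
    "account": {"orders": {"1234": {"charges": 2}}},  # back-office only
}

def role_shaped(case):
    """Front-of-house role: may read the message, not the account.
    The best it can do is classify and hand off partial context."""
    visible = {"customer_msg": case["customer_msg"]}
    return {"routed_to": "billing", "context": visible}  # a handoff, not a resolution

def task_shaped(case):
    """Task with atomic data ownership: read, transform, and write
    in one unit, with nothing exported halfway through."""
    order = case["account"]["orders"]["1234"]
    if order["charges"] > 1:
        order["charges"] = 1  # write: remove the duplicate charge
        return {"resolved": True, "action": "refunded duplicate charge"}
    return {"resolved": False}

print(role_shaped(CASE))  # stalls at the seam; another function must pick it up
print(task_shaped(CASE))  # the problem moves from intake to resolution
```

The role-shaped function is not badly written; it is badly scoped. Its failure is inherited from its permissions, which is exactly the point about automating the shell of a role.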
Before automating anything, it is worth asking what the smallest complete unit of work looks like. Not the smallest role, but the smallest piece that can be owned end to end, from intake through resolution. That piece is almost always larger than a role and almost always crosses the boundary between front-of-house and back.

This reframing has a consequence that is more radical than it first appears. If the right unit of automation is a task with atomic data ownership, and if that task cannot currently exist as a human role because of how the org is structured, then AI transformation is less a question of building better tools than of whether the information architecture the org has been working around for years still needs to exist. What we are automating, in other words, is residue. The org chart is what is left after decisions about work have been made, negotiated, and forgotten. The roles hardened in place. The boundaries calcified. And now we are pointing capable systems at the calcified structure and asking them to do the work that the structure was only ever an approximation of. We have been compensating for the approximation with human skill for years, and now we are asking systems to operate in the same reduced space and wondering why the results are flat.
The opportunity is resolution. Or at least, that’s how I’ve been thinking about it. I’m not sure this framing is complete. There are real reasons information boundaries exist, and not all of them are vestigial. Compliance requirements, privacy obligations, genuine separation of concerns. But I think the default should be to question the boundary rather than to assume it, and most organizations I’ve worked with do the opposite. An automated system that owns the full context can do something that no human role was ever designed to do: hold the whole problem at once.
Most AI transformation efforts fail because the ambition is too small. The technology is rarely the problem. These efforts try to automate what exists rather than asking what should exist. They inherit the org chart’s assumptions about what a single function should know and do and see. And then they are surprised when the system produces the same failures that the human structure produced.
The org chart is not the work. It is the residue of decisions about the work, made under constraints that may no longer apply, by people who may no longer be at the company, for reasons that may no longer be legible. Automating it preserves the residue. What we actually want to automate is the thing the residue was always pointing at: the actual movement of a problem from existence to resolution, with everything that movement requires. That is a harder thing to build. It requires asking questions that make people uncomfortable, about why certain information lives where it does and who decided that and whether the decision still makes sense. I don’t know how many organizations are willing to ask them. Most, I suspect, will automate the role, absorb the failure, and conclude that the technology wasn’t ready. I’d like to be wrong about that.