A few weeks ago Schaun wrote about the costs to an organization of prioritizing tactical over strategic thinking. That characterization seems right as far as it goes. However, it’s also worth considering the subject as a problem with the distribution of ‘thinking’ across the organization. It’s too easy to come away from those kinds of examples with the lesson that the manager or leader failed to see the big picture or wasn’t thinking strategically enough. The problem may instead be that the manager or leader was doing ‘the thinking’ in the first place.
Organizations have been modeled on biological organisms (mostly the human body) for about as long as people have thought about organizations. Take the frontispiece of Hobbes’ Leviathan: it shows the top half of a giant man (the state) ruling over a landscape.
The model or analogy pops up frequently in the English language too: a president is called the ‘head of state’, and the entry-level military service-member the ‘foot-soldier’. It seems fairly intuitive that in the body the head is the only part doing the thinking, so it makes sense to tell the ‘foot-soldier’: “don’t think, obey”. When the industrial age introduced machines capable of rivaling human capabilities, social thinkers started to model society on machines and governing as mechanics. Again it seems intuitive that in a machine it’s not the claw or arm or drill that’s doing the processing, it’s the processor or computer.
Without overstating their influence, models do affect the expectations and predictions we form about whatever is being modeled, whether a subject, a population, or a dataset. The organism and machine models seem to underlie thinking about organizations, sometimes implicitly, often enough that it’s worth updating both, because neither really works the way it has conventionally been understood.
I started thinking about this as I began attending a lot of talks on robotics, artificial intelligence, and self-assembly here in the Cambridge area. There’s also plenty of interesting writing on this area of work. A recent New Scientist article (“Squishybots”) provides a good discussion of the rethink under way in artificial intelligence and robotics.
“Until recently, the conventional view of intelligence has been based on the way we tend to think about ourselves – a central, powerful brain commanding and controlling how we breathe, walk, talk, and so on. The same approach to robotics initially looked promising. Industrial robots have transformed the way we make everything from cars to computer chips. And in virtually every one of them, each movement of an arm and every twist of a joint is controlled by a central processor.”
This conventional view still has a lot of intuitive appeal. It seems to make sense that the difficulty of developing truly autonomous, flexibly adaptable robots (or artificial intelligence) stems from the difficulty of creating a computer as capable as ‘real’ intelligence, or the human brain. This way of thinking is described as the “conventional approach” to robotics, and it produces robots with correspondingly conventional limitations:
“The conventional approach works well enough inside settings like factories, a predictable environment where specific actions have to be repeated over and over again. But ask such a robot to navigate a maze or tie a pair of laces and it will blow a fuse. As soon as a robot faces a situation it doesn’t recognize, it is paralyzed by inadequacy. It is simply impossible to program for every eventuality.”
To get past these traditional limitations, roboticists had to rethink the way they approached robotics. In the new approach (sometimes called morphological computing), better robots don’t come mainly from big advances in brainpower; they come from attention to how the robot’s parts interact with the world it operates in. To do the complex things we’d like it to do, a robot doesn’t just need a brain that can compute, it needs an entire body that can compute. When an arm or tentacle reaches out to grab something, the robot’s brain doesn’t tell it where, when, or how tightly to grab; the grabbing emerges from the arm’s interaction with the object, as it grabs.
It’s not just about additional computation performed by the body either. The conventional approach to robotics placed too much responsibility in the central processor. When the central processor attempts to control behavior it gets easily confused and becomes obstructive. The assumption that the central processor would perform the complex computations needed to navigate the world was to some extent an early design limitation influenced by a misunderstanding of how its model (the human brain/body) operated. The New Scientist piece gives a great example:
“To understand how building robots can be done differently, just think about the way you walk… During this process, the brain doesn’t monitor and control the trajectory of each ankle, knee and hip joint. Instead it simply changes the stiffness of the leg muscles. The muscles have low stiffness when the leg swings forward and high stiffness on impact with the ground. Other than that, the brain lets the local dynamics take over. The knee joint simply swings passively, but the design of the knee, the materials from which it is made and the laws of physics all combine to do the rest. In a sense, the morphology of the body – its shape and substance – perform a kind of computation to control what is going on.”
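That stiffness idea can be made concrete with a toy simulation. This is a minimal sketch, not how any real robot or leg is modeled: the knee is treated as a damped pendulum with a tunable spring at the joint, and all of the constants and the `max_swing` function are invented for illustration. The point is that the ‘controller’ sets only one number, the stiffness, while the trajectory itself falls out of the physics.

```python
import math

def max_swing(stiffness, push=3.0, steps=2000, dt=0.001):
    """Knee joint as a damped pendulum with a tunable spring at the joint.
    The 'controller' sets only the stiffness; the trajectory emerges from
    gravity, damping, and the joint's passive dynamics."""
    g_over_l, damping = 19.6, 0.5       # toy constants: gravity / leg length, friction
    theta, omega = 0.0, push            # start hanging straight down, given an initial push
    peak = 0.0
    for _ in range(steps):
        # No trajectory control: angular acceleration comes entirely from physics
        alpha = -g_over_l * math.sin(theta) - stiffness * theta - damping * omega
        omega += alpha * dt
        theta += omega * dt
        peak = max(peak, abs(theta))
    return peak

free_swing = max_swing(stiffness=1.0)    # low stiffness: the swing phase, leg moves freely
stiff_leg = max_swing(stiffness=500.0)   # high stiffness: ground contact, joint resists deflection
print(free_swing > stiff_leg)
```

Given the same push, the low-stiffness joint swings through a much larger arc than the high-stiffness one, without anything ever computing a target trajectory for it.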
The last two decades of neuroscience and cognitive psychology have significantly changed the way we understand how control of human behavior is distributed. That you don’t consciously control where you place your foot when you walk is just one example. The same shift applies to important decisions that seem even more intuitively like the products of a single powerful organ (the brain), such as the social judgments we form about others. In general, our bodies do a lot more work and computation than they’re given credit for.
These updates to the models, in robotics and in the science of human behavior, suggest we should update our expectations for the organizations we’re thinking about. The example Schaun gave in his post could be seen as a manager failing to think strategically. But it might also be seen as a manager ‘doing the thinking’ for her analysts where she didn’t need to. We were actually really lucky in having a manager who rarely made that kind of mistake (something I think worth saying), but it was one of the most common mistakes I saw in the two-plus years I worked for the Army.
For example, during my two years working within the DoD my organization undertook a large workplace reorganization, a big part of which involved physically moving and rearranging the workspaces (cubes mostly) of a large workforce. Throughout the long period over which that reorganization played out, I often wondered when the organization’s leadership would ask for input about the new desk and workplace arrangements. They never did. Instead the members of the workforce were assigned their new desks and a date to be at them. When I mentioned how strange and ineffective that process seemed to me (giving analytical workers no input in the design and maintenance of their physical and social environments), the typical response was: well, no arrangement is going to satisfy everyone, so someone has to make the ultimate decision, otherwise you’d have anarchy. That response was blind to its huge underlying assumption: that one decision needed to be made at one time. The model of highly and rigidly structured, static environments forces a skewed distribution of decisions. That’s why really innovative organizations organize themselves in ways that bypass that model.
The way the Joint Special Operations Command reorganized to fight Al Qaeda is another example.
Schaun and I have both made the argument that organizations should be regularly and consistently collecting a lot more data than they do (on themselves and on the environments they’re operating in). When and by whom decisions are being made is one of the first things I’d start recording if I were setting out to analyze and understand an organization. What is the distribution of decision-making in the organization? Is a small set of people in a front office making the vast majority of the significant decisions? It’s unlikely that any single distribution is best for all situations and all environments. But narrowly hierarchical and uniformly structured organizations (where the commander makes the decisions and everyone else carries them out) probably aren’t the ones best suited to the kinds of problems and challenges that today’s environments pose.
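As a sketch of what that first analytical pass might look like: suppose decisions were logged with who made them and how significant they were. The log format, names, and the `decision_share` helper below are all invented for illustration; in a real organization the records would come from meeting notes, tickets, approvals, and the like.

```python
from collections import Counter

# Hypothetical decision log: (decision_id, decider, significance) tuples.
decisions = [
    ("d1", "director", "high"), ("d2", "director", "high"),
    ("d3", "director", "low"),  ("d4", "analyst_a", "low"),
    ("d5", "director", "high"), ("d6", "analyst_b", "low"),
]

def decision_share(log, significance=None):
    """Fraction of decisions made by each person,
    optionally filtered to one significance level."""
    rows = [who for _, who, sig in log if significance in (None, sig)]
    counts = Counter(rows)
    total = len(rows)
    return {who: n / total for who, n in counts.items()}

print(decision_share(decisions))                       # overall distribution
print(decision_share(decisions, significance="high"))  # who makes the significant calls?
```

Even a toy log like this makes the skew visible: filtering to the significant decisions shows whether they all flow through one office, which is exactly the question a front-office-heavy organization never asks of itself.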