Thinking in Systems: A Primer

by Donella Meadows

Non-fiction
Started: March 5, 2025
Finished: July 6, 2025
Reading time: 119 days

Highlights

Strange bedfellows, but systems thinking transcends disciplines and cultures and, when it is done right, it overarches history as well.

In 1972, Dana was lead author of The Limits to Growth—a best-selling and widely translated book. The cautions she and her fellow authors issued then are recognized today as the most accurate warnings of how unsustainable patterns could, if unchecked, wreak havoc across the globe. That book made headlines around the world for its observations that continual growth in population and consumption could severely damage the ecosystems and social systems that support life on earth, and that a drive for limitless economic growth could eventually disrupt many local, regional, and global systems.

Hunger, poverty, environmental degradation, economic instability, unemployment, chronic disease, drug addiction, and war, for example, persist in spite of the analytical ability and technical brilliance that have been directed toward eradicating them. No one deliberately creates those problems, no one wants them to persist, but they persist nonetheless. That is because they are intrinsically systems problems—undesirable behaviors characteristic of the system structures that produce them. They will yield only as we reclaim our intuition, stop casting blame, see the system as the source of its own problems, and find the courage and wisdom to restructure it.

This ancient Sufi story was told to teach a simple lesson but one that we often ignore: The behavior of a system cannot be known just by knowing the elements of which the system is made.

I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated. —POUL ANDERSON

A system* is an interconnected set of elements that is coherently organized in a way that achieves something. If you look at that definition closely for a minute, you can see that a system must consist of three kinds of things: elements, interconnections, and a function or purpose.

No one understands all the relationships that allow a tree to do what it does. That lack of knowledge is not surprising. It’s easier to learn about a system’s elements than about its interconnections.

As the days get shorter in the temperate zones, a deciduous tree puts forth chemical messages that cause nutrients to migrate out of the leaves into the trunk and roots and that weaken the stems, allowing the leaves to fall.

Many of the interconnections in systems operate through the flow of information. Information holds systems together and plays a great role in determining how they operate.

Some interconnections in systems are actual physical flows, such as the water in the tree’s trunk or the students progressing through a university. Many interconnections are flows of information—signals that go to decision points or action points within a system.

The best way to deduce the system’s purpose is to watch for a while to see how the system behaves.

Purposes are deduced from behavior, not from rhetoric or stated goals.

An important function of almost every system is to ensure its own perpetuation.

Systems can be nested within systems. Therefore, there can be purposes within purposes.

Keeping sub-purposes and overall system purposes in harmony is an essential function of successful systems.

Changing elements usually has the least effect on the system. If you change all the players on a football team, it is still recognizably a football team.

A system generally goes on being itself, changing only slowly if at all, even with complete substitutions of its elements—as long as its interconnections and purposes remain intact.

The least obvious part of the system, its function or purpose, is often the most crucial determinant of the system’s behavior.

If in a university the students graded the professors, or if arguments were won by force instead of reason, the place would need a different name. It might be an interesting organization, but it would not be a university. Changing interconnections in a system can change it dramatically.

Changes in function or purpose also can be drastic. What if you keep the players and the rules but change the purpose—from winning to losing, for example?

A change in purpose changes a system profoundly, even if every element and interconnection remains the same.

A stock is the foundation of any system. Stocks are the elements of the system that you can see, feel, count, or measure at any given time. A system stock is just what it sounds like: a store, a quantity, an accumulation of material or information that has built up over time.

Stocks change over time through the actions of a flow. Flows are filling and draining, births and deaths, purchases and sales, growth and decay, deposits and withdrawals, successes and failures. A stock, then, is the present memory of the history of changing flows within the system.

If you understand the dynamics of stocks and flows—their behavior over time—you understand a good deal about the behavior of complex systems.

It is in a state of dynamic equilibrium—its level does not change, although water is continuously flowing through it.
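
The bathtub arithmetic behind these stock-and-flow passages is easy to make concrete. Below is a minimal sketch in Python; the stock, inflow, and outflow values are invented for illustration, not taken from the book.

```python
# Minimal stock-and-flow sketch; all numbers are illustrative.

def simulate(stock, inflow, outflow, steps):
    """Advance a single stock: it changes only through its flows."""
    history = [stock]
    for _ in range(steps):
        stock = max(0.0, stock + inflow - outflow)
        history.append(stock)
    return history

# Dynamic equilibrium: inflow equals outflow, so the level never changes,
# although water is continuously flowing through the tub.
print(simulate(stock=50.0, inflow=5.0, outflow=5.0, steps=5))
# -> [50.0, 50.0, 50.0, 50.0, 50.0, 50.0]

# Unbalanced flows: the stock responds, but only gradually.
# Flows take time to flow, so stocks change slowly.
print(simulate(stock=50.0, inflow=5.0, outflow=10.0, steps=5))
# -> [50.0, 45.0, 40.0, 35.0, 30.0, 25.0]
```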

Everyone understands that you can prolong the life of an oil-based economy by discovering new oil deposits. It seems to be harder to understand that the same result can be achieved by burning less oil. A breakthrough in energy efficiency is equivalent, in its effect on the stock of available oil, to the discovery of a new oil field—although different people profit from it.

A stock can be increased by decreasing its outflow rate as well as by increasing its inflow rate. There’s more than one way to fill a bathtub!

The tub can’t fill up immediately, even with the inflow faucet on full blast. A stock takes time to change, because flows take time to flow. That’s a vital point, a key to understanding why systems behave as they do. Stocks usually change slowly.

Stocks generally change slowly, even when the flows into or out of them change suddenly. Therefore, stocks act as delays or buffers or shock absorbers in systems.

Once an economy has a lot of oil-burning furnaces and automobile engines, it cannot change quickly to furnaces and engines that burn a different fuel, even if the price of oil suddenly changes. It has taken decades to accumulate the stratospheric pollutants that destroy the earth’s ozone layer; it will take decades for those pollutants to be removed.

If you have a sense of the rates of change of stocks, you don’t expect things to happen faster than they can happen. You don’t give up too soon.

The presence of stocks allows inflows and outflows to be independent of each other and temporarily out of balance with each other.

Stocks allow inflows and outflows to be decoupled and to be independent and temporarily out of balance with each other.

It would be hard to run an oil company if gasoline had to be produced at the refinery at exactly the rate the cars were burning it.

Most individual and institutional decisions are designed to regulate the levels in stocks.

As the stock of growing grain rises or fails to rise in the fields, farmers decide whether to apply water or pesticide, grain companies decide how many barges to book for the harvest, speculators bid on future values of the harvest, cattle growers build up or cut down their herds.

Systems thinkers see the world as a collection of stocks along with the mechanisms for regulating the levels in the stocks by manipulating flows.

Not all systems have feedback loops. Some systems are relatively simple open-ended chains of stocks and flows.

Remember—all system diagrams are simplifications of the real world. We each choose how much complexity to look at.

This kind of stabilizing, goal-seeking, regulating loop is called a balancing feedback loop, so I put a B inside the loop in the diagram. Balancing feedback loops are goal-seeking or stability-seeking.

The second kind of feedback loop is amplifying, reinforcing, self-multiplying, snowballing—a vicious or virtuous circle that can cause healthy growth or runaway destruction. It is called a reinforcing feedback loop, and will be noted with an R in the diagrams.
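
The two loop types can be contrasted in a toy simulation. In this sketch (goal, gain, and growth-rate values are assumed for illustration), a balancing loop moves its stock toward a goal in proportion to the remaining gap, while a reinforcing loop grows its stock in proportion to the stock itself.

```python
# Two basic loop types; parameter values are assumed for illustration.

def balancing(stock, goal=100.0, k=0.25, steps=20):
    """B loop: flow proportional to the gap, so the stock seeks the goal."""
    for _ in range(steps):
        stock += k * (goal - stock)
    return stock

def reinforcing(stock, rate=0.10, steps=20):
    """R loop: flow proportional to the stock itself, so it snowballs."""
    for _ in range(steps):
        stock += rate * stock
    return stock

print(round(balancing(10.0), 1))    # -> 99.7, closing in on the goal of 100
print(round(reinforcing(10.0), 1))  # -> 67.3, i.e. 10 * 1.1**20, exponential
```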

The more prices go up, the more wages have to go up if people are to maintain their standards of living. The more wages go up, the more prices have to go up to maintain profits. This means that wages have to go up again, so prices go up again.

This is not simple linear growth. It is not constant over time. The growth of the bank account at lower interest rates may look linear in the first few years. But, in fact, growth goes faster and faster. The more is there, the more is added. This kind of growth is called “exponential.”

Reinforcing feedback loops are self-enhancing, leading to exponential growth or to runaway collapses over time. They are found whenever a stock has the capacity to reinforce or reproduce itself.

HINT ON REINFORCING LOOPS AND DOUBLING TIME Because we bump into reinforcing loops so often, it is handy to know this shortcut: The time it takes for an exponentially growing stock to double in size, the “doubling time,” equals approximately 70 divided by the growth rate (expressed as a percentage). Example: If you put $100 in the bank at 7% interest per year, you will double your money in 10 years (70 ÷ 7 = 10). If you get only 5% interest, your money will take 14 years to double.
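
The rule of 70 is an approximation of the exact doubling time, ln(2)/ln(1 + r). A quick check of how close the shortcut comes, for a few sample growth rates:

```python
import math

# Rule of 70 vs. the exact doubling time ln(2) / ln(1 + r).
for pct in (5, 7, 10):
    exact = math.log(2) / math.log(1 + pct / 100)
    print(f"{pct}%: rule of 70 -> {70 / pct:.1f} yr, exact -> {exact:.1f} yr")
# 5%: 14.0 vs 14.2   7%: 10.0 vs 10.2   10%: 7.0 vs 7.3
```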

In a well-insulated house, the leak will be slower and so the house more comfortable than a poorly insulated one, even a poorly insulated house with a big furnace.

The information delivered by a feedback loop—even nonphysical feedback—can only affect future behavior; it can’t deliver a signal fast enough to correct behavior that drove the current feedback. Even nonphysical information takes time to feed back into the system.

The specific principle you can deduce from this simple system is that you must remember in thermostat-like systems to take into account whatever draining or filling processes are going on. If you don’t, you won’t achieve the target level of your stock.

A stock-maintaining balancing feedback loop must have its goal set appropriately to compensate for draining or inflowing processes that affect that stock. Otherwise, the feedback process will fall short of or exceed the target for the stock.
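
Here is a minimal sketch of that principle, with invented numbers: a thermostat-like loop whose goal ignores the heat leaking to the outdoors settles below its target, while a goal set a little higher compensates for the drain.

```python
# Thermostat with a heat leak; all numbers invented for illustration.
# The furnace pushes the room toward the setpoint while heat drains
# toward the outdoor temperature.

def settle(setpoint, outdoor=10.0, gain=0.5, leak=0.1, temp=18.0, steps=200):
    for _ in range(steps):
        heating = gain * max(0.0, setpoint - temp)   # balancing loop
        leaking = leak * (temp - outdoor)            # draining process
        temp += heating - leaking
    return round(temp, 2)

print(settle(setpoint=20.0))  # -> 18.33: falls short, the leak is uncompensated
print(settle(setpoint=22.0))  # -> 20.0: a higher goal offsets the drain
```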

This behavior is an example of shifting dominance of feedback loops. Dominance is an important concept in systems thinking. When one loop dominates another, it has a stronger impact on behavior. Because systems often have several competing feedback loops operating simultaneously, those loops that dominate the system will determine the behavior.

Dynamic systems studies usually are not designed to predict what will happen. Rather, they’re designed to explore what would happen, if a number of driving factors unfold in a range of different ways.

System dynamics models explore possible futures and ask “what if” questions.

Model utility depends not on whether its driving scenarios are realistic (since no one can know that for sure), but on whether it responds with a realistic pattern of behavior.

One important piece of the larger system that affects population is the economy. At the heart of the economy is another reinforcing-loop-plus-balancing-loop system—the same kind of structure, with the same kinds of behavior, as the population (see Figure 27). The greater the stock of physical capital (machines and factories) in the economy and the efficiency of production (output per unit of capital), the more output (goods and services) can be produced each year.

The more output that is produced, the more can be invested to make new capital. This is a reinforcing loop, like the birth loop for a population. The investment fraction is equivalent to the fertility. The greater the fraction of its output a society invests, the faster its capital stock will grow.

Physical capital is drained by depreciation—obsolescence and wearing-out. The balancing loop controlling depreciation is equivalent to the death loop in a population. The “mortality” of capital is determined by the average capital lifetime. The longer the lifetime, the smaller the fraction of capital that must be retired and replaced each year.

This is another example of a principle we’ve already encountered: You can make a stock grow by decreasing its outflow rate as well as by increasing its inflow rate!
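
A toy version of the capital system shows both levers at work. The output ratio, investment fraction, and capital lifetime below are assumed values, not the book’s; the point is only that raising the investment inflow and stretching the depreciation outflow both grow the stock.

```python
# Capital stock with an investment (R) loop and a depreciation (B) loop.
# Parameter values are assumed, not the book's.

def capital_after(years, capital=100.0, output_ratio=0.3,
                  invest_frac=0.2, lifetime=20.0):
    for _ in range(years):
        output = output_ratio * capital       # goods and services per year
        investment = invest_frac * output     # reinforcing inflow
        depreciation = capital / lifetime     # balancing outflow
        capital += investment - depreciation
    return round(capital, 1)

print(capital_after(30))                      # -> 134.8, baseline growth
print(capital_after(30, invest_frac=0.25))    # -> 209.8, grow the inflow
print(capital_after(30, lifetime=30.0))       # -> 220.2, shrink the outflow
```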

Just as many factors influence the fertility and mortality of a population, so many factors influence the output ratio, investment fraction, and the lifetime of capital—interest rates, technology, tax policy, consumption habits, and prices, to name just a few.

It may seem strange to you that I call the capital system the same kind of “zoo animal” as the population system. A production system with factories and shipments and economic flows doesn’t look much like a population system with babies being born and people aging and having more babies and dying. But from a systems point of view these systems, so dissimilar in many ways, have one important thing in common: their feedback-loop structures. Both have a stock governed by a reinforcing growth loop and a balancing death loop. Both also have an aging process. Steel mills and lathes and turbines get older and die just as people do. Systems with similar feedback structures produce similar dynamic behaviors.

A delay in a balancing feedback loop makes a system likely to oscillate.

Something has to change and, since this system has a learning person within it, something will change. “High leverage, wrong direction,” the system-thinking car dealer says to herself as she watches this failure of a policy intended to stabilize the oscillations. This perverse kind of result can be seen all the time—someone trying to fix a system is attracted intuitively to a policy lever that in fact does have a strong effect on the system. And then the well-intentioned fixer pulls the lever in the wrong direction! This is just one example of how we can be surprised by the counterintuitive behavior of systems when we start trying to change them.

Delays are pervasive in systems, and they are strong determinants of behavior. Changing the length of a delay may (or may not, depending on the type of delay and the relative lengths of other delays) make a large change in the behavior of a system.

Part of the problem here is that the car dealer has been reacting not too slowly, but too quickly. Given the configuration of this system, she has been overreacting. Things would go better if, instead of decreasing her response delay from three days to two, she would increase the delay from three days to six, as illustrated in Figure 36. As Figure 36 shows, the oscillations are greatly damped.
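
The car-dealer dynamic can be sketched with a simple inventory loop and a fixed delivery delay. The numbers and the ordering rule here are assumptions for illustration, not the book’s model, but they reproduce the qualitative lesson: with a delay in the loop, correcting harder produces wider swings, while a gentler response lets the oscillation die out.

```python
from collections import deque

# Inventory balancing loop with a delivery delay; numbers and ordering rule
# are assumptions for illustration. Orders arrive `delay` days after they
# are placed; the dealer orders current sales plus a fraction of the
# inventory shortfall. She ignores orders already in transit, which is
# exactly what makes her overshoot.

def run(adjust_days, delay=3, days=30, desired=50.0):
    inventory, sales = 50.0, 10.0
    pipeline = deque([sales] * delay)       # orders already in transit
    trace = []
    for day in range(days):
        if day == 5:
            sales = 12.0                    # a step increase in demand
        inventory += pipeline.popleft() - sales
        pipeline.append(max(0.0, sales + (desired - inventory) / adjust_days))
        trace.append(round(inventory, 1))
    return trace

print(run(adjust_days=2))  # aggressive correction: wide swings around 50
print(run(adjust_days=6))  # gentler correction: smaller swings that die out
```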

That very large system, with interconnected industries responding to each other through delays, entraining each other in their oscillations, and being amplified by multipliers and speculators, is the primary cause of business cycles. Those cycles don’t come from presidents, although presidents can do much to ease or intensify the optimism of the upturns and the pain of the downturns. Economies are extremely complex systems; they are full of balancing feedback loops with delays, and they are inherently oscillatory.

But any real physical entity is always surrounded by and exchanging things with its environment. A corporation needs a constant supply of energy and materials and workers and managers and customers. A growing corn crop needs water and nutrients and protection from pests. A population needs food and water and living space, and if it’s a human population, it needs jobs and education and health care and a multitude of other things. Any entity that is using energy and processing materials needs a place to put its wastes, or a process to carry its wastes away. Therefore, any physical, growing system is going to run into some kind of constraint, sooner or later. That constraint will take the form of a balancing loop that in some way shifts the dominance of the reinforcing loop driving the growth behavior, either by strengthening the outflow or by weakening the inflow.

In physical, exponentially growing systems, there must be at least one reinforcing loop driving the growth and at least one balancing loop constraining the growth, because no physical system can grow forever in a finite environment.

A quantity growing exponentially toward a constraint or limit reaches that limit in a surprisingly short time. The higher and faster you grow, the farther and faster you fall, when you’re building up a capital stock dependent on a nonrenewable resource. In the face of exponential growth of extraction or use, a doubling or quadrupling of the nonrenewable resource gives little added time to develop alternatives.
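
The arithmetic behind that claim is worth working once. If extraction starts at a rate e0 and grows exponentially at rate r, the cumulative amount extracted by time t is e0(e^(rt) - 1)/r; setting that equal to the stock and solving for t gives the resource lifetime. With assumed numbers:

```python
import math

# Lifetime of a stock S under extraction that starts at e0 and grows at
# rate r: solve e0 * (exp(r*t) - 1) / r = S for t. Numbers are illustrative.

def lifetime(stock, e0=1.0, r=0.07):
    return round(math.log(1 + r * stock / e0) / r, 1)

print(lifetime(100.0))   # -> 29.7 years
print(lifetime(200.0))   # -> 38.7 years: doubling the stock buys only ~9 more
print(lifetime(400.0))   # -> 48.1 years: quadrupling buys only ~18 more
```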

The real choice in the management of a nonrenewable resource is whether to get rich very fast or to get less rich but stay that way longer.

I will just point out that, according to the dynamics of depletion, the larger the stock of initial resources, the more new discoveries, the longer the growth loops elude the control loops, and the higher the capital stock and its extraction rate grow, and the earlier, faster, and farther will be the economic fall on the back side of the production peak.

Renewable resources are flow-limited. They can support extraction or harvest indefinitely, but only at a finite flow rate equal to their regeneration rate. If they are extracted faster than they regenerate, they may eventually be driven below a critical threshold and become, for all practical purposes, nonrenewable.

Nonrenewable resources are stock-limited. The entire stock is available at once, and can be extracted at any rate (limited mainly by extraction capital). But since the stock is not renewed, the faster the extraction rate, the shorter the lifetime of the resource.

Very long-term renewable-resource cycles like these have been observed, for example, in the logging industry in New England, now in its third cycle of growth, overcutting, collapse, and eventual regeneration of the resource. But this is not true for all resource populations.

Neither renewable nor nonrenewable limits to growth allow a physical stock to grow forever, but the constraints they impose are dynamically quite different. The difference comes because of the difference between stocks and flows. The trick, as with all the behavioral possibilities of complex systems, is to recognize what structures contain which latent behaviors, and what conditions release those behaviors—and, where possible, to arrange the structures and conditions to reduce the probability of destructive behaviors and to encourage the possibility of beneficial ones.

If the land mechanism as a whole is good, then every part is good, whether we understand it or not. If the biota, in the course of aeons, has built something we like but do not understand, then who but a fool would discard seemingly useless parts? To keep every cog and wheel is the first precaution of intelligent tinkering. —Aldo Leopold

Why do systems work so well? Consider the properties of highly functional systems—machines or human communities or ecosystems—which are familiar to you. Chances are good that you may have observed one of three characteristics: resilience, self-organization, or hierarchy.

A set of feedback loops that can restore or rebuild feedback loops is resilience at a still higher level—meta-resilience, if you will. Even higher meta-meta-resilience comes from feedback loops that can learn, create, design, and evolve ever more complex restorative structures. Systems that can do this are self-organizing, which will be the next surprising system characteristic I come to.

There are always limits to resilience.

Resilience is not the same thing as being static or constant over time. Resilient systems can be very dynamic. Short-term oscillations, or periodic outbreaks, or long cycles of succession, climax, and collapse may in fact be the normal condition, which resilience acts to restore!

Hundreds of years of intensive management of the forests of Europe gradually have replaced native ecosystems with single-age, single-species plantations, often of nonnative trees. These forests are designed to yield wood and pulp at a high rate indefinitely. However, without multiple species interacting with each other and drawing and returning varying combinations of nutrients from the soil, these forests have lost their resilience. They seem to be especially vulnerable to a new form of insult: industrial air pollution.

Systems need to be managed not only for productivity or stability, they also need to be managed for resilience—the ability to recover from perturbation, the ability to restore or repair themselves.

Loss of resilience can come as a surprise, because the system usually is paying much more attention to its play than to its playing space. One day it does something it has done a hundred times before and crashes.

This capacity of a system to make its own structure more complex is called self-organization. You see self-organization in a small, mechanistic way whenever you see a snowflake, or ice feathers on a poorly insulated window, or a supersaturated solution suddenly forming a garden of crystals. You see self-organization in a more profound way whenever a seed sprouts, or a baby learns to speak, or a neighborhood decides to come together to oppose a toxic waste dump.

Self-organization is such a common property, particularly of living systems, that we take it for granted. If we didn’t, we would be dazzled by the unfolding systems of our world. And if we weren’t nearly blind to the property of self-organization, we would do better at encouraging, rather than destroying, the self-organizing capacities of the systems of which we are a part.

Self-organization produces heterogeneity and unpredictability. It is likely to come up with whole new structures, whole new ways of doing things. It requires freedom and experimentation, and a certain amount of disorder. These conditions that encourage self-organization often can be scary for individuals and threatening to power structures. As a consequence, education systems may restrict the creative powers of children instead of stimulating those powers. Economic policies may lean toward supporting established, powerful enterprises rather than upstart, new ones. And many governments prefer their people not to be too self-organizing.

Systems often have the property of self-organization—the ability to structure themselves, to create new structure, to learn, diversify, and complexify. Even complex forms of self-organization may arise from relatively simple organizing rules—or may not.

In the process of creating new structures and increasing complexity, one thing that a self-organizing system often generates is hierarchy.

The watches made by both Hora and Tempus consisted of about one thousand parts each. Tempus put his together in such a way that if he had one partly assembled and had to put it down—to answer the phone, say—it fell to pieces. When he came back to it, Tempus would have to start all over again. The more his customers phoned him, the harder it became for him to find enough uninterrupted time to finish a watch. Hora’s watches were no less complex than those of Tempus, but he put together stable subassemblies of about ten elements each. Then he put ten of these subassemblies together into a larger assembly; and ten of those assemblies constituted the whole watch. Whenever Hora had to put down a partly completed watch to answer the phone, he lost only a small part of his work. So he made his watches much faster and more efficiently than did Tempus.
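
This story is Herbert Simon’s parable of the two watchmakers, and it can be made quantitative. Assuming, purely for illustration, a 1 percent chance of an interrupting phone call with each part added, the expected work for the two strategies differs by three orders of magnitude:

```python
# Expected work for Tempus vs. Hora, assuming a 1% chance of interruption
# with each part added; an interruption scraps the current (sub)assembly.

p = 0.01          # interruption probability per operation (assumed)
q = 1 - p

def expected_ops(n):
    """Expected operations to finish n uninterrupted steps in a row,
    restarting the piece after every interruption."""
    return (q ** -n - 1) / p

tempus = expected_ops(1000)    # one unbroken 1000-part run
hora = 111 * expected_ops(10)  # 100 subassemblies of 10 parts, then 10
                               # assemblies of 10, then 1 final assembly
print(f"Tempus: {tempus:,.0f} operations")  # roughly 2.3 million
print(f"Hora:   {hora:,.0f} operations")    # roughly 1,200
```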

Hierarchies evolve from the lowest level up—from the pieces to the whole, from cell to organ to organism, from individual to team, from actual production to management of production.

When a subsystem’s goals dominate at the expense of the total system’s goals, the resulting behavior is called suboptimization.

To be a highly functional system, hierarchy must balance the welfare, freedoms, and responsibilities of the subsystems and total system—there must be enough central control to achieve coordination toward the large-system goal, and enough autonomy to keep all subsystems flourishing, functioning, and self-organizing. Hierarchical systems evolve from the bottom up. The purpose of the upper layers of the hierarchy is to serve the purposes of the lower layers. Resilience, self-organization, and hierarchy are three of the reasons dynamic systems can work so well. Promoting or managing for these properties of a system can improve its ability to function well over the long term—to be sustainable. But watching how systems behave also can be full of surprises.

So are the ways I picture the world in my head—my mental models. None of these is or ever will be the real world. Our models usually have a strong congruence with the world.

Everything we think we know about the world is a model. Our models do have a strong congruence with the world. Our models fall far short of representing the real world fully.

You can’t navigate well in an interconnected, feedback-dominated world unless you take your eyes off short-term events and look for long-term behavior and structure; unless you are aware of false boundaries and bounded rationality; unless you take into account limiting factors, nonlinearities and delays. You are likely to mistreat, misdesign, or misread systems if you don’t respect their properties of resilience, self-organization, and hierarchy.

The behavior of a system is its performance over time—its growth, stagnation, decline, oscillation, randomness, or evolution. If the news did a better job of putting events into historical context, we would have better behavior-level understanding, which is deeper than event-level understanding.

Flows go up and down, on and off, in all sorts of combinations, in response to stocks, not to other flows.

And that’s one reason why systems of all kinds surprise us. We are too fascinated by the events they generate. We pay too little attention to their history. And we are insufficiently skilled at seeing in their history clues to the structures from which behavior and events flow.

The budworm/spruce/fir system oscillates over decades, but it is ecologically stable within bounds. It can go on forever. The main effect of the budworm is to allow tree species other than fir to persist. But in this case what is ecologically stable is economically unstable. In eastern Canada, the economy is almost completely dependent on the logging industry, which is dependent on a steady supply of fir and spruce.

There are only boundaries of word, thought, perception, and social agreement—artificial, mental-model boundaries.

The lesson of boundaries is hard even for systems thinkers to get. There is no single, legitimate boundary to draw around a system. We have to invent boundaries for clarity and sanity; and boundaries can produce problems when we forget that we’ve artificially created them.

When you draw boundaries too narrowly, the system surprises you. For example, if you try to deal with urban traffic problems without thinking about settlement patterns, you build highways, which attract housing developments along their whole length. Those households, in turn, put more cars on the highways, which then become just as clogged as before. There are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion—the questions we want to ask.

This “my model is bigger than your model” game results in enormously complicated analyses, which produce piles of information that may only serve to obscure the answers to the questions at hand. For example, modeling the earth’s climate in full detail is interesting for many reasons, but may not be necessary for figuring out how to reduce a country’s CO2 emissions to reduce climate change.

It’s a great art to remember that boundaries are of our own making, and that they can and should be reconsidered for each new discussion, problem, or purpose.

This concept of a limiting factor is simple and widely misunderstood. Agronomists assume, for example, that they know what to put in artificial fertilizer, because they have identified many of the major and minor nutrients in good soil. Are there any essential nutrients they have not identified? How do artificial fertilizers affect soil microbe communities? Do they interfere with, and therefore limit, any other functions of good soil? And what limits the production of artificial fertilizers? At any given time, the input that is most important to a system is the one that is most limiting.

One of the classic models taught to systems students at MIT is Jay Forrester’s corporate-growth model.

The company may hire salespeople, for example, who are so good that they generate orders faster than the factory can produce. Delivery delays increase and customers are lost, because production capacity is the most limiting factor. So the managers expand the capital stock of production plants. New people are hired in a hurry and trained too little. Quality suffers and customers are lost because labor skill is the most limiting factor. So management invests in worker training. Quality improves, new orders pour in, and the order-fulfillment and record-keeping system clogs. And so forth.

Insight comes not only from recognizing which factor is limiting, but from seeing that growth itself depletes or enhances limits and therefore changes what is limiting.

To shift attention from the abundant factors to the next potential limiting factor is to gain real understanding of, and control over, the growth process.

Understanding layers of limits and keeping an eye on the next upcoming limiting factor is not a recipe for perpetual growth, however. For any physical entity in a finite environment, perpetual growth is impossible. Ultimately, the choice is not to grow forever but to decide what limits to live within.

There always will be limits to growth. They can be self-imposed. If they aren’t, they will be system-imposed. No physical entity can grow forever. If company managers, city governments, the human population do not choose and enforce their own limits to keep growth within the capacity of the supporting environment, then the environment will choose and enforce limits.

We are surprised over and over again at how much time things take. Jay Forrester used to tell us, when we were modeling a construction or processing delay, to ask everyone in the system how long they thought the delay was, make our best guess, and then multiply by three. (That correction factor also works perfectly, I have found, for estimating how long it will take to write a book!) Delays are ubiquitous in systems. Every stock is a delay. Most flows have delays—shipping delays, perception delays, processing delays, maturation delays.

Delays are often sensitive leverage points for policy, if they can be made shorter or longer.

When there are long delays in feedback loops, some sort of foresight is essential. To act only when a problem becomes obvious is to miss an important opportunity to solve the problem.

Delays determine how fast systems can react, how accurately they hit their targets, and how timely is the information passed around a system. Overshoots, oscillations, and collapses are always caused by delays.

Because of decades-long delays as the earth’s oceans respond to warmer temperatures, human fossil-fuel emissions have already induced changes in climate that will not be fully revealed for a generation or two.

Because of long delays in building new power plants, the electricity industry is plagued with cycles of overcapacity and then undercapacity leading to brownouts.

Bounded rationality means that people make quite reasonable decisions based on the information they have. But they don’t have perfect information, especially about more distant parts of the system. Fishermen don’t know how many fish there are, much less how many fish will be caught by other fishermen that same day.

So instead of finding a long-term optimum, we discover within our limited purview a choice we can live with for now, and we stick to it, changing our behavior only when forced to.

In your new position, you experience the information flows, the incentives and disincentives, the goals and discrepancies, the pressures—the bounded rationality—that goes with that position.

If you are now a fisherman with a mortgage on your boat, a family to support, and imperfect knowledge of the state of the fish population, you will overfish.

Within the bounds of what a person in that part of the system can see and know, the behavior is reasonable.

The bounded rationality of each actor in a system—determined by the information, incentives, disincentives, goals, stresses, and constraints impinging on that actor—may or may not lead to decisions that further the welfare of the system as a whole. If they do not, putting new actors into the same system will not improve the system’s performance. What makes a difference is redesigning the system to improve the information, incentives, disincentives, goals, stresses, and constraints that have an effect on specific actors.

Such resistance to change arises when goals of subsystems are different from and inconsistent with each other. Picture a single-system stock—drug supply on the city streets, for example—with various actors trying to pull that stock in different directions. Addicts want to keep it high, enforcement agencies want to keep it low, pushers want to keep it right in the middle so prices don’t get either too high or too low. The average citizen really just wants to be safe from robberies by addicts trying to get money to buy drugs. All the actors work hard to achieve their different goals. If any one actor gains an advantage and moves the system stock (drug supply) in one direction (enforcement agencies manage to cut drug imports at the border), the others double their efforts to pull it back (street prices go up, addicts have to commit more crimes to buy their daily fixes, higher prices bring more profits, suppliers use the profits to buy planes and boats to evade the border patrols). Together, the countermoves produce a standoff, the stock is not much different from before, and that is not what anybody wants.

So, they resisted the government’s pull toward larger family size, at great cost to themselves and to the generation of children who grew up in orphanages.

One way to deal with policy resistance is to try to overpower it. If you wield enough power and can keep wielding it, the power approach can work, at the cost of monumental resentment and the possibility of explosive consequences if the power is ever let up. This is what happened with the formulator of the Romanian population policy, dictator Nicolae Ceausescu, who tried long and hard to overpower the resistance to his policy. When his government was overturned, he was executed, along with his family. The first law the new government repealed was the ban on abortion and contraception.

The alternative to overpowering policy resistance is so counterintuitive that it’s usually unthinkable. Let go. Give up ineffective policies. Let the resources and energy spent on both enforcing and resisting be used for more constructive purposes. You won’t get your way with the system, but it won’t go as far in a bad direction as you think, because much of the action you were trying to correct was in response to your own action. If you calm down, those who are pulling against you will calm down too.

The most effective way of dealing with policy resistance is to find a way of aligning the various goals of the subsystems, usually by providing an overarching goal that allows all actors to break out of their bounded rationality.

The most familiar examples of this harmonization of goals are mobilizations of economies during wartime, or recovery after war or natural disaster.

The trap called the tragedy of the commons comes about when there is escalation, or just simple growth, in a commonly shared, erodable environment.

Since the herdsman receives all the proceeds from the sale of the additional animal, the positive utility is nearly +1.… Since, however, the effects of overgrazing are shared by all, … the negative utility for any particular decision-making herdsman is only a fraction of –1.… The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another.… But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each … is locked into a system that compels him to increase his herd without limit—in a world that is limited. Ruin is the destination toward which all … rush, each pursuing his own best interest.

The roots no longer hold the soil from washing away in the rains. With less soil, the grass grows more poorly. And so forth. Another reinforcing feedback loop running downhill.

The hopeful immigrant to Germany expects nothing but benefit from that country’s generous asylum laws, and has no reason to take into consideration the fact that too many immigrants will inevitably force Germany to toughen those laws. In fact, the knowledge that Germany is discussing that possibility is all the more reason to hurry to Germany! The tragedy of the commons arises from missing (or too long delayed) feedback from the resource to the growth of the users of that resource.

It is to everyone’s immediate advantage to go on using fossil fuels, although carbon dioxide from these fuels is a greenhouse gas that is causing global climate change.

If you think that the reasoning of an exploiter of the commons is hard to understand, ask yourself how willing you are to carpool in order to reduce air pollution, or to clean up after yourself whenever you make a mess. The structure of a commons system makes selfish behavior much more convenient and profitable than behavior that is responsible to the whole community and to the future.

There are three ways to avoid the tragedy of the commons.

Educate and exhort. Help people to see the consequences of unrestrained use of the commons. Appeal to their morality. Persuade them to be temperate. Threaten transgressors with social disapproval or eternal hellfire.

Privatize the commons. Divide it up, so that each person reaps the consequences of his or her own actions. If some people lack the self-control to stay below the carrying capacity of their own private resource, those people will harm only themselves and not others.

Regulate the commons. Garrett Hardin calls this option, bluntly, “mutual coercion, mutually agreed upon.” Regulation can take many forms, from outright bans on certain behaviors to quotas, permits, taxes, incentives. To be effective, regulation must be enforced by policing and penalties.

Most people comply with regulatory systems most of the time, as long as they are mutually agreed upon and their purpose is understood. But all regulatory systems must use police power and penalties for the occasional noncooperator.

Notice from these examples how many different forms “mutual coercion, mutually agreed upon” can take. The traffic light doles out access to the commons on a “take your turn” basis. The meters charge for use of the parking commons. The bank uses physical barriers and strong penalties. Permits to use broadcasting frequencies are issued to applicants by a government agency. And garbage fees directly restore the missing feedback, letting each household feel the economic impact of its own use of the commons.

Another name for this system trap is “eroding goals.” It is also called the “boiled frog syndrome,” from the old story (I don’t know whether it is true) that a frog put suddenly in hot water will jump right out, but if it is put into cold water that is gradually heated up, the frog will stay there happily until it boils.

There are two antidotes to eroding goals. One is to keep standards absolute, regardless of performance. Another is to make goals sensitive to the best performances of the past, instead of the worst.

THE TRAP: DRIFT TO LOW PERFORMANCE

Allowing performance standards to be influenced by past performance, especially if there is a negative bias in perceiving past performance, sets up a reinforcing feedback loop of eroding goals that sets a system drifting toward low performance.

THE WAY OUT

Keep performance standards absolute. Even better, let standards be enhanced by the best actual performances instead of being discouraged by the worst. Use the same structure to set up a drift toward high performance!
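
A small simulation shows how the drift compounds. The bias and adjustment rates below are invented for illustration: when the goal adapts toward negatively biased perceptions of performance, the standard ratchets downward year after year, while an absolute standard holds.

```python
# Eroding goals; bias and adjustment rates are invented for illustration.
# Performance tracks the goal, perception sees performance as 10% worse
# than it is, and the goal adapts toward that perception.

def drift(goal=100.0, bias=0.9, adapt=0.3, years=25):
    performance = goal
    for _ in range(years):
        performance += 0.8 * (goal - performance)  # performance seeks the goal
        perceived = bias * performance             # negative bias
        goal += adapt * (perceived - goal)         # the standard erodes
    return round(performance, 1)

print(drift())            # ratchets far below the original standard of 100
print(drift(adapt=0.0))   # -> 100.0: an absolute standard does not erode
```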

Leaders of Chancellor Helmut Kohl’s coalition, led by the Christian Democratic Union, agreed last week with the opposition Social Democrats, after months of bickering, to turn back a flood of economic migrants by tightening conditions for claiming asylum. —International Herald Tribune, 1992

Islamic militants kidnapped an Israeli soldier Sunday and threatened to kill him unless the army quickly releases the imprisoned founder of a dominant Muslim group in the Gaza Strip.… The kidnapping … came in a wave of intense violence, … with the shooting of three Palestinians and an Israeli soldier who … was gunned down from a passing vehicle while he was on patrol in a jeep. In addition Gaza was buffeted by repeated clashes between stone-throwing demonstrators and Israeli troops, who opened fire with live ammunition and rubber bullets, wounding at least 120 people. —Clyde Haberman, International Herald Tribune, 1992

You hit me, so I hit you back a little harder, so you hit me back a little harder, and pretty soon we have a real fight going. “I’ll raise you one” is the decision rule that leads to escalation. Escalation comes from a reinforcing loop set up by competing actors trying to get ahead of each other.

Like many of the other system traps, escalation is not necessarily a bad thing. If the competition is about some desirable goal, like a more efficient computer or a cure for AIDS, it can hasten the whole system toward the goal. But when it is escalating hostility, weaponry, noise, or irritation, this is an insidious trap indeed. The most common and awful examples are arms races.

Advertising companies escalate their bids for the attention of the consumer. One company does something bright and loud and arresting. Its competitor does something louder, bigger, brasher. The first company outdoes that. Advertising becomes ever more present in the environment (in the mail, on the telephone), more garish, more noisy, more intrusive, until the consumer’s senses are dulled to the point at which almost no advertiser’s message can penetrate.

Escalation in morality can lead to holier-than-thou sanctimoniousness. Escalation in art can lead from baroque to rococo to kitsch. Escalation in environmentally responsible lifestyles can lead to rigid and unnecessary puritanism.

One way out of the escalation trap is unilateral disarmament—deliberately reducing your own system state to induce reductions in your competitor’s state. Within the logic of the system, this option is almost unthinkable. But it actually can work, if one does it with determination, and if one can survive the short-term advantage of the competitor.

THE TRAP: ESCALATION

When the state of one stock is determined by trying to surpass the state of another stock—and vice versa—then there is a reinforcing feedback loop carrying the system into an arms race, a wealth race, a smear campaign, escalating loudness, escalating violence. The escalation is exponential and can lead to extremes surprisingly quickly. If nothing is done, the spiral will be stopped by someone’s collapse—because exponential growth cannot go on forever.

THE WAY OUT

The best way out of this trap is to avoid getting in it. If caught in an escalating system, one can refuse to compete (unilaterally disarm), thereby interrupting the reinforcing loop. Or one can negotiate a new system with balancing loops to control the escalation.
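
The exponential character of escalation is easy to verify in a two-stock sketch, assuming each side tries to stay 10 percent ahead of the other:

```python
# Escalation: each side tries to stay 10% ahead of the other (assumed margin).

a = b = 1.0
for round_no in range(1, 11):
    a = 1.1 * b                    # A raises to top B
    b = 1.1 * a                    # B answers by topping A
    print(round_no, round(a, 2), round(b, 2))
# The stocks multiply by 1.21 every round; growth is exponential, and
# nothing inside the loop stops the spiral short of a collapse.
```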

Another expression of this trap was part of the critique of capitalism by Karl Marx. Two firms competing in the same market will exhibit the same behavior as two species competing in a niche. One will gain a slight advantage, through greater efficiency or smarter investment or better technology or bigger bribes, or whatever. With that advantage, the firm will have more income to invest in productive facilities or newer technologies or advertising or bribes. Its reinforcing feedback loop of capital accumulation will be able to turn faster than that of the other firm, enabling it to produce still more and earn still more. If there is a finite market and no antitrust law to stop it, one firm will take over everything as long as it chooses to reinvest in and expand its production facilities.

Some people think the fall of the communist Soviet Union has disproved the theories of Karl Marx, but this particular analysis of his—that market competition systematically eliminates market competition—is demonstrated wherever there is, or used to be, a competitive market.

Diversification is not guaranteed, however, especially if the monopolizing firm (or species) has the power to crush all offshoots, or buy them up, or deprive them of the resources they need to stay alive.

THE TRAP: SUCCESS TO THE SUCCESSFUL

If the winners of a competition are systematically rewarded with the means to win again, a reinforcing feedback loop is created by which, if it is allowed to proceed uninhibited, the winners eventually take all, while the losers are eliminated.

THE WAY OUT

Diversification, which allows those who are losing the competition to get out of that game and start another one; strict limitation on the fraction of the pie any one winner may win (antitrust laws); policies that level the playing field, removing some of the advantage of the strongest players or increasing the advantage of the weakest; policies that devise rewards for success that do not bias the next round of competition.

One definition of addiction used in Alcoholics Anonymous is repeating the same stupid behavior over and over and over, and somehow expecting different results.

Is the price of oil going up? Rather than acknowledge the inevitable depletion of a nonrenewable resource and increase fuel efficiency or switch to other fuels, we can fix the price. (Both the Soviet Union and the United States did this as their first response to the oil-price shocks of the 1970s.) That way we can pretend that nothing is happening and go on burning oil—making the depletion problem worse.

Breaking an addiction is painful. It may be the physical pain of heroin withdrawal, or the economic pain of a price increase to reduce oil consumption, or the consequences of a pest invasion while natural predator populations are restoring themselves.

It’s worth going through the withdrawal to get back to an unaddicted state, but it is far preferable to avoid addiction in the first place.

If you are the one with an unsupportable dependency, build your system’s own capabilities back up before removing the intervention. Do it right away. The longer you wait, the harder the withdrawal process will be.

THE TRAP: SHIFTING THE BURDEN TO THE INTERVENOR

Shifting the burden, dependence, and addiction arise when a solution to a systemic problem reduces (or disguises) the symptoms, but does nothing to solve the underlying problem. Whether it is a substance that dulls one’s perception or a policy that hides the underlying trouble, the drug of choice interferes with the actions that could solve the real problem. If the intervention designed to correct the problem causes the self-maintaining capacity of the original system to atrophy or erode, then a destructive reinforcing feedback loop is set in motion. The system deteriorates; more and more of the solution is then required. The system will become more and more dependent on the intervention and less and less able to maintain its own desired state.

THE WAY OUT

Again, the best way out of this trap is to avoid getting in. Beware of symptom-relieving or signal-denying policies or practices that don’t really address the problem. Take the focus off short-term relief and put it on long-term restructuring.

Here are some examples, some serious, some less so, of rule beating: Departments of governments, universities, and corporations often engage in pointless spending at the end of the fiscal year just to get rid of money—because if they don’t spend their budget this year, they will be allocated less next year.

The U.S. Endangered Species Act restricts development wherever an endangered species has its habitat. Some landowners, on discovering that their property harbors an endangered species, purposely hunt or poison it, so the land can be developed.

Notice that rule beating produces the appearance of rules being followed. Drivers obey the speed limits, when they’re in the vicinity of a police car.

THE TRAP: RULE BEATING

Rules to govern a system can lead to rule beating—perverse behavior that gives the appearance of obeying the rules or achieving the goals, but that actually distorts the system.

THE WAY OUT

Design, or redesign, rules to release creativity not in the direction of beating the rules, but in the direction of achieving the purpose of the rules.

GNP grew in 1991 by 3.5 percent and in 1990 by 5.5 percent. Since the beginning of this fiscal year … the economy has been stagnant or contracting.… Now that the forecast … has been lowered sharply, pressure from politicians and business is likely to grow on the Finance Ministry to take stimulative measures.

If the desired system state is good education, measuring that goal by the amount of money spent per student will ensure money spent per student. If the quality of education is measured by performance on standardized tests, the system will produce performance on standardized tests. Whether either of these measures is correlated with good education is at least worth thinking about.

These examples confuse effort with result, one of the most common mistakes in designing systems around the wrong goal.

New light bulbs that give the same light with one-eighth the electricity and that last ten times as long make the GNP go down.

If you define the goal of a society as GNP, that society will do its best to produce GNP. It will not produce welfare, equity, justice, or efficiency unless you define a goal and regularly measure and report the state of welfare, equity, justice, or efficiency.

THE TRAP: SEEKING THE WRONG GOAL

System behavior is particularly sensitive to the goals of feedback loops. If the goals—the indicators of satisfaction of the rules—are defined inaccurately or incompletely, the system may obediently work to produce a result that is not really intended or wanted.

THE WAY OUT

Specify indicators and goals that reflect the real welfare of the system. Be especially careful not to confuse effort with result or you will end up with a system that is producing effort, not result.

INTERLUDE • The Goal of Sailboat Design Once upon a time, people raced sailboats not for millions of dollars or for national glory, but just for the fun of it. They raced the boats they already had for normal purposes, boats that were designed for fishing, or transporting goods, or sailing around on weekends. It quickly was observed that races are more interesting if the competitors are roughly equal in speed and maneuverability.

Soon boats were being designed not for normal sailing, but for winning races within the categories defined by the rules. They squeezed the last possible burst of speed out of a square inch of sail, or the lightest possible load out of a standard-sized rudder. These boats were strange-looking and strange-handling, not at all the sort of boat you would want to take out fishing or for a Sunday sail. As the races became more serious, the rules became stricter and the boat designs more bizarre.

So, how do we change the structure of systems to produce more of what we want and less of that which is undesirable?

It was in just such a moment of frustration that I proposed a list of places to intervene in a system during a meeting on the implications of global-trade regimes. I offer this list to you with much humility and wanting to leave room for its evolution. What bubbled up in me that day was distilled from decades of rigorous analysis of many different kinds of systems done by many smart people.

But, despite all the fireworks, and no matter which party is in charge, the money hole has been deepening for years now, just at different rates.

Whatever cap we put on campaign contributions, it doesn’t clean up politics. The Fed’s fiddling with the interest rate hasn’t made business cycles go away.

You can often stabilize a system by increasing the capacity of a buffer. But if a buffer is too big, the system gets inflexible. It reacts too slowly. And big buffers of some sorts, such as water reservoirs or inventories, cost a lot to build or maintain.

Businesses invented just-in-time inventories, because occasional vulnerability to fluctuations or screw-ups is cheaper (for them, anyway) than certain, constant inventory costs—and because small-to-vanishing inventories allow more flexible response to shifting demand.

The only way to fix a system that is laid out poorly is to rebuild it, if you can. Amory Lovins and his team at Rocky Mountain Institute have done wonders on energy conservation by simply straightening out bent pipes and enlarging ones that are too small. If we did similar energy retrofits on all the buildings in the United States, we could shut down many of our electric power plants.

Physical structure is crucial in a system, but is rarely a leverage point, because changing it is rarely quick or simple. The leverage point is in proper design in the first place. After the structure is built, the leverage is in understanding its limitations and bottlenecks, using it with maximum efficiency, and refraining from fluctuations or expansions that strain its capacity.

A complex system usually has numerous balancing feedback loops it can bring into play, so it can self-correct under different conditions and impacts. Some of those loops may be inactive much of the time—like the emergency cooling system in a nuclear power plant, or your ability to sweat or shiver to maintain your body temperature—but their presence is critical to the long-term welfare of the system. One of the big mistakes we make is to strip away these “emergency” response mechanisms because they aren’t often used and they appear to be costly.

Strengthening and clarifying market signals, such as full-cost accounting, don’t get far these days, because of the weakening of another set of balancing feedback loops—those of democracy. This great system was invented to put self-correcting feedback between the people and their government. The people, informed about what their elected representatives do, respond by voting those representatives in or out of office. The process depends on the free, full, unbiased flow of information back and forth between electorate and leaders. Billions of dollars are spent to limit and bias and dominate that flow of clear information. Give the people who want to distort market-price signals the power to influence government leaders, allow the distributors of information to be self-interested partners, and none of the necessary balancing feedbacks work well. Both market and democracy erode.

A thermostat system may work fine on a cold winter day—but open all the windows and its corrective power is no match for the temperature change imposed on the system. Democracy works better without the brainwashing power of centralized mass communications.
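The thermostat’s limit is easy to simulate. In this sketch (not from the book; parameters invented for illustration), the balancing loop holds the setpoint until heat loss exceeds the heater’s maximum output, at which point its corrective power is simply overwhelmed.

```python
# A minimal sketch (not from the book) of a balancing loop with limited
# corrective power: a thermostat-driven heater holds the room at the
# setpoint until heat loss (windows open) exceeds what the heater can
# supply. All parameters are invented for illustration.
def room_temp(leak_rate, hours=48, setpoint=20.0, outside=-5.0, max_heat=2.0):
    temp = setpoint
    for _ in range(hours):
        loss = leak_rate * (temp - outside)                     # heat flowing out
        heat = min(max_heat, max(0.0, setpoint - temp + loss))  # capped correction
        temp += heat - loss
    return round(temp, 1)

print(room_temp(leak_rate=0.05))  # windows shut: the loop holds ~20 C
print(room_temp(leak_rate=0.30))  # windows open: loss outruns the heater
```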

Reinforcing feedback loops are sources of growth, explosion, erosion, and collapse in systems. A system with an unchecked reinforcing loop ultimately will destroy itself.

The death rate will rise to equal the birth rate—or people will see the consequences of unchecked population growth and have fewer babies.

Population and economic growth rates in the World model are leverage points, because slowing them gives the many balancing loops, through technology and markets and other forms of adaptation (all of which have limits and delays), time to function. It’s the same as slowing the car when you’re driving too fast, rather than calling for more responsive brakes or technical advances in steering.
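The point about slowing growth can be seen in a toy model. The sketch below (not the World model; all constants invented) pits a reinforcing growth loop against a balancing loop that acts on delayed information: at a low growth rate the brake keeps pace, while at a high rate the stock overshoots before the signal arrives.

```python
# A minimal sketch (not from the book): a reinforcing growth loop pushing
# against a balancing loop that responds only after a delay. Slower growth
# gives the delayed adaptation time to act; faster growth overshoots.
def overshoot(growth_rate, steps=100, capacity=1000.0, delay=5):
    stock, history, peak = 10.0, [], 0.0
    for _ in range(steps):
        history.append(stock)
        perceived = history[max(0, len(history) - delay)]  # stale signal
        brake = max(0.0, 1 - perceived / capacity)         # delayed balancing loop
        stock += growth_rate * stock * brake               # reinforcing loop
        peak = max(peak, stock)
    return round(peak)

print(overshoot(0.05))  # slow growth: levels off near the capacity
print(overshoot(0.80))  # fast growth: overshoots before the brake engages
```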

Rich people collect interest; poor people pay it. Rich people pay accountants and lean on politicians to reduce their taxes; poor people can’t. Rich people give their kids inheritances and good educations. Antipoverty programs are weak balancing loops that try to counter these strong reinforcing ones. It would be much more effective to weaken the reinforcing loops. That’s what progressive income tax, inheritance tax, and universal high-quality public education programs are meant to do. If the wealthy can influence government to weaken, rather than strengthen, those measures, then the government itself shifts from a balancing structure to one that reinforces success to the successful!

Contrary to economic opinion, the price of fish doesn’t provide that feedback. As the fish get more scarce they become more expensive, and it becomes all the more profitable to go out and catch the last few. That’s a perverse feedback, a reinforcing loop that leads to collapse. It is not price information but population information that is needed.
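A toy fishery model makes the perversity visible. In this hedged sketch (constants invented for illustration), effort that chases price drives the stock toward collapse, while effort that tracks the population itself settles at a stable harvest.

```python
# A minimal sketch (not from the book) of the perverse price feedback in a
# fishery: if effort follows price, scarcity raises prices and attracts MORE
# fishing; if effort follows the fish population, harvest backs off as the
# stock declines. All constants are invented for illustration.
def fishery(signal, years=50):
    fish = 1000.0
    for _ in range(years):
        fish += 0.3 * fish * (1 - fish / 1000.0)   # natural regrowth
        if signal == "price":
            price = 1000.0 / max(fish, 1.0)        # scarcer fish, higher price
            effort = 0.2 * price                   # reinforcing: chase the price
        else:
            effort = 0.2 * (fish / 1000.0)         # balancing: track the stock
        fish -= min(fish, effort * fish)           # harvest
    return round(fish)

print(fishery("price"))       # price feedback drives the stock to collapse
print(fishery("population"))  # population feedback stabilizes it
```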

A system that can evolve can survive almost any change, by changing itself.

Further investigation of self-organizing systems reveals that the divine creator, if there is one, does not have to produce evolutionary miracles. He, she, or it just has to write marvelously clever rules for self-organization.

The intervention point here is obvious, but unpopular. Encouraging variability and experimentation and diversity means “losing control.” Let a thousand flowers bloom and anything could happen! Who wants that? Let’s play it safe and push this lever in the wrong direction by wiping out biological, cultural, social, and market diversity!

If the goal is to bring more and more of the world under the control of one particular central planning system (the empire of Genghis Khan, the Church, the People’s Republic of China, Wal-Mart, Disney), then everything further down the list, physical stocks and flows, feedback loops, information flows, even self-organizing behavior, will be twisted to conform to that goal.

So how do you change paradigms? Thomas Kuhn, who wrote the seminal book about the great paradigm shifts of science, has a lot to say about that.8 You keep pointing at the anomalies and failures in the old paradigm. You keep speaking and acting, loudly and with assurance, from the new one.

There is yet one leverage point that is even higher than changing a paradigm. That is to keep oneself unattached in the arena of paradigms, to stay flexible, to realize that no paradigm is “true,” that every one, including the one that sweetly shapes your own worldview, is a tremendously limited understanding of an immense and amazing universe that is far beyond human comprehension. It is to “get” at a gut level the paradigm that there are paradigms, and to see that that itself is a paradigm, and to regard that whole realization as devastatingly funny. It is to let go into not-knowing, into what the Buddhists call enlightenment.

Surely there is no power, no control, no understanding, not even a reason for being, much less acting, embodied in the notion that there is no certainty in any worldview. But, in fact, everyone who has managed to entertain that idea, for a moment or for a lifetime, has found it to be the basis for radical empowerment.

The higher the leverage point, the more the system will resist changing it—that’s why societies often rub out truly enlightened beings.

This mistake is likely because the mind-set of the industrial world assumes that there is a key to prediction and control.

Our first comeuppance came as we learned that it’s one thing to understand how to fix a system and quite another to wade in and fix it. We had many earnest discussions on the topic of “implementation,” by which we meant “how to get managers and mayors and agency heads to follow our advice.” The truth was, we didn’t even follow our advice. We gave learned lectures on the structure of addiction and could not give up coffee. We knew all about the dynamics of eroding goals and eroded our own jogging programs. We warned against the traps of escalation and shifting the burden and then created them in our own marriages.

We ran into another problem. Our systems insights helped us understand many things we hadn’t understood before, but they didn’t help us understand everything. In fact, they raised at least as many questions as they answered.

Systems thinkers are by no means the first or only people to ask questions like these. When we started asking them, we found whole disciplines, libraries, histories, asking the same questions, and to some extent offering answers.

Systems thinking makes clear even to the most committed technocrat that getting along in this world of complex systems requires more than technocracy.

Making a complex system do just what you want it to do is possible only temporarily, at best.

Systems thinking leads to another conclusion, however, waiting, shining, obvious, as soon as we stop being blinded by the illusion of control. It says that there is plenty to do, of a different sort of “doing.” The future can’t be predicted, but it can be envisioned and brought lovingly into being. Systems can’t be controlled, but they can be designed and redesigned.

We can’t surge forward with certainty into a world of no surprises, but we can expect surprises and learn from them and even profit from them.

We can’t control systems or figure them out. But we can dance with them! I already knew that, in a way. I had learned about dancing with great powers from whitewater kayaking, from gardening, from playing music, from skiing. All those endeavors require one to stay wide awake, pay close attention, participate flat out, and respond to feedback. It had never occurred to me that those same requirements might apply to intellectual work, to management, to government, to getting along with people.

Before you disturb the system in any way, watch how it behaves. If it’s a piece of music or a whitewater rapid or a fluctuation in a commodity price, study its beat. If it’s a social system, watch it work. Learn its history. Ask people who’ve been around a long time to tell you what has happened. If possible, find or make a time graph of actual data from the system—people’s memories are not always reliable when it comes to timing.

These are the take-home lessons, the concepts and practices that penetrate the discipline of systems so deeply that one begins, however imperfectly, to practice them not just in one’s profession, but in all of life.

Listen to any discussion, in your family or a committee meeting at work or among the pundits in the media, and watch people leap to solutions, usually solutions in “predict, control, or impose your will” mode, without having paid any attention to what the system is doing and why it’s doing it.

Expose Your Mental Models to the Light of Day

When we draw structural diagrams and then write equations, we are forced to make our assumptions visible and to express them with rigor. We have to put every one of our assumptions about the system out where others (and we ourselves) can see them. Our models have to be complete, and they have to add up, and they have to be consistent. Our assumptions can no longer slide around (mental models are very slippery), assuming one thing for purposes of one discussion and something else contradictory for the next. You don’t have to put forth your mental model with diagrams and equations, although doing so is a good practice. You can do it with words or lists or pictures or arrows showing what you think is connected to what. The more you do that, in any form, the clearer your thinking will become.

Remember, always, that everything you know, and everything everyone knows, is only a model.

If I could, I would add an eleventh commandment to the first ten: Thou shalt not distort, delay, or withhold information. You can drive a system crazy by muddying its information streams. You can make a system work better with surprising ease if you can give it more timely, more accurate, more complete information.
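How easily muddied information drives a system crazy can be shown with a single loop. In the sketch below (not from the book; gains and delays invented for illustration), a controller fed timely readings settles on its target, while the same controller fed stale readings over- and under-corrects into wild oscillation.

```python
# A minimal sketch (not from the book): the same balancing loop fed timely
# versus delayed information. Acting on a stale reading of the stock, the
# controller over- and under-corrects, and the system oscillates.
def track_target(info_delay, steps=40, target=100.0):
    stock, readings, worst = 0.0, [], 0.0
    for _ in range(steps):
        readings.append(stock)
        reported = readings[max(0, len(readings) - 1 - info_delay)]  # possibly stale
        stock += 0.6 * (target - reported)   # correct toward the target
        worst = max(worst, abs(stock - target))
    return round(worst)

print(track_target(info_delay=0))  # timely information: settles quickly
print(track_target(info_delay=4))  # delayed information: wild swings
```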

Our information streams are composed primarily of language. Our mental models are mostly verbal. Honoring information means above all avoiding language pollution—making the cleanest possible use we can of language. Second, it means expanding our language so we can talk about complexity.

Attention rests upon percentages, categories, abstract functions.… It is not language that the user will very likely be required to stand by or to act on, for it does not define any personal ground for standing or acting. Its only practical utility is to support with “expert opinion” a vast, impersonal technological action already begun.… It is a tyrannical language: tyrannese.5

Our culture, obsessed with numbers, has given us the idea that what we can measure is more important than what we can’t measure. Think about that for a minute. It means that we make quantity more important than quality.

Carter also was trying to deal with a flood of illegal immigrants from Mexico. He suggested that nothing could be done about that immigration as long as there was a great gap in opportunity and living standards between the United States and Mexico. Rather than spending money on border guards and barriers, he said, we should spend money helping to build the Mexican economy, and we should continue to do so until the immigration stopped.

The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn. The way you learn is by experiment—or, as Buckminster Fuller put it, by trial and error, error, error. In a world of complex systems, it is not appropriate to charge forward with rigid, undeviating directives.

Aldo Leopold did with his land ethic: “A thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community. It is wrong when it tends otherwise.”10

When you’re walking along a tricky, curving, unknown, surprising, obstacle-strewn path, you’d be a fool to keep your head down and look just at the next step in front of you. You’d be equally a fool just to peer far ahead and never notice what’s immediately under your feet. You need to be watching both the short and the long term—the whole system.

Don’t Erode the Goal of Goodness

The most damaging example of the systems archetype called “drift to low performance” is the process by which modern industrial culture has eroded the goal of morality. The workings of the trap have been classic, and awful to behold. Examples of bad human behavior are held up, magnified by the media, affirmed by the culture, as typical. This is just what you would expect. After all, we’re only human. The far more numerous examples of human goodness are barely noticed. They are “not news.” They are exceptions. Must have been a saint. Can’t expect everyone to behave like that.

And so expectations are lowered. The gap between desired behavior and actual behavior narrows. Fewer actions are taken to affirm and instill ideals. The public discourse is full of cynicism. Public leaders are visibly, unrepentantly amoral or immoral and are not held to account. Idealism is ridiculed. Statements of moral belief are suspect. It is much easier to talk about hate in public than to talk about love.

The literary critic and naturalist Joseph Wood Krutch put it this way: Thus though man has never before been so complacent about what he has, or so confident of his ability to do whatever he sets his mind upon, it is at the same time true that he never before accepted so low an estimate of what he is. That same scientific method which enabled him to create his wealth and to unleash the power he wields has, he believes, enabled biology and psychology to explain him away—or at least to explain away whatever used to seem unique or even in any way mysterious.… Truly he is, for all his wealth and power, poor in spirit.12

We know what to do about drift to low performance. Don’t weigh the bad news more heavily than the good. And keep standards absolute. Systems thinking can only tell us to do that. It can’t do it. We’re back to the gap between understanding and implementation. Systems thinking by itself cannot bridge that gap, but it can lead us to the edge of what analysis can do and then point beyond—to what can and must be done by the human spirit.