The Devil Is In The Details
Models: Good for homes, airplanes, cars and trains. Not so good for setting policy in complex systems.
“The sky is falling! The sky is falling!” Chicken Little cried as she ran hither and yon after being clocked on the head with an acorn. The allegory is not particularly clear on how the farmyard denizens reacted to Chicken Little’s screeching, but given the herd mentality of many farm animals, it’s likely that she influenced many of them to panic, too.
Influence … there’s a pertinent word. We seem to be overrun by so-called “influencers” these days—self-appointed “experts” who, for whatever reason, and with questionable credentials, noisily attempt to steer various populations to think in a certain way.
Some influencers can actually lay legitimate claim to expertise in their areas of influence. The whole COVID Panic of 2020-2022 flushed mavens of many stripes out of the woodwork and into the public domain. In retrospect, much (if not most) of what they inveighed about turned out to be a tempest in a teapot: overblown, inaccurate, false, or just self-serving. “But, hey! Just wait a while. It won’t be long before the ‘great unwashed’ forget how wrong we were!”
You can see this phenomenon in a variety of different situations. “Woke-ism” is one. Social influencing is a key element in what Mattias Desmet has called mass formation. [1] But some influencing efforts are much more insidious than those founded purely on opinions. Such efforts use computer models to support their arguments. After all, if a “dispassionate” computer says so, it must be right!
Models or Clones?
We’ve all seen physical models: trains, airplanes, cars, houses, and so forth. Models, as such, are not new. (See Leonardo da Vinci, 1452–1519.) But with the advent of computers, modeling took on a whole new meaning. Computer models were offered not just as a likely representation of reality, but as a “clone” of reality itself. And they have been accepted as such by people who fail to “look behind the curtain” at the details.
One of the most common uses for computer models has been to promote the hypothesis of anthropogenic global warming (AGW). Actually, to present it as fact. (After all, the computer says so.) As a result, the mass begins to form. Over the past 25-plus years, there has been no shortage of reports and media articles decrying the worldwide disaster looming as a result of AGW. In other words, man-made global warming.
I won’t pursue the man-made global warming trope here. It would take too long. Rather, I draw your attention to some of the reasons why reliance on computer models to drive policy decisions is a really bad idea.
Models are a popular and accepted way of representing systems, especially large or complex ones that might not be easy to examine directly. They can be used to conceptualize and analyze system structures and functions, or even to design new systems. There are graphical models, mathematical models, information technology models, and physical models. There are functional models, systems architectural models, business process models, and enterprise models. [2]
Perhaps the greatest contribution to the field of modeling has been the widespread use of computers. The entire information technology field has reached levels of development almost beyond belief because of the marriage of hardware and software with modeling.
One of the models most familiar to us is the one we see on television news weather reports. It’s a graphic display of a computer model that represents the weather patterns we’ve experienced during the current day, and the forecasting module of that same model projects what tomorrow’s weather will be like. As we all know, those weather models are very good for about 24 hours into the future, fair-to-middling for 48 to 72 hours, and pretty much pie-in-the-sky for a week or more out.
The National Aeronautics and Space Administration (NASA) uses models to predict the trajectories of space probes. Engineering companies use computer models to depict the behavior of aerodynamic vehicles and fluids, such as wave action. Even the very realistic computer games we see on the market these days are, in essence, models.
Models are used (and abused) for a variety of purposes. Business models, originally designed to enhance executive decision making, have morphed into “substitute decision makers.” It’s not uncommon to see system managers take a particular course of action “because the model tells them so.” Human judgment is frequently subordinated to a computer model. In some cases, this provides managers plausible deniability—“Well, the computer said that’s what we should do. I can’t understand why it didn’t work out...”
Effective organizations won’t buy that excuse. Ultimately, to paraphrase a sign on President Harry S Truman’s desk, the buck stops with the decision maker. Nevertheless, the question arises: Why are some models effective and others not so much? (I’m referring here to dynamic computer models, not static models such as organization or graphical charts.)
Computing Power
One limitation is computing power. Most of us sit in front of laptop or desktop computers with more computing power than a 1960s mainframe, yet the demands of complex systems modeling can outstrip even that hardware very quickly. The reason is an inconvenient limitation known as the Square Law of Computation. [3]
Let’s consider a simple example.
Sending satellites into orbit around the Earth is certainly a complicated endeavor, but it’s simple compared with the challenge of sending a probe from Earth to Mars, inserting it into the desired orbit, and landing it precisely at a desired location. The calculations used to model that interplanetary trajectory are an order of magnitude more complicated than those for an Earth orbit. Yet we have computer hardware and software capable of doing that job.
The universe is a complex arrangement of celestial bodies in various orbits, at various distances. One might say that the number of bodies whose mass affects orbital calculations approaches the infinite, especially if we include smaller bodies such as asteroids, comets, and satellites. But even if we could identify and quantify all such factors, folding them into the calculation to produce a precise orbit would impose unbelievable computational demands.
Consider first the equations needed to describe the most general system of only two objects. We must first describe how each object behaves by itself—the “isolated” behavior. We must also consider how the behavior of each body affects that of the other—the “interaction.” Finally, we must consider how things will behave if neither of the bodies is present—the “field” equation. All together, the most general two-body system requires four equations: two “isolated” equations, one “interaction” equation, and one “field” equation.
As the number of bodies increases, there remains but a single “field” equation, and only one “isolated” equation per body. The number of “interaction” equations, however, grows magnificently, with the result that for n bodies we would need 2^n relationships! To be more concrete, for 10 bodies we would need 2^10 = 1024 equations and for 100,000 bodies, about 10^30,000. By “ignoring small masses,” then, the number of equations is reduced from perhaps 10^30,000 to approximately 1000. At least it would now be possible to write down the equations, even if we still could not afford to solve them. [4]
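To make Weinberg’s arithmetic concrete, here is a minimal sketch (mine, not his) that counts the equations by treating each subset of bodies as one equation: the empty subset is the “field” equation, the single-body subsets are the “isolated” equations, and every subset of two or more bodies is an “interaction.”

def equation_count(n):
    """Count the equations for a 'most general' system of n bodies."""
    field = 1                    # the empty subset: no bodies present
    isolated = n                 # one equation per body by itself
    interaction = 2**n - n - 1   # every subset of two or more bodies
    return field, isolated, interaction, 2**n

for n in (2, 3, 10):
    field, isolated, interaction, total = equation_count(n)
    print(f"n={n}: {field} field + {isolated} isolated "
          f"+ {interaction} interaction = {total} total")

For n = 2 this gives 1 + 2 + 1 = 4 equations, and for n = 10 it gives 2^10 = 1024, exactly as in the quoted passage.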
The Square Law of Computation says that doubling the number of equations (factors) in a model requires four times the computational capacity to solve them. This would be a serious challenge even for a Cray El Dorado supercomputer, and most of us don’t have many of those lying around!
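To see the square law at work, here is a back-of-the-envelope sketch, assuming for illustration that solving m simultaneous equations costs on the order of m^2 operations (the units are arbitrary):

def relative_cost(m):
    """Computational effort under the Square Law: proportional to m^2."""
    return m**2

for m in (100, 200, 400, 800):
    print(f"{m:>4} equations -> relative cost {relative_cost(m):>8,}")

Doubling from 100 to 200 equations quadruples the work; going from 100 to 800 multiplies it by 64.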
If 10^30,000 equations are beyond the capability of even current high-powered computers to handle, what choice is there but to simplify the model?
In the planetary example above, what is such a reduction of bodies (and equations) but the mathematical equivalent of “rounding off the corners” of reality? In the case of orbital mechanics, this can be done with some degree of confidence that the results are not being compromised. But what happens if we attempt to model systems that are composed not of quantifiable orbital bodies, but rather of physical phenomena, energy exchange, or various behaviors?
In such a situation the equations vary dramatically not only in their character, but also in the breadth of their influence. Two current examples that have received significant attention lately (and probably will for the foreseeable future) are the world economy and climate change. And guess what? These two exceedingly complex systems are not mutually exclusive. Climate changes can be definitively demonstrated to have a significant impact on the world economy.
Notwithstanding opinions in some quarters to the contrary, predicting the future in either of those areas with any confidence at all is fraught with risk. Each of these systems alone is so complex that most modeling efforts fail abjectly. Consider how difficult it is to get an accurate weather forecast beyond a day or two into the future, and you have some sense of how difficult it might be to predict climate changes years or decades out.
The main role of models is not so much to explain and to predict (though ultimately these are the main functions of science) as to polarize thinking and to pose sharp questions... The “survival of the fittest” applies to models even more than it does to living creatures. They should not, however, be allowed to multiply indiscriminately without real necessity or real purpose. [5]
Two additional factors tend to undermine today’s over-dependence on models in decision making: linearity and cycles. These two pitfalls are related. Linearity assumes that a trend that has been measured and established in the past will continue into the indefinite future. Most such trends can be (and are) plotted as graphs, where the trend manifests as a line with some slope, either increasing or decreasing.
The problem is that when one reaches the end of the data set, there’s a great temptation merely to extend the most recent trend as a purely linear extension into the indefinite future. The most notorious of such extrapolations is the so-called “hockey stick” graph [6], first advanced in 1998 by Michael Mann and debunked in 2003.
By extending the most recent data trend a hundred years into the future, Mann predicted that global warming would raise the average temperature at the surface of the earth by several degrees, leading to dire effects such as major heat waves, heavy rainfalls, and rising sea levels due primarily to loss of land ice and increasing ocean temperatures. Nevertheless, despite the refutation of the hockey stick by qualified climatology experts, Mann’s prediction was embraced by many throughout the world who stood to gain by its broad-based acceptance.
The second pitfall is the failure to recognize the cyclic character of natural phenomena when constructing models in the first place. Nearly all natural phenomena, from geophysics (plate tectonics and volcanology) to climate to solar radiation, exhibit cycles, and failing to incorporate them very rapidly compromises the efficacy of predictive models. Assuming linearity and ignoring cycles thus become crucial defects in many modeling efforts, as the short sketch below illustrates.
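Here is a toy illustration of both pitfalls at once (not a climate model; the 100-year cycle period and the noise level are arbitrary choices of mine). Fit a straight line to the rising portion of a slow cycle, extrapolate, and watch the “trend” part company with reality:

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)                           # "years"
cycle = np.sin(2 * np.pi * t / 100)          # a slow 100-year cycle
data = cycle + rng.normal(0, 0.05, t.size)   # plus measurement noise

# Observe only the rising segment (years 0-25) and fit a trend line.
slope, intercept = np.polyfit(t[:25], data[:25], 1)
forecast = slope * t + intercept

for year in (25, 50, 100):
    print(f"year {year:>3}: linear forecast {forecast[year]:+.2f}, "
          f"actual cycle {cycle[year]:+.2f}")

The linear extrapolation keeps climbing indefinitely; the cycle turns over and comes back down.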
Conclusion
Ultimately, in the management of complex systems, success or failure depends in large part on the state of our knowledge, concerning both the system itself and the environment in which it exists and operates. This is particularly true in the use of computer models in decision making and policy formulation.
It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.
—Will Rogers
Endnotes
1. Desmet, Mattias. The Psychology of Totalitarianism. 2021.
2. Dettmer, H.W. Systems Thinking and Other Dangerous Habits. College Station, TX: Virtualbookworm.com Publishing, 2021.
3. Weinberg, Gerald M. An Introduction to General Systems Thinking. Leanpub, 2015, p. 15. http://leanpub.com/generalsystemsthinking
4. Ibid., p. 15.
5. Ibid., p. 15.
6. “Climate myths: The hockey stick graph has been proven wrong.” New Scientist. https://www.newscientist.com/article/dn11646-climate-myths-the-hockey-stick-graph-has-been-proven-wrong/
A final note: Engineering models are quantitative only up to their safety factors; most other models are essentially qualitative. Weather forecasters deal with this by running multiple simulations with slightly different initial settings, watching the runs diverge, and then taking some average of them as the forecast. The stochastic models used for the stock market are treated in much the same way.
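Below is a minimal sketch of that ensemble idea, using the classic Lorenz ’63 equations as a stand-in for a real weather model. The parameter values, step size, ensemble size, and perturbation size are standard textbook choices of mine, not anything from an operational forecast system:

import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One crude Euler step of the Lorenz '63 system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(1)
base = np.array([1.0, 1.0, 1.0])
members = [base + rng.normal(0, 1e-6, 3) for _ in range(20)]

for step in range(2001):
    if step % 500 == 0:
        spread = np.std([m[0] for m in members])
        print(f"t = {step * 0.01:5.1f}   ensemble spread in x: {spread:.2e}")
    members = [lorenz_step(m) for m in members]

The members start indistinguishably close and end up wildly different. The usable forecast is some average over the members, and the growing spread is a direct measure of how far ahead that average can be trusted.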