Making Decisions Under Conditions of Chaos

In 1961, Edward Lorenz discovered chaos in the clockwork universe.

Lorenz was running a computer simulation of the atmosphere to help forecast the weather. He wanted to rerun just part of one sequence, so rather than starting from the beginning, he restarted the run in the middle, using the program's output at its midpoint as a new starting point, expecting to get the same result in half the time.

Unexpectedly, even though the computer was following strict deterministic rules, the second run started with the same values as before but produced a completely different result. It was as if uncertainty and variability had somehow crept into the orderly, deterministic world of his computer program.

As it turned out, the numbers he entered from the middle of the run were not quite the same as the ones the program had produced the first time through, because of rounding and truncation errors. The resulting theory, chaos theory, describes how in certain kinds of systems, small changes in initial conditions can produce large changes later on. These are systems that change over time, each state leading to the next. This dependence on initial conditions has been immortalized as "the butterfly effect": a small change in initial conditions, the wind from a butterfly's wings in China, can have a large effect later on, rain in New York.

This sensitivity to the exact values of parameters in the present makes it very hard to know values in the future. As it has been formalized mathematically, chaos theory applies to "dynamical systems," which are simply systems that change over time according to some rule. The system starts in some initial state; for our purposes, think of it as now, time zero. Rules are applied and the system changes to its new state: wind blows, temperature changes, all based on rules applied to the initial state of the atmosphere. The rules are then applied to the new state to produce the next state, and so on.
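
This loop of applying a rule to a state can be sketched in a few lines of code. The example below is not Lorenz's atmospheric model; it's the logistic map, a standard textbook dynamical system. Like Lorenz's rerun, it shows how a tiny truncation in the starting value eventually produces a completely different trajectory.

```python
# The logistic map: a simple chaotic dynamical system.
# Each new state is computed from the previous one by a fixed rule.

def logistic_map(x0, steps, r=4.0):
    """Iterate x -> r*x*(1-x), returning the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

full = logistic_map(0.506127, 60)   # a "full precision" starting value
rounded = logistic_map(0.506, 60)   # the same value, truncated as in Lorenz's rerun

# Early in the run the two trajectories agree closely; after enough
# iterations the tiny truncation error grows until they bear no
# resemblance to each other.
print(abs(full[5] - rounded[5]))    # still small
print(abs(full[50] - rounded[50]))  # comparable to the whole range of the system
```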

Chaos may not have been the best word to describe this principle, though. To me it suggests complete unpredictability, and most real or mathematically interesting dynamical systems don't blow up like that into complete unpredictability. Take the weather: even if the butterfly, or small differences in ocean surface temperature, makes it impossible to know whether the temperature in Times Square in New York will be 34 degrees or 37 degrees on February 7th, either one is a likely value to be found in the system at that time in that place. Measuring a temperature of 95 degrees F in New York in February is impossible, or nearly so.

Dynamical systems like the weather often show recurrent behavior, returning to similar but non-identical states over and over as the rules are applied. Following the values over time describes a path that wanders over values, returning after some time to the same neighborhood. Not exactly the same place because it started in a slightly different place than the last time around, but in the same neighborhood. Just unpredictably in the same neighborhood.

This returns us to the distinction between knowing the future and predicting it. The future state of a chaotic system can't be known, because small changes in initial conditions result in large changes in outcome. But those large changes recur within a predictable range of values. A chaotic system can be predicted even though its future state can't be known. When it comes to the Times Square temperature, climate data tells us what range the chaotic values move within from one seasonal cycle to the next. In drug development, the chaotic system of taking a pill every day and measuring drug levels in the blood allows prediction of the range of likely values, but because initial conditions change and cause large, unpredictable effects, one can't know in advance whether today's measurement will be high or low. It's almost never the average; it varies around the average.
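
The "predictable range, unknowable value" idea can be made concrete with a toy chaotic system, the logistic map (an illustrative stand-in, not a weather or drug-level model): two runs from slightly different starting points disagree point by point, yet their long-run statistics, the "climate" of the system, agree closely.

```python
# Two chaotic trajectories from nearly identical starting points.

def logistic_trajectory(x0, steps, r=4.0):
    xs = []
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

run_a = logistic_trajectory(0.2000, 10_000)
run_b = logistic_trajectory(0.2001, 10_000)

mean_a = sum(run_a) / len(run_a)
mean_b = sum(run_b) / len(run_b)

# Individual values diverge completely, but the long-run averages agree:
# like daily temperature versus climate, or a single drug level versus
# the predictable range it varies within.
print(round(mean_a, 3), round(mean_b, 3))
```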

It's worth seeing how important prediction is for making decisions when the future is unknown. Because the uncertain future is orderly, we actually know a lot about it; we just don't know it in all of its particulars. We must make decisions knowing what range of possibilities the future can assume. Chaos theory suggests that this kind of uncertainty is in the very nature of the world because of the behavior of dynamical systems, wherever rules dictate how a system changes over time.

Thinking Without Knowing

We have less free will than we think. Our thoughts are severely constrained by both brain mechanisms and the metaphors the brain has absorbed from environmental input: physics, culture, experience.

We have more free will than we think because this remarkable consciousness we’re all endowed with can act back on the brain and change it. We can also hack the brain in new and interesting ways to create.

John Cleese on Creativity:

John Cleese on how to put your mind to work via John Paul Caponigro

Cleese points out two important non-conscious phenomena. First there's the "sleep on it" effect: upon taking something up again the next morning, the solution often appears obvious. The answer presents itself without thinking; it's just there. Of course most thought is "just there," but the previous effort makes it seem remarkable that you can fail to think of something one day but succeed the next. The brain is an odd muscle indeed. Imagine if you failed to hold something one day but could grasp it the next morning.

And his point that you need to put in the work of thinking the night before shouldn't be lost. He calls it "priming the pump," but I think of it as one long thinking process. It's useful to interrupt the conscious work to free the brain to produce a solution. This is brain work without knowing.

His second point, about recreation from memory, is related. We now know that remembering is a creative act, not a playback of a brain tape of events. It's easy to remember something into a different form than the original. Creatively, the new version may be better.

Cleese tells a story about losing and recreating a script. The second version, the one from memory, was funnier and crisper. I use this trick all the time in creating presentations or writing. I step away from the words or slide and get myself to say in words what it is I’m trying to convey. I’ve realized also that if I can’t say it easily, my problem is that I don’t know what to say, not that I don’t know how to say it.

It’s always fun to turn to someone during a prep meeting who’s struggling to create a slide. They’re lost in a verbal maze trying to find the right words as if it were some magical incantation that will unlock the meaning. I’ll say, “Tell me what you’re trying to convey here.” Once they’ve told me, I say, “Well, write that down” and we get a clear and crisp rendering of the thought in words.

William James and John Maynard Keynes on Deciding Better

I'm indebted to Glen Alleman for pointing out that John Maynard Keynes wrote a book on probability, A Treatise on Probability, which starts with the Bayesian view of probability as belief and moves on to explain how frequentist concepts fit into the Bayesian worldview.

Herding Cats: Books for Project Managers: "Each paragraph in the book provides insight like this. Two paragraphs later is the core of the current 'black swan' of probabilistic management. There is a distinction between part of our rational belief which is certain and that part which is only probable. The key here is there are degrees of rational belief and if we fail to understand, and more important, fail to 'plan' in the presence of these degrees, then we are taking on more risk and not knowing it. This is a core issue in the financial crisis and managing projects in the presence of uncertainty."

This relationship between belief and probability is an important basis for decision making, forming the bedrock of what I see as the American philosophy of Pragmatism. It's a bottom-up point of view rooted in experience and practicality. William James, who codified this point of view as "Pragmatism," famously said, "Truth is what works."

I've been exploring a bit of the Keynes book already, and I know that Keynes was influenced by his Cambridge associations with G.E. Moore, who took a similarly bottom-up, individual-belief-based approach, most famously in Principia Ethica. So perhaps this Cambridge-Bloomsbury connection makes Pragmatism really an Anglo-American philosophy.

There was a time when our search for truth as a culture led us into periods of severe doubt and Continental philosophies like Existentialism and Deconstructionism. These were times of great shifts in values and cultural upheaval. Arguments from first principles were swept aside by feelings of being without roots in a world without intrinsic meaning.

For James, Moore and Keynes, there’s a grounding in the pragmatic idea that there is a real world out there that we can know and predict however imperfectly. Decisions based on our beliefs have consequences so we had better work on refining those beliefs and improve our decision making.

Perhaps we’re ready for a return to a more practical Anglo-American philosophy based on experience, culture, belief and the scientific approach to finding meaning in the world. At least I know I am.

Steven Strogatz: Sync

Steven Strogatz has been one of the leading figures in the mathematics of biological systems. While synchronization of independent elements is the thread that brings his book Sync together, it's all in the context of the new systems view of biology.

His process is to frame a question about complex systems and then look for answers by running computer simulations of the process. When order emerges in the simulation, he and colleagues try to discern the mathematics underlying the order. These connections are so complicated that one can't predict their behavior by inspection and reason; in general, it's easier to recreate aspects of them in a simple model in order to understand how they behave.

This is a very basic demonstration of the emergent behavior of a system. The behavior of the larger system can't be predicted by understanding the behavior of the components and their interconnections. Even more interesting is how small changes to the individual units or their connection strength can radically change the emergent behavior of the system. Once you have a simple working model, deeper understanding of the possible states of the system can be gained by looking at behavior over a wide range of assumptions and conditions. Here Strogatz is interested in how synchronous activity emerges in networks.
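
As a concrete sketch of this kind of simulation, here is a minimal version of the Kuramoto model of coupled oscillators, the system most closely associated with Strogatz's work on sync. The parameter values (100 oscillators, Gaussian natural frequencies, the two coupling strengths) are illustrative assumptions, not numbers taken from the book: below a critical coupling the population stays incoherent, and above it synchrony emerges.

```python
import math
import random

def simulate_kuramoto(n=100, coupling=2.0, dt=0.01, steps=5000, seed=1):
    """Mean-field Kuramoto model; returns the final order parameter r
    (0 = complete disorder, 1 = perfect synchrony)."""
    rng = random.Random(seed)
    freqs = [rng.gauss(0.0, 0.5) for _ in range(n)]        # natural frequencies
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        # Each oscillator is pulled toward the average phase of the crowd.
        sx = sum(math.cos(p) for p in phases) / n
        sy = sum(math.sin(p) for p in phases) / n
        r = math.hypot(sx, sy)
        psi = math.atan2(sy, sx)
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    sx = sum(math.cos(p) for p in phases) / n
    sy = sum(math.sin(p) for p in phases) / n
    return math.hypot(sx, sy)

print(simulate_kuramoto(coupling=0.1))  # weak coupling: stays incoherent
print(simulate_kuramoto(coupling=2.0))  # strong coupling: synchrony emerges
```

The emergence is the point: nothing in a single oscillator's rule mentions synchrony, yet sweeping the coupling strength reveals a sharp transition in the behavior of the whole system.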

These simplified systems aren't real, but they are useful tools. Just as a map is not the terrain, a system model is not the system itself. The model is useful only if it predicts the behavior of the real system, just as a map is only useful if it shows the proper route through the landscape. This is the iterative nature of science and a reflection of William James' Pragmatism: truth is what works.

As a scientist, I gained a bit of insight into why it’s easy to manipulate the state of some biological systems. I spent many years in the lab studying mechanisms of cell death. I could never understand why so many investigators were able to find so many different ways to halt the process once it had been set in motion by an experimental perturbation. Surely all of these processes couldn’t be independently responsible for the cell death? If they were independent, then blocking just one wouldn’t help cells survive. Other, unaffected processes would carry out the deed.

The many interactions within a cell place some events at nodes that have broader effects. The other day, an accident on a highway here in Baltimore managed to tie up much of the traffic north of the city. There were cascading events as traffic was shunted first here and then there by blockage and congestion in one key pathway after another. Similarly, cell processes or cell state can be shifted from one state to another by strategic triggers.

Tools like network maps and simulations promise to provide a means for understanding complexity that won't yield to simple cause and effect diagrams. Strogatz ends the book with some contemplation of how consciousness arises from the network of neural connections. It may be that synchronization across the cerebral cortex is responsible for the binding of shape and color of visual objects or the binding of object and word.

Of course, it's this idea of mind and meaning as the emergent effect of complex systems that has interested me for some time now. As a neurobiologist, calling meaning an emergent quality of brain is a neat way to bridge the material with the immaterial worlds.

Embracing Uncertainty

How do you abandon the illusion of control and embrace uncertainty?

We could be less anxious and less stressed if we gave up trying to control the uncertain future. Living in the now, choosing actions that increase the chances of a favorable state of affairs can be challenging and stimulating.

The uncertainty we live with every moment is hard to grasp. In fact, we tend to simplify the world down to something less complex and more manageable. But the simplicity and gain in control are illusions. So we stress about what will really happen.

Uncertainty is lack of predictability. Predictable situations are comfortable because we know what's almost certainly going to happen. Surprises are just that: unexpected and unanticipated.

We know we can't predict the future in part because our knowledge of the world is generally incomplete. Since we don't know what's around the corner or what other people are thinking, that part of the world can't be predicted.

Even with perfect knowledge, the world itself would remain unpredictable. The result of any particular choice is uncertain because it can result in a wide range of outcomes. We never know whether we made the right decision, even in retrospect, because we only know what happened after that decision was made. Speculating about what a different decision would have brought about is not very useful, because again the outcome can only be guessed at and never known. There are too many complex interactions and unintended consequences. Once the choice is made, the result is largely out of our control.

Interestingly, giving up the illusion of control is a core belief in religious and spiritual movements. Recognizing a greater power like God or Fate that controls the world is a great aid in abandoning illusions of determining outcomes. I don't think spirituality is necessary to adopt an attitude of embracing uncertainty; uncertainty arises naturally in the world. But the humility gained by recognizing a power greater than ourselves seems to help in this embrace of uncertainty. Adopting this mental stance in making decisions provides reward in the release from the fear of unanticipated outcomes. It also provides a space for God or Fate to act even in a seeming clockwork world of cause and effect.

After all, the only truth certain is that the world is uncertain and truth can’t be fully known.

The Devil’s Triangle: Fear, Uncertainty, Control

The great goal of Deciding Better is to escape the trap of fear, uncertainty and control.

Decision making is hard when the outcome is uncertain. What’s so bad about a little doubt? Joined to uncertainty are two interacting factors: Fear and Control.

Uncertainty provokes anxiety. When we don’t know how things will turn out, the emotion of fear comes into play. It gets us ready for fight or flight. This is anxiety, a neurochemically induced cognitive state. It’s a deep seated brain mechanism with great adaptive utility. A little fear can be a very good thing at the right time.

The problem is that we experience this fear constantly. Then we call it stress and anxiety. Our big brains help us see how uncertain the world really is. I talked about it the other day in a discussion of what makes decisions hard. Decisions aren’t only hard, they provoke fear because of the associated uncertainty.

So what's scary about uncertainty? Ultimately, it's having to face a loss of control. When we're masters of our environment and in control, we know what to expect. Lose that certainty and we lose control, causing anxiety and stress.

Deciding better must include embracing uncertainty without engaging the other two sides of this triangle of fear and control. At least not any more than necessary.

The more we understand about the world and its complexity, the more profound our appreciation of how unpredictable the world really is. We are never really in control of outcomes and we are truly powerless to bend the world to our will. We can powerfully influence the world through our actions, but we can’t control anything other than how we choose to act in the moment.

I believe this is at the core of why what Stephen Covey called the world’s “wisdom literature” emphasizes humility and releasing the illusion that we’re in control of the future. At the same time, Covey started with his First Habit, “Be Proactive” as a step in controlling ourselves rather than controlling the world.

Emergent Behavior of Links and Clicks

One of the most interesting chapters in Mark Bernstein’s The Tinderbox Way is on links- both in Tinderbox and on the Internet. Mark provides a personal and historical overview of the approaches and attitudes toward linking beginning with the early days of hypertext and leading up to our current environment.

Linking evolved, guided by the users of the net, in a way suitable for navigation within and between sites for readers. It's now adapted and grown to enable search advertising and social networking systems.

What’s interesting to me is how difficult it is to show the utility of linking in a Tinderbox document. One ends up pretty quickly with a spaghetti plot of links between boxes. Mark provides some illustrations that look interesting but don’t seem to mean much at all as a map. There’s actually a site that collects these pretty network pictures: Visual Complexity.

As I read Mark’s discussion, I was struck by the similarity between these links and the interconnections of metabolic pathways within a cell or the interconnections between neurons. Mapped, we see spaghetti. But there is an emergent behavior from the network that only arises from the functioning of those interactions. On the web perhaps these are communities of shared interest.

We need a large amount of computational power to visualize the emergent network. It's easier if it's geographical:

Via GigaOm:

If there’s one thing you get when you have close to 600 million users the way Facebook does, it’s a lot of data about how they are all connected — and when you plot those inter-relationships based on location, as one of the company’s engineers found, you get a world map made up of social connections.

We’re used to seeing maps as geographical metaphor. Maps of meaning are not well developed as mental models. I submit that Google’s algorithms for advertising and ranking are providing semantic functions that are such maps. The actual movement of people through the network as measured by following user clicks across sites is another even more important map. The data is massive and difficult to display simply, but the emergent behavior can be detected and used.

Revisiting Searle’s Chinese Room

This is a retraction. I no longer think that John Searle’s Chinese Room is trivial. It is a powerful demonstration of the failure of materialism to provide an adequate explanation for consciousness.

The Chinese Room is Searle’s most famous argument against materialism. He asks us to imagine that we are in a sealed room, communicating by text with the outside. We have a manual that allows us to respond to questions in Chinese even though we have no knowledge of the language. Or if asked in English we respond in the usual way.

Thus, we'd be answering English or Chinese appropriately. The outside observer can't distinguish how we're coming up with replies. But inside, the two are totally different. One is done mechanically, by rote; the other is done with awareness and thought. This is analogous to the observation of a person, obviously. Is there a mind responding or just mechanical response without consciousness?

Materialism says that only the physical exists. But such a view cannot account for the difference between responses by someone who understands and mechanical responses. This seemingly most scientific and rational approach fails to admit the simple fact: we know that there is such a thing as awareness and consciousness because we experience it constantly. Any theory of mind that fails to account for it is incomplete.

Dualism accounts for consciousness, but in its separation of mind from material, it loses all of its explanatory power and becomes unacceptable.

Here’s what I wrote in the comments to Aaron Swartz’s description of the argument:

Searle’s Chinese Room experiment is a trivial misdirection. He focuses on the man in the room matching symbols rather than the creator of the semantic and syntactic translation rules. That designer was conscious. The man in the room is working unconsciously. When I speak my mouth and vocal cords do the translation from nerve impulses to sound patterns but it is entirely unconscious. You have to follow the trail back into the brain where you get lost because consciousness is an emergent property of the neural networks, not a property of the machinery at all. posted by James Vornov on March 15, 2007 #

I don't actually remember whether I wrote that before or after I read Searle's The Rediscovery of the Mind, but at some point I did come to agree with him. The simple way out of the problem is to admit that mind does indeed exist. As evidenced by my comment, I had already decided that mind was real and emergent from brain activity. Interestingly, using different terminology, I think Searle points out the same irreducibility in the later book, The Mind.

Why Enrichment Designs Don’t Work in Clinical Trials

Last week I was discussing a clinical trial design with colleagues. This particular trial used an enrichment design. A few years ago I did some simulation work to show that you can’t pick patients to enroll in a clinical trial in order to improve the results.

People are probabilistic too.

The idea of an enrichment design is to winnow the overall patient group down to those individuals who are likely to respond to therapy. One way is to give all of the candidates a placebo and eliminate placebo responders. Another strategy is to give a test dose of drug and keep only those who respond. Either way, the patients that pass the screening test go on to a double-blind test of active drug versus placebo.

Sounds like a great idea, but it doesn't really work most of the time in practice. While this idea of screening out patients sounds sensible, it turns out mostly to exclude patients who are varying in their complaints over time. You can't really tell during the screening test who are going to be better patients, because most patients look different at one time point compared to any other.

The mistake that we make is in thinking that people can be categorized by simple inspection. We think of patients as responders or non-responders, an intrinsic characteristic they have or don’t have. Trying to screen out patients we don’t want falls into the trap of thinking that a single set of tests can successfully discriminate between classes.

The way I think of it is that we need relatively large clinical trials to prove the value of a modestly effective drug. So it seems odd to think that one could easily categorize individual patients with a single test. You can see this by looking at how well a test dose of drug, used to look for drug responders, would be able to enrich a patient population. Variability over time makes this impossible.

Let's walk through an example: an imaginary trial of a drug to treat migraine attacks.

Let's say we know the truth, and this candidate is in reality a pretty good treatment for a migraine attack. But the patient varies in headache severity and responsiveness to treatment.

Some headaches are mild and will resolve without treatment. That mild attack will act no differently whether the active drug or placebo was administered. Some headaches are very bad and even a really effective drug might not touch that kind of headache. So again the attack will be the same whether placebo or treatment is given.

And what about the headaches that are in between and could respond? Well, if the drug worked half the time, then out of every two of those attacks, the active drug would show an effect where the placebo did not. The other half of the time, it would look just like placebo again.

Add up these cases; there are four of them. For only one attack did the active drug work where the placebo would fail: one out of four, a 25% overall response rate. All just because, in the same patient, the headache and its response to drug change. So if I used a test treatment to see if I had a responder, I would eliminate half of the responders, because they either had the headache too severe to respond or the one that happened not to respond that time.

Of course you'd eliminate some of the non-responders. But we know that even non-responders may have one in four headaches that are mild enough not to need treatment anyway. So a test dose eliminates 75% of the non-responders, which is better than the 50% of responders that were eliminated. You've done better. How much better depends on the ratio of responders to non-responders in the population, a ratio that is completely unknown.

What's nice is that while you can see the logic by reading the story I've told, a mental simulation, one can also create an explicit mathematical model of the clinical trial and simulate running it hundreds of times. It turns out that there are very few conditions where this kind of enrichment really works. It's simpler, and just as informative, to see whether or not the drug is effective in the overall population without trying to prejudge who is a responder with a test dose.
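
The story above can be turned into an explicit simulation. Here is a toy Monte Carlo sketch using the hypothetical numbers from this example (equal chances of a mild attack, a too-severe attack, and two moderate attacks, with an arbitrary assumed 50% responder fraction), not a model of any real trial.

```python
import random

rng = random.Random(42)

def headache_responds(is_drug_responder):
    """Did this single treated attack resolve?"""
    kind = rng.choice(["mild", "severe", "moderate", "moderate"])
    if kind == "mild":
        return True        # would have resolved even on placebo
    if kind == "severe":
        return False       # nothing touches this one
    # moderate attack: the drug aborts half of these, but only in true responders
    return is_drug_responder and rng.random() < 0.5

def screen_and_count(n_patients=100_000, frac_responders=0.5):
    """Give everyone a test dose; keep only those whose attack resolved."""
    kept_responders = kept_non = 0
    for _ in range(n_patients):
        is_resp = rng.random() < frac_responders
        if headache_responds(is_resp):
            if is_resp:
                kept_responders += 1
            else:
                kept_non += 1
    return kept_responders, kept_non

kept_r, kept_n = screen_and_count()
print(kept_r, kept_n)
```

With these made-up numbers, the test dose keeps only about half of the true responders (the rest happened to draw a mild, severe, or non-responding attack that day) while still letting through the non-responders who drew a mild attack, so the screened group is enriched but far from pure, exactly as the argument above predicts.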

The irony? This is exactly the opposite of clinical practice. In the real clinic, each patient is their own individual clinical trial, an "N of 1" as we say. N is the symbol for the number in a population; an individual is a population of one. We treat the patient and over time judge whether or not they respond in a personal clinical trial, not to see whether the drug works but whether the patient is a responder. If they don't respond, therapy is adjusted or changed. But in our migraine example, multiple headaches of various intensity would have to be treated to see the benefit.

Perhaps variability across a population is easily grasped: people are tall or short, have dark or light hair. Variability within an individual over time is more subtle but just as important.

Trust is Simplifying

The outrage directed toward the TSA reflects a breakdown in trust.

With terrorists trying to bring down planes, we don't trust our fellow passengers. Every fresh attempt, even when not successful, lowers that trust even further. The government and its TSA become the vehicle to demonstrate that lack of trust. As trust declines, surveillance increases. In a decade it's gone from identity and magnetometer checks to direct body searches, either by technology or direct physical contact.

As discussed in the NYT today, there's also a lack of trust between the government and the citizenry. We feel angry that government is being so intrusive, and body searches seem to cross a personal limit for us. And the TSA doesn't trust us to just go along and let them do their job.

The loss of trust in air travel creates hassle and uncertainty. Everything being carried onto a plane must be checked. Every person must be checked. No one is trusted in this system. Calls for more targeted surveillance are really calls for more trust of at least some individuals. After all, I know they can trust me; it's those suspicious-looking young men I'm worried about. That would remove lots of hassle. Actually, all of my hassle, if they would trust me somehow.

Trust is a great simplifying principle. I trust my bank to keep my accounts private and secure. I trust other drivers on the road to stay in their lanes. As trust goes down, complexity goes way up. I have to worry about more and more because so much more could go wrong in so many unexpected ways.

I was introduced to the importance of trust in Francis Fukuyama's book "Trust." In it he looks across different cultures and describes the structure of trust in each one and how it affects politics, economics and quality of life. Not surprisingly, the higher the level of trust, the better off people are. One of his theses is that the U.S., with its frontier-driven communitarianism, is one of the highest-trust societies in the world.

Most simply, trust transforms an uncertain, potentially hazardous environment into a safe, reliable, socially driven model. It's such a powerful simplifying principle that the desire to cooperate in a fair way is a deeply felt human quality, wired into our brains it seems.

Since I'm currently exploring ideas about extended cognition, let's turn the view 180 degrees. Usually we think of trusting in the external environment, looking for predictability. But I think there's an important aspect of self-trust that contributes to simplicity: if I can rely on myself to remember how to do something complex, I approach it with confidence.

That sense of mastery and self-confidence dispels fear just as trust in the world does.