Perceptual Choices

Deciding Without Thinking

The original premise of this site is that it is possible to actually make better decisions. That’s why I called it “On Deciding . . . Better” to begin with. After all, we are the agents responsible for our actions, so who else is responsible for making better choices? I’ve written about the false promises made by Decision Theory, which asserts that if choices are made more rationally, decisions will be more successful. The problem isn’t the mathematical basis of Decision Theory; it’s implementing it when people actually need to make decisions in the real world. There are valuable habits and tools in the Decision Theory literature, but it’s clear to me that when our brains are engaged in rapidly making decisions, we generally are not aware of the decision process. If we’re not aware of the decisions, then those deliberative executive function mechanisms can’t be brought online as they are being made.

Perceptual Decision Making

This is the Kanizsa Triangle, created by the Italian psychologist Gaetano Kanizsa in 1955. I like it as an example because it is so simple and yet so stable. The brain creates the contours of the second white triangle. When I first heard this kind of illusion being called a “perceptual choice”, I rejected the notion. After all, a choice is an act of will, of mental agency.

Yet calling this “a perceptual choice” makes a lot of sense from a brain mechanism point of view. A simple set of shapes and lines is projected on the retina and relayed back through the visual cortex and the surrounding cortical areas. That part of the brain makes an unshakable choice to interpret the center of the figure as containing a triangle. Similarly, when I see the face of my son, a different area of cortex decides who it is. Further, circuits are activated with all kinds of associated information, some brought up into consciousness, others not, but ready to be used when needed.

Are We Responsible for Perceptual Choice?

If perceptual choice is like most other choices, like choosing a breakfast cereal or a spouse, it seems I’m advocating abandoning a lot of perceived responsibility for one’s own actions. It seems that we walk through the world mostly unaware of how perceptions are constructed and don’t have access to why we act the way we do. Actions are based on perceptions that were chosen without awareness in the first place. And it goes without saying that we have no responsibility for the perceptions and actions of everyone around us. Their brains, wired mostly in the same way as ours, chose how to perceive our words and our acts.

It seems to me that to make better decisions there have to be rather deep changes in those perceptual brain processes. Any decision tools have to become deeply embedded in how our brains work; any rules to guide how we perceive, choose or act must lie as deep habits in those automatically functioning circuits of the brain. Some, like the Kanizsa Triangle, are built into the very structure of the brain and can’t be changed. Others are strongly influenced by experience and deepened by practice.

Lessons in Science and Culture

John Nernst at Everything Studies provides a long and thoughtful analysis of a discussion of a dangerous idea: A Deep Dive into the Harris-Klein Controversy. I think it’s worth a comment here as well.

As a neuroscientist and reader of all of these public personalities (Charles Murray, Sam Harris and Ezra Klein), I’ve followed the discussion of race and IQ over the years. We know that intelligence, like many other traits such as height or cardiovascular risk, is in part inherited and strongly influenced by environment. Professionally, I’m interested in the heritability of complex traits like psychiatric disorders and neurodegenerative diseases. The measured differences in IQ between groups fall squarely in this category of heritable traits where an effect can be measured, but the individual genes responsible have remained elusive.

I’m going to side with Ezra Klein, who in essence argues that there are scientific subjects where it is a social good to politely avoid discussion. One can learn about human population genetics, even with regard to cognitive neuroscience, without entering an arena where the science is used to perpetuate racial stereotypes and promote racist agendas of prejudice. The data has a social context that cannot be ignored.

Sam Harris, on the other side of the argument, has taken on the mantle of defender of free scientific discourse. He takes the position that no legitimate scientific subject should be off limits for discussion based on social objections. His view seems to be that there is no negative value to free and open discussion of data. He was upset, as was I, at Murray’s treatment at Middlebury College and invited Murray onto his podcast. Sam was said by some to be promoting a racist agenda by promoting discussion of the heritability of IQ in the context of race.

In fact, Ezra Klein joined the conversation after his website Vox published a critique of the podcast portraying Harris as falling for Murray’s pseudoscience. But that’s nothing new really; Murray surfaces and his discussion of differences in IQ between populations is denounced.

As one who knows the science and has looked at the data, it bothers me, as it bothers Harris, that the data itself is attacked. Even if Murray’s reason for looking at group differences is to further his social agenda, the data on group differences is not really surprising. Group differences for lots of complex inherited traits are to be expected, so why would intelligence be any different than height? And the genes responsible for complex traits are being explored, whether it’s height, body mass index or risk for neurodegenerative disease. Blue eyes or red hair, we have access to genomic and phenotypic data that is being analyzed. The question is whether looking at racial differences in IQ is itself racist.

I’ve surprised myself by siding with Klein in this case. His explanation of the background is here and his discussion after his conversation directly with Harris is here. Klein convincingly makes the argument that social context cannot be ignored in favor of some rationalist ideal of scientific discourse. Because we’re human, we bring our cultural suppositions to every discussion, every framing of every problem. Culture is fundamental to perception, so while data is indifferent to our thought, the interpretation of data can never be free of perceptual bias. Race, like every category we create with language, is a cultural construct. It happens to be loaded with evil, destructive context and thus is best avoided if possible, unless we’re discussing the legacy of slavery in the United States, which I think is Klein’s ultimate point.

Since these discussions are so loaded with historical and social baggage, they inevitably become social conversations, not scientific ones. Constructive social conversations are useful. Pointless defense of data is not useful; we should be talking about what can be done to overcome those social evils. No matter how much Sam would like us to be rational and data driven, people don’t operate that way. I see this flaw, incidentally, in his struggle with how to formulate his ethics. He argues against the simple truth that humans are born with basic ethics wired in, just as basic language ability is wired in. We then get a cultural overlay on that receptive wiring that dictates much of how we perceive the world.

Way back when, almost 20 years ago, I named this blog “On Deciding . . . Better” based on my belief that deciding better was possible, but not easy. In the 20 years that have passed I’ve learned just how hard it is to improve and how much real work it takes. Work by us as individuals and work by us in groups and as societies.

Lack of Authority

It's hard to believe how little we trust what we read in this age of the internet.

The US election of 2016 demonstrated just how profoundly our relationship to authority has changed. We're exposed to conflicting opinions from the online media. We hear facts followed by denial and statement of the opposite as true. Everyone lies, apparently. There's no way to make sense of this online world in the way one makes sense of a tree or a dog or a computer.

Relying perhaps on confirmation bias, we are forced to interpret events without resort to reasoned argument or weight of evidence. We have to fall back on what we already believe. We have to pick a side. Faced with a deafening roar of comments on Twitter, cable news and news websites, we shape what we hear to create a stable, consistent worldview.

Welcome to a world of narrative where the truth is simply the story we believe. And pragmatically, it seems not to matter much. Believe what you will, since we mostly wield no power in the world.

So what am I to make of this nice new MacBook Pro that I'm using right now? Is it really evidence of Apple's incompetence or their desire to marginalize or milk the Mac during its dying days? Again, believe what you will, but I've got some work to do.

Mind In The Cloud

“Technology changes ‘how’ not ‘what.’ Expands in space, compresses in time. The results are sometimes breathtaking.”

Notebooks as Extended Mind

In 1998, Andy Clark and David Chalmers made the radical suggestion that the mind might not stop at the borders of the brain. In their paper, The Extended Mind, they suggested that the activity of the brain that we experience as consciousness is dependent not only on brain but also on input from the rest of the world. Clark’s later book, Supersizing the Mind clarifies and expands on the idea. Taken to its logical conclusion, this extended mind hypothesis locates mind in the interactions between the brain and the external world. The physical basis of consciousness includes the brain, the body and nearby office products.

I mean to say that your mind is, in part, in your notebook. In the original paper, Clark and Chalmers use the hypothetical case of Otto. Otto has Alzheimer’s Disease and compensates for his memory deficit by carrying a notebook around with him at all times. They argue for externalism: that Otto’s new memories are in the notebook, not in his brain. The system that constitutes Otto’s mind, his cognitive activities, depends not only on his brain but on the notebook. If he were to lose the notebook, those memories would disappear just as if they had been removed from his brain by psychosurgery. It should make no difference whether memory is stored as physical traces in neuronal circuitry or as ink marks on paper, since the use is the same in the end.

The paper actually opens with more extreme cases like neural implants that blur completely whether information is coming from the brain or outside. We have brain mechanisms to separate what is internally generated and what is external. The point is that these external aids are extensions. In medical school I learned to use index cards and a pocket notebook reference, commonly referred to as one’s “peripheral brain”. Those of us who think well but remember poorly succeed only with these kinds of external knowledge systems.

In 1998, when The Extended Mind was published, we used mostly paper notebooks and computer screens. The Apple Newton was launched in August 1993. The first Palm Pilot, which I think was the first ubiquitous pocket computing device, shipped in March 1997.

The Organized Extended Mind

When David Allen published Getting Things Done in 2001, index cards and paper notebooks were rapidly being left behind as the world accelerated toward our current age of email and internet. I’ll always think of the Getting Things Done system as a PDA system because the lists I created for my system lived on mobile devices. First it was the Palm, then the Blackberry and most recently, the iPhone. @Actions, @WaitingFor and @Projects were edited on the PC and synced to a device that needed a daily connection to the computer. I had a nice collection of reference files, particularly for travel, called “When in London”, “When in Paris”, etc.

My information flow moved to the PC as it became connected to the global network. Two communication functions really: conversations and read/write publishing. Email and message boards provided two-way interaction that was generally one to one or among a small community. Wider publishing was to the web. Both of these migrated seamlessly to handheld devices that replicated email apps or the browser on the PC. Eventually the mobile device merged with the phone. Even though capabilities have grown with faster data rates, touch interfaces, bigger screens and large amounts of solid state data storage, the first iPhones and iPads showed their PDA roots as tethered PC devices in the way they backed up and synced information. That world is rapidly fading as the internet becomes a ubiquitous wireless connection.

Access to email and internet through smartphones has served to further “expand in space” and “compress in time,” as Dave put it. I adopted a text file based approach so that I could switch at will between my iPhone, iPad and MacBook Air and have my external thoughts available. The synced plain text files seem transformational, but feel like my old Palm set of lists.

The age of the cloud is one of information flakes. Much of what we know is now latent and external, requiring reference to a small device. Is it any wonder that our streets and cafes are filled with people peering down into a screen rather than out into the world?

It was a rapid transition. One that continues to evolve and that demands frequent reconsideration of the means and methods for constructing the extended mind.

A Mind Released

The SimpleNote, Notational Velocity and Dropbox ecosystem was the enabling technology for me. Suddenly there was seamless syncing between the iPad or iPhone and the Mac. The rapid adoption of Dropbox as the de facto file system for iOS broke the game wide open so that standard formats could be edited anywhere: Mac, Windows, iPhone, iPad, Unix shell. This was a stable, fast data store available whenever a network was available.

Editing data on a server is also not a new idea. Shell accounts used for editing text with vi or Emacs on a remote machine from anywhere is as old as computer networking. I started this website in late 1999 on Dave Winer’s Edit This Page service where a text editor in the browser allowed simple website publishing for the first time.

Incremental searching of text files eliminates the need for databases or hierarchical structure. Text editors like Notational Velocity, nvAlt, SimpleNote or Notesy make searching multiple files as effortless as brain recall from long-term memory. Just start typing associations or, for wider browsing, tags embedded in metadata, and large unorganized collections become useful, just like the brain’s free recall of red objects or words that begin with the letter F. Incremental searching itself is not a new idea for text editors. What’s new is that we’re not seeing just a line of text, but rather multiline previews and instant access to each file found. Put incremental searching together with ubiquitous access, and the extended mind is enabled across time and space.
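The mechanics are simple enough to sketch. Here’s a minimal Python illustration of the idea, not Notational Velocity’s actual implementation, and with hypothetical file names: filter a folder of text files down to those matching every word typed so far, returning a preview for each hit.

```python
from pathlib import Path

def incremental_search(notes_dir, query, preview_chars=80):
    """Return (filename, preview) pairs for notes matching every term in the query."""
    terms = query.lower().split()
    results = []
    for path in sorted(Path(notes_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        # Match against both the title and the body of the note.
        haystack = (path.stem + " " + text).lower()
        if terms and all(term in haystack for term in terms):
            # Keep a short single-line preview for display beside the filename.
            results.append((path.name, text[:preview_chars].replace("\n", " ")))
    return results

# Each keystroke simply reruns the filter with a longer query,
# so the candidate list narrows as you type: "lon" -> "london" -> "london hotel".
```

No index, no database: with a few thousand small files, a brute-force rescan per keystroke is fast enough to feel instantaneous.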

What seems to have happened is that the data recorded as external memory has finally broken free from its home in notebooks or on the PC and now resides on the net, where it can be accessed by many devices. My pocket notebook and set of GTD text lists are now a set of text files in the cloud. Instantly usable across platforms, small text files have once again become the unit of knowledge: instant access to personal notebook knowledge via sync and search.

Do ADHD Drugs Work Longterm?

“Illness is the doctor to whom we pay most heed; to kindness, to knowledge, we make promise only; pain we obey.” ― Marcel Proust

An essay on ADHD in the New York Times launched an interesting Twitter exchange with Steve Silberman and a medical blogger PalMD on how well we understand psychiatric disorders and treatment.

In the article, Dr. Sroufe concludes that since there is no evidence for long-term benefit of ADHD medications, their use should be abandoned. He is right that the evidence of efficacy is all short term. Over the long term, no benefit has been shown. Of course, almost no one dealing with the issue on a day-to-day basis would agree. Parents, teachers and physicians all agree that these medications have a use in improving the lives of these children. Count me among those who believe it is highly probable that treatment over the course of months and years has utility that is hard to prove.

As a problem in decision making, this is a good example of the difference between believing and knowing.

There is a difference between the practice of science and an absolutist approach to truth. In decision making, we must be practical. As William James said, “Truth is what works.” He believed that science was a pragmatic search for useful models of the world, including mind. Those who look for abstract, absolute truth in clinical research will be confused, misguided and, as often as not, wrong in their decisions. Truth is something that happens to a belief over time as evidence is accumulated, not something that is established by a single positive experiment.

Belief in the usefulness of therapy in medicine follows this model of accumulation of belief. The complexity and variability of human behavior demands a skeptical approach to evidence and a sifting through to discover what works.

Clinical trials for drugs to affect behavior are generally relatively small, short experiments that measure a change from baseline in some clinically meaningful variable. These trials are clinical pharmacology studies in the classic sense: studies in patients (clinical) of drug effect (pharmacology). No one is expecting cure or even modification of the disease. The benefit is short-term symptom relief, so the trial examines short-term symptom relief. In the case of a pain reliever, we ask whether patients’ self-reports of pain are decreased by therapy compared to before therapy. In ADHD, we ask whether a group of target behaviors is changed by treatment compared to baseline.

This approach of measuring change from baseline has a host of pitfalls that limit the generalizability of clinical trials to real life medicine. First, baseline measures are subject to large amounts of bias. One of the worst sources of bias in these trials is the patient’s and physician’s joint desire to have the patient meet the severity required to be enrolled. The investigator is under pressure to contribute patients to the trial. The patient hopes to gain access to some new therapy, either during the trial or during some subsequent opportunity. Both of these factors pressure patients to maximize the severity of their complaint at baseline. How do you get into a trial? Exaggerate your problem! Even without conscious or unconscious bias from patients, any trial will enroll patients who happen to be worse than their average state. When measured repeatedly over time, the scores will tend to drop: classic regression to the mean. If you select more severe outliers, they will tend to look more average over time.

Second, diseases are not stable over time. Without any intervention, measures of a disease will be most highly correlated when measured with a short duration between assessments. The longer you wait to measure again, the lower the correlation. Measuring a drug effect in a controlled trial accurately depends on a high level of correlation. All else being equal, the longer one treats, the harder it will be to measure the effect of the drug. This is the major pitfall of depression trials. Episodes are usually limited in duration, so most patients will get better over time without treatment.

So perhaps it’s not surprising that it’s very hard to measure the effect of ADHD drugs after months or years in chronic therapy trials. These kids get better over time both from regression to the mean and from the natural history of the disease.
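The enrollment effect is easy to demonstrate with simulated data. The sketch below is purely illustrative, with invented numbers rather than figures from any real trial: each simulated patient has a stable underlying severity plus measurement noise, only those scoring above a cutoff at baseline are enrolled, and the group is then re-measured with no treatment at all.

```python
import random

random.seed(1)

N = 10_000
CUTOFF = 70  # symptom score required to enter the trial

def score(true_severity):
    """One noisy measurement of a patient's stable underlying severity."""
    return true_severity + random.gauss(0, 10)

# Stable "true" severities for the whole population.
patients = [random.gauss(50, 10) for _ in range(N)]

# Enroll only patients whose single baseline measurement exceeds the cutoff;
# this selects for genuinely severe patients AND for lucky high noise.
enrolled = [(t, b) for t in patients if (b := score(t)) > CUTOFF]

baseline_mean = sum(b for _, b in enrolled) / len(enrolled)
followup_mean = sum(score(t) for t, _ in enrolled) / len(enrolled)

# With no treatment at all, the follow-up mean falls back toward the
# population mean of 50: classic regression to the mean.
print(f"baseline mean:  {baseline_mean:.1f}")
print(f"follow-up mean: {followup_mean:.1f}")
```

An uncontrolled long-term follow-up would attribute part of that spontaneous drop to the drug, which is exactly why a placebo arm is needed to absorb it.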

Another important issue in ADHD research is that these drugs have effects in healthy volunteers. As Dr. Sroufe points out, amphetamines help college students study for exams; no diagnosis of ADHD needed. This makes it easier to do pharmacology studies, but means that diagnosis in those studies doesn’t really matter; the pharmacology is largely independent of any real pathological state. One could never study a cancer drug in someone without cancer, but this is not true of a cognitive enhancing drug. It’s most likely that kids with ADHD don’t have a single pathophysiology, but rather a combination of being at one end of a normal spectrum of behavior plus stress or a lack of coping mechanisms that creates problems for them in the school environment, where those behaviors are disruptive to their learning and that of others. The pharmacology of stimulants helps them all; after all, it helps even neurotypical college students and computer programmers.

Treatment response does not confirm diagnosis in ADHD as it does in some other neurological diseases like Parkinson’s Disease. While we’d like to call ADHD a disease or at least abnormal brain state, we have no routine way of assessing the current state of a child’s brain. We have even less ability to predict the state of the brain in the future. Thus diagnosis, in the real meaning of the word- “dia” to separate and “gnosis” to know, is something we can’t do. We don’t know how to separate these kids from normal or into any useful categories. And we have no way of describing prognosis- predicting their course. So a trial that enrolls children on the basis of a behavior at a moment in time and tries to examine the effects of an intervention over the long term is probably doomed to failure. Many of those enrolled won’t need the intervention over time. Many of those who don’t get the intervention will seek other treatment methods over time.

With all of these methodological problems, we can’t accept the lack of positive trials as proof that the drugs are ineffective long term. We can’t even prove that powerful opioid pain relievers have long-term efficacy. In fact, it was not too long ago that we struggled with a lack of evidence that opioids were effective even over time periods as short as 12 weeks.

Our short term data in ADHD provides convincing evidence of the symptomatic effects of treatment. Instead of abandoning their use, we should be looking at better ways to collect long term data and test which long term treatment algorithms lead to the best outcomes. And we should be using our powerful tools to look at brain function to understand both the spectrum of ADHD behaviors and the actions of drugs in specific brain regions.

The Brain is the Map of the Mind

“I dwell in Possibility –”

-Emily Dickinson

So What?

Why learn about the world with no immediate practical application?

I’ve said that there is value in writing to learn and photographing to see. What’s the value of learning? Of seeing?

Knowing how to bake bread or bake beans is clearly useful and there’s no question of practicality. Science, even at its most exploratory, seems useful as long as it promises more powerful manipulation of nature. Sometimes the possibility of science is obvious, as in understanding the role of an enzyme in energy metabolism to affect cellular function. Even when the connection is unclear, learning more seems to have potential value even if the present result is impractical. All information has value and there are no dead ends, only detours. Learning broadly is often necessary preparation for learning more narrowly and usefully.

Is philosophy of mind of any use? Is it as useful as neuroscience itself? Might thinking about the nature of the mind at least contribute to the usefulness of information about brain structure and function? Why explore the relationship between mind and brain? Why worry about the apparent contradictions between deterministic physical models and subjective free will?

If I can’t tell whether the world is real or an illusion, does it matter? Is the mind made of ectoplasm attached to the brain by a neural-spiritual interface? These questions have been around for centuries. Every year we learn more about the brain. Do we know any more about the mind? Is what we’ve learned potentially useful?

I’d like to convince you that understanding how mind is generated from brain is a useful way to improve brain function. For me, this is a fundamental reason why brain science is important. Learning about the brain should be a path to deciding better.

Learning From Experience, Teaching the Brain

In *The Consolations of Philosophy*, Alain de Botton writes, “In their different ways, art and philosophy help us, in Schopenhauer’s words, to turn pain into knowledge.” We know what art is and we know that art helps us learn to see. Philosophy, in the broadest meaning of love of knowledge, is a similar direct path from experience to knowledge.

Ignoring the question and pretending that knowledge and brain are independent domains is to miss an opportunity to understand what it means to “know” and therefore try to improve the everyday use of knowledge.

So what have you learned personally from your years of mental experience? You’ve made good, profitable decisions about the world. You’ve made mistakes, of course. Better yet, how often have you thought that you were right, absolutely sure you were right, and later learned that the true state of the world was not at all what you thought?

The stock market serves as a wonderful lab for training the mind to decide better if approached mindfully. There’s profit in correctly identifying an undervalued stock that subsequently rises in value. On the other hand, buying into hype and choosing a company close to failure is exactly the kind of pain that Schopenhauer was referring to as leading to knowledge. What is it that has improved from years of experience in the market that we call “knowledge”? Is it mind that has better judgement now? Is it the brain that can now choose more accurately under conditions of uncertainty?

The Brain Makes Maps

We are beginning, just beginning I think, to understand how knowledge is stored and retrieved in the brain. The insights go back to the beginnings of brain physiology, when recordings of single neurons in awake, behaving animals first became possible. It was obvious from the start that you and I don’t perceive the world directly and whole, but broken down into very small elements. Our retinas are the light-sensing neural arrays at the back of the eye. Like the individual pixels that make up the sensor array in a camera, each photoreceptor senses the light from a small part of the visual scene. The whole picture is represented, but it’s been deconstructed into a mosaic in which each element has been disconnected from every other element.

Somehow that array of light intensity is reconstructed into a sensory impression that we experience subjectively as seeing the world. What’s reconstructed is more than just a visual sensory impression; the seen world has meaning. It’s as if there are little callouts from the objects: blue book, time, moving fly making that buzzing noise, so annoying …

The way in which sensory input is organized into coherent perception remains one of the fundamental questions in neuroscience. In the visual system, the brain starts abstracting local features like color, form and edges from the map of intensities sensed at the eye. These features are mapped from visual space into brain maps, creating a neural representation of features in the scene. At higher levels, features become maps of objects with meaning, like books and flies. The maps of words for these objects are separate, but can be called on when the fly or the book needs to be mentioned.

The brain is a set of maps, spatially organized, each representing different sensory streams or, on the action side, control of different parts of the body and their movement through space. To catch a fly requires the map of visual space containing the fly to be registered with the map of arm and hand movement. The connections and coordinating systems to do all of that are known. In fact, simpler versions of them can be studied in frogs, which need to project a sticky tongue out into visual space for fly-catching activities. Lunch in this case.

It’s a small conceptual step to suggest that valuation of stock, reading another person’s motivation, and understanding calculus are all brain maps of various types. They are simplified representations that model aspects of the real world. The maps are not strictly spatial, but reflect our models of how the physical world is laid out and how it can be manipulated. As simple models, they are not perfectly accurate just as a geographical map is not the terrain itself but rather a useful representation for navigation.

The Mind Mapped

Learning is the act of making better brain maps. The more accurate the model of the world is in the brain, the better it will navigate the world itself. Misconceptions, inaccuracies and the unknown are all bad or missing parts of the map that will make decisions more prone to error. A fully accurate and comprehensive map isn’t ever possible. By their very nature, maps are restricted representations of the world. The world itself is too big and complex to deal with directly.

The exploration of the relationship between mind and brain is for me an effort to create a more accurate map of deciding. We feel like we are creatures split in two. Our ethereal minds seem to inhabit and be constrained by physical bodies. A more accurate map would show just the brain working away and our subjective mental experience as a view into what that brain is doing. It becomes easier to discard distinctions like “rational decision making” and “intuition” when the underlying brain structure and function is the map of mind.


I got used to things going pretty well, going well enough. It was a matter of standing on a steady stone, moving to one higher or broader when the opportunity presented.

I told myself to appreciate them as platforms, thinking of these situations as being good places to be. I learned at the beginning that not to decide is to decide and not to do is to do.

I’ve made some observations that may have some value, but mostly I’ve read the words of others. Let’s see if I’ve gotten it right.

The Search for Enlightenment

“You’re lost inside your houses
And there’s no time to find you now
Well your walls are turning
Your towers are burning
Gonna leave you here
And try to get down to the sea somehow”

Rock Me On the Water
-Jackson Browne

“Truth is what works.”

-William James

On my daily run today, Jackson Browne’s Rock Me On the Water came up on the Genius Playlist. I connected emotionally with the song as I almost always do, feeling that yearning for the transcendent truth that brings joy, identifying with the seeker on his journey to understanding, seeking the peace that lies beyond the mundane world.

William James was attacked during his lifetime and in subsequent decades by philosophers who felt he was destroying the search for truth by making it completely relative. For him, truth was a construct of the mind based on theories and mental models. Truth is a quality of thought based on how well what we think resembles the external world. He called this Pragmatism. We can approximate an accurate view of the world, but never reach ultimate truth.

The materialist philosophers who attacked him believed there is a transcendent, absolute truth in the world that can be discovered through observation and experimentation. How could it be, they asked, that one person’s truth could differ from someone else’s? How could I see one truth today and a different one tomorrow? Truth must be an absolute quality of statements. What is not True is False. The world of philosophy moved toward logic and proof and away from James’ view of the world as a construct viewed by mind via brain processes.

I began to appreciate James’s views after reading contemporary cognitive science and philosophy of mind. Once one understands that our brains function by creating models of the external world, this psychological definition of truth becomes most relevant. We build a mental model of the three-dimensional space around us. We hear music and make sense of tone spatially, thinking of notes as high or low, moving quickly or slowly. We think of crime waves as epidemics of infectious disease, or of honesty as clean behavior: metaphorical models in which one domain lends meaning to another.

Where does that leave Jackson Browne’s romantic search for (capital T) Truth? Is there any reason to “get down to the sea”? Is it entirely an illusion to seek a non-scientific transcendent truth?

I submit that the poet is talking metaphorically about looking beyond the commonplace mental models that we use when “we’re lost inside our houses”. It is the mission of the poet to find the sea and come back and tell us about the journey and what it’s like to experience that joyous song.

Certainly the purpose of the spiritual search for truth is personal gain and fulfillment. Enlightenment is the clearest, most “right” view of the world possible. Reaching full human potential is a personal goal. One becomes a poet by returning from the sea and singing the song, inspiring others to join the joyous song.

The President, Luck, and Regression to the Mean

Being particularly lucky or unlucky is sure to interfere with good decision making. It’s hard to tell whether you’re succeeding because of a confluence of favorable effects due to chance or due to your exceptional brilliance. One’s internal model of self probably plays a role in how events are interpreted. Are you special or just really, really lucky?

If success came through chance, the model predicts that the future is going to look more average, for good or bad. It may or may not result in less risk taking. I can take risk knowing that the outcome is largely not in my control. I may win or lose; it’s knowing the odds of success that’s important.

On the other hand, if you’ve climbed to the top based on your merit then the internal model predicts continued success beyond the average, never regressing to the mean.

Case in point? Barack Obama. From Andrew Gelman, writing at Frum Forum:

More to the point, I don’t think that in January, 2009, Obama had any feeling he was in trouble.  For one thing, he’d spent the previous two years beating the odds and winning the presidency.  (Yes, a Democrat was favored in the general election, but Obama was only one of several Democrats running.)  As I and others have discussed many times, successful politicians have beaten the odds and so it is natural for them to be overconfident about future success.

Losing the 2010 midterms may have been a wake-up call, but then again it’s easy to construct a narrative in which we claim responsibility for good outcomes and blame chance or other outside causes for the bad ones.

Purely from a statistical point of view, expect an exceptional run of success to be followed by more ordinary results. In the end, we’re all average.
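Gelman’s point can be seen in a toy simulation (my sketch, not his, with made-up parameters): give each of many hypothetical candidates a fixed “skill” plus fresh luck in each of two rounds, select the round-one stars, and watch their round-two results slide back toward the average.

```python
import random

random.seed(42)

N = 10_000          # number of hypothetical candidates
SKILL_WEIGHT = 0.5  # how much of each outcome is skill vs. luck (assumed)

# Each outcome = fixed skill + fresh luck, drawn twice (two "rounds").
skills = [random.gauss(0, 1) for _ in range(N)]
round1 = [SKILL_WEIGHT * s + (1 - SKILL_WEIGHT) * random.gauss(0, 1) for s in skills]
round2 = [SKILL_WEIGHT * s + (1 - SKILL_WEIGHT) * random.gauss(0, 1) for s in skills]

# Take the top 1% of performers in round one...
cutoff = sorted(round1)[int(0.99 * N)]
winners = [i for i in range(N) if round1[i] >= cutoff]

# ...and compare their averages across the two rounds.
avg1 = sum(round1[i] for i in winners) / len(winners)
avg2 = sum(round2[i] for i in winners) / len(winners)
print(f"winners' round-1 average: {avg1:.2f}")
print(f"winners' round-2 average: {avg2:.2f}")  # closer to the mean
```

The winners keep their skill but not their luck, so their second-round average falls, not to failure, but toward the ordinary.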

Approaching Complexity

The whole is greater than the sum of the parts.

This is the essence of a complex adaptive system. Any system that is straightforward enough to be a simple adding up of the effects of each part really isn’t worth contemplating as a system; it’s just a collection of independent agents. A stack of checkers of different thicknesses is such a linear system: stack them up and the height simply adds up.
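What “linear” means here can be shown in a few lines of code (the thickness values are invented for illustration): the whole is exactly the sum of the parts, and splitting the stack into sub-stacks changes nothing.

```python
# A stack of checkers is an additive system: the height of the whole
# is exactly the sum of the heights of the parts, nothing more.
thicknesses_mm = [4, 3, 5, 4]  # hypothetical checker thicknesses in millimetres

stack_height = sum(thicknesses_mm)

# Superposition: combining two sub-stacks adds their heights exactly.
stack_a = thicknesses_mm[:2]
stack_b = thicknesses_mm[2:]
assert sum(stack_a) + sum(stack_b) == stack_height

print(f"stack height: {stack_height} mm")
```

Knowing each part tells you everything about the whole; there is no surprise left in the system.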

Once the components start acting on each other and on themselves, behavior becomes complex and increasingly difficult to predict from knowledge of the components and their connections. This is not due to ignorance; collecting more and more data doesn’t help at all. There is some aspect of the whole that is not just the linear addition of the parts.

Once a system is made of connected components with inputs and outputs, components that process information, its behavior can become difficult to predict with precision. A thermostat wired to a heating system, a computer program with subroutines, nerve cells connected in brain circuits, stock traders in a market, the atmosphere: all are complex adaptive systems. The mechanical and computational examples are the most useful for study because they clearly live in the mechanistic, Newtonian, deterministic world and yet their future state cannot be known.

The difference between an additive system and a complex system lies in the relationships. Negative and positive feedback create unexpected behaviors in the system. Small effects in one component produce large effects elsewhere because the connections are non-linear rather than simply proportional.
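A classic illustration of this, my example rather than one from the text, is the logistic map: a one-line deterministic update rule whose nonlinear feedback amplifies a tiny difference between two starting points until the gap is as large as the system itself.

```python
def logistic(x, r=4.0):
    # One step of the logistic map: a deterministic rule with nonlinear
    # feedback (each output is fed back in as the next input).
    return r * x * (1 - x)

a, b = 0.2, 0.2000001  # two almost-identical starting points
max_gap = 0.0
for step in range(40):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

print(f"initial gap: 1e-7, largest gap over 40 steps: {max_gap:.3f}")
```

Nothing here is random, yet after a few dozen steps the two trajectories bear no resemblance to each other, which is exactly why perfect knowledge of the rule does not deliver prediction.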

We’re surrounded by complex systems. Arguably, simple linear systems are the exception: idealized models rather than real functioning systems out in the world. As thinkers, we study simple systems, or simplify the complex into idealized simple systems, because they are easy to deal with in a deterministic and reductionistic manner.

We are ignorant of the exact state of the past and the present, and that creates uncertainty. But because of complexity, even if we had perfect knowledge, we’d still be unable to know the future.