Category Archives: Philosophy

Lack of Authority

It's hard to believe how little we trust what we read in this age of the internet.

The US election of 2016 demonstrated just how profoundly our relationship to authority has changed. We're exposed to conflicting opinions from the online media. We hear facts followed by denials and confident assertions of the opposite. Everyone lies, apparently. There's no way to make sense of this online world in the way one makes sense of a tree or a dog or a computer.

Perhaps relying on confirmation bias, we are forced to interpret events without resort to reasoned argument or weight of evidence. We have to fall back on what we already believe. You have to pick a side. Faced with a deafening roar of comments on Twitter, cable news and news websites, we shape what we hear to create a stable, consistent worldview.

Welcome to a world of narrative where the truth is simply the story we believe. And pragmatically, it seems not to matter much. Believe what you will, since we mostly wield no power in the world.

So what am I to make of this nice new MacBook Pro that I'm using right now? Is it really evidence of Apple's incompetence or their desire to marginalize or milk the Mac during its dying days? Again, believe what you will, but I've got some work to do.

Mind In The Cloud

“Technology changes ‘how’ not ‘what.’ Expands in space, compresses in time. The results are sometimes breathtaking.”

Notebooks as Extended Mind

In 1998, Andy Clark and David Chalmers made the radical suggestion that the mind might not stop at the borders of the brain. In their paper, The Extended Mind, they suggested that the activity of the brain that we experience as consciousness depends not only on the brain but also on input from the rest of the world. Clark’s later book, Supersizing the Mind, clarifies and expands on the idea. Taken to its logical conclusion, this extended mind hypothesis locates mind in the interactions between the brain and the external world. The physical basis of consciousness includes the brain, the body and nearby office products.

I mean to say that your mind is, in part, in your notebook. In the original paper, Clark and Chalmers use the hypothetical case of Otto. Otto has Alzheimer’s Disease and compensates for his memory deficit by carrying a notebook around with him at all times. They argue for externalism: that Otto’s new memories are in the notebook, not in his brain. The system that constitutes Otto’s mind, his cognitive activity, depends not only on his brain but on the notebook. If he were to lose the notebook, those memories would disappear just as if they had been removed from his brain by psychosurgery. It should make no difference whether memory is stored as physical traces in neuronal circuitry or as ink marks on paper since the use is the same in the end.

The paper actually opens with more extreme cases like neural implants that completely blur whether information is coming from the brain or from outside. We have brain mechanisms to separate what is internally generated and what is external. The point is that these external aids are extensions. In medical school I learned to use index cards and a pocket notebook reference, commonly referred to as one’s “peripheral brain”. Those of us who think well but remember poorly succeed only with these kinds of external knowledge systems.

In 1998, when The Extended Mind was published, we used mostly paper notebooks and computer screens. The Apple Newton launched in August, 1993. The first Palm Pilot, which I think was the first ubiquitous pocket computing device, shipped in March, 1997.

The Organized Extended Mind

When David Allen published Getting Things Done in 2001, index cards and paper notebooks were rapidly being left behind as the world accelerated toward our current age of email and internet. I’ll always think of the Getting Things Done system as a PDA system because the lists I created lived on mobile devices. First it was the Palm, then Blackberry and most recently, iPhone. @Actions, @WaitingFor and @Projects were edited on the PC and synced to a device that needed daily connection to the computer. I had a nice collection of reference files, particularly for travel, called “When in London”, “When in Paris”, etc.

My information flow moved to the PC as it became connected to the global network. Two communication functions really: conversations and read/write publishing. Email and message boards provided two way interaction that was generally one to one or among a small community. Wider publishing was to the web. Both of these migrated seamlessly to hand held devices that replicated email apps or the browser on the PC. Eventually the mobile device was combined with the phone. Even though capabilities have grown with faster data rates, touch interfaces, bigger screens and large amounts of solid state data storage, the first iPhones and iPads showed their PDA roots as tethered PC devices in the way they backed up and synced information. That world is rapidly fading as the internet becomes a ubiquitous wireless connection.

Access to email and internet through smartphones has served to further “expand time” and “compress space” as Dave put it. I adopted a text file based approach so that I could switch at will between my iPhone, iPad and MacBook Air and have my external thoughts available. The synced plain text files seem transformational, but feel like my old Palm set of lists.

The age of the cloud is one of information flakes. Much of what we know is now latent and external requiring reference to a small device. Is it any wonder that our streets and cafes are filled with people peering down into a screen rather than out into the world?

It was a rapid transition. One that continues to evolve and that demands frequent reconsideration of the means and methods for constructing the extended mind.

A Mind Released

The SimpleNote, Notational Velocity and Dropbox ecosystem was the enabling technology for me. Suddenly there was seamless syncing between the iPad or iPhone and the Mac. The rapid adoption of Dropbox as the de facto file system for iOS broke the game wide open so that standard formats could be edited anywhere: Mac, Windows, iPhone, iPad, Unix shell. This was a stable, fast data store available whenever a network was available.

Editing data on a server is also not a new idea. Using a shell account to edit text with vi or Emacs on a remote machine from anywhere is as old as computer networking. I started this website in late 1999 on Dave Winer’s Edit This Page service, where a text editor in the browser allowed simple website publishing for the first time.

Incremental searching of text files eliminates the need for databases or hierarchical structure. Text editors like Notational Velocity, nvAlt, SimpleNote or Notesy make searching multiple files as effortless as brain recall from long term memory. Just start typing associations or, for wider browsing, tags embedded in metadata, and large unorganized collections become useful. Just like brain free recall of red objects or words that begin with the letter F. Incremental searching itself is not a new idea for text editors. What’s new is that we’re not seeing just a line of text, but rather multiline previews and instant access to each file found. Put incremental searching together with ubiquitous access, and the extended mind is enabled across time and space.
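As a sketch of the idea, here's a minimal incremental search over a handful of notes. The filenames and contents are invented examples; real editors like Notational Velocity add previews, ranking and instant file access on top of this basic filter.

```python
# A toy incremental search over a set of plain-text notes. Each
# keystroke narrows the candidate set, the way Notational
# Velocity-style editors filter files as you type.

def incremental_search(notes, query):
    """Return titles of notes whose title or text contains every word typed so far."""
    words = query.lower().split()
    return sorted(
        title for title, text in notes.items()
        if all(w in (title + " " + text).lower() for w in words)
    )

# Invented example notes, echoing the "When in London" style of reference file.
notes = {
    "when-in-london.txt": "Heathrow Express to Paddington, top up the Oyster card",
    "when-in-paris.txt": "RER B from CDG, buy a carnet of metro tickets",
    "gtd-actions.txt": "call plumber, renew passport, book Paris flight",
}

print(incremental_search(notes, "paris"))        # two notes match
print(incremental_search(notes, "paris metro"))  # narrowed to one
```

Typing more words only narrows the result set, which is what makes the search feel like associative recall rather than filing and retrieval.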

What seems to have happened is that the data recorded as external memory has finally broken free from its home in notebooks or on the PC and is resident on the net where it can be accessed by many devices. My pocket notebook and set of GTD text lists is now a set of text files in the cloud. Instantly usable across platforms, small text files have once again become the unit of knowledge. Instant access to personal notebook knowledge via sync and search.

Do ADHD Drugs Work Longterm?

“Illness is the doctor to whom we pay most heed; to kindness, to knowledge, we make promise only; pain we obey.” ― Marcel Proust

An essay on ADHD in the New York Times launched an interesting Twitter exchange with Steve Silberman and a medical blogger PalMD on how well we understand psychiatric disorders and treatment.

In the article, Dr. Sroufe concludes that since there is no evidence for longterm use of ADHD medications, their use should be abandoned. He is right that the evidence of efficacy is all short term. Over the long term, no benefit has been shown. Of course, almost no one dealing with the issue on a day to day basis would agree. Parents, teachers and physicians all agree that these medications have a use in improving the lives of these children. Count me among those who believe it is highly probable that treatment over the course of months and years has utility that is hard to prove.

As a problem in decision making, this is a good example of the difference between believing and knowing.

There is a difference between the practice of science and an absolutist approach to truth. In decision making, we must be practical. As William James said, “Truth is what works.” He believed that science was a pragmatic search for useful models of the world, including mind. Those who look for abstract, absolute truth in clinical research will be confused, misguided and, as often as not, wrong in their decisions. Truth is something that happens to a belief over time as evidence is accumulated, not something that is established by a single positive experiment.

Belief in the usefulness of therapy in medicine follows this model of accumulation of belief. The complexity and variability of human behavior demands a skeptical approach to evidence and a sifting through to discover what works.

Clinical trials for drugs to affect behavior are generally relatively small, short experiments that measure a change from baseline in some clinically meaningful variable. These trials are clinical pharmacology studies in the classic sense, studies in patients (clinical) of drug effect (pharmacology). No one is expecting cure or even modification of the disease. The benefit is short term symptom relief, so the trial examines short term symptom relief. In the case of a pain reliever, we ask whether patient’s self reports of pain are decreased by therapy compared to before therapy. In ADHD, we ask whether a group of target behaviors is changed by treatment compared to baseline.

This approach of measuring change from baseline has a host of pitfalls that limit the generalizability of clinical trials to real life medicine. First, baseline measures are subject to large amounts of bias. One of the worst sources of bias in these trials is the patient’s and physician’s joint desire to have the patient meet the severity required to be enrolled. The investigator is under pressure to contribute patients to the trial. The patient hopes to gain access to some new therapy, either during the trial or during some subsequent opportunity. Both of these factors pressure patients to maximize the severity of their complaint at baseline. How do you get into a trial? Exaggerate your problem! Even without conscious or unconscious bias from patients, any trial will enroll patients that happen to be worse than their average state. When measured repeatedly over time, the scores will tend to drop, a classic regression to the mean. If you select more severe outliers, they will tend to look more average over time.
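A toy simulation makes the enrollment effect concrete. This is an illustrative sketch, not real trial data: the severity scale, noise level and enrollment cutoff are all invented numbers.

```python
import random

random.seed(1)

# Invented numbers for illustration: each "patient" has a stable true
# severity (mean 50) plus large day-to-day measurement noise.
N = 10_000
true_severity = [random.gauss(50, 5) for _ in range(N)]

def measure(true_vals):
    """One noisy assessment of every patient."""
    return [t + random.gauss(0, 10) for t in true_vals]

baseline = measure(true_severity)

# Enroll only patients who look severe at baseline, score 65 or higher.
enrolled = [i for i in range(N) if baseline[i] >= 65]

# Measure again later, with no treatment at all.
followup = measure(true_severity)

mean_baseline = sum(baseline[i] for i in enrolled) / len(enrolled)
mean_followup = sum(followup[i] for i in enrolled) / len(enrolled)

print(f"enrolled baseline mean:  {mean_baseline:.1f}")
print(f"untreated followup mean: {mean_followup:.1f}")  # lower, with no drug given
```

The enrolled group improves substantially even though nothing was done, because selecting on a noisy baseline preferentially catches patients on a bad day.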

Second, diseases are not stable over time. Without any intervention, measures of a disease will be most highly correlated when measured with a short duration between assessments. The longer you wait to measure again, the lower the correlation. Measuring a drug effect in a controlled trial accurately depends on a high level of correlation. All else being equal, the longer one treats, the harder it will be to measure the effect of the drug. This is the major pitfall of depression trials. Episodes are usually limited in duration, so most patients will get better over time without treatment.

So perhaps it’s not surprising that it’s very hard to measure the effect of ADHD drugs after months or years in chronic therapy trials. These kids get better over time both from regression to the mean and from the natural history of the disease.

Another important issue in ADHD research is that these drugs have effects in healthy volunteers. As Dr. Sroufe points out, amphetamines help college students study for exams; no diagnosis of ADHD needed. This makes it easier to do pharmacology studies, but means that diagnosis in those studies doesn’t really matter: the pharmacology is largely independent of any real pathological state. One could never study a cancer drug in someone without cancer, but this is not true of a cognitive enhancing drug. It’s most likely that kids with ADHD don’t have a single pathophysiology, but rather a combination of being at one end of a normal spectrum of behavior plus stress or a lack of coping mechanisms that create problems for them in the school environment, where those behaviors are disruptive to their learning and that of others. The pharmacology of stimulants helps them all; after all, it helps even neurotypical college students and computer programmers.

Treatment response does not confirm diagnosis in ADHD as it does in some other neurological diseases like Parkinson’s Disease. While we’d like to call ADHD a disease or at least an abnormal brain state, we have no routine way of assessing the current state of a child’s brain. We have even less ability to predict the state of the brain in the future. Thus diagnosis, in the real meaning of the word (“dia” to separate and “gnosis” to know), is something we can’t do. We don’t know how to separate these kids from normal or into any useful categories. And we have no way of describing prognosis, predicting their course. So a trial that enrolls children on the basis of a behavior at a moment in time and tries to examine the effects of an intervention over the long term is probably doomed to failure. Many of those enrolled won’t need the intervention over time. Many of those who don’t get the intervention will seek other treatment methods over time.

With all of these methodological problems, we can’t accept lack of positive trials to be proof that drugs are ineffective long term. We can’t even prove that powerful opioid pain relievers have longterm efficacy. In fact, it was not too long ago that we struggled with a lack of evidence that opioids were effective even over time periods as short as 12 weeks.

Our short term data in ADHD provides convincing evidence of the symptomatic effects of treatment. Instead of abandoning their use, we should be looking at better ways to collect long term data and test which long term treatment algorithms lead to the best outcomes. And we should be using our powerful tools to look at brain function to understand both the spectrum of ADHD behaviors and the actions of drugs in specific brain regions.

The Brain is the Map of the Mind

I dwell in Possibility – Emily Dickinson

So What?

Why learn about the world with no immediate practical application?

I’ve said that there is value in writing to learn and photographing to see. What’s the value of learning? Of seeing?

Knowing how to bake bread or bake beans is clearly useful and there’s no question of practicality. Science, even at its most exploratory, seems useful as long as it promises more powerful manipulation of nature. Sometimes the possibility of science is obvious, as in understanding the role of an enzyme in energy metabolism to affect cellular function. Even when the connection is unclear, learning more seems to have potential value even if the present result is impractical. All information has value and there are no dead ends, only detours. Learning broadly is often necessary preparation for learning more narrowly and usefully.

Is philosophy of mind of any use? Is it as useful as neuroscience itself? Might thinking about the nature of the mind at least contribute to the usefulness of information about brain structure and function? Why explore the relationship between mind and brain? Why worry about the apparent contradictions between deterministic physical models and subjective free will?

If I can’t tell whether the world is real or an illusion, does it matter? Is the mind made of ectoplasm attached to the brain by a neural-spiritual interface? These questions have been around for centuries. Every year we learn more about the brain. Do we know any more about the mind? Is what we’ve learned potentially useful?

I’d like to convince you that understanding how mind is generated from brain is a useful way to improve brain function. For me, this is a fundamental reason why brain science is important. Learning about the brain should be a path to deciding better.

Learning From Experience, Teaching the Brain

In *The Consolations of Philosophy*, Alain de Botton writes, “In their different ways, art and philosophy help us, in Schopenhauer’s words, to turn pain into knowledge.” We know what art is and we know that art helps us learn to see. Philosophy, in the broadest meaning of love of knowledge, is a similar direct path from experience to knowledge.

Ignoring the question and pretending that knowledge and brain are independent domains is to miss an opportunity to understand what it means to “know” and therefore try to improve the everyday use of knowledge.

So what have you learned personally from your years of mental experience? You’ve made good, profitable decisions about the world. You’ve made mistakes, of course. Better yet, how often have you thought you were right, absolutely sure you were right, and later learned that the true state of the world was not at all what you thought?

The stock market serves as a wonderful lab for training the mind to decide better if approached mindfully. There’s profit in correctly identifying an undervalued stock that subsequently rises in value. On the other hand, buying into hype and choosing a company close to failure is exactly the kind of pain that Schopenhauer was referring to as leading to knowledge. What is it that has improved from years of experience in the market that we call “knowledge”? Is it mind that has better judgement now? Is it the brain that can now choose more accurately under conditions of uncertainty?

The Brain Makes Maps

We are beginning, just beginning I think, to understand how knowledge is stored and retrieved in the brain. The insights go back to the beginnings of brain physiology, when recordings of single neurons in awake, behaving animals began to be possible. It was obvious from the start that you and I don’t perceive the world directly and whole, but broken down into very small elements. Our retinas are the light sensing neural arrays at the back of the eye. Like the individual pixels that make up the sensor array in a camera, each photoreceptor senses the light from a small part of the visual scene. The whole picture is represented, but it’s been deconstructed into a mosaic in which each element has been disconnected from every other element.

Somehow that array of light intensity is reconstructed into a sensory impression that we experience subjectively as seeing the world. What’s reconstructed is more than just a visual sensory impression; the seen world has meaning. It’s as if there are little callouts from the objects: blue book, time, moving fly making that buzzing noise, so annoying …

The way in which sensory input is organized into coherent perception remains one of the fundamental questions in neuroscience. In the visual system, the brain starts abstracting local features like color, form and edges from the map of intensities sensed at the eye. These features are mapped from visual space into brain maps, creating a neural representation of features in the scene. At higher levels, features become maps of objects with meaning like books and flies. The maps of words for these objects are separate, but can be called on when the fly or the book needs to be mentioned.

The brain is a set of maps, spatially organized, each representing different sensory streams or, on the action side, control of different parts of the body and their movement through space. To catch a fly requires the map of visual space containing the fly to be registered with the map of arm and hand movement. The connections and coordinating systems to do all of that are known. In fact, simpler versions of them can be studied in frogs, which need to project a sticky tongue out into visual space for fly catching activities. Lunch in this case.

It’s a small conceptual step to suggest that valuation of stock, reading another person’s motivation, and understanding calculus are all brain maps of various types. They are simplified representations that model aspects of the real world. The maps are not strictly spatial, but reflect our models of how the physical world is laid out and how it can be manipulated. As simple models, they are not perfectly accurate just as a geographical map is not the terrain itself but rather a useful representation for navigation.

The Mind Mapped

Learning is the act of making better brain maps. The more accurate the model of the world is in the brain, the better it will navigate the world itself. Misconceptions, inaccuracies and the unknown are all bad or missing parts of the map that will make decisions more prone to error. A fully accurate and comprehensive map isn’t ever possible. By their very nature, maps are restricted representations of the world. The world itself is too big and complex to deal with directly.

The exploration of the relationship between mind and brain is for me an effort to create a more accurate map of deciding. We feel like we are creatures split in two. Our ethereal minds seem to inhabit and be constrained by physical bodies. A more accurate map would show just the brain working away and our subjective mental experience as a view into what that brain is doing. It becomes easier to discard distinctions like “rational decision making” and “intuition” when the underlying brain structure and function is the map of mind.

Platforms

I got used to things going pretty well, going well enough. It was a matter of standing on a steady stone, moving to one higher or broader when the opportunity presented.

I told myself to appreciate them as platforms, thinking of these situations as being good places to be. I learned at the beginning that not to decide is to decide and not to do is to do.

I’ve made some observations that may have some value, but mostly I’ve read the words of others. Let’s see if I’ve gotten it right.

The Search for Enlightenment

“You’re lost inside your houses
And there’s no time to find you now
Well your walls are turning
Your towers are burning
Gonna leave you here
And try to get down to the sea somehow”

Rock Me On the Water
-Jackson Browne


“Truth is what works.”

-William James

On my daily run today, Jackson Browne’s Rock Me On the Water came up on the Genius Playlist. I connected emotionally with the song as I almost always do, feeling that yearning for the transcendent truth that brings joy, identifying with the seeker on his journey to understanding, seeking the peace that lies beyond the mundane world.

William James was attacked during his lifetime and in subsequent decades by philosophers who felt he was destroying the search for truth by making it completely relative. For him, truth was a construct of the mind based on theories and mental models. Truth is a quality of thought based on how well what we think resembles the external world. He called this Pragmatism. We can approximate an accurate view of the world, but never reach ultimate truth.

The materialist philosophers who attacked him believed there’s a transcendent, absolute truth in the world that can be discovered through observation and experimentation. How could it be, they asked, that one person’s truth could differ from someone else’s? How could I see one truth today and a different one tomorrow? Truth must be an absolute quality of statements. What is not True is False. The world of philosophy moved toward logic and proof and away from James’ view of the world as a construct viewed by mind via brain processes.

I began to appreciate James’ views after reading contemporary cognitive science and philosophers of mind. Once one begins understanding that our brains function by creating models of the external world, this psychological definition of truth becomes most relevant. We build a mental model of the three-dimensional space around us. We hear music and make sense of tone spatially, thinking of notes as being high or low, moving quickly or slowly. We think of crime epidemics as infectious disease or honesty as clean behavior in metaphorical models where one model provides meaning to a different model.

Where does that leave Jackson Browne’s romantic search for (capital T) Truth? Is there any reason to “get down to the sea”? Is it entirely an illusion to seek a non-scientific transcendent truth?

I submit that the poet is talking metaphorically about looking beyond the commonplace mental models that we use when “we’re lost inside our houses”. It is the mission of the poet to find the sea and come back and tell us about the journey and what it’s like to experience that joyous song.

Certainly the purpose of the spiritual search for truth is personal gain and fulfillment. Enlightenment is the clearest, most “right” view of the world possible. Reaching full human potential is a personal goal. One becomes a poet by returning from the sea and singing the song, inspiring others to join the joyous song.

The President, Luck, and Regression to the Mean

Being particularly lucky or unlucky is sure to interfere with good decision making. It’s hard to tell whether you’re succeeding because of a confluence of favorable effects due to chance or due to your exceptional brilliance. One’s internal model of self probably plays a role in how events are interpreted. Are you special or just really, really lucky?

If success came through chance, the model predicts that the future is going to look more average, for good or bad. It may or may not result in less risk taking. I can take risk knowing that the outcome is largely not in my control. I may win or lose; it’s knowing the odds of success that’s important.

On the other hand, if you’ve climbed to the top based on your merit then the internal model predicts continued success beyond the average, never regressing to the mean.

Case in point? Barack Obama. From Andrew Gelman, writing at Frum Forum:

More to the point, I don’t think that in January, 2009, Obama had any feeling he was in trouble.  For one thing, he’d spent the previous two years beating the odds and winning the presidency.  (Yes, a Democrat was favored in the general election, but Obama was only one of several Democrats running.)  As I and others have discussed many times, successful politicians have beaten the odds and so it is natural for them to be overconfident about future success.

Losing the 2010 midterms may have been a wakeup call, but then again it’s easy to construct a narrative where we claim responsibility for good outcomes and blame chance or other outside causes for the bad outcomes.

Purely from a statistical point of view, expect success to be followed by failure more often than not. In the end, we’re all average.

Approaching Complexity

The whole is greater than the sum of the parts.

This is the essence of a complex adaptive system. Any system that is straightforward enough to be a simple adding up of the effects of each part really isn’t worth contemplating as a system. It’s a collection of independent agents. A stack of checkers of different thicknesses is such a linear system. Stack them up and the height simply adds up in a linear way.

Once the components start acting on each other and themselves, behavior becomes complex and increasingly difficult to predict based on knowledge of the components and their connections. This is not due to ignorance. Collection of more and more data doesn’t help at all. There is some aspect of the whole that is not just the linear addition of the parts.

Once a system is made of connected components that have inputs and outputs and that process information, its behavior can become difficult to predict with precision. A thermostat connected to a heating system, a computer program with subroutines, nerve cells connected in brain circuits, stock traders in a market, the atmosphere: all are complex adaptive systems. The mechanical and computer examples are the most useful for study because they are clearly in the mechanistic, Newtonian, deterministic world and yet their future state cannot be known.

The difference between an additive system and a complex system is in the relationships. Negative and positive feedback create unexpected behaviors in the system. Small effects in one component produce large effects elsewhere because of the nature of the connections which are not simply proportional but non-linear instead.

We’re surrounded by complex systems. Arguably, simple linear systems are exceptions and may be idealized simple models rather than real functioning systems out in the world. As thinkers, we study simple systems or simplify the complex into idealized simple systems because they are easy to deal with in a deterministic and reductionistic manner.

We are ignorant of the state of the past and the future. That creates uncertainty. Because of complexity, even if we had perfect knowledge, we’d still be unable to know the future.

Making Decisions Under Conditions of Chaos

In 1961, Edward Lorenz discovered chaos in the clockwork universe.

Lorenz was running a computer simulation of the atmosphere to help forecast the weather. He wanted to rerun just part of one sequence, so instead of starting at the beginning, he started the run in the middle. In order to start in the middle, he used the output of the program at its midpoint as a new starting point, expecting to get the same result in half the time.

Unexpectedly, even though he was using a computer following strict deterministic rules, the second run started with the same values as before but produced a completely different result. It was as if uncertainty and variability had somehow crept into the orderly, deterministic world of his computer program.

As it turned out, the numbers used from the middle of the run were not quite the same as the ones the program had produced at that point the first time, because of rounding or truncation errors. The resulting theory, Chaos Theory, described how, for certain kinds of computer programs, small changes in initial conditions could result in large changes later on. These systems change over time, with each state leading to the next. This dependence on initial conditions has been immortalized as “the butterfly effect”, where a small change in initial conditions, the wind from a butterfly’s wings in China, can have a large effect later on: rain in New York.

This sensitivity to the exact values of parameters in the present makes it very hard to know future values. As it has been formalized mathematically, chaos theory applies to “dynamical systems”, which are simply systems that change over time according to some rule. The system starts in some initial state. For our purposes, think of it as now, time zero. Rules are applied and the system changes to its new state: wind is blowing, temperature is changing, and so on, based on rules applied to the initial state of the atmosphere. The rules are applied to the new state to produce the next state, and so on.
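The rule-applied-over-and-over idea is easy to demonstrate. Here's a sketch using the logistic map, a standard textbook example of a chaotic dynamical system (not Lorenz's actual atmospheric model): two starting values differing by one part in ten billion eventually diverge completely, yet both stay bounded.

```python
# The logistic map x -> r*x*(1-x) with r = 4 is a textbook chaotic
# dynamical system: one simple rule applied over and over.

def iterate(x, r=4.0, steps=50):
    """Apply the rule repeatedly, keeping the whole trajectory."""
    history = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        history.append(x)
    return history

a = iterate(0.2)
b = iterate(0.2 + 1e-10)  # a Lorenz-style tiny rounding difference

# The trajectories track each other early on, then diverge completely,
# yet both stay bounded between 0 and 1.
for step in (0, 10, 30, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
```

The bounded range is the point made below about Times Square in February: the exact value at step 50 is unknowable, but the neighborhood it lives in is entirely predictable.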

Chaos may not have been the best word to describe this principle, though. To me it suggests complete unpredictability. Most real or mathematically interesting dynamical systems don’t blow up like that into complete unpredictability. Take the weather: even if the butterfly or small differences in ocean surface temperature make it impossible to know whether the temperature in Times Square in New York will be 34 degrees or 37 degrees on February 7th, either one is a likely value to be found in the system at that time in that place. Measuring a temperature of 95 degrees F in New York in February is impossible or nearly so.

Dynamical systems like the weather often show recurrent behavior, returning to similar but non-identical states over and over as the rules are applied. Following the values over time traces a path that wanders, returning after some time to the same neighborhood. Not exactly the same place, because it started somewhere slightly different than the last time around, but the same neighborhood. Just unpredictably in the same neighborhood.
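The same toy logistic map shows this wandering-but-bounded behavior, as an illustration only: over many steps the orbit never leaves a fixed range, and it keeps returning to the neighborhood where it began, at unpredictable times.

```python
# Illustrative sketch: a chaotic orbit stays in a predictable range
# and keeps revisiting old neighborhoods, without ever repeating exactly.
def logistic(x, r=4.0):
    return r * x * (1 - x)

x = 0.2
lo, hi = 1.0, 0.0
visits_near_start = 0
for step in range(10_000):
    x = logistic(x)
    lo, hi = min(lo, x), max(hi, x)
    if abs(x - 0.2) < 0.01:  # back in the starting neighborhood?
        visits_near_start += 1

# The orbit ranges widely but never escapes [0, 1], and it comes back
# near 0.2 again and again on no fixed schedule.
```

Which step lands near the start is unknowable in advance; that it will happen, and that every value stays in range, is predictable.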

This returns us to the distinction between knowing the future and predicting it. The future state of a chaotic system can’t be known, because small changes in initial conditions result in large changes later on. But those large changes recur within a predictable range of values. A chaotic system can be predicted even though its future state can’t be known. When it comes to the Times Square temperature, climate data tells us what range the chaotic values move within from one season’s cycle to the next. In drug development, the chaotic system of taking a pill every day and measuring drug levels in the blood allows prediction of the range of likely values, but because initial conditions change and cause large, unpredictable effects, one can’t know in advance whether today’s measurement will be high or low. It’s almost never the average; it varies around the average.

It’s worth seeing how important prediction is for making decisions when the future is unknown. Because the uncertain future is orderly, we actually know a lot about it, even if we don’t know it in all of its particulars. We must make decisions knowing what range of possibilities the future can assume. Chaos Theory suggests that this kind of uncertainty is in the very nature of the world because of the behavior of dynamical systems, in which rules dictate how a system changes over time.

Thinking Without Knowing

We have less free will than we think. Our thoughts are severely constrained by both brain mechanisms and the metaphors that the brain has been filled with from environmental input. Physics, culture, experience.

We have more free will than we think because this remarkable consciousness we’re all endowed with can act back on the brain and change it. We can also hack the brain in new and interesting ways to create.

John Cleese on Creativity:

John Cleese on how to put your mind to work via John Paul Caponigro

Cleese points out two important non-conscious phenomena. First, there’s the “sleep on it” effect: upon taking something up again the next morning, the solution often appears obvious. The answer presents itself without thinking; it’s just there. Of course, most thought is “just there”, but the previous effort makes it seem remarkable that you can fail to think of something one day but succeed the next. The brain is an odd muscle indeed. Imagine if you failed to hold something one day but could grasp it the next morning.

And his point that you need to put in the work of thinking the night before shouldn’t be lost. He calls it “priming the pump”, but I think of it as one long thinking process. It’s useful to interrupt the conscious work to free the brain to produce a solution. This is brain work without knowing.

His second point, about recreation from memory, is related. We now know that remembering is a creative act, not a playback of a brain tape of events. It’s easy to remember something into a different form than the original. Creatively, the new version may be better.

Cleese tells a story about losing and recreating a script. The second version, the one from memory, was funnier and crisper. I use this trick all the time in creating presentations or writing. I step away from the words or slide and get myself to say in words what it is I’m trying to convey. I’ve realized also that if I can’t say it easily, my problem is that I don’t know what to say, not that I don’t know how to say it.

It’s always fun, during a prep meeting, to turn to someone who’s struggling to create a slide. They’re lost in a verbal maze, trying to find the right words as if some magical incantation will unlock the meaning. I’ll say, “Tell me what you’re trying to convey here.” Once they’ve told me, I say, “Well, write that down,” and we get a clear and crisp rendering of the thought in words.