Waiting for Brain Science

It was back in high school that I became fascinated by the workings of the mind. I was doing lots of improvisational theater and acting in plays and saw how I and others could transform into new identities at will. I realized that we did this in everyday life as we slipped between hanging out with friends and behaving (or not behaving) according to norms in school. Mind-altering substances were everywhere, so reality could easily be demonstrated to be a mental construct, not the universal truth we all pretended it was.

Eventually, I put the arts into the background and pursued the science of mind. A combined MD / PhD program led to training in Neurology and now a long career in developing new treatments for brain diseases. Given the state of cognitive science at the time, the practical pursuit of understanding neurological disease seemed more likely to lead to a real contribution.

I now have the luxury of returning to exploring cognitive science 30 years later. Real progress has been made on many fronts, though much remains obscure. I’m particularly struck by how clearly we see the process of perception of complex scenes and symbols. When I was in college we were just beginning to understand the tuning of neurons in the primary receiving areas of the cerebral cortex. Now we have a picture of how shapes and words are recognized in the visual regions of the brain through activation of tuned networks across the regions of cerebral cortex devoted to sensing the visual world.

Decision making plays a very specific role in the sensory systems. If the system is primed by a preceding stimulus, say a lion’s roar, the sensing areas are readied and more likely to detect a cat among the noise. Or, having decided to look for the color red, suddenly every red shape jumps out from the background, even though just before it blended into the background.
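This priming effect can be read as a shift in a Bayesian prior. Here is a minimal sketch of that idea; the probabilities and the `posterior_cat` function are my own invented illustration, not measured values or an established model:

```python
# Hypothetical sketch: perceptual priming as a Bayesian prior shift.
# All numbers are illustrative assumptions, not measured values.

def posterior_cat(prior_cat, p_signal_given_cat=0.6, p_signal_given_noise=0.3):
    """Posterior probability of 'cat present' given one ambiguous rustle."""
    num = p_signal_given_cat * prior_cat
    den = num + p_signal_given_noise * (1 - prior_cat)
    return num / den

# Unprimed observer: cats are rare, so the rustle is dismissed as noise.
print(posterior_cat(prior_cat=0.05))   # ≈ 0.095

# After a lion's roar the prior jumps; the same rustle now reads as a cat.
print(posterior_cat(prior_cat=0.5))    # ≈ 0.667
```

The same sensory evidence produces very different percepts depending only on the prior, which is the point: the "decision" happens inside the perceptual calculation itself.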

Most remarkably, these sensory decisions take place in primary receiving areas, preventing the perception of anything else. And these decisions generally are not at all accessible to consciousness. We’re not aware of how we change our perception to fit context since it occurs via basic feed-forward mechanisms.

This is unconscious bias, but of a sort never imagined by philosophers and sociologists. It’s built into the apparatus of perception from the very first steps of visual perception, impossible to control directly, just influenced by the ongoing flow of brain action and reaction.

This is not really where I thought cognitive science would end up. Deciding better becomes so much more difficult if the decision process begins with these brain circuits determining what is seen in the first place.

Mental Events Can’t “Cause” Anything

A quick note: There are some questions that can’t be asked because they make no sense. Posing the question seems to lend legitimacy to the underlying assumptions, when the premise of the question is false to begin with.

I think this is true of our questions regarding the existence of free will and the cause of consciousness. As choice and the experience of choice are processes of the brain, asking how mental events can control brain processes is just asking how one brain process can lead to another brain process. It’s really not a useful question.

While in some sense it may be a category error, trying to combine two categories into one question, it seems more just based on an illusion from introspection, much like asking how the sun goes around the earth when really we know that the earth rotates. It’s the question that’s at fault, not some missing stuff that makes up the “qualia”.

Bye Blogroll

I just deleted the Blogroll from the site template. I doubt it will be missed.

The blogroll is a charming throwback to a time when the web consisted of individual sites like this one. We all linked to each other as a community trying out the new publishing medium of the World Wide Web. Then came the RSS reader, followed by the social network revolution, and now we’re all looking at Twitter or getting email newsletters from subscription-supported sites.

And yes, most of the links in my now-deleted Blogroll turned out to be links to shuttered or inactive sites. Even the mighty dangerousmeta! is no longer updated. Garret wrote in his last post:

Blogs, I found, are swiftly becoming broadcast-only devices. Discussions are spread out over Facebook, Twitter, Slack, Signal … and many other services. Often hard to predict where a person will choose to elucidate their blog postings.

Garret notes that there are some other ways to go- like writing a book, a form that seems to have survived a few centuries in spite of movies, TV and podcasting.

I’ve always described this blog as a “Personal Journal”. I have almost twenty years of intellectual exploration, hobbies and nonsense cached here. All searchable and available to anyone who cares to look. I’ve had more readers here than most pre-internet authors even though my writing has tended to be somewhat obscure most of the time.

Maybe for sentimental reasons I treasure having an instant publishing platform, even if the readership amounts to a few dozen individuals.

Reading: “Phi: A Voyage from the Brain to the Soul” by Giulio Tononi

Language continually asserts by the syntax of subject and predicate that “things” somehow “have” qualities and attributes
Mind and Nature: A Necessary Unity
Gregory Bateson

Phi: A Voyage from the Brain to the Soul by Giulio Tononi is an odd introduction to “Integrated Information Theory” (IIT). I came to it having read some of Tononi’s work and the collaborations with Koch and Edelman, so I was hoping to gain a more intuitive feel for how they want to construct an axiomatic account of how the experience of consciousness arises from brain activity, based on information theory and integration.

I can’t really recommend the book as either an introduction or as an aid to understanding IIT intuitively. It’s written poetically, as a series of vignettes involving Galileo being guided by cognitive echoes of figures in philosophy and neuroscience. Echoes, as in Tononi’s vague re-imaginings of them in his own mind rather than real historical figures grounded in the thought and social context of their own time and place. There are notes that provide some explanation and context, but the whole thing reads as a way to avoid simple, straight explanation of the theory.

I did find the book useful as an introduction to some of the fundamental relationships between brain and consciousness. In the vignettes, Tononi nicely describes the mosaic nature of consciousness, distinguishing the experience of being in the dark (where there is a visual world without content) from being cortically blind (where the visual world is actually missing from consciousness). When I was in medical school, cortical blindness was compared to “what it looks like behind your head”. There’s no vision there; it’s not black or unclear, it just isn’t. Similarly, there are vignettes on dementia, development and “brain in a vat” thought experiments that are useful in determining the size and shape of mind.

Maybe IIT is just dressed up dualism

In the end, I find IIT totally unconvincing. I actually think it’s really just dualism masquerading as a theory of emergence. When Tononi writes this, he gives the game away:

How can we be responsible for our choices, if how we choose is determined by brain and circumstance?

As the Bateson quote at the top puts it better than I could, we are used to a world of things, so we want mind to “exist” somehow. We want free will to be mind controlling brain, when the truth is that mind is what brain does, so there’s no way a process of a thing can control the thing. It is the thing. IIT tries to bring mind into existence, like trying to bring a baseball game into existence when there are just the players, field, bats and balls. Sure, we want to say “I saw a baseball game” when it’s more accurate to say “I went to the stadium to watch baseball players play nine innings.”

Bateson and others accurately point out that this experience of mind actually occurs when the brain interacts with the world, a world that, somewhat miraculously, includes other brains of almost the same construction. Brain inhabits a physical world of chemicals and planets and energy and things, but it also inhabits a semantic world of language, emotion, baseball and blogs.

Maps and Legends: Brain as world model

Can you map decision theory onto brain mechanisms?

It’s clear the brain doesn’t make decisions in the way that’s been formulated as “rational” by decision theory. You won’t find branching decision trees composed of options, and there’s no probability calculation that weights the potential payoff of different options. It’s a complex system built of networked neurons, quite opaque as to where it hides meaning. Yet somehow the brain makes decisions that, within limits, appear pretty optimal.
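For contrast, here is the kind of explicit decision-tree calculus that formal decision theory prescribes and that the brain does not literally implement. The options and numbers are invented for illustration:

```python
# A minimal sketch of "rational" decision theory: enumerate options,
# attach probabilities and payoffs to outcomes, pick the best expected
# value. All figures below are invented for illustration.

options = {
    "develop drug A": [(0.1, 500), (0.9, -50)],   # (probability, payoff)
    "develop drug B": [(0.3, 120), (0.7, -20)],
    "do nothing":     [(1.0, 0)],
}

def expected_value(outcomes):
    """Probability-weighted sum of payoffs for one option."""
    return sum(p * payoff for p, payoff in outcomes)

best = max(options, key=lambda o: expected_value(options[o]))
print(best, expected_value(options[best]))   # → develop drug B 22.0
```

Nothing in the cortex resembles this explicit enumeration, yet the behavior the brain produces often looks as if such a calculation had been done.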

Are brain maps central?

It’s been known since the beginning of modern neuroscience that the cerebral cortex is organized as a series of maps. There are maps of the body surface in the primary sensory area for touch, maps of the retina for vision, and tonotopic maps for hearing. Of course the primary motor cortex responsible for fine movement is mapped across the body.

Flattened out, it’s an area of about 2.5 square feet, but we see it folded into gyri to fit compactly in the skull. Other than the sensory maps, the rest of the cortex, the “association areas”, doesn’t have explicit physical maps, but instead maps other kinds of space- either movement or meaning, much of which is still bound to a sensory or motor channel- vision (by far the largest in the human brain), touch, etc.

Perhaps these interconnected maps of the world are central to how decisions are made in the brain, because we experience consciousness as a representation of the world through these maps. It’s as if the brain is a simulation of our body moving through space. The global simulation I’m thinking about isn’t just a sensory image of the world built of reflected light and air pressure changes; it has implicit understanding of physics and meaning (semantics) built into it. See an apple and know that it is something that doesn’t weigh much and is good to eat. It seems attentional mechanisms limit our access to everything going on across the cortex because of limitations on working memory or other real-time control mechanisms, but the simulation is there to provide the options for action available moment to moment.

So maybe map is a bit limiting as a term. A map is the two-dimensional representation of the skin, the retina or the tonotopic scale. The brain assembles that raw information into shapes and objects with qualities like color and geometry that don’t vary with the quality of illumination or the angle of view. We actually see letters and words even though language is metadata cued by visual input.

Content, not mechanism of mind

While I like the map analogy, I’m not enthusiastic about “theories of consciousness” in general. I think they are mostly category errors where someone tries to explain an emergent observation, mind, in terms of the component parts of the system, neurons and networks. It’s useful to try to understand underlying mechanisms, but fruitless in general to go the other way. I can tell you how a clock moves in a regular pattern so that I can tell the time. A clock however doesn’t have in it the idea of time or hours or late for my next appointment.

This was the challenge understood by early systems theory thinkers. As they saw very simple robot systems evidence complex and unpredictable behavior, they quickly realized that while the behavior was contained in the system, it hadn’t been designed in and wasn’t there explicitly. Each component has a limited part to play, but as they interact a complex behavior emerges. No individual ant knows how to signal to others how to get to a food source or build a network of tunnels. Implicit knowledge is built into each one. The DNA of a single cell has all the information needed to build a whale or a platypus. But no one reading the string of nucleotides would imagine there was a potential mammal there.

I’d put the theorizing of Tozzi, Friston and others into the systems theory camp. For example, in Towards a Neuronal Gauge Theory, they attempt to formalize this mapping idea, casting the brain in the role of minimizing uncertainty about the external world. In fact they cite Conant and Ashby’s good regulator hypothesis, which states that every good regulator of a system must be a model of that system.

Where choice comes from

So choice is implicit in the brain’s modeling of the world. The maps provide the options, values and probabilities that have been formalized as decision theory. There’s a neural calculus going on, but one that is far from the small world of even our most sophisticated models and mathematics. Fundamentally, the brain is a functioning part of a bigger system that includes other brains and a real environment that feeds a network of meaning and physical complexity that can’t be captured in the static numbers we use for computation.

Lessons in Science and Culture

John Nerst at Everything Studies provides a long and thoughtful analysis of a discussion of a dangerous idea: A Deep Dive into the Harris-Klein Controversy. I think it’s worth a comment here as well.

As a neuroscientist and reader of all of these public personalities (Charles Murray, Sam Harris and Ezra Klein), I’ve followed the discussion of race and IQ over the years. We know that intelligence, like many other traits such as height or cardiovascular risk, is in part inherited and strongly influenced by environment. Professionally, I’m interested in the heritability of complex traits like psychiatric disorders and neurodegenerative diseases. The measured differences in IQ between groups fall squarely in this category of heritable traits where an effect can be measured, but the individual genes responsible have remained elusive.

I’m going to side with Ezra Klein, who in essence argues that there are scientific subjects where it is a social good to politely avoid discussion. One can learn about human population genetics, even with regard to cognitive neuroscience, without entering into an arena where the science is used for the purpose of perpetuating racial stereotypes and promoting racist agendas of prejudice. The data has a social context that cannot be ignored.

Sam Harris, on the other side of the argument, has taken on the mantle of defender of free scientific discourse. He takes the position that no legitimate scientific subject should be off limits for discussion based on social objections. His view seems to be that there is no negative value to free and open discussion of data. He was upset, as was I, at Murray’s treatment at Middlebury College and invited Murray onto his podcast. Sam was said by some to be promoting a racist agenda by promoting discussion of the heritability of IQ in the context of race.

In fact, Ezra Klein joined the conversation after his website Vox published a critique of the podcast portraying Harris as falling for Murray’s pseudoscience. But that’s nothing new really; Murray surfaces and his discussion of differences in IQ between populations is denounced.

As one who knows the science and has looked at the data, it bothers me, as it bothers Harris, that the data itself is attacked. Even if Murray’s reason for looking at group differences is to further his social agenda, the data on group differences is not really surprising. Group differences in lots of complex inherited traits are to be expected, so why would intelligence be any different from height? And the genes responsible for complex traits are being explored, whether it’s height, body mass index or risk for neurodegenerative disease. Blue eyes or red hair, we have access to genomic and phenotypic data that is being analyzed. The question is whether looking at racial differences in IQ is itself racist.

I’ve surprised myself by siding with Klein in this case. His explanation of the background is here and his discussion after his conversation directly with Harris is here. Klein convincingly makes the argument that social context cannot be ignored in favor of some rationalist ideal of scientific discourse. Because we’re human, we bring our cultural suppositions to every discussion, every framing of every problem. Culture is fundamental to perception, so while data is indifferent to our thought, the interpretation of data can never be free of perceptual bias. Race, like every category we create with language, is a cultural construct. It happens to be loaded with evil, destructive context and thus is best avoided if possible, unless we’re discussing the legacy of slavery in the United States, which I think is Klein’s ultimate point.

Since these discussions are so loaded with historical and social baggage, they inevitably become social conversations, not scientific ones. Constructive social conversations are useful. Pointless defense of data is not useful; we should be talking about what can be done to overcome those social evils. No matter how much Sam would like us to be rational and data driven, people don’t operate that way. I see this flaw, incidentally, in his struggle with how to formulate his ethics. He argues against the simple truth that humans are born with basic ethics wired in, just as basic language ability is wired in. We then get a cultural overlay on the receptive wiring that dictates much of how we perceive the world.

Way back when, almost 20 years ago, I named this blog “On Deciding . . . Better” based on my belief that deciding better was possible, but not easy. In the 20 years that have passed I’ve learned just how hard it is to improve and how much real work it takes. Work by us as individuals and work by us in groups and as societies.

Why I don’t blog more

We’ve recently seen some long running blogs shut down and some reflections about the value of blogging in the era of Twitter, Facebook and Instagram. Here are my thoughts.

I’ve had this site up and running for almost 20 years now. It was part of that first wave of enthusiasm in late 1999, when tools became available to write text, upload photos and link to other writers. I had previously written for another site, The Motley Fool, mostly hosted on America Online (AOL), that had evolved out of active message board activity.

At the time, I had become interested in Decision Theory, so it was a natural topic to embrace when I started writing. Of course a lot of what I wrote then was part of the conversation we were all having between our blogs. These were conversations that you could join only if you had your own site.

As my interests shifted and my ideas developed, I found it harder and harder to write a few paragraphs that expressed ideas coherently. It seemed to be a conversation I was having with myself in front of a small group of readers. It was simpler to have the conversation completely in private in my notebooks and text files rather than present them here.

We know that writing for others is more than just communication. It serves to sharpen, refine and clarify the ideas of the writer. I know that I’ve understood an idea better when I’ve taught it to someone else or written about it.

So perhaps in the end, ODB is for me, to share with you. I’m reading William James: In the Maelstrom of American Modernism, a biography by Robert Richardson. Richardson writes intellectual biography by looking at what James read and at the notes and journals that document the intellectual journey. I can look back over the years of ODB and see what I was reading and see the sweep of my own journey through these ideas of brain, mind and decision. On balance, it’s been valuable and I expect to keep it up for the foreseeable future.

Reading: The Elephant in the Brain

I very much enjoyed reading “The Elephant in the Brain: Hidden Motives in Everyday Life” by Kevin Simler and Robin Hanson. In every bookstore there is a shelf of similar books, ranging from behavioral economics to classical neuroscience, all making similar claims as to the inaccessibility of our thought, motivation and decision making process.

My own realization that we have no access to the human decision making apparatus was the main reason I lost interest in analytic decision making tools like decision trees. I found that most decision makers rejected them as artificial, and those who embraced them tended to manipulate the results enough to render the process no better than the typical informal methods we use to make decisions large and small.

When I read books like The Elephant in the Brain or listen to Sam Harris on rational morality, I get a bit frustrated with the implicit dualism that creeps into the conversation, the implication that we are in charge of some parts of our brain even if we are not in charge of others. I was very happy to hear Sean Carroll call out Sam on this very point in a recent podcast. It’s as Gregory Bateson said many years ago: speaking of mind affecting brain is a category error. It’s brain and neurons all the way down; there’s no “I”, no ego area of the brain controlling it in whole or in part. We experience what the brain is doing through this miracle of conscious experience.

I’m quite impressed in reviewing the literature at how far neuroscience has gotten in explaining decision making at the brain level, including the neural correlates of belief and uncertainty. One of the most important conclusions is that decision making occurs during perception itself. The brain processes ambiguous stimuli as nothing until it reaches some threshold level of certainty. Only then is the perception formed enough to intrude into consciousness as it activates brain networks controlling action based on anticipated reward.
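That threshold idea can be sketched as a toy evidence-accumulation model, a much-simplified cousin of the drift-diffusion models used in the decision neuroscience literature. This is my own illustration with invented parameters, not a model from the book:

```python
# Toy evidence-accumulation sketch: noisy sensory evidence is summed
# until it crosses a certainty threshold; only then does a "percept"
# form. Parameters are invented for illustration.
import random

def accumulate(drift=0.1, noise=1.0, threshold=5.0, max_steps=10_000, seed=42):
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        # Each time step adds a weak true signal (drift) plus noise.
        evidence += drift + rng.gauss(0.0, noise)
        if abs(evidence) >= threshold:
            # Positive crossing: stimulus perceived; negative: rejected as noise.
            return ("perceived" if evidence > 0 else "rejected"), step
    return ("undecided", max_steps)

print(accumulate())
```

Until the accumulated evidence crosses the threshold, nothing reaches consciousness at all, which matches the observation that ambiguous stimuli are processed "as nothing" until certainty is sufficient.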

How I Work Now

I’ve talked about my Hobonichi habit and my Morning Pages ritual.

I recently posted about my tools on the Tinderbox user forum. Here’s a bit of a deeper dive.

When I started my PhD work in about 1978, the library was a building full of bound back issues of scientific journals. There were some useful printed aids to finding literature, like Current Contents and Citation Index.

Current Contents was just a weekly digest of the contents pages of the journals published the previous week or so. The copy circulated through the lab, starting with the boss then going to junior faculty, post-docs, graduate students and everybody else. It would be a few weeks before I got it, particularly if some procrastinator was in the circulation and had a stack on their desk they hadn’t gotten around to. Of course the alternative was just to go over to the library and flip through the four or five important neuroscience journals of the time.

Citation Index had its big update yearly with smaller periodic updates as I recall. It listed all of the articles that had been referenced that year by new publications. So one could follow the literature forward by identifying those key papers everyone cited and see who had cited them recently. Of course following the literature backward was the other side of it: having found a new paper, you could go through the new paper’s references and identify the previously published relevant literature.

Having lists of references in hand, you went into the stacks and spent days and weeks copying the literature that seemed worth reading. My process was to take notes on papers on legal pads and file the papers in manila folders (or big piles on my desk). Those notes were the raw material for writing, as I could find the original pretty quickly for detailed reference. I took notes on the basic findings and the significance for me.

I now have a system that more or less replicates the same workflow. Finding and copying the original papers is of course completely different. It’s a combination of PubMed searching and following chains of references cited by papers or linked via any one of a number of services like Web of Science, which is a descendant of the whole Citation Index system I once used.

All of my PDFs live in a Dropbox folder. I use Mendeley to provide accurate citation info, keywords, abstract and a standard title for the PDF file. I then index that folder with DEVONthink Pro, which provides me with a robust searchable PDF database. I love how I can copy PDFs from the sync database into other project-specific databases. These databases function like the manila folders full of xeroxed papers I used to have before libraries and literature became digital.

In Eastgate’s Tinderbox, I pull links to the files in DEVONthink into notes for use when I want to refer back to the original documents. The idea is to keep my notes and summaries of the literature in a central place independent from all of the PDFs. This is an idea I stole from the Zettelkasten guys. Sometimes I will clip a critical table or figure out of a PDF and paste it into a Tinderbox note for quick reference so I don’t have to copy over or rewrite summary information.

I use the idea of replicating and amplifying my habitual way of working as a touchstone whenever a new tool comes along or I read about someone’s workflow. I think it’s about adopting best practices that work for you over the long term.

Why Analytic Decision Theory Needs Rethinking

This website is now about 18 years old. I’ve been writing about computers, photography, books and investment. But in the background is a big project I refer to as ODB. “On Deciding Better.”

ODB started with my introduction to analytic decision theory when I left academic medicine to work in a biotech. I had the privilege of working with a few visionaries in drug development, who advocated a more data driven approach to making decisions about how to turn a chemical into a commercial product that improved the lives of patients. It’s been clear for a long time that the pharmaceutical industry is wasteful, best characterized by a combination of wishful thinking and poor decision making.

My biggest achievement in implementing these ideas was in developing a new sedative based on propofol collaborating with experts in state of the art techniques of mathematical modeling and clinical trial simulation. I left the company before the final approval, watching from afar the introduction of the drug to the marketplace and then its withdrawal as a commercial failure.

As I moved on to other positions in the industry, I found very little enthusiasm for wide adoption of these approaches. There were pockets of use, but more broadly no one found them useful enough to pay for their use. Trials were designed using the same flawed methodology and decisions were made as they had always been. By argument and gut feeling.

I came to the realization over time that there was a disconnect between the principles of analytic decision theory and the way people actually made decisions. In the last few years I think I’ve figured out just why analytic decision theory doesn’t work to augment human decision making.

While the basic components of a decision are represented as objective, options, values and probability are all deeply subjective. When I recast them as imagination, emotion and belief, no one would even entertain the idea that this process constituted a rational approach to making a decision.

So get a group of decision makers together and they will see the right answer immediately. Or at least they see the outcome they want the most that they believe is achievable with acceptable risk of things going wrong. It’s a fact of life in organizations with human decision makers.

Fortunately, we’re entering a time where our computational power is great enough that we can augment these intuitions with algorithms powerful enough to capture some of the real world’s complexity and provide ways to support imagination rather than diminish it.

I think the essential rethinking has to do with how the theory of decision making and Bayesian statistics are implemented by brains in our modern cultural context.
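The Bayesian statistics referred to above reduce, at their core, to one update rule: a belief revised by each new piece of evidence. Here is a minimal sketch; the prior and likelihoods are invented for illustration:

```python
# One step of Bayes' rule: revise P(H) into P(H|E) given how likely the
# evidence E is under the hypothesis and under its negation.
# All numbers are invented for illustration.

def update(prior, likelihood_h, likelihood_not_h):
    """Return P(H|E) from P(H), P(E|H) and P(E|not H)."""
    num = likelihood_h * prior
    return num / (num + likelihood_not_h * (1 - prior))

belief = 0.5                       # start undecided
for _ in range(3):                 # three independent pieces of evidence
    belief = update(belief, likelihood_h=0.8, likelihood_not_h=0.4)
print(round(belief, 3))            # → 0.889
```

The formal rule is simple; the hard part, and the point of the paragraph above, is that the priors, likelihoods and even the hypotheses a brain entertains are shaped by culture and context rather than written down as clean numbers.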