Mental Causation

A few years ago I read George Soros’ small book The Soros Lectures: At the Central European University, in which he describes how he came to conceptualize reflexivity in markets: the idea that there is a feedback loop between what people think and market reality, which in turn affects what people think. It’s mental causation, but of course it’s just a manifestation of the way brains interact socially through language; we affect each other with consequences in the real world.

In copying over notes from last year’s Hobonichi, I found a note on a similar idea of inducing negative opinions. When you merely bring up a topic with negative connotations for others, they are compelled to fill in the blanks. When I say “It’s like comparing apples and …”, you can’t help but think oranges. The word rises unbidden to mind, caused by my speech. It’s a powerful effect to have on another person because it’s reflexive and automatic. So by my mentioning a name and a situation, your negative feelings, already in place, are reflexively activated, causing you to dwell on those negative feelings and attitudes. Your brain does it, but I’ve directly caused it by my actions.

Just a thought about how powerful we are with just the power of words.

Nikon Focus Stacking and Computational Photography

An exercise in computational photography was another effort in my midwinter photography exploration. I’ve already talked about the Z7 evaluation and film transfer. Today I’ll talk a bit about focus stacking, which Nikon calls “Focus Shift”.

This is a technique that Vincent Versace presented in his Welcome to Oz book, which is now out of print and selling used at a premium. The original technique was to use multiple captures of a scene (camera on tripod) in which focus, exposure, aperture, and/or shutter speed were varied. These captures are then combined into a single image. It was used to put two different planes of focus together into a single image using the usual masking techniques in Photoshop. If you’re clever and sensitive enough to make it believable, the final image represented reality in a way that satisfied perceptual expectations, but was well beyond a straight capture in camera. I never went on to the additional step of image harvesting, where multiple views or angles are combined, just because it seemed like photomontage to me, but Vincent has pulled it off quite well.

In these latest Nikon cameras, the D850 and Z series, the autofocus system has a function in which it will step through a range of focus, capturing up to 300 images at a variable step size. This is no different from automated exposure bracketing which has been a DSLR feature for many years and used for High Dynamic Range (HDR) photographs. It’s just auto-adjusting focus.

In the B&H video 21st Century Composition Theory (Day 2): ExDR Extending the Dynamic Range of Focus and Bokeh (the quality of blur) and How to Shoot For It, Vincent suggests using all 300 available images and a small step size, since memory card space is free. Helicon Focus is used to combine the images using an edge detection algorithm. Again, it’s easy to combine stacks with different settings, like f5.6 or f8 for optimal image sharpness with f1.8 for best background blur (bokeh).
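The combination step can be sketched in a few lines. To be clear, this is not Helicon Focus’s actual algorithm, just a minimal illustration of the edge detection idea: score local sharpness in each frame with a Laplacian, then keep each pixel from the frame where it’s sharpest. It assumes aligned, same-size grayscale frames.

```python
import numpy as np

def laplacian(img):
    """Absolute 4-neighbor Laplacian as a crude local sharpness measure."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (
        img[:-2, 1:-1] + img[2:, 1:-1] +
        img[1:-1, :-2] + img[1:-1, 2:] -
        4.0 * img[1:-1, 1:-1]
    )
    return np.abs(lap)

def focus_stack(frames):
    """Per pixel, keep the value from the frame with the strongest edges."""
    sharpness = np.stack([laplacian(f.astype(float)) for f in frames])
    best = np.argmax(sharpness, axis=0)   # index of sharpest frame per pixel
    return np.choose(best, frames)
```

A real implementation would align the frames first and blend smoothly across frame boundaries rather than hard-switching per pixel, but the selection principle is the same.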

I set this little grouping up really quickly since I was out to test the technique. It’s actually three images blended: an f8 stack plus an f2.8 stack converted from RAW twice, once at a neutral white balance and once at a cool white balance for the shadows, for depth enhancement via color perception.

As an experiment, the result is interesting, though not such a great image artistically for me. I got a hyper-realism that I wasn’t really expecting, with objects really popping in the image. If you look closely you can see that the plane of focus is not natural, with focus extending back to the end of the plastic cup and forward into the shadows to the side.

It’s just one experiment, and I expect I’ll try more, since I’m often frustrated when making images by a plane of focus that is either too deep or too narrow. This technique allows extension of sharp focus to anywhere in the frame, better than the alternative I often use of shooting sharp everywhere and selectively adding artificial lens blur where I want it to direct the eye in the image.

All of these techniques are good examples of large sensor camera computational photography. The smartphone makers have really embraced these techniques, seamlessly combining multiple exposure values and views from multiple lenses. As is appropriate for this more deliberate style of image making, I’m using these techniques in controlled ways, with special purpose software like Helicon Focus and Adobe Photoshop to align and blend the images. I think we’ll see more automation like Focus Shift, capturing multiple versions of an image to be combined either in camera or in post processing to create synthetic images.

Film Transfer With the Nikon ES-2

Yesterday, I wrote about how the Z7 might be such an all around success that it could replace the digital Leica rangefinders. It may replace my aging Minolta film scanner as well. During my winter break photography project, looking for new ideas and techniques, I ran across two presentations (here and here) by Vincent Versace on the Nikon ES-2 Film Digitizing Adapter. By the way, watch both. The first is a clean, more formal presentation; the second is a more typical Versace philosophy-of-photography course about digital, film and the meaning of life.

I got the ES-2 from B&H Photo in NY and a 60mm Nikkor Micro from KEH in Atlanta. I recommend both companies for honesty, customer service and quality products. So last night I quickly digitized a few images just to try it out. Turns out it’s been at least two years since I’ve shot any film, so my eye is attuned to the current quality of digital black and white. But this wasn’t a test of film; it was a tryout of digital transfer versus film scanners.

Using the ES-2 reminds me of the time, long ago, when we made slide duplicates. The only way to get a copy of a 35mm slide used for lectures or scientific presentations was to bring it to the lab for copying. A simple method was this kind of adaptor, where you took a photo of the slide on slide film. This is just taking a photo of the negative, moving from analog to digital.

The camera gets set to base ISO, f8, autofocus on the emulsion side. The post-processing is simple: just invert the curve in any processing program and the negative is converted. Vincent uses Picture Control in camera or Capture NX-D.
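That curve inversion is simple enough to express directly. Here’s a minimal sketch in Python with NumPy, assuming the digitized negative has been converted to a grayscale array scaled 0 to 1; a real conversion of color negative film would also have to deal with the orange mask and gamma, which this ignores.

```python
import numpy as np

def invert_negative(img):
    """Convert a digitized B&W negative (values 0.0-1.0) to a positive.

    Flipping the curve turns the negative into a positive; stretching by
    the frame's own black and white points restores full contrast.
    Assumes the frame isn't perfectly uniform (hi > lo).
    """
    positive = 1.0 - img                      # the basic curve inversion
    lo, hi = positive.min(), positive.max()   # per-frame black/white points
    return (positive - lo) / (hi - lo)        # normalize to 0.0-1.0
```

Doing the same thing with a curves adjustment in Photoshop or Capture NX-D, as Vincent does, is just dragging the endpoints of the curve to swap black and white.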

Two very obvious conclusions: 1. It’s way faster to capture an image with a DSLR than with a slide scanner. The scanner is clearly an older technology, capturing point by point what can now be done simultaneously. 2. The improved resolution doesn’t really matter much. It’s a photograph of grain, which at 100% clearly shows how much better the D850 45 MP sensor can resolve compared to Tri-X. So compared to the scanner, the grain is crisper with the DSLR capture, but the image really isn’t any different. The dynamic range of the D850 was wider than every negative I tried, but I could see how some negatives might benefit from a two exposure combination.

The process definitely makes shooting film more attractive. I’ll probably bring a film camera on an upcoming trip to capture some film-appropriate images. It’s more than nostalgia, since film provides a rendering that is even more different from digital than it was a few years ago. Digital imaging in the same format is now into large format quality territory with the improvements we’ve seen in sensors and lenses. So for landscape and documentary work, it seems that digital is far and away the best medium. But for that gritty, you-are-there sense, 35mm film still provides its quick sketch of the fall of light and sense of movement.

Can the Nikon Z7 Replace my Leica M10?

On New Year’s Day, we visited the Renwick Gallery in Washington DC. This is a smaller museum in the Smithsonian focusing on American Art and Craft. It’s a joy when a museum encourages photography.

A welcome sight

So I took full advantage of the low light capability of the Z7 to collaborate with the artists on view to create some of my own art based on their art.

The Z7 works well as a travel camera. With the 50mm f1.8 attached it weighs just as much as my Leica M10 with the Summicron APO 50mm f2.0 ASPH. It occupies more space, but it’s no more attention grabbing than the Leica. I don’t have the Nikkor 24-70mm f4 zoom, but extended it does look a bit more like a professional piece of gear. Shooting mode is generally aperture priority, minimum shutter speed 1/125 second, and Auto ISO as high as it’ll go. Both cameras are much better at setting exposure than I am, and I can blend processed RAW images to bring down highlights and lighten blocked-up shadows, as long as the image isn’t blown out.

Interestingly, I’ve found over the last few years that using live view is more acceptable in public than looking through a viewfinder. Maybe it’s just that we’re used to seeing smart phone photography and viewfinder based cameras seem more intrusive. Maybe it’s that you can see the photographer’s face and the camera attracts less attention. With Live View on either the Nikon Z7 or the Leica M10, it’s possible to take photos looking at the area and glancing at the subject.

The Z7 adds several features that aid unobtrusive shooting. The LCD on the rear tilts, so the camera can be low on a table or at the waist, out of the line of sight. For people, autofocus with face recognition handles the focusing automatically. Silent shooting makes the Z7 completely quiet, again avoiding the attention drawn by that characteristic shutter sound. The Z7 also has a higher pixel count and sensor stabilization. There are other advanced features of course, but they’re not in use for this kind of travel photography.

So the question arises as to whether I could sell off all of my other cameras (M10, Monochrom, D850) and just use the Z7 exclusively. I’ll need more data on that, but I think I’ll be selling the D850, as I don’t like the weight and bulk. My Nikon glass will work with the Z7, so the D850 is redundant. Next, I’ll need to try a Leica M to Nikon Z adaptor to see how Leica glass works with the Z7. But I’ll need to look critically at my image library and do some camera rotation to decide whether the compact form rangefinder has real advantages over the technologically advanced Nikon Z7.

Film? It’s not going anywhere and in fact I plan to try some digital negative transfers with Nikon’s new ES-2 adaptor on the Z7.

When Blogs Were Journals

Looking back on my history of writing on the internet, I came across this nice personal history written by another early EditThisPage user, Frank McPherson, who wrote Notes From The Cave.

I don’t think it was the change to titles that did in blogging; it was the move to writing articles rather than journaling, a larger conceptual shift. Looking back through those early sites, they were mostly quick posts of links, comments and passing thoughts. And indeed, this is a space now occupied by Twitter and other social media. Social connection in the early days of blogging was easy since the world was small. Twitter and Facebook provided scale for both personal and private networking, so it’s natural that the infinity of small island blogs like this faded away.

Over the years, this site has been found because of long form reviews or observations that rank in Google searches. Any other readers are long time net friends and family. The photos I feature on most pages are decorative; they can’t be found by search engines. I have Flickr for my photographic social network, a place that seems to be recovering from Yahoo’s neglect at this point.

19 Years of Deciding . . . Better

As Hal Rager at Blivet points out, we’ve been blogging for 19 years now, since Dave Winer’s EditThisPage. I can’t say it’s been continuous over that time, but it’s been an ongoing project. I always get a kick out of reading the first page I wrote here: Imagination as Simulation.

Over the years, I’ve started and stopped work on a long form version of ODB, which I guess most of the world would call a “book”, but the scope of the project has always proved to be overwhelming. The work goes on behind the scenes, with lots more reading on brain mechanisms of deciding, if you look back over the 10 posts I made in 2018.

I think I can see the outlines of a workable synthesis, but the way the brain works seems to be very, very alien compared to what we perceive. That’s not surprising, since neurons and their networks aren’t accessible to conscious awareness, so they work very differently from the way we would guess or by analogy to mechanical devices. It’s been clear for a long time, given optical illusions, that most of awareness occurs automatically. In a way it looks like Sherrington was right, way back in the 1920s, that it’s all down to reflexes that act to govern the body via analogues with the external environment. I think we now understand that awareness is built up from differences between the expected state and the contents of sensory input.

And Scully the cat’s brain mechanisms seem to be very much like our own, minus the symbolic environment we create through culture, since we are the social animals born with the mechanisms in place to use language.

And yes, it’s a bit scary to realize that our perceptions and actions aren’t based on any kind of rational engine, but instead on the brain models we’ve developed over a lifetime of sensory experience. We don’t choose what we see and we don’t choose how we react. Yet I think that it points to relatively simple approaches to deciding better, mostly by being in better, more informative environments that nurture our best selves. And taking time to use imagination as simulation to provide more options for better decisions. Coming full circle.

Nikon Z7: The New Digital Benchmark

I ordered Nikon’s first mirrorless offering, the Z7, as soon as it was announced. I got one of the first shipments through my great local camera store, Service Photo here in Baltimore. I played around with it a little and thought the image quality was the equal of the Nikon D850 in a smaller package. I had the FTZ adaptor to try my collection of Nikkors, but it kind of kills the small package. So when I traveled to Italy twice, I brought the Leica M10 as a small travel kit, and one drive to Pittsburgh seemed best suited to the D850 since it has the tripod mounting plate attached.

But as I mentioned, I’ve returned to photography these last few weeks, and when leaving the house I’ve grabbed the Z7 with the new 50mm f1.8 lens. I’m finding that the images I can produce are at a level I’ve never seen before. With the 45 megapixels, the lack of an antialiasing filter, and lenses designed for that really wide mount, I get an almost large format feel in lots of the images.

This image is a scene I see several times a week as I drive to the gym in Owings Mills on Park Heights Avenue. The winter landscape had that gentle light coming at me, so I stopped in the middle of the road, opened the door, leaned out and grabbed this image. There wasn’t any traffic behind me, and the rear LCD let me frame at that awkward angle, resting the camera on the window frame.

The only real post-processing trick in the image was converting the RAW file twice, once at normal exposure and once at two stops under, to get as much detail in the highlights and sky as possible. So it’s a single exposure HDR image, if you wish, converted to a chromatic grayscale image using Nik Silver Efex.
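The blend itself amounts to a luminance-weighted mix: keep the normal conversion everywhere except where it approaches clipping, and fade to the darker conversion there. A rough sketch of the idea; the threshold and ramp width here are made-up illustration values, and in practice I do this by hand with masks in Photoshop rather than in code.

```python
import numpy as np

def blend_exposures(normal, under, threshold=0.7, softness=0.15):
    """Blend two conversions of one RAW file (values scaled 0.0-1.0).

    Uses the normal conversion in shadows and midtones, ramping over to
    the two-stops-under conversion where the normal one nears clipping.
    """
    # weight is 0 below the threshold, ramping to 1 across the softness band
    w = np.clip((normal - threshold) / softness, 0.0, 1.0)
    return (1.0 - w) * normal + w * under
```

A hand-painted mask does the same job with more local control; the advantage of starting from one RAW file is that the two conversions are already perfectly registered.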

It’s one of those images that I know I saw, I know I captured, I know I coaxed this out of the RAW file, but can’t quite believe the result. So I’ll give the Z7 a lot of credit.

Winter Break Project: Photography

Every year during the Christmas – New Year period, when work slows down, I take on a project. My first web page was built many years ago when I learned HTML for the first time. This year I had planned to work on my long term project to write a book based on my explorations here at On Deciding . . . Better, but somehow I fell back into photography again. My Capture One catalog tells the story: lots of casual iPhone shots, with a few collections of images associated with travel. Newfoundland this past summer, Milan this fall. But not much real image production.

I played around with a few iOS image tools, thinking that if I got images onto the iPad, I might spend more time doing the digital darkroom work. Fortunately, I noticed in a B&H Photo email that my photographic mentor, Vincent Versace, was doing some live sessions in the B&H Event Space. The series 21st Century Composition Theory sounds like it’s the basis of a new book or two from Vincent. His Oz books are out of print now because of the demise of the publisher, Rocky Nook. The sessions are a fine example of his approach to photography, using techniques familiar to those who have worked through the Oz books. The first session, The Journey is the Destination, a Live Fire Demo of Post Processing an Image From Vincent’s Most Recent Trip, is a tour of creating an image, from color management in camera, through RAW conversion and Photoshop processing, to printing. The second presents a computational photography technique using Nikon’s “Focus Shift” called ExDR Extending the Dynamic Range of Focus and Bokeh (the quality of blur) and How to Shoot For It. And the last, The Conversational Portrait, shows how silent shooting plus facial recognition can change the way you shoot portraits. But really, in a way the focus of the sessions is beside the point. It’s really about the overall approach to image capture and the cinematic post-processing in Photoshop.

Watching those videos swung me back into image making. For me, it was a reminder of how much of the interest in an image comes after the RAW file is loaded into the computer. My entire artistic pursuit is simply framing interesting visual encounters with a camera and pulling that through to an image that tells the story of why a viewer might find it interesting.

And I’ll gladly admit it’s much more accessible than my thoughts about decision making, so in some way a pursuit of the easier path to truth by looking at what works.

Why I Sold All of My Apple Stock After 14 Years

Back in the day, I used to discuss investing and my portfolio here at OD…B. But the last 10 years have been uninteresting with an unrelenting bull market recovering from the disaster of the 2008 financial crisis up through the 2016-2018 “Trump Trade”. I believe that the run is coming to an end as the globalized economy splinters and Nationalism rises in the world.

Back in 2004, in the aftermath of the 2000 DotCom crash, I bought Apple stock. I had sworn never to invest in that failed company again, but with the success of the iPod with those white earbuds, combined with the new iTunes Music Store, I bought into Steve Jobs’ vision of the Mac as a Digital Hub. In the end it was the iPhone that drove Apple’s valuation to its current stratospheric heights as the largest company by market cap.

Over the last few years, I began valuing Apple as a typical blue chip, based on its dividend payments. Apple became a huge company that was going to be judged on growing earnings and paying out a portion of the profits to investors. Apple’s been paying a dividend for about 5 years now, initially yielding 2.5%. It’s generally stayed in a range of 1.5 to 2%, but the yield reached as low as 1.2% a few months ago as the market peaked. I looked at that and took it as a clue that the stock was overvalued.
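The yield math runs in reverse to imply a price. A quick sketch with hypothetical numbers; the dividend figure here is purely illustrative, not Apple’s actual payout.

```python
def price_for_yield(annual_dividend, target_yield):
    """Price at which a stock's annual dividend produces the target yield.

    Since yield = dividend / price, the implied price is dividend / yield.
    """
    return annual_dividend / target_yield

# Hypothetical $2.92 annual dividend per share:
peak_price = price_for_yield(2.92, 0.012)   # priced to yield 1.2%
fair_price = price_for_yield(2.92, 0.020)   # priced to yield 2.0%
```

With a fixed dividend, moving from a 1.2% yield back to 2% implies the price falling by 40%, which is the size of the decline this kind of valuation anticipates.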

Now this is in combination with the original investment thesis I had for Apple, as the center of the home computing ecosystem. And the iPhone/iPad/Mac combo has done just that for me over the years. The iPhone camera connected to social media has killed both the camera industry and professional photography. The Apple Watch has decimated the mid price quartz watch market. But at this point, I don’t see the growth driver. I think the iPhone X is a nice iteration, but I see lots of older phones out there and a sentiment that some of this technology is just unnecessary. There’s a reaction against social media and connectedness.

And Apple has not introduced a new category killer since the Watch. The AirPods are a nice accessory and I see them everywhere now. The HomePod and AppleTV haven’t gained real dominance because the streaming services and cable companies can’t be displaced by hardware.

Worse, Apple’s strategy of pushing prices up is now seeing consumer resistance. And if economic times turn tougher, it will be a completely untenable strategy. For years we’ve had stable prices for electronics, with gradual improvement that made upgrading worth it from time to time.

So, I believe Apple’s stock price will fall back to where the dividend yield supports the value, back to yielding in the 2 to 2.5% range. Then I’ll be a buyer, as long as the company remains strong and the product lineup attractive.

Waiting for Brain Science

It was back in High School that I became fascinated by the workings of the mind. I was doing lots of improvisational theater and acting in plays and saw how I and others could transform into new identities at will. I realized that we did this in everyday life as we slipped between hanging out with friends and behaving (or not behaving) according to norms in school. Mind altering substances were everywhere, so reality could easily be demonstrated to be a mental construct, not the universal truth we all pretended it was.

Eventually, I put the arts into the background and pursued the science of mind. A combined MD / PhD program led to training in Neurology and now a long career in developing new treatments for brain diseases. Given the state of cognitive science at the time, the practical pursuit of understanding neurological disease seemed more likely to lead to a real contribution.

I now have the luxury of returning to exploring cognitive science 30 years later. Real progress has been made on many fronts, though much remains obscure. I’m particularly struck by how clearly we now see the process of perception of complex scenes and symbols. When I was in college we were just beginning to understand the tuning of neurons in the primary receiving areas of the cerebral cortex. Now we have a picture of how shapes and words are recognized in the visual regions of the brain through activation of tuned networks across the regions of cerebral cortex devoted to sensing the visual world.

Decision making plays a very specific role in the sensory systems. If the system is primed by a preceding stimulus, say a lion’s roar, the sensing areas are readied and more likely to detect a cat among the noise. Or, deciding to look for the color red, suddenly every red shape jumps out from the background, even though just before they were just part of the background.

Most remarkably, these sensory decisions take place in the primary receiving areas, preventing the perception of anything else. And these decisions are generally not at all accessible to consciousness. We’re not aware of how we change our perception to fit context, since it occurs via basic feed-forward mechanisms.

This is unconscious bias, but of a sort never imagined by philosophers and sociologists. It’s built into the apparatus of perception from the very first steps of visual perception, impossible to control directly, just influenced by the ongoing flow of brain action and reaction.

It’s not really where I thought cognitive science would end up, and it makes deciding better so much more difficult if the decision process begins with these brain circuits determining what is seen in the first place.