And now Newtown, Connecticut

I was going to take the easy way out today, in the wake of the Newtown, CT massacre, and say that feelings are indescribable or maybe publish a blank image.  But then the images started coming in and they are already iconic.  There is a picture of a young woman screaming in anguish in a way that surpasses anything that Edvard Munch ever dreamed of in his worst nightmare.  This poor woman, we learn, was awaiting news of the fate of her sister, who worked at the Sandy Hook School.  That image is now burned into my brain forever.  That woman is us. She is every sister, mother, and daughter.

The image is stored in my brain along with others: images of concentration camps, of My Lai, of 9/11, of Columbine, and Aurora.  The list goes on and on.  There are too many of these images and too many events that they represent.

Stephen King is a master of horror fiction.  And I have read that the essence of his mastery lies in the everyday, commonplace settings of his horror stories.  Well friends, need I say any more?

The horrible frequency of these events is a measure of our societal depravity.  More alarming, perhaps just as tragic, is the view that we can do nothing about them.  The fault lies not with our politicians, but within ourselves.  As members of a free society, it is time to act.

The erased and missing victims

A reader has brought to my attention a recent article in the New York Times that follows up on our discussion of responsibility in events and photography.  This article by Maurice Berger, in the blog “Lens,” entitled “Lynchings in the West, Erased From History and Photos,” features the work of photographer Ken Gonzales-Day, in particular his “Erased Lynching Series.”  The subject fits in well with our discussion of photography ethics and of the effect of the gruesome on the observer.

Gonzales-Day has documented the mob violence and lynchings that occurred in California during the nineteenth and early twentieth centuries.  Unlike the Jim Crow lynchings of African Americans in the South, which were conducted under a veil of anonymity, lynchings in California were perpetrated predominantly against Mexicans, Native Americans, and Asians and were glorified in the name of “frontier justice.”  Gonzales-Day has done two things: first, he has taken contemporary postcards and photographs of these infamous events and removed the victims; second, he has revisited these sites and rephotographed them.

The victimless images have a profound effect.  While in the originals the eye is invariably drawn and held riveted to the unfortunate victim, here you look instead into the sometimes smiling, sometimes blank faces of the mob of executioners.  And there is always something obviously missing that creates a sense of ambiguity.  Berger discusses the views of Belgian critic and curator Thierry de Duve: “pictures of atrocities, shocking and disquieting as they may be, result in a ‘vanishing of the present tense.’ Distilling a complex, morally troubling event into an instant, they suspend viewers in a limbo in which they are inevitably ‘too early to witness the uncoiling of the tragedy’ and ‘too late, in real life,’ to do anything to prevent it.  For Mr. de Duve, this renders pictures of bloodshed particularly disconcerting — almost unbearably so — by intensifying our sense of helplessness before history.”

I think that the same may be said of the “subway tragedy” discussed in yesterday’s blog.  Vision is our dominant sense.  As a result, the photographic image can affect us profoundly.  Photographs can be strong medicine.  On the one hand, they may fill us forever with profound sublimity.  On the other hand, they may haunt us forever with gruesome reality.

Ethical questions raised by the subway tragedy photos

I was sitting in a Rochester, Minnesota hotel restaurant having breakfast last Friday (and thus suitably insulated from the rest of the world), reading Laura Petrecca’s column in USA Today, “Series of subway tragedy photos raises some questions.”  A journalist takes pictures of a man pushed in front of a train seconds before the man is struck and killed.  The ethical questions are profound.

  • Should the journalist, who claims he was too far away to help, have taken the pictures or rushed in vain to the man’s aid?
  • Should the NY Post have published these pictures and pandered to sensationalism?
  • Why are readers drawn to such macabre images?
  • Should spectators on the scene have taken endless iPhone and Android images and movies to post on the web? Was it, as Ms. Petrecca suggests, to feed the insatiable beast of the internet in an attempt to achieve a moment of notoriety?

Sincerely, I really don’t know the answers and am quite interested in what readers think.  In such matters I tend to fall back on what my mother taught me about right and wrong and, yes, good taste.

  • What the journalist could or could not do is in his own conscience.  If he was indeed helpless then, perhaps, the question becomes one of motivation.  Was he reflexively documenting or was he from the very first thinking of profiting from tragedy?
  • My mother taught me to never read the NY Post.
  • Voyeurism is widespread, if despicable.  Everyone rubbernecks to some extent.  The important point is whether you are repulsed, and whether you feel guilty about being drawn in in the first place.  You’ve got sadism on the deplorable end of human emotion and empathy on the other end – again, I never read the NY Post.  When I saw the images on the news, my reaction was: OMG, that poor man.
  • As for the internet beast, there’s some of that.  But I think significant, too, is the reality that taking camera to hand, or eye, is abstracting.  You cease to be a participant in events and become instead a spectator, a documenter, a chronicler.  It is a defense mechanism.

Laura Petrecca goes on to quote John Churman, a leader in the NY Society for Ethical Culture: many photo takers have been “desensitized” by watching traditional news media do “unseemly” things, such as stick a microphone in the face of a distraught person to probe their feelings – something “invasive and intrusive” – and then they go on to mimic it.

My mind returns again to Eddie Adams’ “Brigadier General Nguyen Ngoc Loan Executing the Viet Cong Guerilla, Bay Lop, 1968.” What is it that makes Adams’ photograph acceptable photojournalism, and the image of the subway tragedy not?

Bokeh, what it is

I have been reading a lot about different digital camera lenses in my research on image sharpness.  There are words that you hear so often that it can get a bit annoying – meaningless words like “tack sharp.”  Another term that keeps cropping up is “bokeh.” If you read a lot about photography you’ll keep running into it as well.  So I thought that it might be worth defining it.

Bokeh is the aesthetic use of the out-of-focus regions of an image to create pleasing effects.  It is a consequence of the lens’s depth of field, or lack thereof.  Take a look, for instance, at my picture “Lady Slipper,” in my “Cabinet of Nature Gallery.”  I could have photographed against a uniform black background; instead, the out-of-focus Windsor chair and other elements of the background add aesthetically to the image.

The word “bokeh” comes from the Japanese word “boke,” which means “blur.”  The (h) is added so that we remember to pronounce the (e).  Usually it is pronounced “boh-kay,” sometimes “boh-keh” – but never “boke” (rhyming with broke or bloke).  Remember this the next time you are in a genuine pizza shop ordering a calzone.  Italians have never met a silent (e).  Calzone is not like school zone; it is pronounced “cal-zo-nay.”  Mangia!

Sources of noise in digital photography

Perhaps, it is worth saying a bit more about noise in digital photography, because I don’t want to leave you with the impression that the only type of noise is the counting noise that we have been discussing.  Every element of an electronic device, like a digital camera, can generate noise.

For the kinds of detectors used in digital cameras, CCDs and CMOS detectors, there are, generally speaking, three types of noise:

  1. Dark or thermal noise, which reflects the fact that above absolute zero (and we are far above absolute zero) the random thermal motion of electrons can cause electrons to appear in the pixel well as if they had been created by photons.  Put differently, heat rather than light produces the signal.
  2. The kind of counting noise that we have been discussing.
  3. Read-out noise – noise that results from the transfer of electrons between adjacent pixels as they are read out of the detector.  This transfer is often described as a bucket brigade, where water can spill between buckets as it is moved along.
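These three contributions are easy to play with numerically.  Here is a minimal Python sketch – all of the sensor numbers are invented for illustration – that models the counting (shot) noise and the dark noise as Poisson processes and the read-out noise as Gaussian:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sensor: 100 x 100 pixels, uniformly lit (made-up numbers).
mean_photons = 1000          # average photo-electrons per pixel during the exposure
dark_electrons = 25          # average thermally generated electrons per pixel
read_noise_sigma = 5         # RMS read-out noise, in electrons

# 1. Counting (shot) noise: photon arrivals are Poisson distributed.
signal = rng.poisson(mean_photons, size=(100, 100))

# 2. Dark/thermal noise: thermal electrons are also Poisson distributed.
dark = rng.poisson(dark_electrons, size=(100, 100))

# 3. Read-out noise: approximately Gaussian around zero.
read = rng.normal(0, read_noise_sigma, size=(100, 100))

pixels = signal + dark + read
print(f"mean = {pixels.mean():.0f} electrons")
print(f"std  = {pixels.std():.1f} electrons")  # roughly sqrt(1000 + 25 + 5**2) ~ 32
```

Because the three sources are independent, their variances add – which is why the measured spread comes out near the square root of the summed variances.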

There are two other forms of detector noise to consider – though these might better be thought of as kinds of background in your image.  Both are randomly distributed across your detector but always occur at the same positions.  These are:

  1. Random but permanent variations in pixel sensitivity including hot (bright) pixels or dead (black) pixels.
  2. Random fuzzy areas on your detector due to dust.
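Because these defects sit at the same pixel positions in every frame, they can largely be removed by subtracting a “dark frame” taken with the lens cap on.  A toy Python sketch – the defect counts and levels are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (64, 64)

# Hypothetical fixed pattern: 20 hot pixels at random but permanent positions.
hot = np.zeros(shape)
hot_pixels = rng.choice(64 * 64, size=20, replace=False)
hot.flat[hot_pixels] = 500          # hot pixels read bright even in the dark

def capture(light):
    """Simulate one frame: scene light + the fixed hot-pixel pattern + shot noise."""
    return rng.poisson(light + hot)

# A dark frame (lens cap on, light = 0) records only the fixed pattern...
dark_frame = capture(np.zeros(shape))

# ...which can then be subtracted from a real exposure.
image = capture(np.full(shape, 100.0))
corrected = image - dark_frame

# The hot pixels stand out in the raw image but not in the corrected one.
print(image.flat[hot_pixels].mean(), corrected.flat[hot_pixels].mean())
```

The subtraction cannot remove the shot noise, of course – only the repeatable pattern.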

Finally, the amplifier and other camera electronics can add noise to your image, or even systematic patterns such as banding, seen at low intensity.  If the amplifier has a temporal oscillation, for instance, then because the image is always read out in the same way, the oscillation will appear as spatial bands in the image.

We will have the opportunity to discuss noise more in the future.  For now, the important points are these: there is always noise; some noise gets amplified along with the signal as you raise the ISO setting; and noise ultimately limits image sharpness and resolution.

Counting noise, ISO, graininess, and image sharpness

Now that we’ve introduced the concept of counting or signal noise, we are in a position to also discuss graininess and its relationship to image sharpness.  The important point to remember from our discussion of counting noise is that because of the random nature of the incoming photons of light, even a uniformly lit surface will show pixel-to-pixel variation in intensity.

Now your digital camera may be thought of as having three components as illustrated in the schematic shown at the top of Figure 1. These three elements are:

  1. A sensor that builds up electrons in response to the light (subject to the laws of randomness: the smaller the count, the greater the relative variation).  The individual pixel elements are miniature capacitors, which means that they produce a voltage directly proportional to the number of electrons and therefore to the intensity of the light.
  2. An electronic amplifier that can multiply the voltage, increasing its size.
  3. An analog-to-digital (A/D) converter that converts the voltage to a digital number that can then be stored either on your camera or on your computer.
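These three stages can be caricatured in a few lines of Python.  Every number below – full-well capacity, bit depth, gain – is an invented placeholder, not a real camera specification:

```python
import numpy as np

rng = np.random.default_rng(1)

def camera_pipeline(mean_photons, gain, full_well=20000, adc_bits=12):
    """Toy model of the three stages (all numbers invented for illustration).

    1. Sensor: Poisson photon counting -> electrons -> voltage.
    2. Amplifier: multiplies the voltage by 'gain' (the ISO control).
    3. A/D converter: quantizes to an integer digital number.
    """
    electrons = rng.poisson(mean_photons, size=10000)        # stage 1
    voltage = gain * electrons / full_well                   # stage 2 (0..1 scale)
    dn = np.clip(np.round(voltage * (2**adc_bits - 1)), 0, 2**adc_bits - 1)
    return dn                                                # stage 3

low = camera_pipeline(mean_photons=100, gain=1)
amplified = camera_pipeline(mean_photons=100, gain=10)

# Amplification brightens the image -- but the relative scatter is unchanged.
for name, dn in [("gain 1", low), ("gain 10", amplified)]:
    print(f"{name}: mean DN = {dn.mean():6.1f}, relative noise = {dn.std()/dn.mean():.3f}")
```

Note that the relative noise is essentially the same at both gains – the amplifier raises signal and noise together, which is the point of the discussion that follows.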

Now, below this schematic on the left-hand side I’ve drawn a little graph that shows the intensity from four pixels in your image, presumed to receive the same light intensity.  However, because the intensity is low there is a lot of variability, due to counting noise.  Also, because the intensity is low, it’s going to appear black or near black in your image.  So we have to amplify it, say by a factor of 10X.  This brightens things up.  However, as shown on the right-hand side, the variability gets amplified as well.  This is what we call graininess in an image.  Also, as we have previously discussed, this kind of noise limits our ability to distinguish things in our image.  So the resolution isn’t so good.

To overcome this problem, we can instead lengthen the exposure, which fills the pixels with more electrons and jacks up our overall intensity.  That is, we need less amplification, or none at all.  This is shown at the bottom of Figure 1.  More electrons mean less relative variability, or graininess, and better resolution.
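A quick numerical check of this trade-off – amplifying a dim exposure versus simply collecting more light – looks like this in Python (the electron counts are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels = 100000

# A dim scene: on average 100 electrons per pixel in a short exposure.
dim = rng.poisson(100, n_pixels)

# Option A: amplify 10x (raise the ISO). Signal AND noise scale together.
amplified = 10 * dim

# Option B: expose 10x longer, collecting 10x the electrons.
long_exposure = rng.poisson(1000, n_pixels)

print(f"amplified:     mean {amplified.mean():.0f}, relative noise {amplified.std()/amplified.mean():.3f}")
print(f"long exposure: mean {long_exposure.mean():.0f}, relative noise {long_exposure.std()/long_exposure.mean():.3f}")
# Same brightness, but the longer exposure is about sqrt(10) ~ 3.2x less grainy.
```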

Figure 2 – Increasing electronic amplification or ISO causes image graininess and reduces image sharpness

So you need two camera controls here.  To control how much you fill the pixels with electrons, you need to be able either to expose for a longer time or to decrease the f-number.  When you cannot do this – because you need a high f-number for depth of field, or a faster exposure to stop action or hand motion – you need to be able to control the amplification.  That’s your ISO setting.  The higher your ISO, the higher your amplification.  In the old film days, an ISO of 100 was considered normal; high-ISO films were fast, grainy films; low-ISO films were slow, fine-grain films.
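The bookkeeping among shutter time, f-number, and ISO can be sketched in a few lines.  The helper function here, stops_of_light, is a made-up name for the standard stop arithmetic (exposure doubles with shutter time and ISO, and falls with the square of the f-number):

```python
import math

def stops_of_light(shutter_s, f_number, iso):
    """Relative exposure in stops (arbitrary zero point); a made-up helper."""
    return math.log2(shutter_s) - 2 * math.log2(f_number) + math.log2(iso)

base = stops_of_light(1/100, 8.0, 100)

# Stop down two stops for depth of field (f/8 -> f/16) while keeping the
# same shutter speed: the ISO must rise two stops (100 -> 400) to compensate.
compensated = stops_of_light(1/100, 16.0, 400)
print(base, compensated)  # the two exposures come out equal
```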

Let’s take a look now at the images of Figure 2.  Don’t worry about the slight color differences between top and bottom.  The top image was taken at ISO 100 (low electronic amplification).  When you blow up the caroler’s face there’s very little variation or grain, and the sharpness is quite good.  The bottom image was taken at ISO 6400 (high electronic amplification).  When you blow up the caroler’s face there’s a significant amount of variation or graininess, and, as a result, a loss of sharpness or resolution.

The “Blue Marble”

Today marks the fortieth anniversary of the taking of a remarkable photograph on December 7, 1972.  The image is of the planet Earth and was taken by the astronauts of Apollo 17, then racing to the moon.  The image is often referred to as “The Blue Marble.”  It conjures up the same thoughts now as it did then.  We are all connected.  The world is a beautiful place.  We possess tremendous technological power, which we may choose to use for good or evil.  Why do we continue to kill each other, pollute, and destroy our beautiful planet?

Figure 1 – “The Blue Marble” taken by the astronauts of NASA flight Apollo 17, December 7, 1972

Joseph Petzval and the dawn of modern lens design

Figure 1 – An example of one of Joseph Petzval’s lenses.  By Szőcs Tamás (Tamasflex) (own work; lens photo by Андрей АМ) [CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0) or GFDL (http://www.gnu.org/copyleft/fdl.html)], via Wikimedia Commons

In reading the January 2013 issue of Shutterbug (Is it really 2013 already?), I came upon an interesting excerpt from “Pring’s Photographer’s Miscellany” about Joseph Petzval (1807-1891) that was worth looking into further.  Petzval was a Slovakian physicist and eventually chair of Mathematics at the University of Vienna.  He was the first person to design camera lenses mathematically; in that respect he may be considered the father of modern lens design and of the application of geometric optics to photography.  There is a wonderful Antique and Classic Cameras website that offers an excellent discussion of early camera lenses.

The bottom line is that early Daguerreotypists were plagued by the very high f-numbers of their cameras and the long exposures that these consequently demanded.  The original lens that Daguerre used was a cemented achromatic doublet, a Chevalier 15-inch telescope objective with an f-number of 15.

Petzval’s innovation was to design a pair of doublets, where the second doublet compensated for the aberrations of the first.  This produced a sharp image at the center with a softer focus off center, as well as vignetting at the edges.  Such a lens was ideal for portraiture, where the goal is to focus the eye on the subject, not the background.  Indeed, Petzval lenses are still manufactured today to achieve just this effect.

At f/3.6 this lens was roughly 17 times faster than the original Chevalier lens (lens speed scales as the inverse square of the f-number, and (15/3.6)² ≈ 17).  This made sitting times for portraits manageable, even for children.  In this respect, Petzval may be credited with contributing significantly to the development of modern photography.

Petzval initially teamed with Voigtländer & Sons to produce his lens.  Ultimately, Petzval fell out with Peter Wilhelm Friedrich von Voigtländer (1812–1878).  Patent laws were problematic in the 19th century; Voigtländer continued to produce copies of the lens design, and there was a dispute as to who held the legal right to manufacture these lenses.  In 1859, Petzval’s home was burglarized and his manuscripts and notes were destroyed.  He was never able to reconstruct all of his previous work, and he ultimately withdrew from society and died a destitute hermit in 1891.  This seems, sadly, to be a common epilogue for the photographic geniuses of the 19th century.

Random noise, counting noise

So let’s build on what we know about noise. The operative word for the day is random.  Suppose we have a camera sensor and we look at what is happening to a single pixel.  Let’s suppose that there are about 100,000 photons of light randomly hitting that pixel every second.  The pixel builds up charge in response to the light (which means it converts those photons to electrons and somehow counts them).  For now, we’ll assume one photon creates one electron.

OK, now assume that you read the amount of light every microsecond (one millionth of a second).  What this means is that most of your microsecond intervals are going to be empty.  In fact, only one in ten on average will contain a photon.  Put another way, the probability that any given microsecond contains a photon is 1/10.  By the way, this means that the probability of catching two photons in an interval is roughly (1/10) × (1/10), or 1/100, or 1%.

The point is that there is a lot of variability.  Because the photons are discrete and their delivery random, you cannot get 0.1 photons in an interval, even though that is the average.  Most (~90%) of the time you get 0.  Some (~10%) of the time you get 1.  And rarely you get 2, 3, 4, etc.
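You can run this counting experiment yourself in a few lines of Python, modeling the random photon arrivals as a Poisson process with an average of 0.1 photons per interval:

```python
import numpy as np

rng = np.random.default_rng(3)

# 100,000 photons/second read out every microsecond -> an average of
# 0.1 photons per interval (a count no single interval can actually hold).
n_intervals = 1_000_000
counts = rng.poisson(0.1, n_intervals)

for k in range(4):
    frac = (counts == k).mean()
    print(f"{k} photons: {frac:.3%} of intervals")
# ~90% of intervals hold 0 photons and ~9% hold 1; doubles come out near
# 0.45%, a bit under the rough back-of-the-envelope 1% estimate.
```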

If you instead count over 2-microsecond intervals, ~80% of the time you get 0 and ~20% of the time you get 1.  The odds of a 2 go up to about 2%.  Basically, the longer the interval, the less relative variability you get.  However, unless you count for an infinite amount of time there’s always going to be variability.

Here’s the key.  Suppose your exposure (that’s your measurement interval) gives you 10,000 photons; then there’s an uncertainty of the square root of 10,000, or 100, in the number of photons detected.  OK, I have to say it accurately at least once.  The laws of probability predict that if you made your measurement (exposure) a lot of times, 68% of the measurements would fall between 10,000 - 100 (that’s 9,900) and 10,000 + 100 (that’s 10,100).  This corresponds to a 1% uncertainty or variability.  I’m sorry I had to say it.
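The square-root rule and the 68% figure are easy to verify numerically.  A quick Python check, repeating the hypothetical 10,000-photon exposure many times:

```python
import numpy as np

rng = np.random.default_rng(4)

# Repeat the 'exposure' many times; each collects on average 10,000 photons.
measurements = rng.poisson(10000, size=100000)

print(f"std of measurements: {measurements.std():.0f}")   # ~ sqrt(10000) = 100

# Fraction of exposures falling within +/- 100 of the mean: ~68%
within = np.abs(measurements - 10000) <= 100
print(f"fraction within +/-100: {within.mean():.1%}")
```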

Figure 1 – Image of a grey wall and the corresponding histogram of measured intensities

The really important take home message here is that ultimately you’re counting photons (really electrons) and there’s always going to be variability, or noise, by virtue of the random way that light is delivered.

OK, so let’s prove it.  In Figure 1, I show a picture that I took of a homogeneous grey wall.  Superimposed on this image is the histogram of the grey values.  While they’re tight, they’re not all the same.  There is variability, or noise, even in this homogeneous image.  Engineers refer to this as counting noise, and it’s inescapable.