If light is the most important thing in photography, then certainly the Histogram must be one of the photographer's most important tools in post-processing.
Histograms can reveal a lot about a pic. Here are the most common traditional uses:
Identify which areas of a pic are overexposed and "blown out"
Identify which areas of a pic are underexposed and blacked out
With an RGB Histogram you can effectively do the same within each colour channel
A quick glance at a histogram can reveal whether it is a high contrast or low contrast pic
It can also reveal if an image is a predominantly high key or low key image
What I am proposing, though, is another piece of information altogether – an indication of whether an image is – on an "average" basis – over-exposed or under-exposed.
Don't get me wrong, this information is already all there. And many photographers are already using this "aggregate" information even if they don't realise it. A glance at the graph of any Histogram will give you an indication of whether there are more underexposed pixels or overexposed pixels. But it involves a bit of guesswork, especially if your graph has lots of peaks and valleys (in fact it reminds me of my previous days as an investment technical analyst poring over stock charts trying to glean trends).
WHY THIS IS REQUIRED NOW MORE THAN EVER
First, a bit of background on how I arrived at this point. For the past 6 months I have been doing a lot of HDR work. And although Lightroom is where most of my post-processing happens, when I'm doing HDR the very first step is roundtripping the 3 exposure-bracketed pics to Photomatix and then re-importing the "merged" image back into Lightroom. That is usually the point at which I first look at the Histogram.
As discussed, the main use of histograms is to correct the chronically overexposed and underexposed sections of an image. But if your HDR is done correctly there should be very few underexposed or overexposed sections in the histogram of your merged image. As a result, the use case for histograms has been different for me.
What I need more and more is a general indication of whether the overall image is too dark or too light. And this is not as straightforward as it seems. Sure, it's easy enough when I'm sitting in front of my own monitor looking at my final image. The problem comes in when the image is viewed somewhere else. Photographers doing a lot of prints encountered this problem ages ago, and the really good printers know that the "output" part of the post-processing workflow – getting the pic from the darkroom to the finished product that the public will view – is a science in its own right (one that can make or break a good image).
But I would argue that this "output" problem has become exponentially worse. In the print days the photographer still had many more variables under their control. Take a photography exhibition, for example: there is a single print on the wall and all viewers will see the same print. The venue can set the ideal lighting for the print, and all viewers will view the pic in the same (presumably optimal) light.
The digital photographer (especially one whose work is viewed predominantly online) has no such luxuries. Here are a few of the variables they have to deal with:
Different-sized viewing screens, ranging from 3-inch cell phones to 60-inch smart TVs
Wildly varying lighting conditions – from outdoor sunshine to total darkness
Different screen resolutions with more and more retina display devices
Different brightness settings on the screens (and different screen hardware specs)
The list above doesn't even drill down into the complexities of screen calibration, which, as anyone who has tinkered with it knows, can have a massive impact on how your image appears on screen.
As before, it was personal experience that highlighted these issues for me. Although I've paid a lot of attention to screen settings and lighting for my own desktop (which is where I do all my post-processing), I often see my Flickr pics on my wife's laptop. The first time this happened I was alarmed at how dull my pics looked – totally different to those on my own monitor. It turned out that my wife's office is much darker than mine, so she has set her screen brightness lower. Since then I make it a habit to look at my pics on different screens, and it is amazing how differently they render in each case. If, like many photographers, you view your photographs predominantly on your own machine, you are likely unaware that what you see on your screen may be very different to what other people are seeing. Digital publishing has always been fond of the acronym WYSIWYG – What You See Is What You Get. With the explosion of different devices and screens these days, a single WYSIWYG that applies to all screens is simply not possible.
So how do you create an image that is, if not optimal for every screen, close enough to the mean to display adequately on the majority of them? Unfortunately, relying on your own eye and judgement is not reliable unless your workstation happens to sit at exactly the average resolution, calibration, brightness and external lighting (which is itself a moving target). What we need is a constant, and that constant is the histogram. The one part of the image that will look exactly the same regardless of where you're viewing it is the image histogram. So the more we move into this multi-screen world (hey, people might even be viewing your images on Google Glass), the more we are going to have to rely on histograms.
In my own experience the greatest variation between screens is the “brightness issue” and this brings me back to my initial proposal for “an indication of whether an image is – on an “average” basis – over-exposed or under-exposed.”
But rather than looking at a Histogram graph and judging roughly where the bulk of the pixels lie, wouldn't it be easier if we had a single number (maybe in a little box below the Histogram) that could give an absolute starting point for the overall "brightness" of an image?
Here is how I propose this would work: you start with the central point of the histogram on the horizontal axis and give this line a value of zero. All pixels to the left would then count as a negative value towards the total "score", and all pixels to the right as a positive value. One could also add a weighting factor, so that pixel counts closer to the midpoint count less than those further out.
The result would be a single number (in the box below the histogram), with a negative number indicating "on balance not bright enough" and a positive number indicating "on balance too bright".
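For what it's worth, the idea above is simple enough to sketch in a few lines of Python. This is only an illustration, not anything built into Lightroom: the function name `brightness_score`, the 256-bin luminance histogram input and the linear weighting are all my own assumptions.

```python
# A sketch of the proposed "on-balance brightness" number.
# Input: a 256-bin luminance histogram, where hist[i] is the number of
# pixels at level i. The midpoint of the horizontal axis scores zero;
# levels to the left count negative, levels to the right count positive,
# and the distance from the midpoint doubles as the linear weighting
# factor, so far-out pixels count more than those near the middle.

def brightness_score(hist):
    mid = (len(hist) - 1) / 2.0          # 127.5 for a 256-bin histogram
    total = sum(hist)
    if total == 0:
        return 0.0                       # empty histogram: call it neutral
    weighted = sum(count * (level - mid) / mid
                   for level, count in enumerate(hist))
    return weighted / total              # normalised to the range -1 .. +1

# Two toy histograms: every pixel at a dark level, then at a bright level.
dark = [0] * 256
dark[40] = 1000
bright = [0] * 256
bright[220] = 1000

print(brightness_score(dark))    # negative: "on balance not bright enough"
print(brightness_score(bright))  # positive: "on balance too bright"
```

A score of -1 would mean a pure black frame and +1 a pure white one, with a well-balanced image hovering near zero.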
Unfortunately I don’t have enough scientific background on histograms to know whether the number I am proposing would serve any value at all. Maybe I am overlooking something or have misunderstood something?
The one thing I do know is that all of the information required to compute my number is already there, in the histogram. So why not use it? Even if a quick glance at it only helps photographers in some other way, wouldn't that be an improvement to Lightroom and other programs? Maybe, if there are any savvy developers out there, they could develop a Lightroom plugin?
I would really appreciate any feedback.
25th January 2014