Depth of Field

Introduction

The accepted way to determine depth of field has not changed for many years, but with the progress of cultural heritage photography, and in how that photography is viewed, you may need to give more thought to how you calculate it in the future.

How Depth of Field is calculated

Depth of field is calculated from the following formula:

d = 2NCD² / f²

Equation 1.

Where:

N = f/number

C = circle of confusion

D = focus distance

f = focal length of the lens

The determining factor here is the circle of confusion (C), as the other three parameters remain the same for any given shot.
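As a minimal sketch, Equation 1 translates directly into code. The function name and the choice of millimetres throughout are my own, not from the original:

```python
def depth_of_field(n, c, d, f):
    """Total depth of field per Equation 1: d = 2NCD^2 / f^2.

    n -- f/number
    c -- circle of confusion (mm)
    d -- focus distance (mm)
    f -- focal length of the lens (mm)
    Returns the depth of field in mm.
    """
    return 2 * n * c * d ** 2 / f ** 2
```

For instance, `depth_of_field(11, 0.018, 1000, 100)` gives roughly 39.6 mm, matching the worked example further on.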

The circle of confusion is the maximum diameter a blurred spot can have while still appearing sharp to the viewer, and it is commonly determined with the viewer at a distance of 25 cm viewing a print of approximately 25 cm x 20 cm.

As an average person can determine around 100 points per cm at this distance (or 5 line pairs per mm), this approximates to a circle of confusion with a diameter of 0.2mm.

However, this is the diameter of a blurred point that will appear sharp to us in print, and it is from this that the acceptable circle of confusion falling on the camera’s sensor is calculated.

For an example, let’s look at a Canon 7D Mark II, which has the following specifications:

Sensor Dimensions: 22.4 x 15mm

Pixel dimensions: 5472 x 3648 px

The enlargement required to produce a 25 x 20 cm print is given by the length of the print divided by the length of the sensor:

250mm / 22.4mm = 11.16 enlargement.

With the enlargement required we can calculate the circle of confusion by taking the required print circle of confusion, 0.2mm, and dividing it by the enlargement required. In the case of a Canon 7D Mark II, 11.16:

0.2mm / 11.16 = 0.018mm or 18µm
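This scaling step can be sketched as follows; the function name and the default 0.2 mm print criterion are illustrative choices of my own:

```python
def sensor_coc(sensor_width_mm, print_width_mm, print_coc_mm=0.2):
    """Scale the acceptable print blur spot back down to sensor scale."""
    enlargement = print_width_mm / sensor_width_mm
    return print_coc_mm / enlargement

# Canon 7D Mark II (22.4 mm wide sensor) enlarged to a 250 mm wide print:
coc = sensor_coc(22.4, 250)  # about 0.018 mm, i.e. 18 microns
```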

Now that we have the circle of confusion required at the sensor to produce a sharp image in print, the depth of field can be calculated.

To make things easy, in our example we are going to use a 100mm lens, with an aperture of f/11, at a focus distance of 1m. If we put these figures into equation 1, we get:

d = 2 x 11 x 0.018 x (1000²) / 100²   =   39.6mm

Therefore the depth of field is 39.6mm: the zone, from front to back in the scene, that will appear sharp on the 25 x 20 cm print.

The Problem

The problem is that, with the increased popularity of viewing “zoomed” images on screen, a circle of confusion that gives an acceptable depth of field of 39.6mm in our example will appear blurry when the image is viewed at 100% on screen.

To demonstrate the problem, take a look at the illustration below. It shows just how great an area a circle of confusion of 0.018mm (18µm) diameter covers on the sensor of the Canon 7D Mark II, which has a pixel width of about 4.1µm.

[Figure: an 18µm circle of confusion shown against the 4.1µm pixels of the 7D Mark II sensor]

It is this circle of confusion that is usually used in the calculation of depth of field. What needs to be calculated is a circle of confusion that will provide a sharp image when viewed on screen at 100%.

Ideally a circle of confusion equal to the width of the pixel would provide the basis for the depth of field calculation.

[Figure: a circle of confusion equal to one pixel width (4.1µm) on the 7D Mark II sensor]

In this case the depth of field for the same shot as the example earlier becomes:

d = 2 x 11 x 0.0041 x (1000²) / 100²   = 9.02mm

Now the depth of the scene that will be in focus when viewed has reduced significantly from 39.6mm to 9.02mm, a difference of 30.58mm, or a 77% reduction.

For practical purposes a useful rule of thumb is to use a circle of confusion equivalent to double the pixel width on the sensor. In the case of the Canon 7D Mark II this would be 0.0082mm (8.2µm).

[Figure: a circle of confusion of twice the pixel width (8.2µm) on the 7D Mark II sensor]

This would result in the following depth of field in our example:

d = 2 x 11 x 0.0082 x (1000²) / 100²   = 18.04mm

This gives a significantly greater depth of field (18.04mm) than using a single pixel width (9.02mm), but still significantly less than the more traditional circle of confusion gives (39.60mm).
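Putting the three circle-of-confusion choices side by side, as a quick sketch reusing Equation 1 (values in millimetres):

```python
def depth_of_field(n, c, d, f):
    # Equation 1: d = 2NCD^2 / f^2, all lengths in mm
    return 2 * n * c * d ** 2 / f ** 2

# 100 mm lens at f/11, focused at 1 m, Canon 7D Mark II
for label, coc in [("print-based (0.018 mm)", 0.018),
                   ("one pixel width (0.0041 mm)", 0.0041),
                   ("two pixel widths (0.0082 mm)", 0.0082)]:
    print(f"{label}: {depth_of_field(11, coc, 1000, 100):.2f} mm")
```

This prints 39.60 mm, 9.02 mm and 18.04 mm respectively, the three figures discussed above.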

Summary


  • The next time you look at the depth of field scale on your lens, give some thought to how your image will be viewed. The scale’s markings may not be relevant.
  • As pixel widths decrease to fit more pixels on a sensor, the acceptable sharpness and depth of field will be affected to a greater extent if images are to be viewed at 100% on a display.

Updated Website

As it’s a new year, well almost, I’ve been giving my website a bit of a tidy up and new look.


It’s been 6 years since the original website went up and I thought it was time to give it a bit of attention. The new look is more in keeping with the blog and to be honest I found the dark look of the old one a bit depressing. I’ve also removed some of the excess pages from the old site to concentrate more on my core work within the cultural heritage sector.

Feel free to have a look, and if you find any glaring problems please let me know.

http://www.niepce.co.uk

All the best, Colin.

 

AHFAP Conference 2016

Congratulations to Tony Harris and the Association for Historical and Fine Art Photography (AHFAP) committee on bringing together a great set of speakers for their 2016 annual conference.

The conference takes place at the Imperial War Museum, London on Thursday 10th November 2016. If you’re interested please follow the link below.

http://www.ahfap.org.uk/conferences/2016-conference/

 

Colour Management Procedure for 2D works

Attached are instructions for colour managing the photography of two-dimensional objects such as books, paintings and maps.

Colour Management Workflow for Photography of Works of Art V3.1

The instructions are written for those on a limited budget. Although they assume you have a camera that provides RAW output, an X-rite SG chart and Adobe Photoshop, they can be adapted for other equipment.

Firstly, if you do not have an SG chart, and I know many people won’t, you can run the process using the cheaper X-rite 24-patch chart.

If you do use a 24 patch chart you will need to use different pixel values for the six grey patches when setting the tone values in section 3. These should now be:

243, 201, 161, 121, 85, 52

If you do not have a camera that outputs RAW files, output as large a JPEG file as possible and go straight to section 4. Try to get as correct an exposure as possible.

And finally, if you don’t have a copy of Adobe Photoshop you can carry out section 3 using the cheaper Photoshop Elements.

Please note that these instructions use generic colour values, which may not match the values of your chart. If you want more accurate data to work with, you’ll need to measure your chart with a spectrophotometer and upload your data to delt.ae. Also, the colour values used here are for Adobe RGB (1998); if you are using a different colour space, such as sRGB IEC61966-2.1, you will need different values.

If this means nothing to you, don’t worry, you can investigate later; you should still get pretty reasonable results.

I hope this is helpful and any constructive feedback or comments are more than welcome.

TSR Imaging

At the start of the summer I began carrying out infra-red reflectogram work for TSR Imaging and thought I’d put up some brief details and a link for anyone who may be interested.

Tager Stonor Richardson (TSR) has been in operation since 2002, widening access to infrared reflectography for paintings that cannot easily travel, such as those in historic houses and private collections; delicate works undergoing conservation treatment; and important works on display in museum collections.

The infrared reflectogram imagery is captured using the high-resolution OSIRIS camera, which is capable of rapidly producing composite images of up to 16 megapixels. It uses an InGaAs array detector with an operating wavelength of 0.9–1.7µm, and so has far greater penetration than infrared photography using an adapted digital camera.

If you would like to see the work TSR has carried out, are generally interested in using infra-red imagery, or need contact details, please take a look at their website.

www.tsrimaging.com

All the best, Colin.

Modulation Transfer Function (MTF)


The Modulation Transfer Function (MTF) of an imaging system is a measure of its ability to resolve detail within an image. The MTF is usually represented as a graph plotting the modulation transfer factor, M(u), over a range of spatial frequencies (u), where the modulation transfer factor represents the proportion of the original modulation retained at each spatial frequency.
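As a rough illustrative sketch (the function names are my own), the modulation transfer factor at a single spatial frequency is the modulation measured in the image divided by the modulation of the original target:

```python
def modulation(i_max, i_min):
    """Michelson modulation (contrast): (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

def transfer_factor(image_max, image_min, target_max, target_min):
    """Modulation transfer factor at one spatial frequency:
    output modulation divided by input modulation."""
    return modulation(image_max, image_min) / modulation(target_max, target_min)

# A target swinging between intensities 1.0 and 0.0 (modulation 1.0)
# recorded as 0.8 to 0.2 in the image gives a transfer factor of 0.6.
m = transfer_factor(0.8, 0.2, 1.0, 0.0)
```

Plotting this factor against spatial frequency produces the familiar MTF curve.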

[Figure: an example MTF chart]
