AHFAP Standards Survey Results

By: Kristin A. Phelps, Peter Burns, and Don Williams

Taiichi Ohno suggested that “without standards, there can be no improvement.” International standards create compatibility and drive interoperability across many industries. And certainly, standards seem to be a buzzworthy topic in cultural heritage imaging: most of the industry’s community conferences include at least one talk about standards, and the AHFAP/JISCMail discussion forum often hosts dialogues and debates about them. Indeed, in the last year on the JISCMail forum there was an interesting exchange about colour accuracy, which has historically been one of the most debated imaging topics. This discourse led to the creation of a brief ten-question survey of AHFAP members, which ran from 20 February to 9 March 2018 and focused on which imaging standards were regularly in use by the AHFAP community. The survey results have provided some understanding of the demographics of AHFAP’s membership as well as some interesting insight into how standards are currently being used within the community. It was clear from the responses that this is a topic begging further discussion.

Before diving into the survey results, it is important to define clearly what is meant by several key terms in the context of cultural heritage imaging: standards, guidelines, policies, and procedures. Standards are agreed-upon methods for objectively measuring digital imaging performance. They are usually defined at the national or international level by recognized experts in their fields. It is important to understand that standards do not establish the numerical levels by which ‘good’ imaging is defined; they only establish how to measure performance objectively, not what levels count as excellent, good, or bad for a particular purpose or collection. The latter is established through guidelines, based on use case and importance.

For cultural heritage imaging, guidelines include the Federal Agencies Digital Guidelines Initiative (FADGI) and Metamorfoze. Such guidelines offer suggestions on what levels of imaging performance, as evaluated through standards, are needed for different use cases and collection materials. These guidelines have common aim points but vary in ‘goodness’ levels, based on the amount of allowable tolerance or variability around the aim. In this way they largely establish the consistent levels articulated in a policy statement. Policies are best defined as consistent, common, and good digital imaging practices adopted across cultural heritage institutions for lower costs (less rework), effective communication, and ease of management. These are currently applied to 2D imaging of flat works.

Finally, procedures most often include the use of imaging targets (test charts), software tools, quality assurance and workflow practices, and training, to effectively execute policies using standards. Examples for cultural heritage imaging include targets (e.g., DICE or UTT targets), software (Imatest, GoldenThread, Delt.ae, ImCheck, OpenDICE, and IQ-Analyzer), and training (Lyrasis, conference tutorials, imaging interest groups).

Now that the terms have been clearly defined, on to the survey results!


The first three questions (What is your location? What types of collections do you work with? How long have you been working in cultural heritage imaging?) provided demographic information for the survey respondents. Of the 55 total responses, 44 were from the UK and the remaining 11 came from Europe, Africa, Oceania and North America. Museums were the most common collections that respondents reported working with at roughly 59%, followed by archives (46%), libraries (39%) and other (33%). Examples of types of collections which constituted ‘other’ were art, artists, and galleries. For the 55 respondents, 27 have been engaged in cultural heritage for 15 years or more; 15 have worked in the sector for 3-7 years, 10 for 8-14 years and 3 for 0-2 years.


The fourth question queried which set(s) of cultural heritage imaging standards respondents were aware of. Metamorfoze (guidelines from the Netherlands) and FADGI (guidelines from the US) were nearly equal with 45 and 47 respondents respectively. 21 respondents were familiar with other standards, primarily the ISO 1926x family (which are international standards).


The fifth question invited respondents to rank seven performance metrics in order of importance to their workflow. From most important to least, respondents selected Exposure, White Balance, Resolution, Noise, Colour Registration Error, Colour Encoding Error, and Other.


The sixth question asked if respondents used any software or other tools to assist in complying with standards. Nearly 78% responded that they did use some kind of aid in their imaging practice while only 22% responded that they did not use anything.


The next question enquired whether respondents used a target alone, a target with software, nothing, or something else. Only 5% of respondents did not use anything. Around 35% indicated that they used a target alone for their work. 47% stated that they used both a target and software. Of the approximately 13% who responded ‘other’, most indicated that they were also using either a target or a combination of a target and software.


The eighth question surveyed what software was being used by those respondents who were using software. The most popular piece of software by far was Delt.ae, which was being used by 24 of the 34 individuals who responded to the question. 4 other respondents used Golden Thread; 3 used Capture One CH; 2 used basICColor; 2 used Hasselblad Phocus, other software, or nothing; 1 used IQ-Analyzer; the final respondent was using OpenDICE. It should be noted that a number of respondents reported using more than one piece of software.


The ninth question asked why respondents were not regularly incorporating standards into their work. There were 36 responses to this question, with the top answers being lack of time, lack of financial resources, lack of support and lack of education.


The final question asked for any further input that respondents wished to share. Comments ranged from support for standards (coupled with calls for more education in their use) to suggestions for a community-developed set of standards. The most frequent request was for more education, to enable members to understand and adhere to standards. A word cloud was generated showing the most frequently occurring words in the responses.

It’s clear that the survey revealed interesting information about the community’s views on standards. It is a subject of real importance that affects AHFAP members’ daily workflows and output. Standards assign quantifiable measures, guidelines provide recommendations on imaging performance, policies identify issues and scope, and procedures establish the steps needed to achieve uniform compliance. Quite simply, the resulting output is a reliable image which benefits digital preservation and scholars. And, if we believe Taiichi Ohno, using standards in our work will help to create a better image.

For further reading

Museum Photography and Digitization Workshop

Text: Kira Zumkley

Images: Kristin Phelps on behalf of the Puerto Rico Science, Technology and Research Trust

This June Dani Tagen, Photographer at the Horniman Museum, and Kira Zumkley, Photography Manager at the Science Museum Group, embarked on a fantastic opportunity to lead a workshop in museum photography and digitization at the Puerto Rico Science, Technology and Research Trust for the Cultural Heritage Technology and Innovation initiative. The Trust’s mission is to invest, facilitate and build capacity to continually advance Puerto Rico’s economy and its citizens’ well-being. It is a much-needed effort, as Puerto Rico has been experiencing an economic depression for 12 consecutive years, leading to mass emigration: more than 200 people permanently leave the island every day.

Hearing the stories of the people left behind and about their endeavor to save Puerto Rico’s cultural heritage from disappearing alongside their fellow countrymen has been a humbling experience. But more than that it has been an affirmation of how important it is to create communities, provide education and enable people to preserve their culture and history for generations to come.

Museums and cultural institutions in the United Kingdom are starting to understand that in order to engage people in a dialogue about the past, present and future, and to create income, it is more and more necessary to make digital a dimension of everything they do. What would a museum look like without an online presence? Or, imagine museum websites, social media pages, curatorial talks, HLF applications, exhibitions, flyers, posters and publications without accompanying high-quality photographs!

These were among the many questions raised during the first day of the workshop, which was designed to get the attendees thinking about the benefits of museum photography and digitization. The participants ranged from freelance photographers to collections staff and even museum directors. The presentation, group exercises and final discussion panel were a great success, with more than 50 people attending, many of whom had enrolled for the following four days of hands-on training as well.

Workshop attendees photographing an object using focus stacking

The second half of the workshop focused on providing the participants with basic photography and digitization training, tailored to the standards that must be applied when working in the cultural heritage sector such as correct exposure, white balance, colour space, file format and metadata. Feedback was of the highest order with all attendees giving the workshop between 9 and 10 out of 10 points.

First results using the flat copy set up

The willingness to work together across country borders and languages to support cultural heritage and share knowledge is a great opportunity to make our sector stand out, and it has been a privilege to work with the Puerto Rico Science, Technology and Research Trust and the people of Puerto Rico.

Group photo from the 2017 Museum Photography and Digitization workshop at the PRSTRT



Paolozzi: Mosaics to Maquettes

Author: John Bryden

In February 2016 I was tasked with digitising a large number of mosaic pieces which once comprised an Eduardo Paolozzi mural, previously installed in Tottenham Court Road Tube Station, London in 1984. The work of the Scottish artist was acquired by the University of Edinburgh Art Collection in October 2015 following its removal from the tube station arches by Transport for London.

The dismantled mural consisted of approximately 500 fragments spread over 42 boxes and 4 pallets. Depending on what percentage of the original mural we had actually acquired, the initial long-term plan was to reconstruct the mural and install it within the university campus, giving it a new life. From the outset, however, it became clear that piecing it all back together again would be challenging. We decided to attempt to digitally reconstruct the mural first to give a better idea of the potential for physical reconstruction. This would also help us establish what percentage of the whole mural was represented among the pallets and boxes… an unusual, but exciting, project to say the least!

On a technical level, I used a Hasselblad H4 camera and professional Bowens studio lights in my digitisation process. To begin with, I captured several mosaic fragments in one shot and then went on to crop and edit each piece individually before saving it as a separate, new file. The tricky part came in ensuring that the scale of each fragment would be represented correctly in every image produced. I therefore set the camera at a distance from the mosaics that would represent a 1:1 ratio in scale, placing a ruler within each raw image capture in order to make minor adjustments at a later stage if necessary. The result meant that, when using the images in image-processing software, the pieces would be of a relative size to one another. If the size of the fragments was incorrect, this would only cause problems further down the line when trying to complete this very large digital jigsaw puzzle. Further, the faces/upsides of the mosaics had to be perpendicular to the focal plane of the camera and, collectively, the mosaics had to be equidistant from the focal plane. The same principles applied to the positioning of the ruler itself. This ensured that perspectives would not be distorted and that the relative size of the mosaics would remain consistent throughout the project.
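The scale check described above can be sketched in code. This is a hypothetical illustration, not part of the project's actual workflow: a ruler of known length visible in each capture gives a measured pixels-per-millimetre value, from which a resize factor to a common reference scale can be derived.

```python
# Sketch: normalising fragment images to a common scale using a ruler
# visible in each capture. All numeric values here are hypothetical.

def scale_factor(ruler_px: float, ruler_mm: float, target_px_per_mm: float) -> float:
    """Return the resize factor that brings an image to the target scale."""
    measured_px_per_mm = ruler_px / ruler_mm
    return target_px_per_mm / measured_px_per_mm

# Example: a 100 mm ruler spans 1180 px in this capture, and the
# reference scale is 11.8 px/mm, so no resize is needed (factor 1.0).
factor = scale_factor(ruler_px=1180, ruler_mm=100, target_px_per_mm=11.8)
new_width = round(4000 * factor)  # apply to the fragment crop's pixel width
```

Fragments resized this way remain correctly sized relative to one another, which is what makes the digital jigsaw assembly possible.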



The image management process for this project involved saving the final cropped images as both TIFF and PNG files. Having cropped directly around the edges of each fragment (i.e. with no background around the mosaic itself), the PNG files would then allow the fragments to be arranged edge to edge where possible. This process was key to the next stage of the project.

The images of the mosaics were then transferred to Professor Bob Fisher of the University’s Informatics Department. This cross-departmental work seems particularly fitting, as Paolozzi had close ties to the Informatics department, a relationship visible in the several Paolozzi sculptures dotted about the Informatics buildings. With the help of one of his PhD students, Professor Fisher used image recognition software that he had developed to digitally piece the fragments back together so as to reflect the original design of the mural as closely as possible. Professor Fisher had access to original images of the Paolozzi mural in situ at Tottenham Court Road Tube Station, which served as a vital reference point. To give a simple analogy, these original images would act as the cover image you would see on the box of a jigsaw puzzle, with the images of the individual mosaic fragments representing the pieces inside the box.

Art Collections Curator, Neil Lebeter, and I produced a short video interview with Professor Bob Fisher and PhD student, Alex Davies, in which they discuss their work process, the challenges and uniqueness of this particular project, and the results they have found to date. It is an interesting watch!


Since the making of this interview the project plans have developed in light of Bob and Alex’s findings. In August 2016 the University employed a Public Art Officer, Liv Laumenech. As well as caring for and developing the public art collection, she also has the responsibility of figuring out what to do with the fragments. Given that a large portion of the arches is missing, reconstructing them now seems an unlikely option. The next step has been to organise an interdisciplinary symposium in February 2017 that will bring together Paolozzi experts, conservators and mosaicists to brainstorm ideas with students for the redisplay and use of the fragments. Until a decision is made regarding their redisplay, the mosaics have been used in teaching and as part of visits by researchers and the general public to the art collection.

Having completed my side of the Paolozzi Mosaics Project, I have been lucky enough to get the opportunity to digitise a large number of Paolozzi maquettes which are also part of the University of Edinburgh Art Collection. The collection encompasses a wide range of weird and wonderful pieces. Among his maquettes we can see where he began developing his ideas for what became his piece, The Manuscript of Monte Cassino (also known as the ‘big foot’), situated outside St Mary’s RC Cathedral here in Edinburgh.


Digitising the mosaic fragments involved a more consistent photographic approach in terms of camera positioning and lighting, whereas working with the maquettes has offered slightly more freedom in this regard. I have been lighting and positioning each maquette in a way that best exhibits the physical attributes of that particular object. Here are a number of the maquettes pictured below.




We have also had Digital Heritage specialist Clara Molina Sanchez in the studio carrying out 3D work on one of the maquettes. This should render a high-quality 3D visualisation of the object. Currently we are looking at ways of delivering such 3D images online. Here, Clara has kindly allowed us to show an interesting behind-the-scenes shot of her setup.


John Bryden

Project Photographer

Digital Imaging Unit

University of Edinburgh



RA250 stories part 2: Digitising the Collection

Author: Liz Dewer

In the build-up to the RA’s 250th anniversary in 2018, we are digitising 10,000 new items from the Academy’s Collection – from works of art to letters and sketches. In this film, we take a look at how these works are chosen and what they can tell us about the history of art practice in the UK.

The digital life of a Hebrew manuscript

Authors: Kristin A. Phelps, former Senior Imaging Technician, British Library and Dr Adi Keinan-Schoonbaert, Digital Curator (Polonsky Fellow), British Library.

Picture your favourite book. Is it just the printed words on the pages that make it special? What about your notes and doodles in the margins? And the dog-eared corners marking the important parts? It may be one of the first things you pack if you are going on holiday, or the first item you unpack when you move house. Let’s be honest, your e-reader is great but there’s something about the physical copy that makes it irreplaceable.

The unique value of books is rooted in their dual nature. They are vehicles of information and, at the same time, they are three-dimensional objects. Books and manuscripts have always had a secret life as objects; they interact with the senses of those who touch them and forever show the evidence of human interaction with them. They have been cherished objects people have carried in both life and in death. Libraries not only stand as testaments to the importance of knowledge, but also to the value of safe-keeping printed books and handwritten manuscripts.

In the Library’s Sir John Ritblat Treasures Gallery, the public is invited to see a selection of the Library’s books and collection items. While carefully selected pages and folios are visible in the Gallery, it is impossible to examine the entire book as a single object. Visitors and scholars may be interested in the unique bindings and spines of these treasures, but are not able to hold the object, feel it, and observe its physical properties to get a sense of the whole book. In fact, visitors have only a small percentage of the whole ‘book experience’ – and for a good reason: these fragile treasures need to be preserved for future generations. This is the seemingly unsolvable problem for objects in our collection: how can a viewer have a total sense of any object in a complete way?

Enter 3D imaging: a novel solution which allows viewers to examine an entire book or manuscript as a whole object. The technology also provides a digital record which aids in the documentation and preservation of the item. Many museums are currently using 3D imaging and modelling for their collection items. These models give website visitors (and visitors to galleries) a view of objects as a whole, lending a somewhat tactile feel to items which are generally untouchable. The same is true of manuscripts held in libraries. A good example is the significant collection of Hebrew manuscripts held at the British Library.

Generously funded by The Polonsky Foundation, the Library has digitised (in two dimensions) 1,300 manuscripts under the guidance of the Hebrew Manuscripts Digitisation Project (2013-16). Many of these newly digitised manuscripts are not readily available to the public due to their fragile condition. The project employs a digital curator (Polonsky Fellow) whose responsibility is to encourage the consultation of the project’s digital material for research and scholarship, and promote fresh uses for the new digital research items.

What better way to re-examine some of this material than in full 3D? Under the direction of digital curator Adi Keinan-Schoonbaert, a small project was conceived to create 3D models of three of the Hebrew manuscripts: Add MS 4709 (a 15th-century CE Pentateuch), Or 1087 (a 15th-century CE Book of Esther), and Add MS 11831 (a 17th-century CE Scroll of Esther).


FIG 1: A Pentateuch from central Italy, 1486 CE. The name of one of its past owners is inscribed on the binding with gold letters and ornamental design (Salomon da Costa, 1719 CE; Add MS 4709)



FIG 2: A 15th-century CE Book of Esther (Or 1087)

The method chosen for 3D modelling is called photogrammetry – or Structure-from-Motion (SfM). In simple terms, a three-dimensional structure can be created from a sequence of two-dimensional images. Special software creates the third dimension by following the ‘motion’ of the camera around the object. Unlike laser scanning, this method is affordable and simple even for non-specialists. All that’s needed is a digital camera – even just a smartphone camera.

In the case of the Library’s Hebrew manuscripts, we benefited from the Imaging Studio’s advanced photography and lighting equipment. The 3D modelling process began with taking photographs of each manuscript from different angles. The idea was to ensure that we had photographs covering the manuscript’s entire surface, with sufficient overlap. To do that, the manuscript was placed on a turntable and the camera was mounted on a tripod. We rotated the turntable in 10-15 degree increments, taking a photo at each position. After completing a 360-degree circle, the manuscript was turned to its reverse side and the process was repeated.
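The capture plan above implies a predictable number of shots per manuscript. As a simple illustration (choosing a hypothetical 12-degree step from within the 10-15 degree range mentioned):

```python
# Sketch: capture positions for one side of a manuscript on a turntable,
# assuming a hypothetical 12-degree step within the 10-15 degree range.

step_deg = 12
angles = list(range(0, 360, step_deg))  # turntable positions for one side
shots_per_manuscript = 2 * len(angles)  # both sides are photographed
```

A smaller step gives more overlap between neighbouring photos, which generally helps the photogrammetry software align the images, at the cost of more shots to process.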

Once enough photos had been taken, the images were white-balanced and then masked, ready for the next stage. The masking process involved making a copy of each image, outlining the object that we wanted to model in image-editing software (such as Adobe Photoshop), then filling the selected object with white and the background with black.
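The masks in this project were drawn by hand in Photoshop, but the end result can be illustrated programmatically. This sketch uses a simple brightness threshold on a tiny synthetic greyscale "image" (a list of pixel rows) to show what a finished mask looks like: object pixels white (255), background black (0).

```python
# Sketch of the masking idea: object pixels become white, background black.
# The real masks were hand-drawn; the threshold here is a stand-in.

def make_mask(grey_image, threshold=40):
    """Pixels brighter than the threshold are treated as object (255)."""
    return [[255 if px > threshold else 0 for px in row] for row in grey_image]

image = [
    [10,  12,  11,  9],   # dark background row
    [11, 200, 210, 10],   # bright object pixels in the middle
    [ 9, 198, 205, 12],
    [10,  11,  12, 10],
]
mask = make_mask(image)
```

In the real workflow the photogrammetry software reads the mask alongside each photo and ignores everything in the black region, so the turntable and background never contaminate the 3D model.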

After all the images and their masks have been uploaded to the software (we used Agisoft PhotoScan), it identifies which part of each image should be 3D modelled. The software then makes use of the overlap between the photos to ‘stitch’ the sets of images together, forming a three-dimensional shape of the manuscript. This is aided by manually selecting obvious markers visible in photos from both sets, to help the software recognise shared points.


FIG 3: A 17th-century CE Esther scroll in an ivory case (Add MS 11831)

Once the models were complete, they were published to Sketchfab – an online platform that hosts 3D content – and can be seen below. Sketchfab allows you to annotate models in order to give a structured narrative or journey through some of the important features. This can be seen in the model of the Scroll of Esther (Add MS 11831), on which annotations were written by the collection’s lead curator Ilana Tahan. Curatorial annotations add yet another dimension to a digitised manuscript, facilitating a more naturally flowing learning experience for researchers and the public alike. These exciting results allow users to have a more rounded view of the manuscripts and present new opportunities for engagement.



Why does this matter? The ownership section of each of our catalogue records reminds us who has owned and read these manuscripts, and that they have had impressive histories spanning hundreds of years. Books and manuscripts have most likely been bound and rebound several times, and they have travelled the world in the hands of different owners. These owners from the past, like you, found their special book so important that they carried it with them. And now, these manuscripts have a new chapter in their history as objects – not just physical, but also digital.


None Hath Refused:

Digitising the Protestation Returns at the Parliamentary Archives

Author: Simon Barnes, Digital Imaging Technician, Parliamentary Archives

We’re a digitisation team of two in the Parliamentary Archives and we’re responsible for delivering the Archives’ public copying service, digitisation project work and supporting exhibitions and outreach activities. We handle on-demand requests for copies of archives from the public and support exhibition and outreach activities by photographing records which are about to go out on loan. We also do photography for exhibition panels, publicity and our web resources and social media.

The digitisation project work we do is essential to the Parliamentary Archives’ aim to increase online access to our collections. The latest project we’ve been working on is the Protestation Returns. The Protestation Returns, dating from 1641-42, were ordered by the House of Commons and required all adult men to swear allegiance to the Protestant religion. The returns were organised by parish and are the closest thing we have to a seventeenth-century census, significantly taking place at the start of a civil war that involved all levels of society and affected all countries in the British Isles and Ireland.

We work closely with our Collection Care colleagues, who help prepare the documents by doing a condition check, unbinding the Returns from their files and flattening any folded documents. This really helps to speed up the process of digitisation and flags any which may need careful handling. Whilst the majority of the Returns are written on paper, a number are on parchment. In some cases individuals signed their own names on the Return, but more often an official wrote down the names and individuals made their mark. Some people refused to make the protestation, and this was duly noted, whilst widows (who became household head on the death of their husbands) also sometimes signed. So, each parish produced a return in its own fashion, creating a somewhat varied collection of documents!

Our main challenge with this project was the highly variable dimensions and formats of the documents. Some Returns were completed on the back of the declaration, some were bound into booklets and some were recorded on thin, lengthy strips – some are very large, while others are tiny! We established early on that we would not be able to optimise the photography of each item by setting column height, lens choice and ppi individually; it would have been too lengthy a process. Nor could we set our Nikon D800 at one height and photograph everything with one setting, as there was too much variation, but we thought the Nikon would be quick because we could use live view to line up the documents. We tested and developed three different settings which enabled us to digitise the majority of the collection, though inevitably some documents required individual settings. Part way through the project we bought an IQ180 digital back and 55mm lens for our Phase One camera and decided to switch, as we could improve both quality and productivity. With 80 megapixels we could set the camera at one height and capture all our documents at 600ppi (see a time-lapse of my colleague Tim at work).
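A quick back-of-the-envelope calculation shows why the 80-megapixel back allows a single camera height at 600ppi. Assuming the IQ180's published sensor resolution of 10328 × 7760 pixels, dividing by the target ppi gives the largest document area that fits in frame at that resolution:

```python
# Sketch: maximum document coverage at a fixed resolution.
# Sensor dimensions are the IQ180's published pixel count (an assumption
# stated in the text above, not a measurement from this project).

sensor_px = (10328, 7760)   # IQ180 pixel dimensions (long edge, short edge)
target_ppi = 600

coverage_in = tuple(px / target_ppi for px in sensor_px)
# roughly 17.2 x 12.9 inches of document at 600 ppi
```

Any Return smaller than that footprint can be captured at one fixed column height with no per-item adjustment, which is exactly the productivity gain described.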

As much of a challenge has been finding the ability, time and motivation (!) to quality-assure all the images generated. We’ve followed a process of first QA, followed by any necessary reprocessing/reshoots, a second QA, and then web conversion and watermarking. The images are then moved to the digital repository for permanent preservation. Low-resolution JPEGs are viewable via our online catalogue, and the Archivists and our IT department have developed a prototype Map Search, which allows users to search for the Returns we hold by area. So, if you can trace your family tree back to the seventeenth century, and you have an idea where your relatives lived, you may be able to find them in the Protestation Returns.

We’re promoting the records and the digitisation project, via social media, blogging and are planning some outreach activities with regional archives. For the social media promotion, we’ve picked out interesting watermarks, useful dates, noted when women are listed (they weren’t required to be), where there are ‘recusants’ (refusals) and are highlighting some of the more interesting information and text we’ve discovered – some people were ‘not at home’ when they should have been making the protestation!

The photography is complete and the Returns are being ingested into the digital repository and made available through the map and online catalogue.

We’ve started on our next project focussing on Victorian MP and Parliamentary Estate photographer Benjamin Stone, which involves both digitising his historic photographs of the Palace of Westminster and its visitors and taking some of our own pictures of the rooms today, to compare and contrast how things have changed. We have also been visiting the roof of the Victoria Tower and took a time-lapse of the view, where we were lucky to catch a raincloud, and rainbow, passing over London.

Tim Banting digitising the Protestation Returns
Some volumes, the printed Protestation and an example of a return

A high-quality hybrid 35mm film stills digitising setup

Author: Max Browne

As any photographer knows after decades of shutter clicking, a huge backlog of archive work can develop. This happens most often with personal work, which tends to get put aside until time is available to attend to it. In my case, not only do I have thousands of images to view and select, but technology has moved on from the darkroom origins of my film negs and transparencies to require digital scanning, which varies enormously in speed, cost and quality of operation and processing.

I am sure that many would agree that a significant reason for such a backlog of personal archive work is the onerous and painfully slow mode of scanning 35mm slides and negatives using Minolta/Nikon/Canon box style scanners with film tray loading. This was never fun as it was both laborious and technically sub-optimal. Noting the past tense – it is not any more!

If you consider replicating the film grain, image detail, colour and tones of the originals as a base standard, along with digitising as fast as you can load and focus them, then you may be interested in the 35mm scanning setup I have put together recently: Nikon D800E camera, 55mm Micro-Nikkor lens with PK-13 extension ring, old Nikon F slide copier (via eBay) with a custom-made dovetail support post, Lanparte fully adjustable DSLR baseplate with support bars/adaptor, and a MacBook Pro laptop (figs 1, 2). Using any suitable light source, this rig provides a fast workflow of 8K raw NEF files, with either negative or positive screen images for optimising variables in real time on the laptop once captured. Additionally, if you work tethered, the camera ‘live view’ can display negatives as positives if you switch the computer screen mode to ‘invert’. In this way you can also use the system as an instant real-time viewer for 35mm negatives prior to capture, which is useful for both identification and assessment – especially if there are no contact sheets for reference. For this reason tethered capture of negatives is a more efficient way of working, since positive image tonality can be checked against a histogram and exposure adjusted as quickly as it can be for positive slides via the camera rear screen.





Perhaps I should add that this set-up is one for manual operation. Cleaning, loading, focusing, assessing and correctly exposing the originals need the practised eye of a photographer to get the best out of it. I don’t think it could be successfully used in any ‘auto’ mode. However I’d be interested to know if such a setup could be more streamlined – perhaps with an autofocus macro lens.

In practice focusing is achieved by sliding the camera on the base plate (which is easy as it is superbly engineered) rather than twisting the lens focus ring. A two second shutter delay works well to minimise any operator vibration. I generally shoot film emulsion side in so that any slight bowing of the film follows the natural curvature of the lens depth of field. This requires the extra step of left-to-right reversal of the image later but maintains more consistent centre to edge image definition.

As you would expect, digitising positives is straightforward and intuitive. Negatives, on the other hand, take a little more time to get used to, since the tonalities are in reverse – you need to adjust exposure to retain ‘shadow’ details that are actually highlights, and vice versa. Once captured, a great advantage is that any adjustments in ‘camera raw’ can be made whilst viewing on a positive screen in ‘invert’ mode. Once the image files are opened in a Photoshop-type programme they can be ‘inverted’ to positive themselves and the screen changed back to normal mode.
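
The ‘invert’ operations mentioned above – the screen’s invert mode and the invert step in an image editor – are the same per-pixel complement. A minimal Python sketch, assuming an 8-bit greyscale image held as a nested list (the pixel values are hypothetical):

```python
# Negative-to-positive 'invert': complement every pixel value.
# In an 8-bit image, 0 is black and 255 is white, so the negative's
# dense highlights become the positive's bright highlights.
MAX_8BIT = 255

def invert(image):
    """Return the tonal complement of a greyscale image."""
    return [[MAX_8BIT - px for px in row] for row in image]

negative = [[10, 128], [200, 255]]   # hypothetical pixel values
print(invert(negative))              # → [[245, 127], [55, 0]]
```

This is also why exposure judgements reverse when shooting negatives: protecting the negative’s ‘shadows’ protects the positive’s highlights.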

A long-awaited project of mine has been to digitise my collection of 1960s-1980s Rock & Roll gig images, and a great bonus is that many that were previously rejected in their film version are now usable in digital form after suitable manipulation. This enhances them as historical documentation as well as from an aesthetic viewpoint. Many just look better after digitising, which is not surprising considering the continuously variable club/concert/theatre lighting conditions under which they were shot. Under- or over-exposed shots can now be made acceptable, as can problem images such as the classic unwanted ‘protrusion behind the head’, which leads me to digress into a brief ‘documentary ethics’ consideration.

I am a freelance documentary hunter supplying captured images for client display, so my work ethics are pragmatic, not purist. If a subject has a floral arrangement growing out of their head then one of the three of us (problem object, subject or me) must move in order to provide a non-distracting image. Such was the problem recently with an otherwise nice and historically interesting shot I have of Eric Clapton onstage in London, about 1980. Since Eric famously became teetotal a few years later, the beer shot provides something of a conversation piece. The intriguing but highly distracting object in the background was easily disposed of, after digitising, by some quick surgery in Photoshop (ills. 3, 4). My fledgling website for these images is http://www.rockshots.co





Almost all the equipment is readily available, including the ageing but excellent Nikon F Slide Copier, bought for less than £100 on eBay. The exception is the small but very necessary dovetail post to connect and adjust the Slide Copier onto the baseplate support bars. This was made for me by a local machine shop and again cost less than £100. The Lanparte adjustable baseplate unit is inexpensive and a joy to use, and is necessary to update the otherwise obsolete Slide Copier, which will not fit most modern DSLR cameras as their fronts protrude further than those of the original SLR film cameras it was designed for. In short, this kind of rig gives these otherwise excellent copiers a new lease of life. If you are interested in using a similar setup I’d recommend acquiring one of these key items soon, before word gets out!

Do get in touch if you have any queries.

Max Browne, DigitisingArt.Co

The very helpful Lanparte UK agent can be found at http://www.fastforwardtime.co.uk

Digitising the Christina Broom Glass Plate collection

Author: Gemma Cattell, Marketing Executive, Townsweb Archiving

The delicate glass plate negatives are checked before digitisation


Christina Broom, the UK’s first female press photographer, initially started taking photographs in 1903 to support her family. But the following year a chance meeting resulted in a partnership with The Household Cavalry Regiment that lasted over 35 years. During this time she became The Regiment’s unofficial photographer, capturing a variety of pivotal moments in its history.

Now a unique archive of over 200 glass plate negatives featuring Christina Broom’s work for the unit is held at The Household Cavalry Museum’s archive in Windsor. The plates are part of a larger mixed-media collection documenting the history of the two most senior regiments in The British Army – The Life Guards and The Blues and Royals.

In order to help preserve these fragile and historically important images, in 2015 The Household Cavalry Museum partnered with TownsWeb Archiving to digitise the Christina Broom collection.

The right time to digitise

Having already worked with TownsWeb Archiving previously to digitally preserve the Museum’s WW1 and Battle of Waterloo photographic collections, The Regiment’s archivist decided to partner with the company again to digitise the Christina Broom glass plate collection.

Dating back to the early 20th Century, the plates in the collection are Silver Gelatine Dry Plate Negatives and are mostly A6 in size, with some up to A4.

The primary reason for digitising the glass plates was to preserve a digital copy of the unique images for posterity and to avoid any being lost as the physical plates degrade over time.

Considerations when digitising Glass Plates

Surface dirt and dust is carefully removed from the plate using a handheld bulb duster


Before digitisation, the glass plates were cleaned with a handheld bulb duster (air blower) to remove surface dirt and dust, which could have affected the quality and clarity of the final digitised images.

With some media it is possible to infrared clean during scanning – removing any ‘noise’ such as dirt and dust. However because of the emulsion used on the plates, this was not possible with the Christina Broom collection.

Due to the age and fragile nature of glass plates they can be one of the most difficult media to digitise. So throughout the project extreme care was needed, especially when handling broken or damaged plates.

Broken plates were a particular challenge. When digitising any broken plates technicians had to carefully place the fragments as close together as possible – to provide as complete a picture as the plate allowed.

In addition, on some of the older plates emulsion was peeling off. Technicians mitigated this by scanning them matte side down, so that the emulsion could lie flat and distortions were minimised on the digital images.

Scanning Equipment and Specification

  • Epson flatbed scanner (set on ‘Transparency’ mode)
  • Handheld bulb duster
  • Bespoke post-processing software
  • Approx. 200 glass plate negatives in the collection (up to A4)
  • Captured at 1200ppi to master TIFF format
  • Captured in greyscale at 8-bit pixel depth
  • Average TIFF file size – 45MB
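
The uncompressed file sizes above follow directly from plate dimensions, resolution and bit depth. A rough sketch of the arithmetic (an A6 plate is assumed here to be 105 × 148 mm; larger plates scale accordingly):

```python
# Estimate the uncompressed size of a scan: pixels across x pixels down
# x bytes per pixel. Greyscale at 8-bit depth is 1 byte per pixel.
MM_PER_INCH = 25.4

def scan_size_mb(width_mm, height_mm, ppi, bytes_per_pixel=1):
    """Uncompressed image size in megabytes."""
    w_px = width_mm / MM_PER_INCH * ppi
    h_px = height_mm / MM_PER_INCH * ppi
    return w_px * h_px * bytes_per_pixel / 1_000_000

# An A6 plate (105 x 148 mm) captured at 1200 ppi in 8-bit greyscale:
print(round(scan_size_mb(105, 148, 1200)))  # roughly 35 MB before TIFF overhead
```

The quoted 45MB average sits plausibly between this A6 figure and the larger plates in the collection.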

A flatbed scanner set to Transparency mode was used to digitise the collection, mainly because it allowed for the greatest consistency across all images, in terms of colour profile and picture quality.

The high resolution offered by this type of scanner is also more suitable for capturing glass plates, which typically require a greater resolution than loose photographs or bound volumes (see below – File Requirements).

However, in many cases, when digitising glass plates a DSLR camera and light box capture workflow can be used instead.

Once the Museum’s collection had been captured, any blemishes or imperfections on the final images were fixed using bespoke post-processing software. Alternatively, some scanners have software that allows images to be automatically ‘corrected’ during the digitisation process.

Capturing the truest image

Throughout the digitisation, the Christina Broom glass plate collection was captured in 8-bit greyscale. This partially restored the plates to their former appearance and resulted in greater clarity (due to the increased contrast) in the final digitised images.

Some archivists prefer to capture glass plate negatives in full colour, as this shows a true snapshot of the plate at a particular point in time, including how it has aged, any fading, and any deterioration.

As mentioned, The Household Cavalry Museum’s main reason for digitising was for preservation purposes. For this reason greyscale proved the most suitable technique, as it resulted in the clearest and most useful images for the Museum.


One of the digitised images captured from the Christina Broom Collection © Household Cavalry Museum

Digital file requirements

In terms of file requirements, all 200 glass plates in the collection were captured at 1200ppi to master TIFF format. Scanning at a high resolution meant that as much tonal information and detail as possible was captured from the glass plates. As standard, The National Archives recommend a considerably higher PPI for any photographic material, including glass plate negatives, than for ordinary documents.

The master TIFF files were then converted from negatives to positives in post-processing using a bespoke graphics programme.

The last stage in the digitisation process was to produce smaller, compressed JPEG versions from the master TIFF files. These smaller JPEG files make it easier to browse the digitised archive and can be published on the Museum’s website, if required, at a later date.

An historic collection preserved

The digital reproductions of the Christina Broom glass plate negatives are now held at the Museum’s archive in Windsor, safeguarded for future generations. A backup version is also stored elsewhere, as an extra precaution against potential unforeseen disasters.

If you would like to find out more about The Household Cavalry Museum and archive, check out their website. Or feel free to look at the TownsWeb Archiving website for further advice on digitising glass plates and information on glass plate digitisation.


Ship Model Rigging Post Production

Author: Joshua Akin – Digital Imaging Officer, Royal Museums Greenwich

An insight into our workflow for cutting out images of complex, rigged ship models. This involves using masking techniques so that we can ensure that the models appear on a pure white background.


In 2006, we undertook a ship model photography project which took over six years to complete. We photographed over 2,500 ship models, ranging from small craft, fishing and hunting vessels to powered warships, to name just a few.

The project served the following criteria:

  • To refine and update the catalogue of the collection.
  • To provide access to collections via Collections Online.
  • To prepare for the move of most of the collection to Chatham for storage.


The ship models had to be photographed at the Kidbrooke outstation store, as they could not be moved to the photographic studio. The equipment we used off-site at Kidbrooke was:

  • Hasselblad H3D II-39
  • 105mm (macro) lens
  • 80mm lens
  • 50mm lens
  • Tilt-shift adapter
  • X-Rite ColorChecker
  • Broncolor flash heads and soft boxes
  • Mac Computer and Eizo monitor workstation.


Ship model sizes ranged from 2 centimetres to 4 metres.

Photography was broken down so that the models were shot in order of size: small, medium and large.

Some of the bigger models could not be shot in the allocated area, so a purpose-built container was used to carry them out of the building and into the large foyer.

All ship models were shot on a white or black background. Three photographic views were captured of each ship model:

  • Broadside
  • Bow ¾
  • Stern ¼

In some cases the curator requested up to 10 views of a ship model, where this was deemed necessary due to its historical importance.

Due to the quality of the Hasselblad H3D cameras, curators were able to see detail inside the models that is not easily seen with the naked eye.


The post-production brief was to:
  • Cut out ship model to a clean white background.
  • Remove shadows.
  • Keep every detail of the ship model.

Most of the masks that we create with channels end up needing that extra nudge and fine tuning.

The ship models are dark grey, not pure black, and they have small white holes in them. The white areas have a fine haze of light grey sprinkled here and there.

The mask’s edge can be too tight, too loose, too sharp or too soft.

The diagram represents the relationship between grayscale values in alpha channels, mask opacity, and selections.

Using the image editing capabilities in Photoshop, we worked on the alpha channel directly to perfect the mask.

The alpha channel is then inverted, so white reveals and the black area hides.

The alpha channel is loaded as a selection to isolate the ship model.
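
The relationship between alpha values, mask opacity and selections can be stated numerically: 255 fully reveals a pixel, 0 fully hides it, and intermediate greys blend proportionally. A minimal sketch with hypothetical 8-bit values:

```python
# Composite one masked ship-model pixel over a pure white background.
WHITE = 255

def composite(foreground, alpha, background=WHITE):
    """Blend foreground over background by the 0-255 alpha value."""
    a = alpha / 255.0
    return round(foreground * a + background * (1 - a))

print(composite(60, 255))  # → 60: fully selected, model pixel kept
print(composite(60, 0))    # → 255: fully masked, pure white shows
print(composite(60, 128))  # → 157: mid-grey alpha blends proportionally
```

This is why a grey haze in the ‘white’ areas of the mask matters: any value below 255 lets the model bleed faintly into the background.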

Workflow to make a black and white mask using:

  • Channels
  • Calculations
  • Dodge and Burn
  • Curve Tool
  • Pen Tool
  • Selections

Duplicate layer and change blending mode to multiply to darken ship model image.


Choose the channel from the colour image that has the best near black-and-white contrast. Make a copy of the blue channel.

Change multiply blending mode to normal.

Open the Levels tool and slide the black point in to make your object black, without clipping the white areas.
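
Sliding the black point in remaps the tonal range linearly: everything at or below the new black point clips to 0, while values near white are left intact. A sketch with hypothetical 8-bit values:

```python
# Levels black-point adjustment on a single 8-bit pixel value.
def levels_black_point(value, black_point, white_point=255):
    """Clip values below the black point; stretch the rest linearly."""
    if value <= black_point:
        return 0
    return round((value - black_point) / (white_point - black_point) * 255)

print(levels_black_point(80, 80))    # → 0: dark grey clips to pure black
print(levels_black_point(255, 80))   # → 255: whites are not clipped
```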

Duplicate the blue channel twice. Name the first copy HIGHLIGHT; name the second copy SHADOWS and invert it, so it is a negative.


Go to Image > Calculations, select the HIGHLIGHT alpha channel as your first source and SHADOWS as your second source, and tick Invert. Then change the blending mode to Multiply or Linear Burn and adjust the opacity to get an almost black object. Save the result as a new channel.
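
The Multiply and Linear Burn blends used in this Calculations step have simple per-pixel definitions. A sketch with hypothetical 8-bit channel values:

```python
# Per-pixel blend modes used when combining the HIGHLIGHT channel with
# the inverted SHADOWS channel (0 = black, 255 = white).

def multiply(a, b):
    """Multiply blend: darkens wherever either source is dark."""
    return a * b // 255

def linear_burn(a, b):
    """Linear Burn blend: adds the darkness of both sources."""
    return max(0, a + b - 255)

highlight = 200                  # hypothetical HIGHLIGHT pixel
inverted_shadows = 255 - 180     # SHADOWS pixel after the Invert tick

print(multiply(highlight, inverted_shadows))     # → 58
print(linear_burn(highlight, inverted_shadows))  # → 20
```

Both drive the object towards black, which is why an opacity adjustment on top is enough to get an almost-black silhouette.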


Use the Burn tool, targeting shadows, and set the exposure to a low value between 2-10%.

Burn in the light rigging areas with gradual strokes until they are black.


Open the Curves tool and sample the grey background colour with the eyedropper; notice the point that appears along the curve line. Select the pencil tool and draw a horizontal line across the upper right of the curve, changing all the grey background tones to pure white, then press the Smooth button twice to smooth out the light and dark values. If the background does not read 255 in the Info panel, keep applying the same technique until it does.


Use the Dodge tool to lighten the darkened grey areas around the object caused by the Burn tool. Set the range to Highlights and the exposure to 5-10%.


Use the Pen tool for straight edges, such as the base of the ship, and also to remove unwanted elements in the background.


Load the path selection and fill the alpha channel with white.

Making the RGB channel visible enables you to view the edge of the mask in Quick Mask mode. Once you are happy with the refined mask, turn off the RGB channel’s visibility.

Load the selection from the mask channel and make a duplicate layer of the background. Add a layer mask to that duplicate from the selection.

Make a solid colour adjustment layer just below the duplicate layer and choose 255 white.

Use the Blur tool to simulate a depth-of-field effect in the areas that should be blurred.

AHFAP Conference Keynote

Author: James Stevenson

I worked in museums between 1983 and 2013, firstly at the National Maritime Museum and then for almost twenty years at the V&A. My talk is naturally influenced by my experience at these two institutions, and may be biased and not representative of photography in cultural heritage as a whole. So my career in museums has followed that of AHFAP quite closely.

If AHFAP’s first meeting and founding in 1985 was not its actual conception, then the glint in its boyfriend’s eye was 1982, a year that helped form the need for an organisation such as AHFAP.

Some highlights from 1982 include:

  • The Commodore 64 was released in August of that year
  • WordPerfect for DOS was released
  • The Lotus 1-2-3 spreadsheet appeared (Excel followed in 1985)
  • The Sony Trinitron monitor
  • Apple made $1b in sales
  • Philips made the first compact disc
  • Sony made the first audio CD player
  • Adobe was founded
  • Tron was released
  • The first computer virus was detected

I think that the development of technologies such as these helped form the idea of, and need for, an association such as AHFAP. The adoption of PC technology in museums created more immediate forms of communication (the demise of the typing pool occurred around this time), but in particular it enabled business reporting and accountability. Museums were not slow to adopt these technologies, though the V&A was actually one of the last. This meant that it skipped some of the early errors and mistakes and jumped in at a later, more convenient point.

At the same time the use of technology allowed a more systematic auditing of collections and there was considerable improvement in collections management.

When this happened it was natural for managers and directors to ask, ‘What is happening elsewhere?’ and hence the development of benchmarking.

The founding of AHFAP may originally have been about meeting fellow professionals socially in pubs; the reality was to find out what each was doing, what problems they were facing and what solutions were being developed.

It is interesting to see when other associations in museums were founded:

  • The National Museum Directors’ Council 1929
  • The British Association of Paintings Conservator-Restorers 1943
  • Collections Trust’s history stretches back to the 1970s when it began life as the Information Retrieval Group of the Museums Association
  • The Museum Object Data Entry System (MODES) was launched in 1987 with immediate success
  • Museums Computer Group 1982
  • AVICOM, established in June 1991, is the International Committee for Audiovisual and New Image and Sound Technologies.
  • The Institute of Conservation was created in 2005 from the merger of disparate specialist conservation groups.

I joined the business of museum photography in, I think, 1984. I don’t keep a record of my own anniversary dates.  This was around the time of the high-water mark of analogue photography.

The top-level output for images in museums at this time was the fine art publication. Colour printing had achieved a high level of quality, as had press and publicity printing. Newspapers went into colour in 1986: Today, owned by Eddie Shah, was the first newspaper to pioneer computer photo-typesetting.

One of my memorable pictures from this time is a series I made of the Royal Observatory in Greenwich.

One is a shot of the Observatory buildings made at dusk with dark blue light still in the sky, artificial light on within the buildings and the exterior illuminated by flash powder bought from a theatrical suppliers in Covent Garden. The picture was made on 10×8 Ektachrome, with a 5×4 camera used alongside as a processing test. These were all processed by Rod Tidnam. And this image was entirely based on chemistry! There were chemicals for the light and chemicals for the processing and final image, the large-format colour transparency.

All photography in museum studios at this time was chemical.

The V&A, which I joined in 1993, was still using a mixture of 1940s technology for b&w but had just introduced its own E6 processing line for colour transparency. It soon became obvious to me that the b&w element of the service was essentially worthless. The use of it in the museum was a legacy of its historic and ancient cataloguing systems.

This old way of using images was a legacy, too, of the Courtauld Institute’s resistance to colour images in their teaching of the history of art. They still taught the history of painting in monochrome until the 1980s. They were naturally concerned about poor colour reproduction and did not trust it. This was true prior to 1975, when E6 was developed, but even after that colour was a moveable feast. Kodak tried to standardise things with its Q-Lab quality-assurance programme around 1990. It was partly as a result of Q-Lab that museum photography started to rely on the large-format colour transparency as its medium of choice.

The first members, the first generation, of AHFAP were, to my mind, chemists. This could be typified by Brian Tremain. Many members of the BM and the NMM will remember ‘Brian’s Brew’, his collection of developing chemicals kept in old film canisters. There were variants for all types of contrast development.

There is a photograph made by Brian of the oil painting ‘Death of Nelson’ by Arthur William Devis. It cannot be beaten for imaging quality, in either b&w or colour reproduction, both 10×8 negative and Ektachrome transparency.

I was asked once to re-photograph it, and considered the exercise pointless, because Brian’s image was so good. It is possible that digital imaging could now make an improvement, with a linear curve and a camera such as the Sinar CTM, but that has never been done. However, if you look at the image online it does not hold up. There are too many alterations in the web publishing process that kill it. Who knows what colour space it is in now? It holds up better when downloaded into Photoshop, but I was still only looking at a 300k JPEG. You have to see the large-format colour transparency, or for an even more sublime experience the 10×8 negative, to appreciate it fully. Of course, as this is stored within the NMM’s negative store, it is only available to one privileged person at a time. You will have to speak to Tina Warner.

The founding members of AHFAP were, I believe, ‘chemical’ photographers. They had worked for the whole of their careers in what is now known as analogue photography but is better called, in my opinion, chemical photography. Most of these founding members had retired by 1995 or thereabouts, the time when digital, or electronic, photography was starting to be adopted in museums.

My recollection is that 1995 was the time when we, at the V&A, started experimenting with electronic photography.

Our entry was via the Museum Picture Library. We installed a software database we called The Photo Catalogue. We had several thousand colour transparencies scanned to Photo CD so that visitors to the library and indeed the rest of the museum could search for images online. To use them for later reproduction they still had to go back to the analogue original.

It became clear to me that this was the best way forward when I calculated that digitally scanning transparencies and putting them into a database was cheaper than making black and white prints and sticking them into albums, a practice that had continued uninterrupted since 1856.

Somehow or other – in retrospect I cannot remember the whole sequence, and it doesn’t matter, as the process was inevitable – we moved very quickly over the next few years to digital photography. The stimulus was the development of museum websites. The first digital camera we had at the V&A was a Fujix DS-330. It was a rangefinder-type camera, cost around £1,000 and was a 1-megapixel camera. Images were transferred to the computer by a 3.5-inch floppy disc adaptor. I think I may have been the only person to buy one.

According to Wikipedia, the first museum to go ‘online’ was the Museum of the History of Science, Oxford, in 1995. I’m not sure if that is correct but it feels about the right time. About then most of the larger museums built their own websites and Museum Directors recognised that this was a good new way to make the collections visible to a new audience.

My own recollection is that it was not the technology as such that was forcing the change to digital imaging but a realisation that visibility for images could expand considerably and that efficiencies in workflow and production efficiency would improve.

Various internationally and nationally funded projects, mainly AHRC funded, started at this time to promote the use of large collections of digital images – images to tell stories about the collections. Early standards were proposed, a variety of formats made, and many short-lived websites created. Whether many of these early project websites still exist is irrelevant to us, but they did create an understanding of the new medium. They were great exercises in developing new work practices and in realising that image fulfilment could be speeded up. Once again individuals were asked to benchmark, sometimes even within the funding applications, so communication within AHFAP was at that time a good and necessary thing.

The second generation of AHFAP photographers at this time were hybrid chemical/electronic animals, myself included, turning from looking at the world upside down to staring at minuscule flickering lines on a screen. There is not much to miss about spending weeks in the darkroom.

Now many of those photographers who were in AHFAP are also retiring and leaving behind a third generation of purely electronic photographers.

However I do not believe that the technology has yet matured to anywhere near its potential. There is still plenty of improvement to be made in 2D imaging and a great deal in multi-media.

What it has done, though, is to bring Walter Benjamin’s prophecy to fruition.

‘The illiterate of the future will not be the man who cannot read the alphabet, but the one who cannot take a photograph.’ I wonder what level of literacy he was anticipating.

Museum collection management systems are now full of the work of visual illiterates, who think that because they can press the ‘photo’ button on their iPhone they can make a picture – and believing they can is probably worse. Sometimes, if you are a researcher, I can appreciate that any picture is better than no picture. But what is the impression of the museum when its shop window is full of poor and inadequate imagery? Poor exhibition and gallery displays are not tolerated. What is it that makes directors think that poor images do not do the same damage? I worry that some large collections will accept the standard of photography you might expect in the developing world.

Last year’s conference at the Wellcome was for me very encouraging, as it very clearly brought together photographers and computer graphics scientists.

As well as 3D imaging, which is still maturing, and may still have several decades to go—to me it does have inevitability to it for the accurate portrayal of solid objects and environments—there are also many other new developments.

I have seen proposals for lensless, aperture-only cameras, after-the-fact focusing, and movement amplification in video. This last was demonstrated in a TED Talk.

Michael Rubinstein zooms in on movement we cannot see and magnifies it by thirty or a hundred times. His ‘motion microscope’, developed at MIT, picks up subtle motion and colour changes in videos and amplifies them for the naked eye to see. The result: you can see a pulse in a wrist, or a baby kicking in its mother’s womb. There were also some photographs of tree rings converted into sound, described on Radio 4 on Tuesday.

But where do you get news of these new developments? You don’t see any of them described in the BJP. I think that the way to see what is developing now is in the New Scientist, TED Talks and suchlike places.

There are other web technologies also being developed but not yet appearing in museums or elsewhere that I have seen.

Microsoft Seadragon browsing was demonstrated at TED in 2007, and FABRIC browsing was developed in 2010. Why have these great new ways of browsing and searching museum images not been taken up? I think that there is a problem with museum web development.

It is no use looking to website editors to support a creative vision. How many of them crop images to suit their restricted page designs? I get really annoyed when I see the heads of sculptures cut off! They seem to be led by a desire to turn everything on museum websites into online games, seeking BBC1 and never getting to BBC4. When they do achieve some quality stories and videos, these are well hidden below ‘what’s on and what’s to buy’. They are more like Daily Mail colour supplements than TED Talks.

Where does this leave the new developments in computer graphics? Why haven’t museums adopted new browsing techniques for their collections? Why aren’t we seeing the promised technologies of semantic web searching? Lots of website front pages, especially the new BBC website, remind me of LEGO.

For the professional we have probably reached the equivalent of the large-format transparency as a means to high-quality print reproduction. We have the resolution, colour quality control in FADGI and Metamorfoze (the equivalent of Q-Lab), and the data transfer methods to get the images to the printer. But do we have the final display necessary for this? There is a trend in museums to reduce their fine art publications’ printed output – to reduce the volume of their branded publications. Where will this leave the opportunity for creativity in the museum photographic studio?

I believe that Cultural Heritage photographers must adopt the new and developing technologies in a search for new forms of creativity. The price of these is coming down; many can be achieved on standard DSLR cameras. The movement amplification software of Michael Rubinstein can be downloaded free and used on any PC.

If I were a museum photographer now I would prefer to work in a smaller museum where there is greater opportunity to try new things. The large museums are full of middle management who find it very easy to say no! In a small museum you have to get on with your own thing and need to have a larger degree of self-management.

To finish, I wonder where the next generation of CH photographers will come from. Will they be those with the skills to adopt these new imaging opportunities, able to script code and at one with online media? I am not sure that they will come from the traditional photographic colleges, which do not adequately teach the basic principles of photography. At this time it is, ironically, very useful to fully understand chemical photography, which is the basis of all of the digital imaging principles.

Courses such as SEAHA (the Centre for Doctoral Training in Science and Engineering in Arts, Heritage and Archaeology) may now be far better to undertake than photographic courses. These courses understand the opportunities coming with new imaging technology. They do not yet have a component of basic photography and a full understanding of lighting, but they will. Not only that, but lighting and scene composition could very well become post-processing issues.

So during the lifetime of AHFAP we have had:

First, Alpha or Analogue photographers; secondly, Beta or Bi-Technology photographers; thirdly, Gamma or Digital photographers who can now use linear curves, and, perhaps next, Delta photographers who will be able to record and show how cultural objects can change over time and in space.

The future of imaging should be grasped by a new breed of professionals who can make images, make the invisible visible, and show cultural objects in many new ways, including their change in space and time. I am sure that AHFAP will provide a forum to discuss this.

James Stevenson

October 2015