The digital life of a Hebrew manuscript

Authors: Kristin A. Phelps, former Senior Imaging Technician, British Library and Dr Adi Keinan-Schoonbaert, Digital Curator (Polonsky Fellow), British Library.

Picture your favourite book. Is it just the printed words on the pages that make it special? What about your notes and doodles in the margins? And the dog-eared corners marking the important parts? It may be one of the first things you pack if you are going on holiday, or the first item you unpack when you move house. Let’s be honest, your e-reader is great but there’s something about the physical copy that makes it irreplaceable.

The unique value of books is rooted in their dual nature. They are vehicles of information and, at the same time, they are three-dimensional objects. Books and manuscripts have always had a secret life as objects; they interact with the senses of those who touch them and forever show the evidence of human interaction with them. They have been cherished objects people have carried in both life and in death. Libraries not only stand as testaments to the importance of knowledge, but also to the value of safe-keeping printed books and handwritten manuscripts.

In the Library’s Sir John Ritblat Treasures Gallery, the public is invited to see a selection of the Library’s books and collection items. While carefully selected pages and folios are visible in the Gallery, it is impossible to examine an entire book as a single object. Visitors and scholars may be interested in the unique bindings and spines of these treasures, but they cannot hold an object, feel it, and observe its physical properties to get a sense of the whole book. In fact, visitors get only a small percentage of the whole ‘book experience’ – and for good reason: these fragile treasures need to be preserved for future generations. This is the seemingly unsolvable problem for objects in our collection: how can a viewer experience any object in its entirety?

Enter 3D imaging: a novel solution which allows viewers to examine an entire book or manuscript as a whole object. The technology also provides a digital record which aids the documentation and preservation of the item. Many museums currently use 3D imaging and modelling for their collection items. These models give website visitors (and gallery visitors) a view of objects as a whole, lending a somewhat tactile feel to items which are generally untouchable. The same is true of manuscripts held in libraries – a good example being the significant collection of Hebrew manuscripts held at the British Library.

Generously funded by The Polonsky Foundation, the Library has digitised (in two dimensions) 1,300 manuscripts as part of the Hebrew Manuscripts Digitisation Project (2013-16). Many of these newly digitised manuscripts are not readily available to the public due to their fragile condition. The project employs a digital curator (Polonsky Fellow) whose responsibility is to encourage the consultation of the project’s digital material for research and scholarship, and to promote fresh uses for these new digital research items.

What better way to re-examine some of this material than in full 3D? Under the direction of digital curator Adi Keinan-Schoonbaert, a small project was conceived to create 3D models of three of the Hebrew manuscripts: Add MS 4709 (a 15th-century CE Pentateuch), Or 1087 (a 15th-century CE Book of Esther), and Add MS 11831 (a 17th-century CE Scroll of Esther).


FIG 1: A Pentateuch from central Italy, 1486 CE. The name of one of its past owners is inscribed on the binding with gold letters and ornamental design (Salomon da Costa, 1719 CE; Add MS 4709)

 


FIG 2: A 15th-century CE Book of Esther (Or 1087)

The method chosen for 3D modelling is called photogrammetry – or Structure-from-Motion (SfM). In simple terms, a three-dimensional structure can be created from a sequence of two-dimensional images. Special software creates the third dimension by following the ‘motion’ of the camera around the object. Unlike laser scanning, this method is affordable and simple even for non-specialists. All that’s needed is a digital camera – even just a smartphone camera.
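To make the idea concrete, the two-view core of SfM can be sketched in Python with the open-source OpenCV library. This is purely an illustration of the principle – the project used dedicated software, described below, and the filenames and focal length here are invented:

    # Two-view Structure-from-Motion: match features, recover the camera
    # 'motion', and triangulate matched points into sparse 3D structure.
    import cv2
    import numpy as np

    img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
    img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

    # 1. Detect and match features shared by the two overlapping views.
    orb = cv2.ORB_create(5000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Estimate the relative camera pose (assumed intrinsics K).
    f, cx, cy = 2000.0, img1.shape[1] / 2, img1.shape[0] / 2
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # 3. Triangulate the matches into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points_3d = (pts4d[:3] / pts4d[3]).T  # one 3D point per surviving match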

In the case of the Library’s Hebrew manuscripts, we benefited from the Imaging Studio’s advanced photography and lighting equipment. The 3D modelling process began with taking photographs of each manuscript from different angles. The idea was to ensure that we had photographs covering the manuscript’s entire surface, with sufficient overlap. To do this, the manuscript was placed on a turntable and the camera was mounted on a tripod. We rotated the turntable in 10-15 degree increments, taking a photo at each position. After completing a 360-degree circle, the manuscript was turned over to its reverse side and the process was repeated.
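With the camera tethered, even the shutter releases can be scripted. Below is a hedged sketch of such a capture loop in Python, driving the open-source gphoto2 command-line tool; our studio used its own tethering setup, so treat the tool choice, step angle and filenames as illustrative assumptions only:

    # Sketch of a turntable capture loop using the gphoto2 CLI (an assumption;
    # not the Imaging Studio's actual tooling). One full circle per side.
    import subprocess

    STEP_DEGREES = 12           # within the 10-15 degree range; 30 shots per circle
    SIDES = ("recto", "verso")  # the manuscript is turned over between circles

    for side in SIDES:
        for shot in range(360 // STEP_DEGREES):
            input(f"{side}: turn table to {shot * STEP_DEGREES} degrees, press Enter")
            subprocess.run(
                ["gphoto2", "--capture-image-and-download",
                 "--filename", f"{side}_{shot:02d}.jpg"],
                check=True,
            )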

Once enough photos had been taken, the images were white-balanced and then masked, ready for the next stage. Masking involved making a copy of each image, outlining the object we wanted to model in image-editing software (such as Adobe Photoshop), then filling the selected object with white and the background with black.
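The masks themselves are simply black-and-white images. We drew ours by hand in Photoshop, but where the backdrop is reasonably even, a rough first pass can be automated and then tidied manually. A sketch of that idea with the OpenCV library, under the assumption of a plain dark background:

    # Generate a rough object mask: white = model this, black = ignore.
    import cv2

    img = cv2.imread("recto_00.jpg")  # hypothetical filename
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Otsu's threshold separates a light object from a dark backdrop.
    _, mask = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep only the largest connected region, i.e. the manuscript itself.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    mask[:] = 0
    cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)

    cv2.imwrite("recto_00_mask.png", mask)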

Once all the images and their masks had been loaded into the software (we used Agisoft PhotoScan), it identified which part of each image should be 3D modelled. The software then used the overlap between the photos to ‘stitch’ the sets of images together, forming a three-dimensional shape of the manuscript. We helped it along by manually selecting obvious markers visible in photos from both sets, so that the software could recognise shared points.
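PhotoScan also exposes these steps through a built-in Python scripting console, so the whole pipeline can be batched. The sketch below follows the 1.x-era API as we understand it; exact method names and parameters vary between versions, so treat it as an outline rather than a recipe:

    # Outline of the PhotoScan pipeline: align, densify, mesh, texture.
    import PhotoScan

    chunk = PhotoScan.app.document.addChunk()
    chunk.addPhotos(["recto_00.jpg", "recto_01.jpg"])  # hypothetical filenames

    # Attach the black-and-white masks so only the manuscript is modelled.
    chunk.importMasks(path="{filename}_mask.png", source=PhotoScan.MaskSourceFile)

    # 'Stitch' the overlapping photos: match features, then solve camera poses.
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy, filter_mask=True)
    chunk.alignCameras()

    # Densify the point cloud and build the final textured mesh.
    chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
    chunk.buildModel(surface=PhotoScan.Arbitrary)
    chunk.buildUV()
    chunk.buildTexture()
    PhotoScan.app.document.save("manuscript.psz")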

FIG 3: A 17th-century CE Esther scroll in an ivory case, 1.35 x 0.21 metres (Add MS 11831)

Once the models were complete, they were published to Sketchfab – an online platform that hosts 3D content – where they can be explored interactively. Sketchfab allows you to annotate models in order to give a structured narrative, or journey, through some of the important features. This can be seen in the model of the Scroll of Esther (Add MS 11831), whose annotations were written by the collection’s lead curator, Ilana Tahan. Curatorial annotations add yet another dimension to a digitised manuscript, facilitating a more naturally flowing learning experience for researchers and the public alike. These exciting results give users a more rounded view of the manuscripts and present new opportunities for engagement.
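Publishing can also be scripted: Sketchfab offers a REST upload API. A minimal sketch with the Python ‘requests’ library, using the documented v3 endpoint – we uploaded our models through the website, and the token, archive name and description below are placeholders:

    # Upload a zipped model (mesh + textures) to Sketchfab's v3 API.
    import requests

    API_TOKEN = "your-sketchfab-api-token"  # placeholder

    with open("manuscript.zip", "rb") as fh:
        response = requests.post(
            "https://api.sketchfab.com/v3/models",
            headers={"Authorization": f"Token {API_TOKEN}"},
            data={"name": "Scroll of Esther (Add MS 11831)",
                  "description": "3D model created by photogrammetry."},
            files={"modelFile": fh},
        )
    response.raise_for_status()
    print("Model uid:", response.json()["uid"])  # forms part of the model's URL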

 

 

Why does this matter? The ownership section of each of our catalogue records reminds us who has owned and read these manuscripts, and that they have had impressive histories stretching over hundreds of years. Books and manuscripts have most likely been bound and rebound several times, and they have travelled the world in the hands of different owners. Those owners from the past, like you, found their special book so important that they carried it with them. And now these manuscripts have a new chapter in their history as objects – not just physical, but also digital.


 

None Hath Refused:

Digitising the Protestation Returns at the Parliamentary Archives

Author: Simon Barnes, Digital Imaging Technician, Parliamentary Archives

We’re a digitisation team of two in the Parliamentary Archives, responsible for delivering the Archives’ public copying service, digitisation project work, and support for exhibitions and outreach. We handle on-demand requests from the public for copies of archives and photograph records which are about to go out on loan. We also do photography for exhibition panels, publicity, our web resources and social media.

The digitisation project work we do is essential to the Parliamentary Archives’ aim of increasing online access to our collections. The latest project we’ve been working on is the Protestation Returns. The Protestation Returns, dating from 1641-42, were ordered by the House of Commons and required all adult men to swear allegiance to the Protestant religion. The Returns were organised by parish and are the closest thing we have to a seventeenth-century census, significantly taking place at the start of a civil war that involved all levels of society and affected all countries in the British Isles and Ireland.

We work closely with our Collection Care colleagues, who help prepare the documents by doing a condition check, unbinding the Returns from their files and flattening any folded documents. This really speeds up the digitisation process and flags any documents which may need careful handling. Whilst the majority of the Returns are written on paper, a number are on parchment. In some cases individuals signed their own names on the Return, but more often an official wrote down the names and individuals made their mark. Some people refused to make the protestation, and this was duly noted, whilst widows (who became head of the household on the death of their husbands) also sometimes signed. So each parish produced its return in its own fashion, creating a somewhat varied collection of documents!

Our main challenge with this project was the highly variable dimensions and formats of the documents. Some Returns were completed on the back of the declaration, some were bound into booklets and some were recorded on thin, lengthy strips – some are very large, while others are tiny! We established early on that we would not be able to optimise the photography of each item by setting column height, lens choice and ppi individually; it would have been too lengthy a process. Nor could we set our Nikon D800 at one height and photograph everything with one setting, as there was too much variation, although the Nikon was quick to work with because we could use live view to line up the documents. We tested and developed three different settings which enabled us to digitise the majority of the collection, but inevitably some documents required individual settings. Part-way through the project we bought an IQ180 digital back and 55mm lens for our Phase One camera and decided to switch, as we could improve both quality and productivity. With 80 megapixels we could set the camera at one height and capture all our documents at 600ppi (see a time-lapse of my colleague Tim at work).
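The arithmetic behind that decision is simple and worth showing (a quick sketch; the pixel dimensions come from the IQ180’s published specification):

    # How large an area an IQ180 frame covers at a fixed 600ppi.
    SENSOR_PX = (10328, 7760)   # IQ180 pixel dimensions (80 megapixels)
    TARGET_PPI = 600

    width_in = SENSOR_PX[0] / TARGET_PPI    # ~17.2 inches
    height_in = SENSOR_PX[1] / TARGET_PPI   # ~12.9 inches
    print(f"Coverage at {TARGET_PPI}ppi: {width_in:.1f} x {height_in:.1f} inches")

Anything that fits within roughly 17 x 13 inches – which is most of the Returns – can therefore be captured at one column height without dropping below 600ppi.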

Just as much of a challenge has been finding the ability, time and motivation (!) to quality-assure all the images generated. We’ve followed a process of a first QA, followed by any necessary reprocessing or reshoots, a second QA, and then web conversion and watermarking. The images are then moved to the digital repository for permanent preservation. Low-resolution JPEGs are viewable via our online catalogue, and the Archivists and our IT department have developed a prototype Map Search, which allows users to search for the Returns we hold by area. So if you can trace your family tree back to the seventeenth century, and you have an idea where your relatives lived, you may be able to find them in the Protestation Returns.
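The web-conversion and watermarking step is routine image processing. A hedged sketch of the kind of derivative we mean, using the Python Pillow library – the Archives’ actual tools, sizes and watermark text are not shown here, and these are illustrative assumptions:

    # Make a low-resolution, watermarked JPEG from a preservation master.
    from PIL import Image, ImageDraw

    master = Image.open("return_0001.tif")  # hypothetical filename
    web = master.convert("RGB")
    web.thumbnail((1200, 1200))  # cap the long edge for the online catalogue

    draw = ImageDraw.Draw(web)
    draw.text((10, web.height - 30), "(c) Parliamentary Archives",
              fill=(255, 255, 255))

    web.save("return_0001_web.jpg", quality=80)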

We’re promoting the records and the digitisation project via social media and blogging, and are planning some outreach activities with regional archives. For the social media promotion we’ve picked out interesting watermarks and useful dates, noted where women are listed (they weren’t required to be) and where there are ‘recusants’ (refusals), and are highlighting some of the more interesting information and text we’ve discovered – some people were ‘not at home’ when they should have been making the protestation!

The photography is complete and the Returns are being ingested into the digital repository and made available through the map and online catalogue.

We’ve started on our next project, focussing on the Victorian MP and Parliamentary Estate photographer Benjamin Stone. It involves both digitising his historic photographs of the Palace of Westminster and its visitors, and taking some of our own pictures of the rooms today, to compare and contrast how things have changed. We have also been visiting the roof of the Victoria Tower and took a time-lapse of the view, where we were lucky to catch a raincloud, and a rainbow, passing over London.

Tim Banting digitising the Protestation Returns
Some volumes, the printed Protestation and an example of a return

A high-quality hybrid 35mm film stills digitising setup

Author: Max Browne

As any photographer knows after decades of shutter clicking, a huge backlog of archive work can develop. This often happens with personal work, which tends to get put aside until time is available to attend to it. In my case, not only do I have thousands of images to view and select, but technology has moved on from the darkroom origins of my film negs and transparencies: they now require digital scanning, which varies enormously in speed, cost and quality of operation and processing.

I am sure that many would agree that a significant reason for such a backlog of personal archive work is the onerous and painfully slow business of scanning 35mm slides and negatives using Minolta/Nikon/Canon box-style scanners with film-tray loading. This was never fun, as it was both laborious and technically sub-optimal. Note the past tense – it need not be any more!

If you consider replicating the film grain, image detail, colour and tones of the originals as a base standard, along with digitising as fast as you can load and focus them, then you may be interested in the 35mm scanning setup I have recently put together: a Nikon D800E camera, a 55mm Micro-Nikkor lens with PK-13 extension ring, an old Nikon F Slide Copier (via eBay) with a custom-made dovetail support post, a Lanparte fully adjustable DSLR baseplate with support bars/adaptor, and a MacBook Pro laptop (figs 1, 2). Using any suitable light source, this rig provides a fast raw NEF workflow at nearly 8K resolution, with either negative or positive screen images for optimising variables in real time on the laptop once captured. Additionally, if you work tethered, the camera ‘live view’ can display negatives as positives if you switch the computer screen mode to ‘invert’. In this way you can also use the system as an instant real-time viewer for 35mm negatives prior to capture, which is useful for both identification and assessment – especially if there are no contact sheets for reference. For this reason tethered capture of negatives is the more efficient way of working, since positive image tonality can be checked against a histogram and exposure adjusted as quickly as it can be for positive slides via the camera’s rear screen.

Fig.1

Fig.2

Perhaps I should add that this set-up is one for manual operation. Cleaning, loading, focusing, assessing and correctly exposing the originals need the practised eye of a photographer to get the best out of it. I don’t think it could be used successfully in any ‘auto’ mode. However, I’d be interested to know whether such a setup could be made more streamlined – perhaps with an autofocus macro lens.

In practice, focusing is achieved by sliding the camera on the baseplate (easy, as it is superbly engineered) rather than twisting the lens focus ring. A two-second shutter delay works well to minimise any operator vibration. I generally shoot film emulsion-side in, so that any slight bowing of the film follows the natural field curvature of the lens. This requires the extra step of a left-to-right reversal of the image later, but maintains more consistent centre-to-edge definition.

As you would expect, digitising positives is straightforward and intuitive. Negatives, on the other hand, take a little getting used to, since the tonalities are reversed – you need to adjust exposure to retain ‘shadow’ details that are actually highlights, and vice versa. Once captured, a great advantage is that adjustments in ‘camera raw’ can be made while viewing a positive image with the screen in ‘invert’ mode. Once the image files are opened in a Photoshop-type programme they can be ‘inverted’ to positive themselves, and the screen changed back to normal mode.
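The inversion itself is trivial to reproduce in software. A minimal sketch with the Python Pillow library, for a monochrome or already colour-corrected scan – real colour-negative work also needs the orange film-base cast removed, which this deliberately ignores:

    # Turn a captured negative into a positive and undo the left-right flip
    # that comes from copying the film emulsion-side in.
    from PIL import Image, ImageOps

    neg = Image.open("frame_12.tif").convert("RGB")  # hypothetical filename
    pos = ImageOps.invert(neg)   # reverse the tonalities: negative -> positive
    pos = ImageOps.mirror(pos)   # left-to-right reversal back to normal
    pos.save("frame_12_positive.tif")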

A long-awaited project of mine has been to digitise my collection of 1960s-1980s rock and roll gig images, and a great bonus is that many that were previously rejected on film are now usable in digital form after suitable manipulation. This enhances them as historical documentation as well as aesthetically. Many simply look better after digitising, which is not surprising considering the continuously variable club/concert/theatre lighting under which they were shot. Under- or over-exposed shots can now be made acceptable, as can problem images such as the classic unwanted ‘protrusion behind the head’ – which leads me to digress into a brief consideration of documentary ethics.

I am a freelance documentary hunter supplying captured images for client display, so my work ethics are pragmatic, not purist. If a subject has a floral arrangement growing out of their head, then one of the three of us (problem object, subject or me) must move in order to provide a non-distracting image. Such was the problem recently with an otherwise nice and historically interesting shot I have of Eric Clapton onstage in London, around 1980. Since Eric famously became teetotal a few years later, the beer shot provides something of a conversation piece. The intriguing but highly distracting object in the background was easily disposed of, after digitising, by some quick surgery in Photoshop (figs 3, 4). My fledgling website for these images is http://www.rockshots.co

Fig.3

Fig.4

Almost all the equipment is readily available, including the ageing but excellent Nikon F Slide Copier, bought for less than £100 on eBay. The exception is the small but very necessary dovetail post that connects the Slide Copier to the baseplate support bars and allows it to be adjusted. This was made for me by a local machine shop and again cost less than £100. The Lanparte adjustable baseplate unit is inexpensive and a joy to use, and is needed to update the otherwise obsolete Slide Copier, which will not fit most modern DSLR cameras because their fronts protrude further than those of the original SLR film cameras it was designed for. In short, this kind of rig gives these otherwise excellent copiers a new lease of life. If you are interested in a similar setup, I’d recommend acquiring these key items soon, before word gets out!

Do get in touch if you have any queries.

Max Browne, DigitisingArt.Co

The very helpful Lanparte UK agent can be found at http://www.fastforwardtime.co.uk

AHFAP Conference Keynote

Author: James Stevenson
Director
CHD Ltd

I worked in museums between 1983 and 2013, first at the National Maritime Museum and then for almost twenty years at the V&A. My talk is naturally influenced by my experience at these two institutions, and may be biased and not representative of photography in cultural history more widely. My career in museums has, then, followed that of AHFAP quite closely.

If AHFAP’s first meeting and founding in 1985 was not its actual conception, then the glint in its parent’s eye was 1982 – a year that helped create the need for an organisation such as AHFAP.

Some highlights from 1982 include:

  • The Commodore 64 was released in August of that year
  • WordPerfect for DOS was released
  • The Lotus 1-2-3 spreadsheet appeared (Excel followed in 1985)
  • The Sony Trinitron monitor
  • Apple made $1bn in sales
  • Philips made the first compact disc
  • Sony made the first audio CD player
  • Adobe was founded
  • Tron was released

and

  • the first computer virus was detected

I think that the development of technologies such as these helped form the idea of, and need for, an association such as AHFAP. The adoption of PC technology in museums created more immediate forms of communication (the demise of the typing pool occurred around this time), and in particular changed business reporting and accountability. Museums were not slow to adopt these tools, though the V&A was actually one of the last. This meant that it skipped some of the early errors and mistakes and jumped in at a later, more convenient point.

At the same time the use of technology allowed a more systematic auditing of collections and there was considerable improvement in collections management.

When this happened it was natural for managers and directors to ask, ‘What is happening elsewhere?’ and hence the development of benchmarking.

The founding of AHFAP may originally have been about meeting fellow professionals socially in pubs. The reality was to find out what each was doing, what problems they were facing and what solutions were being developed.

It is interesting to see when other associations in museums were founded:

  • The National Museum Directors’ Council 1929
  • The British Association of Paintings Conservator-Restorers 1943
  • Collections Trust’s history stretches back to the 1970s when it began life as the Information Retrieval Group of the Museums Association
  • The Museum Object Data Entry System (MODES) was launched in 1987 with immediate success
  • Museums Computer Group 1982
  • AVICOM, established in June 1991, is the International Committee for Audiovisual and New Image and Sound Technologies.
  • The Institute of Conservation was created in 2005 from the merger of disparate specialist conservation groups.

I joined the business of museum photography in, I think, 1984 – I don’t keep a record of my own anniversary dates. This was around the time of the high-water mark of analogue photography.

The top-level output for images in museums at this time was the fine art publication. Colour printing had achieved a high level of quality, as had press and publicity work. Newspapers went into colour in 1986; Today, owned by Eddie Shah, was the first newspaper to pioneer computer photo-typesetting.

One of my memorable pictures from this time is a series I made of the Royal Observatory in Greenwich.

One is a shot of the Observatory buildings made at dusk, with dark blue light still in the sky, artificial light on within the buildings, and the exterior illuminated by flash powder bought from a theatrical supplier in Covent Garden. The picture was made on 10×8 Ektachrome, with a 5×4 camera used alongside as a processing test. These were all processed by Rod Tidnam. And this image was entirely based on chemistry: there were chemicals for the light, and chemicals for the processing and the final image, the large-format colour transparency.

All photography in museum studios at this time was chemical.

The V&A, which I joined in 1993, was still using a mixture of 1940s technology for b&w but had just introduced its own E6 processing line for colour transparency. It soon became obvious to me that the b&w element of the service was essentially worthless. The use of it in the museum was a legacy of its historic and ancient cataloguing systems.

This old way of using images was also a legacy of the Courtauld Institute’s resistance to colour images in its teaching of the history of art. It still taught the history of painting in monochrome into the 1980s, naturally concerned about poor colour reproduction, which it did not trust. This was fair prior to 1975, when E6 was developed, but even after that colour was a moveable feast. Kodak tried to standardise things with its Q-Lab quality-assurance scheme around 1990. It was partly as a result of Q-Lab that museum photography came to rely on the large-format colour transparency as its medium of choice.

The first members – the first generation – of AHFAP were, to my mind, chemists. This could be typified by Brian Tremain. Many members of the BM and the NMM will remember ‘Brian’s Brew’, his collection of developing chemicals kept in old film canisters; there were variants for all types of contrast development.

There is a photograph made by Brian of the oil painting ‘Death of Nelson’ by Arthur William Devis. It cannot be beaten for imaging quality in either b&w or colour reproduction, on both the 10×8 negative and the Ektachrome transparency.

I was once asked to re-photograph it, and considered the exercise pointless because Brian’s image was so good. It is possible that digital imaging could now improve on it, with a linear curve and a camera such as the Sinar CTM, but that has never been done. However, if you look at the image online it does not hold up: there are too many alterations in the web publishing process that kill it. Who knows what colour space it is in now? It holds up better when downloaded into Photoshop, but even then I was only looking at a 300k JPEG. You have to see the large-format colour transparency – or, for an even more sublime experience, the 10×8 negative – to appreciate it fully. Of course, as this is stored within the NMM’s negative store, it is only available to one privileged person at a time. You will have to speak to Tina Warner.

The founding members of AHFAP were I believe ‘chemical’ photographers. They had worked for the whole of their careers in what is now known as analogue photography, but is better called, in my opinion, chemical photography. Most of these founding members had retired by 1995 or thereabouts, the time when digital, or electronic photography was starting to be adopted in museums.

My recollection is that 1995 was the time when we, at the V&A, started experimenting with electronic photography.

Our entry was via the Museum Picture Library. We installed a software database we called The Photo Catalogue, and had several thousand colour transparencies scanned to Photo CD so that visitors to the library, and indeed the rest of the museum, could search for images online. To use an image for reproduction, though, you still had to go back to the analogue original.

It became clear to me that this was the best way forward when I calculated that digitally scanning transparencies and putting them into a database was cheaper than making black and white prints and sticking them into albums, a practice that had continued uninterrupted since 1856.

Somehow or other (in retrospect I cannot remember the whole sequence, and it doesn’t matter, as the process was inevitable) we moved very quickly over the next few years to digital photography. The stimulus was the development of museum websites. The first digital camera we had at the V&A was a Fujix DS-330: a rangefinder-type camera costing around £1,000, with a 1-megapixel sensor. Images were transferred to the computer by a 3.5-inch floppy disc adaptor. I think I may have been the only person to buy one.

According to Wikipedia, the first museum to go ‘online’ was the Museum of the History of Science, Oxford, in 1995. I’m not sure if that is correct but it feels about the right time. About then most of the larger museums built their own websites and Museum Directors recognised that this was a good new way to make the collections visible to a new audience.

My own recollection is that it was not the technology as such that forced the change to digital imaging, but a realisation that the visibility of images could expand considerably and that workflow and production efficiency would improve.

Various internationally and nationally funded projects, mainly AHRC-funded, started at this time to promote the use of large collections of digital images – images to tell stories about the collections. Early standards were proposed, a variety of formats made, and many short-lived websites created. Whether many of these early project websites still exist is irrelevant to us, but they did create an understanding of the new medium. They were great exercises in developing new working practices and in realising that image fulfilment could be speeded up. Once again individuals were asked to benchmark, sometimes even within the funding applications, so communication within AHFAP was at that time a good and necessary thing.

The second generation of AHFAP photographers were hybrid chemical/electronic animals, myself included, turning from looking at the world upside down on a ground glass to staring at minuscule flickering lines on a screen. There is not much to miss about spending weeks in the darkroom.

Now many of those photographers who were in AHFAP are also retiring and leaving behind a third generation of purely electronic photographers.

However I do not believe that the technology has yet matured to anywhere near its potential. There is still plenty of improvement to be made in 2D imaging and a great deal in multi-media.

What it has done, though, is to bring Walter Benjamin’s prophecy to fruition.

‘The illiterate of the future will not be the man who cannot read the alphabet, but the one who cannot take a photograph.’ I wonder what level of literacy he was anticipating.

Museum collection management systems are now full of the work of visual illiterates who think that, because they can press the ‘photo’ button on their iPhone, they can make a picture. They can’t; but they think they can, which is probably worse. Sometimes I can accept that any picture is better than no picture, if you are a researcher. But what impression does a museum give when its shop window is full of poor and inadequate imagery? Poor exhibition and gallery displays are not tolerated; what is it that makes directors think that poor images are? I worry that some large collections will accept the standard of photography you might expect in the developing world.

Last year’s conference at the Wellcome was for me very encouraging, as it very clearly brought together photographers and computer graphics scientists.

As well as 3D imaging – which is still maturing, and may have several decades to go, though to me it has an inevitability about it for the accurate portrayal of solid objects and environments – there are also many other new developments.

I have seen proposals for lens-less, aperture-only cameras, for after-the-fact focusing, and for movement amplification in video. This last was demonstrated in a TED Talk.

Michael Rubinstein zooms in on movement we cannot see and magnifies it by thirty or a hundred times. His ‘motion microscope’, developed at MIT, picks up subtle motion and colour changes in videos and amplifies them for the naked eye to see. The result: you can see a pulse in a wrist, or a baby kicking in its mother’s womb. (There were also photographs of tree rings converted into sound described on Radio 4 on Tuesday.)

But where do you get news of these new developments? You don’t see any of them described in the BJP. I think that the way to see what is coming is now the New Scientist, TED Talks and such places.

There are other web technologies being developed too, though not yet appearing in museums or anywhere else that I have seen.

Microsoft’s Seadragon browsing was demonstrated at TED in 2007; FABRIC browsing was developed in 2010. Why have these great new ways of browsing and searching museum images not been taken up? I think that there is a problem with museum web development.

It is no use looking to website editors to support a creative vision. How many of them crop images to suit their restricted page designs? I get really annoyed when I see the heads of sculptures cut off! They seem to be led by a desire to turn everything on museum websites into online games, seeking BBC1 and never getting to BBC4. When they do achieve some quality stories and videos, these are well hidden below ‘what’s on and what’s to buy’. They are more like Daily Mail colour supplements than TED Talks.

Where does this leave the new developments in computer graphics? Why haven’t museums adopted new browsing techniques for their collections? Why aren’t we seeing the promised technologies of semantic web searching? Lots of website front pages, especially the new BBC website, remind me of LEGO.

For the professional, we have probably reached the equivalent of the large-format transparency as a means to high-quality print reproduction. We have the resolution, colour quality control in FADGI and Metamorfoze (the equivalents of Q-Lab), and the data transfer methods to get the images to the printer. But do we have the final display necessary for this? There is a trend in museums to reduce their fine art publications’ printed output, to reduce the volume of their branded publications. Where will this leave the opportunity for creativity in the museum photographic studio?

I believe that Cultural Heritage photographers must adopt the new and developing technologies in a search for new forms of creativity. Their price is coming down, and many can be used with standard DSLR cameras. Michael Rubinstein’s movement amplification software can be downloaded free and used on any PC.

If I were a museum photographer now, I would prefer to work in a smaller museum, where there is greater opportunity to try new things. The large museums are full of middle managers who find it very easy to say no! In a small museum you have to get on with things yourself and need a larger degree of self-management.

To finish, I wonder where the next generation of CH photographers will come from. Will they be those with the skills to adopt these new imaging opportunities, able to script code and at one with online media? I am not sure that they will come from the traditional photographic colleges, which do not adequately teach the basic principles of photography. Ironically, at this point it is very useful to understand chemical photography fully, as it is the basis of all digital imaging principles.

Courses such as SEAHA (the Centre for Doctoral Training in Science and Engineering in Arts, Heritage and Archaeology) may now be far better to undertake than photographic courses. These courses understand the opportunities coming with new imaging technology. They do not, however, yet include a component of basic photography and a full understanding of lighting – but they will. Not only that, but lighting and scene composition could very well become post-processing issues.

So during the lifetime of AHFAP we have had:

First, Alpha or Analogue photographers; secondly, Beta or Bi-Technology photographers; thirdly, Gamma or Digital photographers who can now use linear curves, and, perhaps next, Delta photographers who will be able to record and show how cultural objects can change over time and in space.

The future of imaging should still be grasped by a new breed of professionals who can make images, make the invisible visible, and show cultural objects in many new ways, including how they change in space and time. I am sure that AHFAP will provide a forum to discuss this.

James Stevenson

October 2015