
Guides to Quality in Visual Resource Imaging

4. Measuring Quality of Digital Masters
Franziska Frey
© 2000 Council on Library and Information Resources

1.0 Introduction
1.1 Building Visual Literacy
1.2 Looking at pictures on a screen
1.3 Key Question: Scan from Original or Intermediate?
2.0 Visual Attributes Associated with Quality
2.1 First Things to Check
2.2 Subjective Evaluations of Image Quality
3.0 Objective Technical Attributes Associated with Quality
3.1 Tone Reproduction
3.2 Color Reproduction
3.3 Detail Reproduction
3.4 Noise
4.0 Measures
4.1 How to Judge the Numbers after Scanning
4.2 Inclusion of Targets
5.0 Image Processing
5.1 Types of Image Processing
5.2 Fitting Image Processing into the Workflow

1.0 Introduction

The purpose of this guide is to identify the features used to define and measure the technical qualities of a digital master in relation to the original from which it is reproduced and that it is intended to represent. Some parts of this discussion may identify qualities whose implications are not yet completely understood. Other parts identify measures that are needed or are under development and thus are not yet commercially available.

After the photographs have been scanned, the technical quality of the files must be judged. This is different from benchmarking the scanning system, because one is not checking the performance of the system, but rather whether the masters meet the requirements set at the beginning of the project. However, some of the checks might be the same as or similar to those described in Guide 2.

1.1 Building Visual Literacy

Judging image quality is a complex task. The viewer has to know what he or she is looking for. The visual literacy required for looking at conventional images has to be translated for digital images (Ester 1990; 1994). A great deal of research must be done before it will be possible to fully understand how working with images as they appear on a monitor differs from working with original photographs. In addition, the people checking digital images often have different professional backgrounds than those who look at conventional photographs. To ensure consistency and coherence in the files produced, individuals involved in such a project must be trained before they begin their work. Training is needed, for example, in such areas as checking tonality of an image and checking sharpness. A good starting point is to have a person with a visual background as a member of the project team.

1.2 Looking at Pictures on a Screen

In most cases, the first access to the images occurs on a monitor. Very few studies have looked at the level of quality needed for viewing digital reproductions on a screen.

To quote Michael Ester (1990):

The selection of image quality has received little attention beyond a literal approach that fixes image dimensions at the display size of a screen. The use of electronic images has scarcely transcended the thinking appropriate to conventional reproduction media. No single level of image resolution and dynamic range will be right for every application. Variety still characterizes current photographic media: different film stocks and formats each have their place depending on the intended purpose, photographic conditions, and cost of the photograph. Perceived quality, in the context of image delivery, is a question of users' satisfaction within specific applications. Do images convey the information that users expect to see? What will they tolerate to achieve access to images? The ability of a viewer to discriminate among images of different quality is also a key ingredient in the mix.

It is helpful to ask users whether their expectations are met when comparing the digital master with the photographic original. In the best of cases, there should be no difference in the appearance of the two.

To achieve this goal, one must control the viewing environment. A common problem when using different computer systems or monitors is that the images look different when displayed on the various systems. Systems should be set up and calibrated carefully. This is often not done properly, and problems ensue. Moreover, even when systems are calibrated, measurements may not be taken correctly.

The best way to view a monitor is under dim illumination that has a lower correlated color temperature than that of the monitor. This reduces veiling glare, increases the monitor dynamic range, and enables the human eye to adapt to the monitor. This condition results in the most aesthetically pleasing monitor images. The situation gets more problematic if originals and screen images are viewed side by side, because in this case the observer is not allowed to adapt to each environment individually. It is a good idea to put a piece of tape over the monitor's brightness and contrast controls after calibration and to maintain consistent lighting conditions. Once calibrated, the monitor should need recalibration only on a monthly basis, or whenever conditions change. Comparing originals and screen images requires a suitable light booth or light table. It is important that the intensity and color temperature of such devices be regulated to match that of the monitor.

Monitor viewing conditions, as described in the soon-to-be-published Viewing Conditions for Graphic Technology and Photography (ISO 3664), will require the following:

  • The chromaticity of the white displayed on the monitor should approximate that of D65. The luminance level of the white displayed on the monitor shall be greater than 75 cd/m² and should be greater than 100 cd/m².
  • When measured in any plane around the monitor or observer, the level of ambient illumination shall be less than 64 lux and should be less than 32 lux. The color temperature of the ambient illumination shall be less than or equal to that of the monitor white point.
  • The area immediately surrounding the displayed image shall be neutral, preferably gray or black, to minimize flare, and of approximately the same chromaticity as the white point of the monitor.
  • The monitor shall be situated so there are no strongly colored areas (including clothing) that are directly in the field of view or that may cause reflections in the monitor screen. All walls, floors, and furniture in the field of view should be gray and free of posters, notices, pictures, wording, or any other object that may affect the viewer's vision.
  • All discernible sources of glare should be avoided because they significantly degrade the quality of the image. The monitor shall be situated so that no illumination sources such as unshielded lamps or windows are directly in the field of view or are causing reflections from the surface of the monitor and so that there is an adequate shield or barrier to prevent "spill" from these devices from striking the monitor.
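The "shall" and "should" criteria above can be collected into a simple screening routine. The following Python sketch is illustrative only; the function name, argument names, and report format are choices made for this example, not part of ISO 3664:

```python
def check_monitor_conditions(white_luminance_cdm2, ambient_lux,
                             ambient_cct_k, monitor_white_cct_k):
    """Check measured values against the monitor viewing criteria
    quoted above ('shall' = requirement, 'should' = recommendation)."""
    report = {
        # White luminance: shall exceed 75 cd/m^2, should exceed 100 cd/m^2
        "luminance_required": white_luminance_cdm2 > 75,
        "luminance_recommended": white_luminance_cdm2 > 100,
        # Ambient illumination: shall be below 64 lux, should be below 32 lux
        "ambient_required": ambient_lux < 64,
        "ambient_recommended": ambient_lux < 32,
        # Ambient colour temperature shall not exceed the monitor white point
        "ambient_cct_ok": ambient_cct_k <= monitor_white_cct_k,
    }
    report["passes_requirements"] = (report["luminance_required"]
                                     and report["ambient_required"]
                                     and report["ambient_cct_ok"])
    return report

# Example: 110 cd/m^2 white, 20 lux ambient at 5000 K, D65-like monitor white
r = check_monitor_conditions(110, 20, 5000, 6500)
```

A routine like this does not replace proper measurement with a photometer and colorimeter; it only records whether the measured numbers fall inside the quoted limits.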

1.3 Key Question: Scan from Original or Intermediate?

One frequent question regarding the reproduction quality of the digital master is whether scans should be made from the original or an intermediate. There are advantages and disadvantages to each approach. Because every generation of photographic copying involves some quality loss, using intermediates inherently implies some decrease in quality.

This leads to the question of whether the negative or the print should be used for digitization, assuming both are available. Quality will always be best if the first generation of an image (i.e., the negative) is used. However, there may be substantial differences between the negative and the print. This is particularly true in fine-arts photography. Artists often spend a great deal of time in the darkroom creating their prints. The results of this work are lost if the negative, rather than the print, is scanned, and the outcome of the digitization will be disappointing. Moreover, the quality of negatives varies significantly: one might show extreme contrast and the next might be relatively flat. Care has to be taken to translate these characteristics into the digital master. For example, in the case of flat negatives, the bit-depth of the scanner must be high enough to discriminate between the different levels.

2.0 Visual Attributes Associated with Quality

The visual characteristics of images and how these characteristics can be achieved on different systems are important parameters to consider when assessing the quality of the digital master. Users' computer platforms, color management systems, calibration procedures, color space conversions, and output devices will vary greatly. Almost every institution has a different setup for image access. This makes it more challenging to pick the right parameters and to make sure they remain useful over time. Ideally, all of the chosen parameters should be tied to well-documented standards to make it possible to take images safely into the future.

Furthermore, it is important that images always be checked on the output device they are intended for. Images that are intended for print should be judged on the print and not only on the monitor. This is because viewers accept lower quality when they judge an image on the screen than they do when viewing an actual print.

2.1 First Things to Check

2.1.1 Visual Sharpness

The first attribute to check is the sharpness of the images that have been scanned. Looking at the full image on the screen, the viewer might think the image is sharp. However, when the viewer zooms into the image, it might become obvious that the image has not been scanned with optimal focus. To evaluate sharpness, images should be viewed on the monitor at 100 percent (i.e., one pixel on the screen is used to represent each captured pixel of the image). The evaluation should include an area of the image that depicts details and edges.

There are different reasons why the scanned image may not be sharp. With a flatbed scanner, the mechanics holding the optics might be stuck. This would produce an out-of-focus scan. When using a camera with a helical focus mechanism on a vertical copy stand, it is important to prevent the lens from defocusing. This can result from the combined effects of gravity and the thinning of lubricants in the helical mechanism. Precautions are especially important if a "hot" source of illumination (e.g., quartz, tungsten, or HMI lights) is used. A focus lock should be devised if it is not part of the original lens design. In addition, some scanning approaches include the use of a glass plate to flatten the original. This glass plate might not have been reset into the correct position after the originals were placed. In the case of digital cameras mounted on a copy stand, the image plane and the object plane might not be 100 percent horizontal; this would cause a blurred image. In any of these cases, the image should be rescanned.

2.1.2 Image Completeness (Cropping)

While looking at an image on the monitor, one must also ascertain that the whole image area has been scanned and that no part of it has been cropped accidentally. Often, an area larger than the image itself is being scanned. The area that does not contain any image information (e.g., a black frame) will have to be cropped after scanning. There are ways to automate this processing step.

2.1.3 Image Orientation

The image orientation has to be checked. Laterally reversed images are often a problem because it is not always easy to differentiate emulsion and base in old photographic processes. If necessary, the image will have to be flipped. It is advisable to scan the image with the correct orientation in order to minimize processing of the digital master, and to ensure maximal sharpness.

2.1.4 Skew

One must determine whether images have been scanned with a skew. Skewing occurs when the originals have not been placed squarely on the scanner. Depending on the angle of the skew and the image quality desired, it might be better to rescan the image instead of rotating it. Rotating introduces artifacts that will decrease image quality.

2.1.5 Flare

It is especially important to control flare and "ghosting" and to examine each scan carefully for these problems. They are most likely to occur when light areas of the original object are adjacent to very dark areas. For this reason, white margins of the original that are not necessary to depict in the digital file should be masked or covered on the original before it is scanned.

2.1.6 Artifacts

Definition and implications

Image artifacts are defects that have been introduced in scanning, such as dropout lines, dropout pixels, banding, nonuniformity, color misregistration, aliasing, and contouring. (See Guide 2.) It is important to check for these artifacts, which can be consistent from image to image. Another form of artifact is a compression artifact. It will depend on the compression scheme used, the level of compression, and the image information.

How to determine artifacts

Artifacts can be seen by carefully looking at the images on a screen; for the evaluation, images should be viewed on the monitor at 100 percent. Dropout lines, dropout pixels, and banding can be seen best in uniform areas of the image. These types of artifacts are difficult to correct because they are introduced by the sensor or by the connection between the scanner and the CPU. They will in most cases already have appeared during initial tests of the imaging system. Compression artifacts can be seen in different areas of the image and are image-dependent.
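Some of these artifacts can also be screened for numerically. As a rough illustration (the function name and the threshold of 10 counts are arbitrary choices for this sketch, not values from the guides), dropout lines in a nominally uniform area can be flagged as rows whose mean level deviates sharply from the rest:

```python
import numpy as np

def find_dropout_lines(uniform_area, threshold=10):
    """Flag rows in a nominally uniform area whose mean level deviates
    sharply from the area's median row level; a crude screen for
    dropout lines and banding (threshold in digital counts)."""
    row_means = uniform_area.mean(axis=1)
    reference = np.median(row_means)
    return np.flatnonzero(np.abs(row_means - reference) > threshold)

# Simulated uniform gray patch with one dropout line at row 7
area = np.full((20, 100), 128, dtype=float)
area[7, :] = 0
bad_rows = find_dropout_lines(area)
```

Transposing the array (or using axis=0) screens for vertical dropout columns in the same way; a visual check at 100 percent remains the final arbiter.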

2.2 Subjective Evaluations of Image Quality

2.2.1 Definition of Subjective Image Quality

Subjective image quality is determined by human judgment. Stimuli that do not have measurable physical quantities can be evaluated using psychometric scaling test methods. The stimuli are rated according to the reaction they produce in human observers. Psychometric methods give indications about response differences. Scaling tools to measure subjective image quality have been available only for the last 25 to 35 years (Gescheider 1985).

In most cases, subjective evaluation does not include psychophysical testing but simply entails making the first evaluation of a scanned image by viewing it on a monitor. The viewer decides whether the image looks pleasing and fulfills the goals that have been stated at the beginning of the scanning project. This is important, because human judgment decides the final acceptability of an image. It should be emphasized, however, that subjective quality control must be done on calibrated equipment in a standardized viewing environment. Images might have to be transformed for monitor viewing. If images are intended to be printed, subjective quality control has to be done on the print, because, as mentioned earlier, the viewer is more forgiving when judging quality on a monitor.

2.2.2 What to Look for

Tone reproduction and color reproduction of the image must also be checked. These attributes are valid only for a particular viewing or output device. In addition, they depend on the rendering intent that has been set at the beginning of the project. If high bit-depth color data are being archived (i.e., higher than the bit-depth of the viewing or output device) and the rendering intent has been determined, an access file will have to be created at this stage. The file must be created on the viewing device currently being used. Several rendering intents can be chosen (Frey and Süsstrunk 1996):

  • The images have been scanned with the intent of matching the appearance of the original photographic image. The digital image can be evaluated by visually comparing the original with a reproduction. Care has to be taken that the monitor is calibrated and has a contrast range similar to that of the original.
  • Many collections contain photographs that were incorrectly exposed or processed. Some may have a color cast or be over- or underexposed; others may have the wrong contrast. In such cases, the scanner operator makes decisions about tone and color reproduction. The subjective evaluation of these scanned images will show whether those decisions led to a pleasing image that fulfills expectations.
  • In the case of older color photographs that are faded, the goal might be to render the original, unfaded state of the photograph. Visual inspection might include the use of additional information (e.g., objects that are depicted on the photograph that are also available in the object collection of the museum, or known things such as grass).

2.2.3 What to Be Careful About

The most important point is to use a well-calibrated monitor under controlled viewing conditions. The tools used to compare the digital master with the original (e.g., a viewing booth or light box) need to be controllable and meet standards. In addition, one has to be well aware of any rendering requirements that have been established.

3.0 Objective Technical Attributes Associated with Quality

To achieve reproducibility and coherence, one must also include objective parameters in the evaluation of the digital master. Objective image quality is evaluated by means of physical measurements of image properties. This evaluation process is different from the benchmarking of the scanning system; however, the same tools are used for it. This step ensures that the established requirements, set after looking carefully at the original materials and at the users and the usage of the digital files, have been met. It ensures the quality of the digital masters and helps justify the investment being made (Frey and Süsstrunk 1996; Frey 1997; Frey and Süsstrunk 1997; Dainty and Shaw 1974).

Quantification of objective parameters for imaging technologies is a recent development. Theoretical knowledge and understanding of the parameters involved are available (Gann 1999), but the targets and tools needed to objectively measure them are still not available to the practitioner in the field. Furthermore, in most cases, the systems being used for digital imaging projects are open systems (i.e., they include modules from different manufacturers). Therefore, the overall performance of a system cannot be predicted on the basis of the manufacturers' specifications, because the different components influence each other. An additional hurdle is that more and more process steps are done in software, yet limited information about these processes is available to users.

It should be kept in mind that scanning for an archive is different from scanning for prepress purposes. In the latter case, the variables of the scanning process are well known, and scanning parameters can be chosen accordingly. If an image is scanned for archival purposes, neither the future use of the image nor the impact of technological advances is known. Decisions concerning the quality of archival image scans are, therefore, critical.

Most of the available scanning technology is still based on the model of immediate output on an existing output device, with the original available during the reproduction process. The intended output device determines spatial resolution and color mapping. Depending on the quality criteria of the project, a more sophisticated system and greater operator expertise may be needed to successfully digitize a collection in an archival environment where the future output device is not yet known. In either case, the parameters that have been chosen and defined need to be carefully evaluated in the digital master.

3.1 Tone Reproduction

3.1.1 Definition and Implications

Tone reproduction is the matching, modifying, or enhancing of output tones relative to the tones of the original document. It refers to the degree to which an image conveys the luminance ranges of an original scene (or, in the case of reformatting, of an image to be reproduced). It is the single most important aspect of image quality. Because all the components of an imaging system contribute to tone reproduction, it is often difficult to control. If the tone reproduction of an image is right, users will generally accept it, even if the other attributes are not ideal.

Evaluating the tone reproduction target will show how linearly the system works, e.g., with respect to density values. Linearity, in terms of scanner output in this case, means that the relationship of tonal values of the image is not distorted.

Reproducing the gray scale correctly does not necessarily result in optimal reproduction of the images; however, if the gray scale is incorrect, the image will not look good. The gray scale is used to protect the archive's investment in the digital scans. Having a calibrated gray scale associated with the image not only makes it partly possible to go back to the original stage after transformations but facilitates the creation of derivatives.

The most widely used values for bit-depth equivalency of digital images are 8 bits per pixel for monochrome images and 24 bits for color images. An eight-bit-per-color scanning device output might be sufficient for visual representation on today's output devices, but it might not capture all the tonal subtleties of the original. To accommodate all kinds of originals with different dynamic ranges, the initial quantization on the charge-coupled device (CCD) array side must be larger than eight bits.

CCDs work linearly to intensity. To scan images with a large dynamic range, 12 to 14 bits are necessary on the input side. If these bits are available to the user and can be saved, it is said that one has "access to the raw scan."

It is often possible to get only eight-bit data out of the scanner. The higher-bit file is reduced internally. This is often done nonlinearly (nonlinear to intensity, but linear in lightness or brightness or density). A distribution of the tones linear to the density of the original leaves headroom for further processing but will in most cases need to be processed before viewing.
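As an illustration of such a nonlinear reduction, the following Python sketch gamma-encodes linear 12-bit samples into 8 bits. The gamma value of 1/2.2 and the array sizes are assumptions made for this example; real scanners apply their own tone curves (linear in lightness, brightness, or density, as noted above):

```python
import numpy as np

def reduce_to_8bit(raw, bits_in=12, gamma=1/2.2):
    """Reduce linear high-bit scanner data to 8 bits nonlinearly.
    Gamma encoding spreads more of the 256 output levels across the
    shadows, where a reduction linear to intensity would starve them."""
    max_in = 2**bits_in - 1
    linear = raw / max_in                  # normalize to 0..1, linear to intensity
    encoded = linear ** gamma              # nonlinear (gamma) encoding
    return np.round(encoded * 255).astype(np.uint8)

raw = np.array([0, 256, 1024, 2048, 4095])   # linear 12-bit sample values
out = reduce_to_8bit(raw)
```

The key point the example makes is that the mapping is monotonic but not linear: a shadow value of 256 (about 6 percent of full scale) lands well above 6 percent of 255 in the encoded file.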

Operator judgments regarding color and contrast cannot be reversed in a 24-bit RGB color system. Any output mapping different from that of the archived image must be considered. On the other hand, saving raw scanner data of 12 or 16 bits per channel with no tonal mapping can create problems for future output if the scanner characteristics are not well known and profiled.

As an option, a transformation can be associated with the raw scanner data to define the pictorial intent that was chosen at the time of capture. However, no currently available software allows one to define the rendering intent of the image in the scanner profile. The user usually sets rendering intent during output mapping. Software is available that allows the user to modify the scanner profile and to create "image profiles." That process is as work-intensive as regular image editing with the scanner or image processing software. Some of the profile will be read and used by the operating system, some by the application; this depends on how the color management is implemented.

Because the output is not known at the time of archiving, it is best to stay as close as possible to the source, i.e., the scanning device. In addition, scanning devices should be well characterized spectrally, and the information should be readily available from the manufacturers.

3.1.2 How to Measure Tone Reproduction

Tone reproduction is applicable only if an output device is chosen to reproduce the images. Therefore, for objective testing one should refer to testing the Opto-Electronic Conversion Function (OECF) as explained in Guide 2 (ISO 14524/FDIS, ISO/TC42 January 1999).

Because the data resulting from the evaluation of the tone reproduction target are the basis for all subsequent parameter evaluations, it is important that this test be done carefully. In cases where data are reduced to eight bits, the OECF data provide a map for linearizing the data to intensity by applying the reverse OECF function. This step is needed to calculate all the other parameters. In the case of 16-bit data, linearity to transmittance and reflectance are checked with the OECF data. Any processing to linearize the data to density will occur later.
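Applying the reverse OECF can be sketched as a table lookup. In the following Python example, the reflectance and digital-value pairs are invented numbers standing in for measured OECF data from the gray-scale target:

```python
import numpy as np

# Hypothetical measured OECF pairs: patch reflectance vs. the 8-bit
# digital value the scanning system produced for that patch.
reflectance   = np.array([0.01, 0.05, 0.10, 0.20, 0.40, 0.70, 0.90])
digital_value = np.array([  12,   45,   68,  102,  150,  205,  235])

def linearize(pixels):
    """Map 8-bit digital values back to scene reflectance (linear to
    intensity) by inverting the measured OECF with piecewise-linear
    interpolation between the measured patch values."""
    return np.interp(pixels, digital_value, reflectance)

lin = linearize(np.array([68, 150]))
```

Once the data are linear to intensity in this way, the other objective parameters (MTF, noise) can be computed on a consistent basis.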

Neutral gray scale patches that vary from dark to light are used as targets (see Guide 3, Figure 4). This target characterizes the relationship between the input values and the digital output values of the scanning system. It is used to determine and change the tone reproduction. The digital values of the gray patches are determined in full-featured imaging software and compared with those of the target. The outcome of this test will either be the same as that for the benchmarking process or will show whether any requirements for tone reproduction are met.

See also, Guide 2, Tone Reproduction or Tonal Fidelity.

3.2 Color Reproduction

3.2.1 Definition and Implications

Three color reproduction intents can apply to a digital image: perceptual intent, relative colorimetric intent, and absolute colorimetric intent. The perceptual intent is to create a pleasing image on a given medium under given viewing conditions. The relative colorimetric intent is to match, as closely as possible, the colors of the reproduction to the colors of the original, taking into account output media and viewing conditions. The absolute colorimetric intent is to reproduce colors as exactly as possible, independent of output media and viewing conditions. This terminology is often associated with the International Color Consortium (ICC).

Scanning for an image archive is different from scanning for commercial offset printing. When an image is scanned for archival purposes, the future use of the image is not known. Will color profiles still be maintained or even used? Operator judgments regarding color and contrast cannot be reversed in a 24-bit RGB color system. Any output mapping different from the archived image's color space and gamma must be considered. On the other hand, saving raw scanner data of 12 or 16 bits per color with no tonal mapping can create problems for future output if the scanner characteristics are not well known and profiled. Archiving both a raw sensor data file in high bit-depth and a calibrated RGB 24-bit file at a high resolution for each image is not an option for many institutions, considering the number of digital images an archive can contain.

3.2.2 Which Color Space?

The most important attribute of a color space in an archival environment is that it be well defined. The following issues should be taken into consideration when choosing a color space (Süsstrunk, Buckley, and Swen 1999).

  • Archiving in an output space (e.g., device-specific RGB or CMYK). This is not advisable because it is device-specific, leads to color space variations, usually has the smallest color gamut, and makes future use for other purposes difficult. However, many legacy images have been produced in this space.
  • Archiving in a rendered space (e.g., sRGB or Adobe RGB 98). This is applicable if the right space is chosen for the application needed. Currently the most effective archiving solution, it is easily controllable and is ideal for access images. However, problems can occur if the wrong image rendering or the wrong rendering space is chosen.
  • Archiving in an unrendered space (e.g., ISO RGB or unrendered Lab). This is possible but expensive. Depending on the transformation, reversing to the source space might be impossible later. In addition, the creation of access images is needed.
  • Archiving in sensor space (e.g., device-specific RGB). This is the best, most flexible solution, but it is device-specific, needs storage of all device characterization data, and requires the creation and storage of high-resolution access images for output.
  • Archiving images with ICC profiles. This solution is not recommended with currently available methods. The profiles are changing, they contain vendor-specific proprietary tags, there is no guaranteed backward compatibility, and they are not easily updated.

At this point, the best thing to do is to characterize and calibrate the systems and then scan into a known color space. In this way, it will be possible to update just one profile as necessary, if another space is chosen later.

There is more than one solution to the problem. The "right" color space depends on the purpose and use of the digital images, as well as the resources available for their creation. Color management is important for producing and accessing the digital images, but not for storing them.

3.2.3 How to Measure Color Reproduction

Most of the procedures for measuring and controlling color reproduction are geared toward the prepress industry. The use of profiles, as standardized currently by the ICC, is not recommended in an archival environment; however, it might be the only solution available now. To take digital images into the future, it is imperative to document the procedures used and to update profiles until a standardized approach for the archival community is available. The use of targets such as IT8 is described in Guide 3.

An approach that is being developed and will be more useful in an archival environment is the metamerism index, described in Guide 2. Another standard at the working-draft stage that holds promise for the future is ISO 17321, Graphic Technology and Photography: Colour Characterization of Digital Still Cameras (DSCs) Using Colour Targets and Spectral Illumination (ISO 17321/WD, ISO/TC42 1999).

See also, Guide 2, Color Reproduction or Color Fidelity.

3.3 Detail Reproduction

3.3.1 Definition and Implications

Detail is defined as relatively small-scale parts of a subject or the image of those parts in a photograph or other reproductions. In a portrait, detail may refer to individual hairs or pores in the skin. Edge reproduction refers to the ability of a process to reproduce sharp edges.

People are often concerned about spatial resolution issues. This is understandable: spatial resolution has always been one of the weak links in digital capture, the concept of resolution is relatively easy to understand, and the needed resolution values were long hard to achieve with affordable hardware. Technology has evolved, however, and today reasonable spatial resolution is not very expensive and does not require large amounts of storage space.

3.3.2 How to Measure Detail Reproduction

The best measure of detail and resolution is the modulation transfer function (MTF). MTF was developed to describe image quality in classical optical systems. The MTF is a graphical representation of image quality that eliminates the need for decision making by the observer; however, one must have a good understanding of the MTF concept in order to judge what a good MTF is.

The MTF of the master files is measured according to the methods described in Guide 2 and Guide 3. The values will show whether the set target values for resolution have been met (ISO 12233/FDIS, ISO/TC42 1999; ISO 16067/WD, ISO/TC42 1999).
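The edge-based computation behind such measurements can be sketched as follows. This is a simplification of the ISO 12233 slanted-edge method, which adds sub-pixel binning and windowing; here a synthetic soft edge stands in for a scanned edge target:

```python
import numpy as np

def mtf_from_edge(edge_profile):
    """Simplified edge-based MTF: differentiate the edge spread
    function (ESF) to obtain the line spread function (LSF), then
    take the FFT magnitude, normalized so that MTF(0) = 1."""
    lsf = np.diff(edge_profile.astype(float))   # ESF -> LSF
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

# Synthetic dark-to-light edge, blurred over a few pixels
x = np.arange(64)
esf = 1 / (1 + np.exp(-(x - 32) / 2.0))
mtf = mtf_from_edge(esf)
```

A sharp system keeps the MTF high out toward the Nyquist frequency; the blurred synthetic edge here produces an MTF that falls off quickly, which is the behavior an out-of-focus scan would show.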

Archival files should be scanned at the optical resolution of the scanning device to get the best quality. In some cases, it is necessary to resample the images to a certain resolution.

See also, Guide 2, Resolution or Modulation Transfer Function, or Guide 3, Spatial Resolution.

3.4 Noise

3.4.1 Definition and Implications

Noise refers to random variations associated with detection and reproduction systems. In photography, granularity is the objective measure of density nonuniformity that corresponds to the subjective concept of graininess. The equivalent in electronic imaging is noise, the presence of unwanted energy in the signal. This energy is not related to the image and degrades it. (However, one form of noise, known as photon noise, is image-related.)

Noise is an important attribute of electronic imaging systems. The visibility of noise to human observers depends on the magnitude of the noise, the apparent tone of the area containing the noise, and the type of noise. The magnitude of the noise in an output representation depends on the noise present in the stored image data and the contrast amplification or gain applied to the data in processing the output. Noise visibility is different for the luminance or brightness (monochrome) channel and the color channels.

The noise test yields two important pieces of information. First, it shows the noise level of the system, indicating how many bit levels of the image data are actually useful. For example, if the specifications of the scanner state that 10 bits per channel are recorded on the input side, it is important to know how many of these bits are image information and how many are noise. Second, it indicates the signal-to-noise (S/N) ratio, which is essential for image quality considerations. The noise of the hardware should not change unless the scanner operator changes the way she or he works or dirt accumulates in the system.

Since many electronic imaging systems use extensive image processing to reduce the noise in uniform areas, the noise measured in different large area gray patches of the target may not be representative of the noise levels found in scans from real scenes. Therefore, another form of noise, so-called edge noise, will have to be looked at more closely.

See also, Guide 2, Noise.

3.4.2 How to Measure Noise

The Electronic Still Photography Group IT10 has designed a target for this purpose. A software plug-in with which to read and interpret the target will soon be available (ISO 15739/CD, ISO/TC42, 1999). One could also use a gray wedge and check the noise in the different areas of the wedge by calculating the standard deviation of the pixel count values within each gray patch. A more detailed description can be found in Guide 2.
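As a minimal sketch of the gray-wedge approach, assuming an 8-bit or deeper channel whose patch pixels are available as a list of counts (the function names here are hypothetical, not part of any standard), the noise in a patch can be estimated as the standard deviation of its values, and a rough count of usable bits follows from comparing the full-scale level range with that noise floor:

```python
import math
import statistics

def patch_noise(pixels):
    """Mean level and noise (standard deviation) of a nominally
    uniform gray patch, given its pixel count values."""
    return statistics.fmean(pixels), statistics.pstdev(pixels)

def effective_bits(full_scale_levels, noise_sigma):
    """Rough number of usable bits: log2 of how many distinct
    levels stand above the noise floor."""
    return math.log2(full_scale_levels / noise_sigma)
```

For example, a nominally 10-bit scanner (1,024 levels) whose dark-patch noise measures about 4 counts delivers roughly log2(1024/4) = 8 bits of actual image information per channel.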

4.0 Measures

4.1 How to Judge the Numbers after Scanning

4.1.1 Setting Aim-point Values for Scanning

Benchmark values for the endpoints of the RGB levels are often specified when images are scanned in RGB and reduced to eight bits per channel. The guidelines of the National Archives and Records Administration, for example, ask for RGB levels ranging from 8 to 247 for every channel (Puglia and Roginski 1998). The dynamic headroom at both ends of the scale ensures that there is no loss of detail or clipping in scanning and accommodates the slight expansion of the tonal range caused by sharpening or other image processing steps. The aim-point values can be checked by looking at the histogram. It is important to confirm that the histogram has been calculated from the entire image, not just from the portion displayed on the screen.
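The aim-point check described above can be sketched in a few lines, assuming the channel histogram is available as a 256-entry list of pixel counts (the function name and the defaults, taken from the NARA guideline cited above, are illustrative):

```python
def check_aim_points(histogram, low=8, high=247):
    """Verify that every occupied bin of an 8-bit channel histogram
    (a 256-entry list of pixel counts) falls within the aim-point
    range [low, high], e.g. NARA's 8-247 guideline."""
    occupied = [level for level, count in enumerate(histogram) if count]
    return bool(occupied) and low <= occupied[0] and occupied[-1] <= high
```

The same routine would be run once per channel; a False result flags a scan whose endpoints fall outside the specified headroom.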

4.1.2 Clipping

It must be confirmed that no clipping in the shadows or highlight areas has occurred during scanning. This is done by looking at the histogram. If clipping has occurred, these details will be lost in the digital file. The images will have to be rescanned.
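A simple automated version of this histogram inspection, under the assumption that clipping shows up as a pile-up of pixels in the extreme bins (the threshold fraction is an illustrative choice, not a standard value):

```python
def shows_clipping(histogram, fraction=0.001):
    """Flag likely clipping in an 8-bit channel histogram: an extreme
    bin (0 or 255) holding more than `fraction` of all pixels suggests
    lost shadow or highlight detail."""
    total = sum(histogram)
    limit = fraction * total
    return total > 0 and (histogram[0] > limit or histogram[255] > limit)
```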

4.1.3 Distribution of Digital Levels

The overall appearance of the histogram gives a good view of the integrity of the scan. A well-scanned image uses the entire tonal range (this will not be the case if histograms of 16-bit data are being examined) and shows a smooth histogram. Obvious spikes may point to artifacts or a noisy scanner. If the histogram looks like a comb, the image has most likely been manipulated with image processing.
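The comb pattern can be detected mechanically by counting empty bins inside the occupied tonal range; a sketch, with a deliberately arbitrary one-quarter threshold that a real project would tune:

```python
def looks_combed(histogram):
    """Detect a comb-shaped histogram: many empty bins interleaved
    inside the occupied tonal range, a common sign that 8-bit data
    were stretched or curve-adjusted after scanning."""
    occupied = [i for i, count in enumerate(histogram) if count]
    if len(occupied) < 2:
        return False
    span = range(occupied[0], occupied[-1] + 1)
    gaps = sum(1 for i in span if not histogram[i])
    return gaps > 0.25 * len(span)
```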

4.2 Inclusion of targets

4.2.1 Pros and Cons of Including Targets

Targets are a vital part of the image quality framework. After targets are scanned, they are evaluated with a software program. Some software components exist as plug-ins to full-featured image browsers, others as stand-alone programs.

Targets can be part of every scanned image, or the target information can be put into the image header. Putting a target into every scan might be particularly appropriate for very high-quality scans of a limited number of images. However, the target area will increase the file size. For large collections, a better approach might be to characterize the scanner well and to include this information in the file header. In this case, the images can be batch-scanned and processed later.

4.2.2 Which Targets Should be Used

Few targets are readily available on the market. For now, one approach would be to use a commercially available gray-scale target. A knife-edge target or sine wave target for measuring resolution will have to be included. Both can be purchased (see Guide 2). Software for analyzing the images is also available (see Guide 2). However, more sophisticated, user-friendly solutions are needed.

A set of targets and the necessary analyzing software is being developed within IT10 and will be on the market soon. To facilitate objective measurements of each of the four parameters, different targets for different forms of images (e.g., prints, transparencies) are needed. For reliable results, the targets should be made of the same materials as are the items that will be scanned.

4.2.3 How to Fit Targets into the Workflow

Full versions of the targets could be scanned every few hundred images and then linked to specific batches of production files. Alternatively, smaller versions of the targets could be included with every image. In any case, targets must be carefully controlled and handled. Targets must be incorporated into the workflow from the beginning so that operators become accustomed to using them. Standard procedures must be established to ensure that the targets are consistently incorporated into the workflow.

4.2.4 Documentation of Target Use

Information about the use of targets must be well documented. Target measurements, numbers, and types need to be linked to the files that have been scanned with them, preferably by putting that information in the file header. To enable work with images across platforms as well as over time, it is important that the imaging process be well documented and that the information be kept with every file.

5.0 Image Processing

5.1 Types of Image Processing

The initial processing should maximize image detail reproduction and S/N and get tone and color reproduction in line with a standard. This standard should, to some extent, maximize the image file information content through efficient use of the digital levels; however, it is also important to make the image data accessible ( Holm 1996a).

5.1.1 Resampling

Resampling refers to changing the number of pixels along each axis after scanning. There are many different resampling algorithms; of the common ones, bicubic interpolation generally provides the best image quality.

Resampling occurs in two forms: downsampling and upsampling. Scanning often occurs at a higher resolution than is necessary, and the required resolution is obtained by resampling the image. Aliasing can occur when the image data are downsampled. To minimize its effects, low-pass filtering can be applied to the image before it is downsized. Upsampling should be avoided because it cannot create additional image information.
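The role of the low-pass filter can be shown on a one-dimensional signal. In this sketch (illustrative code, not a production resampler), each output sample is the average of a block of input samples, so a simple box filter is applied before decimation:

```python
def downsample_1d(samples, factor):
    """Down-sample a 1-D signal by an integer factor. Averaging each
    block of `factor` samples applies a box low-pass filter before
    decimation, which suppresses aliasing."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples) - factor + 1, factor)]
```

Without the averaging step, simply keeping every second sample of an alternating light/dark pattern would turn fine detail into a false uniform tone, which is exactly the aliasing the filter is meant to prevent.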

5.1.2 Sharpening and Noise Reduction

Images often need to be slightly sharpened after scanning; however, sharpening is much more of an issue when one is processing images for output. There are several sharpening techniques, of which unsharp masking is the most popular. The level of filtering depends on the scanner and the material being scanned.
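The principle of unsharp masking can be sketched on a one-dimensional signal (an illustrative toy, not scanner software): blur the signal, then add back the difference between original and blur, scaled by an amount parameter. Flat areas are untouched; edges are boosted, with the characteristic overshoot on either side:

```python
def unsharp_1d(samples, amount=1.0):
    """Unsharp masking on a 1-D signal: compute a 3-sample moving
    average as the 'unsharp' (blurred) mask, then add the scaled
    difference between original and mask back to the original."""
    blurred = []
    for i in range(len(samples)):
        window = samples[max(0, i - 1):i + 2]
        blurred.append(sum(window) / len(window))
    return [s + amount * (s - b) for s, b in zip(samples, blurred)]
```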

5.1.3 Tone and Color Correction

Depending on the rendering intent, various tone and color corrections may have to be performed on images, for example, to remove a color cast.

If 24-bit images are used as the master files, all tone and color corrections should preferably be controlled and performed with the scanner software, and corrections after scanning should be kept to a minimum.

A question that often arises is whether images should be processed for monitor viewing before storage. Adjusting master files for monitor representation provides better viewing fidelity but gives up certain processing possibilities for the future. Conversely, a linear distribution of tones relative to the density of the original offers greater potential for future processing, but such images must be adjusted before they can be viewed on a monitor.

If images are stored at a higher bit depth, all color and tone corrections can be deferred until the image is processed for output.

5.1.4 Defect Elimination

At this stage, dust and scratches are removed from the digital image. This step is often very time-consuming; however, new technologies are being developed to automate it during scanning.

5.1.5 Compression

Advances in image-data compression and storage-media development have helped reduce concerns about storage space for large files. Nevertheless, image compression in an archival environment has to be evaluated carefully. Because the future use of a digital image in this environment is not yet determined, one copy of every image should be compressed using a lossless compression scheme, or even left uncompressed, because current lossless compression schemes do not significantly reduce the amount of data.

Lossless compression makes it possible to reproduce the original image file exactly from the compressed file. Newer compression schemes, such as wavelets, that do not produce the well-known artifacts of JPEG-compressed files are not yet readily available. The progress of JPEG 2000 will have to be followed closely.
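The defining property of a lossless scheme, exact reproduction of the original bytes, can be demonstrated with zlib (the DEFLATE algorithm used inside PNG), standing in here for whatever lossless codec a project adopts:

```python
import zlib

def lossless_roundtrip(image_bytes):
    """Compress data with zlib (DEFLATE, the lossless scheme used by
    PNG) and verify that decompression reproduces the original bytes
    exactly, the property that distinguishes lossless compression."""
    compressed = zlib.compress(image_bytes, 9)
    return zlib.decompress(compressed) == image_bytes
```

For continuous-tone photographic data, the compressed size from such schemes is typically not much smaller than the original, which is the modest saving the paragraph above refers to.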

5.2 Fitting Image Processing into the Workflow

5.2.1 Sequence of Applying Different Image Processing Steps

Image quality is affected by the sequence in which different image processing steps are applied. It is important to be aware of the effects of different processing algorithms. It must also be kept in mind that the future use of the images is not known. Ideally, all image processing should be delayed until an image is actually used and its rendering and output characteristics are known. This would require that data be stored with a bit depth of more than eight bits per channel, which most available workflow solutions do not allow.

5.2.2 Effects of Image Processing on Processing in the Future

Image data are best stored as raw capture data. Subsequent processing of these data can only reduce the information content, and there is always the possibility that better input-processing algorithms will be available in the future. The archived image data should, therefore, be the raw data, along with the associated information (e.g., sensor characteristics) required for processing, whenever possible.

Tone and color corrections on eight-bit-per-channel images should be avoided if possible. Whatever the operation, such corrections compress the existing levels even further, reducing the number of distinct tones that remain. To avoid this loss of brightness resolution, all necessary image processing should be done on a higher-bit-depth file, with requantization to eight bits per channel occurring only after the tone and color corrections.
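The benefit of correcting at a higher bit depth can be shown with a toy gamma curve (hypothetical helper names; the curve and sample values are illustrative only). Applying the curve to 16-bit data and requantizing last preserves far more distinct shadow levels than applying the same curve to data already reduced to 8 bits:

```python
def curve_then_quantize_16bit(samples_16, gamma=2.2):
    """Apply a gamma tone curve to 16-bit samples, requantizing to
    8 bits only as the final step."""
    return [round(((s / 65535) ** (1 / gamma)) * 255) for s in samples_16]

def curve_on_8bit(samples_8, gamma=2.2):
    """The same curve applied to data already reduced to 8 bits;
    the few input levels limit the possible output levels."""
    return [round(((s / 255) ** (1 / gamma)) * 255) for s in samples_8]
```

On a deep-shadow ramp, the 16-bit path yields many distinct output tones where the 8-bit path can produce at most as many tones as it had input levels, which is the level-compression effect described above.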

Most important, master files that have been well checked are a sound investment for the future.