Stalin, Mao, Hitler, Mussolini, Castro, Brezhnev, and many others had photographs manipulated in an attempt to rewrite history. These men understood the power of photography: if they changed photographs, they could change history. Altering the historical record on behalf of Stalin and others required cumbersome and time-consuming darkroom techniques. Today, powerful and low-cost digital technology has made it far easier for nearly anyone to alter digital images, and the resulting fakes are often difficult to detect. This photographic fakery is having a significant impact in many areas of society. Doctored photographs appear in tabloid and fashion magazines, government and mainstream media, social media, online auction and dating sites, political ad campaigns, and scientific journals. The technology that can distort and manipulate digital media is developing at breakneck speed, and it is imperative that technologies for detecting such alterations develop just as quickly. The field of photo forensics has emerged to restore some trust to photography.
To this end, we develop mathematical and computational algorithms to detect tampering in digital media. We have developed several techniques that quantify and detect the statistical correlations that result from specific forms of digital tampering (in the absence of any digital watermark). Consider, for example, the creation of a forgery showing two movie stars, rumored to be romantically involved, walking along a beach at sunset. Such an image might be created by splicing together individual images of each movie star. To create a convincing match, it is often necessary to re-size, rotate, or stretch portions of the images. This process requires that a portion of the image be interpolated, leaving behind a specific statistical pattern that we have been able to quantify. With an explicit model of the interpolation patterns, the expectation-maximization (EM) algorithm is employed to simultaneously detect whether a part of an image was interpolated and to estimate the interpolation parameters. As another example, when splicing two images together, it is difficult to exactly match the illumination effects of directional lighting (e.g., the sun on a clear day). We have developed a technique for estimating, from a single image, the direction of the illuminating light source (to within one degree of ambiguity). Inconsistencies in lighting across an image are then used as evidence of tampering. These are but two of several techniques we have developed to reveal tampering in images.
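The core observation behind the interpolation detector can be illustrated with a minimal one-dimensional sketch. This is not the full EM method, which jointly estimates the interpolation weights and a per-sample probability map; it only shows the telltale correlation that interpolation leaves behind. For linear upsampling by a factor of two, every interpolated sample is exactly the average of its two neighbors, so a simple neighbor-prediction residual is periodically near zero. All signal values and sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_linear(x, factor=2):
    """Resample a 1-D signal by linear interpolation."""
    n = len(x)
    xo = np.arange(0, n - 1 + 1e-9, 1.0 / factor)
    return np.interp(xo, np.arange(n), x)

def neighbor_residual(y):
    """Absolute error of predicting each sample as the average of its
    two neighbors. Periodic near-zero residuals betray interpolation."""
    return np.abs(y[1:-1] - 0.5 * (y[:-2] + y[2:]))

x = rng.standard_normal(256)   # stand-in for an original scanline
y = upsample_linear(x)         # "tampered": resampled scanline

r = neighbor_residual(y)
odd = r[0::2]    # residuals at interpolated samples (odd indices of y)
even = r[1::2]   # residuals at original samples (even indices of y)
# odd is essentially zero; even is not -- a strictly periodic pattern
# that the full EM method detects for arbitrary, unknown resampling rates.
```

The full technique generalizes this: arbitrary resampling rates produce periodic linear correlations with unknown weights, and EM alternates between estimating those weights and labeling which samples obey them.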
Forensic investigations often have to contend with extremely low-quality images that can provide critical evidence. Recent work has shown that, although not visually apparent, information can be recovered from such low-resolution and degraded images. We develop computational techniques that can recover content from extremely degraded images. We have, for example, developed an approach to decipher the contents of low-quality images of license plates. Evaluation on synthetically generated and real-world images, with resolutions ranging from 10 to 60 pixels in width and signal-to-noise ratios ranging from −3.0 to 20.0 dB, shows that the proposed approach can localize and extract content from severely degraded images, substantially outperforming human observers.
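To make the evaluation regime concrete, the degradation described above can be simulated with a short sketch: downsample an image to a target width and add Gaussian noise calibrated to a target signal-to-noise ratio in dB. This only models the degradation, not the recovery method; the image content, sizes, and nearest-neighbor downsampling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def degrade(img, width, snr_db):
    """Downsample a grayscale image to `width` pixels wide and add
    Gaussian noise to reach a target SNR (dB). Returns the noisy
    image and the clean downsampled reference."""
    h, w = img.shape
    # nearest-neighbor downsampling (a stand-in for proper area averaging)
    rows = np.linspace(0, h - 1, max(1, round(h * width / w))).astype(int)
    cols = np.linspace(0, w - 1, width).astype(int)
    small = img[np.ix_(rows, cols)].astype(float)
    # SNR(dB) = 10*log10(signal power / noise power)
    sig_power = np.mean(small ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noisy = small + rng.normal(0.0, np.sqrt(noise_power), small.shape)
    return noisy, small

plate = rng.uniform(0.0, 1.0, (60, 120))        # hypothetical plate image
low, ref = degrade(plate, width=30, snr_db=-3.0)  # harshest end of the range
```

At −3 dB the noise power is twice the signal power, which conveys why content at this quality is not visually apparent and why recovery must pool evidence across the whole plate rather than read characters individually.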
We have been interested in the interplay between object segmentation and recognition. Specifically, we have been exploring whether and how human observers can segment and recognize objects in cluttered scenes where bottom-up segmentation cues are insufficient for perceptual grouping. We found that, counter to most models, recognition can drive segmentation. These results add to the growing literature on the role of top-down processing in the brain.