News

06/07/16

Sensors Unlimited featured in Physics Today

We at Sensors Unlimited are excited to share the article below, which was featured in Physics Today.

Sharp images from fuzzy concepts
Developments in snapshot imaging and computational processing improve resolution at unprecedented frame rates.

By: Rachel Berkowitz

As any fine arts photographer would attest, capturing a natural image is no easy task.

Imaging natural events at high speeds is even harder. The first such motion studies, the now-legendary recordings of a galloping horse and photographs of a supersonic bullet, were made in the late 19th century using controllable mechanical shutters.

Digital cameras based on CCD and CMOS technology raised the bar in the 1980s by enabling image acquisition at up to 10 million frames per second. But those rates are limited by on-chip storage and electronic readout speed.

Now, advances in sensor and photonics technology allow imaging of natural phenomena at unprecedented frame rates and enable in vivo imaging of processes that were inaccessible until very recently. Some of the latest high-speed imaging and signal-processing techniques were presented at the SPIE Photonics West meeting in San Francisco in February.

Benefits of those techniques might include development of new technologies for imaging transient events at scales from cellular organelles to galaxies. The processes could improve our understanding of light transmission through biological tissues, with medical applications in heart and brain physiology.

Faster than a speeding bullet

Today's fastest imaging techniques rely on specialized illumination of repeatable events in highly controlled environments. But many natural events do not repeat themselves and are not reproducible. To overcome that restriction, Washington University in St. Louis scientists, including Liang Gao, Jinyang Liang, Chiye Li, and Lihong Wang, demonstrated a record-breaking application of a two-dimensional dynamic technique known as compressed ultrafast photography (CUP). Their method uses a device called a streak camera to capture nonrepetitive, time-evolving events at up to 100 billion frames per second in a single snapshot.

Real-time visualization of a laser pulse refracted at an air-resin interface. CREDIT: Liang Gao

A streak camera has a narrow aperture that limits the imaging field of view to a straight line. As a result, producing 2D images usually requires additional optical or mechanical scanning along the orthogonal, or second spatial, direction. That means the imaged event must repeat or be actively illuminated while the camera is scanning.

In contrast, CUP uses semi-random patterns to rapidly capture information about the image. First the scene is encoded by a digital micromirror device, in which hundreds of tiny mirrors are turned on or off to sample a random black-and-white matrix of the image. Light from each of the “on” mirrors is passed to the streak camera's open entrance port as a single pixel. Inside the camera, a time-varying voltage on the sweep electrodes deflects each pixel's signal, mapping its arrival time onto a position on the detector. From thousands of those single-pixel samples, each of which contains information about a different part of the image at different times, the original image can be mathematically reconstructed.
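That reconstruction is a compressed-sensing problem: a single detector snapshot mixes many space-time samples, and a sparsity-promoting solver untangles them. Below is a deliberately tiny sketch of the idea in Python for a 1D scene, using a basic iterative soft-thresholding (ISTA) loop rather than the authors' actual solver; the mask, sizes, and scene are all toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 8, 32                        # toy scene: T frames of N pixels each

# Hypothetical DMD pattern: each mirror is randomly "on" (1) or "off" (0).
mask = rng.integers(0, 2, N).astype(float)

# Forward operator A: encode every frame with the mask, shear frame t by
# t pixels (the streak camera's time-to-space sweep), and integrate all
# frames on a detector of N + T - 1 pixels -- one snapshot for the whole event.
A = np.zeros((N + T - 1, T * N))
for t in range(T):
    for n in range(N):
        A[n + t, t * N + n] = mask[n]

# Sparse ground truth: a single bright feature sweeping across the scene.
x_true = np.zeros(T * N)
for t in range(T):
    x_true[t * N + (3 * t) % N] = 1.0

y = A @ x_true                      # the single compressed measurement

# ISTA: a gradient step on the data term, then a soft-threshold that promotes
# sparsity. Pixels behind "off" mirrors are unobserved, so, as in real CUP,
# the encoding trades away some light and recovery is only partial.
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(T * N)
for _ in range(2000):
    v = x - step * (A.T @ (A @ x - y))
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```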

The researchers demonstrated their ultrafast single-shot camera by clocking the speed of a laser pulse as it was reflected and refracted first through air and then through resin.

As promising as it is, the technique has limitations, including lost information due to poor spatial resolution and low-intensity artifacts in the background of reconstructed images.

The team tackled the background-artifact problem with an algorithm known as space- and intensity-constrained (SIC) reconstruction. Part of the light entering the streak camera is diverted to a second, external camera, which takes its own snapshot of the scene. That snapshot adds new constraints that can be used to reconstruct the final image.
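In optimization terms, the extra snapshot is simply a second data-fidelity term pulling on the same unknowns. Here is a sketch of the modified step, reusing the names from the previous snippet; B and z are hypothetical stand-ins for the external camera's integration operator and image, not the team's actual formulation.

```python
# The external camera records a plain time-integrated image: no mask, no shear.
B = np.tile(np.eye(N), T)           # (N, T*N): pixel n sums over every frame
z = B @ x_true

# Same ISTA loop, with both measurements driving the gradient.
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + np.linalg.norm(B, 2) ** 2)
x = np.zeros(T * N)
for _ in range(2000):
    g = A.T @ (A @ x - y) + B.T @ (B @ x - z)
    v = x - step * g
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
```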

By shining ultrashort laser pulses on a 2D image of a car, the researchers produced pictures with picosecond resolution. “The initial image is really blurred,” says Washington University's Liren Zhu, who worked on the reconstruction. “But the SIC method gives a very sharp boundary.”

A beating (mouse) heart

Biologists don't require the Washington University team's 100 billion frames per second; their samples, though, can be more delicate or more minute. Using quantum dots that emit short-wave infrared (SWIR) light as an in vivo sensor, an MIT team has produced images of a beating mouse heart.

A photo of a foggy San Francisco Bay is much clearer with a SWIR camera (left) than a regular one. CREDIT: NASA Spinoff/Sensors Unlimited

Biological imaging relies on targeting the tissue of interest with a fluorescent dye and detecting light transmitted from the sample. Absorption and scattering by tissue can greatly attenuate the signal. Oliver Bruns of MIT found a happy medium by working with SWIR light, typically at 1000–2000 nm, which transmits through tissue efficiently and scatters less, greatly enhancing sensitivity.
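A back-of-the-envelope Beer-Lambert estimate shows why the wavelength matters; the absorption and scattering coefficients below are illustrative placeholders, not measured tissue values.

```python
import numpy as np

# Transmitted fraction through thickness d: T = exp(-(mu_a + mu_s) * d).
# Coefficients (1/cm) are illustrative placeholders, not measured values;
# the point is that tissue scattering falls steeply with wavelength.
d = 0.03  # cm, roughly the 300 um depth quoted in the article
for label, mu_a, mu_s in [("visible, ~550 nm ", 2.0, 40.0),
                          ("SWIR,    ~1300 nm", 4.0, 8.0)]:
    print(f"{label}: {np.exp(-(mu_a + mu_s) * d):.2f} of the light survives")
```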

Probes that fluoresce at those wavelengths are difficult to find. Bruns and his colleagues developed InAs/CdZnSe quantum dots to do the job. As an added benefit, the dots’ superior brightness allows for higher-speed imaging.

Using cameras fitted with InGaAs detectors, Bruns imaged a mouse’s heartbeats and brain signaling at 60 frames per second. He succeeded in capturing activity up to 300 μm inside the body.

“Now we have the imaging speed to allow the mouse to freely behave,” Bruns says. And with fully alert mice, biologists can now study physiological differences between awake and anesthetized animals.

Computing away scattering

Whether the subject is a car or a mouse heart, light scattering degrades the effective resolution of an image. Mitigating scattering is one of the most important challenges in optics. But scattered light does have a silver lining: It can contain significant information about the subject.

Laura Waller at the University of California, Berkeley, presented a new algorithm for 3D reconstruction of multiple point sources inside scattering material, such as a foggy atmosphere or biological tissue. Her approach is based on phase space, which describes light using both positional and angular information.

Waller starts with an image taken through a scattering medium and extrapolates backward to determine where each ray of light originated. In contrast with previous studies, Waller’s algorithm collects four coordinates—two positional and two angular—for each ray of light. From that information, the algorithm determines the actual 3D position of the point source.
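A generic version of that back-projection step is easy to sketch: each ray contributes a position on the sensor plane and a direction derived from its two angles, and the source is the point closest, in a least-squares sense, to all of the back-traced rays. This is textbook ray triangulation under simplified geometry, not Waller's published solver.

```python
import numpy as np

def localize_point(xy, angles):
    """Least-squares 3D point closest to rays given as 4D phase-space samples.

    xy:     (M, 2) ray positions on the z = 0 sensor plane
    angles: (M, 2) ray angles (radians) about the x and y axes
    """
    lhs, rhs = np.zeros((3, 3)), np.zeros(3)
    for (x, y), (ax, ay) in zip(xy, angles):
        p = np.array([x, y, 0.0])                    # a point on the ray
        d = np.array([np.tan(ax), np.tan(ay), 1.0])  # back-traced direction
        d /= np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)               # projects off the ray axis
        lhs += P                                      # accumulate normal equations
        rhs += P @ p
    return np.linalg.solve(lhs, rhs)

# Rays emanating from a hidden source at (0.2, -0.1, 1.5), with angular noise.
rng = np.random.default_rng(1)
src = np.array([0.2, -0.1, 1.5])
xy = rng.uniform(-1, 1, (50, 2))
angles = np.arctan2(src[:2] - xy, src[2]) + rng.normal(0, 0.01, (50, 2))
print(localize_point(xy, angles))   # ~ [0.2, -0.1, 1.5]
```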

That work is being extended to neural activity tracking with optogenetics, a technique that uses light to control and manipulate cells in living tissues. All-optical approaches are promising because they are minimally invasive and potentially scalable to millions of neurons. “You can stimulate neurons by firing light at them, and imaging can let you watch them in action,” Waller says.

With Waller’s algorithm, a volume image is never reconstructed; rather, the light-field signatures of each neuron are estimated. That dictionary of signatures, based on 4D measurements, is then used to identify and localize each neuron in 3D.
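Matching against such a dictionary can be as simple as a nonnegative least-squares fit: the signatures that receive nonzero weight identify which neurons fired. A minimal sketch with random stand-in data follows; the dictionary D here is synthetic, not a real set of light-field signatures.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_pixels, n_neurons = 400, 50
D = rng.random((n_pixels, n_neurons))   # column j: signature of neuron j

# Simulate a frame in which neurons 4 and 17 fire with different strengths.
activity_true = np.zeros(n_neurons)
activity_true[[4, 17]] = [1.0, 0.6]
frame = D @ activity_true + rng.normal(0, 0.01, n_pixels)

# Nonnegative least squares picks out which signatures, and how strongly,
# explain the measured frame.
activity, _ = nnls(D, frame)
print(np.flatnonzero(activity > 0.1))   # -> [ 4 17 ]
```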

Using the new technique, Waller and colleagues Nicolas Pegard and Hillel Adesnik recorded the electrical activity of 800 live zebrafish neurons at a record 100 frames per second. The method could scale to extremely large data sets, which could potentially enable functional activity mapping of an entire mouse brain cortex.

Our ability to comprehend the natural world is extending to smaller scales, higher speeds, and larger data sets than ever before. Techniques for creating sharper images will bring those tiny, fast, or delicate domains within the scope of our understanding.