To help answer the perennial question ‘What does a laser path look like in slow motion?’, a team of researchers at Heriot-Watt University (published Open Access in Nature in August 2014) undertook an experiment using a 2D silicon CMOS array of single-photon avalanche diodes (SPADs) to construct, in essence, a 1,024-pixel (32 × 32) camera capable of recording the path of laser pulses through a given area. While the article acknowledges that light has been captured in flight as early as 1978, the challenge the team addressed is one of simplifying data acquisition and reducing acquisition times “by achieving full imaging capability and low-light sensitivity while maintaining temporal resolution in the picosecond regime” (Gariepy et al, 2014: 2). To produce an image (or rather a video) from the experiment, the raw sensor data was put through a sequence of processing steps grouped into three stages – noise removal, temporal deconvolution and re-interpolation – illustrated in the graphic below:
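To get a feel for what those three stages involve, here is a minimal sketch in Python against a synthetic stand-in for the SPAD data. Everything here – the bin count, the Gaussian instrument response, the Wiener filter, the nearest-neighbour upsampling – is an illustrative assumption on my part, not the authors' actual processing chain:

```python
import numpy as np

# Synthetic stand-in for the SPAD data: a 32 x 32 pixel cube where each
# pixel holds a 1024-bin photon-arrival histogram.
rng = np.random.default_rng(0)
cube = rng.poisson(0.2, size=(32, 32, 1024)).astype(float)
cube[:, :, 500:520] += 50.0  # a bright pulse arriving around bin ~510

# Stage 1 - noise removal: estimate the per-pixel background from bins
# known to hold no signal, subtract it and clip negatives.
background = cube[:, :, :100].mean(axis=2, keepdims=True)
denoised = np.clip(cube - background, 0.0, None)

# Stage 2 - temporal deconvolution: Wiener-deconvolve each histogram
# against the instrument response function (a Gaussian stand-in, defined
# in FFT order so its peak sits at bin 0 and pulse timing is preserved).
bins = np.arange(1024)
irf = np.exp(-np.minimum(bins, 1024 - bins) ** 2 / (2 * 4.0 ** 2))
irf /= irf.sum()
IRF = np.fft.rfft(irf)
snr = 100.0  # assumed signal-to-noise ratio for the Wiener filter
wiener = np.conj(IRF) / (np.abs(IRF) ** 2 + 1.0 / snr)
sharpened = np.fft.irfft(np.fft.rfft(denoised, axis=2) * wiener, axis=2)

# Stage 3 - re-interpolation: upsample one 32 x 32 time-slice for
# display (crude nearest-neighbour here; the paper's scheme is finer).
frame = sharpened[:, :, 510]
upsampled = np.kron(frame, np.ones((4, 4)))

print(denoised.shape, upsampled.shape)  # (32, 32, 1024) (128, 128)
```

The key idea is simply that each of the 1,024 pixels records a time histogram of photon arrivals, and the pipeline cleans those histograms up before turning each time-slice into a frame of video.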
Creating an image from the 32 x 32 array of sensors. Image from article (linked).
The video produced by the team (GIF excerpt below) overlays the 500 picosecond laser pulse on a DSLR photograph of the experimental setup. Remarkably, the scattering that makes the beam visible comes solely from interaction with ambient gas molecules (Gariepy et al, 2014: 4), rather than the denser participating media traditionally required to reveal laser light (e.g. fog or airborne dust).
GIF of laser path. Image created from video (linked)
This laser path in flight is the missing step from the following video, produced as part of the Scottish Ten project: we see the laser spot of a Leica C10 scanner reflecting from the surface of a bridge at the Eastern Qing Tomb, China. If we applied the research team's methodology to the scanner, we might see the same phenomenon repeated at incremental spatial locations as the device records the environment around it – perhaps the ultimate LiDAR demo?
It's always exciting when fresh scanning tools offer the ability to conduct ever-richer investigation of heritage material and document an environment. That seems to be the case with an open-access paper published this month by Liang et al in ISPRS vol. 95, presenting ‘PRISMS’ (Portable Remote Imaging System for Multispectral Scanning), a system designed “for portable, flexible and versatile remote imaging”. To quote what it is capable of:
“In this paper, we demonstrate a spectral imaging system that allows automatic, in situ, remote imaging (distances up to 35 m) of paintings at high resolution that gives not only spectral information per pixel of the paintings, but also 3D position and distance measurements as a by-product.”
The specification of PRISMS is impressive. Its multispectral imaging delivers a spatial resolution of 80 microns (0.08 mm) from a distance of 10 m, covering a spectral range of 400–1700 nm at a spectral resolution of 50 nm. After calibration, the 3D data capture achieved a distance accuracy of ‘a few mm’ at 10 m, degrading steadily at greater distances, though the authors indicate there is room for improvement with better calibration.
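As a quick sanity check on those headline figures, the quoted numbers imply an angular resolution of around 8 microradians and at most 27 distinct 50 nm channels across the range – my own back-of-envelope arithmetic, not figures taken from the paper:

```python
# Back-of-envelope figures derived from the PRISMS spec quoted above;
# these are our own calculations, not numbers from the paper.
spatial_res_m = 80e-6        # 80 micron pixel footprint on the target
working_dist_m = 10.0        # at a 10 m working distance
angular_res_urad = spatial_res_m / working_dist_m * 1e6
print(f"{angular_res_urad:.1f} microradians")  # 8.0 microradians

span_nm = 1700 - 400         # spectral coverage, nm
step_nm = 50                 # spectral resolution, nm
max_channels = span_nm // step_nm + 1
print(f"up to {max_channels} channels at 50 nm spacing")  # up to 27
```
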
Sanskrit revealed in cave 465. Panel D shows the ‘difference image between 550 nm and 880 nm’. Source: Liang et al (2014)
The real-world applications have already been demonstrated: the paper shows the tool revealing faded Sanskrit writing on the ceiling of China's Mogao cave 465, otherwise not visible in colour imaging or in any single spectral band alone. It also reveals drawings invisible to the eye and spectrally identifies pigments such as red ochre and azurite. 3D data capture and multispectral imaging have previously been possible by combining separate tools, so the team's development does a great deal to streamline on-site investigation and enable easier, richer capture of heritage data.
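The band-difference trick behind the caption above ('difference image between 550 nm and 880 nm') is simple enough to sketch: ink that has faded to invisibility in visible light may still absorb in another band, so subtracting two co-registered band images makes the writing stand out. The data below is synthetic and the absorption behaviour is assumed purely for illustration; real input would be two calibrated reflectance images:

```python
import numpy as np

# Two synthetic co-registered band images of a plain painted surface.
rng = np.random.default_rng(1)
band_550 = 0.8 + 0.02 * rng.standard_normal((64, 64))   # visible band
band_880 = band_550 + 0.02 * rng.standard_normal((64, 64))  # NIR band
band_880[20:24, 10:50] -= 0.3  # assumed: ink absorbs only near 880 nm

diff = band_550 - band_880          # writing appears as a bright stripe
norm = (diff - diff.min()) / (diff.max() - diff.min())  # scale to [0, 1]
print(norm[20:24, 10:50].mean() > norm[:10, :].mean())  # True
```

In both bands separately the stripe is lost in the surface texture; only the difference image isolates it, which is essentially what panel D in the figure does for the faded Sanskrit.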