Category Archives: Research

Encode your digital data into DNA, then keep it on ice

The Global Seed Vault on the Arctic island of Svalbard. The vault presently stores almost half a million seed samples. Image source: Gizmodo.

Recent articles by Gizmodo and New Scientist follow the publication of an interesting paper by a Swiss team from Zurich’s Swiss Federal Institute of Technology. The paper highlights the challenges of physically preserving digital data for periods greater than 50 years. As an alternative to current methods, it suggests encoding digital data in DNA, specifying that “we translated 83 kB of information to 4991 DNA segments, each 158 nucleotides long, which were encapsulated in silica”.

The 83 kB in question came from the 13th-century Swiss Federal Charter and the 10th-century Archimedes Palimpsest – both items of heritage deemed worthy of being incorporated into the experiment (and unlikely to present copyright issues). In terms of performance, the article states that accelerated ageing was undertaken at 70 °C for a week and showed considerable promise of longevity, being “thermally equivalent to storing information on DNA in central Europe for 2000 years”. The data was recovered without error, thanks in part to the error-correcting codes the team integrated, similar to those used in traditional approaches to the archiving of digital data. With this experimental evidence in hand, the team’s paper notes that “The corresponding experiments show that only by the combination of the two concepts, could digital information be recovered from DNA stored at the Global Seed Vault (at -18°C) after over 1 million years”.
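
To make the encoding step concrete, below is a minimal sketch of the general idea of writing binary data as the four DNA bases, two bits per nucleotide. It is not the team’s actual scheme (which, as noted above, also integrates error correction and constraints on the synthesised sequences); the function names and the example message are purely illustrative.

```python
# Minimal sketch: pack two bits of data into each nucleotide, so one byte
# becomes four bases. Purely illustrative -- the published scheme also adds
# error correction and sequence constraints.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode each byte as four nucleotides (2 bits per base)."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Decode a nucleotide string back into bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

message = b"1291 Federal Charter excerpt"
encoded = bytes_to_dna(message)
assert dna_to_bytes(encoded) == message
print(len(message), "bytes ->", len(encoded), "nucleotides")
```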

CyArk uses a bespoke LTFS setup as a practical stable-media solution for archiving its terabytes of data, so while 83 kB may not sound like much by comparison, potentially “just 1 gram of DNA is theoretically capable of holding 455 exabytes”. The economics of this may not add up yet: it reportedly cost around £1,000 to encode this small excerpt of data. Ultimately, if we want DNA data storage and access at this level to become feasible as an everyday reality, it will need, among other things, strong market forces behind its adoption and development. I am reminded of this picture of the 1956 IBM RAMAC computer, which included the IBM Model 350 disk storage system (seen below), storing 5 MB of data and hired at $3,200/month, equivalent to a purchase price of $160,000.

The 1956 IBM RAMAC computer included the IBM Model 350 disk storage system. Image source: Faber.se
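
For a rough sense of scale, here is a quick back-of-the-envelope check of the numbers quoted above. Both the £1,000/83 kB cost and the 455 exabytes-per-gram figure come from the article; nothing new is measured here.

```python
# Back-of-the-envelope check of the article's figures.
cost_gbp = 1000.0            # reported cost to encode the excerpt
data_kb = 83.0               # kilobytes encoded
cost_per_mb = cost_gbp / (data_kb / 1000.0)
print(f"~£{cost_per_mb:,.0f} per megabyte at the reported synthesis cost")
# -> roughly £12,000 per megabyte

density_eb_per_gram = 455.0  # theoretical capacity quoted in the article
print(f"{density_eb_per_gram * 1e6:,.0f} terabytes per gram, in theory")
# -> 455,000,000 TB per gram
```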

What would a laser scanner look like at 20 billion frames per second?

To help answer the perennial question ‘What does a laser path look like in slow motion?’, a team of researchers (published Open Access in Nature in August 2014) undertook an experiment at Heriot-Watt University that used a 2D silicon CMOS array of single-photon avalanche diodes (‘SPAD’) to essentially construct a 32 × 32 pixel camera capable of recording the path of laser pulses through a given area. While the article acknowledges that light has been captured in flight as early as 1978, the challenge addressed by the team is one of simplifying data acquisition and reducing acquisition times “by achieving full imaging capability and low-light sensitivity while maintaining temporal resolution in the picosecond regime” (Gariepy et al, 2014: 2). To produce an image (or rather a video) from the experiment, the raw sensor data was put through a sequence of processing steps categorised into three stages – noise removal, temporal deconvolution and re-interpolation – which is illustrated in the graphic below:

Creating an image from the 32 x 32 array of sensors. Image from article (linked).

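
As a rough illustration of those three stages, here is a hedged sketch of the kind of processing involved, applied to a 32 × 32 × T cube of SPAD photon counts. The array shape matches the sensor, but the noise threshold, the Wiener-style deconvolution and the interpolation factor are illustrative assumptions, not the team’s actual implementation.

```python
import numpy as np

def process_cube(counts, irf, noise_floor=2, upsample=4):
    """counts: (32, 32, T) photon-count histograms; irf: (T,) instrument response.

    Illustrative three-stage pipeline: noise removal, temporal deconvolution,
    re-interpolation. Parameter values are assumptions, not published ones."""
    # 1. Noise removal: subtract an ambient/dark-count floor and clip at zero.
    denoised = np.clip(counts.astype(float) - noise_floor, 0.0, None)

    # 2. Temporal deconvolution: Wiener-style deconvolution of each pixel's
    #    time histogram by the instrument response, in the frequency domain.
    T = counts.shape[-1]
    H = np.fft.rfft(irf, n=T)
    wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-2)   # small regularisation term
    sharpened = np.fft.irfft(np.fft.rfft(denoised, n=T, axis=-1) * wiener, n=T, axis=-1)

    # 3. Re-interpolation: resample each pixel's histogram on a finer time axis.
    t = np.arange(T)
    t_fine = np.linspace(0, T - 1, T * upsample)
    out = np.empty(counts.shape[:2] + (T * upsample,))
    for y in range(counts.shape[0]):
        for x in range(counts.shape[1]):
            out[y, x] = np.interp(t_fine, t, sharpened[y, x])
    return out
```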

The video produced by the team (GIF excerpt below) is an overlay of the 500-picosecond pulse of laser light on top of a DSLR photograph of the experimental setup. Remarkably, the scattering that makes the beam visible comes only from interaction with ambient gas molecules (Gariepy et al, 2014: 4), rather than the denser participating media traditionally required to highlight laser light (e.g. fog or airborne dust).

GIF of laser path. Image created from video (linked)


This laser path in flight is the missing step from the following video produced as part of the Scottish Ten project: we see the Leica C10 scanner laser spot reflecting from the surface of a bridge at the Eastern Qing Tomb, China. If we applied the same methodology as the research team in the article to the scanner, we might see the same phenomenon repeated at incremental spatial locations to record the environment around the scanner – perhaps the ultimate LiDAR demo?

Microsoft HoloLens mixes the real and the virtual

The HoloLens headset. Image source: Engadget (linked).

With the announcement of Windows 10 in the live event, Microsoft also recently presented the HoloLens. This is a ‘mixed reality’ headset that overlays ‘holographic’ imagery on your day-to-day vision, allowing you to interact virtually – make Skype calls, build 3D objects in the HoloStudio software, play HoloBuilder (essentially Minecraft), and so on – untethered and without markers or tracking. Its specifications are unclear at this point, described variously as ‘sharp’ and having ‘HD lenses’, and in the presentation it is explained that a traditional CPU/GPU combination was not enough: the answer was to invent an ‘HPU’ (‘Holographic Processing Unit’), which handles the input from the various sensors that detect sound and our gestures and ‘spatially map the world around us’ in real time.

It requires little imagination to visualise how units like this could integrate with archaeological excavation, training, virtual access/reconstruction, etc, in a similar vein to how it has already been employed by NASA. To quote Dave Lavery, program executive for the Mars Science Laboratory mission at NASA Headquarters in Washington: “OnSight gives our rover scientists the ability to walk around and explore Mars right from their offices [...] It fundamentally changes our perception of Mars, and how we understand the Mars environment surrounding the rover.”


Exploring Mars – Microsoft worked with NASA’s Jet Propulsion Laboratory team and the Curiosity Mars rover to explore how engineers, geologists, etc could use the technology. Image source: Frame from video (linked).

We have all long been aware of the development of consumer VR headsets (e.g. the Oculus Rift), which can immerse us in entirely 3D environments, and of Google’s Glass (prototype production has now ended). HoloLens is an interesting move from Microsoft, and somewhat curiously there is no reference to ‘augmented reality’ (see Microsoft’s FAQ), which it has been suggested may be for marketing purposes. In terms of availability, we are told the HoloLens will be made available within the timeframe of Windows 10. Going forwards it will be a question of which applications befit VR/AR – especially for heritage (and conservation), where virtual access to the present or the past in 2D and 3D form is a central requirement.

The scanned material is greater than the sum of its maps

Link to video

Quixel Megascans test scene. Top: Material mapped render; Bottom: Rendered scene with no materials. Image source: http://quixel.se/megascansintro

In seeking to model and render CGI environments, material shaders and their respective maps have always sat at the foundation of making a sterile virtual environment appear realistic. Data capture methods that bring 3D objects into virtual environments go some way towards bringing a facsimile of the real into an entirely virtual place, and a mesh generated by Structure from Motion photogrammetry or structured light scanning, for example, will typically feature a diffuse texture map. Taking these principles, Quixel Megascans is a service and suite that allows artists and modellers to take advantage of a library of scanned-in material maps with a range of acquired parameters. They even built their own material scanner to generate these maps.

Breakdown of maps that constitute material parameters


This is big, because it has not previously been so accessible to take captured material data and integrate it into a 3D workflow with your everyday BRDF shader in 3ds Max. It means 3D reconstructions of cultural heritage sites, for example, don’t have to be accurate 3D data punctuated with artistic representations of foliage, generic mudbrick, masonry, etc, but can be physically based representations of those materials, from colour to translucency and specular reflections (Quixel lists the captured maps as PBR Calibrated Albedo, Gloss, Normal, Translucency, Cavity, Displacement, AO, Bump and Alpha). This is exciting because, alongside the trend away from biased and towards unbiased rendering algorithms and the continual advance of computational power, these richer environments aren’t just increasingly ‘realistic looking’ but actually become more accurate representations of natural and built environments.
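
To give a sense of how a handful of those scanned maps feed a shading calculation, here is a toy single-point evaluation: Lambert diffuse from an albedo value plus a simple Blinn-Phong specular driven by a gloss value. It is a hedged sketch only – not Quixel’s pipeline or any particular renderer’s BRDF – and all the numbers are illustrative.

```python
import numpy as np

def shade(albedo, gloss, normal, light_dir, view_dir, light_rgb=(1.0, 1.0, 1.0)):
    """Toy shading of one texel from scanned map values.

    albedo: (3,) linear RGB from the albedo map
    gloss:  scalar 0..1 from the gloss map
    normal: (3,) surface normal from the normal map"""
    n, l, v = (np.asarray(x, float) for x in (normal, light_dir, view_dir))
    n, l, v = n / np.linalg.norm(n), l / np.linalg.norm(l), v / np.linalg.norm(v)
    h = (l + v) / np.linalg.norm(l + v)                 # half vector

    n_dot_l = max(float(np.dot(n, l)), 0.0)
    diffuse = np.asarray(albedo, float) / np.pi * n_dot_l          # Lambert term

    shininess = 2.0 ** (10.0 * gloss)                   # map gloss to a specular exponent
    specular = (max(float(np.dot(n, h)), 0.0) ** shininess) * n_dot_l

    return (diffuse + specular) * np.asarray(light_rgb, float)

# One texel's worth of (illustrative) scanned values:
print(shade(albedo=[0.55, 0.45, 0.35], gloss=0.4,
            normal=[0.1, 0.2, 0.97],
            light_dir=[0.3, 0.4, 1.0], view_dir=[0.0, 0.0, 1.0]))
```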

Real-time depth imagery on a Raspberry Pi

Test image of an extracted depth-map from a stereo pair. URL: http://www.raspberrypi.org/real-time-depth-perception-with-the-compute-module/


This is a cool example of the low-cost platform offered by the Raspberry Pi being used for inventive projects. The Raspberry Pi is a capable, low-cost (the Model B+ is £25/US$40) credit-card-sized computer that was first released in February 2012 and introduced to encourage computer science skills, particularly among students. This project sought to capture 3D depth maps using stereo-pair cameras linked to a Raspberry Pi Compute Module. Argon Design intern David Barker built a rig that produces stereoscopic depth maps from the two cameras at a rate of 12 FPS. The core functionality is provided by an algorithm well known from video compression: by splitting a frame into blocks and comparing each block against blocks in another frame, you can detect motion or, across a stereo pair, measure parallax, whilst taking advantage of the Pi’s real-time video processing capabilities. The story posted on the Raspberry Pi website has further details on how this was achieved, and the steps through which the algorithms were optimised to run in real time.
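
For readers unfamiliar with block matching, here is a hedged sketch of the basic idea on a rectified stereo pair: for each block in the left image, search along the same row of the right image for the best match, and record the horizontal offset (the disparity), which is inversely proportional to depth. This is a plain NumPy illustration of the concept, not Argon Design’s optimised implementation; the block size and search range are arbitrary choices.

```python
import numpy as np

def disparity_map(left, right, block=8, max_disp=32):
    """left, right: rectified greyscale images as 2D float arrays of equal shape."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block]
            best_cost, best_d = np.inf, 0
            # Search along the same row, shifting the candidate block left.
            for d in range(0, min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block]
                cost = np.abs(ref - cand).sum()      # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp  # larger disparity = closer to the cameras
```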

The Raspberry Pi has been used in a separate instance to achieve 3D data capture through the construction of a spherical rig (using 40 Raspberry Pi + camera units) to capture entire body scans simultaneously. These two setups serve different purposes: real-time applications offer immediate uses for robot vision, whilst applications that can be processed ‘offline’ are central to a host of data capture techniques that create accurate digital records of environments and objects, particularly SfM photogrammetry and terrestrial laser scanning.

FlexSense: Interactive, deforming overlay that tracks in 3D – and what it offers us


Microsoft’s FlexSense. Source: https://www.youtube.com/watch?v=3Jo9ww9cLzg

Another recent innovation from Microsoft’s R&D labs is ‘FlexSense’, which you can see demonstrated in a video here; it was brought to my attention by an article posted on The Verge. FlexSense is a thin, transparent, flexible surface that tracks its own deformation in 3D. It can be used to control applications in a variety of ways, such as selectively revealing areas of an underlying screen or manipulating mesh surfaces in 3D programs. It owes its versatility to the high degrees of freedom afforded by the sensors embedded in the device, which are shown as having been calibrated and ground-truthed against visual markers.

The examples Microsoft demonstrates make it easy to picture a number of uses, and I can imagine a rich range of applications to digital heritage across engagement and conservation. In its capacity as an ‘overlay’ alone, you could peel away an animated reconstruction of an archaeological site to see the underlying evidence on which it is based, or layer other information such as the uncertainty of specific features of an archaeological interpretation, alternate reconstructions, lighting-only views of the environment (based on accurate calculations), and annotations of a 3D environment or object.

As a controller, perhaps it could also offer a more natural way to virtually manipulate high-resolution scans of ‘2D’ artefacts such as scrolls, parchments and charters, which would otherwise be far too delicate for conservation professionals or members of the public to handle and deform in this way. This could enable much more natural simulation, exploration and interpretation without touching the fragile source material. It will be interesting to see where FlexSense development goes, and hopefully its offering to these areas will bear fruit, if it hasn’t already.


PRISMS. Spectral imaging + 3D distance measurement = rich, useful datasets

When fresh scanning tools offer the ability to conduct ever-richer investigation of heritage material and provide documentation of an environment, it’s an exciting thing. That seems to be the case with an open-access paper published this month by Liang et al. in the ISPRS Journal of Photogrammetry and Remote Sensing (vol. 95), presenting ‘PRISMS’ (Portable Remote Imaging System for Multispectral Scanning), which was designed “for portable, flexible and versatile remote imaging”. To quote what it is capable of:

“In this paper, we demonstrate a spectral imaging system that allows automatic, in situ, remote imaging (distances up to 35 m) of paintings at high resolution that gives not only spectral information per pixel of the paintings, but also 3D position and distance measurements as a by-product.”

The specifications of PRISMS are pretty impressive. The multispectral imaging can provide imagery at a resolution of 80 microns (0.08 mm) from a distance of 10 m, covering a spectral range of 400–1700 nm with a spectral resolution of 50 nm. After calibration, the 3D data capture achieved a distance accuracy of ‘a few mm’ at distances of 10 m, degrading steadily at greater distances, albeit with an indication that better calibration leaves room for improvement.
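
A quick back-of-the-envelope reading of those figures, derived only from the numbers quoted above rather than any additional results from the paper:

```python
# Rough implications of the quoted specifications.
spatial_res_m = 80e-6        # 80 microns resolved at...
distance_m = 10.0            # ...10 metres
angular_res_rad = spatial_res_m / distance_m
print(f"implied angular resolution ~ {angular_res_rad * 1e6:.0f} microradians")  # ~8

band_width_nm = 50.0
n_bands = (1700 - 400) / band_width_nm
print(f"at most ~{n_bands:.0f} contiguous 50 nm bands across 400-1700 nm")       # ~26
```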

Sanskrit revealed in cave 465. Panel D shows ‘difference image between 550 nm and 880 nm’. Source: Liang et al (2014) http://www.sciencedirect.com/science/article/pii/S0924271614001336


The real-world applications have already been demonstrated: the paper shows that the tool revealed faded Sanskrit writing on the ceiling of China’s Mogao cave 465 that is otherwise not visible in conventional colour imagery. In addition, it reveals invisible drawings and spectrally identifies pigments such as red ochre and azurite. 3D data capture and multispectral imaging have previously been possible only by combining separate tools, so the team’s development goes a long way towards streamlining on-site investigation and enabling easier, richer capture of heritage data.
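
The band-differencing shown in panel D of the figure above is simple to express. Below is a minimal sketch that subtracts a visible band (550 nm) from a near-infrared band (880 nm) in a spectral image cube so that features faded in visible light stand out; the cube layout and the normalisation are illustrative assumptions rather than the paper’s exact procedure.

```python
import numpy as np

def band_difference(cube, wavelengths_nm, band_a=880, band_b=550):
    """cube: (H, W, N) spectral image; wavelengths_nm: the N band centres in nm."""
    wl = np.asarray(wavelengths_nm, float)
    ia = int(np.argmin(np.abs(wl - band_a)))    # band nearest 880 nm
    ib = int(np.argmin(np.abs(wl - band_b)))    # band nearest 550 nm
    diff = cube[:, :, ia].astype(float) - cube[:, :, ib].astype(float)
    # Stretch to the 0..1 range for display.
    return (diff - diff.min()) / (diff.max() - diff.min() + 1e-12)
```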

Link to paper: http://www.sciencedirect.com/science/article/pii/S0924271614001336