Real-time depth imagery on a Raspberry Pi

Test image of an extracted depth-map from a stereo pair. Source: http://www.raspberrypi.org/real-time-depth-perception-with-the-compute-module/

This is a cool example of the low-cost platform offered by the Raspberry Pi being used for an inventive project. The Raspberry Pi is a capable, low-cost (the Model B+ is £25/$40) credit-card-sized computer first released in February 2012, introduced to encourage computer science skills, particularly among students. This project sought to capture 3D depth maps using a stereo pair of cameras linked to a Raspberry Pi Compute Module. Argon Design intern David Barker built a rig that processes stereoscopic depth maps from the two cameras at 12 FPS. The core functionality comes from an algorithm well known in video compression: by splitting a frame into blocks and comparing each block against blocks in another frame, you can detect motion (between successive frames) or measure parallax (between the two cameras' views), whilst taking advantage of the Pi's real-time video processing capabilities. The story posted on the Raspberry Pi website has further details on how this was achieved, and the steps through which the algorithm was optimised to run in real time.
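To make the block-matching idea concrete, here is a minimal, illustrative sketch in Python, assuming two rectified greyscale frames held as NumPy arrays. The function name, block size and disparity range are my own hypothetical choices; the Argon Design rig implements a far more heavily optimised version of the same principle to hit 12 FPS.

```python
# A minimal, illustrative block-matching sketch, assuming two rectified
# greyscale frames as NumPy arrays. The function name, block size and
# disparity range are hypothetical; the real rig runs a far more heavily
# optimised version of the same principle.
import numpy as np

def block_match_disparity(left, right, block=16, max_disp=64):
    """For each block of the left image, find the horizontal shift (disparity)
    that best matches a block in the right image, using sum of absolute
    differences. Larger disparity means a closer object."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.int32)
            best_d, best_cost = 0, np.inf
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(np.int32)
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp
```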

The Raspberry Pi has also been used to capture 3D data in a very different way: a spherical rig of 40 Raspberry Pi and camera units that photographs an entire body simultaneously for full-body scans. The two setups serve different purposes: real-time processing has immediate uses in robot vision, whilst data that can be processed 'offline' is central to a host of capture techniques, such as SfM photogrammetry and terrestrial laser scanning, that create accurate digital records of environments and objects.

FlexSense: Interactive, deforming overlay that tracks in 3D – and what it offers us

Microsoft's FlexSense. Source: https://www.youtube.com/watch?v=3Jo9ww9cLzg

Another recent innovation from Microsoft's R&D labs is 'FlexSense', which you can see demonstrated in a video here. It was brought to my attention by an article posted on The Verge. FlexSense is a thin, transparent, flexible surface that tracks its own deformation in 3D. It can be used to control applications in a variety of ways, such as selectively masking areas of an underlying screen or manipulating mesh surfaces in 3D programs. It owes its versatility to the high degrees of freedom afforded by the sensors embedded in the device, which are shown being calibrated and ground-truthed against visual markers.

The examples Microsoft demonstrates suggest a number of uses, and I can imagine a rich range of applications across digital heritage engagement and conservation. In its capacity as an 'overlay' alone, you could peel away an animated reconstruction of an archaeological site to see the underlying evidence on which it is based, for example, or layer in other information: the uncertainty of specific features of an archaeological interpretation, alternate reconstructions, a lighting-only view of such an environment (based on accurate calculations), or annotations of a 3D environment or object.

As a controller, it could perhaps also offer a more natural way to virtually manipulate high-resolution scans of '2D' artefacts such as scrolls, parchments and charters, which would otherwise be far too delicate to handle and deform in this way (by conservation professionals and members of the public alike). This could enable much more natural simulation, exploration and interpretation without touching the fragile source material. It will be interesting to see where FlexSense development goes, and hopefully its offering to these areas will bear fruit, if it hasn't already.

Revisiting Old Data

One of the initial visions at CyArk was to store all of the raw data from the sites that make up the archive, with the notion that as software improved we could reprocess the data and produce new derivatives. With the help of Autodesk ReCap we simply uploaded a series of photos of one of the sculptures depicted in the Dashavatara carved panel sequence at the newly inscribed UNESCO site, Rani ki Vav, and an hour later we had the model you see above. While I often do not find time around the office to revisit older projects, results like these might make me find the time.

Rani ki Vav is part of the Scottish Ten; check out the latest site just announced!

PRISMS. Spectral imaging + 3D distance measurement = rich, useful datasets

When fresh scanning tools offer the ability to conduct ever-richer investigation of heritage material and provide documentation of an environment, it's an exciting thing. Judging by an open-access paper published this month by Liang et al in ISPRS vol. 95, that seems to be the case: the team presents 'PRISMS' (Portable Remote Imaging System for Multispectral Scanning), designed "for portable, flexible and versatile remote imaging". To quote what it is capable of:

“In this paper, we demonstrate a spectral imaging system that allows automatic, in situ, remote imaging (distances up to 35 m) of paintings at high resolution that gives not only spectral information per pixel of the paintings, but also 3D position and distance measurements as a by-product.”

The specification of PRISMS is pretty impressive. The multispectral imaging can provide imagery at a resolution of 80 microns (0.08 mm) from a distance of 10 m, covering a spectral range of 400-1700 nm with a spectral resolution of 50 nm. After calibration, the 3D data capture achieved a distance accuracy of 'a few mm' at distances of 10 m; accuracy degrades steadily at greater distances, although the authors indicate there is room for improvement with better calibration.
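As a rough back-of-the-envelope check on what those figures imply, the short snippet below derives the angular resolution and an approximate band count. These are derived numbers for illustration only, not values taken from the paper.

```python
# Rough figures implied by the specification quoted above. These are derived
# for illustration only, not values reported in the paper.
spatial_res_m = 80e-6                   # 80 microns resolved on the painting surface
distance_m = 10.0                       # at a working distance of 10 m
band_min_nm, band_max_nm = 400, 1700    # spectral coverage
band_width_nm = 50                      # spectral resolution

angular_res_rad = spatial_res_m / distance_m                   # ~8e-6 rad
approx_bands = (band_max_nm - band_min_nm) // band_width_nm + 1

print(f"Angular resolution: {angular_res_rad * 1e6:.0f} microradians")
print(f"Approximately {approx_bands} bands across {band_min_nm}-{band_max_nm} nm")
```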

Sanskrit revealed in cave 465. Panel D shows 'difference image between 550 nm and 880 nm'. Source: Liang et al (2014) http://www.sciencedirect.com/science/article/pii/S0924271614001336

The real-world applications have already been demonstrated: the paper shows the tool revealing faded Sanskrit writing on the ceiling of China's Mogao cave 465 that is not visible in colour imagery or in any single spectral band alone, but appears in a difference image between two bands (550 nm and 880 nm, as in the figure above). It also reveals otherwise invisible drawings and spectrally identifies pigments such as red ochre and azurite. 3D data capture and multispectral imaging have previously been possible separately by combining different tools, so the team's development means a great deal for streamlining on-site investigation and enabling easier, richer capture of heritage data.
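To illustrate the band-differencing idea behind that figure, here is a minimal Python sketch, assuming two registered band images are available on disk. The file names and normalisation steps are hypothetical, and this is not the paper's actual processing pipeline.

```python
# A minimal sketch of band differencing: subtract a visible-band image from a
# near-infrared-band image so that material which responds differently at the
# two wavelengths (e.g. faded ink) stands out. Wavelengths follow the figure
# caption above; the file names and normalisation are hypothetical.
import numpy as np
import imageio.v3 as iio

band_550 = iio.imread("band_550nm.tif").astype(np.float64)   # visible band
band_880 = iio.imread("band_880nm.tif").astype(np.float64)   # near-infrared band

# Normalise each band so the difference reflects relative, not absolute, brightness.
band_550 /= band_550.max()
band_880 /= band_880.max()

difference = band_880 - band_550        # faded writing shows up in this image

# Rescale to 0-1 for display or export.
difference = (difference - difference.min()) / (difference.max() - difference.min())
iio.imwrite("difference_550_880.png", (difference * 255).astype(np.uint8))
```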

Link to paper: http://www.sciencedirect.com/science/article/pii/S0924271614001336

Why is hyperlapse footage in the news, and why stabilise it? Microsoft & Instagram

Source: Kopf et al (2014) https://www.youtube.com/watch?v=sA4Za3Hv6ng

Very recently Microsoft announced an exciting development: a new way of stabilising hyperlapse camera footage (that is, time-lapse footage where the camera moves, as in a tracking shot). With hyperlapse techniques you can achieve very cool and unique results, such as here and here. Traditionally this is achieved with a series of photographs manually compiled together. Microsoft's development means you can do it from video footage: record continuously, and the software distils your footage into a smooth hyperlapse adventure. This is cool because it vastly simplifies the process and further empowers people to make awesome videos. It can be used in almost any situation, and there are great examples applied to nature and cities alike, from first-person POV to wide-angle cityscapes. It is important to note that Microsoft has not announced how or when this research will come to market.

On top of this hyperlapse hype, Instagram announced very recently that they had developed and released (on iOS) the functionality for users to undertake similar capture themselves. More on it here. Essentially, Instagram is bringing hyperlapse tech to your smartphone. A brief comparative look at the techniques indicates that the Microsoft method relies on reconstructing the scene in 3D (Microsoft's technical explanation video shows a point cloud generated via SfM photogrammetry, which is exciting on its own given how it deals with such unusual datasets) and computing a new, stabilised camera path through the old one. Instagram's approach, on the other hand, uses the phone's internal gyroscopic sensors to stabilise the footage via its 'Cinema' tech developed by lead project engineer Alex Karpenko. What stands out is that Microsoft's technique appears to work with a range of footage (think GoPro, smartphone, DSLR, etc.) regardless of additional sensor information, whereas Instagram's Hyperlapse relies on the smartphone's sensors to stabilise the footage (it is on iOS and will come to Android when API support becomes available).
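The shared idea, stripped of all the sophistication, is to replace a jittery camera trajectory with a smoother one and then re-render or re-warp the frames along it. The sketch below Gaussian-smooths a per-frame camera position track purely as an illustration of that principle; it is not Microsoft's 3D-reconstruction-based optimisation nor Instagram's gyroscope-driven 'Cinema' pipeline.

```python
# Purely illustrative: replace a jittery per-frame camera position track with a
# Gaussian-smoothed one. Microsoft's method optimises a new path over a full 3D
# reconstruction and Instagram's works on gyroscope orientations, but the basic
# goal, a smooth trajectory through the original motion, is the same.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_camera_path(positions, sigma=8.0):
    """positions: (N, 3) array of per-frame camera positions.
    Returns a smoothed path of the same shape."""
    return gaussian_filter1d(positions, sigma=sigma, axis=0)

# Example: a shaky walk drifting along the x-axis.
raw_path = np.cumsum(
    np.random.normal(loc=(0.1, 0.0, 0.0), scale=0.05, size=(300, 3)), axis=0)
smooth_path = smooth_camera_path(raw_path)
```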

On game engines (look at the pretty)

Game engines have long been used to power reconstructions and visualisations within the realm of built heritage and archaeology. Without reference to the volumes of examples that have in many ways laid the foundations, in the past few months (and days) a couple of relatively recent developments stood out to me:

1) March 2014 – CryEngine – Digital Digging project (recreating Uppsala) http://digitaldigging.net/digital-iron-age-environment-recreating-uppsala-bit-bit/ (see http://www.westergrenart.com/ for more)

2) August 2014 – Unreal Engine 4 – Architectural visualisation example
http://www.ronenbekerman.com/unreal-engine-4-and-archviz-by-koola/

These projects and examples of currently available technology show the real, well-adapted application of these tools to construct, and reconstruct (if you want to make the distinction), rich environments with a wealth of different tools, shaders, lighting options and more, which have moved this work (visually at least) light-years ahead of where it was even half a decade ago. As with the Unreal Engine example, people are excited because the animations and stills produced in real time via the game engine broadly match the quality of those produced through long render times by offline renderers. That isn't to say there hasn't been an investment of time elsewhere, such as in baked lighting, which needs to be taken into account when comparing 'offline-vs-realtime' time savings.

Perhaps one of the most enabling and exciting reasons to be inspired by this is that developers such as Crytek and Epic Games have made their high-grade proprietary engines available free for non-commercial use. This is great for those wishing to experiment and opens doors for many to start building their own amazing environments featuring modelled and reality-captured 3D versions of heritage artefacts and sites, and, as with Daniel Westergren's Iron Age Uppsala, evidence-based reconstructions of historic or prehistoric sites.

PBR (no not that one)

One of the key challenges in visualising 3D reality-capture data has long been how it is presented to those viewing and using it. Within heritage and archaeological study the focus is traditionally on capturing accurate geometry and diffuse colour texture, with every effort taken to ensure the finished dataset resembles the subject as closely as possible. That takes care of the input. The output, however, is a separate challenge. Rendering the data accurately depends on the subject, its material and the 'level of detail' you need to convey, and this can completely influence how we see and interpret the artefact or site. In most circumstances where we view these models, we accept completely synthetic lighting (or lighting setups), with little regard to how accurately the output matches reality, so long as it shows us the 3D model or point cloud.

This is where physically based rendering (PBR) comes in. In short, it is used to accurately simulate lighting in a scene according to a model of the laws of physics. While 'offline' 3D renderers have long supported sophisticated lighting and material properties, including global illumination, HDRI lighting, and texture maps for reflectance, refraction and sub-surface scattering (as with marble), basic viewers and web-based viewers have struggled to keep up. Sketchfab's announcement (linked below) is exciting, as it not only highlights the current state of the art in WebGL-based viewing of more sophisticated 3D datasets but also marks the development and adoption of PBR by a major vendor. Presented at this year's SIGGRAPH 2014, the tech represents a step in the right direction that will enable us to engage more widely with richer, more accurate visualisations of heritage sites and archaeological artefacts.

https://labs.sketchfab.com/siggraph2014/
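For a sense of what a physically based shading model actually evaluates per pixel, here is a minimal Python sketch of a Lambertian diffuse term plus a GGX/Cook-Torrance specular term with Schlick's Fresnel approximation. It is a generic textbook formulation, not Sketchfab's implementation; the function names and parameter conventions are my own.

```python
# A generic, single-light physically based shading model: Lambertian diffuse
# plus GGX/Cook-Torrance specular with Schlick's Fresnel approximation. Real
# viewers add image-based lighting and texture-mapped parameters; this only
# shows what 'physically based' means at the level of one surface point.
import numpy as np

def ggx_specular(n, l, v, roughness, f0):
    """Cook-Torrance specular term for unit normal n, light l and view v vectors."""
    h = (l + v) / np.linalg.norm(l + v)                   # half vector
    nl, nv = max(n @ l, 0.0), max(n @ v, 0.0)
    nh, vh = max(n @ h, 0.0), max(v @ h, 0.0)
    a2 = roughness ** 4                                   # perceptual roughness -> alpha^2
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)  # GGX normal distribution
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))  # Smith geometry term
    f = f0 + (1.0 - f0) * (1.0 - vh) ** 5                 # Schlick Fresnel
    return d * g * f / max(4.0 * nl * nv, 1e-6)

def shade(n, l, v, albedo, roughness, f0=0.04, light=1.0):
    """Outgoing radiance at a surface point lit by one directional light."""
    diffuse = albedo / np.pi                              # Lambertian diffuse
    return (diffuse + ggx_specular(n, l, v, roughness, f0)) * light * max(n @ l, 0.0)

# Example: a slightly rough, reddish surface lit and viewed from directly above.
n = np.array([0.0, 0.0, 1.0])
l = v = np.array([0.0, 0.0, 1.0])
print(shade(n, l, v, albedo=np.array([0.8, 0.3, 0.2]), roughness=0.4))
```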

We started a blog!

This is the only photo I have with Adam; it was taken while working on a project at the British Museum last week. More on that project later.

Hi All,

We (Adam + Scott) started this blog to keep track of all the innovative projects and products in heritage and related fields that catch our eye. We are running it as a sister project to where we work, CyArk, a non-profit organization whose mission is to digitally preserve the world's cultural heritage, among a few other things.

This blog will explore the development and use of digital survey and visualisation tools in heritage. In addition we hope to share a few tips and tricks, as well as what we are up to in our Oakland and Edinburgh offices.

Disclaimer: Our goal is to update this daily. That might be a lofty goal, but we do promise weekly updates.