Microsoft’s FlexSense, source: https://www.youtube.com/watch?v=3Jo9ww9cLzg
Another recent innovation from Microsoft’s R&D labs is ‘FlexSense’, which you can see demonstrated in the video above. It was brought to my attention by an article posted on The Verge. FlexSense is a thin, transparent, flexible surface that tracks its own deformation in 3D. It can be used to control applications in a variety of ways, such as masking selected areas of an underlying screen or manipulating mesh surfaces in 3D programs. Its versatility comes from the high degrees of freedom afforded by the sensors embedded in the device, which the video shows being calibrated and ground-truthed against visual markers.
The examples Microsoft demonstrates suggest a number of uses, and I can imagine a rich range of applications to digital heritage, across both engagement and conservation. In its capacity as an ‘overlay’ alone, you could, for example, peel away an animated reconstruction of an archaeological site to reveal the underlying evidence on which it is based. Other information could be layered in the same way: the uncertainty attached to specific features of an archaeological interpretation, alternative reconstructions, a lighting-only view of the environment (based on accurate calculations), or annotations of a 3D environment or object.
As a controller, it could perhaps offer a more natural way to virtually manipulate high-resolution scans of ‘2D’ artefacts such as scrolls, parchments and charters, originals which would be far too delicate to handle and deform in this way (by conservation professionals and members of the public alike). This could enable much more natural simulation, exploration and interpretation without touching the fragile source material. It will be interesting to see where FlexSense development goes, and hopefully its offering to these areas will bear fruit, if it hasn’t already.
Game engines have long been used to power reconstructions and visualisations within the realm of built heritage and archaeology. Setting aside the volumes of earlier examples that in many ways laid the foundations, a couple of relatively recent developments have stood out to me in the past few months (and days):
1) March 2014 – CryEngine – Digital Digging project (recreating Uppsala) http://digitaldigging.net/digital-iron-age-environment-recreating-uppsala-bit-bit/ (see http://www.westergrenart.com/ for more)
2) August 2014 – Unreal Engine 4 – Architectural visualisation example
These projects show how real and well-adapted the application of currently available tools has become for constructing, and reconstructing (if you want to make that distinction), rich environments, with a wealth of shaders, lighting options and other tooling that has moved this work (visually at least) light-years ahead of where it stood even half a decade ago. With the Unreal Engine example, people are excited because the animations and stills produced in real time by the game engine broadly match the quality of those produced over long render times by offline renderers. That isn’t to say time hasn’t been invested elsewhere, such as in baking lighting, which needs to be taken into account when comparing ‘offline-vs-realtime’ time savings.
Perhaps the most enabling and exciting reason to be inspired by all this is that developers such as Crytek and Epic Games have made their high-grade proprietary engines available free for non-commercial use. This is great for those wishing to experiment, and it opens doors for many to start building their own amazing environments featuring modelled and reality-captured 3D versions of heritage artefacts and sites and, as with Daniel Westergren’s Iron Age Uppsala, evidence-based reconstructions of historic or prehistoric sites.