Source: Kopf et al. (2014) https://www.youtube.com/watch?v=sA4Za3Hv6ng
Microsoft recently announced an exciting development: a new way of stabilising hyperlapse camera footage (that is, time-lapse footage where the camera moves, as in a tracking shot). Hyperlapse techniques can achieve very cool and unique results, such as here and here. Traditionally this is achieved by manually compiling a series of photographs. Microsoft's development means you can do so from video footage, so you can record continuously and it will distil your footage into a smooth hyperlapse adventure. This is cool because it vastly simplifies the process and further empowers people to make awesome videos. It suits almost any situation, and there are great examples applied to nature and cities alike, from first-person POV to wide-angle cityscapes. It is important to note that Microsoft has not announced how or when this research will come to market.
On top of this hyperlapse hype, Instagram recently announced that they had developed and released (on iOS) the functionality to let users capture similar footage themselves. More on it here. Essentially, Instagram is bringing this hyperlapse tech to your smartphone. A brief comparison of the techniques suggests that the Microsoft method relies on reconstructing the scene in 3D (Microsoft's technical explanation video shows a point cloud generated via SfM photogrammetry, which is exciting in its own right given the unusual datasets involved) and plotting a new, stabilised camera path through the original one. Instagram's approach, on the other hand, uses the phone's internal gyroscopic sensors to stabilise the footage via its 'Cinema' tech, developed by lead project engineer Alex Karpenko. What stands out here is that Microsoft's technique appears to work with a range of footage (think GoPro, smartphone, DSLR, etc.) regardless of additional sensor information, whereas Instagram's Hyperlapse relies on the smartphone's sensors to stabilise the footage (it is on iOS now and will come to Android when API support becomes available).
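To make the sensor-based idea concrete, here is a minimal sketch of how gyro-driven stabilisation can work: smooth the raw per-frame orientations into a steadier camera path, then compute the corrective rotation each frame needs to follow it. To be clear, this is not Instagram's actual 'Cinema' algorithm (whose details aren't public); the function name and the simple moving-average filter are illustrative assumptions.

```python
import numpy as np

def smooth_camera_path(gyro_angles, window=9):
    """Low-pass filter per-frame camera angles (e.g. yaw/pitch/roll in
    radians, as reported by the gyro) with a moving average, and return
    the corrective rotation each frame needs to follow the smooth path.

    gyro_angles: (n_frames, 3) array of raw orientations.
    Returns (smoothed, correction) where correction = smoothed - raw.
    """
    gyro_angles = np.asarray(gyro_angles, dtype=float)
    kernel = np.ones(window) / window
    # Pad at the edges so the first and last frames stay defined.
    pad = window // 2
    padded = np.pad(gyro_angles, ((pad, pad), (0, 0)), mode="edge")
    smoothed = np.column_stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(3)]
    )
    return smoothed, smoothed - gyro_angles

# Example: a shaky pan -- a steady yaw drift plus high-frequency jitter.
t = np.linspace(0, 1, 100)
raw = np.column_stack([0.5 * t + 0.02 * np.sin(60 * t),
                       0.02 * np.cos(60 * t),
                       np.zeros_like(t)])
smoothed, correction = smooth_camera_path(raw)
# The smoothed path keeps the intentional pan but far less of the jitter;
# in a real pipeline each frame would then be warped by its correction.
```

A production stabiliser would also crop or warp frames to hide the corrected borders, which is where most of the engineering effort actually goes.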
Game engines have long been used to power reconstructions and visualisations within built heritage and archaeology. Setting aside the many examples that have in many ways laid the foundations, a couple of relatively recent developments from the past few months (and days) stood out to me:
1) March 2014 – CryEngine – Digital Digging project (recreating Uppsala): http://digitaldigging.net/digital-iron-age-environment-recreating-uppsala-bit-bit/ (see http://www.westergrenart.com/ for more)
2) August 2014 – Unreal Engine 4 – Architectural visualisation example
These examples of currently available technologies show the real, well-adapted application of these tools to construct, and reconstruct (if you want to make the distinction), rich environments with a wealth of different tools, shaders, lighting options, etc. that have moved this world (visually at least) light-years ahead of where it stood even half a decade ago. As with the Unreal Engine example, people are excited because the animations and stills produced in real time by the game engine broadly match the quality of those produced through long render times by offline renderers. That isn't to say there hasn't been an investment of time elsewhere, such as in baked lighting, which needs to be taken into account when comparing 'offline vs real-time' time savings.
Perhaps one of the most enabling and exciting aspects is that developers such as Crytek and Epic Games have made their high-grade proprietary engines available free for non-commercial use. This is great for those wishing to experiment, and it opens doors for many to start building their own amazing environments featuring modelled and reality-captured 3D versions of heritage artefacts and sites, as well as, like Daniel Westergren's Iron Age Uppsala, evidence-based reconstructions of historic or prehistoric sites.
One of the key challenges in visualising 3D reality-capture data has long been how it is presented to those viewing and using it. Within heritage and archaeological study, the focus is traditionally on capturing accurate geometry and diffuse colour texture, with every effort taken to ensure the finished dataset resembles the subject as closely as possible. That takes care of the input. The output, however, is a separate challenge. Rendering the data accurately depends on the subject, its material, and the 'level of detail' you need to convey; this can completely influence how we see and interpret the artefact or site. In most circumstances we accept viewing these models under completely synthetic lighting, with little regard to how accurately the output matches reality, so long as it shows us the 3D model or point cloud.
This is where physically based rendering (PBR) comes in. In short, it simulates the lighting in a scene according to a model of the laws of physics. While 'offline' 3D renderers have long supported sophisticated lighting and material properties, including global illumination, HDRI lighting, and texture maps for reflectance, refraction and sub-surface scattering (as with marble), basic and web-based viewers have struggled to keep up. This announcement is exciting: not only does it highlight the current state of the art in web-based viewing of sophisticated 3D datasets via WebGL, but it also marks the tech's development and adoption by a major vendor. Presented at SIGGRAPH 2014, it represents a step in the right direction that will let us engage more widely with richer, more accurate visualisations of heritage sites and archaeological artefacts.
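As a rough illustration of what 'physically based' means in practice, here is a minimal Python sketch of two building blocks most PBR pipelines share: an energy-conserving Lambertian diffuse BRDF, and the GGX microfacet distribution commonly used for specular highlights. The function names and parameterisation are my own simplifications for illustration, not any particular engine's or vendor's implementation.

```python
import numpy as np

def lambertian_brdf(albedo):
    # A perfectly diffuse surface scatters light equally in all
    # directions; dividing by pi keeps the surface energy-conserving
    # (it never reflects more light than it receives).
    return albedo / np.pi

def ggx_ndf(n_dot_h, roughness):
    """GGX/Trowbridge-Reitz normal distribution function, the
    microfacet term behind most modern PBR specular models."""
    a2 = roughness ** 4  # the common "alpha = roughness^2" remapping
    denom = n_dot_h ** 2 * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom ** 2)

def shade(normal, light_dir, albedo, light_intensity=1.0):
    """Outgoing radiance for a diffuse material under a single light:
    L_o = f_r * L_i * cos(theta), clamped so back-facing light is 0."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    cos_theta = max(np.dot(n, l), 0.0)
    return lambertian_brdf(np.asarray(albedo)) * light_intensity * cos_theta

# A surface lit head-on reflects albedo/pi of the incoming radiance;
# light arriving from behind the surface contributes nothing.
head_on = shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]), 0.8)
behind = shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]), 0.8)
```

Real engines layer much more on top (Fresnel terms, image-based lighting, shadowing/masking), but the physical constraints shown here, cosine falloff and energy conservation, are what separate PBR from the ad-hoc shading of older viewers.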
This is the only photo I have with Adam. This was taken while working on a project at the British Museum last week. More on that project later.
We (Adam + Scott) started this blog to keep track of the innovative projects and products in heritage and related fields. We are running it as a sister project to where we work: CyArk, a non-profit organization whose mission is, among other things, to digitally preserve the world's cultural heritage.
This blog will explore the development and use of digital survey and visualisation tools in heritage. In addition we hope to share a few tips and tricks, as well as what we are up to in our Oakland and Edinburgh offices.
Disclaimer: Our goal is to update this daily. That might be a lofty goal, but we do promise weekly.