Category Archives: Software

The scanned material is greater than the sum of its maps


Quixel Megascans test scene. Top: Material mapped render; Bottom: Rendered scene with no materials. Image source: http://quixel.se/megascansintro

In seeking to model and render CGI environments, material shaders and their respective maps have always sat at the foundation of making a sterile virtual environment appear realistic. Data capture methods that bring 3D objects into virtual environments go some way towards bringing a facsimile of the real into an entirely virtual place: a mesh generated by Structure from Motion photogrammetry or structured light scanning, for example, will typically feature a diffuse texture map. Taking these principles further, Quixel Megascans is a service and suite that allows artists and modellers alike to take advantage of a library of scanned-in material maps with a range of acquired parameters. Quixel even built their own material scanner to generate these maps.

Breakdown of maps that constitute material parameters

This is big, because it has not previously been so accessible to take captured material data and integrate it into a 3D workflow with your everyday BRDF shader in 3ds Max. It means 3D reconstructions of cultural heritage sites, for example, don't have to be accurate 3D data punctuated with artistic representations of foliage, generic mudbrick, masonry and so on, but can be physically based representations of those materials, from colour to translucency and specular reflections (Quixel list the captured maps as PBR-calibrated Albedo, Gloss, Normal, Translucency, Cavity, Displacement, AO, Bump and Alpha). This is exciting because, alongside the ongoing shift from biased towards unbiased rendering algorithms and the continual advance of computational power, these richer environments aren't just increasingly 'realistic looking' but actually become more accurate representations of natural and built environments.
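To make the idea concrete, here is a minimal sketch of how a few of these scanned maps could feed a shading computation. The file names are hypothetical, and the Lambert diffuse plus Blinn-Phong specular lobe is an illustrative stand-in, not Quixel's calibration or a production BRDF:

```python
# Toy shading pass driven by scanned material maps.
# File names are placeholders; the BRDF is deliberately simple.
import numpy as np
from PIL import Image

def load_map(path):
    """Load a texture map as floats in [0, 1], dropping any alpha channel."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    return img / 255.0

albedo = load_map("scan_albedo.png")              # PBR-calibrated base colour
gloss  = load_map("scan_gloss.png")[..., 0]       # glossiness, one channel
normal = load_map("scan_normal.png") * 2.0 - 1.0  # unpack tangent-space normals
normal /= np.linalg.norm(normal, axis=-1, keepdims=True)

light_dir = np.array([0.3, 0.3, 0.9]); light_dir /= np.linalg.norm(light_dir)
view_dir  = np.array([0.0, 0.0, 1.0])
half_vec  = light_dir + view_dir; half_vec /= np.linalg.norm(half_vec)

n_dot_l = np.clip(normal @ light_dir, 0.0, 1.0)   # per-texel diffuse falloff
n_dot_h = np.clip(normal @ half_vec, 0.0, 1.0)    # per-texel highlight term
shininess = 2.0 ** (1.0 + gloss * 9.0)            # gloss map -> specular exponent

shaded = albedo * n_dot_l[..., None] + 0.25 * (n_dot_h ** shininess)[..., None]
Image.fromarray((np.clip(shaded, 0, 1) * 255).astype(np.uint8)).save("shaded.png")
```

The point is simply that each captured map drives a distinct term of the shading model per texel, which is what makes a scanned material richer than a single diffuse texture.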

Revisiting Old Data

One of the initial visions at CyArk was to store all of the RAW data from the sites that compose the archive, with the notion that, as software improved, we could reprocess the data and produce new derivatives. With the help of Autodesk ReCap we simply uploaded a series of photos of one of the sculptures depicted in the Dashavatara carved panel sequence at the newly inscribed UNESCO site, Rani ki Vav, and an hour later we had the model you see above. While I often do not find time around the office to revisit older projects, results like these might make me find the time.

Rani ki Vav is part of the Scottish Ten; check out the latest site just announced!

Why is hyperlapse footage in the news, and why stabilise it? Microsoft & Instagram


Source: Kopf et al. (2014) https://www.youtube.com/watch?v=sA4Za3Hv6ng

Very recently Microsoft announced an exciting development: a new way of stabilising hyperlapse camera footage (that is, time-lapse footage where the camera moves, as in a tracking shot). With hyperlapse techniques you can achieve very cool and unique results such as here and here. Traditionally this is achieved with a series of photographs manually compiled together. Microsoft's development means you can work from video footage: record continuously, and the technique distils your footage into a smooth hyperlapse adventure. This is cool because it vastly simplifies the process and further empowers people to make awesome videos. It can be applied in almost any situation, and there are great examples from nature and cities alike, first-person POV or wide-angle cityscapes. It is important to note that Microsoft has not announced how or when this research will come to market.
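For intuition, the naive version of a hyperlapse is just aggressive frame decimation. A minimal sketch using OpenCV (the file names and speed-up factor are assumptions) reproduces the speed-up but none of the stabilisation that makes Microsoft's result notable; shaky input stays shaky:

```python
# Naive hyperlapse: keep every Nth frame of a video, no stabilisation.
import cv2

SPEEDUP = 8  # hypothetical: keep 1 frame in 8

cap = cv2.VideoCapture("walk.mp4")  # hypothetical input clip
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("hyperlapse.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % SPEEDUP == 0:
        out.write(frame)  # only every Nth frame survives
    idx += 1

cap.release()
out.release()
```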

So on top of this hyperlapse hype, Instagram announced very recently that they have developed and released (on iOS) functionality that allows users to undertake similar capture themselves. More on it here. Essentially, Instagram are about bringing this hyperlapse tech to your smartphone. A brief comparative look at the techniques suggests that the Microsoft method relies on reconstructing the scene in 3D (Microsoft's technical explanation video shows a point cloud generated via SfM photogrammetry, which is exciting on its own in how it deals with such unusual datasets) and plotting a new, stabilised camera path through the old one. Instagram's approach, on the other hand, uses the phone's internal gyroscopic sensors to stabilise the footage via its 'Cinema' tech, developed by lead project engineer Alex Karpenko. What stands out here is that Microsoft's technique appears to work with a range of footage (think GoPro, smartphone, DSLR, etc.) regardless of additional sensor information, whereas Instagram's Hyperlapse appears to rely on smartphone sensors to stabilise the footage (it is on iOS now and will come to Android when API support becomes available).
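To make the contrast concrete, here is a toy sketch of the path-smoothing intuition behind the Microsoft approach. The camera positions are faked with synthetic jitter; the real method optimises a full 6-DoF camera path against the SfM reconstruction, so a moving average over positions only conveys the core idea:

```python
# Smooth a jittery camera path and measure the drop in frame-to-frame motion.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
# Hypothetical shaky path: a straight walk plus hand-held jitter.
shaky = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
shaky += rng.normal(scale=0.05, size=shaky.shape)

def smooth_path(path, window=15):
    """Moving-average smoothing of an (N, 3) array of camera positions."""
    kernel = np.ones(window) / window
    return np.stack(
        [np.convolve(path[:, k], kernel, mode="same") for k in range(3)],
        axis=1,
    )

smoothed = smooth_path(shaky)
print("jitter before:", np.std(np.diff(shaky, axis=0)))
print("jitter after: ", np.std(np.diff(smoothed, axis=0)))
```

Instagram's gyroscope-driven approach skips the 3D reconstruction entirely, which is why it can run on a phone but also why it depends on having that sensor data in the first place.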

On game engines (look at the pretty)

Game engines have long been used to power reconstructions and visualisations within the realm of built heritage and archaeology. Without reference to the volumes of examples that have in many ways laid the foundations, in the past few months (and days) a couple of relatively recent developments stood out to me:

1) March 2014 – CryEngine – Digital Digging project (recreating Uppsala) http://digitaldigging.net/digital-iron-age-environment-recreating-uppsala-bit-bit/ (see http://www.westergrenart.com/ for more)

2) August 2014 – Unreal Engine 4 – Architectural visualisation example
http://www.ronenbekerman.com/unreal-engine-4-and-archviz-by-koola/

These projects and examples of currently available technologies show the real and well-adapted application of these tools to construct, and reconstruct (if you want to make the distinction), rich environments with a wealth of different tools, shaders, lighting options and more, and they have moved this world (visually at least) light-years ahead of where it was even half a decade ago. As with the Unreal Engine example, people are excited because the animations and stills produced in real time by the game engine broadly match the quality of those produced through long render times by offline renderers. That isn't to say there hasn't been an investment of time elsewhere, such as in baked lighting as used in games, which needs to be taken into account when comparing 'offline vs real-time' time savings.

Perhaps one of the most enabling and exciting reasons to be inspired by this is that developers such as Crytek and Epic Games have made their high-grade proprietary engines available free for non-commercial use. This is great for those wishing to experiment, and it opens doors for many to start building their own amazing environments featuring modelled and reality-captured 3D versions of heritage, artefacts and sites and, as with Daniel Westergren's Iron Age Uppsala, evidence-based reconstructions of historic or prehistoric sites.