3D scanning the US President: The new portrait

Link to video

3D SLS print of President Obama’s captured portrait. Image source: Smithsonian Institution DPO

The White House recently announced its most cutting-edge use of 3D data-capture technology for presidential portraiture, releasing a video that examines the project and the processes behind it. Initially proposed two years ago by the Smithsonian Institution as a way of documenting the president, the project used a 3D methodology that evoked the earlier scanning of the Lincoln life masks: presidential casts originally taken in 1860 and 1865 respectively.

Over a century and a half later, this project saw the Smithsonian Institution use the handheld structured-light scanner Artec EVA (up to 0.5 mm spatial resolution), combined with data captured in a Mobile Light Stage (MLS) set up by partners at the University of Southern California’s Institute for Creative Technologies. The Artec data and the more than 80 photographs captured via the MLS were then augmented with handheld photography, quality-assessed, and dispatched to Autodesk for processing. Seventy-two hours later, having registered, unified and normalised the spatial and photographic data and added a modelled plinth, Autodesk delivered a mesh of 15 million triangles.
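The “unify and normalise” step can be illustrated with a minimal sketch: merging point sets from different capture devices into one cloud, then rescaling to a common unit and centring on the origin. The function names, the millimetre-to-metre conversion and the tiny example “scans” are all illustrative assumptions, not the actual Autodesk pipeline.

```python
# Hypothetical sketch of a "unify and normalise" step: merge point sets
# from different devices, convert units and centre on the centroid.
# Illustrative only -- not the actual Autodesk processing workflow.

def unify(*point_sets):
    """Concatenate point sets (each a list of (x, y, z) tuples)."""
    merged = []
    for points in point_sets:
        merged.extend(points)
    return merged

def normalise(points, scale=0.001):
    """Rescale coordinates (e.g. mm -> m with scale=0.001) and centre them."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [((x - cx) * scale, (y - cy) * scale, (z - cz) * scale)
            for x, y, z in points]

# Example: two tiny invented "scans" with coordinates in millimetres
artec = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
mls = [(0.0, 10.0, 0.0), (10.0, 10.0, 0.0)]
cloud = normalise(unify(artec, mls))
print(len(cloud))  # one merged, metre-scaled, centred cloud of 4 points
```

Real pipelines of course do far more, notably registering the scans into a shared coordinate frame before merging; this sketch only shows the bookkeeping around units and origin.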

The final step, transitioning the digital mesh to a physical bust, saw the data transferred to 3D Systems, where it was 3D printed using SLS (Selective Laser Sintering) to create an accurate physical representation of the dataset, standing 19 in (48 cm) tall and weighing around 13 lb (5.8 kg). The prints will enter the Smithsonian’s National Portrait Gallery collection alongside the raw scan data.
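As a quick sanity check of the quoted dimensions, and a sketch of how a scaling factor for printing might be derived, the snippet below computes the factor needed to bring a mesh down to the 19 in print height. The helper function and the assumed life-size bust height are hypothetical, not 3D Systems’ workflow.

```python
# Hypothetical helper: scale factor so a mesh's bounding-box height
# matches a target print height. Names and the 70 cm source height
# are illustrative assumptions only.
INCH_CM = 2.54

def print_scale(mesh_height_cm, target_height_in):
    """Return the uniform scale factor to reach the target print height."""
    return (target_height_in * INCH_CM) / mesh_height_cm

# 19 in is 48.26 cm, matching the article's "19in (48cm)" figure;
# an assumed ~70 cm life-size bust would be scaled by roughly 0.69.
factor = print_scale(70.0, 19.0)
print(round(factor, 3))
```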

For further information, see the original Smithsonian blog post from the Digitization Program Office detailing the project.

PBR (no, not that one)

One of the key challenges in visualising 3D reality-capture data has long been how it is presented to those viewing and using it. Within heritage and archaeological study, the traditional focus is on capturing accurate geometry and diffuse colour texture, with every effort taken to ensure the finished dataset resembles the subject as closely as possible. That takes care of the input. The output, however, is a separate challenge: rendering the data accurately depends on the subject, its materials and the level of detail you need to convey, and this can completely shape how we see and interpret the artefact or site. In most circumstances we accept viewing these models under completely synthetic lighting, with little regard to how closely the output matches reality, satisfied simply that it shows us the 3D model or point cloud.

This is where physically based rendering (PBR) comes in. In short, it simulates the lighting in a scene according to a model of the laws of physics. While ‘offline’ 3D renderers have long supported sophisticated lighting and material properties, including global illumination, HDRI lighting, and texture maps for reflectance, refraction and sub-surface scattering (as in marble), basic and web-based viewers have struggled to keep up. This announcement is exciting not only because it highlights the current state of the art in WebGL-based viewing of more sophisticated 3D datasets, but also because the technology is being developed and adopted by a major vendor. Presented at SIGGRAPH 2014, it represents a step in the right direction that will let us engage more widely with richer, more accurate visualisations of heritage sites and archaeological artefacts.
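To give a rough feel for what “physically based” means in practice, the sketch below evaluates a single shading point with a Cook-Torrance-style BRDF (Lambertian diffuse plus GGX specular with Schlick’s Fresnel approximation), the kind of model such viewers evaluate per pixel. All vectors and constants here are invented for the example; no particular engine’s shader is being reproduced.

```python
import math

# Minimal single-point PBR shade: Lambert diffuse + GGX (Trowbridge-Reitz)
# specular with Smith geometry and Schlick Fresnel terms. Input vectors
# are assumed normalised; scalar albedo stands in for an RGB texture.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ggx_specular(n, v, l, roughness, f0):
    h = [vi + li for vi, li in zip(v, l)]              # half vector
    norm = math.sqrt(dot(h, h))
    h = [x / norm for x in h]
    ndl = max(dot(n, l), 0.0)
    ndv = max(dot(n, v), 1e-4)
    ndh = max(dot(n, h), 0.0)
    a2 = roughness ** 4                                # alpha = roughness^2
    d = a2 / (math.pi * (ndh * ndh * (a2 - 1.0) + 1.0) ** 2)   # GGX NDF
    k = (roughness + 1.0) ** 2 / 8.0
    g = (ndv / (ndv * (1 - k) + k)) * (ndl / (ndl * (1 - k) + k))  # Smith G
    f = f0 + (1.0 - f0) * (1.0 - max(dot(h, v), 0.0)) ** 5         # Fresnel
    return d * g * f / (4.0 * ndv * max(ndl, 1e-4))

def shade(n, v, l, albedo, roughness, f0=0.04):
    """Outgoing radiance for one light of unit intensity."""
    ndl = max(dot(n, l), 0.0)
    return (albedo / math.pi + ggx_specular(n, v, l, roughness, f0)) * ndl

# Example: viewer and light both 45 degrees off the surface normal
s = math.sqrt(0.5)
out = shade(n=(0, 0, 1), v=(s, 0, s), l=(-s, 0, s), albedo=0.8, roughness=0.5)
print(round(out, 4))
```

Even this toy version shows why such shading was long confined to offline renderers: every parameter (albedo, roughness, f0) is typically a per-pixel texture lookup, and the whole expression runs for every light, every pixel, every frame.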