The Banquet Scene – Captured with the Artec Eva at 0.6mm resolution (normal and diffuse maps transition image)
Alabaster wall panel relief fragment; garden scene; birds and a locust in the trees; the king has left his weapons on a table to the right; he reclines on a sofa beneath a vine and his queen sits opposite him; they are drinking and refreshments are on the table; maids fan the royal couple; others bring food or play music; suspended from a tree behind the queen is the head of Teumman, king of Elam; the furniture is very elaborate. © The Trustees of the British Museum
In modelling and rendering CGI environments, material shaders and their associated maps have always been the foundation of making an otherwise sterile virtual scene appear realistic. Data-capture methods that bring 3D objects into virtual environments go some way towards bringing a facsimile of the real into an entirely virtual place; a mesh generated by Structure from Motion photogrammetry or structured-light scanning, for example, will typically carry a diffuse texture map. Building on these principles, Quixel Megascans is a service and suite that lets artists and modellers alike draw on a library of scanned material maps with a range of acquired parameters. Quixel even built their own material scanner to generate these maps.
Breakdown of maps that constitute material parameters
This matters because captured material data has never before been so accessible to integrate into a 3D workflow with an everyday BRDF shader in 3ds Max. It means 3D reconstructions of cultural heritage sites, for example, no longer have to be accurate 3D data punctuated by artistic stand-ins for foliage, generic mudbrick, masonry and so on, but can be physically based representations of those materials, from colour through to translucency and specular reflection (Quixel list the captured maps as PBR-calibrated Albedo, Gloss, Normal, Translucency, Cavity, Displacement, AO, Bump and Alpha). This is exciting because, alongside the trend away from biased and towards unbiased rendering algorithms and the continual advance of computational power, these richer environments aren't just increasingly 'realistic looking': they actually become more accurate representations of natural and built environments.
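The way a few of these maps feed into a shader can be sketched in a handful of lines. Below is a minimal, illustrative Python example (not Quixel's or any particular renderer's actual code; every name here is invented for illustration) that combines a per-pixel albedo sample, a gloss value and a surface normal into a shaded colour using a simple Lambertian diffuse term plus a Blinn-Phong specular lobe. Real PBR shaders use more sophisticated BRDFs, but the principle of sampled maps driving each term is the same.

```python
import math

def normalize(v):
    """Return the unit-length version of a 3-vector (tuple)."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

def shade(albedo, gloss, normal, light_dir, view_dir):
    """Combine sampled material values into one RGB colour.

    albedo    -- RGB sampled from the Albedo map, each channel in 0..1
    gloss     -- scalar from the Gloss map (0..1); higher = tighter highlight
    normal    -- surface normal, e.g. decoded from the Normal map
    light_dir -- direction from surface towards the light
    view_dir  -- direction from surface towards the camera
    """
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    # Diffuse term: albedo scaled by the cosine of the incidence angle.
    n_dot_l = max(dot(n, l), 0.0)
    diffuse = tuple(a * n_dot_l for a in albedo)
    # Specular term: Blinn-Phong half-vector lobe; map gloss (0..1)
    # to an exponent range of roughly 1..1024.
    h = normalize(tuple(li + vi for li, vi in zip(l, v)))
    exponent = 2.0 ** (gloss * 10.0)
    specular = max(dot(n, h), 0.0) ** exponent * n_dot_l
    # Clamp the combined result to displayable range.
    return tuple(min(d + specular, 1.0) for d in diffuse)
```

With the light and camera both along the surface normal the highlight saturates the pixel; with the light at a grazing angle both terms fall to zero, which is the behaviour the captured maps modulate in a full renderer.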
The White House recently announced its most cutting-edge use of 3D data-capture technology for presidential portraiture, releasing a video that examines the project and the processes behind it. Initially proposed two years ago by the Smithsonian Institution as a way of documenting the president, the project used a 3D methodology that echoed the earlier scanning of the Lincoln life masks: presidential casts originally taken in 1860 and 1865 respectively.
Over a century and a half later, this project saw the Smithsonian Institution use the handheld structured-light scanner Artec Eva (up to 0.5mm spatial resolution), combined with data captured in a Mobile Light Stage set up by partners at the University of Southern California's Institute for Creative Technologies. Alongside the Artec data and over 80 photographs captured via the MLS, the dataset was augmented with handheld photography before quality assessment and dispatch to Autodesk for processing. Seventy-two hours later, having registered, unified and normalised the spatial and photographic data and added a modelled plinth, Autodesk produced a mesh of 15 million triangles.
The final step, turning the digital mesh into a physical bust, saw the data transferred to 3D Systems and printed using SLS (Selective Laser Sintering), producing an accurate representation of the dataset standing 19in (48cm) tall and weighing around 13lb (5.8kg). The prints will enter the Smithsonian's National Portrait Gallery collection alongside the raw data from the scanning.
For further information, see the original Smithsonian blog post from the Digitization Program Office detailing the project.