Data Management on the Scottish Ten Project

To echo a couple of the previous blogs, this is not my first Scottish Ten digital documentation project. I was part of the survey undertaken in November 2012 at the Eastern Qing Tombs in China, but there my focus was solely on-site data capture.

However, Nagasaki is my first Scottish Ten project where my role has been primarily data management and processing: assembling the data captured by the team, registering it, and maintaining up-to-date quadruplicate backups of over a terabyte of crucial survey information. I arrived with a pre-prepared spreadsheet to help manage the multiple scanning and photography inputs and track backup completion, though this quickly saw some amendments to fit the job and equipment at hand.
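The tracking itself was done in that spreadsheet, but a check like this can also be sketched in a few lines of code. The snippet below is only an illustration, assuming four mounted backup drives and an invented file name, of how one might confirm that all four copies of a scan file are byte-identical.

```python
import hashlib
from pathlib import Path

# Hypothetical backup locations -- the real drives and folder layout on the
# project were tracked in a spreadsheet, so these paths are illustrative only.
BACKUP_ROOTS = [Path("/media/backup_a"), Path("/media/backup_b"),
                Path("/media/backup_c"), Path("/media/backup_d")]

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so multi-gigabyte scan files don't fill memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_quadruplicate(relative_file: str) -> bool:
    """Return True only if the file exists and hashes identically on all four backups."""
    hashes = set()
    for root in BACKUP_ROOTS:
        candidate = root / relative_file
        if not candidate.exists():
            print(f"MISSING on {root}: {relative_file}")
            return False
        hashes.add(sha256_of(candidate))
    return len(hashes) == 1  # all four copies are byte-identical

if __name__ == "__main__":
    # "crane/scan_001.ptx" is an invented example file name.
    print("OK" if verify_quadruplicate("crane/scan_001.ptx") else "Backups disagree")
```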

Some key equipment every office should have (Taken at Kosuge)

I have been well accommodated at the different sites so far with temporary office space: at the Kosuge slip-dock I was located in a nearby community hall, furnished with traditional Japanese tatami mats. At the working Mitsubishi dock my office was located within the footprint of the Cantilever crane itself, giving the team easy access to offload data during the day and preview areas that have been captured.

Reviewing data with Mr Hachiya of Mitsubishi Heavy Industries at the Giant Cantilever Crane office.

The crane in particular presented a real challenge for capturing data. Coverage of its steel beams, with their flanges and self-occluding structural elements, could only be checked by comparing scans taken from different positions against other registered data. This meant tying together and examining the data sets from the two jib teams and the ground team's laser scanners to ensure there were no shadows or voids in the overall 3D point cloud.
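One simple way to picture this kind of coverage check (a sketch, not necessarily how it was done on site) is to voxelise the merged, registered clouds and look for cells that remain empty. The voxel size and the idea of a coarse 'reference' scan below are assumptions made purely for the example.

```python
import numpy as np

def occupancy_grid(points: np.ndarray, voxel_size: float = 0.05) -> set:
    """Map each point of an N x 3 array (metres) to the voxel it falls in."""
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

def coverage_gaps(reference: np.ndarray, combined: np.ndarray,
                  voxel_size: float = 0.05) -> set:
    """Voxels present in a coarse reference scan but absent from the merged data.

    `reference` stands in for a quick, low-resolution overview scan of the
    structure; `combined` is every registered cloud from the jib and ground
    teams stacked into one array. The returned voxels are the shadowed areas
    that would need a rescan.
    """
    expected = occupancy_grid(reference, voxel_size)
    observed = occupancy_grid(combined, voxel_size)
    return expected - observed
```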

To add to the difficulty, because the majority of the crane was scanned in a single static position, capturing all sides of the movable jib (which overhung the water and was later rotated 180 degrees) meant delving into the point clouds, cutting out the desired components and then registering them separately. With five scanners on the go from a variety of manufacturers (four laser scanners plus a scanning-capable 'multi-station', the Leica MS50), it was easy to see the number of points captured for the crane registration alone soar past a billion in a couple of days.
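For readers curious what that cut-and-register step can look like, here is a rough sketch using the open-source Open3D library (not the software used on the project); the file names and crop-box coordinates are invented for the example.

```python
import numpy as np
import open3d as o3d

# Load two already coarsely registered scans (file names are placeholders).
jib_scan = o3d.io.read_point_cloud("jib_position_a.pcd")
ground_scan = o3d.io.read_point_cloud("ground_overview.pcd")

# Cut the movable jib out of the first scan with an axis-aligned box
# (the min/max bounds here are invented for the example).
crop_box = o3d.geometry.AxisAlignedBoundingBox(min_bound=(-5, -5, 20),
                                               max_bound=(25, 15, 60))
jib_only = jib_scan.crop(crop_box)

# Refine the alignment of the cropped jib against the ground team's data
# with point-to-point ICP, starting from the existing rough alignment.
result = o3d.pipelines.registration.registration_icp(
    jib_only, ground_scan,
    max_correspondence_distance=0.05,  # 5 cm search radius
    init=np.identity(4),               # clouds assumed roughly aligned already
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

jib_only.transform(result.transformation)  # apply the refined transform
print(result.fitness, result.inlier_rmse)  # quick quality check of the fit
```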

A perspective view of the laser scan data up the Kosuge slipway.

The same data management challenges apply to the Kosuge slip-dock and the No. 3 dry dock, while on Hashima Island (aka 'Gunkanjima') the primary focus has been the vast amount of data from the various types of photography (including time-lapse, panoramic, HDR, and 2D and 3D video). This will serve as a rich, high-resolution record of areas of the island in their current state, areas which are usually inaccessible to the public.

-Adam Frost

Real-time depth imagery on a Raspberry Pi

Test image of an extracted depth-map from a stereo pair. URL: http://www.raspberrypi.org/real-time-depth-perception-with-the-compute-module/

This is a cool example of the low-cost platform offered by the Raspberry Pi being used for inventive projects. The Raspberry Pi is a capable, low-cost (the Model B+ is £25/$40) credit-card-sized computer that was first released in February 2012. It was introduced to encourage computer science skills, particularly among students. This project sought to capture 3D depth maps using a stereo pair of cameras linked to a Raspberry Pi Compute Module. Argon Design intern David Barker concocted a rig that processes stereoscopic depth maps from the two cameras at a rate of 12 frames per second. The core functionality is provided by an algorithm well known from video compression: by splitting frames into blocks and then comparing them against blocks from other frames, you can detect motion and measure parallax, whilst taking advantage of the hardware's real-time video processing capabilities. The story posted on the Raspberry Pi website has further details on how this was achieved, and the steps through which the algorithm was optimised to run in real time.
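The real implementation is heavily optimised for the Pi's hardware; purely to illustrate the block-matching idea, here is a deliberately naive Python/NumPy sketch that compares each block of the left image against horizontally shifted blocks of the right image and records the shift (disparity) with the lowest matching cost. The block size and search range are arbitrary example values.

```python
import numpy as np

def block_match_disparity(left: np.ndarray, right: np.ndarray,
                          block: int = 8, max_disp: int = 64) -> np.ndarray:
    """Naive block-matching stereo: for each block in the left image, find the
    horizontal shift into the right image with the smallest sum of absolute
    differences. Inputs are rectified greyscale images as 2-D uint8 arrays."""
    h, w = left.shape
    disparity = np.zeros((h // block, w // block), dtype=np.int32)
    left = left.astype(np.int32)
    right = right.astype(np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = left[y:y + block, x:x + block]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x) + 1):  # stay inside the image
                candidate = right[y:y + block, x - d:x - d + block]
                cost = np.abs(patch - candidate).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[by, bx] = best_d  # larger shift means a closer object
    return disparity
```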

The Raspberry Pi has been used in a separate instance to achieve 3D data capture through the construction of a spherical rig (using 40 Raspberry Pi + camera units) to capture entire body scans simultaneously. The two setups achieve different results for different purposes: real-time applications have immediate uses in robot vision, whilst applications that can be processed 'offline' are central to a host of data capture techniques that create accurate digital records of environments and objects, such as SfM photogrammetry and terrestrial laser scanning.