Recently we were approached by the Australian Institute of Marine Science (AIMS) for some advice on processing hardware. They were struggling to process the underwater datasets used to record the Great Barrier Reef.
Creating a baseline record of the reef is important. Tracking changes over time helps scientists understand the health of an incredibly important ecosystem that is under pressure from climate change. AIMS needed to create and analyse the transects over the reef before the next set of images was gathered.
The sheer volume of images in their datasets was beyond reasonable processing time, and their current hardware and infrastructure could not cope. Could we review and recommend kit – RAM, GPUs, network etc. – that could process and deliver orthophotos faster?
The first step was to review the process used to gather the images. AIMS were using a towed array comprising six GoPro cameras, with some post-processing to merge the images with GPS data. The cameras were set to shoot on an interval timer as the rig was drawn over the reef, capturing thousands of images in a very short time.
From our own experience we know that sinking feeling when images refuse to align thanks to “the one” missing overlap. But the technique immediately rang a few alarm bells. Calculating the overlap suggested there was a significant amount of redundant coverage.
From our work with the US National Parks Service we know the value of multiple cameras, but as the GoPros were not synchronised we could not use the scaling advantages stereoscopic cameras offer. There are some marginal processing gains when working with multiple cameras, but they would not help here.
A New Approach
Could we reduce the volume of images while preserving output quality and alignment?
The first sample dataset arrived and comprised 3570 images. After processing, the mesh covered 114 m² of reef and the orthophoto was achieving 1.7 mm² per pixel…but the processing time for a reasonably small area was horrendous and took far more than the standard one cup of coffee.
It was time to take a step back and look at what Metashape really needs, and how we could cut the processing time by culling redundant images. Might the proposal end up recommending no spend on hardware at all?
The step killing efficiency was alignment. How could we speed up this step while maintaining the quality of the outputs? Metashape has a Reduce Overlap tool that can set the number of cameras used and disable unwanted ones, but that needs a mesh…and that needs alignment.
Jose took a step back and, after a little thinking, came up with a new workflow to produce a low-quality mesh upon which Reduce Overlap could work its magic. During testing we found the 3570 images in the dataset could be culled to 270 (Yes…really…) and deliver comparable results. Mr Photogrammetry (as we call Jose) had delivered the goods again.
A second look at the issue saw our Python team disappear for a few hours and produce a script that diced the pre-aligned images into cells based on their GPS values. The camera nearest the centre of each cell would be selected and the rest disabled.
This method culled the initial alignment down to around 500 images. Running it before Jose’s new workflow improved speed further and refined the set down to the 270 images really needed.
After a few enhancements, such as user-defined cell size, number of images per cell, error checking and warnings for missing GPS values, the script was delivered along with the new workflow.
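We have not published the delivered script, but the core idea can be sketched in a few lines of plain Python. The function and parameter names below are our illustration, not the real code: each camera position is bucketed into a grid cell of a user-chosen size, and only the camera nearest each cell’s centre is kept.

```python
import math

def cull_by_grid(cameras, cell_size):
    """Keep only the camera nearest the centre of each grid cell.

    cameras   -- list of (name, x, y) tuples, where x/y are projected GPS
                 coordinates in metres (illustrative, not the real script)
    cell_size -- grid cell edge length in metres
    Returns the set of camera names to keep enabled.
    """
    best = {}  # (cell_ix, cell_iy) -> (distance_to_centre, camera_name)
    for name, x, y in cameras:
        if x is None or y is None:
            # mirrors the delivered script's warning for missing GPS values
            print(f"Warning: {name} has no GPS value, skipping")
            continue
        ix, iy = int(x // cell_size), int(y // cell_size)
        cx, cy = (ix + 0.5) * cell_size, (iy + 0.5) * cell_size
        d = math.hypot(x - cx, y - cy)
        if (ix, iy) not in best or d < best[(ix, iy)][0]:
            best[(ix, iy)] = (d, name)
    return {name for _, name in best.values()}
```

In Metashape the returned keep-set would then drive disabling every other camera before alignment is run, which is where the real time saving comes from.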
We will leave the summary to the customer:
The optimisation script alone was worth the investment so very happy with the outcome and the service…
This is the kind of work we relish, and a challenge is exactly what Jose and Simon need to keep their curious minds busy.
We have not asked how much budget was being set aside for new hardware, but remain confident the consulting services cost significantly less.
Plus…there is an added benefit: all that hardware running for no purpose will never be powered up, the electricity never consumed, and the resulting carbon impact avoided.
It’s a tiny contribution, but every little helps, and the reef may just benefit in the long term if this approach were used more often.
This workflow is a game-changer for anyone gathering massive datasets where overlap cannot be judged by eye in the field.
No one wants to leave the scene with a missing image or two…but processing everything can then become a waste of time and resources.
We are so pleased with this new way of working that it is going to become a module in our Professional training course; we have a sneaking suspicion all former and current students will benefit from it, and we will add the script as a download in the training module.
We will email students when the new module is ready.
If you are interested in deeper learning and knowing more about photogrammetry then do please consider our Agisoft endorsed Metashape training courses.
If you have a photogrammetry issue that would benefit from some consulting services then please use the contact us page to get in touch – we would love to hear from you.