
Large-scale 3D models from images

[Image: clocktower 3D model]

There’s a story about the Normandy landings in World War II … rather than use the usual intelligence gathering to build up maps of the coastline – which might alert the German forces to their proposed landing sites – the Allies ran a photographic competition, getting people to send in holiday snaps from the French coast. From the thousands of photos they received, they were able to develop very detailed maps, and the rest is history …

This is something like what Steven Mills, Zhiyi Huang and David Eyers do with computers to build large-scale 3D models from ordinary photos … layering up overlapping images to recover a sense of depth.

It’s easy for us to overlay one image on its neighbour and get the overlap just right, but this is surprisingly difficult for a computer to do … the more photos in the set, the richer the model you end up with, but then there’s a whole lot more information to be processed.

“There are mathematical processes underlying this and we can deliver good solutions at each step, but some of the first steps in the process are still problematic. Even the best computer matches between one image and another can be 70% wrong … however, if we remove ambiguous matches, they can get it 70% right … it’s all about getting good correspondence between the images – you start with two images, find five points of correspondence, then take those two images and find the five points of correspondence between them and the next photos, and then up and up until you are looking at all the images together … it quickly gets to be a very large computation!”
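To make the correspondence step concrete, here is a minimal sketch in Python with OpenCV – an illustration of the general technique, not the group’s actual pipeline. It detects features in two overlapping photos, throws away ambiguous matches with a ratio test, and estimates the relative camera pose from the survivors (the standard pose estimator needs at least five correspondences). The filenames and the camera matrix are placeholders.

```python
import cv2
import numpy as np

# Placeholder filenames -- substitute any two overlapping photographs.
img1 = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect local features and their descriptors in each image.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# For each feature, find its two nearest neighbours in the other image.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)

# Ratio test: keep a match only if it is clearly better than the runner-up.
# This is the "remove ambiguous matches" step.
good = [p[0] for p in pairs
        if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Estimate the relative pose between the two cameras. K is a made-up
# intrinsic matrix; a real pipeline would calibrate the camera.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

print(f"{len(good)} unambiguous matches, "
      f"{int(np.count_nonzero(inliers))} geometric inliers")
```

Chaining this pairwise step across hundreds or thousands of photos, and keeping all the pairwise poses consistent with each other, is what makes the full reconstruction such a large computation.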

David Eyers is approaching the challenge from a systems perspective: How do the clusters of computers required to analyse such a large amount of information work their way through the problems?

“I’m most interested in the efficiency of computation for these large amounts of information. If we are crowd-sourcing data – the equivalent of getting people to send in their holiday snaps of the French coast – the quality will be variable. Usually we would be batch-processing with large clusters of computers. We send data into the queue and wait to see what happens. If the input data has a problem, we have to start all over again.

“Recent developments open up the possibility of stream processing – if we’re building up a 3D landscape we can use GPS co-ordinates to anchor some images as reference points, and we can run a drone with a camera to acquire data. If the images that come back give us poor coverage, we can send a message to the drone to go back over an area and try again while it’s still in flight.”
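As a rough illustration of that feedback loop – the queue, the coverage rule and the drone command link below are all hypothetical stand-ins, not a real API – a stream processor might fold each image into the model as it arrives and fire off a re-fly request the moment coverage looks thin:

```python
import queue
from dataclasses import dataclass, field

@dataclass
class AerialImage:
    # One incoming frame: pixel data plus the GPS fix used to anchor it.
    pixels: bytes
    lat: float
    lon: float

@dataclass
class Model3D:
    # Stand-in for the growing reconstruction; here it just counts how many
    # images cover each coarse GPS cell.
    counts: dict = field(default_factory=dict)

    def add_image(self, img: AerialImage) -> None:
        cell = (round(img.lat, 3), round(img.lon, 3))
        self.counts[cell] = self.counts.get(cell, 0) + 1

    def coverage_ok(self, img: AerialImage) -> bool:
        # Hypothetical rule: an area counts as covered once a few
        # overlapping images of it have arrived.
        cell = (round(img.lat, 3), round(img.lon, 3))
        return self.counts.get(cell, 0) >= 3

def request_refly(lat: float, lon: float) -> None:
    # Placeholder for the message back to the drone; a real system would
    # use whatever command link the aircraft exposes.
    print(f"re-fly requested near ({lat:.3f}, {lon:.3f})")

def stream_reconstruct(incoming: "queue.Queue[AerialImage | None]") -> None:
    # Fold images into the model as they arrive instead of waiting for a
    # complete batch; poor coverage triggers a re-fly request immediately.
    model = Model3D()
    while True:
        img = incoming.get()        # blocks until the next frame arrives
        if img is None:             # sentinel: the flight is over
            break
        model.add_image(img)        # incremental update, not a full re-run
        if not model.coverage_ok(img):
            request_refly(img.lat, img.lon)
```

The contrast with batch processing is that nothing waits for the whole flight to finish: a thin patch of data is noticed and corrected while the drone is still in the air.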

Computer processing of aerial images runs into some interesting problems when the computer can’t distinguish between one image and the next – consider flying over a pine forest: to the computer it looks like you are hovering over the same tree all the time, because the angles and perspectives look pretty much identical as you move from one tree to the next …
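The pine-forest problem shows up directly in the ambiguity test mentioned earlier: when a feature’s best match in the other image is barely better than its second-best, there is no way to tell which tree is which, and the match has to be discarded. A tiny illustration with made-up descriptor distances, assuming a Lowe-style ratio test as the ambiguity check:

```python
def keep_match(best: float, second_best: float, ratio: float = 0.75) -> bool:
    # Ratio test: keep the match only if the best candidate is clearly
    # better (smaller distance) than the runner-up.
    return best < ratio * second_best

# Distinctive landmark (say, a clocktower corner): one candidate is far
# better than the rest, so the match survives.
print(keep_match(best=12.0, second_best=40.0))   # True

# Repetitive texture (one pine tree vs. the next): the two best candidates
# are nearly identical, so the match is discarded as ambiguous.
print(keep_match(best=18.0, second_best=19.0))   # False
```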

Steve Mills says the next step is to add in time, making the images 4D – “From aerial photographs we can build up a picture of how land use has changed over time. We can develop models that include those changes – which is very handy for land management and agriculture. We like hard problems in computing! Not embarrassingly parallel problems, not impossible problems, just interestingly hard ones …”