Medical and biological imaging are important and valuable areas of research. Medical images often differ markedly from everyday photographs: X-ray, MRI, and CT images each pose unique challenges.
X-ray images of the spine are an important diagnostic tool for a range of disorders. An X-ray image provides a 2D representation of a 3D structure. Unlike a normal image, however, the X-ray provides information about the internal structure of a person, rather than just a surface view. We are interested in using multiple X-ray images to recover information about the 3D structure of the spine, similar to the way that image-based reconstruction recovers 3D models of a scene from multiple colour images.
For more information see the ClaritySMART website.
Human-Computer Interaction (HCI) is often conducted through a graphical user interface. These interfaces have remained more or less constant for nearly twenty years. We are so used to the screen, keyboard, and mouse that we forget that this is merely a fashion and an accident of history. We are interested in new and intuitive ways of interacting with devices.
The Octagon is both an exhibit and an experiment in cooperation and user interface design. Eight computers provide eight 'artists' with a view into a shared virtual room. In the center of the room is a platform on which sculpture may be built. Each participant can add to the sculpture using a very intuitive interface. It 'feels' as though you just draw what you want and it appears in 3D. Most users need no instruction. Each user's 2D view is inherently ambiguous, yet artists do not seem to ask how their 2D gestures are translated. They discover the rules by experimentation and usually don't even realize that there are rules. The Octagon can be presented in various forms. The simplest is to set up eight computers in a ring so they surround a real space. But it is possible to put the computers anywhere. Activity can be shared across the room or across the world.
The long-term goal of this project is to better understand how interaction can support cooperative work. We are also interested in the structure of the interactive process. Much research deals with the use of novel devices. Our concern is more with how the gestures of interaction should be organized and utilized. The major challenges in this work are to make the networking seamless, so that each user always has a consistent view of the shared world, and to ensure that the response to gestures feels as natural as possible.
The Watching Window is an experiment in this idea of natural communication. The user is "watched" by two tiny cameras and the computer must deduce from gestures what the user's requirements are.
In this simple demonstration, the computer is simulating a window into a 3D world. The cameras track your hands and eyes and the display is changed accordingly. If you move your head, you see the simulated world from a different angle. You can interact with the display by pointing at simulated objects. The 3D effect can be made even more convincing with the use of stereo glasses.
As you move around a real object, you see it from different sides. You can look at objects in the watching window the same way.
Lower your head and you get a view from underneath. To achieve this the computer must work out what the object would look like from your point of view. This is what we display on the screen.
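The core calculation can be sketched in a few lines. In this simplified model (our assumptions: the screen lies in the plane z = 0, the eye position comes from the head tracker, and coordinates are in metres), each point of the virtual object is projected onto the screen plane along the ray joining it to the viewer's eye, so moving your head shifts the image with the correct parallax:

```python
import numpy as np

def project_to_screen(point, eye):
    """Project a 3D point onto the screen plane (z = 0) along the
    ray from the tracked eye position to the point. Returns the
    2D screen coordinates where the point should be drawn."""
    direction = point - eye
    t = -eye[2] / direction[2]          # parameter where the ray meets z = 0
    return eye[:2] + t * direction[:2]

eye = np.array([0.0, 0.0, 0.6])         # viewer 60 cm in front of the screen
obj = np.array([0.1, 0.0, -0.3])        # virtual object 30 cm "behind" the screen
print(project_to_screen(obj, eye))
```

Re-running the projection with a different eye position gives a different on-screen location for the same object, which is exactly the parallax cue that makes the window convincing.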
Tartini is a program designed as a practical music analysis tool for singers and instrumentalists. Just plug in a microphone and instantly your computer will give real-time feedback including:
- Accurate pitch contours for visualising intonation, vibrato shape, tuning or just which note is being played
- Loudness graphs, to help analyse dynamics
- Harmonic structure of a note describing timbre
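The pitch-contour idea can be illustrated with a basic autocorrelation estimator. (Tartini itself uses a more refined peak-picking algorithm; this sketch just shows the principle of finding the lag at which a frame of audio best matches a shifted copy of itself.)

```python
import numpy as np

def estimate_pitch(frame, sample_rate, fmin=50.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of a mono audio frame
    by locating the autocorrelation peak within a plausible lag range."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo = int(sample_rate / fmax)        # shortest period considered
    hi = int(sample_rate / fmin)        # longest period considered
    lag = lo + np.argmax(corr[lo:hi])
    return sample_rate / lag

sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 440.0 * t)   # synthetic A4 test tone
print(estimate_pitch(frame, sr))
```

Running this estimator on successive overlapping frames of microphone input, rather than one synthetic tone, yields the pitch contour that Tartini draws in real time.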
The Tartini project has been asleep for a couple of years, but we are very interested in waking it up. The right person will be a capable programmer with knowledge of, or an interest in, physics, and some experience playing an instrument. If you think you have what it takes to do a PhD in this area, please e-mail Geoff Wyvill directly.
You can find out more, and get the software, at the Tartini project homepage.
An individual image gives only partial information about a 3D scene or object. As well as having a limited field of view, information is lost in the projection from the 3D world to a 2D image. By combining information from multiple images, or from images with structured lighting, we can recover 3D models of the world.
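The information loss is easy to demonstrate with an idealised pinhole camera model (a standard simplification; real cameras add focal-length scaling and lens distortion). Depth is divided out during projection, so any two points on the same ray land on the same pixel:

```python
import numpy as np

def pinhole_project(point, focal=1.0):
    """Ideal pinhole camera at the origin looking down +Z:
    (X, Y, Z) -> (f*X/Z, f*Y/Z). The depth Z is divided out,
    so it cannot be recovered from a single image."""
    x, y, z = point
    return np.array([focal * x / z, focal * y / z])

near = np.array([0.5, 0.2, 2.0])
far = near * 3.0                      # same viewing ray, three times further away
print(pinhole_project(near), pinhole_project(far))   # identical image points
```

A second camera viewing the scene from a different position sees the two points at different pixels, which is why multiple images let us recover the missing depth.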
Aerial Image Processing
One of the oldest applications of image-based reconstruction is aerial photogrammetry. Advances in digital imaging and unmanned aerial vehicles (UAVs) mean that large numbers of high-quality images can be captured. These volumes challenge traditional processing methods, so greater automation is required.
Our work in this area ranges from reconstructing 3D terrain models to producing detailed 2D map images. From the images we build realistic 3D models of the world: first a dense point cloud is reconstructed, then a mesh is fitted over it. The mesh can be textured from the original images, producing a detailed map of the world. If we know the GPS locations of the cameras when the images were taken, we can align the models to the world and make accurate measurements in them.
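The GPS-alignment step amounts to finding the scale, rotation, and translation that best map the reconstructed camera positions onto their GPS coordinates. A standard way to do this (Umeyama's least-squares method, sketched here as an illustration rather than as our production pipeline) is:

```python
import numpy as np

def align_to_gps(model_pts, gps_pts):
    """Least-squares similarity transform (scale s, rotation R,
    translation t) mapping reconstructed points onto GPS positions,
    so that gps ~ s * R @ model + t. Inputs are N x 3 arrays of
    corresponding points (Umeyama's method)."""
    mu_m, mu_g = model_pts.mean(0), gps_pts.mean(0)
    A, B = model_pts - mu_m, gps_pts - mu_g        # centred point sets
    U, S, Vt = np.linalg.svd(B.T @ A / len(A))     # SVD of cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))             # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()
    t = mu_g - s * R @ mu_m
    return s, R, t
```

With three or more well-spread camera positions the transform is fully determined, and applying it to the whole model puts the mesh into world coordinates where distances are metric.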
This research is carried out in partnership with Areograph Ltd. and Hawkeye UAV Ltd.
This project, joint with colleagues at the University of Canterbury, aims to build a robot capable of pruning crops. Computer vision techniques are used to build a 3D model of the plant so that an AI system can guide a robotic arm to prune away unwanted branches. A combination of trinocular stereo and structured light is used.
Trinocular stereo uses three cameras to reconstruct 3D information, in much the same way as we do with two eyes (binocular stereo). Using three cameras provides additional constraints and reduces the number of situations where only one camera can see part of the scene. Our structured light system uses laser stripes projected into the scene. On a flat surface a stripe would appear as a straight line in the images; by analysing how the stripe's image is deformed, we can recover shape information.
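The geometry behind the stripe method is plane-ray intersection: the laser sweeps out a known plane in space, and each illuminated pixel defines a ray from the camera, so the 3D surface point is where the two meet. A minimal sketch, assuming a calibrated camera at the origin and a known laser plane (the specific numbers are illustrative, not from our rig):

```python
import numpy as np

def stripe_point(pixel_ray, plane_normal, plane_d):
    """Intersect a camera ray (through the origin, along pixel_ray)
    with the laser plane n . X = d to recover the 3D surface point
    lit by the stripe at that pixel."""
    t = plane_d / np.dot(plane_normal, pixel_ray)
    return t * np.asarray(pixel_ray)

# Hypothetical calibration: laser plane x = 0.1, i.e. normal (1,0,0), d = 0.1
n, d = np.array([1.0, 0.0, 0.0]), 0.1
ray = np.array([0.05, 0.0, 1.0])   # back-projected ray of an imaged stripe pixel
print(stripe_point(ray, n, d))     # the stripe pixel recovers the point (0.1, 0, 2)
```

A pixel further from the stripe's undeformed straight line corresponds to a different ray and hence a different depth, which is exactly the deformation-to-shape relationship described above.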
This project is supported through partnership with industry and government funding from MSI.