
Graphics and Vision Research Group

People - Students


PhD Students

Lewis Baker

Augmented Reality for Visualisation of Sports Data
Lewis joined the lab in 2015 and worked on projects such as vision-based power line detection and an extension of the ARSandBox. In 2017, he started as a PhD student working on creating an Augmented Reality sport spectator system. Spectators may use mobile devices or wear head-mounted displays to see live visualisations, statistics, and commentaries. This project has many challenges, such as localisation, effective visualisations, and registration (aligning virtual objects with the real world), which become much more difficult in large, uncontrolled environments.

Umair Mateen Khan

Unsupervised Detection of Emergent Patterns in Large Image Collections
At present there is no effective system that can automatically identify arbitrary objects in an image; this is known as the semantic gap problem. I aim to address this problem by scaling up the size of the image collection and then finding the semantic information present in it. Images containing similar objects produce similar features, which can be used to retrieve images from a collection; this is the basis of the Bag-of-Words method. Just as a child learns an object through repeated exposure to it, so can a computer, so for this technique to be effective each object should appear many times in the collection.
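The retrieval idea above can be sketched in a few lines. This is a toy illustration only, not Umair's system: the visual vocabulary is hand-picked here, whereas in practice it would be learned (typically by k-means clustering over many local descriptors), and real descriptors are high-dimensional (e.g. SIFT) rather than 2D points.

```python
import math

# Toy Bag-of-Words sketch: each image's local descriptors are quantised
# against a visual vocabulary, turning the image into a histogram of
# "visual word" counts. Images with similar content give similar
# histograms, which supports retrieval by histogram comparison.
VOCABULARY = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # hand-picked for illustration

def nearest_word(descriptor):
    """Index of the closest vocabulary entry (Euclidean distance)."""
    return min(range(len(VOCABULARY)),
               key=lambda i: math.dist(descriptor, VOCABULARY[i]))

def bow_histogram(descriptors):
    """Count how often each visual word occurs in one image."""
    hist = [0] * len(VOCABULARY)
    for d in descriptors:
        hist[nearest_word(d)] += 1
    return hist

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Two images containing similar objects produce similar histograms.
img_a = [(0.1, 0.1), (0.9, 0.1), (0.9, 0.2)]
img_b = [(0.0, 0.2), (1.1, 0.0), (0.8, -0.1)]
print(cosine_similarity(bow_histogram(img_a), bow_histogram(img_b)))  # close to 1.0
```

Retrieval then amounts to ranking the collection by histogram similarity to the query image.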

Hamza Bennani

Surface Matching Applied to Medical Imaging
Hamza Bennani, from Morocco, joined the graphics lab at the University of Otago, Dunedin, in June 2011 as a PhD student working on surface matching applied to medical imaging. He is a computer science and applied mathematics engineer, having graduated from the Applied Mathematics and Computer Science department at ENSEEIHT Toulouse, France. In 2009/2010, Hamza was an ERASMUS student at TU Darmstadt, Germany. During the same period he worked in the Innovation and Integration Systems department at T-Systems, Darmstadt, on machine learning and computer vision techniques. This work formed the basis of his master's thesis: an application enabling mobile phones to recognise 2D barcodes in real time from a video stream. He finds his current work, at the interface between human health and computer science, really exciting. He enjoys creating transdisciplinary links and building tools that are useful for everyone.

Maria Mikhisor

Real Time Robust Eye Tracking in the Watching Window
Maria is working on eye tracking for the 'Watching Window' project, which presents a customised 3D view of an object to a user. This view changes as the user moves their head, giving an illusion of depth via motion parallax.
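The parallax effect behind the Watching Window can be illustrated with a simple pinhole projection (a sketch only, not the project's implementation): as the tracked head moves sideways, points at different depths shift on screen by different amounts. Points on the screen plane stay fixed while points behind it shift with the head, and that depth-dependent shift is the motion parallax cue.

```python
# Minimal head-coupled projection sketch: a 2D world where a point at
# (point_x, point_z) is projected onto a screen plane at depth screen_z,
# as seen from a head at lateral position head_x.
def project_x(point_x, point_z, head_x, screen_z=1.0):
    """Screen x-coordinate of the point for the given head position."""
    return head_x + (point_x - head_x) * screen_z / point_z

near = (0.0, 2.0)    # point just behind the screen
far = (0.0, 10.0)    # point far behind the screen
for head_x in (0.0, 0.1):  # head moves 0.1 units to the right
    print(project_x(*near, head_x), project_x(*far, head_x))
# The two points shift by different amounts, so the rendered view
# changes with head motion, producing the illusion of depth.
```

Eye tracking supplies the `head_x` (and vertical) input; the renderer updates the projection for every frame of head motion.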

Xiping Fu

Manifold and Non-Manifold Learning for Computer Vision
Xiping is investigating dimension estimation and manifold learning techniques to represent low-dimensional sub-spaces in high-dimensional data.

Jordan Campbell

Tracking Articulated Motion
Jordan is interested in tracking articulated objects using prior models of their structure. A common application of this is tracking people or hands using skeletal models, but Jordan is also investigating applications to animal (spider) tracking with Mike Paulin in Zoology.

Russel Mesbah

Convolutional Neural Networks for Medical Image Segmentation
Convolutional neural networks have recently had great success in object recognition tasks. Russel is investigating their application to image segmentation, and medical image segmentation in particular.

Sajida Kalsoom

Distributed Tracking over Multiple Views
Tracking an object in a single camera's view is a well-studied problem, and when you have multiple views of the same object, stereo and structure-from-motion methods can be applied. Sajida is interested in the case where you see the same target at different times in non-overlapping views of the world. How do you recognise that two observations are related? What information needs to be shared between the two views?
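One common ingredient in relating observations across non-overlapping cameras is a compact appearance descriptor. The sketch below (an illustration, not Sajida's method) summarises each observation as a normalised intensity histogram and compares histograms with the Bhattacharyya coefficient; a descriptor like this is the kind of small payload two views could share instead of whole images.

```python
import math

def normalised_histogram(pixels, bins=4, max_value=256):
    """Normalised histogram of pixel intensities in [0, max_value)."""
    hist = [0.0] * bins
    for p in pixels:
        hist[p * bins // max_value] += 1
    total = sum(hist)
    return [h / total for h in hist]

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient: 1.0 for identical histograms, 0.0 for disjoint."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

view_a = [10, 20, 200, 210, 220, 15]     # target seen in camera A (dark + bright)
view_b = [12, 25, 195, 205, 230, 18]     # same target, later, in camera B
view_c = [100, 110, 120, 130, 125, 115]  # a different target (mid-tones)

h_a, h_b, h_c = (normalised_histogram(v) for v in (view_a, view_b, view_c))
print(bhattacharyya(h_a, h_b))  # high: observations likely related
print(bhattacharyya(h_a, h_c))  # low: likely a different target
```

Real systems would use colour histograms or learned features plus spatio-temporal constraints (where and when a target could plausibly reappear), but the matching question is the same.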

Tapabrata Chakraborti

Fine-Grained Species Recognition
Tapabrata is interested in fine-grained recognition tasks. While significant progress has been made in object recognition in recent years, this tends to be focussed on broad categories of objects. Tapabrata is investigating techniques for determining specific subcategories, such as determining the species of a bird from an image.

MSc Students

Joshua La Pine

Musical Instrument Identification
Joshua's research is in the identification of musical instruments from the sound they make. This is related to the concept of timbre: the qualities of a sound, apart from loudness and pitch, that make it distinctive.
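One widely used timbre feature (a hedged example, not necessarily Joshua's approach) is the spectral centroid: the magnitude-weighted mean frequency of a sound's spectrum. Two tones can share pitch and loudness yet differ in harmonic content, and the centroid captures part of that difference.

```python
import math

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum (first half of the bins)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags

def spectral_centroid(signal, sample_rate):
    """Magnitude-weighted mean frequency of the signal's spectrum, in Hz."""
    mags = dft_magnitudes(signal)
    freqs = [k * sample_rate / len(signal) for k in range(len(mags))]
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

RATE, N, F0 = 8000, 512, 250.0  # 250 Hz fundamental, aligned to a DFT bin
pure = [math.sin(2 * math.pi * F0 * t / RATE) for t in range(N)]
bright = [math.sin(2 * math.pi * F0 * t / RATE)
          + 0.8 * math.sin(2 * math.pi * 3 * F0 * t / RATE)
          for t in range(N)]  # same pitch, strong third harmonic

print(spectral_centroid(pure, RATE))    # near 250 Hz
print(spectral_centroid(bright, RATE))  # higher: a "brighter" timbre
```

An instrument classifier would combine many such features (spectral shape, attack and decay envelopes, harmonic structure) rather than rely on any one of them.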

400-Level Project Students

2017
Herbert Han: Robot learning
Patrick Skinner: Indirect AR Browser
Frank Zhao: Computational Videography

2015
Amy Heinrich: Kinected Projection
Ashley Manson: Gigesaur (graphics and UI)
Joshua La Pine: Gigesaur (vision and localisation)
Lewis Baker: GPU-accelerated Ray Tracing
Reuben Crimp: Virtual Reality Mirror Therapy
Shahne Rodgers: Gigesaur (networking and communications)
Stephen Markham: Vision for Quadrotor Control

2014
Campbell Young: A Compiler for Lego Robots
Katie Whitefield: Nutrition-based Game
Stefan Orr: Learning for a Robot Arm

2013
Stuart Austin: Texturing 3D Models
Matthew Bennett: Skin and Bones
James Brown: Swarm AI for Games
Nicky Crawford: Fast Mobile Pose Estimation
Aleksis Kavalieris: Procedural Environments for Games
Nicholas Robertson: Fast Mobile Pose Estimation
Jess Todd: Vision-Based Quadrotor Control
Kevin Weatherall: Kinected Projection