Owheo Building, Room 249
Phone: +64 3 479 5691
Email: lechszym@cs.otago.ac.nz
My research interests are machine learning, deep representation learning, and connectionist models. State-of-the-art computational models can learn from examples, but not without the heavy involvement of an expert user, who must make critical decisions about system architecture, parameters, and appropriate representation. My ultimate objective is to turn machine learning into a tool that an average user can apply with ease. For that to happen, we need truly autonomous machine learning algorithms, capable of forming an appropriate model for the task at hand entirely on their own.
Before my PhD, which I completed in 2012, I worked as a software engineer for a wireless telecommunications company in Ottawa, Canada. My background is in computer and electrical engineering, with a focus on embedded programming and digital signal processing. My interest in artificial neural network models dates back to summer employment as an undergraduate student, developing data-analysis programs in a neurobiology laboratory at the National Research Council Canada. Since then I have worked on several aspects of modelling and machine learning, including speech recognition, classification, learning theory, and object recognition from images.
Publications
Szymanski, L., & McCane, B. (2014). Deep networks are effective encoders of periodicity. IEEE Transactions on Neural Networks and Learning Systems, 25(10), 1816-1827. doi: 10.1109/TNNLS.2013.2296046
Johnson, R., Szymanski, L., & Mills, S. (2015). Hierarchical structure from motion optical flow algorithms to harvest three-dimensional features from two-dimensional neuro-endoscopic images. Journal of Clinical Neuroscience, 22(2), 378-382. doi: 10.1016/j.jocn.2014.08.004
Szymanski, L., & McCane, B. (2013). Learning in deep architectures with folding transformations. Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE. doi: 10.1109/IJCNN.2013.6706945
Szymanski, L., & McCane, B. (2012). Deep, super-narrow neural network is a universal classifier. Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE. doi: 10.1109/IJCNN.2012.6252513
Szymanski, L., & McCane, B. (2012). Push-pull separability objective for supervised layer-wise training of neural networks. Proceedings of the International Joint Conference on Neural Networks (IJCNN). IEEE. doi: 10.1109/IJCNN.2012.6252366