NOT OFFERED IN 2020
Paper Description
Classical and modern solution methods for inverse problems including image de-blurring and analysis of experimental data.
An introduction to inverse problems that arise in mathematical descriptions of physical systems, including image deblurring and analysis of data from experiments. Covers analytic and computational tools for solving inverse problems, based on classical regularization methods and Bayesian inference, as well as theoretical properties of inverse problems and solutions.
Prerequisites:
None
This paper consists of 14 lectures and 4 workshops. There are 3 assignments.
Assessment:
Final Exam 70%, Assignments 30%
Important information about assessment for ELEC445
Course Coordinator:
Associate Professor Colin Fox
After completing this paper students are expected to have achieved the following major learning objectives:
- Identify the essential elements of an inverse problem, and describe examples of inverse problems including image deblurring and curve-fitting.
- Know the defining properties and identify well-posed and ill-posed problems, and well-conditioned and ill-conditioned operators.
- Know the defining properties of the singular value matrix decomposition, explain the action of multiplying a matrix and vector in terms of the singular value decomposition, and explain how small singular values lead to noise blow-up of the least-squares solution to a linear inverse problem.
- Solve a linear inverse problem using Tikhonov regularization.
- Solve a linear inverse problem using truncated singular value decomposition regularization.
- Explain the effect of varying the regularization parameter and use the L-curve strategy to find a suitable regularization parameter.
- Code up a regularization method to solve a linear inverse problem in MATLAB or Python.
- Model a physical experiment in which data are measured as an inverse problem, stating a suitable prior distribution and likelihood function.
- Use Bayes' Rule to solve an inverse problem in terms of a posterior probability distribution.
- Define, and in simple cases compute, maximum likelihood and maximum a posteriori estimates for the solution of an inverse problem.
- Given independent samples from the posterior distribution, estimate the solution and uncertainty of the solution to an inverse problem.
- Compare classical regularization with the Bayesian approach for solving inverse problems.
- Code up an MCMC method that solves a linear inverse problem.
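As a taste of the objectives above, the following minimal sketch (not course material; the forward operator and all settings are illustrative) shows how small singular values cause noise blow-up in the least-squares solution, and how Tikhonov regularization damps them via the filter factors s²/(s² + α²):

```python
import numpy as np

# Hypothetical 1-D "blurring" forward operator: a Gaussian kernel matrix,
# which is severely ill-conditioned (illustrative choice, not from the paper).
rng = np.random.default_rng(0)
n = 20
A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
              for i in range(n)])
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 1e-3 * rng.standard_normal(n)  # noisy measurements

# Naive least-squares via the SVD: dividing by tiny singular values
# amplifies the noise components enormously.
U, s, Vt = np.linalg.svd(A)
x_ls = Vt.T @ ((U.T @ y) / s)

# Tikhonov regularization: replace 1/s by s/(s**2 + alpha**2),
# which filters out the contributions of small singular values.
alpha = 1e-2
x_tik = Vt.T @ ((s / (s**2 + alpha**2)) * (U.T @ y))

print("least-squares error:", np.linalg.norm(x_ls - x_true))
print("Tikhonov error:     ", np.linalg.norm(x_tik - x_true))
```

Sweeping `alpha` and plotting the residual norm against the solution norm on log axes gives the L-curve used to pick a suitable regularization parameter.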
Additional outcomes:
An overall goal is to provide each student with confidence in their ability to recognize and solve inverse problems that they are likely to meet in their future studies.
Topics:
- Examples of inverse problems, including curve fitting and image deblurring
- Ill-posedness and ill-conditioning
- Singular value decomposition
- Tikhonov regularization, truncated SVD regularization, and L-curve
- Introduction to probability, Bayes' rule, and the central limit theorem (CLT)
- Formulating and solving inverse problems using Bayesian modeling and inference
- MCMC methods, expectation, and uncertainty quantification
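To illustrate the last two topics, here is a minimal random-walk Metropolis sketch (illustrative only; the matrix, noise level, and prior are assumptions, not from the paper) that samples the posterior of a tiny linear inverse problem and reports a point estimate with uncertainty:

```python
import numpy as np

# Tiny linear inverse problem y = A x + noise (all settings illustrative).
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.5], [0.5, 1.0], [0.2, 0.8]])
x_true = np.array([1.0, -0.5])
sigma = 0.1
y = A @ x_true + sigma * rng.standard_normal(3)

def log_post(x):
    # Gaussian likelihood plus a zero-mean, unit-variance Gaussian prior.
    return -0.5 * np.sum((y - A @ x) ** 2) / sigma**2 - 0.5 * np.sum(x**2)

# Random-walk Metropolis: propose, then accept/reject by the posterior ratio.
x = np.zeros(2)
lp = log_post(x)
samples = []
for _ in range(20000):
    prop = x + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    samples.append(x)
samples = np.array(samples[5000:])  # discard burn-in

post_mean = samples.mean(axis=0)  # posterior-mean estimate of the solution
post_std = samples.std(axis=0)    # posterior spread quantifies uncertainty
print("estimate:", post_mean, "uncertainty:", post_std)
```

The posterior mean plays the role of the regularized solution, while the posterior standard deviation supplies the uncertainty quantification that classical regularization does not.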
The ELEC445 Support Home Page
Formal University Information
The following information is from the University’s corporate web site.
Details
Classical and modern solution methods for inverse problems including image deblurring and analysis of experimental data.
| Paper title | Inverse Problems and Imaging |
| --- | --- |
| Paper code | ELEC445 |
| Subject | Electronics |
| EFTS | 0.0833 |
| Points | 10 points |
| Teaching period | Not offered in 2021 (On campus) |
| Domestic Tuition Fees (NZD) | $673.90 |
| International Tuition Fees (NZD) | $2,981.97 |
- Limited to: BSc(Hons), PGDipSci, MSc, MAppSc
- Contact: colin.fox@otago.ac.nz
- Teaching staff: Associate Professor Colin Fox (Director of Electronics)
- Textbooks: Textbooks are not required for this paper.
- Graduate Attributes Emphasised: Global perspective, Interdisciplinary perspective, Lifelong learning, Scholarship, Communication, Critical thinking, Information literacy, Self-motivation, Teamwork
- Learning Outcomes
- After completing this paper students are expected to:
- Identify the essential elements of an inverse problem and describe examples of inverse problems, including image deblurring and curve-fitting
- Know the defining properties and identify well-posed and ill-posed problems and well-conditioned and ill-conditioned operators
- Know the defining properties of the singular value matrix decomposition, explain the action of multiplying a matrix and vector in terms of the singular value decomposition and explain how small singular values lead to noise blow-up of the least-squares solution to a linear inverse problem
- Solve a linear inverse problem using Tikhonov regularisation
- Solve a linear inverse problem using truncated singular value decomposition regularisation
- Explain the effect of varying the regularisation parameter and use the L-curve strategy to find a suitable regularisation parameter
- Code up a regularisation method to solve a linear inverse problem in MATLAB or Python
- Model a physical experiment in which data is measured as an inverse problem, stating a suitable prior distribution and likelihood function
- Use Bayes' rule to solve an inverse problem in terms of a posterior probability distribution
- Define, and in simple cases compute, maximum likelihood and maximum a posteriori estimates for the solution of an inverse problem
- Given independent samples from the posterior distribution, estimate the solution and uncertainty of the solution to an inverse problem
- Compare classical regularisation with the Bayesian approach for solving inverse problems
- Code up an MCMC method that solves a linear inverse problem