Syllabus: Introduction to Control with Learning - 67678
Last update 05-09-2022
HU Credits: 2

Degree/Cycle: 2nd degree (Master)

Responsible Department: Computer Sciences

Semester: 2nd Semester

Teaching Languages: Hebrew

Campus: E. Safra

Course/Module Coordinator: Oron Sabag

Coordinator Email: oron.sabag@mail.huji.ac.il

Coordinator Office Hours:

Teaching Staff:
Dr. Oron Sabag

Course/Module description:
The course introduces control theory, starting from stochastic linear problems where analytical solutions can be derived, and proceeding to general Markov decision processes, covering modelling, solution principles, and algorithmic aspects.

Course/Module aims:
The course intends to expose students to the fascinating field of control theory and its solution methods.

Learning outcomes - On successful completion of this module, students should be able to:
- Explain the principles of modelling control and estimation settings as state-space models.
- Solve linear problems in control/estimation with quadratic loss.
- Model sequential problems as a Markov decision process.
- Implement algorithms for MDPs and reinforcement learning.

Attendance requirements (%):

Teaching arrangement and method of instruction:

Course/Module Content:
- Background: The role of feedback in control, open- vs. closed-loop control, examples of control systems, the state-space model (observability/controllability).
- Linear control and estimation for vector systems: the linear quadratic regulator (LQR), LQG control, and Kalman filtering. 1. Solutions via a dynamic programming principle in the finite-horizon regime. 2. The infinite-horizon problem via Riccati equations.
- Markov decision processes: the model, examples, policies and objectives (discounted reward / risk-sensitive / average reward).
- Value function, the Bellman operator and the Bellman equation.
- Value iteration, policy iteration and their convergence.
- Algorithms for reinforcement learning: tabular methods, Monte Carlo, temporal difference learning, Q-learning, actor-critic methods.
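To give a flavor of the finite-horizon LQR material, here is a minimal NumPy sketch of the backward dynamic programming (Riccati) recursion. The double-integrator dynamics, horizon, and cost weights are illustrative assumptions, not part of the course materials:

```python
import numpy as np

# System: x_{t+1} = A x_t + B u_t, cost sum_t (x'Qx + u'Ru) + terminal x'Qf x.
# A double integrator is used here as an assumed toy example.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
Qf = np.eye(2)
T = 50  # horizon

# Backward Riccati recursion: starting from P_T = Qf, compute gains K_t.
P = Qf
gains = []
for _ in range(T):
    # K_t = (R + B' P_{t+1} B)^{-1} B' P_{t+1} A
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # P_t = Q + A' P_{t+1} A - A' P_{t+1} B K_t
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()  # gains[t] is the feedback gain at time t: u_t = -K_t x_t

# Simulate the closed loop from an initial state; the state is driven to the origin.
x = np.array([[1.0], [0.0]])
for K in gains:
    u = -K @ x
    x = A @ x + B @ u
```

The recursion mirrors the dynamic programming principle taught in the course: the value function at each stage is quadratic, x'P_t x, and the optimal control is linear state feedback.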
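The Bellman operator and value iteration can likewise be illustrated on a toy discounted MDP. The two-state transition kernel and rewards below are made-up numbers chosen only for the sketch:

```python
import numpy as np

# Toy MDP: 2 states, 2 actions. P[a, s, s'] = transition probability,
# r[s, a] = expected reward. All numbers here are assumed for illustration.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.1, 0.9]]])  # action 1
r = np.array([[0.0, 1.0],                 # r[s, a]
              [0.0, 2.0]])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator
# (TV)(s) = max_a [ r(s, a) + gamma * sum_{s'} P(s'|s, a) V(s') ].
V = np.zeros(2)
for _ in range(500):
    Q = r + gamma * np.einsum('asn,n->sa', P, V)  # Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:  # contraction => geometric convergence
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged value function
```

Because the Bellman operator is a gamma-contraction in the sup norm, the iteration converges geometrically to the unique fixed point V*, as covered in the convergence part of the course.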
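Finally, tabular Q-learning from the reinforcement learning item can be sketched on the same kind of toy MDP, now treated as a simulator rather than a known model. The transition kernel, rewards, and hyperparameters (learning rate, exploration rate) are again illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy 2-state, 2-action MDP used as a black-box simulator.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.1, 0.9]]])  # action 1
r = np.array([[0.0, 1.0],
              [0.0, 2.0]])
gamma, alpha, eps = 0.9, 0.1, 0.1

Q = np.zeros((2, 2))  # tabular Q-function, Q[s, a]
s = 0
for _ in range(50_000):
    # Epsilon-greedy exploration.
    a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
    s_next = rng.choice(2, p=P[a, s])
    # Temporal-difference update toward the sampled Bellman target.
    Q[s, a] += alpha * (r[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

greedy_policy = Q.argmax(axis=1)
```

Unlike value iteration, no transition probabilities are used in the update itself: Q-learning estimates the optimal Q-function purely from sampled transitions, which is the model-free perspective developed at the end of the course.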

Required Reading:
-

Additional Reading Material:
- D.P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1, Athena Scientific, 4th edition, 2017
- K. J. Åström, Introduction to Stochastic Control Theory, Academic Press, 1970.
- R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, 2nd edition, MIT Press, 2018.

Course/Module evaluation:
End of year written/oral examination 0 %
Presentation 0 %
Participation in Tutorials 0 %
Project work 0 %
Assignments 10 %
Reports 10 %
Research project 80 %
Quizzes 0 %
Other 0 %

Additional information:
 
Students needing academic accommodations based on a disability should contact the Center for Diagnosis and Support of Students with Learning Disabilities, or the Office for Students with Disabilities, as early as possible, to discuss and coordinate accommodations, based on relevant documentation.
For further information, please visit the site of the Dean of Students Office.