
Reinforcement Learning

In recent years, advances in AI have generated much public interest. Large language models have surprised many with their ability to mimic critical thinking on complex problems, a distinctly human skill. Autonomous controllers are beginning to be deployed in safety-critical domains such as driving and flying. All this progress raises new questions: can we trust these systems? Should humans remain in the loop? This semi-lab will not answer these ethical questions. Instead, we will learn how we got here. A key component of these advances is Reinforcement Learning (RL), in which an agent learns to interact with its environment so as to maximize a reward signal that encodes its goal. This semi-lab will delve into the inner workings of Deep RL and the mathematics behind it, including concepts such as logic and derivatives. We will then examine and train controllers in simulators of potential real-world autonomous systems and see how researchers are working to ensure we can trust them.
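To give a flavor of the agent-environment interaction described above, here is a minimal sketch of one episode of a simulated control task, using a random policy in place of a learned one. The Gymnasium library and the CartPole-v1 environment are assumptions chosen for illustration; the actual simulators and tools used in the semi-lab may differ.

```python
import gymnasium as gym

# Create a simple simulated control task: balance a pole on a moving cart.
env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(200):
    # A real RL agent would choose actions from a learned policy;
    # here we simply sample actions at random as a placeholder.
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        break

env.close()
print(f"Episode return: {total_reward}")
```

The loop above is the core of RL: the agent observes the environment, chooses an action, receives a reward, and repeats, with learning algorithms adjusting the policy to increase the total reward collected.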

Prerequisites: Experience with Python is strongly encouraged.

Difficulty level: Advanced