We are currently focusing on three research areas in Reinforcement Learning.
Learning from diverse feedback studies how agents can be trained from heterogeneous signals such as human preferences, demonstrations, and corrections rather than from a hand-designed reward alone. It is particularly useful in domains where defining an exhaustive and effective reward function is difficult, or where human expertise can significantly improve the learning process. The approach has been gaining attention in applications ranging from robotics, where mimicking human behavior is beneficial, to complex decision-making systems in business and medicine.
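To make this concrete, here is a minimal sketch of one common instance, preference-based reward learning with a Bradley-Terry model: a small reward network is fitted so that trajectories a human prefers receive higher scores. The network, shapes, and hyperparameters are illustrative assumptions, not a specific method described here.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Assumed toy reward network; scores a whole trajectory."""
    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # Sum per-step rewards over the time dimension.
        return self.net(states).sum(dim=-2).squeeze(-1)

def preference_loss(model, traj_a, traj_b, prefers_a):
    # Bradley-Terry: P(a preferred over b) = sigmoid(R(a) - R(b)).
    logits = model(traj_a) - model(traj_b)
    return nn.functional.binary_cross_entropy_with_logits(
        logits, prefers_a.float())

# Toy usage: 32 pairs of 10-step trajectories with 4-dim observations.
model = RewardModel(obs_dim=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traj_a, traj_b = torch.randn(32, 10, 4), torch.randn(32, 10, 4)
prefers_a = torch.randint(0, 2, (32,))
loss = preference_loss(model, traj_a, traj_b, prefers_a)
opt.zero_grad(); loss.backward(); opt.step()
```

The learned reward can then stand in for a hand-designed one when training a policy with any standard RL algorithm.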
Imitation learning is a machine learning technique in which an agent learns to perform tasks by mimicking behavior observed in demonstrations from a human or another proficient agent. This approach is particularly valuable in scenarios where defining explicit rules or reward structures is challenging. It is commonly divided into behavioral cloning, which casts imitation as supervised learning of the expert's actions from the demonstrated states, and inverse reinforcement learning, which first infers the reward function that the demonstrations appear to optimize and then trains a policy on it; a behavioral-cloning sketch follows.
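A minimal behavioral-cloning sketch, assuming discrete actions and a fixed set of demonstrations; all shapes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4  # assumed toy dimensions

# Policy network mapping states to logits over discrete actions.
policy = nn.Sequential(
    nn.Linear(obs_dim, 64), nn.ReLU(),
    nn.Linear(64, n_actions),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for expert demonstrations: (state, action) pairs.
states = torch.randn(256, obs_dim)
actions = torch.randint(0, n_actions, (256,))

for _ in range(100):
    logits = policy(states)
    # Cross-entropy pulls the policy toward the expert's choices.
    loss = nn.functional.cross_entropy(logits, actions)
    opt.zero_grad(); loss.backward(); opt.step()
```

Behavioral cloning is simple but suffers from compounding errors once the agent drifts into states not covered by the demonstrations, which is one motivation for inverse reinforcement learning.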
Multiagent reinforcement learning (MARL) involves multiple agents that learn or make decisions independently or cooperatively in a shared environment. MARL extends single-agent reinforcement learning and introduces new complexities due to the interactions among agents, which can be competitive (as in games), cooperative (as in team robotics), or a mixture of both. Safety in MARL focuses on ensuring that learning and decision-making by the agents do not lead to undesired outcomes, especially in interactions with humans or other agents. This aspect is crucial in applications where safety is a priority, such as autonomous vehicles or healthcare.
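As a toy illustration, the sketch below runs two independent Q-learners in a repeated cooperative matrix game; the payoff matrix and constants are assumptions chosen for illustration, and the example deliberately omits the non-stationarity handling and safety constraints that make realistic MARL hard.

```python
import numpy as np

rng = np.random.default_rng(0)
# Shared payoff: coordinating on action 0 pays 4, on action 1 pays 2.
payoff = np.array([[4.0, 0.0],
                   [0.0, 2.0]])
n_actions = 2
Q = [np.zeros(n_actions), np.zeros(n_actions)]  # one table per agent
alpha, eps = 0.1, 0.1  # learning rate, exploration rate

for _ in range(5000):
    # Each agent picks epsilon-greedily, unaware of the other's choice.
    acts = [int(q.argmax()) if rng.random() > eps
            else int(rng.integers(n_actions)) for q in Q]
    r = payoff[acts[0], acts[1]]  # shared (fully cooperative) reward
    for i in range(2):
        # Stateless repeated game, so the update is bandit-style.
        Q[i][acts[i]] += alpha * (r - Q[i][acts[i]])

print([q.round(2) for q in Q])  # both agents come to prefer action 0
```

Even in this tiny game, each agent's environment changes as the other learns; coping with that non-stationarity, and doing so safely, is at the core of MARL research.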