Tutorials

Tutorial details are still being updated!

Mathematical Guarantees for Fairness in Reinforcement Learning


Abstract: In this tutorial, we will provide an overview of the state-of-the-art mathematical results in the fair reinforcement learning (fair-RL) literature. We will start with an outline of the fairness notions typically used in the broader machine learning (ML) and artificial intelligence (AI) literature. We will then show how these notions have been formulated in reinforcement learning problems. We will introduce the reinforcement learning methods proposed in the literature, along with their mathematical guarantees on fairness and other related performance measures. We will conclude with a discussion of current challenges and future directions in fair-RL research.

Organizers:
  • Pratik Gajane, Eindhoven University of Technology
  • Mykola Pechenizkiy, Eindhoven University of Technology

Neural Program Synthesis and Induction



Abstract: Despite recent advances in machine learning, developing artificial intelligence systems that can be understood by human users and generalize to novel scenarios remains challenging. This tutorial provides an in-depth overview of two emerging research paradigms that aim to address this challenge: neural program synthesis and neural program induction. Neural program synthesis (NPS) methods produce human-readable and machine-executable programs that can serve as task-solving procedures, data representations, or reinforcement learning policies. Neural program induction (NPI) approaches, on the other hand, aim to induce latent programmatic representations by employing specific network architectural designs (e.g., differentiable external memory) or by leveraging detailed supervision. This tutorial will cover the transformative impact of neural program synthesis and neural program induction on building interpretable and generalizable machine learning frameworks.

Organizers:
  • Shao-Hua Sun, National Taiwan University

Trustworthy Learning Under Imperfect Data


Abstract: Standard machine learning algorithms, which assume clean and intact data, have been well studied over the past few decades. In practical settings, however, data is usually imperfect, with ubiquitous label noise, adversarial examples, and long-tail characteristics that dramatically degrade the performance of existing machine learning algorithms. Trustworthy machine learning has therefore become an important pursuit both in the machine learning community and in real-world applications. To better map the landscape of this area, this tutorial systematically recaps the background, algorithms, and applications of trustworthy learning under three representative types of imperfect data: noisy data, adversarial data, and long-tail-distributed data. For noisy data, we will summarize the most recent noisy-supervision-tolerant techniques from the viewpoints of statistical learning and deep learning, together with their applications in industry. In the second part, we will summarize the most recent adversarial data detection and generalization techniques from the viewpoints of statistics and machine learning, along with their applications in industry. Finally, we will review techniques for handling long-tail-distributed data from the perspectives of data augmentation, model personalization, and loss design.

Organizers:
  • Jiangchao Yao, Shanghai Jiao Tong University
  • Feng Liu, University of Melbourne
  • Bo Han, Hong Kong Baptist University