ACML 2020 🇹🇭

Tutorials

T1: Optimization Methods for ML

By Haimonti Dutta, University at Buffalo | Video | Tutorial Website

Abstract

Machine learning algorithms often rely on optimization: for example, when models are fit to data, they are usually trained by solving an underlying optimization problem. Training learns the model parameters by minimizing a loss function, possibly together with a regularization term. In the process of model selection and validation, the optimization problem may be solved many times. This entwining of machine learning and optimization makes it possible for researchers to use advances in mathematical programming to study the speed, accuracy, and robustness of machine learning algorithms. In this tutorial, we will investigate how popular machine learning algorithms can be posed as unconstrained optimization problems and solved using well-known techniques from the literature, including line search methods, Newton and quasi-Newton methods, and conjugate-gradient and projection methods. Implementations of the algorithms and illustrative examples in the R programming language will be presented.
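
For a flavor of the techniques covered, here is a minimal sketch of one of them: steepest descent with an Armijo backtracking line search, applied to logistic regression posed as an unconstrained optimization problem. It is written in Python rather than the tutorial's R, and the synthetic data, step-size constants, and iteration budget are illustrative assumptions, not tutorial materials.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y):
    """Logistic-regression negative log-likelihood and its gradient."""
    p = sigmoid(X @ w)
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def gd_backtracking(X, y, iters=100, alpha0=1.0, rho=0.5, c=1e-4):
    """Steepest descent; the step size is chosen by Armijo backtracking."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        loss, grad = loss_and_grad(w, X, y)
        step = alpha0
        # shrink the step until the sufficient-decrease condition holds
        while loss_and_grad(w - step * grad, X, y)[0] > loss - c * step * grad @ grad:
            step *= rho
        w -= step * grad
    return w

# illustrative synthetic data (assumed, not from the tutorial)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)
print("learned weights:", gd_backtracking(X, y))
```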


T2: Recent Advances in Bayesian Optimization

By Vu Nguyen, University of Oxford | Video | Tutorial Website

Abstract

Bayesian optimization (BO) has emerged as an exciting sub-field of machine learning and artificial intelligence concerned with optimization using probabilistic methods. Systems implementing BO techniques have been successfully used to solve difficult problems in a diverse set of applications, including automatic tuning of machine learning algorithms and experimental design. Several recent advances in the methodologies and theory underlying BO have extended the framework to new applications and provided greater insight into the behavior of these algorithms. Bayesian optimization is now increasingly used in industrial settings, providing new and interesting challenges that require new algorithms and theoretical insights. A tutorial on Bayesian optimization is therefore timely, useful, and practical for the ACML audience, giving both academia and industry a systematic view of the recent advances. This tutorial consists of two main parts. In the first part, I will cover BO in the standard setting in detail. In the second part, I will present current advances in Bayesian optimization, including (1) batch BO, (2) high-dimensional BO, and (3) mixed categorical-continuous BO. At the end of the talk, I will also outline possible future research directions in Bayesian optimization.
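
To ground the first part, here is a minimal sketch of the standard BO loop: a Gaussian-process surrogate is fit to the observations so far, and the next evaluation point is chosen by the expected-improvement acquisition. The toy objective, kernel choice, candidate grid, and iteration count are illustrative assumptions, not material from the tutorial.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(X_cand, gp, y_best):
    """EI acquisition for minimization."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def objective(x):
    """Toy black-box function to minimize (assumed for illustration)."""
    return np.sin(3 * x) + x ** 2 - 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, size=(3, 1))   # small initial design
y = objective(X).ravel()

for _ in range(15):                   # the BO loop
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True, alpha=1e-6).fit(X, y)
    X_cand = np.linspace(-1, 2, 500).reshape(-1, 1)
    x_next = X_cand[np.argmax(expected_improvement(X_cand, gp, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best x:", X[np.argmin(y)], "best y:", y.min())
```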


T3: Towards Neural Architecture Search: Challenges and Solutions

By Xiaojun Chang, Monash University | Video | Tutorial Website

Abstract

In recent years, a large number of algorithms for Neural Architecture Search (NAS) have emerged. They make various improvements to the original NAS algorithm, and the related research is rich and complex. To make it easier for beginners to conduct NAS-related research, this tutorial provides a new perspective: starting with an overview of the characteristics of the earliest NAS algorithms, summarizing the problems in these early algorithms, and then presenting the solutions proposed by subsequent work. In addition, we will conduct a detailed and comprehensive analysis, comparison, and summary of these works. Finally, we will give possible future research directions.
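
As a toy illustration of the search problem that early NAS algorithms tackle, here is a minimal random-search sketch over a tiny architecture space (layer count and widths of a multilayer perceptron). Random search stands in for the reinforcement-learning and gradient-based search strategies surveyed in the tutorial; the search space, dataset, and evaluation budget are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
best_score, best_arch = -np.inf, None
for _ in range(10):  # sample candidate architectures at random
    depth = rng.integers(1, 4)
    arch = tuple(int(rng.choice([16, 32, 64])) for _ in range(depth))
    net = MLPClassifier(hidden_layer_sizes=arch, max_iter=300,
                        random_state=0).fit(X_tr, y_tr)
    score = net.score(X_val, y_val)   # validation accuracy guides the search
    if score > best_score:
        best_score, best_arch = score, arch

print("best architecture:", best_arch, "validation accuracy:", best_score)
```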

T4: Forecasting for Data Scientists

By Christoph Bergmeir, Monash University | Video | Tutorial Website

Abstract

Though machine learning researchers have claimed for decades that their methods yield strong performance for time series forecasting, until recently machine learning methods were not able to outperform even simple benchmarks in forecasting competitions and did not play a role in practical applications. This has changed in the last 3-4 years, with such methods winning several prestigious competitions. The models are now competitive because more series, and longer series due to higher sampling rates, are typically available. In this tutorial, we will briefly recap the history of the field of forecasting and its development in parallel to machine learning, and then discuss recent developments in the field, around learning across series (global models), multivariate forecasting, recurrent neural networks, CNNs, and other models, and how they are now able to outperform traditional methods.
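
To make "learning across series" concrete, here is a minimal sketch of a global model: lagged windows from several related series are pooled into one training set for a single regressor. The synthetic series, lag length, and choice of gradient boosting are illustrative assumptions, not the tutorial's methods.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_lag_matrix(series, n_lags):
    """Turn one series into (lag-window, next-value) training pairs."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
n_lags = 12
# a small collection of related synthetic monthly series (assumed)
all_series = [np.sin(np.arange(60) * 2 * np.pi / 12) * s
              + rng.normal(0, 0.1, 60) for s in (1.0, 2.0, 3.0)]

# pool windows from every series into one training set (a "global" model)
X = np.vstack([make_lag_matrix(s, n_lags)[0] for s in all_series])
y = np.concatenate([make_lag_matrix(s, n_lags)[1] for s in all_series])
model = GradientBoostingRegressor().fit(X, y)

# one-step-ahead forecast for the first series from its last 12 values
print(model.predict(all_series[0][-n_lags:].reshape(1, -1)))
```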

T5: Tensor Networks in Machine Learning: Recent Advances and Frontiers

By Qibin Zhao, RIKEN AIP | Video | Tutorial Website

Abstract

Tensor Networks (TNs) are factorizations of high-dimensional tensors into networks of many low-dimensional tensors, and have been studied in quantum physics, high-performance computing, and applied mathematics. In recent years, TNs have been increasingly investigated and applied to machine learning and signal processing, due to their significant advantages in handling large-scale and high-dimensional problems, compressing deep neural network models, and enabling efficient computations for learning algorithms. This tutorial aims to present a broad overview of recent progress in TN technology applied to machine learning, covering basic principles and algorithms, novel approaches in unsupervised learning, tensor completion, multi-task and multi-modal learning, and various applications in DNNs, CNNs, RNNs, LSTMs, etc. We will also discuss future research directions and new trends in this area.
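
As a concrete instance of factorizing a high-dimensional tensor into a network of low-dimensional cores, here is a minimal sketch of the tensor-train (TT) decomposition computed by sequential truncated SVDs, together with the contraction that rebuilds the full tensor. The fixed maximal rank, the random test tensor, and the NumPy-only implementation are illustrative assumptions.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Factor a dense tensor into tensor-train cores via sequential SVDs."""
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(len(shape) - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))           # truncate to the maximal rank
        cores.append(U[:, :r].reshape(rank, shape[k], r))
        # carry the remainder forward and expose the next mode
        mat = (S[:r, None] * Vt[:r]).reshape(r * shape[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])

T = np.random.default_rng(0).normal(size=(4, 5, 6))
cores = tt_decompose(T, max_rank=6)   # rank large enough to be lossless here
print("reconstruction error:", np.linalg.norm(T - tt_reconstruct(cores)))
```

With a smaller max_rank the same code gives a compressed, approximate representation, which is the regime exploited for model compression in deep networks.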