Monday, 7 October
Session 1: Methodological Approaches of Machine Learning
Distributed Machine Learning over Networks – Francis Bach
Knowledge representation and model-based image understanding – Isabelle Bloch
Session 2: Artificial Intelligence, Privacy & Ethics
Ethics and autonomous agents – Grégory Bonnet
Privacy-Preserving Algorithms for Decentralized Collaborative Machine Learning – Aurélien Bellet
Learning Anonymized Representations with Mutual Information – Pablo Piantanida
Tuesday, 8 October
Session 3: Machine Learning, Human Learning and Robotics
Meta-learning as a Markov Decision Process – Lisheng Sun
Session 4: Machine Learning, Natural Language Processing and Dialogue
Graph-to-Sequence Learning in Natural Language Processing – Lingfei Wu
Deep reinforcement learning with demonstrations – Olivier Pietquin
Session 5: Physics and Artificial Intelligence
When statistical physics meets machine learning – Lenka Zdeborová
Distributed Machine Learning over Networks
Francis Bach, Professor at Inria and Ecole Normale Supérieure
The success of machine learning models is in part due to their capacity to train on large amounts of data. Distributed systems are a common way to process more data than one computer can store, but they can also be used to increase the pace at which models are trained by splitting the work among many computing nodes. In this talk, I will study the corresponding problem of minimizing a sum of functions, each of which is accessible only from a separate node in a network. New centralized and decentralized algorithms will be presented, together with their convergence guarantees in deterministic and stochastic convex settings, leading to optimal algorithms for this particular class of distributed optimization problems.
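To make the setting concrete, here is a hedged sketch of the problem class and of one classical decentralized gradient update (my own illustration; the formulations and algorithms presented in the talk may differ):

```latex
% Finite-sum minimization where node i holds the local function f_i and can
% only exchange messages with its neighbours N(i) in the network graph:
\min_{\theta \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i(\theta)
% One classical decentralized gradient step: node i averages its neighbours'
% parameters with gossip weights W_{ij} supported on the edges, then takes a
% local gradient step with step size \gamma:
\theta_i^{t+1} = \sum_{j \in \mathcal{N}(i)} W_{ij}\, \theta_j^{t} \;-\; \gamma\, \nabla f_i\big(\theta_i^{t}\big)
```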
Francis Bach is a researcher at Inria, leading since 2011 the machine learning team which is part of the Computer Science department at Ecole Normale Supérieure. He graduated from Ecole Polytechnique in 1997 and completed his Ph.D. in Computer Science at U.C. Berkeley in 2005, working with Professor Michael Jordan. He spent two years in the Mathematical Morphology group at Ecole des Mines de Paris, then joined the computer vision project-team at Inria/Ecole Normale Supérieure from 2007 to 2010. Francis Bach is primarily interested in machine learning, and especially in sparse methods, kernel-based learning, large-scale optimization, computer vision and signal processing. He obtained in 2009 a Starting Grant and in 2016 a Consolidator Grant from the European Research Council, and received the Inria young researcher prize in 2012, the ICML test-of-time award in 2014, as well as the Lagrange prize in continuous optimization in 2018, and the Jean-Jacques Moreau prize in 2019. In 2015, he was program co-chair of the International Conference on Machine Learning (ICML), and general chair in 2018; he is now co-editor-in-chief of the Journal of Machine Learning Research.
Privacy-Preserving Algorithms for Decentralized Collaborative Machine Learning
Aurélien Bellet, Researcher at Inria Lille, MAGNET & CRIStAL Project-Teams
With the advent of connected devices with computation and storage capabilities, it becomes possible to run machine learning on-device to provide personalized services. However, the currently dominant approach is to centralize data from all users on an external server for batch processing, sometimes without explicit consent from users and with little oversight. This centralization poses important privacy issues in applications involving sensitive data such as speech, medical records or geolocation logs.
In this talk, I will discuss an alternative setting where many agents holding local datasets collaborate to learn models over a fully decentralized peer-to-peer network. We introduce and analyze asynchronous algorithms that allow agents to improve upon their locally trained model by exchanging information with peers that have similar objectives. I will then describe how to make such algorithms differentially private to avoid leaking information about the local datasets, and analyze the resulting privacy-utility trade-off. I will demonstrate the benefits of our approach compared to competing techniques on synthetic and real datasets.
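As a rough, illustrative sketch of one such update (assumptions: a synchronous variant, plain Gaussian noise for differential privacy, and invented function names; the algorithms in the talk are asynchronous and considerably more refined):

```python
import numpy as np

def dp_collaborative_step(theta, peer_models, grad_fn, lr=0.1, sigma=0.5, clip=1.0):
    """One collaborative update for a single agent: average its model with
    those of similar peers, take a local gradient step on its private data,
    and add calibrated Gaussian noise so that what is shared leaks little
    information about the local dataset."""
    # Gossip step: average own model with the peers' models.
    avg = np.mean([theta] + list(peer_models), axis=0)
    # Local gradient on the agent's private data.
    g = grad_fn(avg)
    # Clip the gradient to bound its sensitivity to any single data point.
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
    # Gaussian noise calibrated to the clipping bound (differential privacy).
    noise = np.random.normal(0.0, sigma * clip, size=theta.shape)
    return avg - lr * (g + noise)
```

In a fully asynchronous peer-to-peer setting, each agent would run such updates at its own pace and only with peers whose objectives are similar.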
Aurélien Bellet is a tenured researcher at Inria, where he is part of the Magnet Team (MAchine learninG in information NETworks) and affiliated with CRIStAL (UMR CNRS 9189), a research center of the University of Lille. He is also an invited associate professor at Télécom Paris. Prior to joining Inria, he was a postdoctoral researcher at the University of Southern California (working with Fei Sha) and then at Télécom Paris (working with Stephan Clémençon). He obtained his Ph.D. from the University of Saint-Etienne in 2012 under the supervision of Marc Sebban and Amaury Habrard. His main line of research is in statistical machine learning. He is particularly interested in designing large-scale learning algorithms which allow a good trade-off between computational complexity (or other “resources”, such as privacy or communication) and statistical performance.
Knowledge representation and model-based image understanding
Isabelle Bloch, Professor at LTCI, Télécom Paris, Institut Polytechnique de Paris
In this talk, we will discuss the importance of knowledge and models to guide image understanding, and present a few examples. In these examples, structural information is expressed as mathematical models of spatial relations, using fuzzy sets and mathematical morphology. This knowledge is included in models such as ontologies, graphs, and logical knowledge bases. Image understanding is then expressed as a spatial reasoning problem. Examples in medical imaging will illustrate these approaches. Finally, recent developments on transfer learning will be illustrated.
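As one concrete (and hedged) example of such a model, a directional spatial relation such as “in a given direction with respect to object A” can be represented by the fuzzy dilation of the object by a directional fuzzy structuring element; this is an illustration of the formalism, not necessarily the exact construction used in the talk:

```latex
% Fuzzy dilation of the reference object (membership function \mu_A) by a
% directional structuring element \nu: the value at x is the degree to which
% x satisfies the spatial relation with respect to A.
\delta_\nu(\mu_A)(x) \;=\; \sup_{y} \, \min\big( \mu_A(y),\, \nu(x - y) \big)
```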
Isabelle Bloch is Professor at Telecom Paris (LTCI / IMAGES team). Her research interests include 3D image understanding, computer vision, artificial intelligence, lattice theory, mathematical morphology, discrete 3D geometry and topology, information fusion, fuzzy set theory, graph-based and knowledge-based object recognition, spatial logics and spatial reasoning, and medical imaging.
Ethics and autonomous agents
Grégory Bonnet, Associate Professor, GREYC Lab, Normandie University
Recent years have been marked by computer science achievements suggesting that artificial intelligence, robots and autonomous machines will become more and more present in our daily environment. As these systems interact more and more with humans, interest in designing moral or ethical autonomous agents has grown. In this talk, I will first investigate what kind of ethical issues autonomous agents may raise, and what an ethical autonomous agent could be. Then I will present logical architectures that may be useful for designing autonomous agents embedded with explicit ethical reasoning capabilities, such as attributing causality and responsibilities, judging, deciding and acting according to ethical principles.
Grégory Bonnet has been an Associate Professor at the GREYC Lab, Normandie University, France, since 2010. In 2008, he received a Ph.D. degree from the University of Toulouse III, focusing on multi-agent planning. He worked at the University of Technology of Troyes from 2009 to 2010, focusing on autonomic networking. Today, his research deals with formal aspects of multi-agent systems, with a focus on coordination and cooperation protocols, adaptive behavior and regulation mechanisms. From 2014 to 2018, he led the French national project ETHICAA (Ethics and Autonomous Agents), which involved computer scientists and philosophers and aimed at defining what an architecture for ethical autonomous agents could be.
Tackling the Data-Efficiency Challenge in Autonomous Robots Using Probabilistic Modeling
Marc Deisenroth, Senior Lecturer, Imperial College London
The vision of intelligent and fully autonomous robots, which are part of our daily lives and automatically learn from mistakes and adapt to new situations, has been around for many decades. However, this vision has been elusive so far. Although reinforcement learning is a principled framework for learning from trial and error and has led to success stories in the context of games, we need to address a practical challenge when it comes to learning with mechanical systems: data efficiency, i.e., the ability to learn from scarce data in complex domains.
In this talk, I will outline three approaches, based on probabilistic modeling and inference, that explicitly address the data-efficiency challenge in reinforcement learning and robotics. First, I will give a brief overview of a model-based RL algorithm that can learn from small datasets. Second, I will describe an idea based on model predictive control that allows us to learn even faster while respecting state or control constraints, which is important for safe exploration. Finally, I will introduce a latent-variable approach to meta learning (in the context of model-based RL) for transferring knowledge from known tasks to tasks that have never been encountered.
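As a hedged, toy illustration of the first idea (learning from scarce data by fitting a model and planning against it), here is a minimal sketch on a 1-D system; the methods in the talk use probabilistic models such as Gaussian processes and propagate model uncertainty, which this linear-model / random-shooting toy does not:

```python
import numpy as np

def true_dynamics(s, a):
    # Unknown to the agent; stands in for the real mechanical system.
    return 0.9 * s + 0.5 * a

def rollout_cost(A_hat, B_hat, s0, actions):
    # Predicted cost of an action sequence under the learned model.
    s, cost = s0, 0.0
    for a in actions:
        s = A_hat * s + B_hat * a
        cost += s ** 2  # objective: drive the state to zero
    return cost

rng = np.random.default_rng(0)

# 1. Collect a *small* amount of real experience (20 random transitions).
data, s = [], 1.0
for _ in range(20):
    a = rng.uniform(-1.0, 1.0)
    s_next = true_dynamics(s, a)
    data.append((s, a, s_next))
    s = s_next

# 2. Fit a dynamics model s' ~ A*s + B*a by least squares on all data so far.
X = np.array([[si, ai] for si, ai, _ in data])
y = np.array([sn for _, _, sn in data])
A_hat, B_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# 3. "Plan" against the learned model: random shooting over action sequences.
candidates = rng.uniform(-1.0, 1.0, size=(100, 5))
best = min(candidates, key=lambda acts: rollout_cost(A_hat, B_hat, 1.0, acts))
print("first planned action:", best[0])
```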
Developmental Autonomous Learning: AI, Cognitive Sciences and Educational Technology
Pierre-Yves Oudeyer, Professor at Inria, Bordeaux University and ENSTA Paris
Current approaches to AI and machine learning are still fundamentally limited in comparison with the autonomous learning capabilities of children. What is remarkable is not that some children become world champions in certain games or specialties: it is rather their autonomy, flexibility and efficiency at learning many everyday skills under severely limited resources of time, computation and energy. And they do not need the intervention of an engineer for each new task (e.g. they do not need someone to provide a new task-specific reward function).
I will present a research program that has focused over the last decade on computational modeling of child development and learning mechanisms. I will discuss several developmental forces that guide exploration in large real-world spaces, starting from the perspective of how algorithmic models can help us better understand how they work in humans, and in return how this opens new approaches to autonomous machine learning.
In particular, I will discuss models of curiosity-driven autonomous learning, enabling machines to sample and explore their own goals and their own learning strategies, self-organizing a learning curriculum without any external reward or supervision.
I will show how this has helped scientists better understand aspects of human development such as the emergence of developmental transitions between object manipulation, tool use and speech. I will also show how the use of real robotic platforms for evaluating these models has led to highly efficient unsupervised learning methods, enabling robots to discover and learn multiple skills in high-dimensional spaces in a handful of hours. I will discuss how these techniques are now being integrated with modern deep learning methods.
Finally, I will show how these models and techniques can be successfully applied in the domain of educational technologies, making it possible to personalize sequences of exercises for human learners while maximizing both learning efficiency and intrinsic motivation. I will illustrate this with a large-scale experiment recently performed in primary schools, enabling children of all levels to improve their skills and motivation in learning aspects of mathematics.
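As a heavily simplified sketch of the curiosity-driven goal sampling mentioned above (my own toy formulation, not the algorithms used in the team's robots or classrooms):

```python
import random

class LearningProgressGoalSampler:
    """Toy curiosity-driven goal selection: track recent competence
    improvement ("learning progress") per goal region and preferentially
    sample the regions where progress is currently highest."""

    def __init__(self, goal_regions, eps=0.2):
        self.progress = {g: 0.0 for g in goal_regions}
        self.last_competence = {g: 0.0 for g in goal_regions}
        self.eps = eps  # residual random exploration

    def sample_goal(self):
        if random.random() < self.eps:
            return random.choice(list(self.progress))
        return max(self.progress, key=self.progress.get)

    def update(self, goal, competence):
        # Learning progress = recent change in competence on this goal region.
        self.progress[goal] = abs(competence - self.last_competence[goal])
        self.last_competence[goal] = competence
```

A learner repeatedly samples a goal, practices it, measures its competence and updates the sampler, so practice shifts toward whatever it is currently improving at; this is one way a learning curriculum can self-organize without external rewards.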
Pierre-Yves Oudeyer is a research director at Inria and has headed the FLOWERS lab at Inria and Ensta-ParisTech since 2008. Before that, he was a permanent researcher at Sony Computer Science Laboratory for 8 years (1999-2007).
He studies developmental autonomous learning and the self-organization of behavioural and cognitive structures, at the frontiers of AI, machine learning, neuroscience, developmental psychology and educational technologies. In particular, he studies exploration in large open-ended spaces, with a focus on autonomous goal setting, intrinsically motivated learning, and how this can automate curriculum learning. With his team, he pioneered curiosity-driven learning algorithms working in real-world robots (used in Sony Aibo robots), and showed how the same algorithms can be used to personalize sequences of learning activities in educational technologies deployed at scale in schools. He developed theoretical frameworks to better understand human curiosity and its role in cognitive development, and contributed to building an international interdisciplinary research community on human curiosity. He also studied how machines and humans can invent, learn and evolve speech communication systems.
He is a laureate of the Inria-National Academy of Science young researcher prize in computer science, of an ERC Starting Grant, and of the Lifetime Achievement Award of the Evolutionary Linguistics association. Beyond academic publications and several books, he is co-author of 11 international patents. His team created the first open-source 3D-printed humanoid robot for reproducible science and education (the Poppy project, now widely used in schools and artistic projects), as well as a startup company. He also works actively on the diffusion of science towards the general public, through popular science articles and participation in radio and TV programs as well as science exhibitions.
Learning Anonymized Representations with Mutual Information
Pablo Piantanida, Associate Professor of Information Theory at CentraleSupélec
Statistical methods protecting sensitive information or the identity of the data owner have become critical to ensure privacy of individuals as well as of organizations. In this talk, we present a statistical anonymization method based on representation learning and deep neural networks. Our approach employs adversarial networks to perform a novel variational approximation of the mutual information between the representations and the user’s identity. We introduce a training objective for simultaneously learning representations that preserve the information of interest (e.g., about regular labels) while dismissing information about the identity of a person (e.g., about private labels). We demonstrate the success of this approach for standard classification versus anonymization tasks.
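A hedged sketch of the kind of combined objective described here, written against a generic PyTorch-style API with invented module names; the adversarial identity classifier stands in for the variational approximation of the mutual information:

```python
import torch.nn.functional as F

def anonymization_loss(encoder, task_head, adversary, x, y_task, y_identity, lam=1.0):
    """Combined objective (sketch): keep the representation z informative about
    the regular task label while an adversary that tries to recover the user's
    identity from z acts as a variational proxy for I(z; identity)."""
    z = encoder(x)
    # Utility term: predict the task label from the representation.
    task_loss = F.cross_entropy(task_head(z), y_task)
    # Privacy term: the adversary's accuracy at guessing identity from z
    # reflects how much identity information the representation retains.
    adv_loss = F.cross_entropy(adversary(z), y_identity)
    # The encoder minimizes the task loss while *maximizing* the adversary's
    # loss; the adversary itself is trained separately to minimize adv_loss.
    return task_loss - lam * adv_loss
```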
Pablo Piantanida received the B.Sc. degree in Electrical Engineering and Mathematics and the M.Sc. degree from the University of Buenos Aires (Argentina) in 2003, and the Ph.D. from Université Paris-Sud (Orsay, France) in 2007. In October 2007 he joined the Laboratoire des Signaux et Systèmes (L2S) at CentraleSupélec, together with CNRS (UMR 8506) and Université Paris-Sud, as an Associate Professor of Network Information Theory. He is an IEEE Senior Member, coordinator of the Information Theory and its Applications (ITA) group at L2S, and General Co-Chair of the 2019 IEEE International Symposium on Information Theory (ISIT).
His research interests lie broadly in information theory and its interactions with other fields. Information theory—the mathematical description of information and its utilization—plays an increasingly fundamental role in numerous areas of applied mathematics and science. He is particularly fascinated by the development of information-theoretic principles and methods that explain the common structure in a variety of applied mathematical problems.
Deep reinforcement learning with demonstrations
Olivier Pietquin, Research Scientist, Google Brain
Deep Reinforcement Learning (DRL) has recently experienced increasing interest after its success at playing video games such as Atari, DotA or StarCraft II, as well as defeating grandmasters at Go and Chess. However, many tasks remain hard to solve with DRL, even given almost unlimited compute power and simulation time. These tasks often share the common problem of being “hard exploration tasks”. In this talk, we will show how using demonstrations (even sub-optimal ones) can help in learning policies through different mechanisms such as imitation learning, inverse reinforcement learning, credit assignment or adversarial perturbations.
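One simple way demonstrations can enter such pipelines is as an auxiliary imitation (behavior-cloning) term added to the reinforcement learning loss; the following PyTorch-style sketch is only an illustration of that general idea, not of any specific method from the talk:

```python
import torch.nn.functional as F

def rl_with_demonstrations_loss(td_error, demo_logits, demo_actions, bc_weight=0.1):
    """Combine a standard RL term (here, a squared TD error) with a
    behavior-cloning term on (possibly sub-optimal) demonstrations; the
    imitation term biases the policy toward demonstrated behavior, which
    helps on hard-exploration tasks."""
    rl_loss = (td_error ** 2).mean()                      # value / critic loss
    bc_loss = F.cross_entropy(demo_logits, demo_actions)  # imitate demo actions
    return rl_loss + bc_weight * bc_loss
```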
Olivier Pietquin obtained an Electrical Engineering degree from the Faculty of Engineering, Mons (FPMs, Belgium) in June 1999 and a PhD degree in April 2004. In 2011, he received the Habilitation à Diriger des Recherches (French tenure) from the University Paul Sabatier (Toulouse, France). He joined the FPMs Signal Processing department (TCTS Lab.) in September 1999. In 2001, he was a visiting researcher at the Speech and Hearing lab of the University of Sheffield (UK). Between 2004 and 2005, he was a Marie Curie Fellow at the Philips Research lab in Aachen (Germany). From 2005 to 2013 he was a professor at the Metz campus of the Ecole Superieure d’Electricite (Supelec, France), and headed the “Information, Multimodality & Signal” (IMS) research group from 2006 to 2010, when the group joined the UMI 2958 (GeorgiaTech – CNRS). In 2012, he headed the Machine Learning and Interactive Systems group (MaLIS). From 2007 to 2010, he was also a member of the IADI INSERM research team (in biomedical signal processing). He was a full member of the UMI 2958 (GeorgiaTech – CNRS) from 2010 to 2013 and coordinated the computer science department of this international lab. After that, he joined the University of Lille 1 as a Full Professor, affiliated to the CRIStAL (UMR CNRS 9189) lab’s SequeL team (also an Inria project-team). In 2014, he was appointed a junior fellow of the Institut Universitaire de France. He is now on leave with Google, first at Google DeepMind in London and, since 2018, with Google Brain in Paris. Olivier Pietquin sat on the IEEE Speech and Language Technical Committee from 2009 to 2012 and has been an IEEE Senior Member since 2011. His research interests include spoken dialog systems evaluation, simulation and automatic optimisation, machine learning (especially direct and inverse reinforcement learning), speech and signal processing. He has authored or co-authored over 100 publications in these domains.
Meta-learning as a Markov Decision Process
Lisheng Sun, PhD student at LRI, University Paris Sud
Machine Learning (ML) has enjoyed huge successes in recent years and an ever-growing number of real-world applications rely on it. However, designing promising algorithms for a specific problem still requires huge human effort. Automated Machine Learning (AutoML) aims at taking the human out of the loop and developing machines that generate or recommend good algorithms for a given ML task. AutoML is usually treated as an algorithm / hyper-parameter selection problem; existing approaches include Bayesian optimization, evolutionary algorithms, as well as reinforcement learning.
Among them, auto-sklearn, which incorporates meta-learning techniques in its search initialization, ranks consistently well in AutoML challenges. This observation oriented my research toward the meta-learning domain, leading to my recent paper in which active learning and collaborative filtering are used to assign a good algorithm to a new dataset as quickly as possible, based on a meta-learning performance matrix S, i.e. a matrix of scores of algorithms on given datasets or tasks. This direction led me to develop a novel framework based on Markov Decision Processes (MDP) and reinforcement learning (RL), which will be the main topic of this talk.
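To fix ideas, here is a hedged sketch of how algorithm selection for a new dataset can be cast as an MDP; the states, actions and rewards below are my own illustrative choices, not necessarily the formulation used in the talk:

```python
from dataclasses import dataclass, field

@dataclass
class AlgoSelectionState:
    """State of the meta-learning MDP for one new dataset: which algorithms
    have been tried so far and the scores observed (a partially revealed row
    of the meta-learning performance matrix S)."""
    tried: dict = field(default_factory=dict)  # algorithm name -> score

def step(state, action, evaluate):
    """Action = choose the next algorithm to evaluate on the new dataset.
    Reward = improvement of the best score found so far, so a good policy
    identifies a well-performing algorithm in as few evaluations as possible."""
    best_before = max(state.tried.values(), default=0.0)
    score = evaluate(action)          # train and score this algorithm
    state.tried[action] = score
    return state, max(0.0, score - best_before)
```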
Lisheng Sun is a 3rd-year PhD student at LRI, University Paris Sud, under the supervision of Isabelle Guyon and Michèle Sebag. Her research focuses on AutoML / meta-learning. She graduated from the Observatoire de Paris with a master's degree in Astrophysics in 2013, then spent 3 years in Japan working as a freelance software developer before joining the PhD program in 2016.
Graph-to-Sequence Learning in Natural Language Processing
Lingfei Wu, Research Staff Member at the IBM AI Foundations Labs
The celebrated Seq2Seq technique and its numerous variants achieve excellent performance on many tasks such as neural machine translation, natural language generation, speech recognition, and drug discovery. Despite their flexibility and expressive power, a significant limitation of Seq2Seq models is that they can only be applied to problems whose inputs are represented as sequences. However, sequences are probably the simplest structured data, and many important problems are best expressed with a more complex structure such as a graph. On the one hand, graph-structured data can encode complicated pairwise relationships for learning more informative representations; on the other hand, the structural and semantic information in sequence data can be exploited to augment the original sequence data by incorporating domain-specific knowledge.
To cope with complex structured graph inputs, we propose Graph2Seq, a novel attention-based neural network architecture for graph-to-sequence learning. Graph2Seq can be viewed as a generalization of the Seq2Seq model to graph inputs: a general end-to-end neural encoder-decoder architecture that encodes an input graph and decodes the target sequence. In this talk, I will first introduce our Graph2Seq model, and then discuss how to apply it to different NLP tasks. In particular, I will illustrate the advantages of Graph2Seq over various Seq2Seq and Tree2Seq models in two recent works: “Exploiting Rich Syntactic Information for Semantic Parsing with Graph-to-Sequence Model” (EMNLP 2018) and “SQL-to-Text Generation with Graph-to-Sequence Model” (EMNLP 2018).
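As a rough illustration of the graph-encoder half of such a model (a minimal neighbour-aggregation sketch with assumed shapes; the actual Graph2Seq encoder and its attention-based decoder are considerably more elaborate):

```python
import numpy as np

def simple_graph_encoder(node_feats, adjacency, n_layers=2):
    """Minimal graph encoder: each layer updates a node's embedding by
    averaging it with its neighbours' embeddings. A sequence decoder would
    then attend over the resulting node embeddings to generate the output."""
    h = node_feats                                   # (num_nodes, dim)
    deg = adjacency.sum(axis=1, keepdims=True) + 1   # +1 for the self-loop
    for _ in range(n_layers):
        h = (h + adjacency @ h) / deg                # aggregate self + neighbours
    graph_embedding = h.mean(axis=0)                 # pooled graph-level vector
    return h, graph_embedding
```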
Dr. Lingfei Wu is a passionate researcher and responsible team leader, developing novel deep learning / machine learning models for solving challenging real-world problems. He has served as PI at IBM for several federal agencies such as DARPA and NSF (more than $1.8M), as well as for the MIT-IBM Watson AI Lab. He has published more than 50 top-ranked conference and journal papers in the ML/DL/NLP domains and is a co-inventor of more than 20 filed US patents. He received the Best Paper Award and Best Student Paper Award at several venues, including IEEE ICC’19 and the KDD workshop on Deep Learning on Graphs (DLG’19). His research has been featured in numerous media outlets, including Nature News, Yahoo News, VentureBeat, TechTalks, SyncedReview, Leiphone, QbitAI, MIT News, IBM Research News, and SIAM News. He has served as Poster co-chair of IEEE BigData’19, Tutorial co-chair of IEEE BigData’18, and Workshop co-chair of Deep Learning on Graphs (with KDD’19, IEEE BigData’19, and AAAI’20), and regularly serves as an SPC/TPC member of major AI/ML/DL/DM/NLP conferences including NIPS, ICML, ICLR, ACL, IJCAI, AAAI, and KDD.
When statistical physics meets machine learning
Lenka Zdeborová, CNRS Researcher, Institut de Physique Théorique – CEA
The affinity between statistical physics and machine learning has a long history; this is reflected even in machine learning terminology, part of which is adopted from physics. The very purpose of physics is to provide understanding for empirically observed behaviour. From this point of view, the current success of machine learning provides a myriad of yet-unexplained empirical observations that call for explanation. Physics proceeds by studying models that are simple enough to be analyzed and at the same time capture the salient features of the real system. In this lecture I will describe some of the history of statistical physics applied to machine learning and focus on the current hunt for suitable models, starting with a reflection on the salient features such models should capture, and on methods to possibly solve them.
Lenka Zdeborová is a researcher at CNRS working in the Institute of Theoretical Physics in CEA Saclay, France. She received a PhD in physics from Université Paris-Sud and from Charles University in Prague in 2008. She spent two years at the Los Alamos National Laboratory as a Director's Postdoctoral Fellow. In 2014, she was awarded the CNRS bronze medal; in 2016, the Philippe Meyer prize in theoretical physics and an ERC Starting Grant; and in 2018, the Irène Joliot-Curie prize. She is an editorial board member of Journal of Physics A, Physical Review E and Physical Review X. Lenka's expertise is in applications of methods developed in statistical physics, such as advanced mean-field methods, the replica method and related message-passing algorithms, to problems in machine learning, signal processing, inference and optimization.