Keynote Speakers

We are pleased to present the ISPDC 2021 keynote speakers:

Learning Representations: Opportunities for Parallel and Distributed Computing

Daniela Rus
MIT CSAIL, USA
rus [at] csail mit edu

Abstract: Learning representations is critical for machine learning, and it is a very computation-intensive process, so there are many opportunities to introduce efficiencies through parallel and distributed computing. The success of machine learning algorithms depends on data representation: different representations can expose or hide different features of the data. As we think about the future of representation learning and its impact on machine learning, it is important to consider the state of the art of machine learning today, the challenges and opportunities for addressing the computational issues around representation learning, and how to reach deeper understanding and more capabilities in machine-learned models. In this talk I will describe four ideas related to computational issues in representation learning: reducing uncertainty, developing compact representations, debiasing the training data, and developing privacy-preserving representations.

Bio: Prof. Daniela Rus is the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science, Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Deputy Dean of Research for the Schwarzman College of Computing at MIT. Prof. Rus brings deep expertise in robotics, artificial intelligence, data science, and computation. She is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and a fellow of the Association for the Advancement of Artificial Intelligence, the Institute of Electrical and Electronics Engineers, and the Association for Computing Machinery. She is also a recipient of a MacArthur Fellowship, a National Science Foundation CAREER award, and an Alfred P. Sloan Foundation fellowship. Daniela Rus earned her PhD in computer science from Cornell University.

Neural circuit policies

Radu Grosu
Vienna University of Technology, Austria
radu.grosu [at] tuwien ac at

Abstract: A central goal of artificial intelligence is to design algorithms that are both generalisable and interpretable. We combine brain-inspired neural computation principles and scalable deep learning architectures to design compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system. We show that a single algorithm with 19 control neurons, connecting 32 encapsulated input features to outputs by 253 synapses, learns to map high-dimensional inputs into steering commands. This system shows superior generalisability, interpretability and robustness compared with orders-of-magnitude larger black-box learning systems. The obtained neural agents enable high-fidelity autonomy for task-specific parts of a complex autonomous system.
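
To make the idea of a compact, sparsely wired controller concrete, here is a minimal sketch in PyTorch. The sizes (32 input features, 19 neurons, 253 synapses) follow the abstract, but the random wiring mask and the simple tanh dynamics are illustrative placeholders, not the speaker's neural circuit policy implementation.

```python
# Illustrative sketch only: a sparsely wired recurrent controller in the
# spirit of neural circuit policies. Sizes follow the abstract (32 input
# features, 19 neurons, 253 synapses); the wiring scheme and dynamics are
# simplified placeholders, not the authors' NCP implementation.
import torch
import torch.nn as nn

class SparseRecurrentPolicy(nn.Module):
    def __init__(self, n_inputs=32, n_neurons=19, n_synapses=253, seed=0):
        super().__init__()
        g = torch.Generator().manual_seed(seed)
        # Fixed sparse wiring: only n_synapses input+recurrent connections
        # are active, chosen once at construction time (hypothetical scheme).
        total = n_neurons * (n_inputs + n_neurons)
        idx = torch.randperm(total, generator=g)[:n_synapses]
        mask = torch.zeros(total)
        mask[idx] = 1.0
        self.register_buffer("mask", mask.view(n_neurons, n_inputs + n_neurons))
        self.weight = nn.Parameter(0.1 * torch.randn(n_neurons, n_inputs + n_neurons))
        self.readout = nn.Linear(n_neurons, 1)  # single steering command
        self.n_neurons = n_neurons

    def forward(self, x, h):
        # The mask keeps connectivity sparse throughout training.
        w = self.weight * self.mask
        h = torch.tanh(torch.cat([x, h], dim=-1) @ w.t())
        return self.readout(h), h

policy = SparseRecurrentPolicy()
h = torch.zeros(1, policy.n_neurons)
steering, h = policy(torch.randn(1, 32), h)  # one control step
```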

Bio: Radu Grosu is a full Professor and the Head of the Institute of Computer Engineering at the Faculty of Informatics, Vienna University of Technology. He is also the Head of the Cyber-Physical Systems Group within the Institute of Computer Engineering, and a Research Professor at the Department of Computer Science of the State University of New York at Stony Brook, USA. His research interests include the modeling, analysis, and control of cyber-physical and biological systems. His application focus includes IoT, smart CPS (e.g. smart mobility, smart production, smart buildings, smart energy, smart farming, smart health care), cardiac-myocyte networks, neural networks, and genetic regulatory networks. Radu Grosu is the recipient of the National Science Foundation CAREER Award, the State University of New York Research Foundation Promising Inventor Award, and the Association for Computing Machinery Service Award, and he is an elected member of the International Federation for Information Processing Working Group 2.2. Before his appointment at the Vienna University of Technology, he was an Associate Professor in the Department of Computer Science of the State University of New York at Stony Brook, where he co-directed the Concurrent Systems Laboratory and co-founded the Systems Biology Laboratory. Radu Grosu earned his doctorate (Dr.rer.nat.) in Computer Science from the Faculty of Informatics of the Technical University of Munich, Germany. He was subsequently a Research Associate in the Department of Computer and Information Science of the University of Pennsylvania, and then an Assistant and later Associate Professor in the Department of Computer Science of the State University of New York at Stony Brook, USA.

Gradient compression for efficient distributed deep learning

Nikos Deligiannis
Vrije Universiteit Brussel and imec, Belgium
ndeligia [at] etrovub be

Abstract: Recent successes in artificial intelligence and machine learning have been achieved with deep learning models that contain a large number of parameters and are trained on massive amounts of data. Training such deep networks on a single machine (given a fixed set of hyperparameters) can take weeks. An answer to this problem is data-parallel distributed training, in which a deep model is replicated across several computational nodes that have access to different chunks of the data. This approach, however, entails high communication rates and latency, because the computed gradients need to be shared among nodes at every iteration. We will elaborate on various gradient compression strategies proposed to address this bottleneck in distributed training, including gradient sparsification, quantization, and entropy encoding. We will also discuss error correction techniques that compensate for the errors introduced by gradient compression. Furthermore, we will present new communication strategies that exploit the correlation of gradients across distributed nodes to achieve further reductions in communication rate and latency.
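
As a concrete illustration of one of the strategies above, the following minimal sketch combines top-k gradient sparsification with error feedback, a common error-correction scheme: the mass discarded by compression is remembered and re-injected at the next iteration. It uses NumPy for a single worker's gradient; the all-reduce and entropy-coding steps are omitted, and the function name and sizes are illustrative, not from a specific library.

```python
# Minimal sketch: top-k gradient sparsification with error feedback.
import numpy as np

def topk_compress(grad, residual, k):
    """Keep the k largest-magnitude entries of grad + residual.

    The discarded mass becomes the new residual and is re-added at the
    next iteration, compensating the compression error over time.
    """
    corrected = grad + residual          # error feedback: re-inject past error
    flat = corrected.ravel()
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of top-k entries
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]              # the values actually transmitted
    new_residual = (flat - sparse).reshape(grad.shape)
    return sparse.reshape(grad.shape), new_residual

# Usage: per-worker residual state carried across iterations.
residual = np.zeros(1000)
grad = np.random.randn(1000)
compressed, residual = topk_compress(grad, residual, k=10)
```

In practice only the (index, value) pairs of the k surviving entries are sent, so the communication volume drops from the full gradient size to roughly k entries per worker per iteration.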

Bio: Nikos Deligiannis is an Associate Professor at the Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), and a senior AI scientist with imec, Belgium. He received the Diploma degree in electrical and computer engineering from the University of Patras in 2006 and the Ph.D. degree (Hons.) in engineering sciences from VUB in 2012. From 2013 to 2015, he was a Postdoctoral Researcher with the Department of Electronic and Electrical Engineering, University College London. His current research interests include signal processing, machine learning, and distributed learning theory and algorithms with applications in computer vision, data mining, and natural language processing. Dr. Deligiannis is a member of IEEE and EURASIP and serves as the Vice-Chair of the EURASIP Technical Area Committee on Signal and Data Analytics for Machine Learning. He received various scientific awards, including the Best Paper Award at the 2019 IEEE International Conference on Image Processing, the 2017 EURASIP Best Ph.D. Award, and the 2013 Scientific Prize FWO-IBM Belgium.

Towards Robust, Large-scale Concurrent and Distributed Programming

Philipp Haller
KTH Royal Institute of Technology, Sweden
phaller [at] kth se

Abstract: Software systems must satisfy rapidly increasing demands imposed by emerging applications. For example, new AI applications, such as autonomous driving, require quick responses to an environment that is changing continuously. At the same time, software systems must be fault-tolerant in order to ensure a high degree of availability. As it stands, however, developing these new distributed software systems is extremely challenging even for expert software engineers due to the interplay of concurrency, asynchronicity, and failure of components. The objective of our research is to develop reusable solutions to the above challenges by means of novel programming models and frameworks that can be used to build a wide range of applications. This talk reports on our work on the design, implementation, and foundations of programming models and languages that enable the robust construction of large-scale concurrent and distributed software systems.
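
To make the interplay of concurrency, asynchronicity, and failure tangible, here is a toy sketch using Python's asyncio (not one of the speaker's Scala-based programming models): a small supervision policy that bounds both the latency and the retry count of an unreliable component, keeping fault handling reusable instead of scattered through application code. All names and parameters are illustrative.

```python
# Toy illustration of asynchrony + partial failure: a supervisor retries
# an unreliable call under a timeout, with simple backoff.
import asyncio
import random

async def flaky_service(req: str) -> str:
    await asyncio.sleep(random.uniform(0.0, 0.1))   # simulated network delay
    if random.random() < 0.3:                       # simulated partial failure
        raise ConnectionError("component failed")
    return f"ok:{req}"

async def supervised(req: str, retries: int = 5, timeout: float = 0.25) -> str:
    # Reusable fault-handling policy: bounded timeout plus bounded retries.
    for attempt in range(1, retries + 1):
        try:
            return await asyncio.wait_for(flaky_service(req), timeout)
        except (ConnectionError, asyncio.TimeoutError):
            if attempt == retries:
                raise                               # give up after last attempt
            await asyncio.sleep(0.05 * attempt)     # simple backoff

print(asyncio.run(supervised("sensor-frame")))
```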

Bio: Philipp Haller is an Associate Professor of Computer Science at KTH Royal Institute of Technology in Stockholm, Sweden. He was part of the team that received the 2019 ACM SIGPLAN Programming Languages Software Award for the development of the Scala programming language. He received a Ph.D. from École Polytechnique Fédérale de Lausanne, EPFL, Switzerland and a Diplom-Informatiker degree from Karlsruhe Institute of Technology, Germany. His main research interests are programming language design and implementation, type systems, concurrency, and distributed programming.