Clustering and classification play a central role in machine learning systems for processing images, text, high-dimensional data, graph-based data, and many other settings in unsupervised and supervised learning. Despite their long history, clustering and classification remain active areas of research, and over the past decade a variety of new techniques and models have been brought to bear on them. This workshop will give an overview of new angles on this topic, including scalable algorithms for high-dimensional and graph-based data, novel models and objectives (including hierarchical and overlapping ones), applications in semi-supervised and unsupervised learning, beyond-worst-case analysis and robustness, fairness and privacy, and deep-learning-augmented techniques.

When? September 18-20, 2019. Where? Toyota Technological Institute at Chicago. What? The program includes longer keynote talks, shorter spotlight talks, a panel discussion, and a poster session. See the schedule and the list of speakers. Do I need to register? Yes, registration is free but required.

# Day 1: Wednesday, September 18

• 09:00 – 10:00

Breakfast

• 10:00 – 12:00
Jennifer Dy (Northeastern)
TBD

TBD

TBD

TBD

Zohar Karnin (Amazon NYC)
TBD

TBD

Aravindan Vijayaraghavan (Northwestern)
On Robustness to Adversarial Examples and Polynomial Optimization

The talk will focus on classification algorithms that are robust to adversarial (test-time) perturbations. While there have been numerous recent works on this topic due to its connections to test-time robustness of deep networks, there is limited theoretical understanding of several basic questions, including (1) when and how can one design provably robust learning algorithms? and (2) what is the price of achieving robustness to adversarial examples in a computationally efficient manner?

In this talk, I will show a strong connection between achieving robustness to adversarial examples and approximation algorithms for a rich class of polynomial optimization problems, thereby making progress on the above questions. In particular, this connection can be leveraged to design computationally efficient robust algorithms with provable guarantees, and to give a precise characterization of the price of achieving robustness in a computationally efficient manner for some of these classes. Time permitting, I will also describe how we can design efficient algorithms to certify robustness and generate adversarial attacks in a principled manner for $2$-layer neural networks.

• 12:00 – 14:00

Lunch

• 14:00 – 15:00
Konstantin Makarychev (Northwestern)
TBD

TBD

Yury Makarychev (TTIC)
TBD

TBD

• 15:00 – 15:30

Tea time

• 15:30 – 16:30
Moses Charikar (Stanford)
Keynote: TBD

TBD

# Day 2: Thursday, September 19

• 09:00 – 09:30

Breakfast

• 09:30 – 10:30
Elchanan Mossel (MIT)
Keynote: TBD

TBD

• 10:30 – 12:30
TBD

TBD

TBD

TBD

TBD

TBD

TBD

TBD

• 12:30 – 14:00

Lunch

• 14:00 – 15:00
TBD

TBD

TBD

TBD

• 15:00 – 15:30

Tea time

• 15:30 – 17:00
Michael Mahoney (Berkeley)
TBD

TBD

Andrew McGregor (UMass)
TBD

TBD

Andrej Risteski (CMU)
TBD

TBD

• 17:00 – 18:00
Kamalika Chaudhuri (UCSD)
Keynote: Challenges in Reliable Machine Learning

As machine learning is increasingly used in real applications, there is a need for reliable and robust methods. In this talk, we will discuss two such challenges that arise in reliable machine learning. The first is sample selection bias, where training data is available from a distribution conditioned on a sample selection policy, but the resulting classifier needs to be evaluated on the entire population. We will show how active learning can be used to obtain a small amount of labeled data from the entire population to correct this kind of sample selection bias. The second is robustness to adversarial examples -- slight strategic perturbations of legitimate test inputs that cause misclassification. We study adversarial examples in the context of a simple non-parametric method -- the k-nearest neighbor classifier -- provide bounds on its robustness as a function of k, and propose a more robust 1-nearest neighbor classifier.

Joint work with Songbai Yan, Tara Javidi, Yaoyuan Yang, Cyrus Rastchian, Yizhen Wang, and Somesh Jha.
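
The nearest-neighbor robustness notion in the abstract above can be illustrated with a minimal sketch (hypothetical data and helper names, and a heuristic random-perturbation probe rather than the speakers' actual analysis): a test point is empirically robust at radius r if its k-NN label survives sampled perturbations of norm at most r.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    """Plain k-nearest-neighbor prediction by majority vote."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]

def empirically_robust(X_train, y_train, x, k, radius, n_trials=200, seed=0):
    """Crude check: does the k-NN label survive random perturbations of
    norm <= radius? (A heuristic probe, not a robustness certificate.)"""
    rng = np.random.default_rng(seed)
    base = knn_predict(X_train, y_train, x, k)
    for _ in range(n_trials):
        delta = rng.normal(size=x.shape)
        delta *= radius * rng.random() / np.linalg.norm(delta)
        if knn_predict(X_train, y_train, x + delta, k) != base:
            return False
    return True
```

For two well-separated clusters, a point near one cluster stays robust at a small radius; as the radius approaches the inter-cluster distance, perturbations can flip the label.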

# Day 3: Friday, September 20

• 09:00 – 09:30

Breakfast

• 09:30 – 10:30
Keynote: TBD

TBD

• 10:30 – 12:30
Haipeng Luo (USC)
TBD

TBD

Krzysztof Onak (IBM Research)
TBD

TBD

Variance Reduction in Bipartite Experiments through Correlation Clustering

Causal inference in randomized experiments typically assumes that the units of randomization and the units of analysis are one and the same. In some applications, however, these two roles are played by distinct entities linked by a bipartite graph. The key challenge in such bipartite settings is how to avoid interference bias, which would typically arise if we simply randomized the treatment at the level of analysis units. One effective way of minimizing interference bias in standard experiments is through cluster randomization, but this design has not been studied in the bipartite setting, where conventional clustering schemes can lead to poorly powered experiments. We introduce a novel clustering objective and a corresponding correlation clustering algorithm that partitions a bipartite graph so as to maximize the statistical power of a bipartite experiment on that graph. We use a publicly available graph of Amazon user-item reviews to validate our solution and illustrate how it substantially increases the statistical power in bipartite experiments.
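
The cluster-randomization idea the abstract builds on can be sketched in a few lines (hypothetical data structures; this illustrates the generic design, not the paper's clustering objective): randomization units in the same cluster receive the same arm, and each analysis unit's exposure is the treated fraction of its linked randomization units.

```python
import numpy as np

def cluster_randomize(clusters, p=0.5, seed=0):
    """Assign treatment at the cluster level: every randomization unit
    in a cluster gets the same arm, which limits cross-unit interference."""
    rng = np.random.default_rng(seed)
    cluster_ids = sorted(set(clusters.values()))
    arm = {c: int(rng.random() < p) for c in cluster_ids}
    return {unit: arm[c] for unit, c in clusters.items()}

def exposure(edges, assignment):
    """Fraction of an analysis unit's linked randomization units that are
    treated -- the quantity whose dispersion a good clustering shrinks."""
    out = {}
    for analysis_unit, rand_units in edges.items():
        out[analysis_unit] = sum(assignment[u] for u in rand_units) / len(rand_units)
    return out
```

With a clustering aligned to the bipartite structure, exposures concentrate near 0 or 1, which is what makes treatment effects estimable; a poor clustering leaves many analysis units with mixed exposure.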

David Woodruff (CMU)
TBD

TBD

• 12:30 – 14:00

Lunch

• 14:00 – 15:00
TBD

TBD

Quanquan Gu (UCLA)
TBD

TBD

• 15:00 – 15:30

Tea time

• 15:30 – 16:30
Po-Ling Loh (UWisc)
Robust Information Bottleneck

We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier which is more robust to small perturbations in the input space. Our method builds upon the idea of the information bottleneck by introducing an additional penalty term that encourages the Fisher information of the extracted features to be small, when parametrized by the inputs. We derive the optimal solution explicitly when the inputs and outputs are jointly Gaussian, proving that the optimally robust features are also jointly Gaussian in that setting. Furthermore, we propose a method for optimizing the robust information bottleneck objective in general settings using a form of stochastic gradient descent that may be implemented efficiently in neural networks. Our experimental results for synthetic and real data sets show that the proposed feature extraction method indeed produces classifiers with increased robustness to perturbations.
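The penalized objective described above can be sketched in a deliberately degenerate special case (a hypothetical simplification, not the paper's estimator): for a linear encoder z = Wx, the Jacobian of the features with respect to the inputs is just W, so a Fisher-style sensitivity penalty reduces to the squared Frobenius norm of W added to the prediction loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def robust_ib_loss(W, v, X, y, beta):
    """Toy robust-IB-style objective for a linear encoder z = W x with a
    logistic read-out: cross-entropy plus beta * ||dz/dx||_F^2.
    For a linear encoder the input Jacobian is W itself, so the
    sensitivity penalty is simply the squared Frobenius norm of W."""
    Z = X @ W.T                 # extracted features, one row per example
    p = sigmoid(Z @ v)          # predicted P(y = 1)
    eps = 1e-12                 # numerical guard for log
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return ce + beta * np.sum(W ** 2)
```

Minimizing with beta > 0 trades prediction accuracy for features that vary less under input perturbations; in the general (nonlinear) setting the abstract describes, the penalty is instead estimated and optimized with stochastic gradient descent.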

Ludwig Schmidt (Berkeley)
Do ImageNet Classifiers Generalize to ImageNet?

We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of classifiers and find accuracy drops of 3% - 15% on CIFAR-10 and 11% - 14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the classifiers' inability to generalize to slightly "harder" images than those found in the original test sets.

• 16:30 – 17:30
Aaron Roth (UPenn)
Keynote: TBD

TBD

# Speakers

## Keynote Speakers

• ### Moses Charikar (Stanford University)

Moses Charikar is the Donald E. Knuth Professor of Computer Science at Stanford University. He obtained his PhD from Stanford in 2000, spent a year in the research group at Google, and was on the faculty at Princeton from 2001 to 2015. He is broadly interested in approximation algorithms (especially the power of mathematical programming approaches), metric embeddings, algorithmic techniques for big data, efficient algorithms for computational problems in high-dimensional statistics, and optimization problems in machine learning. He won the best paper award at FOCS 2003 for his work on the impossibility of dimension reduction, the best paper award at COLT 2017, and the 10-year best paper award at VLDB 2017. He was jointly awarded the 2012 Paris Kanellakis Theory and Practice Award for his work on locality-sensitive hashing and was named a Simons Investigator in theoretical computer science in 2014.

• ### Kamalika Chaudhuri (University of California, San Diego)

Kamalika Chaudhuri received a Bachelor of Technology degree in Computer Science and Engineering in 2002 from Indian Institute of Technology, Kanpur, and a PhD in Computer Science from University of California at Berkeley in 2007. In July 2010, she joined the CSE department at UC San Diego as an assistant professor. She received an NSF CAREER Award in 2013 and a Hellman Faculty Fellowship in 2012.

Kamalika's research interests are in the statistical foundations of machine learning. She is particularly interested in the foundations of trustworthy machine learning, which includes problems such as learning from sensitive data while preserving privacy, learning under sampling bias, and in the presence of an adversary.

• ### Elchanan Mossel (Massachusetts Institute of Technology)

Elchanan Mossel became a professor of mathematics at MIT in 2016. He was previously a professor of statistics at the University of Pennsylvania. Elchanan was awarded a Ph.D. in mathematics from the Hebrew University of Jerusalem. Since then, he has been a postdoc in the Theory Group at Microsoft Research Redmond, an Alfred P. Sloan Research Fellow, a Miller Research Fellow at the University of California, Berkeley, a professor of mathematics, applied mathematics, and computer science at the Weizmann Institute of Science, and a professor of statistics and computer science at the University of California, Berkeley.

He studies fundamental problems in probability and analysis, computational complexity and algorithms, as well as applications to machine learning, Markov chain Monte Carlo methods, social choice and networks, and computational biology.

• ### Aaron Roth (University of Pennsylvania)

Aaron Roth is the Class of 1940 Bicentennial Term Associate Professor of Computer and Information Sciences at the University of Pennsylvania and co-director of the Networked and Social Systems Engineering (NETS) program. He is the recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE) awarded by President Obama in 2016, an Alfred P. Sloan Research Fellowship, and research awards from Amazon, Google, and Yahoo. His research focuses on data privacy, algorithmic fairness, game theory, and machine learning. Together with Cynthia Dwork, he is the author of "The Algorithmic Foundations of Differential Privacy". Together with Michael Kearns, he is the author of "The Ethical Algorithm", forthcoming in 2019 from Oxford University Press.

• ### Kunal Talwar (Google Brain, MTV)

Kunal Talwar is a Theoretical Computer Scientist, working in the areas of Differential Privacy, Machine Learning, Algorithms, and Data Structures. He is a Research Scientist in the Google Brain team. He got his PhD from UC Berkeley in 2004 and was at Microsoft Research 2004-2014.

## Spotlight Speakers

• ### Mohammad Hossein Bateni (Google NYC)

Mohammad Hossein Bateni is a staff research scientist at Google NYC working on large-scale graph and optimization problems. Before joining the team in 2011, he obtained Ph.D. and M.A. degrees in Computer Science from Princeton University and a B.E. in Computer Engineering from Sharif University of Technology.

Hossein's research interests lie broadly in the design of approximation and distributed algorithms as well as combinatorial optimization. More recently he has focused on network design and distributed clustering problems.

• ### Jennifer Dy (Northeastern University)

Jennifer G. Dy is a Professor in the Department of Electrical and Computer Engineering at Northeastern University, Boston, MA, where she first joined the faculty in 2002. She received her M.S. and Ph.D. in 1997 and 2001, respectively, from the School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, and her B.S. degree from the Department of Electrical Engineering, University of the Philippines, in 1993. Her research spans both fundamental machine learning and its application to biomedical imaging, health, science, and engineering, with contributions in clustering, multiple alternative clustering, dimensionality reduction, feature selection and sparse methods, learning from uncertain experts, active learning, and Bayesian nonparametric models. She received an NSF CAREER award in 2004. She has served or is serving as Secretary for the International Machine Learning Society; as associate editor/editorial board member for the Journal of Machine Learning Research, the Machine Learning journal, and IEEE Transactions on Pattern Analysis and Machine Intelligence; as organizing and/or technical program committee member for premier conferences in machine learning and data mining (ICML, ACM SIGKDD, AAAI, IJCAI, UAI, AISTATS, SIAM SDM); and as program co-chair for SIAM SDM 2013 and ICML 2018.

• ### Alessandro Epasto (Google AI, NYC)

Alessandro Epasto is a research scientist at Google NYC in the graph mining team, part of the Google NYC Algorithms and Optimization team led by Vahab Mirrokni. Before joining Google in 2016, he was a postdoc at Brown University advised by Prof. Eli Upfal. He received a Ph.D. in computer science from Sapienza University of Rome, where he was advised by Prof. Alessandro Panconesi and supported by the Google European Doctoral Fellowship. His research interests include algorithmic problems in large-scale data analysis.

• ### Quanquan Gu (University of California, Los Angeles)

Quanquan Gu is an Assistant Professor of Computer Science at UCLA. His current research is in the area of artificial intelligence and machine learning, with a focus on developing and analyzing nonconvex optimization algorithms for machine learning to understand large-scale, dynamic, complex and heterogeneous data, and building the theoretical foundations of deep learning. He received his Ph.D. degree in Computer Science from the University of Illinois at Urbana-Champaign. He is a recipient of the Yahoo! Academic Career Enhancement Award, NSF CAREER Award, Simons Berkeley Research Fellowship, Adobe Research Award, and Salesforce Deep Learning Research Award.

• ### Jakub Łącki (Google Research, NYC)

Jakub Łącki is a Research Scientist at Google New York. His interests include distributed algorithms, clustering, and dynamic graph algorithms. He received his PhD from the University of Warsaw in 2015, advised by Piotr Sankowski. Before joining Google, he was a postdoctoral researcher at Sapienza University of Rome, working with Stefano Leonardi.

• ### Haipeng Luo (University of Southern California)

Haipeng Luo is an assistant professor at the CS department of the University of Southern California. He obtained his PhD from Princeton University in 2016 and spent a year at Microsoft Research, NYC as a post-doc researcher afterwards. His research interest is in theoretical and applied machine learning, with a focus on online learning, bandit algorithms, and interactive machine learning. He received several awards over the years, including three best paper and best student paper awards at ICML, NeurIPS, and COLT respectively.

• ### Varun Kanade (University of Oxford)

Varun Kanade is an Associate Professor in Computer Science at the University of Oxford and a Turing Fellow at the Alan Turing Institute in London. He was awarded a Ph.D. from Harvard University in 2012. Subsequently, he was a postdoctoral fellow supported by the FSMP at ENS, Paris, and by the Simons Foundation at UC Berkeley. His research interests lie in theoretical aspects of machine learning; he has recently been working on various problems related to clustering, randomized graph algorithms and random graphs, and supervised learning.

• ### Zohar Karnin (Amazon Web Services, NYC)

Zohar Karnin is a Principal Scientist in Amazon AWS AI. His research interests are in the area of large scale and online machine learning algorithms, and computation with limited resources. He is the technical lead for the science efforts of several components in Amazon SageMaker, including the development of built-in machine learning algorithms.

• ### Michael Mahoney (University of California, Berkeley)

Michael Mahoney is in the Department of Statistics at the University of California, Berkeley. His research focuses on algorithmic and statistical aspects of modern large-scale data analysis and large-scale machine learning. Specific topics include randomized matrix algorithms and randomized numerical linear algebra, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. He served on the National Advisory Committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI); he was on the National Research Council's Committee on the Analysis of Massive Data; he runs the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets; and he spent fall 2013 at UC Berkeley co-organizing the Simons Foundation's program on the Theoretical Foundations of Big Data Analysis.

• ### Andrew McGregor (University of Massachusetts, Amherst)

Andrew McGregor is an Associate Professor at the University of Massachusetts, Amherst. He received a B.A. degree and the Certificate of Advanced Study in Mathematics from the University of Cambridge and a Ph.D. from the University of Pennsylvania. He also spent a couple of years as a post-doc at UC San Diego and Microsoft Research SVC. He is interested in many areas of theoretical computer science and specializes in data stream algorithms, linear sketching, and communication complexity. He received the NSF CAREER Award in 2010.

• ### Rad Niazadeh (Stanford University)

Rad Niazadeh is a Motwani postdoctoral fellow at Stanford University, Department of Computer Science, where he is hosted by Amin Saberi, Moses Charikar, and Tim Roughgarden. He is joining Chicago Booth as an assistant professor in July 2020, and Google Research NYC as a research visitor in December 2019. Prior to Stanford, he obtained his Ph.D. in Computer Science from Cornell University. His research interests are broadly at the intersection of algorithms, game theory, and optimization, with a focus on applications in market design, operations research, and machine learning.

• ### Krzysztof Onak (IBM T.J. Watson Research Center)

Krzysztof Onak is a computer scientist who works at the IBM T.J. Watson Research Center near Yorktown Heights, NY. He is interested in computation with limited resources, including sublinear-time algorithms, streaming algorithms, and algorithms for modern parallel systems. Krzysztof received his Master's degree from the University of Warsaw and his PhD from the Massachusetts Institute of Technology. Before joining IBM, he was a Simons Postdoctoral Fellow at Carnegie Mellon University.

• ### Andrej Risteski (Carnegie Mellon University)

Andrej Risteski is an Assistant Professor in the Machine Learning Department at Carnegie Mellon University. His work lies in the intersection of machine learning and theoretical computer science. The broad goal of his research is theoretically understanding statistical and algorithmic phenomena and problems arising in modern machine learning. He was previously a Norbert Wiener Fellow and Applied Mathematics Instructor at MIT, and prior to that received his Ph.D. at Princeton University.

• ### Ludwig Schmidt (University of California, Berkeley)

Ludwig Schmidt is a postdoctoral researcher at UC Berkeley working with Moritz Hardt, Ben Recht, and Martin Wainwright. Ludwig’s research interests revolve around the foundations of machine learning, often with the goal of making machine learning more reliable. Before Berkeley, Ludwig completed his PhD at MIT under the supervision of Piotr Indyk. Ludwig received a Google PhD fellowship, a Microsoft Simons fellowship, a best paper award at the International Conference on Machine Learning (ICML), and the Sprowls dissertation award from MIT.

• ### Aravindan Vijayaraghavan (Northwestern University)

Aravindan Vijayaraghavan is an Assistant Professor of Computer Science at Northwestern University. After obtaining his PhD in Computer Science from Princeton University in 2012, he was a Simons Postdoctoral Fellow at Carnegie Mellon University. He also spent a year as a postdoc at the Courant Institute, with the Simons Collaboration on Algorithms & Geometry. His research interests are in designing efficient algorithms for problems in Combinatorial Optimization and Machine Learning, and in using paradigms that go Beyond Worst-Case Analysis to obtain good algorithmic guarantees.

• ### Joshua Wang (Google Research, Mountain View)

Joshua Wang is a Research Scientist at Google. Prior to that, he did a PhD at Stanford University, advised by Tim Roughgarden. His current research interests include submodular functions, graph algorithms, and algorithmic game theory. Joshua has won a SPAA 2016 Best Paper Award as well as an ESA 2014 Best Student Paper Award. His work has also been recognized with oral presentations at NeurIPS 2017 and 2018.

• ### David Woodruff (Carnegie Mellon University)

David Woodruff is an Associate Professor at Carnegie Mellon University in the School of Computer Science. Prior to that he spent ten years at the IBM Almaden Research Center, which he joined in 2007 after completing his Ph.D. at MIT in theoretical computer science. His research interests include communication complexity, data stream algorithms, machine learning, numerical linear algebra, sketching, and sparse recovery. He is the author of the book "Sketching as a Tool for Numerical Linear Algebra". He is a recipient of the 2014 Presburger Award and Best Paper Awards at STOC 2013 and PODS 2010. At IBM he was a member of the Academy of Technology and a Master Inventor.

# Organizers

• ### Vahab Mirrokni (Google Research, NYC)

Vahab Mirrokni is a Principal Research Scientist, heading the algorithms research group at Google Research, New York. He received his PhD from MIT in 2005 and his B.Sc. from Sharif University of Technology in 1999. He joined Google Research in New York in 2008, after spending a couple of years at Microsoft Research, MIT and Amazon.com. He is the co-winner of a SODA'05 best student paper award and ACM EC'08 best paper award. His research areas include algorithms, algorithmic game theory, combinatorial optimization, and social networks analysis. At Google, he is mainly working on algorithmic and economic problems related to search and online advertising. Recently he is working on online ad allocation problems, distributed algorithms for large-scale graph mining, and mechanism design for advertising exchanges.

• ### Grigory Yaroslavtsev (Indiana University, Bloomington)

Grigory Yaroslavtsev is an assistant professor of Computer Science at Indiana University and the founding director of the Center for Algorithms and Machine Learning at IU. Prior to that he was a postdoctoral fellow at the Warren Center for Network and Data Sciences at the University of Pennsylvania. He was previously a Postdoctoral Fellow in Mathematics at Brown University, ICERM. He received his Ph.D. in Theoretical Computer Science in 2014 from Pennsylvania State University and an M.Sc. in Applied Mathematics and Physics from the Academic University of the Russian Academy of Sciences in 2010. Grigory works on efficient algorithms for sparsification, summarization, and testing properties of large data, including approximation, parallel, and online algorithms, learning theory and property testing, communication and information complexity, and private data release.