Title: Performance SLAs for Cloud Data Analytics.
Abstract: A variety of data analytics systems are available as cloud services today, including Amazon Elastic MapReduce (EMR), Redshift, Azure's HDInsight, and others. To buy these services, users select and pay for a given cluster configuration: i.e., the number and type of service instances. It is well known, however, that users often have difficulty selecting configurations that meet their needs. We thus argue that today's interface for purchasing cloud services is the wrong one, and we develop a new approach that enables users to directly purchase quality-of-service levels. To achieve this vision, we develop a pair of systems, PSLAManager and PerfEnforce. Given a cloud data analytics service, PSLAManager generates a database-specific, performance-oriented SLA with multiple choices of service tiers. PerfEnforce uses elastic scaling to meet, at low cost, the SLA runtime guarantees that the user purchases. We present the core algorithms behind each system as well as their evaluation using the Myria cloud data analytics system running on Amazon EC2.
Bio: Magdalena Balazinska is a Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington and the Director of the University of Washington eScience Institute. She is also the director of the IGERT PhD Program in Big Data and Data Science and the director of the associated Advanced Data Science PhD Option. Magdalena's research interests are in the field of database management systems. Her current research focuses on data management for data science, big data systems, and cloud computing. Magdalena holds a Ph.D. from the Massachusetts Institute of Technology (2006). She is a Microsoft Research New Faculty Fellow (2007) and received the inaugural VLDB Women in Database Research Award (2016), an ACM SIGMOD Test-of-Time Award (2017), an NSF CAREER Award (2009), a 10-year most influential paper award (2010), a Google Research Award (2011), an HP Labs Research Innovation Award (2009 and 2010), a Rogel Faculty Support Award (2006), a Microsoft Research Graduate Fellowship (2003-2005), and multiple best-paper awards.
Title: Toward intelligent cloud platforms: the Resource Central experience.
Abstract: Services that rely heavily on Artificial Intelligence (AI), such as speech understanding and image recognition, have been receiving an enormous amount of attention. In this talk, I will argue that AI can also be used to optimize the platforms underlying these services, and create intelligent cloud platforms. As an example, I will overview Resource Central (RC), a novel machine learning and prediction-serving system for improving cloud resource management. We are placing RC right at the center of Microsoft Azure. To conclude, I will discuss some lessons from deploying such research efforts in production and how they relate to academic research.
Bio: Dr. Ricardo Bianchini received his PhD degree in Computer Science from the University of Rochester. After his graduate studies, he joined the faculty at the Federal University of Rio de Janeiro, and later at Rutgers University. Since 2014, he has been the Chief Efficiency Strategist at Microsoft, where he leads efforts to improve the efficiency of the company's online services and datacenters. He also leads the Systems research group at MSR Redmond. His main research interests include cloud computing and datacenter efficiency. Dr. Bianchini is a pioneer in datacenter power/energy management, energy-aware storage systems, energy-aware load distribution across datacenters, and leveraging renewable energy in datacenters. He has published nine award-winning papers and has received the CAREER award from the National Science Foundation. He is an ACM Fellow and an IEEE Fellow.