Prof. Remzi Arpaci-Dusseau, University of Wisconsin-Madison
Keynote Title: How To Find Research Problems
In this talk, I discuss how our group approaches the most basic question facing all researchers: how do we find good systems problems to work on? Through examples drawn from a research career now spanning nearly 30 years, I will present different problems we have worked on and how we arrived at them. The examples will highlight our work in file systems, storage systems, and distributed systems, including older work on reliability and more recent work on distributed systems.
Remzi Arpaci-Dusseau is the Vilas Distinguished Achievement Professor, Grace Wahba Professor, and Chair of Computer Sciences at UW-Madison. He co-leads a group with Professor Andrea Arpaci-Dusseau. Together, they have graduated 29 Ph.D. students and won numerous best-paper awards, and many of their innovations are used in commercial systems. For their work, Andrea and Remzi received the ACM-SIGOPS Weiser Award for "outstanding leadership, innovation, and impact in storage and computer systems research" and were named ACM Fellows for "contributions to storage and computer systems". Remzi has won the SACM Professor-of-the-Year award six times, the Rosner "Excellent Educator" award, and the Chancellor's Distinguished Teaching Award. Andrea and Remzi's operating systems book (www.ostep.org) is downloaded millions of times yearly and used at numerous institutions worldwide.
Dr. Seetharami Seelam
Keynote Title: Hardware-Middleware System Co-Design for Flexible Training of Foundation Models in the Cloud
Foundation models are a new class of AI models that are trained on broad data (typically via self-supervision) and that can be adapted to many different downstream tasks. Because self-supervision enables training on massive amounts of unlabeled data, these models have grown to hundreds of billions of parameters, and training a single foundation model can take many months on hundreds of GPUs. AI systems and middleware are therefore critical to training these foundation models in a scalable, cost-effective manner.
In this talk, I will discuss the architecture of a new cloud-based AI system for training large-scale foundation models. The system is built entirely from an open-source software stack, from the hypervisor to guest operating systems and from container platforms to AI frameworks and libraries. It is built natively into the IBM Cloud platform, and its hardware and software stack is optimized for training foundation models on hundreds of GPUs. On this platform, we have trained various foundation models to state-of-the-art accuracy in short training times. I will discuss the architecture, our operational experience, and thoughts on directions for the co-design of hardware and middleware in future AI systems.
Dr. Seetharami Seelam is a Principal Research Staff Member and Technical Lead at the IBM T. J. Watson Research Center, where he provides leadership for the Hybrid Cloud Infrastructure Research group. Dr. Seelam is responsible for defining the strategy and executing the plan for HPC, AI, and Quantum on IBM Hybrid Cloud platforms. He has over 15 years of industry experience as an engineer, research scientist, leader, strategist, public speaker, educator, and architect in cloud infrastructure, cloud and AI platforms, and high-performance computing. His technical contributions to IBM have earned him one IBM Corporate Award, seven Outstanding Technical Accomplishment Awards (OTAA), and two Outstanding Innovation Awards. He has filed more than 40 patents (25 issued) and published over 50 papers, receiving four best-paper awards and one outstanding-paper award.
| Milestone | Date |
| --- | --- |
| Full Paper Submission | |
| Rebuttal | July 29th – August 1st, 2022 |
| Author Notification | August 9th, 2022 |
| Revised Submissions | September 9th, 2022 |
| Notifications of Decisions of Revised Papers | September 23rd, 2022 |
| Camera Ready | October 3rd, 2022 |
| Workshop Proposal Submission | |
| Industry Track Full Paper Submission | |
| Doctoral Symposium Submission | |
| Demo & Poster Submission | |
| Conference | November 7th – 11th, 2022 |