Carbon Connect: An Ecosystem for Sustainable Computing

Date and Time
Tuesday, November 19, 2024 - 1:30pm to 2:30pm
Location
Computer Science Large Auditorium (Room 104)
Type
Talk
Host
Margaret Martonosi

Benjamin Lee
As the impact of artificial intelligence (AI) continues to proliferate, computer architects must assess and mitigate its environmental impact. This talk will survey strategies for reducing the carbon footprint of AI computation and datacenter infrastructure, drawing on data and experiences from industrial, hyperscale systems. First, we analyze the embodied and operational carbon implications of super-linear AI growth. Second, we re-think datacenter infrastructure and define a solution space for carbon-free computation with renewable energy, utility-scale batteries, and job scheduling. Finally, we develop strategies for datacenter demand response, incentivizing both batch and real-time workloads to modulate power usage in ways that reflect their performance costs. In summary, the talk provides a broad perspective on sustainable computing and outlines the many remaining directions for future work.

Bio: Benjamin C. Lee is a Professor of Electrical and Systems Engineering and of Computer and Information Science at the University of Pennsylvania. He is also a visiting researcher at Google in the Global Infrastructure Group. Dr. Lee’s research focuses on computer architecture (microprocessors, memories, datacenters), energy efficiency, and environmental sustainability. He builds interdisciplinary links to machine learning and algorithmic economics to better design and manage computer systems. His research on sustainable computing, in collaboration with Harvard, received an Expedition in Computing award from the National Science Foundation in 2024.

He completed his postdoctoral work at Stanford University, received his Ph.D. from Harvard University, and his B.S. from the University of California, Berkeley. He has also held visiting positions at Meta AI, Microsoft Research, Intel Labs, and Lawrence Livermore National Laboratory. He is an IEEE Fellow and an ACM Distinguished Scientist.


Cosponsored by the School of Engineering and Applied Science William Pierson Field Fund and the Department of Computer Science

In-person attendance is open to Princeton University faculty, staff and students

To request accommodations for a disability please contact Donna Ghilino, dg3548@princeton.edu

Contributions to and/or sponsorship of any event do not constitute departmental or institutional endorsement of the specific program, speakers, or views presented.

Inioluwa Deborah Raji: Audits and Accountability in the Age of Artificial Intelligence

Date and Time
Friday, April 5, 2024 - 1:30pm to 2:50pm
Location
Robertson Hall 001
Type
Talk
Host
Lydia Liu

Inioluwa Deborah Raji is a Nigerian-Canadian computer scientist and activist who works on algorithmic bias, AI accountability, and algorithmic auditing. She is a Mozilla fellow and a Ph.D. student in computer science at the University of California, Berkeley. In the past, she worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products.

She has also worked with Google's Ethical AI team and has been a research fellow at the Partnership on AI and the AI Now Institute at New York University, working on various projects to operationalize ethical considerations in ML engineering practice. She has been named to the Forbes 30 Under 30 and the MIT Technology Review 35 Innovators Under 35 lists.

Raji’s lecture will be followed by a fireside chat with Lydia Liu, assistant professor of computer science.

Systems and Networking - Basil: Breaking up BFT with ACID (transactions)

Date and Time
Wednesday, March 6, 2024 - 2:00pm to 3:00pm
Location
Computer Science 402
Type
Talk
Speaker
Natacha Crooks, from UC Berkeley
Host
Wyatt Lloyd

This talk will present Basil, a new transactional Byzantine Fault Tolerant database. Basil leverages ACID transactions to scalably implement the abstraction of a trusted shared log in the presence of Byzantine actors. Unlike traditional BFT approaches, Basil executes non-conflicting operations in parallel and commits transactions in a single round-trip during fault-free executions. This approach improves throughput over traditional BFT systems by four to five times. Basil's novel recovery mechanism further minimizes the impact of failures: with 30% Byzantine clients, throughput drops by less than 25% in the worst case.

Bio: Natacha Crooks is an Assistant Professor at UC Berkeley. She works at the intersection of distributed systems and databases. Her recent work focuses on developing scalable systems with strong integrity and privacy guarantees. She is a recipient of a VMware Early Career Faculty Grant, the Dennis Ritchie Doctoral Dissertation Award, and the IEEE TCDE Rising Star Award.

LLM Forum: A Conversation with Wai Chee Dimock

Date and Time
Wednesday, November 8, 2023 - 4:30pm to 6:00pm
Location
Friend Center 101
Type
Talk
Speaker
Wai Chee Dimock, from Yale University

Wai Chee Dimock
Recent breakthroughs in Artificial Intelligence (AI) have produced a new class of neural networks called Large Language Models (LLMs) that demonstrate a remarkable capability to generate fluent, plausible responses to prompts posed in natural language. While LLMs have already revolutionized certain industry applications, the recent debut of ChatGPT has generated new anxiety and curiosity about machine intelligence, especially in the way we teach, research, tell stories and report facts.

The Princeton LLM Forum is bringing together leading scholars and researchers from a variety of disciplines and fields to discuss the implications that large language models (LLMs) have on our understanding of language, society, culture, and theory of mind. Join us for our second panel, a discussion between Wai Chee Dimock, Professor of English at Yale University, and Meredith Martin, Associate Professor of English and Director of the Center for Digital Humanities at Princeton University.

Wai Chee Dimock writes about public health, climate change, and indigenous communities, focusing on the symbiotic relation between human and nonhuman intelligence. She is now at Harvard's Center for the Environment, working on a new book, "AI, Microbes, and Us: Risky Partners in an Age of Pandemics and Climate Change." A collaborative project, "AI for Climate Resilience," is co-sponsored by Stanford's Institute for Human-Centered Artificial Intelligence and Yale's Jackson School of Global Affairs. Dimock's most recent book is Weak Planet (2020). Other books include Through Other Continents: American Literature Across Deep Time (2006); Shades of the Planet (2007); and a team-edited anthology, American Literature in the World: Anne Bradstreet to Octavia Butler (2017). Her 1996 book, Residues of Justice: Literature, Law, Philosophy, was reissued in a new edition in 2021. Her essays have appeared in Artforum, Chronicle of Higher Education, The Hill, Los Angeles Review of Books, New York Times, New Yorker, and Scientific American.


This event is sponsored by the Humanities Council, the Princeton Center for the Digital Humanities, and the Department of Computer Science.

LLM Forum: A Conversation with Meredith Whittaker

Date and Time
Wednesday, October 25, 2023 - 4:30pm to 6:30pm
Location
Friend Center 101
Type
Talk
Speaker
Meredith Whittaker, from Signal

Meredith Whittaker
Recent breakthroughs in Artificial Intelligence (AI) have produced a new class of neural networks called Large Language Models (LLMs) that demonstrate a remarkable capability to generate fluent, plausible responses to prompts posed in natural language. While LLMs have already revolutionized certain industry applications, the debut of ChatGPT has generated new anxiety and curiosity about machine intelligence, especially in the way we teach, research, tell stories and report facts.

The Princeton LLM Forum is bringing together leading scholars and researchers from a variety of disciplines and fields to discuss the implications that large language models (LLMs) have on our understanding of language, society, culture, and theory of mind. Join us for our first panel, a discussion between Meredith Whittaker, President of Signal, and Arvind Narayanan, Professor of Computer Science and Director of the Center for Information Technology Policy at Princeton, about the implications of LLM technology for society.

Meredith Whittaker is the President of Signal. She is the current Chief Advisor and the former Faculty Director and Co-Founder of the AI Now Institute. Her research and advocacy focus on the social implications of artificial intelligence and the tech industry responsible for it, with a particular emphasis on power and the political economy driving the commercialization of computational technology. Prior to founding AI Now, she worked at Google for over a decade, where she led product and engineering teams, founded Google's Open Research Group, and co-founded M-Lab, a globally distributed network measurement platform that now provides the world's largest source of open data on internet performance. She has advised the White House, the FCC, the FTC, the City of New York, the European Parliament, and many other governments and civil society organizations on artificial intelligence, internet policy, measurement, privacy, and security.


This event is sponsored by the Humanities Council, the Princeton Center for the Digital Humanities, and the Department of Computer Science.

Next-Generation Optical Networks for Machine Learning Jobs

Date and Time
Monday, July 31, 2023 - 2:00pm to 3:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Jennifer Rexford

Manya Ghobadi
In this talk, I will explore three elements of designing next-generation machine learning systems: congestion control, network topology, and computation frequency. I will show that fair sharing, the holy grail of congestion control algorithms, is not necessarily desirable for deep neural network training clusters. Then I will introduce a new optical fabric that optimally combines network topology and parallelization strategies for machine learning training clusters. Finally, I will demonstrate the benefits of leveraging photonic computing systems for real-time, energy-efficient inference via analog computing. I will argue that pushing the frontiers of optical networks for machine learning workloads will enable us to fully harness the potential of deep neural networks and achieve improved performance and scalability.

Bio: Manya Ghobadi is a faculty member in the EECS department at MIT. Her research spans different areas in computer networks, focusing on optical reconfigurable networks, networks for machine learning, and high-performance cloud infrastructure. Her work has been recognized by the ACM-W Rising Star award, a Sloan Fellowship in Computer Science, the ACM SIGCOMM Rising Star award, an NSF CAREER award, the Optica Simmons Memorial Speakership award, a best paper award at the Machine Learning Systems (MLSys) conference, as well as the best dataset and best paper awards at the ACM Internet Measurement Conference (IMC). Manya received her Ph.D. from the University of Toronto and spent a few years at Microsoft Research and Google before joining MIT.


To request accommodations for a disability please contact Sophia Yoo, sy6@princeton.edu, at least one week prior to the event.

To attend remotely via webinar, please register here.

Highly accurate protein structure prediction with AlphaFold

Date and Time
Thursday, September 22, 2022 - 3:00pm to 4:00pm
Location
Zoom (off campus)
Type
Talk
Speaker
Michael Figurnov, from DeepMind
Host
Ellen Zhong

Michael Figurnov
Predicting a protein’s structure from its primary sequence has been a grand challenge in biology for the past 50 years, holding the promise to bridge the gap between the pace of genomics discovery and resulting structural characterization. In this talk, we will describe work at DeepMind to develop AlphaFold, a new deep learning-based system for structure prediction that achieves high accuracy across a wide range of targets. We demonstrated our system in the 14th biennial Critical Assessment of Protein Structure Prediction (CASP14) across a wide range of difficult targets, where the assessors judged our predictions to be at an accuracy “competitive with experiment” for approximately two-thirds of the proteins. The talk will focus on the underlying machine learning ideas, while also touching on the implications for biological research.

Bio: Michael Figurnov is a Staff Research Scientist at DeepMind. He has been working with the AlphaFold team for the past four years. Before joining DeepMind, he did his Ph.D. in Computer Science at the Bayesian Methods Research Group under the supervision of Dmitry Vetrov. His research interests include deep learning, Bayesian methods, and machine learning for biology.


Manifold learning uncovers hidden structure in complex cellular state space

Date and Time
Friday, April 5, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Mona Singh

David van Dijk
In the era of big biological data, there is a pressing need for methods that visualize, integrate and interpret high-throughput high-dimensional data to enable biological discovery. There are several major challenges in analyzing high-throughput biological data. These include the curse of (high) dimensionality, noise, sparsity, missing values, bias, and collection artifacts. In my work, I try to solve these problems using computational methods that are based on manifold learning. A manifold is a smoothly varying low-dimensional structure embedded within high-dimensional ambient measurement space. In my talk, I will present a number of my recently completed and ongoing projects that utilize the manifold, implemented using graph signal processing and deep learning, to understand large biomedical datasets. These include MAGIC, a data denoising and imputation method designed to ‘fix’ single-cell RNA-sequencing data, PHATE, a dimensionality reduction and visualization method specifically designed to reveal continuous progression structure, and two deep learning methods that use specially designed constraints to allow for deep interpretable representations of heterogeneous systems. I will demonstrate that these methods can give insight into diverse biological systems such as breast cancer epithelial-to-mesenchymal transition, human embryonic stem cell development, the gut microbiome, and tumor infiltrating lymphocytes. 

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Sara B. Thibeault at thibeault@princeton.edu, at least one week prior to the event.

Democratizing Web Automation: Programming for Social Scientists and Other Domain Experts

Date and Time
Thursday, April 11, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Andrew Appel

Sarah Chasins
We have promised social scientists a data revolution, but it hasn’t arrived.  What stands between practitioners and the data-driven insights they want? Acquiring the data.  In particular, acquiring the social media, online forum, and other web data that was supposed to help them produce big, rich, ecologically valid datasets.  Web automation programming is resistant to high-level abstractions, so end-user programmers end up stymied by the need to reverse engineer website internals—DOM, JavaScript, AJAX.  Programming by Demonstration (PBD) offered one promising avenue towards democratizing web automation.  Unfortunately, as the web matured, the programs became too complex for PBD tools to synthesize, and web PBD progress stalled.

In this talk, I’ll describe how I reformulated traditional web PBD around the insight that demonstrations are not always the easiest way for non-programmers to communicate their intent. By shifting from a purely Programming-By-Demonstration view to a Programming-By-X view that accepts a variety of user-friendly inputs, we can dramatically broaden the class of programs within reach of end-user programmers. My Helena ecosystem combines (i) usable PBD-based program drafting tools, (ii) learnable programming languages, and (iii) novel programming environment interactions. The end result: non-coders write Helena programs in 10 minutes that can handle the complexity of modern webpages, while coders attempt the same task and time out in an hour. I’ll conclude with predictions about the abstraction-resistant domains that will fall next—robotics, analysis of unstructured texts, image processing—and how hybrid PL-HCI breakthroughs will vastly expand access to programming.

Bio:
Sarah Chasins is a Ph.D. candidate at UC Berkeley, advised by Ras Bodik.  Her research interests lie at the intersection of programming languages and human-computer interaction.  Much of her work is shaped by ongoing collaborations with social scientists, data scientists, and other non-traditional programmers.  She has been awarded an NSF graduate research fellowship and a first place award in the ACM Student Research Competition. 

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.

AlphaGo and the Computational Challenges of Machine Learning

Date and Time
Tuesday, April 9, 2019 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Ryan Adams

Chris Maddison
Many computational challenges in machine learning involve the three problems of optimization, integration, and fixed-point computation. These three can often be reduced to each other, so they may also provide distinct vantages on a single problem. In this talk, I present a small part of this picture through a discussion of my work on AlphaGo and two vignettes on my work on the interplay between optimization and Monte Carlo. AlphaGo is the first computer program to defeat a world-champion player, Lee Sedol, in the board game of Go. My work laid the groundwork for the neural net components of AlphaGo and culminated in our Nature publication describing AlphaGo's algorithm, at whose core lie these three problems. In the first vignette, I present the Hamiltonian descent methods we introduced for first-order optimization. These methods are inspired by the Monte Carlo literature and can achieve fast linear convergence without strong convexity by using a non-standard kinetic energy to condition the optimization. In the second vignette, I cover our A* Sampling method, which reduces the problem of Monte Carlo simulation to an optimization problem, and an application to gradient estimation in stochastic computation graphs.

Bio: 
Chris Maddison is a PhD candidate in the Statistical Machine Learning Group in the Department of Statistics at the University of Oxford. He is an Open Philanthropy AI Fellow and spends two days a week as a Research Scientist at DeepMind. His research is broadly focused on the development of numerical methods for deep learning and machine learning. He has worked on methods for variational inference, numerical optimization, and Monte Carlo estimation with a specific focus on those that might work at scale with few assumptions. Chris received his MSc. from the University of Toronto. He received a NeurIPS Best Paper Award in 2014, and was one of the founding members of the AlphaGo project.

Lunch for talk attendees will be available at 12:00pm. 
To request accommodations for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, 609-258-4624 at least one week prior to the event.
