Colloquium

TrafficSense: Rich Road and Traffic Monitoring Using Mobile Smartphones

Date and Time
Friday, March 14, 2008 - 11:00am to 12:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Venkat Padmanabhan, from Microsoft Research
Host
Jennifer Rexford
We consider the problem of monitoring road and traffic conditions in a city. Prior work in this area has required the deployment of dedicated sensors on vehicles and/or on the roadside, or the tracking of mobile phones by service providers. Furthermore, prior work has largely focused on the developed world, with its relatively simple traffic flow patterns. In fact, traffic flow in cities of the developing regions, which comprise much of the world, tends to be much more complex owing to varied road conditions (e.g., potholed roads), chaotic traffic (e.g., a lot of braking and honking), and a heterogeneous mix of vehicles (2-wheelers, 3-wheelers, cars, buses, etc.).

To monitor road and traffic conditions in such a setting, we present TrafficSense, a system that piggybacks on the smartphones that users carry with them. Each participating smartphone performs rich sensing using its communication radios, GPS, accelerometer, and microphone; scans for other participating phones in its neighbourhood and exchanges information with them; and reports processed information back to a central server for aggregation and reporting. We discuss key technical challenges, including energy optimization and the need for hands-free operation, present the design of TrafficSense, including how information from multiple sensors is combined to make inferences, and evaluate it in various road and traffic settings in Bangalore.

(Joint work with Prashanth Mohan and Ram Ramjee.)
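
To make the sensing concrete, the following is a minimal sketch of one such inference: flagging braking events from a stream of longitudinal accelerometer readings. It is illustrative only, not the actual TrafficSense pipeline; the window size and threshold are invented for the example.

```python
# Illustrative sketch only (not TrafficSense code): flag windows whose
# mean longitudinal acceleration drops below a braking threshold.
# Window size and threshold are invented values for the example.

def detect_braking(samples, window=3, threshold=-3.0):
    """Return start indices of windows (m/s^2 samples) whose mean
    acceleration falls below `threshold`, suggesting hard braking."""
    events = []
    for i in range(len(samples) - window + 1):
        mean_a = sum(samples[i:i + window]) / window
        if mean_a < threshold:
            events.append(i)
    return events

# Steady cruising followed by a hard stop.
accel = [0.1, 0.0, -0.2, -4.1, -4.5, -3.9, -0.1, 0.0]
print(detect_braking(accel))  # -> [3]
```

In the full system, many such signals (e.g., honking detected via the microphone, localization via GPS) would be fused before anything is reported to the server.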

Bio: Venkat Padmanabhan is a Senior Researcher and Research Manager at Microsoft Research India in Bangalore, where he founded and now leads the Mobility, Networks, and Systems group. Venkat was previously with Microsoft Research Redmond for 8.5 years. His research interests are in networked systems, and his current projects focus on mobile and sensor systems and network management. His professional service has included serving as program chair for ACM NOSSDAV 2004, ACM IMC 2005, and IEEE HotWeb 2008, and as an affiliate faculty member at the University of Washington, where he has taught and served on student thesis committees. Venkat holds a B.Tech. from IIT Delhi and an M.S. and Ph.D. from UC Berkeley, all in Computer Science.

Reinventing Partially Observable Reinforcement Learning

Date and Time
Wednesday, March 12, 2008 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Eyal Amir, from University of Illinois, Urbana-Champaign
Host
David Blei
Many complex domains offer limited information about their exact state and the way actions affect them. In such domains, autonomous agents must make decisions while simultaneously learning action models and tracking the state of the domain. This combined problem can be represented within the framework of reinforcement learning in POMDPs, and is known to be computationally difficult.

In this presentation I will describe a new framework for such combined decision making, learning, and tracking. This framework builds on our results on updating logical formulas (belief states) after deterministic actions. It includes algorithms that represent and update the set of possible action models and world states compactly and tractably. It makes decisions using this set, and updates the set after taking the chosen action. Most importantly, and somewhat surprisingly, the number of actions that our framework takes to achieve a goal is bounded polynomially by the length of an optimal plan in a fully observable, fully known domain, under mild conditions. Finally, our framework leads to a new stochastic-filtering approach that has better accuracy than previous techniques.

* Joint work with Allen Chang, Hannaneh Hajishirzi, Stuart Russell, Dafna Shahaf, and Afsaneh Shirazi (IJCAI'03, IJCAI'05, AAAI'06, ICAPS'06, IJCAI'07, AAAI'07).
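
As a toy illustration of the tracking component, consider maintaining the set of world states consistent with a history of deterministic actions and observations. This is a sketch only: the framework described in the talk represents belief states compactly as logical formulas, whereas the explicit set below exists just to make the update rules concrete.

```python
# Belief-state tracking in a deterministic, partially observable toy
# domain: the belief is the set of states consistent with the history.
# (The talk's framework encodes such beliefs as logical formulas.)

def progress(belief, action, transition):
    """Apply a deterministic action to every state in the belief."""
    return {transition(state, action) for state in belief}

def filter_obs(belief, observation, observe):
    """Keep only states that could have produced the observation."""
    return {state for state in belief if observe(state) == observation}

# Toy domain: an agent on a line of 4 cells, sensing only walls.
def step(pos, action):
    return min(3, pos + 1) if action == "right" else max(0, pos - 1)

def at_wall(pos):
    return pos in (0, 3)

belief = {0, 1, 2, 3}                     # initially: could be anywhere
belief = progress(belief, "right", step)  # -> {1, 2, 3}
belief = filter_obs(belief, True, at_wall)
print(belief)                             # -> {3}
```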

Computing Equilibria in Games

Date and Time
Wednesday, March 5, 2008 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Konstantinos Daskalakis, from UC Berkeley
Host
Sanjeev Arora
Game Theory is important for the study of large competitive environments, such as the Internet, the market, and even social and biological systems. A key tool in analyzing such systems (games) is the study of their stable states, that is, their equilibria. Understanding the properties of equilibria can give insights into the effectiveness of economic policies, engineering decisions, etc. However, due to the large scale of most interesting games, the problem of computing equilibria cannot be separated from complexity considerations.

Motivated by this challenge, I will discuss the problem of computing equilibria in games. I will show first that computing a Nash equilibrium is an intractable problem. It is not NP-complete, since, by Nash's theorem, an equilibrium is always guaranteed to exist, but it is at least as hard as solving any fixed point computation problem, in a precise complexity-theoretic sense. In view of this hardness result, I will present algorithms for computing approximate equilibria. In particular, I will describe algorithms that achieve constant factor approximations for 2-player games and give a quasi-polynomial time approximation scheme for the multi-player setting. Finally, I will consider a very natural and important class of games termed anonymous games. In these games every player is oblivious to the identities of the other players; examples arise in auction settings, congestion games, and social phenomena. I will introduce a polynomial time approximation scheme for the anonymous setting and provide surprising connections to Stein's method in probability theory.

Bio: Constantinos (or Costis) Daskalakis grew up in Athens, Greece, where he received his undergraduate degree in Electrical and Computer Engineering from the National Technical University of Athens. In 2004 he moved to California to pursue a Ph.D. in Computer Science at U.C. Berkeley under the supervision of Professor Christos H. Papadimitriou. Costis's work has focused on computational game theory and applied probability, in particular the computation of equilibria in games, the study of social networks, and computational problems in biology. His research is motivated by two questions: "How does the algorithmic perspective influence economics, biology, physics, and the social sciences?" And, "How does the study of computational problems arising from areas outside computer science transform the theory of computation?"
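
To give a feel for what an approximation algorithm in this area can look like, here is a brute-force sketch of the classical support-enumeration idea behind quasi-polynomial approximation schemes (due to Lipton, Markakis, and Mehta). It is illustrative only and not the speaker's algorithm.

```python
# Sketch of support enumeration for 2-player games: search over
# "k-uniform" mixed strategies (uniform distributions over multisets of
# k pure strategies) and keep the profile closest to a Nash equilibrium.

import itertools
import numpy as np

def eps_of(A, B, x, y):
    """Largest unilateral gain: how far (x, y) is from an exact Nash."""
    gain_row = A.dot(y).max() - x.dot(A).dot(y)
    gain_col = x.dot(B).max() - x.dot(B).dot(y)
    return max(gain_row, gain_col)

def k_uniform(n, k):
    """Yield all k-uniform mixed strategies over n pure strategies."""
    for combo in itertools.combinations_with_replacement(range(n), k):
        p = np.zeros(n)
        for i in combo:
            p[i] += 1.0 / k
        yield p

def best_k_uniform_profile(A, B, k):
    """Return the k-uniform profile with the smallest epsilon."""
    pairs = ((x, y)
             for x in k_uniform(A.shape[0], k)
             for y in k_uniform(A.shape[1], k))
    return min(pairs, key=lambda xy: eps_of(A, B, xy[0], xy[1]))

# Matching pennies: the exact equilibrium (uniform play) is 2-uniform.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x, y = best_k_uniform_profile(A, -A, k=2)
print(x, y, eps_of(A, -A, x, y))  # [0.5 0.5] [0.5 0.5] 0.0
```

Taking k on the order of (log n)/eps^2 suffices for an eps-approximate equilibrium, which is where the quasi-polynomial running time comes from.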

WordNet: A lexical resource for Natural Language Processing

Date and Time
Wednesday, February 27, 2008 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Christiane Fellbaum, from Princeton
Host
Robert Schapire
We discuss the contents, structure and uses of WordNet, a large machine-tractable lexical database developed and maintained at Princeton. This semantic network contains information about tens of thousands of English lexicalized concepts. WordNet enables automatic systems to "understand" the meanings of words, to measure their semantic similarity, and to disambiguate words with multiple meanings. A powerful tool for Natural Language Processing, WordNet is used in a wide range of applications including information retrieval, question answering, machine translation and automatic reasoning and inference.
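
As a small taste of how applications query the database, here is an example using NLTK's WordNet interface, one widely used way to access the Princeton database from Python (the WordNet data must be downloaded on first use):

```python
# Querying WordNet through NLTK. First-time setup:
#   import nltk; nltk.download('wordnet')

from nltk.corpus import wordnet as wn

# An ambiguous word maps to several synsets (lexicalized concepts).
for synset in wn.synsets('bank')[:3]:
    print(synset.name(), '-', synset.definition())

# Semantic similarity between two concepts in the noun hierarchy.
dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')
print(dog.path_similarity(cat))  # closer to 1.0 = more similar
```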

Network Data Streaming - A Computer Scientist's Journey in Signal Processing

Date and Time
Thursday, December 13, 2007 - 4:00pm to 5:30pm
Location
Friend Center 013
Type
Colloquium
Speaker
Jim Xu, from Georgia Institute of Technology
Host
Xiaojuan Ma
With the rapid growth of the Internet, network link speeds have become faster every year to accommodate more Internet users. Measuring and monitoring the traffic on such high-speed links has become an ever more challenging problem. Our team is among the first to realize that data streaming algorithms have the potential to become very useful tools for measuring and monitoring high-speed networks. Data streaming is concerned with processing a long stream of data items in one pass using a small working memory in order to estimate certain statistics of the stream. The challenge is to use this small memory to "remember" as much information pertinent to this estimation as possible. However, existing data streaming algorithms are typically designed for processing a single stream of data for a single type of statistic, and most of these algorithms cannot operate at very high link speed. Targeting the deficiencies of existing approaches, we have investigated new data streaming paradigms and mechanisms that allow us to perform large-scale distributed data streaming on tens of thousands of high-speed links and nodes, and to aggregate, compress, and interpret the streaming results for better measurement and monitoring of large networks. We have also discovered that the applications of data streaming go far beyond network measurement and monitoring. Recently, we successfully applied data streaming techniques to a seemingly unrelated area: query routing in unstructured P2P networks. In this talk, I will discuss these new applications as well as our theory of formulating data streaming problems in CS as channel design problems in EE.

Bio: Jun (Jim) Xu is an Associate Professor in the College of Computing at Georgia Institute of Technology. He received his Ph.D. in Computer and Information Science from The Ohio State University in 2000. His current research interests include data streaming algorithms for the measurement and monitoring of computer networks, algorithms and data structures for computer networks, network security, and performance modeling and simulation. He received the NSF CAREER Award in 2003 for his ongoing efforts in establishing fundamental lower-bound and tradeoff results in networking. He is a co-author of a paper that won the Best Student Paper Award at the 2004 ACM Sigmetrics/IFIP Performance joint conference, and was the faculty advisor of the student winners. He received a 2006 IBM Faculty Award for making fundamental contributions to performance evaluation methodologies.
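
To give a flavor of the data-streaming model described above, here is a sketch of one classic streaming structure, the Count-Min sketch, which estimates per-item counts of a long stream in a small fixed memory. It is a textbook example chosen for illustration, not necessarily a structure from the talk.

```python
# Count-Min sketch: one-pass, small-memory frequency estimation with
# one-sided error (hash collisions can only inflate an estimate).

import random

class CountMinSketch:
    def __init__(self, width=1024, depth=4, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        for row, salt in enumerate(self.salts):
            yield row, hash((salt, item)) % self.width

    def add(self, item, count=1):
        for row, col in self._cells(item):
            self.table[row][col] += count

    def estimate(self, item):
        return min(self.table[row][col] for row, col in self._cells(item))

# One pass over a packet stream, counting packets per flow.
cms = CountMinSketch()
for flow in ['10.0.0.1', '10.0.0.2', '10.0.0.1']:
    cms.add(flow)
print(cms.estimate('10.0.0.1'))  # -> 2 (may overestimate, never under)
```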

Taming Concurrency: A Program Verification Perspective

Date and Time
Wednesday, December 12, 2007 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Shaz Qadeer, from Microsoft Research
Host
David Walker
Concurrency, as a basic primitive for software construction, is more relevant today than ever before, primarily due to the multi-core revolution. General-purpose software applications must find ways to exploit concurrency explicitly in order to take advantage of multiple cores. However, experience has shown that explicitly parallel programs are difficult to get right. To deliver compelling software products in the multi-core era, we must improve our ability to reason about concurrency.

Generalizing well-known sequential reasoning techniques to concurrent programs is fundamentally difficult because of the possibility of interference among concurrently-executing tasks. In this lecture, I will present reduction and context-bounding, two ideas that alleviate the difficulty of reasoning about interference by creating a simpler view of a concurrent execution than an interleaving of thread instructions. Reduction reduces interference by making a sequence of instructions in a thread appear as a single atomic instruction; it allows the programmer to view an execution as a sequence of large atomic instructions. Context-bounding reduces interference by focusing on executions with a small number of context switches; it allows the programmer to view an execution as a sequence of large thread contexts. I will present the theory behind these two approaches and their realization in a number of program verification tools I have developed over the years.
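
The following is a minimal sketch of the context-bounding idea, illustrative only (the verification tools mentioned here operate on real programs, not toy instruction lists): enumerate the interleavings of two threads while permitting at most a fixed number of context switches.

```python
# Enumerate interleavings of two threads' instruction lists with at most
# `bound` context switches. By convention, execution starts in thread 0,
# so beginning with thread 1 already costs one switch.

def interleavings(t0, t1, bound, current=0, switches=0):
    threads = (t0, t1)
    if not t0 and not t1:
        yield []
        return
    for nxt in (0, 1):
        if not threads[nxt]:
            continue
        cost = switches + (nxt != current)
        if cost > bound:
            continue
        head, rest = threads[nxt][0], threads[nxt][1:]
        args = (rest, t1) if nxt == 0 else (t0, rest)
        for tail in interleavings(*args, bound, nxt, cost):
            yield [head] + tail

A, B = ['a1', 'a2', 'a3'], ['b1', 'b2', 'b3']
print(len(list(interleavings(A, B, bound=6))))  # 20: all interleavings
print(len(list(interleavings(A, B, bound=1))))  # 1: thread 0, then 1
```

The empirical observation motivating this approach is that many real concurrency bugs already manifest within a small number of context switches, so the drastically reduced search space still catches them.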

Bio: Shaz Qadeer is a researcher in the Software Reliability Research group at Microsoft Research. The goal of his research is to develop tools for improving the productivity of programmers. He has worked on many program verification tools, spanning the range from run-time verification to model checking to static analysis, with an emphasis on tools for concurrent programs. Shaz received his Ph.D. at UC Berkeley and worked at Compaq Systems Research Center before joining Microsoft Research.

Understanding the Networked Audience

Date and Time
Thursday, November 29, 2007 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Nancy Baym
Host
Edward Felten
Once viewed as passive consumers, media fans today are not just active, they are organized. Fans use the internet to form communities and networks, to produce their own artistic materials, to publicize what catches their fancy, to form personal alliances and friendships, to petition producers, and even to raise money for charities. They influence media producers in unprecedented ways, challenging old hierarchies. In a world of MySpace and YouTube, engaging fans is no longer a matter of mass communication, but of interpersonal relationship formation. This talk explores these changes and their implications for fans, producers, artists, analysts, and policy-makers.

Bio: Nancy Baym is an Associate Professor of Communication Studies at the University of Kansas. She has written many widely cited articles about online fan community and social aspects of online interaction and is the author of the book Tune In, Log On: Soaps, Fandom, and Online Community (Sage Press, Inc.). Her work has been translated into several languages. She is a co-founder and Past President of the Association of Internet Researchers, an international interdisciplinary association. She is an award-winning teacher whose courses address the use of new communication technologies in creating identities, relationships, and communities, as well as interpersonal communication and qualitative research methods. She serves on the editorial boards of the premier journals in the field, including New Media & Society, the Journal of Communication, the Journal of Computer-Mediated Communication, and The Information Society. Her blog about fan activity on the internet can be found at http://www.onlinefandom.com

Taming Wild Concurrent Programs

Date and Time
Wednesday, November 7, 2007 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Yuanyuan Zhou
Host
Kai Li
The reality of multi-core hardware has made concurrent programs pervasive. Unfortunately, writing correct concurrent programs is difficult. Because programmers are accustomed to sequential thinking, concurrent programs can easily run wild. Additionally, due to their non-deterministic nature, concurrency bugs are notoriously hard to expose during testing and to reproduce for diagnosis.

In this talk, I will describe our recent research efforts in combating wild concurrent programs. More specifically, I will talk about our findings in (1) hardware and system support, as well as data mining and machine learning/NLP techniques, for automatically inferring programmers' synchronization intent and detecting concurrency bugs; (2) systematic approaches to identifying and covering corner-case interleavings to expose more concurrency bugs during in-house testing; and (3) real-world concurrency bug characteristics that can shed some light on future research in concurrency bug detection, testing, and programming language design.
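
As one concrete example of the genre (a textbook lockset-style check in the spirit of Eraser, not necessarily one of the detectors in this talk), a shared variable can be flagged when no single lock consistently guards all of its accesses:

```python
# Lockset-style race check: intersect the set of locks held at each
# access to a variable; an empty intersection means no common lock
# guards the variable, a classic data-race warning signal.

def lockset_check(accesses):
    """accesses: iterable of (thread_id, var, locks_held) tuples.
    Returns the set of variables whose candidate lockset empties out.
    (This basic variant ignores thread identity.)"""
    candidate = {}   # var -> locks held at every access seen so far
    racy = set()
    for _thread, var, held in accesses:
        candidate[var] = candidate.get(var, set(held)) & set(held)
        if not candidate[var]:
            racy.add(var)
    return racy

# 'x' is always guarded by lock L; 'y' is once accessed with no lock.
trace = [
    (1, 'x', {'L'}),
    (2, 'x', {'L'}),
    (1, 'y', {'L'}),
    (2, 'y', set()),
]
print(lockset_check(trace))  # -> {'y'}
```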

Bio: Yuanyuan Zhou is an Associate Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. Prior to UIUC, she worked at NEC Research Institute as a scientist after completing her Ph.D. at Princeton in 2000. Her research interests span the areas of operating systems, architecture, storage systems, and software reliability. She is the recipient of an Alfred P. Sloan Fellowship (2007), the UIUC Gear Faculty Award (2006), an NSF CAREER Award (2004), the CRA-W Anita Borg Early Career Award (2005), the DOE Early Career Principal Investigator Award (2005), IBM Faculty Awards (2004 and 2005), and an IBM SUR Award (2003). She has had three papers selected for the IEEE Micro Special Issue on Top Picks from Architecture Conferences and won a Best Paper Award at SOSP 2005. She was also named to the "Incomplete List of Teachers Ranked as Excellent by Their Students" at UIUC in 2003 and 2006.

Grand Challenges in Complexity Theory

Date and Time
Thursday, October 25, 2007 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Alexander Razborov, from IAS
Host
Bernard Chazelle
Modern complexity theory is a vast, loosely defined area. Its methods and methodology can be successfully applied whenever one has a set of tasks, a class of admissible solutions, and numerical characteristics to measure the "quality" of a solution. This talk will focus on two specific scenarios: classical computational complexity and proof complexity.

The story I will tell, however, is highly representative of the methods that have been tried in the field at large, with its mixed bag of successes, setbacks, and promising directions.

I will try to reveal some of the tight, beautiful, and unexpected connections existing between different branches of complexity theory. I will discuss the "grand challenges" of the field, including "P vs NP" and questions about the power of classical proof systems.

This talk will assume no prior knowledge in theoretical computer science.

Three Beautiful Quicksorts

Date and Time
Wednesday, October 17, 2007 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Jon Bentley, from Avaya Labs Research
Host
Brian Kernighan
This talk describes three of the most beautiful pieces of code that I have ever written: three different implementations of Hoare's classic Quicksort algorithm. The first implementation is a bare-bones function in about a dozen lines of C. The second implementation starts by instrumenting the first program to measure its run time; a dozen systematic code transformations proceed to make it more and more powerful yet more and more simple, until it finally disappears in a puff of mathematical smoke. It therefore becomes the most beautiful program I never wrote. The third program is an industrial-strength C library qsort function that I built with Doug McIlroy. A theme running through all three implementations is the power of elegance and simplicity. (This talk expands on my Chapter 3 in Beautiful Code, edited by Oram and Wilson and published by O'Reilly in July 2007.)
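
For reference, the bare-bones version the talk begins with looks roughly like the following. Bentley's original is about a dozen lines of C; this Python rendering is only an illustrative sketch, not his code.

```python
# A bare-bones quicksort: recursive, in-place, Lomuto-style partition
# around the first element. No tuning, no tie handling: the point of
# the talk's first implementation is brevity, not industrial strength.

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot, m = a[lo], lo
    for i in range(lo + 1, hi + 1):
        if a[i] < pivot:
            m += 1
            a[m], a[i] = a[i], a[m]
    a[lo], a[m] = a[m], a[lo]   # move pivot to its final position
    quicksort(a, lo, m - 1)
    quicksort(a, m + 1, hi)

xs = [3, 1, 4, 1, 5, 9, 2, 6]
quicksort(xs)
print(xs)  # -> [1, 1, 2, 3, 4, 5, 6, 9]
```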