Colloquium

It's the Content, Stupid. How Content is driving Network Evolution

Date and Time
Monday, March 30, 2009 - 3:00pm to 4:30pm
Location
Friend Center 013
Type
Colloquium
Speaker
Kobus van der Merwe, from AT&T Labs - Research
Host
Jennifer Rexford
Security and robustness, operational complexity, and demand growth are arguably the most significant challenges facing large-scale networks.

In this talk I will show how Internet growth is driven by significant increases in content-related traffic. Efficiently distributing this content presents a significant challenge to service providers. I will present our work on load-aware anycast, which offers a cross-layer approach to content distribution (considering both the content and network layers).
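
As a toy illustration of the cross-layer idea (not the system described in the talk; the replica sites, weights, and thresholds below are invented), a redirection decision might combine network-layer proximity with content-layer load roughly as follows:

```python
# Minimal sketch of the idea behind load-aware anycast: pick a content
# replica using both network-layer proximity and content-layer load.
# All names, weights, and thresholds here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Replica:
    site: str
    path_cost: float   # network proximity, e.g. routing distance to the client
    load: float        # current utilization of the site, 0.0 - 1.0

def pick_replica(replicas, max_load=0.8, load_weight=10.0):
    """Prefer the closest replica, but steer traffic away from overloaded sites."""
    candidates = [r for r in replicas if r.load < max_load] or replicas
    # Penalize loaded sites so a slightly farther, lightly loaded site can win.
    return min(candidates, key=lambda r: r.path_cost + load_weight * r.load)

if __name__ == "__main__":
    replicas = [Replica("nyc", path_cost=5, load=0.95),
                Replica("chi", path_cost=9, load=0.30),
                Replica("dal", path_cost=20, load=0.10)]
    print(pick_replica(replicas).site)   # -> "chi": nearby and not overloaded
```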

This work is currently being readied for trial in the AT&T network and directly builds on our earlier work on Closed-loop Route Control, in which route selection is informed by network intelligence. As such, the work presents a successful case study of the transition from research concept to production. However, it also illustrates how difficult it is to make this transition, or indeed, more generally, to effect any change in a provider network.

In the second part of my talk I will present our recent work on a platform called ShadowNet. With ShadowNet we aim to address this fundamental challenge of network change by introducing an operational trial/test network which "sits between" a lab testbed and the production network. Specifically, each ShadowNet node is composed of a collection of carrier-grade equipment (routers, switches and servers). Each node is connected to both the Internet and other ShadowNet nodes via a (virtual) backbone. ShadowNet exploits the strong separation between logical functionality and physical realization that is offered by modern equipment. As such, ShadowNet is a sharable, programmable and composable infrastructure that allows multiple technology and service trials to be executed in parallel in a realistic operational setting. I will describe the ShadowNet control framework, our prototype implementation and deployment plans.
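
The sketch below is only an illustration of the logical-over-physical slicing idea, using invented names and a toy allocation policy; it is not ShadowNet's actual control framework or API.

```python
# Illustrative sketch only: carving per-experiment "slices" of logical routers
# out of shared physical nodes, in the spirit of separating logical
# functionality from physical realization. Not the real ShadowNet interface.

class Node:
    def __init__(self, name, logical_router_capacity):
        self.name = name
        self.free = logical_router_capacity   # logical routers still available

class Slice:
    def __init__(self, name):
        self.name = name
        self.placement = {}   # logical router -> physical node

def allocate(slice_, logical_routers, nodes):
    """Place each requested logical router on the node with the most spare capacity."""
    for lr in logical_routers:
        node = max(nodes, key=lambda n: n.free)
        if node.free == 0:
            raise RuntimeError("no capacity left for slice " + slice_.name)
        node.free -= 1
        slice_.placement[lr] = node.name
    return slice_

nodes = [Node("shadownet-nyc", 4), Node("shadownet-chi", 4)]
trial = allocate(Slice("anycast-trial"), ["lr1", "lr2", "lr3"], nodes)
print(trial.placement)   # e.g. {'lr1': 'shadownet-nyc', ...}
```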

Bio: Kobus van der Merwe is a Principal Member of Technical Staff in the Internet and Networking Systems Research Center at AT&T Labs - Research, where he is fortunate to be doing networking systems research that is both forward-looking and sometimes finds application in the AT&T network. His current research interests include network management, control and operation, network security and content distribution.

Managing the Complexity of Modern Enterprise Networks

Date and Time
Thursday, March 26, 2009 - 3:00pm to 4:30pm
Location
Friend Center 013
Type
Colloquium
Speaker
Aditya Akella, from University of Wisconsin
Host
Jennifer Rexford
Operator interviews and anecdotal evidence suggest that an operator's ability to manage a network decreases as the network becomes more complex. Today, there is no way to quantify how complex a network's design is, nor how complexity may impact network management activities. In this talk, I will present a suite of "complexity models" that describe the routing design and configuration of a network in a succinct fashion, abstracting away details of the underlying configuration languages. Our models, and the complexity metrics arising from them, capture the difficulty of configuring control and data plane behaviors on routers. They also measure the inherent complexity of the reachability constraints that a network implements via its routing design. Our models simplify network design and management by facilitating comparison between alternative designs for a network.
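
To give a flavor of what such a metric might count (the actual models and metric definitions are the speaker's; the dependency tally below is only an illustrative toy), consider counting the cross-references an operator must trace through a router's configuration:

```python
# Toy example of a configuration "complexity" count, in the spirit of the
# metrics discussed in the talk. The stanza names are invented and this is
# not the authors' metric definition.

# A router's config, abstracted as stanzas and the other stanzas they reference.
config = {
    "interface Gi0/1": ["acl INBOUND", "vlan 10"],
    "interface Gi0/2": ["acl INBOUND", "vlan 20"],
    "acl INBOUND":     ["prefix-list CUSTOMERS"],
    "router bgp 65000": ["route-map EXPORT"],
    "route-map EXPORT": ["prefix-list CUSTOMERS", "community-list NOEXPORT"],
}

# One rough proxy for complexity: how many cross-stanza references an operator
# must trace to understand, or safely change, the configuration.
referential_links = sum(len(refs) for refs in config.values())
stanzas = len(config)

print(f"{stanzas} stanzas, {referential_links} cross-references "
      f"({referential_links / stanzas:.1f} per stanza)")
```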

We tested our models on seven networks: four university networks and three enterprise networks. We validated the results through systematic interviews with the operators of five of the networks and found the metrics to be predictive of the issues operators face when reconfiguring their networks. A surprising outcome of the study was uncovering the reasons operators chose the designs they did.

Given the frequency with which configuration errors are responsible for major outages, we believe that techniques to quantify the complexity of a network's design are an important first step toward reducing that complexity. In databases, software engineering, and other fields, metrics and benchmarks have helped drive the direction of those fields by defining what is desirable and what is not. In proposing these metrics, we hope to start a similar conversation for network design.

I will end the talk with a brief description of our recent work on WAN optimization.

Unveiling Isolated and Layered Communities in Complex Networks

Date and Time
Wednesday, March 25, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Michelle Girvan, from University of Maryland, IAS
Host
Michael Freedman
In the past decade, a number of studies have focused on the statistical properties of networked systems such as social networks and the World-Wide Web. Much of this work has concentrated on properties common to many real-world networks, such as the small-world property, power-law degree distributions, and network transitivity.

In this talk, I will focus on another property found in many networks, that of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. I will discuss algorithms for discovering community structure in networks, beginning with methods that strictly partition the nodes into non-overlapping groups. These methods work by detecting the boundaries between communities and do not require the user to input the number of desired communities a priori.
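
One well-known method of this boundary-detecting kind is the Girvan-Newman edge-betweenness algorithm (co-developed by the speaker); the abstract does not name a specific algorithm, so the following is only a minimal sketch, using networkx's implementation on a synthetic two-clique graph.

```python
# Sketch of one boundary-based partitioning method: Girvan-Newman repeatedly
# removes the edge with highest betweenness (an edge likely to sit "between"
# communities) until the graph splits. Synthetic example graph.

import networkx as nx
from networkx.algorithms.community import girvan_newman

# Two 5-node cliques joined by a single bridge edge.
G = nx.disjoint_union(nx.complete_graph(5), nx.complete_graph(5))
G.add_edge(0, 5)

# girvan_newman yields successively finer partitions; take the first split.
first_split = next(girvan_newman(G))
print([sorted(c) for c in first_split])   # two communities: nodes 0-4 and 5-9
```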

I will also discuss methods for finding highly overlapping or "layered" community structure in networks. In this case, using a combination of the aforementioned techniques, simulated annealing, and other tools borrowed from statistical physics, it is possible to find multiple possible divisions of the network into different communities. I'll demonstrate that the algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.

Leakage-resistant public-key cryptography

Date and Time
Wednesday, March 18, 2009 - 4:20pm to 5:50pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Moni Naor, from Weizmann Institute, Israel
Host
Sanjeev Arora
Most of the work on the formal analysis of cryptographic schemes has traditionally concentrated on abstract adversarial models that do not capture side-channel attacks. Such attacks exploit various forms of unintended information leakage, which is inherent to almost all physical implementations. In light of the prevalence of such attacks, there have been several attempts to model them and to suggest schemes that are resistant to some of these attacks. I will describe recent developments in the area, especially those inspired by the "cold boot" attacks of Halderman et al. (USENIX Security 2008) and the model suggested by Akavia, Goldwasser and Vaikuntanathan (TCC 2009), in which adversarially chosen functions of the secret key are leaked to the attacker. In particular, I will show a new, simple construction of a public-key cryptosystem resistant to leakage of almost all of the key. I will also discuss directions for future research. Joint work with Gil Segev.
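
As a rough sketch of the kind of leakage model referred to above (the precise definitions appear in the cited papers; the notation here is generic and not taken from the talk), semantic security is required to hold even after the adversary observes adversarially chosen, length-bounded functions of the secret key:

```latex
% Informal sketch of bounded key-leakage security for a public-key scheme
% (KeyGen, Enc, Dec). Before seeing the challenge ciphertext, the adversary
% adaptively picks leakage functions f_1, ..., f_t and learns f_i(sk), subject
% to a total leakage bound \sum_i |f_i(sk)| \le \lambda. Semantic security
% must still hold:
\[
\Pr\big[\mathcal{A}\big(pk,\; \mathrm{Enc}_{pk}(m_b),\; f_1(sk),\dots,f_t(sk)\big) = b\big]
  \;\le\; \tfrac{1}{2} + \mathrm{negl}(n),
\]
% where b is a uniform bit and m_0, m_1 are adversarially chosen messages.
% "Leakage of almost all the key" means \lambda can approach (1 - o(1))|sk|.
```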

50 Years of C++

Date and Time
Wednesday, March 4, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Bjarne Stroustrup, from Texas A&M
Host
Brian Kernighan
This is a talk about the design and evolution of C++. Like all programming languages, C++ owes a lot to earlier languages and evolves through the blending of ideas. This talk tries to answer some common questions about the success of C++: Why did it succeed? At what did it succeed? How did it maintain a steady course over more than 25 years? At what did it not succeed? The scale of C++'s success was unanticipated, and its continuing strength has left many language and business theorists puzzled, so explanations are required. Given the long time span involved, and because no large system can be designed with 20-20 hindsight, a historical perspective is an essential part of any answer. A combination of technical, philosophical, and sociological issues must be considered. This talk focuses on the design aims of C++ and my philosophy of incremental language evolution relying on feedback loops. It goes into some detail on the ISO standards process, which has been central to the stability and evolution of C++. It gives a few -- but only very few -- illustrative code examples.

Grad get together

Date and Time
Wednesday, February 25, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Larry Peterson
Host
Larry Peterson

Google and the Vapnik-Chervonenkis Dimension

Date and Time
Wednesday, February 11, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Stuart Geman, from Brown University
Host
Fei-fei Li
Google engineers routinely train query classifiers, for ranking advertisements or search results, on more words than any human being sees or hears in a lifetime. A human being who sees a meaningfully new image every second for one hundred years will not see as many images as Google has in its libraries, all of which are available for training object detectors and image classifiers. Yet by human standards, the state of the art in computer understanding of language and computer-generated image analysis is primitive. What explains the gap? Why can’t learning theory tell us how to make machines that learn as efficiently as humans? Upper bounds on the number of training samples needed to learn a classifier as rich and competent as the human visual system can be derived using the Vapnik-Chervonenkis dimension, or the metric entropy, but these bounds suggest that not only does Google need more examples, but all of evolution might fall short. I will make some proposals for efficient learning and offer some mathematics to support them.
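
For concreteness, a standard textbook form of the VC-dimension sample-complexity bound alluded to above is sketched below; it is not a formula taken from the talk itself.

```latex
% Textbook upper bound on the number of labeled examples sufficient for
% PAC-learning a concept class of VC dimension d to error \epsilon with
% probability 1 - \delta (realizable case).
\[
m(\epsilon,\delta)
  \;=\; O\!\left(\frac{1}{\epsilon}
        \left( d \ln\frac{1}{\epsilon} + \ln\frac{1}{\delta} \right)\right).
\]
% The talk's point: for a classifier as rich as the human visual system,
% d (or the metric entropy) is so large that the required m exceeds not only
% Google-scale data but perhaps anything available to evolution.
```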

STAIR: The STanford Artificial Intelligence Robot Project

Date and Time
Wednesday, February 4, 2009 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Andrew Ng, from Stanford
Host
David Blei
This talk will describe the STAIR home assistant robot project, and the satellite projects that led to key STAIR components such as (i) robotic grasping of previously unknown objects, (ii) depth perception from a single still image, and (iii) multi-modal robotic perception.

Since its birth in 1956, the AI dream has been to build systems that exhibit broad-spectrum competence and intelligence. STAIR revisits this dream and seeks to integrate onto a single robot platform tools drawn from all areas of AI, including learning, vision, navigation, manipulation, planning, and speech/NLP. This is in distinct contrast to, and also represents an attempt to reverse, the 30-year-old trend of working on fragmented AI sub-fields. STAIR's goal is a useful home assistant robot, and over the long term we envision a single robot that can perform tasks such as tidying up a room, using a dishwasher, fetching and delivering items, and preparing meals.

In this talk, I'll describe our progress on having the STAIR robot fetch items from around the office, and on having STAIR take inventory of office items. Specifically, I'll describe: (i) learning to grasp previously unseen objects (including unloading items from a dishwasher); (ii) probabilistic multi-resolution maps, which enable the robot to open and use doors; and (iii) a robotic foveal+peripheral vision system for object recognition and tracking. I'll also outline some of the main technical ideas, such as learning 3-d reconstructions from a single still image and reinforcement learning algorithms for robotic control, that played key roles in enabling these STAIR components.

Andrew Ng is an Assistant Professor of Computer Science at Stanford University. His research interests include machine learning, reinforcement learning/control, and broad-competence AI. His group has won best paper/best student paper awards at ACL, CEAS, 3DRR and ICML. He is also a recipient of the Alfred P. Sloan Fellowship.

Games in Networks: the price of anarchy and learning

Date and Time
Thursday, December 11, 2008 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Eva Tardos, from Cornell
Host
Sanjeev Arora
Network games play a fundamental role in understanding behavior in many domains, ranging from communication networks through markets to social networks. Such networks are used, and also evolve, due to the selfish behavior of their users and owners. In light of these competing forces, it is surprising how efficient these networks are. It is an exciting challenge to understand the operation and success of these networks in game-theoretic terms: what principles of interaction lead selfish participants to form such efficient networks?

We will focus on congestion games and study the degradation in solution quality caused by the selfish behavior of users. We model users as learning algorithms and show that natural learning behavior can avoid the bad outcomes predicted by the price of anarchy in atomic congestion games, such as the load-balancing game. Using tools from the theory of dynamical systems and algebraic geometry, we show that when players use a class of natural learning algorithms, the distribution of play converges to the set of weakly stable equilibria, and that weakly stable equilibria are pure Nash equilibria with probability 1 when congestion costs are selected at random independently on each edge (from any monotonically parametrized distribution). The talk is a self-contained survey.

About the speaker: Eva Tardos is a Professor of Computer Science at Cornell University, where she is currently chair. Her research is in algorithm design and algorithmic game theory, an emerging area concerned with designing systems and algorithms for selfish users. She is a winner of the Fulkerson Prize and the Dantzig Prize.
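
For reference, the "price of anarchy" mentioned in the abstract above has a standard definition, sketched here in textbook form (not taken from the talk): the worst-case ratio between the social cost of a Nash equilibrium and the optimal social cost.

```latex
% Standard definition of the price of anarchy for a cost-minimization game.
\[
\mathrm{PoA}
  \;=\; \frac{\max_{s \,\in\, \mathrm{NashEq}} \mathrm{cost}(s)}
             {\min_{s} \mathrm{cost}(s)}.
\]
% The talk argues that in atomic congestion games, natural learning dynamics
% concentrate on weakly stable equilibria, avoiding some of the bad equilibria
% that make this worst-case ratio large.
```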

Computer Graphics as a Telecommunication Medium

Date and Time
Wednesday, December 3, 2008 - 4:15pm to 5:45pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Vladlen Koltun, from Stanford University
Host
Michael Freedman
I will argue that the primary contribution of computer graphics in the next decade will be to enable richer social interaction at a distance. The integration of real-time computer graphics and large-scale distributed systems will give rise to a rich telecommunication medium, currently referred to as virtual worlds. The medium provides open-ended face-to-face communication among ad-hoc groups of people in custom environments and decouples shared spatial experiences from geographic constraints.

I will outline a research agenda for enabling and advancing the medium. The three driving themes are system architectures, content creation, and interaction. System architectures aim to support the medium at planetary scale. Content creation aims to enable untrained participants to create high-quality three-dimensional content. Interaction aims to make virtual world communication seamless and natural. I will demonstrate preliminary results in each area.

Bio: Vladlen Koltun is an Assistant Professor of Computer Science at Stanford University. He directs the Virtual Worlds Group, which explores how scalable virtual world systems can be built, populated, and used. His prior work in computational geometry and theoretical computer science was recognized with the NSF CAREER Award, the Alfred P. Sloan Fellowship, and the Machtey Award.
