
Colloquium

On the Interaction of Multiple Overlay Routing

Date and Time
Wednesday, June 14, 2006 - 10:30am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
John Chi-Shing Lui, from Chinese University of Hong Kong
Host
Jennifer Rexford
In the past few years, overlay networks have received much attention, but there has been little study of the "interaction" of multiple, co-existing overlays on top of a physical network.

In addition to the previously introduced overlay routing strategy of selfish routing, we introduce a new strategy called "overlay optimal routing". Under this routing policy, the overlay seeks to minimize its weighted average delay by splitting its traffic onto multiple paths.

We establish that (i) overlay optimal routing can achieve lower delay than selfish routing, and (ii) a Nash equilibrium exists when multiple overlays adopt this strategy. Although an equilibrium point exists for overlay optimal routing, and possibly for selfish routing, we show that the interaction of multiple overlay routing may not be Pareto optimal and that some fairness anomalies in resource allocation may occur. This is worthy of attention because an overlay may not know that other overlays exist and will continue to operate at this sub-optimal point. We explore two pricing schemes to resolve these issues and show that, by incorporating a proper pricing scheme, the overlay routing game can be led to the desired equilibrium, avoiding the problems mentioned above. Extensive fluid-based simulations are performed to support the theoretical claims.
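
To make the "overlay optimal routing" objective concrete, here is a minimal sketch, not taken from the talk, of a single overlay splitting a fixed demand across several paths so as to minimize its weighted average delay; the M/M/1-style delay function, path capacities, and demand are all assumptions for illustration.

```python
# Hypothetical sketch: one overlay splits its demand across k paths to
# minimize weighted average delay, assuming each path behaves like an
# M/M/1 queue with capacity c_i (all numbers are invented).
import numpy as np
from scipy.optimize import minimize

capacities = np.array([10.0, 8.0, 5.0])   # assumed path capacities
demand = 12.0                             # total overlay traffic

def avg_delay(x):
    # weighted average delay = sum_i (x_i / demand) * 1 / (c_i - x_i)
    return np.sum((x / demand) / (capacities - x))

x0 = np.full(3, demand / 3)               # start from an even split
res = minimize(
    avg_delay, x0, method="SLSQP",
    bounds=[(0.0, c - 1e-6) for c in capacities],
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - demand}],
)
print("optimal split:", res.x, "average delay:", avg_delay(res.x))
```

When several overlays each run this optimization over shared links, the talk's game-theoretic questions (existence of a Nash equilibrium, Pareto optimality, fairness) arise.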

Quality Assessment for Super Resolution Image Enhancement

Date and Time
Monday, May 1, 2006 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Amy Reibman, from AT&T Labs - Research
Host
Jennifer Rexford
Image super-resolution (SR) creates an enhanced high-resolution (HR) image using multiple low-resolution (LR) images of the same object. A typical image formation model introduces blurring, aliasing, and added noise; super-resolution is designed to jointly reduce or remove all three. While the first SR algorithm appeared over 20 years ago, only recently have people begun exploring the performance of these algorithms. However, only objective MSE performance has been considered. In this talk, we examine subjective and objective quality assessment for super-resolution.
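
As background for the image formation model mentioned above, the sketch below simulates one low-resolution observation by blurring, decimating, and adding noise to a high-resolution image; the Gaussian blur kernel, downsampling factor, and noise level are placeholder choices, not the speaker's.

```python
# Minimal sketch of an LR observation: y = D(B(x)) + n, with assumed
# blur, decimation factor, and noise parameters.
import numpy as np
from scipy.ndimage import gaussian_filter

def observe_lr(hr_image, blur_sigma=1.5, factor=2, noise_std=2.0, rng=None):
    """Simulate one low-resolution observation of a high-resolution image."""
    rng = rng or np.random.default_rng()
    blurred = gaussian_filter(hr_image.astype(float), blur_sigma)   # blur B
    aliased = blurred[::factor, ::factor]                           # decimate D (aliasing)
    return aliased + rng.normal(0.0, noise_std, aliased.shape)      # additive noise n

hr = np.random.default_rng(0).random((64, 64)) * 255.0
lr = observe_lr(hr)
print(hr.shape, "->", lr.shape)   # (64, 64) -> (32, 32)
```

A super-resolution algorithm inverts this model jointly from several such observations of the same scene, which is why the quality of its output, and how to assess it, is the subject of the talk.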

Learning and Recognizing Real-World Scenes and Objects

Date and Time
Wednesday, April 26, 2006 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Fei-Fei Li, from University of Illinois - Urbana Champaign
Host
Szymon Rusinkiewicz
For both humans and machines, the ability to learn and recognize the semantically meaningful contents of the visual world is an essential functionality. Among the many kinds of high-level visual tasks, we are particularly interested in the questions of natural scene and generic object categorization.

In this talk, we will begin by showing a recently developed algorithm for learning and recognizing complex real-life images such as busy city streets, beaches, and kitchens. To motivate this topic, I will present a series of recent human psychophysics studies on natural scene recognition. All of these experiments converge on one prominent phenomenon of the human visual system: humans are extremely efficient and rapid in capturing the overall gist of natural images. Can we achieve such a feat in computer vision modeling? We propose here a generative Bayesian hierarchical model that learns to categorize natural images in a weakly supervised fashion. We represent an image by a collection of local regions, denoted as codewords obtained by unsupervised clustering. Each region is then represented as part of a `theme'. In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codeword distributions over the themes without such supervision. We report excellent categorization performance on a large set of 13 categories of complex scenes.
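
For readers unfamiliar with this style of model, here is a rough, hypothetical pipeline in the same spirit: local descriptors are clustered into codewords without supervision, each image becomes a codeword histogram, and an off-the-shelf topic model stands in for the latent "themes". The descriptors are random stand-ins, and this is not the hierarchical Bayesian model of the talk.

```python
# Hypothetical bag-of-codewords sketch; random vectors stand in for local
# descriptors, and scikit-learn's LDA stands in for the talk's theme model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
descriptors_per_image = [rng.normal(size=(200, 128)) for _ in range(13)]

codebook = KMeans(n_clusters=50, n_init=10, random_state=0)
codebook.fit(np.vstack(descriptors_per_image))          # unsupervised codewords

histograms = np.array([
    np.bincount(codebook.predict(d), minlength=50)
    for d in descriptors_per_image
])

themes = LatentDirichletAllocation(n_components=5, random_state=0)
theme_mix = themes.fit_transform(histograms)             # per-image theme distribution
print(theme_mix.shape)                                    # (13, 5)
```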

If time permits, I will also briefly go over some work toward generic object categorization in cluttered scenes, especially under constrained training conditions. We will highlight some progress made in one-shot learning of object categories. Learning visual models of object categories notoriously requires thousands of training examples, due to the diversity and richness of object appearance, which requires models containing hundreds of parameters. We present a method for learning object categories from just a few images (one to five). It is based on incorporating "generic" knowledge that may be obtained from previously learnt models of unrelated categories. We operate in a variational Bayesian framework: object categories are represented by probabilistic models, and "prior" knowledge is represented as a probability density function on the parameters of these models. The "posterior" model for an object category is obtained by updating the prior in the light of one or more observations. Our ideas are demonstrated on objects belonging to 101 widely varied categories.
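
The Bayesian updating at the heart of the one-shot approach can be illustrated with a toy conjugate-Gaussian example, far simpler than the variational model in the talk: a "generic" prior pooled from other categories is updated with only a few observations of a new category.

```python
# Toy illustration of prior-to-posterior updating from very few examples;
# all numbers are invented for the example.
import numpy as np

prior_mean, prior_var = 0.0, 4.0                    # "generic" prior from other categories
obs_var = 1.0                                       # assumed observation noise
new_samples = np.array([2.1, 1.8, 2.4])             # just a few examples of the new category

n = len(new_samples)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)    # conjugate Gaussian update
post_mean = post_var * (prior_mean / prior_var + new_samples.sum() / obs_var)
print(post_mean, post_var)                          # prior pulled toward the new observations
```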

Communities of Creation

Date and Time
Thursday, April 20, 2006 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Cory Ondrejka, from Linden Lab
Host
Edward Felten
Second Life is a massively-multiplayer 3D virtual world, unique in that it is completely created by its residents. Via a mix of technology, ownership, markets, and community, Second Life has dispelled many myths about the quantity and quality of user-created content. In addition, with over 150,000 customers and 20% monthly growth, Second Life is exploring amateur-to-amateur creation and education on an increasingly large scale. The talk will provide a brief overview of Second Life and discuss several examples of commerce, play, education, and research. Finally, upcoming technical challenges will be discussed, with a focus on the need for further research.

Scalable Automated Methods for Software Reliability

Date and Time
Tuesday, April 18, 2006 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Koushik Sen, from University of Illinois Urbana-Champaign
Host
David Walker
Testing with manually generated test cases is the primary technique used in industry to improve the reliability of software--in fact, such testing is reported to account for over half of the typical cost of software development. I will describe concolic testing, a systematic and efficient method that combines random and symbolic testing. Concolic testing enables automatic and systematic testing of large programs, avoids redundant test cases, and does not generate false warnings. Experiments on real-world software show that concolic testing can effectively catch generic errors such as assertion violations, memory leaks, uncaught exceptions, and segmentation faults. Combined with dynamic partial order reduction techniques and predictive analysis, concolic testing is effective in catching concurrency bugs such as data races and deadlocks, as well as specification-related bugs. I will describe my experience building two concolic testing tools, CUTE for C programs and jCUTE for Java programs, and applying them to real-world software systems. Finally, I will provide a brief overview of my research in statistical and probabilistic model checking, the application of machine learning to verifying infinite-state systems, and probabilistic programming.
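
To give a flavor of the concolic idea, here is a toy, hand-instrumented sketch (not CUTE or jCUTE, which instrument programs automatically): run the program on a concrete input, record the symbolic constraint for the branch taken, then negate it and ask a solver for an input that exercises the other branch.

```python
# Toy concolic-testing loop: the branch constraint is written by hand here,
# whereas a real tool would collect it automatically via instrumentation.
from z3 import Int, Solver, Not, sat

def program(x):
    # toy program under test; the bug hides on one branch
    if x * 2 == 12:
        raise AssertionError("bug reached")
    return x

x_sym = Int("x")

concrete = 0                               # arbitrary initial concrete input
program(concrete)                          # takes the 'else' branch
taken_branch = (x_sym * 2 != 12)           # symbolic constraint observed on that path

s = Solver()
s.add(Not(taken_branch))                   # negate the last branch decision
if s.check() == sat:
    new_input = s.model()[x_sym].as_long()
    print("next test input:", new_input)   # 6, which drives execution into the bug
```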

SSH Security, TCP Leaks, and Not-so-AccuVotes: Computer Security from Proofs to People

Date and Time
Monday, April 17, 2006 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Tadayoshi Kohno, from UC San Diego
Host
Edward Felten
Computer security research is a broad field, with research efforts ranging from the design and analysis of low-level cryptographic building blocks to the design and analysis of complex and socially important systems. My research illustrates how weak links and important issues often arise at the boundaries between different but relatively well-studied sub-areas. I provide three examples. My first example focuses on how results about authenticated encryption in standard cryptographic models lift to real systems. I show that although the popular Secure Shell (SSH) protocol uses the Encrypt-and-MAC method, which cryptographers have shown to be generically insecure, within SSH it is not only secure but provably so. In contrast, I show that although recent versions of the popular WinZip application use the Encrypt-then-MAC method, which cryptographers have proven to be secure, within WinZip it is actually insecure. I emphasize that these results are not due to any weakness in the theory, but rather underscore the need to be careful when applying theoretical results to real systems. My second example shows that one cannot ascertain the security of a system by studying that system's software in isolation, but must rather study the complete system (software and hardware) as a whole. Specifically, I describe a new privacy issue with the TCP protocol that only arises once one considers the interaction between a device's TCP software implementation and the device's underlying hardware. For my third example, I describe my discovery of attacks against the Diebold AccuVote-TS electronic voting machines. I then describe some social and technical implications of my results.
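
For reference, the sketch below shows the generic Encrypt-then-MAC composition that the cryptographic proofs are about: encrypt first, MAC the ciphertext with an independent key, and verify the tag before decrypting. It is a generic illustration, not WinZip's actual format; the talk's point is precisely that a real system can instantiate this composition insecurely.

```python
# Generic Encrypt-then-MAC sketch using AES-CTR plus HMAC-SHA256
# (illustrative only; key management and message formats are simplified).
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc_key, mac_key = os.urandom(32), os.urandom(32)        # independent keys

def encrypt_then_mac(plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def verify_and_decrypt(blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("MAC check failed")
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()

assert verify_and_decrypt(encrypt_then_mac(b"attack at dawn")) == b"attack at dawn"
```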

Analyzing Intrusions Using Operating System Level Information Flow

Date and Time
Monday, April 10, 2006 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Sam King, from University of Michigan
Host
Larry Peterson
Computers continue to get broken into, so intrusion analysis is part of most system administrators' job descriptions. System administrators must answer two main questions when analyzing intrusions: "how did the attacker gain access to my system?" and "what did the attacker do after they broke in?" Current tools for analyzing intrusions fall short because they have insufficient information to fully track the intrusion and because they cannot separate the actions of attackers from the actions of legitimate users.

This talk will focus on how system administrators can use information flow graphs to help analyze intrusions. BackTracker is used to help answer the question "how did the attacker gain access to my system?". BackTracker starts with a suspicious object (e.g., a malicious process or a trojaned executable file) and follows the attack back in time, using causal OS events, to highlight the sequence of events and objects that led to the suspicious state. Showing an information flow graph of these causally connected events and objects provides a system-wide view of the attack and significantly reduces the amount of data an administrator must examine in order to determine which application was originally exploited. ForwardTracker helps answer the question "what did the attacker do after they broke in?". ForwardTracker starts from the application that was exploited and tracks causal events forward in time to display the information flow graph of events and objects that result from the intrusion. Finally, the Bi-directional distributed BackTracker (BDB) continues the backward and forward information flow graphs across the network to highlight the set of computers on a local network that are likely to have been compromised by the attacker.
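
The backward pass can be pictured as a walk over a log of time-stamped causal events. The toy sketch below is a simplification, not the actual BackTracker implementation: starting from a suspicious object, it collects only the events that affected that object before the time it was implicated, pruning unrelated activity.

```python
# Toy backward traversal over a causal OS-event log; the events are invented.
from collections import deque

# (time, source_object, sink_object), e.g. a process writing a file,
# or a file being executed as a process.
events = [
    (1, "httpd",        "/tmp/dropper"),
    (2, "/tmp/dropper", "sh"),
    (3, "sh",           "/etc/passwd"),
    (4, "cron",         "/var/log/cron"),   # unrelated activity
]

def backtrack(suspicious, detected_at):
    implicated = {suspicious: detected_at}
    frontier = deque([suspicious])
    graph = []
    while frontier:
        obj = frontier.popleft()
        for t, src, dst in events:
            # keep events that affected `obj` before it was implicated
            if dst == obj and t <= implicated[obj]:
                graph.append((t, src, dst))
                if src not in implicated:
                    implicated[src] = t
                    frontier.append(src)
    return graph

print(backtrack("/etc/passwd", detected_at=5))
# -> the httpd -> /tmp/dropper -> sh -> /etc/passwd chain; cron's noise is pruned
```

ForwardTracker is the mirror image: start from the exploited application and follow events forward in time.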

High-Order Markov Random Fields for Low-Level Vision

Date and Time
Wednesday, April 5, 2006 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Stefan Roth, from Brown University
Host
Adam Finkelstein
I will introduce several low-level vision tasks, such as image reconstruction and optical flow estimation, and show how they can be approached in a unified way as Bayesian inference problems. One key component of these Bayesian approaches is modeling the prior distribution. In image reconstruction applications, for example in image denoising, this amounts to modeling the prior probability of observing a particular image among all possible images. I will review Markov random fields (MRFs) and show how they can be used to formulate image priors. Past MRF approaches have mostly relied on simple random field structures that only model interactions between neighboring pixels, which are not powerful enough to capture the rich statistics of natural images. In my talk I will introduce a new high-order Markov random field model, termed Fields of Experts (FoE), that better captures the structure of natural images by modeling interactions among larger neighborhoods of pixels. The parameters of this model are learned from a database of natural images using contrastive divergence learning. I will demonstrate the capabilities of the FoE model on various image reconstruction applications. Furthermore, I will show that the Fields-of-Experts model is applicable to a wide range of other low-level vision problems and discuss its application to modeling and estimating optical flow.
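
As a rough picture of what such a high-order prior looks like, the sketch below evaluates a Fields-of-Experts-style energy: a bank of linear filters is applied at every image patch, and each response is scored by a heavy-tailed (Student-t style) expert. The filters and weights here are random placeholders; in the actual model they are learned with contrastive divergence.

```python
# Rough FoE-style prior energy with placeholder filters and weights.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
filters = [rng.normal(size=(3, 3)) for _ in range(8)]   # placeholder filters J_i
alphas = np.abs(rng.normal(size=8))                     # placeholder expert weights

def foe_energy(image):
    energy = 0.0
    for J, a in zip(filters, alphas):
        resp = convolve2d(image, J, mode="valid")        # filter response at every clique
        energy += a * np.sum(np.log1p(0.5 * resp ** 2))  # Student-t style expert
    return energy

noisy = rng.normal(size=(32, 32))
print(foe_energy(noisy))
# For denoising, one would minimize foe_energy(x) + lam * ||x - noisy||^2 over x.
```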

The explore/exploit dilemma in human reinforcement learning: Computation, behavior, and neural substrates

Date and Time
Thursday, March 30, 2006 - 4:30pm to 6:00pm
Location
Fine Hall 101
Type
Colloquium
Speaker
Nathaniel Daw, from Gatsby Computational Neuroscience Unit
Host
Robert Schapire
We have rather detailed, if tentative, information about how organisms learn from experience to choose better actions. But it is much less clear how they arrange to obtain this experience. The problem of sampling unfamiliar options is a classic theoretical dilemma in reinforcement learning, because the costs and benefits of exploring unfamiliar options (which are substantial and difficult to quantify) must be balanced against those of exploiting the options that appear best given current knowledge.

Using behavioral analysis and functional neuroimaging in a bandit task, we study how humans approach this dilemma. We assess how well different exploratory strategies from reinforcement learning fit participants' trial-by-trial choices, and, having validated an algorithmic account of behavior, use it to infer subjective factors such as when subjects are exploring versus exploiting. These estimates are then used to search for neural signals related to these phenomena. The results support the hypothesis that exploration is encouraged by the active override of an exploitative choice system, rather than the alternative, computationally motivated hypothesis under which a single (putatively dopaminergic) choice system integrates information about both the exploitative and exploratory ("uncertainty bonus") values of candidate actions. Although exploration is ubiquitous, it is also difficult to study in a controlled manner: we can capture it only through the tight integration of computational, behavioral, and neural methods.
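
The sketch below illustrates, with invented parameters and payoffs, the general kind of reinforcement-learning model that gets fit to trial-by-trial bandit choices: values are updated by a delta rule, and choices follow a softmax whose temperature governs how often apparently worse arms are explored. It is an illustration of the model class, not the specific models compared in the talk.

```python
# Softmax / delta-rule bandit agent with invented payoffs and parameters.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.3, 0.5, 0.7, 0.4])    # assumed 4-armed bandit payoffs
Q = np.zeros(4)                                # learned action values
alpha, beta = 0.2, 3.0                         # learning rate, inverse temperature

for trial in range(200):
    probs = np.exp(beta * Q) / np.sum(np.exp(beta * Q))   # softmax choice rule
    arm = rng.choice(4, p=probs)                          # sometimes explores
    reward = float(rng.random() < true_means[arm])        # Bernoulli payoff
    Q[arm] += alpha * (reward - Q[arm])                   # delta-rule update

print("learned values:", np.round(Q, 2))
```

Fitting the learning rate and temperature to a subject's choices, trial by trial, is what lets individual choices be labeled as exploratory or exploitative.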

Efficient Matching for Recognition and Retrieval

Date and Time
Tuesday, March 28, 2006 - 4:00pm to 5:30pm
Location
Fine Hall 101
Type
Colloquium
Speaker
Kristen Grauman, from MIT
Host
Thomas Funkhouser
Local image features have emerged as a powerful way to describe images of objects and scenes. Their stability under variable image conditions is critical for success in a wide range of recognition and retrieval applications. However, comparing images represented by their collections of local features is challenging, since each set may vary in cardinality and its elements lack a meaningful ordering. Existing methods compare feature sets by searching for explicit correspondences between their elements, which is too computationally expensive in many realistic settings.

I will present the pyramid match, which efficiently forms an implicit partial matching between two sets of feature vectors. The matching has linear time complexity, naturally forms a Mercer kernel, and is robust to clutter or outlier features, a critical advantage for handling images with variable backgrounds, occlusions, and viewpoint changes. I will show how this dramatic gain in efficiency enables accurate and flexible image comparisons to be made on large-scale data sets, and removes the need to artificially limit the size of images' local descriptions. As a result, we can now access a new class of applications that rely on the analysis of rich visual data, such as place or object recognition and meta-data labeling. I will provide results on several important vision tasks, including our algorithm's state-of-the-art recognition performance on a challenging data set of object categories.
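
A much-simplified, one-dimensional sketch of the pyramid match idea follows (the actual kernel operates on sets of multi-dimensional feature vectors): both feature sets are histogrammed at several resolutions, the new matches formed at each coarser level are counted via histogram intersection, and coarser matches receive smaller weights.

```python
# Simplified 1-D pyramid match; bin structure and weights are illustrative.
import numpy as np

def pyramid_match(x, y, levels=4, lo=0.0, hi=1.0):
    score, prev = 0.0, 0.0
    for l in range(levels):
        bins = 2 ** (levels - l)                    # finest resolution first
        hx, _ = np.histogram(x, bins=bins, range=(lo, hi))
        hy, _ = np.histogram(y, bins=bins, range=(lo, hi))
        inter = np.sum(np.minimum(hx, hy))          # matches at this resolution
        score += (inter - prev) / 2 ** l            # weight only the new matches
        prev = inter
    return score

rng = np.random.default_rng(0)
a, b = rng.random(30), rng.random(45)               # feature sets of different sizes
print(pyramid_match(a, b))
```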
