Colloquium

An Empirical Approach to Computer Vision

Date and Time
Wednesday, February 26, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
David Martin, from University of California - Berkeley
Host
Douglas Clark
I will present an approach to computer vision research motivated by the 19th-century constructivist theory of Hermann von Helmholtz as well as the 20th-century ecological theories of James Gibson and Egon Brunswik. The thesis is that our perception can be modeled in a probabilistic framework based on the statistics of natural stimuli, or "ecological statistics".

Toward the goal of modeling perceptual grouping, we have constructed a novel dataset of 12,000 segmentations of 1,000 natural images by 30 human subjects. The subjects marked the locations of objects in the images, providing ground truth data for learning grouping cues and benchmarking grouping algorithms. We feel that the data-driven approach is critical for two reasons: (1) the data reflects ecological statistics that the human visual system has evolved to exploit, and (2) innovations in computational vision should be evaluated quantitatively.

I will first present local boundary models based on brightness, color, and texture cues, where the cues are individually optimized with respect to the dataset and then combined in a statistically optimal manner with classifiers. The resulting detector is shown to significantly outperform prior state-of-the-art algorithms. Next, we learn from the dataset how to combine the boundary model with patch-based features in a pixel affinity model to settle long-standing debates in computer vision with empirical results: (1) brightness boundaries are more informative than patches, and vice versa for color; (2) texture boundaries and patches are the two most powerful cues; (3) proximity is not a useful cue for grouping; it is simply a result of the process; and (4) both boundary-based and region-based approaches provide significant independent information for grouping.
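One simple way to fuse such cues is a logistic classifier over the local cue responses. This is only a hedged sketch: the weights and bias below are illustrative placeholders, not the values learned from the segmentation dataset.

```python
import math

def boundary_probability(brightness, color, texture, weights, bias):
    """Fuse local cue responses (each scaled to [0, 1]) into a
    boundary probability with a logistic classifier. The weights
    and bias are illustrative, not learned values."""
    z = bias + (weights[0] * brightness
                + weights[1] * color
                + weights[2] * texture)
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative weights that lean on texture, echoing the finding
# that texture cues are among the most powerful.
w, b = (1.0, 0.8, 2.0), -2.0
p_edge = boundary_probability(0.9, 0.7, 0.95, w, b)  # strong cue responses
p_flat = boundary_probability(0.1, 0.1, 0.05, w, b)  # weak cue responses
```

Training such a classifier against human-marked boundaries is what lets each cue's contribution be measured empirically rather than set by hand.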

Within this domain of image segmentation, this work demonstrates that from a single dataset encoding human perception on a high-level task, we can construct benchmarks for the various levels of processing required for the high-level task. This is analogous to the micro-benchmarks and application-level benchmarks employed in computer architecture and computer systems research to drive progress towards end-to-end performance improvement. I propose this as a viable model for stimulating progress in computer vision for tasks such as segmentation, tracking, and object recognition, using human ground truth data as the end-to-end goal.

Computer Vision and Control for Soccer Robots - The FU Fighters Team

Date and Time
Wednesday, February 19, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Raul Rojas, from University of Pennsylvania and Freie Universitaet Berlin
Host
Brian Kernighan
We have built two teams of soccer-playing robots at the Free University of Berlin that have taken part in several RoboCup tournaments, the yearly robotic soccer world championship. In the small-size league, five autonomous robots play against five others using the image provided by a video camera hanging from the ceiling. In the mid-size league, four robots play against four using their own video cameras and carrying laptops. In this league the field measures 10 by 5 meters.

In this talk, I will explain the computer vision techniques we have developed for dealing with varying lighting conditions in an extremely dynamic environment. Image segmentation and classification must be done in real time, at 30 or more frames per second. I will also explain the hierarchical control architecture we developed. Control is done using physical and virtual sensors that activate reactive behaviors with different temporal characteristics. The behaviors activate real or virtual actuators that determine the robots' movements. The small robots are extremely fast, and controlling them appropriately is a challenging problem.

Recently we have been studying how to make the robots learn from their own experience on the field. We started by letting the mid-size robots calibrate their cameras and color segmentation tables automatically, without human intervention. For the small robots, we are investigating reinforcement learning techniques that let them optimize their movements and reactions.

The FU Fighters have placed second at the RoboCup tournament three times (1999, 2000, and 2002) and have won the European Championship twice (2000 and 2002). I will show video footage of the last RoboCup competition. For more information visit www.fu-fighters.de
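The color segmentation tables mentioned above can be sketched as a quantized lookup table from pixel color to object label. This is a toy illustration, assuming hypothetical labels and sample values; it is not the team's actual implementation or calibration procedure.

```python
def build_color_table(samples):
    """Build a color-classification lookup table from labeled
    (r, g, b) pixel samples, a simplified stand-in for the color
    segmentation tables described in the talk."""
    table = {}
    for (r, g, b), label in samples:
        # Quantize each channel to 32 levels to keep the table small
        # and to generalize over small lighting variations.
        key = (r // 8, g // 8, b // 8)
        table[key] = label
    return table

def classify(table, pixel):
    """Look up a pixel's label; unseen colors stay unlabeled."""
    key = tuple(c // 8 for c in pixel)
    return table.get(key, "unknown")

# Hypothetical calibration samples: an orange ball, a blue marker.
samples = [((255, 140, 0), "ball"), ((0, 0, 255), "blue_marker")]
table = build_color_table(samples)
```

Because classification is a single table lookup per pixel, this style of segmentation can keep up with the 30-plus frames per second the talk requires.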

Logistical Networking and the Network Storage Stack

Date and Time
Wednesday, February 5, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
James S. Plank, from University of Tennessee
Host
Vivek Pai
This talk will detail the research program at the Logistical Computing and Internetwork Laboratory at the University of Tennessee. Specifically, we will detail our novel approach to network storage, which adheres to end-to-end design principles and thus promises the ability to insert writable storage into the network as a scalable, shared resource. We term the paradigm of utilizing network storage to augment communication in the network "logistical networking."

Using the IP stack as our guide, we have developed a Network Storage Stack as a way for applications to make use of network storage. The central pieces of that stack are:

- The Internet Backplane Protocol (IBP), for allocation and basic storage operations.
- The exNode, for aggregation of multiple allocations.
- The L-Bone, for storage server discovery and network proximity querying.
- The Logistical Runtime System (LoRS), for providing strong storage properties from the weak guarantees of IBP allocations.

We will describe the stack in detail, present applications that make use of it, and give performance results. We will also demonstrate "Video IBPster," an application that stores and plays video files from faulty, transient, wide-area network storage depots, the majority of which are currently on PlanetLab nodes distributed throughout the world. We will conclude with future directions.
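As a rough illustration of how the layers relate, the sketch below shows an exNode aggregating weak per-depot allocations into a more reliable logical file via replication. The class names and the replication policy are hypothetical simplifications, not the actual IBP or LoRS APIs.

```python
class IBPAllocation:
    """One best-effort storage allocation, in the spirit of IBP:
    a bounded byte range on a single depot, with weak guarantees
    (the depot may fail or the allocation may expire)."""
    def __init__(self, depot, capacity):
        self.depot = depot
        self.capacity = capacity
        self.data = b""

    def store(self, payload):
        if len(payload) > self.capacity:
            raise ValueError("payload exceeds allocation")
        self.data = payload

class ExNode:
    """Aggregates several allocations into one logical file, here
    by simple replication; a runtime layer like LoRS builds stronger
    properties (striping, checksums, encryption) on the same idea."""
    def __init__(self, allocations):
        self.allocations = allocations

    def write(self, payload):
        for alloc in self.allocations:
            alloc.store(payload)

    def read(self):
        # Tolerate failed depots by returning the first live replica.
        for alloc in self.allocations:
            if alloc.data:
                return alloc.data
        raise OSError("all replicas lost")
```

The point of the layering is that the strong end-to-end guarantees live above IBP, not inside it, mirroring how TCP builds reliability on top of IP.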

Matching algorithms and hardware: How a brain 'computes' so well

Date and Time
Wednesday, December 4, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
John Hopfield, from Princeton
Host
Sanjeev Arora
When available 'device physics' or 'device biophysics' can be directly used in an algorithm, a small quantity of computing hardware can be astonishingly effective. Evolution makes such solutions available for neurobiological computations. I will describe 'biological' models of such effective systems in an engineering perspective, to give insight into how a brain may do some of its computations.

New Tools in Cryptography

Date and Time
Monday, November 25, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dan Boneh, from Stanford
Host
Sanjeev Arora
Over the past three years we have seen a number of exciting new cryptographic constructions based on certain bilinear maps. In this talk we will survey some of these new constructions and their applications. For example, bilinear maps give rise to a new digital signature scheme with remarkable properties: it enables aggregation of many signatures, which can be used to reduce communication in secure routing protocols and shrink certificate chains. Bilinear maps have also been used to construct the first practical public key encryption scheme where public keys can be arbitrary strings. In this talk we will survey some of the cryptographic applications of bilinear maps and describe several open problems in this area. The talk will be self-contained.

Hourglass: An Architecture for Pervasive Applications

Date and Time
Thursday, November 14, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Margo Seltzer, from Harvard
Host
Larry Peterson
Mainframes to mini-computers to workstations to PCs to PDAs. What comes next in the miniaturization of computing? The answer lies in networks of tiny computational devices, potentially equipped with sensors, embedded in the world around us. This area is rich in problems from the theoretical to the applied. In this talk, I will present our vision of pervasive applications, what they'll look like, and what infrastructure we must provide to enable this next generation in computing.

Proof Tools and Correct Program Development

Date and Time
Monday, November 4, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Aaron Stump, from Washington University in St. Louis
Host
Andrew Appel
A new approach to developing correct programs is proposed. The goal of the approach is to allow developers to translate some of their intuitive ideas of why their code is correct into something machine-checkable, like a proof fragment or partial proof. These partial proofs are to be integrated with the program text and checked at compile time.

As a modest step towards this goal, extended assertions are proposed. Extended assertions allow developers to say "If these assertions hold at these program points, then this other assertion holds at this program point." A simple implementation is demonstrated for a minimal imperative programming language without recursion or loops. Programs in this language, together with their extended assertions, are compiled into logical formulas, which can be checked automatically by the CVC (Cooperating Validity Checker) system. An overview of CVC follows. The talk concludes with a look at Rogue, an experimental programming language for manipulating expressions. The compiler from the recursion-free language to CVC's logic is written in Rogue, and Rogue is being considered for writing parts of CVC itself.
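A minimal sketch of what such an extended assertion might compile to, assuming a toy straight-line program: the entry and exit assertions become an implication. A real system would discharge this formula for all inputs with CVC; the sketch below merely evaluates it at a given input.

```python
def verification_condition(x):
    """For the toy straight-line program  y = x * 2; z = y + 1,
    the extended assertion 'if x >= 0 holds at entry, then z > 0
    holds at exit' compiles to the implication evaluated here."""
    y = x * 2
    z = y + 1
    entry_assertion = x >= 0
    exit_assertion = z > 0
    # Implication: entry assertion holding forces the exit assertion.
    return (not entry_assertion) or exit_assertion
```

Without recursion or loops, each program path yields one such formula, which is what makes fully automatic checking feasible for this language.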

Partitioned Applications in Protium

Date and Time
Wednesday, October 16, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Cliff Young, from Bell Labs
Host
Brian Kernighan
_Protium_ uses a partitioned application approach to provide universal data service: making traditional desktop applications available on any Internet-connected device. We partition applications into viewers (which run on the device near the user), services (which are accessed through the network and run in managed environments), and the application-specific protocols that connect them. The goal of using a partitioned architecture is to provide consistent, responsive, remote access on a wide variety of devices, where the devices are reliably networked but may have limited communications or computational abilities. We particularly focus on hiding communication latency in a connected environment.

The network between viewers and services provides a new place to put infrastructure; Protium includes system infrastructure that supports multiple viewers sharing a service, disconnection and reconnection, name lookup, security, session management, and device adaptation.

Recovery Oriented Computing (ROC)

Date and Time
Tuesday, October 8, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
David Patterson, from UC Berkeley
Host
Robert Tarjan
It is time to broaden our performance-dominated research agenda. A four-order-of-magnitude increase in performance over 20 years means that few outside the CS&E research community believe that speed is the only problem of computer hardware and software. If we don't change our ways, our legacy may be cheap, fast, and flaky.

Recovery Oriented Computing (ROC) takes the perspective that hardware faults, software bugs, and operator errors are facts to be coped with, not problems to be solved. By concentrating on Mean Time to Repair rather than Mean Time to Failure, ROC reduces recovery time and thus offers higher availability. Since a large portion of system administration is dealing with failures, ROC may also reduce total cost of ownership. ROC principles include design for fast recovery, extensive error detection and diagnosis, systematic error insertion to test emergency systems, and recovery benchmarks to measure progress.
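The MTTR-versus-MTTF argument can be made concrete with the standard steady-state availability formula, availability = MTTF / (MTTF + MTTR). The figures below are illustrative, not numbers from the talk.

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the fraction of time a system
    is up, given mean time to failure and mean time to repair."""
    return mttf_hours / (mttf_hours + mttr_hours)

base = availability(1000.0, 1.0)         # illustrative baseline
fast_repair = availability(1000.0, 0.5)  # halve the repair time
long_mttf = availability(2000.0, 1.0)    # or double time to failure
```

Halving repair time buys essentially the same availability as doubling time to failure, which is why ROC argues that fast recovery is as valuable a target as fault avoidance.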

If we embrace availability and maintainability, systems of the future may compete on recovery performance rather than just processor performance, and on total cost of ownership rather than just system price. Such a change may restore our pride in the systems we craft.

Disappearing Security

Date and Time
Monday, September 23, 2002 - 12:30pm to 2:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dirk Balfanz, from Xerox PARC
Host
Edward Felten
The Ubiquitous Computing community has been touting the "disappearing computer" for some time now. What they mean is not that there should be no more computers. Rather, they believe that small, unobtrusive computers will be so ubiquitous (think smart wallpaper) that we won't even notice them.

In my talk, I will argue that security mechanisms need to "disappear" in the same sense, for two reasons: First, to secure the "disappearing computer" we need to come up with novel, unobtrusive security mechanisms. Second, "disappearing" security can actually make things more secure, because it gives users less opportunity to get things wrong. I will give a few examples of recent projects at PARC that have been following this notion of "disappearing security".
