Colloquium

Cryptography and Security, Theory and Practice

Date and Time
Tuesday, April 23, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Benny Pinkas, from STAR Lab, Intertrust Technologies
Host
Robert Tarjan
Research in cryptography and security has incredible potential for fruitful interaction between theory and practice. The case of public-key cryptography may be considered an example of a successful transition from theory to practice. Elsewhere, there has been less interaction between the fields. For instance, cryptographers have devised procedures for performing many seemingly impossible tasks, using zero-knowledge proofs and protocols for secure function evaluation; however, none of these procedures is in everyday use.

This talk describes a body of research that bridges this gap, presenting two results in detail. The first is a protocol involving two parties, each holding a private database. The aim of the protocol is to compute the well-known ID3 data-mining algorithm on the union of the databases, correctly computing the result without revealing any other information about the two databases. The result is unique in showing that an existing complex algorithm can be implemented in the style of theorists' "secure function evaluation". The protocol has sub-linear overhead, and can be applied to databases containing millions of transactions.
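For readers unfamiliar with ID3, the quantity computed at each step is the information-gain split choice. The minimal, non-private Python sketch below shows that computation in the clear (the function names and dict-based record format are illustrative); the point of the protocol above is to obtain the same answer when the records are divided between two parties, without either party revealing its share.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(records, attributes, label_key):
    """Core ID3 step: pick the attribute with maximal information gain.

    records: list of dicts mapping attribute name -> value.  In the secure
    protocol, the records are split between two parties and this choice is
    made on the (virtual) union without revealing either database.
    """
    base = entropy([r[label_key] for r in records])

    def gain(attr):
        split = {}
        for r in records:
            split.setdefault(r[attr], []).append(r[label_key])
        remainder = sum(len(ls) / len(records) * entropy(ls)
                        for ls in split.values())
        return base - remainder

    return max(attributes, key=gain)
```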

The second result is a Secure Human Input Protocol (SHIP), which enables human users to encrypt messages (e.g. passwords) in a way that defeats attacks by automated eavesdropping adversaries, and requires no computational aids for the users. The protocol makes novel use of several techniques that attempt to distinguish between a human user and a computer program. We give precise reductions from the security of our protocol to that of the distinguishing techniques that we use.

Parallelizing Programs using Approximate Code

Date and Time
Wednesday, April 17, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Craig Zilles, from University of Wisconsin
Host
David August
Chip multiprocessors (CMPs), in which a processor core is replicated multiple times on a single chip, are widely recognized as a means to exploit the exponentially growing transistor budgets predicted by Moore's Law while respecting design complexity and physical constraints. While some workloads (e.g., server, scientific) already exhibit sufficient thread-level parallelism, most software does not. For these other programs, the additional complexity of manual parallelization is unattractive, and traditional automatic parallelization techniques have yet to be widely adopted.

In this talk, I will present two hardware/software techniques that exploit thread-parallel resources to improve sequential program performance. The first technique uses "helper threads" to avoid stalls (due to cache misses and branch mispredictions) that would be incurred by a single-threaded execution of the program. The second technique uses a "master processor" to compute future checkpoints of program state to achieve a (speculative) parallel execution of the program. Both techniques exploit a common theme: the use of approximate versions of code to generate accurate value predictions.
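The master/worker idea can be sketched in a few lines. The toy Python program below (all names hypothetical; the real technique lives in hardware threads on a CMP, not in Python) captures the essence: an approximate version of the code cheaply predicts future checkpoints, exact segments run in parallel from those predictions, and each prediction is validated before its segment's result is committed.

```python
from concurrent.futures import ThreadPoolExecutor

def run_segment(x, step, length):
    """Run `length` steps of a sequential computation x -> step(x)."""
    for _ in range(length):
        x = step(x)
    return x

def speculative_run(x0, step, approx_step, n_steps, n_segments):
    """Toy master/worker speculation.  Assumes n_steps is divisible by
    n_segments.  Correct regardless of prediction quality: a mispredicted
    segment merely costs a sequential re-execution."""
    seg = n_steps // n_segments
    # Master pass: cheap approximate execution yields predicted checkpoints.
    preds, x = [x0], x0
    for _ in range(n_segments - 1):
        x = run_segment(x, approx_step, seg)
        preds.append(x)
    # Worker pass: exact segments run in parallel from the predictions.
    with ThreadPoolExecutor() as pool:
        ends = list(pool.map(lambda p: run_segment(p, step, seg), preds))
    # Commit: keep a segment's result only if its predicted entry state
    # matched the exact state reached by the previous segment.
    x = ends[0]
    for i in range(1, n_segments):
        x = ends[i] if preds[i] == x else run_segment(x, step, seg)
    return x

# Example: the approximation drops a small term the exact step includes.
result = speculative_run(1.0, lambda x: 0.999 * x + 0.001,
                         lambda x: 0.999 * x, 1000, 4)
```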

Estimation Problems in Machine Learning

Date and Time
Wednesday, April 10, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Peter Bartlett, from BIOwulf Technologies and Australian National University
Host
Bernard Chazelle
Statistical estimation problems arise naturally in many areas of machine intelligence. In this talk, we consider problems of this kind from two areas of machine learning: supervised learning, where the goal is to learn to make decisions in an i.i.d. setting, and reinforcement learning, where the goal is to learn to make a sequence of related decisions.

In prediction problems, such as pattern classification and regression, estimates of prediction error are important for the analysis and design of learning algorithms. We first review classical error estimates, and then describe more recent "large margin" estimates, which give a better explanation of the success of some of the most popular pattern classification techniques. All of these estimates measure the complexity of a function class without exploiting any information about the process that generated the data. We describe recent work on data-dependent error estimates, which can be much more accurate because they use the training data to capture important properties of this process.

In reinforcement learning problems, an agent chooses actions to take in some environment, aiming to maximize a reward function. Many control, scheduling, optimization and game-playing tasks can be formulated in this way. Policy gradient methods consider agents that are restricted to some set of policies, and aim to move through policy space so as to improve the performance of the agent. The central problem for such methods is that of estimating performance gradients. We present algorithms for this problem, and show how their performance depends on properties of the controlled system, such as mixing times.
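To make the gradient-estimation problem concrete, here is a generic score-function (REINFORCE-style) estimator on a toy bandit. This is a textbook illustration, not the speaker's algorithms; all names and constants below are illustrative.

```python
import numpy as np

def softmax_policy(theta):
    """Probabilities of a softmax policy over discrete actions."""
    p = np.exp(theta - theta.max())
    return p / p.sum()

def reinforce_gradient(theta, reward_fn, n_samples=1000, rng=None):
    """Monte Carlo estimate of grad_theta E[R(a)] using the identity
    grad E[R] = E[R(a) * grad log pi(a; theta)]."""
    rng = rng or np.random.default_rng(0)
    p = softmax_policy(theta)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        a = rng.choice(len(p), p=p)
        score = -p.copy()
        score[a] += 1.0          # grad log softmax at the sampled action
        grad += reward_fn(a) * score
    return grad / n_samples

# Usage: ascend the estimated gradient on a 3-armed bandit.
theta = np.zeros(3)
rewards = np.array([1.0, 2.0, 0.5])
for _ in range(200):
    theta += 0.1 * reinforce_gradient(theta, lambda a: rewards[a], n_samples=100)
```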

Learning Theory and Problems of Statistics

Date and Time
Wednesday, April 3, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Vladimir Vapnik, from NEC Research Institute
Host
Bernard Chazelle
In this talk I will discuss the main principles of inference from data developed in statistical learning theory, which considers, in a non-parametric framework, the problem of inference from a sample of a given (fixed) size.

Along with a survey of new ideas of statistical learning theory and their comparison to classical statistical approaches, I will discuss the problem of constructing learning machines that implement these new ideas.

Automatic Tools for Building Secure Systems

Date and Time
Wednesday, March 27, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dawn Song, from UC Berkeley
Host
David Walker
Building a secure system is a complex and error-prone process in computing. System designers and developers face many challenges: What does it mean for a system to be secure? How do I know whether my system is secure? Will the security of my system break if I add a new component? SSL/TLS and other standard protocols do not work in my scenario, so how can I find ways to achieve security in my system efficiently? How can I make sure the implementation adheres to the design?

Finding answers to these questions is essential to building secure, efficient systems; however, the complexity, subtlety, and interactions among different components in a large system put the problem beyond the reach of even experienced security experts, not to mention average programmers who lack security expertise. As a result, the current design and implementation process for secure systems is slow, expensive, and often results in a vulnerable system.

In my thesis, I propose a new automatic approach for building security protocols. In particular, I designed and built a suite of automatic tools, Athena, containing three components: 1) APV, an Automatic Protocol Analyzer; 2) APG, an Automatic Protocol Generator; and 3) ACG, an Automatic Code Generator. In this talk, I'll discuss how this toolkit enables a new automatic approach for building security protocols that is more efficient and economical, and offers higher security guarantees, than the current approach.
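For a flavor of what an automatic protocol analyzer must reason about, the toy Python sketch below saturates an eavesdropper's knowledge under simple Dolev-Yao-style decomposition rules and checks a secrecy property. It is purely illustrative and is not Athena's actual algorithm; the term encoding is an assumption of this sketch.

```python
def attacker_closure(knowledge, max_rounds=20):
    """Saturate attacker knowledge under Dolev-Yao decomposition rules.
    Terms are nested tuples: ('pair', a, b) for concatenation,
    ('enc', msg, key) for encryption, and plain strings for atoms.
    (Composition rules, by which the attacker builds new pairs and
    ciphertexts, are omitted; they are not needed to detect a leaked atom.)"""
    known = set(knowledge)
    for _ in range(max_rounds):
        new = set()
        for t in known:
            if isinstance(t, tuple) and t[0] == 'pair':
                new.update(t[1:])                    # split a pair
            elif isinstance(t, tuple) and t[0] == 'enc' and t[2] in known:
                new.add(t[1])                        # decrypt with a known key
        if new <= known:                             # fixed point reached
            return known
        known |= new
    return known

# Secrecy check: can an eavesdropper derive 'secret' from this traffic?
traffic = [('enc', 'secret', 'k1'), ('pair', 'k2', 'nonce')]
print('secret' in attacker_closure(traffic))         # False: k1 never leaks
```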

Event (no name)

Date and Time
Wednesday, March 20, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
TBD
Host
Andrew Appel

A Signal-Processing Framework for Forward and Inverse Rendering

Date and Time
Wednesday, March 13, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Ravi Ramamoorthi, from Stanford University
Host
Thomas Funkhouser
Understanding the nature of reflection and illumination is important in many areas of computer graphics and computer vision. In this talk, I describe a new way of looking at reflection on a curved surface, as a special type of convolution of the incident illumination and the reflective properties of the surface (technically, the bi-directional reflectance distribution function, or BRDF). We formalize these notions by deriving a convolution theorem in terms of the spherical harmonic coefficients of the lighting and BRDF. This allows us to introduce a signal-processing framework for reflection, wherein the incident lighting is the signal, the BRDF is the filter, and the reflected light is obtained by filtering the input illumination (signal) using the frequency response of the BRDF filter.

I will demonstrate applications in two areas. First, we show how our framework can be used for computing and displaying synthetic images in real-time with natural illumination and physically-based BRDFs. We will call this the "forward rendering" or convolution problem. Next, we extend and apply our framework to estimating realistic lighting and reflective properties from photographs, and show how this approach can be used to synthesize very realistic images under novel lighting and viewing conditions. We will call this the "inverse rendering" or deconvolution problem. In my talk, I will first describe the theoretical framework, and then discuss the above two applications.
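In the Lambertian special case, the convolution theorem takes a particularly simple form: the reflected-light (irradiance) coefficients are the lighting coefficients scaled per frequency band by the clamped-cosine filter coefficients A_l, which decay rapidly. A minimal sketch (the lighting values below are made up for illustration):

```python
import numpy as np

# Lambertian filter coefficients A_l for bands l = 0, 1, 2; these first
# three orders capture most of the energy of the clamped-cosine kernel.
A = np.array([np.pi, 2.0 * np.pi / 3.0, np.pi / 4.0])

def irradiance_coeffs(L):
    """Convolution theorem in spherical harmonics: each band l of the
    lighting coefficients L (a list of arrays, 2l+1 entries per band)
    is scaled by the BRDF filter coefficient A_l."""
    return [A[l] * L[l] for l in range(len(L))]

# Usage with illustrative lighting coefficients for bands l = 0, 1, 2.
L = [np.array([1.0]),
     np.array([0.2, -0.1, 0.3]),
     np.array([0.05, 0.0, -0.02, 0.1, 0.04])]
E = irradiance_coeffs(L)
```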

Programming for Pervasive Computing Environments

Date and Time
Tuesday, March 12, 2002 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Robert Grimm, from University of Washington
Host
Chong Wang
Pervasive computing provides an attractive vision for the future of computing. Computing power will be available everywhere. Mobile and stationary devices will dynamically connect and coordinate to seamlessly help people accomplish their tasks. However, for this vision to become a reality, programmers must build applications that constantly adapt to an environment in which people, devices, and services keep coming and going.

In my talk, I will explore how to make the programmers' task feasible. I will discuss how to structure the necessary systems support and how programming for highly dynamic computing environments impacts application development. I will introduce one.world, a system that I have built, and reflect on my own and others' experiences with it, which demonstrate the practicality of building and deploying pervasive applications.

Speaker homepage: http://www.cs.washington.edu/homes/rgrimm

Digital Voices

Date and Time
Wednesday, March 6, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Cristina Lopes, from Xerox PARC
Host
Perry Cook
Aerial acoustic communications are remarkably successful in biological systems, most notably in humans, but they have been ignored or dismissed in computer networks. There are good reasons for this: the data rates are relatively low when compared to other media and the sounds tend to be annoying. But as computers get more ubiquitous, and as more and more devices support a channel for voice or music, sound becomes a cheap, secure and intuitive alternative for transferring small amounts of information among devices that happen to be near each other.

As with any other modem, the design of these "air modems" must account for factors such as the data rate, the error probability and the computational overhead at the receiver. On top of that, these modems must also account for aesthetic and psychoacoustic factors. I will show how to vary certain parameters in standard modulation techniques such as ASK, FSK and spread spectrum to obtain communication systems in which the messages are musical and other familiar sounds, rather than modem sounds. Some applications of this technology will be discussed. I will also present the basis of a framework for studying low bit rate communication systems, including air modems, bird songs, and human speech.
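As a toy illustration of the idea (the note choices and symbol duration are illustrative assumptions, not the speaker's design), an FSK "air modem" can map bits to consonant musical pitches so the transmission sounds melodic rather than like modem noise:

```python
import numpy as np

def musical_fsk(bits, rate=44100, symbol_dur=0.15):
    """Toy FSK modulator: each bit selects one of two musical notes
    (A4 and E5, a consonant fifth apart) instead of arbitrary tones."""
    freqs = {0: 440.0, 1: 659.25}           # A4 and E5, in Hz
    n = int(rate * symbol_dur)
    t = np.arange(n) / rate
    env = np.hanning(n)                      # soften attacks between symbols
    return np.concatenate(
        [env * np.sin(2 * np.pi * freqs[b] * t) for b in bits])

signal = musical_fsk([1, 0, 1, 1, 0])        # write to a WAV file to play
```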

Sparse Sampling Algorithms for Probabilistic Artificial Intelligence

Date and Time
Wednesday, February 27, 2002 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Michael Kearns, from University of Pennsylvania
Host
Robert Tarjan
A powerful theme from computational learning theory is uniform convergence: the fact that a large or infinite class C of random variables can be simultaneously estimated from a sample whose size is just logarithmic in the size of C, or dependent only on combinatorial parameters such as its VC dimension. The application of this tool in learning theory has primarily been to supervised learning (classification and regression), and has largely been descriptive (for example, to model the generalization ability of existing learning algorithms), rather than prescriptive.
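For a finite class C of [0,1]-valued random variables, the logarithmic dependence follows from Hoeffding's inequality plus a union bound; a standard statement of the argument:

```latex
\Pr\Big[\max_{f \in C} \big|\hat{\mathbb{E}}_n f - \mathbb{E} f\big| > \epsilon\Big]
  \;\le\; 2|C|\, e^{-2 n \epsilon^2},
\qquad\text{so } n = O\!\left(\frac{\log|C| + \log(1/\delta)}{\epsilon^2}\right)
\text{ samples suffice for confidence } 1-\delta.
```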

Over the past several years, we have been systematically exploring the application of this fundamental notion to a broader set of natural problems in probabilistic artificial intelligence, and have found a number of striking examples where it leads to novel algorithms and analyses for core problems. In particular, by applying uniform convergence arguments to carefully structured but very sparse samples in probabilistic environments, we obtain:

* A new algorithm providing a near-optimal solution to the exploration-exploitation trade-off in Markov decision processes, a basic problem in reinforcement learning;

* A sparse sampling algorithm for planning in Markov decision processes whose running time has no dependence on the number of states (a sketch of this style of algorithm follows the list);

* Powerful new algorithms for probabilistic inference in Bayesian networks;

* Algorithms for computing approximate Nash equilibria in large-population games.
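A sketch of the sparse-sampling planning idea referenced above, in Python (the generative-model interface `sample_next`/`reward` is an assumption of this sketch, not the speaker's code): Q-value estimates are built from a fixed number of sampled next states per action, recursively to a fixed depth, so the cost is independent of the size of the state space.

```python
def sparse_sample_q(state, actions, sample_next, reward, depth, width,
                    gamma=0.95):
    """Estimate Q(state, a) for each action a by drawing `width` next
    states per action from a generative model and recursing to `depth`.
    Running time depends only on width, depth, and |actions|."""
    if depth == 0:
        return {a: 0.0 for a in actions}
    q = {}
    for a in actions:
        total = 0.0
        for _ in range(width):
            s2 = sample_next(state, a)       # one generative-model call
            future = sparse_sample_q(s2, actions, sample_next, reward,
                                     depth - 1, width, gamma)
            total += reward(state, a, s2) + gamma * max(future.values())
        q[a] = total / width
    return q

# The agent acts by taking argmax_a of these estimates at its current
# state; accuracy is controlled by `width` and `depth` alone.
```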

These are only a few of the many algorithmic applications of learning theory methods in modern AI. This talk will present a self-contained survey of these developments.
