Colloquium

Computers and the Sociology of Mathematical Proof

Date and Time
Monday, October 13, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Donald MacKenzie, from University of Edinburgh
Host
Andrew Appel
Since the 1950s, computers have become ever more widely used to perform mathematical proofs. Mathematicians have used them for proofs, such as that of the famous four-colour theorem, that are too extensive to conduct by hand. Automated proving is a fundamental technique of "artificial intelligence," and also central to debates about the very possibility of such intelligence. Computer scientists use automated theorem provers to verify the design of software critical to human safety or to national security, and "model checking" programs are now an important part of computer hardware design.

After briefly reviewing the history of these developments, the talk will explain why they are of interest to the sociology of scientific knowledge: they suggest the emergence of a set of "cultures of proving" different from those of ordinary mathematics. Clashes between the cultures of automated proving and those of ordinary mathematics, and the first litigation centering on the meaning of mathematical "proof," will be examined. The talk will also discuss areas of tension within automated proving and outline a case in which the priorities of national security can be seen in the very construction of an automated theorem prover.

Adiabatic Quantum Algorithms

Date and Time
Wednesday, October 8, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Umesh Vazirani, from UC Berkeley
Host
Sanjeev Arora
A couple of years ago a new paradigm, based on the quantum adiabatic theorem, was introduced for the design of quantum algorithms for solving optimization problems. In this talk, I will present recent results showing that for certain objective functions, the quantum adiabatic algorithm tunnels through local optima, thus performing exponentially faster than classical local search algorithms. I will also discuss limits on the power of this new paradigm, as well as its relationship to simulated annealing, a classical heuristic for algorithm design.
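For context, the adiabatic paradigm interpolates between a Hamiltonian whose ground state is easy to prepare and one whose ground state encodes the optimum. The following is the standard formulation of the paradigm, not anything specific to the new results in the talk:

    H(s) \;=\; (1-s)\, H_B \;+\; s\, H_P, \qquad s = t/T \in [0,1]

The adiabatic theorem guarantees that the system tracks the instantaneous ground state, and hence ends near the optimum, provided the total run time satisfies roughly T = \Omega(1/g_{\min}^2), where g_{\min} is the minimum spectral gap between the two lowest energy levels along the interpolation path; the size of this gap is what separates the instances the paradigm handles quickly from those it does not.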

Statistical Analysis of Anatomical Shape and Function

Date and Time
Monday, April 21, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Polina Golland, from MIT AI Lab
Host
Thomas Funkhouser
We present a computational framework for image-based statistical analysis of anatomical image data in different populations. Applications of such analysis include understanding developmental and anatomical aspects of disorders when comparing patients with normal controls, studying morphological changes caused by aging, or even characterizing differences in normal anatomy, for example, differences between genders.

Once a quantitative description of anatomy is extracted from input images, the problem of identifying differences between the two groups can be reduced to one of the classical questions in machine learning: constructing a classifier function that assigns new examples to one of the two groups while making as few mistakes as possible. In the traditional classification setting, the resulting classifier is rarely analyzed in terms of the properties of the input data that are captured by the discriminative model. In contrast, interpretation of the statistical model in the original input domain is an important component of image-based analysis. We introduce a novel approach to such interpretation that yields detailed descriptions of the detected differences in anatomically meaningful terms of organ development and deformation.

Estimating the statistical significance of the detected differences between the two groups of images is a challenging problem, due to the high dimensionality of the data and the relatively small number of training examples. We demonstrate a non-parametric technique, based on permutation testing, for estimating statistical significance in the context of discriminative analysis. This approach provides a weaker statistical guarantee than the classical convergence bounds, but it is nevertheless useful in applications of machine learning that involve a large number of highly correlated features and a limited number of training examples. Example domains that give rise to such problems include view-based object recognition, text classification, and gene expression analysis. We demonstrate the proposed analysis framework on several examples of studies of changes in brain anatomy due to healthy aging and disorders, as well as of brain activation in response to visual stimuli.
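The permutation-testing idea is compact enough to sketch. In the version below (an illustration of the general technique, not the speaker's implementation; train_and_score stands in for whatever cross-validated classifier is used), group labels are repeatedly shuffled, and the accuracies obtained under shuffled labels form the null distribution against which the real accuracy is compared:

    import numpy as np

    def permutation_test(train_and_score, X, y, n_permutations=1000, seed=0):
        """Estimate a p-value for a classifier's accuracy by permuting labels.
        train_and_score(X, y) must return a held-out accuracy estimate."""
        rng = np.random.default_rng(seed)
        observed = train_and_score(X, y)            # accuracy with the true group labels
        null_scores = np.empty(n_permutations)
        for i in range(n_permutations):
            y_perm = rng.permutation(y)             # break the label/anatomy link
            null_scores[i] = train_and_score(X, y_perm)
        # fraction of shuffled labelings that do at least as well as the real one
        p_value = (1 + np.sum(null_scores >= observed)) / (1 + n_permutations)
        return observed, p_value

Because the test only compares accuracies, it makes no distributional assumptions about the features themselves, which is what makes it usable with few examples and many correlated dimensions.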

Computer Science and Game Theory: Mutual Influences and Synergies

Date and Time
Monday, April 14, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Kevin Leyton-Brown, from Stanford University
Host
Kenneth Steiglitz
The last few years have seen a surge of interest in problems at the intersection of computer science and game-theoretic economics. Sometimes, practical use of game-theoretic mechanisms or solution concepts requires the application of ideas from computer science; in other cases, problems in computer systems can best be addressed by the introduction of game-theoretic incentives. Finally, the influence between the disciplines can be more symmetric, requiring a synthesized view of both computational limitations and the self-interested behavior of agents. This talk will describe one example of each of these three cases:
  • modeling the empirical computational hardness of selecting the winners of a combinatorial auction, analyzing these models, and applying them to construct algorithm portfolios and to generate hard problem instances (see the sketch after this list)
  • using incentives to diffuse temporally-focused usage of network resources at the lowest possible cost
  • local-effect games, a novel class of multi-player general-sum games for which representation is compact, pure-strategy Nash equilibria often exist, and equilibria can often be computed efficiently
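The portfolio idea in the first bullet can be made concrete. The sketch below is a simplification under stated assumptions: the instance features, the choice of Ridge regression, and the use of log-runtime as the regression target are illustrative stand-ins, not the models from the talk. One runtime predictor is trained per algorithm, and each new instance is routed to the algorithm with the smallest predicted runtime:

    import numpy as np
    from sklearn.linear_model import Ridge

    class AlgorithmPortfolio:
        """Portfolio built from empirical hardness models: one regression
        model per algorithm predicts log-runtime from cheap-to-compute
        instance features; the portfolio runs the predicted-fastest one."""

        def __init__(self, algorithm_names):
            self.models = {name: Ridge() for name in algorithm_names}

        def fit(self, features, runtimes):
            # features: (n_instances, n_features); runtimes[name]: (n_instances,)
            for name, model in self.models.items():
                model.fit(features, np.log(runtimes[name]))
            return self

        def choose(self, instance_features):
            preds = {name: model.predict(instance_features.reshape(1, -1))[0]
                     for name, model in self.models.items()}
            return min(preds, key=preds.get)   # algorithm with lowest predicted runtime

The same trained models also point toward hard instances: regions of the instance-generation parameter space with high predicted runtime are natural candidates for hard distributions.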

Genes, Tumors, and Bayes Nets: Improving the Specificity of Biological Signal Detection in Microarray Analysis

Date and Time
Wednesday, April 9, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Olga Troyanskaya, from Stanford
Host
Mona Singh
Microarray analysis allows for large-scale exploration of gene expression by taking a snapshot of the cell at a specific point in time. Such data sets may provide insight into fundamental biological questions as well as address clinical issues such as diagnosis and therapy selection. The resulting data are complex and challenging to analyze without sophisticated computational tools. This seminar will highlight the issue of improving the specificity of biological signal detection from microarray data. I will present robust and accurate algorithms for missing value estimation for microarray data and for identification of differentially expressed genes from gene expression data sets. This talk will also address gene function prediction and present a Bayesian framework for integrated analysis of gene expression data with data sets from other high-throughput biological data sources.
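Missing-value estimation is worth a concrete sketch. A well-known approach in this literature is k-nearest-neighbour imputation; the code below illustrates that general idea and is not necessarily the algorithm presented in the talk:

    import numpy as np

    def knn_impute(expr, k=10):
        """Fill missing entries (NaN) in a genes-by-arrays expression matrix
        by averaging the corresponding values of the k most similar fully
        observed genes (Euclidean distance over the observed arrays)."""
        expr = expr.copy()
        complete = expr[~np.isnan(expr).any(axis=1)]        # fully observed genes
        for i in np.where(np.isnan(expr).any(axis=1))[0]:
            observed = ~np.isnan(expr[i])
            # distance to every complete gene, using only the observed columns
            d = np.sqrt(((complete[:, observed] - expr[i, observed]) ** 2).sum(axis=1))
            neighbors = complete[np.argsort(d)[:k]]
            expr[i, ~observed] = neighbors[:, ~observed].mean(axis=0)
        return expr

The intuition is biological: genes with similar expression profiles across the observed arrays are likely to behave similarly on the arrays where a value is missing.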

Multiagent Learning in the Presence of Limited Agents

Date and Time
Monday, March 31, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Michael Bowling, from Carnegie Mellon University
Host
Robert Schapire
Learning to act in a multiagent environment is a challenging problem. Optimal behavior for one agent depends upon the behavior of the other agents, which may be learning as well. Multiagent environments are therefore non-stationary, violating the traditional assumption underlying single-agent learning. In addition, agents in complex tasks may have limitations, such as unintended physical constraints or designer-imposed approximations of the task that make learning tractable. Limitations prevent agents from acting optimally, which complicates the already challenging problem. A learning agent must effectively compensate for its own limitations while exploiting the limitations of the other agents. My thesis research focuses on these two challenges. The novel contributions of my thesis include (1) the WoLF (Win or Learn Fast) variable learning rate as a new principle that enables convergence to optimal responses in multiagent learning; (2) an analysis of the existence of Nash equilibria when agents have limitations; and (3) GraWoLF as a scalable multiagent learning algorithm.

In this talk I focus on the contributions of the WoLF principle and the GraWoLF algorithm. I show that the WoLF variable learning rate causes learning to converge to optimal responses in settings of simultaneous learning. I demonstrate this converging effect both theoretically in a subclass of single-state games and empirically in a variety of multiple-state domains. I then describe GraWoLF, a combination of policy gradient techniques and the WoLF principle. I show compelling results of applying this algorithm to a card game with an intractably large state space as well as an adversarial robot task. These results demonstrate that WoLF-based algorithms can effectively learn in the presence of other learning agents, and do so even in complex tasks with limited agents.
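The WoLF principle itself is simple to state: learn quickly when losing, cautiously when winning. The sketch below is a simplified, tabular variant of a WoLF-style policy update (the step sizes and the exact mixing rule are illustrative; the thesis algorithms differ in detail):

    import numpy as np

    def wolf_update(Q, policy, avg_policy, state, delta_win=0.05, delta_lose=0.2):
        """One WoLF-style policy update for a single state. 'Winning' means
        the current policy scores at least as well against Q as the
        historical average policy; a losing agent takes the larger step."""
        winning = policy[state] @ Q[state] >= avg_policy[state] @ Q[state]
        delta = delta_win if winning else delta_lose
        greedy = np.argmax(Q[state])
        policy[state] *= (1.0 - delta)     # shrink all action probabilities...
        policy[state][greedy] += delta     # ...and give the freed mass to the greedy action
        return policy

Because delta_lose > delta_win, a losing agent adapts faster than its opponents can exploit it, while a winning agent changes slowly enough for the others to converge; this asymmetry drives the convergence results described above.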

Efficiency in Online and Noise-Tolerant Learning

Date and Time
Wednesday, March 26, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Adam Kalai, from MIT
Host
Amit Sahai
Exponential weighting techniques have had a big impact on learning, in theory and in practice. First, they provide a general theoretical approach to online (adaptive) learning, where one must adapt to an unknown future. Unfortunately, these weighting techniques are inefficient and impractical for large problems. We present a single approach that is simple, efficient, and near-optimal for a wide range of online problems. Next, we discuss the problem of boosting, where one tries to improve the accuracy of any learning procedure. Here exponential weighting techniques have consistently been the most popular method and are an excellent example of a theoretical idea having a big impact in practice. The widely recognized downside is that these boosting algorithms perform poorly on noisy data. We show that lesser-known boosting techniques, namely those based on decision graphs, are noise tolerant and can be made to achieve the same optimal efficiency rates.
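For reference, the baseline the abstract calls "exponential weighting" is, in its basic online form, the Hedge (multiplicative weights) forecaster; this is the textbook algorithm, not the speaker's new approach:

    import numpy as np

    def hedge(loss_stream, n_experts, eta=0.1, seed=0):
        """Basic exponential-weights (Hedge) forecaster: play an expert with
        probability proportional to its weight, then shrink each expert's
        weight exponentially in its observed loss for the round."""
        rng = np.random.default_rng(seed)
        weights = np.ones(n_experts)
        for losses in loss_stream:               # per-expert losses in [0, 1] each round
            probs = weights / weights.sum()
            yield rng.choice(n_experts, p=probs)       # pick before seeing this round's losses
            weights *= np.exp(-eta * np.asarray(losses))

Hedge achieves regret O(sqrt(T log N)) over T rounds with N experts; the inefficiency arises when the "experts" are exponentially many structured objects (paths, trees, schedules), so that even maintaining the weights explicitly is intractable.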

Multiagent Planning with Factored MDPs

Date and Time
Wednesday, March 12, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Carlos Guestrin, from Stanford University
Host
Robert Schapire
Many real-world tasks require multiple agents to coordinate in order to achieve a common goal. Examples include multi-robot systems, network routing and supply chain management. Unfortunately, these large-scale problems are often quite complex, involving many states and multiple decision makers. Factored Markov decision processes (MDPs) allow us to represent a complex transition model compactly using a dynamic Bayesian network. This representation often allows an exponential reduction in the representation size, but the complexity of exact solution algorithms for such MDPs grows exponentially in the number of variables describing the state of the system and in the number of agents.
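Concretely, "factored" means that the transition model decomposes over the state variables; this is the standard formulation of a factored MDP:

    P(\mathbf{x}' \mid \mathbf{x}, a) \;=\; \prod_i P\big(x_i' \mid \mathrm{Pa}(x_i'),\, a\big)

where \mathrm{Pa}(x_i') denotes the few variables that directly influence x_i' in the dynamic Bayesian network. Whenever each variable has few parents, this product is exponentially smaller than an explicit transition matrix, which is the source of the compactness claimed above.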

This talk presents a framework for approximate planning that can exploit structure in a factored MDP to solve problems with many trillions of states and actions. The talk will focus on three key elements:

  • Factored Linear Programs -- A novel LP decomposition technique, using ideas from inference in Bayesian networks, which can yield exponential reductions in planning time (formulated in the sketch after this list).
  • Distributed Coordination -- A distributed multiagent decision making algorithm, where the coordination structure arises naturally from the system dynamics.
  • Generalization in Relational MDPs -- A method for learning general solutions from solved tasks that allows us to act in new scenarios without replanning.
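The first element builds on the approximate-linear-programming view of MDP planning; what follows is the standard formulation, with the decomposition itself left to the talk. The value function is restricted to a weighted sum of basis functions h_i, each depending only on a small subset \mathbf{x}_i of the state variables:

    \min_{w} \; \sum_{\mathbf{x}} \alpha(\mathbf{x})\, V_w(\mathbf{x})
    \quad \text{s.t.} \quad
    V_w(\mathbf{x}) \;\ge\; R(\mathbf{x}, a) + \gamma \sum_{\mathbf{x}'} P(\mathbf{x}' \mid \mathbf{x}, a)\, V_w(\mathbf{x}')
    \quad \forall\, \mathbf{x}, a,
    \qquad \text{where } V_w(\mathbf{x}) = \sum_i w_i\, h_i(\mathbf{x}_i).

Written naively there is one constraint per state-action pair, which is exponential; the factored transition model and the local scope of the h_i are what allow this constraint set to be represented and solved compactly.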

We demonstrate our approach on the task of multiagent coordination in a real strategic computer war game.

Neptune: Programming and Runtime Support for Cluster-based Network Services

Date and Time
Wednesday, March 5, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Tao Yang, from UCSB/Teoma
Host
Vivek Pai
This talk presents the Neptune project, which addresses system-level and programming-level issues in building high-performance, reliable runtime support for online services and data-intensive applications. Online applications differ from their offline counterparts in their sensitivity to highly concurrent workloads and in their availability requirements. Programming such applications while achieving high performance, reliability, and efficient resource management is challenging.

Neptune is a middleware system that allows services to be aggregated and deployed quickly on a cluster of workstations and SMPs. It shields application programmers from the complexities of replication, service discovery, failure detection and recovery, and resource management. The techniques investigated include architectural and programming support for thread/process-based concurrency, quality-aware service differentiation, parallel data aggregation, and replica consistency management with staleness control.
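To make one of these concerns concrete, the sketch below illustrates staleness-controlled replica selection in the abstract. It is a hypothetical illustration only; the class and function names are invented and do not reflect Neptune's actual interfaces:

    import time

    class Replica:
        """Hypothetical replica descriptor: when it last synchronized
        with the primary copy, and its current load."""
        def __init__(self, name, last_sync_time, load):
            self.name, self.last_sync_time, self.load = name, last_sync_time, load

    def pick_replica(replicas, max_staleness_s):
        """Route a read to the least-loaded replica whose data is no older
        than the staleness bound; return None if no replica qualifies."""
        now = time.time()
        fresh = [r for r in replicas if now - r.last_sync_time <= max_staleness_s]
        return min(fresh, key=lambda r: r.load) if fresh else None

The design point this illustrates is the trade-off named above: loosening the staleness bound widens the set of usable replicas and so improves load balance, at the cost of serving older data.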

This talk will also discuss the use of Neptune in Internet search at Ask Jeeves and Teoma.


Tao Yang is an Associate Professor of Computer Science at the University of California, Santa Barbara. He has also been the Chief Scientist of Teoma and Ask Jeeves since 2000, directing research and development of Internet search engines. His research interests are parallel and distributed systems, high-performance scientific computing, and Internet search.

Art, Math, and Sculpture

Date and Time
Monday, March 3, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Carlo H. Séquin, from UC Berkeley
Host
Thomas Funkhouser
The roles of computers, multi-media, virtual environments, and rapid prototyping in the design of abstract geometrical sculptures are explored. The techniques described in this talk are the outgrowth of a six-year collaboration between Brent Collins, a wood sculptor, and Carlo Séquin, a computer scientist. They are particularly applicable to abstract geometrical sculptures, where precisely defined and highly optimized shapes follow a clear underlying logic. The use of these techniques has resulted in several sculpture families, represented by virtual displays, many small physical maquettes, a few larger wood and bronze sculptures, and, most recently, a 12-foot snow sculpture.

Keywords: sculpture design, procedural modeling, rapid prototyping.
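Procedural modeling of this kind typically amounts to sweeping a cross-section along a guide curve under parametric control. The toy generator below conveys the flavor; it is not Séquin's Sculpture Generator, and all parameters are illustrative:

    import numpy as np

    def twisted_toroidal_sweep(n_sweep=200, n_profile=4, R=3.0, r=1.0, twists=1.5):
        """Sweep an n_profile-sided cross-section of radius r around a
        circle of radius R, rotating it by `twists` full turns per sweep;
        returns the (n_sweep * n_profile, 3) array of mesh vertices."""
        verts = []
        profile = 2 * np.pi * np.arange(n_profile) / n_profile
        for t in np.linspace(0, 2 * np.pi, n_sweep, endpoint=False):
            twist = twists * t                      # cross-section rotation at this station
            for a in profile:
                x_local = r * np.cos(a + twist)
                z_local = r * np.sin(a + twist)
                verts.append(((R + x_local) * np.cos(t),
                              (R + x_local) * np.sin(t),
                              z_local))
        return np.array(verts)

Varying a handful of parameters (profile shape, twist rate, sweep curve) yields whole families of related shapes, which mirrors the "sculpture families" workflow the abstract describes.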