Colloquium

In Search of Seamless Collaboration

Date and Time
Monday, April 5, 2004 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Kori Inkpen, from Dalhousie University
Host
Maria Klawe
In real life we interact seamlessly with others. Humans have strong interpersonal skills that enable them to use many subtle cues to coordinate their actions and communication. Now introduce computers to the mix. We often spend more time struggling with our technology than using it to help achieve our goals. This is especially true when people want to work together in a face-to-face environment. Often, we pull out our pens and paper and go to work at the whiteboard, abandoning computer technology altogether. However, as more and more of our information is in digital form, we are compelled to find new ways to seamlessly incorporate technology into our rich face-to-face interactions.

Dr. Inkpen and members of the EDGE Lab at Dalhousie University are investigating ways to more effectively support users' natural, face-to-face collaborative interactions. This talk will present their research directions in the area of shared space collaboration, exploring novel computing environments, interaction styles, and user interfaces to support small group collaboration in co-located environments. Advances in this area will fundamentally change how people work and play together in the future.

Improving the End-to-End Availability of Internet-Based Systems

Date and Time
Thursday, March 25, 2004 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
David Andersen, from MIT
Host
Larry Peterson
The end-to-end availability of Internet services is between two and three orders of magnitude worse than that of other important engineered systems, including the US airline system, the 911 emergency response system, and the US public telephone system. This talk makes two contributions toward improving end-to-end availability. First, a study of three years of data collected on a 31-site testbed explores why failures happen, and finds that access-network failures, inter-provider and wide-area routing anomalies, domain name system faults, and server-side failures all play a role in reducing availability.

Second, an overlay network with new algorithms for end-to-end path selection improves availability by one to two orders of magnitude over the current state of the art. A purely overlay-based system, RON (resilient overlay networks), deploys nodes in different organizations and networks, carefully measures and monitors the status of available paths, and has the nodes cooperatively route packets by way of each other to bypass faults. A second system, MONET (multihomed overlay networks), uses a combination of physical path redundancy and overlays within a network of cooperative Web proxies to improve availability for Web users. Experimental evidence suggests that RON can reduce failures by a factor of six, and that with physical path redundancy, a six-site MONET eliminates almost all network-based failures.
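As a rough illustration of the one-hop overlay idea (a hypothetical sketch, not RON's actual implementation): a node that continuously measures loss rates to its peers can compare the direct Internet path against every one-intermediate-hop alternative and route around a fault.

```python
# Hypothetical sketch of one-hop overlay path selection in the spirit of
# RON; `loss` holds assumed loss-rate measurements between overlay nodes,
# which a real system would refresh by active probing.

def best_path(src, dst, nodes, loss):
    """Return the lowest-loss route from src to dst: either the direct
    Internet path or a one-intermediate-hop overlay path."""
    def two_hop(mid):
        # Loss of the composed path: 1 - P(both hops succeed).
        return 1 - (1 - loss[src][mid]) * (1 - loss[mid][dst])

    candidates = [((src, dst), loss[src][dst])]
    candidates += [((src, mid, dst), two_hop(mid))
                   for mid in nodes if mid not in (src, dst)]
    return min(candidates, key=lambda c: c[1])

# Toy measurements in which the direct path A -> C has failed entirely.
loss = {"A": {"B": 0.01, "C": 1.0},
        "B": {"A": 0.01, "C": 0.02},
        "C": {"A": 1.0, "B": 0.02}}
print(best_path("A", "C", ["A", "B", "C"], loss))
# (('A', 'B', 'C'), 0.0298...) -- routing via B bypasses the fault
```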

Message-passing Algorithms in Graphical Models and their Applications to Large-Scale Stochastic Systems

Date and Time
Tuesday, March 23, 2004 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Martin Wainwright, from UC Berkeley
Host
Robert Schapire
Probability distributions defined by graphs arise in a variety of fields, including statistical signal and image processing, sensor networks, machine learning, and communication theory. Graphical models provide a principled framework in which to combine local constraints so as to construct a global model. Important practical problems in applications of graphical models include computing marginal distributions, modes, and the log partition function. Although these problems can be solved efficiently in tree-structured models, the same tasks are intractable for general large-scale graphs with cycles.
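For concreteness, here is the standard pairwise form of such a model (my notation; the abstract itself does not fix one), along with the two inference targets just named:

```latex
% A pairwise Markov random field over a graph G = (V, E):
p(x;\theta) \;=\; \frac{1}{Z(\theta)}
  \exp\!\Big(\sum_{s \in V} \theta_s(x_s)
           + \sum_{(s,t) \in E} \theta_{st}(x_s, x_t)\Big),
\qquad
A(\theta) \;=\; \log Z(\theta).

% Inference targets: the node marginals and the log partition function.
\mu_s(x_s) \;=\; \sum_{x' \,:\, x'_s = x_s} p(x';\theta)
```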

In recent years, local message-passing algorithms (e.g., belief propagation and max-product) have been widely used to compute approximate solutions in graphs with cycles. We describe a class of reweighted message-passing algorithms, and demonstrate how they can be understood as methods for solving graph-structured optimization problems. These modified algorithms have advantages over standard methods, including unique fixed points and guaranteed upper bounds (reweighted belief propagation), and performance guarantees (reweighted max-product). We discuss applications of graphical models and message-passing to statistical image processing and error-control coding, as well as directions for future research.
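A minimal sketch of what a reweighted sum-product update can look like, assuming discrete variables and edge weights rho given in advance. With every rho equal to 1 this reduces to standard loopy belief propagation; the update rule below is my reading of the tree-reweighted variant, not code from the talk.

```python
import numpy as np

def reweighted_bp(theta_node, theta_edge, rho, n_iters=100):
    """theta_node[s]: length-k array; theta_edge[(s, t)]: k-by-k array
    indexed [x_s, x_t]; rho[(s, t)]: edge weight in (0, 1]."""
    nbrs = {s: [] for s in theta_node}
    for (s, t) in theta_edge:
        nbrs[s].append(t); nbrs[t].append(s)
    k = len(next(iter(theta_node.values())))

    def directed(a, b):              # potential indexed [x_a, x_b], weight
        key = (a, b) if (a, b) in theta_edge else (b, a)
        P = theta_edge[key]
        return (P if key == (a, b) else P.T), rho[key]

    msg = {(a, b): np.ones(k) / k
           for (s, t) in theta_edge for (a, b) in ((s, t), (t, s))}
    for _ in range(n_iters):
        new = {}
        for (t, s) in msg:           # synchronous update of message t -> s
            P, r = directed(t, s)
            w = np.exp(theta_node[t])
            for u in nbrs[t]:
                if u != s:
                    w = w * msg[(u, t)] ** directed(u, t)[1]
            w = w / msg[(s, t)] ** (1.0 - r)   # this factor vanishes at r = 1
            m = np.exp(P / r).T @ w
            new[(t, s)] = m / m.sum()
        msg = new

    beliefs = {}
    for s in theta_node:
        b = np.exp(theta_node[s])
        for u in nbrs[s]:
            b = b * msg[(u, s)] ** directed(u, s)[1]
        beliefs[s] = b / b.sum()
    return beliefs

# Toy 3-cycle; each edge appears in 2 of the 3 spanning trees, so a uniform
# distribution over spanning trees gives rho = 2/3 on every edge.
theta_n = {0: np.zeros(2), 1: np.array([0.5, -0.5]), 2: np.zeros(2)}
theta_e = {(0, 1): np.eye(2), (1, 2): np.eye(2), (0, 2): np.eye(2)}
print(reweighted_bp(theta_n, theta_e, {e: 2.0 / 3.0 for e in theta_e}))
```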

Unsupervised Learning of Natural Language Structure

Date and Time
Thursday, March 11, 2004 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dan Klein, from Stanford University
Host
Moses Charikar
There is precisely one complete language processing system to date: the human brain. Though there is debate on how much built-in bias human learners might have, we definitely acquire language in a primarily unsupervised fashion. On the other hand, computational approaches to language processing are almost exclusively supervised, relying on hand-labeled corpora for training. This reliance is largely due to repeated failures of unsupervised approaches. In particular, the problem of learning syntax (grammar) from completely unannotated text has received a great deal of attention for well over a decade, with little in the way of positive results.

We argue that previous methods for this task have generally failed because of the representations they used. Overly complex models are easily distracted by non-syntactic correlations (such as topical associations), while overly simple models aren't rich enough to capture important first-order properties of language (such as directionality, adjacency, and valence). We describe several syntactic representations which are designed to capture the basic character of natural language syntax as directly as possible.

With these representations, high-quality parses can be learned from surprisingly little text, with no labeled examples and no language-specific biases. Our results are the first to show above-baseline performance in unsupervised parsing, and they far exceed the baseline in multiple languages. These grammar-learning methods are useful because parsed corpora exist for only a small number of languages. More generally, most high-level NLP tasks, such as machine translation and question answering, lack richly annotated corpora, making unsupervised methods extremely appealing even for common languages like English.

Stable Internet Routing Without Global Coordination

Date and Time
Tuesday, March 9, 2004 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Jennifer Rexford, from AT&T Labs-Research
Host
Larry Peterson
The Border Gateway Protocol (BGP) allows an autonomous system (AS) to apply diverse local policies for selecting routes and propagating reachability information to other domains. However, BGP permits ASes to have conflicting policies that can lead to routing instability.

This talk proposes a set of guidelines for an AS to follow in setting its routing policies, without requiring coordination with other ASes. Our approach exploits the Internet's hierarchical structure and the commercial relationships between ASes to impose a partial order on the set of routes to each destination. The guidelines conform to conventional traffic-engineering practices of ISPs, and provide each AS with significant flexibility in selecting its local policies. Furthermore, the guidelines ensure route convergence even under changes in the topology and routing policies. Drawing on a formal model of BGP, we prove that following our proposed policy guidelines guarantees route convergence. We also describe how our methodology can be applied to new types of relationships between ASes, how to verify the hierarchical AS relationships, and how to realize our policy guidelines. Our approach has significant practical value since it preserves the ability of each AS to apply complex local policies without divulging its BGP configurations to others. The end of the talk briefly summarizes follow-up studies that have built on this work.
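To give a flavor of guidelines of this kind, here is a hypothetical encoding of the two rules most often associated with this line of work: prefer routes learned from customers, and export peer- and provider-learned routes only to customers. This is an illustrative sketch built from the customer/peer/provider relationships the talk mentions, not the paper's formal model.

```python
PREF = {"customer": 0, "peer": 1, "provider": 2}   # lower = more preferred

def select_route(candidates):
    """candidates: (relationship_of_next_hop_AS, as_path) pairs.  Prefer
    customer-learned routes, breaking ties on AS-path length."""
    return min(candidates, key=lambda r: (PREF[r[0]], len(r[1])))

def should_export(learned_from, export_to):
    """Customer routes go to everyone; peer- and provider-learned routes
    go only to customers."""
    return learned_from == "customer" or export_to == "customer"

# A customer-learned route wins even over a shorter provider-learned one.
print(select_route([("provider", ["AS2", "AS5"]),
                    ("customer", ["AS7", "AS9", "AS5"])]))
print(should_export("peer", "provider"))   # False: no free transit
print(should_export("peer", "customer"))   # True
```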

The work described in this talk draws on the following papers:
http://www.research.att.com/~jrex/papers/ton01-stable.pdf
http://www.research.att.com/~jrex/papers/infocom01.ps

Multi-Robot Coordination in Highly Dynamic Environments

Date and Time
Wednesday, March 3, 2004 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Manuela Veloso, from Carnegie Mellon University
Host
Robert Schapire
In recent years, researchers have invested significant effort in multi-robot systems. Robot soccer, as a pioneering multi-robot task, has offered a challenging research testbed. In robot soccer, a team of robots faces an uncertain and dynamic environment including an opponent team.

We have researched robot soccer, developing single-robot and multi-robot perception, cognition, and action algorithms. To form an effective team, individual robots need to be skilled. We have developed novel object-recognition, localization, and behavior-based algorithms to provide these skills. To achieve a reliable team of robots, we have contributed team coordination strategies, methods for representing and adapting team plans in response to a dynamic world, and multiagent learning. In this talk, I will present our contributions to this complex adversarial multi-robot task, focusing in particular on our teams of small wheeled robots and of four-legged Sony AIBO robots.
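As one concrete coordination subproblem of the kind such teams must solve, consider assigning robots to team roles so that total travel distance is minimized. The sketch below is a generic formulation for illustration, not CMU's actual algorithm.

```python
from itertools import permutations
from math import dist

def assign_roles(robot_positions, role_positions):
    """Brute-force the robot-to-role assignment with least total travel
    distance (fine for the small team sizes of robot soccer)."""
    robots = list(robot_positions)
    best = min(permutations(robots),
               key=lambda order: sum(dist(robot_positions[r], p)
                                     for r, p in zip(order, role_positions)))
    return dict(zip(best, role_positions))

robots = {"r1": (0.0, 0.0), "r2": (3.0, 1.0), "r3": (1.0, 4.0)}
roles = [(0.5, 0.5), (3.0, 0.0), (1.0, 3.0)]  # e.g. goalie, attacker, support
print(assign_roles(robots, roles))
```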

Based on our research results, I will introduce an overarching hypothesis on a set of principles for effective robot autonomy in the physical world. I will conclude by presenting how my research goals fit into the broad fields of artificial intelligence and robotics.

Heuristic Search and Triangle Inequality

Date and Time
Tuesday, February 17, 2004 - 3:30pm to 5:00pm
Location
Computer Science Large Auditorium (Room 104)
Type
Colloquium
Speaker
Andrew V. Goldberg, from Microsoft Research -- Silicon Valley
Host
Robert Tarjan
We study the problem of finding a shortest path between two vertices in a directed graph. This is an important problem with many applications, including our motivating one: computing driving directions. We allow preprocessing the graph using a linear (in the graph size) amount of extra space to store additional information, which helps us answer shortest-path queries quickly. Our main contribution is a new distance lower-bounding technique that, combined with heuristic search (also known as A* search), gives surprisingly good results for important classes of graphs. The technique is based on landmarks and the triangle inequality. We also study several bidirectional variants of heuristic search, including new ones. Computational experiments show that on some graph classes, in particular road networks, these algorithms work very well.

This talk is aimed at a general computer science audience. Joint work with Chris Harrelson.
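A minimal sketch of the landmark idea, assuming an undirected graph and naive landmark choice (the actual work handles directed graphs and studies landmark selection carefully): precompute shortest-path distances from each landmark L; then h(v) = max_L |d(L, v) - d(L, t)| is a lower bound on d(v, t) by the triangle inequality, and can drive A*.

```python
import heapq

def dijkstra(graph, src):
    """Exact distances from src; graph[v] is a list of (neighbor, weight)."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue                      # stale queue entry
        for u, w in graph[v]:
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(pq, (d + w, u))
    return dist

def astar_with_landmarks(graph, s, t, landmark_dists):
    def h(v):   # admissible (and consistent) by the triangle inequality
        return max((abs(ld[v] - ld[t]) for ld in landmark_dists), default=0.0)
    g = {s: 0.0}
    pq = [(h(s), s)]
    while pq:
        f, v = heapq.heappop(pq)
        if v == t:
            return g[t]
        for u, w in graph[v]:
            if g[v] + w < g.get(u, float("inf")):
                g[u] = g[v] + w
                heapq.heappush(pq, (g[u] + h(u), u))
    return float("inf")

graph = {"a": [("b", 1), ("c", 4)],
         "b": [("a", 1), ("c", 2)],
         "c": [("a", 4), ("b", 2)]}
landmark_dists = [dijkstra(graph, "a")]        # one landmark, at "a"
print(astar_with_landmarks(graph, "a", "c", landmark_dists))   # 3.0
```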

Comparative Genomics for Biodefense: Tracking the Source of the 2001 Anthrax Attacks

Date and Time
Wednesday, February 11, 2004 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Steven Salzberg, from The Institute for Genomic Research
Host
Olga Troyanskaya
In the fall of 2001, a series of biowarfare attacks was carried out through the U.S. mail, in letters containing a powder form of the anthrax bacterium, Bacillus anthracis. Subsequent to these attacks, our group was asked to conduct a rapid sequencing project to decode the genome of the strain used in the attacks. We compared this sequence to a second, reference anthrax genome, which we were sequencing simultaneously. I will discuss the difficult challenges posed by comparing two incomplete genome sequences, for which the sequencing error rate is relatively high and the assembly of the genome data may contain mistakes. Using a combination of computational and statistical methods, our group identified 60 novel, high-quality genetic markers distinguishing the attack strain from other strains (1). We also concluded that, in order to facilitate future comparisons at this level of detail, genome sequencing centers need to release not only genome sequences but also detailed information about the accuracy of every nucleotide in those sequences, and about the genome assembly itself.
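As a toy illustration of that last point (my own sketch, not TIGR's pipeline): a candidate difference between two draft genomes is only worth trusting as a marker if the position is confidently called in both assemblies, which is exactly the per-nucleotide accuracy information the talk argues should be released.

```python
MIN_QUAL = 40   # hypothetical Phred-style quality threshold

def candidate_markers(seq_a, qual_a, seq_b, qual_b):
    """Positions where two aligned sequences differ and both genomes call
    the base with high confidence."""
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b))
            if a != b and qual_a[i] >= MIN_QUAL and qual_b[i] >= MIN_QUAL]

seq_a, qual_a = "ACGTTGCA", [60, 60, 12, 60, 60, 60, 60, 60]
seq_b, qual_b = "ACCTTGGA", [60, 60, 60, 60, 60, 60, 55, 60]
print(candidate_markers(seq_a, qual_a, seq_b, qual_b))
# [6] -- position 2 also differs, but its low quality in genome A marks it
# as a likely sequencing error rather than a real marker
```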

If time permits, I will discuss a new project to sample and sequence the genomes of 10,000 or more isolates of the human influenza A virus, in order to anticipate and help prevent future flu pandemics.

The Aphasia Project: Designing Technology for and with People who have Aphasia

Date and Time
Wednesday, November 19, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Joanna McGrenere, from University of British Columbia
Host
Maria Klawe
The Human-Computer Interaction community recognized long ago that it should play a role in the design, implementation, and evaluation of technology for users with cognitive disabilities. While there are some significant challenges to working with users who have cognitive disabilities, there has never been a more opportune time to embrace those challenges. Advances in computer technology, including the prevalence of low-cost handheld multimedia devices, combined with the increasing likelihood that individuals with acquired cognitive disabilities were computer-literate prior to acquiring the disability, suggest new opportunities for assistive technology to support these individuals in their daily activities.

The Aphasia Project is an exciting new multi-disciplinary research project operating out of the University of British Columbia and Princeton University. Our goal is to investigate how technology can be designed to support individuals with aphasia in their daily lives. Aphasia is a cognitive disorder, usually acquired as a result of a stroke, brain tumor, or other brain injury, that results in an impairment of language, affecting the production and/or comprehension of speech and/or written language. Most individuals with aphasia retain their comprehension of visual images, suggesting that multimedia technology may play a role in an assistive solution.

This talk will explore a number of research challenges in the design of assistive technology for individuals with aphasia. Key challenges include achieving effective design and evaluation for a user population with an extremely high degree of variance, and creating appropriate multi-disciplinary teams to carry out the research. We will describe our initial approaches to dealing with these challenges and some early research findings.

3D Scanning in Egypt and Image-based Object Editing

Date and Time
Wednesday, November 5, 2003 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Holly Rushmeier, from IBM T.J. Watson Research Center
Host
Thomas Funkhouser
I will present an overview of a project to capture and use digital models of museum artifacts for the purposes of education and public information. I will begin by describing the original design of a 3D scanning system now in use in Cairo's Egyptian Museum. The system captures both the geometry and the surface color and detail of museum artifacts. I will report on our experience using the system and present samples of how the processed 3D data will be used on a web site designed to communicate Egyptian culture.

One of the greatest problems we have faced in the project is the need to have edits on complex 3D objects specified by non-technical end-users such as historians. I will present a new approach we are exploring in which we convert the problem of editing a 3D object of arbitrary size and surface properties into a problem of editing a 2D image. We allow the user to specify small edits in both geometry and surface properties from any view and at any resolution they find convenient, regardless of the interactive rendering capability of their computer. We believe a purely image-based approach to 3D editing offers the potential to make graphics tools available to a wider population of end-users.
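One plausible reading of how such a 2D edit could map back to geometry (my assumptions throughout, since the abstract does not spell out the mechanism): render the object from the user's chosen view, let the user paint a displacement image, then offset each visible vertex along its normal by the painted amount.

```python
import numpy as np

def apply_image_edit(vertices, normals, project, displacement, visible):
    """vertices, normals: (n, 3) arrays; project(v) -> (row, col) pixel of
    vertex v in the edited view; displacement: 2D array painted by the
    user; visible: boolean mask of vertices seen from this view."""
    out = vertices.copy()
    for i in np.flatnonzero(visible):
        r, c = project(vertices[i])
        out[i] += displacement[r, c] * normals[i]   # push along the normal
    return out

# Toy orthographic example: paint a 0.1 bump onto the pixel of vertex 0.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
disp = np.zeros((4, 4)); disp[0, 0] = 0.1
proj = lambda v: (int(v[1]), int(v[0]))
print(apply_image_edit(verts, norms, proj, disp, np.array([True, True])))
```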