Managing Sensor Network Resource Usage and Monitoring Active Volcanoes

Date and Time
Tuesday, December 15, 2009 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Speaker
Geoffrey Challen, from Harvard University
Host
Jennifer Rexford
Sensor networks composed of large numbers of self-organizing embedded devices are an increasingly valuable tool for understanding our world. Deployed networks allow scientists to observe phenomena at a scale and resolution that challenge existing instrumentation. Some call this new instrument the macroscope. My project uses sensor networks to monitor active volcanoes. Due to the high data rates and stringent fidelity requirements of this application, providing output suitable for scientific analysis requires carefully directing the limited resources available at each node. In this talk I will present Lance, a general approach to bandwidth and energy management targeting reliable data collection for sensor networks. By combining an application-level determination of value with a system-level estimation of cost, Lance maximizes the value of the data returned to the application by optimally allocating bandwidth and energy devoted to signal collection. Lance's design decouples data collection policy from mechanism, allowing its optimization metrics to be customized to suit a variety of application goals. I will motivate and describe the Lance architecture, present results from the lab and the field, and discuss continuing efforts in this area, including single-node and network-wide architectures for distributed energy management.
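
Lance's core allocation step is easy to picture with a short sketch. The following Python fragment is an illustration only; the data-unit structure and function names are my assumptions, not Lance's actual API. It greedily retrieves the data units with the best value-to-cost ratio until an energy budget is exhausted.

```python
# Hypothetical sketch of Lance-style value/cost selection; DataUnit and
# select_units are illustrative names, not Lance's API.
from dataclasses import dataclass

@dataclass
class DataUnit:
    node_id: int
    value: float  # application-assigned scientific value
    cost: float   # estimated energy to retrieve the unit (assumed > 0)

def select_units(units, energy_budget):
    """Greedily pick units by value/cost ratio until the budget runs out."""
    chosen = []
    for u in sorted(units, key=lambda u: u.value / u.cost, reverse=True):
        if u.cost <= energy_budget:
            chosen.append(u)
            energy_budget -= u.cost
    return chosen
```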

Bio: Geoffrey Challen (né Werner-Allen) is a Ph.D. Candidate in Computer Science at the Harvard University School of Engineering and Applied Sciences, advised by Matt Welsh. His research addresses the systems and networking challenges necessary to enable high-fidelity sensing applications, focusing specifically on maximizing the usage of the limited resources available to sensor network nodes. Working with geoscientists, he has helped perform three sensor network deployments on active Ecuadorean volcanoes. He built and maintains MoteLab, a wireless sensor network testbed used by researchers worldwide, and is a co-editor of a forthcoming book on sensor network deployments. Geoffrey is a 2009 Siebel Fellow, and a Resident Tutor at Eliot House.

Interdomain Routing: Stability, Security and Selfishness

Date and Time
Wednesday, December 16, 2009 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Speaker
Michael Schapira, from Yale University and UC Berkeley
Host
Jennifer Rexford
The Border Gateway Protocol (BGP) establishes routes between the many independently-administered networks that make up the Internet. Over the past two decades there has been exponential growth in the scale and complexity of the Internet. However, BGP has not changed significantly in comparison and, consequently, does not always cope well with modern-day challenges (bounded computational resources, economically driven manipulations, security attacks, and more). Understanding, "fixing" and redesigning interdomain routing necessitates a principled approach that bridges theory and systems research and breaks traditional disciplinary barriers. I shall present conceptual frameworks for addressing today's challenges and novel routing schemes that improve on the existing ones. Specifically, I shall present (1) a necessary condition for BGP safety, i.e., guaranteed BGP convergence to a "stable" routing outcome; (2) an economic approach to BGP security; and (3) Neighbor-Specific BGP -- a modest extension to BGP that gives network administrators more expressive routing policies while improving global network stability. I shall also discuss the surprising implications of these results for other areas of research: distributed computing, game theory, and mechanism design. Finally, I shall outline interesting directions for future work.
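
To make the Neighbor-Specific BGP idea concrete, here is a toy Python sketch; all names are my assumptions, not from the talk. Instead of electing one best route shared by every neighbor, the router matches each neighbor with the route that ranks highest under that neighbor's own policy.

```python
# Toy illustration of neighbor-specific route selection; in standard BGP
# a router announces the same single best route to all neighbors.
def best_route_per_neighbor(candidate_routes, rankings):
    """rankings maps neighbor -> scoring function over routes."""
    return {neighbor: max(candidate_routes, key=score)
            for neighbor, score in rankings.items()}

routes = [("via_A", 3), ("via_B", 1)]  # (next hop, path length)
rankings = {
    "customer1": lambda r: -r[1],                     # prefers shorter paths
    "peer2": lambda r: 1 if r[0] == "via_A" else 0,   # prefers routes via A
}
print(best_route_per_neighbor(routes, rankings))
# {'customer1': ('via_B', 1), 'peer2': ('via_A', 3)}
```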

Stable Internet Routing Without Global Coordination

Date and Time
Thursday, December 10, 2009 - 4:30pm to 5:30pm
Location
Sherrerd Hall 101
Type
Talk

Incentive Compatibility and Dynamics of Internet Protocols

Date and Time
Tuesday, November 24, 2009 - 3:00pm to 4:00pm
Location
Computer Science 402
Type
Talk
Speaker
Aviv Zohar, from Hebrew University
Host
Jennifer Rexford
The internet, a large and distributed system, is not controlled by a single economic entity but rather by multiple agents, each with its own agenda. From this point of view, internet protocols are merely recommendations that agents can choose to ignore if they have anything to gain by doing so. Protocols must therefore be designed to include the right incentives, or agents may decide not to follow them. The talk will mainly focus on a model of congestion control in networks, where the interested agents are the end-hosts who try to maximize their throughput. In this setting, the packet dropping policies of the routers greatly affect the incentives of agents and the convergence properties of the network. I will discuss conditions under which congestion control schemes can be both efficient, so that capacity is not wasted, and incentive compatible, so that each participant can maximize its utility by following the prescribed protocol. As in other protocols, questions of incentive compatibility are often intrinsically linked to the convergence of the network dynamics when agents follow the protocol, and the understanding of one issue goes hand in hand with the understanding of the other.

Based on work with Brighten Godfrey, Hagay Levin, Jeff Rosenschein, Rahul Sami, Michael Schapira, and Scott Shenker.
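
One way to see how a router's dropping policy shapes incentives is a toy queue that, on overflow, drops from the flow occupying the most buffer space, so aggressive senders absorb their own losses. The sketch below is my own illustration, not a policy described in the talk.

```python
# Toy queue that penalizes the largest flow on overflow (illustrative).
from collections import Counter, deque

class LargestFlowDropQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()        # (flow_id, packet)
        self.occupancy = Counter()  # queued packets per flow

    def enqueue(self, flow_id, packet):
        if len(self.queue) >= self.capacity:
            victim = max(self.occupancy, key=self.occupancy.get)
            for i, (f, _) in enumerate(self.queue):
                if f == victim:
                    del self.queue[i]  # drop one packet of the largest flow
                    break
            self.occupancy[victim] -= 1
            if self.occupancy[victim] == 0:
                del self.occupancy[victim]
        self.queue.append((flow_id, packet))
        self.occupancy[flow_id] += 1
```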

Highly Available Byzantine Fault Tolerant Distributed Systems

Date and Time
Tuesday, December 1, 2009 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Speaker
Atul Singh, from NEC Labs (Princeton)
Host
Michael Freedman
Many distributed services are hosted at large, shared, geographically diverse data centers, and they use replication to achieve high availability despite the unreachability of an entire data center. Recent events show that non-crash faults occur in these services and may lead to long outages; for example, Amazon's S3 service was recently down for at least seven hours due to a Byzantine fault in its servers. While Byzantine-Fault Tolerance (BFT) could be used to withstand these faults, current BFT protocols can become unavailable if a small fraction of their replicas are unreachable. This is because existing BFT protocols favor strong safety guarantees (consistency) over liveness (availability).

In this talk, I will present a novel BFT state machine replication protocol called Zeno that trades consistency for higher availability. In particular, Zeno replaces strong consistency (linearizability) with a weaker guarantee (eventual consistency): clients can temporarily miss each other's updates, but when the network is stable, the states from the individual partitions are merged by having the replicas agree on a total order for all requests. Evaluation of a prototype of Zeno shows that it provides better availability than traditional BFT protocols.
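
As a deliberately simplified picture of the eventual-consistency guarantee (Zeno itself runs an agreement protocol among replicas rather than sorting by timestamp, so treat this as an assumption-laden sketch), histories committed on each side of a partition can be merged into one total order and replayed:

```python
# Toy merge of per-partition request logs into a single total order.
def merge_histories(partition_logs):
    """partition_logs: lists of (timestamp, client_id, request) tuples."""
    return sorted({req for log in partition_logs for req in log})

left = [(1, "c1", "put x=1"), (3, "c1", "put x=2")]
right = [(2, "c2", "put y=9")]
print(merge_histories([left, right]))
# [(1, 'c1', 'put x=1'), (2, 'c2', 'put y=9'), (3, 'c1', 'put x=2')]
```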

Bio:
Atul Singh is a Researcher at NEC Labs, Princeton. He received his Ph.D. in Computer Science from Rice University and spent the last two years visiting the Max Planck Institute for Software Systems (MPI-SWS), Saarbrucken, Germany. Before that, he spent two years visiting Intel Research Berkeley, working with the P2 group. His interests lie in dependable distributed systems, overlay networks, and declarative networking; he is currently focusing on exciting challenges emerging in the cloud computing arena.

Classification of Multivariate Time Series via Temporal Abstraction

Date and Time
Thursday, November 12, 2009 - 11:00am to 12:00pm
Location
Computer Science 402
Type
Talk
Speaker
Robert Moskovitch, from the Medical Informatics Research Center, Ben-Gurion University, Israel
Analysis of multivariate time-stamped data, for purposes such as temporal knowledge discovery, classification, and clustering, presents many challenges. Time-stamped data can be sampled at a fixed frequency, commonly when measured by electronic sensors, but also at a non-fixed frequency, often when measurements are made manually. Additionally, raw temporal data can represent periods of a continuous or nominal value represented by time intervals. Temporal abstraction, in which time-point series are abstracted into meaningful time intervals, is used to bring all the temporal variables to the same representation. In this talk I will present KarmaLego, a fast time-interval mining method for the discovery of non-ambiguous Time Interval Related Patterns (TIRPs) represented by Allen's temporal relations. I will then present several uses of the TIRPs, focusing on the classification of multivariate time series, in which TIRPs are used as classification features. The entire process and several computational abstraction methods for the task of classification will be presented. Finally, an evaluation on real datasets from the medical domain will be presented, in addition to meaningful examples of discovered temporal knowledge.
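
To give a flavor of the machinery, the sketch below classifies the Allen relation between two intervals; it covers only a subset of Allen's thirteen relations (containment here lumps together "starts" and "finishes"), and the names are mine rather than KarmaLego's.

```python
# Minimal classifier for a subset of Allen's interval relations.
def allen_relation(a, b):
    """a and b are (start, end) intervals with start < end."""
    if a[1] < b[0]:
        return "before"
    if a[1] == b[0]:
        return "meets"
    if a == b:
        return "equals"
    if a[0] <= b[0] and b[1] <= a[1]:
        return "contains"
    if a[0] < b[0] < a[1] < b[1]:
        return "overlaps"
    return "other"

print(allen_relation((0, 5), (5, 9)))  # meets
print(allen_relation((0, 9), (2, 4)))  # contains
```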

The Economics of Virtualization

Date and Time
Thursday, November 19, 2009 - 12:30pm to 1:30pm
Location
Sherrerd Hall 101
Type
Talk
Speaker
Paul Laskowski
Host
Edward Felten, CITP
The difficulty of making changes to the internet architecture has spawned widespread interest in large-scale, "virtualized" testbeds as a place to deploy new services. Despite the excitement, uncertainty surrounds the question of how technologies can bridge the gap from testbed to global availability. It is recognized that no amount of validation will spur today's ISPs to make architectural changes, so if new services are to reach a widespread audience, the testbed itself must provide that reach. This suggests two questions: First, would today's network providers (or a new set of providers) ever support a virtualized architecture on a global scale? Second, even if they did, would such a network, spanning a great many domains, support the adoption of new services or upgrades to the infrastructure?

In this talk, I will argue that the answers to these questions depend critically on how money flows to network and service providers. A novel economic theory, rooted in the classic model of Cournot competition, allows us to compare market types with regard to service innovation and network upgrade. According to this analysis, there is a danger that a virtualized testbed inherits the market structure prevalent in the internet architecture, causing investment levels to remain poor. On the other hand, alternate market designs can dramatically improve incentives to invest in services, and even in network upgrades, but may encounter resistance from network providers who are likely to see reduced profits. I will discuss how these alternate market types may be implemented.
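
For readers unfamiliar with the baseline model, here is a standard textbook Cournot duopoly worked in Python; it is a generic illustration, not the talk's actual market model. Two identical providers choose quantities under inverse demand P = a - b(q1 + q2) with marginal cost c, and the symmetric equilibrium quantity is (a - c) / (3b).

```python
# Textbook symmetric Cournot duopoly (generic illustration).
a, b, c = 100.0, 1.0, 10.0     # demand intercept, slope, marginal cost

q_star = (a - c) / (3 * b)     # each firm's equilibrium quantity
price = a - b * (2 * q_star)   # market price at total output 2*q_star
profit = (price - c) * q_star  # per-firm profit

print(q_star, price, profit)   # 30.0 40.0 900.0
```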

RouteBricks: Exploiting Parallelism to Scale Software Routers

Date and Time
Tuesday, October 20, 2009 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Speaker
Katerina Argyraki, from EPFL
I will discuss the problem of building fast, programmable routers. I will present RouteBricks, a high-end router architecture that consists entirely of commodity servers. RouteBricks achieves high performance by parallelizing router functionality both across multiple servers and across multiple cores within a single server; it is fully programmable using the familiar Click/Linux environment. I will also present RB4, our 4-server prototype that routes at 35 Gbps; this routing capacity can be linearly scaled through the use of additional servers.
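
A toy sketch of the parallelism involved (my own illustration, not RouteBricks code): hash each packet's flow identifier to pick a server and a core, so a single flow stays on one core while aggregate load spreads across the cluster.

```python
# Illustrative flow-to-server/core assignment by hashing the 5-tuple.
import hashlib

NUM_SERVERS, CORES_PER_SERVER = 4, 8

def assign(five_tuple):
    digest = hashlib.md5(repr(five_tuple).encode()).hexdigest()
    h = int(digest, 16)
    return h % NUM_SERVERS, (h // NUM_SERVERS) % CORES_PER_SERVER

print(assign(("10.0.0.1", "10.0.0.2", 1234, 80, "tcp")))  # (server, core)
```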

Measurement Methods for Fast and Accurate Blackhole Identification with Binary Tomography

Date and Time
Wednesday, November 11, 2009 - 11:00am to 12:00pm
Location
Computer Science 302
Type
Talk
Speaker
Italo Cunha, from UPMC Paris Universitas
Binary tomography -- the process of identifying faulty network links through coordinated end-to-end probes -- is a promising method for detecting failures that the network does not automatically mask (e.g., network "blackholes"). Because tomography is sensitive to the quality of the input, however, naive end-to-end measurements can introduce inaccuracies. This talk presents two methods for generating inputs to binary tomography algorithms that improve their inference speed and accuracy. Failure confirmation is a per-path probing technique to distinguish packet losses caused by congestion from persistent link or node failures. Aggregation strategies combine path measurements from unsynchronized monitors into a set of consistent observations. When used in conjunction with existing binary tomography algorithms, our methods identify all failures that are longer than two measurement cycles, while inducing relatively few false alarms. In two wide-area networks, our techniques decrease the number of alarms by as much as two orders of magnitude. Compared to the state of the art in binary tomography, our techniques increase the identification rate and avoid hundreds of false alarms.
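
The inference step that these measurements feed can be sketched as a greedy set cover; this is an illustration of the general binary-tomography idea, not the paper's exact algorithm. The goal is a small set of links that explains every failed path while exonerating links seen on working paths.

```python
# Greedy binary-tomography sketch: blame few links for all failed paths.
def infer_faulty_links(paths):
    """paths: list of (set_of_links, ok) end-to-end observations."""
    ok_sets = [links for links, ok in paths if ok]
    good = set().union(*ok_sets) if ok_sets else set()
    failed = [links - good for links, ok in paths if not ok]
    blamed = set()
    while any(failed):  # unexplained, non-empty failure sets remain
        suspects = set().union(*failed)
        best = max(suspects, key=lambda l: sum(l in f for f in failed))
        blamed.add(best)
        failed = [f for f in failed if best not in f]
    return blamed

obs = [({"a", "b"}, True), ({"b", "c"}, False), ({"c", "d"}, False)]
print(infer_faulty_links(obs))  # {'c'}
```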

Bio
Italo Cunha received the B.Sc. degree in computer science and the M.Sc. degree in computer science from Universidade Federal de Minas Gerais, Brazil, in 2004 and 2007, respectively. He is currently a second year Ph.D. candidate in UPMC Paris Universitas, doing his research with Thomson, Paris, France. His research interests are in network measurement, troubleshooting, and management.

Avenger: Catching Bugs in Distributed Systems Via Almost-Invariant Inference

Date and Time
Wednesday, October 21, 2009 - 11:00am to 12:00pm
Location
Computer Science 302
Type
Talk
Speaker
Marco Canini, from EPFL
Host
Jennifer Rexford
It is notoriously hard to develop dependable distributed systems. This is partly due to the difficulty of foreseeing the various corner cases and failure scenarios that arise while the system is running over an asynchronous network. Reasoning about a distributed system's invariants is easier than reasoning about the code itself, and the invariants can be used for debugging, theorem proving, and runtime enforcement. In this talk, I'll introduce an approach that observes the behavior of a system and automatically infers invariants that reveal bugs in the system's current implementation. Using our tool, Avenger, we automatically generate a large number of potentially relevant properties, check them against traces of system executions, and filter out all but a few properties before reporting them to the developer. Our key insight in filtering is that a good candidate for an invariant is one that holds in all but a few cases, i.e., an "almost-invariant". Our experimental results with the BGP, RandTree, and Chord implementations demonstrate Avenger's ability to identify almost-invariants that lead the developer to programming errors.
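
The filtering insight is simple to sketch; the fragment below is my illustration of the idea, not Avenger's implementation. A candidate property violated on only a handful of observed states is reported, since such near-misses often point at bugs, while properties that always hold or frequently fail are filtered out.

```python
# Keep properties that hold in all but a few observed states.
def almost_invariants(properties, states, max_violations=3):
    reported = []
    for name, predicate in properties:
        violations = [s for s in states if not predicate(s)]
        if 0 < len(violations) <= max_violations:
            reported.append((name, violations))
    return reported

states = [{"succ": i % 7} for i in range(100)] + [{"succ": None}]
props = [("successor is set", lambda s: s["succ"] is not None)]
print(almost_invariants(props, states))  # one near-miss reported
```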

Bio:
Marco Canini received the "laurea" degree in Computer Science and Engineering from the University of Genoa, Italy. He holds a Ph.D. from the Department of Communications, Computer and Systems Science (DIST) of the University of Genoa. During his Ph.D., he was an invited visitor at the University of Cambridge Computer Laboratory. He is now with the Networked Systems Laboratory at EPFL, Switzerland. His research focuses on computer networking, with emphasis on rethinking Internet fundamentals to include power awareness and improve the Internet's energy efficiency, methods for Internet traffic classification based on application identification, design of network monitoring applications, and graphical visualization of networking data.
