Towards Socially-aware Artificial Intelligence: Human Behavior Understanding at Scale

Date and Time
Wednesday, April 20, 2016 - 1:30pm to 2:30pm
Location
Computer Science 402
Type
Talk

Over the past sixty years, Intelligent Machines (IM) have made great progress in playing games, tagging images in isolation, and, recently, making decisions for self-driving vehicles. Despite these advancements, they are still far from making decisions in social scenes and assisting humans in public spaces such as terminals, malls, campuses, or any crowded urban environment. To overcome these limitations, we need to empower machines with social intelligence, i.e., the ability to get along well with others and facilitate mutual cooperation. This is crucial for designing smart spaces that adapt to the behavior of humans for efficiency, or for developing autonomous machines that assist in crowded public spaces (e.g., delivery robots or self-navigating Segways).

In this talk, I will present my work towards socially-aware machines that can understand human social dynamics and learn to forecast them. First, I will highlight the machine vision techniques behind understanding the behavior of more than 100 million individuals captured by multi-modal cameras in urban spaces. I will show how to use sparsity-promoting priors to extract meaningful information about human behavior from an overwhelming volume of high-dimensional, high-entropy data. Second, I will introduce a new deep learning method to forecast human social behavior. The causality behind human behavior is an interplay between observable and non-observable cues (e.g., intentions). For instance, when humans walk through crowded urban environments such as a busy train terminal, they obey a large number of (unwritten) common sense rules and comply with social conventions: they typically avoid crossing groups and keep a personal distance from their surroundings. I will present detailed insights on how to learn these interactions from millions of trajectories. I will describe a new recurrent neural network that can jointly reason about correlated sequences and simulate human trajectories in crowded scenes. It opens new avenues of research in learning the causalities behind the world we observe. I will conclude my talk by mentioning some ongoing work in applying these techniques to social robots and the first generation of smart hospitals.
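As a rough, much simplified illustration of sequence-based trajectory forecasting (a PyTorch sketch of a plain single-pedestrian predictor, not the joint model presented in the talk, which additionally couples the hidden states of nearby pedestrians):

    # Toy trajectory forecaster: an LSTM predicts a pedestrian's next (x, y)
    # position from its recent positions. Social interactions are omitted.
    import torch
    import torch.nn as nn

    class TrajectoryLSTM(nn.Module):
        def __init__(self, hidden_size=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, 2)      # next (x, y)

        def forward(self, positions):                  # positions: (batch, T, 2)
            out, _ = self.lstm(positions)
            return self.head(out[:, -1])               # prediction after the last observed step

    model = TrajectoryLSTM()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Synthetic stand-in data: noisy, roughly straight walks.
    obs = torch.cumsum(torch.randn(32, 8, 2) * 0.1 + 0.5, dim=1)
    target = obs[:, -1] + 0.5                          # hypothetical next step

    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(obs), target)
        loss.backward()
        optimizer.step()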

Alexandre Alahi is currently a postdoctoral fellow at Stanford University and received his PhD from EPFL. His research interests span visual information processing, computer vision, machine learning, and robotics, and are focused on understanding and forecasting human social behaviors. In particular, he is interested in sparse approximation, deep learning, big visual data processing, real-time machine vision, and multi-modal reasoning. He was awarded the Swiss NSF early and advanced researcher grants. He won the CVPR 2012 Open Source Award for his work on a retina-inspired image descriptor, and the ICDSC 2009 Challenge Prize for his sparsity-driven algorithm to track sport players. His PhD was nominated for the EPFL PhD prize. His research has been covered by the Wall Street Journal and the PBS TV channel in the US, as well as European newspapers, Swiss TV news, and the Euronews TV channel in Europe. He has also co-founded the startup Visiosafe, transferring his research on understanding human behavior into an industrial product. He was selected as one of the Top 20 Swiss Venture Leaders in 2010 and has won several startup competitions.

The New ABCs of Research: Achieving Breakthrough Collaborations

Date and Time
Thursday, April 21, 2016 - 8:00pm to 9:00pm
Location
Computer Science Large Auditorium (Room 104)
Type
Talk
All Princeton ACM / IEEE-CS meetings are open to the public. Students and their parents are welcome.

How should we organize research programs in computer science and technology? Research helps us produce innovative new products (the next iPhone), and research also delivers “foundational theories” (new algorithms and supporting technologies).

Solving the immense problems of the 21st century will require ambitious research teams that are skilled at producing practical solutions and foundational theories simultaneously – that is the ABC Principle: Applied & Basic Combined. Then these research teams can deliver high-impact outcomes by applying the SED Principle: Blend Science, Engineering and Design Thinking, which encourages use of the methods from all three disciplines. These guiding principles (ABC & SED) are meant to replace Vannevar Bush’s flawed linear model from 1945 that has misled researchers for 70+ years. These new guiding principles will enable students, faculty, business leaders, and government policy makers to accelerate discovery and innovation. The examples in the talk will emphasize how these guiding principles can be applied to reinvigorate computing research. 

Ben Shneiderman is a Distinguished University Professor in the Department of Computer Science at the University of Maryland. He is also the Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://www.cs.umd.edu/hcil/), and a Member of the University of Maryland Institute for Advanced Computer Studies (UMIACS). He is a Fellow of the AAAS, ACM, IEEE, and NAI, and a Member of the National Academy of Engineering, in recognition of his pioneering contributions to human-computer interaction and information visualization. His contributions include the direct manipulation concept, clickable highlighted web-links, touchscreen keyboards, dynamic query sliders for Spotfire, development of treemaps, novel network visualizations for NodeXL, and temporal event sequence analysis for electronic health records.

Ben is the co-author with Catherine Plaisant of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. Shneiderman’s latest book is The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, February 2016).

 

Abstractions for Programming the Data Plane at Line Rate

Date and Time
Tuesday, December 1, 2015 - 3:00pm to 4:00pm
Location
Sherrerd Hall 306
Type
Talk
Speaker
Anirudh Sivaraman, MIT CSAIL

The evolution of network switches has been driven primarily by performance. Recently, thanks in part to the emergence of large datacenter networks, the need for better control over network operations, and the desire for new features, programmability of switches has become as important as performance. In response, researchers and practitioners have developed reconfigurable switching chips that are performance-competitive with line-rate fixed function switching chips. These chips provide some programmability through restricted hardware primitives that can be configured with software directives.

This talk will focus on abstractions for programming such chips. The first abstraction, packet transactions, lets programmers express packet processing in an imperative language under the illusion that the switch processes exactly one packet at a time. A compiler then translates this programmer view into a pipelined implementation that processes multiple packets concurrently. The second abstraction, a push-in first-out queue (PIFO), lets programmers express new scheduling algorithms as a priority queue coupled with a program that computes each packet's priority. Together, these two abstractions allow us to express several packet-processing functions at line rate, including in-network congestion control, active queue management, data-plane load balancing, measurement, and packet scheduling.
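To make the second abstraction concrete, here is a minimal software sketch of a PIFO (my own illustration in Python, not the hardware design or language from the talk): a programmable rank function decides where each packet conceptually pushes in, and dequeue always removes the packet with the smallest rank. Changing the scheduling policy then amounts to swapping in a different rank function.

    # Illustrative software model of a push-in first-out (PIFO) queue.
    # The rank function is the programmable part; dequeue order is fixed.
    import heapq
    import itertools

    class PIFO:
        def __init__(self, rank_fn):
            self.rank_fn = rank_fn
            self.heap = []
            self.seq = itertools.count()   # tie-breaker: FIFO among equal ranks

        def enqueue(self, pkt):
            heapq.heappush(self.heap, (self.rank_fn(pkt), next(self.seq), pkt))

        def dequeue(self):
            return heapq.heappop(self.heap)[2]

    # Hypothetical rank function: strict priority by traffic class (lower is served first).
    pifo = PIFO(rank_fn=lambda pkt: pkt["class"])
    pifo.enqueue({"class": 2, "payload": "bulk"})
    pifo.enqueue({"class": 0, "payload": "voip"})
    print(pifo.dequeue()["payload"])       # -> voip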

This talk includes joint work with collaborators at MIT, Barefoot Networks, Cisco Systems, Microsoft Research, Stanford, and the University of Washington.

 

Anirudh Sivaraman is a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory. He is broadly interested in computer networking and his recent research work is in the area of programmable forwarding planes.

Behavioral Non-Portability in Scientific Numeric Computing

Date and Time
Friday, December 4, 2015 - 12:30pm to 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Talk
Host
Aarti Gupta

The precise semantics of floating-point arithmetic programs depends on the execution platform, including the compiler and the target hardware. Platform dependencies are particularly pronounced for arithmetic-intensive scientific numeric programs and infringe on the highly desirable goal of software portability (which is in fact promised by heterogeneous computing frameworks like OpenCL): the same program run on the same inputs on different platforms can produce different results. So far so bad.

Serious doubts on the portability of numeric applications arise when these differences are behavioral, i.e. when they lead to changes in the control flow of a program. In this work I will present an algorithm that takes a numeric procedure and determines an input that is likely to lead to different decisions depending merely on how the arithmetic in the procedure is compiled. Our implementation of the algorithm requires minimal intervention by the user. I will illustrate its operation on examples characteristic of scientific numeric computing, where control flow divergence actually occurs across different execution platforms.
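A tiny, hand-built example of the phenomenon (not one of the benchmarks from the talk): floating-point addition is not associative, so an evaluation order that a compiler is free to choose can push a value across a branch threshold and change the control flow.

    # Floating-point addition is not associative; different evaluation orders
    # (as different compilers or vectorizers might choose) can flip a branch.
    a, b, c = 1e16, -1e16, 1.0

    left_to_right = (a + b) + c     # 1.0
    reassociated  = a + (b + c)     # 0.0: c is absorbed when added to the large-magnitude b

    def control_flow(x, threshold=0.5):
        return "branch A" if x > threshold else "branch B"

    print(control_flow(left_to_right))   # branch A
    print(control_flow(reassociated))    # branch B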

Time permitting, I will also sketch how to prove the absence of inputs that may lead to control flow divergence, i.e. how to prove programs stable (in one of the many senses of this word). This is ongoing work.

Thomas Wahl is an Assistant Professor at Northeastern University. This is joint work with Yijia Gu, Mahsa Bayati, and Miriam Leeser at Northeastern.

A Fast Compiler for NetKAT

Date and Time
Monday, November 30, 2015 - 1:30pm to 2:30pm
Location
Sherrerd Hall 306
Type
Talk

In the past few years, high-level programming languages have played a key role in several networking platforms, by providing abstractions that streamline application development. Unfortunately, current compilers can take tens of minutes to generate forwarding state for even relatively small networks. This forces programmers to either work around performance issues or revert to using lower-level APIs.

In this talk, we first present a new compiler for NetKAT (a network programming language) that is two orders of magnitude faster than existing systems. Our key insight is a new intermediate representation based on a variant of binary decision diagrams that can represent network programs compactly and supports fast, algebraic manipulation. Using a diverse set of benchmarks, we demonstrate that our compiler scales to large networks.
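As a very rough sketch of the decision-diagram idea (a simplified Python illustration, not the compiler's actual data structure): interior nodes test one packet field against a constant and branch, while leaves carry the forwarding action, so equivalent sub-policies can be shared and manipulated algebraically.

    # Toy decision-diagram nodes for packet programs (illustrative only).
    class Leaf:
        def __init__(self, action):
            self.action = action
        def evaluate(self, pkt):
            return self.action

    class Test:
        def __init__(self, field, value, if_true, if_false):
            self.field, self.value = field, value
            self.if_true, self.if_false = if_true, if_false
        def evaluate(self, pkt):
            branch = self.if_true if pkt.get(self.field) == self.value else self.if_false
            return branch.evaluate(pkt)

    # "If dst = 10.0.0.1 forward on port 1; else if vlan = 2 forward on port 2; else drop."
    diagram = Test("dst", "10.0.0.1",
                   Leaf("fwd(1)"),
                   Test("vlan", 2, Leaf("fwd(2)"), Leaf("drop")))

    print(diagram.evaluate({"dst": "10.0.0.1"}))             # fwd(1)
    print(diagram.evaluate({"dst": "10.0.0.9", "vlan": 2}))  # fwd(2)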

In addition to speed, our new intermediate representation lets us build a powerful new abstraction for network programming. Existing languages provide constructs for programming individual switches, which forces programmers to specify whole-network behavior on a switch-by-switch basis. For the first time, we can compile programs that syntactically represent sets of end-to-end paths through the network.  To do so, our compiler automatically inserts stateful operations (e.g., VLAN tagging) to distinguish overlapping paths from each other.

Finally, we present a very general implementation of network virtualization that leverages our ability to compile end-to-end paths.  The key insight is to give packets two locations--physical and virtual--and synthesize a program that moves packets along physical paths to account for hops in the virtual network. We show that different synthesis strategies can be used to implement global requirements, such as shortest paths, load-balancing, and so on.

Arjun Guha is an assistant professor of Computer Science at UMass Amherst. He enjoys tackling problems in systems using the tools and principles of programming languages. Apart from network programming, he has worked on Web security and system configuration languages. He received a PhD in Computer Science from Brown University in 2012 and a BA in Computer Science from Grinnell College in 2006.

Types and abstractions for concurrent programming

Date and Time
Monday, November 16, 2015 - 4:30pm to 5:30pm
Location
Computer Science 302
Type
Talk
Speaker
Yaron Minsky, from Jane Street Capital

Programming Languages Seminar

Concurrent programming is the art of coordinating multiple processes into a single program, and is a pervasive part of practical programming.  It's also hard to do well, in part because good abstractions for concurrent programming are hard to come by.

This talk discusses Async, a concurrent programming library for OCaml, a statically typed functional language in the ML family. We'll discuss how Async compares to other approaches to concurrent programming, and how we leverage OCaml's type system to build high-quality abstractions for concurrency, without modification to the underlying language.

We'll also discuss how our approach to concurrent programming has evolved over the last decade, and what aspects of the API have turned out to be problematic.

On the Computational Complexity of some Consistency Properties in SDNs

Date and Time
Tuesday, October 27, 2015 - 4:30pm to 5:30pm
Location
Computer Science 302
Type
Talk
While Software Defined Networks are controlled centrally by one (logical) controller, the dissemination of updates in the network itself is an inherently asynchronous distributed process. Even though eventual consistency is easy to guarantee, many useful network properties might be violated during the update process. 
In this talk, we are going to look at the computational complexity of guaranteeing consistent updates for loop freedom and bandwidth limits. The best update schedules for loop freedom are hard to approximate, and the migration of unsplittable flows without congestion is not easy either. A different picture unfolds for splittable flows as used in, e.g., SWAN: it is possible to decide quickly whether a consistent migration exists, but the migration time can be unbounded. To address this, we show that any consistent migration can be performed in only a linear number of steps if restricted to one (logical) destination.
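As a small illustration of the loop-freedom property (a toy greedy scheduler in Python, not an algorithm from the talk): given old and new next hops toward a single destination, a node may be switched to its new rule only if the resulting mix of old and new rules still contains no forwarding loop.

    # Toy loop-free update scheduling toward one destination (illustrative only).
    def has_cycle(next_hop):
        """Does following next_hop pointers ever revisit a node?"""
        for start in next_hop:
            seen, node = set(), start
            while node in next_hop:
                if node in seen:
                    return True
                seen.add(node)
                node = next_hop[node]
        return False

    def greedy_loop_free_order(old, new):
        """Switch one node at a time, keeping every intermediate state loop-free."""
        current, pending, order = dict(old), set(old), []
        while pending:
            safe = next((n for n in sorted(pending)
                         if not has_cycle(dict(current, **{n: new[n]}))), None)
            if safe is None:
                return None          # no single-node update is currently safe
            current[safe] = new[safe]
            pending.remove(safe)
            order.append(safe)
        return order

    # next_hop[x] = None means x delivers directly to the destination.
    old = {"A": "B", "B": None, "C": "B"}
    new = {"A": None, "B": "A", "C": "A"}
    print(greedy_loop_free_order(old, new))
    # ['A', 'B', 'C']: updating B first from the old state would create the loop A -> B -> A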
 
Klaus-Tycho Foerster is a PhD candidate in the Computer Engineering and Networks Laboratory at ETH Zurich, advised by Roger Wattenhofer. Prior to joining ETH as a research assistant, he earned master's degrees in mathematics and computer science, both at the Braunschweig University of Technology. Klaus' research focus lies in distributed computing and graph algorithms and, more recently, Software-Defined Networking: funded by Microsoft Research, he is studying the complexity of consistent updates in SDNs.

Proving that programs eventually do something good

Date and Time
Tuesday, October 27, 2015 - 3:30pm to 4:30pm
Location
Computer Science 402
Type
Talk

In this talk I will discuss research advances that led to practical tools for automatically proving program termination and related properties, e.g. liveness. Practical applications range from automatically proving the correctness of device drivers to pharmaceutical research.
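As a minimal illustration of one idea behind such tools (my own example, not from the talk): a loop terminates if one can exhibit a ranking function that is bounded below and strictly decreases on every iteration.

    # Termination witness for a simple loop: the ranking function r = x - y
    # is positive whenever the loop body runs and decreases by 1 each iteration,
    # so the loop cannot run forever.
    def loop(x: int, y: int) -> int:
        while x > y:
            y += 1        # r = x - y strictly decreases
        return y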

Byron Cook is Professor of Computer Science at University College London.  Byron is also a Senior Principal at Amazon.  See http://www0.cs.ucl.ac.uk/staff/b.cook/ for more information.

ParaDrop: Enabling Third Party Apps in Home WiFi Gateways for Fun and Profit

Date and Time
Tuesday, October 20, 2015 - 3:00pm to 4:00pm
Location
Computer Science 402
Type
Talk
Host
Kyle Jamieson
Over the last few years, we have been building a new home WiFi router platform that allows third-party applications and services to be deployed on it. The system, called ParaDrop, is implemented entirely in software and is an extreme form of the popular notion of edge computing. It leverages popular concepts of virtualization and software-defined networking, and it also integrates techniques for better monitoring, management, and mitigation of the RF interference experienced in these environments. It hopes to be to the home router market what smartphones were to the cellphone market. We believe that ParaDrop can be a significant enabler for emerging smart-home services that take advantage of Internet of Things applications. Furthermore, the platform can also be leveraged to deliver more hands-on educational content on emerging topics such as the Internet of Things, mobile development, mobile and wireless systems design, and even basic computer networking. In this talk, I will describe some of the technical developments in creating this platform and outline our ongoing activities.
 
Suman Banerjee is a Professor in Computer Sciences at UW-Madison, where he is the founding director of the WiNGS laboratory, which broadly focuses on research in wireless and mobile networking systems. He received his undergraduate degree from IIT Kanpur, and MS and PhD degrees from the University of Maryland. He is the inaugural recipient of the ACM SIGMOBILE Rockstar award and a recipient of the NSF CAREER Award. He has received multiple paper awards at various conferences, such as ACM MobiCom, ACM CoNEXT, and IEEE DySPAN. Further, technology developed by Prof. Banerjee has won various accolades, including first prize at the Wisconsin Governor's Business Plan Competition in 2011 and at the Interdigital Innovation Challenge in 2012. He is currently serving as the chair of ACM SIGMOBILE.

Building efficient network dataplanes in software

Date and Time
Tuesday, September 1, 2015 - 12:30pm to 1:30pm
Location
Computer Science 402
Type
Talk
Host
Jennifer Rexford

In this talk we give a survey of the solutions -- and especially discuss the underlying design principles -- that we have developed in recent years to achieve extremely high packet-processing rates in commodity operating systems, for both bare metal and virtual machines.

By using simple abstractions, and by resisting the temptation to design systems around performant but constraining assumptions, we have been able to build a very flexible framework that addresses the communication speed and latency requirements of both bare metal and virtual machines up to the 100 Gbit/s range. Our goal is not to provide the fastest framework in the universe, but one that is easy to use and rich in features (and still amazingly fast), taking concerns about dataplane speed away from (reasonable) network applications.

Our netmap framework, open source and BSD licensed, runs on three OSes (FreeBSD, Linux, and Windows); provides access to NICs, the host stack, virtual switches, and point-to-point channels (netmap pipes), running at between 20 and over 100 Mpps on software ports and saturating NICs with just a single core (up to 45 Mpps reported on recent 40G NICs); achieves bare-metal speed on VMs thanks to a virtual passthrough mode (available for QEMU and bhyve); and can be used without modification by libpcap clients.

Luigi Rizzo is an associate professor at the Università di Pisa. He has worked on network emulation, high performance networking, packet scheduling, multicast and reliable multicast. He is a long time contributor to FreeBSD, for which he has developed several subsystems including the dummynet network emulator, the ipfw firewall, and the netmap framework. He has been a program committee member for SIGCOMM, CoNEXT, INFOCOM, NSDI, USENIX ATC, ANCS, and other conferences, as well as PC chair for SIGCOMM 2009, CoNEXT 2014, and ANCS 2016, and general chair for SIGCOMM 2006. Luigi has been a frequent visiting researcher at various institutions, including ICSI/UC Berkeley, Google Mountain View, Intel Research Cambridge, and Intel Research Berkeley.
