Colloquium

Leonardo's Laptop: Human Needs and the New Computing Technologies

Date and Time
Tuesday, November 30, 2004 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Ben Shneiderman, from University of Maryland
Host
Olga Troyanskaya
Leonardo's Laptop: Human Needs and the New Computing Technologies http://mitpress.mit.edu/leonardoslaptop

Winner of IEEE book award for "Distinguished Literary Contribution furthering Public Understanding of the Profession"

Ben Shneiderman (ben@cs.umd.edu) Department of Computer Science, Human-Computer Interaction Laboratory, Institute for Advanced Computer Studies & Institute for Systems Research University of Maryland, College Park, MD 20742

The old computing was about what computers could do; the new computing is about what people can do.

To accelerate the shift from the old to the new computing, designers need to:

1) Reduce computer user frustration. Recent studies show that 46% of time is lost to crashes, confusing instructions, navigation problems, etc. Public pressure for change could promote design improvements and increase reliability, thereby dramatically enhancing user experiences.

2) Promote universal usability. Interfaces must be tailorable to a wide range of hardware, software, networks, and users. When broad services such as voting, healthcare, and education are envisioned, the challenge to designers is substantial.

3) Envision a future in which human needs more directly shape technology evolution. Four circles of human relationships and four human activities map out the human needs for mobility, ubiquity, creativity, and community. The World Wide Med and million-person communities will be accessible through desktop, palmtop and fingertip devices to support e-learning, e-business, e-healthcare, and e-government.

Leonardo da Vinci could serve as an inspirational muse for the new computing. His example could push designers to improve quality through scientific study and more elegant visual design, and can guide us to the new computing, which emphasizes empowerment, creativity, and collaboration. Information visualization and personal photo interfaces will be shown: PhotoMesa (www.cs.umd.edu/hcil/photomesa) and PhotoFinder (www.cs.umd.edu/hcil/photolib).

For more: http://mitpress.mit.edu/leonardoslaptop http://www.cs.umd.edu/hcil/newcomputing

Ben Shneiderman is a Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://www.cs.umd.edu/hcil/), and Member of the Institute for Advanced Computer Studies and the Institute for Systems Research, all at the University of Maryland, College Park. He was elected a Fellow of the Association for Computing Machinery (ACM) in 1997 and a Fellow of the American Association for the Advancement of Science (AAAS) in 2001. He received the ACM SIGCHI Lifetime Achievement Award in 2001.

Ben is the author of "Software Psychology: Human Factors in Computer and Information Systems" (1980) and "Designing the User Interface: Strategies for Effective Human-Computer Interaction" (4th ed., 2004), http://www.awl.com/DTUI/. He pioneered the highlighted textual link in 1983, and it became part of Hyperties, a precursor to the web. His move into information visualization helped spawn the successful company Spotfire, http://www.spotfire.com/. With S. Card and J. Mackinlay, he co-authored "Readings in Information Visualization: Using Vision to Think" (1999). "Leonardo's Laptop" (MIT Press) appeared in October 2002, and his new book with B. Bederson, "The Craft of Information Visualization," was published in April 2003.

Correctness of an Operating System Microkernel

Date and Time
Wednesday, November 10, 2004 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Wolfgang Paul, from Saarland University
Host
Nicholas Pippenger
We outline the paper-and-pencil correctness proof for an operating system microkernel written in C. The key ingredients of the proof are:

1) A simulation of virtual machines by physical processors with swap memory. Here physical memory serves as a cache for virtual memory. The correctness proof argues simultaneously about hardware (memory management units) and low-level software (page fault handlers).

2) Semantics and a provably correct compiler for an appropriate subset of C. One needs small-step semantics (structured operational semantics), because the computations of the kernel need to be interleaved with computations of the users. One also needs to consider inline assembler code, because the variables of an abstract C machine do not show the user processes. The correctness proof for the compiler is nontrivial, because the recursion of the small-step semantics does not match the recursion of the code generation very well.

3) The definition of an abstract machine model for an operating system kernel providing multiprocessing and virtual memory. In this model, interrupt handling and function calls can be treated in a very uniform way. The simulation of this model by a physical machine combines the techniques from the first two parts.

This is joint work with M. Gargano, M. Hillebrand, and D. Leinenbach. It is part of a large project funded by the German national government, which aims at the formal verification of entire computer systems consisting of hardware, system software, communication systems, and applications.
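Ingredient 1), physical memory serving as a cache for virtual memory, can be pictured with a toy model. This is a sketch only: the frame count, page contents, and FIFO eviction policy are illustrative assumptions, not the verified kernel's actual design.

```python
# Toy model of demand paging: physical frames cache virtual pages,
# with swap as the backing store. A page fault swaps the missing
# page in, evicting the oldest resident page (FIFO) if necessary.
from collections import OrderedDict

class VirtualMemory:
    def __init__(self, num_frames, swap):
        self.num_frames = num_frames
        self.swap = swap                     # page number -> contents
        self.frames = OrderedDict()          # resident pages, FIFO order

    def read(self, page):
        if page not in self.frames:          # page fault
            self.page_fault_handler(page)
        return self.frames[page]

    def page_fault_handler(self, page):
        if len(self.frames) == self.num_frames:
            victim, data = self.frames.popitem(last=False)
            self.swap[victim] = data         # write victim back to swap
        self.frames[page] = self.swap[page]  # swap the missing page in

swap = {0: "a", 1: "b", 2: "c"}
vm = VirtualMemory(num_frames=2, swap=swap)
```

The simulation property the proof needs is visible even in this toy: every virtual page is readable regardless of residency, so the paged machine behaves like one with memory large enough to hold all pages.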

Unlocking the Clubhouse: Women in Computing

Date and Time
Thursday, November 4, 2004 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Allan Fisher, from CMU
Host
Adam Finkelstein
While men and women use desktop and Internet applications at nearly equal rates, the participation gap in the development of new computing technologies and applications remains undiminished, and may even be widening. For example, fewer than 20% of the computer science degrees awarded by PhD-granting departments go to women. In the absence of formal barriers, why do so few women choose computer science as a field of study or as a career? Why should we care? What should we do? This talk will address these questions in the context of research and institutional change efforts at Carnegie Mellon's School of Computer Science.

Event (no name)

Date and Time
Wednesday, October 20, 2004 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Bill Cheswick, from Lumeta
Host
Edward Felten

Toward Privacy in Public Databases

Date and Time
Thursday, October 14, 2004 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Cynthia Dwork, from Microsoft Research
Host
Sanjeev Arora
We will describe definitions and algorithmic results for the "census problem". Informally, in a census individual respondents give private information to a trusted (and trustworthy) party, who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Unlike in the study of secure function evaluation, in which privacy is preserved to the extent possible given a specific functionality goal, in the census problem privacy is paramount; intuitively, things that cannot be learned "safely" should not be learned at all.

The definition of privacy and the requirements for a safe sanitization are important contributions of this work. Our definition of privacy formalizes the notion of protection from being brought to the attention of others -- one's privacy is maintained to the extent that one blends in with the crowd. Our definition of a safe sanitization emulates the definition of semantic security for a cryptosystem and says, roughly, that an adversary given access to the sanitized data is not much more able to compromise privacy than an adversary who is not given access to the sanitization.
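The privacy/utility tension the abstract describes can be illustrated with classic randomized response. To be clear, this is not the sanitization mechanism from the talk, just a textbook example of publishing useful aggregates while each individual's report stays deniable; the parameters and data are invented.

```python
# Randomized response: each respondent reports their true bit with
# probability p_truth, otherwise the flipped bit. No single report
# reveals the respondent's true answer, yet the analyst can invert
# the known noise to estimate the population fraction.
import random

def randomized_response(bit, p_truth=0.75):
    return bit if random.random() < p_truth else 1 - bit

def estimate_fraction(reports, p_truth=0.75):
    # E[report] = p_truth * f + (1 - p_truth) * (1 - f); solve for f.
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth)) / (2 * p_truth - 1)

random.seed(0)
truth = [1] * 600 + [0] * 400    # 60% of respondents hold the attribute
reports = [randomized_response(b) for b in truth]
est = estimate_fraction(reports)  # close to 0.6
```

The estimate is useful (utility), while any individual can plausibly claim their report was noise (a crude form of "blending in with the crowd").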

The Risks of Electronic Voting

Date and Time
Monday, October 11, 2004 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Dan Wallach, from Rice University
Host
Andrew Appel
Recent election problems have sparked great interest in managing the election process through the use of electronic voting systems. While computer scientists, for the most part, have been warning of the perils of such action, vendors have forged ahead with their products, claiming increased security and reliability. Many municipalities have adopted electronic systems, and the number of deployed systems is rising. For these new computerized voting systems, neither source code nor the results of any third-party certification analyses have been available for the general population to study, because vendors claim that secrecy is a necessary requirement to keep their systems secure. Recently, however, the source code for a voting system from Diebold, a major manufacturer, appeared on the Internet. Diebold currently has around 30% of the market for electronic voting systems in the U.S.

This unique opportunity for independent scientific analysis of voting system source code demonstrates the fallacy of the closed-source argument for such a critical system. Our analysis shows that this voting system is far below even the most minimal security standards applicable in other contexts. We highlight several issues including unauthorized privilege escalation, incorrect use of cryptography, vulnerabilities to network threats, and poor software development processes. For example, common voters, without any insider privileges, can cast unlimited votes without being detected by any mechanisms within the voting terminal. Furthermore, we show that even the most serious of our outsider attacks could have been discovered without the source code. In the face of such attacks, the usual worries about insider threats are not the only concerns; outsiders can do the damage. That said, we demonstrate that the insider threat is also quite considerable. We conclude that, as a society, we must carefully consider the risks inherent in electronic voting, as it places our very democracy at risk.

Bio: Dan Wallach is an Assistant Professor in the Department of Computer Science at Rice University in Houston, Texas. He earned his bachelor's at the University of California, Berkeley and his PhD at Princeton University. His research involves computer security and the issues of building secure and robust software systems for the Internet. Wallach began his security career in 1995 when he and his colleagues found serious flaws in the security of Java applets; an attacker could use your web browser to hijack your entire computer. Wallach has also studied security issues that occur in distributed and peer-to-peer systems. Wallach, along with colleagues at Johns Hopkins, co-authored a groundbreaking study that reported significant flaws in Diebold's AccuVote-TS electronic voting system. He has testified about voting security issues before government bodies in the U.S., Mexico, and the European Union.

Google: A Computer Scientist's Playground

Date and Time
Wednesday, October 6, 2004 - 4:00pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Craig Nevill-Manning, from Google
Host
Szymon Rusinkiewicz
Google faces two large technical challenges: ensuring that our search results are as relevant as possible, and serving hundreds of millions of queries in a fraction of a second at a reasonable cost. To solve the first problem, we perform an offline matrix computation to produce PageRank, a query-independent measure of page reputation, and combine it with more traditional query-specific scoring. To solve the distributed computing problem, we use tens of thousands of commodity PCs and highly fault-tolerant software. I will discuss some details of these solutions, and also share some interesting statistical tidbits about search and the web.

Bio: Dr. Craig Nevill-Manning is a Senior Staff Research Scientist and New York Engineering Director at Google. While at Google, he has led the development team for Froogle, a product search engine. Prior to his four years at Google, Dr. Nevill-Manning was an assistant professor in the Computer Science Department at Rutgers University, New Jersey. He has 41 academic publications, and holds a Ph.D. in Computer Science from Waikato University, New Zealand.
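The offline matrix computation behind PageRank can be sketched as power iteration on a toy link graph. This is a hedged illustration only: the graph, the damping factor d = 0.85, and the iteration count are standard textbook assumptions, not Google's actual parameters.

```python
# Power-iteration PageRank on a toy web of three pages.
# links maps each page to the pages it links to.
def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = d * rank[p] / len(outs)
                for q in outs:       # each page shares rank with its targets
                    new[q] += share
            else:                    # dangling page: spread rank uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(toy_web)
```

Because the scores depend only on link structure, they can be computed offline once and then combined with query-specific scoring at serving time, which is the split the abstract describes.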

Opening the Black-Box: New Techniques in Cryptography

Date and Time
Monday, May 10, 2004 - 10:30am to 12:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Boaz Barak, from Institute for Advanced Study
Host
Sanjeev Arora
The American Heritage dictionary defines the term "Black-Box" as

"A device or theoretical construct with known or specified performance characteristics but unknown or unspecified constituents and means of operation."

In the context of Computer Science, to use a program as a black box means to use only its input/output relation by executing the program on chosen inputs, without examining the actual code (i.e., representation as a sequence of symbols) of the program. Using programs as black boxes is very popular in Computer Science, as understanding properties of a program from its code is a notoriously hard problem.
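In Python terms, the distinction drawn above might be sketched as follows; the `mystery` function is a hypothetical stand-in for an arbitrary program.

```python
def mystery(x):
    # Stand-in for an arbitrary program whose code we may or may not inspect.
    return x * x + 1

# Black-box use: run the program on chosen inputs, observe outputs only.
outputs = [mystery(i) for i in range(4)]

# Non-black-box use: examine the program's representation itself, here
# the constants embedded in its compiled code object. Black-box
# techniques forbid this kind of access.
adds_one = 1 in mystery.__code__.co_consts
```

A black-box user can only sample the input/output relation; the non-black-box user learns structural facts (here, that the constant 1 appears in the code) that no finite set of queries can certify.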

In this talk I will survey some results regarding black-box vs. non-black-box use of programs in cryptography. Somewhat surprisingly and counterintuitively, it turns out that in some cases looking at the code of the program can actually help.

On the positive side, I will present a new cryptographic scheme (a zero knowledge proof system) obtainable using such "non-black-box" techniques. Such a scheme was previously proven not to be obtainable using black-box techniques, and hence was conjectured not to exist. I will also briefly survey other positive applications of "non-black-box" techniques.

I will also demonstrate the power of "non black box" techniques in a negative result, namely an impossibility result for the task of "scrambling" or "obfuscating" software.

End System Multicast

Date and Time
Thursday, May 6, 2004 - 4:30pm to 6:00pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Hui Zhang, from Carnegie Mellon University
Host
Kai Li
While the traditional IP network supports unicast datagram service, Deering, in his seminal SIGCOMM '88 paper, advocated extending the IP service model and router functionalities to support multicast. In the decade that followed, hundreds of technical papers and tens of Ph.D. dissertations were written on various aspects of IP multicast. In addition, IP Multicast was incorporated into both IPv4 and IPv6 by the IETF, the standards organization for the Internet. However, despite significant efforts by the research, standards, application, router vendor, and service provider communities over the last decade, IP multicast service is still not offered on the Internet today.

In this talk, I will present the End System Multicast (ESM) project, which was among the first to take the position that IP Multicast is the wrong approach for supporting multipoint applications in the Internet. In the ESM architecture, end systems or hosts, instead of routers, implement all multicast related functionality including membership management and packet replication. To validate the approach, we have developed and deployed an Internet video broadcasting system based on ESM. I will report our experiences and reflect on the lessons we learned.
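The core ESM idea, hosts rather than routers replicating packets along an overlay tree, can be sketched as follows. The host names and tree shape are illustrative assumptions, not the deployed system's topology.

```python
# End-system multicast sketch: the source unicasts one copy to each
# child; every receiving host replicates the packet to its own
# children. Routers see only ordinary unicast traffic.
overlay_tree = {            # parent host -> child hosts
    "source": ["hostA", "hostB"],
    "hostA": ["hostC", "hostD"],
    "hostB": [],
    "hostC": [],
    "hostD": [],
}

def broadcast(tree, host, packet, delivered=None):
    """Deliver packet at host, then replicate to its overlay children."""
    if delivered is None:
        delivered = []
    delivered.append((host, packet))
    for child in tree[host]:    # one unicast copy per child
        broadcast(tree, child, packet, delivered)
    return delivered

log = broadcast(overlay_tree, "source", "frame-1")
```

Membership management in ESM amounts to maintaining this tree among the hosts themselves, which is exactly the functionality IP Multicast would have placed in routers.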

Finishing the Revolution

Date and Time
Wednesday, May 5, 2004 - 1:00pm to 2:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
Colloquium
Speaker
Greg Papadopoulos, from Sun Microsystems
Host
Maria Klawe