Our work sits at the intersection of Human-Computer Interaction (HCI) and Medicine. That means 50% of what we do is applying HCI methodology to problems in medicine, so we can spend the other 50% of our time "borrowing" technologies from medicine and applying them to HCI. This talk will therefore focus on two topics: (a) patient-friendly medical information displays, and (b) sensing physiological signals for computer input. In the first part of the talk, I'll present our work on making electronic medical record data more useful to patients, work that spans both qualitative research ("what *should* patient interfaces look like?") and technology research ("how do we actually build that without asking doctors to spend hours explaining everything to us?"). In the second part of the talk, I'll present our work on using sensing techniques that medical science has known about for centuries (like measuring electrical muscle activity and body acoustics) to build new computer input systems. This part of the talk will have cool videos, so bring your popcorn.
Dan Morris is a researcher in the Computational User Experiences group at Microsoft Research; he is interested in novel input devices, patient-facing medical technology, and computer support for music and creativity.
Dan studied neurobiology as an undergraduate at Brown, where he began developing brain-computer interfaces, work he continued as an engineer at Cyberkinetics, Inc. His PhD thesis at Stanford focused on haptic rendering and physical simulation for virtual surgery, and his work since coming to MSR in 2006 has included using physiological signals for input systems, designing patient-friendly information displays for hospitals, and generating automatic accompaniment for sung melodies. If you would like to see both his research talents and his (lack of) performance talents showcased in one four-minute masterpiece, search for "Microsoft Songsmith Commercial".