Recent Progress in the Design of
Advanced Multimodal Interfaces
ABSTRACT
The
advent of multimodal interfaces based on recognition of human speech, touch,
pen input, gesture, gaze, and other natural behavior represents just the
beginning of a progression toward pervasive computational interfaces that are
capable of human-like sensory perception. Such interfaces will eventually
interpret continuous, simultaneous input from many different modes, recognized
as users engage in everyday activities. They will also track and incorporate
information from multiple sensors on the user's interface and in the
surrounding physical environment in order to support intelligent
multimodal-multisensor adaptation to the user, task, and usage environment. In
the present talk, I will describe state-of-the-art research on multimodal
interaction and interface design, and in particular two topics that are
currently generating considerable activity, both within our own lab and around
the world. The first topic focuses on the major robustness gains that have been
demonstrated for different types of multimodal systems, compared with unimodal
ones. The second involves a recent surge of research activity on human
multisensory processing and users' multimodal integration patterns during
human-computer interaction, and its implications for the design of adaptive multimodal
interfaces. The long-term goal of research in these and related areas is the
development of advanced multimodal interfaces that can support new
functionality, unparalleled robustness, and flexible adaptation to individual
users and real-world mobile usage contexts.
Brief Biography
Sharon
Oviatt is a Professor and Co-Director of the Center for Human-Computer
Communication (CHCC) in the Dept. of Computer Science at Oregon Health &
Science University (OHSU). Her research focuses on human-computer interaction,
spoken language and multimodal interfaces, and mobile and highly interactive
systems. Examples of recent work include the development of novel design concepts
for multimodal and mobile interfaces, robust interfaces for real-world field
environments, adaptive conversational interfaces with animated software
characters, and modeling of diverse user groups across the lifespan. She is an
active member of the international HCI, speech, and multimodal communities. She
has published over 90 scientific articles in a wide range of venues, including
work featured in recent and upcoming special issues of Communications of the
ACM, Human-Computer Interaction, ACM Transactions on Computer-Human Interaction,
IEEE Multimedia, Proceedings of the IEEE, and IEEE Transactions on Neural Networks.
She received an NSF Special Extension for Creativity Award in 2000, and chaired
the International Conference on Multimodal Interfaces in 2003. Further information
about Dr. Oviatt and CHCC can be found at http://www.cse.ogi.edu/CHCC.