Research
We explore how people and intelligent systems can work together, whether through voice, gesture, physiology, or shared environments. Our research brings together human-computer interaction, robotics, and generative AI to develop interactive systems that adapt to human needs, respond to context, and support creativity and trust. We are especially interested in multimodal, real-world interactions that blur the line between physical and digital spaces.
To achieve this, we combine methods from AI, cognitive science, and design with user-centered experimentation. Our work spans domains such as augmented and virtual reality, intelligent assistants, physiological sensing, and human-robot collaboration. The lab supports these efforts with a dedicated robotics space, high-density EEG equipment, and cross-reality setups. We regularly collaborate across disciplines and welcome students, researchers, and partners interested in shaping how humans and intelligent systems interact.
Research Areas
- Generative AI in Interactive Systems
- Human-Robot Interaction
- Physiological Computing and Implicit Interaction
- Mixed and Cross-Reality Interaction
- Privacy, Trust, and Ethical Interactive Systems
- Tools and Methods for Intelligent System Development