Inaugural Lecture: Shaping Intelligent Environments through Human AI Interaction
For decades, computers were tools that users operated directly. Today, we live with a growing constellation of intelligent devices and services embedded in our homes, cities, workplaces, and personal technologies. This shift raises fundamental questions. How much control do users actually want to keep? What should an AI system do autonomously? And how do we design technology that can act proactively while still aligning with human expectations?
Instead of asking whether AI should take control, a more realistic question today is how much control users want to keep. In most everyday scenarios, people appreciate proactive assistance, but they still expect to step in, correct, or override the system when necessary. For now, AI systems are most effective when they act as supportive partners that assist users while keeping humans in the driver's seat. Whether future, highly reliable systems might shift this balance is an open question, but today the focus remains on designing AI that enhances human agency rather than reducing it.
A guiding idea in my research is that we can improve human–system interaction by learning from human–human interaction. Humans interpret intent, react to subtle cues, and build trust through transparency and communication. Bringing these principles into AI-driven systems helps users understand how intelligent systems behave, why they act, and how to collaborate with them more effectively.
My research approaches these questions through four interconnected areas: User Sensing, Context Awareness, Embodied Agents, and Privacy & Security. Together, they form a research program aimed at enabling humans and AI systems to work together more effectively.
User Sensing
User sensing focuses on understanding people while they interact with technology. This includes reading subtle cues such as touch behavior, gaze, or physiological reactions to infer attention, workload, or intent. The goal is to move from explicit commands toward systems that are aware of the user’s state and can respond more naturally and intelligently.
Context Awareness
Context awareness extends this idea from the user to the surrounding environment. By interpreting objects, locations, and situations, AI can offer support that fits the moment: for example, by understanding what a person is looking at, anticipating what they might need next, or assisting with tasks in physical or digital spaces. This makes interaction more intuitive and situationally relevant.
Embodied Agents
While many AI systems live on screens, embodied agents bring intelligence into the physical world. This includes robots, actuated devices, and haptic systems that can collaborate with people, assist with tasks, or shape environments in real time. Research in this area explores how physical AI systems should behave, communicate, and coordinate with humans to be both useful and trustworthy.
Privacy & Security
Rich sensing and autonomy require strong foundations of privacy and security. This area examines how users perceive data collection, how these perceptions differ from actual risks, and how we can design interactions that make AI systems safer, more transparent, and easier to trust. Building privacy-aware and value-aligned systems is essential for long-term acceptance.
Looking ahead
These four areas build on each other: sensing informs context, context informs physical action, and privacy shapes what is appropriate and acceptable. Together, they point toward intelligent environments that are not just technologically advanced, but genuinely human-centered.
With the new chair in Human AI Interaction at TU Dortmund University and the strong environment of the Research Center Trustworthy Data Science and Security, I look forward to advancing this research and exploring how humans and AI can accomplish more together than either could alone.
