Workshop on Engineering Interactive Systems Embedding AI Technologies
Automation is pervasive in interactive systems [13]. While automation varies in nature and objectives, it is present in every layer of an interactive system's architecture: at the hardware input device-driver level (e.g., mouse acceleration [3]), at the interaction-technique level (e.g., multimodal fusion such as finger clustering [20], or more sophisticated designs such as the bubble cursor [15], which integrates automation at both the input and output levels in a single design), and at the interactive-application level (e.g., auto-completion [14], or the automatic generation of visual or textual components).
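To illustrate the kind of automation that lives at the input device-driver level, mouse acceleration can be sketched as a transfer function that scales raw pointer deltas by a speed-dependent gain. The gain values and threshold below are illustrative assumptions, not parameters of any actual driver:

```python
# Minimal sketch of a pointer-acceleration transfer function, a simple form
# of driver-level automation. All numeric values are illustrative assumptions.

def accelerate(dx: float, dy: float,
               low_gain: float = 1.0,
               high_gain: float = 2.5,
               speed_threshold: float = 10.0) -> tuple[float, float]:
    """Scale raw mouse deltas: slow movements keep precision (low gain),
    fast movements cover more screen distance (high gain)."""
    speed = (dx * dx + dy * dy) ** 0.5
    gain = low_gain if speed < speed_threshold else high_gain
    return dx * gain, dy * gain

# Slow movement passes through unchanged; fast movement is amplified.
print(accelerate(2.0, 1.0))    # below threshold -> (2.0, 1.0)
print(accelerate(20.0, 10.0))  # above threshold -> (50.0, 25.0)
```

Real drivers use smoother, continuous transfer functions rather than a hard threshold, but the principle is the same: the system silently reinterprets user input before the application ever sees it.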
AI technologies (e.g., machine learning, rule-based systems) enable complex tasks to be performed, targeting the ultimate goal of autonomous systems, e.g., self-driving cars or autonomous cooking chefs [21]. On the other hand, more automation may also induce more potential for failures (known as the lumberjack analogy). Moreover, many AI technologies raise issues at the operation level due to their black-box nature, i.e., when users interact with an interactive application that embeds them [2]. To address this issue, a recent contribution [24] has demonstrated the potential benefit of opening up that box and adding explanations.
Generative AI, for example, can generate various types of data (images, text, layouts, personas, ...) driven by models trained on specific training data. Generative AI is best known for its capability to generate text and images, but it is also becoming more common as a supporting tool during the engineering life cycle of interactive systems. Researchers have started exploring the use of Large Language Models for various aspects of the Human-Centred Development Process [25]. So far, Generative AI has been used to support data-driven design and prototype generation [8, 17, 26], human-robot interaction [28], the integration of multimodal interaction techniques [29], and even feedback on user interface designs [12] and usage within a broader social context [23].
AI technologies can be integrated at various levels, from micro to macro, requiring different (and possibly conflicting) engineering approaches. Thus, at the engineering level, different issues appear depending on the type of AI-related technologies used and the type of interaction provided to the users of such systems. Indeed, beyond explanations, issues related to display/visualization [18] and control/command [19] arise. At the dependability level, even after multiple iterations, reliability remains low (about 80% accuracy on simple datasets) and much lower in some domains, such as food allergies, where errors might be critical in case of severe pathology [24].
Despite the many challenges AI technologies bring to interactive systems, the potential for effortless and seamless interaction far outweighs the cost of designing usable systems. With this workshop, we aim to offer a platform for scientists interested in the design, development, evaluation, and use of interactive systems embedding AI technologies to address these engineering challenges. This platform will foster idea exchange, discussion, and collaboration and thus drive the development of AI-powered interactive systems in an interdisciplinary manner.
For more information, see https://sites.google.com/view/eis-emb-aitech/