Human-Computer Interaction — Computer Science IX — University of Würzburg

Research Projects

(since 2017)
Welcome to the EmbodimentLab!
Investigation of the opportunities of large-scale VR systems with high numbers of users.
A New Tool for Learning Classroom Management Using Virtual Reality
InterMem explores the usefulness of multimodal and multimedia interfaces with an increased perceptual coupling to strengthen the positive effects of biography work with patients suffering from dementia.
The PhD project explores aspects of social interactions, behavioral patterns, and hybrid avatar-agent systems in Virtual Realities.
(since 2015)
HistStadt4D: 'Multimodal access to historic image repositories to support the research and communication of city and architectural history' is a BMBF-funded junior scientist group. The research group investigates and develops methodical and technological approaches to merge, structure, and annotate images in media repositories together with additional information related to their place and time.
(since 2013)
GEtiT (Gamified Training Environment for Affine Transformations) provides interactive, gamified 3D training of affine transformations: users must apply their knowledge of affine transformations to solve challenging puzzles presented in an immersive and intuitive 3D environment.
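The kind of affine transformation trained in GEtiT can be sketched with a small worked example (a generic illustration, not code from GEtiT itself): in homogeneous coordinates, a rotation and a translation combine into a single 4x4 matrix applied to a point.

```python
import numpy as np

# Generic affine-transformation example (not taken from GEtiT):
# a 90-degree rotation about the z-axis, followed by a translation,
# composed into one 4x4 homogeneous matrix.
theta = np.pi / 2
rotation = np.array([
    [np.cos(theta), -np.sin(theta), 0, 0],
    [np.sin(theta),  np.cos(theta), 0, 0],
    [0,              0,             1, 0],
    [0,              0,             0, 1],
])
translation = np.array([
    [1, 0, 0, 2],   # shift by 2 along x
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])
# Composition by matrix multiplication: rotate first, then translate.
affine = translation @ rotation

point = np.array([1, 0, 0, 1])   # homogeneous point (1, 0, 0)
result = affine @ point
print(np.round(result[:3], 6))   # rotated to (0, 1, 0), then shifted to (2, 1, 0)
```

Composing transformations into one matrix like this is exactly the skill such puzzles exercise: the order of multiplication changes the outcome.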
(since 2012)
XRoads (Cross Reality On A Digital Surface) explores novel and multimodal interaction techniques for tabletop games. It is a Mixed Reality platform which combines touch, speech, and gestures as input modalities for turn-based and real-time strategy games.
(since 2011)
CaveUDK is a high-level VR middleware based on one of the most successful commercial game engines, the Unreal® Engine 3.0 (UE3). It is implemented as an extension to the Unreal® Development Kit (UDK) and supports CAVE-like installations.
Simulator X
(since 2009)
Simulator X is a research testbed for novel software techniques and architectures for Real-Time Interactive Systems in VR, AR, MR, and computer games. It uses the central concept of semantic reflection based on a highly concurrent actor model to build intelligent multimodal graphics systems.
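The actor model underlying Simulator X can be illustrated with a minimal sketch (hypothetical names; the actual system is far richer and is not written in Python): each actor owns a private mailbox and processes messages one at a time, so actors share no mutable state and can run concurrently.

```python
import queue
import threading

# Minimal actor sketch: a mailbox drained sequentially by a dedicated
# thread. Messages are the only way to interact with an actor's state.
class Actor:
    def __init__(self, handler):
        self._mailbox = queue.Queue()
        self._handler = handler
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def send(self, message):
        # Non-blocking from the sender's perspective.
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message is None:      # poison pill shuts the actor down
                break
            self._handler(message)

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()

results = []
logger = Actor(results.append)
for i in range(3):
    logger.send(i)
logger.stop()
print(results)  # messages processed in order: [0, 1, 2]
```

Because each mailbox serializes its messages, no locks are needed inside a handler; this is the property that lets an actor-based system scale across many concurrent entities.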
(since 2007)
SEARIS (Software Techniques and Architectures for Real-Time Interactive Systems) is an international research collaboration founded in 2007. Its goal is to advance the field of RIS software engineering.
SIRIS (Semantic Reflection for Intelligent Realtime Interactive Systems) is a research project exploring novel software architectures for Virtual, Augmented, and Mixed Reality, computer games, and similar domains.
A research and education collaboration dating from the time at the HTW Berlin. The project is now continued in several new activities.
(2003 – 2008)
SCIVE (Simulation Core for Intelligent Virtual Environments) explores software techniques combining Artificial Intelligence (AI) methods with Virtual and Augmented Reality (VR/AR).
(2006 – 2008)
PASION (Psychological Augmented Social Interaction Over Networks) explores communication and collaboration in social groups using immersive and mobile displays augmented by implicit communication signals (e.g., from biosensors).
AI & VR Lab
(1996 – 2008)
The AI & VR Lab of Bielefeld University founded by Prof. Wachsmuth and headed by Prof. Latoschik hosted several novel projects in the area of intelligent graphics and intelligent Virtual Environments.
The project's goal is the development of a demonstration platform for Virtual-Reality-based prototyping using multimodal (gesture and speech) interaction metaphors.
(2000 – 2005)
We contributed to the MAX project (Multimodal Assembly eXpert) by porting it to an immersive environment and implementing multimodal speech and gesture input.
(2001 – 2003)
DEIKON (DEixis In KonstruktionsDialogen), part of the SFB 360, explores the utilization of deictic expressions in gesture and speech as input methods for construction scenarios.
Virtual Constructor
(1998 – 2000)
We contributed to the Virtual Constructor project by developing an immersive renderer supporting multimodal input.
(1996 – 1999)
SGIM (Speech and Gesture Interfaces for Multimedia) developed techniques for communicating with multimedia systems through the detection and interpretation of a user's verbal (speech) and coarse gestural input.
(1995 – 1997)
VIENA (Virtual Environments and Agents) explored the use of a multi-agent system that interpreted natural language and coarse pointing gestures for interacting with an interior design application.