Smi Hinterreiter, Martin Wessel, Fabian Schliski, Isao Echizen, Marc Erich Latoschik, Timo Spinde,
NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback, In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 19.
2025. Conditionally accepted for publication [Download][BibSonomy]
@article{hinterreiter2025newsunfold,
author = {Smi Hinterreiter and Martin Wessel and Fabian Schliski and Isao Echizen and Marc Erich Latoschik and Timo Spinde},
journal = {Proceedings of the International AAAI Conference on Web and Social Media},
url = {https://arxiv.org/abs/2407.17045},
year = {2025},
volume = {19},
title = {NewsUnfold: Creating a News-Reading Application That Indicates Linguistic Media Bias and Collects Feedback}
}
Abstract:
Media bias is a multifaceted problem, leading to one-sided views and impacting decision-making. A way to address digital media bias is to detect and indicate it automatically through machine-learning methods. However, such detection is limited due to the difficulty of obtaining reliable training data. Human-in-the-loop-based feedback mechanisms have proven an effective way to facilitate the data-gathering process. Therefore, we introduce and test feedback mechanisms for the media bias domain, which we then implement on NewsUnfold, a news-reading web application to collect reader feedback on machine-generated bias highlights within online news articles. Our approach augments dataset quality by significantly increasing inter-annotator agreement by 26.31% and improving classifier performance by 2.49%. As the first human-in-the-loop application for media bias, the feedback mechanism shows that a user-centric approach to media bias data collection can return reliable data while being scalable and evaluated as easy to use. NewsUnfold demonstrates that feedback mechanisms are a promising strategy to reduce data collection expenses and continuously update datasets to changes in context.
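Hypothetical illustration (not code or data from the paper): the agreement gain reported above is usually computed with a chance-corrected coefficient. The short Python sketch below shows one common variant, Cohen's kappa for two annotators labeling sentences as biased or neutral, purely as a reading aid; the study's own metric and data may differ.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same items (illustrative only)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

# Hypothetical bias annotations ("B" = biased, "N" = neutral), not real study data.
ann_1 = ["B", "N", "B", "B", "N", "N", "B", "N"]
ann_2 = ["B", "N", "N", "B", "N", "N", "B", "B"]
print(f"kappa = {cohens_kappa(ann_1, ann_2):.2f}")  # prints kappa = 0.50
```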
2024
Kristoffer Waldow, Jonas Scholz, Martin Misiak, Arnulph Fuhrmann, Daniel Roth, Marc Erich Latoschik,
Anti-aliasing Techniques in Virtual Reality: A User Study with Perceptual Pairwise Comparison Ranking Scheme, In GI VR/AR Workshop, pp. 10--18420.
2024. [BibSonomy]
@inproceedings{waldow2024anti,
author = {Kristoffer Waldow and Jonas Scholz and Martin Misiak and Arnulph Fuhrmann and Daniel Roth and Marc Erich Latoschik},
year = {2024},
booktitle = {GI VR/AR Workshop},
pages = {10--18420},
title = {Anti-aliasing Techniques in Virtual Reality: A User Study with Perceptual Pairwise Comparison Ranking Scheme}
}
Martin J. Koch, Astrid Carolus, Carolin Wienrich, Marc Erich Latoschik,
Meta AI Literacy Scale: Further validation and development of a short version, In Heliyon, p. 23.
2024. [Download][BibSonomy][Doi]
@article{koch2024literacy,
author = {Martin J. Koch and Astrid Carolus and Carolin Wienrich and Marc Erich Latoschik},
journal = {Heliyon},
url = {https://www.cell.com/heliyon/fulltext/S2405-8440(24)15717-9},
year = {2024},
pages = {23},
doi = {10.1016/j.heliyon.2024.e39686},
title = {Meta AI Literacy Scale: Further validation and development of a short version}
}
Abstract:
The concept of AI literacy, its promotion, and measurement are important topics as they prepare society for the steadily advancing spread of AI technology. The first purpose of the current study is to advance the measurement of AI literacy by collecting evidence regarding the validity of the Meta AI Literacy Scale (MAILS) by Carolus and colleagues, published in 2023: a self-assessment instrument for AI literacy and additional psychological competencies conducive to the use of AI. For this purpose, we first formulated the intended measurement purposes of the MAILS. In a second step, we derived empirically testable axioms and subaxioms from the purposes. We tested them in several already published and newly collected data sets. The results are presented in the form of three different empirical studies. We found overall evidence for the validity of the MAILS, with some unexpected findings that require further research. We discuss the results for each study individually and jointly, and outline avenues for future research. The study’s second purpose is to develop a short version (10 items) of the original instrument (34 items). It was possible to find a selection of ten items that represent the factors of the MAILS and show a good model fit when tested with confirmatory factor analysis. Further research will be needed to validate the short scale. This paper advances the knowledge about the validity of the MAILS and provides a short measure for AI literacy. However, more research will be necessary to further our understanding of the relationships between AI literacy and other constructs.
Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
Out-Of-Virtual-Body Experiences: Virtual Disembodiment Effects on Time Perception in VR, In Benjamin Weyers, Daniel Zielasko, Rob Lindeman, Stefania Serafin, Eike Langbehn, Victoria Interrante, Gerd Bruder, J. Edward Swan II, Christoph Borst, Carolin Wienrich, Rebecca Fribourg (Eds.), Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology (20), pp. 20:1-20:11. New York, NY, USA:
Association for Computing Machinery,
2024. [Download][BibSonomy][Doi]
@inproceedings{conf/vrst/UnruhLL24,
author = {Fabian Unruh and Jean-Luc Lugrin and Marc Erich Latoschik},
number = {20},
url = {https://doi.org/10.1145/3641825.3687717},
year = {2024},
booktitle = {Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology},
editor = {Benjamin Weyers and Daniel Zielasko and Rob Lindeman and Stefania Serafin and Eike Langbehn and Victoria Interrante and Gerd Bruder and J. Edward Swan II and Christoph Borst and Carolin Wienrich and Rebecca Fribourg},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '24},
pages = {20:1-20:11},
doi = {10.1145/3641825.3687717},
title = {Out-Of-Virtual-Body Experiences: Virtual Disembodiment Effects on Time Perception in VR}
}
Abstract:
This paper presents a novel experiment investigating the relationship between virtual disembodiment and time perception in Virtual Reality (VR). Recent work demonstrated that the absence of a virtual body in a VR application changes the perception of time. However, the effects of simulating an out-of-body experience (OBE) in VR on time perception are still unclear. We designed an experiment with two types of virtual disembodiment techniques based on viewpoint gradual transition: a virtual body’s behind view and facing view transitions. We investigated their effects on forty-four participants in an interactive scenario where a lamp was repeatedly activated and time intervals were estimated. Our results show that, while both techniques elicited a significant virtual disembodiment perception, time duration estimations in the minute range were only shorter in the facing view compared to the eye view condition. We believe that reducing agency in the facing view is a key factor in the time perception alteration. This provides first steps towards a novel approach to manipulating time perception in VR, with potential applications for mental health treatments such as schizophrenia or depression and for improving our understanding of the relation between body, virtual body, and time.
Andreas Halbig, Marc Erich Latoschik,
Common Cues? Toward the Relationship of Spatial Presence and the Sense of Embodiment, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024. Accepted for Publication [Download][BibSonomy]
@inproceedings{halbig2024common,
author = {Andreas Halbig and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ISMAR-halbig-common-cues.pdf},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
title = {Common Cues? Toward the Relationship of Spatial Presence and the Sense of Embodiment}
}
Katja Bertsch, Isabelle Göhre, Marianne Cottin, Max Zettl, Carolin Wienrich, Sarah N. Back,
Traumatic childhood experiences and personality functioning: effect of body connection in a cross-sectional German and Chilean sample, In Borderline Personality Disorder and Emotion Dysregulation, Vol. 11(1), p. 20.
2024. [Download][BibSonomy][Doi]
@article{Bertsch2024,
author = {Katja Bertsch and Isabelle Göhre and Marianne Cottin and Max Zettl and Carolin Wienrich and Sarah N. Back},
journal = {Borderline Personality Disorder and Emotion Dysregulation},
number = {1},
url = {https://doi.org/10.1186/s40479-024-00266-z},
year = {2024},
pages = {20},
volume = {11},
doi = {10.1186/s40479-024-00266-z},
title = {Traumatic childhood experiences and personality functioning: effect of body connection in a cross-sectional German and Chilean sample}
}
Abstract:
Traumatic childhood experiences are a major risk factor for developing mental disorders later in life. Over the past decade, researchers have begun to investigate the role of early trauma in impairments in personality functioning following the introduction of the Alternative Model of Personality Disorders in Section III of the Diagnostic and Statistical Manual for Mental Disorders 5. Although first studies were able to empirically demonstrate a significant link between early trauma and impairments in personality functioning, only little is known about the underlying mechanisms. One possible mechanism is body connection due to its involvement in self-regulatory processes and its link to both early trauma and personality (dys)functioning.
Arne Bürger, Carolin Wienrich,
TooCloseVR – Die Entwicklung eines Präventionsprogramms zur Verhinderung von Cyberbullying mittels immersiver Technologien, In Kindheit und Entwicklung, Vol. 33(2), pp. 125–135.
Hogrefe Publishing Group,
2024. [Download][BibSonomy][Doi]
@article{B_rger_2024,
author = {Arne Bürger and Carolin Wienrich},
journal = {Kindheit und Entwicklung},
number = {2},
url = {http://dx.doi.org/10.1026/0942-5403/a000445},
year = {2024},
publisher = {Hogrefe Publishing Group},
pages = {125–135},
volume = {33},
doi = {10.1026/0942-5403/a000445},
title = {TooCloseVR – Die Entwicklung eines Präventionsprogramms zur Verhinderung von Cyberbullying mittels immersiver Technologien}
}
Martin J. Koch, Carolin Wienrich, Samantha Straka, Marc Erich Latoschik, Astrid Carolus,
Overview and confirmatory and exploratory factor analysis of AI literacy scale, In Computers and Education: Artificial Intelligence, Vol. 7, p. 100310.
Elsevier BV,
2024. [Download][BibSonomy][Doi]
@article{Koch_2024,
author = {Martin J. Koch and Carolin Wienrich and Samantha Straka and Marc Erich Latoschik and Astrid Carolus},
journal = {Computers and Education: Artificial Intelligence},
url = {http://dx.doi.org/10.1016/j.caeai.2024.100310},
year = {2024},
publisher = {Elsevier BV},
pages = {100310},
volume = {7},
doi = {10.1016/j.caeai.2024.100310},
title = {Overview and confirmatory and exploratory factor analysis of AI literacy scale}
}
Jinghuai Lin, Christian Rack, Carolin Wienrich, Marc Erich Latoschik,
Usability, Acceptance, and Trust of Privacy Protection Mechanisms and Identity Management in Social Virtual Reality, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024. [Download][BibSonomy]
@inproceedings{lin2024usability,
author = {Jinghuai Lin and Christian Rack and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-social-vr-identity-management-preprint.pdf},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
title = {Usability, Acceptance, and Trust of Privacy Protection Mechanisms and Identity Management in Social Virtual Reality}
}
Christian Merz, Jonathan Tschanter, Florian Kern, Jean-Luc Lugrin, Carolin Wienrich, Marc Erich Latoschik,
Pipelining Processors for Decomposing Character Animation, In 30th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA:
Association for Computing Machinery,
2024. [Download][BibSonomy][Doi]
@inproceedings{merz2024processor,
author = {Christian Merz and Jonathan Tschanter and Florian Kern and Jean-Luc Lugrin and Carolin Wienrich and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3641825.3689533},
year = {2024},
booktitle = {30th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '24},
doi = {10.1145/3641825.3689533},
title = {Pipelining Processors for Decomposing Character Animation}
}
Abstract:
This paper presents an openly available implementation of a modular pipeline architecture for character animation. It effectively decomposes frequently necessary processing steps into dedicated character processors, such as copying data from various motion sources, applying inverse kinematics, or scaling the character. Processors can easily be parameterized, extended (e.g., with AI), and freely arranged or even duplicated in any order necessary, greatly reducing side effects and fostering fine-tuning, maintenance, and reusability of the complex interplay of real-time animation steps.
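To illustrate the decomposition idea described in the abstract (the released implementation targets Unity/C#; this is only a language-agnostic sketch with hypothetical processors), such a pipeline can be pictured as an ordered, freely rearrangeable list of pose-to-pose functions:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Pose = Dict[str, float]  # toy stand-in for a full skeletal pose

@dataclass
class CharacterPipeline:
    """Ordered chain of character processors; each step maps a pose to a pose."""
    processors: List[Callable[[Pose], Pose]] = field(default_factory=list)

    def add(self, processor: Callable[[Pose], Pose]) -> "CharacterPipeline":
        self.processors.append(processor)
        return self  # allows fluent chaining, reordering, or duplication elsewhere

    def evaluate(self, pose: Pose) -> Pose:
        for processor in self.processors:
            pose = processor(pose)
        return pose

# Hypothetical processors: copy from a motion source, then scale the character.
def copy_from_tracker(pose: Pose) -> Pose:
    return {**pose, "head_y": 1.7}          # pretend tracked head height

def scale_character(factor: float) -> Callable[[Pose], Pose]:
    return lambda pose: {k: v * factor for k, v in pose.items()}

pipeline = CharacterPipeline().add(copy_from_tracker).add(scale_character(0.9))
print(pipeline.evaluate({"head_y": 0.0}))   # scaled pose, head_y ≈ 1.53
```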
Christian Merz, Carolin Wienrich, Marc Erich Latoschik,
Does Voice Matter? The Effect of Verbal Communication and Asymmetry on the Experience of Collaborative Social XR, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024. Accepted for publication [Download][BibSonomy]
@inproceedings{merz2024voice,
author = {Christian Merz and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-does-voice-matter-preprint.pdf},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
title = {Does Voice Matter? The Effect of Verbal Communication and Asymmetry on the Experience of Collaborative Social XR}
}
Abstract:
This work evaluates how the asymmetry of device configurations and verbal communication influence the user experience of social eXtended Reality (XR) for self-perception, other-perception, and task perception. We developed an application that enables social collaboration between two users with varying device configurations. We compare the conditions of one symmetric interaction, where both device configurations are Head-Mounted Displays (HMDs) with tracked controllers, with the conditions of one asymmetric interaction, where one device configuration is an HMD with tracked controllers and the other device configuration is a desktop screen with a mouse. In our study, 52 participants collaborated in a dyadic interaction on a sorting task while talking to each other. We compare our results to previous work that evaluated the same scenario without verbal communication. In line with prior research, self-perception is influenced by the immersion of the used device configuration and verbal communication. While co-presence was not affected by the device configuration or the inclusion of verbal communication, social presence was only higher for HMD configurations that allowed verbal communication. Task perception was hardly affected by the device configuration or verbal communication. We conclude that the device in social XR is important for self-perception with or without verbal communication. However, the results indicate that the device configuration only affects the qualities of social interaction in collaborative scenarios when verbal communication is enabled. To sum up, asymmetric collaboration maintains the high quality of self-perception and interaction for highly immersed users while still enabling the participation of less immersed users.
Samantha Monty, Florian Kern, Marc Erich Latoschik,
Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024. Accepted for publication [BibSonomy]
@inproceedings{monty2024,
author = {Samantha Monty and Florian Kern and Marc Erich Latoschik},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
title = {Analysis of Immersive Mid-Air Sketching Behavior, Sketch Quality, and User Experience in Design Ideation Tasks}
}
Abstract:
Immersive 3D sketching systems empower users with tools to create sketches directly in the air around themselves, in all three dimensions, using only simple hand gestures. These sketching systems have the potential to greatly extend the interactive capabilities of immersive learning environments. The perceptual challenges of Virtual Reality (VR), however, combined with the ergonomic and cognitive challenges of creating mid-air 3D sketches reduce the effectiveness of immersive sketching used for problem-solving, reflection, and to capture fleeting ideas. We contribute to the understanding of the potential challenges of mid-air sketching systems in educational settings, where expression is valued higher than accuracy, and sketches are used to support problem-solving and to explain abstract concepts. We conducted an empirical study with 36 participants with different spatial abilities to investigate if the way that people sketch in mid-air is dependent on the goal of the sketch. We compare the technique, quality, efficiency, and experience of participants as they create 3D mid-air sketches in three different tasks. We examine how users approach mid-air sketching when the sketches they create serve to convey meaning and when sketches are merely reproductions of geometric models created by someone else. We found that in tasks aimed at expressing personal design ideas, between starting and ending strokes, participants moved their heads more and their controllers at higher velocities and created strokes in faster times than in tasks aimed at recreating 3D geometric figures. They reported feeling less time pressure to complete sketches but redacted a larger percentage of strokes. These findings serve to inform the design of creative virtual environments that support reasoning and reflection through mid-air sketching. With this work, we aim to strengthen the power of immersive systems that support mid-air 3D sketching by exploiting natural user behavior to assist users to more quickly and faithfully convey their meaning in sketches.
Olaf Clausen, Martin Mišiak, Arnulph Fuhrmann, Ricardo Marroquim, Marc Erich Latoschik,
A Practical Real-Time Model for Diffraction on Rough Surfaces, In Journal of Computer Graphics Techniques, Vol. 13(1), pp. 1-27.
2024. [Download][BibSonomy]
@article{clausen2024practical,
author = {Olaf Clausen and Martin Mišiak and Arnulph Fuhrmann and Ricardo Marroquim and Marc Erich Latoschik},
journal = {Journal of Computer Graphics Techniques},
number = {1},
url = {https://jcgt.org/published/0013/01/01/},
year = {2024},
pages = {1-27},
volume = {13},
title = {A Practical Real-Time Model for Diffraction on Rough Surfaces}
}
Abstract:
Wave optics phenomena have a significant impact on the visual appearance of rough conductive surfaces even when illuminated with partially coherent light. Recent models address these phenomena, but none is real-time capable due to the complexity of the underlying physics equations. We provide a practical real-time model, building on the measurements and model by Clausen et al. 2023, that approximates diffraction-induced wavelength shifts and speckle patterns with only a small computational overhead compared to the popular Cook-Torrance GGX model. Our model is suitable for Virtual Reality applications, as it contains domain-specific improvements to address the issues of aliasing and highlight disparity.
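For orientation, the Cook-Torrance GGX model named as the baseline above uses the standard GGX normal distribution term; the snippet below evaluates only that textbook term and does not reproduce the paper's diffraction model, whose details are in the linked article.

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    """GGX/Trowbridge-Reitz normal distribution term of the Cook-Torrance BRDF."""
    alpha = roughness * roughness          # common perceptual roughness remapping
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

# The paper adds a wavelength-dependent diffraction contribution on top of such a
# specular baseline; that contribution is not reproduced here.
print(ggx_ndf(n_dot_h=0.95, roughness=0.3))
```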
Timo Menzel, Erik Wolf, Stephan Wenninger, Niklas Spinczyk, Lena Holderrieth, Ulrich Schwanecke, Marc Erich Latoschik, Mario Botsch,
WILDAVATARS: Smartphone-Based Reconstruction of Full-Body Avatars in the Wild, In TechRxiv.
2024. Preprint [Download][BibSonomy][Doi]
@article{menzel2024wildavatars,
author = {Timo Menzel and Erik Wolf and Stephan Wenninger and Niklas Spinczyk and Lena Holderrieth and Ulrich Schwanecke and Marc Erich Latoschik and Mario Botsch},
journal = {TechRxiv},
url = {https://d197for5662m48.cloudfront.net/documents/publicationstatus/221002/preprint_pdf/475c2f7830adb5d85a17466ac50bc9c5.pdf},
year = {2024},
doi = {10.36227/techrxiv.172503940.07538627/v1},
title = {WILDAVATARS: Smartphone-Based Reconstruction of Full-Body Avatars in the Wild}
}
Abstract:
Realistic full-body avatars play a key role in representing users in virtual environments, where they have been shown to considerably improve body ownership and presence. Driven by the growing demand for realistic virtual humans, extensive research on scanning-based avatar reconstruction has been conducted in recent years. Most methods, however, require complex hardware, such as expensive camera rigs and/or controlled capture setups, thereby restricting avatar generation to specialized labs. We propose WILDAVATARS, an approach that empowers even non-experts without access to complex equipment to capture realistic avatars in the wild. Our avatar generation is based on an easy-to-use smartphone application that guides the user through the scanning process and uploads the captured data to a server, which in a fully automatic manner reconstructs a photorealistic avatar that is ready to be downloaded into a VR application. To increase the availability and foster the use of realistic virtual humans in VR applications we will make WILDAVATARS publicly available for research purposes.
Sophia Maier, Sebastian Oberdörfer, Marc Erich Latoschik,
Ballroom Dance Training with Motion Capture and Virtual Reality, In Proceedings of Mensch Und Computer 2024 (MuC '24), pp. 617-621. New York, NY, USA:
Association for Computing Machinery,
2024. [Download][BibSonomy][Doi]
@inproceedings{maier2024ballroom,
author = {Sophia Maier and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-muc-ballroom-dance-training-with-motion-capture-and-virtual-reality-preprint.pdf},
year = {2024},
booktitle = {Proceedings of Mensch Und Computer 2024 (MuC '24)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
pages = {617-621},
doi = {10.1145/3670653.3677499},
title = {Ballroom Dance Training with Motion Capture and Virtual Reality}
}
Abstract:
This paper investigates the integration of motion capture and virtual reality (VR) technologies in competitive ballroom dancing (slow waltz, tango, slow foxtrot, Viennese waltz, quickstep), aiming to analyze posture correctness and provide feedback to dancers for posture enhancement. Through qualitative interviews, the study identifies specific requirements and gathers insights into potentially helpful feedback mechanisms. Using Unity and motion capture technology, we implemented a prototype system featuring real-time visual cues for posture correction and a replay function for analysis. A validation study with competitive ballroom dancers reveals generally positive feedback on the system’s usefulness, though challenges like cable obstruction and poor usability of the user interface are noted. Insights from participants inform future refinements, emphasizing the need for precise feedback, cable-free movement, and user-friendly interfaces. While the program is promising for ballroom dance training, further research is needed to evaluate the system’s overall efficacy.
Sebastian Oberdörfer, Sophia C Steinhaeusser, Amiin Najjar, Clemens Tümmers, Marc Erich Latoschik,
Pushing Yourself to the Limit - Influence of Emotional Virtual Environment Design on Physical Training in VR, In ACM Games.
2024. accepted for publication [BibSonomy]
@article{oberdorfer2024pushing,
author = {Sebastian Oberdörfer and Sophia C Steinhaeusser and Amiin Najjar and Clemens Tümmers and Marc Erich Latoschik},
journal = {ACM Games},
year = {2024},
title = {Pushing Yourself to the Limit - Influence of Emotional Virtual Environment Design on Physical Training in VR}
}
Abstract:
The design of virtual environments (VEs) can strongly influence users' emotions. These VEs are also an important aspect of immersive Virtual Reality (VR) exergames: training systems that can inspire athletes to train in a highly motivated way and achieve a higher training intensity. VR-based training and rehabilitation systems can increase a user's motivation to train and to repeat physical exercises. The surrounding VE can potentially influence users' motivation and hence even their physical performance. Besides providing potentially motivating environments, physical training can be enhanced by gamification. However, it is unclear whether the surrounding VE of a VR-based physical training system influences the effectiveness of gamification. We investigate whether an emotionally positive or emotionally negative design influences sport performance and interacts with the positive effects of gamification. In a user study, we immerse participants in VEs following either an emotionally positive, neutral, or negative design and measure the duration the participants can hold a static strength-endurance exercise. The study targeted the investigation of the effects of 1) the emotional VE design as well as 2) the presence and absence of gamification. We did not observe significant differences in participants' performance across the VE design or gamification conditions. Gamification caused a dominating effect on emotion and motivation over the emotional design of the VEs, thus indicating an overall positive impact. The emotional design influenced the participants' intrinsic motivation but caused mixed results with respect to emotion. Overall, our results indicate the importance of using gamification, support the commonly used emotionally positive VEs for physical training, but further indicate that the design space could also include other directions of VE design.
Larissa Brübach, Marius Röhm, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence, In Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology (3). New York, NY, USA:
Association for Computing Machinery,
2024. [Download][BibSonomy][Doi]
@inproceedings{brubach2024influence,
author = {Larissa Brübach and Marius Röhm and Franziska Westermeier and Carolin Wienrich and Marc Erich Latoschik},
number = {3},
url = {https://doi.org/10.1145/3641825.3687713},
year = {2024},
booktitle = {Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '24},
doi = {10.1145/3641825.3687713},
title = {The Influence of a Low-Resolution Peripheral Display Extension on the Perceived Plausibility and Presence}
}
Abstract:
The Field of View (FoV) is a central technical display characteristic of Head-Mounted Displays (HMDs), which has been shown to have a notable impact on important aspects of the user experience. For example, an increased FoV has been shown to foster a sense of presence and improve peripheral information processing, but it also increases the risk of VR sickness. This article investigates the impact of a wider but inhomogeneous FoV on perceived plausibility, measuring its effects on presence, spatial presence, and VR sickness as a comparison to and replication of effects from prior work. We developed a low-resolution peripheral display extension to pragmatically increase the FoV, taking into account the lower peripheral acuity of the human eye. While this design results in inhomogeneous resolutions of HMDs at the display edges, it is also a low-complexity and low-cost extension. However, its effects on important VR qualities have to be identified. We conducted two experiments with 30 and 27 participants, respectively. In a randomized 2x3 within-subject design, participants played three rounds of bowling in VR, both with and without the display extension. Two rounds contained incongruencies to induce breaks in plausibility. In experiment 2, we enhanced one incongruency to make it more noticeable and improved the shortcomings of the display extension that had previously been identified. However, neither study revealed a measurable effect of the low-resolution FoV extension on perceived plausibility, presence, spatial presence, or VR sickness. We found that one of the incongruencies could cause a break in plausibility without the extension, confirming the results of a previous study.
Larissa Brübach, Mona Röhm, Franziska Westermeier, Marc Erich Latoschik, Carolin Wienrich,
Manipulating Immersion: The Impact of Perceptual Incongruence on Perceived Plausibility in VR, In 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
IEEE Computer Society,
2024. IEEE ISMAR Best Paper Nominee 🏆 [Download][BibSonomy]
@inproceedings{brubach2024manipulating,
author = {Larissa Brübach and Mona Röhm and Franziska Westermeier and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-manipulating-immersion.pdf},
year = {2024},
booktitle = {23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
title = {Manipulating Immersion: The Impact of Perceptual Incongruence on Perceived Plausibility in VR}
}
Abstract:
This work presents a study where we used incongruencies on the cognitive and perceptual layers to investigate their effects on perceived plausibility and, thereby, presence and spatial presence. We used a 2x3 within-subject design with the factors familiar size (cognitive manipulation) and immersion (perceptual manipulation). For the different levels of immersion, we implemented three different tracking qualities: rotation-and-translation tracking, rotation-only tracking, and stereoscopic-view-only tracking. Participants scanned products in a virtual supermarket where the familiar size of these objects was manipulated. Simultaneously, they could either move their head normally or had to use the thumbsticks to navigate their view of the environment. Results show that both manipulations had a negative effect on perceived plausibility and, thereby, presence. In addition, the tracking manipulation also had a negative effect on spatial presence. These results are especially interesting in light of the ongoing discussion about the role of plausibility and congruence in evaluating XR environments. The results can hardly be explained by traditional presence models, where immersion should not be an influencing factor for perceived plausibility. However, they are in agreement with the recently introduced Congruence and Plausibility (CaP) model and provide empirical evidence for the model's predicted pathways.
Christian Rack, Lukas Schach, Felix Achter, Yousof Shehada, Jinghuai Lin, Marc Erich Latoschik,
Motion Passwords, In Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology (19), pp. 1-11. New York, NY, USA:
Association for Computing Machinery,
2024. [Download][BibSonomy][Doi]
@conference{rack2024motion,
author = {Christian Rack and Lukas Schach and Felix Achter and Yousof Shehada and Jinghuai Lin and Marc Erich Latoschik},
number = {19},
url = {https://doi.org/10.1145/3641825.3687711},
year = {2024},
booktitle = {Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '24},
pages = {1-11},
doi = {10.1145/3641825.3687711},
title = {Motion Passwords}
}
Abstract:
This paper introduces “Motion Passwords”, a novel biometric authentication approach where virtual reality users verify their identity by physically writing a chosen word in the air with their hand controller. This method allows combining three layers of verification: knowledge-based password input, handwriting style analysis, and motion profile recognition. As a first step towards realizing this potential, we focus on verifying users based on their motion profiles. We conducted a data collection study with 48 participants, who performed over 3800 Motion Password signatures across two sessions. We assessed the effectiveness of feature-distance and similarity-learning methods for motion-based verification using the Motion Passwords as well as specific and uniform ball-throwing signatures used in previous works. In our results, the similarity-learning model was able to verify users with the same accuracy for both signature types. This demonstrates that Motion Passwords, even when applying only the motion-based verification layer, achieve reliability comparable to previous methods. This highlights the potential for Motion Passwords to become even more reliable with the addition of knowledge-based and handwriting style verification layers. Furthermore, we present a proof-of-concept Unity application demonstrating the registration and verification process with our pretrained similarity-learning model. We publish our code, the Motion Password dataset, the pretrained model, and our Unity prototype on https://github.com/cschell/MoPs
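A rough mental model of the motion-profile verification layer (not the authors' trained similarity-learning network; their actual code and data are in the linked repository) is a thresholded cosine-similarity check between embeddings of two motion sequences. The encoder below is a placeholder random projection, and the 18-dimensional feature layout is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
PROJECTION = rng.normal(size=(3 * 18, 32))   # toy "encoder": fixed random projection

def embed(motion: np.ndarray) -> np.ndarray:
    """Map a (frames, 18) sequence of head/hand features to a unit-length embedding."""
    pooled = np.concatenate([motion.mean(0), motion.std(0), motion.max(0)])
    vec = pooled @ PROJECTION
    return vec / np.linalg.norm(vec)

def verify(enrolled: np.ndarray, attempt: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept if cosine similarity between the two embeddings exceeds the threshold."""
    return float(embed(enrolled) @ embed(attempt)) >= threshold

# Hypothetical signatures: an enrolled Motion Password and a noisy repeat of it.
enrolled = rng.normal(size=(600, 18))
attempt = enrolled + 0.01 * rng.normal(size=enrolled.shape)
print(verify(enrolled, attempt))
```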
Felix Scheuerpflug, Christian Aulbach, Daniel Pohl, Sebastian von Mammen,
Experimental Immersive 3D Camera Setup for Mobile Phones, In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 881--882.
2024. [BibSonomy]
@inproceedings{Scheuerpflug:2024aa,
author = {Felix Scheuerpflug and Christian Aulbach and Daniel Pohl and Sebastian von Mammen},
year = {2024},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {881--882},
title = {Experimental Immersive 3D Camera Setup for Mobile Phones}
}
Marc Mußmann, Daniel Nicolas Hofstadler, Sebastian von Mammen,
An Agent-based, Interactive Simulation Model of Root Growth, In ALIFE 2024: Proceedings of the 2024 Artificial Life Conference.
2024. [BibSonomy]
@inproceedings{Mussmann:2024aa,
author = {Marc Mußmann and Daniel Nicolas Hofstadler and Sebastian von Mammen},
year = {2024},
booktitle = {ALIFE 2024: Proceedings of the 2024 Artificial Life Conference},
title = {An Agent-based, Interactive Simulation Model of Root Growth}
}
Maximilian Landeck, Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
Object Motion Manipulation and time perception in virtual reality, In Frontiers in Virtual Reality, Vol. 5.
2024. [Download][BibSonomy][Doi]
@article{10.3389/frvir.2024.1390703,
author = {Maximilian Landeck and Fabian Unruh and Jean-Luc Lugrin and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2024.1390703},
year = {2024},
volume = {5},
doi = {10.3389/frvir.2024.1390703},
title = {Object Motion Manipulation and time perception in virtual reality}
}
Abstract:
This paper presents a novel approach to altering how time is perceived in Virtual Reality (VR). It involves manipulating the speed and pattern of motion in objects associated with timekeeping, both directly (such as clocks) and indirectly (like pendulums). Objects influencing our perception of time are called ‘zeitgebers’; for instance, observing a clock or pendulum tends to affect how we perceive the passage of time. The speed of motion of their internal parts (clock hands or pendulum rings) is explicitly or implicitly related to the perception of time. However, the perceptual effects of accelerating or decelerating the speed of a virtual clock or pendulum in VR are still an open question. We hypothesize that the acceleration of their internal motion will accelerate the passage of time and that the irregularity of the orbit pendulum’s motion will amplify this effect. We anticipate that the irregular movements of the pendulum will lower boredom and heighten attention, thereby making time seem to pass more quickly. Therefore, we conducted an experiment with 32 participants, exposing them to two types of virtual zeitgebers exhibiting both regular and irregular motions. These were a virtual clock and an orbit pendulum, each operating at slow, normal, and fast speeds. Our results revealed that time passed by faster when participants observed virtual zeitgebers in the fast speed condition than in the slow speed condition. The orbit pendulum significantly accelerated the perceived passage of time compared to the clock. We believe that the irregular motion requires a higher degree of attention, which is confirmed by the significantly longer gaze fixations of the participants. These findings are crucial for time perception manipulation in VR, offering potential for innovative treatments for conditions like depression and improving wellbeing. Yet, further clinical research is needed to confirm these applications.
Smi Hinterreiter, Timo Spinde, Sebastian Oberdörfer, Isao Echizen, Marc Erich Latoschik,
News Ninja: Gamified Annotation Of Linguistic Bias In Online News, In Proceedings of the ACM Human-Computer Interaction, Vol. 8 (CHI PLAY, Article 327), p. 29.
2024. [Download][BibSonomy][Doi]
@article{hinterreiter2024ninja,
author = {Smi Hinterreiter and Timo Spinde and Sebastian Oberdörfer and Isao Echizen and Marc Erich Latoschik},
journal = {Proceedings of the ACM Human-Computer Interaction},
number = {CHI PLAY, Article 327},
url = {https://dl.acm.org/doi/10.1145/3677092},
year = {2024},
pages = {29},
volume = {8},
doi = {10.1145/3677092},
title = {News Ninja: Gamified Annotation Of Linguistic Bias In Online News}
}
Abstract:
Recent research shows that visualizing linguistic bias mitigates its negative effects. However, reliable automatic detection methods to generate such visualizations require costly, knowledge-intensive training data. To facilitate data collection for media bias datasets, we present News Ninja, a game employing data-collecting game mechanics to generate a crowdsourced dataset. Before annotating sentences, players are educated on media bias via a tutorial. Our findings show that datasets gathered with crowdsourced workers trained on News Ninja can reach significantly higher inter-annotator agreements than expert and crowdsourced datasets with similar data quality. As News Ninja encourages continuous play, it allows datasets to adapt to the reception and contextualization of news over time, presenting a promising strategy to reduce data collection expenses, educate players, and promote long-term bias mitigation.
Murat Yalcin, Andreas Halbig, Martin Fischbach, Marc Erich Latoschik,
Automatic Cybersickness Detection by Deep Learning of Augmented Physiological Data from Off-the-Shelf Consumer-Grade Sensors, In Frontiers in Virtual Reality, Vol. 5.
2024. [Download][BibSonomy][Doi]
@article{10.3389/frvir.2024.1364207,
author = {Murat Yalcin and Andreas Halbig and Martin Fischbach and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2024.1364207},
year = {2024},
volume = {5},
doi = {10.3389/frvir.2024.1364207},
title = {Automatic Cybersickness Detection by Deep Learning of Augmented Physiological Data from Off-the-Shelf Consumer-Grade Sensors}
}
Abstract:
Cybersickness is still a prominent risk factor potentially affecting the usability of virtual reality applications. Automated real-time detection of cybersickness promises to support a better general understanding of the phenomena and to avoid and counteract its occurrence. It could be used to facilitate application optimization, that is, to systematically link potential causes (technical development and conceptual design decisions) to cybersickness in closed-loop user-centered development cycles. In addition, it could be used to monitor, warn, and hence safeguard users against any onset of cybersickness during a virtual reality exposure, especially in healthcare applications. This article presents a novel real-time-capable cybersickness detection method by deep learning of augmented physiological data. In contrast to related preliminary work, we are exploring a unique combination of mid-immersion ground truth elicitation, an unobtrusive wireless setup, and moderate training performance requirements. We developed a proof-of-concept prototype to compare (combinations of) convolutional neural networks, long short-term memory, and support vector machines with respect to detection performance. We demonstrate that the use of a conditional generative adversarial network-based data augmentation technique increases detection performance significantly and showcase the feasibility of real-time cybersickness detection in a genuine application example. Finally, a comprehensive performance analysis demonstrates that a four-layered bidirectional long short-term memory network with the developed data augmentation delivers superior performance (91.1% F1-score) for real-time cybersickness detection. To encourage replicability and reuse in future cybersickness studies, we released the code and the dataset as publicly available.
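As a sketch of the best-performing architecture reported above (a four-layer bidirectional LSTM over windows of physiological signals), the PyTorch snippet below builds such a classifier; channel count, window length, hidden size, and the binary output are illustrative assumptions, not the authors' exact configuration or released code.

```python
import torch
import torch.nn as nn

class BiLSTMSicknessClassifier(nn.Module):
    """4-layer bidirectional LSTM over windows of physiological signals."""
    def __init__(self, n_channels: int = 4, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=4, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels), e.g. windows of heart rate, EDA, temperature, ...
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # classify from the last time step

model = BiLSTMSicknessClassifier()
dummy_window = torch.randn(8, 250, 4)           # 8 windows, 250 samples, 4 channels
print(model(dummy_window).shape)                # torch.Size([8, 2])
```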
Murat Yalcin, Marc Erich Latoschik,
DeepFear: Game Usage within Virtual Reality to Provoke Physiological Responses of Fear, In Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems, p. 1–8. New York, NY, USA:
Association for Computing Machinery,
2024. [Download][BibSonomy][Doi]
@inproceedings{Yalcin2024,
author = {Murat Yalcin and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3613905.3650877},
year = {2024},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI EA '24},
pages = {1–8},
doi = {10.1145/3613905.3650877},
title = {DeepFear: Game Usage within Virtual Reality to Provoke Physiological Responses of Fear}
}
Abstract:
The investigation and classification of the physiological signals involved in fear perception are complicated by the difficulty of reliably eliciting and measuring the complex construct of fear. Virtual Reality (VR) games in particular can reliably elicit such physiological responses, which can then be used to develop treatments in the healthcare domain. In this study, we carried out an exploratory analysis of physiological data and assessed the feasibility of wearable sensor devices for capturing fear responses. We contribute 1) the use of an off-the-shelf commercial game (Half-Life: Alyx) to provoke the emotion of fear, 2) a performance analysis of different deep learning models, including Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Transformers, and 3) an investigation, through comprehensive data analysis, of the most responsive physiological signal and the best sensor device for multi-level fear classification. Accuracy metrics, F1-scores, and confusion matrices showed that ECG and ACC are the two most significant signals for fear recognition.
Mark R Miller, Vivek C Nair, Eugy Han, Cyan DeVeaux, Christian Rack, Rui Wang, Brandon Huang, Marc Erich Latoschik, James F O'Brien, Jeremy N Bailenson,
Effect of Duration and Delay on the Identifiability of VR Motion, In 2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM).
2024. [Download][BibSonomy][Doi]
@inproceedings{miller2024effect,
author = {Mark R Miller and Vivek C Nair and Eugy Han and Cyan DeVeaux and Christian Rack and Rui Wang and Brandon Huang and Marc Erich Latoschik and James F O'Brien and Jeremy N Bailenson},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-06-Miller-effect-of-duration-and-delay.pdf},
year = {2024},
booktitle = {2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)},
doi = {10.1109/WoWMoM60985.2024.00023},
title = {Effect of Duration and Delay on the Identifiability of VR Motion}
}
Vivek Nair, Christian Rack, Wenbo Guo, Rui Wang, Shuixian Li, Brandon Huang, Atticus Cull, James F. O'Brien, Marc Latoschik, Louis Rosenberg, Dawn Song,
Inferring Private Personal Attributes of Virtual Reality Users from Ecologically Valid Head and Hand Motion Data, In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 477-484.
2024. [BibSonomy][Doi]
@inproceedings{10536245,
author = {Vivek Nair and Christian Rack and Wenbo Guo and Rui Wang and Shuixian Li and Brandon Huang and Atticus Cull and James F. O'Brien and Marc Latoschik and Louis Rosenberg and Dawn Song},
year = {2024},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {477-484},
doi = {10.1109/VRW62533.2024.00094},
title = {Inferring Private Personal Attributes of Virtual Reality Users from Ecologically Valid Head and Hand Motion Data}
}
Marie Luisa Fiedler, Erik Wolf, Nina Döllinger, David Mal, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
From Avatars to Agents: Self-Related Cues through Embodiment and Personalization Affect Body Perception in Virtual Reality, In IEEE Transactions on Visualization and Computer Graphics, pp. 1-11.
2024. [Download][BibSonomy][Doi]
@article{fiedler2024selfcues,
author = {Marie Luisa Fiedler and Erik Wolf and Nina Döllinger and David Mal and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ismar-tvcg-self-identification-body-weight-perception-preprint-reduced.pdf},
year = {2024},
pages = {1-11},
doi = {10.1109/TVCG.2024.3456211},
title = {From Avatars to Agents: Self-Related Cues through Embodiment and Personalization Affect Body Perception in Virtual Reality}
}
Abstract:
Our work investigates the influence of self-related cues in the design of virtual humans on body perception in virtual reality. In a 2x2 mixed design, 64 participants faced photorealistic virtual humans either as a motion-synchronized embodied avatar or as an autonomous moving agent, appearing subsequently with a personalized and generic texture. Our results unveil that self-related cues through embodiment and personalization yield an individual and complemented increase in participants' sense of embodiment and self-identification towards the virtual human. Different body weight modification and estimation tasks further showed an impact of both factors on participants' body weight perception. Additional analyses revealed that the participant's body mass index predicted body weight estimations in all conditions and that participants' self-esteem and body shape concerns correlated with different body weight perception results. Hence, we have demonstrated the occurrence of double standards through induced self-related cues in virtual human perception, especially through embodiment.
Damian Kutzias, Sebastian von Mammen,
Recent Advances in Procedural Generation of Buildings: From Diversity to Integration., In IEEE Trans. Games, Vol. 16(1), pp. 16-35.
2024. [Download][BibSonomy]
@article{Kutzias:2024ac,
author = {Damian Kutzias and Sebastian von Mammen},
journal = {IEEE Trans. Games},
number = {1},
url = {http://dblp.uni-trier.de/db/journals/tciaig/tciaig16.html#KutziasM24},
year = {2024},
pages = {16-35},
volume = {16},
title = {Recent Advances in Procedural Generation of Buildings: From Diversity to Integration.}
}
Sooraj K Babu, Mounsif Chetitah, Sebastian von Mammen,
Recommender-based User Guidance Framework, In 2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR), pp. 275-280.
2024. [BibSonomy][Doi]
@inproceedings{Babu:2024aa,
author = {Sooraj K Babu and Mounsif Chetitah and Sebastian von Mammen},
year = {2024},
booktitle = {2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR)},
pages = {275-280},
doi = {10.1109/AIxVR59861.2024.00046},
title = {Recommender-based User Guidance Framework}
}
Abstract:
User guidance is crucial for a seamless user experience in a software system. This paper explores the need for effective user guidance mechanisms to enhance user experience and reduce human errors and frustration. In order to do that, we formalise and symbolically represent user guidance for versatile applications across diverse contexts by considering the knowledge base on users and software systems. We comprehensively analyse four distinct categories of user guidance mechanisms, namely, familiarisation, continuation, navigation, and customisation, with examples and expected data needed for each category. In addition, we investigate whether a recommender system (RS) can function as an appropriate tool to intelligently guide a user through a virtual reality (VR) authoring platform and describe the implementation of the corresponding RS-based guidance mechanism that aligns with the proposed formalisation. The paper concludes with a discussion of the present state of affairs and prospects for future research.
Damian Kutzias, Sebastian von Mammen,
Handling Interfaces for the Procedural Generation of Complete Buildings, In Proceedings of ICSE GAS Workshop 2024, Vol. 8th International Workshop on Games and Software Engineering.
2024. [BibSonomy]
@inproceedings{Kutzias:2024ab,
author = {Damian Kutzias and Sebastian von Mammen},
year = {2024},
booktitle = {Proceedings of ICSE GAS Workshop 2024},
series = {Emerging Advanced Technologies for Game Engineering},
volume = {8th International Workshop on Games and Software Engineering},
title = {Handling Interfaces for the Procedural Generation of Complete Buildings}
}
Vivek Nair, Mark Roman Miller, Rui Wang, Brandon Huang, Christian Rack, Marc Erich Latoschik, James O'Brien,
Effect of Data Degradation on Motion Re-Identification, In 2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM).
2024. [Download][BibSonomy][Doi]
@inproceedings{nair2024effect,
author = {Vivek Nair and Mark Roman Miller and Rui Wang and Brandon Huang and Christian Rack and Marc Erich Latoschik and James O'Brien},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-06-nair-obfuscation.pdf},
year = {2024},
booktitle = {2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)},
doi = {10.1109/WoWMoM60985.2024.00026},
title = {Effect of Data Degradation on Motion Re-Identification}
}
Erik Wolf,
Individual-, System-, and Application-Related Factors Influencing the Perception of Virtual Humans in Virtual Environments.
2024. PhD Thesis [Download][BibSonomy][Doi]
@phdthesis{wolf2024thesis,
author = {Erik Wolf},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-phd-thesis-erik-wolf.pdf},
year = {2024},
doi = {10.25972/OPUS-37468},
title = {Individual-, System-, and Application-Related Factors Influencing the Perception of Virtual Humans in Virtual Environments}
}
Abstract:
Mixed, augmented, and virtual reality, collectively known as extended reality (XR), allows users to immerse themselves in virtual environments and engage in experiences surpassing reality's boundaries. Virtual humans are ubiquitous in such virtual environments and can be utilized for myriad purposes, offering the potential to greatly impact daily life. Through the embodiment of virtual humans, XR offers the opportunity to influence how we see ourselves and others. In this function, virtual humans serve as a predefined stimulus whose perception is elementary for researchers, application designers, and developers to understand. This dissertation aims to investigate the influence of individual-, system-, and application-related factors on the perception of virtual humans in virtual environments, focusing on their potential use as stimuli in the domain of body perception. Individual-related factors encompass influences based on the user's characteristics, such as appearance, attitudes, and concerns. System-related factors relate to the technical properties of the system that implements the virtual environment, such as the level of immersion. Application-related factors refer to design choices and specific implementations of virtual humans within virtual environments, such as their rendering or animation style.
This dissertation provides a contextual framework and reviews the relevant literature on factors influencing the perception of virtual humans. To address identified research gaps, it reports on five empirical studies analyzing quantitative and qualitative data from a total of 165 participants. The studies utilized a custom-developed XR system, enabling users to embody rapidly generated, photorealistically personalized virtual humans that can be realistically altered in body weight and observed using different immersive XR displays. The dissertation's findings showed, for example, that embodiment and personalization of virtual humans serve as self-related cues and moderate the perception of their body weight based on the user's body weight. They also revealed a display bias that significantly influences the perception of virtual humans, with disparities in body weight perception of up to nine percent between different immersive XR displays. Based on all findings, implications for application design were derived, including recommendations regarding reconstruction, animation, body weight modification, and body weight estimation methods for virtual humans, but also for the general user experience. By revealing influences on the perception of virtual humans, this dissertation contributes to understanding the intricate relationship between users and virtual humans. The findings and implications presented have the potential to enhance the design and development of virtual humans, leading to improved user experiences and broader applications beyond the domain of body perception.
Kristoffer Waldow, Lukas Decker, Martin Mišiak, Arnulph Fuhrmann, Daniel Roth, Marc Erich Latoschik,
Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering, In Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24).
IEEE,
2024. Best poster award 🏆. To be published [Download][BibSonomy]
@inproceedings{waldow2024investigating,
author = {Kristoffer Waldow and Lukas Decker and Martin Mišiak and Arnulph Fuhrmann and Daniel Roth and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-incoherent-depth-cues.pdf},
year = {2024},
booktitle = {Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24)},
publisher = {IEEE},
title = {Investigating Incoherent Depth Perception Features in Virtual Reality using Stereoscopic Impostor-Based Rendering}
}
Abstract:
Depth perception is essential for our daily experiences, aiding in orientation and interaction with our surroundings. Virtual Reality allows us to decouple such depth cues, mainly represented through binocular disparity and motion parallax. With fully mesh-based rendering methods, these cues are not problematic as they originate from the object’s underlying geometry. However, manipulating motion parallax, as seen in stereoscopic imposter-based rendering, raises multiple perceptual questions. Therefore, we conducted a user experiment to investigate how varying object sizes affect such visual errors and perceived 3-dimensionality, revealing an interestingly significant negative correlation and new assumptions about visual quality.
Philipp Krop, Martin J. Koch, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich,
The Effects of Expertise, Humanness, and Congruence on Perceived Trust, Warmth, Competence and Intention to Use Embodied AI, In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), p. 9. New York, NY, USA:
ACM,
2024. [Download][BibSonomy][Doi]
@inproceedings{krop2024effects,
author = {Philipp Krop and Martin J. Koch and Astrid Carolus and Marc Erich Latoschik and Carolin Wienrich},
url = {https://dl.acm.org/doi/pdf/10.1145/3613905.3650749},
year = {2024},
booktitle = {Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)},
publisher = {ACM},
address = {New York, NY, USA},
pages = {9},
doi = {10.1145/3613905.3650749},
title = {The Effects of Expertise, Humanness, and Congruence on Perceived Trust, Warmth, Competence and Intention to Use Embodied AI}
}
Abstract:
Even though people imagine different embodiments when asked which AI they would like to work with, most studies investigate trust in AI systems without specific physical appearances. This study aims to close this gap by combining influencing factors of trust to analyze their impact on the perceived trustworthiness, warmth, and competence of an embodied AI. We recruited 68 participants who observed three co-working scenes with an embodied AI, presented as expert/novice (expertise), human/AI (humanness), or congruent/slightly incongruent to the environment (congruence). Our results show that the expertise condition had the largest impact on trust, acceptance, and perceived warmth and competence. When controlled for perceived competence, the humanness of the AI and the congruence of its embodiment to the environment also influence acceptance. The results show that besides expertise and the perceived competence of the AI, other design variables are relevant for successful human-AI interaction, especially when the AI is embodied.
David Mal, Nina Döllinger, Erik Wolf, Stephan Wenninger, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
Am I the odd one? Exploring (in)congruencies in the realism of avatars and virtual others in virtual reality, In Frontiers in Virtual Reality, Vol. 5.
2024. [Download][BibSonomy][Doi]
@article{mal2024oddone,
author = {David Mal and Nina Döllinger and Erik Wolf and Stephan Wenninger and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-frontiers-vhp-group-final.pdf},
year = {2024},
volume = {5},
doi = {10.3389/frvir.2024.1417066},
title = {Am I the odd one? Exploring (in)congruencies in the realism of avatars and virtual others in virtual reality}
}
Abstract:
Virtual humans play a pivotal role in social virtual environments, shaping users’ VR experiences. The diversity in available options and users’ individual preferences can result in a heterogeneous mix of appearances among a group of virtual humans. The resulting variety in higher-order anthropomorphic and realistic cues introduces multiple (in)congruencies, eventually impacting the plausibility of the experience. However, related work investigating the effects of being co-located with multiple virtual humans of different appearances remains limited. In this work, we consider the impact of (in)congruencies in the realism of a group of virtual humans, including co-located others (agents) and one’s self-representation (self-avatar), on users’ individual VR experiences. In a 2 × 3 mixed design, participants embodied either (1) a personalized realistic or (2) a customized stylized self-avatar across three consecutive VR exposures in which they were accompanied by a group of virtual others being either (1) all realistic, (2) all stylized, or (3) mixed between stylized and realistic. Our results indicate that groups of virtual others of higher realism, i.e., potentially more congruent with participants’ real-world experiences and expectations, were considered more human-like, increasing the feeling of co-presence and the impression of interaction possibilities. (In)congruencies concerning the homogeneity of the group did not cause considerable effects. Furthermore, our results indicate that a self-avatar’s congruence with the participant’s real-world experiences concerning their own physical body yielded notable benefits for virtual body ownership and self-identification for realistic personalized avatars. Notably, the incongruence between a stylized self-avatar and a group of realistic virtual others resulted in diminished ratings of self-location and self-identification. This suggests that higher-order (in)congruent visual cues that are not within the ego-central referential frame of one’s (virtual) body can have an (adverse) effect on the relationship between one’s self and body. We conclude on the implications of our findings and discuss our results within current theories of VR experiences, considering (in)congruent visual cues and their impact on the perception of virtual others, self-representation, and spatial presence.
David Mal, Erik Wolf, Nina Döllinger, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
From 2D-Screens to VR: Exploring the Effect of Immersion on the Plausibility of Virtual Humans, In CHI 24 Conference on Human Factors in Computing Systems Extended Abstracts, pp. 1-8.
2024. [Download][BibSonomy][Doi]
@inproceedings{mal2024vhpvr,
author = {David Mal and Erik Wolf and Nina Döllinger and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-chi-vhp-in-vr-preprint.pdf},
year = {2024},
booktitle = {CHI 24 Conference on Human Factors in Computing Systems Extended Abstracts},
pages = {1-8},
doi = {10.1145/3613905.3650773},
title = {From 2D-Screens to VR: Exploring the Effect of Immersion on the Plausibility of Virtual Humans}
}
Abstract:
Virtual humans significantly contribute to users' plausible XR experiences. However, it may not only be the congruent rendering of the virtual human but also the intermediary display technology that has a significant impact on virtual humans' plausibility. In a low-immersive desktop-based and a high-immersive VR condition, participants rated realistic and abstract animated virtual humans regarding plausibility, affective appraisal, and social judgments. First, our results confirmed the factor structure of a preliminary virtual human plausibility questionnaire in VR. Further, the appearance and behavior of realistic virtual humans were overall perceived as more plausible compared to abstract virtual humans, an effect that increased with high immersion. Moreover, only for high immersion, realistic virtual humans were rated as more trustworthy and sympathetic than abstract virtual humans. Interestingly, we observed a potential uncanny valley effect for low but not for high immersion. We discuss the impact of a natural perception of anthropomorphic and realistic cues in VR and highlight the potential of immersive technology to elicit distinct effects in virtual humans.
Christian Merz, Christopher Göttfert, Carolin Wienrich, Marc Erich Latoschik,
Universal Access for Social XR Across Devices: The Impact of Immersion on the Experience in Asymmetric Virtual Collaboration, In Proceedings of the 31st IEEE Virtual Reality conference (VR '24).
2024. [Download][BibSonomy]
@inproceedings{merz2024universal,
author = {Christian Merz and Christopher Göttfert and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-universal-access-social-xr.pdf},
year = {2024},
booktitle = {Proceedings of the 31st IEEE Virtual Reality conference (VR '24)},
title = {Universal Access for Social XR Across Devices: The Impact of Immersion on the Experience in Asymmetric Virtual Collaboration}
}
Abstract:
This article investigates the influence of input/output device characteristics and degrees of immersion on the User Experience (UX) of specific eXtended Reality (XR) effects, i.e., presence, self-perception, other-perception, and task perception. It targets universal access to social XR, where dedicated XR hardware is unavailable or can not be used, but participation is desirable or even necessary. We compare three different device configurations: (i) desktop screen with mouse, (ii) desktop screen with tracked controllers, and (iii) Head-Mounted Display (HMD) with tracked controllers. 87 participants took part in collaborative dyadic interaction (a sorting task) with asymmetric device configurations in a specifically developed social XR. In line with prior research, the sense of presence and embodiment were significantly lower for the desktop setups. However, we only found minor differences in task load and no differences in usability and enjoyment of the task between the conditions. Additionally, the perceived humanness and virtual human plausibility of the other were not affected, no matter the device used. Finally, there was no impact regarding co-presence and social presence independent of the level of immersion of oneself or the other. We conclude that the device in social XR is important for self-perception and presence. However, our results indicate that the devices do not affect important UX and usability aspects, specifically, the qualities of social interaction in collaborative scenarios, paving the way for universal access to social XR encounters and significantly promoting participation.
Florian Kern, Jonathan Tschanter, Marc Erich Latoschik,
Handwriting for Text Input and the Impact of XR Displays, Surface Alignments, and Sentence Complexities, In IEEE Transactions on Visualization and Computer Graphics, pp. 1-11.
2024. [Download][BibSonomy][Doi]
@article{10460576,
author = {Florian Kern and Jonathan Tschanter and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
url = {https://ieeexplore.ieee.org/document/10460576},
year = {2024},
pages = {1-11},
doi = {10.1109/TVCG.2024.3372124},
title = {Handwriting for Text Input and the Impact of XR Displays, Surface Alignments, and Sentence Complexities}
}
Abstract:
Text input is desirable across various eXtended Reality (XR) use cases and is particularly crucial for knowledge and office work. This article compares handwriting text input between Virtual Reality (VR) and Video See-Through Augmented Reality (VST AR), facilitated by physically aligned and mid-air surfaces when writing simple and complex sentences. In a 2x2x2 experimental design, 72 participants performed two ten-minute handwriting sessions, each including ten simple and ten complex sentences representing text input in real-world scenarios. Our developed handwriting application supports different XR displays, surface alignments, and handwriting recognition based on digital ink. We evaluated usability, user experience, task load, text input performance, and handwriting style. Our results indicate high usability with a successful transfer of handwriting skills to the virtual domain. XR displays and surface alignments did not impact text input speed and error rate. However, sentence complexities did, with participants achieving higher input speeds and fewer errors for simple sentences (17.85 WPM, 0.51% MSD ER) than complex sentences (15.07 WPM, 1.74% MSD ER). Handwriting on physically aligned surfaces showed higher learnability and lower physical demand, making them more suitable for prolonged handwriting sessions. Handwriting on mid-air surfaces yielded higher novelty and stimulation ratings, which might diminish with more experience. Surface alignments and sentence complexities significantly affected handwriting style, leading to enlarged and more connected cursive writing in both mid-air and for simple sentences. The study also demonstrated the benefits of using XR controllers in a pen-like posture to mimic styluses and pressure-sensitive tips on physical surfaces for input detection. We additionally provide a phrase set of simple and complex sentences as a basis for future text input studies, which can be expanded and adapted.
Franziska Westermeier, Larissa Brübach, Carolin Wienrich, Marc Erich Latoschik,
Assessing Depth Perception in VR and Video See-Through AR: A Comparison on Distance Judgment, Performance, and Preference, In IEEE Transactions on Visualization and Computer Graphics, Vol. 30(5), pp. 2140 - 2150.
2024. [Download][BibSonomy][Doi]
@article{westermeier2024assessing,
author = {Franziska Westermeier and Larissa Brübach and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://ieeexplore.ieee.org/document/10458408},
year = {2024},
pages = {2140 - 2150},
volume = {30},
doi = {10.1109/TVCG.2024.3372061},
title = {Assessing Depth Perception in VR and Video See-Through AR: A Comparison on Distance Judgment, Performance, and Preference}
}
Abstract:
Spatial User Interfaces along the Reality-Virtuality continuum heavily depend on accurate depth perception. However, current display technologies still exhibit shortcomings in the simulation of accurate depth cues, and these shortcomings also vary between Virtual or Augmented Reality (VR, AR: eXtended Reality (XR) for short). This article compares depth perception between VR and Video See-Through (VST) AR. We developed a digital twin of an existing office room where users had to perform five depth-dependent tasks in VR and VST AR. Thirty-two participants took part in a user study using a 1×4 within-subjects design. Our results reveal higher misjudgment rates in VST AR due to conflicting depth cues between virtual and physical content. Increased head movements observed in participants were interpreted as a compensatory response to these conflicting cues. Furthermore, a longer task completion time in the VST AR condition indicates a lower task performance in VST AR. Interestingly, while participants rated the VR condition as easier and contrary to the increased misjudgments and lower performance with the VST AR display, a majority still expressed a preference for the VST AR experience. We discuss and explain these findings with the high visual dominance and referential power of the physical content in the VST AR condition, leading to a higher spatial presence and plausibility.
Carolin Wienrich, Viktoria Horn, Arne Bürger, Jana Krauss,
Personal Space Invasion to Prevent Cyberbullying: Design, Development, and Evaluation of an Immersive Prevention Measure for Children and Adolescents, In Springer Virtual Reality.
2024. [BibSonomy]
@article{wienrich2024personal,
author = {Carolin Wienrich and Viktoria Horn and Arne Bürger and Jana Krauss},
journal = {Springer Virtual Reality},
year = {2024},
title = {Personal Space Invasion to Prevent Cyberbullying: Design, Development, and Evaluation of an Immersive Prevention Measure for Children and Adolescents}
}
Abstract:
Previous work on cyberbullying has shown that the number of victims is increasing and the need for prevention is exceptionally high among younger school students (5th-9th grade). Due to the omnipresence of cyberattacks, victims can hardly distance themselves psychologically and thus experience an intrusion in almost all areas of life. The perpetrators, on the other hand, feel the consequences of their actions even less in cyberspace. However, there is a gap between the need and the existence of innovative prevention programs tied to the digital reality of the target group and the treatment of essential aspects of psychological distance. This article explores the design space, feasibility, and effectiveness of a unique VR-based cyberbullying prevention component in a human-centered iterative approach. The central idea is reflected in creating a virtual personal space invasion with virtual objects associated with cyberbullying, making the everyday intrusion of victims tangible. A pre-study revealed that harmful speech texts in bright non-removable message boxes best transferred the psychological determinants associated with a personal space invasion to virtual objects contextualized in cyberbullying scenarios. Therefore, these objects were incorporated into a virtual prevention program that was then tested in a laboratory study with 41 participants. The results showed that the intervention could trigger cognitive dissonance and empathy. In the second step, the intervention was evaluated and improved in a focus group with the actual target group of children and adolescents. The improved application was then evaluated in a school workshop for five days with 100 children and adolescents. The children understood the metaphor of virtual space invasion by the harmful text boxes and reported the expected psychological effects. They also showed great interest in VR. In summary, this paper contributes to the innovative and effective prevention of cyberbullying by using the potential of VR. It provides empirical evidence from a laboratory experiment and a field study with a large sample from the target group of children and adolescents and discusses implications for future developments.
Nina Döllinger, David Mal, Sebastian Keppler, Erik Wolf, Mario Botsch, Johann Habakuk Israel, Marc Erich Latoschik, Carolin Wienrich,
Virtual Body Swapping: A VR-Based Approach to Embodied Third-Person Self-Processing in Mind-Body Therapy, In 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-18.
2024. [Download][BibSonomy][Doi]
@inproceedings{dollinger2024bodyswap,
author = {Nina Döllinger and David Mal and Sebastian Keppler and Erik Wolf and Mario Botsch and Johann Habakuk Israel and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-chi-bodyswap-preprint.pdf},
year = {2024},
booktitle = {2024 CHI Conference on Human Factors in Computing Systems},
pages = {1-18},
doi = {10.1145/3613904.3642328},
title = {Virtual Body Swapping: A VR-Based Approach to Embodied Third-Person Self-Processing in Mind-Body Therapy}
}
Abstract:
Virtual reality (VR) offers various opportunities for innovative therapeutic approaches, especially regarding self-related mind-body interventions. We introduce a VR body swap system enabling multiple users to swap their perspectives and appearances and evaluate its effects on virtual sense of embodiment (SoE) and perception- and cognition-based self-related processes. In a self-compassion-framed scenario, twenty participants embodied their personalized, photorealistic avatar, swapped bodies with an unfamiliar peer, and reported their SoE, interoceptive awareness (perception), and self-compassion (cognition). Participants' experiences differed between bottom-up and top-down processes. Regarding SoE, their agency and self-location shifted to the swap avatar, while their top-down self-identification remained with their personalized avatar. Further, the experience positively affected interoceptive awareness but not self-compassion. Our outcomes offer novel insights into the SoE in a multiple-embodiment scenario and highlight the need to differentiate between the different processes in intervention design. They raise concerns and requirements for future research on avatar-based mind-body interventions.
Carolin Wienrich, Stephanie Vogt, Nina Döllinger, David Obremski,
Promoting Eco-Friendly Behavior through Virtual Reality - Implementation and Evaluation of Immersive Feedback Conditions of a Virtual CO2 Calculator, In 2024 CHI Conference on Human Factors in Computing Systems.
2024. [Download][BibSonomy]
@inproceedings{wienrich2024promoting,
author = {Carolin Wienrich and Stephanie Vogt and Nina Döllinger and David Obremski},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024_chi_co2_calculator.pdf},
year = {2024},
booktitle = {2024 CHI Conference on Human Factors in Computing Systems},
title = {Promoting Eco-Friendly Behavior through Virtual Reality - Implementation and Evaluation of Immersive Feedback Conditions of a Virtual CO2 Calculator}
}
Abstract:
Climate change is one of the most pressing global challenges in the 21st century. Urgent actions favoring the environment's well-being are essential to mitigate its potentially irreversible consequences. However, the delayed and often distant nature of the effects of sustainable behavior makes it challenging for individuals to connect with the issue personally. Immersive media are an opportunity to introduce innovative feedback mechanisms to highlight the urgency of behavior effects. We introduce a VR carbon calculator that visualizes users' annual carbon footprint as CO2-filled balloons over multiple periods. In a 2x2 design, participants calculated and visualized their carbon footprint numerically or as balloons over one or three years. We found no effect of our visualization but a significant impact of the visualized period on participants' environmental self-efficacy. These findings emphasize the importance of target-oriented design in VR behavior interventions.
Nina Döllinger, Jessica Topel, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik, Jean-Luc Lugrin,
Exploring Agent-User Personality Similarity and Dissimilarity for Virtual Reality Psychotherapy, In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW).
2024. [Download][BibSonomy]
@inproceedings{dollinger2024exploring,
author = {Nina Döllinger and Jessica Topel and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik and Jean-Luc Lugrin},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-pandas-personality.pdf},
year = {2024},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
title = {Exploring Agent-User Personality Similarity and Dissimilarity for Virtual Reality Psychotherapy}
}
Abstract:
Imaginary self-encounters are a common approach in psychotherapy. Recent virtual reality advancements enable innovative approaches to enhanced self-encounters using photorealistic personalized Doppelgangers (DG). Yet, in addition to appearance, similarity in body language could be a great driver of self-identification with a DG or a generic agent. One cost-efficient and time-saving approach could be personality-enhanced animations. We present a pilot study evaluating the effects of personality-enhanced body language in DGs and generic agents. Eleven participants evaluated a photorealistic DG and a generic agent, animated in a seated position to simulate four personality types: low and high extraversion and low and high emotional stability. Participants rated the agents' personalities and their self-identification with them. We found an overall positive relationship between a calculated personality similarity score, self-attribution, and perceived behavior similarity. Perceived appearance similarity was affected by personality similarity only in generic agents, indicating the potential of body language to provoke a feeling of similarity even in dissimilar-appearing agents.
Pascal Martinez Pankotsch, Sebastian Oberdörfer, Marc Erich Latoschik,
Effects of Nonverbal Communication of Virtual Agents on Social Pressure and Encouragement in VR, In Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24).
2024. [Download][BibSonomy]
@inproceedings{martinezpankotsch2024effects,
author = {Pascal Martinez Pankotsch and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-agent-encouragement-peer-pressure-preprint.pdf},
year = {2024},
booktitle = {Proceedings of the 31st IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '24)},
title = {Effects of Nonverbal Communication of Virtual Agents on Social Pressure and Encouragement in VR}
}
Abstract:
Our study investigated how virtual agents impact users in challenging VR environments, exploring if nonverbal animations affect social pressure, positive encouragement, and trust in 30 female participants. Despite showing signs of pressure and support during the experimental trials, we could not find significant differences in post-exposure measurements of social pressure and encouragement, interpersonal trust, and well-being. While inconclusive, the findings suggest potential, indicating the need for further research with improved animations and a larger sample size for validation.
Sebastian Oberdörfer, Sandra Birnstiel, Marc Erich Latoschik,
Influence of Virtual Shoe Formality on Gait and Cognitive Performance in a VR Walking Task, In Proceedings of the 31st IEEE Virtual Reality conference (VR '24).
2024. [Download][BibSonomy]
@inproceedings{oberdorfer2024influence,
author = {Sebastian Oberdörfer and Sandra Birnstiel and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-stroop-shoes-preprint.pdf},
year = {2024},
booktitle = {Proceedings of the 31st IEEE Virtual Reality conference (VR '24)},
title = {Influence of Virtual Shoe Formality on Gait and Cognitive Performance in a VR Walking Task}
}
Abstract:
Depending on their formality, clothes not only change one's appearance but can also influence behavior and cognitive processes. Shoes are a special aspect of an outfit. Besides coming in various degrees of formality, their structure can affect human gait. Avatars used to embody users in immersive Virtual Reality (VR) can wear any kind of clothing. According to the Proteus Effect, the appearance of a user's avatar can influence their behavior. Users change their behavior in accordance with the expected behavior of the avatar. In our study, we embody 39 participants with a generic avatar of the user's gender wearing three different pairs of shoes as a within-subjects condition. The shoes differ in degree of formality. We measure the gait during a 2-minute walking task during which participants wore the same real shoes and assess selective attention using the Stroop task. Our results show significant differences in gait between the tested virtual shoe pairs. We found small effects between the three shoe conditions with respect to selective attention. However, we found no significant differences with respect to correct items and response time in the Stroop task. Thus, our results indicate that virtual shoes are accepted by users and, although not eliciting any physical constraints, lead to changes in gait. This suggests that users not only adjust personal behavior according to the Proteus Effect, but also are affected by virtual biomechanical constraints. Also, our results suggest a potential influence of virtual clothing on cognitive performance.
Sebastian Oberdörfer, Sandra Birnstiel, Marc Erich Latoschik,
Proteus Effect or Bodily Affordance? The Influence of Virtual High-Heels on Gait Behavior, In Virtual Reality, Vol. 28(2), p. 81.
2024. [Download][BibSonomy][Doi]
@article{oberdorfer2024proteus,
author = {Sebastian Oberdörfer and Sandra Birnstiel and Marc Erich Latoschik},
journal = {Virtual Reality},
number = {2},
url = {https://rdcu.be/dCXMh},
year = {2024},
pages = {81},
volume = {28},
doi = {10.1007/s10055-024-00966-5},
title = {Proteus Effect or Bodily Affordance? The Influence of Virtual High-Heels on Gait Behavior}
}
Abstract:
Shoes are an important part of the fashion industry, stereotypically affect our self-awareness as well as external perception, and can even biomechanically modify our gait pattern. Immersive Virtual Reality (VR) enables users not only to explore virtual environments, but also to control an avatar as a proxy for themselves. These avatars can wear any kind of shoe, which might similarly affect self-awareness due to the Proteus Effect and even cause a bodily affordance to change the gait pattern. Bodily affordance describes a behavioral change in accordance with the expected constraints of the avatar a user is embodied with. In this article, we present the results of three user studies investigating potential changes in the gait pattern evoked by wearing virtual high-heels. Two user studies targeted female participants and one user study focused on male participants. The participants wore either virtual sneakers or virtual high-heels while constantly wearing sneakers or socks in reality. To measure the gait pattern, the participants walked on a treadmill that was also added to the virtual environment. We measured significant differences in stride length and in the flexion of the hips and knees at heel strike and partly at toe off. Also, participants reported walking more comfortably in the virtual sneakers in contrast to the virtual high-heels. This indicates a strong acceptance of the virtual shoes as their real shoes and hence suggests the existence of a bodily affordance. While sparking a discussion about the boundaries as well as aspects of the Proteus Effect and providing another insight into the effects of embodiment in VR, our results might also be important for researchers and developers.
Erik Wolf, Carolin Wienrich, Marc Erich Latoschik,
Towards an Altered Body Image Through the Exposure to a Modulated Self in Virtual Reality, In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 857-858.
2024. [Download][BibSonomy][Doi]
@inproceedings{wolf2024towards,
author = {Erik Wolf and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2024-ieeevr-altered-body-perception-through-modulated-self-preprint.pdf},
year = {2024},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {857-858},
doi = {10.1109/VRW62533.2024.00225},
title = {Towards an Altered Body Image Through the Exposure to a Modulated Self in Virtual Reality}
}
Abstract:
Self-exposure using modulated embodied avatars in virtual reality (VR) may support a positive body image. However, further investigation is needed to address methodological challenges and to understand the concrete effects, including their quantification. We present an iteratively refined paradigm for studying the tangible effects of exposure to a modulated self in VR. Participants perform body-centered movements in front of a virtual mirror, encountering their photorealistically personalized embodied avatar with increased, decreased, or unchanged body size. Additionally, we propose different body size estimation tasks conducted in reality and VR before and after exposure to assess participants' putative-elicited perceptual adaptations.
Christian Rack, Vivek Nair, Lukas Schach, Felix Foschum, Marcel Roth, Marc Erich Latoschik,
Navigating the Kinematic Maze: Analyzing, Standardizing and Unifying XR Motion Datasets, In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW).
2024. [Download][BibSonomy][Doi]
@inproceedings{noauthororeditor2024navigating,
author = {Christian Rack and Vivek Nair and Lukas Schach and Felix Foschum and Marcel Roth and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2024-01-Rack-Navigating_the_Kinematic_Maze.pdf},
year = {2024},
booktitle = {2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
doi = {10.1109/VRW62533.2024.00098},
title = {Navigating the Kinematic Maze: Analyzing, Standardizing and Unifying XR Motion Datasets}
}
Abstract:
This paper addresses the critical importance of standards and documentation in kinematic research, particularly within Extended Reality (XR) environments. We focus on the pivotal role of motion data, emphasizing the challenges posed by the current lack of standardized practices in XR user motion datasets. Our work involves a detailed analysis of 8 existing datasets, identifying gaps in documentation and essential specifications such as coordinate systems, rotation representations, and units of measurement. We highlight how these gaps can lead to misinterpretations and irreproducible results. Based on our findings, we propose a set of guidelines and best practices for creating and documenting motion datasets, aiming to improve their quality, usability, and reproducibility. We also created a web-based tool for visual inspection of motion recordings, further aiding in dataset evaluation and standardization. Furthermore, we introduce the XR Motion Dataset Catalogue, a collection of the analyzed datasets in a unified and aligned format. This initiative significantly streamlines access for researchers, allowing them to download partial or entire datasets with a single line of code and without the need for additional alignment efforts. Our contributions enhance dataset integrity and reliability in kinematic research, paving the way for more consistent and scientifically robust studies in this evolving field.
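As a rough, hedged illustration of the kind of machine-readable documentation such guidelines call for, the following Python sketch defines a hypothetical metadata record covering coordinate system, rotation representation, units, and sampling rate; the field names are our own assumptions and not the schema proposed in the paper.

```python
# Hypothetical, illustrative metadata record for an XR motion dataset.
# Field names are assumptions for this sketch, not the schema proposed in the paper.
from dataclasses import dataclass, asdict
import json

@dataclass
class MotionDatasetSpec:
    name: str
    coordinate_system: str        # e.g., "right-handed, y-up" vs. "left-handed, z-up"
    rotation_representation: str  # e.g., "quaternion (x, y, z, w)" or "Euler XYZ, degrees"
    position_unit: str            # e.g., "meters"
    sampling_rate_hz: float
    tracked_joints: tuple         # e.g., ("head", "left_hand", "right_hand")

spec = MotionDatasetSpec(
    name="example-xr-motions",
    coordinate_system="right-handed, y-up",
    rotation_representation="quaternion (x, y, z, w)",
    position_unit="meters",
    sampling_rate_hz=72.0,
    tracked_joints=("head", "left_hand", "right_hand"),
)

# A record like this could be shipped as JSON next to the raw recordings.
print(json.dumps(asdict(spec), indent=2))
```

Shipping such a record alongside the raw recordings would let downstream tools align coordinate systems and units automatically, which is the gap the paper's analysis identifies.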
2023
Erik Göbel, Kristof Korwisi, Andrea Bartl, Martin Hennecke, Marc Erich Latoschik,
Algorithmen erleben in VR, pp. 415--416. Bonn:
Gesellschaft für Informatik e.V.,
2023. [Download][BibSonomy][Doi]
@inproceedings{inproceedings,
author = {Erik Göbel and Kristof Korwisi and Andrea Bartl and Martin Hennecke and Marc Erich Latoschik},
url = {https://dl.gi.de/items/40798a70-3bec-43f8-8dff-827ab9d4650a},
year = {2023},
publisher = {Gesellschaft für Informatik e.V.},
address = {Bonn},
pages = {415--416},
doi = {10.18420/infos2023-046},
title = {Algorithmen erleben in VR}
}
Abstract:
Anne Lewerentz, Nico Manke, David Schantz, Juliano Sarmento Cabral, Sebastian von Mammen,
Integrating Julia Code into the Unity Game Engine to Dive into Aquatic Plant Growth., In Nuria Pelechano, Fotis Liarokapis, Ali Asadipour, Damien Rohmer, Marta Fairén, Jordi Moyés (Eds.), IMET, pp. 97-100.
Eurographics,
2023. [Download][BibSonomy]
@inproceedings{Lewerentz:2023aa,
author = {Anne Lewerentz and Nico Manke and David Schantz and Juliano Sarmento Cabral and Sebastian von Mammen},
url = {http://dblp.uni-trier.de/db/conf/imet/imet2023.html#LewerentzMSCM23},
year = {2023},
booktitle = {IMET},
editor = {Nuria Pelechano and Fotis Liarokapis and Ali Asadipour and Damien Rohmer and Marta Fairén and Jordi Moyés},
publisher = {Eurographics},
pages = {97-100},
title = {Integrating Julia Code into the Unity Game Engine to Dive into Aquatic Plant Growth.}
}
Abstract:
Andreas Müller, Stefan Müller, Tobias Brixner, Sebastian von Mammen,
Programmatic Design and Architecture of an Immersive Laser Laboratory., In IMET, pp. 45--52.
2023. [BibSonomy]
@inproceedings{Muller:2023aa,
author = {Andreas Müller and Stefan Müller and Tobias Brixner and Sebastian von Mammen},
year = {2023},
booktitle = {IMET},
pages = {45--52},
title = {Programmatic Design and Architecture of an Immersive Laser Laboratory.}
}
Abstract:
Mounsif Chetitah, Sebastian von Mammen,
Ontologically Mapping Learning Objectives to Gameplay Goals, In 2023 IEEE Conference on Games (CoG), pp. 1-4.
2023. [BibSonomy][Doi]
@inproceedings{Chetitah:2023ab,
author = {Mounsif Chetitah and Sebastian von Mammen},
year = {2023},
booktitle = {2023 IEEE Conference on Games (CoG)},
pages = {1-4},
doi = {10.1109/CoG57401.2023.10333134},
title = {Ontologically Mapping Learning Objectives to Gameplay Goals}
}
Abstract:
Anne Vetter, Johannes Büttner, Sebastian von Mammen,
Baked Burger Bash: A Serious Virtual Reality Game Informing about the Effects of Acute Cannabinoid Intoxication, In 2023 IEEE Conference on Games (CoG), pp. 1-4.
2023. [BibSonomy][Doi]
@inproceedings{Vetter:2023aa,
author = {Anne Vetter and Johannes Büttner and Sebastian von Mammen},
year = {2023},
booktitle = {2023 IEEE Conference on Games (CoG)},
pages = {1-4},
doi = {10.1109/CoG57401.2023.10333198},
title = {Baked Burger Bash: A Serious Virtual Reality Game Informing about the Effects of Acute Cannabinoid Intoxication}
}
Abstract:
This paper presents Baked Burger Bash, a serious game that aims to enhance a drug prevention program for adolescents using virtual reality (VR). The game is based on a simulation of acute cannabinoid intoxication embedded in an interactive cooking game. The player takes on the role of a food truck chef who needs to serve up the orders of passers-by. Various effects of intoxication make it hard to live up to the customers' expectations. In this paper, we present the scientific background, insights from formative playtests, and feedback from a drug prevention expert who guided the iterative development.
Tobias Brixner, Stefan Mueller, Andreas Müller, Andreas Knote, Wilhelm Schnepp, Samuel Truman, Anne Vetter, Sebastian von Mammen,
femtoPro: virtual-reality interactive training simulator of an ultrafast laser laboratory, In Applied Physics B, Vol. 129(5).
Springer Science and Business Media LLC,
2023. [Download][BibSonomy][Doi]
@article{Brixner:2023aa,
author = {Tobias Brixner and Stefan Mueller and Andreas Müller and Andreas Knote and Wilhelm Schnepp and Samuel Truman and Anne Vetter and Sebastian von Mammen},
journal = {Applied Physics B},
number = {5},
url = {https://doi.org/10.1007%2Fs00340-023-08018-7},
year = {2023},
publisher = {Springer Science and Business Media LLC},
volume = {129},
doi = {10.1007/s00340-023-08018-7},
title = {femtoPro: virtual-reality interactive training simulator of an ultrafast laser laboratory}
}
Abstract:
Sarah Hofmann, Cem Özdemir, Sebastian von Mammen,
Record, Review, Edit, Apply: A Motion Data Pipeline for Virtual Reality Development & Design, In Proceedings of the 18th International Conference on the Foundations of Digital Games.
ACM,
2023. [Download][BibSonomy][Doi]
@inproceedings{Hofmann:2023aa,
author = {Sarah Hofmann and Cem Özdemir and Sebastian von Mammen},
url = {https://doi.org/10.1145%2F3582437.3587191},
year = {2023},
booktitle = {Proceedings of the 18th International Conference on the Foundations of Digital Games},
publisher = {ACM},
doi = {10.1145/3582437.3587191},
title = {Record, Review, Edit, Apply: A Motion Data Pipeline for Virtual Reality Development \& Design}
}
Abstract:
Jan von Pichowski, Sebastian von Mammen,
Engineering Surrogate Models for Boid Systems, In Proceedings of the 2023 Artificial Life Conference, Vol. ALIFE 2023: Ghost in the Machine, p. 102. Sapporo, Japan:
MIT Press,
2023. [Download][BibSonomy]
@inproceedings{von-Pichowski:2023ab,
author = {Jan von Pichowski and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ALIFE-SurrogateBoids.pdf},
year = {2023},
booktitle = {Proceedings of the 2023 Artificial Life Conference},
publisher = {MIT Press},
address = {Sapporo, Japan},
series = {Artificial Life Conference Proceedings},
pages = {102},
volume = {ALIFE 2023: Ghost in the Machine},
title = {Engineering Surrogate Models for Boid Systems}
}
Abstract:
Amelie Hetterich, Sebastian von Mammen, Daniel Pohl,
Mitigation of Fear Triggers for Image Viewing in Virtual Reality, In Proceedings of the Eurographics Symposium on Virtual Environments (2023), pp. 35--36.
The Eurographics Association,
2023. [BibSonomy]
@inproceedings{Hetterich:2023aa,
author = {Amelie Hetterich and Sebastian von Mammen and Daniel Pohl},
year = {2023},
booktitle = {Proceedings of the Eurographics Symposium on Virtual Environments (2023)},
publisher = {The Eurographics Association},
pages = {35--36},
title = {Mitigation of Fear Triggers for Image Viewing in Virtual Reality}
}
Abstract:
Tobias Buhl, David Marcomin, Stefan Fallert, Jana Blechschmidt, Franziska Bönisch, Robert Mark, Juliano Sarmento Cabral, Sebastian von Mammen,
User-Centered Engineering of an Interactive Land Use Exploration Tool, In Soumya Dutta, Kathrin Feige, Karsten Rink, Dirk Zeckzer (Eds.), Workshop on Visualisation in Environmental Sciences (EnvirVis).
The Eurographics Association,
2023. [Download][BibSonomy][Doi]
@inproceedings{Buhl:2023aa,
author = {Tobias Buhl and David Marcomin and Stefan Fallert and Jana Blechschmidt and Franziska Bönisch and Robert Mark and Juliano Sarmento Cabral and Sebastian von Mammen},
url = {https://doi.org/10.2312/envirvis.20231109},
year = {2023},
booktitle = {Workshop on Visualisation in Environmental Sciences (EnvirVis)},
editor = {Soumya Dutta and Kathrin Feige and Karsten Rink and Dirk Zeckzer},
publisher = {The Eurographics Association},
doi = {10.2312/envirvis.20231109},
title = {User-Centered Engineering of an Interactive Land Use Exploration Tool}
}
Abstract:
Mounsif Chetitah, Julian Müller, Lorenz Deserno, Maria Waltmann, Sebastian von Mammen,
Gamification Framework for Reinforcement Learning-based Neuropsychology Experiments, In Proceedings of the 18th International Conference on the Foundations of Digital Games, pp. 1--4.
2023. [Download][BibSonomy]
@inproceedings{Chetitah:2023aa,
author = {Mounsif Chetitah and Julian Müller and Lorenz Deserno and Maria Waltmann and Sebastian von Mammen},
url = {https://dl.acm.org/doi/abs/10.1145/3582437.3587190?casa_token=O9vQbYCFrsMAAAAA:6mf9XGoc9Rrn0irReZDBlIdAPU4PtsxHAiKyO6fBhCl9GfXVM6ncba8sbT75iMHAXVPzcFWvJQBCf2Xn},
year = {2023},
booktitle = {Proceedings of the 18th International Conference on the Foundations of Digital Games},
pages = {1--4},
title = {Gamification Framework for Reinforcement Learning-based Neuropsychology Experiments}
}
Abstract:
Felix Sittner, Oliver Hartmann, Sergio Montenegro, Jan-Philipp Friese, Larissa Brübach, Marc Erich Latoschik, Carolin Wienrich,
An Update on the Virtual Mission Control Room, In 2023 Small Satellite Conference. Utah State University, Logan, UT.
2023. [Download][BibSonomy]
@proceedings{sittner2023update,
author = {Felix Sittner and Oliver Hartmann and Sergio Montenegro and Jan-Philipp Friese and Larissa Brübach and Marc Erich Latoschik and Carolin Wienrich},
url = {https://digitalcommons.usu.edu/smallsat/2023/all2023/193/},
year = {2023},
booktitle = {2023 Small Satellite Conference},
address = {Utah State University, Logan, UT},
title = {An Update on the Virtual Mission Control Room}
}
Abstract:
In 2021, we presented the Virtual Mission Control Room (VMCR) on the verge of evolving from a fun educational project into a testing ground for remote cooperative mission control. Since then, we successfully participated in ESA's 2022 campaign "New ideas to make XR a reality", which granted us additional funding to improve the VMCR software and conduct usability testing in cooperation with the Chair of Human-Computer Interaction. In this paper and the corresponding poster session, we give an update on the current state of the project, its new features, and the project structure. We explain the changes suggested by early test users and ESA to make operators feel more at home in the virtual environment. Subsequently, our project partners present their first suggestions for improvements to the VMCR as well as their plans for user testing. We conclude with lessons learned and a look ahead at our plans for the future of the project.
Carolin Wienrich, Jana Krauss, Lukas Polifke, Viktoria Horn, Arne Bürger, Marc Erich Latoschik,
Harnessing the Potential of the Metaverse to Counter its Dangers.
2023. [BibSonomy]
@misc{wienrich2023harnessing,
author = {Carolin Wienrich and Jana Krauss and Lukas Polifke and Viktoria Horn and Arne Bürger and Marc Erich Latoschik},
year = {2023},
title = {Harnessing the Potential of the Metaverse to Counter its Dangers}
}
Abstract:
Simon Seibt, Bartosz Von Rymon Lipinski, Thomas Chang, Marc Erich Latoschik,
DFM4SFM - Dense Feature Matching for Structure from Motion, In 2023 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW), pp. 1-5.
2023. [Download][BibSonomy][Doi]
@inproceedings{10328368,
author = {Simon Seibt and Bartosz Von Rymon Lipinski and Thomas Chang and Marc Erich Latoschik},
url = {https://ieeexplore.ieee.org/document/10328368},
year = {2023},
booktitle = {2023 IEEE International Conference on Image Processing Challenges and Workshops (ICIPCW)},
pages = {1-5},
doi = {10.1109/ICIPC59416.2023.10328368},
title = {DFM4SFM - Dense Feature Matching for Structure from Motion}
}
Abstract:
Structure from motion (SfM) is a fundamental task in computer vision and allows recovering the 3D structure of a stationary scene from an image set. Finding robust and accurate feature matches plays a crucial role in the early stages of SfM. In this work, we therefore propose a novel method for computing image correspondences based on dense feature matching (DFM) using homographic decomposition: The underlying pipeline provides refinement of existing matches through iterative rematching, detection of occlusions, and extrapolation of additional matches in critical image areas between image pairs. Our main contributions are improvements of DFM specifically for SfM, resulting in global refinement and global extrapolation of image correspondences between related views. Furthermore, we propose an iterative version of the Delaunay-triangulation-based outlier detection algorithm for robust processing of repeated image patterns. Through experiments, we demonstrate that the proposed method significantly improves the reconstruction accuracy.
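For intuition only: the abstract mentions a Delaunay-triangulation-based outlier detection step. The sketch below is a generic, single-pass variant of that idea, flagging matches whose displacement disagrees with their Delaunay neighbours; it is not the authors' exact iterative algorithm, and the threshold is an arbitrary assumption.

```python
# Generic single-pass sketch of Delaunay-based match filtering (illustration only,
# not the authors' iterative algorithm): a match is flagged when its displacement
# disagrees strongly with the median displacement of its Delaunay neighbours.
import numpy as np
from scipy.spatial import Delaunay

def flag_outlier_matches(pts_a, pts_b, ratio_threshold=3.0):
    """pts_a, pts_b: (N, 2) arrays of matched keypoints in image A and image B (N >= 3)."""
    displacement = pts_b - pts_a          # per-match motion vector
    tri = Delaunay(pts_a)                 # triangulate match locations in image A

    # Collect each point's Delaunay neighbours from the triangle list.
    neighbours = [set() for _ in range(len(pts_a))]
    for simplex in tri.simplices:
        for i in simplex:
            neighbours[i].update(j for j in simplex if j != i)

    outliers = np.zeros(len(pts_a), dtype=bool)
    for i, nbrs in enumerate(neighbours):
        if not nbrs:
            continue
        local = np.median(displacement[list(nbrs)], axis=0)  # neighbourhood consensus
        deviation = np.linalg.norm(displacement[i] - local)
        scale = np.linalg.norm(local) + 1e-6                 # avoid division by zero
        outliers[i] = deviation > ratio_threshold * scale    # assumed threshold
    return outliers
```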
Florian Kern, Marc Erich Latoschik,
Reality Stack I/O: A Versatile and Modular Framework for Simplifying and Unifying XR Applications and Research, In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 74-76.
2023. [Download][BibSonomy][Doi]
@inproceedings{10322199,
author = {Florian Kern and Marc Erich Latoschik},
url = {https://ieeexplore.ieee.org/document/10322199},
year = {2023},
booktitle = {2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
pages = {74-76},
doi = {10.1109/ISMAR-Adjunct60411.2023.00023},
title = {Reality Stack I/O: A Versatile and Modular Framework for Simplifying and Unifying XR Applications and Research}
}
Abstract:
This paper introduces Reality Stack I/O (RSIO), a versatile and modular framework designed to facilitate the development of extended reality (XR) applications. Researchers and developers often spend a significant amount of time enabling cross-device and cross-platform compatibility, leading to delays and increased complexity. RSIO provides the essential features to simplify and unify the development of XR applications. It enhances cross-device and cross-platform compatibility, expedites integration, and allows developers to focus more on building XR experiences rather than device integration. We offer a public Unity reference implementation with examples.
Michael F. Clements, Larissa Brübach, Jessica Glazov, Stephanie Gu, Rahila Kashif, Caroline Catmur, Alexandra L. Georgescu,
Measuring trust with the Wayfinding Task: Implementing a novel task in immersive virtual reality and desktop setups across remote and in-person test environments, In PLOS ONE.
2023. [BibSonomy]
@article{clements2023measuring,
author = {Michael F. Clements and Larissa Brübach and Jessica Glazov and Stephanie Gu and Rahila Kashif and Caroline Catmur and Alexandra L. Georgescu},
journal = {PLOS ONE},
year = {2023},
title = {Measuring trust with the Wayfinding Task: Implementing a novel task in immersive virtual reality and desktop setups across remote and in-person test environments}
}
Abstract:
Kathrin Gemesi, Nina Döllinger, Natascha-Alexandra Weinberger, Erik Wolf, David Mal, Carolin Wienrich, Claudia Luck-Sikorski, Erik Bader, Christina Holzapfel,
Auswirkung von (virtuellen) Körperbildübungen auf das Ernährungsverhalten von Personen mit Adipositas – Ergebnisse der ViTraS-Pilotstudie, In Adipositas - Ursachen, Folgeerkrankungen, Therapie, Vol. 17(03), pp. S10-05.
2023. [Download][BibSonomy][Doi]
@article{gemesi2023auswirkung,
author = {Kathrin Gemesi and Nina Döllinger and Natascha-Alexandra Weinberger and Erik Wolf and David Mal and Carolin Wienrich and Claudia Luck-Sikorski and Erik Bader and Christina Holzapfel},
journal = {Adipositas - Ursachen, Folgeerkrankungen, Therapie},
number = {03},
url = {http://www.thieme-connect.com/products/ejournals/abstract/10.1055/s-0043-1771568},
year = {2023},
pages = {S10-05},
volume = {17},
doi = {10.1055/s-0043-1771568},
title = {Auswirkung von (virtuellen) Körperbildübungen auf das Ernährungsverhalten von Personen mit Adipositas – Ergebnisse der ViTraS-Pilotstudie}
}
Abstract:
Introduction: Multimodal obesity therapy consists of components addressing nutrition, physical activity, and behavior or body image. Virtual reality (VR) applications can expand the methodological spectrum of body image therapy. In the present project, a VR system (consisting of an avatar and a virtual mirror) was developed to improve body perception and body image in people with obesity.
Methods: Within the multicenter controlled ViTraS pilot study (registration number: DRKS00027906), virtual or traditional body image exercises were applied with people with obesity. Three sessions took place approximately two weeks apart. Over time, anthropometric data as well as data on eating behavior (Dutch Eating Behavior Questionnaire (DEBQ) and a selection of questions from The Eating Motivation Survey (TEMS)) were collected. In two sessions, body image exercises were carried out virtually (VR group) or non-virtually (control group). An online follow-up data collection took place 6 weeks after the last session.
Results: A total of 66 people (VR group: 31, control group: 35) were included in the study. Participants were 79% (52/66) female, 45±13 years old, and had a mean BMI of 37±4 kg/m2. The evaluation of the DEBQ questionnaire showed that restrained eating behavior in the VR group increased significantly (p<0.05) from the beginning to the end of the study. The selected TEMS questions revealed no significant (p≥0.05) differences within or between the groups.
Conclusion: The study showed that performing virtual body image exercises can have an effect on the eating behavior of people with obesity.
Christian Rack, Lukas Schach, Marc Latoschik,
Motion Learning Toolbox – A Python library for preprocessing of XR motion tracking data for machine learning applications.
2023. [Download][BibSonomy]
@misc{rack2023motionlearningtoolbox,
author = {Christian Rack and Lukas Schach and Marc Latoschik},
url = {https://github.com/cschell/Motion-Learning-Toolbox},
year = {2023},
title = {Motion Learning Toolbox – A Python library for preprocessing of XR motion tracking data for machine learning applications}
}
Abstract:
The Motion Learning Toolbox is a Python library designed to facilitate the preprocessing of motion tracking data in extended reality (XR) setups. It's particularly useful for researchers and engineers wanting to use XR tracking data as input for machine learning models. Originally developed for academic research targeting the identification of XR users by their motions, this toolbox includes a variety of data encoding methods that enhance machine learning model performance.
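To make the kind of preprocessing concrete, here is a minimal pandas sketch of typical steps (resampling to a fixed frame rate, head-relative coordinates, velocities). The column names are assumed for illustration; this is not the Motion Learning Toolbox's actual API, which is documented in the linked repository.

```python
# Minimal sketch of typical XR motion preprocessing for machine learning.
# Column names ('timestamp', 'head_pos_*', 'right_hand_pos_*') are assumed for
# illustration; this is NOT the Motion Learning Toolbox's actual API.
import pandas as pd

def preprocess_motion(df: pd.DataFrame, target_hz: int = 30) -> pd.DataFrame:
    # Resample to a fixed frame rate so recordings from different devices align.
    df = df.set_index(pd.to_timedelta(df["timestamp"], unit="s"))
    df = df.resample(f"{1000 // target_hz}ms").mean().interpolate()

    out = pd.DataFrame(index=df.index)
    for axis in "xyz":
        # Express hand positions relative to the head to remove the global room offset.
        out[f"right_hand_rel_{axis}"] = df[f"right_hand_pos_{axis}"] - df[f"head_pos_{axis}"]
        # Velocities are often more informative features than raw positions.
        out[f"head_vel_{axis}"] = df[f"head_pos_{axis}"].diff() * target_hz
    return out.dropna()
```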
Franziska Westermeier, Larissa Brübach, Carolin Wienrich, Marc Erich Latoschik,
A Virtualized Augmented Reality Simulation for Exploring Perceptual Incongruencies, In Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA:
Association for Computing Machinery,
2023. [Download][BibSonomy][Doi]
@inproceedings{westermeier2023virtualized,
author = {Franziska Westermeier and Larissa Brübach and Carolin Wienrich and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3611659.3617227},
year = {2023},
booktitle = {Proceedings of the 29th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '23},
doi = {10.1145/3611659.3617227},
title = {A Virtualized Augmented Reality Simulation for Exploring Perceptual Incongruencies}
}
Abstract:
When blending virtual and physical content, certain incongruencies emerge from hardware limitations, inaccurate tracking, or different appearances of virtual and physical content. They restrain us from perceiving virtual and physical content as one experience. Hence, it is crucial to investigate these issues to determine how they influence our experience. We present a virtualized augmented reality simulation that can systematically examine single incongruencies or different configurations.
Sooraj K. Babu, Tobias Brandner, Samuel Truman, Sebastian von Mammen,
Investigating Crowdsourced Help Facilities for Enhancing User Guidance, In Nuria Pelechano, Fotis Liarokapis, Damien Rohmer, Ali Asadipour (Eds.), International Conference on Interactive Media, Smart Systems and Emerging Technologies (IMET).
The Eurographics Association,
2023. [Download][BibSonomy][Doi]
@inproceedings{Babu:2023aa,
author = {Sooraj K. Babu and Tobias Brandner and Samuel Truman and Sebastian von Mammen},
url = {https://diglib.eg.org/handle/10.2312/imet20231252},
year = {2023},
booktitle = {International Conference on Interactive Media, Smart Systems and Emerging Technologies (IMET)},
editor = {Nuria Pelechano and Fotis Liarokapis and Damien Rohmer and Ali Asadipour},
publisher = {The Eurographics Association},
doi = {10.2312/imet.20231252},
title = {Investigating Crowdsourced Help Facilities for Enhancing User Guidance}
}
Abstract:
We present two help facilities aimed at addressing the challenges faced by users new to a piece of software. Software documentation should ideally familiarise users with the overall functionality, guide them through concrete workflows, and point out opportunities for customisation. Large degrees of freedom in use and vast basic scopes, however, often make it infeasible to convey this information upfront. Rather, it needs to be broken into helpful bits and pieces that are delivered at appropriate times. To address this challenge, we have designed two concrete help facilities, i.e., tooltips and tips-of-the-day, that feature crowdsourced information. In this paper, we present the results of a formative study to implement early proofs-of-concept for the open-source game authoring platform Godot. To support further research, we have made the code base publicly available on GitHub.
Nina Döllinger, Matthias Beck, Erik Wolf, David Mal, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
“If It’s Not Me It Doesn’t Make a Difference” – The Impact of Avatar Personalization on User Experience and Body Awareness in Virtual Reality, In 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 483-492.
2023. [Download][BibSonomy][Doi]
@inproceedings{dollinger2023doesnt,
author = {Nina Döllinger and Matthias Beck and Erik Wolf and David Mal and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ismar-impact-of-avatar-appearance-preprint.pdf},
year = {2023},
booktitle = {2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
pages = {483-492},
doi = {10.1109/ISMAR59233.2023.00063},
title = {“If It’s Not Me It Doesn’t Make a Difference” – The Impact of Avatar Personalization on User Experience and Body Awareness in Virtual Reality}
}
Abstract:
Body awareness is relevant for the efficacy of psychotherapy. However, previous work on virtual reality (VR) and avatar-assisted therapy has often overlooked it. We investigated the effect of avatar individualization on body awareness in the context of VR-specific user experience, including sense of embodiment (SoE), plausibility, and sense of presence (SoP). In a between-subject design, 86 participants embodied three avatar types and engaged in VR movement exercises. The avatars were (1) generic and gender-matched, (2) customized from a set of pre-existing options, or (3) personalized photorealistic scans. Compared to the other conditions, participants with personalized avatars reported increased SoE, yet higher eeriness and reduced body awareness. Further, SoE and SoP positively correlated with body awareness across conditions. Our results indicate that VR user experience and body awareness do not always dovetail and do not necessarily predict each other. Future research should work towards a balance between body awareness and SoE.
J. Bruschke, C. Kröber, F. Maiwald, R. Utescher, A. Pattee,
Introducing a Multimodal Dataset for the Research of Architectural Elements, In The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLVIII-M-2-2023, pp. 325--331.
Copernicus GmbH,
2023. [Download][BibSonomy][Doi]
@article{Bruschke_archilabel_2023,
author = {J. Bruschke and C. Kröber and F. Maiwald and R. Utescher and A. Pattee},
journal = {The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
url = {https://doi.org/10.5194%2Fisprs-archives-xlviii-m-2-2023-325-2023},
year = {2023},
publisher = {Copernicus GmbH},
pages = {325--331},
volume = {XLVIII-M-2-2023},
doi = {10.5194/isprs-archives-xlviii-m-2-2023-325-2023},
title = {Introducing a Multimodal Dataset for the Research of Architectural Elements}
}
Abstract:
Jonas Bruschke, Cindy Kröber, Ronja Utescher, Florian Niebling,
Towards Querying Multimodal Annotations Using Graphs, In Sander Münster, Aaron Pattee, Cindy Kröber, Florian Niebling (Eds.), Research and Education in Urban History in the Age of Digital Libraries, pp. 65--87. Cham:
Springer Nature Switzerland,
2023. [BibSonomy][Doi]
@inproceedings{10.1007/978-3-031-38871-2_5,
author = {Jonas Bruschke and Cindy Kröber and Ronja Utescher and Florian Niebling},
year = {2023},
booktitle = {Research and Education in Urban History in the Age of Digital Libraries},
editor = {Sander Münster and Aaron Pattee and Cindy Kröber and Florian Niebling},
publisher = {Springer Nature Switzerland},
address = {Cham},
pages = {65--87},
doi = {10.1007/978-3-031-38871-2_5},
title = {Towards Querying Multimodal Annotations Using Graphs}
}
Abstract:
Photographs and 3D reconstructions of buildings as well as textual information and documents play an important role in art history and architectural studies when it comes to investigating architecture, the construction history of buildings, and the impact these constructions had on a city. Advanced tools have the potential to enhance and support research workflows and source criticism by linking corresponding materials and annotations, such that relevant data can be quickly queried and identified. Images are a primary source in the 3D reconstruction process, with the possibility to create spatializations of additional photographs of buildings which were not part of the initial SfM process, enabling the linking of annotations between these photographs and the respective 3D model. In contrast, identifying and locating respective annotations in text sources requires a different approach due to their more abstract nature. This paper presents concepts for automatic linking of texts and their respective annotations to corresponding images, as well as to 3D models and their annotations. Controlled vocabularies for architectural elements and a graph representation are utilized to reduce ambiguity when querying related instances.
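To make the graph-based linking described in the abstract above more concrete, a minimal sketch follows; it uses networkx, and all node names, attributes, and the example vocabulary term are hypothetical illustrations rather than the paper's actual data model or query language.
# Hypothetical sketch: linking text, image, and 3D-model annotations in a graph
# and querying related instances via a controlled-vocabulary term. Node and
# attribute names are assumptions for illustration only.
import networkx as nx

g = nx.Graph()
g.add_node("text:annotation_7", kind="text_annotation", term="portal")   # term from a controlled vocabulary
g.add_node("image:photo_42", kind="photograph")
g.add_node("image:annotation_3", kind="image_annotation", term="portal")
g.add_node("model:building_A", kind="3d_model")

g.add_edge("image:annotation_3", "image:photo_42", relation="annotates")
g.add_edge("image:photo_42", "model:building_A", relation="spatialized_in")
g.add_edge("text:annotation_7", "image:annotation_3", relation="same_term")

# Query: all annotations carrying the vocabulary term "portal" and the
# materials reachable from them within two hops.
hits = [n for n, d in g.nodes(data=True) if d.get("term") == "portal"]
for n in hits:
    related = nx.single_source_shortest_path_length(g, n, cutoff=2)
    print(n, "->", sorted(k for k in related if k != n))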
Maximilian Landeck, Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
From Clocks to Pendulums: A Study on the Influence of External Moving Objects on Time Perception in Virtual Environments, In The 29th ACM Symposium on Virtual Reality Software and Technology (VRST), Vol. 29th, p. 11.
2023. [Download][BibSonomy][Doi]
@article{noauthororeditor,
author = {Maximilian Landeck and Fabian Unruh and Jean-Luc Lugrin and Marc Erich Latoschik},
journal = {The 29th ACM Symposium on Virtual Reality Software and Technology (VRST)},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023_vrst_conference_influence_moving_objects_on_time_perception__preprint_version_1.pdf},
year = {2023},
pages = {11},
volume = {29th},
doi = {10.1145/3611659.3615703},
title = {From Clocks to Pendulums: A Study on the Influence of External Moving Objects on Time Perception in Virtual Environments}
}
Abstract:
This paper investigates the relationship between perceived object motion and the experience of time in virtual environments. We developed an application to measure how the motion properties of virtual objects and the degree of immersion and embodiment may affect the time experience. A first study (n = 145) was conducted remotely using an online video survey, while a second study (n = 60) was conducted under laboratory conditions in virtual reality (VR). Participants in both studies experienced seven different virtual objects in a randomized order and then answered questions about time experience. The VR study added an "embodiment" condition in which participants were either represented by a virtual full body or lacked any form of virtual body representation.
In both studies, time was judged to pass faster when viewing oscillating motion in immersive and non-immersive settings and independently of the presence or absence of a virtual body. This trend was strongest when virtual pendulums were displayed. Both studies also found a significant inverse correlation between the passage of time and boredom. Our results support the development of applications that manipulate the perception of time in virtual environments for therapeutic use, for instance, for disorders such as depression, autism, and schizophrenia. Disturbances in the perception of time are known to be associated with these disorders.
Larissa Brübach, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
A Systematic Evaluation of Incongruencies and Their Influence on Plausibility in Virtual Reality, In 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 894-901.
IEEE Computer Society,
2023. [Download][BibSonomy][Doi]
@inproceedings{brubach2023systematic,
author = {Larissa Brübach and Franziska Westermeier and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ismar-bruebach-a-systematic-evaluation-of-incongruencies-preprint.pdf},
year = {2023},
booktitle = {2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
publisher = {IEEE Computer Society},
series = {ISMAR '23},
pages = {894-901},
doi = {10.1109/ISMAR59233.2023.00105},
title = {A Systematic Evaluation of Incongruencies and Their Influence on Plausibility in Virtual Reality}
}
Abstract:
Currently, there is an ongoing debate about the influencing factors of one's extended reality (XR) experience. Plausibility, congruence, and their role have recently gained more and more attention. One of the latest models to describe XR experiences, the Congruence and Plausibility model (CaP), puts plausibility and congruence right in the center. However, it is unclear what influence they have on the overall XR experience and what influences our perceived plausibility rating. In this paper, we implemented four different incongruencies within a virtual reality scene using breaks in plausibility as an analogy to breaks in presence. These manipulations were either located on the cognitive or perceptual layer of the CaP model. They were also either connected to the task at hand or not. We tested these manipulations in a virtual bowling environment to see which influence they had. Our results show that manipulations connected to the task caused a lower perceived plausibility. Additionally, cognitive manipulations seem to have a larger influence than perceptual manipulations. We were able to cause a break in plausibility with one of our incongruencies. These results show a first direction on how the influence of plausibility in XR can be systematically investigated in the future.
Christian Rack, Tamara Fernando, Murat Yalcin, Andreas Hotho, Marc Erich Latoschik,
Who Is Alyx? A new Behavioral Biometric Dataset for User Identification in XR, In David Swapp (Eds.), Frontiers in Virtual Reality, Vol. 4.
2023. [Download][BibSonomy][Doi]
@article{rack2023behavioral,
author = {Christian Rack and Tamara Fernando and Murat Yalcin and Andreas Hotho and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/journals/virtual-reality/articles/10.3389/frvir.2023.1272234/full},
year = {2023},
editor = {David Swapp},
volume = {4},
doi = {10.3389/frvir.2023.1272234},
title = {Who Is Alyx? A new Behavioral Biometric Dataset for User Identification in XR}
}
Abstract:
This article presents a new dataset containing motion and physiological data of users playing the game 'Half-Life: Alyx'. The dataset specifically targets behavioral and biometric identification of XR users. It includes motion and eye-tracking data, captured with an HTC Vive Pro, of 71 users playing the game for 45 minutes on two separate days. Additionally, we collected physiological data from 31 of these users. We provide benchmark performances for the task of motion-based identification of XR users with two prominent state-of-the-art deep learning architectures (GRU and CNN). After training on the first session of each user, the best model can identify the 71 users in the second session with a mean accuracy of 95% within 2 minutes. The dataset is freely available at https://github.com/cschell/who-is-alyx.
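As a rough, hedged sketch of the kind of benchmark model mentioned above (a GRU classifier over fixed-length motion windows): the feature count, window length, and training details below are assumptions for illustration, not the configuration used by the authors.
# Hypothetical sketch of motion-based user identification with a GRU.
# Input: windows of tracked motion features (e.g., HMD/controller poses);
# output: one of 71 user identities. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class MotionGRU(nn.Module):
    def __init__(self, n_features=21, n_users=71, hidden=128):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_users)

    def forward(self, x):                  # x: (batch, time, n_features)
        _, h = self.gru(x)                 # h: (1, batch, hidden)
        return self.head(h[-1])            # logits per user

model = MotionGRU()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One dummy training step on random data, just to show the training loop shape.
x = torch.randn(32, 300, 21)               # 32 windows, 300 time steps each
y = torch.randint(0, 71, (32,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()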
Maximilian Landeck, Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
Time Perception Research in Virtual Reality: Lessons Learned, In Mensch und Computer 2023.
Published by the Gesellschaft für Informatik e.V., P. Fröhlich & V. Cobus (Eds.),
2023. [Download][BibSonomy][Doi]
@inproceedings{landeck2023perception,
author = {Maximilian Landeck and Fabian Unruh and Jean-Luc Lugrin and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/virtual-times/publications/2023_MUC_Landeck_TimePerceptionInVR.pdf},
year = {2023},
booktitle = {Mensch und Computer 2023},
publisher = {Published by the Gesellschaft für Informatik e.V., P. Fröhlich & V. Cobus (Eds.)},
doi = {10.18420/muc2023-mci-ws05-442},
title = {Time Perception Research in Virtual Reality: Lessons Learned}
}
Abstract:
In this article, we present a selection of recent studies from our research group that investigated the relationship between time perception and virtual reality (VR). We focus on the influence of avatar embodiment, visual fidelity, motion perception, and body representation. We summarize findings on the impact of these factors on time perception, discuss lessons learned, and implications for future applications.
In a waiting room experiment, the passage of time in VR with an avatar was perceived significantly faster than without an avatar. The passage of time in the real waiting room was not perceived as significantly different from the waiting room in VR with or without an avatar.
In an interactive scenario, the absence of a virtual avatar resulted in a significantly slower perceived passage of time compared to the partial and full-body avatar conditions. High and medium embodiment conditions are assumed to be more plausible and to differ less from a real experience.
A virtual tunnel that induced the illusion of self-motion (vection) appeared to contribute to the perceived passage of time and experience of time. This effect was shown to increase with tunnel speed and the number of tunnel segments.
A framework was proposed for the use of virtual zeitgebers along three dimensions (speed, density, synchronicity) to systematically control the experience of time. Both the body itself and external objects appear to be covered by this theory of virtual zeitgebers.
Finally, the standardization of the methodology and future research considerations are discussed.
Martin Mišiak, Tom Müller, Arnulph Fuhrmann, Marc Erich Latoschik,
An Evaluation of Dichoptic Tonemapping in Virtual Reality Experiences, In GI VR/AR Workshop.
2023. [Download][BibSonomy]
@inproceedings{misiak2023evaluation,
author = {Martin Mišiak and Tom Müller and Arnulph Fuhrmann and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-givrar-dichoptic-tonemapping.pdf},
year = {2023},
booktitle = {GI VR/AR Workshop},
title = {An Evaluation of Dichoptic Tonemapping in Virtual Reality Experiences}
}
Abstract:
Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
The Impact of Reflection Approximations on Visual Quality in Virtual Reality, In ACM Symposium on Applied Perception 2023 (SAP '23), August, pp. 1--11.
ACM,
2023. [Download][BibSonomy]
@inproceedings{misiak2023reflectionApprox,
author = {Martin Mišiak and Arnulph Fuhrmann and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3605495.3605794},
year = {2023},
booktitle = {ACM Symposium on Applied Perception 2023 (SAP '23), August},
publisher = {ACM},
pages = {1--11},
title = {The Impact of Reflection Approximations on Visual Quality in Virtual Reality}
}
Abstract:
Philipp Krop, Sebastian Oberdörfer, Marc Erich Latoschik,
Traversing the Pass: Improving the Knowledge Retention of Serious Games Using a Pedagogical Agent, In Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents. Würzburg, Germany:
Association for Computing Machinery,
2023. [Download][BibSonomy][Doi]
@inproceedings{krop2023traversing,
author = {Philipp Krop and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-iva-traversing-the-pass.pdf},
year = {2023},
booktitle = {Proceedings of the 23rd ACM International Conference on Intelligent Virtual Agents},
publisher = {Association for Computing Machinery},
address = {Würzburg, Germany},
doi = {10.1145/3570945.3607360},
title = {Traversing the Pass: Improving the Knowledge Retention of Serious Games Using a Pedagogical Agent}
}
Abstract:
Machine learning is an essential aspect of modern life that many educational institutions incorporate into their curricula. Often, students struggle to grasp how neural networks learn. Teaching these concepts could be assisted with pedagogical agents and serious games, which both have proven helpful for complex topics like engineering. We present "Traversing the Pass," a serious game that utilizes a mentor-like agent to explain the underlying machine learning concepts and provides feedback. We optimized the agent’s design in a pre-study before evaluating its effectiveness compared to a text-only user interface with experts and students. Participants performed better in a second assessment two weeks later if they played the game using the agent. Although criticized as repetitive, the game created an understanding of basic machine learning concepts and achieved high flow values. Our results indicate that agents could be used to enhance the beneficial effects of serious games with improved knowledge retention.
Jinghuai Lin, Johrine Cronjé, Carolin Wienrich, Paul Pauli, Marc Erich Latoschik,
Visual Indicators Representing Avatars' Authenticity in Social Virtual Reality and Their Impacts on Perceived Trustworthiness, In IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 29(11), pp. 4589-4599.
2023. [Download][BibSonomy][Doi]
@article{lin2023visual,
author = {Jinghuai Lin and Johrine Cronjé and Carolin Wienrich and Paul Pauli and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
number = {11},
url = {https://ieeexplore.ieee.org/document/10269746},
year = {2023},
pages = {4589-4599},
volume = {29},
doi = {10.1109/TVCG.2023.3320234},
title = {Visual Indicators Representing Avatars' Authenticity in Social Virtual Reality and Their Impacts on Perceived Trustworthiness}
}
Abstract:
Astrid Carolus, Martin J. Koch, Samantha Straka, Marc Erich Latoschik, Carolin Wienrich,
MAILS - Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies, In Computers in Human Behavior: Artificial Humans, Vol. 1(2), p. 100014.
2023. [Download][BibSonomy][Doi]
@article{carolus2023mails,
author = {Astrid Carolus and Martin J. Koch and Samantha Straka and Marc Erich Latoschik and Carolin Wienrich},
journal = {Computers in Human Behavior: Artificial Humans},
number = {2},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Carolus-and-Kocch-(2023)-MAILS.pdf},
year = {2023},
pages = {100014},
volume = {1},
doi = {10.1016/j.chbah.2023.100014},
title = {MAILS - Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies}
}
Abstract:
Valid measurement of AI literacy is important for the selection of personnel, identification of shortages in skill and knowledge, and evaluation of AI literacy interventions. A questionnaire is missing that is deeply grounded in the existing literature on AI literacy (AIL), is modularly applicable depending on the goals, and includes further psychological competencies in addition to the typical facets of AIL. This paper presents the development and validation of a questionnaire considering the desiderata described above. We derived items to represent different facets of AI literacy and psychological competencies, such as problem-solving, learning, and emotion regulation in regard to AI. We collected data from 300 German-speaking adults to confirm the factorial structure. The result is the Meta AI Literacy Scale (MAILS) for AI literacy with the facets Use & apply AI, Understand AI, Detect AI, and AI Ethics; the ability to Create AI as a separate construct; and AI Self-efficacy in learning and problem-solving and AI Self-management (i.e., AI persuasion literacy and emotion regulation). This study contributes to the research on AI literacy by providing a measurement instrument relying on profound competency models. Psychological competencies are included, which are particularly important in the context of pervasive change through AI systems.
Yann Glémarec, Jean-Luc Lugrin, Amelie Hörmann, Anne-Gwenn Bosser, Cédric Buche, Marc Erich Latoschik, Norina Lauer,
Towards Virtual Audience Simulation For Speech Therapy, In Proceedings of the 23rd International Conference on Intelligent Virtual Agents (IVA).
2023. [Download][BibSonomy]
@inproceedings{noauthororeditor2023towards,
author = {Yann Glémarec and Jean-Luc Lugrin and Amelie Hörmann and Anne-Gwenn Bosser and Cédric Buche and Marc Erich Latoschik and Norina Lauer},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-iva-towards-virtual-audience-simulation-for-speech-therapy-preprint.pdf},
year = {2023},
booktitle = {Proceedings of the 23rd International Conference on Intelligent Virtual Agents (IVA)},
title = {Towards Virtual Audience Simulation For Speech Therapy}
}
Abstract:
Jean-Luc Lugrin, Jessica Topel, Yann Glémarec, Birgit Lugrin, Marc Erich Latoschik,
Posture Parameters for Personality-Enhanced Virtual Audiences, In Proceedings of the 23rd International Conference on Intelligent Virtual Agents (IVA).
2023. [Download][BibSonomy]
@conference{lugrin2023posture,
author = {Jean-Luc Lugrin and Jessica Topel and Yann Glémarec and Birgit Lugrin and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-iva-lugrin-posture-rarameters-for-personality-enhanced-virtual-audiences.pdf},
year = {2023},
booktitle = {Proceedings of the 23rd International Conference on Intelligent Virtual Agents (IVA)},
title = {Posture Parameters for Personality-Enhanced Virtual Audiences}
}
Abstract:
Mounsif Chetitah, Julian Müller, Lorenz Deserno, Maria Waltmann, Sebastian von Mammen,
Gamification Framework for Reinforcement Learning-based Neuropsychology Experiments, In Proceedings of the 18th International Conference on the Foundations of Digital Games, pp. 1--4.
2023. [Download][BibSonomy]
@inproceedings{chetitah2023gamification,
author = {Mounsif Chetitah and Julian Müller and Lorenz Deserno and Maria Waltmann and Sebastian von Mammen},
url = {https://dl.acm.org/doi/abs/10.1145/3582437.3587190?casa_token=O9vQbYCFrsMAAAAA:6mf9XGoc9Rrn0irReZDBlIdAPU4PtsxHAiKyO6fBhCl9GfXVM6ncba8sbT75iMHAXVPzcFWvJQBCf2Xn},
year = {2023},
booktitle = {Proceedings of the 18th International Conference on the Foundations of Digital Games},
pages = {1--4},
title = {Gamification Framework for Reinforcement Learning-based Neuropsychology Experiments}
}
Abstract:
Reinforcement learning (RL) is an adaptive process in which an agent relies on its experience to improve the outcome of its performance. It learns by taking actions to maximize its rewards and by minimizing the gap between predicted and received rewards. In experimental neuropsychology, RL algorithms are used as a conceptual basis to account for several aspects of human motivation and cognition. A number of neuropsychological experiments, such as reversal learning, sequential decision-making, and go/no-go tasks, are required to validate the respective RL algorithms. These experiments are conducted in digital environments and comprise numerous trials, which can lead to participants' frustration and fatigue. This paper presents a gamification framework for RL-based neuropsychology experiments that aims to increase participant engagement and provide them with appropriate testing environments.
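The "gap between predicted and received rewards" referred to above is the reward prediction error driving such RL models; a minimal sketch of the standard delta-rule value update is given below, with the learning rate and reward sequence chosen arbitrarily for illustration (this is not the framework's own code).
# Hypothetical sketch of the delta-rule value update used in many
# RL-based neuropsychology models: V <- V + alpha * (reward - V).
def update_value(value, reward, alpha=0.1):
    prediction_error = reward - value      # gap between received and predicted reward
    return value + alpha * prediction_error

# Example: value estimate of one option across a few trials of a two-armed task.
value = 0.0
for reward in [1, 0, 1, 1, 0]:             # observed rewards
    value = update_value(value, reward)
print(round(value, 3))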
Ferdinand Maiwald, Jonas Bruschke, Danilo Schneider, Markus Wacker, Florian Niebling,
Giving Historical Photographs a New Perspective: Introducing Camera Orientation Parameters as New Metadata in a Large-Scale 4D Application, In Remote Sensing, Vol. 15(7), p. 1879.
2023. [Download][BibSonomy][Doi]
@article{maiwald2023giving,
author = {Ferdinand Maiwald and Jonas Bruschke and Danilo Schneider and Markus Wacker and Florian Niebling},
journal = {Remote Sensing},
number = {7},
url = {https://www.mdpi.com/2072-4292/15/7/1879},
year = {2023},
pages = {1879},
volume = {15},
doi = {10.3390/rs15071879},
title = {Giving Historical Photographs a New Perspective: Introducing Camera Orientation Parameters as New Metadata in a Large-Scale 4D Application}
}
Abstract:
Sooraj K Babu, Tim Schneider, Sebastian von Mammen,
User-centred Design and Development of a Graphical User Interface for Learning Classifier Systems, In GECCO '23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation, pp. 1830--1837. Lisbon, Portugal:
ACM,
2023. [Download][BibSonomy][Doi]
@conference{Babu:2023ab,
author = {Sooraj K Babu and Tim Schneider and Sebastian von Mammen},
url = {https://dl.acm.org/doi/10.1145/3583133.3596357},
year = {2023},
booktitle = {GECCO '23 Companion: Proceedings of the Companion Conference on Genetic and Evolutionary Computation},
publisher = {ACM},
address = {Lisbon, Portugal},
pages = {1830--1837},
doi = {10.1145/3583133.3596357},
title = {User-centred Design and Development of a Graphical User Interface for Learning Classifier Systems}
}
Abstract:
This study presents an application that offers an interactive representation of the learning cycle of a learning classifier system (LCS), a rule-based machine learning technique. The research approach utilized a combination of human-computer interaction requirements, design heuristics, and user-centred engineering to scaffold the development of a graphical user interface for an LCS. The evaluation of the application by LCS experts and a novice yielded encouraging results, demonstrating its ability to convey the LCS mechanics to the users. In addition, the experts validated the correctness of the learning cycle. The study's contribution is two-fold: it presents an innovative approach to making the user understand the underlying mechanics of LCS and validates the effectiveness of the developed application. Furthermore, this work sets the stage for future research towards design revision and to further develop the application to accommodate different LCS models and custom data.
Jan von Pichowski, Sebastian von Mammen,
Engineering Surrogate Models for Boid Systems, In Proceedings of ALIFE 2023, pp. 102-110. Sapporo, Japan:
MIT Press,
2023. [Download][BibSonomy]
@inproceedings{von-Pichowski:2023aa,
author = {Jan von Pichowski and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ALIFE-SurrogateBoids.pdf},
year = {2023},
booktitle = {Proceedings of ALIFE 2023},
publisher = {MIT Press},
address = {Sapporo, Japan},
pages = {102-110},
title = {Engineering Surrogate Models for Boid Systems}
}
Abstract:
Daniel Götz, Sebastian von Mammen,
Modulith: A Game Engine Made for Modding, In Proceedings of the 18th International Conference on the Foundations of Digital Games. New York, NY, USA:
Association for Computing Machinery,
2023. [Download][BibSonomy][Doi]
@inproceedings{Gotz:2023aa,
author = {Daniel Götz and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-FDG-Modulith.pdf},
year = {2023},
booktitle = {Proceedings of the 18th International Conference on the Foundations of Digital Games},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {FDG '23},
doi = {10.1145/3582437.3582486},
title = {Modulith: A Game Engine Made for Modding}
}
Abstract:
A modular game engine facilitates development of game technologies and game contents. The latter includes content creation by the player in the form of modding. Based on a requirements analysis to meet the needs of developers, gamers and modders, we present a novel concept for a fully modular game engine. This concept includes a module system, which allows additional code to be loaded at runtime, and various support systems that simplify the use of the engine and the creation of modules. We present a proof-of-concept implementation, underline its fulfillment of the unearthed requirements, and demonstrate its use.
Sarah Hofmann, Cem Özdemir, Sebastian von Mammen,
Record, Review, Edit, Apply: A Motion Data Pipeline for Virtual Reality Development & Design, In Proceedings of the 18th International Conference on the Foundations of Digital Games.
ACM,
2023. [Download][BibSonomy][Doi]
@inproceedings{Hofmann_2023,
author = {Sarah Hofmann and Cem Özdemir and Sebastian von Mammen},
url = {https://doi.org/10.1145%2F3582437.3587191},
year = {2023},
booktitle = {Proceedings of the 18th International Conference on the Foundations of Digital Games},
publisher = {ACM},
doi = {10.1145/3582437.3587191},
title = {Record, Review, Edit, Apply: A Motion Data Pipeline for Virtual Reality Development & Design}
}
Abstract:
Tobias Mühling, Isabelle Späth, Joy Backhaus, Nathalie Milke, Sebastian Oberdörfer, Alexander Meining, Marc Erich Latoschik, Sarah König,
Virtual reality in medical emergencies training: benefits, perceived stress, and learning success, In Multimedia Systems, Vol. 29(4), p. 2239–2252.
Springer Nature,
2023. [BibSonomy][Doi]
@article{muhling:2023,
author = {Tobias Mühling and Isabelle Späth and Joy Backhaus and Nathalie Milke and Sebastian Oberdörfer and Alexander Meining and Marc Erich Latoschik and Sarah König},
journal = {Multimedia Systems},
number = {4},
year = {2023},
publisher = {Springer Nature},
pages = {2239–2252},
volume = {29},
doi = {10.21203/rs.3.rs-2197674/v2},
title = {Virtual reality in medical emergencies training: benefits, perceived stress, and learning success}
}
Abstract:
Medical graduates lack procedural skills experience required to manage emergencies. Recent advances in virtual reality (VR) technology enable the creation of highly immersive learning environments representing easy-to-use and affordable solutions for training with simulation. However, the feasibility in compulsory teaching, possible side effects of immersion, perceived stress, and didactic benefits have to be investigated systematically. VR-based training sessions using head-mounted displays alongside a real-time dynamic physiology system were held by student assistants for small groups followed by debriefing with a tutor. In the pilot study, 36 students rated simulation sickness. In the main study, 97 students completed a virtual scenario as active participants (AP) and 130 students as observers (OBS) from the first-person perspective on a monitor. Participants completed questionnaires for evaluation purposes and exploratory factor analysis was performed on the items. The extent of simulation sickness remained low to acceptable among participants of the pilot study. In the main study, students valued the realistic environment and guided practical exercise. AP perceived the degree of immersion as well as the estimated learning success to be greater than OBS and proved to be more motivated post training. With respect to AP, the factor “sense of control” revealed a typical inverse U-shaped relationship to the scales “didactic value” and “individual learning benefit”. Summing up, curricular implementation of highly immersive VR-based training of emergencies proved feasible and found a high degree of acceptance among medical students. This study also provides insights into how different conceptions of perceived stress distinctively moderate subjective learning success.
David Fernes, Sebastian Oberdörfer, Marc Erich Latoschik,
Work, Trade, Learn: Developing an Immersive Serious Game for History Education, In 2023 9th International Conference of the Immersive Learning Research Network (iLRN).
2023. [Download][BibSonomy]
@inproceedings{fernes2023trade,
author = {David Fernes and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ilrn-worklearntrade-preprint.pdf},
year = {2023},
booktitle = {2023 9th International Conference of the Immersive Learning Research Network (iLRN)},
title = {Work, Trade, Learn: Developing an Immersive Serious Game for History Education}
}
Abstract:
History education often struggles with a lack of interest from students. Serious games can help make learning about history more engaging. Students can directly experience situations of the past as well as interact and communicate with agents representing people of the respective era. This allows for situated learning. Besides using computer screens, the gameplay can also be experienced using immersive Virtual Reality (VR). VR adds an additional spatial level and can further increase the engagement as well as vividness. To investigate the benefits of using VR for serious games targeting the learning of history, we developed a serious game for desktop-3D and VR. Our serious game puts a player into the role of a medieval miller’s apprentice. Following a situated learning approach, the learner operates a mill and interacts with several other characters. These agents discuss relevant facts of the medieval life, thus enabling the construction of knowledge about the life in medieval towns. An evaluation showed that the game in general was successful in increasing the user’s knowledge about the covered topics as well as their topic interest in history. Whether the immersive VR or the simple desktop version of the application was used did not have any influence on these results. Additional feedback was gathered to improve the game further in the future.
Sebastian Oberdörfer, Anne Elsässer, Silke Grafe, Marc Erich Latoschik,
Superfrog: Comparing Learning Outcomes and Potentials of a Worksheet, Smartphone, and Tangible AR Learning Environment, In 2023 9th International Conference of the Immersive Learning Research Network (iLRN).
2023. [Download][BibSonomy]
@inproceedings{oberdorfer2023comparing,
author = {Sebastian Oberdörfer and Anne Elsässer and Silke Grafe and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ilrn-horst-learning-preprint.pdf},
year = {2023},
booktitle = {2023 9th International Conference of the Immersive Learning Research Network (iLRN)},
title = {Superfrog: Comparing Learning Outcomes and Potentials of a Worksheet, Smartphone, and Tangible AR Learning Environment}
}
Abstract:
The widespread availability of smartphones facilitates the integration of digital, augmented reality (AR), and tangible augmented reality (TAR) learning environments into the classroom. A haptic aspect can enhance the user’s overall experience during a learning process. To investigate further benefits of using TAR for educational purposes, we compare a TAR and a smartphone learning environment with a traditional worksheet counterpart in terms of learning effectiveness, emotions, motivation, and cognitive load. 64 sixth-grade students from a German high school used one of the three conditions to learn about frog anatomy. We found no significant differences in learning effectiveness and cognitive load. The TAR condition elicited significantly higher positive emotions than the worksheet, but not the smartphone condition. Both digital learning environments elicited significantly higher motivation, in contrast to the worksheet. Thus, our results suggest that smartphone and TAR learning environments are equally beneficial for enhancing learning.
Tobias Mühling, Isabelle Späth, Joy Backhaus, Nathalie Milke, Sebastian Oberdörfer, Alexander Meining, Marc Erich Latoschik, Sarah König,
Virtual reality in medical emergencies training: benefits, perceived stress, and learning success, In Multimedia System.
2023. [Download][BibSonomy][Doi]
@article{muhling2023virtual,
author = {Tobias Mühling and Isabelle Späth and Joy Backhaus and Nathalie Milke and Sebastian Oberdörfer and Alexander Meining and Marc Erich Latoschik and Sarah König},
journal = {Multimedia System},
url = {https://link.springer.com/article/10.1007/s00530-023-01102-0},
year = {2023},
doi = {10.1007/s00530-023-01102-0},
title = {Virtual reality in medical emergencies training: benefits, perceived stress, and learning success}
}
Abstract:
Medical graduates lack procedural skills experience required to manage emergencies. Recent advances in virtual reality (VR) technology enable the creation of highly immersive learning environments representing easy-to-use and affordable solutions for training with simulation. However, the feasibility in compulsory teaching, possible side effects of immersion, perceived stress, and didactic benefits have to be investigated systematically. VR-based training sessions using head-mounted displays alongside a real-time dynamic physiology system were held by student assistants for small groups followed by debriefing with a tutor. In the pilot study, 36 students rated simulation sickness. In the main study, 97 students completed a virtual scenario as active participants (AP) and 130 students as observers (OBS) from the first-person perspective on a monitor. Participants completed questionnaires for evaluation purposes and exploratory factor analysis was performed on the items. The extent of simulation sickness remained low to acceptable among participants of the pilot study. In the main study, students valued the realistic environment and guided practical exercise. AP perceived the degree of immersion as well as the estimated learning success to be greater than OBS and proved to be more motivated post training. With respect to AP, the factor “sense of control” revealed a typical inverse U-shaped relationship to the scales “didactic value” and “individual learning benefit”. Summing up, curricular implementation of highly immersive VR-based training of emergencies proved feasible and found a high degree of acceptance among medical students. This study also provides insights into how different conceptions of perceived stress distinctively moderate subjective learning success.
Florian Kern, Jonathan Tschanter, Marc Erich Latoschik,
Virtual-to-Physical Surface Alignment and Refinement Techniques for Handwriting, Sketching, and Selection in XR, In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 502-506.
2023. [Download][BibSonomy][Doi]
@inproceedings{10108564,
author = {Florian Kern and Jonathan Tschanter and Marc Erich Latoschik},
url = {https://ieeexplore.ieee.org/document/10108564/},
year = {2023},
booktitle = {2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {502-506},
doi = {10.1109/VRW58643.2023.00109},
title = {Virtual-to-Physical Surface Alignment and Refinement Techniques for Handwriting, Sketching, and Selection in XR}
}
Abstract:
The alignment of virtual to physical surfaces is essential to improve symbolic input and selection in XR. Previous techniques optimized for efficiency can lead to inaccuracies. We investigate regression-based refinement techniques and introduce a surface accuracy evaluation. The results reveal that refinement techniques can substantially improve surface accuracy and show that accuracy depends on the gesture shape and surface dimension. Our reference implementation and dataset are publicly available.
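As one possible illustration of regression-based surface refinement, and not necessarily the technique evaluated in the paper, a virtual surface can be re-fitted by a least-squares plane through points sampled on the physical surface; the sample data and noise level below are assumptions.
# Hypothetical sketch: refine a virtual surface by fitting a plane (least
# squares via SVD) to 3D points sampled on the physical surface.
import numpy as np

def fit_plane(points):                       # points: (n, 3) array
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                          # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

# Example with noisy samples of the plane z = 0.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.2, 0.2, size=(50, 3))
pts[:, 2] = rng.normal(0.0, 0.002, size=50)  # ~2 mm sampling noise
centroid, normal = fit_plane(pts)
print(centroid.round(3), normal.round(3))    # normal should be close to (0, 0, ±1)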
Tobias Brixner, Stefan Mueller, Andreas Müller, Andreas Knote, Wilhelm Schnepp, Samuel Truman, Anne Vetter, Sebastian von Mammen,
femtoPro: virtual-reality interactive training simulator of an ultrafast laser laboratory, In Applied Physics B, Vol. 129(5).
Springer Science and Business Media LLC,
2023. [Download][BibSonomy][Doi]
@article{Brixner_2023,
author = {Tobias Brixner and Stefan Mueller and Andreas Müller and Andreas Knote and Wilhelm Schnepp and Samuel Truman and Anne Vetter and Sebastian von Mammen},
journal = {Applied Physics B},
number = {5},
url = {https://doi.org/10.1007%2Fs00340-023-08018-7},
year = {2023},
publisher = {Springer Science and Business Media LLC},
volume = {129},
doi = {10.1007/s00340-023-08018-7},
title = {femtoPro: virtual-reality interactive training simulator of an ultrafast laser laboratory}
}
Abstract:
Samantha Straka, Martin Jakobus Koch, Astrid Carolus, Marc Erich Latoschik, Carolin Wienrich,
How Do Employees Imagine AI They Want to Work with: A Drawing Study, In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. New York, NY, USA:
Association for Computing Machinery,
2023. [Download][BibSonomy][Doi]
@inproceedings{10.1145/3544549.3585631,
author = {Samantha Straka and Martin Jakobus Koch and Astrid Carolus and Marc Erich Latoschik and Carolin Wienrich},
url = {https://doi.org/10.1145/3544549.3585631},
year = {2023},
booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI EA '23},
doi = {10.1145/3544549.3585631},
title = {How Do Employees Imagine AI They Want to Work with: A Drawing Study}
}
Abstract:
Perceptions about AI influence the attribution of characteristics and the interaction with AI. To find out how workers imagine an AI they would like to work with and what characteristics they attribute to it, we asked 174 working individuals to draw an AI they would like to work with, to report five adjectives they associate with their drawing and to evaluate the drawn and three other, typical AI representations (e.g. robot, smartphone) either presented as male or female. Participants mainly drew humanoid or robotic AIs. The adjectives that describe AI mainly referred to the inner characteristics, capabilities, shape, or relationship types. Regarding the evaluation, we identified four dimensions (warmth, competence, animacy, size) that can be reproduced for male and female AIs and different AI representations. This work addresses diverse conceptions of AI in the workplace and shows that human-centered AI development is necessary to address the huge design space.
Marie Luisa Fiedler, Erik Wolf, Carolin Wienrich, Marc Erich Latoschik,
Holographic Augmented Reality Mirrors for Daily Self-Reflection on the Own Body Image, In CHI 2023 WS28 Integrating Individual and Social Contexts into Self-Reflection Technologies Workshop, pp. 1-4.
2023. [Download][BibSonomy]
@inproceedings{fiedler2023holographic,
author = {Marie Luisa Fiedler and Erik Wolf and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-reflection-workshop.pdf},
year = {2023},
booktitle = {CHI 2023 WS28 Integrating Individual and Social Contexts into Self-Reflection Technologies Workshop},
pages = {1-4},
title = {Holographic Augmented Reality Mirrors for Daily Self-Reflection on the Own Body Image}
}
Abstract:
Mirror self-reflection can help us to develop a deeper understanding and appreciation of our body. Due to technological advancements, holographic augmented reality (AR) mirrors can create realistic visualizations of virtual humans that can represent one's appearance in an altered way while remaining in a familiar environment. Further developing those mirrors opens a new field for use in everyday life. In this work, we outline possible future scenarios where AR mirrors can empower individuals to visualize their emotions, thought patterns, and discrepancies related to their physical body and mental body image. Thus, AR mirrors can encourage their self-reflection, promote a positive and healthy relationship with their bodies, or motivate them to take action to improve their well-being.
Peter Kullmann, Timo Menzel, Mario Botsch, Marc Erich Latoschik,
An Evaluation of Other-Avatar Facial Animation Methods for Social VR, In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1--7. New York, NY, USA:
Association for Computing Machinery,
2023. [Download][BibSonomy][Doi]
@inproceedings{kullmann2023facialExpressionComparison,
author = {Peter Kullmann and Timo Menzel and Mario Botsch and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-Kullmann-An-Evaluation-of-Other-Avatar-Facial-Animation-Methods-for-Social-VR.pdf},
year = {2023},
booktitle = {Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI EA '23},
pages = {1--7},
doi = {10.1145/3544549.3585617},
title = {An Evaluation of Other-Avatar Facial Animation Methods for Social VR}
}
Abstract:
We report a mixed-design study on the effect of facial animation method (static, synthesized, or tracked expressions) and its synchronization to speaker audio (in sync or delayed by the method’s inherent latency) on an avatar’s perceived naturalness and plausibility. We created a virtual human for an actor and recorded his spontaneous half-minute responses to conversation prompts. As a simulated immersive interaction, 44 participants unfamiliar with the actor observed and rated performances rendered with the avatar, each with the different facial animation methods. Half of them observed performances in sync and the others with the animation method’s latency. Results show audio synchronization did not influence ratings and static faces were rated less natural and less plausible than animated faces. Notably, synthesized expressions were rated as more natural and more plausible than tracked expressions. Moreover, ratings of verbal behavior naturalness differed in the same way. We discuss implications of these results for avatar-mediated communication.
Sophia C. Steinhaeusser, Lenny Siol, Elisabeth Ganal, Sophia Maier, Birgit Lugrin,
The NarRobot Plugin - Connecting the Social Robot Reeti to the Unity Game Engine, In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
2023. [BibSonomy][Doi]
@inproceedings{steinhaeusser2023narrobot,
author = {Sophia C. Steinhaeusser and Lenny Siol and Elisabeth Ganal and Sophia Maier and Birgit Lugrin},
year = {2023},
booktitle = {Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI)},
doi = {10.1145/3568294.3580044},
title = {The NarRobot Plugin - Connecting the Social Robot Reeti to the Unity Game Engine}
}
Abstract:
Martina Lein, Melissa Donnermann, Sophia C. Steinhaeusser, Birgit Lugrin,
Using a Social Robot as a Hotel Assessment Tool, In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI).
2023. [BibSonomy][Doi]
@inproceedings{lein2023,
author = {Martina Lein and Melissa Donnermann and Sophia C. Steinhaeusser and Birgit Lugrin},
year = {2023},
booktitle = {Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI)},
doi = {10.1145/3568294.3580054},
title = {Using a Social Robot as a Hotel Assessment Tool}
}
Abstract:
Florian Kern, Florian Niebling, Marc Erich Latoschik,
Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality, In IEEE Transactions on Visualization and Computer Graphics, pp. 2658--2669.
2023. [Download][BibSonomy][Doi]
@article{kern2023input,
author = {Florian Kern and Florian Niebling and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
url = {https://ieeexplore.ieee.org/document/10049665/},
year = {2023},
pages = {2658--2669},
doi = {10.1109/TVCG.2023.3247098},
title = {Text Input for Non-Stationary XR Workspaces: Investigating Tap and Word-Gesture Keyboards in Virtual and Augmented Reality}
}
Abstract:
This article compares two state-of-the-art text input techniques between non-stationary virtual reality (VR) and video see-through augmented reality (VST AR) use cases as XR display conditions. The developed contact-based mid-air virtual tap and word-gesture (swipe) keyboards provide established support functions for text correction, word suggestions, capitalization, and punctuation. A user evaluation with 64 participants revealed that XR displays and input techniques strongly affect text entry performance, while subjective measures are only influenced by the input techniques. We found significantly higher usability and user experience ratings for tap keyboards compared to swipe keyboards in both VR and VST AR. Task load was also lower for tap keyboards. In terms of performance, both input techniques were significantly faster in VR than in VST AR. Further, the tap keyboard was significantly faster than the swipe keyboard in VR. Participants showed a significant learning effect with only ten sentences typed per condition. Our results are consistent with previous work in VR and optical see-through (OST) AR, but additionally provide novel insights into usability and performance of the selected text input techniques for VST AR. The significant differences in subjective and objective measures emphasize the importance of specific evaluations for each possible combination of input techniques and XR displays to provide reusable, reliable, and high-quality text input solutions. With our work, we form a foundation for future research and XR workspaces. Our reference implementation is publicly available to encourage replicability and reuse in future XR workspaces.
Sebastian Oberdörfer, Sandra Birnstiel, Sophia C. Steinhaeusser, Marc Erich Latoschik,
An Approach to Investigate an Influence of Visual Angle Size on Emotional Activation During a Decision-Making Task, In Virtual, Augmented and Mixed Reality (HCII 2023), pp. 649--664. Cham:
Springer Nature Switzerland,
2023. [Download][BibSonomy][Doi]
@inproceedings{oberdorfer2023approach,
author = {Sebastian Oberdörfer and Sandra Birnstiel and Sophia C. Steinhaeusser and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2023-hcii-igt-visual-angles-preprint.pdf},
year = {2023},
booktitle = {Virtual, Augmented and Mixed Reality (HCII 2023)},
publisher = {Springer Nature Switzerland},
address = {Cham},
series = {Lecture Notes in Computer Science},
pages = {649--664},
doi = {10.1007/978-3-031-35634-6_47},
title = {An Approach to Investigate an Influence of Visual Angle Size on Emotional Activation During a Decision-Making Task}
}
Abstract:
Decision-making is an important ability in our daily lives. Decision-making can be influenced by emotions. A virtual environment and objects in it might follow an emotional design, thus potentially influencing the mood of a user. A higher visual angle on a particular stimulus can lead to a higher emotional response to it. The use of immersive virtual reality (VR) surrounds a user visually with a virtual environment, as opposed to the partial immersion of using a normal computer screen. This higher immersion may result in a greater visual angle on a particular stimulus and thus a stronger emotional response to it. In a between-subjects user study, we compare the results of a decision-making task in VR presented at three different visual angles. We used the Iowa Gambling Task (IGT) as the task and to detect potential differences in decision-making. The IGT was displayed in one of three dimensions, thus yielding visual angles of 20°, 35°, and 50°. Our results indicate no difference between the three conditions with respect to decision-making. Thus, our results possibly imply that a higher visual angle has no influence on a task that is influenced by emotions but is otherwise cognitive.
Christian Rack, Konstantin Kobs, Tamara Fernando, Andreas Hotho, Marc Erich Latoschik,
Versatile User Identification in Extended Reality using Pretrained Similarity-Learning, In arXiv, p. arXiv:2302.07517.
2023. [Download][BibSonomy][Doi]
@preprint{2023arXiv230207517S,
author = {Christian Rack and Konstantin Kobs and Tamara Fernando and Andreas Hotho and Marc Erich Latoschik},
journal = {arXiv},
url = {https://arxiv.org/abs/2302.07517},
year = {2023},
pages = {arXiv:2302.07517},
doi = {10.48550/arXiv.2302.07517},
title = {Versatile User Identification in Extended Reality using Pretrained Similarity-Learning}
}
Abstract:
In this paper, we combine the strengths of distance-based and classification-based approaches for the task of identifying extended reality users by their movements. For this, we explore an embedding-based model that leverages deep metric learning. We train the model on a dataset of users playing the VR game "Half-Life: Alyx" and conduct multiple experiments and analyses using a state-of-the-art classification-based model as a baseline. The results show that the embedding-based method 1) is able to identify new users from non-specific movements using only a few minutes of enrollment data, 2) can enroll new users within seconds, while retraining the baseline approach takes almost a day, 3) is more reliable than the baseline approach when only little enrollment data is available, and 4) can be used to identify new users from another dataset recorded with different VR devices.
Altogether, our solution is a foundation for easily extensible XR user identification systems, applicable to a wide range of user motions. It also paves the way for production-ready models that could be used by XR practitioners without the expertise, hardware, or data required for training deep learning models.
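A hedged sketch of the embedding-based idea summarized above: train an encoder with a metric-learning loss, enroll a new user by averaging a few of their embeddings, and identify by nearest centroid. The architecture, shapes, and loss below are illustrative assumptions, not the authors' model.
# Hypothetical sketch of similarity-learning identification: an encoder maps a
# motion window to an embedding; users are enrolled as embedding centroids and
# identified by cosine similarity. Shapes and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(300 * 21, 256), nn.ReLU(), nn.Linear(256, 64))
triplet = nn.TripletMarginLoss(margin=0.5)

# One dummy metric-learning step on random anchor/positive/negative windows.
a, p, n = (torch.randn(16, 300, 21) for _ in range(3))
loss = triplet(encoder(a), encoder(p), encoder(n))
loss.backward()

# Enrollment: average a few embeddings per user; identification: nearest centroid.
def enroll(windows):                           # windows: (k, 300, 21) for one user
    with torch.no_grad():
        return F.normalize(encoder(windows).mean(dim=0), dim=0)

centroids = {u: enroll(torch.randn(5, 300, 21)) for u in ["user_a", "user_b"]}

def identify(window):                          # window: (300, 21)
    with torch.no_grad():
        e = F.normalize(encoder(window.unsqueeze(0))[0], dim=0)
    return max(centroids, key=lambda u: torch.dot(e, centroids[u]).item())

print(identify(torch.randn(300, 21)))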
Fabian Kerwagen, Konrad F. Fuchs, Melanie Ullrich, Andreas Schulze, Samantha Straka, Philipp Krop, Marc E. Latoschik, Fabian Gilbert, Andreas Kunz, Georg Fette, Stefan Störk, Maximilian Ertl,
Usability of a mHealth Solution Using Speech Recognition for Point-of-care Diagnostic Management, In Journal of Medical Systems, Vol. 47(18).
2023. [Download][BibSonomy][Doi]
@article{kerwagen2023,
author = {Fabian Kerwagen and Konrad F. Fuchs and Melanie Ullrich and Andreas Schulze and Samantha Straka and Philipp Krop and Marc E. Latoschik and Fabian Gilbert and Andreas Kunz and Georg Fette and Stefan Störk and Maximilian Ertl},
journal = {Journal of Medical Systems},
number = {18},
url = {https://link.springer.com/article/10.1007/s10916-022-01896-y},
year = {2023},
volume = {47},
doi = {10.1007/s10916-022-01896-y},
title = {Usability of a mHealth Solution Using Speech Recognition for Point-of-care Diagnostic Management}
}
Abstract:
The administrative burden for physicians in the hospital can affect the quality of patient care. The Service Center Medical Informatics (SMI) of the University Hospital Würzburg developed and implemented the smartphone-based mobile application (MA) ukw.mobile1 that uses speech recognition for the point-of-care ordering of radiological examinations. The aim of this study was to examine the usability of the MA workflow for the point-of-care ordering of radiological examinations. All physicians at the Department of Trauma and Plastic Surgery at the University Hospital Würzburg, Germany, were asked to participate in a survey including the short version of the User Experience Questionnaire (UEQ-S) and the Unified Theory of Acceptance and Use of Technology (UTAUT). For the analysis of the different domains of user experience (overall attractiveness, pragmatic quality and hedonic quality), we used a two-sided dependent sample t-test. For the determinants of the acceptance model, we employed regression analysis. Twenty-one of 30 physicians (mean age 34 ± 8 years, 62% male) completed the questionnaire. Compared to the conventional desktop application (DA) workflow, the new MA workflow showed superior overall attractiveness (mean difference 2.15 ± 1.33), pragmatic quality (mean difference 1.90 ± 1.16), and hedonic quality (mean difference 2.41 ± 1.62; all p < .001). The user acceptance measured by the UTAUT (mean 4.49 ± 0.41; min. 1, max. 5) was also high. Performance expectancy (beta = 0.57, p = .02) and effort expectancy (beta = 0.36, p = .04) were identified as predictors of acceptance; the full predictive model explained 65.4% of the variance. Point-of-care mHealth solutions using innovative technology such as speech recognition seem to address the users' needs and to offer higher usability in comparison to conventional technology. Implementation of user-centered mHealth innovations might therefore help to facilitate physicians' daily work.
Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
A Subjective Quality Assessment of Temporally Reprojected Specular Reflections in Virtual Reality, In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 825-826.
2023. [Download][BibSonomy][Doi]
@inproceedings{misiak2023subjective,
author = {Martin Mišiak and Arnulph Fuhrmann and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ieeevr-noise-perception.pdf},
year = {2023},
booktitle = {2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {825-826},
doi = {10.1109/VRW58643.2023.00255},
title = {A Subjective Quality Assessment of Temporally Reprojected Specular Reflections in Virtual Reality}
}
Abstract:
Temporal reprojection is a popular method for mitigating sampling artifacts from a variety of sources. This work investigates its impact on the subjective quality of specular reflections in Virtual Reality (VR). Our results show that temporal reprojection is highly effective at improving the visual comfort of specular materials, especially at low sample counts. A slightly diminished effect could also be observed in improving the subjective accuracy of the resulting reflection.
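A simplified, hypothetical sketch of the temporal accumulation behind such reprojection follows; it ignores reprojecting the history buffer under camera motion and any history-rejection heuristics, and the blend factor is an arbitrary assumption.
# Hypothetical sketch: temporal accumulation of a noisy specular term by
# exponentially blending the current frame into a history buffer.
import numpy as np

def temporal_accumulate(history, current, alpha=0.1):
    # alpha = weight of the new frame; smaller alpha = more smoothing, more lag.
    return (1.0 - alpha) * history + alpha * current

# Example: accumulating 60 noisy frames of a constant reflection value 0.5.
rng = np.random.default_rng(0)
history = np.zeros((4, 4))                    # tiny "framebuffer"
for _ in range(60):
    noisy_frame = 0.5 + rng.normal(0.0, 0.2, size=(4, 4))
    history = temporal_accumulate(history, noisy_frame)
print(history.mean().round(3))                # converges toward 0.5 with reduced variance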
Marie Luisa Fiedler, Erik Wolf, Nina Döllinger, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Embodiment and Personalization for Self-Identification with Virtual Humans, In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 799-800.
2023. [Download][BibSonomy][Doi]
@inproceedings{fiedler2023selfidentification,
author = {Marie Luisa Fiedler and Erik Wolf and Nina Döllinger and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ieeevr-self-identification-preprint.pdf},
year = {2023},
booktitle = {2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {799-800},
doi = {10.1109/VRW58643.2023.00242},
title = {Embodiment and Personalization for Self-Identification with Virtual Humans}
}
Abstract:
Our work investigates the impact of virtual human embodiment and personalization on the sense of embodiment (SoE) and self-identification (SI). We introduce preliminary items to query self-similarity (SS) and self-attribution (SA) with virtual humans as dimensions of SI. In our study, 64 participants successively observed personalized and generic-looking virtual humans, either as embodied avatars in a virtual mirror or as agents while performing tasks. They reported significantly higher SoE and SI when facing personalized virtual humans and significantly higher SoE and SA when facing embodied avatars, indicating that both factors have strong, separate, and complementary influences on SoE and SI.
Maximilian Landeck, Federico Alvarez Igarzábal, Fabian Unruh, Hannah Habenicht, Shiva Khoshnoud, Marc Wittmann, Jean-Luc Lugrin, Marc Erich Latoschik,
Journey Through a Virtual Tunnel: Simulated Motion and its Effects on the Experience of Time, In Frontiers in Virtual Reality, Vol. 3, p. 195.
Frontiers,
2023. [Download][BibSonomy][Doi]
@article{landeck3journey,
author = {Maximilian Landeck and Federico Alvarez Igarzábal and Fabian Unruh and Hannah Habenicht and Shiva Khoshnoud and Marc Wittmann and Jean-Luc Lugrin and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.1059971/full},
year = {2023},
publisher = {Frontiers},
pages = {195},
volume = {3},
doi = {10.3389/frvir.2022.1059971},
title = {Journey Through a Virtual Tunnel: Simulated Motion and its Effects on the Experience of Time}
}
Abstract:
David Mal, Erik Wolf, Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik,
The Impact of Avatar and Environment Congruence on Plausibility, Embodiment, Presence, and the Proteus Effect in Virtual Reality, In IEEE Transactions on Visualization and Computer Graphics, Vol. 29(5), pp. 2358-2368.
2023. [Download][BibSonomy][Doi]
@article{mal2023impact,
author = {David Mal and Erik Wolf and Nina Döllinger and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-ieeevr-avatar-and-environment-congruence-preprint.pdf},
year = {2023},
pages = {2358-2368},
volume = {29},
doi = {10.1109/TVCG.2023.3247089},
title = {The Impact of Avatar and Environment Congruence on Plausibility, Embodiment, Presence, and the Proteus Effect in Virtual Reality}
}
Abstract:
Many studies show the significance of the Proteus effect for serious virtual reality applications. The present study extends the existing knowledge by considering the relationship (congruence) between the self-embodiment (avatar) and the virtual environment. We investigated the impact of avatar and environment types and their congruence on avatar plausibility, sense of embodiment, spatial presence, and the Proteus effect. In a 2 × 2 between-subjects design, participants embodied either an avatar in sports- or business wear in a semantic congruent or incongruent environment while performing lightweight exercises in virtual reality. The avatar-environment congruence significantly affected the avatar’s plausibility but not the sense of embodiment or spatial presence. However, a significant Proteus effect emerged only for participants who reported a high feeling of (virtual) body ownership, indicating that a strong sense of having and owning a virtual body is key to facilitating the Proteus effect. We discuss the results assuming current theories of bottom-up and top-down determinants of the Proteus effect and thus contribute to understanding its underlying mechanisms and determinants.
Jinghuai Lin, Johrine Cronjé, Ivo Käthner, Paul Pauli, Marc Erich Latoschik,
Measuring Interpersonal Trust towards Virtual Humans with a Virtual Maze Paradigm, In IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 29(5), p. 2401–2411.
2023. IEEE VR Best Paper Honorable Mention 🏆 [Download][BibSonomy][Doi]
@article{lin2023measuring,
author = {Jinghuai Lin and Johrine Cronjé and Ivo Käthner and Paul Pauli and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
number = {5},
url = {https://ieeexplore.ieee.org/document/10049655},
year = {2023},
pages = {2401–2411},
volume = {29},
doi = {10.1109/TVCG.2023.3247095},
title = {Measuring Interpersonal Trust towards Virtual Humans with a Virtual Maze Paradigm}
}
Abstract:
Virtual humans, including virtual agents and avatars, play an increasingly important role as VR technology advances. For example, virtual humans are used as proxies of users in social VR or as interfaces for AI assistants in online financing. Interpersonal trust is an essential prerequisite in real-life interactions, as well as in the virtual world. However, to date, there are no established interpersonal trust measurement tools specifically for virtual humans in virtual reality. Previously, a virtual maze task was proposed to measure trust towards virtual characters. For the current study, a variant of the paradigm was implemented. The task of the users (the trustors) is to navigate through a maze in virtual reality, where they can interact with a virtual human (the trustee). They can choose to 1) ask for advice and 2) follow advice from the virtual human if they want to. These measures served as behavioural measures of trust. We conducted a validation study with 70 participants in a between-subject design. The two conditions did not differ in the content of the advice but in the appearance, tone of voice and engagement of the trustees (alleged as avatars controlled by other participants). Results indicate that the experimental manipulation was successful, as participants rated the virtual human as more trustworthy in the trustworthy condition than in the untrustworthy condition. Importantly, this manipulation affected the trust behaviour of our participants, who, in the trustworthy condition, asked for advice more often and followed advice more often, indicating that the paradigm is sensitive to assessing interpersonal trust towards virtual humans. Thus, our paradigm can be used to measure differences in interpersonal trust towards virtual humans and may serve as a valuable research tool to investigate trust in virtual reality.
Franziska Westermeier, Larissa Brübach, Marc Erich Latoschik, Carolin Wienrich,
Exploring Plausibility and Presence in Mixed Reality Experiences, In IEEE Transactions on Visualization and Computer Graphics, Vol. 29(5), pp. 2680-2689.
2023. IEEE VR Best Paper Nominee 🏆 [Download][BibSonomy][Doi]
@article{westermeier2023exploring,
author = {Franziska Westermeier and Larissa Brübach and Marc Erich Latoschik and Carolin Wienrich},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://ieeexplore.ieee.org/document/10049710},
year = {2023},
pages = {2680-2689},
volume = {29},
doi = {10.1109/TVCG.2023.3247046},
title = {Exploring Plausibility and Presence in Mixed Reality Experiences}
}
Abstract:
Mixed Reality (MR) applications along Milgram's Reality-Virtuality (RV) continuum motivated a number of recent theories on potential constructs and factors describing MR experiences. This paper investigates the impact of incongruencies on the sensation/perception and cognition layers to provoke breaks in plausibility, and the effects these breaks have on spatial and overall presence as prominent constructs of Virtual Reality (VR). We developed a simulated maintenance application to test virtual electrical devices. Participants performed test operations on these devices in a counterbalanced, randomized 2x2 between-subject design in either VR as congruent, or Augmented Reality (AR) as incongruent on the sensation/perception layer. Cognitive incongruency was induced by the absence of traceable power outages, decoupling perceived cause and effect after activating potentially defective devices. Our results indicate significant differences in the plausibility ratings between the VR and AR conditions, hence between congruent/incongruent conditions on the sensation/perception layer. In addition, spatial presence revealed a comparable interaction pattern with the VR vs AR conditions. Both factors decreased for the AR condition (incongruent sensation/perception) compared to VR (congruent sensation/perception) for the congruent cognitive case but increased for the incongruent cognitive case. The results are discussed and put into perspective in the scope of recent theories of MR experiences.
Fabian Unruh, David H.V. Vogel, Maximilian Landeck, Jean-Luc Lugrin, Marc Erich Latoschik,
Body and Time: Virtual Embodiment and its effect on Time Perception, In IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 29(5), pp. 2626 - 2636.
2023. IEEE VR Best Paper Nominee 🏆 [Download][BibSonomy][Doi]
@article{unruh2023virtual,
author = {Fabian Unruh and David H.V. Vogel and Maximilian Landeck and Jean-Luc Lugrin and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
number = {5},
url = {https://ieeexplore.ieee.org/abstract/document/10049718},
year = {2023},
pages = {2626 - 2636},
volume = {29},
doi = {10.1109/TVCG.2023.3247040},
title = {Body and Time: Virtual Embodiment and its effect on Time Perception}
}
Abstract:
This article explores the effect of one's body representation on time perception. Time perception is modulated by a variety of factors, e.g., the current situation or activity; it can display significant disturbances caused by psychological disorders, and it is influenced by emotional and interoceptive states, i.e., "the sense of the physiological condition of the body". We investigated this relation between one's own body and the perception of time in a novel Virtual Reality (VR) experiment explicitly fostering user activity. 36 participants randomly experienced different degrees of embodiment: i) without an avatar (low), ii) with hands (medium), and iii) with a high-quality avatar (high). Participants had to repeatedly activate a virtual lamp and estimate the duration of time intervals as well as judge the passage of time. Our results show a significant effect of embodiment on time perception: time passes more slowly in the low embodiment condition compared to the medium and high conditions. In contrast to prior work, the study provides missing evidence that this effect is independent of the participants' level of activity: in our task, users were prompted to repeatedly perform body actions, thereby ruling out a potential influence of the level of activity. Importantly, duration judgements in both the millisecond and minute ranges seemed unaffected by variations in embodiment. Taken together, these results lead to a better understanding of the relationship between the body and time.
Nina Döllinger, Erik Wolf, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Are Embodied Avatars Harmful to our Self-Experience? The Impact of Virtual Embodiment on Body Awareness, In 2023 CHI Conference on Human Factors in Computing Systems, pp. 1-14.
2023. Honorable Mention 🏆 [Download][BibSonomy][Doi]
@inproceedings{dollinger2023embodied,
author = {Nina Döllinger and Erik Wolf and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2023-chi-virtual-mirrors-body-awareness-preprint.pdf},
year = {2023},
booktitle = {2023 CHI Conference on Human Factors in Computing Systems},
pages = {1-14},
doi = {10.1145/3544548.3580918},
title = {Are Embodied Avatars Harmful to our Self-Experience? The Impact of Virtual Embodiment on Body Awareness}
}
Abstract:
Virtual Reality (VR) allows us to replace our visible body with a virtual self-representation (avatar) and to explore its effects on our body perception. While the feeling of owning and controlling a virtual body is widely researched, how VR affects the awareness of internal body signals (body awareness) remains open. Forty participants performed moving meditation tasks in reality and VR, either facing their mirror image or not. Both the virtual environment and avatars photorealistically matched their real counterparts.
We found a negative effect of VR on body awareness, mediated by feeling embodied in and changed by the avatar. Further, we revealed a negative effect of a mirror on body awareness. Our results indicate that assessing body awareness should be essential in evaluating VR designs and avatar embodiment aiming at mental health, as even a scenario as close to reality as possible can distract users from their internal body signals.
Rebecca Hein, Jeanine Steinbock, Maria Eisenmann, Marc Erich Latoschik, Carolin Wienrich,
Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen - Eine Pilotstudie, In Miriam Mulders, Josef Buchner, Andreas Dengel, Raphael Zender (Eds.), MedienPädagogik: Zeitschrift für Theorie Und Praxis Der Medienbildung(51), pp. 191-213.
2023. [Download][BibSonomy][Doi]
@article{hein2023medienpaed,
author = {Rebecca Hein and Jeanine Steinbock and Maria Eisenmann and Marc Erich Latoschik and Carolin Wienrich},
journal = {MedienPädagogik: Zeitschrift für Theorie Und Praxis Der Medienbildung},
number = {51},
url = {https://www.medienpaed.com/article/view/1572},
year = {2023},
editor = {Miriam Mulders and Josef Buchner and Andreas Dengel and Raphael Zender},
pages = {191-213},
doi = {10.21240/mpaed/51/2023.01.18.X},
title = {Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen - Eine Pilotstudie}
}
Abstract:
This article presents the results of a seminar concept and the accompanying research on inter- and transcultural learning in Virtual Reality (VR). As part of a university seminar, TEFL students (Teaching English as a Foreign Language) designed lessons for advanced upper-secondary learners of English that addressed the development of empathy and the ability to take on other perspectives in intercultural communication and exchange situations. The concept focused on understanding and accepting cultural commonalities and differences. Two of the lesson designs created for VR interventions are presented and discussed here as examples. Empirical data were collected alongside the seminar: on the one hand, the students rated the potential of the InteractionSuitcase (a collection of virtual objects); on the other hand, a qualitative method for measuring intercultural competence (Autobiography of Intercultural Encounters) was tested exploratively. The results of this exploratory pilot study show that the students rated the interaction with the InteractionSuitcase as intuitive and beneficial for designing lesson concepts. Nevertheless, the students integrated the manipulation of virtual avatars into their lesson concepts more frequently than the InteractionSuitcase. The article aims to identify the potential of VR for inter- and transcultural learning in upper-secondary English teaching.
2022
Oyudari Vova, Sarah König, Samantha Monty,
WueDive - Complex systems ``Climate Change and the Effects on Health'' - an innovative teaching project to enhance digital culture in medical education., In AGU Fall Meeting Abstracts, Vol. 2022, pp. ED15C-0378.
2022. [BibSonomy]
@inproceedings{2022AGUFMED15C0378V,
author = {Oyudari Vova and Sarah König and Samantha Monty},
year = {2022},
booktitle = {AGU Fall Meeting Abstracts},
pages = {ED15C-0378},
volume = {2022},
title = {WueDive - Complex systems ``Climate Change and the Effects on Health'' - an innovative teaching project to enhance digital culture in medical education.}
}
Abstract:
Sarah Hofmann, Maximilian Seeger, Henning Rogge-Pott, Sebastian von Mammen,
Real-Time Music-Driven Movie Design Framework, In Rémi Ronfard, Hui-Yin Wu (Eds.), Workshop on Intelligent Cinematography and Editing.
The Eurographics Association,
2022. [Download][BibSonomy][Doi]
@inproceedings{Hofmann:2022aa,
author = {Sarah Hofmann and Maximilian Seeger and Henning Rogge-Pott and Sebastian von Mammen},
url = {https://diglib.eg.org/bitstream/handle/10.2312/wiced20221052/053-060.pdf?sequence=1&isAllowed=y},
year = {2022},
booktitle = {Workshop on Intelligent Cinematography and Editing},
editor = {Rémi Ronfard and Hui-Yin Wu},
publisher = {The Eurographics Association},
doi = {10.2312/wiced.20221052},
title = {Real-Time Music-Driven Movie Design Framework}
}
Abstract:
Sophia C Steinhaeusser, Sebastian Oberdörfer, Sebastian von Mammen, Marc Erich Latoschik, Birgit Lugrin,
Joyful Adventures and Frightening Places - Designing Emotion-Inducing Virtual Environments, In Frontiers in Virtual Reality, Vol. 3, pp. 2673--4192.
2022. [Download][BibSonomy][Doi]
@article{Steinhaeusser:2022ab,
author = {Sophia C Steinhaeusser and Sebastian Oberdörfer and Sebastian von Mammen and Marc Erich Latoschik and Birgit Lugrin},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.919163},
year = {2022},
pages = {2673--4192},
volume = {3},
doi = {10.3389/frvir.2022.919163},
title = {Joyful Adventures and Frightening Places - Designing Emotion-Inducing Virtual Environments}
}
Abstract:
Sooraj K Babu, Sebastian von Mammen,
Learning Classifier Systems as Recommender Systems, In Lifelike Computing Systems Workshop LIFELIKE 2022.
2022. [Download][BibSonomy]
@inproceedings{Babu:2022aa,
author = {Sooraj K Babu and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/BabuVonMammen2022.pdf},
year = {2022},
booktitle = {Lifelike Computing Systems Workshop LIFELIKE 2022},
title = {Learning Classifier Systems as Recommender Systems}
}
Abstract:
A multitude of software domains, ranging from e-commerce services to personal digital assistants, rely on recommender systems (RS) to enhance their performance and customer experience. There are several established approaches to RS, namely collaborative filtering, content-based filtering, and various hybrid systems that use different verticals of data, such as the user's preferences, the metadata of the items to be suggested, etc., to generate recommendations. Contrary to those, we adopt a classifier system to benefit from its rule-based knowledge representation and interwoven learning mechanisms. In this paper, we present our approach and elaborate on its potential use to flexibly and effectively guide a user through the application of an authoring platform.
Kerstin Schmid, Andreas Knote, Alexander Mück, Keram Pfeiffer, Sebastian von Mammen, Sabine C. Fischer,
Interactive, Visual Simulation of a Spatio-Temporal Model of Gas Exchange in the Human Alveolus, In Frontiers in Bioinformatics, Vol. 1.
2022. [Download][BibSonomy][Doi]
@article{Schmid:2022aa,
author = {Kerstin Schmid and Andreas Knote and Alexander Mück and Keram Pfeiffer and Sebastian von Mammen and Sabine C. Fischer},
journal = {Frontiers in Bioinformatics},
url = {https://www.frontiersin.org/article/10.3389/fbinf.2021.774300},
year = {2022},
volume = {1},
doi = {10.3389/fbinf.2021.774300},
title = {Interactive, Visual Simulation of a Spatio-Temporal Model of Gas Exchange in the Human Alveolus}
}
Abstract:
In interdisciplinary fields such as systems biology, good communication between experimentalists and theorists is crucial for the success of a project. Theoretical modeling in physiology usually describes complex systems with many interdependencies. On one hand, these models have to be grounded on experimental data. On the other hand, experimenters must be able to understand the interdependent complexities of the theoretical model in order to interpret the model's results in the physiological context. We promote interactive, visual simulations as an engaging way to present theoretical models in physiology and to make complex processes tangible. Based on a requirements analysis, we developed a new model for gas exchange in the human alveolus in combination with an interactive simulation software named Alvin. Alvin exceeds the current standard with its spatio-temporal resolution and a combination of visual and quantitative feedback. In Alvin, the course of the simulation can be traced in a three-dimensional rendering of an alveolus and in dynamic plots. The user can interact by configuring essential model parameters. Alvin allows users to run and compare multiple simulation instances simultaneously. We exemplified the use of Alvin for research by identifying unknown dependencies in published experimental data. Employing a detailed questionnaire, we showed the benefits of Alvin for education. We postulate that interactive, visual simulation of theoretical models, as we have implemented with Alvin for respiratory processes in the alveolus, can be of great help for communication between specialists, thereby advancing research.
Simon Keller, Tobias Hoßfeld, Sebastian von Mammen,
Edge-Case Integration into Established NAT Traversal Techniques, In 2022 IEEE Ninth International Conference on Communications and Electronics (ICCE), pp. 75-80.
2022. [Download][BibSonomy][Doi]
@inproceedings{Keller:2022aa,
author = {Simon Keller and Tobias Hoßfeld and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-icce-keller.pdf},
year = {2022},
booktitle = {2022 IEEE Ninth International Conference on Communications and Electronics (ICCE)},
pages = {75-80},
doi = {10.1109/ICCE55644.2022.9852092},
title = {Edge-Case Integration into Established NAT Traversal Techniques}
}
Abstract:
Traversing Network Address Translation (NAT) is often necessary for establishing direct communication between clients. The traversal of NAT with static port translation is solved in many cases by the Session Traversal Utilities for NAT (STUN) protocol. Nevertheless, it does not cover the traversal of progressing symmetric and random symmetric NAT, which make it necessary to correctly predict opened ports. This paper presents a method for predicting (a) progressing symmetric NAT-translated ports based on a network traffic model and the Expected Value Method, and (b) random symmetric NAT-translated ports based on heuristics between monitored and opened ports across numerous traversal attempts. Tests were conducted in German cities using local cellular communication providers. Compared to established approaches, they yielded considerable improvements traversing progressing symmetric NAT and slight improvements traversing random symmetric NAT in real-world environments.
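As a rough illustration of the port-prediction idea described in this abstract, the following toy Python function extrapolates the next external ports of a progressing symmetric NAT from a few previously observed mappings. It is only a simplified sketch: it is neither the authors' Expected Value Method nor their heuristics for random symmetric NAT, and the function name and example values are hypothetical.

# Toy sketch: extrapolate a progressing symmetric NAT's next external ports
# from observed mappings (not the paper's Expected Value Method).
def predict_next_ports(observed_ports, num_predictions=5):
    """Return likely upcoming external ports, assuming a roughly constant increment."""
    if len(observed_ports) < 2:
        raise ValueError("need at least two observed port mappings")
    increments = [b - a for a, b in zip(observed_ports, observed_ports[1:])]
    step = round(sum(increments) / len(increments))  # average observed progression
    last = observed_ports[-1]
    return [last + step * i for i in range(1, num_predictions + 1)]

# Example: external ports observed via successive STUN-style binding requests
print(predict_next_ports([40000, 40002, 40004, 40006]))  # -> [40008, 40010, 40012, 40014, 40016]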
Stefan Müller, Tobias Brixner, Andreas Knote, Wilhelm Schnepp, Samuel Truman, Anne Vetter, Sebastian von Mammen,
femtoPro: Ultrafast Phenomena in Virtual Reality, In The International Conference on Ultrafast Phenomena (UP) 2022 (2022), paper W4A.42, p. W4A.42.
Optica Publishing Group,
2022. [Download][BibSonomy]
@inproceedings{Muller:2022aa,
author = {Stefan Müller and Tobias Brixner and Andreas Knote and Wilhelm Schnepp and Samuel Truman and Anne Vetter and Sebastian von Mammen},
url = {https://opg.optica.org/abstract.cfm?uri=UP-2022-W4A.42},
year = {2022},
booktitle = {The International Conference on Ultrafast Phenomena (UP) 2022 (2022), paper W4A.42},
publisher = {Optica Publishing Group},
pages = {W4A.42},
title = {femtoPro: Ultrafast Phenomena in Virtual Reality}
}
Abstract:
We introduce "femtoPro," an interactive simulator of an ultrafast laser laboratory in virtual reality (VR). Gaussian beam propagation as well as linear and nonlinear optical phenomena are calculated in real time on consumer-grade VR devices.
Anne Lewerentz, David Schantz, Julian Gröh, Andreas Knote, Sebastian von Mammen, Juliano Cabral,
bioDIVERsity. Ein Computerspiel gegen das Imageproblem von Wasserpflanzen, In Mitteilungen der Fränkischen Geographischen Gesellschaft, Vol. 0(67), pp. 29--36.
2022. [Download][BibSonomy]
@article{David-Schantz-und-Julian-Groh-und-Andreas-Knote-und-Prof.-Dr.-Sebastian-von-Mammen-und-Prof.-Dr.-Juliano-Cabral:2022aa,
author = {Anne Lewerentz and David Schantz and Julian Gröh and Andreas Knote and Sebastian von Mammen and Juliano Cabral},
journal = {Mitteilungen der Fränkischen Geographischen Gesellschaft},
number = {67},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Lewerentz2022aa.pdf},
year = {2022},
pages = {29--36},
volume = {0},
title = {bioDIVERsity. Ein Computerspiel gegen das Imageproblem von Wasserpflanzen}
}
Abstract:
Aquatic plants have an image problem: they are perceived as a nuisance when swimming in lakes, which can trigger panic-like reactions. Terms such as "Schlingpflanzen" (entangling plants) and "Veralgung" (algal overgrowth) are used incorrectly or carry negative connotations. Moreover, aquatic plants are rarely seen, as they grow mostly below the water surface. Their many valuable functions in the lake ecosystem therefore often go unrecognized and are difficult to convey because they are hard to reach and to experience. Virtual media open up a new way of making hard-to-reach places experienceable. Digital games and simulations (computer games, apps, virtual reality games), in particular so-called serious games, are seen as holding great potential for communicating environmental topics, especially to a younger audience. A prototype of the serious game bioDIVERsity was developed as part of an interdisciplinary course of the Games Engineering group at the Institute of Computer Science, in cooperation with the Center for Computational and Theoretical Biology at the University of Würzburg. The goal of the game is to teach children about the lake ecosystem and the effects of water quality and climate conditions on aquatic plants. Beyond that, it is intended to raise awareness of this ecosystem and of the consequences of human interventions.
Jonas Bruschke, Cindy Kröber, Florian Niebling,
Ein 4D-Browser für historische Fotografien – Forschungspotenziale für die Kunstgeschichte. Das Projekt HistStadt4D, In Martin Munke (Eds.), Landes- und Regionalgeschichte digital, pp. 106-114. Dresden und München:
Thelem,
2022. [BibSonomy][Doi]
@inbook{bruschke20224dbrowser,
author = {Jonas Bruschke and Cindy Kröber and Florian Niebling},
year = {2022},
booktitle = {Landes- und Regionalgeschichte digital},
editor = {Martin Munke},
publisher = {Thelem},
address = {Dresden und München},
pages = {106-114},
doi = {10.25366/2021.32},
title = {Ein 4D-Browser für historische Fotografien – Forschungspotenziale für die Kunstgeschichte. Das Projekt HistStadt4D}
}
Abstract:
Sander Münster, Jonas Bruschke, Cindy Kröber, Stephan Hoppe, Ferdinand Maiwald, Florian Niebling, Aaron Pattee, Ronja Utescher, Sina Zarriess,
Multimodale KI zur Unterstützung geschichtswissenschaftlicher Quellenkritik – Ein Forschungsaufriss, In DHd2022: Kulturen des digitalen Gedächtnisses, pp. 179--182.
2022. [BibSonomy][Doi]
@inproceedings{mnster2022multimodale,
author = {Sander Münster and Jonas Bruschke and Cindy Kröber and Stephan Hoppe and Ferdinand Maiwald and Florian Niebling and Aaron Pattee and Ronja Utescher and Sina Zarriess},
year = {2022},
booktitle = {DHd2022: Kulturen des digitalen Gedächtnisses},
pages = {179--182},
doi = {10.5281/zenodo.6304590},
title = {Multimodale KI zur Unterstützung geschichtswissenschaftlicher Quellenkritik – Ein Forschungsaufriss}
}
Abstract:
Sooraj K Babu,
A Multi-Layered User Guidance Framework, In Organic Computing Doctoral Dissertation Colloquium (OC-DDC). Kassel:
Kassel University Press,
2022. [Download][BibSonomy][Doi]
@conference{Babu:2022ac,
author = {Sooraj K Babu},
url = {https://kobra.uni-kassel.de/handle/123456789/14004},
year = {2022},
booktitle = {Organic Computing Doctoral Dissertation Colloquium (OC-DDC)},
publisher = {Kassel University Press},
address = {Kassel},
doi = {10.17170/kobra-202202215780},
title = {A Multi-Layered User Guidance Framework}
}
Abstract:
Using a complex software system can pose a difficult challenge for its users, especially for those without technical backgrounds. This paper outlines a concept to deploy Recommender Systems coupled with Intelligent User Interfaces as an effective mechanism to guide a user of a complex software system through context-aware suggestions. More concretely, we focus on an authoring tool that our research draws motivation from and will find opportunities for evaluation---it is a platform that aims at empowering medical professionals to create virtual reality experiences to help their work. This paper elaborates on the envisioned concept and the plan for implementation and evaluation.
René Stingl, Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
Are You Referring to Me? - Giving Virtual Objects Awareness, In 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 671-673.
2022. [Download][BibSonomy][Doi]
@inproceedings{9974498,
author = {René Stingl and Chris Zimmerer and Martin Fischbach and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ismar-natural-pointing-preprint.pdf},
year = {2022},
booktitle = {2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct)},
pages = {671-673},
doi = {10.1109/ISMAR-Adjunct57072.2022.00139},
title = {Are You Referring to Me? - Giving Virtual Objects Awareness}
}
Abstract:
This work introduces an interaction technique to determine the user’s non-verbal deixis in Virtual Reality (VR) applications. We tailored it for multimodal speech & gesture interfaces (MMIs). Here, non-verbal deixis is often determined by the use of ray-casting due to its simplicity and intuitiveness. However, ray-casting’s rigidness and dichotomous nature pose limitations concerning the MMI’s flexibility and efficiency. In contrast, our technique considers a more comprehensive set of directional cues to determine non-verbal deixis and provides probabilistic output to tackle these limitations. We present a machine-learning-based reference implementation of our technique in VR and the results of a first performance benchmark. Future work includes an in-depth user study evaluating our technique’s user experience in an MMI.
Birgit Lugrin, Catherine Pelachaud, Elisabeth André, Ruth Aylett, Timothy Bickmore, Cynthia Breazeal, Joost Broekens, Kerstin Dautenhahn, Jonathan Gratch, Stefan Kopp, Jacqueline Nadel, Ana Paiva, Agnieszka Wykowska,
Challenge Discussion on Socially Interactive Agents: Considerations on Social Interaction, Computational Architectures, Evaluation, and Ethics, In The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application, p. 561–626.
ACM,
2022. [Download][BibSonomy][Doi]
@inbook{10.1145/3563659.3563677,
author = {Birgit Lugrin and Catherine Pelachaud and Elisabeth André and Ruth Aylett and Timothy Bickmore and Cynthia Breazeal and Joost Broekens and Kerstin Dautenhahn and Jonathan Gratch and Stefan Kopp and Jacqueline Nadel and Ana Paiva and Agnieszka Wykowska},
journal = {The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application},
url = {https://doi.org/10.1145/3563659.3563677},
year = {2022},
publisher = {ACM},
pages = {561–626},
doi = {10.1145/3563659.3563677},
title = {Challenge Discussion on Socially Interactive Agents: Considerations on Social Interaction, Computational Architectures, Evaluation, and Ethics}
}
Abstract:
Birgit Lugrin,
Preface, In The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application, pp. xxv-xl.
ACM,
2022. [Download][BibSonomy][Doi]
@inbook{noauthororeditor,
author = {Birgit Lugrin},
journal = {The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application},
url = {https://doi.org/10.1145/3563659.3563661},
year = {2022},
publisher = {ACM},
pages = {xxv-xl},
doi = {10.1145/3563659.3563661},
title = {Preface}
}
Abstract:
Birgit Lugrin, Catherine Pelachaud, David Traum,
The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application.
ACM,
2022. [Download][BibSonomy][Doi]
@book{2022,
author = {Birgit Lugrin and Catherine Pelachaud and David Traum},
url = {https://doi.org/10.1145%2F3563659},
year = {2022},
publisher = {ACM},
doi = {10.1145/3563659},
title = {The Handbook on Socially Interactive Agents: 20 years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics Volume 2: Interactivity, Platforms, Application}
}
Abstract:
Nina Döllinger, Erik Wolf, David Mal, Stephan Wenninger, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Resize Me! Exploring the User Experience of Embodied Realistic Modulatable Avatars for Body Image Intervention in Virtual Reality, In Frontiers in Virtual Reality, Vol. 3.
2022. [Download][BibSonomy][Doi]
@article{dollinger2022resizeme,
author = {Nina Döllinger and Erik Wolf and David Mal and Stephan Wenninger and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
journal = {Frontiers in Virtual Reality},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-frontiers-modulatable-avatars-body-image-intervention.pdf},
year = {2022},
volume = {3},
doi = {10.3389/frvir.2022.935449},
title = {Resize Me! Exploring the User Experience of Embodied Realistic Modulatable Avatars for Body Image Intervention in Virtual Reality}
}
Abstract:
Obesity is a serious disease that can affect both physical and psychological well-being. Due to weight stigmatization, many affected individuals suffer from body image disturbances whereby they perceive their body in a distorted way, evaluate it negatively, or neglect it. Beyond established interventions such as mirror exposure, recent advancements aim to complement body image treatments by the embodiment of visually altered virtual bodies in virtual reality (VR). We present a high-fidelity prototype of an advanced VR system that allows users to embody a rapidly generated personalized, photorealistic avatar and to realistically modulate its body weight in real-time within a carefully designed virtual environment. In a formative multi-method approach, a total of 12 participants rated the general user experience (UX) of our system during body scan and VR experience using semi-structured qualitative interviews and multiple quantitative UX measures. Using body weight modification tasks, we further compared three different interaction methods for real-time body weight modification and measured our system’s impact on the body image relevant measures body awareness and body weight perception. From the feedback received, demonstrating an already solid UX of our overall system and providing constructive input for further improvement, we derived a set of design guidelines to guide future development and evaluation processes of systems supporting body image interventions.
Lucas Plabst, Sebastian Oberdörfer, Francisco Raul Ortega, Florian Niebling,
Push the Red Button: Comparing Notification Placement with Augmented and Non-Augmented Tasks in AR, In Proceedings of the 10th Symposium on Spatial User Interaction (SUI '22). New York, NY, USA:
ACM,
2022. [Download][BibSonomy][Doi]
@inproceedings{plabst2022press,
author = {Lucas Plabst and Sebastian Oberdörfer and Francisco Raul Ortega and Florian Niebling},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2022-push-the-red-button-preprint.pdf},
year = {2022},
booktitle = {Proceedings of the 10th Symposium on Spatial User Interaction (SUI '22)},
publisher = {ACM},
address = {New York, NY, USA},
doi = {10.1145/3565970.3567701},
title = {Push the Red Button: Comparing Notification Placement with Augmented and Non-Augmented Tasks in AR}
}
Abstract:
Eileen Roesler, Sophia C. Steinhaeusser, Birgit Lugrin, Linda Onnasch,
The Influence of Visible Cables and Story Content on Perceived Autonomy in Social Human-Robot Interaction, In Robotics, Vol. 12(1), p. 3.
MDPI,
2022. [BibSonomy][Doi]
@article{Roesler_2022,
author = {Eileen Roesler and Sophia C. Steinhaeusser and Birgit Lugrin and Linda Onnasch},
journal = {Robotics},
number = {1},
year = {2022},
publisher = {MDPI},
pages = {3},
volume = {12},
doi = {10.3390/robotics12010003},
title = {The Influence of Visible Cables and Story Content on Perceived Autonomy in Social Human-Robot Interaction}
}
Abstract:
Sophia C. Steinhaeusser, Benjamin Eckstein, Birgit Lugrin,
A multi-method approach to compare presence, fear induction and desensitization in survival horror games within the reality-virtuality-continuum, In Entertainment Computing, p. 100539.
Elsevier,
2022. [BibSonomy][Doi]
@article{Steinhaeusser_2022,
author = {Sophia C. Steinhaeusser and Benjamin Eckstein and Birgit Lugrin},
journal = {Entertainment Computing},
year = {2022},
publisher = {Elsevier},
pages = {100539},
doi = {10.1016/j.entcom.2022.100539},
title = {A multi-method approach to compare presence, fear induction and desensitization in survival horror games within the reality-virtuality-continuum}
}
Abstract:
David Obremski, Helena Babette Hering, Paula Friedrich, Birgit Lugrin,
Mixed-Cultural Speech for Intelligent Virtual Agents - the Impact of Different Non-Native Accents Using Natural or Synthetic Speech in the English Language, In HAI '22: International Conference on Human-Agent Interaction, pp. 67-75.
ACM,
2022. [Download][BibSonomy][Doi]
@inproceedings{Obremski_HAI_Paper_2022,
author = {David Obremski and Helena Babette Hering and Paula Friedrich and Birgit Lugrin},
url = {https://doi.org/10.1145%2F3527188.3561921},
year = {2022},
booktitle = {HAI '22: International Conference on Human-Agent Interaction},
publisher = {ACM},
pages = {67-75},
doi = {10.1145/3527188.3561921},
title = {Mixed-Cultural Speech for Intelligent Virtual Agents - the Impact of Different Non-Native Accents Using Natural or Synthetic Speech in the English Language}
}
Abstract:
David Obremski, Birgit Lugrin,
Mixed-Cultural Speech for Mixed-Cultural Users - Natural vs. Synthetic Speech for Virtual Agents, In HAI '22: International Conference on Human-Agent Interaction, pp. 290-292.
ACM,
2022. [Download][BibSonomy][Doi]
@inproceedings{Obremski_HAI_Poster_2022,
author = {David Obremski and Birgit Lugrin},
url = {https://doi.org/10.1145%2F3527188.3563910},
year = {2022},
booktitle = {HAI '22: International Conference on Human-Agent Interaction},
publisher = {ACM},
pages = {290-292},
doi = {10.1145/3527188.3563910},
title = {Mixed-Cultural Speech for Mixed-Cultural Users - Natural vs. Synthetic Speech for Virtual Agents}
}
Abstract:
Akimi Oyanagi, Takuji Narumi, Jean-Luc Lugrin, Kazuma Aoyama, Kenichiro Ito, Tomohiro Amemiya, Michitaka Hirose,
The Possibility of Inducing the Proteus Effect for Social VR Users, In International Conference on Human-Computer Interaction, pp. 143--158. Cham:
Springer Nature Switzerland,
2022. [Download][BibSonomy][Doi]
@inproceedings{10.1007/978-3-031-21707-4_11,
author = {Akimi Oyanagi and Takuji Narumi and Jean-Luc Lugrin and Kazuma Aoyama and Kenichiro Ito and Tomohiro Amemiya and Michitaka Hirose},
url = {https://link.springer.com/chapter/10.1007/978-3-031-21707-4_11},
year = {2022},
booktitle = {International Conference on Human-Computer Interaction},
publisher = {Springer Nature Switzerland},
address = {Cham},
pages = {143--158},
doi = {10.1007/978-3-031-21707-4_11},
title = {The Possibility of Inducing the Proteus Effect for Social VR Users}
}
Abstract:
The Proteus effect in Virtual Reality (VR) happens when the user's behavior or attitude is affected by their avatar's appearance. The appearance of the virtual body somehow affects the emotional, behavioral, and psychological state of its user. In recent years, manipulating one's avatar has become a more common situation because of social VR platforms, and it is considered that the Proteus effect can be induced for social VR users as well. However, the Proteus effect may be disrupted when users become accustomed to embodying an avatar or develop an attachment to it.
Erik Wolf, Nina Döllinger, David Mal, Stephan Wenninger, Andrea Bartl, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Does Distance Matter? Embodiment and Perception of Personalized Avatars in Relation to the Self-Observation Distance in Virtual Reality, In Frontiers in Virtual Reality, Vol. 3.
2022. [Download][BibSonomy][Doi]
@article{wolf2022distance,
author = {Erik Wolf and Nina Döllinger and David Mal and Stephan Wenninger and Andrea Bartl and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
journal = {Frontiers in Virtual Reality},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-frontiers-self-observation-distance.pdf},
year = {2022},
volume = {3},
doi = {10.3389/frvir.2022.1031093},
title = {Does Distance Matter? Embodiment and Perception of Personalized Avatars in Relation to the Self-Observation Distance in Virtual Reality}
}
Abstract:
Virtual reality applications employing avatar embodiment typically use virtual mirrors to allow users to perceive their digital selves not only from a first-person perspective but also from a holistic third-person view. However, due to distance-related biases such as the distance compression effect or a reduced relative rendering resolution, the self-observation distance (SOD) between the user and the virtual mirror might influence how users perceive their embodied avatar. Our article systematically investigates the effects of a short (1 meter), middle (2.5 meters), and far (4 meters) SOD between user and mirror on the perception of personalized and self-embodied avatars. The avatars were photorealistically reconstructed using state-of-the-art photogrammetric methods. Thirty participants were repeatedly exposed to their real-time animated self-embodied avatars in each of the three SOD conditions. In each condition, the personalized avatars were repeatedly altered in their body weight, and participants were asked to judge the (1) sense of embodiment, (2) body weight perception, and (3) affective appraisal towards their avatar. We found that the different SODs are unlikely to influence any of our measures except for the perceived body weight estimation difficulty. Here, the participants judged the difficulty significantly higher for the farthest SOD. We further found that the participants' self-esteem significantly impacted their ability to modify their avatar's body weight to their current body weight and that it positively correlated with the perceived attractiveness of the avatar. Additionally, the participants' concerns about their body shape affected how eerie they perceived their avatars to be. Both measures influenced the perceived body weight estimation difficulty. For practical application, we conclude that the virtual mirror in embodiment scenarios can be freely placed and varied at a distance of one to four meters from the user without expecting major effects on the perception of the avatar.
David Obremski, Helena Babette Hering, Paula Friedrich, Birgit Lugrin,
Exploratory Study on the Perception of Intelligent Virtual Agents With Non-Native Accents Using Synthetic and Natural Speech in German, In INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION.
ACM,
2022. [Download][BibSonomy][Doi]
@inproceedings{Obremski_2022,
author = {David Obremski and Helena Babette Hering and Paula Friedrich and Birgit Lugrin},
url = {https://doi.org/10.1145%2F3536221.3556608},
year = {2022},
booktitle = {INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION},
publisher = {ACM},
doi = {10.1145/3536221.3556608},
title = {Exploratory Study on the Perception of Intelligent Virtual Agents With Non-Native Accents Using Synthetic and Natural Speech in German}
}
Abstract:
David Obremski, Paula Friedrich, Nora Haak, Philipp Schaper, Birgit Lugrin,
The impact of mixed-cultural speech on the stereotypical perception of a virtual robot, In Frontiers in Robotics and AI, Vol. 9.
Frontiers Media SA,
2022. [Download][BibSonomy][Doi]
@article{Obremski_2022,
author = {David Obremski and Paula Friedrich and Nora Haak and Philipp Schaper and Birgit Lugrin},
journal = {Frontiers in Robotics and AI},
url = {https://doi.org/10.3389%2Ffrobt.2022.983955},
year = {2022},
publisher = {Frontiers Media SA},
volume = {9},
doi = {10.3389/frobt.2022.983955},
title = {The impact of mixed-cultural speech on the stereotypical perception of a virtual robot}
}
Abstract:
Carolin Wienrich, David Obremski, Johann Habakuk Israel,
Repeated experience or a virtual nose to reduce simulator sickness? Investigating prediction of the sensorial conflict theory and the rest-frame hypothesis in two virtual games, In Entertainment Computing, Vol. 43, p. 100514.
Elsevier BV,
2022. [Download][BibSonomy][Doi]
@article{Wienrich_2022,
author = {Carolin Wienrich and David Obremski and Johann Habakuk Israel},
journal = {Entertainment Computing},
url = {https://doi.org/10.1016%2Fj.entcom.2022.100514},
year = {2022},
publisher = {Elsevier BV},
pages = {100514},
volume = {43},
doi = {10.1016/j.entcom.2022.100514},
title = {Repeated experience or a virtual nose to reduce simulator sickness? Investigating prediction of the sensorial conflict theory and the rest-frame hypothesis in two virtual games}
}
Abstract:
Sebastian Keppler, Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik, Johann Habakuk Israel,
Self-Touch: An Immersive Interaction-Technique to Enhance Body Awareness, In i-com, Vol. 21(3), pp. 329--337.
2022. [Download][BibSonomy][Doi]
@article{KepplerDöllingerWienrichLatoschikIsrael+2022+329+337,
author = {Sebastian Keppler and Nina Döllinger and Carolin Wienrich and Marc Erich Latoschik and Johann Habakuk Israel},
journal = {i-com},
number = {3},
url = {https://doi.org/10.1515/icom-2022-0028},
year = {2022},
pages = {329--337},
volume = {21},
doi = {10.1515/icom-2022-0028},
title = {Self-Touch: An Immersive Interaction-Technique to Enhance Body Awareness}
}
Abstract:
Physical well-being depends essentially on how one's own body is perceived. A missing correspondence between the perception of one's own body and reality can be distressing and eventually lead to mental illness. Touching one's own body is a multi-sensory experience that can strengthen the feeling for one's own body. We have developed an interaction technique that allows self-touch of one's own body in an immersive environment to support therapy procedures. Through additional visual feedback, we want to strengthen the feeling for one's own body and achieve a lasting effect on body perception. We conducted an expert evaluation to analyse the potential impact of our application and to localize and fix possible usability problems. The experts noted the ease of understanding and the suitability of the interaction technique for increasing body awareness. However, technical challenges such as stable and accurate body tracking were also mentioned. In addition, new ideas were given that would further support body awareness.
Jinghuai Lin, Marc Erich Latoschik,
Digital body, identity and privacy in social virtual reality: A systematic review, In Frontiers in Virtual Reality, Vol. 3.
2022. [Download][BibSonomy][Doi]
@article{lin2022digital,
author = {Jinghuai Lin and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.974652},
year = {2022},
volume = {3},
doi = {10.3389/frvir.2022.974652},
title = {Digital body, identity and privacy in social virtual reality: A systematic review}
}
Abstract:
Social Virtual Reality (social VR or SVR) provides digital spaces for diverse human activities, social interactions, and embodied face-to-face encounters. While our digital bodies in SVR can in general be of almost any conceivable appearance, individualized or even personalized avatars bearing users' likeness recently became an interesting research topic. Such digital bodies show a great potential to enhance the authenticity of social VR citizens and increase the trustworthiness of interpersonal interaction. However, using such digital bodies might expose users to privacy and identity issues such as identity theft: For instance, how do we know whether the avatars we encounter in the virtual world are who they claim to be? Safeguarding users' identities and privacy, and preventing harm from identity infringement, are crucial to the future of social VR. This article provides a systematic review on the protection of users' identity and privacy in social VR, with a specific focus on digital bodies. Based on 814 sources, we identified and analyzed 49 papers that either: 1) discuss or raise concerns about the addressed issues, 2) provide technologies and potential solutions for protecting digital bodies, or 3) examine the relationship between digital bodies and the users of social VR. We notice a severe lack of research and attention on the addressed topic and identify several research gaps that need to be filled. While some legal and ethical concerns about the potential identity issues of the digital bodies have been raised, and although some progress has been made in specific areas such as user authentication, little research has proposed practical solutions. Finally, we suggest potential future research directions for digital body protection and include relevant research that might provide insights. We hope this work could provide a good overview of the existing discussion, potential solutions, and future directions for researchers with similar concerns. We also wish to draw attention to identity and privacy issues in social VR and call for interdisciplinary collaboration.
Christian Schott, Sarah Hofmann, Sebastian von Mammen,
Parametric Skin Wound Generation for Medical Simulations, In 2022 International Conference on Interactive Media, Smart Systems and Emerging Technologies (IMET), pp. 1-8.
2022. [Download][BibSonomy][Doi]
@inproceedings{Schott:2022ab,
author = {Christian Schott and Sarah Hofmann and Sebastian von Mammen},
url = {https://ieeexplore.ieee.org/document/9929659},
year = {2022},
booktitle = {2022 International Conference on Interactive Media, Smart Systems and Emerging Technologies (IMET)},
pages = {1-8},
doi = {10.1109/IMET54801.2022.9929659},
title = {Parametric Skin Wound Generation for Medical Simulations}
}
Abstract:
Medical simulations offer various opportunities to improve treatment training in different areas of medicine. The realism and thus effectiveness of wound care training goes hand in hand with the realism of the wounds themselves. In this paper, we propose a novel approach to (a) procedurally generate convincing-looking wound textures and (b) to dynamically project them onto animated, skinned meshes. Our method allows for creating dermal wounds of different types including abrasions, lacerations and puncture wounds. We analyzed the outcome by recreating visualization instances of these types, demonstrating the applicability of our system.
Timo Menzel, Mario Botsch, Marc Erich Latoschik,
Automated Blendshape Personalization for Faithful Face Animations Using Commodity Smartphones, In Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22)(22). New York, NY, USA:
Association for Computing Machinery,
2022. [Download][BibSonomy][Doi]
@inproceedings{menzel2022automated,
author = {Timo Menzel and Mario Botsch and Marc Erich Latoschik},
number = {22},
url = {https://doi.org/10.1145/3562939.3565622},
year = {2022},
booktitle = {Proceedings of the 28th ACM Symposium on Virtual Reality Software and Technology (VRST ’22)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/3562939.3565622},
title = {Automated Blendshape Personalization for Faithful Face Animations Using Commodity Smartphones}
}
Abstract:
Digital reconstruction of humans has various interesting use-cases. Animated virtual humans, avatars and agents alike, are the central entities in virtual embodied human-computer and human-human encounters in social XR. Here, a faithful reconstruction of facial expressions becomes paramount due to their prominent role in non-verbal behavior and social interaction. Current XR-platforms, like Unity 3D or the Unreal Engine, integrate recent smartphone technologies to animate faces of virtual humans by facial motion capturing. Using the same technology, this article presents an optimization-based approach to generate personalized blendshapes as animation targets for facial expressions. The proposed method combines a position-based optimization with a seamless partial deformation transfer, necessary for a faithful reconstruction. Our method is fully automated and considerably outperforms existing solutions based on example-based facial rigging or deformation transfer, and overall results in a much lower reconstruction error. It also neatly integrates with recent smartphone-based reconstruction pipelines for mesh generation and automated rigging, further paving the way to a widespread application of human-like and personalized avatars and agents in various use-cases.
Philipp Schaper, Anna Riedmann, Miriam Semineth, Birgit Lugrin,
Addressing Motivation in an E-Learning Scenario by applying Narration and Choice, In Proceedings of the 35th British Human-Computer Interaction Conference (BCS HCI 22).
2022. [BibSonomy]
@inproceedings{schaper2022addressing,
author = {Philipp Schaper and Anna Riedmann and Miriam Semineth and Birgit Lugrin},
year = {2022},
booktitle = {Proceedings of the 35th British Human-Computer Interaction Conference (BCS HCI 22)},
title = {Addressing Motivation in an E-Learning Scenario by applying Narration and Choice}
}
Abstract:
Christian Rack, Andreas Hotho, Marc Erich Latoschik,
Comparison of Data Encodings and Machine Learning Architectures for User Identification on Arbitrary Motion Sequences, In Proceedings of the IEEE International conference on artificial intelligence & Virtual Reality (IEEE AIVR).
IEEE,
2022. [Download][BibSonomy][Doi]
@inproceedings{schell2022comparison,
author = {Christian Rack and Andreas Hotho and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeeaivr-schell-comparison-of-data-representations-and-machine-learning-architectures-for-user-identification-on-arbitrary-motion-sequences.pdf},
year = {2022},
booktitle = {Proceedings of the IEEE International conference on artificial intelligence & Virtual Reality (IEEE AIVR)},
publisher = {IEEE},
doi = {10.1109/AIVR56993.2022.00010},
title = {Comparison of Data Encodings and Machine Learning Architectures for User Identification on Arbitrary Motion Sequences}
}
Abstract:
Philipp Schaper, Anna Riedmann, Sebastian Oberdörfer, Birgit Lugrin,
Addressing Waste Separation With a Persuasive Augmented Reality App, In Proceedings of the 24th International Conference on Mobile Human-Computer Interaction (MobileHCI '22). New York, NY, USA:
Association for Computing Machinery,
2022. [BibSonomy][Doi]
@inproceedings{schaper2022addressing,
author = {Philipp Schaper and Anna Riedmann and Sebastian Oberdörfer and Birgit Lugrin},
year = {2022},
booktitle = {Proceedings of the 24th International Conference on Mobile Human-Computer Interaction (MobileHCI '22)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/3546740},
title = {Addressing Waste Separation With a Persuasive Augmented Reality App}
}
Abstract:
Sophia C. Steinhaeusser, Anna Riedmann, Philipp Schaper, Emily Guthmann, Julia Pfister, Katharina Schmitt, Theresa Wild, Birgit Lugrin,
Second Language Learning through Storytelling with a Social Robot-An Online Case Study, In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 67-74.
2022. [BibSonomy]
@inproceedings{steinhaeusser2022second,
author = {Sophia C. Steinhaeusser and Anna Riedmann and Philipp Schaper and Emily Guthmann and Julia Pfister and Katharina Schmitt and Theresa Wild and Birgit Lugrin},
year = {2022},
booktitle = {2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)},
pages = {67-74},
title = {Second Language Learning through Storytelling with a Social Robot-An Online Case Study}
}
Abstract:
Sophia C. Steinhaeusser, Martina Lein, Melissa Donnermann, Birgit Lugrin,
Designing Social Robots' Speech in the Hotel Context-A Series of Online Studies, In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 163-170.
2022. [BibSonomy]
@inproceedings{steinhaeusser2022designing,
author = {Sophia C. Steinhaeusser and Martina Lein and Melissa Donnermann and Birgit Lugrin},
year = {2022},
booktitle = {2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)},
pages = {163-170},
title = {Designing Social Robots' Speech in the Hotel Context-A Series of Online Studies}
}
Abstract:
Melissa Donnermann, Philipp Schaper, Birgit Lugrin,
Investigating Adaptive Robot Tutoring in a Long-Term Interaction in Higher Education, In 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022), pp. 171-178.
2022. [BibSonomy]
@inproceedings{donnermann2022investigating,
author = {Melissa Donnermann and Philipp Schaper and Birgit Lugrin},
year = {2022},
booktitle = {31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022)},
pages = {171-178},
title = {Investigating Adaptive Robot Tutoring in a Long-Term Interaction in Higher Education}
}
Abstract:
Stefan Müller, Tobias Brixner, Andreas Knote, Wilhelm Schnepp, Samuel Truman, Anne Vetter, Sebastian von Mammen,
femtoPro: Ultrafast Phenomena in Virtual Reality, In The International Conference on Ultrafast Phenomena (UP) 2022 (2022), paper W4A.42, p. W4A.42.
Optica Publishing Group,
2022. [Download][BibSonomy]
@inproceedings{mullerFemtoProUltrafastPhenomena2022,
author = {Stefan Müller and Tobias Brixner and Andreas Knote and Wilhelm Schnepp and Samuel Truman and Anne Vetter and Sebastian von Mammen},
url = {https://opg.optica.org/abstract.cfm?uri=UP-2022-W4A.42},
year = {2022},
booktitle = {The International Conference on Ultrafast Phenomena (UP) 2022 (2022), paper W4A.42},
publisher = {Optica Publishing Group},
pages = {W4A.42},
title = {femtoPro: Ultrafast Phenomena in Virtual Reality}
}
Abstract:
We introduce “femtoPro,” an interactive simulator of an ultrafast laser laboratory in virtual reality (VR). Gaussian beam propagation as well as linear and nonlinear optical phenomena are calculated in real time on consumer-grade VR devices.
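As background for the real-time beam calculations mentioned above (these are general textbook optics relations, not equations quoted from the femtoPro paper), a Gaussian beam with waist w_0 and wavelength \lambda propagates as:
% Textbook Gaussian-beam relations, given for reference only; not taken from the femtoPro paper.
\[
  w(z) = w_0 \sqrt{1 + \left(\frac{z}{z_R}\right)^2},
  \qquad
  z_R = \frac{\pi w_0^2}{\lambda},
  \qquad
  R(z) = z\left[1 + \left(\frac{z_R}{z}\right)^2\right],
\]
where \(w(z)\) is the beam radius, \(z_R\) the Rayleigh range, and \(R(z)\) the wavefront curvature at distance \(z\) from the waist.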
J. Heß, P. Karageorgos, T. Richter, B. Müller, A. Riedmann, P. Schaper, B. Lugrin,
Digitale Leseförderung: Effekte eines silbenbasierten digitalen Lesetrainings in Klasse 2, In Vortrag auf dem 52. Kongress der Deutschen Gesellschaft für Psychologie (DGPs).
2022. [BibSonomy]
@conference{hess2022digitale,
author = {J. Heß and P. Karageorgos and T. Richter and B. Müller and A. Riedmann and P. Schaper and B. Lugrin},
year = {2022},
booktitle = {Vortrag auf dem 52. Kongress der Deutschen Gesellschaft für Psychologie (DGPs)},
title = {Digitale Leseförderung: Effekte eines silbenbasierten digitalen Lesetrainings in Klasse 2}
}
Abstract:
Rebecca Hein, Marc Erich Latoschik, Carolin Wienrich,
Usability and User Experience of Virtual Objects Supporting Learning and Communicating in Virtual Reality, In Bastian Pfleging, Kathrin Gerling, Sven Mayer (Eds.), Mensch und Computer 2022 - Tagungsband, pp. 510-514. New York:
ACM,
2022. [Download][BibSonomy][Doi]
@inproceedings{mci/Hein2022,
author = {Rebecca Hein and Marc Erich Latoschik and Carolin Wienrich},
url = {https://dl.gi.de/handle/20.500.12116/39266},
year = {2022},
booktitle = {Mensch und Computer 2022 - Tagungsband},
editor = {Bastian Pfleging and Kathrin Gerling and Sven Mayer},
publisher = {ACM},
address = {New York},
pages = {510-514},
doi = {10.1145/3543758.3547568},
title = {Usability and User Experience of Virtual Objects Supporting Learning and Communicating in Virtual Reality}
}
Abstract:
This study evaluates the usability and user experience of the InteractionSuitcase, a collection of virtual objects with cultural connotations (associated with German or Anglo-American culture). The objects are intended to promote communication about cultural differences and similarities in English lessons at German schools and are thus part of a didactic concept for using social VR for trans- and intercultural learning. Future teachers used the virtual objects during a practical seminar, rated them as useful, and reported a positive user experience. Since the virtual objects are meant to encourage communication, the sense of connectedness rather than isolation is a particularly important result. The results demonstrate the readiness of the InteractionSuitcase for further pedagogical applications.
Simon Keller, Tobias Hoßfeld, Sebastian von Mammen,
Edge-Case Integration into Established NAT Traversal Techniques, In 2022 IEEE Ninth International Conference on Communications and Electronics (ICCE), pp. 75-80.
2022. [Download][BibSonomy][Doi]
@inproceedings{9852092,
author = {Simon Keller and Tobias Hoßfeld and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-icce-keller.pdf},
year = {2022},
booktitle = {2022 IEEE Ninth International Conference on Communications and Electronics (ICCE)},
pages = {75-80},
doi = {10.1109/ICCE55644.2022.9852092},
title = {Edge-Case Integration into Established NAT Traversal Techniques}
}
Abstract:
Traversing Network Address Translation (NAT) is often necessary for establishing direct communication between clients. The traversal of NAT with static port translation is solved in many cases by the Session Traversal Utilities for NAT (STUN) protocol. Nevertheless, it does not cover the traversal of progressing symmetric and random symmetric NAT, which make it necessary to correctly predict opened ports. This paper presents a method for predicting (a) progressing symmetric NAT-translated ports based on a network traffic model and the Expected Value Method, and (b) random symmetric NAT-translated ports based on heuristics between monitored and opened ports across numerous traversal attempts. Tests were conducted in German cities using local cellular communication providers. Compared to established approaches, they yielded considerable improvements traversing progressing symmetric NAT and slight improvements traversing random symmetric NAT in real-world environments.
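For intuition only, the sketch below extrapolates the next external port of a progressing symmetric NAT from previously observed allocations; it is not the paper's Expected Value Method or its network traffic model, and the port values and the linear-progression assumption are invented for illustration.

# Illustrative sketch only: extrapolating the next external port of a
# "progressing" symmetric NAT from previously observed port allocations.
# Not the paper's Expected Value Method; observation values are hypothetical.

from statistics import median

def predict_next_port(observed_ports: list[int], lookahead: int = 1) -> int:
    """Predict the port the NAT will hand out `lookahead` allocations later,
    assuming a roughly constant increment between consecutive allocations."""
    if len(observed_ports) < 2:
        raise ValueError("need at least two observed ports")
    deltas = [b - a for a, b in zip(observed_ports, observed_ports[1:])]
    step = median(deltas)                  # robust against a few irregular jumps
    predicted = observed_ports[-1] + lookahead * step
    return int(predicted) % 65536          # ports wrap around the 16-bit range

# Example: ports observed via consecutive binding requests to helper servers.
print(predict_next_port([40004, 40008, 40012, 40016]))   # -> 40020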
Andrea Bartl, Christian Merz, Daniel Roth, Marc Erich Latoschik,
The Effects of Avatar and Environment Design on Embodiment, Presence, Activation, and Task Load in a Virtual Reality Exercise Application, In IEEE International Symposium on Mixed and Augmented Reality (ISMAR).
2022. [Download][BibSonomy]
@inproceedings{bartl2022effects,
author = {Andrea Bartl and Christian Merz and Daniel Roth and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ismar-ilast-avatar-environment-design-vr-exercise-application.pdf},
year = {2022},
booktitle = {IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
title = {The Effects of Avatar and Environment Design on Embodiment, Presence, Activation, and Task Load in a Virtual Reality Exercise Application}
}
Abstract:
The development of embodied Virtual Reality (VR) systems involves multiple central design choices. These design choices affect the user perception and therefore require thorough consideration. This article reports on two user studies investigating the influence of common design choices on relevant intermediate factors (sense of embodiment, presence, motivation, activation, and task load) in a VR application for physical exercises. The first study manipulated the avatar fidelity (abstract, partial body vs. anthropomorphic, full-body) and the environment (with vs. without mirror). The second study manipulated the avatar type (healthy vs. injured) and the environment type (beach vs. hospital) and, hence, the avatar-environment congruence. The full-body avatar significantly increased the sense of embodiment and decreased mental demand. Interestingly, the mirror did not influence the dependent variables. The injured avatar significantly increased the temporal demand. The beach environment significantly reduced the tense activation. On the beach, participants felt more present in the incongruent condition embodying the injured avatar.
Christian Krupitzer, Jens Naber, Jan-Philipp Stauffert, Jan Mayer, Jan Spielmann, Paul Ehmann, Noel Boci, Maurice Bürkle, André Ho, Clemens Komorek, Felix Heinickel, Samuel Kounev, Christian Becker, Marc Erich Latoschik,
CortexVR: Immersive analysis and training of cognitive executive functions of soccer players using virtual reality and machine learning, In Frontiers in Psychology, Vol. 13.
Frontiers,
2022. [Download][BibSonomy][Doi]
@article{krupitzer2022cortexvr,
author = {Christian Krupitzer and Jens Naber and Jan-Philipp Stauffert and Jan Mayer and Jan Spielmann and Paul Ehmann and Noel Boci and Maurice Bürkle and André Ho and Clemens Komorek and Felix Heinickel and Samuel Kounev and Christian Becker and Marc Erich Latoschik},
journal = {Frontiers in Psychology},
url = {https://www.frontiersin.org/articles/10.3389/fpsyg.2022.754732/full},
year = {2022},
publisher = {Frontiers},
volume = {13},
doi = {10.3389/fpsyg.2022.754732},
title = {CortexVR: Immersive analysis and training of cognitive executive functions of soccer players using virtual reality and machine learning}
}
Abstract:
Goal: This paper presents an immersive Virtual Reality (VR) system to analyze and train Executive Functions (EFs) of soccer players. EFs are important cognitive functions for athletes. They are a relevant quality that distinguishes amateurs from professionals.
Method: The system is based on immersive technology; hence, the user interacts naturally and experiences a training session in a virtual world. The proposed system has a modular design supporting the extension of various so-called game modes. Game modes combine selected game mechanics with specific simulation content to target particular training aspects. The system architecture decouples selection/parameterization and analysis of training sessions via a coaching app from a Unity3D-based VR simulation core. Monitoring of user performance and progress is recorded by a database that sends the necessary feedback to the coaching app for analysis.
Results: The system is tested for VR-critical performance criteria to reveal the usefulness of a new interaction paradigm in the cognitive training and analysis of EFs. Subjective ratings for overall usability show that the design as a VR application enhances the user experience compared to a traditional desktop app, whereas the new, unfamiliar interaction paradigm does not negatively impact the effort of using the application.
Conclusion: The system can provide immersive training of EFs in a fully virtual environment, eliminating potential distraction. It further provides an easy-to-use analysis tool to compare users as well as an automatic, adaptive training mode.
Kathrin Gemesi, Sophie Laura Holzmann, Regine Hochrein, Nina Döllinger, Carolin Wienrich, Natascha-Alexandra Weinberger, Claudia Luck-Sikorski, Christina Holzapfel,
Attitude of Nutrition Experts Toward Psychotherapy and Virtual Reality as Part of Obesity Treatment—An Online Survey, In Frontiers in Psychiatry, Vol. 13.
Frontiers Media SA,
2022. [Download][BibSonomy][Doi]
@article{gemesi2022attitude,
author = {Kathrin Gemesi and Sophie Laura Holzmann and Regine Hochrein and Nina Döllinger and Carolin Wienrich and Natascha-Alexandra Weinberger and Claudia Luck-Sikorski and Christina Holzapfel},
journal = {Frontiers in Psychiatry},
url = {https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9082543/},
year = {2022},
publisher = {Frontiers Media SA},
volume = {13},
doi = {10.3389/fpsyt.2022.787832},
title = {Attitude of Nutrition Experts Toward Psychotherapy and Virtual Reality as Part of Obesity Treatment—An Online Survey}
}
Abstract:
Nina Döllinger, Christopher Göttfert, Erik Wolf, David Mal, Marc Erich Latoschik, Carolin Wienrich,
Analyzing Eye Tracking Data in Mirror Exposure, In 2022 Conference on Mensch und Computer, p. 513–517.
2022. [Download][BibSonomy][Doi]
@inproceedings{dollinger2022eyetracking,
author = {Nina Döllinger and Christopher Göttfert and Erik Wolf and David Mal and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-muc-eyetracking_in_mirror_exposition-preprint.pdf},
year = {2022},
booktitle = {2022 Conference on Mensch und Computer},
pages = {513–517},
doi = {10.1145/3543758.3547567},
title = {Analyzing Eye Tracking Data in Mirror Exposure}
}
Abstract:
Mirror exposure is an important method in the treatment of body image disturbances. Eye tracking can support the unaffected assessment of attention biases during mirror exposure. However, the analysis of eye tracking data in mirror exposure comes with various difficulties and is associated with a high manual workload during data processing. We present an automated data processing framework that enables us to determine any body part as an area of interest without placing markers on the bodies of participants. A short, formative user study proved the quality compared to the gold standard. The automatic processing and openness for different systems allow a broad range of applications.
Erik Wolf, David Mal, Viktor Frohnapfel, Nina Döllinger, Stephan Wenninger, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Plausibility and Perception of Personalized Virtual Humans between Virtual and Augmented Reality, In 2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 489-498.
2022. [Download][BibSonomy][Doi]
@inproceedings{wolf2022plausibility,
author = {Erik Wolf and David Mal and Viktor Frohnapfel and Nina Döllinger and Stephan Wenninger and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ismar-avatar_plausibility_and_perception_display_study-preprint.pdf},
year = {2022},
booktitle = {2022 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
pages = {489-498},
doi = {10.1109/ISMAR55827.2022.00065},
title = {Plausibility and Perception of Personalized Virtual Humans between Virtual and Augmented Reality}
}
Abstract:
This article investigates the effects of different XR displays on the perception and plausibility of personalized virtual humans. We compared immersive virtual reality (VR), video see-through augmented reality (VST AR), and optical see-through AR (OST AR). The personalized virtual alter egos were generated by state-of-the-art photogrammetry methods. 42 participants were repeatedly exposed to animated versions of their 3D-reconstructed virtual alter egos in each of the three XR display conditions. The reconstructed virtual alter egos were additionally modified in body weight for each repetition. We show that the display types lead to different degrees of incongruence between the renderings of the virtual humans and the presentation of the respective environmental backgrounds, leading to significant effects of perceived mismatches as part of a plausibility measurement. The device-related effects were further partly confirmed by subjective misestimations of the modified body weight and the measured spatial presence. Here, the exceedingly incongruent OST AR condition leads to the significantly highest weight misestimations as well as to the lowest perceived spatial presence. However, similar effects could not be confirmed for the affective appraisal (i.e., humanness, eeriness, or attractiveness) of the virtual humans, giving rise to the assumption that these factors might be unrelated to each other.
Sooraj K Babu, Sebastian von Mammen,
Learning Classifier Systems as Recommender Systems, In Lifelike Computing Systems Workshop LIFELIKE 2022.
2022. [Download][BibSonomy]
@inproceedings{babu2022learning,
author = {Sooraj K Babu and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/BabuVonMammen2022.pdf},
year = {2022},
booktitle = {Lifelike Computing Systems Workshop LIFELIKE 2022},
title = {Learning Classifier Systems as Recommender Systems}
}
Abstract:
A multitude of software domains, ranging from e-commerce services to personal digital assistants, rely on recommender systems (RS) to enhance their performance and customer experience. There are several established approaches to RS, namely collaborative filtering, content-based filtering, and various hybrid systems that use different verticals of data, such as the user's preferences or the metadata of the items to be suggested, to generate recommendations. Contrary to those, we adopt a classifier system to benefit from its rule-based knowledge representation and interwoven learning mechanisms. In this paper, we present our approach and elaborate on its potential use to flexibly and effectively guide a user through the application of an authoring platform.
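To make the rule-based idea concrete, here is a minimal, hypothetical sketch of an LCS-style recommender; the rule encoding, reward update, and authoring-platform actions are invented for illustration and are not the authors' implementation.

# Minimal, illustrative sketch of the rule-based matching idea behind learning
# classifier systems (LCS) used as a recommender. NOT the authors' system;
# rule encoding, reward scheme, and the action set are hypothetical.

import random
from dataclasses import dataclass, field

@dataclass
class Rule:
    condition: dict          # attribute -> required value, '#' means "don't care"
    action: str              # recommended item / next authoring step
    fitness: float = 1.0     # updated from feedback (reward)

    def matches(self, situation: dict) -> bool:
        return all(v == '#' or situation.get(k) == v
                   for k, v in self.condition.items())

@dataclass
class TinyLCS:
    rules: list = field(default_factory=list)

    def recommend(self, situation: dict) -> str:
        matching = [r for r in self.rules if r.matches(situation)]
        if not matching:                       # covering: create a rule on the fly
            r = Rule(condition=dict(situation), action=random.choice(ACTIONS))
            self.rules.append(r)
            matching = [r]
        return max(matching, key=lambda r: r.fitness).action

    def reward(self, situation: dict, action: str, payoff: float, beta: float = 0.2):
        for r in self.rules:
            if r.matches(situation) and r.action == action:
                r.fitness += beta * (payoff - r.fitness)   # Widrow-Hoff style update

ACTIONS = ["insert_scene", "add_character", "preview", "export"]   # invented actions

lcs = TinyLCS()
state = {"user_level": "novice", "last_step": "add_character"}
suggestion = lcs.recommend(state)
lcs.reward(state, suggestion, payoff=1.0)      # e.g., the user accepted the suggestion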
Sarah Hofmann, Andreas Müller, Sebastian von Mammen,
Interactive Hemodynamic Simulation Model of a Cross-scale Cardiovascular System, In ANNSIM 22.
Society for Modeling & Simulation International (SCS),
2022. [Download][BibSonomy]
@inproceedings{Hofmann:2022ac,
author = {Sarah Hofmann and Andreas Müller and Sebastian von Mammen},
url = {https://scs.org/wp-content/uploads/2022/07/87_Paper_INTERACTIVE-HEMODYNAMIC-SIMULATION-MODEL-OF-A-CROSS-SCALE-CARDIOVASCULAR-SYSTEM.pdf},
year = {2022},
booktitle = {ANNSIM 22},
publisher = {Society for Modeling & Simulation International (SCS)},
title = {Interactive Hemodynamic Simulation Model of a Cross-scale Cardiovascular System}
}
Abstract:
Sarah Hofmann, Maximilian Seeger, Henning Rogge-Pott, Sebastian von Mammen,
Real-Time Music-Driven Movie Design Framework, In Rémi Ronfard, Hui-Yin Wu (Eds.), Workshop on Intelligent Cinematography and Editing.
The Eurographics Association,
2022. [Download][BibSonomy][Doi]
@inproceedings{10.2312:wiced.20221052,
author = {Sarah Hofmann and Maximilian Seeger and Henning Rogge-Pott and Sebastian von Mammen},
url = {https://diglib.eg.org/bitstream/handle/10.2312/wiced20221052/053-060.pdf?sequence=1&isAllowed=y},
year = {2022},
booktitle = {Workshop on Intelligent Cinematography and Editing},
editor = {Rémi Ronfard and Hui-Yin Wu},
publisher = {The Eurographics Association},
doi = {10.2312/wiced.20221052},
title = {Real-Time Music-Driven Movie Design Framework}
}
Abstract:
Sophia C Steinhaeusser, Sebastian Oberdörfer, Sebastian von Mammen, Marc Erich Latoschik, Birgit Lugrin,
Joyful Adventures and Frightening Places - Designing Emotion-Inducing Virtual Environments, In Frontiers in Virtual Reality, Vol. 3, p. 2673–4192.
2022. [Download][BibSonomy][Doi]
@article{steinhaeusser2022joyful,
author = {Sophia C Steinhaeusser and Sebastian Oberdörfer and Sebastian von Mammen and Marc Erich Latoschik and Birgit Lugrin},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.919163},
year = {2022},
pages = {2673–4192},
volume = {3},
doi = {10.3389/frvir.2022.919163},
title = {Joyful Adventures and Frightening Places - Designing Emotion-Inducing Virtual Environments}
}
Abstract:
Rebecca M. Hein, Marc Erich Latoschik, Carolin Wienrich,
Inter- and Transcultural Learning in Social Virtual Reality: A Proposal for an Inter- and Transcultural Virtual Object Database to be Used in the Implementation, Reflection, and Evaluation of Virtual Encounters, In Lars Erik Holmquist Mark Billinghurst, Fotis Liarokapis, Mu-Chun Su (Eds.), Multimodal Technologies and Interaction, Vol. 6(7), p. 50.
2022. [Download][BibSonomy][Doi]
@article{mti6070050,
author = {Rebecca M. Hein and Marc Erich Latoschik and Carolin Wienrich},
journal = {Multimodal Technologies and Interaction},
number = {7},
url = {https://www.mdpi.com/2414-4088/6/7/50},
year = {2022},
editor = {Lars Erik Holmquist Mark Billinghurst, Fotis Liarokapis and Mu-Chun Su},
pages = {50},
volume = {6},
doi = {10.3390/mti6070050},
title = {Inter- and Transcultural Learning in Social Virtual Reality: A Proposal for an Inter- and Transcultural Virtual Object Database to be Used in the Implementation, Reflection, and Evaluation of Virtual Encounters}
}
Abstract:
Visual stimuli are frequently used to improve memory, language learning or perception, and understanding of metacognitive processes. However, in virtual reality (VR), there are few systematically and empirically derived databases. This paper proposes the first collection of virtual objects based on empirical evaluation for inter- and transcultural encounters between English- and German-speaking learners. We used explicit and implicit measurement methods to identify cultural associations and the degree of stereotypical perception for each virtual stimulus (n = 293) through two online studies, including native German and English-speaking participants. The analysis resulted in a final well-describable database of 128 objects (called InteractionSuitcase). In future applications, the objects can be used as a great interaction or conversation asset and behavioral measurement tool in social VR applications, especially in the field of foreign language education. For example, encounters can use the objects to describe their culture, or teachers can intuitively assess stereotyped attitudes of the encounters.
Anna Riedmann, Philipp Schaper, Melissa Donnermann, Martina Lein, Sophia C. Steinhaeusser, Panagiotis Karageorgos, Bettina Müller, Tobias Richter, Birgit Lugrin,
Iteratively Digitizing an Analogue Syllable-Based Reading Intervention, In Interacting with Computers.
Oxford University Press,
2022. [BibSonomy][Doi]
@article{2022,
author = {Anna Riedmann and Philipp Schaper and Melissa Donnermann and Martina Lein and Sophia C. Steinhaeusser and Panagiotis Karageorgos and Bettina Müller and Tobias Richter and Birgit Lugrin},
journal = {Interacting with Computers},
year = {2022},
publisher = {Oxford University Press},
doi = {10.1093/iwc/iwac005},
title = {Iteratively Digitizing an Analogue Syllable-Based Reading Intervention}
}
Abstract:
Anna Riedmann, Philipp Schaper, Benedikt Jakob, Birgit Lugrin,
A Theory Based Adaptive Pedagogical Agent in a Reading App for Primary Students - A User Study, In S. Crossley, E. Popescu (Eds.), Intelligent Tutoring Systems. ITS 2022. Lecture Notes in Computer Science, Vol. 13284.
Springer,
2022. [BibSonomy][Doi]
@inproceedings{riedmann2022theory,
author = {Anna Riedmann and Philipp Schaper and Benedikt Jakob and Birgit Lugrin},
year = {2022},
booktitle = {Intelligent Tutoring Systems. ITS 2022. Lecture Notes in Computer Science},
editor = {S. Crossley and E. Popescu},
publisher = {Springer},
volume = {13284},
doi = {10.1007/978-3-031-09680-8_26},
title = {A Theory Based Adaptive Pedagogical Agent in a Reading App for Primary Students - A User Study}
}
Abstract:
Anna Riedmann, Philipp Schaper, Birgit Lugrin,
Integration of a social robot and gamification in adult learning and effects on motivation, engagement and performance, In AI & Society.
Springer,
2022. [BibSonomy][Doi]
@article{Riedmann_2022,
author = {Anna Riedmann and Philipp Schaper and Birgit Lugrin},
journal = {AI & Society},
year = {2022},
publisher = {Springer},
doi = {10.1007/s00146-022-01514-y},
title = {Integration of a social robot and gamification in adult learning and effects on motivation, engagement and performance}
}
Abstract:
Sebastian Oberdörfer, Philipp Krop, Samantha Straka, Silke Grafe, Marc Erich Latoschik,
Fly My Little Dragon: Using AR to Learn Geometry, In Proceedings of the IEEE Conference on Games (CoG '22), pp. 528-531.
IEEE,
2022. [Download][BibSonomy][Doi]
@inproceedings{oberdorfer2022little,
author = {Sebastian Oberdörfer and Philipp Krop and Samantha Straka and Silke Grafe and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-fly-my-little-dragon-preprint.pdf},
year = {2022},
booktitle = {Proceedings of the IEEE Conference on Games (CoG '22)},
publisher = {IEEE},
pages = {528-531},
doi = {10.1109/CoG51982.2022.9893601},
title = {Fly My Little Dragon: Using AR to Learn Geometry}
}
Abstract:
We present the gamified AR learning environment ARES. During gameplay, learners guide the little dragon "Ares" through dungeons. Using AR, ARES integrates the dungeons three-dimensionally into the real world. Each dungeon represents a coordinate system with obstacles that ultimately create mathematical exercises, e.g., rotating a door to let the dragon fly through it. Overcoming such a challenge requires a learner to spatially analyze the exercise and apply the fundamental mathematical principles. ARES displays a debriefing screen at the end of a dungeon to further support the learning process. In a preliminary qualitative user study, preservice and in-service teachers saw great potential in ARES for providing further practical and motivating exercises to deepen knowledge in the classroom and at home.
E. Ganal, M. Heimbrock, P. Schaper, B. Lugrin,
Don’t Touch This! - Investigating the Potential of Visualizing Touched Surfaces on the Consideration of Behavior Change, In N. Baghaei, J. Vassileva, R. Ali, K. Oyibo (Eds.), Persuasive Technology 2022.
Springer International Publishing,
2022. [BibSonomy]
@inproceedings{ganal2022touch,
author = {E. Ganal and M. Heimbrock and P. Schaper and B. Lugrin},
year = {2022},
booktitle = {Persuasive Technology 2022},
editor = {N. Baghaei and J. Vassileva and R. Ali and K. Oyibo},
publisher = {Springer International Publishing},
title = {Don’t Touch This! - Investigating the Potential of Visualizing Touched Surfaces on the Consideration of Behavior Change}
}
Abstract:
Philipp Krop, Samantha Straka, Melanie Ullrich, Maximilian Ertl, Marc Erich Latoschik,
IT-Supported Request Management for Clinical Radiology: Contextual Design and Remote Prototype Testing, In CHI Conference on Human Factors in Computing Systems Extended Abstracts (45), pp. 1-8. New York, NY, USA:
Association for Computing Machinery,
2022. [Download][BibSonomy][Doi]
@inproceedings{krop2022a,
author = {Philipp Krop and Samantha Straka and Melanie Ullrich and Maximilian Ertl and Marc Erich Latoschik},
number = {45},
url = {https://dl.acm.org/doi/10.1145/3491101.3503571},
year = {2022},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI EA '22},
pages = {1-8},
doi = {10.1145/3491101.3503571},
title = {IT-Supported Request Management for Clinical Radiology: Contextual Design and Remote Prototype Testing}
}
Abstract:
Management of radiology requests in larger clinical contexts is characterized by a complex and distributed workflow. In our partner hospital, representing many similar clinics, these processes often still rely on exchanging physical papers and forms, making patient or case data challenging to access. This often leads to phone calls with long waiting queues, which are time-inefficient and result in frequent interrupts. We report on a user-centered design approach based on Rapid Contextual Design with an additional focus group to optimize and iteratively develop a new workflow. Participants found our prototypes fast and intuitive, the design clean and consistent, relevant information easy to access, and the request process fast and easy. Due to the COVID pandemic, we switched to remote prototype testing, which yielded equally good feedback and increased the participation rate. In the end, we propose best practices for remote prototype testing in hospitals with complex and distributed workflows.
Chris Zimmerer, Martin Fischbach, Marc Erich Latoschik,
A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases, In CHI Conference on Human Factors in Computing Systems Extended Abstracts. New York, NY, USA:
Association for Computing Machinery,
2022. [Download][BibSonomy][Doi]
@inproceedings{10.1145/3491101.3503552,
author = {Chris Zimmerer and Martin Fischbach and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-chi-case-study-mmi-zimmerer.pdf},
year = {2022},
booktitle = {CHI Conference on Human Factors in Computing Systems Extended Abstracts},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI EA '22},
doi = {10.1145/3491101.3503552},
title = {A Case Study on the Rapid Development of Natural and Synergistic Multimodal Interfaces for XR Use-Cases}
}
Abstract:
Multimodal Interfaces (MMIs) supporting the synergistic use of natural modalities like speech and gesture have been conceived as promising for spatial or 3D interactions, e.g., in Virtual, Augmented, and Mixed Reality (XR for short). Yet, the currently prevailing user interfaces are unimodal. Commercially available software platforms like the Unity or Unreal game engines simplify the complexity of developing XR applications through appropriate tool support. They provide ready-to-use device integration, e.g., for 3D controllers or motion tracking, and corresponding interaction techniques such as menus, (3D) point-and-click, or even simple symbolic gestures to rapidly develop unimodal interfaces. Comparable tool support is still missing for multimodal solutions in this and similar areas. We believe that this hinders user-centered research based on rapid prototyping of MMIs, the identification and formulation of practical design guidelines, the development of killer applications highlighting the power of MMIs, and ultimately a widespread adoption of MMIs. This article investigates potential reasons for the ongoing uncommonness of MMIs. Our case study illustrates and analyzes lessons learned during the development and application of a toolchain that supports rapid development of natural and synergistic MMIs for XR use-cases. We analyze the toolchain in terms of developer usability, development time, and MMI customization. This analysis is based on the knowledge gained in years of research and academic education. Specifically, it reflects on the development of appropriate MMI tools and their application in various demo use-cases, in user-centered research, and in the lab work of a mandatory MMI course of an HCI master’s program. The derived insights highlight successful choices made as well as potential areas for improvement.
Chris Zimmerer, Philipp Krop, Martin Fischbach, Marc Erich Latoschik,
Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface, In CHI Conference on Human Factors in Computing Systems. New York, NY, USA:
Association for Computing Machinery,
2022. [Download][BibSonomy][Doi]
@inproceedings{10.1145/3491102.3502062,
author = {Chris Zimmerer and Philipp Krop and Martin Fischbach and Marc Erich Latoschik},
url = {https://dl.acm.org/doi/10.1145/3491102.3502062},
year = {2022},
booktitle = {CHI Conference on Human Factors in Computing Systems},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI '22},
doi = {10.1145/3491102.3502062},
title = {Reducing the Cognitive Load of Playing a Digital Tabletop Game with a Multimodal Interface}
}
Abstract:
Multimodal Interfaces (MMIs) combining speech and spatial input have the potential to elicit minimal cognitive load. Low cognitive load increases effectiveness as well as user satisfaction and is regarded as an important aspect of intuitive use. While this potential has been extensively theorized in the research community, experiments that provide supporting observations based on functional interfaces are still scarce. In particular, there is a lack of studies comparing the commonly used Unimodal Interfaces (UMIs) with theoretically superior synergistic MMI alternatives. Yet, these studies are an essential prerequisite for generalizing results, developing practice-oriented guidelines, and ultimately exploiting the potential of MMIs in a broader range of applications. This work contributes a novel observation towards the resolution of this shortcoming in the context of the following combination of applied interaction techniques, tasks, application domain, and technology: We present a comprehensive evaluation of a synergistic speech & touch MMI and a touch-only menu-based UMI (interaction techniques) for selection and system control tasks in a digital tabletop game (application domain) on an interactive surface (technology). Cognitive load, user experience, and intuitive use are evaluated, with the former being assessed by means of the dual-task paradigm. Our experiment shows that the implemented MMI causes significantly less cognitive load and is perceived significantly more usable and intuitive than the UMI. Based on our results, we derive recommendations for the interface design of digital tabletop games on interactive surfaces. Further, we argue that our results and design recommendations are suitable to be generalized to other application domains on interactive surfaces for selection and system control tasks.
Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Cedric Buche, Marc Erich Latoschik,
Controlling the STAGE: A High-Level Control System for Virtual Audiences In Virtual Reality, In Frontiers in Virtual Reality – Virtual Reality and Human Behaviour, Vol. 3, p. 2673–4192.
2022. [Download][BibSonomy]
@article{glemarec2022controlling,
author = {Yann Glémarec and Jean-Luc Lugrin and Anne-Gwenn Bosser and Cedric Buche and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality – Virtual Reality and Human Behaviour},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.876433/abstract},
year = {2022},
pages = {2673–4192},
volume = {3},
title = {Controlling the STAGE: A High-Level Control System for Virtual Audiences In Virtual Reality}
}
Abstract:
This article presents a novel method for controlling a virtual audience system (VAS) in a Virtual Reality (VR) application, called STAGE, which was originally designed for supervised public speaking training in university seminars dedicated to the preparation and delivery of scientific talks.
We are interested in creating pedagogical narratives: narratives encompass affective phenomena, and rather than organizing events that change the course of a training scenario, pedagogical plans using our system focus on organizing the affects it arouses in the trainees.
Efficiently controlling a virtual audience towards a specific training objective while evaluating the speaker's performance presents a challenge for a seminar instructor: controlling the virtual audience, evaluating the speaker's performance, and adjusting the audience so it reacts quickly to the user's behaviors and interactions place high cognitive and physical demands on the instructor. It is indeed a critical limitation of many existing systems that they rely on a Wizard of Oz approach, where the tutor drives the audience in reaction to the user's performance. We address this problem by integrating into a VAS a high-level control component for tutors, which allows using predefined audience behavior rules, defining custom ones, and intervening at run-time for finer control of the unfolding of the pedagogical plan. At its core, this component offers a tool to program, select, modify, and monitor interactive training narratives using a high-level representation.
STAGE offers the following features: i) a high-level API to program pedagogical narratives focusing on a specific public speaking situation and training objectives, ii) an interactive visualization interface, iii) computation and visualization of user metrics, iv) a semi-autonomous virtual audience composed of virtual spectators that react automatically to the speaker and surrounding spectators while following the pedagogical plan, and v) the possibility for the instructor to embody a virtual spectator to ask questions or guide the speaker from within the virtual environment. We present the design and implementation of the tutoring system and its integration into STAGE, and discuss its reception by end-users.
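To illustrate what such a high-level, rule-based audience control component could look like, the sketch below is purely hypothetical; STAGE's actual API, rule syntax, and speaker metrics are not described in the abstract, so every name here is invented.

# Hypothetical sketch of a high-level, rule-based virtual-audience control.
# Not STAGE's API: rule names, metrics, and behaviors are invented for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AudienceRule:
    name: str
    trigger: Callable[[Dict[str, float]], bool]   # condition over speaker metrics
    behavior: str                                  # behavior broadcast to spectators

def evaluate_rules(rules: List[AudienceRule], speaker_metrics: Dict[str, float]) -> List[str]:
    """Return the behaviors of all rules whose trigger fires for the current metrics."""
    return [r.behavior for r in rules if r.trigger(speaker_metrics)]

pedagogical_plan = [
    AudienceRule("reward_eye_contact",
                 lambda m: m.get("eye_contact_ratio", 0.0) > 0.6, "nod_and_smile"),
    AudienceRule("signal_monotony",
                 lambda m: m.get("pitch_variation", 1.0) < 0.2, "look_at_phone"),
]

# Metrics would come from the VR runtime each update tick (values invented here).
behaviors = evaluate_rules(pedagogical_plan, {"eye_contact_ratio": 0.7,
                                              "pitch_variation": 0.15})
print(behaviors)   # ['nod_and_smile', 'look_at_phone']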
Christian Rack, Fabian Sieper, Lukas Schach, Murat Yalcin, Marc E. Latoschik,
Dataset: Who is Alyx? (GitHub Repository).
2022. [Download][BibSonomy][Doi]
@dataset{who_is_alyx_2022,
author = {Christian Rack and Fabian Sieper and Lukas Schach and Murat Yalcin and Marc E. Latoschik},
url = {https://github.com/cschell/who-is-alyx},
year = {2022},
doi = {10.5281/zenodo.6472417},
title = {Dataset: Who is Alyx? (GitHub Repository)}
}
Abstract:
This dataset contains over 110 hours of motion, eye-tracking, and physiological data from 71 players of the virtual reality game “Half-Life: Alyx”. Each player played the game on two separate days for about 45 minutes using an HTC Vive Pro.
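A minimal loading sketch for working with such per-player recordings is shown below; the folder layout, file names, and columns are assumptions made for illustration, so consult the repository README (https://github.com/cschell/who-is-alyx) for the actual structure.

# Hypothetical loading sketch for the "Who is Alyx?" dataset; paths and columns
# are placeholders, not the repository's documented layout.

from pathlib import Path
import pandas as pd

def load_player_sessions(root: str, player_id: str) -> pd.DataFrame:
    """Concatenate all CSV recordings found for one player into a single DataFrame."""
    frames = []
    for csv_file in sorted(Path(root, player_id).glob("**/*.csv")):
        df = pd.read_csv(csv_file)
        df["player_id"] = player_id          # keep the identity label with the samples
        df["session_file"] = csv_file.name
        frames.append(df)
    if not frames:
        raise FileNotFoundError(f"no CSV files found for {player_id!r} under {root!r}")
    return pd.concat(frames, ignore_index=True)

# Example usage (paths are placeholders):
# motion = load_player_sessions("who-is-alyx/players", "player_01")
# print(motion.shape, motion.columns.tolist())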
Rebecca Hein, Jeanine Steinbock, Maria Eisenmann, Carolin Wienrich, Marc Erich Latoschik,
Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen, In MedienPädagogik Zeitschrift für Theorie und Praxis der Medienbildung, Vol. 47, pp. 246-266.
2022. [Download][BibSonomy][Doi]
@article{hein2021medienpaed,
author = {Rebecca Hein and Jeanine Steinbock and Maria Eisenmann and Carolin Wienrich and Marc Erich Latoschik},
journal = {MedienPädagogik Zeitschrift für Theorie und Praxis der Medienbildung},
url = {https://www.researchgate.net/publication/359903684_Virtual_Reality_im_modernen_Englischunterricht_und_das_Potenzial_fur_Inter-_und_Transkulturelles_Lernen/references},
year = {2022},
pages = {246-266},
volume = {47},
doi = {10.21240/mpaed/47/2022.04.12.X},
title = {Virtual Reality im modernen Englischunterricht und das Potenzial für Inter- und Transkulturelles Lernen}
}
Abstract:
Sophia C. Steinhaeusser, Birgit Lugrin,
Effects of Colored LEDs in Robotic Storytelling on Storytelling Experience and Robot Perception, In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI2022).
IEEE Press,
2022. [BibSonomy]
@inproceedings{steinhaeusser2022effects,
author = {Sophia C. Steinhaeusser and Birgit Lugrin},
year = {2022},
booktitle = {Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction (HRI2022)},
publisher = {IEEE Press},
title = {Effects of Colored LEDs in Robotic Storytelling on Storytelling Experience and Robot Perception}
}
Abstract:
Anne Lewerentz, David Schantz, Julian Gröh, Andreas Knote, Sebastian von Mammen, Juliano Cabral,
bioDIVERsity. Ein Computerspiel gegen das Imageproblem von Wasserpflanzen, In Mitteilungen der Fränkischen Geographischen Gesellschaft, Vol. 0(67), pp. 29--36.
2022. [Download][BibSonomy]
@article{Lewerentz:2022aa,
author = {Anne Lewerentz and David Schantz and Julian Gröh and Andreas Knote and Sebastian von Mammen and Juliano Cabral},
journal = {Mitteilungen der Fränkischen Geographischen Gesellschaft},
number = {67},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Lewerentz2022aa.pdf},
year = {2022},
pages = {29--36},
volume = {0},
title = {bioDIVERsity. Ein Computerspiel gegen das Imageproblem von Wasserpflanzen}
}
Abstract:
Aquatic plants have an image problem: they supposedly disturb swimmers in lakes, which can trigger panic-like reactions. Terms such as "Schlingpflanzen" (entangling plants) and "Veralgung" (algal overgrowth) are used incorrectly or carry negative connotations. Moreover, aquatic plants are rarely seen, since they mostly grow below the water surface. Their many valuable functions in the lake ecosystem therefore often go unnoticed and are difficult to communicate because the plants are hard to reach and to experience. Virtual media open up a new way of making hard-to-reach places tangible. Digital games and simulations (computer games, apps, virtual reality games), and in particular so-called serious games, are seen as having great potential for communicating environmental topics, especially to a younger audience. A prototype of the serious game bioDIVERsity was developed in an interdisciplinary course of the Games Engineering group at the Institute of Computer Science in cooperation with the Center for Computational and Theoretical Biology at the University of Würzburg. The goal of the game is to teach children about the lake ecosystem and the effects of water quality and climate conditions on aquatic plants. Beyond that, it aims to raise awareness of this ecosystem and of the consequences of human interventions.
Nina Döllinger, Erik Wolf, David Mal, Nico Erdmannsdörfer, Mario Botsch, Marc Erich Latoschik, Carolin Wienrich,
Virtual Reality for Mind and Body: Does the Sense of Embodiment Towards a Virtual Body Affect Physical Body Awareness?, In CHI 22 Conference on Human Factors in Computing Systems Extended Abstracts, pp. 1-8.
2022. [Download][BibSonomy][Doi]
@inproceedings{dollinger2022virtual,
author = {Nina Döllinger and Erik Wolf and David Mal and Nico Erdmannsdörfer and Mario Botsch and Marc Erich Latoschik and Carolin Wienrich},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-chi-body-awareness-virtual-human-perception-preprint.pdf},
year = {2022},
booktitle = {CHI 22 Conference on Human Factors in Computing Systems Extended Abstracts},
pages = {1-8},
doi = {10.1145/3491101.3519613},
title = {Virtual Reality for Mind and Body: Does the Sense of Embodiment Towards a Virtual Body Affect Physical Body Awareness?}
}
Abstract:
Mind-body therapies aim to improve health by combining physical and mental exercises. Recent developments tend to incorporate virtual reality (VR) into their design and execution, but there is a lack of research concerning the inclusion of virtual bodies and their effect on body awareness in these designs.
In this study, 24 participants performed in-VR body awareness movement tasks in front of a virtual mirror while embodying a photorealistic, personalized avatar. Subsequently, they performed a heartbeat counting task and rated their perceived body awareness and sense of embodiment towards the avatar.
We found a significant relationship between sense of embodiment and self-reported body awareness but not between sense of embodiment and heartbeat counting.
Future work can build on these findings and further explore the relationship between avatar embodiment and body awareness.
Chiara Palmisano, Peter Kullmann, Ibrahem Hanafi, Marta Verrecchia, Marc Erich Latoschik, Andrea Canessa, Martin Fischbach, Ioannis Ugo Isaias,
A Fully-Immersive Virtual Reality Setup to Study Gait Modulation, In Frontiers in Human Neuroscience, Vol. 16.
2022. [Download][BibSonomy][Doi]
@article{10.3389/fnhum.2022.783452,
author = {Chiara Palmisano and Peter Kullmann and Ibrahem Hanafi and Marta Verrecchia and Marc Erich Latoschik and Andrea Canessa and Martin Fischbach and Ioannis Ugo Isaias},
journal = {Frontiers in Human Neuroscience},
url = {https://www.frontiersin.org/article/10.3389/fnhum.2022.783452},
year = {2022},
volume = {16},
doi = {10.3389/fnhum.2022.783452},
title = {A Fully-Immersive Virtual Reality Setup to Study Gait Modulation}
}
Abstract:
Objective: Gait adaptation to environmental challenges is fundamental for independent and safe community ambulation. The possibility of precisely studying gait modulation using standardized protocols of gait analysis closely resembling everyday life scenarios is still an unmet need. Methods: We have developed a fully-immersive virtual reality (VR) environment where subjects have to adjust their walking pattern to avoid collision with a virtual agent (VA) crossing their gait trajectory. We collected kinematic data of 12 healthy young subjects walking in the real world (RW) and in the VR environment, both with (VR/A+) and without (VR/A-) the VA perturbation. The VR environment closely resembled the RW scenario of the gait laboratory. To ensure standardization of the obstacle presentation, the starting time, speed, and trajectory of the VA were defined using the kinematics of the participant as detected online during each walking trial. Results: We did not observe kinematic differences between walking in RW and VR/A-, suggesting that our VR environment per se might not induce significant changes in the locomotor pattern. When facing the VA, all subjects consistently reduced stride length and velocity while increasing stride duration. Trunk inclination and mediolateral trajectory deviation also facilitated avoidance of the obstacle. Conclusions: This proof-of-concept study shows that our VR/A+ paradigm effectively induced a timely gait modulation in a standardized, immersive, and realistic scenario. This protocol could be a powerful research tool to study gait modulation and its derangements in relation to aging and clinical conditions.
Andreas Halbig, Sooraj K. Babu, Shirin Gatter, Marc Erich Latoschik, Kirsten Brukamp, Sebastian von Mammen,
Opportunities and Challenges of Virtual Reality in Healthcare -- A Domain Experts Inquiry, In Frontiers in Virtual Reality, Vol. 3.
2022. [Download][BibSonomy][Doi]
@article{Halbig:2022aa,
author = {Andreas Halbig and Sooraj K. Babu and Shirin Gatter and Marc Erich Latoschik and Kirsten Brukamp and Sebastian von Mammen},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2022.837616},
year = {2022},
volume = {3},
doi = {10.3389/frvir.2022.837616},
title = {Opportunities and Challenges of Virtual Reality in Healthcare -- A Domain Experts Inquiry}
}
Abstract:
In recent years, the applications and accessibility of Virtual Reality (VR) for the healthcare sector have continued to grow. However, so far, most VR applications are only relevant in research settings. Information about what healthcare professionals would need to independently integrate VR applications into their daily working routines is missing. The actual needs and concerns of the people who work in the healthcare sector are often disregarded in the development of VR applications, even though they are the ones who are supposed to use them in practice. By means of this study, we systematically involve health professionals in the development process of VR applications. In particular, we conducted an online survey with 102 healthcare professionals based on a video prototype which demonstrates a software platform that allows them to create and utilise VR experiences on their own. For this study, we adapted and extended the Technology Acceptance Model (TAM). The survey focused on the perceived usefulness and the ease of use of such a platform, as well as the attitude and ethical concerns the users might have. The results show a generally positive attitude toward such a software platform. The users can imagine various use cases in different health domains. However, the perceived usefulness is tied to the actual ease of use of the platform and sufficient support for learning and working with the platform. In the discussion, we explain how these results can be generalized to facilitate the integration of VR in healthcare practice.
Melissa Donnermann, Philipp Schaper, Birgit Lugrin,
Social Robots in Applied Settings: A Long-Term Study on Adaptive Robotic Tutors in Higher Education, In Frontiers in Robotics and AI, Vol. 9.
Frontiers Media SA,
2022. [Download][BibSonomy][Doi]
@article{Donnermann_2022,
author = {Melissa Donnermann and Philipp Schaper and Birgit Lugrin},
journal = {Frontiers in Robotics and AI},
url = {https://doi.org/10.3389%2Ffrobt.2022.831633},
year = {2022},
publisher = {Frontiers Media SA},
volume = {9},
doi = {10.3389/frobt.2022.831633},
title = {Social Robots in Applied Settings: A Long-Term Study on Adaptive Robotic Tutors in Higher Education}
}
Abstract:
Carolin Wienrich, Lennart Fries, Marc Erich Latoschik,
Remote at Court – Challenges and Solutions of Video Conferencing in the Judicial System, In Proceedings 25th HCI International Conference, Vol. Design, Operation and Evaluation of Mobile Communications, pp. 82-106.
Springer,
2022. [BibSonomy]
@inproceedings{wienrich2022remote,
author = {Carolin Wienrich and Lennart Fries and Marc Erich Latoschik},
year = {2022},
booktitle = {Proceedings 25th HCI International Conference},
publisher = {Springer},
pages = {82-106},
volume = {Design, Operation and Evaluation of Mobile Communications},
title = {Remote at Court – Challenges and Solutions of Video Conferencing in the Judicial System}
}
Abstract:
Larissa Brübach, Franziska Westermeier, Carolin Wienrich, Marc Erich Latoschik,
Breaking Plausibility Without Breaking Presence - Evidence For The Multi-Layer Nature Of Plausibility, In IEEE Transactions on Visualization and Computer Graphics, Vol. 28(5), pp. 2267-2276.
2022. IEEE VR Best Journal Paper Nominee 🏆 [Download][BibSonomy][Doi]
@article{9714117,
author = {Larissa Brübach and Franziska Westermeier and Carolin Wienrich and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics},
number = {5},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeevr-breaking-plausibility-without-breaking-presence.pdf},
year = {2022},
pages = {2267-2276},
volume = {28},
doi = {10.1109/TVCG.2022.3150496},
title = {Breaking Plausibility Without Breaking Presence - Evidence For The Multi-Layer Nature Of Plausibility}
}
Abstract:
A novel theoretical model recently introduced coherence and plausibility as the essential conditions of XR experiences, challenging contemporary presence-oriented concepts. This article reports on two experiments validating this model, which assumes coherence activation on three layers (cognition, perception, and sensation) as the potential sources leading to a condition of plausibility and from there to other XR qualia such as presence or body ownership. The experiments introduce and utilize breaks in plausibility (in analogy to breaks in presence): We induce incoherence on the perceptual and the cognitive layer simultaneously by a simulation of object behaviors that do not conform to the laws of physics, i.e., gravity. We show that this manipulation breaks plausibility and hence confirm that it results in the desired effects in the theorized condition space but that the breaks in plausibility did not affect presence. In addition, we show that a cognitive manipulation by a storyline framing is too weak to successfully counteract the strong bottom-up inconsistencies. Both results are in line with the predictions of the recently introduced three-layer model of coherence and plausibility, which incorporates well-known top-down and bottom-up rivalries and its theorized increased independence between plausibility and presence.
Simon Seibt, Bartosz von Ramon Lipinski, Marc Erich Latoschik,
Dense Feature Matching based on Homographic Decomposition, In IEEE Access, Vol. X, pp. 1-1.
2022. [Download][BibSonomy][Doi]
@article{seibt2022dense,
author = {Simon Seibt and Bartosz von Ramon Lipinski and Marc Erich Latoschik},
journal = {IEEE Access},
url = {https://ieeexplore.ieee.org/document/9716106},
year = {2022},
pages = {1-1},
volume = {X},
doi = {10.1109/ACCESS.2022.3152539},
title = {Dense Feature Matching based on Homographic Decomposition}
}
Abstract:
Finding robust and accurate feature matches is a fundamental problem in computer vision. However, incorrect correspondences and suboptimal matching accuracies lead to significant challenges for many real-world applications. In conventional feature matching, corresponding features in an image pair are greedily searched using their descriptor distance. The resulting matching set is then typically used as input for geometric model fitting methods to find an appropriate fundamental matrix and filter out incorrect matches. Unfortunately, this basic approach cannot solve all practical problems, such as fundamental matrix degeneration, matching ambiguities caused by repeated patterns and rejection of initially mismatched features without further reconsideration. In this paper we introduce a novel matching pipeline, which addresses all of the aforementioned challenges at once: First, we perform iterative rematching to give mismatched feature points a further chance for being considered in later processing steps. Thereby, we are searching for inliers that exhibit the same homographic transformation per iteration. The resulting homographic decomposition is used for refining matches, occlusion detection (e.g. due to parallaxes) and extrapolation of additional features in critical image areas. Furthermore, Delaunay triangulation of the matching set is utilized to minimize the repeated pattern problem and to implement focused matching. Doing so, enables us to further increase matching quality by concentrating on local image areas, defined by the triangular mesh. We present and discuss experimental results with multiple real-world matching datasets. Our contributions, besides improving matching recall and precision for image processing applications in general, also relate to use cases in image-based computer graphics.
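For orientation, the conventional baseline that the article improves on (greedy descriptor matching followed by a single RANSAC-fitted geometric model) looks roughly like the sketch below; it does not implement the proposed homographic decomposition, iterative rematching, or Delaunay-based focused matching.

# Conventional baseline sketch: greedy SIFT matching with Lowe's ratio test,
# then one global homography fitted with RANSAC to filter outliers. This is the
# standard pipeline the article builds on, NOT the proposed method.

import cv2
import numpy as np

def baseline_matches(img1_path: str, img2_path: str, ratio: float = 0.75):
    """Return RANSAC-filtered SIFT matches and the fitted homography."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Greedy nearest-neighbour matching with Lowe's ratio test.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]

    # Fit one global homography with RANSAC and keep only its inliers.
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return inliers, H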
Sandra Birnstiel, Sebastian Oberdörfer, Marc Erich Latoschik,
Stay Safe! Safety Precautions for Walking on a Conventional Treadmill in VR, In Proceedings of the 29th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '22), pp. 732-733.
2022. [Download][BibSonomy][Doi]
@inproceedings{birnstiel2022safety,
author = {Sandra Birnstiel and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeevr-stay-safe.pdf},
year = {2022},
booktitle = {Proceedings of the 29th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '22)},
pages = {732-733},
doi = {10.1109/VRW55335.2022.00217},
title = {Stay Safe! Safety Precautions for Walking on a Conventional Treadmill in VR}
}
Abstract:
Conventional treadmills are used in virtual reality (VR) applications, such as for rehabilitation training or gait studies. However, using the devices in VR poses risks of injury. Therefore, this study investigates safety precautions when using a conventional treadmill for a walking task. We designed a safety belt and displayed parts of the treadmill in VR. The safety belt was much appreciated by the participants and did not affect the walking behavior. However, the participants requested more visual cues in the user’s field of view.
David Mal, Erik Wolf, Nina Döllinger, Mario Botsch, Carolin Wienrich, Marc Erich Latoschik,
Virtual Human Coherence and Plausibility – Towards a Validated Scale, In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 788-789.
2022. [Download][BibSonomy][Doi]
@inproceedings{mal2022virtual,
author = {David Mal and Erik Wolf and Nina Döllinger and Mario Botsch and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022_ieeevr_virtual_human_plausibility_preprint.pdf},
year = {2022},
booktitle = {2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {788-789},
doi = {10.1109/VRW55335.2022.00245},
title = {Virtual Human Coherence and Plausibility – Towards a Validated Scale}
}
Abstract:
Virtual humans contribute to users’ state of plausibility in various XR applications. We present the development and preliminary evaluation of a self-assessment questionnaire to quantify a virtual human's plausibility in virtual environments based on eleven concise items. A principal component analysis of 650 appraisals collected in an online survey revealed two highly reliable components within the items. We interpret the components as possible factors, i.e., appearance and behavior plausibility and match to the virtual environment, and propose future work aiming towards a standardized virtual human plausibility scale by validating the structure and sensitivity of both sub-components in XR environments.
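As a generic illustration of the kind of analysis described (the responses below are random placeholders, not the 650 collected appraisals, and the item count is the only number taken from the abstract), a two-component PCA over eleven questionnaire items can be run as follows.

# Generic PCA sketch over questionnaire items; data are random placeholders.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(650, 11)).astype(float)  # 650 appraisals x 11 items

scaled = StandardScaler().fit_transform(responses)             # z-standardize each item
pca = PCA(n_components=2)
scores = pca.fit_transform(scaled)                             # component scores per appraisal

print("explained variance ratio:", pca.explained_variance_ratio_)
print("item loadings (2 x 11):\n", pca.components_)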
Kerstin Schmid, Andreas Knote, Alexander Mück, Keram Pfeiffer, Sebastian von Mammen, Sabine C. Fischer,
Interactive, Visual Simulation of a Spatio-Temporal Model of Gas Exchange in the Human Alveolus, In Frontiers in Bioinformatics, Vol. 1.
2022. [Download][BibSonomy][Doi]
@article{10.3389/fbinf.2021.774300,
author = {Kerstin Schmid and Andreas Knote and Alexander Mück and Keram Pfeiffer and Sebastian von Mammen and Sabine C. Fischer},
journal = {Frontiers in Bioinformatics},
url = {https://www.frontiersin.org/article/10.3389/fbinf.2021.774300},
year = {2022},
volume = {1},
doi = {10.3389/fbinf.2021.774300},
title = {Interactive, Visual Simulation of a Spatio-Temporal Model of Gas Exchange in the Human Alveolus}
}
Abstract:
In interdisciplinary fields such as systems biology, good communication between experimentalists and theorists is crucial for the success of a project. Theoretical modeling in physiology usually describes complex systems with many interdependencies. On one hand, these models have to be grounded on experimental data. On the other hand, experimenters must be able to understand the interdependent complexities of the theoretical model in order to interpret the model’s results in the physiological context. We promote interactive, visual simulations as an engaging way to present theoretical models in physiology and to make complex processes tangible. Based on a requirements analysis, we developed a new model for gas exchange in the human alveolus in combination with an interactive simulation software named Alvin. Alvin exceeds the current standard with its spatio-temporal resolution and a combination of visual and quantitative feedback. In Alvin, the course of the simulation can be traced in a three-dimensional rendering of an alveolus and dynamic plots. The user can interact by configuring essential model parameters. Alvin allows to run and compare multiple simulation instances simultaneously. We exemplified the use of Alvin for research by identifying unknown dependencies in published experimental data. Employing a detailed questionnaire, we showed the benefits of Alvin for education. We postulate that interactive, visual simulation of theoretical models, as we have implemented with Alvin on respiratory processes in the alveolus, can be of great help for communication between specialists and thereby advancing research.
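For context only (these are classical physiology relations, not the model equations implemented in Alvin), gas exchange across the alveolar membrane is commonly described by Fick's law of diffusion:
\[
  \dot{V}_{\mathrm{gas}} = \frac{A \cdot D \cdot (P_1 - P_2)}{T},
  \qquad
  D \propto \frac{\text{solubility}}{\sqrt{\text{molecular weight}}},
\]
where \(A\) is the exchange surface area, \(T\) the membrane thickness, and \(P_1 - P_2\) the partial-pressure gradient of the gas across the membrane.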
Sebastian Oberdörfer, David Schraudt, Marc Erich Latoschik,
Embodied Gambling–Investigating the Influence of Level of Embodiment, Avatar Appearance, and Virtual Environment Design on an Online VR Slot Machine, In Frontiers in Virtual Reality, Vol. 3.
2022. [Download][BibSonomy][Doi]
@article{oberdorfer2022embodied,
author = {Sebastian Oberdörfer and David Schraudt and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2022.828553},
year = {2022},
volume = {3},
doi = {10.3389/frvir.2022.828553},
title = {Embodied Gambling–Investigating the Influence of Level of Embodiment, Avatar Appearance, and Virtual Environment Design on an Online VR Slot Machine}
}
Abstract:
Slot machines are one of the most played games by players suffering from gambling disorder. New technologies like immersive Virtual Reality (VR) offer more possibilities to exploit erroneous beliefs in the context of gambling. Recent research indicates a higher risk potential when playing a slot machine in VR than on desktop. To continue this investigation, we evaluate the effects of providing different degrees of embodiment, i.e., minimal and full embodiment. The avatars used for the full embodiment further differ in their appearance, i.e., they elicit a high or a low socio-economic status. The design of the virtual environment (VE) can also influence the overall gambling behavior. Thus, we also embed the slot machine in two different VEs that differ in their emotional design: a colorful underwater playground environment and a virtual counterpart of our lab. These design considerations resulted in four different versions of the same VR slot machine: 1) full embodiment with high socio-economic status, 2) full embodiment with low socio-economic status, 3) minimal embodiment playground VE, and 4) minimal embodiment laboratory VE. Both full embodiment versions also used the playground VE. We determine the risk potential by logging gambling frequency as well as stake size, and by measuring harm-inducing factors, i.e., dissociation, urge to gamble, dark flow, and illusion of control, using questionnaires. Following a between-groups experimental design, 82 participants played one of the four versions for 20 game rounds. We recruited our sample from the students enrolled at the University of Würzburg. Our safety protocol ensured that only participants without any recent gambling activity took part in the experiment. In this comparative user study, we found no effect of embodiment or VE design on gambling frequency, stake sizes, or risk potential. However, our results provide further support for the hypothesis that the larger visual angle on gambling stimuli, and hence the increased emotional response, is the true cause of the higher risk potential.
Christian Seufert, Sebastian Oberdörfer, Alice Roth, Silke Grafe, Jean-Luc Lugrin, Marc Erich Latoschik,
Classroom management competency enhancement for student teachers using a fully immersive virtual classroom, In Computers & Education, Vol. 179, p. 104410.
2022. [Download][BibSonomy][Doi]
@article{seufert:2022n,
author = {Christian Seufert and Sebastian Oberdörfer and Alice Roth and Silke Grafe and Jean-Luc Lugrin and Marc Erich Latoschik},
journal = {Computers & Education},
url = {https://www.sciencedirect.com/science/article/pii/S0360131521002876},
year = {2022},
pages = {104410},
volume = {179},
doi = {https://doi.org/10.1016/j.compedu.2021.104410},
title = {Classroom management competency enhancement for student teachers using a fully immersive virtual classroom}
}
Abstract:
The purpose of this study is to examine whether pre-service teachers studying classroom management (CM) in a virtual reality (VR)-supported setting enhance their CM competencies more than students do in a setting using conventional methods. With this aim in mind, and to address the lack of opportunities for practicing and reflecting on CM competencies beyond simply gaining theoretical knowledge about CM in education, we integrated a novel fully immersive VR application in selected CM courses. We evaluated the development of self-assessed and instructor-rated CM competencies and the learning quality in the different learning conditions. Additionally, we evaluated the presence, social presence, believability and utility of the VR application and the VR-assisted and video-assisted course. Participants were pre-service teachers (n = 55) of the University of Würzburg who participated in a quasi-experimental pre-test/post-test intervention. The students were randomly assigned to one of two intervention groups: the test group used the virtual classroom Breaking Bad Behaviors (BBB) during the term (n = 39), and the comparison group's learning was video-assisted (n = 16). The instructor rating shows significant differences between the VR group and the video group, between the two points of measurement, and for the interaction between condition and time of measurement. It demonstrates a highly significant improvement in CM competencies in the VR setting between the pre-test and post-test (p < 0.001, Cohen's d = 1.06) as compared to the video-based setting. The participants of the VR setting themselves rated their CM competencies in the post-test significantly higher than in the pre-test (p = 0.02, Cohen's d = 0.39). Interestingly, the video group also rated themselves better in the post-test (p = 0.02, Cohen's d = 0.67), which reveals that self-assessment and external assessment show different results. In addition, we observed that even though both groups gained similar theoretical knowledge, the CM competencies developed to a greater degree in the VR-based settings. The participants rated the CM training system a useful tool to evaluate and reflect on individual teacher actions. Its immersion contributes to a high presence and the simulation of realistic scenarios in a CM course. These findings suggest that VR-based settings can lead to a greater enhancement of pre-service teachers' CM competencies.
Erik Wolf, Marie Luisa Fiedler, Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik,
Exploring Presence, Avatar Embodiment, and Body Perception with a Holographic Augmented Reality Mirror, In 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 350-359.
2022. [Download][BibSonomy][Doi]
@inproceedings{wolf2022holographic,
author = {Erik Wolf and Marie Luisa Fiedler and Nina Döllinger and Carolin Wienrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2022-ieeevr-hololens-embodiment-preprint.pdf},
year = {2022},
booktitle = {2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
pages = {350-359},
doi = {10.1109/VR51125.2022.00054},
title = {Exploring Presence, Avatar Embodiment, and Body Perception with a Holographic Augmented Reality Mirror}
}
Abstract:
The embodiment of avatars in virtual reality (VR) is a promising tool for enhancing the user's mental health. A great example is the treatment of body image disturbances, where eliciting a full-body illusion can help identify, visualize, and modulate persisting misperceptions. Augmented reality (AR) could complement recent advances in the field by incorporating real elements, such as the therapist or the user's real body, into therapeutic scenarios. However, research on the use of AR in this context is very sparse. Therefore, we present a holographic AR mirror system based on an optical see-through (OST) device and markerless body tracking, collect valuable qualitative feedback regarding its user experience, and compare quantitative results regarding presence, embodiment, and body weight perception to similar systems using video see-through (VST) AR and VR. For our OST AR system, a total of 27 normal-weight female participants provided predominantly positive feedback on display properties (field of view, luminosity, and transparency of virtual objects), body tracking, and the perception of the avatar's appearance and movements. In the quantitative comparison to the VST AR and VR systems, participants reported significantly lower feelings of presence, while they estimated the body weight of the generic avatar significantly higher when using our OST AR system. For virtual body ownership and agency, we found only partially significant differences. In summary, our study shows the general applicability of OST AR in the given context, offering huge potential for future therapeutic scenarios. However, the comparative evaluation between OST AR, VST AR, and VR also revealed significant differences in relevant measures. Future work is needed to corroborate our findings and to assess their significance in a therapeutic context.
Marc Erich Latoschik, Carolin Wienrich,
Congruence and Plausibility, not Presence?! Pivotal Conditions for XR Experiences and Effects, a Novel Model, In Frontiers in Virtual Reality, Vol. 3:694433.
2022. [Download][BibSonomy][Doi]
@article{latoschik2021coherence,
author = {Marc Erich Latoschik and Carolin Wienrich},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2022.694433},
year = {2022},
volume = {3:694433},
doi = {10.3389/frvir.2022.694433},
title = {Congruence and Plausibility, not Presence?! Pivotal Conditions for XR Experiences and Effects, a Novel Model}
}
Abstract:
Presence is often considered the most important quale describing the subjective feeling of being in a computer-generated and/or computer-mediated virtual environment. The identification and separation of orthogonal presence components, i.e., the place illusion and the plausibility illusion, has been an accepted theoretical model describing Virtual Reality (VR) experiences for some time. This perspective article challenges this presence-oriented VR theory. First, we argue that a place illusion cannot be the major construct to describe the much wider scope of virtual, augmented, and mixed reality (VR, AR, MR: or XR for short). Second, we argue that there is no plausibility illusion but merely plausibility, and we derive the place illusion caused by the congruent and plausible generation of spatial cues and similarly for all the current model’s so-defined illusions. Finally, we propose congruence and plausibility to become the central essential conditions in a novel theoretical model describing XR experiences and effects.
2021
Yasin Raies, Sebastian von Mammen,
A Swarm Grammar-Based Approach to Virtual World Generation, In Juan Romero, Tiago Martins, Nereida Rodríguez-Fernández (Eds.), Artificial Intelligence in Music, Sound, Art and Design, pp. 459--474. Cham:
Springer International Publishing,
2021. [Download][BibSonomy]
@inproceedings{Raies:2021aa,
author = {Yasin Raies and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Raies2021aa.pdf},
year = {2021},
booktitle = {Artificial Intelligence in Music, Sound, Art and Design},
editor = {Juan Romero and Tiago Martins and Nereida Rodríguez-Fernández},
publisher = {Springer International Publishing},
address = {Cham},
pages = {459--474},
title = {A Swarm Grammar-Based Approach to Virtual World Generation}
}
Abstract:
In this work we formulate and propose an extended version of the multi-agent Swarm Grammar (SG) model for the generation of virtual worlds. It unfolds a comparatively small database into a complex world featuring terrain, vegetation and bodies of water. This approach allows for adaptivity of generated assets to their environment, unbounded worlds, and interactivity in their generation. In order to evaluate the model, we conducted sensitivity analyses at a local interaction scale. In addition, at a global scale, we investigated two virtual environments, discussing notable interactions, recurring configuration patterns, and obstacles in working with SGs. These analyses showed that SGs can create visually interesting virtual worlds, but require further work on ease of use. Lastly, we identified which future extensions might shrink the required database sizes.
Johannes Büttner, Sebastian Von Mammen,
Training a Reinforcement Learning Agent based on XCS in a Competitive Snake Environment, In 2021 IEEE Conference on Games (CoG), pp. 1-5.
2021. [Download][BibSonomy][Doi]
@inproceedings{Buttner:2021aa,
author = {Johannes Büttner and Sebastian Von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Buttner2021aa.pdf},
year = {2021},
booktitle = {2021 IEEE Conference on Games (CoG)},
pages = {1-5},
doi = {10.1109/CoG52621.2021.9619161},
title = {Training a Reinforcement Learning Agent based on XCS in a Competitive Snake Environment}
}
Abstract:
Julian Tritscher, Anna Krause, Daniel Schlör, Fabian Gwinner, Sebastian Von Mammen, Andreas Hotho,
A financial game with opportunities for fraud, In 2021 IEEE Conference on Games (CoG), pp. 1-5.
2021. [Download][BibSonomy][Doi]
@inproceedings{Tritscher:2021ab,
author = {Julian Tritscher and Anna Krause and Daniel Schlör and Fabian Gwinner and Sebastian Von Mammen and Andreas Hotho},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Tritscher2021aa.pdf},
year = {2021},
booktitle = {2021 IEEE Conference on Games (CoG)},
pages = {1-5},
doi = {10.1109/CoG52621.2021.9619070},
title = {A financial game with opportunities for fraud}
}
Abstract:
Marc Mußmann, Samuel Truman, Sebastian von Mammen,
Game-Ready Inventory Systems for Virtual Reality, In IEEE COG 2021, Vol. 2021.
2021. Best student paper award 🏆 [Download][BibSonomy]
@article{Mussmann:2021aa,
author = {Marc Mußmann and Samuel Truman and Sebastian von Mammen},
journal = {IEEE COG 2021},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Mu%C3%9Fmann2021aa.pdf},
year = {2021},
volume = {2021},
title = {Game-Ready Inventory Systems for Virtual Reality}
}
Abstract:
Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
Impostor-Based Rendering Acceleration for Virtual, Augmented, and Mixed Reality, In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA:
Association for Computing Machinery,
2021. [Download][BibSonomy][Doi]
@inproceedings{misiak2021impostorbased,
author = {Martin Mišiak and Arnulph Fuhrmann and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3489849.3489865},
year = {2021},
booktitle = {Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '21},
doi = {10.1145/3489849.3489865},
title = {Impostor-Based Rendering Acceleration for Virtual, Augmented, and Mixed Reality}
}
Abstract:
This paper presents an image-based rendering approach to accelerate the rendering time of virtual scenes containing a large number of complex, high-poly-count objects. Our approach replaces complex objects with impostors, lightweight image-based representations that reduce geometry- and shading-related processing costs. In contrast to their classical implementation, our impostors are specifically designed to work in Virtual-, Augmented- and Mixed Reality scenarios (XR for short), as they support stereoscopic rendering to provide correct depth perception. Motion parallax of typical head movements is compensated by using a ray-marched parallax correction step. Our approach provides a dynamic run-time recreation of impostors as necessary for larger changes in view position. The dynamic run-time recreation is decoupled from the actual rendering process; hence, its associated processing cost is distributed over multiple frames. This avoids any unwanted frame drops or latency spikes even for impostors of objects with complex geometry and many polygons. In addition to the significant performance benefit, our impostors compare favorably against the original mesh representation, as geometric and textural temporal aliasing artifacts are heavily suppressed.
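The dynamic run-time recreation described above depends on noticing when the current viewpoint has drifted too far from the viewpoint an impostor was captured from. The snippet below is a hypothetical, minimal sketch of such a check, not the paper's implementation; the function name, the 15-degree threshold, and the vector convention are assumptions made for illustration.
```python
# Hypothetical sketch (not the paper's implementation): decide whether an impostor
# must be queued for re-rendering based on how far the current view direction has
# drifted from the direction it was originally captured from.
import numpy as np

def needs_recreation(capture_dir, current_dir, max_angle_deg=15.0):
    """Return True if the angle between the capture and current view directions
    exceeds the tolerated threshold (illustrative default of 15 degrees)."""
    a = np.asarray(capture_dir, dtype=float)
    b = np.asarray(current_dir, dtype=float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    angle = np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))
    return angle > max_angle_deg

# Example: the camera has orbited roughly 20 degrees around the object since capture,
# so the impostor would be scheduled for asynchronous re-rendering.
print(needs_recreation([0.0, 0.0, -1.0],
                       [np.sin(np.radians(20)), 0.0, -np.cos(np.radians(20))]))
```
In a real renderer such a test would run per impostor and per frame, with the actual re-rendering spread over subsequent frames as the abstract describes.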
Johannes Büttner,
Utilizing the Observer/Controller Architecture for a General Game AI, In Sven Tomforde, Christian Krupitzer (Eds.), Organic Computing: Doctoral Dissertation Colloquium 2021, pp. 74--85.
kassel university press,
2021. [Download][BibSonomy]
@inbook{buttner2021utilizing,
author = {Johannes Büttner},
url = {https://kobra.uni-kassel.de/handle/123456789/14004},
year = {2021},
booktitle = {Organic Computing: Doctoral Dissertation Colloquium 2021},
editor = {Sven Tomforde and Christian Krupitzer},
publisher = {kassel university press},
pages = {74--85},
title = {Utilizing the Observer/Controller Architecture for a General Game AI}
}
Abstract:
Jonas Bruschke, Markus Wacker, Florian Niebling,
Comparing Methods to Visualize Orientation of Photographs: A User Study, In Florian Niebling, Sander Münster, Heike Messemer (Eds.), Research and Education in Urban History in the Age of Digital Libraries, pp. 129--151. Cham:
Springer International Publishing,
2021. [BibSonomy][Doi]
@inproceedings{10.1007/978-3-030-93186-5_6,
author = {Jonas Bruschke and Markus Wacker and Florian Niebling},
year = {2021},
booktitle = {Research and Education in Urban History in the Age of Digital Libraries},
editor = {Florian Niebling and Sander Münster and Heike Messemer},
publisher = {Springer International Publishing},
address = {Cham},
pages = {129--151},
doi = {10.1007/978-3-030-93186-5_6},
title = {Comparing Methods to Visualize Orientation of Photographs: A User Study}
}
Abstract:
We present methods to visualize characteristics in collections of historical photographs, especially focusing on the presentation of spatial position and orientation of photographs in relation to the buildings they depict. The developed methods were evaluated and compared in a user study focusing on their appropriateness to gain insight into specific research questions of art historians: 1) which buildings have been depicted most often in a collection of images, 2) which positions have been preferred by photographers to take pictures of a given building, 3) what is the main perspective of photographers regarding a specific building. To analyze spatial datasets of photographs, we have adapted related methods used in the visualization of fluid dynamics. As these existing visualization methods are not suitable in all photographic situations---especially when a multitude of photographs are pointing into diverging directions---we have developed additional cluster-based approaches that aim to overcome these issues. Our user study shows that the introduced cluster-based visualizations can elicit a better understanding of large photographic datasets concerning real-world research questions in certain situations, while performing comparably well in situations where existing methods are already adequate.
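As a rough illustration of the cluster-based idea (not the study's actual method or parameters), the sketch below groups synthetic photograph positions and view directions into a few clusters so that each cluster could be drawn as one aggregate arrow instead of many overlapping ones; the use of KMeans, the feature scaling, and the cluster count are assumptions.
```python
# Illustrative sketch only: aggregate photograph positions and view directions into
# clusters so each cluster can be visualized as a single representative arrow.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(200, 2))        # photo positions on a map (x, y)
angles = rng.uniform(0, 2 * np.pi, size=200)          # view directions as angles
directions = np.column_stack([np.cos(angles), np.sin(angles)])

# Cluster jointly on (scaled) position and direction so nearby photos looking at
# different facades still end up in different clusters.
features = np.column_stack([positions / 100.0, directions])
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)

for k in range(8):
    pos_k = positions[labels == k]
    dir_k = directions[labels == k].mean(axis=0)
    norm = np.linalg.norm(dir_k)
    dir_k = dir_k / norm if norm > 1e-9 else dir_k    # guard against a degenerate mean
    print(f"cluster {k}: {len(pos_k):3d} photos, centroid {pos_k.mean(axis=0).round(1)}, "
          f"mean view direction {dir_k.round(2)}")
```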
David Obremski, Alicia L. Schäfer, Benjamin P. Lange, Birgit Lugrin, Elisabeth Ganal, Laura Witt, Tania R. Nuñez, Sascha Schwarz, Frank Schwab,
Put that Away and Talk to Me - the Effects of Smartphone induced Ostracism while Interacting with an Intelligent Virtual Agent, In Proceedings of the 9th International Conference on Human-Agent Interaction 2021.
ACM,
2021. [BibSonomy][Doi]
@inproceedings{Obremski_2021,
author = {David Obremski and Alicia L. Schäfer and Benjamin P. Lange and Birgit Lugrin and Elisabeth Ganal and Laura Witt and Tania R. Nuñez and Sascha Schwarz and Frank Schwab},
year = {2021},
booktitle = {Proceedings of the 9th International Conference on Human-Agent Interaction 2021},
publisher = {ACM},
doi = {10.1145/3472307.3484669},
title = {Put that Away and Talk to Me - the Effects of Smartphone induced Ostracism while Interacting with an Intelligent Virtual Agent}
}
Abstract:
Gabriela Ripka, Silke Grafe, Marc Erich Latoschik,
Mapping pre-service teachers' TPACK development using a social virtual reality and a video-conferencing system, In T. Bastiaens (Eds.), Proceedings of Innovate Learning Summit 2021, pp. 145-159.
2021. [Download][BibSonomy]
@inproceedings{gabrielaripka,
author = {Gabriela Ripka and Silke Grafe and Marc Erich Latoschik},
url = {https://www.learntechlib.org/p/220280/},
year = {2021},
booktitle = {Proceedings of Innovate Learning Summit 2021},
editor = {T. Bastiaens},
pages = {145-159},
title = {Mapping pre-service teachers' TPACK development using a social virtual reality and a video-conferencing system}
}
Abstract:
By offering authentic learning environments that enable remote, synchronous interaction and permit multi-sensory learning experiences, social VR holds great potential for teaching and learning processes. However, research on its use to promote pre-service teachers' TPACK in initial teacher education is still scarce. In this context, this exploratory study addressed the following research question: How did pre-service teachers' TPACK develop using a social VR learning environment prototype in comparison to a video conferencing platform throughout a semester? Following a design-based research approach, an action-oriented pedagogical concept for teaching and learning in social VR was designed and implemented for initial teacher education at a German university with a convenience sample of 14 participants. The lesson plans were collected and analyzed with the help of Epistemic Network Analysis (Shaffer, 2017) at three points of time during the semester and the GATI reflection process (Krauskopf et al., 2018). Further, 14 GATI diagrams gave insights into pre-service teachers' self-estimated TPACK. As the results indicate, pre-service teachers constructed more complex mental models of TPACK in social VR than with the video conferencing platform, suggesting that more interrelations between knowledge domains could be constructed by planning and designing VR-integrated lesson plans.
Samuel Truman, Jakob Seitz, Sebastian von Mammen,
Stigmergic, Diegetic Guidance of Swarm Construction, In 2021 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C), pp. 226-231.
2021. [Download][BibSonomy][Doi]
@inproceedings{Truman:2021ab,
author = {Samuel Truman and Jakob Seitz and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Truman2021aa.pdf},
year = {2021},
booktitle = {2021 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C)},
pages = {226-231},
doi = {10.1109/ACSOS-C52956.2021.00062},
title = {Stigmergic, Diegetic Guidance of Swarm Construction}
}
Abstract:
Samuel Truman, Sebastian von Mammen,
Interactive Self-Assembling Agent Ensembles, In Proceedings of the 1st Games Technology Summit, pp. 29--36.
Würzburg University Press,
2021. [Download][BibSonomy]
@inproceedings{Truman:2021ad,
author = {Samuel Truman and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/TrumanvonMammen2021.pdf},
year = {2021},
booktitle = {Proceedings of the 1st Games Technology Summit},
publisher = {Würzburg University Press},
pages = {29--36},
title = {Interactive Self-Assembling Agent Ensembles}
}
Abstract:
Octavia Madeira, Daniel Gromer, Marc Erich Latoschik, Paul Pauli,
Effects of Acrophobic Fear and Trait Anxiety on Human Behavior in a Virtual Elevated Plus-Maze, In Frontiers in Virtual Reality, Vol. 2.
2021. [Download][BibSonomy][Doi]
@article{madeira:2021x,
author = {Octavia Madeira and Daniel Gromer and Marc Erich Latoschik and Paul Pauli},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.635048},
year = {2021},
volume = {2},
doi = {10.3389/frvir.2021.635048},
title = {Effects of Acrophobic Fear and Trait Anxiety on Human Behavior in a Virtual Elevated Plus-Maze}
}
Abstract:
The Elevated Plus-Maze (EPM) is a well-established apparatus to measure anxiety in rodents, i.e., animals exhibiting an increased relative time spent in the closed vs. the open arms are considered anxious. To examine whether such anxiety-modulated behaviors are conserved in humans, we re-translated this paradigm to a human setting using virtual reality in a Cave Automatic Virtual Environment (CAVE) system. In two studies, we examined whether the EPM exploration behavior of humans is modulated by their trait anxiety and also assessed the individuals' levels of acrophobia (fear of height), claustrophobia (fear of confined spaces), sensation seeking, and the reported anxiety when on the maze. First, we constructed an exact virtual copy of the animal EPM adjusted to human proportions. In analogy to animal EPM studies, participants (N = 30) freely explored the EPM for 5 min. In the second study (N = 61), we redesigned the EPM to make it more human-adapted and to differentiate influences of trait anxiety and acrophobia by introducing various floor textures and lowering the walls of the closed arms to the height of standard handrails. In the first experiment, hierarchical regression analyses of exploration behavior revealed the expected association between open arm avoidance and trait anxiety, and an even stronger association with acrophobic fear. In the second study, results revealed that acrophobia was associated with avoidance of open arms with mesh-floor texture, whereas for trait anxiety, claustrophobia, and sensation seeking, no effect was detected. Also, subjects' fear ratings were moderated by all psychometrics but trait anxiety. In sum, both studies consistently indicate that humans show no general open arm avoidance analogous to rodents and that human EPM behavior is modulated most strongly by acrophobic fear, whereas trait anxiety plays a subordinate role. Thus, we conclude that the criteria for cross-species validity are met insufficiently in this case. Despite their exploratory nature, our studies provide in-depth insights into human exploration behavior on the virtual EPM.
Rebecca Magdalena Hein, Carolin Wienrich, Marc Erich Latoschik,
A Systematic Review of Foreign Language Learning with Immersive Technologies (2001-2020), In AIMS Electronics and Electrical Engineering, Vol. 5(2), pp. 117-145.
2021. [Download][BibSonomy][Doi]
@article{noauthororeditor,
author = {Rebecca Magdalena Hein and Carolin Wienrich and Marc Erich Latoschik},
journal = {AIMS Electronics and Electrical Engineering},
number = {2},
url = {http://www.aimspress.com/article/doi/10.3934/electreng.2021007},
year = {2021},
pages = {117-145},
volume = {5},
doi = {10.3934/electreng.2021007},
title = {A Systematic Review of Foreign Language Learning with Immersive Technologies (2001-2020)}
}
Abstract:
This study provides a systematic literature review of research (2001–2020) in the field of teaching and learning a foreign language and intercultural learning using immersive technologies. Based on 2507 sources, 54 articles were selected according to predefined selection criteria. The review is aimed at providing information about which immersive interventions are being used for foreign language learning and teaching and where potential research gaps exist. The papers were analyzed and coded according to the following categories: (1) investigation form and education level, (2) degree of immersion and technology used, (3) predictors, and (4) criteria. The review identified key research findings relating to the use of immersive technologies for learning and teaching a foreign language and intercultural learning at cognitive, affective, and conative levels. The findings revealed research gaps in the areas of teachers as a target group and virtual reality (VR) as a fully immersive intervention form. Furthermore, the studies reviewed rarely examined behavior or implicit measurements related to inter- and transcultural learning and teaching. Inter- and transcultural learning and teaching in particular is an underrepresented subject of investigation. Finally, concrete suggestions for future research are given. The systematic review contributes to the challenge of interdisciplinary cooperation between pedagogy, foreign language didactics, and Human-Computer Interaction to achieve innovative teaching-learning formats and a successful digital transformation.
Rebecca Hein, Jeanine Steinbock, Maria Eisenmann, Marc Erich Latoschik, Carolin Wienrich,
Development of the InteractionSuitcase in virtual reality to support inter- and transcultural learning processes in English as Foreign Language education, In Andrea Kienle, Andreas Harrer, Joerg M. Haake, Andreas Lingnau (Eds.), DELFI 2021, pp. 91-96. Bonn:
Gesellschaft für Informatik e.V.,
2021. [Download][BibSonomy]
@inproceedings{hein2021development,
author = {Rebecca Hein and Jeanine Steinbock and Maria Eisenmann and Marc Erich Latoschik and Carolin Wienrich},
url = {https://dl.gi.de/bitstream/handle/20.500.12116/36994/DELFI_2021_91-96.pdf?sequence=1&isAllowed=y},
year = {2021},
booktitle = {DELFI 2021},
editor = {Andrea Kienle and Andreas Harrer and Joerg M. Haake and Andreas Lingnau},
publisher = {Gesellschaft für Informatik e.V.},
address = {Bonn},
pages = {91-96},
title = {Development of the InteractionSuitcase in virtual reality to support inter- and transcultural learning processes in English as Foreign Language education}
}
Abstract:
Immersion programs and the experiences they offer learners are irreplaceable. In times of Covid-19, social VR applications can offer enormous potential for the acquisition of inter- and transcultural competencies (ITC). Virtual objects (VO) could initiate communication and reflection processes between learners with different cultural backgrounds and therefore offer an exciting approach. Accordingly, we address the following research questions: (1) What is a sound way to collect objects for the InteractionSuitcase to promote ITC acquisition by means of Social VR? (2) For which aspects do students use the objects when developing an ITC learning scenario? (3) Which VO are considered particularly supportive to initiate and facilitate ITC learning? To answer these research questions, the virtual InteractionSuitcase will be designed and implemented. This paper presents the empirical preliminary work and interim results of the development and evaluation of the InteractionSuitcase, its usage, and the significance of this project for Human-Computer Interaction (HCI) and English as Foreign Language (EFL) research.
J. Heß, P. Karageorgos, T. Richter, B. Müller, A. Riedmann, P. Schaper, B. Lugrin,
Digitale Leseförderung: Konzeption und Evaluation der "Mobilen Leseförderung", In Round Table-Session präsentiert auf der Online-Tagung der Fachgruppe Pädagogische Psychologie der Deutschen Gesellschaft für Psychologie (DGPs).
2021. [BibSonomy]
@conference{hess2021digitale,
author = {J. Heß and P. Karageorgos and T. Richter and B. Müller and A. Riedmann and P. Schaper and B. Lugrin},
year = {2021},
booktitle = {Round Table-Session präsentiert auf der Online-Tagung der Fachgruppe Pädagogische Psychologie der Deutschen Gesellschaft für Psychologie (DGPs)},
title = {Digitale Leseförderung: Konzeption und Evaluation der "Mobilen Leseförderung"}
}
Abstract:
Desirée Weber, Stephan Hertweck, Hisham Alwanni, Lukas D. J. Fiederer, Xi Wang, Fabian Unruh, Martin Fischbach, Marc Erich Latoschik, Tonio Ball,
A Structured Approach to Test the Signal Quality of Electroencephalography Measurements During Use of Head-Mounted Displays for Virtual Reality Applications, In Frontiers in Neuroscience, Vol. 15, p. 1527.
2021. [Download][BibSonomy][Doi]
@article{weber2021structured,
author = {Desirée Weber and Stephan Hertweck and Hisham Alwanni and Lukas D. J. Fiederer and Xi Wang and Fabian Unruh and Martin Fischbach and Marc Erich Latoschik and Tonio Ball},
journal = {Frontiers in Neuroscience},
url = {https://www.frontiersin.org/article/10.3389/fnins.2021.733673},
year = {2021},
pages = {1527},
volume = {15},
doi = {10.3389/fnins.2021.733673},
title = {A Structured Approach to Test the Signal Quality of Electroencephalography Measurements During Use of Head-Mounted Displays for Virtual Reality Applications}
}
Abstract:
Joint applications of virtual reality (VR) systems and electroencephalography (EEG) offer numerous new possibilities ranging from behavioral science to therapy. VR systems allow for highly controlled experimental environments, while EEG offers a non-invasive window to brain activity with a millisecond-ranged temporal resolution. However, EEG measurements are highly susceptible to electromagnetic (EM) noise, and the influence of EM noise from head-mounted displays (HMDs) on EEG signal quality has not been conclusively investigated. In this paper, we propose a structured approach to test HMDs for EM noise potentially harmful to EEG measures. The approach verifies the impact of HMDs on the frequency and time domains of the EEG signal recorded in healthy subjects. The verification task includes a comparison of conditions with and without an HMD during (i) an eyes-open vs. eyes-closed task, and (ii) with respect to the sensory-evoked brain activity. The approach is developed and tested to derive potential effects of two commercial HMDs, the Oculus Rift and the HTC Vive Pro, on the quality of 64-channel EEG measurements. The results show that the HMDs consistently introduce artifacts, especially at the 50 Hz line hum and the 90 Hz HMD refresh rate, as well as their harmonics. The frequency range that is typically most important in non-invasive EEG research and applications (<50 Hz), however, remained largely unaffected. Hence, our findings demonstrate that high-quality EEG recordings, at least in the frequency range up to 50 Hz, can be obtained with the two tested HMDs. However, the number of commercially available HMDs is constantly rising. We strongly suggest thoroughly testing such devices upfront, since each HMD will most likely have its own EM footprint; this article provides a structured approach to implement such tests with arbitrary devices.
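To make the frequency-domain comparison concrete, here is a minimal, hypothetical sketch (not the authors' analysis pipeline) that contrasts Welch power spectra of one channel recorded with and without an HMD and flags narrow-band increases such as a 50 Hz line hum or a 90 Hz refresh component; the synthetic signals, the sampling rate, and the 6 dB threshold are assumptions.
```python
# Hypothetical sketch: compare the power spectrum of an EEG channel with and without
# an HMD to spot narrow-band artifacts (e.g., 50 Hz line hum, 90 Hz display refresh).
# Synthetic signals stand in for real recordings.
import numpy as np
from scipy.signal import welch

fs = 1000                                   # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)

baseline = rng.normal(0, 1, t.size)                                  # "no HMD" condition
with_hmd = baseline + 0.5 * np.sin(2 * np.pi * 50 * t) \
                    + 0.3 * np.sin(2 * np.pi * 90 * t)               # added 50/90 Hz components

f, p_base = welch(baseline, fs=fs, nperseg=4 * fs)
_, p_hmd = welch(with_hmd, fs=fs, nperseg=4 * fs)

# Flag frequencies where the HMD condition shows a marked power increase.
ratio_db = 10 * np.log10(p_hmd / p_base)
for freq, gain in zip(f, ratio_db):
    if gain > 6.0:                          # > 6 dB increase treated as a potential artifact
        print(f"{freq:6.2f} Hz: +{gain:.1f} dB")
```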
Thomas Schröter, Jennifer Tiede, Silke Grafe, Marc Erich Latoschik,
Fostering Teacher Educator Technology Competencies (TETCs) in and with Virtual Reality. Results from an Exploratory Study., In Theo Bastiaens (Eds.), Proceedings of Innovate Learning Summit 2021, p. 160–170. Online, United States:
Association for the Advancement of Computing in Education (AACE),
2021. [BibSonomy]
@inproceedings{SchrTiedGraf2021yx,
author = {Thomas Schröter and Jennifer Tiede and Silke Grafe and Marc Erich Latoschik},
year = {2021},
booktitle = {Proceedings of Innovate Learning Summit 2021},
editor = {Theo Bastiaens},
publisher = {Association for the Advancement of Computing in Education (AACE)},
address = {Online, United States},
pages = {160–170},
title = {Fostering Teacher Educator Technology Competencies (TETCs) in and with Virtual Reality. Results from an Exploratory Study.}
}
Abstract:
This exploratory study presents the findings and implications of a three-hour further development workshop implemented with a convenience sample of six teacher educators from a German university. The workshop aimed at fostering the Teacher Educator Technology Competencies (TETCs)—with a special focus on virtual reality (VR)—while reverting to the didactic principle of action orientation. To test whether this didactic principle is suitable for fostering the media pedagogical competencies of the target group in and with VR and to identify further design principles, a mixed-methods approach was utilized. The data collection consisted of focus group interviews, which were analyzed via qualitative content analysis, and Teacher Educator Technology Surveys (TETS). The analysis revealed that action orientation is a valuable addition to established didactical models in higher education (HE). Also...
Thomas Schröter, Jennifer Tiede, Marc Erich Latoschik,
Fostering Teacher Educator Technology Competencies (TETCs) in and with Virtual Reality. A Case Study, In T. Bastiaens (Eds.), Proceedings of EdMedia + Innovate Learning, pp. 617-629.
Association for the Advancement of Computing in Education (AACE),
2021. [BibSonomy]
@conference{schroter2021fostering,
author = {Thomas Schröter and Jennifer Tiede and Marc Erich Latoschik},
year = {2021},
booktitle = {Proceedings of EdMedia + Innovate Learning},
editor = {T. Bastiaens},
publisher = {Association for the Advancement of Computing in Education (AACE)},
pages = {617-629},
title = {Fostering Teacher Educator Technology Competencies (TETCs) in and with Virtual Reality. A Case Study}
}
Abstract:
Sebastian von Mammen,
Motivating Interactive Self-Organisation, In Lifelike Computing Systems Workshop 2020 and 2021.
2021. [Download][BibSonomy]
@article{Mammen:2021ab,
author = {Sebastian von Mammen},
journal = {Lifelike Computing Systems Workshop 2020 and 2021},
url = {http://ceur-ws.org/Vol-3007/2020-invited.pdf},
year = {2021},
title = {Motivating Interactive Self-Organisation}
}
Abstract:
Kristina Foerster, Rebecca Hein, Silke Grafe, Marc Erich Latoschik, Carolin Wienrich,
Fostering Intercultural Competencies in Initial Teacher Education. Implementation of Educational Design Prototypes using a Social VR Environment, In Theo Bastiaens (Eds.), Proceedings of Innovate Learning Summit 2020 2021, pp. 73--86. Online, United States:
Association for the Advancement of Computing in Education (AACE),
2021. [Download][BibSonomy]
@inproceedings{FoerHeinGraf2021ys,
author = {Kristina Foerster and Rebecca Hein and Silke Grafe and Marc Erich Latoschik and Carolin Wienrich},
url = {https://www.learntechlib.org/pv/220276/},
year = {2021},
booktitle = {Proceedings of Innovate Learning Summit 2020 2021},
editor = {Theo Bastiaens},
publisher = {Association for the Advancement of Computing in Education (AACE)},
address = {Online, United States},
pages = {73--86},
title = {Fostering Intercultural Competencies in Initial Teacher Education. Implementation of Educational Design Prototypes using a Social VR Environment}
}
Abstract:
The combination of globalization and digitalization emphasizes the importance of media-related and intercultural competencies of teacher educators and pre-service teachers. This article reports on the initial prototypical implementation of a pedagogical concept to foster such competencies of pre-service teachers. The proposed pedagogical concept utilizes a social VR framework since related work on the characteristics of VR indicate that this medium is particularly well suited for intercultural professional development processes. The development is integrated into a larger design-based-research approach that develops a theory-guided and empirically grounded professional development concept for teacher educators with a special focus on TETC 8. TETC provide a suitable competence framework capable of aligning both requirements for media-related as well as intercultural competencies. In an exploratory study with student teachers we designed...
Florian Kern, Thore Keser, Florian Niebling, Marc Erich Latoschik,
Using Hand Tracking and Voice Commands to Physically Align Virtual Surfaces in AR for Handwriting and Sketching with HoloLens 2, In 27th ACM Symposium on Virtual Reality Software and Technology, pp. 1-3. Osaka, Japan:
Association for Computing Machinery,
2021. Best poster award. 🏆 [Download][BibSonomy][Doi]
@inproceedings{kern2021using,
author = {Florian Kern and Thore Keser and Florian Niebling and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3489849.3489940},
year = {2021},
booktitle = {27th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {Osaka, Japan},
series = {VRST '21},
pages = {1-3},
doi = {10.1145/3489849.3489940},
title = {Using Hand Tracking and Voice Commands to Physically Align Virtual Surfaces in AR for Handwriting and Sketching with HoloLens 2}
}
Abstract:
In this paper, we adapt an existing VR framework for handwriting and sketching on physically aligned virtual surfaces to AR environments using the Microsoft HoloLens 2. We demonstrate a multimodal input metaphor to control the framework’s calibration features using hand tracking and voice commands. Our technical evaluation of fingertip/surface accuracy and precision on physical tables and walls is in line with existing measurements on comparable hardware, albeit considerably lower compared to previous work using controller-based VR devices. We discuss design considerations and the benefits of our unified input metaphor suitable for controller tracking and hand tracking systems. We encourage extensions and replication by providing a publicly available reference implementation (https://go.uniwue.de/hci-otss-hololens).
Florian Kern, Matthias Popp, Peter Kullmann, Elisabeth Ganal, Marc Erich Latoschik,
3D Printing an Accessory Dock for XR Controllers and its Exemplary Use as XR Stylus, In 27th ACM Symposium on Virtual Reality Software and Technology, pp. 1-3. Osaka, Japan:
Association for Computing Machinery,
2021. [Download][BibSonomy][Doi]
@inproceedings{kern2021printing,
author = {Florian Kern and Matthias Popp and Peter Kullmann and Elisabeth Ganal and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3489849.3489949},
year = {2021},
booktitle = {27th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {Osaka, Japan},
series = {VRST '21},
pages = {1-3},
doi = {10.1145/3489849.3489949},
title = {3D Printing an Accessory Dock for XR Controllers and its Exemplary Use as XR Stylus}
}
Abstract:
This article introduces the accessory dock, a 3D-printed multipurpose extension for consumer-grade XR controllers that enables flexible mounting of self-made and commercial accessories. The uniform design of our concept opens new opportunities for using XR systems for more diverse purposes; e.g., researchers and practitioners could use and compare arbitrary XR controllers within their experiments while ensuring access to buttons and battery housing. As a first example, we present a stylus tip accessory to build an XR Stylus, which can be directly used with frameworks for handwriting, sketching, and UI interaction on physically aligned virtual surfaces. For new XR controllers, we provide instructions on how to adjust the accessory dock to the controller's form factor. A video tutorial for the construction and the source files for 3D printing are publicly available for reuse, replication, and extension (https://go.uniwue.de/hci-otss-accessory-dock).
Karim BenSiSaid, Noureddine Ababou, Amina Ababou, Daniel Roth, Sebastian von Mammen,
Tracking rower motion without on-body sensors using an instrumented machine and an artificial neural network, In Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology.
SAGE Publications Sage UK: London, England,
2021. [Download][BibSonomy]
@article{BenSiSaid:2021aa,
author = {Karim BenSiSaid and Noureddine Ababou and Amina Ababou and Daniel Roth and Sebastian von Mammen},
journal = {Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology},
url = {https://journals.sagepub.com/doi/abs/10.1177/17543371211014108},
year = {2021},
publisher = {SAGE Publications Sage UK: London, England},
title = {Tracking rower motion without on-body sensors using an instrumented machine and an artificial neural network}
}
Abstract:
Lukas Schreiner, Sebastian von Mammen,
Modding Support of Game Engines, In The 16th International Conference on the Foundations of Digital Games (FDG) 2021. New York, NY, USA:
Association for Computing Machinery,
2021. [Download][BibSonomy][Doi]
@inproceedings{Schreiner:2021ab,
author = {Lukas Schreiner and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Schreiner2021aa.pdf},
year = {2021},
booktitle = {The 16th International Conference on the Foundations of Digital Games (FDG) 2021},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {FDG'21},
doi = {10.1145/3472538.3472574},
title = {Modding Support of Game Engines}
}
Abstract:
This paper aims at showing how different game engines support modding. To this end, the concepts of modding and game engines alongside their historical evolution are introduced first. Additionally, some well-known game engines and mods are presented, and their unique features are described. Further, the effects of using game engines and those that modding has on games, developers, and players are laid out. Particular focus is placed on the effects on competitive games. These effects include an enhanced lifespan for games and the requirement for more development effort. Finally, the modding support of selected game engines is illustrated and compared with each other. These engines are the id Tech engine(s), Unreal Engine, the Source engine, and Unity. The paper is then concluded with an outlook on future research possibilities and recommendations to developers.
Yanyan Qi, Dorothée Bruch, Philipp Krop, Martin J Herrmann, Marc Erich Latoschik, Jürgen Deckert, Grit Hein,
Social buffering of human fear is shaped by gender, social concern and the presence of real vs virtual agents, In Translational psychiatry, Vol. 11(1), p. 1–10.
Nature Publishing Group,
2021. [Download][BibSonomy]
@article{qi2021social,
author = {Yanyan Qi and Dorothée Bruch and Philipp Krop and Martin J Herrmann and Marc Erich Latoschik and Jürgen Deckert and Grit Hein},
journal = {Translational psychiatry},
number = {1},
url = {https://psyarxiv.com/qx6jg/download?format=pdf},
year = {2021},
publisher = {Nature Publishing Group},
pages = {1–10},
volume = {11},
title = {Social buffering of human fear is shaped by gender, social concern and the presence of real vs virtual agents}
}
Abstract:
The presence of a partner can attenuate physiological fear responses, a phenomenon known as social buffering. However, not all individuals are equally sociable. Here we investigated whether social buffering of fear is shaped by sensitivity to social anxiety (social concern) and whether these effects are different in females and males. We collected skin conductance responses (SCRs) and affect ratings of female and male participants when they experienced aversive and neutral sounds alone (alone treatment) or in the presence of an unknown person of the same gender (social treatment). Individual differences in social concern were assessed based on a well-established questionnaire. Our results showed that social concern had a stronger effect on social buffering in females than in males. The lower females scored on social concern, the stronger the SCRs reduction in the social compared to the alone treatment. The effect of social concern on social buffering of fear in females disappeared if participants were paired with a virtual agent instead of a real person. Together, these results showed that social buffering of human fear is shaped by gender and social concern. In females, the presence of virtual agents can buffer fear, irrespective of individual differences in social concern. These findings specify factors that shape the social modulation of human fear, and thus might be relevant for the treatment of anxiety disorders.
Carolin Wienrich, Philipp Komma, Stephanie Vogt, Marc Erich Latoschik,
Spatial Presence in Mixed Realities--Considerations about the Concept, Measures, Design, and Experiments, In Richard Skarbez (Eds.), Frontiers in Virtual Reality, Vol. 2, p. 141.
Frontiers,
2021. [Download][BibSonomy][Doi]
@article{wienrichspatial,
author = {Carolin Wienrich and Philipp Komma and Stephanie Vogt and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.694315},
year = {2021},
editor = {Richard Skarbez},
publisher = {Frontiers},
pages = {141},
volume = {2},
doi = {10.3389/frvir.2021.694315},
title = {Spatial Presence in Mixed Realities--Considerations about the Concept, Measures, Design, and Experiments}
}
Abstract:
Plenty of theories, models, measures, and investigations target the understanding of virtual presence, i.e., the sense of presence in immersive Virtual Reality (VR). Other varieties of the so-called eXtended Realities (XR), e.g., Augmented and Mixed Reality (AR and MR), incorporate immersive features to a lesser degree and continuously combine spatial cues from the real physical space and the simulated virtual space. This blurred separation questions the applicability of the accumulated knowledge about the similarities of virtual presence and presence occurring in other varieties of XR, and corresponding outcomes. The present work bridges this gap by analyzing the construct of presence in mixed realities (MR). To achieve this, the following presents (1) a short review of definitions, dimensions, and measurements of presence in VR, and (2) state-of-the-art views on MR. Additionally, we (3) derived a working definition of MR, extending the Milgram continuum. This definition is based on entities reaching from real to virtual manifestations at one time point. Entities possess different degrees of referential power, determining the selection of the frame of reference. Furthermore, we (4) identified three research desiderata, including research questions about the frame of reference, the corresponding dimension of transportation, and the dimension of realism in MR. Mainly, the relationship between the main aspects of virtual presence in immersive VR, i.e., the place illusion and the plausibility illusion, and the referential power of MR entities is discussed regarding the concept, measures, and design of presence in MR. Finally, (5) we suggested an experimental setup to reveal the research heuristic behind experiments investigating presence in MR. The present work contributes to the theories and the meaning of and approaches to simulate and measure presence in MR. We hypothesize that research about essential underlying factors determining user experience (UX) in MR simulations and experiences is still in its infancy and hope this article provides an encouraging starting point to tackle related questions.
David Fernes, Sebastian Oberdörfer, Marc Erich Latoschik,
Recreating a Medieval Mill as a Virtual Learning Environment, In Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (VRST '21). New York, NY, USA:
Association for Computing Machinery,
2021. [Download][BibSonomy][Doi]
@inproceedings{fernes2021recreating,
author = {David Fernes and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-vrst-medieval-mill-preprint.pdf},
year = {2021},
booktitle = {Proceedings of the 27th ACM Symposium on Virtual Reality Software and Technology (VRST '21)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/3489849.3489899},
title = {Recreating a Medieval Mill as a Virtual Learning Environment}
}
Abstract:
Historic buildings shown in open-air museums often lack good accessibility, and visitors can rarely interact with them or with the displayed tools to learn about processes. Providing these buildings in Virtual Reality could be a great supplement for museums, offering accessible and interactive experiences. To investigate the effectiveness of this approach and to derive design guidelines, we developed an interactive virtual replica of a medieval mill. We present the design of the mill and the results of a preliminary usability evaluation.
Birgit Lugrin, Matthias Rehm,
Culture for Socially Interactive Agents, In Birgit Lugrin, Catherine Pelachaud, David Traum (Eds.), The Handbook on Socially Interactive Agents – 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics, Volume 1: Methods, Behavior, Cognition, pp. 463-494.
ACM,
2021. [Download][BibSonomy][Doi]
@incollection{Lugrin_2021,
author = {Birgit Lugrin and Matthias Rehm},
url = {https://doi.org/10.1145%2F3477322.3477336},
year = {2021},
booktitle = {The Handbook on Socially Interactive Agents – 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics, Volume 1: Methods, Behavior, Cognition},
editor = {Birgit Lugrin and Catherine Pelachaud and David Traum},
publisher = {ACM},
pages = {463-494},
doi = {10.1145/3477322.3477336},
title = {Culture for Socially Interactive Agents}
}
Abstract:
Birgit Lugrin,
Introduction to Socially Interactive Agents, In Birgit Lugrin, Catherine Pelachaud, David Traum (Eds.), The Handbook on Socially Interactive Agents – 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics, Volume 1: Methods, Behavior, Cognition, pp. 1-20.
ACM,
2021. [Download][BibSonomy][Doi]
@incollection{Lugrin_2021,
author = {Birgit Lugrin},
url = {https://doi.org/10.1145%2F3477322.3477324},
year = {2021},
booktitle = {The Handbook on Socially Interactive Agents – 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics, Volume 1: Methods, Behavior, Cognition},
editor = {Birgit Lugrin and Catherine Pelachaud and David Traum},
publisher = {ACM},
pages = {1-20},
doi = {10.1145/3477322.3477324},
title = {Introduction to Socially Interactive Agents}
}
Abstract:
Birgit Lugrin, Catherine Pelachaud, David Traum,
The Handbook on Socially Interactive Agents – 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics, Volume 1: Methods, Behavior, Cognition.
ACM,
2021. [Download][BibSonomy][Doi]
@book{2021,
author = {Birgit Lugrin and Catherine Pelachaud and David Traum},
url = {https://doi.org/10.1145%2F3477322},
year = {2021},
publisher = {ACM},
doi = {10.1145/3477322},
title = {The Handbook on Socially Interactive Agents – 20 Years of Research on Embodied Conversational Agents, Intelligent Virtual Agents, and Social Robotics, Volume 1: Methods, Behavior, Cognition}
}
Abstract:
Lucas Plabst, Sebastian Oberdörfer, Oliver Happel, Florian Niebling,
Visualisation methods for patient monitoring in anaesthetic procedures using augmented reality, In VRST '21: 27th ACM Symposium on Virtual Reality Software and Technology, Osaka, Japan, December 2021.
2021. [BibSonomy][Doi]
@inproceedings{plabst2021visualisation,
author = {Lucas Plabst and Sebastian Oberdörfer and Oliver Happel and Florian Niebling},
year = {2021},
booktitle = {VRST '21: 27th ACM Symposium on Virtual Reality Software and Technology, Osaka, Japan, December 2021},
doi = {https://doi.org/10.1145/3489849.3489908},
title = {Visualisation methods for patient monitoring in anaesthetic procedures using augmented reality}
}
Abstract:
In health care, there are still many devices with poorly designed user interfaces that can lead to user errors. Especially in acute care, an error can lead to critical conditions in patients. Previous research has shown that the use of augmented reality can help to better monitor the condition of patients and better detect unforeseen events. The system created in this work is intended to aid in the detection of changes in patient and equipment data in order to increase detection of critical conditions or errors.
Rebecca M Hein, Jeanine Steinbock, Maria Eisenmann, Carolin Wienrich, Marc Erich Latoschik,
Inter- und Transkulturelles Lernen in Virtual Reality - Ein Seminarkonzept für die Lehrkräfteausbildung im Fach Englisch, In Söbke, H. & Weise, M. (Eds.), Wettbewerbsband AVRiL 2021, pp. 34-39.
2021. [Download][BibSonomy][Doi]
@article{hein2021inter,
author = {Rebecca M Hein and Jeanine Steinbock and Maria Eisenmann and Carolin Wienrich and Marc Erich Latoschik},
journal = {In: Söbke, H. & Weise, M. (Hrsg.), Wettbewerbsband AVRiL 2021},
url = {https://dl.gi.de/handle/20.500.12116/37439},
year = {2021},
pages = {34-39},
doi = {10.18420/avril2021_05},
title = {Inter- und Transkulturelles Lernen in Virtual Reality - Ein Seminarkonzept für die Lehrkräfteausbildung im Fach Englisch}
}
Abstract:
Intercultural and transcultural competencies are central elements of active participation in a modern society. In the context of digitalization, foreign language education research is engaging with new concepts for acquiring these competencies in foreign language teaching. A collaboration with HCI research on the use of social VR in the classroom promises positive insights. The central research hypothesis is that fully immersive learning environments particularly foster intercultural and transcultural learning processes, because learners do not merely observe VR worlds but can interact within them in a self-efficacious way and manipulate their surroundings independently. Building on this, a seminar concept was developed in which pre-service teachers of English design intercultural and transcultural teaching and learning scenarios in VR, which they can then use in their own teaching practice. In this way, students not only get to know the potential of VR but also learn how to put it to use in the classroom.
S. Münster, F. Maiwald, J. Bruschke, C. Kröber, R. Dietz, H. Messemer, F. Niebling,
WHERE ARE WE NOW ON THE ROAD TO 4D URBAN HISTORY RESEARCH AND DISCOVERY?, In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. VIII-M-1-2021, pp. 109--116.
2021. [Download][BibSonomy][Doi]
@article{isprs-annals-VIII-M-1-2021-109-2021,
author = {S. Münster and F. Maiwald and J. Bruschke and C. Kröber and R. Dietz and H. Messemer and F. Niebling},
journal = {ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
url = {https://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/VIII-M-1-2021/109/2021/},
year = {2021},
pages = {109--116},
volume = {VIII-M-1-2021},
doi = {10.5194/isprs-annals-VIII-M-1-2021-109-2021},
title = {WHERE ARE WE NOW ON THE ROAD TO 4D URBAN HISTORY RESEARCH AND DISCOVERY?}
}
Abstract:
Philipp Schaper, Melissa Donnermann, Nicole Doser, Martina Lein, Anna Riedmann, Sophia C. Steinhaeusser, Panagiotis Karageorgos, Bettina Müller, Tobias Richter, Birgit Lugrin,
Towards a digital syllable-based reading intervention: An interview study with second-graders, In British HCI Conference 2021 (BCS-HCI 2021).
2021. [BibSonomy]
@inproceedings{schaper2021towards,
author = {Philipp Schaper and Melissa Donnermann and Nicole Doser and Martina Lein and Anna Riedmann and Sophia C. Steinhaeusser and Panagiotis Karageorgos and Bettina Müller and Tobias Richter and Birgit Lugrin},
year = {2021},
booktitle = {British HCI Conference 2021 (BCS-HCI 2021)},
title = {Towards a digital syllable-based reading intervention: An interview study with second-graders}
}
Abstract:
Philipp Schaper, Anna Riedmann, Birgit Lugrin,
Internalisation of Situational Motivation in an E-Learning Scenario Using Gamification, In International Conference on Artificial Intelligence in Education (AIED 2021), pp. 314-319.
2021. [BibSonomy]
@inproceedings{schaper2021internalisation,
author = {Philipp Schaper and Anna Riedmann and Birgit Lugrin},
year = {2021},
booktitle = {International Conference on Artificial Intelligence in Education (AIED 2021)},
pages = {314-319},
title = {Internalisation of Situational Motivation in an E-Learning Scenario Using Gamification}
}
Abstract:
Yann Glémarec, Jean-luc Lugrin, Anne-Gwenn Bosser, Cédric Buche, Marc Erich Latoschik,
Conference Talk Training With a Virtual Audience System, In ACM Symposium on Virtual Reality Software and Technology.
2021. [Download][BibSonomy][Doi]
@article{noauthororeditor,
author = {Yann Glémarec and Jean-luc Lugrin and Anne-Gwenn Bosser and Cédric Buche and Marc Erich Latoschik},
journal = {ACM Symposium on Virtual Reality Software and Technology},
url = {https://dl.acm.org/doi/10.1145/3489849.3489939},
year = {2021},
doi = {10.1145/3489849.3489939},
title = {Conference Talk Training With a Virtual Audience System}
}
Abstract:
This paper presents the first prototype of a virtual audience system (VAS) specifically designed as a training tool for conference talks. This system has been tailored for university seminars dedicated to the preparation and delivery of scientific talks. We describe the required features which have been identified during the development process. We also summarize the preliminary feedback received from lecturers and students during the first deployment of the system in seminars for bachelor and doctoral students. Finally, we discuss future work and research directions. We believe our system architecture and features are providing interesting insights on the development and integration of VR-based educational tools into university curriculum.
Andrea Bartl, Stephan Wenninger, Erik Wolf, Mario Botsch, Marc Erich Latoschik,
Affordable But Not Cheap: A Case Study of the Effects of Two 3D-Reconstruction Methods of Virtual Humans, In Frontiers in Virtual Reality, Vol. 2.
2021. [Download][BibSonomy][Doi]
@article{bartl2021affordable,
author = {Andrea Bartl and Stephan Wenninger and Erik Wolf and Mario Botsch and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.694617},
year = {2021},
volume = {2},
doi = {10.3389/frvir.2021.694617},
title = {Affordable But Not Cheap: A Case Study of the Effects of Two 3D-Reconstruction Methods of Virtual Humans}
}
Abstract:
Realistic and lifelike 3D-reconstruction of virtual humans has various exciting and important use cases. Our and others' appearances have notable effects on ourselves and our interaction partners in virtual environments, e.g., on acceptance, preference, trust, believability, behavior (the Proteus effect), and more. Today, multiple approaches for the 3D-reconstruction of virtual humans exist. They significantly vary in terms of the degree of achievable realism, the technical complexities, and finally, the overall reconstruction costs involved. This article compares two 3D-reconstruction approaches with very different hardware requirements. The high-cost solution uses a typical complex and elaborated camera rig consisting of 94 digital single-lens reflex (DSLR) cameras. The recently developed low-cost solution uses a smartphone camera to create videos that capture multiple views of a person. Both methods use photogrammetric reconstruction and template fitting with the same template model and differ in their adaptation to the method-specific input material.
Each method generates high-quality virtual humans ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity. We compare the results of the two 3D-reconstruction methods in an immersive virtual environment against each other in a user study. Our results indicate that the virtual humans from the low-cost approach are perceived similarly to those from the high-cost approach regarding the perceived similarity to the original, human-likeness, beauty, and uncanniness, despite significant differences in the objectively measured quality. The perceived feeling of change of the own body was higher for the low-cost virtual humans. Quality differences were perceived more strongly for one's own body than for other virtual humans.
Philipp Krop, Samantha Straka, Melanie Ullrich, Maximilian Ertl, Marc Erich Latoschik,
IT-Supported request management for clinical radiology: Analyzing the Radiological Order Workflow through Contextual Interviews, In Mensch und Computer 2021 (MuC '21), September 5-8, 2021, Ingolstadt, Germany, pp. 1-7. New York, NY, USA:
Association for Computing Machinery,
2021. [Download][BibSonomy][Doi]
@inproceedings{kropstraka21,
author = {Philipp Krop and Samantha Straka and Melanie Ullrich and Maximilian Ertl and Marc Erich Latoschik},
url = {https://dl.acm.org/doi/pdf/10.1145/3473856.3473992},
year = {2021},
booktitle = {Mensch und Computer 2021 (MuC '21), September 5-8, 2021, Ingolstadt, Germany},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
pages = {1-7},
doi = {10.1145/3473856.3473992},
title = {IT-Supported request management for clinical radiology: Analyzing the Radiological Order Workflow through Contextual Interviews}
}
Abstract:
Requests for radiological examinations in large medical facilities are a distributed and complex process with potential health-related risks for patients. A user-centered qualitative analysis with contextual interviews uncovered nine core problems, which hinder work efficiency and patient care: (1) Difficulties accessing patient data & requests, (2) the large number of phone calls, (3) restricted & abused access rights, (4) request status difficult to track, (5) paper notes used for patient data, (6) lack of assistance for data entry, (7) frustration through documentation, (8) IT-systems not self-explanatory, and (9) conflict between physicians and radiologists. Contextual interviews were found to be a well-fitting method to analyze and understand this complex process with multiple user roles. This analysis showed that there is room for improvement in the underlying IT systems, workflows and infrastructure. Our data gave useful insight into solutions to these problems and how we can use technology to improve all aspects of request management. We are currently addressing those issues with a user-centered design process to design and implement a mobile application, which we will present in future work.
Gabriela Ripka, Silke Grafe, Marc Erich Latoschik,
Peer group supervision in Zoom and social VR-Preparing preservice teachers for planning and designing digital media integrated classes, In EdMedia+ Innovate Learning, pp. 602-616.
2021. [Download][BibSonomy]
@inproceedings{ripka2021peer,
author = {Gabriela Ripka and Silke Grafe and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-edmedia-peer-group-supervision-in-zoom-and-social-vr.pdf},
year = {2021},
booktitle = {EdMedia+ Innovate Learning},
pages = {602-616},
title = {Peer group supervision in Zoom and social VR-Preparing preservice teachers for planning and designing digital media integrated classes}
}
Abstract:
21st-century challenges demand a change towards collaborative and constructive seminar designs in initial teacher education regarding preservice teachers acquiring meta-conceptual awareness (TPACK) about how to implement emerging technologies in their future profession. Against this background, the paper addresses the following research questions: 1) How should a pedagogical concept for remote initial teacher education be designed to promote metacognitive learning processes of preservice teachers? 2) How do preservice teachers perceive these learning processes in video-based communication and social VR? Regarding the pedagogical concept, peer group supervision and an action- and development-oriented approach using Zoom and social VR were identified as relevant for an instructional design that provides collaborative and constructive learning processes for students. In this exploratory study, 17 students participated in two iterative cycles of peer group supervision performing design tasks in groups. A content analysis of reflective video statements and qualitative group interviews was carried out using a qualitative research design. Results indicate the successful implementation of peer group supervision. Regarding the implementation of the media, Zoom's screen-sharing option and breakout sessions benefitted the consultation process, as did social VR's "realistic" experience of creating a "sense of community".
Dominik Gall, Daniel Roth, Jan-Philipp Stauffert, Julian Zarges, Marc Erich Latoschik,
Virtual Body Ownership Intensifies Emotional Responses to Virtual Stimuli, In Steve Richard DiPaola, Ulysses Bernardet, Jonathan Gratch (Eds.), Frontiers in Psychology - Cognitive Science, Vol. 12, p. 3833.
2021. [Download][BibSonomy][Doi]
@article{gall2021virtual,
author = {Dominik Gall and Daniel Roth and Jan-Philipp Stauffert and Julian Zarges and Marc Erich Latoschik},
journal = {Frontiers in Psychology - Cognitive Science},
url = {https://www.frontiersin.org/article/10.3389/fpsyg.2021.674179},
year = {2021},
editor = {Steve Richard DiPaola and Ulysses Bernardet and Jonathan Gratch},
pages = {3833},
volume = {12},
doi = {10.3389/fpsyg.2021.674179},
title = {Virtual Body Ownership Intensifies Emotional Responses to Virtual Stimuli}
}
Abstract:
Modulating emotional responses to virtual stimuli is a fundamental goal of many immersive interactive applications. In this study, we leverage the illusion of embodiment and show that owning a virtual body provides means to modulate emotional responses. In a single-factor repeated-measures experiment, we manipulated the degree of illusory embodiment and assessed the emotional responses to virtual stimuli. We presented emotional stimuli in the same environment as the virtual body. Participants experienced higher arousal, dominance, and more intense valence in the high embodiment condition compared to the low embodiment condition. The illusion of embodiment thus intensifies the emotional processing of the virtual environment. This result suggests that artificial bodies can increase the effectiveness of immersive applications such as psychotherapy, entertainment, computer-mediated social interactions, or health applications.
Sebastian Oberdörfer, David Heidrich, Sandra Birnstiel, Marc Erich Latoschik,
Enchanted by Your Surrounding? Measuring the Effects of Immersion and Design of Virtual Environments on Decision-Making, In Frontiers in Virtual Reality, Vol. 2, p. 101.
2021. [Download][BibSonomy][Doi]
@article{oberdorfer2021enchanted,
author = {Sebastian Oberdörfer and David Heidrich and Sandra Birnstiel and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2021.679277/full},
year = {2021},
pages = {101},
volume = {2},
doi = {10.3389/frvir.2021.679277},
title = {Enchanted by Your Surrounding? Measuring the Effects of Immersion and Design of Virtual Environments on Decision-Making}
}
Abstract:
Impaired decision-making leads to the inability to distinguish between advantageous and disadvantageous choices. The impairment of a person’s decision-making is a common goal of gambling games. Given the recent trend of gambling using immersive Virtual Reality it is crucial to investigate the effects of both immersion and the virtual environment (VE) on decision-making. In a novel user study, we measured decision-making using three virtual versions of the Iowa Gambling Task (IGT). The versions differed with regard to the degree of immersion and design of the virtual environment. While emotions affect decision-making, we further measured the positive and negative affect of participants. A higher visual angle on a stimulus leads to an increased emotional response. Thus, we kept the visual angle on the Iowa Gambling Task the same between our conditions. Our results revealed no significant impact of immersion or the VE on the IGT. We further found no significant difference between the conditions with regard to positive and negative affect. This suggests that neither the medium used nor the design of the VE causes an impairment of decision-making. However, in combination with a recent study, we provide first evidence that a higher visual angle on the IGT leads to an effect of impairment.
Fabian Unruh, Maximilian Landeck, Sebastian Oberdörfer, Jean-Luc Lugrin, Marc Erich Latoschik,
The Influence of Avatar Embodiment on Time Perception - Towards VR for Time-Based Therapy, In Frontiers in Virtual Reality, Vol. 2, p. 71.
2021. [Download][BibSonomy][Doi]
@article{10.3389/frvir.2021.658509,
author = {Fabian Unruh and Maximilian Landeck and Sebastian Oberdörfer and Jean-Luc Lugrin and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.658509},
year = {2021},
pages = {71},
volume = {2},
doi = {10.3389/frvir.2021.658509},
title = {The Influence of Avatar Embodiment on Time Perception - Towards VR for Time-Based Therapy}
}
Abstract:
Psycho-pathological conditions, such as depression or schizophrenia, are often accompanied by a distorted perception of time. People suffering from these conditions often report that the passage of time slows down considerably and that they are "stuck in time." Virtual Reality (VR) could potentially help to diagnose and maybe treat such mental conditions. However, the conditions in which a VR simulation could correctly diagnose a time perception deviation are still unknown. In this paper, we present an experiment investigating the difference in time experience with and without a virtual body in VR, also known as an avatar. The process of substituting a person's body with a virtual body is called avatar embodiment. Numerous studies demonstrated interesting perceptual, emotional, behavioral, and psychological effects caused by avatar embodiment. However, the relations between time perception and avatar embodiment are still unclear. Whether or not the presence or absence of an avatar is already influencing time perception is still open to question. Therefore, we conducted a between-subjects design with and without avatar embodiment as well as a real condition (avatar vs. no-avatar vs. real). A group of 105 healthy subjects had to wait for seven and a half minutes in a room without any distractors (e.g., no window, magazine, people, decoration) or time indicators (e.g., clocks, sunlight). The virtual environment replicates the real physical environment. Participants were unaware that they would be asked to estimate their waiting time duration and to describe their experience of the passage of time at a later stage. Our main finding shows that the presence of an avatar leads to a significantly faster perceived passage of time. It seems promising to integrate avatar embodiment in future VR time-based therapy applications as they potentially could modulate a user's perception of the passage of time. We also found no significant difference in time perception between the real and the VR conditions (avatar, no-avatar), but further research is needed to better understand this outcome.
Andreas Halbig, Marc Erich Latoschik,
A Systematic Review of Physiological Measurements, Factors, Methods, and Applications in Virtual Reality, In Frontiers in Virtual Reality, Vol. 2, p. 89.
2021. [Download][BibSonomy][Doi]
@article{10.3389/frvir.2021.694567,
author = {Andreas Halbig and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-frotniers-review-physiological-measurements.pdf},
year = {2021},
pages = {89},
volume = {2},
doi = {10.3389/frvir.2021.694567},
title = {A Systematic Review of Physiological Measurements, Factors, Methods, and Applications in Virtual Reality}
}
Abstract:
Measurements of physiological parameters provide an objective, often non-intrusive, and (at least semi-)automatic evaluation and utilization of user behavior. In addition, specific hardware devices of Virtual Reality (VR) often ship with built-in sensors, i.e., eye-tracking and movement sensors. Hence, the combination of physiological measurements and VR applications seems promising. Several approaches have investigated the applicability and benefits of this combination for various fields of application. However, the range of possible application fields, coupled with potentially useful and beneficial physiological parameters, sensor types, target variables and factors, and analysis approaches and techniques is manifold. This article provides a systematic overview and an extensive state-of-the-art review of the usage of physiological measurements in VR. We identified 1,119 works that make use of physiological measurements in VR. Within these, we identified 32 approaches that focus on the classification of characteristics of experience, common in VR applications. The first part of this review categorizes the 1,119 works by field of application, i.e., therapy, training, entertainment, and communication and interaction, as well as by the specific target factors and variables measured by the physiological parameters. An additional category summarizes general VR approaches applicable to all specific fields of application since they target typical VR qualities. In the second part of this review, we analyze the target factors and variables regarding the respective methods used for an automatic analysis and, potentially, classification. For example, we highlight which measurement setups have been proven to be sensitive enough to distinguish different levels of arousal, valence, anxiety, stress, or cognitive workload in the virtual realm. This work may prove useful for all researchers wanting to use physiological data in VR and who want to have a good overview of prior approaches taken, their benefits and potential drawbacks.
Sebastian Oberdörfer, Sandra Birnstiel, Marc Erich Latoschik, Silke Grafe,
Mutual Benefits: Interdisciplinary Education of Pre-Service Teachers and HCI Students in VR/AR Learning Environment Design, In Frontiers in Education, Vol. 6, p. 233.
2021. [Download][BibSonomy][Doi]
@article{oberdorfer2021mutual,
author = {Sebastian Oberdörfer and Sandra Birnstiel and Marc Erich Latoschik and Silke Grafe},
journal = {Frontiers in Education},
url = {https://www.frontiersin.org/articles/10.3389/feduc.2021.693012/full},
year = {2021},
pages = {233},
volume = {6},
doi = {10.3389/feduc.2021.693012},
title = {Mutual Benefits: Interdisciplinary Education of Pre-Service Teachers and HCI Students in VR/AR Learning Environment Design}
}
Abstract:
The successful development and classroom integration of Virtual (VR) and Augmented Reality (AR) learning environments requires competencies and content knowledge with respect to media didactics and the respective technologies. The paper discusses a pedagogical concept specifically aiming at the interdisciplinary education of pre-service teachers in collaboration with human-computer interaction students. The students’ overarching goal is the interdisciplinary realization and integration of VR/AR learning environments in teaching and learning concepts. To assist this approach, we developed a specific tutorial guiding the developmental process. We evaluate and validate the effectiveness of the overall pedagogical concept by analyzing the change in attitudes regarding 1) the use of VR/AR for educational purposes and in competencies and content knowledge regarding 2) media didactics and 3) technology. Our results indicate a significant improvement in the knowledge of media didactics and technology. We further report on four STEM learning environments that have been developed during the seminar.
Sebastian Oberdörfer, Anne Elsässer, Silke Grafe, Marc Erich Latoschik,
Grab the Frog: Comparing Intuitive Use and User Experience of a Smartphone-only, AR-only, and Tangible AR Learning Environment, In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI '21). New York, NY, USA:
Association for Computing Machinery,
2021. [Download][BibSonomy][Doi]
@inproceedings{oberdorfer2021comparing,
author = {Sebastian Oberdörfer and Anne Elsässer and Silke Grafe and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-mhci-frog-preprint.pdf},
year = {2021},
booktitle = {Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction (MobileHCI '21)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
doi = {10.1145/3447526.3472016},
title = {Grab the Frog: Comparing Intuitive Use and User Experience of a Smartphone-only, AR-only, and Tangible AR Learning Environment}
}
Abstract:
The integration of Augmented Reality (AR) in teaching concepts allows for the visualization of complex learning contents and can simultaneously enhance the learning motivation. By providing Tangible Augmented Reality (TAR), an AR learning environment receives a haptic aspect and allows for a direct manipulation of augmented learning materials. However, manipulating tangible objects while using handheld AR might reduce the intuitive use and hence user experience. Users need to simultaneously control the application and manipulate the tangible object. Therefore, we compare the differences in intuitive use and user experience evoked by varied technologies of knowledge presentation in an educational context. In particular, we compare a TAR learning environment targeting the learning of the anatomy of vertebrates to its smartphone-only and AR-only versions. The three versions of the learning environment only differ in their method of knowledge presentation. The three versions show a similar perceived intuitive use. The TAR version, however, yielded a significantly higher attractiveness and stimulation than AR-only and smartphone-only. This suggests a positive effect of TAR learning environments on the overall learning experience.
Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Aryana Collins Jackson, Cédric Buche, Marc Erich Latoschik,
Indifferent or Enthusiastic? Virtual Audiences Animation and Perception in Virtual Reality, In Frontiers in Virtual Reality, Vol. 2, p. 72.
2021. [Download][BibSonomy][Doi]
@article{10.3389/frvir.2021.666232,
author = {Yann Glémarec and Jean-Luc Lugrin and Anne-Gwenn Bosser and Aryana Collins Jackson and Cédric Buche and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.666232},
year = {2021},
pages = {72},
volume = {2},
doi = {10.3389/frvir.2021.666232},
title = {Indifferent or Enthusiastic? Virtual Audiences Animation and Perception in Virtual Reality}
}
Abstract:
In this paper, we present a virtual audience simulation system for Virtual Reality (VR). The system implements an audience perception model controlling the nonverbal behaviors of virtual spectators, such as facial expressions or postures. Groups of virtual spectators are animated by a set of nonverbal behavior rules representing a particular audience attitude (e.g., indifferent or enthusiastic). Each rule specifies a nonverbal behavior category: posture, head movement, facial expression and gaze direction as well as three parameters: type, frequency and proportion. In a first user study, we asked participants to pretend to be a speaker in VR and then create sets of nonverbal behaviour parameters to simulate different attitudes. Participants manipulated the nonverbal behaviours of a single virtual spectator to match specific levels of engagement and opinion toward them. In a second user study, we used these parameters to design different types of virtual audiences with our nonverbal behavior rules and evaluated their perceptions. Our results demonstrate our system’s ability to create virtual audiences with three different perceived attitudes: indifferent, critical, enthusiastic. The analysis of the results also led to a set of recommendations and guidelines regarding attitudes and expressions for future design of audiences for VR therapy and training applications.
Florian Kern, Peter Kullmann, Elisabeth Ganal, Kristof Korwisi, Rene Stingl, Florian Niebling, Marc Erich Latoschik,
Off-The-Shelf Stylus: Using XR Devices for Handwriting and Sketching on Physically Aligned Virtual Surfaces, In Daniel Zielasko (Eds.), Frontiers in Virtual Reality, Vol. 2, p. 69.
2021. [Download][BibSonomy][Doi]
@article{kern2021offtheshelf,
author = {Florian Kern and Peter Kullmann and Elisabeth Ganal and Kristof Korwisi and Rene Stingl and Florian Niebling and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2021.684498},
year = {2021},
editor = {Daniel Zielasko},
pages = {69},
volume = {2},
doi = {10.3389/frvir.2021.684498},
title = {Off-The-Shelf Stylus: Using XR Devices for Handwriting and Sketching on Physically Aligned Virtual Surfaces}
}
Abstract:
This article introduces the Off-The-Shelf Stylus (OTSS), a framework for 2D interaction (in 3D) as well as for handwriting and sketching with digital pen, ink, and paper on physically aligned virtual surfaces in Virtual, Augmented, and Mixed Reality (VR, AR, MR: XR for short). OTSS supports self-made XR styluses based on consumer-grade six-degrees-of-freedom XR controllers and commercially available styluses. The framework provides separate modules for three basic but vital features: 1) The stylus module provides stylus construction and calibration features. 2) The surface module provides surface calibration and visual feedback features for virtual-physical 2D surface alignment using our so-called 3ViSuAl procedure, and surface interaction features. 3) The evaluation suite provides a comprehensive test bed combining technical measurements for precision, accuracy, and latency with extensive usability evaluations including handwriting and sketching tasks based on established visuomotor, graphomotor, and handwriting research. The framework’s development is accompanied by an extensive open source reference implementation targeting the Unity game engine using an Oculus Rift S headset and Oculus Touch controllers. The development compares three low-cost and low-tech options to equip controllers with a tip and includes a web browser-based surface providing support for interacting, handwriting, and sketching. The evaluation of the reference implementation based on the OTSS framework identified an average stylus precision of 0.98 mm (SD = 0.54 mm) and an average surface accuracy of 0.60 mm (SD = 0.32 mm) in a seated VR environment. The time for displaying the stylus movement as digital ink on the web browser surface in VR was 79.40 ms on average (SD = 23.26 ms), including the physical controller’s motion-to-photon latency visualized by its virtual representation (M = 42.57 ms, SD = 15.70 ms). The usability evaluation (N = 10) revealed a low task load, high usability, and high user experience. Participants successfully reproduced given shapes and created legible handwriting, indicating that the OTSS and its reference implementation are ready for everyday use. We provide source code access to our implementation, including stylus and surface calibration and surface interaction features, making it easy to reuse, extend, adapt and/or replicate previous results (https://go.uniwue.de/hci-otss).
Melissa Donnermann, Martina Lein, Tanja Messingschlager, Anna Riedmann, Philipp Schaper, Sophia Steinhaeusser, Birgit Lugrin,
Social robots and gamification for technology supported learning: An empirical study on engagement and motivation, In Computers in Human Behavior, Vol. 121, p. 106792.
Elsevier,
2021. [BibSonomy][Doi]
@article{Donnermann_2021,
author = {Melissa Donnermann and Martina Lein and Tanja Messingschlager and Anna Riedmann and Philipp Schaper and Sophia Steinhaeusser and Birgit Lugrin},
journal = {Computers in Human Behavior},
year = {2021},
publisher = {Elsevier},
pages = {106792},
volume = {121},
doi = {10.1016/j.chb.2021.106792},
title = {Social robots and gamification for technology supported learning: An empirical study on engagement and motivation}
}
Abstract:
Nina Döllinger, Carolin Wienrich, Marc Erich Latoschik,
Challenges and Opportunities of Immersive Technologies for Mindfulness Meditation: A Systematic Review, In Frontiers in Virtual Reality, Vol. 2, p. 29.
2021. [Download][BibSonomy][Doi]
@article{10.3389/frvir.2021.644683,
author = {Nina Döllinger and Carolin Wienrich and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.644683},
year = {2021},
pages = {29},
volume = {2},
doi = {10.3389/frvir.2021.644683},
title = {Challenges and Opportunities of Immersive Technologies for Mindfulness Meditation: A Systematic Review}
}
Abstract:
Mindfulness is considered an important factor of an individual's subjective well-being. Consequently, Human-Computer Interaction (HCI) has investigated approaches that strengthen mindfulness, i.e., by inventing multimedia technologies to support mindfulness meditation. These approaches often use smartphones, tablets, or consumer-grade desktop systems to allow everyday usage in users' private lives or in the scope of organized therapies. Virtual, Augmented, and Mixed Reality (VR, AR, MR; in short: XR) significantly extend the design space for such approaches. XR covers a wide range of potential sensory stimulation, perceptive and cognitive manipulations, content presentation, interaction, and agency. These facilities are linked to typical XR-specific perceptions that are conceptually closely related to mindfulness research, such as (virtual) presence and (virtual) embodiment. However, a successful exploitation of XR that strengthens mindfulness requires a systematic analysis of the potential interrelation and influencing mechanisms between XR technology, its properties, factors, and phenomena and existing models and theories of the construct of mindfulness. This article reports such a systematic analysis of XR-related research from HCI and life sciences to determine the extent to which existing research frameworks on HCI and mindfulness can be applied to XR technologies, the potential of XR technologies to support mindfulness, and open research gaps. Fifty papers of ACM Digital Library and National Institutes of Health's National Library of Medicine (PubMed) with and without empirical efficacy evaluation were included in our analysis. The results reveal that at the current time, empirical research on XR-based mindfulness support mainly focuses on therapy and therapeutic outcomes. Furthermore, most of the currently investigated XR-supported mindfulness interactions are limited to vocally guided meditations within nature-inspired virtual environments. While an analysis of empirical research on those systems did not reveal differences in mindfulness compared to non-mediated mindfulness practices, various design proposals illustrate that XR has the potential to provide interactive and body-based innovations for mindfulness practice. We propose a structured approach for future work to specify and further explore the potential of XR as mindfulness-support. The resulting framework provides design guidelines for XR-based mindfulness support based on the elements and psychological mechanisms of XR interactions.
Yasin Raies, Sebastian von Mammen,
A Swarm Grammar-Based Approach to Virtual World Generation, In Juan Romero, Tiago Martins, Nereida Rodríguez-Fernández (Eds.), Artificial Intelligence in Music, Sound, Art and Design, pp. 459--474. Cham:
Springer International Publishing,
2021. [Download][BibSonomy]
@inproceedings{10.1007/978-3-030-72914-1_30,
author = {Yasin Raies and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Raies2021aa.pdf},
year = {2021},
booktitle = {Artificial Intelligence in Music, Sound, Art and Design},
editor = {Juan Romero and Tiago Martins and Nereida Rodríguez-Fernández},
publisher = {Springer International Publishing},
address = {Cham},
pages = {459--474},
title = {A Swarm Grammar-Based Approach to Virtual World Generation}
}
Abstract:
In this work we formulate and propose an extended version of the multi-agent Swarm Grammar (SG) model for the generation of virtual worlds. It unfolds a comparatively small database into a complex world featuring terrain, vegetation and bodies of water. This approach allows for adaptivity of generated assets to their environment, unbounded worlds and interactivity in their generation. In order to evaluate the model, we conducted sensitivity analyses at a local interaction scale. In addition, at a global scale, we investigated two virtual environments, discussing notable interactions, recurring configuration patterns, and obstacles in working with SGs. These analyses showed that SGs can create visually interesting virtual worlds, but require further work in ease of use. Lastly we identified which future extensions might shrink required database sizes.
Johannes Büttner, Christian Merz, Sebastian von Mammen,
Playing with Dynamic Systems - Battling Swarms in Virtual Reality, In Pedro A. Castillo, Juan Luis Jiménez Laredo (Eds.), Applications of Evolutionary Computation, pp. 309--324. Cham:
Springer International Publishing,
2021. [Download][BibSonomy]
@inproceedings{Buttner:2021ab,
author = {Johannes Büttner and Christian Merz and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-EvoStar-PlayingWithDynamicSystems.pdf},
year = {2021},
booktitle = {Applications of Evolutionary Computation},
editor = {Pedro A. Castillo and Juan Luis Jiménez Laredo},
publisher = {Springer International Publishing},
address = {Cham},
pages = {309--324},
title = {Playing with Dynamic Systems - Battling Swarms in Virtual Reality}
}
Abstract:
In this paper, we present a serious game with the goal to provide an engaging and immersive experience to foster the players' understanding of dynamic networked systems. Confronted with attacking swarm networks, the player has to analyse their underlying network topologies and to systematically dismantle the swarms using a set of different weapons. We detail the game design, including the artificial intelligence of the swarm, the play mechanics and the level designs. Finally, we conducted an analysis of the play performances of a test group over the course of the game which revealed a positive learning outcome.
Carolin Wienrich, Marc Erich Latoschik,
eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research, In Frontiers in Virtual Reality, Vol. 2, p. 94.
2021. [Download][BibSonomy][Doi]
@article{wienrich2021extended,
author = {Carolin Wienrich and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.686783},
year = {2021},
pages = {94},
volume = {2},
doi = {10.3389/frvir.2021.686783},
title = {eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research}
}
Abstract:
Artificial Intelligence (AI) covers a broad spectrum of computational problems and use cases. Many of those implicate profound and sometimes intricate questions of how humans interact or should interact with AIs. Moreover, many users or future users do have abstract ideas of what AI is, significantly depending on the specific embodiment of AI applications. Human-centered-design approaches would suggest evaluating the impact of different embodiments on human perception of and interaction with AI. An approach that is difficult to realize due to the sheer complexity of application fields and embodiments in reality. However, here XR opens new possibilities to research human-AI interactions. The article's contribution is twofold: First, it provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum as a framework for and a perspective of different approaches of XR-AI combinations. It motivates XR-AI combinations as a method to learn about the effects of prospective human-AI interfaces and shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces. Second, the article provides two exemplary experiments investigating the aforementioned approach for two distinct AI-systems. The first experiment reveals an interesting gender effect in human-robot interaction, while the second experiment reveals an Eliza effect of a recommender system. Here the article introduces two paradigmatic implementations of the proposed XR testbed for human-AI interactions and interfaces and shows how a valid and systematic investigation can be conducted. In sum, the article opens new perspectives on how XR benefits human-centered AI design and development.
Sophia C. Steinhaeusser, Philipp Schaper, Ohenewa Bediako Akuffo, Paula Friedrich, Jülide Ön, Birgit Lugrin,
Anthropomorphize me! - Effects of Robot Gender on Listeners' Perception of the Social Robot NAO in a Storytelling Use Case, In Cindy Bethel, Ana Paiva, Elizabeth Broadbent, David Feil-Seifer, Daniel Szafir (Eds.), Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021).
ACM,
2021. [Download][BibSonomy][Doi]
@inproceedings{Steinhaeusser_2021,
author = {Sophia C. Steinhaeusser and Philipp Schaper and Ohenewa Bediako Akuffo and Paula Friedrich and Jülide Ön and Birgit Lugrin},
url = {https://doi.org/10.1145%2F3434074.3447228},
year = {2021},
booktitle = {Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021)},
editor = {Cindy Bethel and Ana Paiva and Elizabeth Broadbent and David Feil-Seifer and Daniel Szafir},
publisher = {ACM},
doi = {10.1145/3434074.3447228},
title = {Anthropomorphize me! - Effects of Robot Gender on Listeners' Perception of the Social Robot NAO in a Storytelling Use Case}
}
Abstract:
Sophia C. Steinhaeusser, Philipp Schaper, Birgit Lugrin,
Comparing a Robotic Storyteller versus Audio Book with Integration of Sound Effects and Background Music, In Cindy Bethel, Ana Paiva, Elizabeth Broadbent, David Feil-Seifer, Daniel Szafir (Eds.), Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021).
ACM,
2021. [Download][BibSonomy][Doi]
@inproceedings{Steinhaeusser_2021,
author = {Sophia C. Steinhaeusser and Philipp Schaper and Birgit Lugrin},
url = {https://doi.org/10.1145%2F3434074.3447186},
year = {2021},
booktitle = {Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021)},
editor = {Cindy Bethel and Ana Paiva and Elizabeth Broadbent and David Feil-Seifer and Daniel Szafir},
publisher = {ACM},
doi = {10.1145/3434074.3447186},
title = {Comparing a Robotic Storyteller versus Audio Book with Integration of Sound Effects and Background Music}
}
Abstract:
Sophia C. Steinhaeusser, Juliane J. Gabel, Birgit Lugrin,
Your New Friend NAO vs. Robot No. 783 - Effects of Personal or Impersonal Framing in a Robotic Storytelling Use Case, In Cindy Bethel, Ana Paiva, Elizabeth Broadbent, David Feil-Seifer, Daniel Szafir (Eds.), Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021).
ACM,
2021. [Download][BibSonomy][Doi]
@inproceedings{Steinhaeusser_2021,
author = {Sophia C. Steinhaeusser and Juliane J. Gabel and Birgit Lugrin},
url = {https://doi.org/10.1145%2F3434074.3447187},
year = {2021},
booktitle = {Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2021)},
editor = {Cindy Bethel and Ana Paiva and Elizabeth Broadbent and David Feil-Seifer and Daniel Szafir},
publisher = {ACM},
doi = {10.1145/3434074.3447187},
title = {Your New Friend NAO vs. Robot No. 783 - Effects of Personal or Impersonal Framing in a Robotic Storytelling Use Case}
}
Abstract:
Melissa Donnermann, Philipp Schaper, Birgit Lugrin,
Towards Adaptive Robotic Tutors in Universities: A Field Study, In International Conference on Persuasive Technology (PERSUASIVE 2021).
Springer,
2021. [BibSonomy]
@inproceedings{donnermann2021towards,
author = {Melissa Donnermann and Philipp Schaper and Birgit Lugrin},
year = {2021},
booktitle = {International Conference on Persuasive Technology (PERSUASIVE 2021)},
publisher = {Springer},
title = {Towards Adaptive Robotic Tutors in Universities: A Field Study}
}
Abstract:
Carla Winter, Florian Kern, Dominik Gall, Marc Erich Latoschik, Paul Pauli, Ivo Käthner,
Immersive virtual reality during gait rehabilitation increases walking speed and motivation: A usability evaluation with healthy participants and patients with multiple sclerosis and stroke, In Journal of NeuroEngineering and Rehabilitation, Vol. 18(68).
2021. [Download][BibSonomy][Doi]
@article{winter2021immersive,
author = {Carla Winter and Florian Kern and Dominik Gall and Marc Erich Latoschik and Paul Pauli and Ivo Käthner},
journal = {Journal of NeuroEngineering and Rehabilitation},
number = {68},
url = {https://jneuroengrehab.biomedcentral.com/articles/10.1186/s12984-021-00848-w},
year = {2021},
volume = {18},
doi = {10.1186/s12984-021-00848-w},
title = {Immersive virtual reality during gait rehabilitation increases walking speed and motivation: A usability evaluation with healthy participants and patients with multiple sclerosis and stroke}
}
Abstract:
Background. The rehabilitation of gait disorders in multiple sclerosis (MS) and stroke patients is often based on conventional treadmill training. Virtual reality (VR)-based treadmill training can increase motivation and improve therapy outcomes.
Objective. The present study aimed at (1) demonstrating the feasibility and acceptance of an immersive virtual reality application (presented via head-mounted display, HMD) for gait rehabilitation with patients, and (2) comparing its effects to a semi-immersive presentation (via a monitor) and conventional treadmill training without VR.
Methods and results. 36 healthy participants and 14 persons with MS or stroke participated in each of the three experimental conditions. For both groups, the walking speed in the HMD condition was higher than in treadmill training without VR. Healthy participants reported a higher motivation after the HMD condition as compared with the other conditions. Importantly, no side effects in the sense of simulator sickness occurred, and usability ratings were high. Most of the healthy study participants (89%) and patients (71%) preferred the HMD-based training among the three conditions, and most patients could imagine using it more frequently.
Conclusion. The study demonstrated the feasibility of combining treadmill training with immersive VR. Due to its high usability and low side effects, the immersive system could serve as a valid alternative to conventional treadmill training in gait rehabilitation. It might be particularly well suited to improve patients' training motivation and training outcomes, e.g., walking speed, compared with treadmill training using no or only semi-immersive VR.
Martina Mara, Jan-Philipp Stein, Marc Erich Latoschik, Birgit Lugrin, Constanze Schreiner, Rafael Hostettler, Markus Appel,
User Responses to a Humanoid Robot Observed in Real Life, Virtual Reality, 3D and 2D, In Frontiers in Psychology - Human-Media Interaction, Vol. 12, p. 1152.
2021. [Download][BibSonomy][Doi]
@article{mara2021responses,
author = {Martina Mara and Jan-Philipp Stein and Marc Erich Latoschik and Birgit Lugrin and Constanze Schreiner and Rafael Hostettler and Markus Appel},
journal = {Frontiers in Psychology - Human-Media Interaction},
url = {https://www.frontiersin.org/article/10.3389/fpsyg.2021.633178},
year = {2021},
pages = {1152},
volume = {12},
doi = {10.3389/fpsyg.2021.633178},
title = {User Responses to a Humanoid Robot Observed in Real Life, Virtual Reality, 3D and 2D}
}
Abstract:
Humanoid robots (i.e., robots with a human-like body) are projected to be mass marketed in the future in several fields of application. Today, however, user evaluations of humanoid robots are often based on mediated depictions rather than actual observations or interactions with a robot, which holds true not least for scientific user studies. People can be confronted with robots in various modes of presentation, among them (1) 2D videos, (2) 3D, i.e., stereoscopic videos, (3) immersive Virtual Reality (VR), or (4) live on site. A systematic investigation into how such differential modes of presentation influence user perceptions of a robot is still lacking. Thus, the current study systematically compares the effects of different presentation modes with varying immersive potential on user evaluations of a humanoid service robot. Participants (N = 120) observed an interaction between a humanoid service robot and an actor either on 2D or 3D video, via a virtual reality headset (VR) or live. We found support for the expected effect of the presentation mode on perceived immediacy. Effects regarding the degree of human likeness that was attributed to the robot were mixed. The presentation mode had no influence on evaluations in terms of eeriness, likability, and purchase intentions. Implications for empirical research on humanoid robots and practice are discussed.
Markus Appel, Birgit Lugrin, Mayla Kühle, Corinna Heindl,
The emotional robotic storyteller: On the influence of affect congruency on narrative transportation, robot perception, and persuasion, In Computers in Human Behavior, p. 106749.
Elsevier,
2021. [Download][BibSonomy][Doi]
@article{Appel_2021,
author = {Markus Appel and Birgit Lugrin and Mayla Kühle and Corinna Heindl},
journal = {Computers in Human Behavior},
url = {https://doi.org/10.1016%2Fj.chb.2021.106749},
year = {2021},
publisher = {Elsevier},
pages = {106749},
doi = {10.1016/j.chb.2021.106749},
title = {The emotional robotic storyteller: On the influence of affect congruency on narrative transportation, robot perception, and persuasion}
}
Abstract:
David Obremski, Jean-Luc Lugrin, Philipp Schaper, Birgit Lugrin,
Non-native speaker perception of Intelligent Virtual Agents in two languages: the impact of amount and type of grammatical mistakes, In Journal on Multimodal User Interfaces.
2021. [Download][BibSonomy][Doi]
@article{Obremski2021,
author = {David Obremski and Jean-Luc Lugrin and Philipp Schaper and Birgit Lugrin},
journal = {Journal on Multimodal User Interfaces},
url = {https://doi.org/10.1007/s12193-021-00369-9},
year = {2021},
doi = {10.1007/s12193-021-00369-9},
title = {Non-native speaker perception of Intelligent Virtual Agents in two languages: the impact of amount and type of grammatical mistakes}
}
Abstract:
Having a mixed-cultural membership is becoming increasingly common in our modern society. It is thus beneficial in several ways to create Intelligent Virtual Agents (IVAs) that reflect a mixed-cultural background as well, e.g., for educational settings. For research with such IVAs, it is essential that they are classified as non-native by members of a target culture. In this paper, we focus on variations of IVAs' speech to create the impression of non-native speakers that are identified as such by speakers of two different mother tongues. In particular, we investigate grammatical mistakes and identify thresholds beyond which the agent is clearly categorised as a non-native speaker. Therefore, we conducted two experiments: one for native speakers of German, and one for native speakers of English. Results of the German study indicate that beyond 10% of word order mistakes and 25% of infinitive mistakes, German-speaking IVAs are perceived as non-native speakers. Results of the English study indicate that beyond 50% of omission mistakes and 50% of infinitive mistakes, English-speaking IVAs are perceived as non-native speakers. We believe these thresholds constitute helpful guidelines for computational approaches of non-native speaker generation, simplifying research with IVAs in mixed-cultural settings.
Octávia Madeira, Daniel Gromer, Marc Erich Latoschik, Paul Pauli,
Effects of Acrophobic Fear and Trait Anxiety on Human Behavior in a Virtual Elevated Plus-Maze, In Frontiers in Virtual Reality, Vol. 2, p. 19.
2021. [Download][BibSonomy][Doi]
@article{madeira2021effects,
author = {Octávia Madeira and Daniel Gromer and Marc Erich Latoschik and Paul Pauli},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2021.635048},
year = {2021},
pages = {19},
volume = {2},
doi = {10.3389/frvir.2021.635048},
title = {Effects of Acrophobic Fear and Trait Anxiety on Human Behavior in a Virtual Elevated Plus-Maze}
}
Abstract:
The Elevated Plus-Maze (EPM) is a well-established apparatus to measure anxiety in rodents, i.e., animals exhibiting an increased relative time spent in the closed versus the open arms are considered anxious. To examine whether such anxiety-modulated behavior is conserved in humans, we re-translated this paradigm to a human setting using virtual reality in a Cave Automatic Virtual Environment (CAVE) system. In two studies, we examined whether the EPM exploration behavior of humans is modulated by their trait anxiety, but also assessed the individuals’ levels of acrophobia (fear of heights), claustrophobia (fear of confined spaces), sensation seeking and the reported anxiety when on the maze. First, we constructed an exact virtual copy of the animal EPM adjusted to human proportions. In analogy to animal EPM studies, participants (N = 30) freely explored the EPM for five minutes. In the second study (N = 61), we redesigned the EPM to make it more human-adapted and to differentiate influences of trait anxiety and acrophobia by introducing various floor textures and lowering the walls of the closed arms to the height of common handrails. In the first experiment, hierarchical regression analyses of exploration behavior revealed the expected association between open arm avoidance and trait anxiety, but an even stronger association with acrophobic fear. In the second study, results revealed that acrophobia was associated with avoidance of open arms with mesh-floor texture, whereas for trait anxiety, claustrophobia and sensation seeking no effect was detected. In addition, subjects’ fear rating was moderated by all psychometrics but trait anxiety. In sum, both studies consistently indicate that humans show no general open arm avoidance analogous to rodents and that human EPM behavior is modulated most strongly by acrophobic fear, whereas trait anxiety plays a subordinate role. Thus, we conclude that the criteria for cross-species validity are insufficiently met in this case. Despite the exploratory nature, our studies provide in-depth insights into human exploration behavior on the virtual EPM.
Jan-Philipp Stauffert, Kristof Korwisi, Florian Niebling, Marc Erich Latoschik,
Ka-Boom!!! Visually Exploring Latency Measurements for XR, In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. New York, NY, USA:
Association for Computing Machinery,
2021. [Download][BibSonomy][Doi]
@inproceedings{stauffert_kaboom_2021,
author = {Jan-Philipp Stauffert and Kristof Korwisi and Florian Niebling and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-acmchi-latency-comic-preprint.pdf},
year = {2021},
booktitle = {Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI EA '21},
doi = {10.1145/3411763.3450379},
title = {Ka-Boom!!! Visually Exploring Latency Measurements for XR}
}
Abstract:
Latency can be detrimental to the experience of Virtual Reality. High latency can lead to loss of performance and cybersickness. There are simple approaches to measure approximate latency and more elaborate ones that provide deeper insight into latency behavior. Yet there are still researchers who do not measure the latency of the system they are using to conduct VR experiments.
This paper provides an illustrated overview of different approaches to measure latency of VR applications, as well as a small decision-making guide to assist in the choice of the measurement method. The visual style offers a more approachable way to understand how to measure latency.
Andrea Bartl, Sungchul Jung, Peter Kullmann, Stephan Wenninger, Jascha Achenbach, Erik Wolf, Christian Schell, Robert W. Lindeman, Mario Botsch, Marc Erich Latoschik,
Self-Avatars in Virtual Reality: A Study Protocol for Investigating the Impact of the Deliberateness of Choice and the Context-Match, In 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 565-566.
2021. [Download][BibSonomy][Doi]
@inproceedings{bartl2021selfavatars,
author = {Andrea Bartl and Sungchul Jung and Peter Kullmann and Stephan Wenninger and Jascha Achenbach and Erik Wolf and Christian Schell and Robert W. Lindeman and Mario Botsch and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-deliberateness-contextmatch-poster-preprint.pdf},
year = {2021},
booktitle = {2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {565-566},
doi = {10.1109/VRW52623.2021.00165},
title = {Self-Avatars in Virtual Reality: A Study Protocol for Investigating the Impact of the Deliberateness of Choice and the Context-Match}
}
Abstract:
The illusion of virtual body ownership (VBO) plays a critical role in virtual reality (VR). VR applications provide a broad design space which includes contextual aspects of the virtual surroundings as well as user-driven deliberate choices of their appearance in VR potentially influencing VBO and other well-known effects of VR. We propose a protocol for an experiment to investigate the influence of deliberateness and context-match on VBO and presence. In a first study, we found significant interactions with the environment. Based on our results we derive recommendations for future experiments.
Sebastian Oberdörfer, Samantha Straka, Marc Erich Latoschik,
Effects of Immersion and Visual Angle on Brand Placement Effectiveness, In Proceedings of the 28th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '21), pp. 440-441.
2021. [Download][BibSonomy][Doi]
@inproceedings{oberdorfer2021effects,
author = {Sebastian Oberdörfer and Samantha Straka and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-advrtize-poster-preprint.pdf},
year = {2021},
booktitle = {Proceedings of the 28th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '21)},
pages = {440-441},
doi = {10.1109/VRW52623.2021.00102},
title = {Effects of Immersion and Visual Angle on Brand Placement Effectiveness}
}
Abstract:
Typical inherent properties of immersive Virtual Reality (VR) such as felt presence might have an impact on how well brand placements are remembered. In this study, we exposed participants to brand placements in four conditions of varying degrees of immersion and visual angle on the stimulus. Placements appeared either as poster or as puzzle. We measured the recall and recognition of these placements. Our study revealed that neither immersion nor the visual angle had a significant impact on memory for brand placements.
Sebastian Oberdörfer, David Heidrich, Sandra Birnstiel, Marc Erich Latoschik,
Measuring the Effects of Virtual Environment Design on Decision-Making, In Proceedings of the 28th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '21), pp. 442-443.
2021. [Download][BibSonomy][Doi]
@inproceedings{oberdorfer2021measuring,
author = {Sebastian Oberdörfer and David Heidrich and Sandra Birnstiel and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-iowa-gambling-task-2-preprint.pdf},
year = {2021},
booktitle = {Proceedings of the 28th IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VR '21)},
pages = {442-443},
doi = {10.1109/VRW52623.2021.00103},
title = {Measuring the Effects of Virtual Environment Design on Decision-Making}
}
Abstract:
Recent research indicates an impairment in decision-making in immersive Virtual Reality (VR) when completing the Iowa Gambling Task (IGT). There is a high potential for emotions to explain the IGT decision-making behavior. The design of a virtual environment (VE) can influence a user’s mood and hence potentially the decision-making. In a novel user study, we measure decision-making using three virtual versions of the IGT. The versions differ with regard to the degree of immersion and design of the VE. Our results revealed no significant impact of the VE on the IGT and hence on decision-making.
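For context on how IGT decision-making is commonly quantified (the abstract does not state the exact scoring used), a standard outcome measure is the net score per block: the number of advantageous choices (decks C and D) minus the number of disadvantageous choices (decks A and B). A small illustrative Python sketch, not taken from the paper:

def igt_net_scores(choices, block_size=20):
    """Net scores per block for the Iowa Gambling Task.

    choices: sequence of deck labels 'A'-'D'; C/D are conventionally
    advantageous, A/B disadvantageous. Each block scores (#C + #D) - (#A + #B).
    """
    scores = []
    for start in range(0, len(choices), block_size):
        block = choices[start:start + block_size]
        good = sum(c in ('C', 'D') for c in block)
        bad = sum(c in ('A', 'B') for c in block)
        scores.append(good - bad)
    return scores

print(igt_net_scores(['A', 'C', 'D', 'B', 'C'] * 4))  # one block of 20 -> [4]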
Negin Hamzeheinejad, Daniel Roth, Samantha Monty, Julian Breuer, Anuschka Rodenberg, Marc Erich Latoschik,
The Impact of Implicit and Explicit Feedback on Performance and Experience during VR-Supported Motor Rehabilitation, In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), pp. 382-391.
IEEE,
2021. [Download][BibSonomy][Doi]
@inproceedings{hamzeheinejad2021impact,
author = {Negin Hamzeheinejad and Daniel Roth and Samantha Monty and Julian Breuer and Anuschka Rodenberg and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-vrgait-feedback-preprint.pdf},
year = {2021},
booktitle = {2021 IEEE Virtual Reality and 3D User Interfaces (VR)},
publisher = {IEEE},
pages = {382-391},
doi = {10.1109/VR50410.2021.00061},
title = {The Impact of Implicit and Explicit Feedback on Performance and Experience during VR-Supported Motor Rehabilitation}
}
Abstract:
This paper examines the impact of implicit and explicit feedback in Virtual Reality (VR) on performance and user experience during motor rehabilitation. In this work, explicit feedback consists of visual and auditory cues provided by a virtual trainer, compared to traditional feedback provided by a real physiotherapist. Implicit feedback was generated by the walking motion of the virtual trainer accompanying the patient during virtual walks. Here, the potential synchrony of movements between the trainer and trainee is intended to create an implicit visual affordance of motion adaption. We hypothesize that this will stimulate the activation of mirror neurons, thus fostering neuroadaptive processes. We conducted a clinical user study in a rehabilitation center employing a gait robot. We investigated the performance outcome and subjective experience of four resulting VR-supported rehabilitation conditions: with/without explicit feedback, and with/without implicit (synchronous motion) stimulation by a virtual trainer. We further included two baseline conditions reflecting the current NonVR procedure in the rehabilitation center. Our results show that additional feedback generally resulted in better patient performance, objectively assessed by the necessary applied support force of the robot. Additionally, our VR supported rehabilitation procedure improved enjoyment and satisfaction, while no negative impacts could be observed. Implicit feedback and adapted motion synchrony by the virtual trainer led to higher mental demand, giving rise to hopes of increased neural activity and neuroadaptive stimulation.
Erik Wolf, Nathalie Merdan, Nina Döllinger, David Mal, Carolin Wienrich, Mario Botsch, Marc Erich Latoschik,
The Embodiment of Photorealistic Avatars Influences Female Body Weight Perception in Virtual Reality, In 2021 IEEE Virtual Reality and 3D User Interfaces (VR), pp. 65-74.
2021. [Download][BibSonomy][Doi]
@inproceedings{wolf2021embodiment,
author = {Erik Wolf and Nathalie Merdan and Nina Döllinger and David Mal and Carolin Wienrich and Mario Botsch and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2021-ieeevr-body_weight_perception_embodiment-preprint.pdf},
year = {2021},
booktitle = {2021 IEEE Virtual Reality and 3D User Interfaces (VR)},
pages = {65-74},
doi = {10.1109/VR50410.2021.00027},
title = {The Embodiment of Photorealistic Avatars Influences Female Body Weight Perception in Virtual Reality}
}
Abstract:
Embodiment and body perception have become important research topics in the field of virtual reality (VR). VR is considered a particularly promising tool to support research and therapy in regard to distorted body weight perception. However, the influence of embodiment on body weight perception has yet to be clarified. To address this gap, we compared body weight perception of 56 female participants of normal weight using a VR application. They either (a) self-embodied a photorealistic, non-personalized virtual human and performed body movements in front of a virtual mirror or (b) only observed the virtual human as other's avatar (or agent) performing the same movements in front of them. Afterward, participants had to estimate the virtual human's body weight. Additionally, we considered the influence of the participants' body mass index (BMI) on the estimations and captured the participants' feelings of presence and embodiment. Participants estimated the body weight of the virtual human as their embodied self-avatars significantly lower compared to participants rating the virtual human as other's avatar. Furthermore, the estimations of body weight were significantly predicted by the participant's BMI with embodiment, but not without. Our results clearly highlight embodiment as an important factor influencing the perception of virtual humans' body weights in VR.
Carolin Wienrich, Nina Ines Döllinger, Rebecca Hein,
Behavioral Framework of Immersive Technologies (BehaveFIT): How and Why Virtual Reality can Support Behavioral Change Processes, In Frontiers in Virtual Reality, Vol. 2, p. 84.
2021. [Download][BibSonomy][Doi]
@article{wienrich2020mind,
author = {Carolin Wienrich and Nina Ines Döllinger and Rebecca Hein},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/articles/10.3389/frvir.2021.627194/full},
year = {2021},
pages = {84},
volume = {2},
doi = {https://doi.org/10.3389/frvir.2021.627194},
title = {Behavioral Framework of Immersive Technologies (BehaveFIT): How and Why Virtual Reality can Support Behavioral Change Processes}
}
Abstract:
The design and evaluation of assisting technologies to support behavior change processes have become an essential topic within the field of human-computer interaction research in general and the field of immersive intervention technologies in particular. The mechanisms and success of behavior change techniques and interventions are broadly investigated in the field of psychology. However, it is not always easy to adapt these psychological findings to the context of immersive technologies. The lack of theoretical foundation also leads to a lack of explanation as to why and how immersive interventions support behavior change processes. The Behavioral Framework for immersive Technologies (BehaveFIT) addresses this lack by 1) presenting an intelligible categorization and condensation of psychological barriers and immersive features, by 2) suggesting a mapping that shows why and how immersive technologies can help to overcome barriers and finally by 3) proposing a generic prediction path that enables a structured, theory-based approach to the development and evaluation of immersive interventions. These three steps explain how BehaveFIT can be used, and include guiding questions for each step. Further, two use cases illustrate the usage of BehaveFIT. Thus, the present paper contributes to guidance for immersive intervention design and evaluation, showing that immersive interventions support behavior change processes and explain and predict 'why' and 'how' immersive interventions can bridge the intention-behavior-gap.
2020
Niko Wißmann, Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
A Low-Cost Approach to Fish Tank Virtual Reality with Semi-Automatic Calibration Support, In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 598-599.
IEEE,
2020. [Download][BibSonomy][Doi]
@inproceedings{wissmann2020lowcost,
author = {Niko Wißmann and Martin Mišiak and Arnulph Fuhrmann and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-fishtankVR-preprint.pdf},
year = {2020},
booktitle = {2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
publisher = {IEEE},
pages = {598-599},
doi = {10.1109/VRW50115.2020.00150},
title = {A Low-Cost Approach to Fish Tank Virtual Reality with Semi-Automatic Calibration Support}
}
Abstract:
We describe the components and implementation of a cost-effective fish tank virtual reality system. It is based on commodity hardware and provides accurate view tracking combined with high resolution stereoscopic rendering. The system is calibrated very quickly in a semi-automatic step using computer vision. By avoiding the resolution disadvantages of current VR headsets, our prototype is suitable for a wide range of perceptual VR studies.
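The semi-automatic computer-vision calibration step is not detailed in the abstract; as a hedged illustration of what such a step can look like, the Python sketch below uses OpenCV's standard checkerboard-based camera calibration. The board size, square size, and the choice of a checkerboard target are assumptions, not the paper's documented procedure.

import cv2
import numpy as np

def calibrate_from_checkerboard(gray_images, board_size=(9, 6), square_size=0.025):
    """Standard OpenCV camera calibration from grayscale checkerboard views."""
    # 3D reference coordinates of the inner board corners (z = 0 plane).
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

    object_points, image_points = [], []
    for gray in gray_images:
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            object_points.append(objp)
            image_points.append(corners)

    image_size = gray_images[0].shape[::-1]   # (width, height)
    rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, camera_matrix, dist_coeffs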
Niko Wißmann, Martin Mišiak, Arnulph Fuhrmann, Marc Erich Latoschik,
Accelerated Stereo Rendering with Hybrid Reprojection-Based Rasterization and Adaptive Ray-Tracing, In Proceedings of the 27th IEEE Virtual Reality conference (IEEE VR '20).
IEEE,
2020. Best paper award 🏆 [Download][BibSonomy]
@inproceedings{wissmann2020accelerated,
author = {Niko Wißmann and Martin Mišiak and Arnulph Fuhrmann and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-stereo-rendering-preprint.pdf},
year = {2020},
booktitle = {Proceedings of the 27th IEEE Virtual Reality conference (IEEE VR '20)},
publisher = {IEEE},
title = {Accelerated Stereo Rendering with Hybrid Reprojection-Based Rasterization and Adaptive Ray-Tracing}
}
Abstract:
Stereoscopic rendering is a prominent feature of virtual reality applications to generate depth cues and to provide depth perception in the virtual world. However, straight-forward stereo rendering methods usually are expensive since they render the scene from two eye-points which in general doubles the frame times. This is particularly problematic since virtual reality sets high requirements for real-time capabilities and image resolution. Hence, this paper presents a hybrid rendering system that combines classic rasterization and real-time ray-tracing to accelerate stereoscopic rendering. The system reprojects the pre-rendered left half of the stereo image pair into the right perspective using a forward grid warping technique and identifies resulting reprojection errors, which are then efficiently resolved by adaptive real-time ray-tracing. A final analysis shows that the system achieves a significant performance gain, has a negligible quality impact, and is suitable even for higher rendering resolutions.
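To make the reprojection idea above more concrete, here is a minimal, illustrative NumPy sketch that forward-warps a left-eye image into the right-eye view using per-pixel disparity and flags the holes (disocclusions) that an adaptive ray tracer would then refill. It is a toy scanline version under assumed rectified views, not the paper's grid-warping implementation.

import numpy as np

def warp_left_to_right(left_image, disparity):
    """Forward-warp a left-eye image to the right eye along scanlines.

    left_image: (H, W) intensity image; disparity: (H, W) pixel shifts,
    larger disparity = closer surface. Returns the warped image and a boolean
    mask of holes, i.e. pixels that received no source contribution.
    """
    h, w = left_image.shape
    warped = np.zeros_like(left_image)
    best_disparity = np.full((h, w), -np.inf)   # keep the closest contribution
    covered = np.zeros((h, w), dtype=bool)

    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xr = int(round(x - d))              # shift pixel into the right view
            if 0 <= xr < w and d > best_disparity[y, xr]:
                warped[y, xr] = left_image[y, x]
                best_disparity[y, xr] = d
                covered[y, xr] = True

    return warped, ~covered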
Johannes Büttner, Christian Merz, Sebastian von Mammen,
Horde Battle III or How to Dismantle a Swarm, In 2020 IEEE Conference on Games (CoG), pp. 640-641. Osaka, Japan:
IEEE,
2020. [Download][BibSonomy][Doi]
@inproceedings{Buttner:2020aa,
author = {Johannes Büttner and Christian Merz and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Buttner2020aa.pdf},
year = {2020},
booktitle = {2020 IEEE Conference on Games (CoG)},
publisher = {IEEE},
address = {Osaka, Japan},
pages = {640-641},
doi = {10.1109/CoG47356.2020.9231746},
title = {Horde Battle III or How to Dismantle a Swarm}
}
Abstract:
In this demo paper, we present the design of a virtual reality (VR) first-person shooter (FPS) in which the player fends off waves of hostile flying swarm robots that took over the Earth. The purpose of this serious game is to train the player in understanding networks by learning how to dismantle them. We explain the play and game mechanics and the level designs tailored to provide an engaging experience and to re-enforce the network perspective of the swarm dynamics.
Andreas Knote, Sebastian von Mammen, YunYun Gao, Andrea Thorn,
Immersive Analysis of Crystallographic Diffraction Data., In Robert J. Teather, Chris Joslin, Wolfgang Stuerzlinger, Pablo Figueroa, Yaoping Hu, Anil Ufuk Batmaz, Wonsook Lee, Francisco R. Ortega (Eds.), VRST, pp. 58:1-58:3.
ACM,
2020. [Download][BibSonomy]
@inproceedings{Knote:2020aa,
author = {Andreas Knote and Sebastian von Mammen and YunYun Gao and Andrea Thorn},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Knote2020aa.pdf},
year = {2020},
booktitle = {VRST},
editor = {Robert J. Teather and Chris Joslin and Wolfgang Stuerzlinger and Pablo Figueroa and Yaoping Hu and Anil Ufuk Batmaz and Wonsook Lee and Francisco R. Ortega},
publisher = {ACM},
pages = {58:1-58:3},
title = {Immersive Analysis of Crystallographic Diffraction Data.}
}
Abstract:
Carolin Wienrich, Nina Ines Döllinger, Rebecca Hein,
Mind the Gap: A Framework (BehaveFIT) Guiding The Use of Immersive Technologies in Behavior Change Processes, In arXiv preprint arXiv:2012.10912.
2020. cite arxiv:2012.10912 [Download][BibSonomy]
@preprint{wienrich2020framework,
author = {Carolin Wienrich and Nina Ines Döllinger and Rebecca Hein},
journal = {arXiv preprint arXiv:2012.10912.},
url = {http://arxiv.org/abs/2012.10912},
year = {2020},
title = {Mind the Gap: A Framework (BehaveFIT) Guiding The Use of Immersive Technologies in Behavior Change Processes}
}
Abstract:
The design and evaluation of assisting technologies to support behavior change processes have become an essential topic within the field of human-computer interaction research in general and the field of immersive intervention technologies in particular. The mechanisms and success of behavior change techniques and interventions are broadly investigated in the field of psychology. However, it is not always easy to adapt these psychological findings to the context of immersive technologies. The lack of theoretical foundation also leads to a lack of explanation as to why and how immersive interventions support behavior change processes. The Behavioral Framework for immersive Technologies (BehaveFIT) addresses this lack by (1) presenting an intelligible categorization and condensation of psychological barriers and immersive features, by (2) suggesting a mapping that shows why and how immersive technologies can help to overcome barriers, and finally by (3) proposing a generic prediction path that enables a structured, theory-based approach to the development and evaluation of immersive interventions. These three steps explain how BehaveFIT can be used, and include guiding questions and one example for each step. Thus, the present paper contributes to guidance for immersive intervention design and evaluation, showing that immersive interventions support behavior change processes and explain and predict 'why' and 'how' immersive interventions can bridge the intention-behavior-gap.
Elisabeth Brandenburg, Oliver Ringel, Thomas Zimmermann, Nina Döllinger, Rainer Stark,
Digital fitting instructions for further usage in product-lifecycle.
TESConf,
2020. [BibSonomy]
@article{brandenburg2020digital,
author = {Elisabeth Brandenburg and Oliver Ringel and Thomas Zimmermann and Nina Döllinger and Rainer Stark},
year = {2020},
publisher = {TESConf},
title = {Digital fitting instructions for further usage in product-lifecycle}
}
Abstract:
Gabriela Ripka, Silke Grafe, Marc Erich Latoschik,
Preservice Teachers' encounter with Social VR – Exploring Virtual Teaching and Learning Processes in Initial Teacher Education, In Elizabeth Langran (Eds.), SITE Interactive Conference, pp. 549--562.
Association for the Advancement of Computing in Education (AACE),
2020. [Download][BibSonomy]
@inproceedings{ripka2020preservice,
author = {Gabriela Ripka and Silke Grafe and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-site-preservice-teacher-svr.pdf},
year = {2020},
booktitle = {SITE Interactive Conference},
editor = {Elizabeth Langran},
publisher = {Association for the Advancement of Computing in Education (AACE)},
pages = {549--562},
title = {Preservice Teachers' encounter with Social VR – Exploring Virtual Teaching and Learning Processes in Initial Teacher Education}
}
Abstract:
With 21st century challenges ahead, higher education teaching and learning need new pedagogical concepts. Technologies like social VR enable student-centered, action-oriented, and situated learning. This paper presents findings of the pedagogical implementation of a distributed social VR prototype, a fully immersive VR learning environment, into an Initial Teacher Education program in Germany. The exploratory study addressed the following research questions: 1) How do preservice teachers perceive teaching and learning activities in fully immersive VR and 2) how should teaching and learning processes using social VR in Teacher Education be designed? It followed a design-based research approach. The pedagogical concept for teaching and learning in social VR was based on principles of action-orientation. A convenience sample of three groups of five students each took part in a 90-minute teaching and learning scenario using a fully immersive VR learning environment. During these seminar units, students engaged in qualitative group interviews and shared their perception of the action-oriented teaching and learning activities in VR. The results showed that preservice teachers had the feeling of being less distracted in social VR. Additionally, during group activities, missing social and behavioral cues made communication procedures more challenging for participants. However, some participants noticed a stronger sense of community while collaborating with others.
Heike Messemer, Walpola Layantha Perera, Matthias Heinz, Florian Niebling, Ferdinand Maiwald,
Supporting Learning in Art History--Artificial Intelligence in Digital Humanities Education, In Workshop Gemeinschaften in Neuen Medien (GeNeMe) 2020.
2020. [BibSonomy]
@inproceedings{messemer2020supporting,
author = {Heike Messemer and Walpola Layantha Perera and Matthias Heinz and Florian Niebling and Ferdinand Maiwald},
year = {2020},
booktitle = {Workshop Gemeinschaften in Neuen Medien (GeNeMe) 2020},
title = {Supporting Learning in Art History--Artificial Intelligence in Digital Humanities Education}
}
Abstract:
Andrea Deublein, Birgit Lugrin,
(Expressive) Social Robot or Tablet? – On the Benefits of Embodiment and Non-verbal Expressivity of the Interface for a Smart Environment, In Gram-Hansen S., Jonasen T., Midden C. (Eds.), International Conference on Persuasive Technology (PERSUASIVE 2020), Vol. 12064.
Springer,
2020. [BibSonomy][Doi]
@inproceedings{deublein2021expressive,
author = {Andrea Deublein and Birgit Lugrin},
year = {2020},
booktitle = {International Conference on Persuasive Technology (PERSUASIVE 2020)},
editor = {Gram-Hansen S. and Jonasen T. and Midden C.},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
volume = {12064},
doi = {10.1007/978-3-030-45712-9_7},
title = {(Expressive) Social Robot or Tablet? – On the Benefits of Embodiment and Non-verbal Expressivity of the Interface for a Smart Environment}
}
Abstract:
Peter Ziegler, Sebastian von Mammen,
Generating Real-Time Strategy Heightmaps using Cellular Automata, In FDG '20: Proceedings of the 15th International Conference on the Foundations of Digital Games. Bugibba, Malta.
2020. [BibSonomy][Doi]
@inproceedings{Ziegler:2020ab,
author = {Peter Ziegler and Sebastian von Mammen},
year = {2020},
booktitle = {FDG '20: Proceedings of the 15th International Conference on the Foundations of Digital Games},
address = {Bugibba, Malta},
doi = {https://doi.org/10.1145/3402942.3402956},
title = {Generating Real-Time Strategy Heightmaps using Cellular Automata}
}
Abstract:
This paper presents a new approach of heightmap generation for Real-Time Strategy games (RTS) based on Cellular Automata (CA) in the context of various established techniques. The proposed approach uses different CA rulesets to generate and modify maps for the RTS game Supreme Commander. To evaluate the quality of the generated maps, a survey was conducted asking 30 participants about map quality compared to user-generated maps. The participants rated the maps more balanced and novel but less aesthetically pleasing. The paper concludes with according future work propositions to improve the presented approach.
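The concrete rulesets are not disclosed in the abstract; as a hedged illustration of the general technique, the Python sketch below generates a small heightmap by iterating a simple neighbourhood-averaging cellular-automaton rule over random noise, one common way CA are used for terrain generation and smoothing. The rule and all parameters are illustrative assumptions, not those used for the Supreme Commander maps.

import numpy as np

def ca_heightmap(size=128, steps=8, seed=0):
    """Toy CA heightmap: start from noise, then repeatedly replace each cell
    by the mean of its 3x3 neighbourhood (a smoothing rule)."""
    rng = np.random.default_rng(seed)
    height = rng.random((size, size))
    for _ in range(steps):
        padded = np.pad(height, 1, mode='edge')
        neighbourhood_sum = sum(
            padded[dy:dy + size, dx:dx + size]
            for dy in range(3) for dx in range(3))
        height = neighbourhood_sum / 9.0
    return height

terrain = ca_heightmap()
print(terrain.shape, float(terrain.min()), float(terrain.max()))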
Sebastian Wodarczyk, Sebastian von Mammen,
Emergent Multiplayer Games, In IEEE CoG '20: Proceedings of the 2nd International Conference on Games, pp. 33-40. Osaka, Japan.
2020. [BibSonomy][Doi]
@inproceedings{Wodarczyk:2020aa,
author = {Sebastian Wodarczyk and Sebastian von Mammen},
year = {2020},
booktitle = {IEEE CoG '20: Proceedings of the 2nd International Conference on Games},
address = {Osaka, Japan},
pages = {33-40},
doi = {10.1109/CoG47356.2020.9231834},
title = {Emergent Multiplayer Games}
}
Abstract:
Samuel Truman, Sebastian von Mammen,
An Integrated Design of World-in-Miniature Navigation in Virtual Reality, In FDG '20: Proceedings of the 15th International Conference on the Foundations of Digital Games, pp. 1--9. Bugibba, Malta:
ACM,
2020. [Download][BibSonomy][Doi]
@inproceedings{Truman:2020ab,
author = {Samuel Truman and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Truman2020ab.pdf},
year = {2020},
booktitle = {FDG '20: Proceedings of the 15th International Conference on the Foundations of Digital Games},
publisher = {ACM},
address = {Bugibba, Malta},
pages = {1--9},
doi = {https://doi.org/10.1145/3402942.3402994},
title = {An Integrated Design of World-in-Miniature Navigation in Virtual Reality}
}
Abstract:
Navigation is considered one of the most fundamental challenges in Virtual Reality (VR) and has been extensively researched [11]. The world-in-miniature (WIM) navigation metaphor allows users to travel in large-scale virtual environments (VEs) regardless of available physical space while maintaining a high-level overview of the VE. It relies on a hand-held, scaled-down duplicate of the entire VE, where the user's current position is displayed, and an interface provided to introduce his/her next movements [17]. There are several extensions to deal with challenges of this navigation technique, e.g. scaling and scrolling [23]. In this work, a WIM is presented that integrates state-of-the-art research insights and incorporates additional features that became apparent during the integration process. These features are needed to improve user interactions and to provide both look-ahead and post-travel feedback. For instance, a novel occlusion handling feature hides the WIM geometry in a rounded space reaching from the user's hand to his/her forearm. This allows the user to interact with occluded areas of the WIM such as buildings. Further extensions include different visualizations for occlusion handling, an interactive preview screen, post-travel feedback, automatic WIM customization, a unified diegetic UI design concerning WIM and user representation, and an adaptation of widely established gestures to control scaling and scrolling of the WIM. Overall, the presented WIM design integrates and extends state-of-the-art interaction tasks and visualization concepts to overcome open conceptual gaps and to provide a comprehensive practical solution for traveling in VR.
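The occlusion-handling feature described above, hiding WIM geometry within a rounded volume reaching from the hand to the forearm, can be thought of as a capsule containment test. The Python sketch below is a geometric illustration of such a test, not the paper's implementation; the tracked positions and the radius are assumed inputs.

import numpy as np

def inside_hand_forearm_capsule(point, hand, elbow, radius):
    """True if a point lies within `radius` of the segment elbow->hand,
    i.e. inside the rounded cut-out used to hide miniature geometry."""
    p, a, b = (np.asarray(v, dtype=float) for v in (point, elbow, hand))
    ab = b - a
    denom = float(np.dot(ab, ab))
    t = 0.0 if denom == 0.0 else float(np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0))
    closest = a + t * ab                      # closest point on the segment
    return float(np.linalg.norm(p - closest)) <= radius

# Example: a point 5 cm beside the forearm axis, capsule radius 8 cm.
print(inside_hand_forearm_capsule([0.05, 0.0, 0.3], hand=[0.0, 0.0, 0.6],
                                  elbow=[0.0, 0.0, 0.0], radius=0.08))  # True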
Anne Vetter, Daniel Götz, Sebastian von Mammen,
The Art of Loci: Immerse to Memorize, In 2020 IEEE Conference on Games (CoG), pp. 642-645. Osaka, Japan:
IEEE,
2020. [Download][BibSonomy][Doi]
@inproceedings{9231610,
author = {Anne Vetter and Daniel Götz and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Vetter2020aa.pdf},
year = {2020},
booktitle = {2020 IEEE Conference on Games (CoG)},
publisher = {IEEE},
address = {Osaka, Japan},
pages = {642-645},
doi = {10.1109/CoG47356.2020.9231610},
title = {The Art of Loci: Immerse to Memorize}
}
Abstract:
There are uncountable things to be remembered, but most people were never shown how to memorize effectively. With this paper, the application The Art of Loci (TAoL) is presented, to provide the means for efficient and fun learning. In a virtual reality (VR) environment users can express their study material visually, either creating dimensional mind maps or experimenting with mnemonic strategies, such as mind palaces. We also present common memorization techniques in light of their underlying pedagogical foundations and discuss the respective features of TAoL in comparison with similar software applications.
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
Latency and Cybersickness: Impact, Causes, and Measures. A Review, In Frontiers in Virtual Reality, Vol. 1, p. 31.
2020. [Download][BibSonomy][Doi]
@article{stauffert:2020b,
author = {Jan-Philipp Stauffert and Florian Niebling and Marc Erich Latoschik},
journal = {Frontiers in Virtual Reality},
url = {https://www.frontiersin.org/article/10.3389/frvir.2020.582204},
year = {2020},
pages = {31},
volume = {1},
doi = {10.3389/frvir.2020.582204},
title = {Latency and Cybersickness: Impact, Causes, and Measures. A Review}
}
Abstract:
Latency is a key characteristic inherent to any computer system. Motion-to-Photon (MTP) latency describes the time between the movement of a tracked object and its corresponding movement rendered and depicted by computer-generated images on a graphical output screen. High MTP latency can cause a loss of performance in interactive graphics applications and, even worse, can provoke cybersickness in Virtual Reality (VR) applications. Here, cybersickness can degrade VR experiences or may render the experiences completely unusable. It can confound research findings of an otherwise sound experiment. Latency as a contributing factor to cybersickness needs to be properly understood. Its effects need to be analyzed, its sources need to be identified, good measurement methods need to be developed, and proper countermeasures need to be developed in order to reduce potentially harmful impacts of latency on the usability and safety of VR systems. Research shows that latency can exhibit intricate timing patterns with various spiking and periodic behavior. These timing behaviors may vary, yet most are found to provoke cybersickness. Overall, latency can differ drastically between different systems, interfering with generalization of measurement results. This review article describes the causes and effects of latency with regard to cybersickness. We report on different existing approaches to measure and report latency. Hence, the article provides readers with the knowledge to understand and report latency for their own applications, evaluations, and experiments. It should also help to measure, identify, and finally control and counteract latency and hence gain confidence in the soundness of empirical data collected by VR exposures. Low latency increases the usability and safety of VR systems.
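As a small, illustrative complement to the review (not a method proposed in it), the spiking and periodic timing behaviour discussed above can be summarized from a series of motion-to-photon samples with a few robust statistics; the spike threshold used below is an arbitrary assumption.

import statistics

def summarize_latency(samples_ms, spike_factor=3.0):
    """Summarize motion-to-photon latency samples given in milliseconds.

    Reports central tendency, spread (jitter), a rough 99th percentile, and
    the share of 'spikes', here defined as samples above spike_factor * median.
    """
    ordered = sorted(samples_ms)
    median = statistics.median(samples_ms)
    return {
        'mean_ms': statistics.fmean(samples_ms),
        'median_ms': median,
        'jitter_stdev_ms': statistics.stdev(samples_ms) if len(samples_ms) > 1 else 0.0,
        'p99_ms': ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))],
        'spike_ratio': sum(s > spike_factor * median for s in samples_ms) / len(samples_ms),
    }

print(summarize_latency([18, 19, 20, 21, 95, 20, 19]))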
Konstantin Kobs, Tobias Koopmann, Albin Zehe, David Fernes, Philipp Krop, Andreas Hotho,
Where to Submit? Helping Researchers to Choose the Right Venue, In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pp. 878--883. Online:
Association for Computational Linguistics,
2020. [Download][BibSonomy]
@inproceedings{kobs-etal-2020-submit,
author = {Konstantin Kobs and Tobias Koopmann and Albin Zehe and David Fernes and Philipp Krop and Andreas Hotho},
url = {https://www.aclweb.org/anthology/2020.findings-emnlp.78},
year = {2020},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings},
publisher = {Association for Computational Linguistics},
address = {Online},
pages = {878--883},
title = {Where to Submit? Helping Researchers to Choose the Right Venue}
}
Abstract:
Sophia C. Steinhaeusser, Birgit Lugrin,
Horror Laboratory and Forest Cabin - A Horror Game Series for Desktop Computer, Virtual Reality, and Smart Substitutional Reality, In Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play.
ACM,
2020. [BibSonomy][Doi]
@inproceedings{Steinhaeusser_2020,
author = {Sophia C. Steinhaeusser and Birgit Lugrin},
year = {2020},
booktitle = {Extended Abstracts of the 2020 Annual Symposium on Computer-Human Interaction in Play},
publisher = {ACM},
doi = {10.1145/3383668.3419924},
title = {Horror Laboratory and Forest Cabin - A Horror Game Series for Desktop Computer, Virtual Reality, and Smart Substitutional Reality}
}
Abstract:
Stephan Wenninger, Jascha Achenbach, Andrea Bartl, Marc Erich Latoschik, Mario Botsch,
Realistic Virtual Humans from Smartphone Videos., In Robert J. Teather, Chris Joslin, Wolfgang Stuerzlinger, Pablo Figueroa, Yaoping Hu, Anil Ufuk Batmaz, Wonsook Lee, Francisco Ortega (Eds.), VRST, pp. 29:1-29:11.
ACM,
2020. Best paper award 🏆 [Download][BibSonomy]
@inproceedings{conf/vrst/WenningerABLB20,
author = {Stephan Wenninger and Jascha Achenbach and Andrea Bartl and Marc Erich Latoschik and Mario Botsch},
url = {https://dl.acm.org/doi/pdf/10.1145/3385956.3418940},
year = {2020},
booktitle = {VRST},
editor = {Robert J. Teather and Chris Joslin and Wolfgang Stuerzlinger and Pablo Figueroa and Yaoping Hu and Anil Ufuk Batmaz and Wonsook Lee and Francisco Ortega},
publisher = {ACM},
pages = {29:1-29:11},
title = {Realistic Virtual Humans from Smartphone Videos.}
}
Abstract:
This paper introduces an automated 3D-reconstruction method for generating high-quality virtual humans from monocular smartphone cameras. The input of our approach are two video clips, one capturing the whole body and the other providing detailed close-ups of head and face. Optical flow analysis and sharpness estimation select individual frames, from which two dense point clouds for the body and head are computed using multi-view reconstruction. Automatically detected landmarks guide the fitting of a virtual human body template to these point clouds, thereby reconstructing the geometry. A graph-cut stitching approach reconstructs a detailed texture. Our results are compared to existing low-cost monocular approaches as well as to expensive multi-camera scan rigs. We achieve visually convincing reconstructions that are almost on par with complex camera rigs while surpassing similar low-cost approaches. The generated high-quality avatars are ready to be processed, animated, and rendered by standard XR simulation and game engines such as Unreal or Unity
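One small, self-contained step of the pipeline described above, the sharpness-based frame selection, can be illustrated with the widely used variance-of-Laplacian focus measure. The Python sketch below covers only that step and is a hedged stand-in; the specific measure and the number of frames kept are assumptions, not necessarily what the authors used.

import cv2

def sharpest_frames(gray_frames, keep=20):
    """Rank grayscale frames by the variance-of-Laplacian focus measure and
    return the indices of the sharpest ones as reconstruction candidates."""
    scores = [cv2.Laplacian(frame, cv2.CV_64F).var() for frame in gray_frames]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:keep]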
Chris Zimmerer, Ronja Heinrich, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik,
Computing Object Selection Difficulty in VR Using Run-Time Contextual Analysis, In 26th ACM Symposium on Virtual Reality Software and Technology. New York, NY, USA:
Association for Computing Machinery,
2020. Best Poster Award 🏆 [Download][BibSonomy][Doi]
@inproceedings{10.1145/3385956.3422089,
author = {Chris Zimmerer and Ronja Heinrich and Martin Fischbach and Jean-Luc Lugrin and Marc Erich Latoschik},
url = {https://doi.org/10.1145/3385956.3422089},
year = {2020},
booktitle = {26th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '20},
doi = {10.1145/3385956.3422089},
title = {Computing Object Selection Difficulty in VR Using Run-Time Contextual Analysis}
}
Abstract:
This paper introduces a method for computing the difficulty of selection tasks in virtual environments using pointing metaphors by operationalizing an established human motor behavior model. In contrast to previous work, the difficulty is calculated automatically at run-time for arbitrary environments. We present and provide the implementation of our method within Unity 3D. The difficulty is computed based on a contextual analysis of spatial boundary conditions, i.e., target object size and shape, distance to the user, and occlusion. We believe our method will enable developers to build adaptive systems that automatically equip the user with the most appropriate selection technique according to the context. Further, it provides a standard metric to better evaluate and compare different selection techniques.
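The 'established human motor behavior model' is not named in the abstract; assuming a Fitts'-law-style index of difficulty, a common choice for pointing-based selection, a run-time difficulty estimate could look like the Python sketch below. The angular-size formulation, the occlusion penalty, and the reference amplitude are illustrative assumptions, not the paper's formula.

import math

def selection_difficulty(distance_m, target_width_m, visible_fraction=1.0):
    """Fitts'-law-style index of difficulty for pointing selection in VR.

    Uses the angular size of the target as the effective width and shrinks it
    by the fraction of the target that is actually visible (occlusion).
    Returns the index of difficulty in bits; higher means harder to select.
    """
    visible_fraction = max(visible_fraction, 1e-6)
    angular_width = 2.0 * math.atan(target_width_m / (2.0 * distance_m))
    effective_width = angular_width * visible_fraction
    # Assumed reference amplitude: the pointer starts roughly 30 degrees away.
    amplitude = math.radians(30.0)
    return math.log2(amplitude / effective_width + 1.0)

print(round(selection_difficulty(distance_m=2.0, target_width_m=0.2, visible_fraction=0.5), 2))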
Chris Zimmerer, Erik Wolf, Sara Wolf, Martin Fischbach, Jean-Luc Lugrin, Marc Erich Latoschik,
Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality, In 2020 International Conference on Multimodal Interaction, pp. 222–231.
2020. Best Paper Nominee 🏆 [Download][BibSonomy][Doi]
@inproceedings{10.1145/3382507.3418850,
author = {Chris Zimmerer and Erik Wolf and Sara Wolf and Martin Fischbach and Jean-Luc Lugrin and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-icmi-1169-preprint.pdf},
year = {2020},
booktitle = {2020 International Conference on Multimodal Interaction},
pages = {222–231},
doi = {10.1145/3382507.3418850},
title = {Finally on Par?! Multimodal and Unimodal Interaction for Open Creative Design Tasks in Virtual Reality}
}
Abstract:
Multimodal Interfaces (MMIs) have been considered to provide promising interaction paradigms for Virtual Reality (VR) for some time. However, they are still far less common than unimodal interfaces (UMIs). This paper presents a summative user study comparing an MMI to a typical UMI for a design task in VR. We developed an application targeting creative 3D object manipulations, i.e., creating 3D objects and modifying typical object properties such as color or size. The associated open user task is based on the Torrance Tests of Creative Thinking. We compared a synergistic multimodal interface using speech-accompanied pointing/grabbing gestures with a more typical unimodal interface using a hierarchical radial menu to trigger actions on selected objects. Independent judges rated the creativity of the resulting products using the Consensual Assessment Technique. Additionally, we measured the creativity-promoting factors flow, usability, and presence. Our results show that the MMI performs on par with the UMI in all measurements despite its limited flexibility and reliability. These promising results demonstrate the technological maturity of MMIs and their potential to extend traditional interaction techniques in VR efficiently.
B. Lugrin, E. Ströle, D. Obremski, F. Schwab, B. Lange,
What if it speaks like it was from the village? Effects of a Robot speaking in Regional Language Variations on Users’ Evaluations, In 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), pp. 1315-1320.
2020. [Download][BibSonomy][Doi]
@inproceedings{9223432,
author = {B. Lugrin and E. Ströle and D. Obremski and F. Schwab and B. Lange},
url = {https://ieeexplore.ieee.org/document/9223432/},
year = {2020},
booktitle = {29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020)},
pages = {1315-1320},
doi = {10.1109/RO-MAN47096.2020.9223432},
title = {What if it speaks like it was from the village? Effects of a Robot speaking in Regional Language Variations on Users’ Evaluations}
}
Abstract:
The present contribution investigates the effects of spoken language varieties, in particular non-standard / regional language compared to standard language (in our study: High German), in social robotics. Based on (media) psychological and sociolinguistic research, we assumed that a robot speaking in regional language (i.e., dialect and regional accent) would be considered less competent compared to the same robot speaking in standard language (H1). Contrarily, we assumed that regional language might enhance perceived social skills and likability of a robot, at least so when taking into account whether and how much the human observers making the evaluations talk in regional language themselves. More precisely, it was assumed that the more the study participants spoke in regional language, the better their ratings of the dialect-speaking robot on social skills and likeability would be (H2). We also investigated whether the robot’s gender (male vs. female voice) would have an effect on the ratings (RQ). H1 received full, H2 limited empirical support by the data, while the robot’s gender (RQ) turned out to be a mostly negligible factor. Based on our results, practical implications for robots speaking in regional language varieties are suggested.
David Obremski, Carolin Wienrich, Astrid Carolus,
What the user’s voice tells us about UX - Analysing parameters of the voice as indicators of the User Experience of the usage of intelligent voice assistants.
Gesellschaft für Informatik e.V.,
2020. [Download][BibSonomy][Doi]
@inproceedings{https://doi.org/10.18420/muc2020-ws105-377,
author = {David Obremski and Carolin Wienrich and Astrid Carolus},
url = {http://dl.gi.de/handle/20.500.12116/33544},
year = {2020},
publisher = {Gesellschaft für Informatik e.V.},
doi = {10.18420/MUC2020-WS105-377},
title = {What the user’s voice tells us about UX - Analysing parameters of the voice as indicators of the User Experience of the usage of intelligent voice assistants}
}
Abstract:
Sander Münster, Ferdinand Maiwald, Christoph Lehmann, Taras Lazariv, Mathias Hofmann, Florian Niebling,
An Automated Pipeline for a Browser-Based, City-Scale Mobile 4D VR Application Based on Historical Images, In Proceedings of the 2nd Workshop on Structuring and Understanding of Multimedia HeritAge Contents, pp. 33–40. New York, NY, USA:
Association for Computing Machinery,
2020. [Download][BibSonomy][Doi]
@inproceedings{10.1145/3423323.3425748,
author = {Sander Münster and Ferdinand Maiwald and Christoph Lehmann and Taras Lazariv and Mathias Hofmann and Florian Niebling},
url = {https://doi.org/10.1145/3423323.3425748},
year = {2020},
booktitle = {Proceedings of the 2nd Workshop on Structuring and Understanding of Multimedia HeritAge Contents},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {SUMAC'20},
pages = {33–40},
doi = {10.1145/3423323.3425748},
title = {An Automated Pipeline for a Browser-Based, City-Scale Mobile 4D VR Application Based on Historical Images}
}
Abstract:
The process for automatically creating 3D city models from contemporary photographs and visualizing them on mobile devices is now well established, but historical 4D city models are more challenging. The fourth dimension here is time. This article describes an automated VR pipeline based on historical photographs and resulting in an interactive browser-based device-rendered 4D visualization and information system for mobile devices. Since the pipeline shown is currently still under development, initial results for stages of the process will be shown and assessed for accuracy and usability.
Yann Glémarec, Jean-Luc Lugrin, Anne-Gwenn Bosser, Paul Cagniat, Cédric Buche, Marc Erich Latoschik,
Pushing Out the Classroom Walls: A Scalability Benchmark for a Virtual Audience Behaviour Model in Virtual Reality.
Gesellschaft für Informatik e.V.,
2020. [Download][BibSonomy][Doi]
@article{https://doi.org/10.18420/muc2020-ws134-337,
author = {Yann Glémarec and Jean-Luc Lugrin and Anne-Gwenn Bosser and Paul Cagniat and Cédric Buche and Marc Erich Latoschik},
url = {http://dl.gi.de/handle/20.500.12116/33554},
year = {2020},
publisher = {Gesellschaft für Informatik e.V.},
doi = {10.18420/MUC2020-WS134-337},
title = {Pushing Out the Classroom Walls: A Scalability Benchmark for a Virtual Audience Behaviour Model in Virtual Reality}
}
Abstract:
Daniel Roth, Marc Erich Latoschik,
Construction of the Virtual Embodiment Questionnaire (VEQ), In IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 26(12), pp. 3546-3556.
2020. [Download][BibSonomy][Doi]
@article{9199571,
author = {Daniel Roth and Marc Erich Latoschik},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
number = {12},
url = {https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9199571},
year = {2020},
pages = {3546-3556},
volume = {26},
doi = {10.1109/TVCG.2020.3023603},
title = {Construction of the Virtual Embodiment Questionnaire (VEQ)}
}
Abstract:
User embodiment is important for many virtual reality (VR) applications, for example, in the context of social interaction, therapy, training, or entertainment. However, there is no data-driven and validated instrument to empirically measure the perceptual aspects of embodiment, necessary to reliably evaluate this important phenomenon. To provide a method to assess components of virtual embodiment in a reliable and consistent fashion, we constructed a Virtual Embodiment Questionnaire (VEQ). We reviewed previous literature to identify applicable constructs and questionnaire items, and performed a confirmatory factor analysis (CFA) on the data from three experiments (N = 196). The analysis confirmed three factors: (1) ownership of a virtual body, (2) agency over a virtual body, and (3) the perceived change in the body schema. A fourth study (N = 22) was conducted to confirm the reliability and validity of the scale, by investigating the impacts of latency and latency jitter present in the simulation. We present the proposed scale and study results and discuss resulting implications.
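As an illustration of the kind of reliability check reported for such scales (the paper itself used confirmatory factor analysis; this is not the authors' code), the internal consistency of each factor can be computed from raw item responses with Cronbach's alpha:

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for the items of one factor.

    item_scores: array of shape (n_respondents, n_items) with Likert ratings.
    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total score)
    """
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

# Toy example: 4 respondents, 3 items of a hypothetical 'ownership' factor.
print(round(cronbach_alpha([[5, 4, 5], [2, 2, 3], [4, 4, 4], [1, 2, 1]]), 2))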
Stefan Lindner, Marc-Erich Latoschik, Heike Rittner,
Virtual Reality als Baustein in der Behandlung akuter und chronischer Schmerzen, In AINS-Anästhesiologie· Intensivmedizin· Notfallmedizin· Schmerztherapie, Vol. 55(09), pp. 549--561.
Georg Thieme Verlag KG,
2020. [Download][BibSonomy]
@article{lindner2020virtual,
author = {Stefan Lindner and Marc-Erich Latoschik and Heike Rittner},
journal = {AINS-Anästhesiologie· Intensivmedizin· Notfallmedizin· Schmerztherapie},
number = {09},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ains-vr-chronischer-schmerz-preprint.pdf},
year = {2020},
publisher = {Georg Thieme Verlag KG},
pages = {549--561},
volume = {55},
title = {Virtual Reality als Baustein in der Behandlung akuter und chronischer Schmerzen}
}
Abstract:
Pain management is part of the daily routine of clinical anesthesiologists. Within a carefully considered use of pain medication, alternatives to pharmacological pain therapy are necessary. In recent years, Virtual Reality (VR) has established itself as a realistic complement thanks to ever more affordable and capable technology. The possibilities of VR as well as indications and contraindications are outlined.
Wienrich Carolin, Maria Eisenmann, Marc Erich Latoschik, Silke Grafe,
CoTeach – Connected Teacher Education, In Michael Schwaiger (Eds.), Boosting Virtual Reality in Learning.
E.N.T.E.R.,
2020. [Download][BibSonomy]
@incollection{carolin2020coteach,
author = {Wienrich Carolin and Maria Eisenmann and Marc Erich Latoschik and Silke Grafe},
url = {https://www.enter-network.eu/3d-flip-book/focus-europe-vrinsight-greenpaper/},
year = {2020},
booktitle = {Boosting Virtual Reality in Learning},
editor = {Michael Schwaiger},
publisher = {E.N.T.E.R.},
series = {VRinSight Green Paper},
title = {CoTeach – Connected Teacher Education}
}
Abstract:
CoTeach develops and evaluates innovative teaching and learning contexts for student teachers and scholars. One work package couples the potential of VR with principles of intercultural learning to create tangible experiences with pedagogically responsible value
Elisabeth Ganal, Andrea Bartl, Franziska Westermeier, Daniel Roth, Marc Erich Latoschik,
Developing a Study Design on the Effects of Different Motion Tracking Approaches on the User Embodiment in Virtual Reality, In C. Hansen, A. Nürnberger, B. Preim (Eds.), Mensch und Computer 2020.
Gesellschaft für Informatik e.V.,
2020. [Download][BibSonomy][Doi]
@inproceedings{https://doi.org/10.18420/muc2020-ws134-341,
author = {Elisabeth Ganal and Andrea Bartl and Franziska Westermeier and Daniel Roth and Marc Erich Latoschik},
journal = {Mensch und Computer 2020},
url = {https://dl.gi.de/bitstream/handle/20.500.12116/33557/muc2020-ws-341.pdf?sequence=1&isAllowed=y},
year = {2020},
editor = {C. Hansen and A. Nürnberger and B. Preim},
publisher = {Gesellschaft für Informatik e.V.},
doi = {10.18420/MUC2020-WS134-341},
title = {Developing a Study Design on the Effects of Different Motion Tracking Approaches on the User Embodiment in Virtual Reality}
}
Abstract:
Melissa Donnermann, Philipp Schaper, Birgit Lugrin,
Integrating a Social Robot in Higher Education – A Field Study, In 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020), pp. 573-579.
2020. [BibSonomy]
@inproceedings{donnermann2020integrating,
author = {Melissa Donnermann and Philipp Schaper and Birgit Lugrin},
year = {2020},
booktitle = {29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2020)},
pages = {573-579},
title = {Integrating a Social Robot in Higher Education – A Field Study}
}
Abstract:
Maximilian Landeck, Fabian Unruh, Jean-Luc Lugrin, Marc Erich Latoschik,
Metachron: A framework for time perception research in VR, In Proceedings of the 26th ACM Conference on Virtual Reality Software and Technology.
2020. [Download][BibSonomy]
@inproceedings{Landeck2020Metachron,
author = {Maximilian Landeck and Fabian Unruh and Jean-Luc Lugrin and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-vrst-metachron-preprint.pdf},
year = {2020},
booktitle = {Proceedings of the 26th ACM Conference on Virtual Reality Software and Technology},
title = {Metachron: A framework for time perception research in VR}
}
Abstract:
Florian Niebling, Jonas Bruschke, Heike Messemer, Markus Wacker, Sebastian von Mammen,
Analyzing Spatial Distribution of Photographs in Cultural Heritage Applications, In Fotis Liarokapis, Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis (Eds.), Visual Computing for Cultural Heritage, pp. 391-408. Cham:
Springer,
2020. [Download][BibSonomy][Doi]
@inbook{Niebling:2020aa,
author = {Florian Niebling and Jonas Bruschke and Heike Messemer and Markus Wacker and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Niebling2020aa.pdf},
year = {2020},
booktitle = {Visual Computing for Cultural Heritage},
editor = {Fotis Liarokapis and Athanasios Voulodimos and Nikolaos Doulamis and Anastasios Doulamis},
publisher = {Springer},
address = {Cham},
pages = {391-408},
doi = {https://doi.org/10.1007/978-3-030-37191-3_20},
title = {Analyzing Spatial Distribution of Photographs in Cultural Heritage Applications}
}
Abstract:
Erik Wolf, Nina Döllinger, David Mal, Carolin Wienrich, Mario Botsch, Marc Erich Latoschik,
Body Weight Perception of Females using Photorealistic Avatars in Virtual and Augmented Reality, In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 462-473.
2020. [Download][BibSonomy][Doi]
@inproceedings{wolf2020bodyweight,
author = {Erik Wolf and Nina Döllinger and David Mal and Carolin Wienrich and Mario Botsch and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ismar-body-weight-perception-vr-ar-preprint.pdf},
year = {2020},
booktitle = {2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
pages = {462-473},
doi = {10.1109/ISMAR50242.2020.00071},
title = {Body Weight Perception of Females using Photorealistic Avatars in Virtual and Augmented Reality}
}
Abstract:
The appearance of avatars can potentially alter changes in their users' perception and behavior. Based on this finding, approaches to support the therapy of body perception disturbances in eating or body weight disorders by mixed reality (MR) systems gain in importance. However, the methodological heterogeneity of previous research has made it difficult to assess the suitability of different MR systems for therapeutic use in these areas. The effects of MR system properties and related psychometric factors on body-related perceptions have so far remained unclear. We developed an interactive virtual mirror embodiment application to investigate the differences between an augmented reality see-through head-mounted-display (HMD) and a virtual reality HMD on the before-mentioned factors. Additionally, we considered the influence of the participant's body-mass-index (BMI) and the BMI difference between participants and their avatars on the estimations. The 54 normal-weight female participants significantly underestimated the weight of their photorealistic, generic avatar in both conditions. Body weight estimations were significantly predicted by the participants' BMI and the BMI difference. We also observed partially significant differences in presence and tendencies for differences in virtual body ownership between the systems. Our results offer new insights into the relationships of body weight perception in different MR environments and provide new perspectives for the development of therapeutic applications.
P. Ziebell, J. Stümpfig, M. Eidel, S. C. Kleih, A. Kübler, M. E. Latoschik, S. Halder,
Stimulus modality influences session-to-session transfer of training effects in auditory and tactile streaming-based P300 brain–computer interfaces, In Scientific Reports, Vol. 10(1), pp. 11873--.
2020. [Download][BibSonomy][Doi]
@article{ziebell2020stimulus,
author = {P. Ziebell and J. Stümpfig and M. Eidel and S. C. Kleih and A. Kübler and M. E. Latoschik and S. Halder},
journal = {Scientific Reports},
number = {1},
url = {https://doi.org/10.1038/s41598-020-67887-6},
year = {2020},
pages = {11873--},
volume = {10},
doi = {10.1038/s41598-020-67887-6},
title = {Stimulus modality influences session-to-session transfer of training effects in auditory and tactile streaming-based P300 brain–computer interfaces}
}
Abstract:
Despite recent successes, patients suffering from locked-in syndrome (LIS) still struggle to communicate using vision-independent brain–computer interfaces (BCIs). In this study, we compared auditory and tactile BCIs, regarding training effects and cross-stimulus-modality transfer effects, when switching between stimulus modalities. We utilized a streaming-based P300 BCI, which was developed as a low workload approach to prevent potential BCI-inefficiency. We randomly assigned 20 healthy participants to two groups. The participants received three sessions of training either using an auditory BCI or using a tactile BCI. In an additional fourth session, BCI versions were switched to explore possible cross-stimulus-modality transfer effects. Both BCI versions could be operated successfully in the first session by the majority of the participants, with the tactile BCI being experienced as more intuitive. Significant training effects were found mostly in the auditory BCI group and strong evidence for a cross-stimulus-modality transfer occurred for the auditory training group that switched to the tactile version but not vice versa. All participants were able to control at least one BCI version, suggesting that the investigated paradigms are generally feasible and merit further research into their applicability with LIS end-users. Individual preferences regarding stimulus modality should be considered.
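For readers unfamiliar with P300 classification, the core of such streaming-based BCIs is typically a linear classifier applied to post-stimulus EEG epochs. The Python sketch below is a generic, hedged illustration using scikit-learn's linear discriminant analysis, not the study's pipeline; epoch extraction, channel selection, and the toy data shapes are assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_p300_classifier(epochs, labels):
    """Train a simple P300 detector.

    epochs: array (n_epochs, n_channels, n_samples) of post-stimulus EEG.
    labels: 1 for target (attended) stimuli, 0 for non-targets.
    """
    features = epochs.reshape(len(epochs), -1)   # flatten channels x time
    clf = LinearDiscriminantAnalysis()
    clf.fit(features, labels)
    return clf

# Toy usage with random data, only to show the shapes involved.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 4, 10))
labels = rng.integers(0, 2, size=60)
model = train_p300_classifier(epochs, labels)
print(model.predict(epochs[:3].reshape(3, -1)))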
Sebastian Oberdörfer, Anne Elsässer, David Schraudt, Silke Grafe, Marc Erich Latoschik,
Horst – The Teaching Frog: Learning the Anatomy of a Frog Using Tangible AR, In Proceedings of the 2020 Mensch und Computer Conference (MuC '20), pp. 303-307. New York, NY, USA:
Association for Computing Machinery,
2020. [Download][BibSonomy][Doi]
@inproceedings{oberdorfer2020horst,
author = {Sebastian Oberdörfer and Anne Elsässer and David Schraudt and Silke Grafe and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-muc-frog-ar-preprint.pdf},
year = {2020},
booktitle = {Proceedings of the 2020 Mensch und Computer Conference (MuC '20)},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
pages = {303-307},
doi = {10.1145/3404983.3410007},
title = {Horst – The Teaching Frog: Learning the Anatomy of a Frog Using Tangible AR}
}
Abstract:
Learning environments using Augmented Reality (AR) visualize complex facts, can increase a learner's motivation, and allow for the application of learning contents. When tangible user interfaces are used, the learning process gains a physical aspect that improves intuitive use. We present a tangible AR system for learning the anatomy of a frog. The learning environment is based on a plush frog containing removable markers. When the markers are detected, they are replaced with 3D models of the corresponding organs. By extracting individual organs, learners can inspect them up close and learn more about their functions. Our AR frog further includes a quiz for self-assessment of the learning progress and a gamification system to raise overall motivation.
Jan-Philipp Stauffert, Florian Niebling, Jean-Luc Lugrin, Marc Erich Latoschik,
Guided Sine Fitting for Latency Estimation in Virtual Reality, In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 707--708.
2020. [Download][BibSonomy]
@inproceedings{stauffert2020guided,
author = {Jan-Philipp Stauffert and Florian Niebling and Jean-Luc Lugrin and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-poster-auto-sine-preprint.pdf},
year = {2020},
booktitle = {2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
pages = {707--708},
title = {Guided Sine Fitting for Latency Estimation in Virtual Reality}
}
Abstract:
Jan-Philipp Stauffert, Florian Niebling, Marc Erich Latoschik,
Simultaneous Run-Time Measurement of Motion-to-Photon Latency and Latency Jitter, In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 636--644.
2020. [Download][BibSonomy]
@inproceedings{stauffert2020simultaneous,
author = {Jan-Philipp Stauffert and Florian Niebling and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-latency-preprint.pdf},
year = {2020},
booktitle = {2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
pages = {636--644},
title = {Simultaneous Run-Time Measurement of Motion-to-Photon Latency and Latency Jitter}
}
Abstract:
Astrid Marieke Rosenthal von der Pütten, Birgit Lugrin, Sophia C. Steinhaeusser, Lina Klass,
Context Matters! Identifying Social Context Factors and Assessing Their Relevance for a Socially Assistive Robot, In Companion of the 2020 International Conference on Human-Robot Interaction, pp. 409-411.
ACM,
2020. [Download][BibSonomy][Doi]
@inproceedings{Rosenthal_von_der_P_tten_2020,
author = {Astrid Marieke Rosenthal von der Pütten and Birgit Lugrin and Sophia C. Steinhaeusser and Lina Klass},
url = {https://doi.org/10.1145%2F3371382.3378370},
year = {2020},
booktitle = {Companion of the 2020 International Conference on Human-Robot Interaction},
publisher = {ACM},
pages = {409-411},
doi = {10.1145/3371382.3378370},
title = {Context Matters! Identifying Social Context Factors and Assessing Their Relevance for a Socially Assistive Robot}
}
Abstract:
Dominik Gall, Jan Preßler, Jörn Hurtienne, Marc Erich Latoschik,
Self-organizing knowledge management might improve the quality of person-centered dementia care: A qualitative study, In International Journal of Medical Informatics, Vol. 139, p. 104132.
2020. [Download][BibSonomy][Doi]
@article{gall2020selforganizing,
author = {Dominik Gall and Jan Preßler and Jörn Hurtienne and Marc Erich Latoschik},
journal = {International Journal of Medical Informatics},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-gall-self-organizing-kowledge-management.pdf},
year = {2020},
pages = {104132},
volume = {139},
doi = {10.1016/j.ijmedinf.2020.104132},
title = {Self-organizing knowledge management might improve the quality of person-centered dementia care: A qualitative study}
}
Abstract:
Background: In institutional dementia care, person-centered care improves care processes and the quality of life of residents. However, communication gaps impede the implementation of person-centered care in favor of routinized care.
Objective: We evaluated whether self-organizing knowledge management reduces communication gaps and improves the quality of person-centered dementia care.
Method: We implemented a self-organizing knowledge management system. Eight significant others of residents with severe dementia and six professional caregivers used a mobile application for six months. We conducted qualitative interviews and focus groups afterward.
Main findings: Participants reported that the system increased the quality of person-centered care, reduced communication gaps, and increased both the task satisfaction of caregivers and the wellbeing of significant others.
Conclusions: Based on our findings, we develop the following hypotheses: Self-organizing knowledge management might provide a promising tool to improve the quality of person-centered care. It might reduce communication barriers that impede person-centered care. It might allow transferring content-maintaining tasks from caregivers to significant others. Such distribution of tasks, in turn, might be beneficial for both parties. Furthermore, shared knowledge about situational features might guide person-centered interventions.
Gabriela Ripka, Jennifer Tiede, Silke Grafe, Marc Erich Latoschik,
Teaching and Learning Processes in Immersive VR – Comparing Expectations of Preservice Teachers and Teacher Educators, In Society for Information Technology & Teacher Education (SITE) International Conference.
2020. [Download][BibSonomy]
@inproceedings{ripka2020teaching,
author = {Gabriela Ripka and Jennifer Tiede and Silke Grafe and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-ripka-teaching-and-learning-processes-ivr-preprint.pdf},
year = {2020},
booktitle = {Society for Information Technology & Teacher Education (SITE) International Conference},
title = {Teaching and Learning Processes in Immersive VR – Comparing Expectations of Preservice Teachers and Teacher Educators}
}
Abstract:
The usage of VR in higher education is no longer uncommon. However, concepts still mainly focus on technical rather than pedagogical aspects of VR in the classroom. Exploring the expectations of teacher educators as well as of preservice teachers appears indispensable (1) to achieve a sound understanding of requirements, (2) to identify potential design spaces, and finally (3) to create and derive suitable pedagogical approaches for VR in initial teacher education. This paper presents results of guideline-based qualitative interviews comparing the expectations of teacher educators and preservice teachers regarding teaching and learning in immersive virtual learning environments. The results showed that preservice teachers and teacher educators expect VR to enrich classes through interactive engagement in situations that would otherwise be too costly or dangerous. Regarding the design, teacher educators put the emphasis on functionality, whereas student teachers emphasized that they do not want to miss social interactions with their peers. Furthermore, both groups stated preferred modes of collaboration and interaction, taking into account the characteristics of a virtual learning environment such as being able to use diverse learning spaces for group work. Interviewees agreed on two vital factors for effective learning and teaching processes: flexibility and the possibility of customization given the technical properties they have to deal with. Apart from this, preservice teachers strongly emphasized their concerns about data usage and the ethics of using avatars and agents for representation.
David Mal,
DC The Impact of Social Interactions on an Embodied Individual’s Self-perception in Virtual Environments, In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 545-546. Atlanta, GA, USA:
IEEE,
2020. [Download][BibSonomy][Doi]
@conference{mal2020impact,
author = {David Mal},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-dc-the-impact-of-social-interactions-on-an-embodied-individual-s-self-perception-in-virtual-environments-preprint.pdf},
year = {2020},
booktitle = {2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)},
publisher = {IEEE},
address = {Atlanta, GA, USA},
pages = {545-546},
doi = {10.1109/VRW50115.2020.00124},
title = {DC The Impact of Social Interactions on an Embodied Individual’s Self-perception in Virtual Environments}
}
Abstract:
In shared immersive virtual reality, users can interact with other participants and experience them as being present in the environment. Thereby, different aspects of the respective interaction partners can have an impact on the perceived quality of the communication and possibly also the self-perception of an embodied user. This paper describes various factors the author aims to investigate during his doctoral studies. As the research area of embodied social interactions is broad, relevant factors and concrete research questions have been identified to investigate how social contact with one or multiple other persons may affect one's self-perception, behavior, and her or his relationship with the others while being embodied in a virtual environment.
Daniel Schlör, Albin Zehe, Konstantin Kobs, Blerta Veseli, Franziska Westermeier, Larissa Brübach, Daniel Roth, Marc Erich Latoschik, Andreas Hotho,
Improving Sentiment Analysis with Biofeedback Data, In Proceedings of the Workshop on peOple in laNguage, vIsiOn and the miNd (ONION).
2020. [Download][BibSonomy]
@inproceedings{schlor2020improving,
author = {Daniel Schlör and Albin Zehe and Konstantin Kobs and Blerta Veseli and Franziska Westermeier and Larissa Brübach and Daniel Roth and Marc Erich Latoschik and Andreas Hotho},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-ieeevr-lugrin-vr-teacher-training/2020-onion-sentiment-eeg-preprint.pdf},
year = {2020},
booktitle = {Proceedings of the Workshop on peOple in laNguage, vIsiOn and the miNd (ONION)},
title = {Improving Sentiment Analysis with Biofeedback Data}
}
Abstract:
Humans are frequently able to read and interpret the emotions of others by directly taking verbal and non-verbal signals in human-to-human communication into account, or to infer or even experience emotions from mediated stories. For computers, however, emotion recognition is a complex problem: thoughts and feelings are the roots of many behavioural responses, and they are deeply entangled with neurophysiological changes within humans. As such, emotions are very subjective, often expressed in a subtle manner, and highly dependent on context. For example, machine learning approaches for text-based sentiment analysis often rely on incorporating sentiment lexicons or language models to capture contextual meaning. This paper explores if and how we can further enhance sentiment analysis using biofeedback from humans who experience emotions while reading texts. Specifically, we record the heart rate and brain waves of readers who are presented with short texts annotated with the emotions they induce. We use these physiological signals to improve the performance of a lexicon-based sentiment classifier. We find that the combination of several biosignals can improve the ability of a text-based classifier to detect the presence of a sentiment in a text on a per-sentence level.
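To illustrate the general idea of fusing a lexicon-based text score with physiological features on a per-sentence level (a sketch only; the feature set, data, and model below are assumptions, not the pipeline used in the paper), one could train a simple classifier with scikit-learn:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder per-sentence features: a lexicon sentiment score plus aggregated
# biosignals (e.g., mean heart rate and an EEG band-power value per sentence).
rng = np.random.default_rng(0)
lexicon_score = rng.normal(size=200)
heart_rate = rng.normal(70, 5, size=200)
eeg_alpha = rng.normal(size=200)
X = np.column_stack([lexicon_score, heart_rate, eeg_alpha])
y = rng.integers(0, 2, size=200)  # placeholder labels: sentiment present or not

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy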
Akimi Oyanagi, Takuji Narumi, Jean-Luc Lugrin, Hideyuki Ando, Ren Ohmura,
Reducing the Fear of Height by Inducing Proteus Effect of a Dragon Avatar, In Transactions of the Virtual Reality Society of Japan, Vol. 25(1).
2020. Best Paper Award 🏆 [Download][BibSonomy]
@inproceedings{oyanagi,
author = {Akimi Oyanagi and Takuji Narumi and Jean-Luc Lugrin and Hideyuki Ando and Ren Ohmura},
journal = {Transactions of the Virtual Reality Society of Japan},
number = {1},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-Journal-VR-Japan-Reducing-the-Fear-of-Height-by-Inducing-Proteus-Effect-of-a-Dragon-Avatar-preprint.pdf},
year = {2020},
volume = {25},
title = {Reducing the Fear of Height by Inducing Proteus Effect of a Dragon Avatar}
}
Abstract:
Existing studies have reported that the Full Body Ownership Illusion lets users perceive a virtual body as their own. Research has also revealed the Proteus Effect: an avatar's appearance can affect the user's behavior, attitude, and mental state when the Full Body Ownership Illusion is induced. While many studies have focused on humanoid avatars and their psychological effects, a previous study has reported that the Full Body Ownership Illusion can be induced even with an animal avatar. When full body ownership is induced over an animal avatar, psychological effects different from those of a human avatar can be expected. Hence, this study examines whether a dragon avatar, which conveys the impression of a strong body and the ability to fly, can reduce the fear of heights as a Proteus Effect induced by the Full Body Ownership Illusion. We carried out an experiment with scenarios in which a subject transformed into a dragon and flew to a great height, compared with operating a human avatar. The results showed that transforming into the dragon avatar can improve subjective scores and physiological reactions related to the fear of heights.
Dominik Gall, Marc Erich Latoschik,
Visual angle modulates affective responses to audiovisual stimuli, In Computers in Human Behavior, Vol. 109, p. 106346.
2020. [Download][BibSonomy][Doi]
@article{gall2020visual,
author = {Dominik Gall and Marc Erich Latoschik},
journal = {Computers in Human Behavior},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2020-visual-angle-gall-latoschik.pdf},
year = {2020},
pages = {106346},
volume = {109},
doi = {https://doi.org/10.1016/j.chb.2020.106346},
title = {Visual angle modulates affective responses to audiovisual stimuli}
}
Abstract:
What we see influences our emotions. Technology often mediates the visual content we perceive. Visual angle is an essential parameter of how we see such content. It operationalizes visible properties of human-computer interfaces. However, we know little about the content-independent effect of visual angle on emotional responses to audiovisual stimuli. We show that visual angle alone affects emotional responses to audiovisual features, independent of object perception. We conducted a 2 x 2 x 3 factorial repeated-measures experiment with 143 undergraduate students. We simultaneously presented monochrome rectangles with pure tones and assessed valence, arousal, and dominance. In the high visual angle condition, arousal increased, valence and dominance decreased, and lightness modulated arousal. In the low visual angle condition, pitch modulated arousal, and lightness affected valence. Visual angle weights the affective relevance of perception modalities independent of spatial representations. Visual angle serves as an early-stage perceptual feature for organizing emotional responses. Control of this presentation layer allows for provoking or avoiding emotional response where intended.
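For readers unfamiliar with the construct, the visual angle subtended by a stimulus follows directly from its physical size and the viewing distance; the helper below is purely illustrative and not taken from the paper:

import math

def visual_angle_deg(size, distance):
    # Visual angle in degrees for a stimulus of the given size,
    # with size and distance expressed in the same unit.
    return math.degrees(2 * math.atan(size / (2 * distance)))

# Example: a 0.30 m wide rectangle viewed from 0.60 m subtends roughly 28 degrees.
print(round(visual_angle_deg(0.30, 0.60), 1))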
Kristina Bucher, Sebastian Oberdörfer, Silke Grafe, Marc Erich Latoschik,
Von Medienbeiträgen und Applikationen - ein interdisziplinäres Konzept zum Lehren und Lernen mit Augmented und Virtual Reality für die Hochschullehre, In Thomas Knaus, Olga Merz (Eds.), Schnittstellen und Interfaces - Digitaler Wandel in Bildungseinrichtungen, Vol. 7, pp. 225-238. Munich, Germany:
kopaed,
2020. [Download][BibSonomy]
@incollection{bucher2020medienbeitrgen,
author = {Kristina Bucher and Sebastian Oberdörfer and Silke Grafe and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-framediale-medienbeitraege-preprint.pdf},
year = {2020},
booktitle = {Schnittstellen und Interfaces - Digitaler Wandel in Bildungseinrichtungen},
editor = {Thomas Knaus and Olga Merz},
publisher = {kopaed},
address = {Munich, Germany},
series = {fraMediale},
pages = {225-238},
volume = {7},
title = {Von Medienbeiträgen und Applikationen - ein interdisziplinäres Konzept zum Lehren und Lernen mit Augmented und Virtual Reality für die Hochschullehre}
}
Abstract:
Augmented Reality (AR) and Virtual Reality (VR) are increasingly finding their way into educational practice. Their use brings both potentials and possible problems for teaching and learning processes. It is therefore a task of teacher education to enable (prospective) teachers to acquire the competencies needed to integrate AR and VR into teaching and learning processes. Against this background, an interdisciplinary concept for higher education teaching was developed and empirically evaluated with regard to its objectives. The article first presents key design aspects of the concept as well as initial findings from a pilot study. Subsequently, practical experiences from the interdisciplinary collaboration are reflected upon and discussed.
Sebastian Oberdörfer, David Heidrich, Marc Erich Latoschik,
Think Twice: The Influence of Immersion on Decision Making during Gambling in Virtual Reality, In Proceedings of the 27th IEEE Virtual Reality conference (VR '20), pp. 483-492. Atlanta, USA.
2020. [Download][BibSonomy][Doi]
@inproceedings{oberdorfer2020think,
author = {Sebastian Oberdörfer and David Heidrich and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2020-ieeevr-oberdoerfer-think-twice.pdf},
year = {2020},
booktitle = {Proceedings of the 27th IEEE Virtual Reality conference (VR '20)},
address = {Atlanta, USA},
pages = {483-492},
doi = {10.1109/VR46266.2020.00-35},
title = {Think Twice: The Influence of Immersion on Decision Making during Gambling in Virtual Reality}
}
Abstract:
Immersive Virtual Reality (VR) is increasingly being explored as an alternative medium for gambling games to attract players. Typically, gambling games try to impair a player's decision making, usually to the disadvantage of the player's financial outcome. Impaired decision making results in the inability to differentiate between advantageous and disadvantageous options. We investigated if and how immersion impacts decision making using a VR-based realization of the Iowa Gambling Task (IGT) to pinpoint potential risks and effects of gambling in VR. During the IGT, subjects are challenged to draw cards from four different decks, of which two are advantageous. The selections made serve as a measure of a participant's decision making during the task. In a novel user study, we compared the effects of immersion on decision making between a low-immersive desktop-3D-based IGT realization and a highly immersive VR version. Our results revealed significantly more disadvantageous decisions when playing the immersive VR version. This indicates an impairing effect of immersion on simulated real-life decision making and provides empirical evidence for a high risk potential of gambling games targeting immersive VR.
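Decision making in IGT studies is commonly summarized as a net score contrasting advantageous and disadvantageous deck selections; the sketch below is only an illustration of that measure (the deck labels and example data are assumptions, not the study's analysis code):

def igt_net_score(selections, advantageous=("C", "D")):
    # Net score = advantageous minus disadvantageous selections;
    # lower or negative scores indicate more impaired decision making.
    good = sum(1 for s in selections if s in advantageous)
    return good - (len(selections) - good)

# Example: 60 picks with 22 advantageous choices yield a net score of -16.
picks = ["C"] * 12 + ["D"] * 10 + ["A"] * 20 + ["B"] * 18
print(igt_net_score(picks))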
Sander Münster, Florian Niebling, Jonas Bruschke, Kristina Barthel, Kristina Friedrichs, Cindy Kröber, Ferdinand Maiwald,
Urban History Research and Discovery in the Age of Digital Repositories. A Report About Users and Requirements, In Horst Kremers (Eds.), Digital Cultural Heritage, pp. 63--84. Cham:
Springer International Publishing,
2020. [Download][BibSonomy][Doi]
@inbook{Münster2020,
author = {Sander Münster and Florian Niebling and Jonas Bruschke and Kristina Barthel and Kristina Friedrichs and Cindy Kröber and Ferdinand Maiwald},
url = {https://doi.org/10.1007/978-3-030-15200-0_5},
year = {2020},
booktitle = {Digital Cultural Heritage},
editor = {Horst Kremers},
publisher = {Springer International Publishing},
address = {Cham},
pages = {63--84},
doi = {10.1007/978-3-030-15200-0_5},
title = {Urban History Research and Discovery in the Age of Digital Repositories. A Report About Users and Requirements}
}
Abstract:
The research group on four-dimensional research and communication of urban history (HistStadt4D) investigates and develops methods and technologies to transfer extensive repositories of historical photographs and their contextual information into a three-dimensional spatial model, with an additional temporal component. This will make content accessible to researchers and the public, via a 4D browser as well as a location-dependent augmented reality representation. Against this background, this article highlights users and requirements of both scholarly and touristic usage of digital information about urban history, in particular historical photographs.
2019
Martin Mišiak, Niko Wissmann, Arnulph Fuhrmann, Marc Erich Latoschik,
The Impact of Stereo Rendering on the Perception of Normal Mapped Geometry in Virtual Reality, In Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology, pp. 92:1-92:2. New York, NY, USA:
Association for Computing Machinery,
2019. [Download][BibSonomy][Doi]
@inproceedings{misiak2019impact,
author = {Martin Mišiak and Niko Wissmann and Arnulph Fuhrmann and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-vrst-normal-maps-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {VRST '19},
pages = {92:1-92:2},
doi = {10.1145/3359996.3364811},
title = {The Impact of Stereo Rendering on the Perception of Normal Mapped Geometry in Virtual Reality}
}
Abstract:
This paper investigates the effects of normal mapping on the perception of geometric depth between stereoscopic and non-stereoscopic views. Results show that in a head-tracked environment, the addition of binocular disparity has no impact on the error rate in the detection of normal-mapped geometry. It does, however, significantly shorten the detection time.
Daniel Roth, S. von Mammen, Julian Keil, Manuel Schildknecht, Marc Erich Latoschik,
Approaching Difficult Terrain with Sensitivity: A Virtual Reality Game on the Five Stages of Grief, In 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1--4.
2019. [Download][BibSonomy]
@inproceedings{Roth:2019aa,
author = {Daniel Roth and S. von Mammen and Julian Keil and Manuel Schildknecht and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Roth2019aa.pdf},
year = {2019},
booktitle = {2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
pages = {1--4},
title = {Approaching Difficult Terrain with Sensitivity: A Virtual Reality Game on the Five Stages of Grief}
}
Abstract:
Andreas Knote, Sabine Fischer, Sylvain Cussat-Blanc, Florian Niebling, David Bernard, Florian Cogoni, S. von Mammen,
Immersive Analysis of 3D Multi-cellular In-Vitro and In-Silico Cell Cultures, In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), pp. 82-827.
2019. [Download][BibSonomy][Doi]
@inproceedings{Knote:2019ab,
author = {Andreas Knote and Sabine Fischer and Sylvain Cussat-Blanc and Florian Niebling and David Bernard and Florian Cogoni and S. von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Knote2019ab.pdf},
year = {2019},
booktitle = {2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)},
pages = {82-827},
doi = {10.1109/AIVR46125.2019.00021},
title = {Immersive Analysis of 3D Multi-cellular In-Vitro and In-Silico Cell Cultures}
}
Abstract:
Mary Katherine Heinrich, S. von Mammen, Daniel Nicolas Hofstadler, Mostafa Wahby, Payam Zahadat, Tomasz Skrzypczak, Mohammad Divband Soorati, Rafał Krela, Wojciech Kwiatkowski, Thomas Schmickl, others,
Constructing living buildings: a review of relevant technologies for a novel application of biohybrid robotics, In Journal of the Royal Society Interface, Vol. 16(156), p. 20190238.
The Royal Society,
2019. [Download][BibSonomy]
@article{Heinrich:2019aa,
author = {Mary Katherine Heinrich and S. von Mammen and Daniel Nicolas Hofstadler and Mostafa Wahby and Payam Zahadat and Tomasz Skrzypczak and Mohammad Divband Soorati and Rafał Krela and Wojciech Kwiatkowski and Thomas Schmickl and others},
journal = {Journal of the Royal Society Interface},
number = {156},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Heinrich2019aa.pdf},
year = {2019},
publisher = {The Royal Society},
pages = {20190238},
volume = {16},
title = {Constructing living buildings: a review of relevant technologies for a novel application of biohybrid robotics}
}
Abstract:
Kristina Bucher, Tim Blome, Stefan Rudolph, S. von Mammen,
VReanimate II: training first aid and reanimation in virtual reality, In Journal of Computers in Education, Vol. 6(1), pp. 53--78.
Springer,
2019. [Download][BibSonomy]
@article{Bucher:2019aa,
author = {Kristina Bucher and Tim Blome and Stefan Rudolph and S. von Mammen},
journal = {Journal of Computers in Education},
number = {1},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Bucher2019aa.pdf},
year = {2019},
publisher = {Springer},
pages = {53--78},
volume = {6},
title = {VReanimate II: training first aid and reanimation in virtual reality}
}
Abstract:
Jan Burkholz, S. von Mammen,
Empathy & Information: Ingredients for a Children's Game on Diabetes, In 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1--2.
2019. Best poster paper award 🏆 [Download][BibSonomy]
@inproceedings{Burkholz:2019aa,
author = {Jan Burkholz and S. von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Burkholz2019aa.pdf},
year = {2019},
booktitle = {2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
pages = {1--2},
title = {Empathy & Information: Ingredients for a Children's Game on Diabetes}
}
Abstract:
Andreas Knote, S. von Mammen,
Interactive Agent-Based Biological Cell Simulations for Morphogenesis, pp. 115-124.
Kassel University Press,
2019. [Download][BibSonomy]
@inbook{Knote:2019aa,
author = {Andreas Knote and S. von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Knote2019aa.pdf},
year = {2019},
publisher = {Kassel University Press},
pages = {115-124},
title = {Interactive Agent-Based Biological Cell Simulations for Morphogenesis}
}
Abstract:
Andreas Knote, David Bernard, S. von Mammen, Sylvain Cussat-Blanc,
Interactive Visualization of Multi-Cellular Tumor Spheroids.
2019. [Download][BibSonomy]
@misc{Knote:2019ac,
author = {Andreas Knote and David Bernard and S. von Mammen and Sylvain Cussat-Blanc},
url = {http://vizbi.org/Posters/2019/A14},
year = {2019},
title = {Interactive Visualization of Multi-Cellular Tumor Spheroids}
}
Abstract:
Katharina Anna Maria Heydn, Marc Philipp Dietrich, Marcus Barkowsky, Götz Winterfeldt, S. von Mammen, Andreas Nüchter,
The Golden Bullet: A Comparative Study for Target Acquisition, Pointing and Shooting, In 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1--8.
2019. [Download][BibSonomy]
@inproceedings{Heydn:2019aa,
author = {Katharina Anna Maria Heydn and Marc Philipp Dietrich and Marcus Barkowsky and Götz Winterfeldt and S. von Mammen and Andreas Nüchter},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Heydn2019aa.pdf},
year = {2019},
booktitle = {2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
pages = {1--8},
title = {The Golden Bullet: A Comparative Study for Target Acquisition, Pointing and Shooting}
}
Abstract:
S. von Mammen, Hans-Günter Schmidt,
Wenn eine Miniatur laufen lernt, In Bibliotheksforum Bayern, Vol. 13(4), pp. 250--253.
2019. [Download][BibSonomy]
@article{S.-von-Mammen:2019aa,
author = { S. von Mammen and Hans-Günter Schmidt},
journal = {Bibliotheksforum Bayern},
number = {4},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/von-Mammen2019aa.pdf},
year = {2019},
pages = {250--253},
volume = {13},
title = {Wenn eine Miniatur laufen lernt}
}
Abstract:
Carolin Wienrich, Nina Döllinger, Simon Kock, Klaus Gramann,
User-Centered Extension of a Locomotion Typology: Movement-Related Sensory Feedback and Spatial Learning., In VR, pp. 690-698.
IEEE,
2019. [Download][BibSonomy]
@inproceedings{conf/vr/WienrichDKG19,
author = {Carolin Wienrich and Nina Döllinger and Simon Kock and Klaus Gramann},
url = {http://dblp.uni-trier.de/db/conf/vr/vr2019.html#WienrichDKG19},
year = {2019},
booktitle = {VR},
publisher = {IEEE},
pages = {690-698},
title = {User-Centered Extension of a Locomotion Typology: Movement-Related Sensory Feedback and Spatial Learning.}
}
Abstract:
Diana Löffler, Robert Tscharn, Philipp Schaper, Melissa Hollenbach, Viola Mocke,
Tight Times: Semantics and Distractibility of Pneumatic Compression Feedback for Wearable Devices, In Proceedings of Mensch und Computer 2019.
ACM,
2019. [BibSonomy][Doi]
@inproceedings{Löffler_2019,
author = {Diana Löffler and Robert Tscharn and Philipp Schaper and Melissa Hollenbach and Viola Mocke},
year = {2019},
booktitle = {Proceedings of Mensch und Computer 2019},
publisher = {ACM},
doi = {10.1145/3340764.3340796},
title = {Tight Times: Semantics and Distractibility of Pneumatic Compression Feedback for Wearable Devices}
}
Abstract:
Andreas Müller, Samuel Truman, Sebastian von Mammen, Kirsten Brukamp,
Engineering a Showcase of Virtual Reality Exposure Therapy, In 2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1-2.
IEEE,
2019. [Download][BibSonomy][Doi]
@inproceedings{conf/vsgames/MullerTMB19,
author = {Andreas Müller and Samuel Truman and Sebastian von Mammen and Kirsten Brukamp},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Muller2019aa.pdf},
year = {2019},
booktitle = {2019 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
publisher = {IEEE},
pages = {1-2},
doi = {10.1109/VS-Games.2019.8864536},
title = {Engineering a Showcase of Virtual Reality Exposure Therapy}
}
Abstract:
Numerous research studies and controlled trials have unveiled the potential of serious games in various health-related areas [1]. Their range of application can be extended even further by the use of virtual reality (VR) technology, which allows the realistic representation of interactive contents. Virtual Reality Exposure Therapy (VRET) is a very promising novel use case for the development of serious games. Held in a virtual environment (VE) adaptive to the needs of the patient, this form of therapy can outperform traditional real-world measures [2, 3]. One of its major success factors is the engagement of the patient, which can be increased by an immersive gaming experience. We show a demonstrator of a VRET application for a fire-related post-traumatic stress disorder (PTSD). In this demonstrator, features to support actively guided VR experiences are improved upon, focusing on the interactive adaptivity of the VE.
E. Wolf, R. Heinrich, A. Michalek, D. Schraudt, A. Hohm, R. Hein, T. Grundgeiger, O. Happel,
Rapid Preparation of Eye Tracking Data For Debriefing In Medical Training: A Feasibility Study, In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Vol. 63(1), pp. 733--737.
SAGE Publications,
2019. [Download][BibSonomy][Doi]
@article{Wolf_2019,
author = {E. Wolf and R. Heinrich and A. Michalek and D. Schraudt and A. Hohm and R. Hein and T. Grundgeiger and O. Happel},
journal = {Proceedings of the Human Factors and Ergonomics Society Annual Meeting},
number = {1},
url = {https://doi.org/10.1177%2F1071181319631032},
year = {2019},
publisher = {SAGE Publications},
pages = {733--737},
volume = {63},
doi = {10.1177/1071181319631032},
title = {Rapid Preparation of Eye Tracking Data For Debriefing In Medical Training: A Feasibility Study}
}
Abstract:
Birgit Lugrin, Astrid M. Rosenthal von der Pütten, Svenja Hahn,
Identifying Social Context Factors Relevant for a Robotic Elderly Assistant., In Miguel A. Salichs, Shuzhi Sam Ge, Emilia Ivanova Barakova, John-John Cabibihan, Alan R. Wagner, Álvaro Castro González, Hongsheng He (Eds.), International Conference on Social Robotics (ICSR), Vol. 11876, pp. 558-567.
Springer,
2019. [Download][BibSonomy]
@inproceedings{conf/socrob/LugrinPH19,
author = {Birgit Lugrin and Astrid M. Rosenthal von der Pütten and Svenja Hahn},
url = {http://dblp.uni-trier.de/db/conf/socrob/icsr2019.html#LugrinPH19},
year = {2019},
booktitle = {International Conference on Social Robotics (ICSR)},
editor = {Miguel A. Salichs and Shuzhi Sam Ge and Emilia Ivanova Barakova and John-John Cabibihan and Alan R. Wagner and Álvaro Castro González and Hongsheng He},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
pages = {558-567},
volume = {11876},
title = {Identifying Social Context Factors Relevant for a Robotic Elderly Assistant.}
}
Abstract:
Leyla Dewitz, Cindy Kröber, Heike Messemer, Ferdinand Maiwald, Sander Münster, Jonas Bruschke, Florian Niebling,
HISTORICAL PHOTOS AND VISUALIZATIONS: POTENTIAL FOR RESEARCH, In ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLII-2/W15, pp. 405--412.
2019. [Download][BibSonomy][Doi]
@article{isprs-archives-XLII-2-W15-405-2019,
author = {Leyla Dewitz and Cindy Kröber and Heike Messemer and Ferdinand Maiwald and Sander Münster and Jonas Bruschke and Florian Niebling},
journal = {ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
url = {https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W15/405/2019/},
year = {2019},
pages = {405--412},
volume = {XLII-2/W15},
doi = {10.5194/isprs-archives-XLII-2-W15-405-2019},
title = {HISTORICAL PHOTOS AND VISUALIZATIONS: POTENTIAL FOR RESEARCH}
}
Abstract:
Benjamin Eckstein, Florian Niebling, Birgit Lugrin,
Reflected Reality: A Mixed Reality Knowledge Representation for Context-aware Systems., In 11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1-4.
IEEE,
2019. [Download][BibSonomy]
@inproceedings{conf/vsgames/EcksteinNL19,
author = {Benjamin Eckstein and Florian Niebling and Birgit Lugrin},
url = {http://dblp.uni-trier.de/db/conf/vsgames/vsgames2019.html#EcksteinNL19},
year = {2019},
booktitle = {11th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
publisher = {IEEE},
pages = {1-4},
title = {Reflected Reality: A Mixed Reality Knowledge Representation for Context-aware Systems.}
}
Abstract:
Florian Niebling,
Visualizing Orientations of Large Numbers of Photographs, In Proceedings of the 1st Workshop on Structuring and Understanding of Multimedia heritAge Contents, pp. 1--2. New York, NY, USA:
ACM,
2019. [Download][BibSonomy][Doi]
@inproceedings{Niebling:2019:VOL:3347317.3352729,
author = {Florian Niebling},
url = {http://doi.acm.org/10.1145/3347317.3352729},
year = {2019},
booktitle = {Proceedings of the 1st Workshop on Structuring and Understanding of Multimedia heritAge Contents},
publisher = {ACM},
address = {New York, NY, USA},
series = {SUMAC '19},
pages = {1--2},
doi = {10.1145/3347317.3352729},
title = {Visualizing Orientations of Large Numbers of Photographs}
}
Abstract:
Florian Niebling, Michael Haas, André Blessing,
Integration of Services for Software Development in DH: A Case Study of Image Classification using Convolutional Neural Networks, In Claude Draude, Martin Lange, Bernhard Sick (Eds.), INFORMATIK 2019: 50 Jahre Gesellschaft für Informatik – Informatik für Gesellschaft (Workshop-Beiträge), Vol. P-295, pp. 163-168. Bonn:
Gesellschaft für Informatik e.V.,
2019. [BibSonomy][Doi]
@inproceedings{mci/Niebling2019,
author = {Florian Niebling and Michael Haas and André Blessing},
year = {2019},
booktitle = {INFORMATIK 2019: 50 Jahre Gesellschaft für Informatik – Informatik für Gesellschaft (Workshop-Beiträge)},
editor = {Claude Draude and Martin Lange and Bernhard Sick},
publisher = {Gesellschaft für Informatik e.V.},
address = {Bonn},
series = {Lecture Notes in Informatics (LNI) - Proceedings},
pages = {163-168},
volume = {P-295},
doi = {10.18420/inf2019_ws17},
title = {Integration of Services for Software Development in DH: A Case Study of Image Classification using Convolutional Neural Networks}
}
Abstract:
Jean-Luc Lugrin, Fabian Unruh, Maximilian Landeck, Yoan Lamour, Marc Erich Latoschik, Kai Vogeley, Marc Wittmann,
Experiencing Waiting Time in Virtual Reality, In Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology.
2019. [Download][BibSonomy]
@inproceedings{lugrin2019experiencing,
author = {Jean-Luc Lugrin and Fabian Unruh and Maximilian Landeck and Yoan Lamour and Marc Erich Latoschik and Kai Vogeley and Marc Wittmann},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-vrst-experiencing-waiting-time-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology},
title = {Experiencing Waiting Time in Virtual Reality}
}
Abstract:
Yann Glémarec, Anne-Gwenn Bosser, Jean-Luc Lugrin, Mathieu Chollet, Cédric Buche, Maximilian Landeck, Marc Erich Latoschik,
A Scalability Benchmark for a Virtual Audience Perception Model in Virtual Reality, In Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology, Vol. VRST'19.
2019. [Download][BibSonomy]
@inproceedings{glemarec2019scalability,
author = {Yann Glémarec and Anne-Gwenn Bosser and Jean-Luc Lugrin and Mathieu Chollet and Cédric Buche and Maximilian Landeck and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-vrst-atmo-benchmarking-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology},
volume = {VRST'19},
title = {A Scalability Benchmark for a Virtual Audience Perception Model in Virtual Reality}
}
Abstract:
Jean-Luc Lugrin, Andreas Juchno, Philipp Schaper, Maximilian Landeck, Marc Erich Latoschik,
Drone-Steering: A Novel VR Traveling Technique, In Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology, Vol. VRST'19.
2019. [Download][BibSonomy]
@inproceedings{lugrin2019dronesteering,
author = {Jean-Luc Lugrin and Andreas Juchno and Philipp Schaper and Maximilian Landeck and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-vrst-drone-steering-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 25th ACM Conference on Virtual Reality Software and Technology},
volume = {VRST'19},
title = {Drone-Steering: A Novel VR Traveling Technique}
}
Abstract:
Daniel Roth, Gary Bente, Peter Kullmann, David Mal, Christian Felix Purps, Kai Vogeley, Marc Erich Latoschik,
Technologies for Social Augmentations in User-Embodied Virtual Reality, In 25th ACM Symposium on Virtual Reality Software and Technology (VRST), pp. 1-12.
2019. [Download][BibSonomy][Doi]
@conference{roth2019technologies,
author = {Daniel Roth and Gary Bente and Peter Kullmann and David Mal and Christian Felix Purps and Kai Vogeley and Marc Erich Latoschik},
url = {https://dl.acm.org/doi/pdf/10.1145/3359996.3364269},
year = {2019},
booktitle = {25th ACM Symposium on Virtual Reality Software and Technology (VRST)},
series = {VRST '19},
pages = {1-12},
doi = {https://doi.org/10.1145/3359996.3364269},
title = {Technologies for Social Augmentations in User-Embodied Virtual Reality}
}
Abstract:
Technologies for Virtual, Mixed, and Augmented Reality (VR, MR, and AR) allow social interactions to be artificially augmented and thus to go beyond what is possible in real life. Motivations for the use of social augmentations are manifold, for example, to synthesize behavior when sensory input is missing, to provide additional affordances in shared environments, or to support inclusion and training of individuals with social communication disorders. We review and categorize augmentation approaches and propose a software architecture based on four data layers. Three components further handle the status analysis, the modification, and the blending of behaviors. We present a prototype (injectX) that supports behavior tracking (body motion, eye gaze, and facial expressions from the lower face), status analysis, decision-making, augmentation, and behavior blending in immersive interactions. Along with a critical reflection, we consider further technical and ethical aspects.
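To make the described layering more tangible, the sketch below mirrors the idea of separate components for status analysis, behavior modification, and blending; all class and field names are hypothetical illustrations, not the injectX implementation:

from dataclasses import dataclass

@dataclass
class BehaviorFrame:
    head_rotation: tuple     # e.g., yaw, pitch, roll from tracking
    gaze_target: tuple       # e.g., normalized gaze coordinates
    smile_intensity: float   # 0..1, lower-face expression

def analyze_status(frame):
    # Hypothetical rule: treat a strong smile as a positive affective state.
    return "positive" if frame.smile_intensity > 0.5 else "neutral"

def modify(frame, status):
    # Augmentation step: exaggerate the expression when affect is positive.
    if status == "positive":
        return BehaviorFrame(frame.head_rotation, frame.gaze_target,
                             min(1.0, frame.smile_intensity * 1.5))
    return frame

def blend(tracked, augmented, w=0.5):
    # Linear blend between the tracked and the augmented expression intensity.
    s = (1 - w) * tracked.smile_intensity + w * augmented.smile_intensity
    return BehaviorFrame(tracked.head_rotation, tracked.gaze_target, s)

frame = BehaviorFrame((0.0, 5.0, 0.0), (0.5, 0.5), 0.7)
print(blend(frame, modify(frame, analyze_status(frame))).smile_intensity)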
Sebastian Lammers, Gary Bente, Ralf Tepest, Mathis Jording, Daniel Roth, Kai Vogeley,
Introducing ACASS: An Annotated Character Animation Stimulus Set for Controlled (e)Motion Perception Studies, In Frontiers in Robotics and AI, Vol. 6, p. 94.
2019. [Download][BibSonomy][Doi]
@article{10.3389/frobt.2019.00094,
author = {Sebastian Lammers and Gary Bente and Ralf Tepest and Mathis Jording and Daniel Roth and Kai Vogeley},
journal = {Frontiers in Robotics and AI},
url = {https://www.frontiersin.org/article/10.3389/frobt.2019.00094},
year = {2019},
pages = {94},
volume = {6},
doi = {10.3389/frobt.2019.00094},
title = {Introducing ACASS: An Annotated Character Animation Stimulus Set for Controlled (e)Motion Perception Studies}
}
Abstract:
Others' movements inform us about their current activities as well as their intentions and emotions. Research on the distinct mechanisms underlying action recognition and emotion inferences has been limited due to a lack of suitable comparative stimulus material. Problematic confounds can derive from low-level physical features (e.g., luminance), as well as from higher-level psychological features (e.g., stimulus difficulty). Here we present a standardized stimulus dataset, which allows to address both action and emotion recognition with identical stimuli. The stimulus set consists of 792 computer animations with a neutral avatar based on full body motion capture protocols. Motion capture was performed on 22 human volunteers, instructed to perform six everyday activities (mopping, sweeping, painting with a roller, painting with a brush, wiping, sanding) in three different moods (angry, happy, sad). Five-second clips of each motion protocol were rendered into AVI-files using two virtual camera perspectives for each clip. In contrast to video stimuli, the computer animations allowed to standardize the physical appearance of the avatar and to control lighting and coloring conditions, thus reducing the stimulus variation to mere movement. To control for low level optical features of the stimuli, we developed and applied a set of MATLAB routines extracting basic physical features of the stimuli, including average background-foreground proportion and frame-by-frame pixel change dynamics. This information was used to identify outliers and to homogenize the stimuli across action and emotion categories. This led to a smaller stimulus subset (n = 83 animations within the 792 clip database) which only contained two different actions (mopping, sweeping) and two different moods (angry, happy). To further homogenize this stimulus subset with regard to psychological criteria we conducted an online observer study (N = 112 participants) to assess the recognition rates for actions and moods, which led to a final sub-selection of 32 clips (eight per category) within the database. The ACASS database and its subsets provide unique opportunities for research applications in social psychology, social neuroscience, and applied clinical studies on communication disorders. All 792 AVI-files, selected subsets, MATLAB code, annotations, and motion capture data (FBX-files) are available online.
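As an illustration of the kind of low-level stimulus features mentioned above (the original routines were written in MATLAB; this NumPy analogue with placeholder frame data is a sketch, not the released code), foreground proportion and frame-by-frame pixel change can be computed as follows:

import numpy as np

def stimulus_features(frames, background_value=0):
    # frames: array of shape (n_frames, height, width) with grayscale values.
    # Returns the mean foreground proportion and the mean absolute
    # frame-by-frame pixel change across the clip.
    foreground = np.mean(frames != background_value, axis=(1, 2))
    change = np.mean(np.abs(np.diff(frames.astype(float), axis=0)), axis=(1, 2))
    return foreground.mean(), change.mean()

# Placeholder clip: 125 frames of 90x160 pixels with random content.
clip = np.random.randint(0, 256, size=(125, 90, 160))
print(stimulus_features(clip))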
Sebastian von Mammen, Andreas Müller, Marc Erich Latoschik, Mario Botsch, Kirsten Brukamp, Carsten Schröder, Michael Wacker,
VIA VR: A Technology Platform for Virtual Adventures for Healthcare and Well-Being, In VS-Games, pp. 1-2.
2019. [Download][BibSonomy]
@inproceedings{Mammen:2019aa,
author = {Sebastian von Mammen and Andreas Müller and Marc Erich Latoschik and Mario Botsch and Kirsten Brukamp and Carsten Schröder and Michael Wacker},
journal = {VS-Games},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-viavr-vsgames.pdf},
year = {2019},
pages = {1-2},
title = {VIA VR: A Technology Platform for Virtual Adventures for Healthcare and Well-Being}
}
Abstract:
To harness the potential of virtual reality (VR) in the healthcare sector, the expenditures for users such as clinics, doctors or health insurances, have to be reduced -- a requirement the technology platform VIA VR (an acronym from ``virtual reality adventures'' and VR) promises to fulfill by combining several key technologies to allow specialists from the healthcare sector to create high-impact VR adventures without the need for a background in programming or the design of virtual worlds. This paper fleshes out the concept of VIA VR, its technological pillars and the planned R&D agenda.
Alexander Geiger, Gary Bente, Sebastian Lammers, Ralf Tepest, Daniel Roth, Danilo Bzdok, Kai Vogeley,
Distinct functional roles of the mirror neuron system and the mentalizing system, In NeuroImage, Vol. 202, p. 116102.
2019. [Download][BibSonomy][Doi]
@article{geiger2019distinct,
author = {Alexander Geiger and Gary Bente and Sebastian Lammers and Ralf Tepest and Daniel Roth and Danilo Bzdok and Kai Vogeley},
journal = {NeuroImage},
url = {http://www.sciencedirect.com/science/article/pii/S1053811919306937},
year = {2019},
pages = {116102},
volume = {202},
doi = {https://doi.org/10.1016/j.neuroimage.2019.116102},
title = {Distinct functional roles of the mirror neuron system and the mentalizing system}
}
Abstract:
Movements can inform us about what people are doing and also about how they feel. This phenomenologically evident distinction has been suggested to correspond functionally with differential neural correlates denoted as mirror neuron system (MNS) and mentalizing system (MENT). To separate out the roles of the underlying systems we presented identical stimuli under different task demands: character animations showing everyday activities (mopping, sweeping) performed in different moods (angry, happy). Thirty-two participants were undergoing functional magnetic resonance imaging (fMRI) while asked to identify either the performed movement or the displayed mood. Univariate GLM analysis revealed the expected activation of either in MNS or MENT depending on the task. A complementary multivariate pattern-learning analysis based on the “social brain atlas” confirmed the expected recruitment of both systems. In conclusion, both approaches converge onto clearly distinct functional roles of both social neural networks in a novel dynamic social perception paradigm.
Daniel Roth, Larissa Brübach, Franziska Westermeier, Christian Schell, Tobias Feigl, Marc Erich Latoschik,
A Social Interaction Interface Supporting Affective Augmentation Based on Neuronal Data, In Symposium on Spatial User Interaction (SUI '19), October 19--20, 2019, New Orleans, LA, USA, Vol. SUI '19.
2019. [BibSonomy][Doi]
@inproceedings{roth2019toappearsocial,
author = {Daniel Roth and Larissa Brübach and Franziska Westermeier and Christian Schell and Tobias Feigl and Marc Erich Latoschik},
year = {2019},
booktitle = {Symposium on Spatial User Interaction (SUI '19), October 19--20, 2019, New Orleans, LA, USA},
volume = {SUI '19},
doi = {10.1145/3357251.3360018},
title = {A Social Interaction Interface Supporting Affective Augmentation Based on Neuronal Data}
}
Abstract:
In this demonstration we present a prototype for an avatar-mediated social interaction interface that supports the replication of head and eye movement in distributed virtual environments. In addition to the retargeting of these natural behaviors, the system is capable of augmenting the interaction based on the visual presentation of affective states. We derive those states using neuronal data captured by electroencephalographic (EEG) sensing in combination with a machine-learning-driven classification of emotional states.
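A common way to turn raw EEG into affective-state features before classification, as hinted at above, is to compute band power from the signal; the SciPy sketch below is illustrative only (sampling rate, band limits, and the random data are assumptions, not the demo's processing chain):

import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    # Average spectral power of the signal between f_lo and f_hi (Hz),
    # estimated with Welch's method.
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

# Placeholder EEG channel: 10 s at 256 Hz; the alpha-band power (8-13 Hz)
# could then serve as one input feature for an emotion classifier.
fs = 256
eeg = np.random.randn(fs * 10)
print(band_power(eeg, fs, 8, 13))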
Sebastian Oberdörfer, Marc Erich Latoschik,
Predicting Learning Effects of Computer Games Using the Gamified Knowledge Encoding Model, In Matthias Rauterberg, Fotis Liarokapis (Eds.), Entertainment Computing, Vol. 32, p. 100315.
2019. [Download][BibSonomy][Doi]
@article{oberdorfer2019predicting,
author = {Sebastian Oberdörfer and Marc Erich Latoschik},
journal = {Entertainment Computing},
url = {https://www.sciencedirect.com/science/article/abs/pii/S1875952119300059},
year = {2019},
editor = {Matthias Rauterberg and Fotis Liarokapis},
pages = {100315},
volume = {32},
doi = {10.1016/j.entcom.2019.100315},
title = {Predicting Learning Effects of Computer Games Using the Gamified Knowledge Encoding Model}
}
Abstract:
Game mechanics encode a computer game’s underlying principles as their internal rules. These game rules consist of information relevant to a specific learning content in the case of a serious game. This paper describes an approach to predict the learning effect of computer games by analyzing the structure of the provided game mechanics. In particular, we utilize the Gamified Knowledge Encoding model to predict the learning effects of playing the computer game Kerbal Space Program (KSP). We tested the correctness of the prediction in a user study evaluating the learning effects of playing KSP. Participants achieved a significant increase in knowledge about orbital mechanics during their first gameplay hours. In the second phase of the study, we assessed KSP’s applicability as an educational tool and compared it to a traditional learning method in respect to the learning outcome. The results indicate a highly motivating and effective knowledge learning. Also, participants used KSP to validate complex theoretical spaceflight concepts.
Carla Winter, Florian Kern, Ivo Käthner, Dominik Gall, Marc Erich Latoschik, Paul Pauli,
Virtuelle Realität als Ergänzung des Laufbandtrainings zur Rehabilitation von Gangstörungen bei Patienten mit Schlaganfall und Multipler Sklerose, In Ethik in der Medizin, Vol. 14(15).
2019. [BibSonomy]
@presentation{winter2019virtuelle,
author = {Carla Winter and Florian Kern and Ivo Käthner and Dominik Gall and Marc Erich Latoschik and Paul Pauli},
journal = {Ethik in der Medizin},
number = {15},
year = {2019},
volume = {14},
title = {Virtuelle Realität als Ergänzung des Laufbandtrainings zur Rehabilitation von Gangstörungen bei Patienten mit Schlaganfall und Multipler Sklerose}
}
Abstract:
Virtual reality (VR) technology offers new treatment options in the rehabilitation of neurological diseases. Previous studies have shown that VR-based treadmill training for patients with gait disorders increases not only the physical but also the psychological success of therapy and thus represents a useful complement to conventional gait training.
The present study investigated the effects of presenting a virtual environment immersively (via a head-mounted display, HMD) compared with a semi-immersive presentation of the VR (via a flat-screen monitor) and conventional treadmill training without VR.
To this end, first 36 healthy participants and subsequently 14 MS and stroke patients with gait disorders each completed the three treadmill conditions (immersive, semi-immersive, and without VR).
The virtual environment contained gamification elements to increase motivation and was implemented on the basis of Ryan and Deci's self-determination theory. The study with healthy participants served to test usability and to uncover technical deficits. In a subsequent proof-of-concept study with 14 MS and stroke patients, the application was used to test whether patients undergoing VR-supported treadmill training as part of the treatment of their gait disorders (EDSS < 6) could improve their walking abilities. The primary outcome measure in both studies was the average walking speed within the individual conditions. In addition, motivation, usability, presence (Igroup Presence Questionnaire), and side effects of the VR system (Simulator Sickness Questionnaire) were assessed with standardized questionnaires. Participants were also asked about their preference regarding the three conditions.
In both the study with healthy participants and the patient study, the HMD condition showed a significantly higher average walking speed than treadmill training without VR. Presence was significantly higher in the HMD condition than in the monitor condition in both studies. Moreover, no side effects of the virtual world in the sense of simulator sickness occurred. Participants had no relevant VR-related postural difficulties or problems with the visual display. While motivation among the healthy participants was higher after the HMD condition than after the other two conditions, no significant differences were detected in the patient study. Nevertheless, the patients reported that they experienced the virtual world as more motivating in the HMD condition than in the monitor condition. Across all three conditions, HMD treadmill training was preferred by 71% (n = 14) of the patients and 89% (n = 36) of the healthy participants. Likewise, 71% of the patients could imagine using HMD treadmill training more frequently in the future.
Johann Schmitt, Jean-Luc Lugrin, Carolin Wienrich, Marc Erich Latoschik,
Investigating Gesture-based Commands for First-Person Shooter Games in Virtual Reality, In Proceedings of User-embodied Interaction in Virtual Reality, Mensch und Computer 2019.
2019. [Download][BibSonomy]
@inproceedings{LugrinGesture2019,
author = {Johann Schmitt and Jean-Luc Lugrin and Carolin Wienrich and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-muc-uivr-workshop-Investigating-gesture-based-commands-for-first-person-shooter-games-in-vr.pdf},
year = {2019},
booktitle = {Proceedings of User-embodied Interaction in Virtual Reality, Mensch und Computer 2019},
title = {Investigating Gesture-based Commands for First-Person Shooter Games in Virtual Reality}
}
Abstract:
Jean-Luc Lugrin, Florian Kern, Constantin Kleinbeck, Daniel Roth, Christian Daxery, Tobias Feigl, Christopher Mutschler, Marc Erich Latoschik,
A Framework for Location-Based VR Applications, In Proceedings of the GI VR/AR - Workshop.
Shaker Verlag,
2019. [Download][BibSonomy][Doi]
@inproceedings{LugrinHolopark2019,
author = {Jean-Luc Lugrin and Florian Kern and Constantin Kleinbeck and Daniel Roth and Christian Daxery and Tobias Feigl and Christopher Mutschler and Marc Erich Latoschik},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-gi-vr-ar-framework-for-location-based-vr-applications.pdf},
year = {2019},
booktitle = {Proceedings of the GI VR/AR - Workshop},
publisher = {Shaker Verlag},
doi = {http://dx.doi.org/10.2370/9783844068870},
title = {A Framework for Location-Based VR Applications}
}
Abstract:
This paper presents a framework to develop and investigate location-based Virtual Reality (VR) applications. We demonstrate our framework by introducing a novel type of VR museum, designed to support a large number of simultaneous co-located users. These visitors are walking in a hangar-scale tracking zone (600 m²), while sharing a ten times bigger virtual space (7000 m²). Co-located VR applications like this one are opening novel VR perspectives. However, sharing a limitless virtual world using a large, but limited, tracking space is also raising numerous challenges: from financial considerations and technical implementation to interactions and evaluations (e.g., user’s representation, navigation, health & safety, monitoring). How to design, develop and evaluate such a VR system is still an open question. Here, we describe a fully implemented framework with its specific features and performance optimizations. We also illustrate our framework’s viability with a first VR application and discuss its potential benefits for education and future evaluation.
Ferdinand Maiwald, Jonas Bruschke, Christoph Lehmann, Florian Niebling,
A 4D information system for the exploration of multitemporal images and maps using photogrammetry, Web technologies and VR/AR, In José Luis Lerma (Eds.), Virtual Archaeology Review, Vol. 10(21), pp. 1-13.
2019. [BibSonomy][Doi]
@article{maiwald2019information,
author = {Ferdinand Maiwald and Jonas Bruschke and Christoph Lehmann and Florian Niebling},
journal = {Virtual Archaeology Review},
number = {21},
year = {2019},
editor = {José Luis Lerma},
pages = {1-13},
volume = {10},
doi = {https://doi.org/10.4995/var.2019.11867},
title = {A 4D information system for the exploration of multitemporal images and maps using photogrammetry, Web technologies and VR/AR}
}
Abstract:
Nina Döllinger, Carolin Wienrich, Erik Wolf, Mario Botsch, Marc Erich Latoschik,
ViTraS - Virtual Reality Therapy by Stimulation of Modulated Body Image - Project Outline, In 2019 Mensch und Computer - Workshopband, pp. 606-611.
2019. [Download][BibSonomy][Doi]
@inproceedings{dollinger2019toappearvitras,
author = {Nina Döllinger and Carolin Wienrich and Erik Wolf and Mario Botsch and Marc Erich Latoschik},
url = {https://dl.gi.de/handle/20.500.12116/25250},
year = {2019},
booktitle = {2019 Mensch und Computer - Workshopband},
pages = {606-611},
doi = {10.18420/muc2019-ws-633},
title = {ViTraS - Virtual Reality Therapy by Stimulation of Modulated Body Image - Project Outline}
}
Abstract:
In recent decades, obesity has become one of the major public health issues and is associated with other severe diseases. Although current multidisciplinary therapy approaches already include behavioral therapy techniques, the oftentimes remaining lack of psychotherapeutic support after surgery leads to relapses and renewed weight gain. This paper presents an overview of the project ViTraS - Virtual Reality Therapy by Stimulation of Modulated Body Image - that addresses these challenges by (i) developing an integrative model predicting the influential paths of immersive media for an effective behavioral change; (ii) developing an augmented reality (AR) mirror system enabling effective therapy of patients' body self-perception, and (iii) developing a multi-user virtual reality (VR) system supplying social support from therapists and other patients. The three components of the ViTraS project are briefly introduced, as well as a first VR-based prototype of the mirror system.
Erik Wolf, Ronja Heinrich, Annabell Michalek, David Schraudt, Anna Hohm, Rebecca Hein, Tobias Grundgeiger, Oliver Happel,
Rapid Preparation of Eye Tracking Data for Debriefing in Medical Training: A Feasibility Study, In 2019 Human Factors and Ergonomics Society Annual Meeting, Vol. 63(1), pp. 733-737.
2019. [Download][BibSonomy][Doi]
@inproceedings{wolf2019rapid,
author = {Erik Wolf and Ronja Heinrich and Annabell Michalek and David Schraudt and Anna Hohm and Rebecca Hein and Tobias Grundgeiger and Oliver Happel},
number = {1},
url = {https://journals.sagepub.com/doi/pdf/10.1177/1071181319631032},
year = {2019},
booktitle = {2019 Human Factors and Ergonomics Society Annual Meeting},
pages = {733-737},
volume = {63},
doi = {10.1177/1071181319631032},
title = {Rapid Preparation of Eye Tracking Data for Debriefing in Medical Training: A Feasibility Study}
}
Abstract:
Simulation-based medical training is an increasingly used method to improve the technical and non-technical performance of clinical staff. An essential part of training is the debriefing of the participants, often using audio, video, or even eye tracking recordings. We conducted a practice-oriented feasibility study to test an eye tracking data preparation procedure, which automatically provided information about the gaze distribution on areas of interest such as the vital sign monitor or the patient simulator. We acquired eye tracking data during three simulation scenarios and provided gaze distribution data for debriefing within 30 minutes. Additionally, we qualitatively evaluated the usefulness of the generated eye tracking data for debriefings. Participating students and debriefers were mostly positive about the data provided; however, future research should improve the technical side of the procedure and investigate best practices regarding how to present and use the data in debriefings.
Erik Wolf, Sara Klüber, Chris Zimmerer, Jean-Luc Lugrin, Marc Erich Latoschik,
"Paint that object yellow": Multimodal Interaction to Enhance Creativity During Design Tasks in VR, In 2019 International Conference on Multimodal Interaction, pp. 195-204.
2019. Best Paper Runner-Up 🏆 [Download][BibSonomy][Doi]
@inproceedings{wolf2019paint,
author = {Erik Wolf and Sara Klüber and Chris Zimmerer and Jean-Luc Lugrin and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-icmi-creativity-in-vr.pdf},
year = {2019},
booktitle = {2019 International Conference on Multimodal Interaction},
pages = {195-204},
doi = {10.1145/3340555.3353724},
title = {"Paint that object yellow": Multimodal Interaction to Enhance Creativity During Design Tasks in VR}
}
Abstract:
Virtual reality (VR) has always been considered a promising medium to support designers with alternative work environments. Still, graphical user interfaces are prone to induce attention shifts between the user interface and the manipulated target objects, which hampers the creative process. This work proposes a speech-and-gesture-based interaction paradigm for creative tasks in VR. We developed a multimodal toolbox (MTB) for VR-based design applications and compared it to a typical unimodal menu-based toolbox (UTB). The comparison uses a design-oriented use case and measures flow, usability, and presence as relevant characteristics for a VR-based design process. The multimodal approach (1) led to a lower perceived task duration and a higher reported feeling of flow. It (2) provided a higher intuitive use and a lower mental workload while not being slower than a UTB. Finally, it (3) generated a higher feeling of presence. Overall, our results confirm significant advantages of the proposed multimodal interaction paradigm and the developed MTB for important characteristics of design processes in VR.
Benjamin Eckstein, Eva Krapp, Anne Elsässer, Birgit Lugrin,
Smart substitutional reality: Integrating the smart home into virtual reality, In Entertainment Computing, Vol. 31, p. 100306.
2019. [Download][BibSonomy][Doi]
@article{ECKSTEIN2019100306,
author = {Benjamin Eckstein and Eva Krapp and Anne Elsässer and Birgit Lugrin},
journal = {Entertainment Computing},
url = {http://www.sciencedirect.com/science/article/pii/S1875952119300151},
year = {2019},
pages = {100306},
volume = {31},
doi = {https://doi.org/10.1016/j.entcom.2019.100306},
title = {Smart substitutional reality: Integrating the smart home into virtual reality}
}
Abstract:
Recent advances in VR technology allow users to consume immersive content within their own living room. Substitutional Reality (SR) promises to enhance this experience by integrating the physical environment into the simulation. We propose a novel approach, called Smart Substitutional Reality (SSR), that extends the passive haptics of SR with the interactive functionality of a smart home environment. SSR is meant to serve as a foundation for serious games. In this paper, we describe the concept of SSR alongside the implementation of a prototype in our smart lab. We designed multiple virtual environments with a varying degree of mismatch compared to the real world, and added selected objects to induce additional haptic and thermal stimuli to increase immersion. In two user studies we investigate their impact on spatial presence and intrinsic motivation which are especially important for serious games. Results suggest that spatial presence is maintained with higher mismatch, and increased by inducing additional stimuli. Intrinsic motivation is increased in both cases.
Hendrik Striepe, Melissa Donnermann, Martina Lein, Birgit Lugrin,
Modeling and Evaluating Emotion, Contextual Head Movement and Voices for a Social Robot Storyteller, In International Journal of Social Robotics.
2019. [BibSonomy][Doi]
@article{Striepe2019,
author = {Hendrik Striepe and Melissa Donnermann and Martina Lein and Birgit Lugrin},
journal = {International Journal of Social Robotics},
year = {2019},
doi = {10.1007/s12369-019-00570-7},
title = {Modeling and Evaluating Emotion, Contextual Head Movement and Voices for a Social Robot Storyteller}
}
Abstract:
Daniel Roth, Franziska Westermeier, Larissa Brübach, Tobias Feigl, Christian Schell, Marc Erich Latoschik,
Brain 2 Communicate: EEG-based Affect Recognition to Augment Virtual Social Interactions, In Mensch und Computer 2019 - Workshopband. Bonn:
Gesellschaft für Informatik e.V.,
2019. [Download][BibSonomy][Doi]
@conference{roth2019toappearbrain,
author = {Daniel Roth and Franziska Westermeier and Larissa Brübach and Tobias Feigl and Christian Schell and Marc Erich Latoschik},
url = {https://dl.gi.de/handle/20.500.12116/25205},
year = {2019},
booktitle = {Mensch und Computer 2019 - Workshopband},
publisher = {Gesellschaft für Informatik e.V.},
address = {Bonn},
doi = {10.18420/muc2019-ws-571},
title = {Brain 2 Communicate: EEG-based Affect Recognition to Augment Virtual Social Interactions}
}
Abstract:
Philipp Schaper, Tobias Grundgeiger,
Commission Errors with Forced Response Lag, In Quarterly Journal of Experimental Psychology, Vol. 72(10), pp. 2380-2392.
2019. [BibSonomy][Doi]
@article{schaper2019commission,
author = {Philipp Schaper and Tobias Grundgeiger},
journal = {Quarterly Journal of Experimental Psychology},
number = {10},
year = {2019},
pages = {2380-2392},
volume = {72},
doi = {10.1177/1747021819840583},
title = {Commission Errors with Forced Response Lag}
}
Abstract:
Reed M. Reynolds, Eric Novotny, Joomi Lee, Daniel Roth, Gary Bente,
Ambiguous Bodies: The Role of Displayed Arousal in Emotion Misperception, In Journal of Nonverbal Behavior.
2019. [Download][BibSonomy][Doi]
@article{reynolds2019toappearambiguous,
author = {Reed M. Reynolds and Eric Novotny and Joomi Lee and Daniel Roth and Gary Bente},
journal = {Journal of Nonverbal Behavior},
url = {https://doi.org/10.1007/s10919-019-00312-3},
year = {2019},
doi = {10.1007/s10919-019-00312-3},
title = {Ambiguous Bodies: The Role of Displayed Arousal in Emotion Misperception}
}
Abstract:
Daniel Roth, Jan-Philipp Stauffert, Marc Erich Latoschik,
Avatar Embodiment, Behavior Replication, and Kinematics in Virtual Reality, In William R. Sherman (Eds.), VR Developer Gems, Vol. 1, pp. 321-348.
Springer US,
2019. [BibSonomy]
@inbook{roth2019avatar,
author = {Daniel Roth and Jan-Philipp Stauffert and Marc Erich Latoschik},
year = {2019},
booktitle = {VR Developer Gems},
editor = {William R. Sherman},
publisher = {Springer US},
pages = {321-348},
volume = {1},
title = {Avatar Embodiment, Behavior Replication, and Kinematics in Virtual Reality}
}
Abstract:
Daniel Roth, Carola Bloch, Josephine Schmitt, Lena Frischlich, Marc Erich Latoschik, Gary Bente,
Perceived Authenticity, Empathy, and Pro-social Intentions Evoked Through Avatar-mediated Self-disclosures, In Proceedings of Mensch Und Computer 2019, pp. 21--30. New York, NY, USA:
ACM,
2019. [Download][BibSonomy][Doi]
@inproceedings{roth2019toappearperceived,
author = {Daniel Roth and Carola Bloch and Josephine Schmitt and Lena Frischlich and Marc Erich Latoschik and Gary Bente},
url = {http://doi.acm.org/10.1145/3340764.3340797},
year = {2019},
booktitle = {Proceedings of Mensch Und Computer 2019},
publisher = {ACM},
address = {New York, NY, USA},
series = {MuC'19},
pages = {21--30},
doi = {10.1145/3340764.3340797},
title = {Perceived Authenticity, Empathy, and Pro-social Intentions Evoked Through Avatar-mediated Self-disclosures}
}
Abstract:
Tobias Feigl, Daniel Roth, Stefan Gradl, Markus Gerhard Wirth, Michael Philippsen, Marc Erich Latoschik, Bjoern Eskofier, Christopher Mutschler,
Sick Moves! Motion Parameters as Indicators of Simulator Sickness, In IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 25(11), pp. 3146-3157.
2019. [Download][BibSonomy][Doi]
@article{feigl2019toappearmoves,
author = {Tobias Feigl and Daniel Roth and Stefan Gradl and Markus Gerhard Wirth and Michael Philippsen and Marc Erich Latoschik and Bjoern Eskofier and Christopher Mutschler},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
number = {11},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-sick-moves-tvcg-preprint.pdf},
year = {2019},
pages = {3146-3157},
volume = {25},
doi = {10.1109/TVCG.2019.2932224},
title = {Sick Moves! Motion Parameters as Indicators of Simulator Sickness}
}
Abstract:
We explore motion parameters, more specifically gait parameters, as an objective indicator to assess simulator sickness in Virtual Reality (VR). We discuss the potential relationships between simulator sickness, immersion, and presence. We used two different camera pose (position and orientation) estimation methods for the evaluation of motion tasks in a large-scale VR environment: a simple model and an optimized model that allows for a more accurate and natural mapping of human senses. Participants performed multiple motion tasks (walking, balancing, running) in three conditions: a physical reality baseline condition, a VR condition with the simple model, and a VR condition with the optimized model. We compared these conditions with regard to the resulting sickness and gait, as well as the perceived presence in the VR conditions. The subjective measures confirmed that the optimized pose estimation model reduces simulator sickness and increases the perceived presence. The results further show that both models affect the gait parameters and simulator sickness, which is why we further investigated a classification approach that deals with non-linear correlation dependencies between gait parameters and simulator sickness. We argue that our approach could be used to assess and predict simulator sickness based on human gait parameters and we provide implications for future research.
Sophia C Steinhaeusser, Anna Riedmann, Max Haller, Sebastian Oberdörfer, Kristina Bucher, Marc Erich Latoschik,
Fancy Fruits - An Augmented Reality Application for Special Needs Education, In Proceedings of the 11th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2019), pp. 1-4.
2019. [Download][BibSonomy][Doi]
@inproceedings{steinhaeusser2019fancy,
author = {Sophia C Steinhaeusser and Anna Riedmann and Max Haller and Sebastian Oberdörfer and Kristina Bucher and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-vsgames-fancy-fruits-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 11th International Conference on Virtual Worlds and Games for Serious Applications (VS Games 2019)},
pages = {1-4},
doi = {10.1109/VS-Games.2019.8864547},
title = {Fancy Fruits - An Augmented Reality Application for Special Needs Education}
}
Abstract:
Augmented Reality (AR) allows for a connection between real and virtual worlds, thus providing a high potential for Special Needs Education (SNE). We developed an educational application called Fancy Fruits to teach disabled children the components of regional fruits and vegetables. The app includes marker-based AR elements connecting the real situation with virtual information. To evaluate the application, a field study was conducted. Eleven children with mental disabilities took part in the study. The results show high enjoyment among the participants. The study also validated the app's child-friendly design.
Doris Aschenbrenner, Florian Leutert, Argun Cencen, Jouke Verlinden, Klaus Schilling, Marc Erich Latoschik, Stephan Lukosch,
Comparing Human Factors for Augmented Reality supported Single and Cooperative Repair Operations of Industrial Robots, In Frontiers in Robotics and AI, Vol. 6, p. 37.
Frontiers,
2019. [Download][BibSonomy]
@article{aschenbrenner2019comparing,
author = {Doris Aschenbrenner and Florian Leutert and Argun Cencen and Jouke Verlinden and Klaus Schilling and Marc Erich Latoschik and Stephan Lukosch},
journal = {Frontiers in Robotics and AI},
url = {https://www.frontiersin.org/articles/10.3389/frobt.2019.00037/full?&utm_source=Email_to_authors_&utm_medium=Email&utm_content=T1_11.5e1_author&utm_campaign=Email_publication&field=&journalName=Frontiers_in_Robotics_and_AI&id=428452},
year = {2019},
publisher = {Frontiers},
pages = {37},
volume = {6},
title = {Comparing Human Factors for Augmented Reality supported Single and Cooperative Repair Operations of Industrial Robots}
}
Abstract:
To support industry's decision-making process on how to implement Augmented Reality (AR) in production, this article provides guidance through a set of comparative user studies. The results are obtained from the feedback of 160 participants who performed the same repair task on a switch cabinet of an industrial robot. The studies compare several AR instruction applications on different display devices (head-mounted display, handheld tablet PC, and projection-based spatial AR) with baseline conditions (paper instructions and phone support), both in a single-user and a collaborative setting. In addition to insights into the performance of the individual device types in single-user operation, the study shows significant indications that AR techniques are especially helpful in a collaborative setting.
F. Maiwald, F. Henze, J. Bruschke, F. Niebling,
GEO-INFORMATION TECHNOLOGIES FOR A MULTIMODAL ACCESS ON HISTORICAL PHOTOGRAPHS AND MAPS FOR RESEARCH AND COMMUNICATION IN URBAN HISTORY, In ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XLII-2/W11, pp. 763--769.
2019. 2nd International Conference of Geomatics and Restoration. Best paper award. 🏆 [Download][BibSonomy][Doi]
@article{isprs-archives-XLII-2-W11-763-2019,
author = {F. Maiwald and F. Henze and J. Bruschke and F. Niebling},
journal = {ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
url = {https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLII-2-W11/763/2019/},
year = {2019},
pages = {763--769},
volume = {XLII-2/W11},
doi = {10.5194/isprs-archives-XLII-2-W11-763-2019},
title = {GEO-INFORMATION TECHNOLOGIES FOR A MULTIMODAL ACCESS ON HISTORICAL PHOTOGRAPHS AND MAPS FOR RESEARCH AND COMMUNICATION IN URBAN HISTORY}
}
Abstract:
David Obremski, Jean-Luc Lugrin, Philipp Schaper, Birgit Lugrin,
Non-Native Speaker Generation and Perception for Mixed-Cultural Settings, In ACM International Conference on Intelligent Virtual Agents 2019.
ACM,
2019. [BibSonomy][Doi]
@article{obremski2019nonnative,
author = {David Obremski and Jean-Luc Lugrin and Philipp Schaper and Birgit Lugrin},
journal = {ACM International Conference on Intelligent Virtual Agents 2019},
year = {2019},
publisher = {ACM},
doi = {https://doi.org/10.1145/3308532.3329427},
title = {Non-Native Speaker Generation and Perception for Mixed-Cultural Settings}
}
Abstract:
This paper presents an experiment evaluating the effects of virtual agents' language proficiency on whether they are perceived as native speakers or not. Our first results indicate that beyond 10% word order mistakes and 25% infinitive mistakes, virtual agents are perceived as non-native speakers, even though their appearance and non-verbal behaviour were not altered. We believe these thresholds constitute interesting guidelines possibly simplifying the design of non-native speaker simulations.
Sander Münster, Heike Messemer, Peggy Große, Piotr Kuroczyński, Florian Niebling,
Das Wissen in der 3D-Rekonstruktion, In Patrick Sahle (Eds.), DHd 2019 Digital Humanities: multimedial & multimodal. Konferenzabstracts, Frankfurt am Main, pp. 63-65.
2019. [BibSonomy][Doi]
@inbook{RN15,
author = {Sander Münster and Heike Messemer and Peggy Große and Piotr Kuroczyński and Florian Niebling},
year = {2019},
booktitle = {DHd 2019 Digital Humanities: multimedial & multimodal. Konferenzabstracts, Frankfurt am Main},
editor = {Patrick Sahle},
pages = {63-65},
doi = {10.5281/zenodo.2596095},
title = {Das Wissen in der 3D-Rekonstruktion}
}
Abstract:
Leyla Dewitz, Sander Münster, Florian Niebling,
Usability-Testing für Softwarewerkzeuge in den Digital Humanities am Beispiel von Bildrepositorien, In Patrick Sahle (Eds.), DHd 2019 Digital Humanities: multimedial & multimodal. Konferenzabstracts, Frankfurt am Main, pp. 52-55.
2019. [BibSonomy][Doi]
@article{dewitz2019usabilitytesting,
author = {Leyla Dewitz and Sander Münster and Florian Niebling},
journal = {DHd 2019 Digital Humanities: multimedial & multimodal. Konferenzabstracts, Frankfurt am Main},
year = {2019},
editor = {Patrick Sahle},
pages = {52-55},
doi = {10.5281/zenodo.2596095},
title = {Usability-Testing für Softwarewerkzeuge in den Digital Humanities am Beispiel von Bildrepositorien}
}
Abstract:
Sebastian Oberdörfer, Marc Erich Latoschik,
Knowledge Encoding in Game Mechanics: Transfer-Oriented Knowledge Learning in Desktop-3D and VR, In International Journal of Computer Games Technology, Vol. 2019.
2019. [Download][BibSonomy][Doi]
@article{oberdorfer2019knowledge,
author = {Sebastian Oberdörfer and Marc Erich Latoschik},
journal = {International Journal of Computer Games Technology},
url = {https://www.hindawi.com/journals/ijcgt/2019/7626349/},
year = {2019},
volume = {2019},
doi = {10.1155/2019/7626349},
title = {Knowledge Encoding in Game Mechanics: Transfer-Oriented Knowledge Learning in Desktop-3D and VR}
}
Abstract:
Affine Transformations (ATs) are complex and abstract learning content. Encoding the AT knowledge in Game Mechanics (GMs) achieves a repetitive knowledge application and audiovisual demonstration. Playing a serious game providing these GMs leads to motivating and effective knowledge learning. Using immersive Virtual Reality (VR) has the potential to even further increase the serious game's learning outcome and learning quality. This paper compares the effectiveness and efficiency of desktop-3D and VR with respect to the achieved learning outcome. Also, the present study analyzes the effectiveness of an enhanced audiovisual knowledge encoding and the provision of a debriefing system. The results validate the effectiveness of the knowledge encoding in GMs to achieve knowledge learning. The study also indicates that VR is beneficial for the overall learning quality and that an enhanced audiovisual encoding has only a limited effect on the learning outcome.
Jean-Luc Lugrin, Marc Erich Latoschik, Yann Glémarec, Anne-Gween Bosser, Mathieu Chollet, Birgit Lugrin,
Towards Narrative-Driven Atmosphere for Virtual Classroom, In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-6.
2019. [Download][BibSonomy]
@inproceedings{lugrin2019towards,
author = {Jean-Luc Lugrin and Marc Erich Latoschik and Yann Glémarec and Anne-Gween Bosser and Mathieu Chollet and Birgit Lugrin},
url = {http://downloads.hci.informatik.uni-wuerzburg.de/2019-chi-lbw-narrative-driven-atmosphere-preprint.pdf},
year = {2019},
booktitle = {Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems},
pages = {1-6},
title = {Towards Narrative-Driven Atmosphere for Virtual Classroom}
}
Abstract:
Jean-Luc Lugrin, Maximilian Landeck, Marc Erich Latoschik,
Simulated Reference Frame Effects on Steering, Jumping and Sliding, In Proceedings of the 26th IEEE Virtual Reality (VR) conference, pp. 1062-1063.
2019. [Download][BibSonomy]
@inproceedings{lugrin2019simulated,
author = {Jean-Luc Lugrin and Maximilian Landeck and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-simulated-frame-effect-vr-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 26th IEEE Virtual Reality (VR) conference},
pages = {1062-1063},
title = {Simulated Reference Frame Effects on Steering, Jumping and Sliding}
}
Abstract:
Stephan Hertweck, Desirée Weber, Hisham Alwanni, Fabian Unruh, Martin Fischbach, Marc Erich Latoschik, Tonio Ball,
Brain Activity in Virtual Reality: Assessing Signal Quality of High-Resolution EEG While Using Head-Mounted Displays, In Proceedings of the 26th IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 970-971.
IEEE,
2019. [Download][BibSonomy]
@inproceedings{hertweck2019brain,
author = {Stephan Hertweck and Desirée Weber and Hisham Alwanni and Fabian Unruh and Martin Fischbach and Marc Erich Latoschik and Tonio Ball},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-brain-activity-vr-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 26th IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
publisher = {IEEE},
pages = {970-971},
title = {Brain Activity in Virtual Reality: Assessing Signal Quality of High-Resolution EEG While Using Head-Mounted Displays}
}
Abstract:
Biometric measures such as the electroencephalogram (EEG) promise to become viable alternatives to subjective questionnaire ratings for the evaluation of psychophysical effects associated with Virtual Reality (VR) systems, as they provide objective and continuous measurements without breaking the exposure. The extent to which the EEG signal can be disturbed by the presence of VR systems, however, has barely been investigated. This study outlines how to evaluate the compatibility of a given EEG-VR setup using the example of two commercial head-mounted displays (HMDs), the Oculus Rift and the HTC Vive Pro. We use a novel experimental protocol to compare the spectral composition between conditions with and without an HMD present during an eyes-open vs. eyes-closed task. We found general artifacts at the line hum of 50 Hz, and additional HMD refresh rate artifacts (90 Hz) for the Oculus Rift exclusively. Frequency components typically most interesting to non-invasive EEG research and applications (<50 Hz), however, remained largely unaffected. We observed similar topographies of visually-induced modulation of alpha band power for both HMD conditions in all subjects. Hence, the study introduces a necessary validation test for HMDs in combination with EEG and further promotes EEG as a potential biometric measurement method for psychophysical effects in VR systems.
Florian Kern, Carla Winter, Dominik Gall, Ivo Käthner, Paul Pauli, Marc Erich Latoschik,
Immersive Virtual Reality and Gamification Within Procedurally Generated Environments to Increase Motivation During Gait Rehabilitation, In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 500-509.
2019. [Download][BibSonomy][Doi]
@inproceedings{kern2019immersive,
author = {Florian Kern and Carla Winter and Dominik Gall and Ivo Käthner and Paul Pauli and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-homecoming.pdf},
year = {2019},
booktitle = {2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
pages = {500-509},
doi = {10.1109/VR.2019.8797828},
title = {Immersive Virtual Reality and Gamification Within Procedurally Generated Environments to Increase Motivation During Gait Rehabilitation}
}
Abstract:
Virtual Reality (VR) technology offers promising opportunities to improve traditional treadmill-based rehabilitation programs. We present an immersive VR rehabilitation system that includes a head-mounted display and motion sensors. The application is designed to promote the experience of relatedness, autonomy, and competence. The application uses procedural content generation to generate diverse landscapes. We evaluated the effect of the immersive rehabilitation system on motivation and affect. We conducted a repeated measures study with 36 healthy participants to compare the immersive program to a traditional rehabilitation program. Participants reported significantly greater enjoyment, felt more competent, and experienced higher decision freedom and meaningfulness in the immersive VR gait training compared to the traditional training. They experienced significantly lower physical demand, simulator sickness, and state anxiety, and felt less pressured while still perceiving a higher personal performance. We derive three design implications for future applications in gait rehabilitation: Immersive VR provides a promising augmentation for gait rehabilitation. Gamification features provide a design guideline for content creation in gait rehabilitation. Relatedness and autonomy provide critical content features in gait rehabilitation.
Sebastian Oberdörfer, David Heidrich, Marc Erich Latoschik,
Usability of Gamified Knowledge Learning in VR and Desktop-3D, In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1-13. New York, NY, USA:
Association for Computing Machinery,
2019. [Download][BibSonomy][Doi]
@inproceedings{oberdorfertobepublishedusability,
author = {Sebastian Oberdörfer and David Heidrich and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-chi-getit-usability-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
series = {CHI ’19},
pages = {1-13},
doi = {10.1145/3290605.3300405},
title = {Usability of Gamified Knowledge Learning in VR and Desktop-3D}
}
Abstract:
Affine Transformations (ATs) often escape an intuitive approach due to their high complexity. Therefore, we developed GEtiT, which directly encodes ATs in its game mechanics and scales the knowledge's level of abstraction. This results in an intuitive application as well as audiovisual presentation of ATs and hence in knowledge learning. We also developed a specific Virtual Reality (VR) version to explore the effects of immersive VR on the learning outcomes. This paper presents our approach of directly encoding abstract knowledge in game mechanics, the conceptual design of GEtiT, and its technical implementation. Both versions are compared with regard to their usability in a user study. The results show that both GEtiT versions induce a high degree of flow and elicit a good intuitive use. They validate the effectiveness of the design and the resulting knowledge application requirements. Participants favored GEtiT VR, thus indicating a potentially higher learning quality when using VR.
David Heidrich, Sebastian Oberdörfer, Marc Erich Latoschik,
The Effects of Immersion on Harm-Inducing Factors in Virtual Slot Machines, In Proceedings of the 26th IEEE Virtual Reality (VR) conference, pp. 793-801.
IEEE,
2019. [Download][BibSonomy][Doi]
@inproceedings{heidrichtobepublishedeffects,
author = {David Heidrich and Sebastian Oberdörfer and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-gambling-vr-preprint.pdf},
year = {2019},
booktitle = {Proceedings of the 26th IEEE Virtual Reality (VR) conference},
publisher = {IEEE},
pages = {793-801},
doi = {10.1109/VR.2019.8798021},
title = {The Effects of Immersion on Harm-Inducing Factors in Virtual Slot Machines}
}
Abstract:
Slot machines are one of the games most frequently played by pathological gamblers. New technologies, e.g., immersive Virtual Reality (VR), offer more possibilities to exploit erroneous beliefs in the context of gambling. However, the risk potential of VR-based gambling has not been researched yet. A higher immersion might increase harmful aspects, thus making VR realizations more dangerous. Measuring harm-inducing factors reveals the risk potential of virtual gambling. In a user study, we analyze a slot machine realized as a desktop 3D and as an immersive VR version. Both versions are compared with respect to effects on dissociation, urge to gamble, dark flow, and illusion of control. Our study shows significantly higher values of dissociation, dark flow, and urge to gamble in the VR version. Presence significantly correlates with all measured harm-inducing factors. We demonstrate that VR-based gambling has a higher risk potential. This underlines the importance of regulating VR-based gambling.
Negin Hamzeheinejad,
DC VR Simulation as a Motivator in Gait Rehabilitation, In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 1383-1384.
IEEE,
2019. [Download][BibSonomy][Doi]
@inproceedings{hamzeheinejad2019simulation,
author = {Negin Hamzeheinejad},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-dc-vr-gait-preprint.pdf},
year = {2019},
booktitle = {2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)},
publisher = {IEEE},
pages = {1383-1384},
doi = {10.1109/VR.2019.8797872},
title = {DC VR Simulation as a Motivator in Gait Rehabilitation}
}
Abstract:
Gait rehabilitation is a necessary process for patients suffering from post-stroke motor impairments. The patients are required to perform repetitive practices using a robot-assisted gait device. Repeated exercises can become extremely frustrating and the patients lose their motivation over time. In my PhD research, I focus on Virtual Reality (VR) as a medium to improve gait rehabilitation in terms of enjoyment, motivation, efficiency, and effectiveness. The objective is to systematically investigate different factors, such as the presence of a trainer, interactivity, gamification, and storytelling in a two-step process. First, by evaluating the applicability of different factors for the clinical use with healthy subjects. Second, by evaluating the effectiveness of the VR simulation with patients with gait deficits, especially stroke patients in collaboration with a country clinic.
Negin Hamzeheinejad, Daniel Roth, Daniel Götz, Franz Weilbach, Marc Erich Latoschik,
Physiological Effectivity and User Experience of Immersive Gait Rehabilitation, In The First IEEE VR Workshop on Applied VR for Enhanced Healthcare (AVEH), pp. 1421-1429.
IEEE,
2019. [Download][BibSonomy][Doi]
@inproceedings{hamzeheinejad2019physiological,
author = {Negin Hamzeheinejad and Daniel Roth and Daniel Götz and Franz Weilbach and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2019-ieeevr-workshop-vr-gait-preprint.pdf},
year = {2019},
booktitle = {The First IEEE VR Workshop on Applied VR for Enhanced Healthcare (AVEH)},
publisher = {IEEE},
pages = {1421-1429},
doi = {10.1109/VR.2019.8797763},
title = {Physiological Effectivity and User Experience of Immersive Gait Rehabilitation}
}
Abstract:
Gait impairments from neurological injuries require repeated and exhaustive physical exercises for rehabilitation. Prolonged physical training in clinical environments can easily become frustrating and de-motivating for various reasons, which in turn risks decreasing efficiency during the healing process. This paper introduces an immersive VR system for gait rehabilitation which targets user experience and increased motivation while evoking the comparable physiological responses needed for successful training effects. The system provides a virtual environment consisting of open fields, forest, mountains, waterfalls, animals, and a beach for inspiring strolls, and is able to include a virtual trainer as a companion during the walks. We evaluated the ecological validity of the system with healthy subjects before performing the clinical trial. We assessed the system's target qualities in a longitudinal study with 45 healthy participants on three consecutive days in comparison to a baseline non-VR condition. The system was able to evoke similar physiological responses. The workload was increased for the VR condition, but the system also elicited higher enjoyment and motivation, which was the main goal. The latter benefits slightly decreased over time (as did workload) while still remaining higher than in the non-VR condition. The virtual trainer did not prove to be beneficial; the corresponding implications are discussed. Overall, the approach shows promising results, which renders the system a viable alternative for the given use case while motivating interesting directions for future work.
Marc Erich Latoschik, Florian Kern, Jan-Philipp Stauffert, Andrea Bartl, Mario Botsch, Jean-Luc Lugrin,
Not Alone Here?! Scalability and User Experience of Embodied Ambient Crowds in Distributed Social Virtual Reality, In IEEE Transactions on Visualization and Computer Graphics (TVCG), Vol. 25(5), pp. 2134-2144.
2019. [Download][BibSonomy][Doi]
@article{latoschik2019alone,
author = {Marc Erich Latoschik and Florian Kern and Jan-Philipp Stauffert and Andrea Bartl and Mario Botsch and Jean-Luc Lugrin},
journal = {IEEE Transactions on Visualization and Computer Graphics (TVCG)},
number = {5},
url = {https://ieeexplore.ieee.org/document/8643417},
year = {2019},
pages = {2134-2144},
volume = {25},
doi = {10.1109/TVCG.2019.2899250},
title = {Not Alone Here?! Scalability and User Experience of Embodied Ambient Crowds in Distributed Social Virtual Reality}
}
Abstract:
This article investigates performance and user experience in Social Virtual Reality (SVR) targeting distributed, embodied, and immersive face-to-face encounters. We demonstrate the close relationship between scalability, reproduction accuracy, and the resulting performance characteristics, as well as the impact of these characteristics on users co-located with larger groups of embodied virtual others. System scalability provides a variable number of co-located avatars and AI-controlled agents with a variety of different appearances, including realistic-looking virtual humans generated from photogrammetry scans. The article reports on how to meet the requirements of embodied SVR with today's technical off-the-shelf solutions and what to expect regarding features, performance, and potential limitations. Special care has been taken to achieve low latencies and sufficient frame rates necessary for reliable communication of embodied social signals. We propose a hybrid evaluation approach which coherently relates results from technical benchmarks to subjective ratings and which confirms the required performance characteristics for the target scenario of larger distributed groups. A user study reveals positive effects of an increasing number of co-located social companions on the quality of experience of virtual worlds, i.e., on presence, possibility of interaction, and co-presence. It also shows that variety in avatar/agent appearance might increase eeriness but might also stimulate an increased interest of participants in the environment.
Sara Klueber, Erik Wolf, Tobias Grundgeiger, Birgit Brecknell, Ismail Mohamed, Penelope M. Sanderson,
Head-Worn Displays and Spearcons: Supporting Multiple Patient Monitoring, In Applied Ergonomics, Vol. 78, pp. 86-96.
2019. [Download][BibSonomy][Doi]
@article{klueber2019headworn,
author = {Sara Klueber and Erik Wolf and Tobias Grundgeiger and Birgit Brecknell and Ismail Mohamed and Penelope M. Sanderson},
journal = {Applied Ergonomics},
url = {https://www.sciencedirect.com/science/article/pii/S0003687019300158},
year = {2019},
pages = {86-96},
volume = {78},
doi = {10.1016/j.apergo.2019.01.009},
title = {Head-Worn Displays and Spearcons: Supporting Multiple Patient Monitoring}
}
Abstract:
In hospitals, clinicians often need to monitor several patients while performing other tasks. However, visual displays that show patients' vital signs are in fixed locations and auditory alarms intended to alert clinicians may be missed. Information such as spearcons (time-compressed speech earcons) that ‘travels’ with the clinician and is delivered by earpiece and/or head-worn displays (HWDs), might overcome these problems. In this study, non-clinicians monitored five simulated patients in three 10-min scenarios while performing a demanding tracking task. Monitoring accuracy was better for participants using spearcons and a HWD (88.7%) or a HWD alone (86.2%) than for participants using spearcons alone (74.1%). Participants using the spearcons and HWD (37.7%) performed the tracking task no differently from participants using spearcons alone (37.1%) but participants using the HWD alone performed worse overall (33.1%). The combination of both displays may be a suitable solution for monitoring multiple patients.
Penelope M. Sanderson, Birgit Brecknell, SokYee Leong, Sara Klueber, Erik Wolf, Anna Hickling, Tsz-Lok Tang, Emilea Bell, Simon Y. W. Li, Robert G. Loeb,
Monitoring Vital Signs with Time-Compressed Speech, In Journal of Experimental Psychology: Applied, Vol. 25(4), pp. 647–673.
2019. [Download][BibSonomy][Doi]
@article{sanderson2019monitoring,
author = {Penelope M. Sanderson and Birgit Brecknell and SokYee Leong and Sara Klueber and Erik Wolf and Anna Hickling and Tsz-Lok Tang and Emilea Bell and Simon Y. W. Li and Robert G. Loeb},
journal = {Journal of Experimental Psychology: Applied},
number = {4},
url = {https://psycnet.apa.org/record/2019-14255-001?doi=1},
year = {2019},
pages = {647–673},
volume = {25},
doi = {10.1037/xap0000217},
title = {Monitoring Vital Signs with Time-Compressed Speech}
}
Abstract:
Spearcons—time-compressed speech phrases—may be an effective way of communicating vital signs to clinicians without disturbing patients and their families. Four experiments tested the effectiveness of spearcons for conveying oxygen saturation (SpO2) and heart rate (HR) of one or more patients. Experiment 1 demonstrated that spearcons were more effective than earcons (abstract auditory motifs) at conveying clinical ranges. Experiment 2 demonstrated that casual listeners could not learn to decipher the spearcons whereas listeners told the exact vocabulary could. Experiment 3 demonstrated that participants could interpret sequences of sounds representing multiple patients better with spearcons than with pitch-based earcons, especially when tones replaced the spearcons for normal patients. Experiment 4 compared multiple-patient monitoring of two vital signs with either spearcons, a visual display showing SpO2 and HR in the same temporal sequence as the spearcons, or a visual display showing multiple patient levels simultaneously. All displays conveyed which patients were abnormal with high accuracy. Visual displays better conveyed the vital sign levels for each patient, but cannot be used eyes-free. All displays showed accuracy decrements with working memory load. Spearcons may be viable for single and multiple patient monitoring. Further research should test spearcons with more vital signs, during multitasking, and longitudinally.
2018
Marc Erich Latoschik, Sebastian von Mammen,
BSc Games Engineering, In Björn Bartholdy, Linda Breitlauch, André Czauderna, Gundolf S Freyermuth (Eds.), Vol. Games Studieren - was, wie, wo?, pp. 491-502.
Transcript - Bild und Bit.,
2018. [Download][BibSonomy]
@inbook{Latoschik:2018aa,
author = {Marc Erich Latoschik and Sebastian von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/2018-GamesEngineering-Studiengang-Wuerzburg.pdf},
year = {2018},
editor = {Björn Bartholdy and Linda Breitlauch and André Czauderna and Gundolf S Freyermuth},
publisher = {Transcript - Bild und Bit.},
pages = {491-502},
volume = {Games Studieren - was, wie, wo?},
title = {BSc Games Engineering}
}
Abstract:
Alix Fenies, Andreas Knote, David Bernard, Yves Duthen, Valérie Lobjois, Bernard Ducommun, Sebastian Von Mammen, Sylvain Cussat-Blanc,
Exploring multi-cellular tumor spheroids in virtual reality, In 14e Journees Canceropole Grand Sud-Ouest (JGSO 2018), p. 191. La Grande Motte, France.
2018. [Download][BibSonomy]
@inproceedings{Fenies:2018aa,
author = {Alix Fenies and Andreas Knote and David Bernard and Yves Duthen and Valérie Lobjois and Bernard Ducommun and Sebastian Von Mammen and Sylvain Cussat-Blanc},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/fenieshal-02950721.pdf},
year = {2018},
booktitle = {14e Journees Canceropole Grand Sud-Ouest (JGSO 2018)},
address = {La Grande Motte, France},
pages = {191},
title = {Exploring multi-cellular tumor spheroids in virtual reality}
}
Abstract:
Heiko Hamann, Carlo Pinciroli, S. von Mammen,
A Gamification Concept for Teaching Swarm Robotics, In 2018 12th European Workshop on Microelectronics Education (EWME), pp. 83--88.
2018. [Download][BibSonomy]
@inproceedings{Hamann:2018aa,
author = {Heiko Hamann and Carlo Pinciroli and S. von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Hamann2018aa.pdf},
year = {2018},
booktitle = {2018 12th European Workshop on Microelectronics Education (EWME)},
pages = {83--88},
title = {A Gamification Concept for Teaching Swarm Robotics}
}
Abstract:
Sebastian von Mammen, Andreas Knote, Daniel Roth, Marc Erich Latoschik,
Games Engineering. Wissenschaft mit, über und für Interaktive Systeme, In Björn Bartholdy, Linda Breitlauch, André Czauderna, Gundolf S Freyermuth (Eds.), Vol. Games Studieren - was, wie, wo? Staatliche Studienangebote im Bereich digitaler Spiele, pp. 269--318.
Transcript - Bild und Bit,
2018. [Download][BibSonomy]
@inbook{Mammen:2018aa,
author = {Sebastian von Mammen and Andreas Knote and Daniel Roth and Marc Erich Latoschik},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Mammen2018aa.pdf},
year = {2018},
editor = {Björn Bartholdy and Linda Breitlauch and André Czauderna and Gundolf S Freyermuth},
publisher = {Transcript - Bild und Bit},
pages = {269--318},
volume = {Games Studieren - was, wie, wo? Staatliche Studienangebote im Bereich digitaler Spiele},
title = {Games Engineering. Wissenschaft mit, über und für Interaktive Systeme}
}
Abstract:
Daniel Rapp, Jonas Müller, Kristina Bucher, S. von Mammen,
Pathomon: A Social Augmented Reality Serious Game, In 2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games), pp. 1--4.
2018. [Download][BibSonomy]
@inproceedings{Rapp:2018aa,
author = {Daniel Rapp and Jonas Müller and Kristina Bucher and S. von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Rapp2018aa.pdf},
year = {2018},
booktitle = {2018 10th International Conference on Virtual Worlds and Games for Serious Applications (VS-Games)},
pages = {1--4},
title = {Pathomon: A Social Augmented Reality Serious Game}
}
Abstract:
A. Knote, S. von Mammen,
Adaptation and Integration of GPU-Driven Physics for a Biology Research RIS, In 2018 IEEE 11th Workshop on Software Engineering and Architectures for Real-time Interactive Systems (SEARIS), pp. 1-7.
2018. [Download][BibSonomy][Doi]
@inproceedings{Knote:2018aa,
author = {A. Knote and S. von Mammen},
url = {https://downloads.hci.informatik.uni-wuerzburg.de/Knote2018aa.pdf},
year = {2018},
booktitle = {2018 IEEE 11th Workshop on Software Engineering and Architectures for Real-time Interactive Systems (SEARIS)},
pages = {1-7},
doi = {10.1109/SEARIS44442.2018.9180233},
title = {Adaptation and Integration of GPU-Driven Physics for a Biology Research RIS}
}
Abstract:
Carolin Wienrich, Kristina Schindler, Nina Döllinger, Simon Kock, Ole Traupe,
Social Presence and Cooperation in Large-Scale Multi-User Virtual Reality - The Relevance of Social Interdependence for Location-Based Environments., In Kiyoshi Kiyokawa, Frank Steinicke, Bruce H. Thomas, Greg Welch (Eds.), VR, pp. 207-214.
IEEE Computer Society,
2018. [Download][BibSonomy]
@inproceedings{conf/vr/WienrichSDKT18,
author = {Carolin Wienrich and Kristina Schindler and Nina Döllinger and Simon Kock and Ole Traupe},
url = {http://dblp.uni-trier.de/db/conf/vr/vr2018.html#WienrichSDKT18},
year = {2018},
booktitle = {VR},
editor = {Kiyoshi Kiyokawa and Frank Steinicke and Bruce H. Thomas and Greg Welch},
publisher = {IEEE Computer Society},
pages = {207-214},
title = {Social Presence and Cooperation in Large-Scale Multi-User Virtual Reality - The Relevance of Social Interdependence for Location-Based Environments.}
}
Abstract:
Carolin Wienrich, Nina Döllinger, Simon Kock, Kristina Schindler, Ole Traupe,
Assessing User Experience in Virtual Reality - A Comparison of Different Measurements., In Aaron Marcus, Wentao Wang (Eds.), HCI (18), Vol. 10918, pp. 573-589.
Springer,
2018. [Download][BibSonomy]
@inproceedings{conf/hci/WienrichDKST18,
author = {Carolin Wienrich and Nina Döllinger and Simon Kock and Kristina Schindler and Ole Traupe},
url = {http://dblp.uni-trier.de/db/conf/hci/hci2018-18.html#WienrichDKST18},
year = {2018},
booktitle = {HCI (18)},
editor = {Aaron Marcus and Wentao Wang},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
pages = {573-589},
volume = {10918},
title = {Assessing User Experience in Virtual Reality - A Comparison of Different Measurements.}
}
Abstract:
Samuel Truman, Nicolas Rapp, Daniel Roth, Sebastian von Mammen,
Rethinking real-time strategy games for virtual reality, In Proceedings of the 13th International Conference on the Foundations of Digital Games, pp. 31:1-31:6.
ACM,
2018.