An overview of all sessions of this conference.

MCI-SE05: Virtual and Augmented Reality
Tuesday, September 10, 2019:
11:00 - 12:30

Session chair: Jens Gerken
Location: Hauptgebäude, Hörsaal A (fixed seating, capacity 622)

11:00 - 11:18

Understanding Visual-Haptic Integration of Avatar Hands using a Fitts' Law Task in Virtual Reality

Valentin Schwind1, Jan Leusmann2, Niels Henze1

1University of Regensburg; 2University of Stuttgart

Virtual reality (VR) is becoming increasingly ubiquitous for interacting with digital content and often requires rendering avatars, as they enable improved spatial localization and high levels of presence. Previous work shows that visual-haptic integration of virtual avatars depends on body ownership and spatial localization in VR. However, there are differing conclusions about how and which stimuli of one's own appearance are integrated into the body schema. In this work, we investigate whether systematic changes of the model and texture of a user's avatar affect input performance as measured in a two-dimensional Fitts' law target selection task. Interestingly, we found that throughput remained constant between our conditions and that neither the model nor the texture of the avatar significantly affected the average duration to complete the task, even when participants felt different levels of presence and body ownership. In line with previous work, we found that the illusion of virtual limb ownership does not necessarily correlate with the degree to which vision and haptics are integrated into the body schema. Our work supports findings indicating that body ownership and spatial localization are potentially independent mechanisms in visual-haptic integration.
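The throughput measure reported in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the Shannon formulation of the index of difficulty is assumed, and the parameter values are made up:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Throughput in bits/s: index of difficulty divided by movement time."""
    return index_of_difficulty(distance, width) / movement_time

# Example: a reach of 15 units to a target 1 unit wide, completed in 0.8 s
id_bits = index_of_difficulty(15, 1)   # log2(16) = 4 bits
tp = throughput(15, 1, 0.8)            # 4 / 0.8 = 5 bits/s
```

A constant throughput across avatar conditions, as the study reports, means that changes in movement time were explained by task difficulty rather than by the avatar's appearance.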

11:18 - 11:36

A VR Study on Freehand vs. Widgets for 3D Manipulation Tasks

Robin Schlünsen, Oscar Ariza, Frank Steinicke

Universität Hamburg, Germany

In this article, we present the results of a study based on a hand-based 3D UI toolkit we developed, which can easily be adapted and reused in VR applications with hand tracking. We evaluated hand-based and widget-based manipulation methods as well as different multimodal cues for 3D manipulation and system-control tasks. Our study compared the manipulation methods in terms of performance, accuracy, and user acceptance. We found that free-hand manipulation is faster and preferred by participants. We also analyzed the influence of the multimodal cues, gaining valuable insights into how to integrate these cues to improve user experience and accuracy in a 3D manipulation task.

11:36 - 11:54

Turn Your Head Half Round: VR Rotation Techniques for Situations With Physically Limited Turning Angle

Eike Langbehn, Joel Wittig, Nikolaos Katzakis, Frank Steinicke

Universität Hamburg, Germany

Rotational tracking enables Virtual Reality (VR) users to turn their head freely through 360° while looking around the environment. However, there are situations in which physical head rotation is restricted to a limited range, e.g., when the user is seated on a bus or plane while wearing a VR headset. For these situations, rotation gains were introduced to decouple virtual and real rotations. We present two further techniques that allow 360° virtual turning in a physically limited space: Dynamic Rotation Gains and Scrolling. We conducted an experiment comparing these three rotation techniques and a baseline condition with respect to VR sickness, spatial orientation, and usability. We found a significant underestimation of rotation angles for the dynamic rotation gains, which may indicate that this technique is more subtle than the others. Furthermore, usability was higher and VR sickness lower for the dynamic rotation gains, while scrolling caused the most VR sickness. Finally, we conducted a confirmatory study to demonstrate the applicability of dynamic rotation gains in an actual VR experience and received promising feedback.
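The core idea of rotation gains is to map real head yaw to an amplified virtual yaw. The sketch below shows a static gain and one possible dynamic variant whose gain grows with the deviation from straight ahead; the specific quadratic easing is an assumption for illustration, not the authors' formula:

```python
def static_gain_yaw(real_yaw_deg, gain=2.0):
    """Static rotation gain: virtual yaw is a constant multiple of real yaw.
    A gain of 2.0 lets a user turn 360° virtually with a 180° physical turn."""
    return gain * real_yaw_deg

def dynamic_gain_yaw(real_yaw_deg, max_real_deg=90.0, max_virtual_deg=180.0):
    """Dynamic rotation gain (illustrative): the gain increases with the
    physical deviation from straight ahead, so small head movements near the
    centre feel natural while the full physical range still covers the
    larger virtual range. A quadratic easing is assumed here."""
    t = max(-1.0, min(1.0, real_yaw_deg / max_real_deg))  # normalise to [-1, 1]
    return max_virtual_deg * t * abs(t)  # gain ~1 near 0, ~2 at the limits

# At 45° of real rotation the virtual view has turned 45° (gain 1.0);
# at the 90° physical limit it has turned the full 180° (gain 2.0).
```

Such a mapping is one way to make the manipulation "subtle" near the neutral head pose, consistent with the underestimation of rotation angles the study reports.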

11:54 - 12:12

Of Portals and Orbs: An Evaluation of Scene Transition Techniques for Virtual Reality

Malte Husung, Eike Langbehn

Universität Hamburg, Germany

Many virtual reality (VR) experiences require switching between different environments or scenes. To achieve a plausible and comprehensible transition between those scenes, transition techniques can be used. These techniques visualize the change and should support continuity in storytelling and a high degree of presence while respecting the interactive nature of VR.

We implemented six different transition techniques. Three of them are inspired by techniques that have been used in film for decades: Cut, Fade, and Dissolve. Furthermore, we added three techniques that leverage the peculiarities of VR: Portal, Orb, and Transformation.

The techniques were compared in a user study with respect to presence, continuity, usability, and preference. The results showed significant differences for all variables. In general, Orb and Portal received the highest ratings.

12:12 - 12:30

Remote Guidance for Machine Maintenance Supported by Physical LEDs and Virtual Reality

Philipp Ladwig, Bastian Dewitz, Hendrik Preu, Mitja Säger

University of Applied Sciences Duesseldorf, Germany

Machines used in industry often require dedicated technicians to fix them in case of defects. This involves travel expenses and a certain amount of time, both of which may be significantly reduced by installing the small extensions on a machine that we describe in this paper. The goal is that an authorized local worker, guided by a remote expert, can fix the problem on the real machine themselves. Our approach is to equip a machine with multiple inexpensive LEDs (light-emitting diodes) and a simple internet-connected microcontroller so that LEDs close to machine parts of interest can be lit up remotely. A remote expert, using a virtual reality application, can trigger the blinking of an LED from a virtual 3D model (digital twin) of the machine to draw the local worker's attention to certain areas. We conducted an initial user study of this concept with 36 participants and found significantly shorter completion times and fewer errors for our approach compared to voice guidance alone with no visual LED feedback.
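A setup like this needs only a tiny command protocol between the expert's VR application and the microcontroller. The paper does not specify its wire format, so the JSON message shape, field names, and defaults below are purely hypothetical:

```python
import json

def encode_led_command(led_id, blink_hz=2.0, duration_s=10.0):
    """Serialise a 'blink this LED' command (hypothetical format) for
    sending from the VR application to the machine's microcontroller."""
    return json.dumps({"led": led_id, "hz": blink_hz, "secs": duration_s})

def decode_led_command(raw):
    """Parse and validate a command on the receiving side, returning
    (led_id, blink_hz, duration_s) or raising on malformed input."""
    msg = json.loads(raw)
    if not isinstance(msg.get("led"), int) or msg["led"] < 0:
        raise ValueError("invalid LED id")
    return msg["led"], float(msg["hz"]), float(msg["secs"])
```

Keeping the machine-side logic this thin is what makes the extension inexpensive: the microcontroller only decodes commands and toggles output pins, while all spatial reasoning stays in the expert's digital-twin view.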