Conference Program

An overview of all sessions of this conference.
Please select a location or a date to display only the relevant sessions. Select a session to go to its detail view.

Session Overview
Session
MCI-SE08: Mobile and Wearable Interaction
Time:
Wednesday, 11.09.2019:
9:00 - 10:30

Session Chair: Raphael Wimmer
Location: Hauptgebäude, Hörsaal A (fixed seating), capacity 622

Presentations
9:00 - 9:18

Clear All: A Large-Scale Observational Study on Mobile Notification Drawers

Dominik Weber1, Alexandra Voit1, Niels Henze2

1Universität Stuttgart, Germany; 2Universität Regensburg, Germany

Notifications are an essential feature of smartphones. The notification drawer is the central place to view and attend to notifications. Although a body of work has already investigated how many and which types of notifications users receive and value, an in-depth analysis of notification drawers has been missing. In this paper, we report the results of a large-scale observational in-the-wild study on mobile notification drawers. We periodically sampled the notification drawer content of 3,953 Android devices, resulting in over 8.8 million notification drawer snapshots. Our findings show that users have, on average, 3.4 notifications pending in the notification drawer. We saw notifications accumulate overnight and get attended to in the morning. We discuss the prominent positioning of messaging notifications compared to other notification types. Finally, inspired by prior work on the management of email inboxes, we propose the three user types "Frequent Cleaners", "Notification Regulators", and "Notification Hoarders" and discuss implications for future notification management systems.
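As a minimal sketch of this kind of analysis, the following Python snippet aggregates periodic drawer snapshots into the headline statistic and a toy version of the three user types. The snapshot format and the classification thresholds are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: aggregating periodic notification-drawer snapshots.
# Snapshot fields and user-type thresholds are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Snapshot:
    device_id: str
    hour: int      # local hour of day, 0-23
    pending: int   # notifications in the drawer at sample time

def average_pending(snapshots: list[Snapshot]) -> float:
    """Mean number of pending notifications across all snapshots."""
    return mean(s.pending for s in snapshots)

def classify_user(device_snapshots: list[Snapshot]) -> str:
    """Toy user-type heuristic based on how full the drawer typically is."""
    avg = mean(s.pending for s in device_snapshots)
    if avg < 1.0:
        return "Frequent Cleaner"        # drawer is almost always empty
    if avg < 5.0:
        return "Notification Regulator"  # kept at a manageable level
    return "Notification Hoarder"        # notifications accumulate

snaps = [Snapshot("a", 9, 0), Snapshot("a", 12, 1), Snapshot("b", 9, 7)]
print(average_pending(snaps))                                    # ~2.67
print(classify_user([s for s in snaps if s.device_id == "a"]))   # Frequent Cleaner
```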



9:18 - 9:36

Smile to Me: Investigating Emotions and their Representation in Text-based Messaging in the Wild

Romina Poguntke1, Tamara Mantz1, Mariam Hassib2, Albrecht Schmidt3, Stefan Schneegaß4

1Universität Stuttgart, Germany; 2Bundeswehr Universität München, Germany; 3Ludwig-Maximilians-Universität München, Germany; 4Universität Duisburg-Essen, Germany

Emotions are part of human communication, shaping facial expressions and representing feelings. To convey emotions, text-based messaging applications have integrated emojis. While visualizing emotions in text messages has been investigated in previous work, we studied the effects of emotion sharing by augmenting the WhatsApp Web user interface, a text messenger people already use on a daily basis. For this, we designed and developed four different visualizations to represent emotions detected through facial expression recognition of chat partners using a webcam. To investigate emotion representation and its effects, we conducted a four-week longitudinal study with 28 participants, who were inquired via 48 semi-structured interviews and 64 questionnaires. Our findings revealed that users want to maintain control over their emotions, particularly regarding sharing, and that they prefer to view positive emotions, avoiding unpleasant social situations. Based on these insights, we phrased four design recommendations to stimulate novel approaches for augmenting chats.
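The control-over-sharing finding can be pictured as a small gating function: only emotions the user has opted to share, and optionally only positive ones, reach the chat partner. The emotion labels and the "positive only" policy below are illustrative assumptions inspired by the abstract, not the study's implementation.

```python
# Sketch: gating which detected emotions are shared in a chat UI.
# Labels, detector output, and policy are hypothetical illustrations.
POSITIVE = {"happy", "surprised"}

def emotion_to_share(detected: str, sharing_enabled: bool,
                     positive_only: bool = True) -> str | None:
    """Return the emotion label to show to the chat partner, or None."""
    if not sharing_enabled:  # users want control over sharing
        return None
    if positive_only and detected not in POSITIVE:
        return None          # avoid unpleasant social situations
    return detected

print(emotion_to_share("happy", sharing_enabled=True))  # happy
print(emotion_to_share("angry", sharing_enabled=True))  # None
```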



9:36 - 9:54

KnuckleTouch: Enabling Knuckle Gestures on Capacitive Touchscreens using Deep Learning

Robin Schweigert1, Jan Leusmann1, Simon Hagenmayer1, Maximilian Weiß1, Huy Viet Le1, Sven Mayer1,2, Andreas Bulling1

1Universität Stuttgart, Germany; 2Carnegie Mellon University, USA

While mobile devices have become essential for social communication and have paved the way for work on the go, their interactive capabilities are still limited to simple touch input. A promising enhancement for touch interaction is knuckle input, but recognizing knuckle gestures robustly and accurately remains challenging. We present a method to differentiate between 17 finger and knuckle gestures based on a long short-term memory (LSTM) machine learning model. Furthermore, we introduce an open-source approach that is ready to deploy on commodity touch-based devices. The model was trained on a new dataset that we collected in a mobile interaction study with 18 participants. We show that our method can achieve an accuracy of 86.8% on recognizing one of the 17 gestures and an accuracy of 94.6% on differentiating between finger and knuckle. In our evaluation study, we validated our models and found that the LSTM gesture recognizer achieved an accuracy of 88.6%. We show that KnuckleTouch can be used to improve input expressiveness and to provide shortcuts to frequently used functions.
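As a rough illustration of the approach, the following PyTorch sketch classifies sequences of flattened capacitive frames with an LSTM. The frame dimensions, layer sizes, and training details are assumptions made for illustration; the authors' open-source implementation may differ.

```python
# Sketch: an LSTM classifier over sequences of capacitive frames, in the
# spirit of KnuckleTouch. Dimensions and architecture are assumptions.
import torch
import torch.nn as nn

class GestureLSTM(nn.Module):
    def __init__(self, frame_dim: int = 15 * 27, hidden: int = 128,
                 n_gestures: int = 17):
        super().__init__()
        self.lstm = nn.LSTM(frame_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, frame_dim), one flattened capacitive frame per step
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # logits over the 17 gesture classes

model = GestureLSTM()
frames = torch.randn(8, 50, 15 * 27)  # 8 sequences of 50 frames each
logits = model(frames)
print(logits.shape)                   # torch.Size([8, 17])
```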



9:54 - 10:12

A Qualitative Comparison Between Augmented and Virtual Reality Collaboration with Handheld Devices

Jens Müller, Johannes Zagermann, Jonathan Wieland, Ulrike Pfeil, Harald Reiterer

Universität Konstanz, Germany

Handheld Augmented Reality (AR) displays offer a see-through option to create the illusion of virtual objects being integrated into the viewer's physical environment. Some AR display technologies also allow for the deactivation of the see-through option, turning AR tablets into Virtual Reality (VR) devices that integrate the virtual objects into an exclusively virtual environment. Both display configurations are typically available on handheld devices, raising the question of their influence on users' experience during collaborative activities. In two experiments, we studied how the different display configurations influence user experience, workload, and team performance of co-located and distributed collaborators during a spatial referencing task. A mixed-methods approach revealed that participants' opinions were polarized towards the two display configurations, regardless of the spatial distribution of collaboration. Based on our findings, we identify critical aspects to be addressed in future research to better understand and support co-located and distributed collaboration using AR and VR displays.
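The see-through switch described above can be thought of as a render-loop configuration that swaps the background source. The sketch below is purely illustrative; all names are hypothetical and not taken from the study.

```python
# Sketch: the AR/VR display switch as a background-source configuration.
# Names are hypothetical; the study used handheld tablets, not this code.
from enum import Enum

class DisplayMode(Enum):
    AR = "ar"  # see-through: camera feed behind the virtual objects
    VR = "vr"  # see-through deactivated: fully virtual surroundings

def compose_frame(mode: DisplayMode, virtual_objects: list[str]) -> list[str]:
    """Return the draw order for one frame: background first, then objects."""
    background = "camera_frame" if mode is DisplayMode.AR else "virtual_environment"
    return [background, *virtual_objects]

print(compose_frame(DisplayMode.AR, ["cube"]))  # ['camera_frame', 'cube']
print(compose_frame(DisplayMode.VR, ["cube"]))  # ['virtual_environment', 'cube']
```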



10:12 - 10:30

Tight Times: Semantics and Distractibility of Pneumatic Compression Feedback for Wearable Devices

Diana Löffler1, Robert Tscharn2, Philipp Schaper3, Melissa Hollenbach3, Viola Mocke3

1Universität Siegen; 2Fünfpunktnull GmbH; 3Universität Würzburg

Notifications on wrist-worn devices can be delivered visually, auditorily, or haptically. Haptic notifications are hands- and eyes-free and at the same time discreet. As an alternative to vibrotactile notifications, we explore the use of compression notifications for a variety of semantic contexts. We present a prototype to deliver squeeze cues and report the results of two empirical studies focusing on the context-dependent interpretation and the distractibility of squeeze notifications. In the first study, 20 participants rated the desirability and intuitive understanding of squeeze-based notifications in a variety of contexts. In the second study, 39 participants completed a set of cognitive tasks interrupted by squeeze distractors. Our observations suggest that simple squeeze signals can convey a range of context-dependent information that requires little learning and does not distract users from their main activity. These findings help to further investigate the use of compression notifications as an attention-preserving communication channel.
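A squeeze cue of this kind could, for example, be parameterized by intensity and duration and sent to a pneumatic actuator over a serial link. The byte protocol, port name, and cue parameters in this sketch are hypothetical; the abstract does not specify the prototype's interface.

```python
# Sketch: issuing a squeeze cue to a pneumatic wristband over serial.
# The two-byte command protocol and port name are hypothetical.
import time

import serial  # pyserial

def squeeze(port: serial.Serial, intensity: int, duration_s: float) -> None:
    """Inflate to `intensity` (0-255), hold for `duration_s`, then release."""
    port.write(bytes([0x01, intensity]))  # hypothetical "inflate" command
    time.sleep(duration_s)
    port.write(bytes([0x00, 0]))          # hypothetical "release" command

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as wristband:
    squeeze(wristband, intensity=180, duration_s=0.8)  # one short squeeze
```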