An overview of all sessions of this conference.
Please select a location or date to display only the relevant sessions. Select a session to go to the detail view.

MCI-WS13: Interacting with Robots and Virtual Agents? Robotic Systems in Situated Action and Social Encounters
Monday, 09.09.2019:
9:00 - 17:30

Session chair: Karola Pitsch
Location: Ost Seminarraum 222
Ost, Seminarraum 222 (movable seating), capacity 45

Session summary

Research in informatics and the engineering sciences strives to endow technical systems – like (humanoid)
robots, embodied conversational agents, voice interfaces etc. – with abilities that should allow the systems to
“interact with people in a natural, interpersonal manner” (Breazeal et al. 2016: 1935). While the evaluation of
such technologies has a strong tradition in the fields of psychology and cognitive sciences investigating the
robot’s/agent’s usability and the users’ perception and attitudes using questionnaires and quantitative measures,
it remains unclear how these results relate to the concrete interactional conduct of the robot/agent, how
users spontaneously attempt to deal with such technologies, which resources they mobilize to coordinate their
actions with those of the robot/agent, and how the artefact and its agency are constructed. This workshop
addresses these open questions by suggesting an interactional and praxeological approach based on the
micro-analysis of video-taped recordings of encounters between humans and robots and a research
methodology based on Ethnography and Conversation Analysis. It brings together researchers from the
humanities and social sciences who investigate the ways in which robotic systems feature in situated action and
social encounters ‘in the wild’.

Laughing at the robot: Incongruent robot actions as laughables

Brian Due

University of Copenhagen, Denmark

Laughter is a common occurrence when people interact with social robots. Among the many reasons for the production of laughter, one phenomenon is when the robot responds inadequately and/or in a contextually inappropriate manner to the ongoing interaction. This paper is grounded in studies from a semi-experimental setting in which course participants naturally interact with the humanoid robot Pepper in a Danish context. Building upon video recordings and ethnomethodological conversation analysis, the paper explores situations where the robot produces an action that somehow diverges from the expected trajectory of social actions and consequently establishes an incongruency. This research contributes to our understanding of the finetuned nature of human sociality and hence to the requirements for Human-Robot-Interaction.

Why (pre)closing matters. The case of human-robot interaction

Nicolas Rollet1,2,3, Christian Licoppe1,2,3

1Télécom Paris, France; 2Institut polytechnique de Paris; 3I3 CNRS

Using a conversation analytic (CA) approach to study social interactions with artificial agents, we have collected "face-to-face" interactions between humans and the robot Pepper. As part of the topic of (dis)engagement, our attention has focused on the last seconds of exchanges, namely the way humans manage to leave or close the interaction. The data reveal how much sequential issues, accountable actions, and ritual considerations matter in many cases.

Intuitive Interfaces? Interface Design and its Impact on Human-Robot Interaction

Florian Muhle, Indra Bock

Universität Bielefeld, Germany

One goal of developing humanoid robots and virtual agents is to allow for intuitive and natural interaction with technical systems. However, existing robotic systems do not yet live up to this promise. Based on an empirical approach that combines the analysis of the interface design of three different robot/agent systems with the micro-analysis of empirical encounters between humans and the respective systems, we show how the systems provide contradictory 'affordances', which make it systematically difficult to start and continue satisfying interactions with them.

Referential Practices for a Museum Guide Robot. Human-Robot-Interaction as a Methodological Tool to Investigate Multimodal Interaction

Karola Pitsch

Universität Duisburg-Essen, Germany

An autonomous robot system was equipped with basic means to monitor the users’ success/failure in following a robot’s verbal-gestural deictic reference to an object and – in case of problems – to provide additional help, i.e. to suggest a ‘repair’ action. A real-world field trial with the robot acting as museum guide constitutes the basis for analysis of the users’ reactions. This example is used to explore HRI as a tool to investigate multimodal interaction.

"A Stubborn Child" - How Robot Sounds are Oriented to in Everyday Situated Interaction at Home

Hannah Pelikan

Linköping University, Sweden

Humans make sense of robot actions in the situated context that these actions occur in. This paper takes a conversation analytic approach in studying how the social robot Cozmo is received in a family home, focusing on the non-lexical sounds that the robot uses to communicate. Preliminary findings suggest that participants treat the robot similar to a young child or pet and orient to the robot’s sounds in the local context of the interaction.

When an emotional robot meets real customers. Exploring HRI in a customer relationship setting

Julia Velkovska

Orange Labs, France

Spoon, a robot described by its designers as "social", "emotional", "empathic" and also "sympathic", was deployed for a three-month period last autumn in a telephone and IT shop in the center of Paris, with the mission "to help" sales advisors receive customers and answer their first questions (such as orientation in this large two-floor shop, or how to meet an advisor). Building on the video-ethnographic study I conducted on this occasion, the paper explores the interactions between the robot and the customers as well as its inscription in the spatial configuration and work activities of this commercial space.

Interacting with Wheelchair Mounted Navigator Robot

Akiko Yamazaki1, Keiichi Yamazaki2, Yusuke Arano2, Yosuke Saito2, Emi Iiyama2, Hisato Fukuda2, Yoshinori Kobayashi2, Yoshinori Kuno2

1Tokyo University of Technology, Japan; 2Saitama University

Currently, robotic researchers focus on developing robot systems that are explicitly designed to operate cooperatively with people in public and to provide resources for projection in public places. In our socio-technological project, engineers developed a robotic wheelchair with an attached robot that provides embodied projective signals to humans, and designed two settings for the robot's behavior. In one, the robot turns its face towards the human (Face-to-Face model); in the other, the robot turns its face and body in order to index where to go (Body Torque model). Through sociological analysis of these two settings, we reveal how the robot's embodied actions serve as a resource for projection, consider what kinds of projection are possible, and show how such projections support the coordination of co-operative actions between multiple people in public places.

Doing Scheduling? The Construction of Agency and Memory while Programming a Reminder Robot with a Person with Severe Brain Injury

Antonia Krummheuer, Matthias Rehm, Kasper Rodil

Aalborg University, Denmark

The paper argues that the field of human-robot interaction needs a distributed and socially situated understanding of reminding and scheduling practices to meet the needs of people with cognitive disabilities in the design of reminder robots. These results are based on an embodied interaction analysis of video-recorded interactions from a co-creation process in which the participants test a reminder-robot prototype that was designed for and with people with acquired brain injury.

Learning how to talk: Co-producing action with and around voice agents

Stuart Reeves1, Joel E Fischer1, Martin Porcheron1, Rein Sikveland2

1University of Nottingham, United Kingdom; 2Loughborough University, United Kingdom

The domestication of voice interfaces, made accessible in consumer devices such as the Apple HomePod, Google Home or the Amazon Echo, has led to everyday talk becoming intertwined with—as well as acting as—device input. Whether intending to interact with voice interfaces or not, conversationalists must learn ‘how to talk’ to and around them as a matter of this domestication work. Taking an ethnomethodological conversation analysis approach, this paper interrogates some of the ways in which conversationalists deploy a variety of methods so as to manage and design input in line with the strictures of voice interface capabilities and collaboratively accomplish—co-produce—actions with and around such devices.

Lenny the bot as a resource for sequential analysis: exploring the treatment of Next Turn Repair Initiation in the beginnings of unsolicited calls

Marc Relieu1, Merve Sahin2, Aurelien Francillon3

1Telecom Paris, France; 2SAP Security Research; 3Eurecom

Based on conversation analysis, this study examines a corpus of naturally produced telemarketing phone calls with a chatbot called Lenny. Initially designed to trick the makers of unsolicited calls, Lenny has a methodological interest for Conversation Analysis and permits a fine-grained understanding of bot/human professional calls. Because the design of its "turns" never changes, Lenny facilitates comparisons between sequential phenomena. In this paper, we focus on repair sequences initiated with a specific "trouble with hearing" Next Turn Repair Initiator during beginnings and pre-beginnings. We show how the caller preserves the progressivity of the call while trying to solve the repair issue.