psyrobophi

From Human-Human Joint Action

to Human-Robot Joint Action

and vice versa!

April 4-5, 2016 - Toulouse, France

Program

April 4th

8:45 Welcome

9:00 Lost In Translation

Tamara Lorenz - University of Cincinnati

In our visions of the future, robots will enter human surroundings even more than they already do. Although humans are experts at adapting to their environment, safety considerations require that interactions with robots remain intuitive even in unattended moments. Since humans manage to interact sufficiently well with other humans, research on human joint action has been identified as one way to address the problem of designing safe and intuitive human-robot joint action. Although this is not a new idea, the question of how to actually transfer knowledge from HHI to HRI remains the elephant in the room. Experts in all the disciplines involved do their best to devote their knowledge to the development of user-friendly robots – and yet the exchange still seems difficult. It is already hard to debate across traditionally more closely related areas (e.g. philosophers talking to psychologists, electrical engineers talking to computer scientists) – the challenge here is to span an even wider arc. And we are lost in translation.
In my talk I will outline why it is so difficult to speak “engineering” and “psychology” at the same time, and why I think that formal descriptions (for example psychophysical or dynamic models) are the key to creating common ground. I will also share some pitfalls as well as surprising encounters experienced over the past years while making my way from engineering to psychology. I will conclude with ideas and concepts I encountered along the way that deserve more attention from all the disciplines involved.

pdf of the presentation

 

9:35 The jointness in infants’ and young children’s joint action and joint attention

Malinda Carpenter - School of Psychology and Neuroscience, University of St Andrews, St Andrews, Scotland, UK, and Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany

Our research group has made some strong claims about infants’ and young children’s ability to participate in activities involving shared intentionality (e.g., Tomasello et al., 2005). These claims are sometimes met with skepticism: how could 1-year-old infants possibly engage in joint attention and joint action that are truly joint, when, according to philosophers, so much complex cognition is involved in doing this? In this talk I will recommend the adoption of a strict, high-level definition of the jointness inherent in joint attention and joint action, but at the same time provide empirical evidence from our lab that infants and young children do indeed engage in true joint attention and joint action. I will briefly discuss a possible simple solution to the philosophical problem of how 1-year-old infants might be able to jointly attend and act with others, and mention some of the consequences of joint attention and joint action for things like action understanding, obligations, and preferences. I will conclude that jointness is indeed complex, yet simple enough for infants. It will be interesting to discuss whether it is within the capability of robots too.

pdf of the presentation

 

10:10 Coffee break

10:40 Planning, Coordination, and Control in Joint Action

Natalie Sebanz and Guenther Knoblich - Central European University - Budapest, Hungary

Performing joint actions such as carrying a heavy box together or playing a piano duet requires planning, coordinating, and controlling one’s actions with others in mind. Our talk will provide an overview of research addressing these three components of joint action. While preparing for joint action, people set up plans that enable flexible integration of contributions to joint action outcomes. While performing joint actions, people flexibly choose among different coordination strategies, ranging from applying predictive coordination models to general ‘coordination heuristics’ such as speeding up in order to make oneself predictable for a partner. Finally, participating in joint action modulates people’s sense of control (agency) and makes them systematically over- or underestimate their control over perceived action outcomes. A better understanding of the basic processes supporting joint action in humans may help to build robots that are useful and pleasurable to interact with.

pdf of the presentation part 1

pdf of the presentation part 2

 

11:30 Panel 1

12:20 Lunch

13:20 Using sensorimotor communication to enhance on-line social interactions: data and modeling

Cordula Vesper - Central European University - Budapest - Hungary

Giovanni Pezzulo - Institute of Cognitive Sciences and Technologies - Roma, Italy

During on-line social interactions, humans often use non-verbal, sensorimotor forms of communication to send coordination signals. For example, when two actors are transporting a heavy object like a table together, one of the actors can push it in a certain direction in order to signal to his co-actor where and when he intends to place the object. Another example is a volleyball player who exaggerates her movements to help her teammates discriminate between (say) a pass to the right or to the left - or to feint an adversary. In this talk, we will review recent evidence showing that exchanging coordination signals helps co-actors make their behavior more predictable or their intentions easier to read - thus enhancing on-line social interactions. Furthermore, we will discuss the costs and benefits of sensorimotor communication from a theoretical and computational perspective, highlighting the conditions under which sensorimotor communication arises and how it could link to more symbolic communication. Finally, we will propose that forms of sensorimotor communication should be included in realistic human-robot interactions.

pdf of the presentation part 1

pdf of the presentation part 2

 

14:30 Seemingly Automatic Adjustments in Human-Robot Joint Action

Kerstin Fischer - University of Southern Denmark - Sonderborg, Denmark

In this presentation, I investigate the relationship between seemingly automatic processes, such as interactive alignment, and cognitive processes, such as adaptation based on partner models. While in (verbal) HRI alignment can indeed be observed on all linguistic levels, it is an open question to what degree this alignment is really automatic, what it is contingent on and, in particular, whether it indeed precedes partner modeling. I analyze verbal human-robot interactions in which the robot's verbal behavior is scripted, so that we can trace which linguistic features of the robot's utterances users align with and to what degree this is due to automatic alignment. My results show clearly that the partner model determines both the extent to which users align and the type of features they align with.

pdf of the presentation

 

15:05 Panel 2

15:55 Coffee break

16:25 Making Meaning With Narrative as Joint Action

Peter Ford Dominey - Robot Cognition Laboratory, INSERM - Lyon, France

Over the last ten years we have been using grammatical constructions and shared plans to allow for adaptive human-robot cooperation.  The meaning of what was happening in the cooperative interaction was clear to the human but less so for the robot.  We are now beginning to address how experience acquired in an autobiographical memory, and narrative structure from interaction with the human, can allow the robot to create meaning related to goals and intentions.

17:00 LAAS Visit

18:30 Cocktail 

21:00

 

 

April 5th

 

9:00 Why Team Reasoning Requires Group Intentions

Raul Hakli - University of Helsinki - Helsinki, Finland

Joint action between humans and robots requires coordination of action to reach common goals. People seem to be quite good at coordinating their actions, but it is not clear how to implement similar coordination in robots. One common method is to use game-theoretic approaches, but these have a problem with Hi-Lo type situations. Team reasoning has been offered as an alternative model of decision-making that can deal with such situations. The question is how to model, within the standard BDI framework of rational agency, the team reasoning that leads to action coordination. Can team reasoning be modelled with individual intentions only, perhaps with Bratman's "intentions that", or does it require strongly collective intentions, such as Tuomela's group intentions? I will argue that team reasoning does conceptually require group intentions. This will be seen if we analyse the practical reasoning of agents more carefully than is usually done.
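As background, a minimal Hi-Lo coordination game (with arbitrary illustrative payoffs, not taken from the talk) can be written as

$$\begin{array}{c|cc} & \text{Hi} & \text{Lo} \\ \hline \text{Hi} & (2,2) & (0,0) \\ \text{Lo} & (0,0) & (1,1) \end{array}$$

Both (Hi, Hi) and (Lo, Lo) are Nash equilibria, so individual best-response reasoning alone cannot single out the obviously better outcome (Hi, Hi); team reasoning, which asks "what should we do?", selects it directly.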

pdf of the presentation

 

9:35 Forms and levels of sharing in joint action

Elisabeth Pacherie - ENS - Paris, France

Successful joint action depends on the efficient coordination of participating agents' goals, plans, and actions. Motivational, instrumental, or common-ground uncertainty can hinder coordination. I will argue that shared intentions and joint commitments work as uncertainty-reduction tools. However, in many ordinary forms of joint action there are typically also other routes to uncertainty reduction. I will discuss the forms of complementarity that exist between shared commitments and intentions and other uncertainty-reduction devices.

pdf of the presentation

 

10:10 Coffee break

10:40 Human-Robot Collaboration: Personalisation and Developmental Aspects

Yiannis Demiris - Personal Robotics Laboratory, Imperial College - London, UK

As humans and robots increasingly co-exist in home and rehabilitation settings for extended periods of time, it is crucial to factor in the participants’ constantly evolving profiles and to adapt the interaction to the personal characteristics of the individuals involved. In this talk, I will describe our computational architectures for enabling human-robot interaction in joint tasks, and discuss the related computational problems, including attention, perspective taking, prediction of forthcoming states, machine learning and shared control. I will give examples from human-robot collaboration in musical tasks and from robotic wheelchairs jointly controlled with disabled children and adults, among others.

pdf of the presentation

 

11:15 What a robot needs to assist humans or collaborate with them

Rachid Alami - LAAS-CNRS - Toulouse, France

This talk addresses some key decisional issues that arise for a cognitive robot which shares space and tasks with a human. We adopt a constructive approach based on the identification and the effective implementation of individual and collaborative skills. The system is comprehensive, since it aims to deal with a complete set of abilities articulated so that the robot controller is effectively able to conduct, in a flexible manner, human-robot collaborative problem solving and task achievement. These abilities include geometric reasoning and situation assessment based essentially on perspective-taking and affordances, management and exploitation of each agent's (human and robot) knowledge in a separate cognitive model, human-aware task planning, and interleaved execution of shared plans.

pdf of the presentation

 

11:50 Panel 3

12:40 Lunch

13:40 How to employ social norms in societies with humans and robots?

Alessandro Saffiotti - Cognitive Robotic Systems Laboratory - Örebro, Sweden

Human interactions within a society are often regulated by social norms, which involve assuming certain roles and acting differently depending on the assumed role and social context. In this talk, we will concern ourselves with how to employ social norms in societies with humans and robots. In particular, we will focus on the issue of making social norms amenable to reasoning by robots, and address issues connected to how norms are represented and which cognitive processes robots should use to account for social norms in their behavior. We will exploit a notion of institution to encapsulate social norms and to associate them with roles, artifacts and actions. We will also show an illustrative example and an experimental trial involving the use of an institution with a real robotic system.

pdf of the presentation

 

14:15 Social learning during joint human-robot action

Mohamed Chetouani - ISIR - Paris, France

One of the most significant challenges in robotics is to achieve closer interactions between humans and robots. Mutual behaviors occurring during interpersonal interaction provide unique insights into the complexities of the processes underlying human-robot coordination. In particular, interpersonal interaction, the process by which two or more people exchange information through verbal (what is said) and non-verbal (how it is said) messages, could be exploited both to establish interaction and to inform about the quality of the interaction. In this talk, we will report our recent work on social learning for (i) detecting individual traits, such as pathology and identity, during human-robot interaction, and (ii) learning tasks from unlabeled teaching signals. We will also describe how these frameworks could be employed to investigate coordination mechanisms, in particular in the context of pathologies such as autism spectrum disorders.

pdf of the presentation

 

14:50 Discussions & Wrap-up

15:30 Coffee break
