Companion robots are expected to enter the same “niche” in human society as pets, and they have also been applied as alternatives to animal-assisted therapy when the use of living animals is not feasible (Melson et al. 2009; Shibata & Wada 2011). These social robots are designed to interact with people and should provide some kind of "entertainment" or social enrichment for humans. In general, they possess characteristics that induce an emotional response in humans (Kaplan 2001; Donath 2004), and they are able to evoke human interactions. However, the behavior and communicative abilities of modern companion robots are not sophisticated enough to maintain humans' social interest. Still, this limited behavioral repertoire can be enough to attract humans' attention and induce an affective relationship in the short term (Donath 2004). For the long term, however, we argue that more conceptual emphasis should be put on the inter-specific interaction between humans and social robots. We suggest that human–dog interaction provides a rich source of knowledge for designing social robots able to interact with humans under a wide range of conditions (Miklósi & Gácsi, 2012). One key feature of human–dog interaction is that humans attribute intentions to dogs when communicating with them. We therefore suggest that intention attribution can be essential to enhancing the believability of a social robot's behavior, and that human–robot interactions are enriched when users can attribute intentions to their robotic partners as well. In a series of experiments we investigated whether human subjects attribute intentions to robots and comprehend the robots' showing behavior (visual communicative signals). We modeled the robots' behavior on the relevant dog behaviors observed in similar situations.
A total of 32 participants took part in our experiment: 22 males and 10 females, aged between 19 and 27. The majority of the participants (25) were students of control engineering and robotics at Wroclaw University of Technology; the rest were either students of other universities in Wrocław or young graduates.
Site and equipment
The experiment was carried out in a flat (belonging to WRUT) adapted to the needs of the study. The flat's plan shows two rooms of size 4 m x 5 m. The left room contains the Wizard of Oz (WoZ) operator's station, indicated by O. In the right room the participant and the robot carry out the scenario. The important components of the room's equipment are:
- L – a tall floor lamp (about 2 m);
- T1, T2, T3 – plush toys, located on the lamp, a table, and a wall shelf;
- S – a sofa on which the participant is seated;
- R – FLASH.
There is a fairly wide passage (about 1.5 m) between the two rooms, so that the robot can remain under the permanent control of the WoZ operator during all phases of the experiment. Note that along the front edge of the WoZ operator's table a small wall of white cardboard hides the operator's hand and arm movements from the participant. FLASH was chosen for this study due to its high capability of expressing non-verbal signals; from this perspective, much of its potential lies in its highly dynamic head and hands/arms, equipped with many degrees of freedom.
Robot R moves back and forth in the immediate vicinity of the floor lamp L (see Figure a). A green toy T1 sitting on top of the lamp, the object of the robot's interest, is too high to be reached. The two other toys T2 and T3 present in the room are not interesting to the robot. At some point a person enters the room through the entrance next to the sofa S, wanting to relax on the sofa for a moment by reading a magazine. After a while the robot recognizes that the person can help it reach the toy. It therefore comes close to the person (Figure b), looking at him/her the whole time. Next, the robot stands in front of the person for a while to attract his/her attention, and then heads towards the lamp with the toy. Halfway there, it stops, turns its head towards the person to check whether he/she is attending to it, and then gazes at the toy (twice; see Figure c). Then, regardless of the person's behavior, the robot approaches the lamp, turns sideways between the person and the lamp, and finally points with its hand and gazes at the toy (Figure d), turning its head twice between the person and the toy. If the person pays attention to the robot and reads its intentions, he/she will come up to the lamp, take the toy, and give it to the robot (Figure e). Otherwise, the robot again tries to attract attention in the same way as before.
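The showing-behavior sequence above can be summarized as a simple state machine with a retry loop. The following sketch is purely illustrative: in the actual experiment the robot was teleoperated by the WoZ operator, so the state names, the attention/help predicates, and the retry limit are our assumptions, not part of any real control software.

```python
from enum import Enum, auto

class State(Enum):
    APPROACH_PERSON = auto()
    ATTRACT_ATTENTION = auto()
    WALK_TO_LAMP = auto()
    POINT_AND_GAZE = auto()

def showing_behavior(person_attentive, person_helps, max_attempts=3):
    """Run the (hypothetical) showing-behavior protocol.

    person_attentive, person_helps: zero-argument callables simulating the
    person's reactions (judged by the WoZ operator in the real experiment).
    Returns True if the person hands the toy to the robot.
    """
    state = State.APPROACH_PERSON
    attempts = 0
    while True:
        if state is State.APPROACH_PERSON:
            # Approach the sofa, gazing at the person the whole time.
            state = State.ATTRACT_ATTENTION
        elif state is State.ATTRACT_ATTENTION:
            # Stand in front of the person for a while.
            state = State.WALK_TO_LAMP
        elif state is State.WALK_TO_LAMP:
            # Stop halfway; alternate gaze between person and toy (twice).
            state = State.POINT_AND_GAZE
        elif state is State.POINT_AND_GAZE:
            # Turn sideways, point at the toy, alternate gaze twice.
            if person_attentive() and person_helps():
                return True  # the person fetches the toy
            attempts += 1
            if attempts >= max_attempts:
                return False
            # Otherwise try to attract attention again, as in the scenario.
            state = State.ATTRACT_ATTENTION

# Example: a person who only notices the robot on the second attempt.
responses = iter([False, True])
result = showing_behavior(lambda: next(responses), lambda: True)
```

The key design point the scenario implies is the unconditional fallback: failing to obtain the person's attention does not abort the sequence but loops back to the attention-seeking step.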
The experiment was recorded using an AirLive monitoring system consisting of four WN-200 HD cameras and an NVR8 network video recorder. The cameras were placed in the corners of the ceiling and were operated from station O, which was also used to control the robot.