
Social Reactivity for the Nao Robot (UROS 2016)

 

INTRODUCTION

The primary aim of the project is to design and implement a fundamental socially appropriate behaviour on the Nao (a small humanoid robot): that is, a behaviour that allows the robot to autonomously react in an appropriate way to people nearby. This is to be achieved within the framework of ROS (Robot Operating System).

Human-robot interaction is becoming an important field of study, because robots could in future perform a variety of useful tasks in areas such as entertainment, security and care. Because of this, we must ensure that these robots interact with humans in an appropriate manner. Humans and environmental variables are unpredictable, so pre-defined scripted behaviours do not normally produce good results.

The need for robots to be reactive has been demonstrated numerous times. For example, a study conducted by Wills et al. at the University of Plymouth compared people's behaviour when interacting with a socially contingent versus a non-reactive robot, measured by the amount of charity donations received. The study found a 32% increase in donations received by the contingent robot, showing that people were more comfortable and more inclined to donate when interacting with a robot that exhibited a range of social cues based on its environment, compared to the static robot that used a pre-scripted behaviour.

Our robot must interact with people in a way that allows them to feel comfortable, so multiple social cues were taken into consideration in this project. Broz et al., who studied the gaze of human participants interacting with each other using eye trackers, found that people need eye gaze in order to feel comfortable and to function adequately while interacting with others, but hypothesised that high amounts of gaze may cause a person to avoid returning gaze and feel uncomfortable. Our robot therefore had to maintain eye contact while making sure the person did not feel too uncomfortable.

Andrist et al. developed a method for applying the way gaze aversion is used in human conversations to social robots, allowing them to react similarly to humans during conversations. They describe gaze aversion as:

“Gaze aversion—the intentional redirection away from the face of an interlocutor—is an important nonverbal cue that serves a number of conversational functions, including signalling cognitive effort, regulating a conversation’s intimacy level, and managing the conversational floor.” (Andrist et al.)

Andrist et al. found that robots implementing gaze aversion appeared more thoughtful and effectively managed the conversational floor. Implementing gaze aversion would therefore be extremely useful in making our robot act in a socially appropriate manner; however, because the Nao does not have articulated eyes, we had to rely on a full head-tilting motion to the left or right to avert the gaze or change the current target in focus.

Installation

For compatibility purposes we recommend installing Ubuntu version 14.04 to ensure everything runs smoothly.

ROS and Nao SDKs

  1. Install ROS Indigo (http://wiki.ros.org/indigo/Installation/Ubuntu). Follow steps 1.0 to 1.7
  2. Configure your ROS environment by creating a catkin workspace (http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment). Starting from tutorial number 3, select ‘catkin’ and follow the steps to make your workspace
  3. Follow the guide to installing the NAOqi SDK and setting up your Python bindings, from 1.2 up to and including 1.3 (http://wiki.ros.org/nao/Tutorials/Installation#ROS). Make sure you change the name, version and path of your downloaded SDK to match all commands that need to be executed. You will need to download both the Python and C++ SDKs
  4. To verify the SDKs are installed properly with the correct Python file paths, run the following; you should not receive any errors
    1. In a terminal window, type python
    2. Inside the Python shell, type the following: from naoqi import ALProxy
    3. If this works you should see no errors, just a blank line (see image below). Common causes of errors at this point are installing SDKs for the wrong architecture (64-bit instead of 32-bit, or vice versa) or setting up an incorrect Python path in step 3
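As a slightly fuller check, you can create a proxy to one of the robot's modules. The following is a minimal sketch using a placeholder IP of 192.168.1.10; substitute your Nao's actual address.

from naoqi import ALProxy  # should import cleanly if the Python path is correct

# Connect to the text-to-speech module (9559 is the default NAOqi port)
# and make the robot speak.
tts = ALProxy("ALTextToSpeech", "192.168.1.10", 9559)
tts.say("SDK check successful")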

Dependencies

  1. In a terminal, open the src folder of the catkin workspace you created in the first part using cd ~/catkin_ws/src
  2. Download the required dependencies into your src folder
    • Naoqi Bridge

git clone https://github.com/ros-naoqi/naoqi_bridge.git

    • Drivers

git clone https://github.com/ros-naoqi/naoqi_driver.git

    • Rviz Meshes

sudo apt-get install ros-indigo-nao-meshes

  3. Download our repository as well using

git clone https://github.com/vainmouse/NaoSocial

  4. Navigate back to the root of the catkin workspace

cd ~/catkin_ws

  5. Build your workspace by typing

catkin_make
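After the build completes, source the workspace so ROS can find the newly built packages (standard catkin practice; add the line to your .bashrc to make it permanent)

source ~/catkin_ws/devel/setup.bash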

The working directory for this project, named NaoSocial, is located in the src folder of your catkin workspace. You can navigate directly to this folder in a terminal using

cd ~/catkin_ws/src/NaoSocial

and run python NaoBehavior.py --ip [robot's ip]

Without the ip flag, NaoBehavior will run with the default IP set in the code. If the installation worked, you should not receive any missing-dependency errors and the Nao should stand up. If the terminal window looks similar to the image below, the Nao cannot be found; you may be on a different network to the Nao.

NaoBehavior

Functions

NaoBehavior acts as a bridge between ROS and the Nao's operating system. It allows users to execute pre-defined functions through ROS topics exposed by NaoBehavior.

System Overview Diagram

 

System

NaoBehavior.py

Start/Shutdown

On start, NaoBehavior will go to the StandInit posture (the Nao's default standing pose) and enable breathing: the robot performs a breathing animation, with subtle arm and leg movements, to simulate life. On shutdown, the Nao rests, and stiffness and breathing are disabled.
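For reference, the following is a minimal sketch of what this start/shutdown sequence looks like with the NAOqi Python API; it is illustrative rather than NaoBehavior's exact code, and NAO_IP is a placeholder.

from naoqi import ALProxy

NAO_IP = "192.168.1.10"  # placeholder, replace with your robot's address
motion = ALProxy("ALMotion", NAO_IP, 9559)
posture = ALProxy("ALRobotPosture", NAO_IP, 9559)

# Start: wake the motors, stand, and enable the idle breathing animation
motion.wakeUp()
posture.goToPosture("StandInit", 0.5)
motion.setBreathEnabled("Body", True)

# Shutdown: disable breathing, then rest (sits down and removes stiffness)
motion.setBreathEnabled("Body", False)
motion.rest()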

Dialogue

A message published to /nao_behavior/enable_Diag will enable the pre-written dialogue. The topic file must be located on the Nao's filesystem at /home/nao/top/mytopic_enu.top. It can be modified to include extra conversation capabilities. The topic file is written in QiChat; syntax and code examples for modifying and adding conversational scripts can be found at http://doc.aldebaran.com/2-1/naoqi/audio/dialog/dialog.html?highlight=qichat. A lexicon file provided in the same folder includes numerous concepts available for use and is referenced in the topic file.
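As an aside, the following sketch shows one way a topic file like this can be loaded and activated with NAOqi's ALDialog module directly; this is illustrative, not necessarily how NaoBehavior handles it internally.

from naoqi import ALProxy

NAO_IP = "192.168.1.10"  # placeholder
dialog = ALProxy("ALDialog", NAO_IP, 9559)
dialog.setLanguage("English")

# Load the topic file from the robot's filesystem and activate it
topic_name = dialog.loadTopic("/home/nao/top/mytopic_enu.top")
dialog.activateTopic(topic_name)
dialog.subscribe("nao_social_dialogue")  # arbitrary subscriber name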

Publishing to /nao_behavior/reset_Diag resets the dialogue, so any deactivated proposals can be used again, for example when a new person arrives.

Running Pre-Installed Animations / Behaviours

There are two topics on which these behaviours can be published:

/nao_behavior/add/blocking

/nao_behavior/add/nonblocking

Items published to the non-blocking topic will use the Nao's post function and run in a separate thread. This only applies to moving and running behaviours, not to the other functions provided on the topic.

The following can be published to both topics, as type string. Multiple behaviours published at the same time will queue and run one after another. The commands recognised on these topics are:

  • wakeup (StandInit posture)
  • rest (seated position)
  • say + text
  • sayanimated + text
  • help (provides a list of system behaviours)
  • move + distance (moves forwards or backwards)
  • aware (enables basic awareness and person tracking)
  • any other string is treated as one of the built-in system animations/behaviours

Once basic awareness is enabled, NaoBehavior publishes the state of the tracker to the topic /nao_behavior/tracking: True if a person is currently being tracked.
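Assuming the tracker state is published as a std_msgs/Bool (the message type is not specified here), another node can watch it like this:

import rospy
from std_msgs.msg import Bool

def on_tracking(msg):
    # msg.data is True while a person is being tracked
    rospy.loginfo('Person tracked: %s', msg.data)

rospy.init_node('tracking_listener')
rospy.Subscriber('/nao_behavior/tracking', Bool, on_tracking)
rospy.spin()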

Example code

In a terminal window you can publish directly to the exposed ROS topics with the following, replacing the text in quotes with the required function

rostopic pub -1 /nao_behavior/add/blocking std_msgs/String -- 'System/animations/Stand/Emotions/Neutral/Hello_1'

Or, to publish multiple commands from a Python script:

#!/usr/bin/env python
import rospy
from std_msgs.msg import String

# Publish a sequence of commands to NaoBehavior's blocking topic
rospy.init_node('talker', anonymous=True)
pub = rospy.Publisher('/nao_behavior/add/blocking', String, queue_size=5)
rospy.sleep(1.0)  # give the subscriber time to connect before publishing

pub.publish('System/animations/Stand/Emotions/Neutral/Hello_1')
pub.publish('say hello I am a robot how are you')
pub.publish('sayanimated I am talking with gestures')
pub.publish('move 0.2')
pub.publish('rest')

Rviz Visualization

This visualizes the Nao in Rviz and updates all joint states. When basic awareness is enabled, the locations of detected people are displayed in front of the robot, with the current tracking target shown as a red circle. Only people detected consistently for over 2 seconds are shown, and a person who moves out of view will remain displayed for 25 seconds before being removed.
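The kind of debounce/timeout filtering described above could be sketched as follows; this is a hypothetical illustration, not the project's actual implementation.

import time

class PersonFilter(object):
    SHOW_AFTER = 2.0     # seconds of detection before a person is displayed
    REMOVE_AFTER = 25.0  # seconds without detection before removal

    def __init__(self):
        self.first_seen = {}
        self.last_seen = {}

    def update(self, person_id):
        # Call whenever the tracker reports this person
        now = time.time()
        self.first_seen.setdefault(person_id, now)
        self.last_seen[person_id] = now

    def visible(self):
        # People shown in Rviz: seen for over 2 s and not missing for over 25 s
        now = time.time()
        return [p for p in self.first_seen
                if now - self.first_seen[p] >= self.SHOW_AFTER
                and now - self.last_seen[p] <= self.REMOVE_AFTER]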

With NaoBehavior running, open Rviz by entering the following in a terminal window

$ rosrun rviz rviz

A pre-configured Rviz file named NaoRviz.rviz is included in the project folder. Open the file in Rviz by going to

File -> Open Config, then navigate to the location of the provided configuration file

Rviz can also be opened with the configuration file already loaded by running

rosrun rviz rviz -d ~/catkin_ws/src/NaoSocial/NaoRviz.rviz

Once open, Rviz should look like the following image, with the robot model in the centre and the camera stream in the bottom left corner.

Nao Social

On start, Nao Social enables basic awareness; NaoBehavior then publishes to the tracking topic, to which Nao Social is subscribed. When a person is detected, the conversation dialogue is enabled, and on first detection the Nao will wave and say hello. While a person is detected, every x seconds the Nao will attempt to change its focus to another person; if there is no other person in view, the Nao will avert its gaze and then look back a second later.
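A minimal sketch of this reactive loop, built on the NaoBehavior topics described earlier, might look like the following; it assumes the tracking message is a std_msgs/Bool and omits the periodic focus switching, so it is illustrative rather than the project's actual source.

import rospy
from std_msgs.msg import Bool, String

class NaoSocialSketch(object):
    def __init__(self):
        self.pub = rospy.Publisher('/nao_behavior/add/nonblocking', String, queue_size=5)
        self.greeted = False
        rospy.Subscriber('/nao_behavior/tracking', Bool, self.on_tracking)
        rospy.sleep(1.0)           # let connections establish
        self.pub.publish('aware')  # enable basic awareness and person tracking

    def on_tracking(self, msg):
        # Wave and say hello the first time a person is detected
        if msg.data and not self.greeted:
            self.greeted = True
            self.pub.publish('System/animations/Stand/Emotions/Neutral/Hello_1')
            self.pub.publish('say hello')

if __name__ == '__main__':
    rospy.init_node('nao_social_sketch')
    NaoSocialSketch()
    rospy.spin()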

System Overview Diagram

 

Process flow

 

References

Sean Andrist, Xiang Zhi Tan, Michael Gleicher, and Bilge Mutlu. 2014. Conversational gaze aversion for humanlike robots. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction (HRI ’14). ACM, New York, NY, USA, 25-32.

P. Wills, P. Baxter, J. Kennedy, E. Senft and T. Belpaeme, “Socially contingent humanoid robot head behaviour results in increased charity donations,” 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, 2016, pp. 533-534.

F. Broz, H. Lehmann, C. L. Nehaniv and K. Dautenhahn, “Mutual gaze, personality, and familiarity: Dual eye-tracking during conversation,” 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, 2012, pp. 858-864.

Charles Osei

