AttentionBot: A Robot Keeping Human Attention

AttentionBot’s purpose is to investigate how to keep human attention. We are interested in communication inconsistency, meaning that a person’s behavior and speech are inconsistent. In the design field, people call this inconsistency “anti-affordance,” meaning that form and function are unrelated. This leads to our research question: can anti-affordance robot designs attract and keep more human attention? We designed two appearances (a kind-looking Santa and an evil-looking witch) and two characters (mean and nice). The kind-looking Santa with the mean character and the evil-looking witch with the nice character are anti-affordance robots; the other two are affordance robots. The robot asks the participant to mimic a sound that it makes. The nice robot is encouraging and friendly, while the mean robot is a terrible tease. Although the results of the experiment did not show significant differences between anti-affordance and affordance robots, we found that people got confused when identifying the character while interacting with anti-affordance robots. We conclude that, for short-term interaction, appearance has a greater influence than character.


Introduction

In his article “Communication: The Context of Change,” Barnlund notes that inconsistency kills communication. When a person’s behavior and speech are inconsistent, others usually feel uncomfortable. This phenomenon exists in human-human communication. What about human-robot communication? Humans usually do not expect a robot to respond the way humans do, yet they want the robot to communicate with them in their own interaction style. Will the same emotional response to inconsistent communication happen between human and robot?

Research Question

Our research question is: “Can anti-affordance robot designs attract and keep more human attention?” We focus on the issue of communication inconsistency. In the design field, people call this inconsistency “anti-affordance” [1], meaning that the form and the function of a product are unrelated: the product’s sensory characteristics do not intuitively imply its functionality and use, so it is hard to guess the function from the appearance. We extend this idea to robot design and conduct an experiment to test the effect of inconsistent communication between human and robot.

Design Concept

The design space is a 2×2 table, reconstructed below: one axis is form (appearance) and the other is function (character). The robot has two appearances, a witch or a Santa, and two completely different characters, nice and mean. Each combination is either affordance or anti-affordance. For example, people usually expect a witch to be mean, so we call the robot with the witch’s appearance and the nice character “anti-affordance.”

                       Nice character     Mean character
    Santa (kind form)  affordance         anti-affordance
    Witch (evil form)  anti-affordance    affordance

First Hypothesis

People will pay more attention to robots whose appearance and character are anti-affordance (inconsistent).

Second Hypothesis

In anti-affordance scenarios, there is a difference between the kind-looking robot with the mean character and the evil-looking robot with the nice character.

Design Constraints

  • Simple Rules
    Having simple game rules allows users to learn how to interact with the robot in a short time.
  • Short Rounds
    Many rounds can be played without exhausting the participant. The uncertainty of the next round makes the game interesting.
  • Easy to Implement
    The system architecture needs to be easy to implement. This can also reduce the time for debugging.
  • Task Oriented
    Users have a clear goal to achieve instead of interacting aimlessly.
  • Need to Hide a Person
    A person can hide under the table to give candy and scare users.

Wizard with Oz

Our experiment uses the “Wizard with Oz” method [13]: we use a real robot and real technologies, while a human wizard stands in for the robot’s artificial intelligence.

Measurements of Success

We use the “Engagement” metric [5][14] to measure the efficacy (capacity to produce an effect) of various social characteristics (emotion, dialogue, personality, etc.) for capturing attention (acquisition time) and holding interest (duration).

  • The interaction level of participants.
  • Interaction Ratio = NumPI / NumPP (see the sketch after this list)
    NumPI: the number of people interacting with the robot
    NumPP: the number of people passing by
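Since the project’s software is written in Java, here is a minimal Java sketch of the ratio computation; the class name and the tallies are illustrative (only the resulting ratios, 0.3 and 0.075, are reported later), not the project’s actual code or data.

    // Illustrative only: Interaction Ratio = NumPI / NumPP.
    public class InteractionRatio {

        static double ratio(int numPI, int numPP) {
            return (double) numPI / numPP; // cast avoids integer division
        }

        public static void main(String[] args) {
            // Hypothetical tallies: 12 interacting out of 40 passing by
            // gives 0.3, matching the ratio reported for the Witch robot.
            System.out.println(ratio(12, 40)); // prints 0.3
        }
    }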

Game Procedure

We design a sound mimic game for the interaction. The robot asks the participant to mimic the sound that it makes. The nice robot gives easy-to-mimic sounds and friendly feedback; it encourages users and gives them candies. Conversely, the mean robot gives hard-to-mimic sounds and mean feedback; it teases users and is reluctant to give them candies.

[content] and [feedback] vary according to the game type. When a user approaches the robot, it greets the user and asks if he/she wants to play. If the user agrees by pressing the button, the robot plays the game content. The user then plays and presses the button after finishing his/her round. The robot evaluates the user’s performance and gives feedback; it also tells the wizard whether to give the user a candy or not. The user then presses the button to tell the robot that he/she wants to play the next round. If nothing happens, the robot returns to the beginning state. In the experiment, however, the button is fake and has no function: we monitor the webcam, teleoperate the robot, and advance to the next state manually. This prevents the problem of participants pushing the button multiple times. A minimal sketch of this state machine follows.
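The write-up does not include the project’s control code, so the following is a minimal Java sketch of the round state machine just described; the state names and methods are our own naming, not the project’s.

    // Sketch of the game's states. The fake button never drives transitions;
    // the wizard advances states manually from the teleoperation UI.
    enum GameState { IDLE, GREETING, ASKING, PLAYING_CONTENT, USER_TURN, FEEDBACK }

    class SoundMimicGame {
        private GameState state = GameState.IDLE;

        // Invoked by the wizard, who watches the webcam and decides when the
        // event that the fake button pretends to trigger has happened.
        void advance() {
            switch (state) {
                case IDLE:            state = GameState.GREETING;        break; // user approaches
                case GREETING:        state = GameState.ASKING;          break; // robot asks to play
                case ASKING:          state = GameState.PLAYING_CONTENT; break; // user "presses" the button
                case PLAYING_CONTENT: state = GameState.USER_TURN;       break; // robot played [content]
                case USER_TURN:       state = GameState.FEEDBACK;        break; // user finished the round
                case FEEDBACK:        state = GameState.ASKING;          break; // [feedback] given, next round
            }
        }

        // If nothing happens, the robot returns to the beginning state.
        void reset() { state = GameState.IDLE; }
    }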

Greeting Scripts

The robot greets users passing by the area.

Asking Scripts

The robot asks users to play games, go to the next round, or wait for the researcher.

Sound Mimic Game Scripts

The game rule is that users mimic the sound that the robot makes.

The table above shows the content of the sound mimic game in every round.

The table above shows that the robot always provides the same sequence of feedback.

System Architecture

The system architecture has three layers: human, interface, and control. The human layer is the users interacting with the robot. The interface layer contains the robot that plays with users and gives candies, a fake button for users to push, a camera and microphone for monitoring users, a monitor displaying the robot’s feedback, and a speaker playing the robot’s feedback. The control layer includes the servo motors driving the robot, a Hummingbird board controlling the servo motors, a Java program serving as the teleoperation control center, and a wizard teleoperating the robot. The Java program on the computer communicates with the Hummingbird, receives information from the camera and microphone, outputs text to the monitor, and outputs sounds to the speaker. A sketch of this wiring follows.
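The write-up does not show the Java control program itself, so here is a hedged sketch of the control-layer wiring under one plausible design; HummingbirdLink is a hypothetical stand-in for the CREATE Lab Hummingbird API, whose real class and method names may differ.

    // Hypothetical stand-in for the Hummingbird board's servo interface.
    interface HummingbirdLink {
        void setServoPosition(int servoId, int degrees);
    }

    // Teleoperation control center: routes wizard commands to the servos,
    // the on-screen text, and the speaker, as described above.
    class ControlCenter {
        private final HummingbirdLink board;

        ControlCenter(HummingbirdLink board) { this.board = board; }

        void onWizardCommand(String command) {
            switch (command) {
                case "wave": board.setServoPosition(0, 90); break; // arm servo
                case "nod":  board.setServoPosition(1, 45); break; // head servo
                case "feedback":
                    showText("Great job! Push the button for the next round.");
                    playSound("nice_feedback_1.wav"); // hypothetical clip name
                    break;
            }
        }

        private void showText(String text)  { /* write to the monitor window */ }
        private void playSound(String file) { /* play a pre-recorded clip over the speaker */ }
    }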

Teleoperation User Interface

The programming environment is Java on the NetBeans IDE. This is an open-source project released under the MIT license.

Top View and Sections

This is the original design of the robot’s physical structure. We later changed the design by replacing the candy hole with a candy jar for giving participants rewards.

3D Rendering

This is the 3D rendering of the original robot design.

Witch Robot

The witch robot has an evil appearance. We also added a fake, non-functional button: the robot asks the participant to push the button to go to the next round. The reason we added the fake button is to make participants think that this is an autonomous robot.

Santa Robot

The physical structures of the Santa and Witch robots are the same; we only replace the appearance with a kind-looking Santa.


Experiment

We conducted the experiment in the atrium of Newell Simon Hall, Carnegie Mellon University.


The table above shows the participant data.

Differences in Two Experiments

The table above shows the differences between the two experiments.

Survey Questions

  • How do you like the robot in general?
    (Not at all) 1 2 3 4 5 6 7 (Very much)
  • Why do you like/dislike the robot?
    Open-ended question
  • What do you think of the appearance of the robot?
    (Kind) 1 2 3 4 5 6 7 (Evil)
  • What else do you think about the appearance, other than kind or evil?
    Open-ended question
  • How well were you able to observe the distinct character of the robot?
    (Not at all) 1 2 3 4 5 6 7 (Very much)
  • What do you think of the character of the robot?
    (Nice) 1 2 3 4 5 6 7 (Mean)
  • Do you think the robot performed any malfunctions?
    Open-ended question
  • How many candies did you get?
    Open-ended question
  • Did you get more than one candy at one time?
    Open-ended question
  • How much do you know about robotics?
    (Not at all) 1 2 3 4 5 6 7 (Very much)

Interaction Score (Engagement Metric)

We measure interaction using a score. The interaction score ranges from 0 to 2 for each round; the rubric is below, and a short sketch of how the scores roll up follows it.

  • 0 Score (Low)
    The participant does not say anything, is reluctant, or walks away
  • 1 Score (Moderate)
    The participant says something
  • 2 Score (High)
    The participant enjoys playing with the robot
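As a concrete illustration of the metric (a minimal Java sketch; the round scores are hypothetical, not experimental data), per-round scores sum into the interaction level used in Analysis 1:

    // Interaction level = sum of a participant's per-round scores (0, 1, or 2).
    public class InteractionLevel {

        static int level(int[] roundScores) {
            int sum = 0;
            for (int s : roundScores) sum += s;
            return sum;
        }

        public static void main(String[] args) {
            int[] rounds = {2, 1, 2, 0}; // hypothetical four-round participant
            System.out.println(level(rounds)); // prints 5
        }
    }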

Analysis 1: Anti-affordance & Affordance

We used a one-tailed t test to analyze the interaction level and the interaction experience in the anti-affordance and affordance scenarios. The interaction level is the sum of a participant’s interaction scores over all rounds, and the interaction experience is the score from the survey question. A hedged sketch of the test statistic is below.
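The write-up does not record which t-test variant was computed, so the following is a minimal Java sketch assuming Welch’s unpaired t test; the group arrays are hypothetical interaction levels, and the same statistic applies to Analyses 2–4 with the appropriate groups and tails.

    // Welch's t statistic for two independent samples (illustrative sketch).
    public class TTest {

        static double mean(double[] x) {
            double s = 0;
            for (double v : x) s += v;
            return s / x.length;
        }

        static double sampleVariance(double[] x) {
            double m = mean(x), s = 0;
            for (double v : x) s += (v - m) * (v - m);
            return s / (x.length - 1); // unbiased sample variance
        }

        static double welchT(double[] a, double[] b) {
            double se = Math.sqrt(sampleVariance(a) / a.length
                                + sampleVariance(b) / b.length);
            return (mean(a) - mean(b)) / se;
        }

        public static void main(String[] args) {
            double[] antiAffordance = {5, 3, 6, 2, 4}; // hypothetical levels
            double[] affordance     = {4, 4, 5, 3, 2};
            System.out.println(welchT(antiAffordance, affordance));
        }
    }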

Analysis 2: Kind+Mean & Evil+Nice

In the anti-affordance scenario, we used a two-tailed t test to compare the interaction level and interaction experience of the evil-looking robot with the nice character and the kind-looking robot with the mean character.

Analysis 3: Awareness

We used a one-tailed t test to analyze the awareness of character (nice or mean) and of appearance (kind- or evil-looking). The scores are based on the survey questions.

Analysis 4: Appearance & Character Comparison

We used a one-tailed t test to compare the awareness of character and appearance between the anti-affordance and affordance scenarios. The scores are based on the survey questions.


Results

Cannot Prove First Hypothesis
According to Analysis 1, we cannot prove the first hypothesis that “people will pay more attention to robots with anti-affordance regarding appearance and character.” There is no significant difference between the anti-affordance and affordance interaction scenarios.

Cannot Prove Second Hypothesis
According to Analysis 2, we cannot prove the second hypothesis that “in anti-affordance scenarios, there is a difference between the kind-looking robot with the mean character and the evil-looking robot with the nice character.” There is no significant difference between the two kinds of anti-affordance robots.

Awareness (Appearance > Character)
According to Analysis 3, people are much more aware of the appearance than of the character. There is a significant difference in appearance awareness, but no significant difference in character awareness. We were curious why this happens, so we conducted Analysis 4.

Influence Level (Appearance > Character)
We find that the reason there is no significant difference in character awareness is that appearance influences character more than character influences appearance. According to Analysis 4, anti-affordance causes a lot of confusion in character awareness: an anti-affordance robot tends to confuse people more about character than about appearance. This means that appearance awareness is insusceptible to an inconsistent character, but character awareness is susceptible to an inconsistent appearance.


Discussion

The Reason We Cannot Prove the Hypotheses
We believe there are three reasons. First, we did not have enough participants. Second, the robot’s emotional expressions were not exaggerated enough for participants to distinguish. Third, the group of participants was biased: the experiments took place at the CMU Robotics Institute, so our participants were mainly computer science engineers, who may think that the robot’s emotion is fake and programmed by the designer.

The Reward May Affect the Result
Because the nice robot gives more candies (5) than the mean one (2), this may affect the result. In fact, in the open-ended survey questions we find that some people think the robot is nice only because it gives candies.

Success of the Wizard with Oz Method
Many people thought that the robot used voice recognition software and infrared tracking and was autonomous; one participant said, “Oh, what! It isn’t autonomous?” The fake button also plays an important role: we want people to think that they need to press the button to trigger the events, while in fact the robot is teleoperated by a wizard.

Round 3 Script Change
In the Witch robot experiment, people thought the robot was malfunctioning when it started playing round 3, which is a randomly generated sound. So we changed the script of round 3 in the Santa robot experiment.

Candy Stealing
There are three kinds of stealing behavior. First, in the Witch robot experiment, it is easy to steal candies from the jar; we find that people are likely to steal candies from the robot with the mean character while playing, since the mean robot tends to keep the candies and does not want to give them out. Second, people like to take a lot of candies from the mean robot and put them back afterwards. Third, some people just take candies from the robot and walk away without playing.

Why Is the Interaction Ratio 0.3 (Witch) > 0.075 (Santa)?
In the Santa robot experiment, the interaction ratio is very low (0.075), meaning that our robot caught the attention of only 7.5% of the people passing by. In the Witch robot experiment, the interaction ratio is much higher (0.3). We think there are several reasons. First, the Santa robot experiment took place after the Witch one, and some people who had seen the Witch robot said that the Santa robot was the same robot with a different appearance. Second, the male voice of the Santa robot is not attractive. Third, the appearance of the Santa robot is too normal, so people may not feel surprised.

Confounded Variables
Anti-affordance is not the only independent variable in this experiment. A significant difference appears in the interaction experience when comparing appearance and character: in the interaction experience (survey questions), people like the kind appearance and the nice character more than the evil appearance and the mean character. But in the interaction level (observations), there is no significant difference. Why do we not see the difference in the interaction level? We think the reason is that social norms and values influence people more when filling out survey questions than when interacting with the robot.


References

  1. Bainbridge, W. A., Hart, J., Kim, E. S., & Scassellati, B. (2008). The effect of presence on human-robot interaction. RO-MAN 2008 – The 17th IEEE International Symposium on Robot and Human Interactive Communication (pp. 701–706). IEEE.
  2. Beale, R. (2007). Slanty design. Communications of the ACM.
  3. Breazeal, C. (2001). Affective interaction between humans and robots. Advances in Artificial Life.
  4. Breazeal, C., & Scassellati, B. (1999). How to build robots that make friends and influence people. Intelligent Robots and Systems, IROS ’99.
  5. Bruce, A., Nourbakhsh, I., & Simmons, R. (2002). The role of expressiveness and attention in human-robot interaction. Proceedings 2002 IEEE International Conference on Robotics and Automation.
  6. Gockley, R., Forlizzi, J., & Simmons, R. (2006). Interactions with a moody robot. 2006 ACM International Conference on Human-Robot Interaction (HRI).
  7. Goodrich, M. A., & Schultz, A. C. (2007). Human-robot interaction: A survey. Foundations and Trends in Human-Computer Interaction.
  8. Kahn, P. H., Freier, N. G., Kanda, T., Ishiguro, H., Ruckert, J. H., Severson, R. L., & Kane, S. K. (2008). Design patterns for sociality in human-robot interaction. Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction.
  9. Lauwers, T., & Nourbakhsh, I. (2010). Designing the Finch: Creating a robot aligned to computer science concepts. First AAAI Symposium on Educational Advances in Artificial Intelligence.
  10. Lee, C., Kim, K., Breazeal, C., & Picard, R. (2008). Shybot: Friend-stranger interaction for children living with autism. 2008 ACM CHI Conference.
  11. Lombard, M., Ditton, T., & Crane, D. (2000). Measuring presence: A literature-based approach to the development of a standardized paper-and-pencil instrument. The Third International Workshop on Presence.
  12. Short, E., Hart, J., Vu, M., & Scassellati, B. (2010). No fair!! An interaction with a cheating robot. 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI).
  13. Steinfeld, A. (2009). The Oz of Wizard: Simulating the human for interaction research. HRI ’09: Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction.
  14. Steinfeld, A., Fong, T., & Kaber, D. (2006). Common metrics for human-robot interaction. 2006 ACM International Conference on Human-Robot Interaction (HRI).
  15. Kidd, C. D., & Breazeal, C. (2004). Effect of a robot on user perceptions. 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems.

Team Members

Yen-Chia Hsu: master’s student, Tangible Interaction Design, CMU
Idea Development and Research Discussion
Design the Interaction Type and Robot Form
Robot System Architecture
Robot Interaction Software
Robot 3D Model Rendering
Interaction Scripts and Scenarios
Experiment Procedure
Website and Presentation PPT
Voice Recording for Game Scripts
Help in Qualitative and Quantitative Analysis

Zheng Yang: master’s student, Robotics Institute, CMU
Idea Development and Research Discussion
Design the Interaction Type and Robot Form
Survey Questions for Experiment
Metrics and Measurements of Success
Qualitative and Quantitative Analysis
Help in Building Robot Physical Structure
Robot Mechanics Design

Valerie A. Gonzalez: undergraduate student, Mechanical Engineering, CMU
Idea Development and Research Discussion
Design the Interaction Type and Robot Form
Build Robot Physical Structure
Robot Appearance
Robot Theme Decoration
Robot Mechanics Design

Prof. Illah R. Nourbakhsh: Robotics Institute, Carnegie Mellon University, U.S.A.
Project Advisor


Acknowledgments

This is a project from the “16-867 Principles of Human-Robot Interaction” class at the Robotics Institute, Carnegie Mellon University, U.S.A. Thanks to the support of our project advisor, Prof. Illah R. Nourbakhsh, and to the CREATE Lab’s Hummingbird platform, which let our group easily develop the robot’s control system.

