Posts tagged "robots"

Note:

At present, I write here infrequently. You can find my current, regular blogging over at The Deliberate Owl.

Hi, my name is Mox!

This story begins in 2013, in a preschool in Boston, where I hide, with laptop, headphones, and microphone, in a little kitchenette. Ethernet cables trail across the hall to the classroom, where 17 children eagerly await their turn to talk to a small fluffy robot.

fluffy blue dragonbot robot

DragonBot is a squash-and-stretch robot designed for playing with young children.

"Hi, my name is Mox! I'm very happy to meet you."

The pitch of my voice is shifted up and sent over the somewhat laggy network. My words, played by the speakers of Mox the robot and picked up by its microphone, echo back with a two-second delay into my headphones. It's tricky to speak at the right pace, ignoring my own voice bouncing back, but I get into the swing of it pretty quickly.
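(For the technically curious: the voice pipeline is conceptually simple. Capture the operator's speech, shift its pitch up, and play it through the robot's speakers. Below is a minimal sketch of that idea in Python. It's illustrative only: our actual teleoperation software was custom, and the library choices, sample rate, block size, and four-semitone shift here are all assumptions.)

```python
import numpy as np
import sounddevice as sd   # duplex audio I/O; assumed library choice
import librosa             # pitch shifting; assumed library choice

RATE = 16000       # sample rate (Hz); illustrative
BLOCK = 8192       # samples per block (~0.5 s); illustrative
SEMITONES = 4      # upward shift to sound child-like; illustrative

def voice_link(indata, outdata, frames, time, status):
    """Pitch-shift each block of operator speech on its way to the robot."""
    mono = indata[:, 0].astype(np.float32)
    shifted = librosa.effects.pitch_shift(mono, sr=RATE, n_steps=SEMITONES)
    outdata[:, 0] = shifted[:frames]

# Microphone in, pitch-shifted audio out (in the real setup, out over the
# network to the robot's speakers). A production system would use a streaming
# phase vocoder to avoid block-boundary artifacts and the ~0.5 s latency here.
with sd.Stream(samplerate=RATE, blocksize=BLOCK, channels=1,
               callback=voice_link):
    input("Voice link running; press Enter to stop.\n")
```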

We're running show-and-tell at the preschool on this day. It's one of our pilot tests before we embark on an upcoming experimental study. The children take turns telling the robot about their favorite animals. The robot (with my voice) replies with an interesting fact about each animal: "Did you know that capybaras are the largest rodents on the planet?" (Yes, one five-year-old's favorite animal is a capybara.) Later, we share how the robot is made and talk about motors, batteries, and 3D printers. We show them the teleoperation interface for remote-controlling the robot. All the kids try their hand at triggering the robot's facial expressions.

Then one kid asks if he can teach the robot how to make a paper airplane.

two paper airplanes, one has been colored on by a young child

Two paper airplanes that a child gave to DragonBot.

We'd just told them all how the robot was controlled by a human. I ask: Does he want to teach me how to make a paper airplane?

No, the robot, he says.

Somehow, there was a disconnect between what he had just learned about the robot and the robot's human operator, and the character that he perceived the robot to be.

Relationships with robots?

girl reaching across table to touch a fluffy robot's face

A child touches Tega's face while playing a language learning game.

In the years since that playtest, I've watched several hundred children interact with both teleoperated and autonomous robots. The children talk with the robots. They laugh. They give hugs, drawings, and paper airplanes. One child even invited the robot to his preschool's end-of-year picnic.

Mostly, though, I've seen kids treat the robots as social beings. But not quite how they treat people. And not quite how they treat pets, plants, or computers.

These interactions were clues: There's something interesting going on here. Children ascribed physical attributes to robots—they can move, they can see, they can feel tickles—but also mental attributes: thinking, feeling sad, wanting companionship. A robot could break, yes, and it is made by a person, yes, but it can be interested in things. It can like stories; it can be nice. Maybe, as one child suggested, if it were sad, it would feel better if we gave it ice cream.

girl hugs fluffy dragon robot in front of a small play table

A child listens to DragonBot tell a story during one of our research studies.

Although our research robots aren't commercially available, investigating how children understand robots isn't merely an academic exercise. Many smart technologies are joining us in our homes: Roomba, Jibo, Alexa, Google Home, Kuri, Zenbo...the list goes on. Robots and AI are here, in our everyday lives.

We ought to ask ourselves, what kinds of relationships do we want to have with them? Because, as we saw with the children in our studies, we will form relationships with them.

We see agency everywhere

One reason we can't help forming relationships with robots is that humans have evolved to see agency and intention everywhere. If an object moves independently in an apparently goal-directed way, we interpret that as agency—that is, we see the object as an agent. Even with something as simple as a couple of animated triangles moving around on a screen, we look for, and project, agency and intentionality.

From an evolutionary perspective, this makes sense. Is the movement I spotted out of the corner of my eye just a couple of leaves dancing in the breeze, or is it a tiger? My survival depends on assuming it's a tiger.

But relationships aren't built merely on recognizing other agents; relationships are social constructs. And humans are uniquely—unequivocally—social creatures. Social is the warp and weft of our lives. Everything is about our interactions with others: people, pets, characters in our favorite shows or books, even our plants or our cars. We need companionship and community to thrive. We pay close attention to social cues—like eye gaze, emotions, politeness—whether those cues come from a person...or from a machine.

Researchers have spent the past 25 years showing that humans respond to computers and machines as if those objects were people. There's even a classic book, published by Byron Reeves and Clifford Nass in 1996, titled The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Among their findings: people assign personalities to digital devices; people are polite to computers (for example, they evaluate a computer more positively when they have to deliver the evaluation "to its face"); and merely telling people that a computer is on their team leads them to rate it as more cooperative and friendly.

Research since that book has shown again and again that these findings still hold: Humans treat machines as social beings. And this brings us back to my work now.

Designing social robots to help kids

I'm a PhD student in the Personal Robots Group. We work in the field of human-robot interaction (HRI). HRI studies questions such as: How do people think about and react to robots? How can we make robots that will help people in different areas of their lives—like manufacturing, healthcare, or education? How do we build autonomous robots—including algorithms for perception, social interaction, and learning? At the broadest scale, HRI encompasses anything where humans and robots come into contact and do things with, or near, each other.

jacqueline holding the red and blue stripy fluffy tega robot, wearing a red dress

Look, we match!

As you might guess based on the anecdotes I've shared in this post, the piece of HRI I'm working on is robots for kids.

There are numerous projects in our group right now focusing on different aspects of this: robots that help kids in hospitals, robots that help kids learn programming, robots that promote curiosity and a growth mindset, robots that help kids learn language skills.

In my research, I've been asking questions like: Can we build social robots that support children's early language and literacy learning? What design features of the robots affect children's learning—like the expressivity of the robot's voice, the robot's social contingency, or whether it provides personalized feedback? How, and what, do children think about these robots?

Will robots replace teachers?

When I tell people about the Media Lab's work with robots for children's education, a common question is: "Are you trying to replace teachers?"

To allay concerns: No, we aren't.

(There are also some parents who say that's nice, but can you build us some robot babysitters, soon, pretty please?)

We're not trying to replace teachers for two reasons:

  1. We don't want to.
  2. Even if we wanted to, we couldn't.

Teachers, tutors, parents, and other caregivers are irreplaceable. Despite all the research that seems to point to the conclusion "robots can be like people", there are also studies showing that children learn more from human tutors than from robot tutors. Robots don't have all the capabilities that people have for adapting to a particular child's needs. They have limited sensing and perception, especially when it comes to recognizing children's speech. They can't understand natural language (and we're not much closer to solving the underlying symbol grounding problem). So, for now, however often science fiction would have us believe otherwise (androids, Cylons, Terminators, and so on), robots are not human.

Even if we eventually get to the point where robots do have all the necessary human-like capabilities to be like human teachers and tutors—and we don't know how far in the future that would be, or whether it's even possible—humans are still the ones building the robots. We get to decide what we build. In our lab, we want to build robots that help humans and support human flourishing. That said, wanting to build helpful robots only goes so far: there's still more work to do to ensure that all the technology we build is beneficial, and not harmful, for humans. More on that later in this post.

a mother sits with her son holding a tablet

A mother reads a digital storybook with her child.

The role we foresee for robots and similar technologies is complementary: they are a new tool for education. Like affective pedagogical agents and intelligent tutoring systems, they can provide new activities and new ways of reaching kids. The teachers we've talked to in our research are excited about the prospects. They've suggested that the robot could provide personalized content, or connect learning in school to learning at home. We think robots could supplement what caregivers already do, support them in their efforts, and scaffold or model beneficial behaviors that caregivers may not know to use, or may not be able to use.

For example, one beneficial behavior during book reading is asking dialogic questions—that is, questions that prompt the child to think about the story, predict what might happen next, and engage more deeply with the material. Past work from our group has shown that when you add a virtual character to a digital storybook who models this dialogic questioning, it can help parents learn what kinds of questions they can ask, and remember to ask them more often.

In another Media Lab project, Natalie Freed—an alum of our group—made a simple vocabulary-learning game with a robot that children and their parents played together. The robot's presence encouraged communication and discussion. Parents guided and reinforced children's behavior in a way that aligned with the language learning goals of the game. Technology can facilitate child-caregiver interactions.

In summary, in the Personal Robots Group, we want our robots to augment existing relationships between children and their families, friends, and caregivers. Robots aren't human, and they won't replace people. But they will be robots.

Robots are friends—sort of?

In our research, we hear a lot of children's stories. Some are fictional: tales of penguins and snowmen, superheroes and villains, animals playing hide-and-seek and friends playing ball. Some are real: robots who look like rock stars, who ask questions and can listen, who might want ice cream when they're sad.

Such stories can tell you a lot about how children think. And we've found that not only do children learn new words and tell stories with robots, they also think of the robots as active social partners.

In one study, preschool children talked about their favorite animals with two DragonBots, Green and Yellow. One robot was contingently responsive: it nodded and smiled at all the right times. The other was just as expressive, but not contingent—you might be talking, and it might be looking behind you, or it might interrupt you to say "mmhm!", rather than waiting until a pause in your speech.

a yellow dragonbot and a green dragonbot sitting on a table

Two DragonBots, ready to play!

Children were especially attentive to the more contingent robot, spending more time looking at it. We also asked children a couple of questions to test whether they thought the robots were equally reliable informants. We showed children a new animal and asked, "Which robot do you want to ask about this animal's name?" Children chose one of the robots.

But then each robot provided a different name! So we asked: "Which robot do you believe?" Regardless of which robot they had initially chosen (though most chose the contingent robot), almost all the children believed the contingent robot.

This targeted information seeking is consistent with previous psychology and education research showing that children are selective in choosing whom to question or endorse. They use their interlocutor's nonverbal social cues to decide how reliable that person, or that robot, is.

Then we performed a couple of other studies to learn about children's word learning with robots. We found that here, too, children paid attention to the robot's social cues. As in their interactions with people, children followed the robot's gaze and watched the robot's body orientation to figure out which objects the robot was naming.

We looked at longer interactions. Instead of playing with the robot once, children got to play seven or eight times. For two months, we observed children take turns telling stories with a robot. Did they learn? Did they stay engaged, or get bored? The results were promising: The children liked telling their own stories to the robot. They copied parts of the robot's stories—borrowing characters, settings, and even some of the new vocabulary words that the robot had introduced.

We looked at personalization. If you have technology, after all, one of the benefits is that you can customize it for individuals. If the robot "leveled" its stories to match the child's current language abilities, would that lead to more learning? If the robot personalized the kinds of motivational strategies it used, would that increase learning or engagement?

a girl sits across from a dragon robot at a small play table

A girl looks up at DragonBot during a storytelling game.

Again and again, the results pointed to one thing: Children responded to these robots as social beings. Robots that acted more human-like—being more expressive, being responsive, personalizing content and responses—led to more engagement and learning by the children; even how expressive the robot's voice was mattered. When we compared a robot that had a really expressive voice to one that had a flat, boring voice (like a classic text-to-speech computer voice), we saw that with the expressive robot, children were more engaged, remembered the story more accurately, and used the key vocabulary words more often.

All these results make sense: There's a lot of research showing that these kinds of "high immediacy" behaviors are beneficial for building relationships, teaching, and communicating.

Beyond learning, we also looked at how children thought and felt about the robot.

We looked at how the robot was introduced to children: If you tell them it's a machine, rather than introducing it as a friend, do children treat the robot differently? We didn't see many differences. In general, children reacted in the moment to the social robot in front of them. You could say "it's just a robot, Frank," but like the little boy I mentioned earlier who wanted to teach the robot how to make a paper airplane, they didn't really get the distinction.

Or maybe they got it just fine, but to them, what it means to be a robot is different from what we adults think it means to be a robot.

Across all the studies, children claimed the robot was a friend. They knew it couldn't grow or eat like a person, but—as I noted earlier—they happily credited it with thinking, seeing, feeling tickles, and being happy or sad. They shared stories and personal information. They taught each other skills. Sure, the kids knew that a person had made the robot, and maybe it could break, but the robot was a nice, helpful character that was sort of like a person and sort of like a computer, but not really either.

And there was that one child who invited the robot to a picnic.

For children, the ontologies we adults know—the categories we see as clear-cut—are still being learned. Is something real, or is it pretending? Is something a machine, or a person? Maybe it doesn't matter. To a child, someone can be imaginary and still be a friend. A robot can be in-between other things. It can be not quite a machine, not quite a pet, not quite a friend, but a little of each.

But human-robot relationships aren't authentic!

One concern some people have when talking about relationships with social robots is that the robots are pretending to be a kind of entity that they are not—namely, an entity that can reciprocally engage in emotional experiences with us. That is, they're inauthentic (PDF): they provoke undeserved and unreciprocated emotional attachment, trust, caring, and empathy.

But why must reciprocality be a requirement for a significant, authentic relationship?

People already attach deeply to a lot of non-human things. People already have significant emotional and social relationships that are non-reciprocal: pets, cars, stuffed animals, favorite toys, security blankets, and pacifiers. Fictional characters in books, movies, and TV shows. Chatbots and virtual therapists, smart home devices, and virtual assistants.

A child may love their dog, and you may clearly see that the dog "loves" the child back, but not in a human-like way. We aren't afraid that the dog will replace the child's human relationships. We acknowledge that our relationships with our pets, our friends, our parents, our siblings, our cars, and our favorite fictional characters are all different, and all real. Yet the default assumption is generally that robots will replace human relationships.

If done right (more on that in a moment), human-robot relationships could just be one more different kind of relationship.

So we can make relational robots? Should we?

When we talk about how we can make robots that have relationships with kids, we also have to ask one big lurking question:

Should we?

Social robots have a lot of potential benefits. Robots can help kids learn; they can be used in therapy, education, and healthcare. How do we make sure we do it "right"? What guiding principles should we follow?

How do we build robots to help kids in a way that's not creepy and doesn't teach kids bad behaviors?

I think caring about building robots "right" is a good first step, because not everybody cares, and because it's up to us. We humans build robots. If we want them not to be creepy, we have to design and build them that way. If we want socially assistive robots instead of robot overlords, well, that's on us.

a drawing of two robots on a whiteboard

Tega says, 'What do you want to do tonight, DragonBot?' DragonBot responds, 'The same thing we do every night, Tega! Try to take over the world!'

Fortunately, there's growing international interest across many disciplines in the in-depth study of the ethics of placing robots in people's lives. For example, the Foundation for Responsible Robotics is thinking about future policy around robot design and development. The IEEE Standards Association has an initiative on ethical considerations for autonomous systems. The Open Roboethics initiative polls relevant stakeholders (like you and me) about important ethical questions to find out what people who aren't necessarily "experts" think: Should robots make life-or-death decisions? Would you trust a robot to take care of your grandma? There's an increasing number of workshops on robot policy and ethics at major robotics conferences—I've attended some myself. There's even a whole conference on law and robots.

The fact that there's multidisciplinary interest is crucial. Not only do we have to care about building robots responsibly, but we also have to involve a lot of different people in making it happen. We have to work with people from related industries who face the same kinds of ethical dilemmas because robots aren't the only technology that could go awry.

We also have to involve all the relevant stakeholders—a lot more people than just the academics, designers, and engineers who build the robots. We have to work with parents and children. We have to work with clinicians, therapists, teachers. It may sound straightforward, but it can go a long way toward making sure the robots help and support the people they're supposed to help and support.

We have to learn from the mistakes made by other industries. This is a hard one, but there's certainly a lot to learn from. When we ask if robots will be socially manipulative, we can see how advertising and marketing have handled manipulation, and how we can avoid some of the problematic issues. We can study other persuasive technologies and addictive games. We can learn about creating positive behavior change instead. Maybe, as was suggested at one robot ethics workshop, we could create "warning labels" similar to nutrition labels or movie ratings, which explain the risks of interacting with particular technologies, what the technology is capable of, or even recommended "dosage", as a way of raising awareness of possible addictive or negative consequences.

For managing privacy, safety, and security, we can look at what other surveillance technologies and Internet of Things devices have done wrong—such as not encrypting network traffic and failing to inform users of data breaches in a timely manner. Manufacturing already has standards for "safety by design"; could we create similar standards for "security by design"? We may need new regulations about what data can be collected—for example, requiring warrants to access any data from inside homes, or HIPAA-like protections for personal data. We may need roboticists to adopt an ethical code similar to the codes professionals in other fields follow, but one that emphasizes privacy, intellectual property, and transparency.

There are a lot of open questions. If you came into this discussion with concerns about the future of social robots, I hope I've managed to address them. But I'll be the first to tell you that our work is not even close to being done. There are many other challenges we still need to tackle, and opening up this conversation is an important first step. Making future technologies and robot companions beneficial for humans, rather than harmful, is going to take effort.

It's a work in progress.

Keep learning, think carefully, dream big

We're not done learning about robot ethics, designing positive technologies, or children's relationships with robots. In my dissertation work, I ask questions about how children think about robots, how they relate to them through time, and how their relationships are different from relationships with other people and things. Who knows: we may yet find that children do, in fact, realize that robots are "just pretending" (for now, anyway), but that kids are perfectly happy to suspend disbelief while they play with those robots.

As more and more robots and smart devices enter our lives, our attitudes toward them may change. Maybe the next generation of kids, growing up with different technology, and different relationships with technology, will think this whole discussion is silly because of course robots take whatever role they take and do whatever it is they do. Maybe by the time they grow up, we'll have appropriate regulations, ethical codes, and industry standards, too.

And maybe—through my work, and through opening up conversations about these issues—our future robot companions will make paper airplanes with us, attend our picnics, and bring us ice cream when we're sad.

small fluffy robot on a table looking at a bowl of ice cream

Miso the robot looks at a bowl of ice cream.

If you'd like to learn more about the topics in this post, I've compiled a list of relevant research and helpful links!

This article originally appeared on the MIT Media Lab website, June 2017

Acknowledgments:

The research I talk about in this post involved collaborations with, and help from, many people: Cynthia Breazeal, Polly Guggenheim, Sooyeon Jeong, Paul Harris, David DeSteno, Rosalind Picard, Edith Ackermann, Leah Dickens, Hae Won Park, Meng Xi, Goren Gordon, Michal Gordon, Samuel Ronfard, Jin Joo Lee, Nick de Palma, Siggi Aðalgeirsson, Samuel Spaulding, Luke Plummer, Kris dos Santos, Rebecca Kleinberger, Ehsan Hoque, Palash Nandy, David Nuñez, Natalie Freed, Adam Setapen, Marayna Martinez, Maryam Archie, Madhurima Das, Mirko Gelsomini, Randi Williams, Huili Chen, Pedro Reynolds-Cuéllar, Ishaan Grover, Nikhita Singh, Aradhana Adhikari, Stacy Ho, Lila Jansen, Eileen Rivera, Michal Shlapentokh-Rothman, Ryoya Ogishima.

This research was supported by an MIT Media Lab Learning Innovation Fellowship and by the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this post are those of the authors and do not represent the views of the NSF.



child leans over tablet showing a storybook, in front of a fluffy robot who is listening

Does the robot's expressivity affect children's learning and engagement?

Reading books is great. Reading picture books with kids is extra great, especially when kids are encouraged to actively process the story materials through dialogic reading (i.e., asking questions, talking about what's happening in the book and what might happen next, and connecting things in the book to other things the kid knows). Dialogic reading can, for example, help kids learn new words and remember the story better.

Since we were already studying how we could use social robots as language learning companions and tutors for young kids, we decided to explore whether social robots could effectively engage preschoolers in dialogic reading. Given that past work has shown that children can and do learn new words from social robots, we decided to also look at what factors may modulate their engagement and learning—such as the verbal expressiveness of the robot.

fluffy robot tells a story to a child, who leans in over a tablet storybook listening

Tega Robot

For this study, we used the Tega robot. Designed and built in the Personal Robots Group, it's a squash-and-stretch robot specifically designed to be an expressive, friendly creature. An Android phone displays an animated face and runs control software. The phone's sensors can be used to capture audio and video, which we can stream to another computer so a teleoperator can figure out what the robot should do next, or, in other projects, as input for various behavior modules, such as speech entrainment or affect recognition. We can stream live human speech, with the pitch shifted up to sound more child-like, to play on the robot, or play back recorded audio files.
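As a sketch of what "figuring out what the robot should do next" can look like in software: the teleoperator's interface only needs to send small commands that the robot-side controller maps onto animations, gaze, and speech. The message format, hostname, and port below are invented for illustration; our actual software stack was different, but the shape of the idea is this:

```python
import json
import socket

# Purely illustrative teleoperation command: the operator's interface sends a
# small JSON message, and robot-side software maps it onto an animation or
# audio playback. The hostname, port, and field names are all invented.
command = {"type": "animation", "name": "excited"}

with socket.create_connection(("tega.local", 9090)) as sock:
    sock.sendall((json.dumps(command) + "\n").encode("utf-8"))
```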

Here is a video showing one of the earlier versions of Tega. Here's research scientist Dr. Hae Won Park talking about Tega and some of our projects, with a newer version of the robot.

Study: Does vocal expressivity matter?

We wanted to understand how the robot's vocal expressiveness might impact children's engagement and learning during a story and dialogic reading activity. So we set up two versions of the robot. One used a voice with a wide range of intonation and emotion. The other read and conversed in a flat voice, which sounded similar to a classic text-to-speech engine and had little dynamic range. Both robots moved and interacted in exactly the same way—the only difference was the voice.

This video shows the robot's expressive and not-so-expressive voices.

Half of the 45 kids in the study heard the expressive voice; the other half heard the flat voice. They heard a story from the robot that had several target vocabulary words embedded in it. The robot asked dialogic questions during reading. Kids were asked to retell the story back to a fluffy purple toucan puppet (who had conveniently fallen asleep during the story and was so sad to have missed it).

We found that all children learned new words from the robot, emulated the robot's storytelling in their own story retells, and treated the robot as a social being. However, children who heard the story from the expressive robot showed deeper engagement, increased learning and story retention, and more emulation of the robot's story in their story retells.

This study provided evidence that children will show peer-to-peer modeling of a social robot's language. They will also emulate the robot's affect, and they will show deeper engagement and learning when the robot is expressive.

child smiling and looking up, beside fluffy robot and fluffy toucan puppet

Links

Publications

  • Kory-Westlund, J., Jeong, S., Park, H. W., Ronfard, S., Adhikari, A., Harris, P. L., DeSteno, D., & Breazeal, C. (2017). Flat versus expressive storytelling: young children's learning and retention of a social robot's narrative. Frontiers in Human Neuroscience, 11. [PDF] [online]


a girl reaches her hand toward the face of a fluffy red robot, which sits on the table in front of her

Socially Assistive Robotics

This project was part of the Year 3 thrust for the Socially Assistive Robotics: An NSF Expedition in Computing grant, which I was involved in at MIT in the Personal Robots Group.

The overall mission of this expedition was to develop the computational techniques that could enable the design, implementation, and evaluation of "relational" robots, in order to encourage social, emotional, and cognitive growth in children, including those with social or cognitive deficits. The expedition aimed to increase the effectiveness of technology-based interventions in education and healthcare and to enhance the lives of children who may require specialized support in developing critical skills.

The Year 1 project targeted nutrition; Year 3 targeted language learning (that's this project!); Year 5 targeted social skills.

Second-language learning companions

This project was part of our effort at MIT to develop robotic second-language learning companions for preschool children. (We did other work in this area too: e.g., several projects looking at what design features positively impact children's learning as well as how children learn and interact over time.)

The project had two main goals. First, we wanted to test whether a socially assistive robot could help children learn new words in a foreign language (in this case, Spanish) more effectively by personalizing its affective/emotional feedback.

Second, we wanted to demonstrate that we could create and deploy a fully autonomous robotic system at a school for several months.

a boy sits at a table with a fluffy robot on it and leans in to peer at the robot's face, while the robot looks down at a tablet

Tega Robot

We used the Tega robot. Designed and built in the Personal Robots Group, it's a squash-and-stretch robot specifically designed to be an expressive, friendly creature. An Android phone displays an animated face and runs control software. The phone's sensors can be used to capture audio and video, which we can stream to another computer so a teleoperator can figure out what the robot should do next, or, in other projects, as input for various behavior modules, such as speech entrainment or affect recognition. We can stream live human speech, with the pitch shifted up to sound more child-like, to play on the robot, or play back recorded audio files.

Here is a video showing one of the earlier versions of Tega. Here's research scientist Dr. Hae Won Park talking about Tega and some of our projects, with a newer version of the robot.

A fluffy red robot sits behind a tablet, which is lying on a table

Language learning game

We created an interactive game that kids could play with a fully autonomous robot and the robot's virtual sidekick, a toucan shown on a tablet screen. The game was designed to support second-language acquisition. The robot and the virtual agent each took on the role of a peer or learning companion and accompanied the child on a make-believe trip to Spain, where they learned new words in Spanish together.

Two aspects of the interaction were personalized to each child: (1) the content of the game (i.e., which words were presented), and (2) the robot's affective responses to the child's emotional state and performance.
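To give a flavor of the second kind of personalization: it can be framed as a small reinforcement-learning problem in which the robot learns which of its affective responses work best for a particular child. The sketch below is a generic, simplified illustration; the states, actions, reward, and parameters are invented rather than taken from our actual system (see the AAAI paper in the publications list for the real details).

```python
import random
from collections import defaultdict

# Illustrative only: a tiny tabular Q-learner that picks the robot's affective
# response based on the child's sensed state. States, actions, reward, and
# parameters are all invented for this sketch.
ACTIONS = ["smile", "nod", "cheer", "console"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_response(state):
    """Epsilon-greedy: usually pick the response that has worked best so far."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state):
    """One-step Q-learning update; reward might come from sensed engagement."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example turn: the child answered correctly but looked disengaged.
state = ("low_engagement", "correct")
action = choose_response(state)
learn(state, action, reward=1.0, next_state=("high_engagement", "correct"))
```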

This video shows the robot, game, and interaction.

scene from a tablet app showing a toucan looking at things in a bedroom: a suitcase, a closet, shirts, balls, a hat

Study

We conducted a 2-month study in three "special start" preschool classrooms at a public school in the Greater Boston Area. Thirty-four children ages 3-5, with 15 classified as special needs and 19 as typically developing, participated in the study.

The study took place over nine sessions: initial assessments, seven sessions playing the language learning game with the robot, and a final session of goodbyes with the robot and posttests.

We found that children learned the new words presented during the interaction, that children mimicked the robot's behavior, and that the robot's affective personalization led to greater positive responses from the children. This study provided evidence that children will engage a social robot as a peer over time, and that personalizing a robot's behavior to each child can lead to positive outcomes, such as greater liking of the interaction.

a girl mimics the head tilt and expression shown by a fluffy robot

Links

Publications

  • Kory-Westlund, J., Gordon, G., Spaulding, S., Lee, J., Plummer, L., Martinez, M., Das, M., & Breazeal, C. (2015). Learning a Second Language with a Socially Assistive Robot. In Proceedings of New Friends: The 1st International Conference on Social Robots in Therapy and Education. (*equal contribution). [PDF]

  • Kory-Westlund, J. M., Lee, J., Plummer, L., Faridi, F., Gray, J., Berlin, M., Quintus-Bosz, H., Hartmann, R., Hess, M., Dyer, S., dos Santos, K., Adalgeirsson, S., Gordon, G., Spaulding, S., Martinez, M., Das, M., Archie, M., Jeong, S., & Breazeal, C. (2016). Tega: A Social Robot. In S. Sabanovic, A. Paiva, Y. Nagai, & C. Bartneck (Eds.), Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction: Video Presentations (p. 561). Best Video Nominee. [PDF] [Video]

  • Gordon, G., Spaulding, S., Kory-Westlund, J., Lee, J., Plummer, L., Martinez, M., Das, M., & Breazeal, C. (2016). Affective Personalization of a Social Robot Tutor for Children's Second Language Skills. Proceedings of the 30th AAAI Conference on Artificial Intelligence. AAAI: Palo Alto, CA. [PDF]

  • Kory-Westlund, J. M., Gordon, G., Spaulding, S., Lee, J., Plummer, L., Martinez, M., Das, M., & Breazeal, C. (2016). Lessons From Teachers on Performing HRI Studies with Young Children in Schools. In S. Sabanovic, A. Paiva, Y. Nagai, & C. Bartneck (Eds.), Proceedings of the 11th ACM/IEEE International Conference on Human-Robot Interaction: alt.HRI (pp. 383-390). [PDF]



a pair of bright, fluffy dragon robots sitting beside each other on a table

Social robots as language learning companions for children

Language learning is, by nature, a social, interactive, interpersonal activity. Children learn language not only by listening, but through active communication with a social actor. Social interaction is critical for language learning.

Thus, if we want to build technology to support young language learners, one intriguing direction is to use robots. Robots can be designed to use the same kinds of social, interactive behaviors that humans use—their physical presence and embodiment give them a leg up in social, interpersonal tasks compared to virtual agents or simple apps and games. They combine the adaptability, customizability, and scalability of technology with the embodied, situated world in which we operate.

The robot we used in these projects is called the DragonBot. Designed and built in the Personal Robots Group, it's a squash-and-stretch robot specifically designed to be an expressive, friendly creature. An Android phone displays an animated face and runs control software. The phone's sensors can be used to capture audio and video, which we can stream to another computer so a teleoperator can figure out what the robot should do next, or, in other projects, as input for various behavior modules, such as speech entrainment or affect recognition. We can stream live human speech, with the pitch shifted up to sound more child-like, to play on the robot, or play back recorded audio files.

Here is a video showing the original DragonBot robot, with a brief rundown of its cool features.

A child and a woman sit in front of a small table, looking at and talking with two fluffy dragon robots that are on the table

Social robots as informants

This was one of the very first projects I worked on at MIT! Funded by an NSF cyberlearning grant, this study and the studies that followed explored several questions regarding preschool children's word learning from social robots, namely:

  • What can make a robot an effective language learning companion?
  • What design features of the robots positively impact children's learning and attitudes?

In this study, we wanted to explore how different nonverbal social behaviors impacted children's perceptions of the robot as an informant and social companion.

We set up two robots. One was contingently responsive to the child—e.g., it would look at the child when the child spoke, it might nod and smile at the right times. The other robot was not contingent—it might be looking somewhere over there while the child was speaking, and while it was just as expressive, the timing of its nodding and smiling had nothing to do with what the child was doing.

For this study, the robots were both teleoperated by humans. I was one of the teleoperators—it was like controlling a robotic muppet!

Each child who participated in the study got to talk with both robots at the same time. The robots presented some facts about unusual animals (i.e., opportunities for the child to learn). We did some assessments and activities designed to give us insight into how the child thought about the robots and how willing they might be to learn new information from each robot—i.e., did the contingency of the robot's nonverbal behavior affect whether kids would treat the robots as equally reliable informants?

We found that children treated both robots as interlocutors and as informants from whom they could seek information. However, children were especially attentive and receptive to whichever robot displayed the greater nonverbal contingency. This selective information seeking is consistent with other recent research showing that children are quite sensitive to their interlocutor's nonverbal signals and use those signals as cues when determining which informants to question or endorse.

In sum: This study provided evidence that children are sensitive to a robot's nonverbal social cues, just as they are with humans, and that they use this information when deciding whether a robot is a credible informant, as they do with humans.

Links

Publications

  • Breazeal, C., Harris, P., DeSteno, D., Kory, J., Dickens, L., & Jeong, S. (2016). Young children treat robots as informants. Topics in Cognitive Science, pp. 1-11. [PDF]

  • Kory, J., Jeong, S., & Breazeal, C. L. (2013). Robotic learning companions for early language development. In J. Epps, F. Chen, S. Oviatt, & K. Mase (Eds.), Proceedings of the 15th ACM International Conference on Multimodal Interaction (pp. 71-72). ACM: New York, NY. [on ACM]

Word learning with social robots

We did two studies specifically looking at children's rapid learning of new words. Would kids learn words with a robot as well as they do from a human? Would they attend to the robot's nonverbal social cues, like they do with humans?

Study 1: Simple word learning

This study was pretty straightforward: Children looked at pictures of unfamiliar animals with a woman, with a tablet, and with a social robot. The interlocutor provided the names of the new animals—new words for the kids to learn. In this simple word-learning task, children learned new words equally well from all three interlocutors. We also found that children appraised the robot as an active, social partner.

In sum: This study provided evidence that children will learn from social robots, and will think of them as social partners. Great!

With that baseline in place, we compared preschoolers' learning of new words from a human and from a social robot in a somewhat more complex learning task...

Two panels: In the first, a child looks at a dragon robot, which looks at her while saying a word; in the second, the child watches the robot look down at a tablet

Study 2: Slightly less simple word learning

When learning from human partners, children pay attention to nonverbal signals, such as gaze and bodily orientation, to figure out what a person is looking at and why. They may follow gaze to determine what object or event triggered another's emotion, or to learn about the goal of another's ongoing action. They also follow gaze in language learning, using the speaker's gaze to figure out which new objects are being referred to or named. Would kids do that with robots, too?

In this study, children viewed two images of unfamiliar animals at once, and their interlocutor (human or robot) named one of the animals. Children needed to monitor the interlocutor's nonverbal cues (gaze and bodily orientation) to determine which picture was being referred to.

We added one more condition: how "big" would the interlocutor's actions need to be for the child to figure out which picture was being referred to? Half the children saw the images close together, so the interlocutor's cues looked similar regardless of which animal was being attended to and named. The other half saw the images farther apart, which made the interlocutor's cues "bigger" and more distinct.

As you might expect, when the images were presented close together, children subsequently identified the correct animals at chance level with both interlocutors. So ... the nonverbal cues weren't distinct enough.

When the images were presented further apart, children identified the correct animals at better than chance level from both interlocutors. Now it was easier to see where the interlocutor was looking!

Children learned equally well from the robot and the human. Thus, this study provided evidence that children will attend to a social robot's nonverbal cues during word learning as a cue to linguistic reference, as they do with people.

Links

Publications

  • Kory-Westlund, J., Dickens, L., Jeong, S., Harris, P., DeSteno, D., & Breazeal, C. (2015). A comparison of children learning from robots, tablets, and people. In Proceedings of New Friends: The 1st International Conference on Social Robots in Therapy and Education. [talk] [PDF]

  • Kory-Westlund, J. M., Dickens, L., Jeong, S., Harris, P. L., DeSteno, D., & Breazeal, C. L. (2017). Children use non-verbal cues to learn new words from robots as well as people. International Journal of Child-Computer Interaction. [PDF]



a young girl hugging a fluffy dragon robot behind a little play table

Click here to see the video showing this project!

Study Overview

For my master's thesis at the MIT Media Lab, I created a social robotic learning companion that played a storytelling game with young kids.

Children's oral language skills in preschool can predict their academic success later in life, so helping children improve their language and vocabulary skills early on could help them succeed down the road. Furthermore, language learning is a highly social, interactive activity. When creating technology to support children's language learning, technology that leverages the same social cues and social presence that people use—such as a social robot—will likely provide more benefit than technology that ignores the critical social aspects of language learning.

As such, in this project, I examined the potential of a social robotic learning companion to support children's early language development over a long-term interaction.

Boy sitting on the floor across a mini table from a dragon robot, looking at the robot intently

Study

The robot was designed as a social character, engaging children as a peer, not as a teacher, within a relational, dialogic context. The robot targeted the social, interactive nature of language learning through a storytelling game that the robot and child played together. The game was on a tablet—the tablet showed a couple of characters that the robot or child could move around while telling their story, much like digital stick puppets. During the game, the robot introduced new vocabulary words and modeled good story narration skills.

Girl moving a picture on a tablet screen, with the tablet inset in a mini table that is between her and a dragon robot

Furthermore, because children may learn better when appropriately challenged, we asked whether a robot that matched the "level" of complexity of its language to the general language ability of the child might help children improve more. For half the children (the Matched condition), the robot told easier or harder stories based on an assessment of the child's general language ability.
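The core of the matching logic can be quite simple. Here is an illustrative sketch; the story corpus, complexity levels, and ability scale are invented, and the real system leveled stories based on a language assessment rather than a lookup like this:

```python
# Illustrative sketch of "matched" story selection: pick the story whose
# language complexity is closest to the child's assessed ability. The story
# names, levels, and ability scale are invented for this sketch.
STORIES = {
    "hide-and-seek": 1.0,
    "the-lost-ball": 2.0,
    "penguin-trip": 3.0,
    "snow-castle": 4.0,
}

def pick_story(child_ability: float) -> str:
    return min(STORIES, key=lambda name: abs(STORIES[name] - child_ability))

print(pick_story(2.4))  # -> "the-lost-ball"
```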

Seventeen preschool children played the storytelling game with the robot eight times each over a two-month period.

I evaluated children's perceptions of the robot and the game, as well as whether the robot's matching influenced (i) whether children learned new words from the robot, (ii) the complexity and style of stories children told, and (iii) the similarity of children’s stories to the robot’s stories. I expected that children would learn more from a robot that matched, and that they would copy its stories and narration style more than they would with a robot that did not match. Children’s language use was tracked across sessions.

Boy touching a screen that is in a mini table that is between him and a dragon robot, the robot is also looking at the table

Results

I found that all children learned new vocabulary words, created new stories during the game, and enjoyed playing with the robot. In addition, children in the Matched condition maintained or increased the amount and diversity of the language they used during interactions with the robot more than children who played with the Unmatched robot.

Understanding how the robot influences children's language, and how a robot could support language development, will inform the design of future learning and teaching companions that engage children as peers in educational play.

Girl looking intently over a mini table at a dragon robot

Links

Publications

  • Kory, J. (2014). Storytelling with robots: Effects of robot language level on children's language learning. Master's Thesis, Media Arts and Sciences, Massachusetts Institute of Technology, Cambridge, MA. [PDF]

  • Kory, J., & Breazeal, C. (2014). Storytelling with Robots: Learning Companions for Preschool Children’s Language Development. In P. A. Vargas & R. Aylett (Eds.), Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE: Washington, DC. [PDF]

  • Kory-Westlund, J., & Breazeal, C. (2015). The Interplay of Robot Language Level with Children's Language Learning during Storytelling. In J. A. Adams, W. Smart, B. Mutlu, & L. Takayama (Eds.), Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction: Extended Abstracts (pp. 65-66). [on ACM]

  • Kory-Westlund, J. (2015). Telling Stories with Green the DragonBot: A Showcase of Children's Interactions Over Two Months. In J. A. Adams, W. Smart, B. Mutlu, & L. Takayama (Eds.), Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction: Extended Abstracts (p. 263). [on ACM] [PDF] [Video] Winner of Best Video Award.

  • Kory-Westlund, J. M., & Breazeal, C. (2019). Exploring the effects of a social robot's speech entrainment and backstory on young children's emotion, rapport, relationships, and learning. Frontiers in Robotics and AI, 6. [PDF] [online]

