Recent thoughts

Note:

At present, I write here infrequently. You can find my current, regular blogging over at The Deliberate Owl.

silhouette of a person standing arms outstretched in front of a sunset

So where do you get good ideas?

Even at MIT, good ideas don't grow on trees.

Instead, I've found that good ideas have two ingredients: preparation and practice.

1. Preparation. The act of acquiring new knowledge and ideas. The foundation on which my good ideas will be built.

2. Practice. Generate lots of ideas. Engage with ideas in new ways. Think about what's next, what could be changed, what can be improved, how things work, what might happen if, implications, extrapolations.

Here's my method.

Preparation

I read outside my field, especially non-fiction. This gives me new information and new perspectives.

For example, I picked up Vera John-Steiner's book Notebooks of the Mind, a nice qualitative discussion of creativity. I read Cal Newport's Deep Work, which changed how I approach my work time. Peter Gray's book on self-directed education, Free to Learn, is personally relevant; it discusses a lot of education research about how children learn (including anthropology work on hunter-gatherer tribes!), which influenced how I approach my research on kids, robots, and learning. I've read books on laughter, mutual causality and systems theory, the differences between ancient Chinese and Western medicine, the impact of socioeconomic status and race on language and society, the psychophysiology of stress, and many more.

I read papers in my field. I read the "future work" sections in papers I like. These sections are full of researchers' ideas that didn't quite make it into the current project, ways to extend their work, and ways to improve their work.

I try to have a regular academic reading group. Success has varied. With my lab group, some years we've managed to meet weekly! Other years, we're lucky if we meet once a month, if at all. Right now, I'm also in a reading group organized around the broad topic of learning; we've recently read papers on the connections between Piaget and Vygotsky, Bandura's intrinsic motivation theory, and how stress affects learning.

We take turns choosing papers to read, which means I often read papers I may not otherwise have picked up. Some are highly relevant to my work, and some, not so much. One question I always try to ask is "How could I apply the ideas in this paper to my work?" That is, what can I learn from this paper? Having this question in mind helps me ground what I'm reading in what I already know.

Practice

Notebooks: I have one. Several, actually (along with some text files and unsent email drafts). I jot down ideas regularly: thoughts on whatever I'm reading about, interesting things I notice about the world, how concepts connect back to other things I've learned. I review these notes periodically and look for patterns. When deliberating over dissertation topics, I noticed themes in what I highlighted in my notes, which helped me home in on what really interested me. I've developed new research ideas and come up with ways of building on my previous work.

Spend time thinking, processing, summarizing, planning, and synthesizing. For me, this often overlaps with "notebook time," in that I do a lot of this thinking and planning on paper. I find writing time (such as working on a paper) is also synthesizing time: the process of writing coherent paragraphs about a topic means I'm clarifying and summarizing my understanding of that topic at the same time. The important thing, however you do it, is to not only accumulate knowledge but also process what you've learned. I find it important to spend time connecting ideas and deepening my understanding of how different pieces of knowledge fit together.

Use class projects as an opportunity to explore random ideas. I've benefited from the MIT Media Lab's project-heavy class structure, since there's ample space to try out new things, no long-term vision or research agenda required. In my final project for an Affective Computing class, I tested a hypothesis about the impact that introducing a social robot in a particular way might have on people's social judgments of the robot. I've also made light-up balls that change color in response to accelerometer data (we called them glorbs), and created life-size paper robot silhouettes to ask questions about the "aliveness" of robots.

Other people in my lab have, perhaps, gone in wackier directions—for example, two students did a project about enhancing creativity during early stages of sleep, which involved getting people to fall asleep wearing an EEG cap, and having a robot wake them up with questions every time they started to get comfortably dreamy.

I talk to people. For example, in my lab group, we used to all walk downstairs to get tea or coffee from the 3rd floor kitchen at least twice a day. We'd troop back up to the lab, steaming mugs in hand, and stand around bouncing ideas off each other for half an hour before getting back to work. We'd discuss some serious stuff, like the ethics of child-robot interaction, as well as random stuff, like ceiling robots that could unobtrusively steal leftover food from other people's meetings.

I also try to talk to people from outside my lab and outside my field. Hearing from people who see things from a different perspective, or who need me to explain my work in a new way, can be incredibly helpful for gaining new insights.

In all these conversations, notebooks, and classes, I try to keep asking, "And then what?" If my hypotheses are supported, what next? If I'm wrong about something, what are the implications? Where are the opportunities? What might happen if?

That's where my good ideas come from. Preparation and practice.

This article originally appeared on the MIT Graduate Student Blog, June 2018



Exploring how the relational features of robots impact children's engagement and learning

One challenge I've faced in my research is assessment. That's because some of the stuff I'd like to measure is hard to measure—namely, kids' relationships with robots.

a child puts her arm around a fluffy red and blue robot and grins

During one study, the Tega robot asked kids to take a photo with it so it could remember them. We gave each kid a copy of their photo at the end of the study as a keepsake.

I study kids, learning, and how we can use social robots to help kids learn. The social robots I've worked with are fluffy, animated characters more akin to Disney sidekicks than to vacuum cleaners—Tega and its predecessor, DragonBot. Both robots use Android phones to display an animated face; they squash and stretch as they move; they can play back sounds and respond to a variety of sensors.

In my work so far, I've found evidence that the social behaviors of the robot—such as its nonverbal behavior (e.g., gaze and posture), social contingency (e.g., performing the right social behaviors at the right times), and expressivity (such as using a very expressive voice versus a flat/boring one)—significantly impact how much kids learn, how engaged they are in the learning activities, and how credible they think the robot is.

I've also seen kids treat the robot as something kind of like a friend. As I've talked about before, kids treat the robot as something in between a pet, a tutor, and a technology. They show many social behaviors with robots—hugging, talking, tickling, giving presents, sharing stories, inviting them to picnics—and they also show understanding that the robot can turn off and needs battery power to turn back on. In some of our studies, we've asked kids questions about the properties of the robot: Can it think? Can it break? Does it feel tickles? Kids' answers show that they understand the robot is a technological, human-made entity, but also that it shares properties with animate agents.

In many of our studies, we've deliberately tried to situate the robot as a peer. After all, one key way that children learn is through observing, cooperating with, and being in conflict with their peers. Putting the cute, fluffy robot in a peer-like role seemed natural. And over the past six years, I've seen kids mirror robots' behaviors and language use, learning from them the same way they learn from peers.

I began to wonder about the impact of the relational features of the robot on children's engagement and learning: that is, the stuff about the robot that influences children's relationships with it. These relational features include the social behaviors we have been investigating, as well as others: mirroring, entrainment, personalization, change over time in response to the interaction, references to a shared narrative, and more. Some teachers I've talked to have said that it's their relationship with their students that really matters in helping kids learn—what if the same were true with robots?

My hunch—one I'm exploring in my dissertation right now via a 12-week study at Boston-area schools—is that yes: kids' relationships with the robot do matter for learning.

But how do you measure that?

I dug into the literature. As it turns out, psychologists have observed and interviewed children, their parents, and their teachers about kids' peer relationships and friendship quality. There are also scales and questionnaires for assessing adults' relationships, personal space, empathy, and closeness to others.

I ran into two main problems. First, all of the work with kids involved assumptions about peer interactions that didn't hold with the robot. For example, several observation-based methodologies assumed that kids would be freely associating with other kids in a classroom. Frequency of contact and exclusivity were two variables they coded for (higher frequency and more exclusive contact meant the kids were more likely to be friends). Nope: Due to the setup of our experimental studies, kids only had the option of doing a fairly structured activity with the robot once a week, at specific times of the day.

The next problem was that all the work with adults assumed that the experimental subjects would be able to read. As you might imagine, five-year-olds aren't prime candidates for filling out written questionnaires full of "how do you feel about X, Y, or Z on a 1-5 scale." These kids are still working on language comprehension and self-reflection skills.

I found a lot of inspiration, though, including several gems that I thought could be adapted to work with my target age group of 4–6 year-olds. I ended up with an assortment of assessments that tap into a variety of methodologies: questions, interviews, activities, and observations.

three drawings of a robot, with the one on the left frowning, the middle one looking neutral, and the one on the right looking happy

We showed pictures of the robot to help kids choose an initial answer for some of the interview questions. These pictures accompanied the question, 'Let's pretend the robot didn't have any friends. Would the robot not mind or would the robot feel sad?'

We ask kids questions about how they think robots feel, trying to understand their perceptions of the robot as a social, relational agent. For example, one question was, "Does the robot really like you, or is the robot just pretending?" Another was, "Let's pretend the robot didn't have any friends. Would the robot not mind or would the robot feel sad?" For each question, we also ask kids to explain their answer, and whether they would feel the same way. This can reveal a lot about the criteria they use to decide whether the robot has social, relational qualities—for example, the robot's feelings, the actions it takes, the consequences of those actions, or moral rules. One boy thought the robot really liked him "because I'm nice" (i.e., because of the child's attributes), while one girl said the robot liked her "because I told her a story" (i.e., because of actions the child took).

seven cards, each with a picture of a pair of increasingly overlapping circles on it

The set of circles used in our adapted Inclusion of Other in the Self task.

Some of these questions used pictorial response options, such as our adaptation of the Inclusion of Other in the Self scale. In this scale, kids are shown seven pairs of increasingly overlapping circles, and asked to point to the pair of circles that best shows their relationship with someone. We ask not only about the robot, but also about kids' parents, pets, best friends, and a bad guy in the movies. This lets us see how kids rate the robot in relation to other characters in their lives.
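For a sense of what the scoring looks like, here's a minimal sketch in Python—hypothetical code with made-up numbers, not our actual analysis scripts—where each pair of circles maps to a score from 1 (circles barely touching) to 7 (circles almost completely overlapping), and the robot's score can be compared with the other figures a child rated:

```python
# Hypothetical example: recording one child's responses on an adapted
# Inclusion of Other in the Self (IOS) scale, where 1 = circles barely
# overlap (distant) and 7 = circles almost fully overlap (very close).
# The scores below are illustrative only.
ios_responses = {
    "parent": 7,
    "best_friend": 6,
    "pet": 5,
    "robot": 5,
    "movie_bad_guy": 1,
}

def rank_relationships(responses):
    """Sort the rated figures from closest to most distant."""
    return sorted(responses.items(), key=lambda item: item[1], reverse=True)

for figure, score in rank_relationships(ios_responses):
    print(f"{figure}: {score}")

# Which figures did this child rate as closer than the robot?
closer_than_robot = [f for f, s in ios_responses.items()
                     if s > ios_responses["robot"]]
print("Rated closer than the robot:", closer_than_robot)
```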

a girl sits at a table with paper and pictures of different robots and things

This girl is doing the Robot Sorting Task, in which she decides how much like a person each entity is and places each picture in an appropriate place along the line.

Another activity we created asks kids to sort a set of pictures of various entities along a line—entities such as a frog, a cat, a baby, a robot from a movie (like Baymax, WALL-e, or R2D2), a mechanical robot arm, Tega, and a computer. The line is anchored on one end with a picture of a human adult, and on the other with a picture of a table. We want to see not only where kids put Tega in relation to the other entities, but also what kids say as they sort them. Their explanations of why they place each entity where they do can reveal what qualities they consider important for being like a person: The ability to move? Talk? Think? Feel?
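To illustrate one way placements like these could be coded—purely a hypothetical sketch, not the scheme we actually used—each picture's distance from the "table" anchor can be normalized into a 0-to-1 "person-likeness" score:

```python
# Hypothetical coding of the Robot Sorting Task: each entity's placement is
# measured along the line (distance in centimeters from the "table" anchor)
# and normalized so 0.0 = like the table and 1.0 = like the human adult.
LINE_LENGTH_CM = 100.0  # assumed length of the sorting line

placements_cm = {
    "computer": 12.0,
    "robot_arm": 25.0,
    "tega": 58.0,
    "movie_robot": 63.0,
    "frog": 70.0,
    "cat": 78.0,
    "baby": 95.0,
}

def person_likeness(distance_from_table_cm):
    """Normalize a placement into a 0-1 person-likeness score."""
    return distance_from_table_cm / LINE_LENGTH_CM

scores = {entity: person_likeness(d) for entity, d in placements_cm.items()}
for entity, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{entity}: {score:.2f}")
```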

In the behavioral assessments, the robot or experimenter does something, and we observe what kids do in response. For example, when kids played with the robot, we had the robot disclose personal information, such as skills it was good or bad at, or how it felt about its appearance: "Did you know, I think I'm good at telling stories because I try hard to tell nice stories. I also think my blue fluffy hair is cool." Then the robot prompted for information disclosure in return. Because people tend to disclose more information, and more personal or sensitive information, to people to whom they feel closer, we listened to see whether kids disclosed anything to the robot: "I'm good at reading," "I can ride a bike," "My teacher says I'm bad at listening."

a fluffy red and blue tega robot with stickers stuck to its tummy

Tega sports several stickers given to it by one child.

Another activity looked at conflict and kids' tendency to share (like they might with another child). The experimenter holds out a handful of stickers and tells the child and robot that they can each have one. The child is allowed to pick a sticker first. The robot says, "Hey! I want that sticker!" We observe to see if the child says anything or spontaneously offers up their sticker to the robot. (Don't worry: If the child does give the robot the sticker, the experimenter fishes a duplicate sticker out of her pocket for the child.)

Using this variety of assessments—rather than using only questions or only observations—can give us more insight into how kids think and feel. We can see if what kids say aligns with what kids do. We can get at the same concepts and questions from multiple angles, which may give us a more accurate picture of kids' relationships and conceptualizations.

Through the process of searching for the assessments I needed, discovering that nothing quite right existed, and creating new ways of capturing kids' behaviors, feelings, and thoughts, the importance of assessment really hit home. Measurement and assessment are among the most important things I do in research. I could ask any number of questions and hypothesize any number of outcomes, but without performing an experiment and actually measuring something relevant to my questions, I would get no answers.

We've just published a conference paper on our first pilot study validating four of these assessments. The assessments were able to capture differences in children's relationships with a social robot, as expected, as well as how their relationships change over time. If you study relationships with young kids (or simply want to learn more), check it out!

This article originally appeared on the MIT Media Lab website, May 2018

Acknowledgments

The research I talk about in this post was only possible with help from multiple collaborators, most notably Cynthia Breazeal, Hae Won Park, Randi Williams, and Paul Harris.

This research was supported by an MIT Media Lab Learning Innovation Fellowship and by the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not represent the views of the NSF.



Randy, Elian at 8 months (sporting his lab t-shirt!), and me

Starting a family in grad school

I wasn't married when I got to MIT, but I had a boyfriend named Randy who moved up to Boston with me. Two years in, we discover that it is, in fact, possible to simultaneously plan a wedding and write a master's thesis! Two years after that? I'm sitting uncomfortably in a floppy hospital gown at Mt. Auburn Hospital using my husband's phone to forward the reviews I'd just received on a recent journal paper submission, hoping labor doesn't kick in full force before I finish canceling all my meetings and telling people that I'll be taking maternity leave a month sooner than expected.

Baby Elian is born later that night, tiny and perfect. The next three weeks are spent writing my PhD proposal from the waiting room while we wait for Elian to grow big enough to leave the hospital's nursery.

Our decision to have a baby during grad school was not made lightly. For a lot of students, grad school falls smack in the middle of prime mate-finding and baby-making years. But my husband and I knew we wanted kids. We knew fertility decreases over time, and we didn't want to wait too long. In 2016, I was done with classes and on to the purely research part of the PhD program. My schedule was as flexible as it would ever be. Plus, I work with computers and robots—no cell cultures to keep alive, no chemicals I'd be concerned about while pregnant. Randy did engineering contract work (some for a professor at MIT) and was working on a small startup.

Was it the perfect time? As a fellow grad mom told me once, there's never a perfect time. Have babies when you're ready. That's it.

Okay, we agreed, now's the time. It'd be great, right? We'd have this adorable baby, then Randy would stay home most of the time and play with the baby while I finished up school. He'd even have time in the evenings and on weekends to continue his work.

Naiveté, hello.

Since my pregnancy was relatively easy (I got lucky—even my officemate's pickled cabbage and fermented fish didn't turn my stomach), we were optimistic that everything else would go well, too. The preterm birth was a surprise, sure, but maybe that was a fluke in our perfectly planned family adventure. Then it came time for me to go back to the lab full time. I'd read about attachment theory in psychology papers—i.e., the idea that babies form deep emotional bonds to their caregivers, in particular their mothers. Cool theory, interesting implications about social relationships based on the kind of bond babies formed, and all that. It wasn't until the end of my maternity leave, when I handed our wailing three-month-old boy to my husband before walking out the door, that I internalized it: Elian wasn't just sad that I was going away. He needed me. I mean, looking at it from an evolutionary perspective, it made perfect sense. There I was, his primary source of food, shelter, and comfort, walking in the opposite direction. He had no idea where I was going or whether I'd be back. If I were him, I'd wail, too.

Us: 0. Developmental psychology: 1.

Finding a balance

This was going to be more difficult than we'd thought. For various financial and personal reasons, we had already decided not to put the baby in daycare. Other people's stories ("when he started daycare, he cried for a month, but then he got used to it") weren't our cup of tea. But our plan of me spending my days in the lab while the baby stayed home? That didn't pan out either. In addition to Elian's distress at my absence, he generally refused pumped breast milk in favor of crying, hungry and sad.

So, we made new plans. These plans involved bringing Elian to the lab a lot (pretty easy at first: he'd happily wiggle on my desk for hours, entertained by his toes). Coincidentally, that's when I began to feel pressure to prove that what we're doing works. That I can do it. That I can be a woman, who has a baby, who's getting a PhD at MIT, who's healthy and happy and "having it all". "Having it all." No matter what I pick, kids or work or whatever, I'm making a choice about what's important. We all have limited time. What "all" do I want? What do I choose to do with my time? And am I happy with that choice?

Now, Elian's grown up wearing a Media Arts & Sciences onesie and a Personal Robots Group t-shirt. I'm fortunate that I can do this—I have a super supportive lab group and I know this definitely wouldn't work for everyone. Not only does our group do a lot of research with young kids, but my advisor has three kids of her own. My officemate has a six-year-old who I've watched grow up. Several other students have gotten married or had kids during their time here. As a bonus, the Media Lab has a pod for nursing mothers on the fifth floor, and a couple bathrooms even have changing tables. (That said, it's so much faster to just set the baby on the floor, whip off the old diaper, on with the new. If he tries to crawl away mid-change, as is his wont these days, he can only get so far as under my desk.)

Randy comes to campus more now, too. It's common to see him from the Media Lab's glass-walled conference rooms, pacing the hallway with a sleeping baby in a carry pack while he answers emails on his tablet. I feed the baby between meetings, play for a while when Randy needs to run over to the Green Building for a contractor meeting, and it works out okay. We keep Elian from licking the robots, and Elian makes friends from around the world, all of whom are way taller than he is. The best part? He's almost through the developmental stage in which he bursts into tears when he sees them!

I also have the luxury of working from home a lot. That's helped by two things: first, right now, I'm either writing code or writing papers—i.e., laptop? Check. Good to go. Second, my lab has undergone construction multiple times in the past year, so no one else wants to work there either, with all the hammering and paint fumes.

Stronger, faster, better?

But it's not all sunshine, wobbly first steps, and happy baby coos. I think it's harder to be a parent in grad school as a woman. I know several guys who have kids; they can still manage a whole day—or three—of working non-stop, sleeping on a lab couch, all-night hacking sessions, attending conferences in Europe for a week while the baby stays home. Me? Sometimes, if I'm out of sight for five minutes, Elian loses it. Sometimes, we make it three hours. Some nights, waking up to breastfeed a sad, grumpy, teething baby, it's like I'm also pulling all-nighters, but without the getting work done part.

When I'm feeling overwhelmed, I remember a fictional girl named Keladry. The protagonist of Tamora Pierce's Protector of the Small quartet, she was the first girl in the kingdom to openly try to become a knight—traditionally a man's profession (see the parallel to academia?). She followed in the footsteps of another girl, Alanna, who opened the ranks by pretending to be a boy throughout her training, revealing her identity only when she was knighted. I remember Keladry because of the discipline and perseverance she embodied.

I remember her feeling that she had to be stronger, faster, and better than all the boys, because she wasn't just representing herself, she was representing all girls. Sometimes, I feel the same: That as a grad mom, I'm representing all grad moms. I have to be a role model. I have to stick it out, show that not only do I measure up, but that I can excel, despite being a mother. Because of being a mother. I have to show that it's a point in our favor, not a mark against us.

I remember Keladry's discipline: getting up early to train extra hard, working longer to make sure she exceeded the standard. I remember her standing tall in the face of bullies, trying to stay strong when others told her she wasn't good enough and wouldn't make it.

So I get up earlier, writing paper drafts in the dawn light with a sleeping baby nestled beside me. I debug code when he naps (even at 14 months, he still naps twice a day, lucky me). I train UROPs, run experimental studies, analyze data, and publish papers. I push on. I don't have to face down bullies like Keladry, and I'm fortunate to have a lot of support at MIT. But sometimes, it's still a struggle.

When I was talking through my ideas for this blog with other writers, one person said, "I'm not sure how you do it." I didn't have a good answer then, but here's what I should have said: I do it with the help of a super supportive husband, a strong commitment to the life choices I've made, and a large supply of earl grey tea.

This article originally appeared on the MIT Graduate Student Blog, February 2018



me wearing a red dress holding tega, a fluffy red and blue robot

Undervaluing hard work in grad school

"Wow, you're at MIT? You must be a genius!"

Um. Not sure how to answer that. Look down at my shoes. Nervous laugh.

"Uh, thanks?"

The random passerby who saw my MIT shirt and just had to comment on my presumed brilliance seems satisfied with my response. Perhaps the "awkward genius" trope played in my favor?

See, I'm no genius. And I'll let you in on a little secret: Most of us at MIT aren't inherent geniuses, gliding by on the strength of a vast, extraordinary intellect.

We're not born super smart. Instead, we do things the old-fashioned way: with copious amounts of caffeine, liberally applied elbow grease, and emphatic grunts of effort that would make a Cro-Magnon proud.

The reality on campus is not exactly the effortless, glamorous image the media likes to paint. You know, headlines like:

  • MIT physicists create unbelievable new space dimension!
  • MIT scientists discover that chocolate and coffee cure cancer!
  • MIT engineers fly to the moon in a ship they built out of carbon nanotubes and crystal lattices!
  • Look, it's MIT! Land of the Brilliant, the Inventive, the Brave!

The reality is more like the Land of the Confused, the Obstinate, and the "Let's try it again and see if maybe it works this time so we can get at least one significant result for a paper!"

Yes, I'm exaggerating a little. I have, after all, met a ton of amazing, brilliant people here -- but they're amazing and brilliant because of their effort, curiosity, tenacity, and enthusiasm. Not their inherent genius. None of them are little cartoon figures with cartoon lightbulbs flashing around them like strobe lights as they are struck with amazing idea after amazing idea.

They're people like my labmate, who routinely shows up late to group meetings because he accidentally stays up all night trying to implement some cool machine learning algorithm he found in an obscure-but-possibly-relevant paper (eventually, I'm sure, the effort will pay off!).

They're people like my professors, who set aside entire days each week just for meeting with their students, to hash out ideas and go over paper drafts.

They're people like me, who spend 260% more effort than strictly necessary on making a child-robot interaction flow right, even though the study would probably be fine with subpar dialogue (for the curious: I work on fluffy robots that help kids learn stuff).

The reality is long hours in the library—reading papers, trying to understand what other people have already done and how it relates to my research—and long hours in the lab—trying to put that understanding to use (often learning in the process that I didn't really understand something after all and should probably do more reading).

I think MIT's reputation for being full of inherent geniuses gives many of us the short end of the stick and fails to recognize the sheer amount of hard work and failure that goes into nearly every discovery and invention. Sure, sometimes people get lucky.

There are certainly a few things that someone got right the first time, but let's be honest. The last time my Python code ran on the first try, I went looking for bugs anyway because that never happens (and I was right; hours later, there were still bugs aplenty). Likewise, the last time I got a really interesting experimental result, it was after months of thinking and re-planning, months of programming and testing on the robot, and months of wrangling participants in the lab. All the amazing insights that show up in the final paper draft only come after a lot of analysis, realizing the analysis missed something, rewriting all the R code to do the analysis right, and re-analyzing.

Think of it this way: if a PhD student has signed on to work in a lab for the next indefinite-but-hopefully-only-five-or-maybe-seven years (with a small stipend if they're lucky) and has no idea what magical, impactful dissertation topic will be their ticket out, they're probably already the kind of person who likes a challenge. Maybe perseverance is their middle name.

And that's what I think being at MIT is actually about: Learning to fail, struggling to succeed, and knowing the value in the struggle.

The real "geniuses" I know are people who just want to know what's going on and are okay with doing a lot of hard work to find out.

They're people who keep asking "and then what? and then what?" after they learn something, and spend months or years chasing down answers. For example: "So I find that 5-year-olds mirror the robot's phrases when playing storytelling games with it, and learn more when they do—Why? What does this say about rapport and peer learning? What modulates this effect? What are the implications for educational technology more generally?"

They're people who dive wholeheartedly into each rabbit hole to see how far it goes and what useful tidbits of scientific knowledge can be gleaned along the way.

They're people who keep probing. Sometimes, that leads to dramatic headlines. More often, it doesn't.

This article originally appeared on the MIT Graduate Student Blog, February 2018



Hi, my name is Mox!

This story begins in 2013, in a preschool in Boston, where I hide, with laptop, headphones, and microphone, in a little kitchenette. Ethernet cables trail across the hall to the classroom, where 17 children eagerly await their turn to talk to a small fluffy robot.

fluffy blue dragonbot robot

Dragonbot is a squash-and-stretch robot designed for playing with young children.

"Hi, my name is Mox! I'm very happy to meet you."

The pitch of my voice is shifted up and sent over the somewhat laggy network. My words, played by the speakers of Mox the robot and picked up by its microphone, echo back with a two-second delay into my headphones. It's tricky to speak at the right pace, ignoring my own voice bouncing back, but I get into the swing of it pretty quickly.
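For the curious, here's a rough sketch of the kind of pitch shifting involved, written as hypothetical Python using the librosa library. Our actual teleoperation setup shifted the voice in real time over the network; this simple offline example only shows the pitch-shift step itself, with made-up file names:

```python
# Hypothetical, simplified example of shifting an operator's voice up in
# pitch before playing it through a robot's speaker. Offline sketch only,
# not the real-time teleoperation system described above.
import librosa
import soundfile as sf

# Load a short recording of the operator's voice.
voice, sample_rate = librosa.load("operator_voice.wav", sr=None)

# Shift the pitch up by four semitones so the voice sounds more like a
# small, friendly robot character.
robot_voice = librosa.effects.pitch_shift(voice, sr=sample_rate, n_steps=4)

# Save the shifted audio so it can be played through the robot's speaker.
sf.write("mox_voice.wav", robot_voice, sample_rate)
```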

We're running show-and-tell at the preschool on this day. It's one of our pilot tests before we embark on an upcoming experimental study. The children take turns telling the robot about their favorite animals. The robot (with my voice) replies with an interesting fact about each animal: "Did you know that capybaras are the largest rodents on the planet?" (Yes, one five-year-old's favorite animal is a capybara.) Later, we share how the robot is made and talk about motors, batteries, and 3D printers. We show them the teleoperation interface for remote-controlling the robot. All the kids try their hand at triggering the robot's facial expressions.

Then one kid asks if he can teach the robot how to make a paper airplane.

two paper airplanes, one has been colored on by a young child

Two paper airplanes that a child gave to DragonBot.

We'd just told them all how the robot was controlled by a human. I ask: Does he want to teach me how to make a paper airplane?

No, the robot, he says.

Somehow, there was a disconnect between what he had just learned about the robot and the robot's human operator, and the character that he perceived the robot to be.

Relationships with robots?

girl reaching across table to touch a fluffy robot's face

A child touches Tega's face while playing a language learning game.

In the years since that playtest, I've watched several hundred children interact with both teleoperated and autonomous robots. The children talk with the robots. They laugh. They give hugs, drawings, and paper airplanes. One child even invited the robot to his preschool's end-of-year picnic.

Mostly, though, I've seen kids treat the robots as social beings. But not quite like how they treat people. And not quite like how they treat pets, plants, or computers.

These interactions were clues: There's something interesting going on here. Children ascribed physical attributes to robots—they can move, they can see, they can feel tickles—but also mental attributes: thinking, feeling sad, wanting companionship. A robot could break, yes, and it is made by a person, yes, but it can be interested in things. It can like stories; it can be nice. Maybe, as one child suggested, if it were sad, it would feel better if we gave it ice cream.

girl hugs fluffy dragon robot in front of a small play table

A child listens to DragonBot tell a story during one of our research studies.

Although our research robots aren't commercially available, investigating how children understand robots isn't merely an academic exercise. Many smart technologies are joining us in our homes: Roomba, Jibo, Alexa, Google Home, Kuri, Zenbo...the list goes on. Robots and AI are here, in our everyday lives.

We ought to ask ourselves, what kinds of relationships do we want to have with them? Because, as we saw with the children in our studies, we will form relationships with them.

We see agency everywhere

One reason we can't help forming relationships with robots is that humans have evolved to see agency and intention everywhere. If an object moves independently in an apparently goal-directed way, we interpret that as agency—that is, we see the object as an agent. Even with something as simple as a couple of animated triangles moving around on a screen, we look for, and project, agency and intentionality.

If you think about the theory of evolution, this makes sense. Is the movement I spotted out of the corner of my eye just a couple of leaves dancing in the breeze, or is it a tiger? My survival relies on assuming it's a tiger: a false alarm costs me little, but missing a real tiger costs me everything.

But relationships aren't built on merely recognizing other agents; relationships are social constructs. And, humans are uniquely—unequivocally—social creatures. Social is the warp and weft of our lives. Everything is about our interactions with others: people, pets, characters in our favorite shows or books, even our plants or our cars. We need companionship and community to thrive. We pay close attention to social cues—like eye gaze, emotions, politeness—whether these cues come from a person...or from a machine.

Researchers have spent the past 25 years showing that humans respond to computers and machines as if those objects were people. There's even a classic book, published by Byron Reeves and Clifford Nass in 1996, titled The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Among their findings: people assign personalities to digital devices, and people are polite to computers—for example, they evaluate a computer more positively when they have to tell it "to its face." Merely telling people a computer is on their team leads them to rate it as more cooperative and friendly.

Research since that book has shown again and again that these findings still hold: Humans treat machines as social beings. And this brings us back to my work now.

Designing social robots to help kids

I'm a PhD student in the Personal Robots Group. We work in the field of human-robot interaction (HRI). HRI studies questions such as: How do people think about and react to robots? How can we make robots that will help people in different areas of their lives—like manufacturing, healthcare, or education? How do we build autonomous robots, including algorithms for perception, social interaction, and learning? At the broadest scale, HRI encompasses anything where humans and robots come into contact and do things with, or near, each other.

jacqueline holding the red and blue stripy fluffy tega robot, wearing a red dress

Look, we match!

As you might guess based on the anecdotes I've shared in this post, the piece of HRI I'm working on is robots for kids.

There are numerous projects in our group right now focusing on different aspects of this: robots that help kids in hospitals, robots that help kids learn programming, robots that promote curiosity and a growth mindset, robots that help kids learn language skills.

In my research, I've been asking questions like: Can we build social robots that support children's early language and literacy learning? What design features of the robots affect children's learning—like the expressivity of the robot's voice, the robot's social contingency, or whether it provides personalized feedback? How, and what, do children think about these robots?

Will robots replace teachers?

When I tell people about the Media Lab's work with robots for children's education, a common question is: "Are you trying to replace teachers?"

To allay concerns: No, we aren't.

(There are also some parents who say that's nice, but can you build us some robot babysitters, soon, pretty please?)

We're not trying to replace teachers for two reasons:

  1. We don't want to.
  2. Even if we wanted to, we couldn't.

Teachers, tutors, parents, and other caregivers are irreplaceable. Despite all the research that seems to point to the conclusion "robots can be like people," there are also studies showing that children learn more from human tutors than from robot tutors. Robots don't have all the capabilities that people do for adapting to a particular child's needs. They have limited sensing and perception, especially when it comes to recognizing children's speech. They can't understand natural language (and we're not much closer to solving the underlying symbol grounding problem). So, for now, however often science fiction would have us believe otherwise (androids, cylons, terminators, and so on), robots are not human.

Even if we eventually get to the point where robots do have all the necessary human-like capabilities to be like human teachers and tutors—and we don't know how far in the future that would be, or whether it's even possible—humans are still the ones building the robots. We get to decide what we build. In our lab, we want to build robots that help humans and support human flourishing. That said, saying we want to build helpful robots only goes so far. There's still more work to do to ensure that all the technology we build is beneficial, and not harmful, for humans. More on that later in this post.

a mother sits with her son holding a tablet

A mother reads a digital storybook with her child.

The role we foresee for robots and similar technologies is complementary: they are a new tool for education. Like affective pedagogical agents and intelligent tutoring systems, they can provide new activities and new ways of reaching kids. The teachers we've talked to in our research are excited about the prospects. They've suggested that the robot could provide personalized content, or connect learning in school to learning at home. We think robots could supplement what caregivers already do, support them in their efforts, and scaffold or model beneficial behaviors that caregivers may not know to use, or may not be able to use.

For example, one beneficial behavior during book reading is asking dialogic questions—that is, questions that prompt the child to think about the story, predict what might happen next, and engage more deeply with the material. Past work from our group has shown that when you add a virtual character to a digital storybook who models this dialogic questioning, it can help parents learn what kinds of questions they can ask, and remember to ask them more often.

In another Media Lab project, Natalie Freed—an alum of our group—made a simple vocabulary-learning game with a robot that children and their parents played together. The robot's presence encouraged communication and discussion. Parents guided and reinforced children's behavior in a way that aligned with the language learning goals of the game. Technology can facilitate child-caregiver interactions.

In summary, in the Personal Robots Group, we want our robots to augment existing relationships between children and their families, friends, and caregivers. Robots aren't human, and they won't replace people. But they will be robots.

Robots are friends—sort of?

In our research, we hear a lot of children's stories. Some are fictional: tales of penguins and snowmen, superheroes and villains, animals playing hide-and-seek and friends playing ball. Some are real: robots who look like rock stars, who ask questions and can listen, who might want ice cream when they're sad.

Such stories can tell you a lot about how children think. And we've found that not only will children learn new words and tell stories with robots, they think of the robots as active social partners.

In one study, preschool children talked about their favorite animals with two DragonBots, Green and Yellow. One robot was contingently responsive: it nodded and smiled at all the right times. The other was just as expressive, but not contingent—you might be talking, and it might be looking behind you, or it might interrupt you to say "mmhm!", rather than waiting until a pause in your speech.

a yellow dragonbot and a green dragonbot sitting on a table

Two DragonBots, ready to play!

Children were especially attentive to the more contingent robot, spending more time looking at it. We also asked children a couple questions to test whether they thought the robots were equally reliable informants. We showed children a new animal and asked them, "Which robot do you want to ask about this animal's name?" Children chose one of the robots.

But then each robot provided a different name! So we asked: "Which robot do you believe?" Regardless of which robot they had initially chosen (though most chose the contingent robot), almost all the children believed the contingent robot.

This targeted information seeking is consistent with previous psychology and education research showing that children are selective in choosing whom to question or endorse. They use their interlocutor's nonverbal social cues to decide how reliable that person is, or how reliable that robot is.

Then we performed a couple other studies to learn about children's word learning with robots. We found that here, too, children paid attention to the robot's social cues. As in their interactions with people, children followed the robot's gaze and watched the robot's body orientation to figure out which objects the robot was naming.

We looked at longer interactions. Instead of playing with the robot once, children got to play seven or eight times. For two months, we observed children take turns telling stories with a robot. Did they learn? Did they stay engaged, or get bored? The results were promising: The children liked telling their own stories to the robot. They copied parts of the robot's stories—borrowing characters, settings, and even some of the new vocabulary words that the robot had introduced.

We looked at personalization. If you have technology, after all, one of the benefits is that you can customize it for individuals. If the robot "leveled" its stories to match the child's current language abilities, would that lead to more learning? If the robot personalized the kinds of motivational strategies it used, would that increase learning or engagement?
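As a concrete illustration of what "leveling" could mean, here's a small hypothetical sketch (not the actual system used in our studies) of a simple policy that nudges the robot's story difficulty up or down based on how the child has been doing; the thresholds and level range are made up for the example:

```python
# Hypothetical story-leveling policy: raise the story's difficulty when the
# child is doing well, lower it when they are struggling.
MIN_LEVEL, MAX_LEVEL = 1, 10

def next_story_level(current_level, recent_scores,
                     raise_threshold=0.8, lower_threshold=0.4):
    """Pick the next story's difficulty from recent performance.

    recent_scores: fractions of target vocabulary words the child used
    correctly in the last few sessions (values between 0 and 1).
    """
    if not recent_scores:
        return current_level  # no data yet, keep the current level
    average = sum(recent_scores) / len(recent_scores)
    if average >= raise_threshold:
        return min(current_level + 1, MAX_LEVEL)
    if average <= lower_threshold:
        return max(current_level - 1, MIN_LEVEL)
    return current_level

# Example: a child who used 85% and 90% of the target words moves up a level.
print(next_story_level(3, [0.85, 0.90]))  # -> 4
```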

a girl sits across from a dragon robot at a small play table

A girl looks up at DragonBot during a storytelling game.

Again and again, the results pointed to one thing: Children responded to these robots as social beings. Robots that acted more human-like—being more expressive, being responsive, personalizing content and responses—led to more engagement and learning by the children; even how expressive the robot's voice was mattered. When we compared a robot that had a really expressive voice to one that had a flat, boring voice (like a classic text-to-speech computer voice), we saw that with the expressive robot, children were more engaged, remembered the story more accurately, and used the key vocabulary words more often.

All these results make sense: There's a lot of research showing that these kinds of "high immediacy" behaviors are beneficial for building relationships, teaching, and communicating.

Beyond learning, we also looked at how children thought and felt about the robot.

We looked at how the robot was introduced to children: If you tell them it's a machine, rather than introducing it as a friend, do children treat the robot differently? We didn't see many differences. In general, children reacted in the moment to the social robot in front of them. You could say "it's just a robot, Frank," but like the little boy I mentioned earlier who wanted to teach the robot how to make a paper airplane, they didn't really get the distinction.

Or maybe they got it just fine, but to them, what it means to be a robot is different from what we adults think it means to be a robot.

Across all the studies, children claimed the robot was a friend. They knew it couldn't grow or eat like a person, but—as I noted earlier—they happily ascribed it with thinking, seeing, feeling tickles, and being happy or sad. They shared stories and personal information. They taught each other skills. Sure, the kids knew that a person had made the robot, and maybe it could break, but the robot was a nice, helpful character that was sort of like a person and sort of like a computer, but not really either.

And there was that one child who invited the robot to a picnic.

For children, the ontologies we adults know—the categories we see as clear-cut—are still being learned. Is something real, or is it pretending? Is something a machine, or a person? Maybe it doesn't matter. To a child, someone can be imaginary and still be a friend. A robot can be in between other things. It can be not quite a machine, not quite a pet, not quite a friend, but a little of each.

But human-robot relationships aren't authentic!

One concern some people have when talking about relationships with social robots is that the robots are pretending to be a kind of entity that they are not—namely, an entity that can reciprocally engage in emotional experiences with us. That is, they're inauthentic: they provoke undeserved and unreciprocated emotional attachment, trust, caring, and empathy.

But why must reciprocality be a requirement for a significant, authentic relationship?

People already attach deeply to a lot of non-human things. People already have significant emotional and social relationships that are non-reciprocal: pets, cars, stuffed animals, favorite toys, security blankets, and pacifiers. Fictional characters in books, movies, and TV shows. Chatbots and virtual therapists, smart home devices, and virtual assistants.

A child may love their dog, and you may clearly see that the dog "loves" the child back, but not in a human-like way. We aren't afraid that the dog will replace the child's human relationships. We acknowledge that our relationships with our pets, our friends, our parents, our siblings, our cars, and our favorite fictional characters are all different, and all real. Yet the default assumption is generally that robots will replace human relationships.

If done right (more on that in a moment), human-robot relationships could just be one more different kind of relationship.

So we can make relational robots? Should we?

When we talk about how we can make robots that have relationships with kids, we also have to ask one big lurking question:

Should we?

Social robots have a lot of potential benefits. Robots can help kids learn; they can be used in therapy, education, and healthcare. How do we make sure we do it "right"? What guiding principles should we follow?

How do we build robots to help kids in a way that's not creepy and doesn't teach kids bad behaviors?

I think caring about building robots "right" is a good first step, because not everybody cares, and because it's up to us. We humans build robots. If we want them not to be creepy, we have to design and build them that way. If we want socially assistive robots instead of robot overlords, well, that's on us.

a drawing of two robots on a whiteboard

Tega says, 'What do you want to do tonight, DragonBot?' Dragonbot responds, 'The same thing we do every night, Tega! Try to take over the world!'

Fortunately, there's growing international, multidisciplinary interest in studying the ethics of placing robots in people's lives. For example, the Foundation for Responsible Robotics is thinking about future policy around robot design and development. The IEEE Standards Association has an initiative on ethical considerations for autonomous systems. The Open Roboethics initiative polls relevant stakeholders (like you and me) about important ethical questions, to find out what people who aren't necessarily "experts" think: Should robots make life-or-death decisions? Would you trust a robot to take care of your grandma? There are an increasing number of workshops on robot policy and ethics at major robotics conferences—I've attended some myself. There's a whole conference on law and robots.

The fact that there's multidisciplinary interest is crucial. Not only do we have to care about building robots responsibly, but we also have to involve a lot of different people in making it happen. We have to work with people from related industries who face the same kinds of ethical dilemmas because robots aren't the only technology that could go awry.

We also have to involve all the relevant stakeholders—a lot more people than just the academics, designers, and engineers who build the robots. We have to work with parents and children. We have to work with clinicians, therapists, teachers. It may sound straightforward, but it can go a long way toward making sure the robots help and support the people they're supposed to help and support.

We have to learn from the mistakes made by other industries. This is a hard one, but there's certainly a lot to learn from. When we ask if robots will be socially manipulative, we can see how advertising and marketing have handled manipulation, and how we can avoid some of the problematic issues. We can study other persuasive technologies and addictive games. We can learn about creating positive behavior change instead. Maybe, as was suggested at one robot ethics workshop, we could create "warning labels" similar to nutrition labels or movie ratings, which explain the risks of interacting with particular technologies, what the technology is capable of, or even recommended "dosage", as a way of raising awareness of possible addictive or negative consequences.

For managing privacy, safety, and security, we can see what other surveillance technologies and internet-of-things devices have done wrong—such as not encrypting network traffic and failing to inform users of data breaches in a timely manner. Manufacturing already has standards for "safety by design"; could we create similar standards for "security by design"? We may need new regulations regarding what data can be collected—for example, requiring warrants to access any data from inside homes, or HIPAA-like protections for personal data. We may need roboticists to adopt an ethical code similar to the codes professionals in other fields follow, but one that emphasizes privacy, intellectual property, and transparency.

There are a lot of open questions. If you came into this discussion with concerns about the future of social robots, I hope I've managed to address them. But I'll be the first to tell you that our work is not even close to being done. There are many other challenges we still need to tackle, and opening up this conversation is an important first step. Making future technologies and robot companions beneficial for humans, rather than harmful, is going to take effort.

It's a work in progress.

Keep learning, think carefully, dream big

We're not done learning about robot ethics, designing positive technologies, or children's relationships with robots. In my dissertation work, I ask questions about how children think about robots, how they relate to them through time, and how their relationships are different from relationships with other people and things. Who knows: we may yet find that children do, in fact, realize that robots are "just pretending" (for now, anyway), but that kids are perfectly happy to suspend disbelief while they play with those robots.

As more and more robots and smart devices enter our lives, our attitudes toward them may change. Maybe the next generation of kids, growing up with different technology, and different relationships with technology, will think this whole discussion is silly because of course robots take whatever role they take and do whatever it is they do. Maybe by the time they grow up, we'll have appropriate regulations, ethical codes, and industry standards, too.

And maybe—through my work, and through opening up conversations about these issues—our future robot companions will make paper airplanes with us, attend our picnics, and bring us ice cream when we're sad.

small fluffy robot on a table looking at a bowl of ice cream

Miso the robot looks at a bowl of ice cream.

If you'd like to learn more about the topics in this post, I've compiled a list of relevant research and helpful links!

This article originally appeared on the MIT Media Lab website, June 2017

Acknowledgments:

The research I talk about in this post involved collaborations with, and help from, many people: Cynthia Breazeal, Polly Guggenheim, Sooyeon Jeong, Paul Harris, David DeSteno, Rosalind Picard, Edith Ackermann, Leah Dickens, Hae Won Park, Meng Xi, Goren Gordon, Michal Gordon, Samuel Ronfard, Jin Joo Lee, Nick de Palma, Siggi Aðalgeirsson, Samuel Spaulding, Luke Plummer, Kris dos Santos, Rebecca Kleinberger, Ehsan Hoque, Palash Nandy, David Nuñez, Natalie Freed, Adam Setapen, Marayna Martinez, Maryam Archie, Madhurima Das, Mirko Gelsomini, Randi Williams, Huili Chen, Pedro Reynolds-Cuéllar, Ishaan Grover, Nikhita Singh, Aradhana Adhikari, Stacy Ho, Lila Jansen, Eileen Rivera, Michal Shlapentokh-Rothman, Ryoya Ogishima.

This research was supported by an MIT Media Lab Learning Innovation Fellowship and by the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not represent the views of the NSF.

