Robot Troops Will Follow Orders, Beat You at Rock, Paper, Scissors

The military has a ton of ground robots scurrying around Afghanistan. Too bad they're dumb as puppets, unable to make the slightest move without a human pulling the strings.

But if the U.S. Navy has its way, all that will change. Robots will be able to obey a pointed finger or a verbal command, and then tackle a job without flesh-and-blood micromanagement. Which will free up the hundreds, if not thousands, of troops who today have to spend their time twiddling robot joysticks.

The sea service just issued four contracts to put the plan into action. And a blue-eyed robot that can register people's visual and verbal cues is already on the showroom floor of the Office of Naval Research's sci-tech conference in Virginia this week.

Meet Octavia, a $200,000 "mobile, dexterous, social" robot developed by the Naval Research Laboratory. Though not even two years old, she's already speaking, whirring around on her wheels and playing games with passers-by. Most strikingly, she's got a responsive, cherubic eggshell-white face to make you comfortable talking to her.

Greg Trafton, a cognitive scientist with the Naval Research Laboratory, laughs uncomfortably at the conference when asked if his 5-foot-6-inch, 375-pound Octavia would be used for detonating roadside bombs. He and his team decline to comment on the military applications of the robot, but "she" -- as her creators intermittently call "her" -- is probably too big to disarm bombs in Afghanistan without falling into an irrigation ditch.

Instead, her slender nose, tiny mouth, bulging cheeks and pale complexion are meant to disarm people. Primarily, she's a learning tool for "understanding human-robot interaction," Trafton says. Just imagine how a wounded sailor would react if human-like Octavia pulled him out of a bomb blast.

It may not be so long until your appliances can figure out how burnt you like your toast or how starchy your spouse likes his laundered shirts. The easiest way to figure that out, some engineers reckon, is for you and the robot to have a face-to-face conversation.

Octavia is part of a program to help people learn to collaborate with machines without explicitly programming every individual command. That makes particular sense for the military, where the confusion of a war zone might make talking to a robot the simplest way to get it to do what you want -- on the presumption that it can understand what a soldier's actually saying.

Just ask Octavia to do something and then watch her facial reaction. "If it is confused with what you're saying, it can register confusion," Trafton says, as the robot raises her jutting eyebrows before politely asking if you'd mind repeating yourself. Eventually, the idea goes, you'll acclimate to each other's patterns of interaction.
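
The Navy hasn't released Octavia's code, but the pattern Trafton describes -- low confidence in what the robot heard triggers a visible confused expression and a request to repeat -- is simple enough to sketch. Every name below is a hypothetical stand-in, not the actual software:

```python
# Hypothetical sketch of the confusion-feedback loop Trafton describes.
CONFIDENCE_THRESHOLD = 0.7  # below this, the robot signals confusion

def raise_eyebrows():
    """Stand-in for driving the face motors into a 'confused' expression."""
    print("[face] eyebrows raised: confusion")

def execute_command(text):
    """Stand-in for whatever task execution would actually involve."""
    print(f"[action] carrying out: {text}")
    return "Okay."

def handle_utterance(recognized_text, confidence):
    """Respond to a parsed spoken command, or politely ask for a repeat."""
    if confidence < CONFIDENCE_THRESHOLD:
        raise_eyebrows()
        return "I'm sorry, would you mind repeating that?"
    return execute_command(recognized_text)

# A clearly heard command is executed; a garbled one earns a polite retry.
print(handle_utterance("follow me to the next room", 0.92))
print(handle_utterance("f--low m- t- the n-xt r--m", 0.35))
```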

From Octavia's perspective, that's a matter of relying on the ACT-R software that tells her how to process information from her forehead's infrared sensor, the cameras in her eyes and her audio-input channels. All that helps Octavia create profiles of the people she encounters, from rarely-mutable identifiers like face, speech and complexion to frequently changing ones like clothing.

Like a pet, she remembers you.

Eric Martinson, one of Trafton's colleagues, walks over, places his hands behind his back and calmly says hello to Octavia. "Hello, Eric," the robot replies in a slightly distorted but distinctly female programmed voice.
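
The Navy hasn't detailed how those profiles get matched to the person standing in front of the robot. But one plausible -- and entirely hypothetical -- scheme weights the rarely-mutable identifiers far more heavily than the clothing that changes day to day:

```python
# Hypothetical sketch of profile-based recognition: stable traits (face,
# voice, complexion) count for more than clothing. Not the NRL's code.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    face: str        # stand-ins for real feature vectors
    voice: str
    complexion: str
    clothing: str

# Rarely-mutable identifiers get the heavy weights; clothing barely matters.
WEIGHTS = {"face": 0.4, "voice": 0.3, "complexion": 0.2, "clothing": 0.1}

def similarity(observed, known):
    return sum(w for trait, w in WEIGHTS.items()
               if getattr(observed, trait) == getattr(known, trait))

def identify(observed, known_people):
    best = max(known_people, key=lambda p: similarity(observed, p))
    return best.name if similarity(observed, best) >= 0.5 else "stranger"

eric = Profile("Eric", face="face_A", voice="voice_A",
               complexion="fair", clothing="blue shirt")
# Eric shows up in a different shirt; the stable traits still match.
today = Profile("?", face="face_A", voice="voice_A",
                complexion="fair", clothing="gray jacket")
print(f"Hello, {identify(today, [eric])}")   # -> Hello, Eric
```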

Octavia is the first in a wave of Navy robots designed to pick up on visual or verbal commands. Several others are in the works.

Veteran robot-maker and former Disney Imagineer Andrew Bennett is working with colleagues on "ENLIV-N, an Effective Natural Language Interface for Vehicle Navigation." The idea is to "translate natural language directions and gesture commands into vehicle waypoints." And when the robot can't quite make out what the human tells it to do, it'll "augmen[t] its vocabulary using tagged databases" like Flickr to help it grok the meaning.
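
Bennett's team hasn't published how ENLIV-N will do that translation. But the core step -- turning simple spoken directions into coordinates a vehicle can drive toward -- can be shown with a deliberately naive sketch. Everything below is a hypothetical illustration, not the actual system:

```python
# Hypothetical sketch: parse "forward N" / "turn left|right" phrases into
# (x, y) waypoints, starting at the origin facing +y.
import math
import re

def directions_to_waypoints(text):
    x, y, heading = 0.0, 0.0, math.pi / 2   # facing "north"
    waypoints = [(x, y)]
    for verb, arg in re.findall(r"(forward|turn)\s+(\w+)", text.lower()):
        if verb == "forward":
            dist = float(arg)
            x += dist * math.cos(heading)
            y += dist * math.sin(heading)
            waypoints.append((round(x, 2), round(y, 2)))
        else:  # turn 90 degrees left or right
            heading += math.pi / 2 if arg == "left" else -math.pi / 2
    return waypoints

print(directions_to_waypoints("go forward 10 meters, turn left, forward 5"))
# -> [(0.0, 0.0), (0.0, 10.0), (-5.0, 10.0)]
```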

Bloomington, Indiana's Thinking Robots Inc. is already using spoken commands for search-and-rescue 'bots. In its new, Navy-funded work, the company wants to teach the machines to tell when a human master is "busy or has high cognitive load." The firm also wants its robots to "understand natural spoken instructions and robustly deal with disfluencies and speech errors that are typical for spontaneous speech, also filling in details for instructions that are not explicitly expressed using integrated planning mechanisms."
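
Thinking Robots hasn't said how it handles those disfluencies; production systems lean on statistical models trained on real speech. But the basic idea -- scrub the fillers and stutters before trying to parse the command -- can be shown with a toy sketch (not the company's code):

```python
# Hypothetical toy sketch of disfluency cleanup for spontaneous speech.
import re

FILLERS = {"um", "uh", "er"}

def clean_utterance(text):
    words = re.findall(r"[a-z']+", text.lower())
    result = []
    for word in words:
        if word in FILLERS:
            continue                      # drop filler words
        if result and result[-1] == word:
            continue                      # drop stutter repeats ("go go")
        result.append(word)
    return " ".join(result)

print(clean_utterance("um, go go to the, uh, second door on the left"))
# -> "go to the second door on the left"
```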

Soar Technology, out of Ann Arbor, Michigan, wants to train the 'bots to follow "pointing gestures." If it works, the approach will "enable supervisory control of one or more UGVs [unmanned ground vehicles] and greatly reduce the workload of the UGV operator."

Infoscitex, of Waltham, Massachusetts, promises that its "Unstructured Speech And Gesture Evaluation (USAGE)" software will deliver "a robust human-machine interface that allows human operators to control UGVs naturally, without additional cognitive workload, and without reduction in situational awareness."

Octavia probably qualifies as a human-controlled UGV. She's fully mobile, using a wheeled Segway platform to move around at her human commanders' instruction. (Whether she can climb steep terrain like the robo-mule Big Dog remains to be seen.)

Dozens of tiny motors in her fingers, arms, neck and torso respond to commands from the lab team's network -- "a not-so-standard ethernet network," is all Trafton will say -- to manipulate her remotely. Other commands are preprogrammed. Like when it's playtime.

Octavia, it appears, likes to play Rock, Paper, Scissors to pass the time. Her left hand bunches into a fist with a dim whirring sound from her knuckle motors. She pumps the fist twice to signal that it's game on. Both of us throw our hands flat. "It's a tie," she says after a moment's pause. "I need to try harder."

So she brings it. On the next round, she keeps her fist solid while my hand goes flat again. Then she smooths it out to paper over my rock. "I won," she gloats. No one likes a sore winner, even if she isn't human.
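
The gloating is scripted, but the scoring underneath is the easy part. A hedged guess at how a preprogrammed round might get called (not the NRL's actual code):

```python
# Hypothetical sketch of the round-scoring logic behind the banter.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def call_round(robot, human):
    if robot == human:
        return "It's a tie. I need to try harder."
    return "I won." if BEATS[robot] == human else "You won."

print(call_round("paper", "paper"))   # -> It's a tie. I need to try harder.
print(call_round("paper", "rock"))    # -> I won.
```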

Photo: Spencer Ackerman
