
In Robots We Trust

“Siri, who’s considered the greatest football quarterback of all time?” People often ask their smartphones questions like this one and expect an answer that is both immediate and correct.

A branch of research called Explainable AI (or XAI) examines how smart machines can be made more transparent and trustworthy to humans.

The engineers built a clever anthropomorphic AI robot and taught it how to use its pliers-like gripper to mimic the movements of a human hand. To demonstrate for the robot, a volunteer wore a glove wired up with movement sensors that measured not only his hand positions but also the forces he used to open the bottle.

Every position and movement of the volunteer’s hand was translated into simple action words, like “grasp,” “push,” or “twist.” These symbolic descriptions let the robot encode the sequence of steps needed to open the bottle, and they gave it a language for describing its actions back to the team.
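To picture what that symbolic layer might look like, here is a minimal sketch in Python. The action names, the `Action` class, and the `describe` helper are illustrative assumptions rather than the team’s actual code; they simply show how one list of labeled steps can serve both as a plan for the robot and as a script it can read back to people.

```python
# Hypothetical sketch of a symbolic action sequence (not the UCLA team's code).
# Each step is just a named action, so the same list can drive the robot's
# gripper and be read back to people as a plain-language explanation.

from dataclasses import dataclass

@dataclass
class Action:
    name: str        # e.g. "grasp", "push", "twist"
    target: str      # what the action is applied to

def describe(plan: list[Action]) -> str:
    """Turn the symbolic plan into a human-readable explanation."""
    steps = [f"{i + 1}. {a.name} the {a.target}" for i, a in enumerate(plan)]
    return "\n".join(steps)

# An assumed plan for opening a childproof bottle, encoded as action words.
open_bottle = [
    Action("grasp", "bottle"),
    Action("push", "cap"),
    Action("twist", "cap"),
    Action("pull", "cap"),
]

print(describe(open_bottle))
```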

The researchers then handed the robot the bottle, and it tried pushing and twisting the cap in different ways but seldom got the bottle open.

What the team left out was the haptic information. Haptic refers to the feelings associated with your body’s postures and motions: for example, the sensation of your fingers closing together.

The robot had another round with the tricky bottle, this time armed with both the symbolic and haptic components. Success! The robot learned how to open the bottle. The team could follow its learning because the robot described every decision it made in a live readout on a computer screen.
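As a rough way to imagine how the haptic signal fits in, the sketch below pairs each symbolic step with a force check and prints a running explanation of what the robot is doing, in the spirit of the live readout the team watched. The thresholds, sensor readings, and function names are all invented for illustration, not taken from the experiment.

```python
# Illustrative sketch only: symbolic steps paired with made-up haptic checks.
# A real controller would read the gripper's force sensors; here we fake them.

import random

PLAN = [
    # (action word, minimum grip force in newtons that signals the step "took hold")
    ("grasp the bottle", 5.0),
    ("push down on the cap", 12.0),
    ("twist the cap", 8.0),
    ("pull the cap off", 3.0),
]

def read_force() -> float:
    """Stand-in for a haptic sensor reading (random, for illustration)."""
    return random.uniform(0.0, 15.0)

def run(plan) -> None:
    for step, needed in plan:
        felt = read_force()
        ok = felt >= needed
        # The "live readout": the robot narrates each decision as it makes it.
        print(f"I will {step}. I feel {felt:.1f} N; I need {needed:.1f} N -> "
              f"{'continue' if ok else 'adjust my grip and retry'}")
        if not ok:
            # Retry once with a stronger (simulated) grip.
            felt = read_force() + needed
            print(f"Retrying: now I feel {felt:.1f} N, so I {step}.")

run(PLAN)
```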

An audience of 150 people watched the robot struggling with the bottle. The audience was divided into groups, and the robot gave each group a different explanation of the task.

Later, those who were given the complete live explanations said they trusted the robot the most to open the bottle. Between the two types of information, symbolic and haptic, the symbolic component mattered more in fostering people’s trust.

The findings from the UCLA experiment highlight important goals for future AI and robotics research, such as focusing not just on how well a smart machine can do a task, but also on how well it can explain itself to people. For robots to earn a place in people’s daily lives, humans need to trust them.
