If we were playing the opposite game, channeling our inner childishness, and I were to shout "a robot!", you might respond with something like "a human!" One is mechanical, emotionless, and made of sensors and silicon; the other is alive, empathetic, and made of cells and vital organs. So mustn't these two entities be inherently and entirely distinct? After spending time in the Human-Robot Interaction Lab and working with graduate student Willie Wilson, I can say the answer is clearly and forcefully no!
The gray area between human and machine is expansive. Neurons either fire or don't; one electrical impulse acts just like another. According to the computational theory of mind, brains are information processors, and so are laptops and calculators. Wilson's research runs along these lines but on its own circuit, expanding the human-robot gray territory. His project involves developing algorithms and models for making decisions that are influenced by emotions. In other words, his models will allow robots to compute and act according to the emotional salience of specific and novel situations. While this mechanical response doesn't replicate emotional experience at all, it does have fascinating and practical applications.
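To make the idea concrete, here is a minimal, hypothetical sketch of what "acting according to emotional salience" could look like in code. Everything below (the function name, the scores, the `emotion_weight` knob) is illustrative, not Wilson's actual models: the robot scores each candidate action by its plain task utility plus a weighted emotional-salience term, so setting the weight to zero yields a purely utilitarian chooser and raising it makes the choice more "human-like."

```python
# Hypothetical sketch of emotion-influenced action selection.
# None of these names or numbers come from Wilson's research; they
# only illustrate the general shape of such a model.

def choose_action(actions, utility, salience, emotion_weight=0.5):
    """Pick the action maximizing utility plus weighted emotional salience.

    actions:        list of action names
    utility:        dict mapping action -> task utility (the "cold calculator" score)
    salience:       dict mapping action -> emotional salience of its outcome
    emotion_weight: 0 gives a purely utilitarian chooser; larger values
                    let emotional salience override raw utility
    """
    def score(a):
        return utility[a] + emotion_weight * salience[a]
    return max(actions, key=score)

# Toy scenario: a shortcut is faster but would startle a bystander,
# which we encode as negative emotional salience.
actions = ["shortcut", "long_route"]
utility = {"shortcut": 1.0, "long_route": 0.6}
salience = {"shortcut": -1.0, "long_route": 0.0}

print(choose_action(actions, utility, salience, emotion_weight=0.0))  # shortcut
print(choose_action(actions, utility, salience, emotion_weight=0.5))  # long_route
```

The single `emotion_weight` parameter is, of course, a cartoon of the real research, but it shows why the question in the next paragraph has teeth: somebody has to decide where that dial sits.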
One intriguing question these models raise, and one necessitated by a prudent trek into the future, goes something like this: "When do we want robots to make human-like decisions, and when do we want them to be utilitarian calculators?" Algorithms that account for situational emotional salience allow robots to make human-like decisions. In a realistic timeframe (let's say 20 years) this might affect pancake flippers, laundry folders, dog walkers, and, of course, the quintessential Roomba. Imagination, however, can take us further; consider coding "emotion" into higher-risk machines such as driverless cars, drones, or robotic soldiers and police officers. Questions of ethics, costs, and benefits require human emotional reasoning, without which robots might make lethal mistakes. Conversely, when humans make lethal mistakes, are they deviating from moral norms or upholding immoral ones? Would a cold calculator have made a better decision?
Practically speaking, emotion-sensitive robots give researchers excellent control over the interaction portion of human-robot interaction. This comes in handy during experiments in which human participants react to the robots' actions. Emotion programs shape both the content and the delivery of whatever the robot is saying, along with its facial expressions (yes, facial expressions!).
Besides its innovative research, what's so special about the Human-Robot Interaction Lab and other Tufts labs is their accessibility to undergraduate students. "We are always recruiting people [undergrads] to help run experiments, write code, and research in a variety of ways," said Wilson in an email. "I'm currently working with four undergrads and hope to have at least that many working with me over the summer." Sometimes summer work, or, as it's sometimes known, the chance to help conduct research relevant to the future of technology, is a single email away.