Split-hook terminal devices (1960s-1980s)
Each of these split hooks is operated by a cable that runs from a small thumb-like projection to a lightweight shoulder harness. When the wearer moves their arm forward or shrugs the opposite shoulder, the cable tugs at the thumb and opens the hook.
Photo: Andres Serrano
Even so, the system is brittle, easily flummoxed by anything outside the tai-chi startup sequence. The muscles and nerves in Lehman’s stump overlap, making the signal noisy, ambiguous. “If you raise your arm, you can get interference from other muscles,” says Todd Kuiken, who invented the reinnervation procedure at the Rehabilitation Institute’s Center for Bionic Medicine. “Your deltoid starts to fire, and the weight of the arm is enough to change the firing pattern. That could put it outside the envelope of what works.”
And if Lehman tells the arm to do something it doesn’t understand? To make a fluid move instead of one built out of its vocabulary of bends, swivels, and pinches? “The arm will do something,” says Levi Hargrove, one of its developers, “but it won’t do that. It’ll do its best guess.” This problem bedevils all of the current research in prosthetics. Today’s computer-controlled prosthetic arms can carry out only a few commands. “Sometimes patients come in and they’re expecting a super-arm that works as good as their old one,” Kuiken says. “And you have to say no, I’m sorry, they aren’t that good.”
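The "best guess" behavior falls out of how nearest-match classification works. The toy sketch below is purely illustrative (the feature values and command names are invented stand-ins for real EMG processing, not the Center for Bionic Medicine's software): an unfamiliar signal still gets forced onto whichever known command is closest, so the arm always does *something*.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary: each command is a cluster of EMG feature vectors.
COMMANDS = ["bend elbow", "swivel wrist", "pinch"]
centroids = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
training = {c: centroids[i] + 0.05 * rng.standard_normal((20, 2))
            for i, c in enumerate(COMMANDS)}

def classify(features):
    """Return the nearest known command; there is no 'unknown' answer."""
    means = np.array([training[c].mean(axis=0) for c in COMMANDS])
    dists = np.linalg.norm(means - features, axis=1)
    return COMMANDS[int(np.argmin(dists))]

# A clean signal near the 'pinch' cluster decodes correctly...
print(classify(np.array([1.0, 1.0])))
# ...but a fluid motion outside the vocabulary still maps to some command.
print(classify(np.array([0.6, 0.2])))
```

The second call illustrates the problem Hargrove describes: the input matches nothing well, yet the classifier must pick a winner anyway.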
They were supposed to be better. A decade ago, researchers seemed on the cusp of creating a working interface between body and machine. Even back then, arms controlled by myoelectric signals were old news; more-advanced limbs would read commands directly from the brain. In 2003 scientists at Duke announced that monkeys could control a robotic arm via electrodes implanted in their brains. A year later, a similar device allowed a quadriplegic human patient at Brown University to play Pong with his thoughts. In 2008 researchers at the University of Pittsburgh showed off a monkey that could use a neurally controlled robotic arm to eat marshmallows. Surely if a monkey could use a robot arm to feed itself, it wouldn’t be long before amputees used them to tie their shoes and pilots flew jets with their minds.
Such advances are needed more than ever. There are approximately 185,000 limb amputations in the US every year—the causes range from diabetes to workplace injuries to battlefield trauma. In 2005, the most recent year for which data is available, 1.6 million Americans were living without a limb. As of January 2012 there were 1,421 amputees from the Afghanistan war and the second Iraq war. Of those, 254 lost upper limbs, like Lehman, and 420 lost more than one limb. Many of the rest, presumably, lost single legs or feet, but the statistics don’t break out those details. In fact, the numbers seem low—but they’re the best ones the military will share.
What the military will share, though, is money to study limb replacements. In 2006 the Defense Advanced Research Projects Agency began a program to build, in four years, an arm “directly controlled by neural signals” that would have abilities “almost identical to a natural limb in terms of motor control and dexterity, sensory feedback (including proprioception), weight, and environmental resilience.” With its deadline now three years past, those ambitions look wildly unrealistic. After $153 million in funding and years of engineering, the best any amputee can get is a heavy, clunky arm that moves slowly, can’t feel anything, and often misreads its user’s intentions.
“The human arm is amazing,” says Rahul Sarpeshkar, a bioengineer at MIT who pioneered the design of ultralow-power circuitry for bionic interfaces. “It does a lot of very intelligent local computation that the brain doesn’t even do. We don’t understand the coding schemes that biology employs. We don’t understand how its feedback loops work together.” In other words, the science hasn’t yet caught up with the fiction. A true bionic limb—one that responded to mental commands with precision and fluidity, one that transmitted sensory information, one that its user could feel as it moved through space—would require a depth of understanding and technological complexity that is simply beyond today’s prosthetic experts. “It’s not that we’re not going to be able to do it,” Sarpeshkar says. “But it’s higher-hanging fruit than people think.” Which is to say, this is more than just an engineering problem. It’s a problem of basic science.
Across the street from the Rehabilitation Institute of Chicago, a rhesus monkey named Thor is strapped into a chair. His tiny hand clutches a metal handle on the end of a makeshift arm mounted below a computer monitor. The arm is hinged; Thor can’t raise or lower it, but he can move it side to side and forward and back, like a salt shaker on a tabletop. And when Thor moves the handle, a yellow dot on the screen moves in the corresponding direction. The monkey gets a sip of juice when he steers the dot into a box on the screen. What Thor doesn’t know is that the handle isn’t wired to anything. It’s just a mechanical trick to make his brain issue the right commands. A tiny square array of 100 electrodes, wired to a computer, is resting on the part of his brain that controls his arm. The lab around Thor, run by a neuroscientist named Lee Miller, looks well used—scuffed floor tiles, jerry-rigged and discarded machine parts on shelves. A cat’s brain floats in a jar, unregarded, while a stack of computers and other electronic equipment whirs in the corner.
The researchers have spent months analyzing what Thor’s brain does when his hand moves. They start by reading his motor output—letting Thor work the handle when it’s wired to the cursor and trying to correlate his movements to patterns of neural activity. Then they set up the computer to do the reverse: to infer his arm motion by watching neural output.
While Thor works, dozens of pink, yellow, blue, and green lines writhe on a monitor, each symbolizing the intensity and frequency of individual neural firings. When a neuron fires, the computer emits a click. If you listen, you can tell how busy Thor’s brain is under the electrode array. It sounds like popcorn popping. But then, suddenly, every line on the monitor turns gray and freezes. The clicks stop. A grad student checks the cables leading to the titanium plug in Thor’s skull and finds that one has come loose. He pushes it back into its socket, and a moment later the monkey is back in business. The clicking resumes.
It’s a different approach from the one Kuiken has taken with Lehman. Intuitively, going straight to the brain seems smarter. The problem is, no one knows how the brain does what it does. Neuroscientists know how neurons work, sending waves of electrical charge along their lengths and then squirting out chemicals—neurotransmitters—to signal one another. But how an intention, a thought, a mind, arises from that network of electrochemistry-in-aspic is still largely a mystery. The brain changes from instant to instant. The same task might be handled by different neurons at different times. Moreover, any given set of neurons could be sending commands to the arm, processing sensory data, or responding to reflexive movements.
In 2007 Krishna Shenoy, a neuroscientist at Stanford, described observing individual neurons while a monkey did the same task over and over again. He found that a given neuron could be very active in one trial and not at all active in another. Averaged over many trials, the neuron’s firing was correlated with the monkey’s activity, but on an event-by-event basis it wasn’t. What is that neuron really doing? No one has a clue.
So the computer that’s attempting to translate Thor’s intentions makes a statistical best guess. The array of electrodes in the monkey’s brain polls about 100 nearby neurons and selects the closest match it can find in a database that catalogs what Thor’s neurons did before. These statistical inferences are inevitably imprecise; the yellow dot doesn’t move as smoothly as Thor’s arm. It jitters as if it had stage fright.
Humans don’t do much better. A few years ago, John Donoghue, a Brown University neuroscientist, built that Pong-playing brain interface for a quadriplegic man. Last November he announced that he had hooked up a paralyzed patient to a newer version of his device. On the other end of it was an advanced arm from DEKA Research and Development, the company founded by Segway inventor Dean Kamen. The patient couldn’t successfully reach out and grab a ball with the arm more than 50 percent of the time.
Old vs. New
Ironically, one of the most useful prosthetic arms available today uses centuries-old technology. Here’s how it compares with a truly bionic limb, which so far exists only in theory.
The cable-driven arm
1 Extending the arm or flexing the shoulder pulls a cable attached to a harness on the user’s back.
2 As the cable tightens, it opens a split hook at the end of the arm. Reversing the move closes the hook.
3 These simple arms are the lightest limbs on the market, and they provide a sort of sensory feedback—force on the prosthetic hand or arm (like the weight of an object) is felt by the user’s body.
The bionic arm
1 Hypothetically, a neurally controlled prosthesis would begin with a brain interface, a chip capable of picking up complex signals from the user’s brain.
2 A computer would translate those signals into orders for the arm—“move up,” “bend my elbow,” “turn my wrist.”
3 Motors in the joints would move the arm smoothly in response to commands from the computer.
4 Sensors in the arm would feed information on its position and movement through the computer and into the chip in the user’s brain.
When the statistical methods work for Thor, it’s because his actions are drastically restricted. The computer was programmed earlier this week, when Thor was strapped into the same position he’s in today. He’s trained to move only one arm in only one way. This reduces the number of things his neurons will do, making it more likely that the computer will recognize a pattern of neural activity. “We only know how the brain moves the arm when you’re sitting still, not moving other body parts, not hearing other things, not being cognitively loaded,” Shenoy says. It’s a situation that could exist only in a lab.
In fact, Thor’s implants would never work in the real world. The electrodes aren’t attached to neurons; they’re just sort of floating near them. So an abrupt head-shake can move electrodes to different neurons, throwing off the software’s calibration. “I think it’s a perfect technology for a spinal-cord patient who is not very mobile,” says Kuiken. “That doesn’t translate to an amputee who moves around and plays football, or falls down and whacks his head on a door.”
For bigger implants like, say, the deep brain stimulators used to treat epilepsy and depression, head movement isn’t a problem. The electrodes in those are huge compared to neurons, and they discharge tiny electrical shocks into brain tissue rather than trying to record data from individual neurons. But for the microscopic needles of brain-computer interfaces, head motion is a real problem. They monitor neurons that are 20 to 50 microns wide. Researchers have tried to route around this problem by creating what they call adaptive algorithms that can adjust when the electrodes shift. It’s not easy. “If you have an adaptive algorithm and it changes things too quickly, it confuses the brain,” Miller says. “You’ve got two systems trying to learn at the same time, and they essentially learn things out from under each other.”
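One generic way to picture an adaptive algorithm (this is a textbook least-mean-squares update, not Miller's actual method, and all the numbers are invented) is a decoder that nudges its weights slightly after each observed movement. A small learning rate lets it track slow electrode drift; a large one produces exactly the whiplash Miller describes, two fast learners chasing each other.

```python
import numpy as np

def adapt(W, rates, observed_velocity, learning_rate=0.005):
    """One small gradient step nudging decoder weights toward recent data."""
    error = observed_velocity - rates @ W   # decoding error on this movement
    return W + learning_rate * np.outer(rates, error)

rng = np.random.default_rng(2)
n_neurons = 100
true_W = rng.standard_normal((n_neurons, 2))  # the brain's real tuning
W_adaptive = true_W.copy()                    # decoder that keeps relearning
W_frozen = true_W.copy()                      # decoder calibrated only once

for _ in range(2000):
    true_W += 0.005 * rng.standard_normal((n_neurons, 2))  # electrodes shift
    rates = rng.standard_normal(n_neurons)
    W_adaptive = adapt(W_adaptive, rates, rates @ true_W)

# The adaptive decoder stays near the drifting tuning; the frozen one
# falls steadily out of calibration.
print(np.abs(W_adaptive - true_W).mean(), np.abs(W_frozen - true_W).mean())
```

The catch, as the researchers note, is that this sketch assumes the true movement is known at every step; in practice the decoder must guess it, which is where the two learning systems start to interfere.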
Electrodes like Thor’s are vulnerable in a physical way, too. “The body is a harsh environment,” Donoghue says. “It attacks the materials and eventually degrades them.” Electrodes are made of metal. The body is loaded with water, salt, and a dizzying array of other chemicals. Putting them together is like trying to bond a fork and a steak. And the steak fights back by trying to dissolve the fork.
The steak treats the fork as a threat—which, of course, it is. Confronted with foreign bodies, the brain mounts an inflammatory response called gliosis, wrapping cells like astrocytes and microglia around the electrodes to wall them off. Over time, the electrodes become encapsulated in a sheath of scar tissue that acts as an insulator. Engineers are working to forestall gliosis with anti-inflammatory coatings and exotic electrode designs. And in some cases, gliosis isn’t a problem at all. “We’ve published papers with yearlong recordings, and we have a patient who has nearly five years of recording,” Donoghue says.
Even so, implants that work well in one brain may fail in another. “Nobody quite understands exactly why signals deteriorate, and the rate at which they deteriorate seems to be wildly unpredictable,” says Gerald Loeb, a biomedical engineer at the University of Southern California. “Some animals will have usable signals for years, and others lose signals within a couple of months.” Yet so far no one has come up with a better way of getting information about individual neurons into or out of a living brain.
Glen Lehman’s arm isn’t clunky because its motors are slow. By itself the arm can move with speed and grace. Robotic arms, after all, assemble cars and perform surgery. The hardware isn’t mysterious; thousands of people use commercial motor-driven myoelectric prostheses, though they’re far from perfect. (The advances in Lehman’s arm are largely related to the surgery and the pattern-recognition software.) The fault in neural control lies not within the limb but in the brain—in our incomplete understanding of not only how to get signals out but how to send them in.
Animal brains keep track of body parts with a sort of sixth sense called proprioception. You know exactly where your right arm is, not because it feels hot or sore or is touching anything, but because you just know. Receptors in your limbs send position-and-motion data through your nervous system, and it all gets collated, somehow, into an unconscious awareness. My arm is up there; my arm is down here. “There are muscle receptors. There are tendon receptors. There are capsule receptors, even skin sensors, all contributing in a very complex way that we don’t understand,” Kuiken says. “I don’t think there’s going to be a single spot in the brain where you can put a dense array of electrodes and get a strong percept of proprioception.”
This monkey's brain implant allows it to feed itself with a thought-controlled robot arm.
Ironically, old-fashioned mechanical arms first prototyped two centuries ago are better at giving feedback than anything invented since. A cable attached to a harness opens the hook or flexes the elbow when the user pulls it by reaching forward or shrugging. Pick something up and force on the prosthesis, translated to your stump, tells you how heavy it is. Many users actually prefer cable-driven arms to the myoelectric, motor-driven type.
So what would it take to build an artificial arm that could send proprioceptive feedback to the brain? In the 1930s the neurosurgeon Wilder Penfield found that electrically stimulating the surface of the brain caused patients to feel sensations and twitches in specific parts of the body. That’s where the monkeys come in again. Miller’s graduate students are working on using the same kind of electrode array in Thor’s skull to send electrical signals directly into a part of the brain that is thought to receive proprioceptive input. (Complicating matters further, this area may also handle tactile somatosensory feedback—touch.) The idea is to make a monkey believe that a lever is jerking in its hand. Eventually, the thinking goes, they’ll be able to embed sensors in the arm that transmit the same kind of data.
So far the test animals do seem to react as if the handle had moved. After training them to push the handle to the right when they feel it move, Miller’s graduate students send a signal to the electrodes, and the monkey moves its hand as if it could feel the handle moving against it. But no one knows what sensation the input is actually producing. There’s no way to ask the monkey. Researchers at Caltech and the University of Pittsburgh, currently working on a neural interface for a fancy motorized arm created at the Johns Hopkins University Applied Physics Laboratory, plan to integrate this kind of sensory data into human trials in April 2013.