Originally appeared in The New Yorker
The climbers at Earth Treks gym, in Golden, Colorado, were warming up: stretching, strapping themselves into harnesses, and chalking their hands as they prepared to scale walls stippled with multicolored plastic holds. Seated off to one side, with a slim gray plastic band wrapped around his brow, Erik Weihenmayer was warming up, too—by reading flash cards. “I see an ‘E’ at the end,” he said, sweeping his head over the top card, from side to side and up and down. “It’s definitely popping—is it ‘please’?” he asked me. It was. Weihenmayer moved triumphantly on to the next card.
Erik Weihenmayer is the only blind person to have climbed Mt. Everest. He was born with juvenile retinoschisis, an inherited condition that caused his retinas to disintegrate completely by his freshman year of high school. Unable to play the ball games at which his father and his brothers excelled, he took to climbing after being introduced to it at a summer camp for the blind. He learned to pat the rock face with his hands or tap it with an ice axe to find his next hold, following the sound of a small bell worn by a guide, who also described the terrain ahead. With this technique, he has summited the tallest peaks on all seven continents.
A decade ago, Weihenmayer began using the BrainPort, a device that enables him to “see” the rock face using his tongue. The BrainPort consists of two parts: the band on his brow supports a tiny video camera; connected to this by a cable is a postage-stamp-size white plastic lollipop, which he holds in his mouth. The camera feed is reduced in resolution to a grid of four hundred gray-scale pixels, transmitted to his tongue via a corresponding grid of four hundred tiny electrodes on the lollipop. Dark pixels provide a strong shock; lighter pixels merely tingle. The resulting vision is a sensation that Weihenmayer describes as “pictures being painted with tiny bubbles.”
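The camera-to-tongue mapping the article describes — a frame reduced to four hundred gray-scale points, with dark pixels delivering strong stimulation and light pixels a faint tingle — can be sketched in a few lines. This is only an illustration of the principle, not Wicab's firmware; the 20×20 grid shape and the linear intensity scale are assumptions (the article specifies only that there are four hundred points).

```python
import numpy as np

GRID = 20  # assumed layout: 400 electrodes arranged as a 20 x 20 grid

def frame_to_electrodes(frame, max_intensity=1.0):
    """Reduce a grayscale camera frame (values 0-255) to a GRID x GRID array
    of stimulation intensities: dark pixels map to strong pulses, light
    pixels to barely-there tingles."""
    h, w = frame.shape
    # crop to a multiple of GRID, then average-pool into GRID x GRID blocks
    frame = frame[: h - h % GRID, : w - w % GRID]
    h, w = frame.shape
    pooled = frame.reshape(GRID, h // GRID, GRID, w // GRID).mean(axis=(1, 3))
    # invert: 0 (black) -> full intensity, 255 (white) -> zero
    return (1.0 - pooled / 255.0) * max_intensity

# a frame that is black on the left half, white on the right
frame = np.concatenate([np.zeros((100, 50)), np.full((100, 50), 255.0)], axis=1)
grid = frame_to_electrodes(frame)
print(grid[0, 0], grid[0, -1])  # strong shock at left, nothing at right
```

The inversion is the detail worth noticing: on the BrainPort, darkness, not light, is what you feel most strongly.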
Reading the cards before his climb helped Weihenmayer calibrate the intensity of the electrical stimulation and make sure that the camera was pointing where he thought it was pointing. When he was done, he tied himself into his harness and set off up Mad Dog, a difficult route marked by small blue plastic holds set far apart on the wall. Without the BrainPort, Weihenmayer’s climbing style is inelegant but astonishingly fast—a spidery scramble with arms and feet sweeping like windshield wipers across the wall in front of him in order to feel out the next hold. With the device on his tongue, he is much slower, but more deliberate. After each move, he leans away from the wall, surveys the cliff face, and then carefully reaches his hand out into midair, where it hovers for a split second before lunging toward a hold several feet away. “You have to do the hand thing, because it’s hard to know where, exactly, things are in space,” Weihenmayer explained, as I prepared to tackle Cry Baby, a much simpler route. “Once my hand blocks the hold, I know I’m in front of it, and then I just kind of go in there.”
Weihenmayer told me that he wouldn’t take the BrainPort up Everest—relying on fallible electronics in such extreme conditions would be foolhardy. But he has used it on challenging outdoor climbs in Utah and around Colorado, and he loves the way that it restores his lost hand-eye coördination. “I can see the hold, I reach up, and I’m, like, ‘Pow!’ ” he said. “It’s in space, and I just grabbed it in space. It sounds so simple when you have eyes, but that’s a really cool feeling.”
The BrainPort, which uses the sense of touch as a substitute for sight, is one of a growing number of so-called sensory-substitution devices. Another, the vOICe, turns visual information into sound. Others translate auditory information into tactile sensation for the deaf or use sounds to supply missing haptic information for burn victims and leprosy patients. While these devices were designed with the goal of restoring lost sensation, in the past decade they have begun to revise our understanding of brain organization and development. The idea that underlies sensory substitution is a radical one: that the brain is capable of processing perceptual information in much the same way, no matter which organ delivers it. As the BrainPort’s inventor, the neuroscientist Paul Bach-y-Rita, put it, “You don’t see with the eyes. You see with the brain.”
Bach-y-Rita, who died in 2006, is known as “the father of sensory substitution,” although, as he liked to point out, both Braille and white canes are essentially sensory-substitution systems, replacing information that is typically visual—words on a page, objects at a distance—with tactile sensation. He even argued that writing ought to be considered the original precursor, because it enabled the previously auditory experience of the spoken word to be presented visually.
Bach-y-Rita began his medical career in visual rehabilitation, gaining a reputation as a specialist in the neurophysiology of eye muscles. In 1959, his father, Pedro Bach-y-Rita, a Catalan poet who had immigrated to the Bronx and taught at City College, suffered a catastrophic stroke. Doctors said that he would never speak or walk again, but Paul’s brother, then a medical student, designed a grueling rehabilitation regimen: Pedro had to crawl around on kneepads until he could walk, and to practice scooping up coins until he had learned to feed himself. After a year, Pedro went back to work as a teacher and, after two, he was able to live independently. When he eventually died—in 1965, of a heart attack—he was hiking up a mountain in Colombia. And yet, as his autopsy revealed, his brain was still severely damaged; the areas responsible for motion and involuntary muscle movements had been all but destroyed. “How could he have recovered so much?” Bach-y-Rita marveled. “If he could recover, why didn’t others recover?”
Bach-y-Rita had already begun tinkering with devices that substituted tactile sensation for vision, but, encouraged by this personal evidence of the brain’s ability to adapt to loss, he completed his first prototype in 1969. It was built from castoffs—a discarded dentist’s chair, an old TV camera—and weighed four hundred pounds. A blind person could sit in the chair and scan the scene by using hand cranks to move the camera. The analog video stream was fed into an enormous computer, which converted it into four hundred gray-scale dots. These points of information were then transferred not to four hundred electrodes, as in the BrainPort, but to a grid of vibrating, Teflon-tipped pins mounted on the back of the chair. The pins vibrated intensely for dark pixels and stayed still for light ones, enabling users to feel the picture pulsing on their backs. After just a few hours’ practice, Bach-y-Rita’s first six volunteers, all blind from birth, could distinguish between straight lines and curved ones, identify a telephone and a coffee mug, and even recognize a picture of the supermodel Twiggy.
Bach-y-Rita published his results in Nature, in 1969. During the following decade, he continued to refine the system, testing his blind subjects with more and more complex tasks while trying to shrink the enormous contraption into something more manageable. The bulk of cameras and computers at the time wasn’t the only challenge. He also ran up against a tactile constraint known as “two-point discrimination”—our ability to tell that two things touching the skin are indeed discrete objects, rather than a single large one. The skin’s spatial resolution varies widely; on the back, the stimuli had to be quite far apart, and Bach-y-Rita spent years looking for a better spot. Some of the most point-sensitive areas are on the hand, but if blind users had their hands stuck in a device they wouldn’t be able to manipulate the objects they were newly capable of seeing. Bach-y-Rita’s colleagues scoffed when he settled on the tongue, pointing out the difficulty of making the device work in a wet environment. But the tongue’s moisture makes it an excellent transmitter of electrical energy, and it is as sensitive to two-point discrimination as a fingertip.
In 1998, Bach-y-Rita founded a company, Wicab, to commercialize his invention. It is based in a small office park in the suburbs of Madison, Wisconsin, and shares an anonymous, two-story glass building and a plant-filled atrium with a family dentist. A couple of dozen employees sit at cubicles or in a small workshop where each of the devices is still built by hand. When I visited, Tricia Grant, Wicab’s director of clinical research, led me through the first steps of a ten-hour training program that she’s developed to help new users get accustomed to the device.
Grant spread a black cloth on a conference-room table—it’s easier for beginners to start in a high-contrast environment—and blindfolded me. She put the band holding the camera over my ears and gave me the plastic lollipop to put into my mouth. As I wiggled my fingers in front of my face, she explained how to increase the intensity of the electrical pulses on my tongue until I was able to feel them. (Smokers and the elderly typically require more stimulation than younger users.) Suddenly, there was a slightly sour fizzing on my tongue, and we were ready to begin.
Grant told me that she was putting a plastic banana and a ball on the table. “This is how we always start,” she said. “See if you can tell which is on the left and which is on the right.” Lips clamped shut around the BrainPort cable, I swept my head slowly from side to side, as if I were stroking the table with my brow, emitting a startled “Mmm,” as I bumped into each effervescent object. Although I couldn’t explain exactly how I knew, after scanning back and forth for a few seconds I was pretty sure that the ball was on the left and the banana was on the right, and I reached out to double-check. “You grabbed that ball like you saw it!” Grant said.
Half an hour later, I had successfully navigated an obstacle course of office chairs, and identified the letter “O,” written on the whiteboard. (A capital “L” proved a little trickier—I guessed “E” instead.) “What else can I see?” I asked Grant. Just then, our lunch arrived. She warned me to avoid hot peppers and pickles, in order to spare my overstimulated tongue. I barely heard her, slumped in my chair and suddenly aware of how hard I had been concentrating for the past forty-five minutes. Stripped of sight, I’d had to squeeze every drop of information I could about the world around me from a plastic square tingling like Pop Rocks on my tongue.
We completed only the first part of Grant’s course, but she told me that, after ten hours, I would have been able to use the BrainPort to safely move around my home. Achieving mastery takes much longer. “We recommend practicing for at least twenty minutes a day,” she said. “It’s like learning a foreign language.”
Wicab has been making the BrainPort for the better part of two decades, and the device received F.D.A. approval as a vision aid in 2015. No more than two hundred have shipped, however, and in the blind community it remains little more than a curiosity. Eric Bridges, the executive director of the American Council of the Blind, told me that he hadn’t heard of it or of the various alternative devices, like the vOICe. He said wearily that he is constantly approached by people claiming to have invented the next big blindness aid, but that few of these ideas ever make it to commercial production. Although 1.3 million Americans are blind, with another 8.7 million qualifying as visually impaired, they still constitute a niche market. “And guess what?” Bridges added. “The blind and visually impaired community has a really low labor-participation rate. We’re not exactly flush with cash.” Although users of the vOICe need purchase only a smartphone and a pair of cheap augmented-reality glasses—the software is free—the BrainPort is currently priced at ten thousand dollars. (Wicab is lobbying to have the device qualify for reimbursement under Medicaid.)
But cost is not the only obstacle. Learning how to use a sensory-substitution device is hard work. “I almost think of it as giving you an opportunity to see what sensory perception must have been like when you were an infant,” Michael Proulx, an experimental psychologist who studies sensory substitution, told me. “We can’t remember the first year of life and how confusing all that visual information would have been.” Learning to see using the vOICe or the BrainPort is, he said, “starting you back at square one again, and you have to build up an expertise and an understanding over time.” Not surprisingly, many blind people, for whom getting from A to B in a sighted world already poses a significant daily challenge, don’t feel that it’s worth the investment of time, money, and energy to become proficient users of a device that, at its best, offers limited results. The BrainPort’s images are, after all, gray-scale and low-resolution, and its auditory competitor, the vOICe, operates with a built-in time delay, so it can’t even help you cross the street.
In the late nineteen-fifties, in a windowless basement at Johns Hopkins, the neurophysiologists David Hubel and Torsten Wiesel began a series of experiments that eventually won them a Nobel Prize, for their contribution to our understanding of the visual cortex. Some of their most important work took place in the early sixties, when they investigated the development of visual processing. They sutured closed a single eye of an eight-day-old kitten and unstitched it three months later. Although the kitten now had two undamaged eyes, it remained blind in the eye that had been visually deprived. Examining the kitten’s visual cortex, Hubel and Wiesel found that the open eye had taken over the neurons of the one that was closed, leaving the kitten forever unable to process information from a second eye.
This finding became a central piece of evidence for the so-called “critical periods” doctrine of brain development. The theory holds that, if sensory input is lacking during a crucial phase, the brain will fail to develop normally, remaining unable to process that kind of information even if sensory input is later restored. According to this theory, Paul Bach-y-Rita’s sensory-substitution device should not have worked for adults who had spent their entire lives blind, because their brains would never have developed the ability to interpret visual information.
More recently, however, other neuroscientists have found clues indicating that the adult brain does retain some ability to adapt—a quality known as plasticity. In 2002, scientists installed a tiny glass window in the skulls of adult mice and trimmed every other whisker; they were able to watch as the spatial-processing center in the mouse brains reconfigured itself to compensate for the sensory damage. (Mice rely on their whiskers to orient themselves.) As the concept of adult neuroplasticity encroached on the dogma of critical periods, a new generation of neuroscientists seized on sensory-substitution devices as a valuable tool with which to probe human brain development and organization.
In 2007, the Israeli neurobiologist Ella Striem-Amit embarked on doctoral research investigating whether people who are born blind could ever learn to perceive visual information in the way that sighted people do. She joined the lab of Amir Amedi, a neurologist at Hebrew University, in Jerusalem, and they set about training a small group of congenitally blind subjects to use the vOICe. The vOICe translates a camera feed into electronically produced notes according to reasonably simple principles: brightness is mapped to volume, and elevation to pitch. The camera scans a hundred and eighty degrees and delivers a new snapshot every second, and the sound is heard in stereo, enabling you to tell which side an object is on. A staircase whose first step is on your left and which has a sunlit window at the top would, for example, sound like a musical scale, rising in volume as it ascends in pitch.
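The vOICe's mapping — brightness to volume, elevation to pitch, stereo position to horizontal location, one snapshot per second — is simple enough to sketch directly. The code below is a toy rendering of those principles, not the vOICe's actual signal chain; the frequency range and the left-to-right column sweep are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 44_100
SCAN_SECONDS = 1.0  # the vOICe delivers a new snapshot every second

def image_to_soundscape(img):
    """Turn a grayscale image (rows x cols, values 0-255) into a stereo
    soundscape: each column becomes a slice of time, each row a sine tone
    whose pitch rises with elevation and whose volume tracks brightness;
    the pan sweeps from left channel to right as the scan proceeds."""
    rows, cols = img.shape
    # higher rows -> higher pitch (exponentially spaced, ~500 Hz and up)
    freqs = 500.0 * (10.0 ** (np.arange(rows)[::-1] / rows))
    samples_per_col = int(SAMPLE_RATE * SCAN_SECONDS / cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    left, right = [], []
    for c in range(cols):
        vols = img[:, c] / 255.0                        # brightness -> volume
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # one sine per row
        mono = (vols[:, None] * tones).sum(axis=0) / rows
        pan = c / max(cols - 1, 1)                      # 0 = left, 1 = right
        left.append(mono * (1 - pan))
        right.append(mono * pan)
    return np.stack([np.concatenate(left), np.concatenate(right)])

# a bright diagonal line, like the staircase in the article: the resulting
# soundscape rises in pitch as the scan moves left to right
img = np.zeros((16, 16))
np.fill_diagonal(np.flipud(img), 255.0)
stereo = image_to_soundscape(img)
print(stereo.shape)  # two channels of one second of audio
```

The staircase example in the text falls out of this mapping for free: each step sits one row higher and one column further right than the last, so the scan produces an ascending scale that migrates toward the right ear.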
Striem-Amit discovered that teaching people to see using the vOICe required more than simply helping them master the technology. “Congenitally blind people don’t know how vision works,” she explained. “They don’t know principles of occlusion”—that one object can block another—“or that things appear larger when they’re closer.” Yet, after seventy hours of training, her subjects were able to grasp these concepts and to identify shapes, objects, and even faces. In a video of one experiment, a blind woman, shown a picture of a man spreading his arms and legs in the shape of a star, stands up and mimics his position. In another, a man using a similar device to identify a plaid shirt says, “It sounds a bit checkered.”
More remarkable were the results of fMRI brain mapping of blind subjects. Although the initial processing of the vOICe’s soundscapes occurred in the auditory cortex, subsequent tasks, such as identifying objects, occurred in the same regions of the brain as in sighted people. Striem-Amit and Amedi believe that the results directly contradict the critical-periods theory of brain development. “What we are claiming is that a lot of these brain regions didn’t depend on visual experience to begin with,” Striem-Amit explained. Instead, they argue, the correct wiring is laid down in the brain regardless of whether it is ever used.
Amedi, a former jazz saxophonist, has recently developed a device called EyeMusic, which replaces the soulless electronic bleeping of the vOICe with instrumental timbres that add color to the auditory translation of visual information: strings are shades of yellow, brass is blue, and so on. After training nine congenitally blind subjects on the device for thirty hours, he showed them the shapes I, V, and X in three different colors, while mapping their brains. When asked to discriminate among the shapes as letters, the participants showed the greatest activation in the area of the brain associated with reading; when the participants were asked to identify the shapes as Roman numerals, their brains lit up in a region associated with numbers and quantity; and, when the participants sorted the shapes by color, Amedi and his colleagues saw activity in the color centers of the brain, as well as in the auditory cortex.
“If you open a neuroscience textbook right now, it would still talk about the visual cortex, the auditory cortex, and so on,” Amedi said. “I would argue that that labelling is wrong.” After all, if congenitally blind people are able to listen to and then accurately identify the red apple in a basket of Granny Smiths using the same area of the brain as sighted people, why should that area be considered visual? Instead, Striem-Amit and Amedi have begun to argue that the brain is organized along task-specific lines—and that the visual cortex seems to be linked to vision only because most of us use sight in order to gather the type of information that it processes. “This is not just a semantic thing,” Amedi said. “By looking at the brain this way, we can better understand what each area is really doing, and how it’s doing it.”
“This is still controversial,” Striem-Amit acknowledged. “There’s a lot more to be done.” Another neuroscientist, David Eagleman, compares the current state of neuroscientific knowledge to the field of genetics before Crick and Watson discovered the structure of DNA. “Neuroscience is so young that we hardly know the first thing about the brain,” he told me. Nonetheless, he leans toward a point of view potentially even more radical than that of Striem-Amit and Amedi—that the adult brain may be flexible enough to encompass entirely novel senses.
Eagleman, too, has developed a sensory-substitution device, called the VEST (Versatile Extra-Sensory Transducer), which will become available in 2018. It is a waistcoat with thirty-two embedded vibratory motors, connected to a smartphone app that translates sound frequencies into tactile stimuli. It is designed for deaf people, who, Eagleman claims, should, with adequate training, be able to understand not just basic environmental sounds but also speech. “It’s simple,” he said. “We’re just putting the cochlea on the torso.”
But Eagleman’s ambitions do not stop at sensory substitution: his larger goal is sensory augmentation. He expects that vest users may, depending on the data transmitted through their skin, be able to “feel” electromagnetic fields, stock-market data, or even space weather. “It may be the case that we can add one or two or three or more senses and the brain has no problem,” he said. Amedi likewise imagines that sensory augmentation could enable us to “see” bodies through walls using the infrared spectrum or to “hear” the location of family members using G.P.S. tracking technology. “The community of people that work in sensory substitution is very small, and ninety-nine per cent of them, including me, used to be very focussed on restoration, rehabilitation, and basic science,” Amedi told me. “Now, even just in the last year, the pendulum has swung in the direction of creating superabilities.”
The science of sensory substitution has also begun to attract the attention of philosophers and experimental psychologists, who hope that it will shed light on the nature of perceptual experience. What is seeing, after all, if your tongue can do it? Is a person who perceives visual information via the auditory system experiencing sight, sound, or an unprecedented hybrid of the two? The philosopher Fiona Macpherson told me that the field is divided on these questions, in part because there is no agreement on what a sense actually is. Some argue that vision is defined by the organ that absorbs the information: anything that does not enter through the eye is not vision, and thus Erik Weihenmayer is feeling, rather than seeing, the rock wall in front of him. Striem-Amit, on the other hand, is one of many neuroscientists who favor a definition of vision that is determined by the source of the stimulus: vision is any processing of information that comes from reflected rays of light. By this measure, Weihenmayer is seeing, period. “For the past twenty years, there was a supremacy of neuroscience,” the French experimental psychologist Malika Auvray told me, meaning that activation in the visual cortex was sufficient proof that an experience was visual. “But people have defined the visual brain area as the location in the brain where you get activation in response to visual stimuli, so there’s a certain circularity there.”
The final criterion typically invoked in these debates is the lived experience of the sense—what philosophers call the “qualia.” This distinction has an intuitive logic: most people feel certain that they would never confuse the sensation of seeing something with that of touching or hearing it. But the experiences that sensory-substitution users report are varied. Some blind people say that if they look at an apple using the BrainPort, or vOICe, or EyeMusic, it feels like seeing: the knowledge that an apple is sitting on the table in front of them appears in their brain as a mental image. Indeed, some vOICe users are so strongly conditioned by its sound that they experience involuntary visual images: one reported seeing a light-gray arc in the sky every time a police car passed with its siren blaring. Others, however, define the experience in more cognitive terms: they decode the electrical stimuli they are feeling, or the sounds they are hearing, in order to arrive at the understanding that an apple is present.
Malika Auvray and Amir Amedi have individually conducted experiments designed to explore the causes of this variation. They found differences between people who were born blind and those who lost their sight as adults, and between those who had only just begun to use a given device and those who were fully accustomed to it. Auvray has shown that a single vOICe user may have a range of experiences, depending on the task at hand: the process of identifying an object often feels auditory, while that of figuring out where it is feels visual. Amedi speculates that much of this variation hinges on the vividness of an individual’s mental imagery, which, even among sighted people, is known to vary enormously: if asked to picture an apple, some people (including Amedi) can barely conjure up outlines, whereas others immediately envision a photo-realistic image. Eagleman, meanwhile, believes that further experiments may show that the subjective qualities of our sensory experiences are really produced by the structure of the incoming data itself. In other words, the brain of someone feeling electromagnetic-field fluctuations through vibrations in his vest will somehow recognize that this data stream contains patterns that aren’t related to touch and that, instead, qualify as something entirely new.
I spoke to Jens Naumann, a German-born Canadian who had lost the sight in each eye in two separate accidents by the age of twenty. He uses the vOICe, and when I asked whether it felt like vision to him he pointed out that even normal sight is the gateway to a range of experiences. “One is just functional,” he said. “And that’s where a sensory-substitution device means I can see things like the edge of the pavement or the entrance to a building. But another is beauty.” The vOICe can never successfully translate the visual experience of looking at his wife’s face or watching the sun set over the snow-covered mountains outside Banff. But, he added, “vinyl siding makes a very nice sound, actually, like music almost. So there’s a beauty in that.”
After we finished climbing, Weihenmayer and I went out for lunch—a curry at a local Nepali restaurant, in defiance of Tricia Grant’s recommendation to avoid spicy foods after using the device. He told me that he had never seen the world particularly well even before he became totally blind. “With the BrainPort, it’s similar to what I used to be able to see like,” he said. “Shapes, shades of light and dark—where things basically were, but not anything super-vivid, you know?”
Skyler Williams, Weihenmayer’s climbing partner, had joined us, and guided him along the buffet line, spooning chicken tikka masala and sag aloo onto his plate. Weihenmayer used a cane, “shorelining” against the edges of the room to get back to his seat. As we ate, he told me about his experience climbing with the BrainPort in the pinnacle-studded landscape near Moab, Utah. As he inched his way up Castleton Tower, the sun was directly behind him, and the shadows were confusing. “I kept reaching out and trying to touch this thing, and it was just rock,” he said. “Whenever I moved my head, it moved, too, and I eventually realized I was looking at myself. My head, my arms—and they were so defined it was crazy. I hadn’t seen myself since I was a dorky, pimply fourteen-year-old.”
“That’s so much of what the BrainPort is,” Weihenmayer explained. “You’re just reaching out like a kid again, and you’re, like, What the hell is that?” The experience shifts between decoding and seeing, between frustration and awe, frequently within the same instant. Later during that climb, as he neared the summit, the sun had gone behind the tower. “The lighting was perfect,” Weihenmayer said. “At that point, I wasn’t even thinking about my tongue. I’m just thinking about the picture in my brain.”
Weihenmayer doesn’t use the BrainPort exclusively for climbing. When he’s travelling, it enables him to find light switches and remote controls without patting down entire hotel rooms. At home, he wanders around with it, “just kind of looking at things,” he said, or hangs out with his kids—kicking a soccer ball, or playing rock, paper, scissors. On a phone, he showed me a short movie of him using the BrainPort to play tic-tac-toe with his daughter, Emma. Weihenmayer carefully felt out the thick, marker-drawn edges of each square before drawing his “O”s, while his daughter confidently filled in her “X”s. After drawing her third “X” in a row, Emma jumped up and down shouting, “I won! I won again!”
“Wait, I thought I had a circle on the top left, the middle left, and the bottom left,” Weihenmayer said, scanning the sheet. “You stinker!”
“Oh,” Emma said, caught cheating. “Maybe we both won?”
“When you go blind, you get kicked out of the club,” Weihenmayer told me. Using the BrainPort, he said, makes him feel like part of the gang again. He can see what his family is doing, without anyone needing to tell him. And he can never forget seeing his son smile for the first time. “I could see his lips sort of shimmering, moving,” Weihenmayer said. “And then I could see his mouth just kind of go ‘Brrrrp’ and take over his whole face. And that was cool, because I’d totally forgotten that smiles do that.” ♦