Amir Amedi's Lab - Projects
Examples and demonstrations of some of the projects can be found on our YouTube Channel here.

Reading in the blind: implications for brain organization and plasticity.

According to the canonical view, the human cortex is a sensory machine in which different regions govern the activities of the different senses (e.g. the visual, auditory and somatosensory cortices). Within the visual areas, a region known as the Visual Word Form Area (VWFA) has been shown to develop expertise for reading in sighted individuals. If the brain is indeed a sensory machine, one would expect the expertise for reading by touch in the blind to reside in the somatosensory cortex. Another possibility is that it would reside in the bilateral primary visual areas, as these have been shown to be recruited for various non-visual tasks (such as language and tactile tasks) in the blind. However, in a recent study performed in the lab in collaboration with Laurent Cohen, we showed that the VWFA is also the peak of Braille-word-selective activation across the entire brain of the congenitally blind. Furthermore, the anatomical location of the VWFA was highly consistent across blind subjects and between blind and sighted individuals. Thus, the functional recruitment and specialization of the VWFA for reading are independent of the sensory modality in which the words are presented and, even more surprisingly, do not require any visual experience. This counters the notion that the brain is a sensory machine and suggests that it is rather a task machine, in which each area supports a specific task, computation or representation, irrespective of the input sensory modality.
Future studies will test whether the metamodality of the reading areas extends also to auditory input. To this end, the shape of visual words can be translated into sounds using a visual-to-auditory sensory-substitution algorithm, similar to the system used to study the LOtv. Other future studies will investigate the specificity of the VWFA for words and the specific pathways and connections that enable the activation of the VWFA by tactile information.
Selected publications:
  • Reich L, Szwed M, Cohen L, Amedi A. A Ventral Visual Stream Reading Center Independent of Visual Experience. Current Biology 2011; 21: 1–6. [PDF]
Sensory Substitution Devices

In our laboratory, we have developed several new navigational aids. We are testing them in different environments and using them to study the brain's flexibility and reorganization in cases of visual deprivation.
One of the tools we have developed is a tiny virtual cane. The cane operates as a kind of virtual flashlight, replacing or augmenting the "classic" walking stick. The device comprises sensors that estimate the distance between the user and the object at which it is pointed. This information undergoes a "sensory transformation" into complex vibrations, allowing a blind person to identify obstacles of different heights, judge the distance between themselves and surrounding objects, and build a spatial picture through which to navigate safely. Use of the device is intuitive and can be learned within a few minutes.
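To illustrate the kind of "sensory transformation" involved, the sketch below maps a measured distance to a vibration intensity. This is a minimal illustration only: the linear mapping, sensing range and hardware interface shown are assumptions made for the example, not the actual parameters of the device.

    # Illustrative distance-to-vibration mapping for a virtual-cane-style aid.
    # The linear mapping and the 5 m sensing range are assumptions for this
    # sketch, not the device's actual parameters.
    def distance_to_vibration(distance_m, max_range_m=5.0):
        """Map a measured distance (meters) to a vibration intensity in [0, 1]."""
        if distance_m >= max_range_m:
            return 0.0          # nothing within range: no vibration
        # Linear ramp: strongest near contact, fading out at the edge of the range.
        return 1.0 - (distance_m / max_range_m)

    if __name__ == "__main__":
        for d in [0.3, 1.0, 2.5, 6.0]:
            print(f"obstacle at {d} m -> vibration intensity {distance_to_vibration(d):.2f}")
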
For this research, we built a real maze as well as virtual mazes, which allowed us to assess how blind users navigate different paths and to improve their navigational abilities in both physical and virtual environments. This is complemented by other devices that convert complex visual input, such as objects and faces in the user's surroundings, into auditory and tactile input.
Research on the accompanying brain activity shows that during navigation, areas of the brain traditionally devoted exclusively to vision also come into play. These findings point to the brain's flexibility and to its organization according to task rather than sense.
A Hebrew description of this project, with more details, can be found here.
Multisensory Perception

The appropriate binding of sensory inputs is fundamental to everyday experience. When crossing a road, for example, the sound of an approaching car can come from a car you can see, but also from another car outside your field of view. Multisensory perception relies on multiple streams of unisensory input, which are processed independently in the periphery (e.g. auditory and visual inputs are detected by different sensory organs and transmitted to the cortex via separate pathways), and on a multisensory integration stage during which these streams must be integrated to create a unified representation of the world. In other cases the streams must be segregated into separate entities. In spite of recent advances, how and where this is done in humans is still unclear. In the lab we use advanced techniques and computational approaches to study this question. We recently studied how haptically and visually conveyed shape information is processed in the brain using a multisensory design of adaptation fMRI. This technique relies on the "repetition suppression effect", the decrease in activation when an experimental condition is repeated. By presenting the same object sequentially in the different modalities, one can detect areas that show a crossmodal repetition suppression effect. We identified a network of occipital (LOtv and the calcarine sulcus), parietal (aIPS) and prefrontal (precentral sulcus and the insula) areas, all showing a clear crossmodal repetition suppression effect. These results provide a crucial first insight into the neuronal basis of visuo-haptic integration of objects in humans.
In another study we applied a novel experimental design and a novel computational approach to tease apart unisensory and multisensory components of multisensory perception. We used a multisensory adaptation of the spectral analysis employed in retinotopy studies. In our paradigm, auditory and visual stimuli were delivered within the same experimental condition at different repetition rates. The rate at which the auditory and visual stimuli drifted in and out of synchronization defined a third, interaction frequency. Spectral analysis enabled us to determine the contribution of auditory, visual and interaction responses to a voxel's overall response. This was done within a single experimental condition, with auditory and visual stimuli presented at the same time, moving in and out of synchronization, in a manner similar to real-world multisensory experience. The results reveal a complex picture of auditory and visual processes in a multisensory context. Future studies will elaborate on these results, using more complex stimuli and different experimental and contextual conditions. For example, one can introduce a task in which active combination of auditory and visual stimuli is required. Another effort is the identification of the functional and effective connectivity between the areas identified above.
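The logic of the spectral analysis can be illustrated with a toy computation: a voxel's time course is Fourier-transformed and its amplitude is read out at the auditory, visual and interaction (beat) frequencies. The stimulation rates, run length and noise level below are arbitrary choices made for this sketch, not the parameters of the actual study.

    # Toy multifrequency spectral analysis of a single simulated voxel time course.
    # All parameters (TR, run length, stimulation rates, noise) are assumptions.
    import numpy as np

    TR = 2.0                          # sampling interval in seconds (assumed)
    n_scans = 256
    t = np.arange(n_scans) * TR
    run_length = n_scans * TR

    f_aud = 24 / run_length           # assumed auditory stimulation rate (24 cycles/run)
    f_vis = 20 / run_length           # assumed visual stimulation rate (20 cycles/run)
    f_int = abs(f_aud - f_vis)        # stimuli drift in/out of sync at the beat frequency

    # Simulated voxel responding to audio, to video, and to their synchronization.
    rng = np.random.default_rng(0)
    signal = (1.0 * np.sin(2 * np.pi * f_aud * t)
              + 0.8 * np.sin(2 * np.pi * f_vis * t)
              + 0.5 * np.sin(2 * np.pi * f_int * t)
              + 0.3 * rng.standard_normal(n_scans))

    freqs = np.fft.rfftfreq(n_scans, d=TR)
    amplitude = np.abs(np.fft.rfft(signal - signal.mean()))

    for name, f in [("auditory", f_aud), ("visual", f_vis), ("interaction", f_int)]:
        bin_idx = np.argmin(np.abs(freqs - f))
        print(f"{name:>11} component at {freqs[bin_idx]:.4f} Hz: amplitude {amplitude[bin_idx]:.1f}")

In the real analysis the same decomposition is applied to every voxel, yielding maps of auditory, visual and interaction contributions across the brain.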
Selected publications:
  • Hertz U, Amedi A. Disentangling unisensory and multisensory components in audiovisual integration using a novel multifrequency fMRI spectral analysis. NeuroImage 2010; 52: 617-632. [PDF]

  • Tal N, Amedi A. Multisensory visual–tactile object related network in humans: insights gained using a novel crossmodal adaptation approach. Experimental Brain Research 2009; Published online. [PDF]

  • Amedi A, Malach R, Hendler T, Peled S, Zohary E. Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience 2001; 4:324-330. [PDF]

    See also:
    Amedi A, Jacobson G, Hendler T, Malach R, Zohary E. Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex 2002; 12:1202-1212. [PDF]

Topographic Brain

We applied fMRI spectral analysis to investigate the somatotopic organization of the human brain. Using movements of 20 body parts, we were able to map whole-body representations at an unprecedented level of detail, in a very short time, and in individual subjects. This kind of mapping may prove useful in clinical settings and in the design of brain-machine interfaces. We found multiple motor-somatotopic maps, in both cortical and sub-cortical structures. The video depicts the progression of movements of different body parts and their representation in two major motor areas - the primary motor cortex (M1) and the supplementary motor area (SMA).

We applied fMRI spectral analysis to investigate the cochleotopic organization of the human cerebral cortex. We found multiple novel mirror-symmetric cochleotopic maps covering most of the core and high-order human auditory cortex, including regions considered non-cochleotopic, stretching all the way to the superior temporal sulcus. The video depicts the progression of tonal frequency sensitivity in the auditory cortex: the cortical response of the group to a rising tone chirp is displayed in white at successive sampling points. Note the striking mirror-symmetric pattern revealed in this tonal frequency progression movie. These maps suggest that topographical mapping persists well beyond the auditory core and belt, and that mirror symmetry of topographical preferences may be a fundamental organizational principle across sensory modalities.
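Both the somatotopic and the cochleotopic maps rest on the same phase-encoding ("traveling wave") logic: the stimulus cycles repeatedly through the mapped dimension (body parts moved in a fixed order, or a rising tone chirp), and each voxel's preferred stimulus is recovered from the phase of its response at the cycling frequency. The sketch below illustrates this for a single simulated voxel; the timing parameters are assumptions made for the illustration, not those of the published studies.

    # Toy phase-encoding ("traveling wave") analysis of one simulated voxel.
    # TR, run length and number of stimulus cycles are assumed for this sketch.
    import numpy as np

    TR = 2.0
    n_scans = 240
    n_cycles = 8                              # stimulus sweeps the full cycle 8 times per run
    t = np.arange(n_scans) * TR
    run_length = n_scans * TR

    # Simulated voxel responding most strongly 40% of the way through each cycle
    # (e.g. a mid-range tone, or a particular body part in the sequence).
    rng = np.random.default_rng(1)
    true_phase = 0.4 * 2 * np.pi
    voxel = (np.cos(2 * np.pi * n_cycles * t / run_length - true_phase)
             + 0.3 * rng.standard_normal(n_scans))

    # The Fourier component at the cycling frequency; its phase encodes the
    # voxel's preferred position within the stimulus cycle.
    component = np.fft.rfft(voxel - voxel.mean())[n_cycles]
    est_phase = np.mod(-np.angle(component), 2 * np.pi)

    print(f"estimated preferred position: {est_phase / (2 * np.pi):.2f} of the cycle "
          f"(simulated ground truth: {true_phase / (2 * np.pi):.2f})")

Applying this voxel by voxel yields the topographic maps: in the somatotopic case the recovered phase corresponds to a body part, and in the cochleotopic case to a preferred tonal frequency.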

Striem-Amit E, Hertz U, Amedi A. Extensive Cochleotopic Mapping of Human Auditory Cortical Fields Obtained with Phase-Encoding fMRI. PLoS ONE 2011; 6(3): e17832. [PDF]
Seeing with music and sound

[Image credit Michael Gluhoded, Shelly Levy-Tzedek and Amir Amedi]


By using both well-established sensory substitution devices, such as The vOICe (Meijer 1992), and novel devices developed in our lab, such as the EyeCane and the EyeMusic, our group of blind and sighted volunteers is learning to interact with visual information through other senses. They face fascinating challenges such as learning to read, to identify, locate and grasp objects, and more. They are learning to exploit the advantages of each device, such as the high resolution of The vOICe, the pleasant soundscapes of the EyeMusic and the depth information of the EyeCane.
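The basic principle shared by these visual-to-auditory devices can be sketched in a few lines: the image is swept column by column from left to right, with a pixel's vertical position mapped to pitch and its brightness to loudness (the EyeMusic additionally uses musical notes and timbres to convey color, which is omitted here). The frequency range, sweep duration and sample rate below are illustrative assumptions, not the actual parameters of The vOICe or the EyeMusic.

    # Simplified sketch of a visual-to-auditory soundscape conversion: left-to-right
    # column sweep, row -> pitch, brightness -> loudness. All numeric parameters are
    # assumptions made for this illustration.
    import numpy as np

    SAMPLE_RATE = 22050            # audio sample rate (Hz), assumed
    SWEEP_SECONDS = 1.0            # time to sweep one image, assumed
    F_LOW, F_HIGH = 500.0, 5000.0  # pitch range (Hz), assumed

    def image_to_soundscape(image):
        """Convert a 2-D array of brightness values in [0, 1] to a mono waveform."""
        n_rows, n_cols = image.shape
        samples_per_col = int(SAMPLE_RATE * SWEEP_SECONDS / n_cols)
        t = np.arange(samples_per_col) / SAMPLE_RATE
        # Top image rows get high frequencies, bottom rows low ones (log spacing).
        freqs = np.geomspace(F_HIGH, F_LOW, n_rows)

        columns = []
        for c in range(n_cols):
            tones = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
            columns.append(tones.sum(axis=0))
        wave = np.concatenate(columns)
        return wave / (np.abs(wave).max() + 1e-9)   # normalize to [-1, 1]

    if __name__ == "__main__":
        # A bright diagonal line on a dark background becomes a falling pitch sweep.
        soundscape = image_to_soundscape(np.eye(16))
        print(f"{soundscape.size} audio samples generated")
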
Additionally, we aim to create a training program that will allow potential users of these devices to train more efficiently, independently and easily in their use.