2005 - PhD, ICNC (Interdisciplinary Center for Neural Computation), Hebrew University of Jerusalem.
2006 - Instructor of Neurology, Harvard Medical School.
2007 - Senior Lecturer, Hebrew University of Jerusalem.
2011 - Associate Professor of Medical Neurobiology, Hebrew University of Jerusalem.
Curriculum Vitae (Short format, updated July 2013)
Curriculum Vitae (Long format, updated December 2013)
Short bio sketch:
Amir is an internationally acclaimed brain scientist with 15 years of experience in the fields of brain plasticity and multisensory integration, with a particular interest in visual rehabilitation. He is an Associate Professor at the Department of Medical Neurobiology at the Hebrew University and the ELSC brain center; he received his PhD in Computational Neuroscience (ICNC, Hebrew University) and served as a Postdoctoral Fellow and Instructor of Neurology at Harvard Medical School. He has won several international awards and fellowships, including the Krill Prize for Excellence in Scientific Research from the Wolf Foundation (2011), the international Human Frontier Science Program Organization Postdoctoral Fellowship and later a Career Development Award (2004, 2009), and the JSMF Scholar Award in Understanding Human Cognition (2011), and was recently selected as a European Research Council (ERC) fellow (2013).
1. Large scale brain plasticity
2. Multisensory integration
3. Sensory substitution interfaces for the blind
* Understanding the neuronal basis of multisensory integration
* Understanding the neuronal basis of brain plasticity in the human brain
* Development of novel sensory substitution algorithms and devices
* Development of new training approaches for visual rehabilitation
My research focuses on blindness, which is both a limiting condition affecting many millions worldwide and a unique model for answering fundamental questions in cognitive neuroscience. My lab's work ranges from basic science, probing brain plasticity and sensory integration, to technological developments that allow the blind to be more independent and even “see” using sounds and touch, similar to bats and dolphins (via Sensory Substitution Devices, SSDs), and back to applying these devices in research. The central hypothesis of this work is that visual areas can process sound and touch to a similar extent as they process vision, but only when subjects learn to fully extract the relevant information encoded by these alternative senses. I propose that, with proper training, many (if not all) visual brain areas or networks can change the type of sensory input they use to retrieve behaviorally (task-)relevant information within a matter of weeks to months. I also suggest that visual-like selectivity might develop without any visual experience. If this is true, it could have far-reaching implications for clinical rehabilitation, the second major aim of my lab. To achieve this, we are currently developing several SSDs that try to encode the most crucial aspects of vision and to increase their accessibility to the blind, along with targeted, structured training protocols using these novel SSDs (or veteran SSDs like The vOICe), both in virtual environments and in real-life scenarios.
Finally, SSDs can also be used in conjunction with any invasive approach to visual rehabilitation. We propose developing a novel hybrid Visual Rehabilitation Device (VRD) that combines SSDs with visual prostheses, which at present are limited in resolution and rehabilitative power. In this VRD, the SSDs are used to train the brain to “see” prior to surgery, to provide explanatory signals after surgery, and to augment the prosthesis with information derived from the same image (e.g., adding color, depth, and increased resolution).
The following video from TEDxJerusalem explains the basic principles of our work and demonstrates the EyeMusic SSD. You are welcome to follow our YouTube channel for more cool examples here.