Helen Wills Neuroscience Institute & School of Optometry
Director, Redwood Center for Theoretical Neuroscience
Computational models of sensory coding and visual perception
Each waking moment, our brain is bombarded by sensory information at an estimated rate of nearly one gigabit per second. Somehow, we make sense of this data stream by extracting the spatiotemporal structure embedded in it, and from this we build a model of the world containing objects, surfaces, and other information relevant for guiding action. The overarching goal of research in my laboratory is to understand how this process occurs in the brain, focusing especially on the thalamo-cortical system.
One major line of work is to develop probabilistic models of natural images, and to construct neural circuits capable of representing images in terms of these models. For example, we have developed a model of natural images based on the principle of sparse coding — in which the retinal image is explained in terms of a small number of events at any given point in time — and we have shown that the receptive field properties that emerge in such a system match those found in the primary visual cortex (V1) of mammals. The suggestion then is that V1 may be operating, at least in part, according to a similar principle. We are currently working on extending this model to learn invariances from natural image sequences, in addition to building models composed of multiple layers to capture the hierarchical structure of visual cortex.
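The core computation in a sparse coding model can be sketched numerically: given a dictionary of learned features, infer the small set of active coefficients that explains an image patch. The sketch below uses a random dictionary, a synthetic signal, and an ISTA (iterative soft-thresholding) solver; all of these are illustrative assumptions, not the laboratory's actual model or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: an (assumed) dictionary D of 64 features for 16-dim signals.
n, m = 16, 64
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary columns

# Synthesize a signal from 3 active coefficients (a sparse ground truth).
a_true = np.zeros(m)
a_true[rng.choice(m, size=3, replace=False)] = rng.standard_normal(3)
x = D @ a_true

def sparse_code(x, D, lam=0.1, n_iter=500):
    """Infer coefficients a minimizing 0.5*||x - D a||^2 + lam*||a||_1 (ISTA)."""
    L = np.linalg.norm(D, ord=2) ** 2    # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = a - grad / L                                       # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

a = sparse_code(x, D)
# The L1 penalty drives most coefficients to exactly zero, so the signal is
# explained by a handful of active units at a time.
print(np.count_nonzero(np.abs(a) > 1e-3), "active of", m)
```

The soft-threshold step is what enforces sparsity: it shrinks every coefficient toward zero and silences any that fall below the threshold, mirroring the idea that only a few "events" are active at any given time.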
Another goal of our work is to build physical computing and memory systems that work more like the brain. Current computing hardware revolves around exact, discrete representations consisting of 1’s and 0’s, with a central clock synchronizing all operations in the system. By contrast, the brain does most of its computation by manipulating analog signals (membrane voltages) asynchronously in continuous time, and it works all of its magic using remarkably little power – the human brain consumes just 20 watts! We are currently working with electrical engineers to develop methods for encoding image data and other natural signals efficiently, in a manner that can exploit the intrinsic analog storage properties of low-power, nanoscale memory devices. We are also developing new models of neural computation based on high-dimensional representations for building a holistic, internal representation of a scene from multiple fixations of the eye.
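The high-dimensional representation idea can be illustrated with a toy vector-symbolic sketch: bind each object's vector to a location vector (one per fixation), bundle the bound pairs into a single scene vector, and later query the scene by unbinding. The 10,000-dimensional bipolar encoding, the codebooks, and the three-item scene are illustrative assumptions, not the laboratory's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10_000                               # high-dimensional bipolar (+1/-1) vectors

def rand_vec():
    return rng.choice([-1, 1], size=d)

# Hypothetical codebooks: one random vector per object and per fixation location.
objects = {name: rand_vec() for name in ["cup", "book", "lamp"]}
locations = {name: rand_vec() for name in ["left", "center", "right"]}

# Bind (elementwise multiply) each object to its location, then bundle (sum)
# the pairs into one holistic scene vector.
scene = sum(locations[loc] * objects[obj]
            for loc, obj in [("left", "cup"), ("center", "book"), ("right", "lamp")])

# Query: what is at the "center"? Unbinding with the location vector yields the
# bound object's vector plus small crosstalk from the other pairs; a nearest-
# neighbor lookup in the object codebook recovers it.
probe = locations["center"] * scene
best = max(objects, key=lambda name: objects[name] @ probe)
print(best)  # prints "book"
```

Because binding and bundling keep everything in a single fixed-width vector, the same scene representation can absorb new fixations by adding more bound pairs, while the crosstalk between pairs stays small relative to the dimensionality.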
Vision Science 206D. Neuroanatomy and Neurophysiology of the Eye and Visual System
Structure and function of the neurosensory retina, photoreceptors, and retinal pigment epithelium (RPE), including the blood supply. Current concepts of the etiology and management of major retinal conditions. Overview of diagnostic techniques in retinal imaging, electrophysiologic testing, and new genetic approaches. Structure and function of the early visual pathway, including retinal ganglion cells, the optic nerves, the lateral geniculate nucleus, and the visual cortex. Pupillary responses. Specialization in the visual cortex.
Vision Science 212B. Visual Neurophysiology and Development
Introduction for graduate students. Visual pathways will be considered from the retina to the lateral geniculate nucleus to the visual cortex. Basic organization at each stage will be covered. The primary focus will be studies of receptive field characteristics and associated visual function. Development and plasticity of these same visual pathways will also be covered. Evidence and implications from controlled-rearing procedures and studies of abnormal visual exposure will be explored.
Vision Science 265. Neural Computation
This course provides an introduction to the theory of neural computation. The goal is to familiarize students with the major theoretical frameworks and models used in neuroscience and psychology, and to provide hands-on experience in using these models. Topics include neural network models, supervised and unsupervised learning rules, associative memory models, probabilistic/graphical models, and models of neural coding in the brain. See VS265.