On one hand, the "hardware" imposes serious constraints on the types of computations that are practical. For instance, space limits the number of fibers in the optic nerve running from the retina to the rest of the brain. Hence, this nerve cannot convey all of the information about each point in an image, only a summary of it. Moreover, the types of computation that the brain can perform are limited by developmental constraints. The genome has only tens of thousands of genes, which must specify, through development, a brain with hundreds of trillions of connections. Therefore, the computational structure of the brain cannot be arbitrarily designed but must obey epigenetic rules of development.
On the other hand, even if one knew all the connections in the brain and all of its electrochemical signals at every instant, one would still not understand what its computations are. To be useful, such knowledge would have to be accompanied by descriptions such as "this neural circuit computes the speed of motion" or "those cells normalize the neural responses." In other words, one would need a description of the task of each computation. Only after one understands these tasks and the neural limitations of the brain can one say that one understands what the brain is doing and why.
Consequently, the long-term goal of our laboratory is to develop a framework for understanding, from multiple perspectives, how the brain "sees." Achieving this goal requires that our research be multidisciplinary. Our techniques include electrophysiology, calcium-fluorescence imaging, light and electron microscopy, immunohistochemistry, high-performance liquid chromatography, psychophysics, and biophysical and computational modeling.
We have been applying these techniques to adult retinal circuits, to the development of retinal receptive fields, and to the perception of visual motion. More recently, we have begun to investigate whether the retina is optimized to perform certain computations underlying early vision. Relatedly, we have also begun studying visual perceptual learning from an optimization perspective.