October 4, 2013 · Infra · Research · Academics

Can 'Deep Learning' Offer Deep Insights about Visual Representation? - Bay Area Vision Meeting

Deep network architectures and learning algorithms have a long and storied history in computational neuroscience going back to Fukushima (1980) and Selfridge (1959). What have we learned since these early efforts? In vision, deep learning algorithms are typically utilized to learn a mapping from image pixels to labels (object or scene category). Despite the impressive performance of these systems, it is not clear they have advanced our understanding of visual representation in the brain.

Here, Bruno Olshausen argues that in order to provide insight into the neural representations used in biological systems, we must consider a broader set of computational problems and neural architectures. Problems such as scene segmentation, separating reflectance from shading, recovery of 3D structure and heading from self-motion, and learning sensorimotor contingencies present difficult computational challenges to any visual system operating in a complex 3D environment.

Solving these problems will likely require richer network architectures and more structured representations than those encompassed by feedforward multilayer perceptrons. The development of deep learning algorithms that embrace and tackle these difficult challenges has much to offer neuroscience.
