





Computational Neuroscience and Cognitive Models







To learn how cognition is implemented in the brain, we must build computational models that can perform cognitive tasks, and test such models with brain and behavioral experiments. Cognitive science has developed computational models of human cognition, decomposing task performance into computational components. However, its algorithms still fall short of human intelligence and are not grounded in neurobiology. Computational neuroscience has investigated how interacting neurons can implement component functions of brain computation.



However, it has yet to explain how those components interact to produce human cognition and behavior. Modern technologies enable us to measure and manipulate brain activity in unprecedentedly rich ways in animals and humans. However, experiments will yield theoretical insight only when employed to test brain-computational models. It is time to assemble the pieces of the puzzle of brain computation. Here we review recent work at the intersection of cognitive science, computational neuroscience, and artificial intelligence. Computational models that mimic brain information processing during perceptual, cognitive, and control tasks are beginning to be developed and tested with brain and behavioral data.





Understanding brain information processing requires that we build computational models that are capable of performing cognitive tasks. The argument in favor of task-performing computational models was well articulated by Allen Newell in his commentary "You can't play 20 questions with nature and win" in 1973.1 Newell was criticizing the state of cognitive psychology. The field was in the habit of testing one hypothesis about cognition at a time, in the hope that forcing nature to answer a series of binary questions would eventually reveal the brain's algorithms.


Newell argued that testing verbally defined hypotheses about cognition might never lead to a computational understanding. Hypothesis testing, in his view, needed to be complemented by the construction of comprehensive task-performing computational models. Only synthesis in a computer simulation can reveal what the interaction of the proposed component mechanisms actually entails, and whether it can account for the cognitive function in question. If we did have a full understanding of an information-processing mechanism, then we should be able to engineer it. "What I cannot create, I do not understand," in the words of physicist Richard Feynman, who left this sentence on his blackboard when he died in 1988.




Here we argue that task-performing computational models that explain how cognition arises from neurobiologically plausible dynamic components will be central to a new cognitive computational neuroscience. We first briefly trace the steps of the cognitive and brain sciences and then review several exciting recent developments that suggest that it might be possible to meet the combined ambitions of cognitive science (to explain how humans learn and think)2 and computational neuroscience (to explain how brains adapt and compute)3 using neurobiologically plausible artificial intelligence (AI) models.




In the spirit of Newell's critique, the transition from cognitive psychology to cognitive science was defined by the introduction of task-performing computational models. Cognitive scientists knew that understanding cognition required AI and brought engineering to cognitive studies. In the 1980s, cognitive science achieved important advances with symbolic cognitive architectures4,5 and neural networks,6 using human behavioral data to adjudicate between candidate computational models. However, computer hardware and machine learning were not sufficiently advanced to simulate cognitive processes in their full complexity. Moreover, these early developments relied on behavioral data alone and did not leverage constraints provided by the anatomy and activity of the brain.




With the advent of human functional brain imaging, scientists began to relate cognitive theories to the human brain. This endeavor started with electroencephalography (EEG),7 expanded with magnetoencephalography (MEG)8 and positron emission tomography (PET), and exploded with the invention of functional magnetic resonance imaging (fMRI). It came to be called cognitive neuroscience.

Cognitive neuroscientists began by mapping cognitive psychology's boxes (information-processing modules) and arrows (interactions between modules) onto the brain. This was a step forward in terms of engaging brain activity, but a step back in terms of computational rigor. Methods for testing the task-performing computational models of cognitive science with brain-activity data had not been conceived. As a result, cognitive science and cognitive neuroscience parted ways in the 1990s.

Cognitive psychology's tasks and theories of high-level functional modules provided a reasonable starting point for mapping the coarse-scale organization of the human brain with functional imaging techniques, including EEG, PET, and early fMRI, which had low spatial resolution. Inspired by cognitive psychology's notion of a module,11 cognitive neuroscience developed its own game of 20 questions with nature. A given study would ask whether a particular cognitive module could be found in the brain. The field mapped an ever-increasing array of cognitive functions to brain regions, providing a useful rough draft of the global functional layout of the human brain.

Brain mapping enables us to relate the performance of a task to activity all over the brain, using statistical inference techniques that account for the multiple testing across locations.12 As imaging technology advances, increasingly detailed patterns of selectivity can be mapped across the brains of humans and animals. In humans, fMRI affords up to whole-brain coverage at resolutions on the order of a millimeter; in animals, modern techniques, such as calcium imaging, can capture vast numbers of neurons with single-neuron resolution.
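To make the multiple-testing point concrete, below is a minimal sketch assuming only a vector of per-voxel p-values (all data here are simulated). It uses the Benjamini-Hochberg false-discovery-rate procedure as one common choice; the cited work may use other corrections, such as family-wise error control over random fields.

    import numpy as np

    def fdr_threshold(p_values, q=0.05):
        """Benjamini-Hochberg procedure: boolean mask of tests deemed significant."""
        p = np.asarray(p_values)
        n = p.size
        order = np.argsort(p)
        ranked = p[order]
        # Find the largest rank k with p_(k) <= (k/n) * q
        below = ranked <= (np.arange(1, n + 1) / n) * q
        mask = np.zeros(n, dtype=bool)
        if below.any():
            k = np.nonzero(below)[0].max()
            mask[order[:k + 1]] = True
        return mask

    rng = np.random.default_rng(0)
    p_vals = rng.uniform(size=10_000)              # null voxels
    p_vals[:50] = rng.uniform(0.0, 1e-4, size=50)  # simulated true activations
    active = fdr_threshold(p_vals, q=0.05)
    print(active.sum(), "voxels survive FDR control at q = 0.05")

Without such a correction, testing 10,000 voxels at p < 0.05 would be expected to yield roughly 500 false positives by chance alone.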




A brain map, at whatever scale, does not reveal the computational mechanism (Figure 1). However, mapping does provide constraints for theory. After all, information exchange incurs costs that scale with the distance between the communicating regions: costs in terms of physical connections, energy, and signal latency. Component placement is likely to reflect these costs. We expect regions that need to interact at high bandwidth and short latency to be placed close together.13 More generally, the topology and geometry of a biological neural network constrain its dynamics, and thus its functional mechanism. The literature on functional localization results, especially in combination with anatomical connectivity, may therefore ultimately prove useful for modeling brain information processing.
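To illustrate the cost argument, here is a small sketch that scores a hypothetical layout by its total wiring cost, modeled as bandwidth-weighted Euclidean distance. The coordinates and bandwidth matrix are invented; real cost functions would also include energy and latency terms.

    import numpy as np

    # Hypothetical layout: 3-D coordinates (mm) of four regions and the
    # bandwidth (arbitrary units) each pair of regions must exchange.
    coords = np.array([[ 0.,  0.,  0.],
                       [ 8.,  0.,  0.],
                       [ 0., 40.,  0.],
                       [50., 40., 10.]])
    bandwidth = np.array([[0., 5., 1., 0.],
                          [5., 0., 1., 0.],
                          [1., 1., 0., 4.],
                          [0., 0., 4., 0.]])

    def wiring_cost(coords, bandwidth):
        """Total cost if every unit of bandwidth pays per millimeter of wire."""
        dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        return 0.5 * (bandwidth * dist).sum()  # halve so each pair counts once

    print(f"wiring cost of this layout: {wiring_cost(coords, bandwidth):.0f}")
    # Moving strongly coupled regions closer together lowers this number,
    # illustrating the pressure thought to shape component placement.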

Modern meta-analysis techniques for brain imaging data enable us to go beyond localization of predefined cognitive components and learn about the way cognition is decomposed into component functions.14 The field has also gone beyond associating overall activation of brain regions with their involvement in particular functions. A growing literature aims to reveal the representational content of brain regions by analyzing their multivariate patterns of activity.
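As one concrete illustration of multivariate pattern analysis, the sketch below decodes a binary stimulus category from simulated multivoxel response patterns with a cross-validated linear classifier. The data, the strength of the category signal, and the choice of classifier are all illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_voxels = 200, 500
    labels = rng.integers(0, 2, size=n_trials)     # e.g., faces vs. houses
    patterns = rng.normal(size=(n_trials, n_voxels))
    patterns[labels == 1, :20] += 0.5              # weak category signal in a few voxels

    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, patterns, labels, cv=5)
    print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")

Above-chance cross-validated accuracy indicates that the activity patterns carry information about the stimulus category, even when overall regional activation does not differ between conditions.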

Despite methodological challenges,19,20 many of the findings of cognitive neuroscience provide a solid basis to build on. For example, the findings of face-selective regions in the human ventral stream21 have been thoroughly replicated and generalized.22 Nonhuman primates probed with fMRI exhibited similar face-selective regions,23 which had evaded explorations with invasive electrodes, because the latter do not provide continuous images over large fields of view. Localized with fMRI and probed with invasive electrode recordings, the primate face patches revealed high densities of face-selective neurons,24 with invariances emerging at higher stages of hierarchical processing, including mirror-symmetric tuning and view-tolerant representations of individual faces in the anterior-most patch.25 The example of face perception illustrates, on one hand, the solid progress in mapping the anatomical substrate and characterizing neuronal responses26 and, on the other, the lack of definitive computational models. The literature does provide clues to the computational mechanism. A brain-computational model of face recognition27 will have to explain the spatial clusters of face-selective units and the selectivities and invariances observed with fMRI28,29 and invasive recordings.

Cognitive neuroscience has mapped the global functional layout of the human and nonhuman primate brain.31 However, it has not achieved a full computational account of brain information processing. The challenge ahead is to build computational models of brain information processing that are consistent with brain structure and function and perform complex cognitive tasks. The following recent developments in cognitive science, computational neuroscience, and artificial intelligence suggest that this may be achievable.

  1. Cognitive science has proceeded from the top down, decomposing complex cognitive processes into their computational components. Unencumbered by the need to make sense of brain data, it has developed task-performing computational models at the cognitive level. One success story is that of Bayesian cognitive models, which optimally combine prior knowledge about the world with sensory evidence (see the worked sketch after this list).32,33,34,35 Initially applied to basic sensory and motor processes,35,36 Bayesian models have begun to engage complex cognition, including the way our minds model the physical and social world.2 These developments occurred in interaction with statistics and machine learning, where a unified perspective on probabilistic empirical inference has emerged. This literature provides essential computational theory for understanding the brain. In addition, it provides algorithms for approximate inference on generative models that can grow in complexity with the available data, as might be required for real-world intelligence.
  2. Computational neuroscience has taken a bottom-up approach, demonstrating how dynamic interactions between biological neurons can implement computational component functions. In the past two decades, the field developed mathematical models of elementary computational components and their implementation with biological neurons.40,41 These include components for sensory coding,42,43 normalization,44 working memory,45 evidence accumulation and decision mechanisms,46,47,48 and motor control.49 (A simulation sketch of evidence accumulation follows this list.) Most of these component functions are computationally simple, but they provide building blocks for cognition. Computational neuroscience has also begun to test complex computational models that can explain high-level sensory and cognitive brain representations.
  3. Artificial intelligence has shown how component functions can be combined to create intelligent behavior. Early AI failed to live up to its promise, because the rich world knowledge required for feats of intelligence could be neither engineered nor automatically learned. Recent advances in machine learning, boosted by growing computational power and larger data sets to learn from, have brought progress on perceptual,52 cognitive, and control challenges. Many advances were driven by cognitive-level symbolic models. Some of the most important recent advances are driven by deep neural network models, composed of units that compute linear combinations of their inputs, followed by static nonlinearities.55 (A minimal forward pass of such a model is sketched after this list.) These models employ only a small subset of the dynamic capabilities of biological neurons, abstracting from fundamental features such as action potentials. However, their functionality is inspired by brains and could be implemented with biological neurons.
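For item 1, a minimal worked sketch of a Bayesian update in the conjugate Gaussian case: a Gaussian prior is combined with a Gaussian likelihood by precision-weighted averaging, so the posterior is pulled toward whichever source is more reliable. The numbers and the perceptual framing are illustrative assumptions.

    import numpy as np

    def gaussian_posterior(prior_mean, prior_sd, obs, obs_sd):
        """Combine a Gaussian prior with a Gaussian likelihood (conjugate update)."""
        prior_prec, obs_prec = 1 / prior_sd**2, 1 / obs_sd**2
        post_prec = prior_prec + obs_prec
        post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
        return post_mean, np.sqrt(1 / post_prec)

    # Toy perceptual estimate: prior belief about an object's location vs. a noisy cue
    mean, sd = gaussian_posterior(prior_mean=0.0, prior_sd=2.0, obs=1.5, obs_sd=1.0)
    print(f"posterior: mean = {mean:.2f}, sd = {sd:.2f}")  # pulled toward the reliable cue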
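For item 2, the drift-diffusion model is a canonical instance of an evidence-accumulation mechanism: noisy evidence is integrated until it crosses one of two decision bounds, jointly producing a choice and a reaction time. The simulation below is a minimal sketch with invented parameter values.

    import numpy as np

    def drift_diffusion(drift=0.5, noise=1.0, bound=1.0, dt=0.001, rng=None):
        """Accumulate noisy evidence until one of two symmetric bounds is crossed."""
        rng = rng if rng is not None else np.random.default_rng()
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        return ("upper" if x > 0 else "lower"), t

    rng = np.random.default_rng(0)
    trials = [drift_diffusion(rng=rng) for _ in range(200)]
    upper = sum(choice == "upper" for choice, _ in trials)
    mean_rt = np.mean([t for _, t in trials])
    print(f"{upper}/200 upper-bound choices, mean decision time {mean_rt:.2f} s")

With a positive drift, most trials end at the upper bound, and harder decisions (smaller drift) take longer, reproducing the qualitative speed-accuracy pattern these models are used to explain.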
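For item 3, the following sketch spells out the unit computation described above: a linear combination of inputs followed by a static nonlinearity, stacked into layers. ReLU is chosen here as the nonlinearity, and the weights and layer sizes are arbitrary.

    import numpy as np

    def relu(z):
        """Static nonlinearity, applied elementwise."""
        return np.maximum(0.0, z)

    def layer(x, W, b):
        """One layer of units: linear combination of inputs, then a nonlinearity."""
        return relu(W @ x + b)

    rng = np.random.default_rng(1)
    x = rng.normal(size=4)                        # input features
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)
    print(layer(layer(x, W1, b1), W2, b2))        # two stacked layers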





The three disciplines contribute complementary elements to biologically plausible computational models that perform cognitive tasks and explain brain information processing and behavior (Figure 2). Here we review the first steps in the literature toward a cognitive computational neuroscience that meets the combined criteria for success of cognitive science (computational models that perform cognitive tasks and explain behavior) and computational neuroscience (neurobiologically plausible mechanistic models that explain brain activity). If computational models are to explain animal and human cognition, they will have to perform feats of intelligence. Machine learning and AI more broadly are therefore key disciplines that provide the theoretical and technological foundation for cognitive computational neuroscience.

The overarching challenge is to build solid bridges between theory (instantiated in task-performing computational models) and experiment (providing brain and behavioral data). The first part of this article describes bottom-up developments that begin with experimental data and attempt to build bridges from the data in the direction of theory.56 Given brain-activity data, connectivity models aim to reveal the large-scale dynamics of brain activation; decoding and encoding models aim to reveal the content and format of brain representations. The models employed in this literature provide constraints for computational theory, but they do not in general perform the cognitive tasks in question and, thus, fall short of explaining the computational mechanism underlying task performance.
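To illustrate the encoding-model idea mentioned above, here is a minimal sketch: a regularized linear map from stimulus features to a single simulated voxel response, evaluated by prediction accuracy on held-out stimuli. The feature set, the regularization strength, and the data are all illustrative assumptions, not a specific published pipeline.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n_stimuli, n_features = 300, 50
    features = rng.normal(size=(n_stimuli, n_features))  # model-derived stimulus features
    true_w = rng.normal(size=n_features)
    voxel = features @ true_w + rng.normal(scale=2.0, size=n_stimuli)  # simulated response

    X_tr, X_te, y_tr, y_te = train_test_split(features, voxel, random_state=0)
    enc = Ridge(alpha=10.0).fit(X_tr, y_tr)
    print(f"held-out R^2 = {enc.score(X_te, y_te):.2f}")

Comparing held-out prediction accuracy across different feature sets is one way such models are used to ask which representation best accounts for a region's responses, even though the encoding model itself does not perform the cognitive task.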