Neurophilosophy, or the philosophy of neuroscience, is the interdisciplinary study of neuroscience and philosophy that explores the relevance of neuroscientific studies to arguments traditionally categorized as philosophy of mind. The philosophy of neuroscience attempts to clarify neuroscientific methods and results using the conceptual rigor and methods of philosophy of science.
Localization of function is the claim that many cognitive functions can be localized to specific brain regions. An example of functional localization comes from studies of the motor cortex (Passingham, R. E., Stephan, K. E., & Kotter, R., "The anatomical basis of functional localization in the cortex", Nature Reviews Neuroscience, 2002, 3(8), 606–616). There seem to be different groups of cells in the motor cortex responsible for controlling different groups of muscles.
Many philosophers of neuroscience criticize fMRI for relying too heavily on this assumption. Michael Anderson points out that subtraction-method fMRI misses much of the brain activity that is relevant to a cognitive process (Anderson, "The Massive Redeployment Hypothesis and Functional Topography of the Brain", Philosophical Psychology, 2007, 20(2), 144–149). Subtraction fMRI shows only the differences between task activation and control activation, yet many of the brain areas activated in the control condition are clearly important for the task as well.
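To make the subtraction logic concrete, here is a minimal sketch using hypothetical activation values (not real fMRI data): the control map is subtracted from the task map, and a region that is strongly active in both conditions vanishes from the difference image even though it may be essential to the task.

```python
import numpy as np

# Hypothetical activation levels for three regions under two conditions.
regions = ["visual cortex", "motor cortex", "prefrontal cortex"]
task_activation    = np.array([5.0, 4.0, 2.5])  # e.g. reading words aloud (toy numbers)
control_activation = np.array([5.0, 0.5, 2.0])  # e.g. passive viewing (toy numbers)

# Subtraction method: only the *difference* between conditions survives.
difference_map = task_activation - control_activation

for name, diff in zip(regions, difference_map):
    print(f"{name}: difference = {diff:+.1f}")

# The visual cortex is highly active in both conditions, so its difference is ~0
# and it drops out of the subtraction image, even though visual processing is
# clearly necessary for the task -- Anderson's point in brief.
```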
A 2011 article published in the New York Times has been heavily criticized for misusing reverse inference, the inference from activation in a particular brain region to the engagement of a particular mental process (Hayden, B., "Do you Really love Your iPhone that Way", http://www.psychologytoday.com/blog/the-decision-tree/201110/do-you-really-love-your-iphone-way). In the study, participants were shown pictures of their iPhones while the researchers measured activation of the insular cortex. The researchers took insula activation as evidence of feelings of love and concluded that people loved their iPhones. Critics were quick to point out that the insula is not a very selective piece of cortex: it is activated in a wide range of mental states, and so is not amenable to reverse inference.
The neuropsychologist Max Coltheart took the problems with reverse inference a step further and challenged neuroimagers to give a single instance in which neuroimaging had informed psychological theory (Coltheart, M., "What Has Functional Neuroimaging Told Us about the Mind (So Far)?", Cortex, 2006, 42, 323–331). Coltheart takes the burden of proof to be met only by a case in which the brain-imaging data are consistent with one psychological theory but inconsistent with a rival theory.
Roskies maintains that Coltheart's ultra-cognitive position makes his challenge unwinnable (Roskies, A., "Brain-Mind and Structure-Function Relations: A Methodological Response to Coltheart", Philosophy of Science, 2009, 76). Since Coltheart maintains that the implementation of a cognitive state has no bearing on its function, it is impossible to find neuroimaging data that could bear on psychological theories in the way Coltheart demands. Neuroimaging data will always be relegated to the lower level of implementation, unable to adjudicate selectively between competing cognitive theories.
In a 2006 article, Richard Henson suggests that forward inference can be used to infer dissociations of function at the psychological level (Henson, R., "Forward Inference Using Functional Neuroimaging: Dissociations vs Associations", Trends in Cognitive Sciences, 2006, 10(2)). He suggests that such inferences can be made when there is a crossover interaction in activation between two task types across two brain regions, together with no difference in activation in a shared control region.
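Henson's criterion can be illustrated with a toy example. The values below are hypothetical, chosen only to show the qualitative pattern he has in mind: a crossover interaction across two regions with no difference in a shared control region.

```python
# Hypothetical mean activations (arbitrary units) measured during two task types.
#                       task A, task B
activation = {
    "region R1": (3.2, 1.1),   # responds more to task A
    "region R2": (0.9, 3.5),   # responds more to task B -> crossover
    "control":   (2.0, 2.0),   # no difference between the tasks
}

for region, (a, b) in activation.items():
    print(f"{region}: task A - task B = {a - b:+.1f}")

# The sign of the task difference reverses across R1 and R2 (a crossover
# interaction) while the shared control region shows no difference -- the
# pattern Henson argues supports a dissociation at the psychological level.
```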
The name "functional-connectivity" is somewhat misleading since the data only indicates co-variation. Still, this is a powerful method for studying large networks throughout the brain.
This raises two methodological questions: first, how should the brain regions that serve as network nodes be defined, and second, what mathematical techniques are best for characterizing the relationships between these brain regions?
The brain regions of interest are somewhat constrained by the size of the voxels: rs-fcMRI uses voxels that are only a few millimeters cubed, so brain regions will have to be defined on a larger scale. Two of the statistical methods commonly applied to network analysis can work at the single-voxel spatial scale, but graph-theoretic methods are extremely sensitive to the way nodes are defined.
Brain regions can be divided according to their cellular architecture, according to their connectivity, or according to physiological measures. Alternatively, one could take a "theory-neutral" approach and randomly divide the cortex into partitions of arbitrary size.
As mentioned earlier, there are several approaches to network analysis once the brain regions have been defined. Seed-based analysis begins with an a priori defined seed region and finds all of the regions that are functionally connected to that seed. Wig et al. caution that the resulting network structure gives no information about the inter-connectivity of the identified regions, or about how those regions relate to regions other than the seed.
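A minimal sketch of seed-based analysis on simulated time series (not real resting-state data): the seed's time course is correlated with every other region's time course, and regions exceeding an arbitrary threshold are reported as functionally connected to the seed. Note that, per Wig et al., the output says nothing about how the identified regions relate to one another.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_regions = 200, 6

# Simulated time series: region 0 is the seed; regions 1 and 2 share signal
# with the seed, the remaining regions are independent noise.
seed_signal = rng.standard_normal(n_timepoints)
data = rng.standard_normal((n_regions, n_timepoints))
data[0] = seed_signal
data[1] += 0.8 * seed_signal
data[2] += 0.6 * seed_signal

# Correlate the seed time course with every region's time course.
seed_map = np.array([np.corrcoef(data[0], data[i])[0, 1] for i in range(n_regions)])

threshold = 0.3  # arbitrary illustrative cutoff
connected = [i for i in range(1, n_regions) if seed_map[i] > threshold]
print("correlations with seed:", np.round(seed_map, 2))
print("regions functionally connected to the seed:", connected)
```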
Another approach is to use independent component analysis (ICA) to create spatio-temporal component maps; the components are then sorted into those that carry information of interest and those that are caused by noise. Wig et al. again warn that inferring functional brain-region communities is difficult under ICA. ICA also has the issue of imposing orthogonality on the data (Mumford et al., "Detecting network modules in fMRI time series: A weighted network analysis approach", NeuroImage, 2010, 52).
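The ICA approach can be sketched with scikit-learn's FastICA on simulated data; real fMRI pipelines operate on full four-dimensional images and involve preprocessing steps omitted here. The decomposition yields spatially independent component maps and their time courses, which a researcher would then sort into signal and noise.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 150, 500

# Simulate two spatial "networks", each occupying its own block of voxels,
# plus unstructured noise (a stand-in for the real time-by-voxel data matrix).
spatial_maps = np.zeros((2, n_voxels))
spatial_maps[0, :100] = 1.0
spatial_maps[1, 200:300] = 1.0
time_courses = rng.standard_normal((n_timepoints, 2))
data = time_courses @ spatial_maps + 0.2 * rng.standard_normal((n_timepoints, n_voxels))

# Spatial ICA: treat voxels as observations so the recovered sources are
# spatially independent component maps; the mixing matrix holds their time courses.
ica = FastICA(n_components=2, random_state=0)
estimated_maps = ica.fit_transform(data.T)   # shape (n_voxels, n_components)
component_timecourses = ica.mixing_          # shape (n_timepoints, n_components)

# A researcher would now inspect each map/time course and label it as a
# network of interest or as noise (motion, physiology, scanner artifact).
for k in range(2):
    top_voxels = np.argsort(np.abs(estimated_maps[:, k]))[-5:]
    print(f"component {k}: strongest voxels at indices {sorted(top_voxels.tolist())}")
```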
Graph-theoretic analysis uses a matrix to characterize the covariance between regions, which is then transformed into a network map. The problem with graph-theoretic analysis is that the resulting network map is heavily influenced by the a priori definition of brain regions and connections (nodes and edges). This places the researcher at risk of cherry-picking regions and connections according to their own preconceived theories. However, graph-theoretic analysis is still considered extremely valuable, as it is the only method that gives pair-wise relationships between nodes.
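A minimal graph-theoretic sketch, assuming simulated time series and an arbitrary correlation threshold: the region-by-region correlation matrix is binarized to define edges, and standard graph measures are computed on the result. Changing either the node definitions or the threshold can change the recovered network, which is the sensitivity noted above.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n_regions, n_timepoints = 8, 300

# Simulated regional time series with one built-in module (regions 0-3 share signal).
shared = rng.standard_normal(n_timepoints)
ts = rng.standard_normal((n_regions, n_timepoints))
ts[:4] += shared

# Node and edge definition: nodes are the a priori regions, edges are
# correlations that survive an (arbitrary) threshold.
corr = np.corrcoef(ts)
threshold = 0.3
adjacency = (np.abs(corr) > threshold).astype(int)
np.fill_diagonal(adjacency, 0)

G = nx.from_numpy_array(adjacency)
print("edges:", sorted(G.edges()))
print("degree per node:", dict(G.degree()))
print("average clustering coefficient:", nx.average_clustering(G))
```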
Mumford et al. hoped to avoid these issues by using a principled approach that can determine pair-wise relationships, a statistical technique adapted from the analysis of gene co-expression networks. While ICA may have the advantage of being a fairly principled method, it seems that using both kinds of method will be important for better understanding the network connectivity of the brain.
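The gene co-expression technique in question is weighted network analysis, whose characteristic move is to replace a hard threshold with a soft one: every pair of nodes keeps a continuous weight obtained by raising the absolute correlation to a power. The sketch below shows only that soft-thresholding step on simulated data; the exponent and the data are illustrative assumptions, not the authors' actual pipeline or parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n_regions, n_timepoints = 6, 300
ts = rng.standard_normal((n_regions, n_timepoints))
ts[:3] += 0.7 * rng.standard_normal(n_timepoints)   # a correlated trio of regions

corr = np.corrcoef(ts)

# Soft thresholding: instead of keeping or dropping edges at a cutoff, weight each
# edge by |r|**beta, which preserves pair-wise relationships while down-weighting
# weak, likely spurious correlations. beta = 6 is an illustrative choice only.
beta = 6
weighted_adjacency = np.abs(corr) ** beta
np.fill_diagonal(weighted_adjacency, 0)

print(np.round(weighted_adjacency, 3))
```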
There are three principal types of evidence in cognitive neuropsychology: associations, single dissociations and double dissociations (Patterson, K., & Plaut, D., "Shallow Draughts Intoxicate the Brain: Lessons from Cognitive Science for Cognitive Neuropsychology", 2009). Association inferences observe that certain deficits tend to co-occur; for example, many patients show deficits in both abstract and concrete word comprehension following brain damage. Association studies are considered the weakest form of evidence, because the results could be accounted for by damage to neighboring brain regions rather than damage to a single cognitive system (Davies, M., "Double Dissociation: Understanding its Role in Cognitive Neuropsychology", Mind & Language, 2010, 25(5), 500–540). Single dissociation inferences observe that one cognitive faculty can be spared while another is damaged following brain injury. This pattern indicates that (a) the two tasks employ different cognitive systems, (b) the two tasks occupy the same system and the damaged task lies downstream from the spared task, or (c) the spared task requires fewer cognitive resources than the damaged task. The "gold standard" of cognitive neuropsychology is the double dissociation: brain damage impairs task A but spares task B in Patient 1, while brain damage spares task A but impairs task B in Patient 2. It is assumed that a single instance of double dissociation is sufficient to infer that separate cognitive modules are involved in performing the two tasks.
Many theorists criticize cognitive neuropsychology for its dependence on double dissociations. In one widely cited study, Juola and Plunkett used a connectionist model to demonstrate that double-dissociation behavioral patterns can arise from random lesions of a single module (Juola & Plunkett, "Why Double Dissociations Don't Mean Much", Proceedings of the Cognitive Science Society, 1998). They created a multilayer connectionist system trained to pronounce words, repeatedly simulated random destruction of nodes and connections, and plotted the resulting performance on a scatter plot. The results showed deficits in irregular noun pronunciation with spared regular verb pronunciation in some cases, and deficits in regular verb pronunciation with spared irregular noun pronunciation in others. Such results suggest that a single instance of double dissociation is insufficient to justify an inference to multiple cognitive systems (Keren, G., & Schul, Y., "Two Is Not Always Better than One: A Critical Evaluation of Two-System Theories", Perspectives on Psychological Science, 2009, 4(6)).
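The logic of the lesioning demonstration can be conveyed with a deliberately simplified stand-in, not a reimplementation of Juola and Plunkett's model: a single undifferentiated linear network is fit to two arbitrary item sets at once, random lesions are applied repeatedly, and the direction of the resulting impairment is tallied. Opposite asymmetries across different simulated "patients" mimic a double dissociation despite there being only one system.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out, n_items = 30, 10, 40

# One undifferentiated "module": a single linear mapping fit to two item sets
# at once (a crude stand-in for the word-pronunciation network in the text).
X = rng.standard_normal((n_items, n_in))
Y = rng.standard_normal((n_items, n_out))
W = np.linalg.lstsq(X, Y, rcond=None)[0]          # weights of the intact network
set_A, set_B = np.arange(0, 20), np.arange(20, 40)

def performance(W_lesioned, items):
    """Negative mean squared error on an item set (higher = better)."""
    pred = X[items] @ W_lesioned
    return -np.mean((pred - Y[items]) ** 2)

a_worse = b_worse = 0
for _ in range(500):
    W_lesion = W.copy()
    mask = rng.random(W.shape) < 0.3               # knock out 30% of connections
    W_lesion[mask] = 0.0
    a, b = performance(W_lesion, set_A), performance(W_lesion, set_B)
    if a < b:
        a_worse += 1
    elif b < a:
        b_worse += 1

print(f"lesions impairing set A more: {a_worse}; impairing set B more: {b_worse}")
# Opposite asymmetries across different lesioned "patients" would look like a
# double dissociation, even though only one underlying system was ever lesioned.
```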
Chater offers a theoretical case in which double-dissociation logic can be faulty (Chater, N., "How Much Can We Learn From Double Dissociations", Cortex, 2003, 39, 176–179). If two tasks, A and B, use almost all of the same systems but differ by one mutually exclusive module apiece, then selectively lesioning those two modules would seem to indicate that A and B use entirely different systems. Chater uses the example of someone who is allergic to peanuts but not shrimp and someone who is allergic to shrimp but not peanuts, arguing that double-dissociation logic leads one to infer that peanuts and shrimp are digested by different systems. John Dunn offers another objection to double dissociation (Dunn, J., "The Elusive Dissociation", Cortex, 2003, 39(1), 21–37). He claims that it is easy to demonstrate the existence of a genuine deficit but difficult to show that another function is truly spared: as more data are accumulated, the estimated effect size for the supposedly spared function may converge toward zero, yet a small deficit greater than zero can never be statistically ruled out. It is therefore impossible to be fully confident that a given double dissociation actually exists.
On a different note, Alfonso Caramazza has given a principled reason for rejecting the use of group studies in cognitive neuropsychology (Caramazza, A., "On Drawing Inferences about the Structure of Normal Cognitive Systems From the Analysis of Patterns of Impaired Performance: The Case for Single Case Studies", 1986). Studies of brain-damaged patients can take the form either of a single case study, in which one individual's behavior is characterized and used as evidence, or of a group study, in which a group of patients displaying the same deficit have their behavior characterized and averaged. To justify grouping a set of patient data together, the researcher must know that the group is homogeneous, that is, that the patients' behavior is equivalent in every theoretically meaningful way. In brain-damaged patients this can only be established a posteriori, by analyzing the behavior patterns of all the individuals in the group. Thus, according to Caramazza, any group study is either equivalent to a set of single case studies or theoretically unjustified. Newcombe and Marshall pointed out that there are some cases in which grouping is relatively unproblematic (they use Geschwind's syndrome as an example), and that group studies might still serve as a useful heuristic in cognitive neuropsychological research (Newcombe & Marshall, "Idealization Meets Psychometrics: The Case for the Right Groups and the Right Individuals", in Human Cognitive Neuropsychology, edited by Ellis and Young, 1988).
Symbolic representational accounts have been famously championed by Fodor and Pinker. Symbolic representation means that objects are represented by symbols and processed through rule-governed manipulations that are sensitive to their constituent structure. The fact that symbolic representation is sensitive to the structure of the representations is a major part of its appeal. Fodor proposed the language of thought hypothesis, on which mental representations are manipulated in the same way that language is syntactically manipulated in order to produce thought. According to Fodor, the language of thought hypothesis explains the systematicity and productivity seen in both language and thought (Aydede, Murat, "The Language of Thought Hypothesis", The Stanford Encyclopedia of Philosophy, Fall 2010 Edition, Edward N. Zalta (ed.)).
Associationist representations are most often described in terms of connectionist systems. In connectionist systems, representations are distributed across all the nodes and connection weights of the system and are therefore said to be sub-symbolic (Bechtel & Abrahamsen, Connectionism and the Mind, 2nd ed., Malden, Mass.: Blackwell, 2002). It is worth noting that a connectionist system is capable of implementing a symbolic system. Several important aspects of neural nets suggest that distributed parallel processing provides a better basis for cognitive functions than symbolic processing. Firstly, the inspiration for these systems came from the brain itself, indicating biological relevance. Secondly, these systems are capable of storing content-addressable memory, which is far more efficient than memory searches in symbolic systems. Thirdly, neural nets are resilient to damage, while even minor damage can disable a symbolic system. Lastly, soft constraints and generalization when processing novel stimuli allow nets to behave more flexibly than symbolic systems.
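The content-addressability and damage-resilience claims can be illustrated with a classic Hopfield-style network; this is a generic sketch of the idea, not a model drawn from the cited literature. Patterns are stored in distributed Hebbian weights, a noisy cue retrieves a whole stored pattern, and deleting a sizeable fraction of the connections degrades recall only gradually.

```python
import numpy as np

rng = np.random.default_rng(5)
n_units, n_patterns = 100, 3

# Store binary (+1/-1) patterns in a single set of distributed Hebbian weights.
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
W = sum(np.outer(p, p) for p in patterns) / n_units
np.fill_diagonal(W, 0)

def recall(weights, cue, steps=10):
    """Iteratively settle the network state starting from a cue."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return state

# Content-addressable memory: a corrupted cue (20 of 100 units flipped)
# retrieves the complete stored pattern.
cue = patterns[0].copy()
flipped = rng.choice(n_units, size=20, replace=False)
cue[flipped] *= -1
print("match after recall from noisy cue:", np.mean(recall(W, cue) == patterns[0]))

# Graceful degradation: zero out 30% of the connections and recall is typically
# still close to perfect, unlike a symbolic lookup with a corrupted address.
W_damaged = W.copy()
W_damaged[rng.random(W.shape) < 0.3] = 0.0
print("match after recall with damaged weights:", np.mean(recall(W_damaged, cue) == patterns[0]))
```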
The Churchlands described representation in a connectionist system in terms of state space. The content of the system is represented by an n-dimensional vector, where n is the number of nodes in the system and the direction of the vector is determined by the activation pattern of the nodes. Fodor rejected this method of representation on the grounds that two different connectionist systems could not have the same content (Shea, N., "Content and Its Vehicles in Connectionist Systems", Mind and Language, 2007). Further mathematical analysis showed that connectionist systems containing similar content can be mapped graphically to reveal clusters of nodes that are important to representing that content (Laakso, A., & Cottrell, G. W., "Content and cluster analysis: Assessing representational similarity in neural systems", Philosophical Psychology, 2000, 13(1), 47–76). However, direct comparison of state-space vectors was not amenable to this type of analysis. Recently, Nicholas Shea has offered his own account of content within connectionist systems that employs the concepts developed through cluster analysis.
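The general idea behind the cluster-analysis approach can be sketched as follows, using simulated activation patterns rather than Laakso and Cottrell's actual networks: each network's representational "shape" is summarized by the pairwise distances among its activation vectors for a shared set of inputs, and two networks, even ones with different numbers of units, are compared by correlating those distance structures.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n_inputs = 12

# Hypothetical hidden-unit activation patterns for the same 12 inputs in two
# networks with different numbers of units (8 vs 20), sharing latent structure.
latent = rng.standard_normal((n_inputs, 4))
net1 = latent @ rng.standard_normal((4, 8)) + 0.1 * rng.standard_normal((n_inputs, 8))
net2 = latent @ rng.standard_normal((4, 20)) + 0.1 * rng.standard_normal((n_inputs, 20))

# State-space "shape": pairwise distances between activation vectors in each network.
d1, d2 = pdist(net1), pdist(net2)

# Similarity of content is assessed by correlating the two distance structures,
# which sidesteps the worry that the raw state-space vectors never match exactly.
r, _ = pearsonr(d1, d2)
print(f"correlation between the two networks' distance structures: {r:.2f}")
```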
Alternatively, some theorists choose to accept a narrow or a wide definition of computation for theoretical reasons. Pancomputationalism is the position that everything can be said to compute. This view has been criticized by Piccinini on the grounds that such a definition makes computation trivial, to the point where it is robbed of its explanatory value (Piccinini, G., "The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism", Philosophy and Phenomenological Research, 2010).
The simplest definition of computation is that a system can be said to be computing when a computational description can be mapped onto its physical description. This is an extremely broad definition, and it ends up endorsing a form of pancomputationalism. Putnam and Searle, who are often credited with this view, maintain that computation is observer-relative: if you want to view a system as computing, then you can say that it is computing. Piccinini points out that, on this view, not only is everything computing, but everything is computing in an indefinite number of ways (Piccinini, G., "The Mind as Neural Software? Understanding Functionalism, Computationalism, and Computational Functionalism", Philosophy and Phenomenological Research, 2010, 81). Since it is possible to apply an indefinite number of computational descriptions to a given system, the system ends up computing an indefinite number of tasks.
The most common view of computation is the semantic account. Semantic approaches employ a notion of computation similar to that of the mapping approaches, with the added constraint that the system must manipulate representations with semantic content. Note from the earlier discussion of representation that both the Churchlands' connectionist systems and Fodor's symbolic systems use this notion of computation; indeed, Fodor is famously credited with the slogan "no computation without representation" (Piccinini, G., "Computation in the Philosophy of Mind", Philosophy Compass, 2009, 4). Computational states can be individuated by an externalist appeal to content in the broad sense (i.e., objects in the external world) or by an internalist appeal to content in the narrow sense (content defined by the properties of the system) (Piccinini, Gualtiero, "Computation in Physical Systems", The Stanford Encyclopedia of Philosophy, Fall 2010 Edition, Edward N. Zalta (ed.)).
There are also syntactic or structural accounts of computation. These accounts need not rely on representation, although it is possible to use both structure and representation as constraints on the computational mapping. Oron Shagrir identifies several philosophers of neuroscience who espouse structural accounts. According to him, Fodor and Pylyshyn require some sort of syntactic constraint on their theory of computation, which is consistent with their rejection of connectionist systems on grounds of systematicity. He also identifies Piccinini as a structuralist, quoting his 2008 paper: "the generation of output strings of digits from input strings of digits in accordance with a general rule that depends on the properties of the strings and (possibly) on the internal state of the system" (Piccinini, "Computation without Representation", Philosophical Studies, 2008, 137(2)). Though Piccinini undoubtedly espouses structuralist views in that paper, he claims that mechanistic accounts of computation avoid reference to either syntax or representation. It is possible that Piccinini sees differences between syntactic and structural accounts of computation that Shagrir does not respect.
In his view of mechanistic computation, Piccinini asserts that functional mechanisms process vehicles in a manner sensitive to the differences between different portions of the vehicle, and thus can be said to compute in a generic sense. He claims that these vehicles are medium-independent, meaning that the mapping function will be the same regardless of the physical implementation. Computing systems can be differentiated on the basis of their vehicle structure, and the mechanistic perspective can account for errors in computation.
Dynamical systems theory presents itself as an alternative to computational explanations of cognition. These theories are staunchly anti-computational and anti-representational. Dynamical systems are defined as systems that change over time in accordance with a mathematical equation. Dynamical systems theory claims that human cognition is a dynamical system, in the same sense in which computationalists claim that the human mind is a computer (van Gelder, T. J., "The dynamical hypothesis in cognitive science", Behavioral and Brain Sciences, 1998, 21, 1–14). A common objection to dynamical systems theory is that dynamical systems are computable and therefore a subset of computationalism. Van Gelder is quick to point out that there is a big difference between being a computer and being computable: making the definition of computing wide enough to include dynamical models would in effect embrace pancomputationalism.
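For contrast with the computational accounts above, here is a minimal dynamical system in the sense just defined: a state that changes over time in accordance with a differential equation. The damped oscillator and its parameters are illustrative choices only; van Gelder's own favored example is the Watt centrifugal governor.

```python
import numpy as np

# Damped harmonic oscillator: dx/dt = v, dv/dt = -k*x - c*v
k, c, dt = 1.0, 0.2, 0.01
state = np.array([1.0, 0.0])        # initial position and velocity

def derivative(s):
    x, v = s
    return np.array([v, -k * x - c * v])

# Euler integration: the trajectory through state space *is* the system's
# behaviour -- no symbols, rules, or representations are invoked.
for step in range(3000):
    state = state + dt * derivative(state)
    if step % 1000 == 0:
        print(f"t={step * dt:5.1f}  position={state[0]:+.3f}  velocity={state[1]:+.3f}")
```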