ABSTRACT

Title of Dissertation: REPOSITIONING COGNITIVE KINDS

Aida Roige Mas, Doctor of Philosophy, 2022

Dissertation directed by: Distinguished University Professor Peter Carruthers, Department of Philosophy

This dissertation puts forward a series of theoretical proposals aimed at advancing our understanding of cognitive kinds. The first chapter introduces the general debates that provide the philosophical underpinnings for the topics addressed in each of the following chapters. Chapter two compares and distinguishes between modules of the mind and mechanisms-as-causings, arguing that they should not be conflated in cognitive science. Additionally, it provides a novel "toolbox" model of accounts of mechanisms, and discusses what makes any such account adequate. Chapter three addresses the question of whether there is a role within the new mechanistic philosophy of science for representations. It advances a proposal on how to carve working entity types, so that they may include representational explanans. Chapter four offers an account of mental disorders, one that captures the regulative ideal behind psychiatry's inclusion of certain conditions as psychopathologies. Mental disorders are alterations in the production of some mental outputs (e.g. behaviors, beliefs, emotions, desires), such that their degree of reasons-responsiveness is extremely diminished with respect to what we would folk-psychologically expect it to be.
REPOSITIONING COGNITIVE KINDS

by Aida Roige Mas

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2022

Advisory Committee:
Professor Peter Carruthers, Chair
Professor Lindley Darden
Professor Eric Saidel
Professor Harjit Bhogal
Professor Bob Slevc

© Copyright by Aida Roige Mas 2022

Dedication

For my parents and àvia

Acknowledgements

I am deeply grateful and indebted to many for their support throughout the years, not only throughout the writing of this dissertation, but throughout my life more generally. I am profoundly indebted to my parents, who always believed in me and encouraged me to pursue my path even if it was out of the ordinary. Also to my beloved àvia, to whom I dedicate this dissertation, as I promised her in her last days. I am most indebted to Peter Carruthers, who is not only a brilliant philosopher and cognitive scientist from whom I learned a lot and who helped me tremendously in this project, but also the best supervisor I could have ever asked for. I don't think there is any mentor more kind, diligent, insightful and dedicated than Peter. I am very fortunate that he guided me so generously throughout this dissertation and my career, and I have accumulated a profound debt of gratitude to him. I owe many thanks as well to Lindley Darden, who has in effect come to be another advisor to me. Lindley has been incredibly supportive and encouraging of my work, revising and commenting on it countless times, and guiding me throughout with her knowledge of philosophy of science and biology. I am deeply indebted to both of them for my intellectual and professional development. Many thanks are owed as well to Eric Saidel, with whom I had many stimulating discussions and who has provided insightful comments on my work.
I am also indebted to Harjit Bhogal and Bob Slevc, from whom I learned much and whom I am very grateful to count as committee members. At the University of Maryland, College Park, I was fortunate to find an engaging and nurturing intellectual community. Many thanks to Sam Kerstein for introducing me to biomedical ethics and for his support more generally. To Georges Rey, for many enjoyable conversations full of insights. Special thanks to Andrew Fyfe for his companionship throughout our shared doctoral journey. To him, Christopher Masciari, Julia Janczur, and Heather Adair, whom I am lucky to call friends. I am grateful as well to the many faculty members, colleagues and fellow graduate students from whom I benefited during my graduate career, including Dan Moller, Elizabeth Schechter, Christopher Morris, Aiden Woodcock, Julius Schöenherr, Mike McCourt, Lia Curtis-Fine, Shen Pan, Kyley Ewing, Cody Gomez, Evan Westra, Jeremiah Tillman, Xintong Wang, Zhaoqi Hu, Ken Glazer, Louise Gilman, and many others. I also thank the DC History and Philosophy of Biology (DCHPB) group (including Makmiller Pedroso, Joan Straumanis, Kalewold Kalewold, and other past and current members), where I found a vibrant community. I am also deeply grateful to those who encouraged me during my academic journey, including faculty and colleagues at Universitat Autònoma de Barcelona (especially Daniel Quesada, Thomas Sturm, Josep Manuel Udina, and Èric Arnau Soler) and CSIC (Mario Toboso and Fanny Brotons). I also have countless people to thank for their comments as I have presented my ideas at conferences, workshops, reading groups and works-in-progress sessions. A Fulbright-Spain fellowship supported me during my first two years in the program and allowed me to take extra courses. Last, but not least, I would like to give many heartfelt thanks to Andrés, for his patience, love and support throughout the making of this dissertation.
Thanks as well to the rest of my family, especially Ferran, Sergi, padrí, and in loving memory of those who, unfortunately, passed away while I was away.

Table of Contents

Dedication
Acknowledgements
Table of Contents
Chapter 1: Introduction
  1. How is the mind/brain composed? Two sorts of mechanisms in cognitive science
  2. Can the New Mechanistic Philosophy of Science find a role for representations?
  3. A material girl in a normative world: an account of mental disorders
Chapter 2. How is the mind/brain composed? Two sorts of mechanisms in cognitive science
  1. Introduction
  2. Modules or mechanisms-as-systems
  3. An illustration of a mechanism-as-system: face recognition
  4. Mechanisms-as-causings
  5. An illustration of a mechanism-as-a-sort-of-causing: placebo analgesia
  6. The relationship between mechanisms-as-systems and as-causings
  7. How to treat work in progress
Chapter 3: Can the New Mechanistic Philosophy of Science find a role for representations?
  1. Introduction
  2. Representations in cognitive science
  3. Fitting representations in the MDC account
  4. Working-entity-hood
  5. Getting it ARIght
  6. Conclusion
Chapter 4: A material girl in a normative world: the extreme reasons-irresponsiveness account of mental disorders
  1. Introduction
  2. Two questions about mental disorders
  3. What makes a disorder mental?
  4. Considerations favoring my account of mental disorders
  5. Why does psychiatry restrict itself to disorders of reasons-responsiveness?
  6. Conclusion
Bibliography
Chapter 1: Introduction

One of the more useful distinctions in philosophy is that of tokens versus types: we distinguish concrete particulars from general sorts of things. Science is mostly in the business of providing explanations and predictions of general (as opposed to particular) phenomena, and it employs general taxonomical categories ("types" or "kinds") to do so. However, in cognitive science, what exactly can we say about those kinds? And what makes some such categories better than others for induction and explanation? Rather than offering abstract, general recipes that don't capture the particularities of the cognitive domain, I believe that the work that remains to be done lies closer to the scientific ground on which the discipline develops. Rather than providing a general account of kind-hood for the cognitive sciences, I believe we should answer this question on a case-by-case basis. Thus, my dissertation aims to provide three different —but related— contributions to debates involving cognitive kinds. The first paper distinguishes between two sorts of kinds that produce cognitive phenomena: modules of the mind ("systems" or "mechanisms") and mechanisms-as-causes. The second paper provides an account of how to carve out a mechanism's working entities such that these may include representational ones. The third paper provides a novel account of what sort of thing mental disorders are.

The aim of this introduction is to sketch the philosophical debates in which these three papers are situated. After providing the relevant philosophical background, I will briefly summarize the thesis of each paper as well as the contribution it makes to those debates.

1. How is the mind/brain composed?
Two sorts of mechanisms in cognitive science

The question of mental architecture, or of what the underlying structure of the human mind/brain consists in, has involved at least two major debates.[1] The first debate concerns the type of processing by which the mind/brain converts inputs into outputs: classicists (e.g. Chomsky, Fodor, Pylyshyn) considered that this was done via symbol manipulation —analogous to symbolic computation in digital computers. Meanwhile, connectionists (e.g. Hinton, McClelland) held that this processing occurred via dynamic, parallel, and patterned activity in networks that connect simple processing devices; a proposal they saw as more biologically plausible (Dawson, 1998).[2]

[1] In these papers (and more generally), I assume that some version of physicalism is true.
[2] Arguably, a third contender appeared later on: embodied and extended cognitive science, which emphasized the role of acting and interacting in the world in information processing.

The debate around the type of cognitive processing slowed as two things became clear: first, that classical architectures could be implemented by neural networks and vice-versa; and second, that the mind/brain could involve more than one type of information processing, and that indeed it seemed to do so for different information-processing tasks. This debate between classicists and connectionists paved the way for a second debate: one concerning the functional architecture of the mind/brain. It was, and still is, common for cognitive scientists to talk about the "modules," "systems," or "mechanisms" composing the mind/brain: long-term memory, face recognition, visual perception, and so on. This approach of breaking down the mind/brain into its functional components is informed by faculty psychology, and has as a precedent Franz Gall's phrenology (1835).
Both faculty psychology and Gall viewed the mind as compositional: i.e., as something that could be best understood when explained in terms of separate functions, powers, or faculties. In philosophy of cognitive science, a major point of contention during the 1990s and 2000s was the extent, and the characteristics, of modules of the mind. Regarding the extent of mental modules, Jerry Fodor (1983) held that only the peripheral systems of the mind were modular: that is, only the input (perception and language) and output (action) systems were modular, while central cognition (higher-order processing) was not. Modular systems, according to him, are informationally encapsulated: they cannot rely on information held elsewhere in the mind during the course of their processing. In contrast, central cognition is isotropic: it has access to all domains and can potentially use any relevant available information. Fodor's view was countered by many authors who argued for massive modularity. Massive modularity entails that the entirety of the mind/brain is modular. For this to be possible, information encapsulation (as Fodor described it) had to go. Many authors also disputed other aspects of Fodor's characterization of modules: content domain specificity, strong localizability, shallow outputs, innateness, fast processing and automaticity (see e.g. Barrett & Kurzban, 2006; Carruthers, 2006; Coltheart, 1999). Peter Carruthers provided, in his 2006 book, what I consider to be the correct account of modules —or at least, the closest to the notion of modularity actually used by cognitive scientists.
He characterized modules as "isolable function-specific processing systems, all or almost all of which are domain specific (in the content sense), whose operations aren't subject to the will, which are associated with specific neural structures (albeit sometimes spatially dispersed ones), and whose internal operations may be inaccessible to the remainder of cognition" (Carruthers, 2006, p. 12). He distinguished between narrow-scope information encapsulation (Fodor's) and wide-scope information encapsulation —which is true of a system if it has access to some, but not all, exogenous information during the course of its operations. Carruthers (2006) also argued that comparative psychology, alongside evolutionary considerations, makes the most plausible case for the massive modularity of mind hypothesis.

Independently of these developments, philosophers of science were identifying a plurality of types of explanation in the sciences. New Mechanists argued that explanations in certain special sciences were in fact mechanistic, a notion which they aimed to elucidate. At a minimum, mechanistic explanations work by modelling the mechanism in the world causally responsible for the phenomenon of interest. Following Krickel (2019) and Nicholson (2012), I will classify their proposed accounts of mechanism into two sorts:

- Mechanisms-as-systems: mechanisms that consist of stable arrangements, structures, and interacting parts such that their combined operation produces predetermined outcomes. These will include early accounts of mechanisms by Stuart Glennan (1996, 2002, 2010), William Bechtel (2008a, 2008b), Bechtel and Robert C. Richardson (1993), and Bechtel and Adele Abrahamsen (2005).

- Mechanisms-as-causes: mechanisms that aren't object-like but process-like, in the sense that their operation is a manifestation of the causal processes involving several entities that act and interact. These will include the accounts of Peter Machamer et al.
(2000), Phyllis Illari and Jon Williamson (2012), and recently Glennan (2017).

Recent debates in the literature involving New Mechanists have been about the suitability (or lack thereof) of one or another account to a certain (sub-)discipline within the special sciences. For example, does Machamer et al.'s (2000) account capture the mechanisms involved in explanations in evolutionary biology? (Skipper & Millstein, 2005). What about neuroscience? (Craver, 2007; Chemero & Silberstein, 2008). The assumption seems to be that if any such account is successful, it is so because it captures the specific type of explanation deployed in a given subdiscipline. At this point, it is worth noting that debates about modularity and debates about mechanisms are related, even if I haven't seen them connected in the literature. Insofar as New Mechanists assume that the mind/brain is composed of mechanisms, are they providing a genuine alternative to modules as functionally dissociable units, or are they presenting modules under a different guise? If the latter, which notion of modularity were they employing? Similarly, do discussions of modularity give support to one account of mechanism over the others? And do all explanations of cognitive phenomena rely on modules, or on mechanisms? Chapter 2, "How is the mind/brain composed? Two sorts of mechanisms in cognitive science," aims to bridge this gap, providing an answer to these questions. I argue that post-Fodorean modules of the mind (such as those characterized in Carruthers' account) can be identified with mechanisms-as-systems (such as those characterized in Bechtel's account), since both families of views place similar conditions on modules/mechanisms being functionally dissociable, stable architectural components of the mind/brain.
However, not all explanations of cognitive phenomena involve identifying a module (or modules) performing its proprietary function: some cognitive phenomena are the causal product of the workings of different systems in ways that go beyond their proper functions. In other words, some cognitive phenomena are produced by mechanisms-as-causes, and as such, their production does not entail the existence of a stable, dedicated system for their production. Moreover, I argue that at early stages of research, scientists should use the "minimal notion" of mechanism, which allows one to cash out partial explanations in mechanistic terms without carrying over metaphysical commitments on what the target mechanism is like.

This paper makes a contribution to at least two different debates: (1) the debates on the modularity of mind, and (2) the debates on what is the "best" account of mechanisms for a given scientific discipline. I address (1) by highlighting the minimal conditions a system must meet in order to be a "module," and pointing out how these overlap with the conditions proposed by mechanisms-as-systems accounts; and (2) by arguing that the proprietary domain of a discipline may involve the action of more than one sort of mechanism, so that it is misguided to favor a single mechanistic account as the blanket answer to explanations in that domain.

2. Can the New Mechanistic Philosophy of Science find a role for representations?

Central to philosophy of mind has been the mind-body problem: the question of how mind and body are (and can be) related and how they affect (and can affect) one another. Most contemporary authors favor physicalism —the thesis that only physical particulars exist, such that a physical duplicate of this world would be a duplicate simpliciter. Within physicalism, there is a debate about the ontological status of putative mental kinds: are they reducible to underlying physical ones, or, on the contrary, do they constitute an autonomous domain?
The position that mental kinds are reducible to physical ones is known as "reductive physicalism." Two objections have traditionally been raised against reductive physicalism. First, the absence of bridge laws connecting the mental to the physical suggests that one domain is not reducible to the other (Davidson, 1970). Second, reductive physicalism entails that creatures with different brains cannot have the same mental kinds as we do. So, if mental states can be multiply realized, reductive physicalism is false (Putnam, 1967; Block & Fodor, 1972). Within non-reductive physicalist theories, a popular position is psycho-functionalism. Psycho-functionalism can be construed as the conjunction of two theses: that the relevant ontological kinds are those posited by the best cognitive scientific theories, and that functionalism is true. Since the first tenet is self-explanatory, I will elaborate on the second. Functionalism is the view that what makes a mental state, event, process or property[3] (for simplicity, I will just talk of "states" in what follows) the kind of mental state it is (e.g. pain) is its functional role —that is, its causal profile— in the system of which it is a part. An internal state of an individual is an instance of a type of mental state if, given a certain input, it performs the relevant causal role in relation to other states of the nervous system, and is causally efficacious in contributing to the subsequent behavior of the organism that possesses it (Putnam, 1975). It is worth clarifying that mental states are not abstracta,[4] according to the functionalist. When one is characterizing things by their functional role, one is not describing nonmaterial entities —but merely omitting[5] certain implementational details from the characterization. Items with the property of having a certain functional role (kind instances) also have physical properties.[6] Although mental states are recognized, in part, by the behavior to which they are a contributing cause, they are not identical to that behavior. Likewise, a state may be caused by, and play, certain causal role(s), but it is not just these causal role(s). Functionalism entails multiple realizability —the view that a single mental kind can, in principle, be implemented or realized by multiple distinct physical kinds. According to functionalism, there may be one-to-many relations between mental and physical kinds. Although psycho-functionalism does not, in principle, entail that the mind is representational (that is, that it contains intentional items that are about or refer to things), in practice its adoption involves accepting that the mind contains representations. This is because many successful theories in cognitive science involve explanations and generalizations ranging over representational kinds.

[3] Functionalism doesn't have a commitment as to whether states, processes, events, properties, etc. are the correct kinds or metaphysical units in the mind/brain. The idea is the same regardless of metaphysical unit: what makes a process/event/state an instance of a given kind is the role it plays in the system of which it is part.
[4] Here I am referring to the distinction between concrete vs abstract things. Concrete things are those that are located in space and time, or —for an alternative characterization— that have causal powers (e.g. an atom, this book). In contrast, abstract stuff does not have a spatiotemporal location, and couldn't possibly be physical. Others define abstracta as what lacks causal powers (i.e. can't cause anything) (e.g. the number 7; English) (see Rey, 1997; Falguera et al., 2022).
[5] Abstraction or omission of details to characterize a kind is pervasive in the natural sciences, and if properly conducted is not problematic.
Historically, some of the most successful theories in cognitive science have been developed from classical approaches, which emphasize the manipulation of physical symbols or representations according to some rules. Even those deriving from connectionist and embodied-extended approaches are representational (Dawson, 2013; Calvo & Gomila, 2008). Yet, not everyone is on board with granting the reality of functional and representational kinds. Some authors view theories involving representations in cognitive science as suspicious, or even worse, deficient. For example, Gualtiero Piccinini and Carl Craver (2011) argued that functional (and other non-mechanistic) explanations in cognitive science are a temporary "patch," a product of our present ignorance of the underlying mechanisms operating in cognitive phenomena, which will inevitably dissipate as neuroscience becomes more and more integrated with psychology and other "higher-level" branches of cognitive science. Since Craver and Piccinini use functional kinds in their mechanistic models (e.g. "neurotransmitter", "selection pressure"), but not representational ones, it is to be supposed that what they feel uneasy about is that such explanations involve representations.

[6] Note that functionalism is compatible with interactionist dualism. However, most functionalists are physicalists (they think that there is nothing "over and above the physical", and that a physical duplicate of this world would be a duplicate simpliciter of this world), so I will just talk about physicalist functionalism in what follows.

Carl Craver is one of the main articulators of the account of mechanisms that came to be known as "MDC", according to which:

Mechanisms are entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions (Machamer et al., 2000, p. 1).
So, one might ask: does the MDC account require that the kinds involved in cognitive scientific explanations are neuronal, or otherwise physical? A positive answer would situate MDC on the reductive physicalist side of the mind-body problem. To my knowledge, the only attempt to answer this question (Krickel, 2019) did not arrive at a convincing conclusion for either view. So, in this paper, I amend (or maybe interpret) MDC to be compatible with representationalism. I do so by developing an account of which properties determine whether an individual is an instance of a certain working-entity kind, which I call the "ARI" account —for Activity-Enabling, Robust and Individuating properties.

My paper situates the mechanistic account provided by MDC within the reductive vs non-reductive physicalism debate. Also importantly, it makes a contribution to the question of which properties determine working-entity status. That is, what makes certain entities suitable to be part of a mechanism producing a phenomenon, and what excludes other entities from this categorization? In so doing, I answer Franklin-Hall's (2016) carving problem: the question of how to carve a mechanism's components in a way that is appropriate, non-arbitrary, and non-gerrymandered.

3. A material girl in a normative world: an account of mental disorders

Cognitive science is the discipline that systematically studies information-processing systems such as the human mind/brain. There is a related discipline that also studies the human mind/brain, but its goal is not merely to gain knowledge, but also to promote mental health: psychiatry. Psychiatry studies, classifies, and researches causes of, as well as possible avenues for intervention in, mental disorders. Yet, few taxonomical systems or kinds have had the assumption of their realism questioned as vehemently as that of mental disorders. What sort of things are mental disorders? This question can be decomposed into three more nuanced ones.
The first one asks about the metaphysical nature of mental disorders: are they natural kinds, socially constructed kinds, broken normal kinds, pragmatic kinds, etc.? The second asks about what makes them disorders, as opposed to non-pathological features of persons. The third asks what makes them mental disorders, as opposed to, say, neuronal or somatic disorders. My paper addresses the third question. What distinguishes the mental from the physical? In the philosophy of mind, there have been various attempts to find a feature or set of features that all mental states, processes, properties and so on have, and that all non-mental stuff lacks. These feature(s) are sometimes called "the mark(s) of the mental" or criteria for mentality. The challenge is to provide such "mark(s)" in a way that does not immediately presuppose the truth of a particular stance on the mind-body problem. There is still an ongoing debate about this, but two features have continued to be discussed since they were first proposed. First, phenomenal consciousness: the property mental states have when there is something it is like to undergo them (Nagel, 1974). A problem for this proposal is that there are some mental states, such as standing beliefs or subliminal perceptions, that aren't conscious. Second, intentionality: the property of being about something, of being directed at something, of standing for something (Brentano, 1874). A difficulty for this proposal is that some qualitative states (e.g. tickles, pains) are mental states but they don't seem to be about anything (although representationalists have made a convincing case that those states also have content). Another difficulty is that it seems that some non-mental stuff can be intentional too: for instance, tree rings stand for a tree's age, and this introduction is about my dissertation. There are also other proposals, such as direct access, incorrigibility, and transparency, but I won't discuss them here.
The way psychiatrists have approached the mental, however, does not seem to be informed by those philosophical debates. Many conditions that cause distress and/or impairment affect phenomenal consciousness and intentionality, yet they are not mentioned in the Diagnostic and Statistical Manual of Mental Disorders (DSM) —most notably, those affecting perception such as prosopagnosia, blindsight or phantom limb syndrome, but also some affecting other cognitive functions, such as Alzheimer's, Lou Gehrig's disease, or migraines. The notion of the "mental" they use isn't characterized negatively either, that is, by capturing what does not have a known physical cause: Down syndrome is a mental disorder despite having a known physical cause (a third copy of chromosome 21), and so is narcolepsy, which is caused by hypocretin deficiency. The inclusion and exclusion of disorders in the DSM follows a complicated collective decision-making process, in which social, political, technological, economic and pragmatic considerations play important roles. It is plausible that the disorders currently in the DSM simply don't form a systematic, coherent whole, and that there are exceptions to any characterization of mental disorders —since there are no consistently applied criteria, metaphysical or epistemic, for something to be a mental disorder. Despite this, I think it is possible to provide a general characterization of mental disorders that captures the paradigmatic conditions listed in the DSM, drawing on a folk-psychological conception of the mind as containing reasons-responsive mechanisms. A mental disorder will thus be a significantly diminished (compared to a non-disordered person) degree of reasons-responsiveness in the mechanisms producing certain intentional states, actions and/or emotions.
Some clarifications: I don't take folk psychology to be true (also, there isn't a single folk psychology but folk psychologies, since some mental concepts are acquired through socialization and acculturation). I consider the "mental", so understood, to be a generally useful fiction produced by the mind-reading system. If the "mental" vs "non-mental" distinction had some sort of interest-independent reality, we would probably fail to get the division right, given the large amount of evidence that we interpret, and to a certain degree confabulate, the reasons behind our actions and those of others (Carruthers, 2011). I also hold the view that all mental phenomena are explainable causally. Moreover, this view of "the mental" entails that what falls within its domain changes through time, to the extent that reasons-discourse and normative standards evolve alongside socio-cultural factors. In philosophy, reasons-responsiveness has received the most attention in ethics. In a Frankfurtian compatibilist spirit, Fischer and Ravizza (1998) have an account of moral responsibility according to which a person is morally responsible for her behavior if she has guidance control over it, that is, if the mechanism producing these behaviors is responsive to reasons. Whatever the merits of this view in capturing moral responsibility, I use it as the basis for an account of what makes certain disorders mental (as opposed to neurological or somatic conditions). Many mental disorders can be characterized by such a lack of reasons-responsiveness: the severity of mood disorders, for instance, depends on the degree to which one's mood is unresponsive to the things we consider to provide reasons to change it. (Relatedly, an exclusionary factor for depression diagnoses is whether the person has an appropriate reason for experiencing its symptoms, such as the death of a spouse, in which case the persistent symptoms would not indicate a failure of reasons-responsiveness.)
A person with arachnophobia has an unfitting fear of spiders that is resistant to the sort of convincing that would modulate another person’s fear. Delusions are resistant to evidence. This doesn’t mean the mechanisms responsible for these states cannot be modulated: maybe they can, but avenues for intervention would likely depend on causal, as opposed to reasons-giving, interventions (e.g. using antidepressants or cognitive behavioral therapy). The connection between “the mental” in psychiatry and “moral responsibility” in ethics also explains why it is plausible that mental illness at least mitigates moral responsibility. The debate as to whether people with mental disorders are morally responsible for their actions is a contested one in ethics (see, e.g., Pickard, 2011; King & May, 2018; Kozuch & McKenna, 2016) and in law (e.g. Elliott, 1996; Kalis & Meynen, 2014). My account of mental disorders makes sense of why these debates exist. Coming back to our original question: what can we say about cognitive kinds? My dissertation project develops a piecemeal approach to what makes for a good kind in the cognitive sciences. The three papers comprising this dissertation aim, thus, to make contributions in this regard. In chapter 2, I distinguish between two sorts of mechanisms that should not be conflated in cognitive science. In addition, I provide a “toolbox” model of mechanistic accounts of explanation: the adequacy of any particular account depends on both the target mechanism and our current knowledge about it. In chapter 3, I vindicate the appropriateness of the New Mechanical Philosophy (and in particular, Machamer et al.’s 2000 account) for capturing representational explanans, by providing a general account of how to carve working entities in the context of a mechanism.
In chapter 4, I address a certain taxonomic kind — “mental disorders” — with idiosyncratic characteristics that make them work differently from most other scientific kinds, and I provide an account of what guides the inclusion or exclusion of certain diagnostic categories in the psychiatric classification system. At the end of this dissertation, I hope, we will have reached a greater understanding of what cognitive mechanisms, representations and psychiatric disorders are.

Chapter 2. How is the mind/brain composed? Two sorts of mechanisms in cognitive science

1. Introduction

Cognitive science is a multidisciplinary scientific field; its domain of interest is cognition, broadly understood — how systems (especially human nervous systems) represent, store and process information. Cognitive science is not only interested in what our cognitive systems do, but also in how the relevant parts of the mind/brain make these things occur. A way to shed light on the workings of a cognitive system like the human mind/brain is by decomposing it to see how the relevant parts produce the phenomenon of interest. When the goal is to produce general explanations, this decomposition often involves types (as opposed to particulars). In philosophy of cognitive science, two separate streams of literature have taken on the project of answering the question “what should the mind/brain be decomposed into?” or “what are the relevant units when it comes to analyzing how cognitive phenomena are produced?”. These two traditions have proposed different blanket answers: one tradition emphasizes the notion of “modularity”. First introduced by Fodor (1983), this notion has been substantively revised in the hands of massive modularists (e.g. Cosmides & Tooby, 1992; Carruthers, 2006; Barrett, 2015). Massive modularists argued that the mind/brain is entirely composed of “modules” or "isolable function-specific processing systems" (Carruthers, 2006, p. 12).
These systems are stable architectural components of the mind/brain, often characterized by appeal to domain specificity and a dedicated neural architecture (but without necessarily committing to the central component of Fodorian modularity, the encapsulation of modules). On the other hand, the New Mechanical Philosophy of science (after a widely-cited paper of Machamer et al., 2000, published some years after Glennan, 1996) proposes that many phenomena of the special sciences are the product of mechanisms. Some mechanists (e.g. Piccinini & Craver, 2011; Kaplan & Craver, 2011) have argued that all cognitive scientific phenomena are underlain by mechanisms-as-causings (or entities, activities and organization in causal continuity to produce the phenomenon). According to those mechanists, to explain cognition one ought to identify the relevant mechanism(s) involved. So, is it modules or mechanisms that we should be decomposing the mind/brain into? The dispute is not merely a verbal one; they are genuine alternatives. Although one may be tempted to consider these accounts to capture two sides of the same coin, I argue that many times their targets are different in nature. What is described by each of those accounts is distinct and usually ought not to be conflated with the other. At a minimum, there are metaphysical differences between the two: systems or modules are machine-like in that they continue to exist even when they aren’t acting, while mechanisms-as-causings only exist while they occur. This has implications for how we quantify and count cognitive components and for our treatment of deviant cases. For instance, consider John’s ability to recognize faces. Treating face recognition as a module implies that John’s face-recognition system is the same when he recognizes Mary and when he recognizes Thomas, and that the same token module (m1) explains both these occurrences.
On the other hand, from a mechanism-as-sort-of-causing standpoint, causes are only tokened while they occur, so each token instance of John’s recognizing a friend (Mary, Thomas, Mary at a later time) is produced by a different token mechanism (call them mc1, mc2, mc3 …), which explains a particular occurrence of recognizing. Token systems (token modules) thus persist over time while token mechanisms-as-causings do not. Moreover, were John to acquire prosopagnosia, which is the inability to recognize faces as a result of a brain injury, the modularist would talk about a “damaged module” while the mechanist-as-a-sort-of-causing would talk about the different mechanisms that now underlie his response to facial stimuli7. This is not, however, the only difference to be found between mechanisms-as-systems and mechanisms-as-causings. The former are often universal among humans, with a distinct developmental path, a dedicated brain network, and a distinct proprietary function, while for the latter we may expect the involvement of distinct (and not so closely related) networks, greater variability in their starting conditions and/or parts involved, and invariance with respect to the production of the phenomena.

7 The mechanisms will differ by belonging to different mechanism-types. The two different mechanism-types may be close together in similarity space, in the sense of having a good number of components in common, but they are still different because they produce different phenomena.

I argue that both approaches to mechanisms properly capture part of the functioning of the mind/brain. I claim (section 2) that cognition involves modules or mechanisms-as-systems, which I illustrate by presenting the face recognition system in section 3.
I argue that cognition also involves mechanisms-as-causings in section 4, which I illustrate by presenting a mechanism for placebo analgesia in section 5 — a mechanism which, like many mechanisms-as-causings, doesn’t have clear spatio-temporal boundaries nor dedicated processing structures. I address the relationship between both sorts of approaches in section 6. However, this raises questions about how to treat research in progress (section 7). Most research involves fragmentary, sketchy models that include some but not all the parts of the target mechanism. If both sorts of mechanisms (system-like and cause-like) exist in cognitive science, how do researchers decide into which category their proposed mechanism fits? How should they be treating their target system in the meantime?8 This leads us to the last section of the paper. I argue that we ought to understand hypotheses or proposals in the early stages as non-committal as to whether the phenomenon is underlain by a mechanism-as-system or mechanism-as-sort-of-causing. A good approach is to make use of Glennan and Illari’s “minimal notion of mechanism” (2017) as a tool to start thinking about the target mechanism and incorporate the different findings. Their minimal notion has the advantage of being neutral with respect to the metaphysical commitments made by the other two accounts. I argue that at early hypothesis-generating stages, researchers are not saying anything metaphysically “thick” about what is behind the phenomenon in question. It would be an error to assume, for instance, that every time evolutionary psychologists hypothesize that there is a “system” behind the observed phenomenon of cheater detection, what they have in mind is a stable functional component or “system” in Carruthers’ (2006) or Bechtel’s (2008b) sense.

8 This is important methodologically, as it would determine the methods researchers should choose to study the mechanisms of interest.
Even if that ended up being the case, it wouldn’t be epistemically justifiable to make that assumption until there is more evidence for the claim.

2. Modules or mechanisms-as-systems

Cognitive scientists often try to describe mental mechanisms, but they aren’t always explicit about how they use myriad terms like “mechanism”, “module”, or “system”. There seem to be at least two different ways in which they do so: very roughly, sometimes they treat them as systems (as in memory systems, the language module, the face recognition system, the visual system, etc.); other times, they mean the sort of causing that explains how something comes to be the case9 (here, we could find the sequence of causally interacting parts that gives rise to instances of the decoy effect, McGurk effect, Thatcher effect or change blindness). Although the two senses are intuitively related (or may even overlap when a system is performing its proper function), they capture different aspects of reality. In this section, I will discuss the first sense, i.e. the notion of mechanism-as-system or module.

9 “How” in a causal sense, not in an evolutionary one.

When discussing face perception, Nancy Kanwisher wondered: “Is face perception carried out by domain-specific mechanisms, that is, by modules specialized for processing faces in particular? Or are faces handled by domain-general mechanisms that can operate on nonface visual stimuli as well?” (Kanwisher, 2000, p. 759) Like her, many cognitive scientists have taken the units composing the mind/brain to be mechanisms in the “system” or “module” sense: "An association may be found between tasks X and Y because the mechanisms on which they depend are adjacent in the brain rather than because they depend on the same underlying mechanism. Gerstmann’s syndrome is an example. It is defined by four very different symptoms: problems of finger identification; problems in calculation; impaired spelling; and left–right disorientation.
It is improbable that the same mechanisms or modules are involved in all four tasks. What is much more likely is that these four symptoms depend on different mechanisms that happen to be anatomically adjacent in the brain.” (Eysenck & Keane, 2015, p. 20)

As these quotations illustrate, mechanisms are sometimes treated as stable systems composing the brain, with a specific function, which perdure even when they aren’t acting, and may be (perhaps loosely) localized. This sense of modularity is implicit in Fodor’s original formulation in his book Modularity of Mind (1983), and is the common ground between Fodor’s view and those of other modularists who later disputed his characterization and provided their own (e.g. Carruthers, 2006; Coltheart, 1999; Barrett & Kurzban, 2006; Cosmides & Tooby, 1992; Pinker, 1997). Fodor took modules to be systems characterized by being informationally encapsulated (that is, during processing modules do not share, and cannot be affected by, information held anywhere else in the mind), as well as by exhibiting eight other characteristics10. According to him, modules are restricted to systems at the “periphery” of the mind — those that deal with perception, language and action. Many after him have argued that the entirety of the mind/brain is modular, a thesis known as “massive modularity”. Most notably, Peter Carruthers (2006) argued that comparative psychology alongside evolutionary considerations makes the most plausible case for the massive modularity of mind hypothesis. For massive modularity to be possible, modules cannot be informationally encapsulated — and unsurprisingly, this is the feature of Fodor’s list that has received the most pushback. Other features of the list were also disputed: content domain specificity, strong localizability, shallow outputs, innateness, fast processing and automaticity (see e.g. Barrett & Kurzban, 2006; Carruthers, 2006; Coltheart, 1999). Nowadays, cognitive scientists largely operate outside Fodor’s assumptions about the extent and characteristics of modules.

10 Here is the full list of characteristics Fodor (1983, p. 37) proposed a module had to have “to some interesting extent” (meaning to an appreciable degree): (1) Domain specificity: modules are restricted to certain kinds of inputs. (2) Mandatory operation: modules operate in a mandatory (or automatic) way. (3) Limited central accessibility: the operations and representations occurring within a module are not accessible (or accessible only in a very limited way) to higher cognitive processes. (4) Fast processing: modules generate outputs quickly. (5) Informational encapsulation: during its processing a module does not share, and cannot be affected by, information held anywhere else in the mind. (6) ‘Shallow’ outputs: the outputs generated are basic, simple. (7) Fixed neural architecture: modules are realized in a dedicated neuronal architecture. (8) Characteristic and specific breakdown patterns: modules can be selectively damaged, allowing for phenomena like double dissociations. (9) Characteristic ontogenetic pace and sequencing: modules are hard-wired in the brain and have a characteristic development. Among the features of this list, Fodor took informational encapsulation to be a module’s most important feature (Fodor, 2000), probably because he thought (mistakenly) it would solve the frame problem — the problem of how to restrict computations to what is relevant, “given that relevance is holistic, open-ended, and context-sensitive” (Shanahan, 2016).
The important point for our purposes is that the family of modular views of the mind all share a common notion of module: modules are stable systems composing the brain, with a specific function, which perdure even when they aren’t acting, and may be (perhaps loosely) localized. This basic characterization is a minimum common denominator among modular accounts, to which different authors add further characteristics11.

11 For example, Carruthers (2006) considers the following characteristics to help us identify modules: modules have a function, something they are supposed to do with the information they receive. They are domain-specific. Modules are usually associated with particular areas of the brain, although they may be using a set of neural pathways that may be scattered in multiple brain regions. The processing of information that occurs in modules is largely independent of information stored elsewhere. Modules are entity-like in that they can be selectively damaged, allowing for double dissociations. Modules operate automatically, not at will. Most importantly, the mind/brain contains several modules; those are treated as its architecture.

This is also the common denominator among a certain family of accounts of mechanisms, as I will discuss next. Daniel Nicholson describes a family of theories of mechanisms that he calls “machine mechanisms”: “systems conceived in mechanical terms; that is, as stable assemblies of interacting parts arranged in such a way that their combined operation results in predetermined outcomes” (Nicholson, 2012, p. 153). Beate Krickel (2019) labels it the “Complex Systems Approach” to mechanisms: accounts that “speak of mechanisms in terms of stable arrangements, structures, or objects” and “highlight the machine analogy in arguing that mechanisms are like machines” (Krickel, 2019, p. 22). These include early accounts of mechanisms by Stuart Glennan (1996, 2002, 2010), William Bechtel (2008a, 2008b), Bechtel and Robert C.
Richardson (1993), and Bechtel and Adele Abrahamsen (2005). As I will spell out below, there are some commonalities between Complex Systems Approaches to mechanisms and accounts of modules, not only in their minimum common denominators discussed above, but also among other putative characteristics in maximally articulated accounts (e.g. Carruthers 2006 among modular accounts, Glennan 2002 and Bechtel 2008b among Complex Systems Approaches). The common features in these approaches make up what I will call “the mechanisms-as-systems” view. The targets of both Complex Systems and modular accounts are stable, entity-like systems that persist. For instance, Glennan talks about mechanisms as stable arrangements of parts that have dispositions: “Perhaps the most notable difference between the complex-systems and Salmon/Railton approach is that Salmon/Railton mechanisms are sequences of interconnected events while complex-systems mechanisms are things (or objects)" (Glennan 2002, S345, emphasis in original). Bechtel and Abrahamsen say that "[a] mechanism is a structure performing a function in virtue of its component parts, component operations, and their organization" (Bechtel & Abrahamsen 2005, p. 423). I will start by discussing the two features that come most to mind when discussing systems as object-like things: their spatial localization, on the one hand, and their “function” or “predetermined outcomes”12, on the other. Mechanisms-as-systems or modules are often associated with a particular part of the brain. Modularists typically speak of this as a module’s “dedicated neural implementation”, but that dedication isn’t always exclusive. We should interpret identifications of a module with certain brain areas as a heuristic, not something to be taken literally.
Bechtel describes it in the following way: “A common first step in such research is to attribute to a part of the system that produces a given phenomenon full responsibility for that phenomenon (in Bechtel & Richardson, 1993, we referred to this as direct localization). In the case of vision, this involved treating an area of the brain as the visual center. Such attributions of complex activities to single components of a system seldom turn out to be correct. As heuristic identity claims, however, they often contribute to productive research by facilitating the discovery that in fact the area is associated with only one or a few of the component operations required to produce the phenomenon.” (Bechtel, 2008b, p. 89). In other words: spatial localization is a rough heuristic that helps to identify part of the implementation of a module — by looking at what areas selectively activate when performing a certain function. But more often than not, a module’s functioning will require an integrated network of brain areas working together, sometimes including (sub-)modules scattered across the brain. For instance, face recognition comprises a brain network not only involving left and right fusiform face areas (which seem to be involved in discrimination of faces from non-social stimuli; Lopatina et al., 2018), but also occipital face areas (more involved in detailed recognition; Atkinson & Adolphs, 2011), the medial temporal lobe (especially the perirhinal cortex for familiarity-based recognition; Eichenbaum et al., 2007), and probably others.

12 The former is the term Bechtel and Carruthers use; the latter, Nicholson’s.
For another example, the affective system involves brain circuits that span nearly the entire brain: positive valence is processed in a network that involves subcortical regions of the ventral striatum linked with regions of orbitofrontal and ventromedial prefrontal cortex, while negative valence is processed in the amygdala and subcortical regions of the ventral striatum linked with the anterior insula, and cortically within the anterior insula and anterior cingulate (Berridge & Kringelbach, 2013; Yarkoni et al., 2011; Grabenhorst & Rolls, 2011). The point illustrated here is that the spatial localization heuristic is an over-idealization of a module’s implementation: except for very simple and low-level sensory and motor systems, we can expect modules to comprise integrated networks of what may be different brain areas with specific sub-functions, which together comprise a stable, perduring system. Modules or mechanisms-as-systems also have a proper function. This proper function is to be distinguished from things a token “merely does”, does “by accident”, or does when “malfunctioning”. For instance, a clock is a mechanism-as-system for time-tracking, and an air conditioner, for cooling air. Even if they both take up space in one’s apartment, that (taking space) is not their (proprietary) function, just something they do. The notion of proprietary function is not that of “mere causal role” employed by those who articulate mechanisms-as-causings: it is historical, and has an at least minimally normative dimension. There is something the module is supposed to do with the information it receives; there is a set of phenomena that mechanisms of that type are supposed to produce. While I do not want to tie modularity claims to a specific account of proper function, I will point out that the account must be one of the teleological family that can be traced back to Larry Wright.
In Wright’s characterization, “the function of X is Z” means that (a) X is there because it does Z, and (b) Z is a consequence (or result) of X's being there (Wright 1976, p. 81). For Ruth Millikan (1984, 1989), Z is the proper function of X iff Z has had certain effects on how the tokens of X were “copied” or reproduced. These effects on the ancestors of a given token of X “have helped account for the survival, by continued reproduction, of the item's lineage” (Millikan, 2002, p. 8). For instance, hearts have been selected for by Darwinian natural selection (given that they did pump blood in “historically normal conditions”), and reproduced genetically. These are “proper biofunctions”. In addition, the proper function of some modules may be a “secondary adaptation” or exaptation, if their preservation in the recent evolutionary past is due to their doing G rather than to the F-ing for which they were originally selected (Griffiths, 1993). However, not all modules are adaptations, and thus not all functions of modules are grounded in natural selection: some modules acquire their function via learning mechanisms — for instance, those of the exact number system and the print reading system. Recently, Justin Garson (2017, 2019) has provided a historical account of function (which he calls the General Selected Effects account, or GSE) that captures these two ways in which a mechanism comes to have a proper function. According to GSE, the proper function(s) of a mechanism is the activity that historically “contributed to its bearer’s differential reproduction, or differential retention, within a population” (Garson, 2017, p. 523). The “or” here is disjunctive: if a selection process has taken place resulting in differential reproduction or retention of a function, that is enough to make it a proper function — it doesn’t matter if that selective process was natural selection, learning or competitive retention.
For instance, suppose a learning process results in a neuronal disposition to Z being retained, while another neuronal disposition to Y is eliminated, given a competitive process that takes place between them — a “zero sum game” (Garson, 2017, p. 532). In that case, Z-ing is the proper function of that mechanism. Something like this seems to be the process by which the print reading module is acquired: the left fusiform gyrus specializes in reading over the course of learning, with the result that the area known as the “visual word form area” (VWFA) acquires the proper function of recognizing letters and printed words, at the expense of conducting other tasks such as face processing. Supporting this hypothesis, Dehaene et al. (2010) and He et al. (2009) found that the left fusiform responds to faces in illiterate adults, but such sensitivity is reduced when they learn to read. Among child beginner readers, Centanni et al. (2018) found that the greater the size of letter-sensitive cortex in the left fusiform, the smaller the left fusiform face area (left FFA); and the greater the sensitivity of the left fusiform to letters, the better the child’s reading ability. Nicholas Shea (2018) has a similar account of proper function (which he calls “task function”). Roughly, an output F is a proper function of S if F is (i) a robust outcome function of S; and (ii) a stabilized function of S. An output F is a robust outcome function of S iff S produces F in response to a range of different inputs, and in a range of relevant external conditions (Shea, 2018, p. 55). An output F is a stabilized
The idea here is similar to Garson’s: proper functions are grounded in processes that historically select Ss because they do F. Like Garson and Shea, I contend that the processes of differential learning or selective retention are proper function-endowing. I suspect a majority of modules of the mind have acquired proper functions in this way: they start as partially innately- specified learning systems that become elaborated and built through domain-specific learning, a process that endows them with specialized proper functions. The important point for our purposes is that the notion of “function” employed in mechanisms-as-systems is a teleological one, and not that of “mere causal role” employed by those who defend mechanisms-as-causings. This is true both for modularist authors as well as those who come from the mechanisms-as-systems traditions. For instance, William Bechtel and Oron Shaghir argue that, in order to describe a system computationally (in Marr’s 1982 sense), one needs to specify what a system’s function is, i.e. what it is meant to do, as well as its proprietary domain (see Bechtel & Shaghir, 2015). 31 Cognitive systems have proprietary domains; they are only sensitive to information of a certain sort. The “sort” of information a module is sensitive to is set by the properties of the inputs that reliably start a module’s performance of its function, and not whether things in the world belong to a “sort”. For instance, the proprietary domain of the face-recognition system are roughly face-like things, regardless of whether, in the world, these naturally conform to a kind. The generality or specificity of a proprietary domain doesn’t have anything to do with how many instances of the thing there are, either. It is not the case that the proprietary domain of the face recognition system keeps expanding as more and more people populate the world. 
It is important to emphasize that a module’s sensitivity to its proprietary domain of information is due to its physical properties13, and does not require the module to be actively working all the time. As an illustration, consider a photovoltaic cell. This cell is sensitive only to photons touching its surface, because of its physical-chemical properties and those of photons. Similarly, a module’s physical properties may make it sensitive only to representations with properties of a certain sort. That mechanisms-as-systems are stable and entity-like is also relevant in that it becomes possible to talk about a system as having a distinct developmental path, or being selectively damaged (thus allowing for double dissociations, i.e. cases where there are two related functions, X and Y, and after brain damage one subject can perform function X but not Y, while another subject can perform Y but not X; see Shallice, 1988). Carruthers (2006) took findings of double dissociation and/or a distinct developmental path to provide (non-conclusive) evidence for the existence of a module. We can also get to know more about the proprietary domain of the module in question by manipulating the input and then checking whether the specific function is produced (that is, by manipulation; Woodward, 2003). Among the New Mechanists, Bechtel also discusses lesion research and double dissociations as providing prima facie evidence that the brain area in question is part of a certain mechanism-as-system. If damaging a certain brain area results in impaired functioning, then that would seem to suggest that the brain area in question was involved in the mechanism’s operations (Bechtel, 2008b, p. 43).

13 “Physical” as opposed to “mental” properties, not the properties of physics. See Shagrir and Bechtel (2017) and Bechtel and Shagrir (2015) for discussion of the role physical or contextual constraints play in perception and computational tractability.
One thing to note is that lesion research can only provide evidence that something is a component of a system if we presume that the identity of the system is preserved despite the damage; that is, if we treat the system as by default perduring through time despite the fact that, in cases of damage, it may be missing some components and/or its functioning may be impaired. The persistence of modules also makes them subject to selective pressures. An argument that is sometimes provided for the modularity of the mind is that modules are the best candidates to be units of selection for psychological traits. The idea here is that complex systems (like biological ones) need to be organized in a pervasively modular way, that is, as a hierarchical assembly of separately modifiable, functionally autonomous components, for the system to be constructed incrementally. Since the human mind/brain is a complex biological system that has evolved incrementally from animal mind/brains, it is plausible that it is modular (Carruthers, 2006). In summary, mechanisms-as-systems or modules are stable systems composing the brain, with specific functions, which perdure even when they aren’t acting, and may be loosely localized. Properties such as having a distinct developmental path, giving rise to distinct phenomena, having proprietary parts and operations, being distinctively sensitive to a certain domain of information, and being dissociable add to the evidence for the mechanism in question being a system. While there is some discussion as to whether modules are peripheral (meaning, only in perceptual and motor systems) or everywhere in the mind (including for higher cognitive functions), what seems clear is that cognitive scientists have provided numerous examples of mechanisms-as-systems: for example, memory systems, the language module, the visual system, and face recognition. In the next section, I will discuss the latter as an example of a mechanism-as-system.

3.
An illustration of a mechanism-as-system: face recognition

When we talk about face recognition, we are referring to the identification of someone previously met using his/her face as the input. We are not talking about the recognition of a face as a face, as compared to another kind of object, nor of the recognition of facial expressions or lip speech. The face recognition system is a mechanism-as-system whose function is to recognize faces. Its proprietary inputs are face-like stimuli, and its output is sometimes described as a sense of familiarity or recognition when we see the face of someone we know. The most widely accepted model of face processing, by Bruce and Young (1986), separates the distinct processes that take place after encountering a face-like stimulus. Briefly: first, different codes are extracted from the input: pictorial codes (imagistic) and structural codes (more abstract). Those structural codes are then analyzed in order to obtain visually derived semantic codes (gender, age, personality) and identity-specific semantic codes (meaningfulness of a face). If the face is represented as familiar, the Face Recognition Unit (FRU) specific to that face becomes activated, producing a feeling of familiarity. This sense of familiarity takes place prior to the retrieval of any information about that person. Face-identity recognition ends, thus, with the activation of the FRU (or the lack of it) when presented with the stimulus face. Subsequently, the FRU serves as a gateway to the information about that person, via stimulation of the Person Identity Node (PIN). In turn, that can activate the related Semantic Information Unit (containing information about the face’s owner), which in turn can stimulate the Name Unit (containing that person’s name). Starting from the early visual analysis and until the activation of a PIN, face identity is processed and recognized independently of expression and facial-speech/lip-reading analysis.
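The staged flow of the Bruce and Young model can be sketched as a simple pipeline. The following is a minimal toy illustration of my own, not an implementation from Bruce and Young (1986): the lookup tables standing in for FRUs and PINs, and the function names, are invented for the example, and structural encoding trivializes what is really a hard perceptual problem.

```python
# Toy sketch of the Bruce-Young (1986) stages described above.
# Illustrative only: the dictionaries standing in for FRUs and PINs
# are invented examples, and structural_encoding is a placeholder for
# a genuinely difficult perceptual computation.

# Face Recognition Units: structural codes of known faces -> person identity.
FRUS = {"structural-code:mary": "pin:mary"}

# Person Identity Nodes: gateway to semantic information and the name.
PINS = {"pin:mary": {"semantics": "college friend", "name": "Mary"}}

def structural_encoding(stimulus):
    """Extract an abstract structural code from a face-like stimulus."""
    return "structural-code:" + stimulus.lower()

def recognize(stimulus):
    """Encoding -> FRU activation (familiarity) -> PIN -> semantics/name."""
    code = structural_encoding(stimulus)
    pin = FRUS.get(code)  # FRU activation corresponds to the sense of familiarity
    if pin is None:
        # Face-identity recognition ends here: no FRU fires, no familiarity.
        return {"familiar": False}
    info = PINS[pin]      # The PIN gives access to person-specific knowledge
    return {"familiar": True,
            "semantics": info["semantics"],
            "name": info["name"]}
```

One property of the model the sketch preserves is the ordering of stages: familiarity (FRU activation) is computed before any person-specific information or name is retrieved, which is why a familiar face can fail to yield a name downstream.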
A variety of evidence jointly supports the claim that face recognition comprises a mechanism-as-system or module. For example, there are developmental impairments of face recognition as well as (non-pure) double dissociations. Regarding double dissociations, there are cases in which brain damage has severely impaired facial recognition while recognition of other objects remains around control levels (acquired prosopagnosia), while other patients exhibit severely impaired object recognition and normal levels of face recognition (object agnosia). There are also documented cases of developmental prosopagnosia (see Bate, 2017, for an overview), which appears to be 'congenital' or 'hereditary'. Selective impairments of the face recognition system are better explained by assuming the functional dissociability of this system from others. There is also evidence that face recognition has an ontogenetic developmental path of its own. Babies exhibit a visual preference for faces (Valenza et al., 1996), which can be explained both by an innate face module and by the fact that faces are good stimuli for immature visual systems (in terms of contrast, spatial frequency, top-heavy layout, etc.). Face recognition develops and improves until one reaches approximately 10 years of age, becoming perceptually narrower (e.g. the other-race effect appears during this development). This effect is explained both by the face-specific module hypothesis and by the expertise hypothesis (Hole & Bourne, 2001). In addition, there are face processing-specific computational simulations. Several computational systems have successfully emulated face-identity processing while exhibiting the same error patterns humans do. Most are based on neural networks (e.g., Lawrence et al., 1997). Finally, facial recognition involves the activation of brain regions different from those for non-facial recognition.14
Neuroimaging studies have related it mainly to the fusiform face area (FFA) and the occipital face areas, which are regions that don't seem to have other unrelated functions. But the module is likely to comprise a broader network of brain regions, including the hippocampus and the perirhinal cortex in the medial temporal lobe, some patches of the anterior inferior temporal cortex, and the amygdala (Lopatina et al., 2018). All this evidence supports the claim that face recognition is a mechanism-as-system or module. 14 Maybe with the exception of the areas for visual expertise. There is some debate as to whether the FFA is specific to faces (but gets recruited for visual expertise in object recognition), to visual expertise (but gets recruited for face recognition), or both. See e.g. Tarr and Gauthier (2000), Gauthier et al. (2003), Kanwisher (2000). 4. Mechanisms-as-causings So far, I have defended the position that the mind/brain contains some mechanisms-as-systems. However, there isn't an entity-like system behind every cognitive phenomenon.15 Sometimes, researchers individuate mechanisms of interest on the basis of how a phenomenon is causally produced, regardless of whether it forms a "system" or not. In this second sense, a cognitive mechanism is a specific "sort of causing": "Researchers […] specify a mechanism for the causal relationship and combine the results from a variety of research questions." (Morling, 2017, p. 266, my emphasis) Or: "[T]here are many questions of mechanism, such as How is object recognition accomplished by the visual system?, that the lesion method, alone, is ill-suited to address." (Polsner, 2015, p. 56) This is the second, mechanisms-as-causings notion. 15 Doing so would be committing a version of what Bedford (1997) calls the "Not-The-Liver Fallacy," i.e. assuming that there must be a system responsible for every dissociable function of the organism.
Explanations invoking mechanisms in this second sense address questions of the form "how does X occur?" by appealing to repeatable causal chains producing X. Here, mechanisms are individuated solely on the basis of what happens to produce a certain phenomenon; they don't need to have natural boundaries or form a stable system.16 This second notion is captured by Machamer, Darden and Craver's account of mechanisms-as-causings: "entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions" (Machamer et al. 2000, p. 1; henceforth, "MDC"). In the rest of this section, this is the account I will be discussing because it is the most articulated, although it is not the only account of mechanisms-as-causings. The accounts of Illari and Williamson (2012) and, more recently, Glennan (2018) also capture mechanisms in this sense, as what acts to produce a phenomenon. Krickel calls these "Acting Entities Approaches": they "assume that mechanisms are not objects but process-like, in the sense that they consist of actual manifestations of causal activities of various entities that interact" (Krickel, 2019, p. 25). Nicholson calls this the "causal mechanism" sense: "A step-by-step explanation of the mode of operation of a causal process that gives rise to a phenomenon of interest" (Nicholson, 2012, p. 153). According to MDC, mechanisms have productive continuity leading to a phenomenon. Mechanisms themselves are composed of activities, entities and their spatio-temporal organization within the context of a mechanism-as-causing. Entities (i.e. the things that engage in activities,17 which have a robustness apart from their place in a mechanism) and activities (i.e. producers of change, specific "causings") are components of mechanisms (i.e. of what produces or underlies a phenomenon), and they cannot be reduced to one another. 16 According to Glennan (2007), there can even be one-off mechanisms.
MDC is explicitly and irremediably dualistic: "both entities and activities constitute mechanisms. There are no activities without entities, and entities do not do anything without activities." (MDC, 2000, p. 8). We saw that modules or mechanisms-as-systems perdure, even when they are not acting. Craver and Tabery make very clear that mechanisms-as-causings do not: "[e]ntities (or objects) are not mechanisms. Mechanisms do things. If an object is not doing anything (i.e., if there is no phenomenon), then it is not a mechanism" (Craver & Tabery, 2019). That a mechanism is only such when it is producing a phenomenon doesn't mean that components of a mechanism cannot persist even when they aren't acting. Consider the mechanism of the action potential in neurons discussed in MDC 2000. The potassium channels, the axon, all of these continue to be there, having "a kind of robustness and reality apart from their place within that mechanism" (Glennan, 1996, p. 53). In summary: mechanistic explanation tells us how things work, by identifying entities, activities and their spatio-temporal organization, from initial or set-up conditions to finish or termination conditions. Mechanisms-as-causings exist insofar as they are acting to produce phenomena. 17 It has been said that MDC individuates entities on the basis of their physical properties; that entities are concrete physical objects that exist in space and time (Krickel, 2019, p. 117). One might thus think that such an account might have trouble accommodating explanations couched in intentional terms, but in fact it might turn out that dropping the requirement that entities be individuated on the basis of physical properties is a minor extension of the theory.
While these accounts were originally developed for other scientific disciplines, what seems clear is that cognitive scientists have provided numerous examples of mechanisms-as-causings: for example, the sequences of causally interacting parts that give rise to instances of the decoy effect, the McGurk effect, the Thatcher effect, change blindness, or the well-known placebo effect. In the next section, I will discuss the latter as an example of a mechanism-as-a-sort-of-causing. 5. An illustration of a mechanism-as-a-sort-of-causing: placebo analgesia The "placebo effect" is an illustration of the sort of case that nicely captures a mechanism-as-a-sort-of-causing. Placebos are sham treatments that are provided to patients, who believe them to be clinically effective. Placebos are well known to improve health outcomes: they account for as much as 50% of the effectiveness of analgesics, and they are also known to improve the immune response, motor outcomes in the case of Parkinson's disease, depression, and so on (Barrett et al., 2006). The phenomenon of interest is the improvement in the patient's condition following the administration of a placebo. Placebo effects are so robust that when a new medical intervention (for a condition for which there isn't a standard of care) is tested in a two-armed, randomized clinical trial, a placebo is always given to the control group. Importantly, the improvement patients experience with the placebo cannot be attributed to the medically inert substance or intervention they receive, nor, in double-blind studies, to the expectations of the clinical staff. Instead, it results from the patient's anticipation that the intervention will help (Wager & Atlas, 2015). In other words, the improvement in patients' conditions when given placebos is not caused by the physical or chemical properties of the sham substance. Ingesting such a substance in another context, for instance, one in which the patient knows s/he is taking a sugar pill, has no effect.
The placebo effect is caused by a mind/brain mechanism that has as a component the belief or expectation that the substance received will help. There is a multiplicity of mechanisms that may come under the label of "placebo effects". Among these, the mechanisms behind placebo analgesia are among the most researched (Kong et al., 2007). In this section I will introduce some of the several mechanisms posited to underlie such pain relief. Let's start by taking a step back and looking at normal pain processing in people without prior expectations. The brain contains several pain-responsive regions, some of which process the sensory components of pain (including the posterior insula, SS1 and SS2), others its badness (e.g. the anterior insula, thalamus and the anterior cingulate cortex). When signals from across the body carrying information about noxious stimuli arrive, these areas are aroused, and the pain-responsive regions contribute to the generation of feelings of pain and the derived motor responses. Some even talk about a "neurologic pain signature" identified using fMRI on the basis of patterns of activation of those brain regions (see Wager et al., 2013; Reddan & Wager, 2018). Placebos work by causing a reduction of the activity in these regions. First, the person needs to be given the placebo intervention in a way that mimics the treatment ritual (Sanders et al., 2020). The contextual information in the ritual (whether it comes via verbal suggestions, retrieval of previous therapeutic experiences in similar settings or with similar interventions, observation of others getting treated, etc.; see Colloca, 2019) generates expectations of pain reduction. When someone has been primed to expect an improvement in their health state, their brain patterns are different from those of someone who is in a neutral state.
Believing and/or expecting to receive an improvement in health leads to an increase in the activation of certain areas of the prefrontal cortex and nucleus accumbens, which are believed to be responsible for appraisals of context and its meaning. Their activation leads to several changes. First, there is a partial suppression of the pain signals coming from the medulla carrying information about the type and intensity of the pain. The amplitude of event-related potentials produced in response to the painful stimuli is reduced, resulting in a decrease in the experienced pain level (Wager & Atlas, 2015). In addition, the activation of those areas in the prefrontal cortex results in the production and release of endorphins, a type of endogenous peptide that binds chiefly to opiate receptors in the brain, which results in a decrease in experienced pain. That some modulatory effects are mediated by endogenous opioids has been well-established since a classic study (Levine et al., 1978) blocked placebo analgesia effects by administering patients naloxone, an opioid receptor antagonist. Other neurochemicals mediating modulatory effects are dopamine, endocannabinoids, cholecystokinin (CCK) and serotonin, which also open possibilities for intervention (Frisaldi et al., 2020). There are good reasons to believe that the analgesic effect of a placebo is a mechanism in the sort-of-causing sense. The placebo effect (and its sibling, the nocebo effect) seems to be a by-product of the more general evaluative learning system, which modulates expectations of goodness or badness based on previous cues. The formation of placebo responses has been accounted for as a general form of reward learning (de la Fuente-Fernández, 2009). It seems that the evaluative learning system is an adaptation, which at least in its associative (as opposed to cognitive) form we share with other animals. There is evidence of placebo effects mediated by Pavlovian conditioning occurring in, e.g., rats (Herrnstein, 1962).
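The causal chain just described (expectation, prefrontal appraisal, then both partial suppression of ascending pain signals and opioid-mediated modulation) can be illustrated with a toy numerical sketch. All magnitudes below are invented for illustration; the naloxone finding of Levine et al. (1978) is modeled simply by disabling the opioid step, leaving the non-opioid component intact.

```python
# Toy sketch of the placebo-analgesia causal chain described above:
# expectation -> prefrontal appraisal -> (a) partial suppression of the
# ascending pain signal and (b) endogenous opioid release.
# All magnitudes are made up for illustration only.

def experienced_pain(noxious_input, expects_relief, naloxone=False):
    signal = noxious_input
    if expects_relief:
        signal *= 0.8                        # partial suppression of the ascending signal
        opioid_relief = 0.0 if naloxone else 0.3
        signal -= signal * opioid_relief     # opioid-mediated modulation (blocked by naloxone)
    return round(signal, 2)

baseline = experienced_pain(10.0, expects_relief=False)
placebo = experienced_pain(10.0, expects_relief=True)
blocked = experienced_pain(10.0, expects_relief=True, naloxone=True)

print(baseline, placebo, blocked)  # placebo < blocked < baseline
```

The sketch reproduces the qualitative pattern in the text: expectation lowers experienced pain, and naloxone partially (not fully) reverses the effect, since only the opioid-mediated step is blocked.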
As I discussed in section 2, particular applications of a module's function do not qualify as proper functions themselves, unless they have been selected for in another way, e.g. via learning processes. But the learning involved in the placebo case is not the selection of a function (i.e. it is not preceded by selective retention of a specific function to be performed in contexts related to pain and medical settings), but rather just another learning of cues and their likely values, something to be expected given that module's proper function. Consistent with this, the interventions that can be made on the effect size of placebos have to do with modification of the cues or their linked rewards, not with placebos as such. For instance, both the effect size and the duration of a placebo response are often equivalent to those of the active treatment being studied (Tuttle et al., 2015). This seems to conform to a Woodwardian intervention, in that changing the magnitude of a component of the mechanism produces effects on the magnitude of the phenomenon (Woodward, 1997). As mentioned, it is also possible to block analgesic effects by administering the relevant receptor antagonists, which have also been found to reduce reward learning based on, e.g., food (Galaj & Ranaldi, 2021). 6. The relationship between mechanisms-as-systems and mechanisms-as-causings Thus far, I have argued that the mind/brain involves at least both mechanisms-as-systems or "modules", and mechanisms-as-causings. A complete cognitive science would probably involve modelling of both "sorts".18 Despite the superficial similarities, the two notions involve different metaphysical commitments. Modules have a continued existence, even when they aren't actively working. On the other hand, mechanisms-as-causings are temporally tied to the production of a given phenomenon. Confusing the two is making a category mistake: it is confusing a continuant with a causal occurrence. There are additional differences between modules and mechanisms-as-causings. I illustrated some with the examples discussed in sections 3 and 5, but let me elaborate a bit further on some other differences we can expect. One such difference concerns whether it is possible to change the range of circumstances in which the mechanism can operate. Modularists like Carruthers define a module's domain by the range of inputs suitable to turn the system on, not by module-independent input kinds19 (Carruthers, 2006). For example, the proprietary domain of the face recognition system consists of the face-like objects that are processed by the face recognition system. A sometimes-overlooked fact is that token modules can change their proprietary domains through time. For example, there is evidence that during a child's development, the phoneme recognition system becomes increasingly specialized, resulting in increased sensitivity to the phonemes of one's language, at the expense of increased insensitivity to those of other languages that cross-cut the phonetic space differently. For another example, the exact number system slowly expands its proprietary domain during development, as children learn to use number words to represent number concepts (Libertus et al. 2016, p. 208). 18 I don't want to exclude in principle that there may be more "sorts" of mechanisms besides these two. Glennan (2017), for instance, allows for "one-off" mechanisms. Although I contemplate the possibility of one-off mechanisms, cognitive science is mostly interested in general explanations for general phenomena, so I won't discuss unique cases here. 19 In contrast, for Fodor "domain specificity has to do with the range of questions for which a device provides answers (the range of inputs for which it computes analyses)" (1983, p. 103). While these notions are conceptually different, extensionally the point I make above remains.
The domain of some modules can also change during adulthood: the face recognition system, for example, can be trained to also process Greebles (Gauthier & Tarr, 1997). In contrast, it is impossible for the range of circumstances in which a particular mechanism-as-causing can operate to change. Once the explanandum phenomenon is fixed, the world fixes what the background and set-up or starting conditions of the mechanism-as-causing are (Craver & Tabery, 2019). Relatedly, modules usually have a dedicated brain network, whether that is scattered across the brain or localized primarily in one area. In contrast, cognitive mechanisms-as-causings may exhibit a lot more variation in their constituents. To mention a few possibilities: a mechanism-as-causing may be composed of a mixture of brain networks that aren't regularly causally interconnected (as in the mechanism for spontaneously producing a joke); or of just a very small part of the brain in a way that doesn't follow natural boundaries (as in the mechanism of the action potential); or of things outside the skull (as in the mechanism by which Otto came to know where MoMA was, which includes his notebook, to use Clark and Chalmers' 1998 example). Part of the reason for this large variability in what realizes a mechanism-as-causing is that the selection of phenomena to be explained is, for the mechanist, to a certain extent interest-dependent. Let me articulate what I mean by that. "Phenomena", for MDC and the theorist of mechanisms-as-causings, is an epistemic category, not a metaphysical one. Anything can be a phenomenon of interest; it is the scientists' choice to delineate it, including making a decision about its range of application and degree of detail. While the researcher has a certain freedom in choosing the phenomenon, ultimately it is objective facts (about what produces that phenomenon) that establish what the underlying mechanism(s) for it is (are).
Once a phenomenon is chosen, the explanatory work is to find the mechanism in the world causally responsible for it. In the best of cases, there is a single mechanism-type producing the phenomenon-type that the researcher is interested in. In other cases, however, lumping or splitting may be necessary. In any case, the mechanism is established by what it does. In contrast, the modularist and the theorist of mechanisms-as-systems speak of a system's "proprietary function" (see discussion in section 2). The notion of "proprietary function" applies to a more restricted set of cases than that of "phenomena". A system's function is the task it is meant to perform; not everything a system does when it is operating qualifies. In contrast, a mechanism is delineated by what it does. Despite these differences between modules and mechanisms-as-causings, there is a sense in which they overlap. Suppose that we fix the phenomenon in such a way that it overlaps with a module's proprietary function. As I mentioned before, once the phenomenon is fixed, the mechanism responsible for it is also delineated. Then, in that case, when a token module is performing its proprietary function (let us designate that function with the letter X), that module is a token mechanism-as-causing for phenomenon X, while it is doing X. That is, during the time in which the module is actively functioning as it is meant to, that system is a mechanism-as-causing for phenomenon X. In that case, it may be unnecessary to distinguish which of the two notions of mechanism we are using. Nonetheless, it is important to keep mechanisms-as-systems and mechanisms-as-causings distinct. Doing so allows for more clarity on the quantification of mechanisms, as well as on how to treat broken ones. Regarding quantification, mechanisms-as-systems and mechanisms-as-causings have different ways of counting tokens of a kind. If we are counting mechanisms-as-causings, then we look at the phenomena being produced.
A mechanism exists only while it is producing a phenomenon. Every time a mechanism is causally responsible for a phenomenon, that is a token mechanism. When it stops acting, we no longer have a mechanism. On the other hand, mechanisms-as-systems are objects that persist through time regardless of whether they are in operation or not. Here, sameness of mechanism seems to be determined by sameness of parts (although the exact conditions for identity are not clear; think of ship of Theseus-style problems). In any case, mechanisms-as-causings greatly outnumber mechanisms-as-systems. Lastly, I want to highlight that the notion of a "broken" or "damaged" mechanism only makes sense under modular approaches that take mechanisms to be systems. A module has a proprietary function, whether or not it carries it out. But mechanisms-as-causings exist only insofar as they are producing phenomena. If they are not producing anything, they don't exist. In a nutshell: the set of modules and the set of mechanisms-as-causings involved in cognition are not co-extensive; while modules can be regarded as mechanisms-as-causings while they are performing their proprietary functions, there are mechanisms-as-causings (sometimes recurrent mechanism-types, like the placebo one) that aren't modules. 7. How to treat work in progress In sections 2-6, I reviewed how mechanisms-as-systems and mechanisms-as-causings are different. However, it is worth taking a step back to focus not so much on complete explanations (cognitive science has few of these anyway) but on research in progress. Most research involves fragmentary models, in which we know some but not all of the parts of the target mechanism. If both sorts of mechanisms (system-like and cause-like) exist in cognitive science, how do researchers figure out into which category their target mechanism fits? How should they treat their target mechanism in the meantime?
Eric Hochstein has recently argued that working scientists should be "adopting deliberate metaphysical positions when studying mechanisms that go beyond what is empirically justified regarding the nature of the phenomenon being studied, the conditions of its occurrence, and its boundaries" (Hochstein, 2019, p. 579). In his opinion, it is acceptable for scientists to make metaphysical commitments at the onset of experimental investigation, provided that we are willing to revise these commitments in light of the evidence: "our metaphysical commitments might eventually need to be revised after a great deal of empirical work has taken place" (Hochstein, 2019, p. 588). However, in my opinion, making a metaphysical determination of whether the target mechanism is a system or a causing before the evidence is conclusive would only make scientists talk past each other. This can be seen, I believe, in discussions regarding the boundaries of the mind/brain20 as well as mechanisms hypothesized by evolutionary psychologists21. The metaphysical commitments of our theories must correspond to what is warranted by the evidence, not to a philosophical demand to keep our metaphysics tidy. Instead, I think it is here (at early research stages) that we should employ the minimal notion of mechanism, formulated by Glennan and Illari: "A mechanism for a phenomenon consists of entities (or parts) whose activities and interactions are organized so as to be responsible for the phenomenon" (Glennan & Illari, 2017, p. 2). What makes this notion minimal? To start, it is meant to capture the main dimensions of the mechanistic framework in general, without tying it to more specific accounts. The different sorts of accounts can be seen as varieties of the minimal notion of mechanism, which add to it further details (and further commitments) of the account in question.
Importantly, the minimal notion is not metaphysically committed to mechanisms being systems or causings, constitutive or etiological, persistent or one-off. Back to my proposal. We should understand hypotheses or proposals in the early stages, when scientists speak of "mechanisms", "systems", "modules", "causal factors", "contributing factors", etc. behind a phenomenon, as not yet committed to whether the phenomenon is underlain by a mechanism-as-system or a mechanism-as-sort-of-causing. At early, hypothesis-generating stages, researchers are not saying anything metaphysically "thick" about what is behind the phenomenon in question. It would be an error to assume, for instance, that a range of behaviors that would have been adaptive is always produced by a dedicated, innately-channeled "system" in Carruthers' (2006) sense. Even if that ended up being the case, it wouldn't be epistemically justifiable to hold that assumption until there is more evidence for the claim. But such behaviors are certainly produced by some mechanism or other. The New Mechanistic philosophy provides us with a toolbox to aid discovery. Which account to employ depends on our epistemic state regarding the target mechanism, as well as on its nature, that is, whether it is a persisting functional component or a sequence of causally interacting parts. At the early stages of research, the minimal notion is the one that should be employed; every other account of mechanisms (and modules) is thicker. Each makes concrete proposals as to which properties would count as evidence that, in reality, there is a mechanism-as-system or a mechanism-as-a-sort-of-causing producing the phenomenon. For example, for MDC, it would be changes in a phenomenon when you intervene on a part; for modules, it would be developmental and pathological evidence. 20 Adams and Aizawa (2001, 2012) and Rupert (2009) make similar points. 21 See, for example, Atran (2001).
Each account offers heuristics to help guide discovery, precisely because they are thicker. If I am right, cognitive science must make use of more than one sort of account of mechanism, because those accounts capture the variety of mechanisms and of stages of knowledge about them. None of the accounts, in isolation, is adequate to the different stages of epistemic progress in discovering a mechanism, nor can any of them do justice to the diversity of mechanisms we find when it comes to cognition. Part of the literature has treated mechanistic accounts as each having its own proprietary scientific field. I am proposing that within the same field (for instance, cognitive science), there may be more than one sort of mechanism. If more than one account of mechanism applies to cognitive science, maybe the same is true for other scientific fields as well. This in turn puts an interesting twist on how to understand the plurality of accounts of mechanisms available. If I am right, there is no winner-takes-all, even for a particular scientific discipline. An account may be appropriate for some mechanisms but not others; this is why we should view the New Mechanistic Philosophy as a toolbox of accounts. The criterion for choosing a given account of mechanism should not be merely a matter of scientific field, but a matter of the nature of the particular mechanism out there in the world that one is investigating. Chapter 3: Can the New Mechanistic Philosophy of Science find a role for representations? 1. Introduction Within the philosophy of science there have been competing views on what explanation is. For example, the classic deductive-nomological model held that to explain a phenomenon we must describe the initial conditions C and the law-like generalizations L from which the phenomenon to be explained can be deduced.
However, it was soon evident that this universalist proposal posed some requirements on explanation that may be unachievable in the special sciences,22 where exceptionless generalizations are rare and individual variation is to be expected. Instead, explanations in those sciences take a different form: scientists often appeal to mechanisms behind phenomena, and see their work as that of discovering these mechanisms. For instance, while discussing research go