[PhilPhys] Center Debate - Representations in Neuroscience 2/27 and LTT - An Understanding-First view of Explanation 3/1

Center for Phil Sci center4philsci at gmail.com
Thu Feb 22 21:20:14 CET 2024


 The Center for Philosophy of Science at the University of Pittsburgh invites you to join us for our upcoming presentations. Both the Center Debate and the Lunch Time Talk will be live streamed on YouTube at https://www.youtube.com/channel/UCrRp47ZMXD7NXO3a9Gyh2sg.
Center Debate: Representations in Neuroscience<https://www.centerphilsci.pitt.edu/event/center-debate-representations-in-neuroscience/>

Nicholas Shea<https://www.philosophy.ox.ac.uk/people/dr-nicholas-shea> (Institute of Philosophy, University of London) and John Krakauer<https://neuroscience.jhu.edu/research/faculty/45> (The Johns Hopkins Hospital Department of Neurology) will participate as our debaters.
Tuesday, February 27 @ 12:00 pm - 1:30 pm EST
Online Only - https://pitt.zoom.us/j/91614621442
Nicholas Shea
Abstract:  Representation is a central explanatory tool of the cognitive sciences. There is not yet a strong consensus about its nature. However, many (but not all) explanations that rely on representations can in turn be explained by a family of theories of representation that appeal to internal entities that: (i) stand in exploitable relations to the world (e.g. correlation, correspondence), and (ii) interact in internal processes (algorithms); both (iii) in the service of performing some task or function.
We can also explain why things that afford this kind of explanation arise systematically in nature. Very roughly, stabilising processes like natural selection and learning are a diachronic force for producing certain outcomes robustly, and one way to achieve that synchronically is to calculate over internal states bearing exploitable relations to various features of the problem space. The most obvious cases are where representations are decoupled from immediate environmental input, but the same rationale, and the same explanatory scheme, is also present in simpler cases where no decoupling is involved.
A naturalistic account of the nature of representation, along these lines, makes sense of appeals to neural representation. There, the representational vehicles are patterns of activity in neural assemblies (or sometimes individual neurons); and computations take place between attractors or regions in neural activation spaces. Such accounts are equally applicable to explaining the operation of deep neural networks.

John Krakauer
Abstract: Representations are things that we use to engage in representational behavior. For the most part, representational behavior of the kind that we are all interested in (if we are honest) is what humans do – we can contemplate black holes, imagine non-existent architectures and worlds (think Narnia and Dune), and write abstracts like this one. Representation is an explanandum – it is what must be present to do overt deliberative thought and understand things. Most intelligent behavior is non-representational; it does not need to be, since survival can occur perfectly well without it: an arctic fox does not worry about what ice is. It is easy to confuse these two kinds of behavior and the means to explain them. Naturalizing representation is, for the most part, a project that perpetuates this confusion. It is driven by the hope that some intelligent animal behaviors use representational capacities of the kind that humans undoubtedly have, and that these capacities can be dissected using the modern tools of neuroscience. Two of the terms used for these protorepresentations are cognitive maps and internal models. The claim is that these are the foot in the door that will get us to the representations needed for full-blown conceptual abstract thought. This stance is, in my view, misguided for several reasons that I will elucidate. Intelligence – competence without comprehension – does not need representations. Overt representations are the substrate upon which comprehension operates, but we do not have a theory for them yet.


Lunch Time Talk - Arnon Levy<https://www.centerphilsci.pitt.edu/fellows/levy-arnon/>
Friday, March 1 @ 12:00 EST
Join us in person in room 1117 on the 11th floor of the Cathedral of Learning
Title: An Understanding-First view of Explanation

Abstract:
Jaegwon Kim once wrote that “the idea of explaining something is inseparable from the idea of making it intelligible; to seek an explanation of something is to seek to understand it, to render it intelligible. These are simple conceptual points, and I take them to be untendentious and uncontroversial.” (1994, 54). Many discussions of explanation accept something like the truism we find in Kim, but regard it primarily as a background, orienting idea. In this talk I will sketch a view of explanation that foregrounds the explanation-understanding connection – an understanding-first view of explanation. Its overall thrust is that what makes something an explanation is that it has the potential to generate understanding.
I will draw on two previous versions of this idea: one found in older work by Peter Achinstein, and another, more recent version, due to Daniel Wilkenfeld. The central part of my discussion aims to motivate the understanding-first framework and to fend off some potential worries about it. I will argue that such a view can help with a number of outstanding debates surrounding explanation, including the explanatory role of idealization, the place of noncausal explanations, and the relations between mechanistic and difference-making explanations. I’ll close with a few comments about how the understanding-first view relates to the historical trajectory of work on scientific explanation, and to uses of explanation in philosophy – in contrast to science.

Can’t make it in person? This talk will be available online via the following Zoom link: https://pitt.zoom.us/j/91729611528.




