The Science & More Talks are a series of work-in-progress talks at the Center for Logic, Language and Cognition (LLC) in Turin, run by the PRIN project. We use them to present our own work in progress and to learn about the ongoing research of our colleagues. From time to time, speakers from outside Turin also present their work.

Thematically, the talks are centered around philosophy of science, but we are also open to topics from related areas (e.g., logic, epistemology, philosophy of language) and other philosophical subdisciplines where exact methods are applied.

The talks take place in Palazzo Nuovo (Via Sant’Ottavio 20) from 12:00 to 13:00, usually on Wednesdays, followed by a joint lunch in one of the surrounding restaurants. They are meant to be low-key events, open to everybody and aimed at improving ongoing work through critical but constructive discussion.

**Upcoming Talks**

Monday 27 March 2023, Aula 11 Palazzo Nuovo, 12:00-13:00

**William Peden** (Lingnan University)

**The Ignorance Dilemma for Imprecise Bayesians**

Bayesians pursue a unified theory of epistemic and pragmatic rationality. A split has emerged between Standard Bayesians and Imprecise Bayesians. The latter argue that using sets of probability functions to represent beliefs is a more powerful formalism for modelling epistemological concepts like justification and ignorance.

We investigate the pragmatic side of this debate. We use simulations to compare the short-run performances of Standard Bayesianism and Imprecise Bayesianism in a classic decision problem. Our results reveal the Ignorance Dilemma: the features of Imprecise Bayesianism which make it such an epistemologically powerful representational framework for modelling states of ignorance also cause the players to underperform in many decision problems. We explain the trade-off and discuss some implications for Bayesian epistemology. (Joint work with Mantas Radzvilas and Francesco De Pretis.)
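The contrast between the two policies can be sketched in a toy decision problem. This is my own illustration, not the speakers' actual simulation: a hypothetical repeated bet on a coin of unknown bias, with the precise Bayesian using a single uniform prior and the imprecise Bayesian applying Gamma-maximin over a credal set of Beta priors.

```python
import random

def posterior_mean(heads, tails, a, b):
    # Mean of a Beta(a, b) prior updated on the observed flips
    return (a + heads) / (a + b + heads + tails)

def precise_accepts(heads, tails):
    # Standard Bayesian: one uniform Beta(1,1) prior; accept the bet
    # (+1 on heads, -0.5 on tails) iff its expected value is positive
    return 1.5 * posterior_mean(heads, tails, 1, 1) - 0.5 > 0

CREDAL_SET = [(a, b) for a in (1, 2, 4) for b in (1, 2, 4)]

def imprecise_accepts(heads, tails):
    # Imprecise Bayesian with Gamma-maximin: accept only if the bet has
    # positive expected value under *every* prior in the credal set
    return all(1.5 * posterior_mean(heads, tails, a, b) - 0.5 > 0
               for a, b in CREDAL_SET)

def run(accepts, true_p, n_rounds, rng):
    # Each round: decide on the current evidence, then observe one flip
    heads = tails = 0
    payoff = 0.0
    for _ in range(n_rounds):
        if accepts(heads, tails):
            payoff += 1.0 if rng.random() < true_p else -0.5
        flip = rng.random() < true_p
        heads += flip
        tails += not flip
    return payoff
```

With no evidence, the precise agent already accepts (expected value 0.25), while the maximin agent abstains because the pessimistic Beta(1,4) prior makes the bet look negative; on a favourable coin that caution costs payoff in the short run, which is the flavour of the trade-off the talk examines.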

Wednesday 29 March 2023, 12:00-13:00, Aula TBA

**Andrea Iacona**

**On Ninan's Puzzle of Easy Foreknowledge**

In this talk I present a recent puzzle offered by Ninan concerning foreknowledge: in some cases, it seems that one can lose knowledge by moving forward through time, although one's evidence remains the same. I discuss Ninan's purported solution and propose some alternative options.


**Past Talks**

Monday 20 March 2023, Aula 11 Palazzo Nuovo, 12:00-13:00

**José Díez** (University of Barcelona)

**Formalism Meets Pragmatism**

During the last decades, many philosophers of science have witnessed the failure of formal analyses of key aspects of scientific practice such as explaining, representing, or testing. This failure has made some (indeed many) of them renounce traditional analytical projects and withdraw towards either pluralist or deflationary positions. The goal of this talk is to resist such a move and defend the possibility of monistic analyses, weakening them in two dimensions: first, replacing the traditional demand of "sufficiency" with "quasi-sufficiency"; second, and more importantly, accepting the introduction of some crucial pragmatic elements into the formal analysis. This strategy is exemplified by the analysis of the concepts of explanation and representation.

Wednesday 15 March 2023

**Maciej Tarnowski** (University of Warsaw)

**No doxastically innocent solution to Moore’s Paradox**

In this talk, I will compare two strategies for explaining the irrationality of Moore-paradoxical beliefs (beliefs of the form "*p*, but I don't believe that *p*" or "*p*, but I believe that *~p*"). The “pragmatic strategy” argues that the irrationality of such beliefs stems from the fact that they are self-falsifying, while the “epistemic strategy” argues that they cannot be rationally believed. The first is usually adopted on the grounds of its “doxastic innocence” since to prove that Moore-paradoxical beliefs are self-falsifying only very minimal assumptions regarding the logic of belief are needed, while the epistemic strategy is committed to stronger principles. I examine the similarities between Moore’s Paradox and Anti-expertise and Iterated Moorean paradoxes and argue that their paradoxicality should be given a uniform explanation. Such an explanation, as I demonstrate, cannot be provided by the pragmatic strategy without losing its doxastic innocence.

Wednesday 8 March 2023, 13:00--14:00, Sala Incontri 1, Philosophy Library

**Antoine Houlou-Garcia** (École des Hautes Études en Sciences Sociales, Paris)

**The Condorcet Jury Theorem: history, applicability and perspectives**

The Condorcet Jury Theorem (CJT) is the most important mathematical result on which the epistemic theory of democracy relies. In the first part, we will discuss the perspective that Condorcet gave to this result. We will see how it can be linked to the Aristotelian argument for the multitude and deduce the partially antidemocratic DNA of the result. In the second part, we will discuss the relevance of the CJT for collective decision and conclude that it cannot be applied in this kind of situation. At the same time, we will show that the CJT is relevant for collective measurement. In the third part, we will show how the CJT can explain some aspects of swarming in honey bees. We will draw some conclusions about the epistemic theory of democracy and the CJT.
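The theorem's core claim is easy to check numerically. The sketch below (my illustration, with hypothetical competence values) computes the exact probability that a strict majority of independent, equally competent voters is right:

```python
from math import comb

def majority_correct(n, p):
    # Probability that a strict majority of n independent voters, each
    # correct with probability p, reaches the correct verdict (n odd)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With individual competence p = 0.6, majority accuracy grows with group size
for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 3))
```

The flip side also falls out: with competence below 1/2 the majority does *worse* than an individual, which is one reason the theorem's applicability conditions matter for the epistemic argument.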

Wednesday 1 March 2023, 13:00--14:00, Sala Incontri 1, Philosophy Library

**Miriam Bowen** (University of St. Andrews)

**Probabilistic Liar and Revenge of the Probabilistic Liar**

The following are examples of self-reference:

(1) This sentence is false.

(2) I do not believe what sentence (2) says.

In (1) we have an example of self-reference in truth and in (2) an example of doxastic self-reference with a flat-out belief state. My main focus here will be doxastic self-reference that involves degrees of belief: probabilistic self-reference. Probabilistic self-reference occurs when the truth or the chance of an event is dependent on the degree of belief (credence) of the agent. A particularly problematic probabilistic self-referential scenario is that of the Probabilistic Liar, a probabilistic analogue of the Liar Paradox.

The Probabilistic Liar can be given as follows:

(a) Cr(a) < 0.5

where Cr is a precise credence function. The Probabilistic Liar seems intuitively problematic, as there is no clear attitude an agent ought to adopt towards (a). It also gives rise to contradictions between two plausible norms of rationality: Probabilism and Rational Introspection.

In this talk I argue that we should appeal to suspended judgment (where this is understood as having imprecise credences) as a solution to probabilistic self-reference. This avoids the problems posed by the Probabilistic Liar by showing a category mistake was made in setting up the problem - the original argument becomes a reductio of the assumption that the attitude towards the Probabilistic Liar is a precise attitude.

However, just as we can easily generate Revenge paradoxes for the Liar paradox, an immediate worry for my account is that my solution will fall prey to a Revenge problem as well. I outline what Revenge would look like for the Probabilistic Liar (indeed, on my picture the Revenge problem captures exactly what the original problem intended to) and show that, by adopting imprecise credences as our background assumption, we can avoid the problem by weakening Rational Introspection, thus avoiding the conflict between Probabilism and Rational Introspection in a principled way.
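To see why no precise attitude towards (a) seems available, here is a small brute-force check. This is my own illustration, under deliberately strong readings of Probabilism and Rational Introspection, not the speaker's formal apparatus:

```python
def coherent(x, eps=1e-9):
    # (a) says: Cr(a) < 0.5. Suppose the agent assigns precise credence x
    # to (a) and, via Rational Introspection, knows that her credence is x.
    a_is_true = x < 0.5
    # Knowing x, she can settle (a)'s truth value herself, so Probabilism
    # (on this strong reading) mandates credence 1 if true, 0 if false.
    mandated = 1.0 if a_is_true else 0.0
    return abs(x - mandated) < eps

# No precise credence in [0, 1] is stable:
assert not any(coherent(i / 1000) for i in range(1001))
```

Every candidate x undermines itself: a value below 0.5 makes (a) true and pushes the credence towards 1, while a value of 0.5 or above makes (a) false and pushes it towards 0. This instability is what the imprecise, suspended-judgment response is meant to escape.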

Wednesday 22 February 2023

**Alexander Gebharter** (Marche Polytechnic University)

**The formal structure(s) of analogical inference**

Recently, Dardashti, Hartmann, Thébault, and Winsberg (2019) proposed a Bayesian model for analogical inference. In this paper we investigate how their model performs when varying the degree of certainty about the similarity between the source and the target system. We show that there are circumstances in which the degree of confirmation for the hypothesis about the target system obtained by collecting evidence from the source system goes down when increasing the degree of certainty about the similarity between the two systems. We then develop an alternative model in which the direction of the variation of the degree of confirmation always coincides with the direction of the degree of certainty about the similarity hypothesis. Finally, we argue that the two models capture different types of analogical inference.

8 February 2023

**Matteo Plebani** (University of Turin)

**Counterpossibles in computability theory?**

Philosophers have recently become obsessed with counterpossibles: counterfactuals with impossible antecedents. According to the orthodoxy (Williamson 2017), counterpossibles are all vacuously true. Recently, several philosophers and logicians have tried to question the orthodoxy by providing (alleged) examples of counterpossibles that are either false or non-vacuously true. A case in point is Matthias Jenny (2018), where it is argued that we can find examples of non-vacuously true counterpossibles and examples of false counterpossibles in computability theory. It would be exciting if Jenny were right, but I am afraid he is not: the examples he discusses, I will argue, are not counterpossibles.

18 January 2023

**Claire Field** (University of Stirling)

**Being wrong about logic**

Is it possible to be rationally mistaken about logic? While the possibility of rational mistakes about any topic, including logic, seems intuitively plausible, this possibility is called into question if we accept a general Anti-Akrasia requirement of rationality. I show how the Anti-Akrasia principle bears on the possibility of rational mistakes about logic, and argue that the right way to respond to it is to distinguish the normative domains that conflicting requirements belong to. I argue that we should think of logical and epistemic requirements as belonging to distinct domains. Once we do this, Anti-Akrasia requirements do not imply that rational mistakes about logic are impossible. One upshot of this is that logic turns out to be epistemically unexceptional.

11 January 2023

**Marianna Girlando** (ILLC, Amsterdam)

**Proof systems for conditional logic: An introduction**

Conditional logics, as introduced by David Lewis in 1973, enrich the language of classical propositional logic with a two-place modal operator, the conditional, suitable for representing fine-grained notions of conditionality. The proof theory of conditional logics does not have a state of the art comparable to the proof theory of modal logics, even though it relies on similar proof-theoretic techniques. In this talk I will present different kinds of sequent calculi for conditional logics, which I developed in the course of my PhD and in ongoing research. After introducing conditional logics and their semantics, which I will define in terms of neighborhood models, I will present a labelled sequent calculus, modularly capturing a large family of systems, and a nested-style sequent calculus, featuring a structural connective representing neighborhoods of the model. Other than the conditional operator, I will discuss the comparative plausibility operator, also introduced by Lewis, which expresses comparisons between states or concepts. I will show how our approach provides a uniform model-theoretic and proof-theoretic treatment of this operator as well.

(Joint work with Tiziano Dalmonte, Bjoern Lellmann, Sara Negri, Nicola Olivetti and Gian Luca Pozzato)

14 December 2022

**Jan Sprenger** (University of Turin)

**Improving psychological explanations**

The explanation of psychological phenomena is a central aim of psychological science. However, the criteria which we use for evaluating whether a psychological theory explains a phenomenon are often implicit, or outright unclear. We address this shortcoming by developing the following account of explanation: a psychological theory explains a phenomenon *in principle* if and only if there is a statistical pattern which is evidence for the phenomenon, and produced by a formal model that is anchored in the theory. The strength of such an explanation depends on three main criteria: its precision, robustness and empirical relevance. Using this account, we outline a workable methodology of explanation in psychology, and possibly, other scientific disciplines. This methodology entails (a) translating a verbal theory into a formal model, (b) representing phenomena by statistical patterns in data, and (c) assessing whether the formal model produces these statistical patterns. We conclude with a discussion about how this productive explanation methodology can be used within the broader aim of constructing and developing psychological theories.

(Joint work with Noah van Dongen, Riet van Bork and Denny Borsboom)

7 December 2022

**Lorenzo Rossi and Caterina Sisti** (University of Turin)

**Variable-hypothetical Conditionals**

Consider the following conditional: ‘if Tweetie is a bird, then Tweetie flies’: B(t) → F(t). This seems like an acceptable conditional. Let’s model acceptability via degrees of probability, and suppose that this conditional has probability k: Pr(B(t) → F(t)) = k, for k ∈ [0,1] and k ≥ 1/2. Of course, accepting this conditional presupposes several background assumptions – that Tweetie is not a penguin, and not a chick, and so on. Abbreviate the conjunction of these sentences as ϕ(t). The full form of the conditional, therefore, is the following: B(t) ∧ ϕ(t) → F(t).

We call the latter an extended conditional, and we also assume that the acceptability of a conditional is identical to that of its extension, i.e. Pr(B(t) ∧ ϕ(t) → F(t)) = k. What we just sketched is the beginning of the variable hypothetical account of conditionals (inspired by Ramsey 1931, 1991). According to this account, B(t) ∧ ϕ(t) → F(t) is acceptable because it is an instance of a generalisation, called a ‘variable hypothetical’, of the type ‘Everything that is B and ϕ is also F’: ∀x(B(x) ∧ ϕ(x) → F(x)). In this account, variable hypotheticals do the heavy lifting: they determine the probability assignment of the corresponding conditionals. We suppose that we have a primitive probability assignment to variable hypotheticals and that all their instances inherit that assignment, namely: Pr(∀x(B(x) ∧ ϕ(x) → F(x))) = k. This is why a speaker assigns probability k to B(t) → F(t): the probability of the latter is identical to the probability of its extension which, in turn, is identical to the probability of the associated variable hypothetical.

In this paper, we make this picture fully precise. We develop a contextualist semantics for probability assignments to simple conditionals. Contexts, in our picture, play two distinct but related roles. First, they assign, for each speaker, the ϕ that determines the extended conditional. For example, in a context c1 where the speaker is s1, B(t) → F(t) is extended with a ϕ1 that states that Tweetie is not a chick and not a penguin, while in another context c2, where the speaker is s2, the same conditional might be extended with a ϕ2 that only states that Tweetie is not a chick. In addition, contexts determine the probability of variable hypotheticals. So, we might have that in c1, Pr(∀x(B(x) ∧ ϕ1(x) → F(x))) = j and Pr(∀x(B(x) ∧ ϕ2(x) → F(x))) = k, while in c2, Pr(∀x(B(x) ∧ ϕ1(x) → F(x))) = m and Pr(∀x(B(x) ∧ ϕ2(x) → F(x))) = n.
This allows us to explain speakers’ disagreement along two dimensions: first, two speakers s1 and s2 might disagree on the probability of B(t) → F(t) because they associate it with two distinct variable hypotheticals; second, s1 and s2 might disagree on the probability of B(t) → F(t) because, even though they select the same ϕ and therefore associate B(t) → F(t) with the same variable hypothetical, they assign different degrees of probability to the latter (based on their different available evidence, their different beliefs, and so on). Finally, we work out a conditional logic (based on probability preservation and relations between the extra information ϕ) which provides an attractive picture of hypothetical reasoning, avoids the paradoxes of material implication, and can be used to differentiate between indicative and subjunctive conditionals.
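The two roles that contexts play can be made concrete in a small sketch (the labels and probability values below are hypothetical, chosen by me for illustration):

```python
# Toy model: a context assigns (i) each speaker an extra premise phi and
# (ii) a probability to each variable hypothetical "All B-and-phi are F".
contexts = {
    "c1": {"phi": {"s1": "not_chick_not_penguin", "s2": "not_chick"},
           "vh_prob": {"not_chick_not_penguin": 0.9, "not_chick": 0.7}},
    "c2": {"phi": {"s1": "not_chick_not_penguin", "s2": "not_chick"},
           "vh_prob": {"not_chick_not_penguin": 0.8, "not_chick": 0.6}},
}

def prob_conditional(context, speaker):
    # Pr of "B(t) -> F(t)" for a speaker: look up the variable hypothetical
    # associated with that speaker's extended conditional in the context
    ctx = contexts[context]
    return ctx["vh_prob"][ctx["phi"][speaker]]

assert prob_conditional("c1", "s1") == 0.9
assert prob_conditional("c1", "s2") == 0.7  # same conditional, different phi
```

The two asserts mirror the first kind of disagreement (different ϕ, hence different variable hypotheticals within one context); comparing c1 with c2 for the same speaker mirrors the second (same ϕ, different primitive probability assignment).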

9 November 2022

**Bahram Assadian** (University of Turin)

**Cross-structural Identities**

According to the realist renderings of mathematical structuralism, mathematical objects are merely positions in structures. The purely structural character of mathematical objects vindicates two treatments of identity statements linking positions drawn from distinct structures – statements such as ‘The natural number 2 is identical to the real number 2’. According to the first conception, there is ‘no fact of the matter’ concerning such statements, whereas according to the second, they are just false. This paper critically examines these treatments of cross-structural identities, refines the semantic and metaphysical difficulties they pose for structuralism, and develops an account of cross-structural identities based on an abstractionist approach to mathematical structures and their positions.

Wednesday 26 June 2022

**Eugenio Petrovich** (University of Siena) and **Marco Viola** (University of Turin)

**Mapping the interaction between neuroscience and philosophy**

Philosophy and neuroscience have strengthened their connections during the last forty years. Based on the results of a survey and on bibliometric analyses, we analyze the unfolding of the philosophy-neuroscience interactions. Our discussion has four topical focuses. First, we draw and interpret the “silhouette” of a sub-part of the philosophical literature that engages with neuroscience and divide it into sub-topics based on co-citation networks. A historical glance shows a steep increase in the citation flow from philosophy to neuroscience journals. Second, we note how, during the same time, the neuro-to-phil citation flow increases but remains negligible in absolute terms. Third, we ask whether the distinction provided by the Stanford Encyclopedia of Philosophy between Philosophy of Neuroscience (PoN, dealing with foundational issues of neuroscience) and NeuroPhilosophy (NP, exploiting the results of neuroscience for addressing intra-philosophical problems) is reflected in the community’s perception of journals, or in their citation behavior. While the distinction is rather blurred, PoN can be thought of as a sub-field of philosophy of science, whereas NP is best conceived as a general trend in the naturalization of the discipline and is sparse across several fields of philosophy. Fourth and last, we investigate which sub-fields of neuroscience philosophers most often engage with, showing that they pay far more attention to cognitive/behavioral-oriented journals than to molecular and cellular neuroscience. To conclude, we draw some methodological reflections that may generalize to any (inter-)scientific mapping.

Wednesday 22 June 2022

**Martina Calderisi** (University of Turin)

**Probability, confirmation, and the base-rate fallacy**

Base-rate neglect is the tendency to ignore (or at least underweight) base rates when updating one’s credence in a certain hypothesis in light of new evidence. It has been observed experimentally in a variety of domains, ranging from social psychology to law and medicine, since the 1970s. However, despite extensive discussion, neither the normative question (“Is the neglect of base rates a real fallacy?”) nor the descriptive question (“Why are base rates (mistakenly) neglected?”) has been settled. In this talk, we will focus on the latter. In particular, we will present in some detail two possible determinants of this phenomenon: representativeness, as suggested by Kahneman & Tversky (1973), and linear integration, as suggested by Juslin, Nilsson, & Winman (2009). We will also put forward an alternative proposal, according to which humans’ appreciation of confirmation relations would account for the base-rate fallacy, much as it can for the conjunction fallacy, as shown by Crupi, Fitelson, & Tentori (2008) and Tentori, Crupi, & Russo (2013). Moreover, we will test this explanatory hypothesis against data recently collected by Pighin & Tentori (2021), and we will discuss the results of this analysis as well as its strengths and limitations, pointing to open issues for future research. Our results provide support for a confirmation-theoretic view of reasoning under uncertainty, including well-known tendencies to biased judgment of probability.

(Joint work with Vincenzo Crupi, Stefania Pighin, & Katya Tentori)
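The phenomenon is easy to state with Bayes' theorem. The numbers below are a standard hypothetical diagnostic-test example (mine, not drawn from the talk or the cited studies):

```python
def posterior(prior, sensitivity, false_positive_rate):
    # Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)
    p_e = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_e

# Hypothetical numbers: 1% base rate, the test detects 80% of true cases,
# and yields 10% false positives among non-cases.
p = posterior(0.01, 0.80, 0.10)
print(round(p, 3))  # ≈ 0.075
```

Neglecting the 1% base rate, people tend to report something close to the test's 80% hit rate; the correct posterior is roughly 7.5%, an order of magnitude lower.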

Wednesday 8 June 2022

**Daniel Waxman** (National University of Singapore)

**Normative Guidance Without Access**

A common desire in normative philosophy is to formulate norms that are capable of providing guidance to agents like us. Traditionally, this desire has been taken to favour internalist norms, i.e. those whose triggering conditions involve only the agent's internal mental states (for instance, "Do what maximizes expected happiness!", as opposed to "Do what actually maximizes happiness!"). However, the link between guidance and internalism has recently come under heavy attack from externalists who deny the transparency or privileged accessibility of the mental. The aim of my talk is to offer a compelling conception of what it is for a norm to be action-guiding which does not require transparency or anything similar. I will argue that both sides have been operating with an impoverished conception of what it is to follow a norm, which has in turn distorted their conception of when a norm can be action-guiding. Once this is all cleared up, I'll argue, the link between guidance and internalism can be restored: in order for a norm to be action-guiding, it must be internalist.

Wednesday 18 May 2022

**Luca San Mauro** (University of Rome La Sapienza)

**Buridan's Cell**

The story of Buridan's ass (BA) refers to the following thought experiment: a donkey, placed between two equidistant and identical piles of hay, would not be able to choose between the two, and thus it would paradoxically starve to death. A common take is that BA strictly concerns rational behavior, by undermining the principle that (rational) choice implies (rational) preference. Indeed, if such a principle held unrestrictedly, then non-preference would imply non-choice, leaving the donkey with no reason to move.

In this talk, we present a microscopic analogue of Buridan's paradox, labeled BC, by replacing the donkey with a biological cell. Although our variant of the paradox won't concern rationality, we'll argue that BC is at least as hard to dispel as BA, since any possible solution would require supporting a strong, and maybe unpleasant, biological assumption.

This is joint work with Lavinia Ferrone.

Wednesday 20 April 2022

**Simone Picenni** (University of Bristol)

**Quid verificabit ipsos verificatores? A model for self-applicable exact truthmaking**

Recent years have seen a rise of interest in exact truthmakers in philosophical logic and semantics. A state (of affairs, action, event, ...), s, is an exact truthmaker for a statement, φ, just in case s necessitates φ’s truth while being wholly relevant to it. The state of the ball being red, for instance, is an exact truthmaker for “the ball is red”, while the complex state of the ball being red and round is not: the ball’s shape has nothing to do with its color. The concept gives rise to a fine-grained semantics, exact truthmaking semantics, according to which we individuate the content of sentences by means of their exact truthmakers.

Exact truthmaking semantics has proved useful in the reconstruction of the notion of aboutness, in the reconstruction of the semantics of hyperintensional contexts – e.g. propositional attitude reports (Alice believes that φ, Bob knows that φ, …), in which substituting sentences that are true in all the same possible worlds may not preserve the truth-value of the report –, and as a semantics for large portions of natural language.

However, there are technical and philosophical problems that call for a solution:

• First of all, as Barwise puts it, a rich enough semantics should be able to “turn to itself”, i.e. provide a semantics for the language we are using to do semantics. A model in which to do this is not yet present in the literature on exact truthmaking.

• Secondly, we may want a semantics for hyperintensional contexts to be able to distinguish between the semantic content of a sentence φ and the semantic content of a sentence that ascribes truth to φ, “'φ' is true”. E.g.: it seems that “'Snow is white' is true” says something about a statement, 'Snow is white', while the sentence “Snow is white” says something about snow. We would like to have a model in which to give an account of this difference, while maintaining the intensional equivalence of φ and “'φ' is true”.

• Thirdly, exact truthmaking semantics comes with philosophical problems related to paradoxes of truthmaking. Sentences like “This very sentence has no truthmakers” generate difficulties for any truthmaking semantics. In order to diagnose how statements like these are problematic, and to provide a solution to these paradoxes of truthmaking, a good semantics for a language containing the notion of “being a truthmaker” should be constructed.

In this talk, I will show how to construct a rich, non-trivial exact truthmaking model for a first-order language containing predicates like “being a truthmaker”, “being an actual truthmaker”, and “making exactly true”, and I will say how such a model provides a solution to these challenges.

Wednesday 23 March 2022

**Vincenzo Crupi** (University of Turin)

**The One Coherent Argument in Pascal's Wager**

The text of Pascal’s Wager is still a conundrum after almost four centuries. The most influential reconstruction in the traditional scholarship is taken as “forced” even by its main proponent (Lachelier 1901). Hacking (1972) famously split Pascal’s reasoning into *three* distinct arguments, “all valid, none sound”, whereas according to Hájek (2003) the Wager is both *in*valid and “seemingly impossible” to formally reconstruct in a coherent fashion. I present here a novel, unified, and detailed analysis involving limited hermeneutic effort on the text. Technically, this will require a new kind of numbers.

Wednesday 09 March 2022

**Francesco Nappo and Nicolò Cangiotti** (Politecnico di Milano)

**Reasoning by Analogy in Mathematical Practice**

In this talk, we offer a descriptive theory of analogical reasoning in mathematics, stating general conditions under which an analogy may provide genuine inductive support to a mathematical conjecture (over and above fulfilling the merely heuristic role of ‘suggesting’ a conjecture in the psychological sense). The proposed conditions generalize the criteria put forward by Hesse (1963) in her influential work on analogical reasoning in the empirical sciences.

Wednesday 02 March 2022

Coordination is key to the success of most human activities, from every-day interactions to the building of complex institutions. Rational Decision Theory explains coordination in terms of strategic rationality: in short, when two or more agents interact, they form beliefs and expectations about what others will do, and, in turn, under the assumption that they are rational, they attribute beliefs and expectations. These two operations are defined “meta-representation” and “mind-reading” respectively. Eventually, the theory prescribes that a rational agent must choose maximizing her expected utility, depending on which strategy she believes others will choose. This model, however, suffers from theoretical and empirical shortcomings. First, strategic rationality does not always solve coordination problems, especially when multiple coordination equilibria are available. Secondly, meta-representation amounts to a cognitively demanding and complex task, while empirical evidence suggests that sophisticated reasoning skills are neither possessed nor exercised in most interactions. In this work, I argue that, in order to adequately explain cooperation, we need to look into alternative models of reasoning, which do not involve mind-reading. I group these alternative accounts under the label “belief-less reasoning” (e.g: team reasoning, means-end rationality, solution thinking). To do so, I designed an experimental study based on the “Hi-Lo” game. The experiment consists of three separate steps, which create epistemic asymmetry between two participants, and thus trigger different levels of meta-representation. The study examines both a “coordination” and a “competition” condition: I present evidence that, all other things being equal, subjects exercise meta-representation more frequently when competing, while using more frequently forms of belief-less reasoning when coordinating. 
These results suggest that individuals do not simply lack the competences to engage in mind-reading operations (as modelled in theories of bounded-rationality), but rather exercise them selectively, depending on the task and on the context. If this is the case, we should broaden our theory of rationality in order to properly account for belief-less reasoning.

**Camilla COLOMBO (IMT Lucca)***Rationality, Coordination, Belief: an Experimental Study*Coordination is key to the success of most human activities, from every-day interactions to the building of complex institutions. Rational Decision Theory explains coordination in terms of strategic rationality: in short, when two or more agents interact, they form beliefs and expectations about what others will do, and, in turn, under the assumption that they are rational, they attribute beliefs and expectations. These two operations are defined “meta-representation” and “mind-reading” respectively. Eventually, the theory prescribes that a rational agent must choose maximizing her expected utility, depending on which strategy she believes others will choose. This model, however, suffers from theoretical and empirical shortcomings. First, strategic rationality does not always solve coordination problems, especially when multiple coordination equilibria are available. Secondly, meta-representation amounts to a cognitively demanding and complex task, while empirical evidence suggests that sophisticated reasoning skills are neither possessed nor exercised in most interactions. In this work, I argue that, in order to adequately explain cooperation, we need to look into alternative models of reasoning, which do not involve mind-reading. I group these alternative accounts under the label “belief-less reasoning” (e.g: team reasoning, means-end rationality, solution thinking). To do so, I designed an experimental study based on the “Hi-Lo” game. The experiment consists of three separate steps, which create epistemic asymmetry between two participants, and thus trigger different levels of meta-representation. The study examines both a “coordination” and a “competition” condition: I present evidence that, all other things being equal, subjects exercise meta-representation more frequently when competing, while using more frequently forms of belief-less reasoning when coordinating. 
These results suggest that individuals do not simply lack the competences to engage in mind-reading operations (as modelled in theories of bounded-rationality), but rather exercise them selectively, depending on the task and on the context. If this is the case, we should broaden our theory of rationality in order to properly account for belief-less reasoning.
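For readers unfamiliar with it, the “Hi-Lo” game can be sketched as a two-player payoff table. The payoff values below are illustrative assumptions, not those used in the study:

```python
# Hi-Lo coordination game (illustrative payoffs; the values used in the
# study are not given here, so these numbers are assumptions).
# Coordinating on "Hi" is best for both; coordinating on "Lo" also
# succeeds, but pays less; miscoordination pays nothing.
PAYOFFS = {
    ("Hi", "Hi"): (2, 2),
    ("Lo", "Lo"): (1, 1),
    ("Hi", "Lo"): (0, 0),
    ("Lo", "Hi"): (0, 0),
}

def best_reply(believed_move):
    """Strategic rationality: maximize expected utility given a belief
    about the other player's move (i.e., after mind-reading)."""
    return max(["Hi", "Lo"], key=lambda m: PAYOFFS[(m, believed_move)][0])

# Both (Hi, Hi) and (Lo, Lo) are equilibria: each strategy is a best
# reply to itself.
print(best_reply("Hi"), best_reply("Lo"))  # → Hi Lo
```

Because each strategy is a best reply to itself, strategic rationality alone does not select the mutually preferred (Hi, Hi) outcome — this equilibrium-selection gap is the first shortcoming the abstract points to.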

## Wednesday 23 February 2022

**Cristina SAGRAFENA (University of Turin)**

*The Old Evidence Problem and the Inference to the Best Explanation*

The Problem of Old Evidence (POE) states that Bayesian confirmation theory cannot account for the intuition according to which a theory H can be confirmed by a piece of evidence E already known.

Different dimensions of POE have been highlighted (Eells 1985). Here, I consider the dynamic and static dimension. In the former, we want to explain how the discovery that H accounts for E confirms H. In the latter, we want to understand why E is and will be a reason to prefer H over its competitors.

The aim of the talk is twofold. Firstly, I point out that two recent solutions to the dynamic dimension, proposed by Eva and Hartmann (2020), can be read in terms of Inference to the Best Explanation (IBE). By making such a reading explicit, I gauge the weaknesses and strengths of the two models. In particular, I contend that, while one condition of their first model is not an expression of Eva and Hartmann’s understanding of IBE, the only condition employed in their second model is.

Secondly, I focus on the static dimension of POE which, now, has to be expressed in IBE terms. To solve it, I rely on the counterfactual approach (Howson 1984), and on a version of IBE in which explanatory considerations help to evaluate the terms in Bayes’ theorem (Okasha 2000; Lipton 2001). However, it turns out that the problems of the counterfactual approach recur even when it is used to solve the static POE in IBE terms.

## Wednesday 02 February 2022

**Malvina ONGARO (University of Eastern Piedmont)**

*Uncertainties in Decision-Making*

Uncertainty is a pervasive feature of life. From the smallest choices to the big issues of our society, we may not be sure about what we want, what is the case, and what will happen. All this uncertainty is problematic because it makes it hard to choose what to do: ultimately, we need knowledge because we need to act effectively in our environment, and conditions of uncertainty hinder our efforts to move adequately in the world. To overcome this, we need principles to guide our decision-making.

But the concept of uncertainty has been addressed from different perspectives and using different labels. Discussions on uncertainty include mentions of risk, ignorance, ambiguity, unawareness, as well as distinctions between epistemic, aleatory, external, internal, fundamental, subjective, ontological, normative, moral, empirical, Keynesian, Knightian, severe, deep, and great uncertainty - among others. If this variety corresponds to an actual plurality of types of uncertainty, then we may need a corresponding plurality of approaches to face uncertainty in decisions.

In this paper, I explore the role that different types of uncertainty play in decision-making as understood by standard decision theory. I start by reviewing the traditional decision-theoretical treatment of uncertainty, which allows only for variations in severity. I then propose a distinction between cognitive and non-cognitive uncertainty to highlight the limits of the traditional treatment, and I identify the points in decision-making in which different subtypes of these categories play a role. I conclude with a discussion of which of these types of uncertainty can lead to radical disagreement, a situation that has important implications for how we should make decisions.

## Wednesday 19 January 2022

**Luca ZANETTI (Polytechnic University Milan)**

*Philosophical Aspects of Seismic Hazard Analysis*

Ensemble modelling has become the dominant approach in the Earth sciences when competing models are available but the data are not sufficient for their validation. In this paper we consider the use of ensemble modelling in probabilistic seismic hazard analysis (PSHA). We argue that the confirmation of seismic hazard estimates is sensitive to both epistemic and non-epistemic values, and we discuss some consequences for the practice of PSHA. (Joint work with Lorenza Petrini and Daniele Chiffi.)

## Wednesday 17 November 2021

**Caterina SISTI (University of Turin)**

*Ravens and Strawberries: Hempel and Ramsey on Laws, Explanation and Prediction*

In this talk I will present and discuss some similarities and differences between Ramsey's and Hempel's accounts of laws and generalisations. First, I will introduce Ramsey's generalisations, i.e. variable hypotheticals, and their characterisation into laws and chances. I will stress the role they play in supporting some types of conditionals, such as counterfactuals. Then I will discuss Hempel's account of laws and statistical generalisations and how they contribute to the explanation of a phenomenon. The way Ramsey's variable hypotheticals support conditionals closely recalls Hempel's DN-model of explanation, where the *explanandum* is the consequent of the conditional while the *explanans* is constituted by the law(s) together with the antecedent (and additional facts). However, when we compare the two accounts a major difference emerges: Hempel requires laws to be true propositions, whereas for Ramsey they are not propositions at all. In the last part of the talk I will discuss some possible applications of the comparison. In particular, I will focus on the ravens paradox, which Ramsey's account of laws, if adopted, seems to block.

## Wednesday 3 November 2021

**Saúl PÉREZ-GONZÁLEZ (University of Turin)**

*Epidemiological models and COVID‑19*

Epidemiological models have played a central role in the COVID-19 pandemic. They have been used to predict the evolution of the disease and to inform policy-making. In this paper, we address two kinds of epidemiological models widely used in the pandemic, namely compartmental models and agent-based models. After describing their essentials (invoking some real examples), we discuss their main strengths and weaknesses. Then, on the basis of this analysis, we compare their respective merits with regard to three different goals: prediction, explanation, and intervention. We argue that preferences for particular models must be grounded case by case, since contextual factors, such as the peculiarities of the target population and the aims and expectations of policy-makers, cannot be overlooked. (Joint work with Valeriano Iranzo.)
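As a point of reference for the term “compartmental model”, here is a minimal discrete-time SIR sketch — a generic textbook illustration, not one of the models discussed in the paper, and the parameter values are arbitrary assumptions:

```python
# Minimal discrete-time SIR compartmental model. The population is split
# into Susceptible, Infectious, and Recovered fractions; beta and gamma
# are illustrative transmission and recovery rates (assumptions).
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

s, i, r = 0.99, 0.01, 0.0  # start with 1% of the population infectious
for _ in range(200):
    s, i, r = sir_step(s, i, r)

# The three compartments always sum to the whole population.
print(round(s + i + r, 6))  # → 1.0
```

Agent-based models, by contrast, simulate individuals explicitly, which lets heterogeneity and contact structure enter the model at the cost of many more assumptions.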

## Wednesday 27 October 2021

**Gregorie Dupuis-Mc Donald (University of Salzburg)**

*Explaining causation in international migration with a complex system approach*

One problem in international migration stems from its causal complexity: there are many conditions that drive migration, and there appear to be different levels (viz. micro-, meso-, and macro-level) at which causation can operate. Moreover, standard models of migration do not give a clear representation of the causal processes that drive migration flows. The objective of my presentation is therefore to explain how a complex-system approach can help to make progress in the understanding of causation in migration. More precisely, I explain why we can consider migration a complex system, and I spell out what we need in order to theorize causation in migration with a complex-system approach. My proposal is the following: first, we need Agent-Based Models (ABMs) that represent the elements, interactions, and decisions that lead to migration; second, we need a general causal account that conceptualises the feedbacks between different levels of causation in a given system. All in all, my proposal indicates one response to the problem of causation in migration science.
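To make the kind of cross-level feedback at stake concrete, a deliberately toy agent-based sketch might look as follows. Every mechanism and parameter here is an invented assumption for illustration, not the speaker's model:

```python
import random

# Toy agent-based sketch of multi-level causation in migration:
# individual (micro-level) propensities to migrate are amplified by a
# network feedback from earlier migrants (meso-level). All numbers are
# illustrative assumptions.
random.seed(0)

N = 1000          # agents at the origin
BASE_P = 0.01     # baseline individual migration propensity
NETWORK = 0.5     # added propensity per share of agents already abroad

migrated = 0
for step in range(10):
    share_abroad = migrated / N
    p = BASE_P + NETWORK * share_abroad   # meso-level feedback on micro decisions
    movers = sum(1 for _ in range(N - migrated)
                 if random.random() < p)
    migrated += movers

# Migration accelerates as the network abroad grows.
print(0 < migrated < N)  # → True
```

Even this caricature shows why a general causal account is needed alongside the ABM: the aggregate flow (macro) is produced by individual decisions (micro) that are themselves reshaped by the accumulated flow (meso), so no single level carries the causal story on its own.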