2025 Summer School in Logic and Formal Epistemology at Carnegie Mellon University
The Philosophy Department at Carnegie Mellon University is happy to announce the 2025 Summer School in Logic and Formal Epistemology: June 2–20, 2025, on the campus of Carnegie Mellon University.
June 2–6: Aydin Mohseni
Bayesian Epistemology and Metascience
This course explores the foundational principles of Bayesian epistemology and their applications to metascience. Students will engage with topics including probabilistic reasoning, updating beliefs in light of evidence, and the use of formal models to analyze and improve scientific practices. Special emphasis will be placed on addressing contemporary challenges in science, including the replication crisis and recent methodological reforms.
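As a taste of the kind of formal reasoning involved, here is a minimal Python sketch of a Bayesian updating calculation often used in metascience discussions of replication; the prior, power, and significance level below are illustrative values chosen for this sketch, not figures from the course.

```python
# Illustrative sketch only: Bayes' rule applied to a metascience question.
# Given a prior probability that a tested hypothesis is true, the test's power,
# and its false-positive rate, what is the probability that a "significant"
# finding reflects a true hypothesis?

def posterior_given_significant(prior, power, alpha):
    """P(hypothesis true | significant result), by Bayes' rule."""
    true_positives = power * prior
    false_positives = alpha * (1.0 - prior)
    return true_positives / (true_positives + false_positives)

# Hypothetical numbers: 10% of tested hypotheses are true, power 0.8, alpha 0.05.
print(posterior_given_significant(prior=0.1, power=0.8, alpha=0.05))  # ~0.64
# Even a significant result leaves substantial doubt, which is one Bayesian way
# of framing the replication crisis.
```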
June 9–13: Teddy Seidenfeld & Floris Persiau
Decision Theory, Imprecise Probabilities, and Algorithmic Randomness
The first half of this week will provide a decision-theoretic perspective for introducing selected topics in Imprecise Probabilities:
- B. de Finetti’s Coherence and personal probability.
- The Fundamental Theorem and “weak-IP.”
- On the Value of Information and IP theory.
- Optimal sequential decisions and the disvalue of new information:
  - Act/state dependence and game theory.
  - IP sequential decision making and dilation (a numerical sketch of dilation follows this list).
- Multi-agent IP decision theory:
  - Pareto consensus and non-binary theories of choice.
- Axiomatizing IP decision making:
  - Three opportunities for IP DT:
    - Forward induction in sequential games.
    - IP forecasting and de Finetti’s two senses of coherence.
    - Dominance principles: non-Archimedean theories, …
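As a small numerical illustration of dilation, one of the phenomena listed above, the following Python sketch uses a standard two-coin setup (the particular numbers and setup are chosen here for illustration, not taken from the course): before observing the first coin, the probability of the event of interest is exactly 1/2, but conditioning on either outcome spreads it across a whole interval.

```python
# Two coins: X is fair; Z has unknown bias theta, independent of X; Y = X XOR Z.
# Unconditionally P(Y=1) = 1/2 exactly, whatever theta is; but conditional on
# either observed value of X, P(Y=1) can be anywhere in the credal interval.

thetas = [0.2, 0.35, 0.5, 0.65, 0.8]   # bias of Z known only to lie in [0.2, 0.8]

def p_y1(theta, x=None):
    """P(Y=1), or P(Y=1 | X=x) when x is given."""
    if x is None:
        return 0.5 * theta + 0.5 * (1 - theta)   # marginal over the fair coin X
    return theta if x == 0 else 1 - theta        # Y = 1 iff Z = 1 - x

marginal = {round(p_y1(t), 10) for t in thetas}
cond_x0 = [round(p_y1(t, x=0), 10) for t in thetas]
cond_x1 = [round(p_y1(t, x=1), 10) for t in thetas]
print(marginal)                        # {0.5}: a single precise value
print(min(cond_x0), max(cond_x0))      # 0.2 0.8
print(min(cond_x1), max(cond_x1))      # 0.2 0.8
# Learning X, whichever way it lands, "dilates" the probability of Y=1 from
# the point 0.5 to the whole interval [0.2, 0.8].
```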
The second half of the week will connect some of these ideas to the field of algorithmic randomness, which studies what it means for an infinite outcome sequence to be random. Consider, for example, infinite binary sequences generated by flipping a fair coin, so that each outcome has probability 1/2: the infinite binary sequence 01010101… does not seem random at all, whereas the sequence 10001011… seems more random. Algorithmic randomness notions try to formalise this intuition by defining what it means for an infinite sequence to be random with respect to an uncertainty model. Classically, these uncertainty models are probability measures. The field of imprecise probabilities, on the other hand, questions whether precise probabilistic uncertainty models always suffice to capture one’s uncertainty, and puts forward alternative and (even) more general uncertainty models that allow for reasoning in an informative and conservative way. A natural question then arises: can we allow for these more general uncertainty models in algorithmic randomness notions, and how does this change our understanding of algorithmically random sequences? Are there, for example, sequences whose randomness can only be described by an imprecise uncertainty model? This course will introduce several basic concepts from the field of algorithmic randomness and will answer these questions and more.
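To make the betting intuition behind these randomness notions concrete, here is a rough Python sketch of the martingale (betting-strategy) view under the fair-coin model; the particular strategy is chosen purely for illustration. Roughly, a sequence fails to be random if some computable betting strategy earns unbounded capital along it, and a simple alternation-betting strategy already does so on 01010101…

```python
# Sketch only: a betting strategy at fair (even-money) odds that stakes all of
# its capital on "the next bit differs from the previous one". Doubling when
# right and losing everything when wrong keeps the expected capital constant
# under the fair-coin model, so this is a legitimate (and computable) strategy.

def capital_along(sequence, initial=1.0):
    """Capital history while betting everything on alternation."""
    capital = initial
    history = []
    for prev, nxt in zip(sequence, sequence[1:]):
        guess = 1 - prev                        # predict alternation
        capital = 2 * capital if nxt == guess else 0.0
        history.append(capital)
    return history

alternating = [0, 1] * 8                        # 0101... looks non-random
print(capital_along(alternating))               # capital doubles at every bet
print(capital_along([1, 0, 0, 0, 1, 0, 1, 1]))  # the 10001011... example:
# the same strategy quickly goes broke, so this strategy, at least, does not
# witness any non-randomness in that sequence.
```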
June 16–20: Clark Glymour & Peter Spirtes
The Logic of Discovery
The very idea that there is, or could be, a “Logic of Discovery” analogous to deductive logic but for empirical laws has been advocated and disputed for three centuries. In this century it was replaced by the development of algorithms that attempt to infer laws and causal relations from empirical data. This course will briefly review the history but focus on 21st century developments. We will describe algorithms, proofs of their correctness properties, and applications in finance, biology, neuroscience, and other areas.