Publications

Journal

We are responsible for the journal Philosophy of AI, which is published by the Society for the Philosophy of Artificial Intelligence and founded by PAIR members and associate members. Editors-in-chief: Guido Löhr & Vincent C. Müller. Editorial assistant: Eleonora Catena.

Forthcoming Publications

  • Ashby, B. (forthcoming). Let sleeping dogs lie: Stereotype completion and the phenomenology of category recognition. Philosophical Studies.
  • Dung, L., & Balg, D. (forthcoming). Right in the feels: Academic philosophy, disappointed students and the big questions of life. Teaching Philosophy.
  • Kirchhoff, M., Kiverstein, J., & Robertson, I. (forthcoming). The Literalist Fallacy and the Free Energy Principle: Model-Building, Scientific Realism, and Instrumentalism. British Journal for the Philosophy of Science, 76. https://doi.org/10.1086/720861
  • Müller, V. C. (forthcoming), ‘Deep opacity undermines data protection and explainable artificial intelligence’, in M. Hähnel and R. Müller (eds.), Handbook for Applied Philosophy of AI (London: Wiley).
  • Müller, V. C. (under contract), Can machines think? Fundamental problems of artificial intelligence (New York: Oxford University Press).
  • Müller, V. C. (ed.), (under contract), Oxford handbook of the philosophy of artificial intelligence (New York: Oxford University Press).
  • Müller, V. C., Dewey, A. R., Dung, L., & Löhr, G. (eds.) (forthcoming), Philosophy of Artificial Intelligence: The State of the Art (Synthese Library, Berlin: SpringerNature).
  • Müller, V. C. and Löhr, G. (forthcoming), Artificial minds, (Cambridge Elements – Philosophy of Mind, ed. Keith Frankish; Cambridge: Cambridge University Press).
  • Robertson, I. (under contract). AI and Expertise. Cambridge: Cambridge University Press.

2025

  • Catena, E., Tummolini, L., & Santucci, V. G. (2025). Human autonomy with AI in the loop. Philosophical Psychology, 1–28. https://doi.org/10.1080/09515089.2024.2448217
  • Müller, V. C. (2025), ‘Philosophy of AI: A structured overview’, in Nathalie A. Smuha (ed.), Cambridge handbook on the law, ethics and policy of Artificial Intelligence (Cambridge: Cambridge University Press), 40-58.
  • Müller, V. C. (2025), ‘Total surveillance – everybody watching everybody else’, in Henning Glaser and Pindar Wong (eds.), Governing the Future: Digitalization, Artificial Intelligence, Dataism (London: CRC Press – Routledge).

2024

  • Andreotta, A. J., & Lundgren, B. (2024). Automated informed consent. Big Data & Society, 11(4). https://doi.org/10.1177/20539517241289439
  • Balg, D., Bolte, L., Burkard, A., Constantin, J., Dung, L., Gottschalk, J., Gregor-Gehrmann, K., Guntermann, I., & Schulz, K. (2024). “Ethische Fragen der Künstlichen Intelligenz” (interview and paper). In: Müller, Vincent C. (ed.), Göttingen: Philovernetzt. https://www.philovernetzt.de/ethische-fragen-der-kuenstlichen-intelligenz/
  • Caporuscio, C., Fink, S.B. (2024). Epistemic Risk Reduction in Psychedelic-Assisted Therapy. In: Current Topics in Behavioral Neurosciences. Springer, Berlin, Heidelberg, https://doi.org/10.1007/7854_2024_531
  • de Weerd, C. R. (2024). A Credence-based Theory-heavy Approach to Non-human Consciousness. Synthese, 203(5). https://doi.org/10.1007/s11229-024-04539-6
  • Dewey, A. R. (2024). Balancing the evidential scale for the mental unconscious [commentary]. Philosophical Psychology. https://doi.org/10.1080/09515089.2024.2316837
  • Dewey, A. R. (2024). Anatomy’s role in mechanistic explanations of organism behaviour. Synthese, 203(137).
  • Dung, L. (2024). Is superintelligence necessarily moral? Analysis. https://doi.org/10.1093/analys/anae033
  • Dung, L. (2024). The argument for near-term human disempowerment through AI. AI & Society. https://doi.org/10.1007/s00146-024-01930-2
  • Dung, L. (2024). Evaluating approaches for reducing catastrophic risks from AI. AI and Ethics. https://doi.org/10.1007/s43681-024-00475-w
  • Dung, L. (2024). Understanding artificial agency. The Philosophical Quarterly. https://doi.org/10.1093/pq/pqae010
  • Dung, L. (2024). Preserving the normative significance of sentience. Journal of Consciousness Studies, 31(1–2), 8–30. https://doi.org/10.53765/20512201.31.1.008
  • Dung, L., & Kersten, L. (2024). Implementing artificial consciousness. Mind & Language 40(1), 1–21. https://doi.org/10.1111/mila.12532
  • Fink, S. B. (2024). On Acid Empiricism. In: The Palgrave Handbook of Philosophy and Psychoactive Drug Use (pp. 225-244). Cham: Springer Nature Switzerland.
  • Fink, S. B. (2024). How-tests for consciousness and direct neurophenomenal structuralism. Frontiers in Psychology 15. https://doi.org/10.3389/fpsyg.2024.1352272
  • Hutto, D. D., & Robertson, I. (2024). What Comes Naturally?: Relaxed Naturalism’s New Philosophy of Nature. In Naturalism and Its Challenges (pp. 19-35). Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/9781003430568-2/comes-naturally-daniel-hutto-ian-robertson
  • Lundgren, B. (2024), There is No Scarcity Problem. Philosophy & Technology 37, 130. https://doi.org/10.1007/s13347-024-00815-y
  • Lundgren, B. (2024). Is a Moral Right to Privacy Limited by Agents’ Lack of Epistemic Control? Logos & Episteme 15(1): 83-87.
  • Lundgren, B. (2024). A new standard for accident simulations for self-driving vehicles: Can we use Waymo’s results from accident simulations? AI & Society, 39: 669–673. https://doi.org/10.1007/s00146-022-01495-y
  • Lundgren, B. (2024). Undisruptable or stable concepts: Can we design concepts that can avoid conceptual disruption, normative critique, and counterexamples? Ethics and Information Technology.
  • Lundgren, B., & Kudlek, K. (2024). What we owe (to) the present: Normative and practical challenges for strong longtermism. Futures 164, https://doi.org/10.1016/j.futures.2024.103471
  • Lundgren, B., Catena, E., Robertson, I., Hellrigel-Holderbaum, M., Jaja, I. R., & Dung, L. (2024). On the need for a global AI ethics. Journal of Global Ethics. https://doi.org/10.1080/17449626.2024.2425366
  • Müller, V. C. (2024), ‘Philosophie der Künstlichen Intelligenz: Ein strukturierter Überblick’, trans. Matthias Kettner, in Rainer Adolphi, et al. (eds.), Philosophische Digitalisierungsforschung: Verantwortung, Verständigung, Vernunft, Macht (Bielefeld: transcript Verlag), 345-70.
  • Müller, V. C. and Hähnel, M. (2024), Was ist, was kann, was soll KI? Ein philosophisches Gespräch (Blaue Reihe; Hamburg: Felix Meiner).
  • Porter, Ronald D./Shen, Minquian/Fabrigar, Leandre R./Seaboyer, Anthony (2024): Assessing Influence in Target Audiences that Won’t Say or Don’t Know How Much They Have Been Influenced. In: Éric Ouellet (Ed): Deterrence in the 21st Century: Statecraft in the information environment, University of Calgary Press, 36p.
    https://ucp.manifoldapp.org/read/deterrence-in-the-21st-century/section/097612c2-2005-4d56-b195-f2e418377755#_idParaDest-26
  • Repantis, D., Koslowski, M. & Fink, S.B. (2024) Ethische Aspekte der Therapie mit Psychedelika. Psychotherapie 69, 115–121. https://doi.org/10.1007/s00278-024-00710-z
  • Robertson, I. (2024). A Problem for Autonomous Know-How. Erkenntnis, 1–9. https://doi.org/10.1007/s10670-024-00849-w
  • Robertson, I. G. (2024). In Defence of Radically Enactive Imagination. Thought: A Journal of Philosophy. https://doi.org/10.5840/tht202421525
  • Seaboyer, Anthony/Jolicoeur, Pierre (2024): L’intelligence artificielle russe comme outil de désinformation et de déception en Ukraine. In: Frédéric Côté et André Simonyi (Eds.): Le Canada à l’aune du conflit en Ukraine. Québec: University of Laval Press.
  • Seaboyer, Anthony/Jolicoeur, Pierre (2024): The Evolution of China’s Information Exploitation of COVID-19. In: Éric Ouellet (Ed): Deterrence in the 21st Century: Statecraft in the information environment, University of Calgary Press, 28p.

2023

  • Dung, L. (2023). Tests of animal consciousness are tests of machine consciousness. Erkenntnis. https://doi.org/10.1007/s10670-023-00753-9
  • Dung, L. (2023). Current cases of AI misalignment and their implications for future risks. Synthese, 202(5), 138. https://doi.org/10.1007/s11229-023-04367-0
  • Dung, L. (2023). How to deal with risks of AI suffering. Inquiry. https://doi.org/10.1080/0020174X.2023.2238287
  • Hopster, J., Brey, P., Klenk, M., Löhr, G., Marchiori, S., Lundgren, B., Scharp, K. (2023). Conceptual Disruption and the Ethics of Technology. In van de Poel, I. et al. (eds.) Ethics of Socially Disruptive Technologies: An Introduction. Open Book Publishers, pp. 141–162.
  • Lundgren, B. and Stefánsson, H. O. (2023). Can the normic de minimis decision theory save the de minimis principle? Erkenntnis. https://doi.org/10.1007/s10670-023-00751-x
  • Lundgren, B. (2023). Ethical requirements for digital systems for contact tracing in pandemics: a solution to the contextual limits of ethical guidelines. In Macnish, K., Henschke, A. (eds.) The Ethics of Surveillance in Times of Emergency. Oxford University Press, Oxford, pp. 169–185.
  • Lundgren, B. (2023). Should we allow for the possibility of unexercised abilities? A new route to rejecting the poss-ability principle. Inquiry. https://doi.org/10.1080/0020174X.2023.2250390.
  • Lundgren, B. (2023). Is lack of literature engagement a reason for rejecting a paper in philosophy? Res Publica. https://doi.org/10.1007/s11158-023-09632-0
  • Lundgren, B. (2023). Two notes on Axiological Futurism: The importance of disagreement and methodological implications for value theory. Futures 147, 103120. https://doi.org/10.1016/j.futures.2023.103120.
  • Lundgren, B. (2023). In defense of ethical guidelines. AI & Ethics, 3: 1013–1020. https://doi.org/10.1007/s43681-022-00244-7
  • Lundgren, B. (2023). An unrealistic and undesirable alternative to the right to be forgotten. Telecommunications Policy 47(1): 102446. https://doi.org/10.1016/j.telpol.2022.102446.
  • Robertson, I. (2023). The unbearable rightness of seeing? Conceptualism, enactivism, and skilled engagement. Synthese, 202, 173. https://doi.org/10.1007/s11229-023-04385-y