Publications

Journal

We are mainly responsible for the journal Philosophy of AI, which was founded by PAIR members and is published by the Society for the Philosophy of Artificial Intelligence.

Editors-in-chief: Guido Löhr, Vincent C. Müller.

Editorial assistant: Eleonora Catena.

Forthcoming

  • Ashby, B. (forthcoming). Let sleeping dogs lie: Stereotype completion and the phenomenology of category recognition. Philosophical Studies.
  • Lundgren, B., Catena, E., Robertson, I., Hellrigel-Holderbaum, M., Jaja, I.R., & Dung, L. (forthcoming). On the need for a Global AI ethics. Journal of Global Ethics (RJGE).
  • Hutto, D. D., & Robertson, I. (forthcoming). What Comes Naturally: Relaxed Naturalism’s New Philosophy of Nature. In Kemp, G. N., Khani, A. H., Rezaee, H. S., & Amiriara, H. (eds.), Naturalism and Its Challenges.
  • Dewey, A. R. (forthcoming). Anatomy’s role in mechanistic explanations of organism behaviour. Synthese.
  • Dung, L., & Balg, D. (forthcoming). Right in the feels: Academic philosophy, disappointed students and the big questions of life. Teaching Philosophy.
  • Kirchhoff, M., Kiverstein, J., & Robertson, I. (forthcoming). The Literalist Fallacy and the Free Energy Principle: Model-Building, Scientific Realism, and Instrumentalism. British Journal for the Philosophy of Science, 76. https://doi.org/10.1086/720861
  • Müller, V. C., Dewey, A. R., Dung, L., & Löhr, G. (eds.) (forthcoming). Philosophy of Artificial Intelligence: The State of the Art (Synthese Library, Berlin: SpringerNature).
  • Robertson, I. (forthcoming). In Defence of Enactive Imagination. Thought.
  • Robertson, I. (forthcoming). A problem for autonomous knowledge: From TrueTemp to Neuralink. Erkenntnis.
  • Robertson, I. (under contract). AI and Expertise. Cambridge: Cambridge University Press.

2024

  • Caporuscio, C., & Fink, S.B. (2024). Epistemic Risk Reduction in Psychedelic-Assisted Therapy. In: Current Topics in Behavioral Neurosciences. Springer, Berlin, Heidelberg. https://doi.org/10.1007/7854_2024_531
  • Fink, S. B. (2024). On Acid Empiricism. In: The Palgrave Handbook of Philosophy and Psychoactive Drug Use (pp. 225-244). Cham: Springer Nature Switzerland.
  • Andreotta, A.J., & Lundgren, B. (2024). Automated informed consent. Big Data & Society 11(4). https://doi.org/10.1177/20539517241289439
  • Lundgren, B., & Kudlek, K. (2024). What we owe (to) the present: Normative and practical challenges for strong longtermism. Futures 164, https://doi.org/10.1016/j.futures.2024.103471
  • Dung, L., & Kersten, L. (2024). Implementing artificial consciousness. Mind & Language 40(1), 1–21. https://doi.org/10.1111/mila.12532
  • Dung, L. (2024). Is superintelligence necessarily moral? Analysis, anae033. https://doi.org/10.1093/analys/anae033
  • Dung, L. (2024). The argument for near-term human disempowerment through AI. AI & Society. https://doi.org/10.1007/s00146-024-01930-2
  • Dung, L. (2024). Evaluating approaches for reducing catastrophic risks from AI. AI and Ethics. https://doi.org/10.1007/s43681-024-00475-w
  • Lundgren, B. (2024). There is No Scarcity Problem. Philosophy & Technology 37, 130. https://doi.org/10.1007/s13347-024-00815-y
  • Fink, S. B. (2024). How-tests for consciousness and direct neurophenomenal structuralism. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2024.1352272
  • de Weerd, C. R. (2024). A Credence-based Theory-heavy Approach to Non-human Consciousness. Synthese 203, 5. https://doi.org/10.1007/s11229-024-04539-6
  • Dewey, A. R. (2024). Balancing the evidential scale for the mental unconscious [commentary]. Philosophical Psychology. https://doi.org/10.1080/09515089.2024.2316837
  • Dung, L. (2024). Understanding artificial agency. The Philosophical Quarterly. https://doi.org/10.1093/pq/pqae010
  • Dung, L. (2024). Preserving the normative significance of sentience. Journal of Consciousness Studies, 31(1–2), 8–30. https://doi.org/10.53765/20512201.31.1.008
  • Lundgren, B. (2024). Is a Moral Right to Privacy Limited by Agents’ Lack of Epistemic Control? Logos & Episteme 15(1): 83–87.
  • Lundgren, B. (2024). A new standard for accident simulations for self-driving vehicles: Can we use Waymo’s results from accident simulations? AI & Society 39: 669–673. https://doi.org/10.1007/s00146-022-01495-y
  • Lundgren, B. (2024). Undisruptable or stable concepts: Can we design concepts that can avoid conceptual disruption, normative critique, and counterexamples? Ethics and Information Technology.
  • Müller, V. C., & Hähnel, M. (2024). Was ist, was kann, was soll KI? Ein philosophisches Gespräch [What is AI, what can it do, what should it do? A philosophical conversation]. Hamburg: Felix Meiner.
  • Porter, R. D., Shen, M., Fabrigar, L. R., & Seaboyer, A. (2024). Assessing Influence in Target Audiences that Won’t Say or Don’t Know How Much They Have Been Influenced. In Ouellet, É. (ed.), Deterrence in the 21st Century: Statecraft in the Information Environment. University of Calgary Press, 36 pp. https://ucp.manifoldapp.org/read/deterrence-in-the-21st-century/section/097612c2-2005-4d56-b195-f2e418377755#_idParaDest-26
  • Repantis, D., Koslowski, M., & Fink, S.B. (2024). Ethische Aspekte der Therapie mit Psychedelika [Ethical aspects of therapy with psychedelics]. Psychotherapie 69, 115–121. https://doi.org/10.1007/s00278-024-00710-z
  • Seaboyer, A., & Jolicoeur, P. (2024). L’intelligence artificielle russe comme outil de désinformation et de déception en Ukraine [Russian artificial intelligence as a tool of disinformation and deception in Ukraine]. In Côté, F., & Simonyi, A. (eds.), Le Canada à l’aune du conflit en Ukraine. Québec: University of Laval Press.
  • Seaboyer, A., & Jolicoeur, P. (2024). The Evolution of China’s Information Exploitation of COVID-19. In Ouellet, É. (ed.), Deterrence in the 21st Century: Statecraft in the Information Environment. University of Calgary Press, 28 pp.

2023

  • Dung, L. (2023). Tests of animal consciousness are tests of machine consciousness. Erkenntnis. https://doi.org/10.1007/s10670-023-00753-9
  • Dung, L. (2023). Current cases of AI misalignment and their implications for future risks. Synthese, 202(5), 138. https://doi.org/10.1007/s11229-023-04367-0
  • Dung, L. (2023). How to deal with risks of AI suffering. Inquiry. https://doi.org/10.1080/0020174X.2023.2238287
  • Hopster, J., Brey, P., Klenk, M., Löhr, G., Marchiori, S., Lundgren, B., Scharp, K. (2023). Conceptual Disruption and the Ethics of Technology. In van de Poel, I. et al. (eds.) Ethics of Socially Disruptive Technologies: An Introduction. Open Book Publishers, pp. 141–162.
  • Lundgren, B., & Stefánsson, H. O. (2023). Can the normic de minimis decision theory save the de minimis principle? Erkenntnis. https://doi.org/10.1007/s10670-023-00751-x
  • Lundgren, B. (2023). Ethical requirements for digital systems for contact tracing in pandemics: a solution to the contextual limits of ethical guidelines. In Macnish, K., Henschke, A. (eds.) The Ethics of Surveillance in Times of Emergency. Oxford University Press, Oxford, pp. 169–185.
  • Lundgren, B. (2023). Should we allow for the possibility of unexercised abilities? A new route to rejecting the poss-ability principle. Inquiry. https://doi.org/10.1080/0020174X.2023.2250390.
  • Lundgren, B. (2023). Is lack of literature engagement a reason for rejecting a paper in philosophy? Res Publica. https://doi.org/10.1007/s11158-023-09632-0
  • Lundgren, B. (2023). Two notes on Axiological Futurism: The importance of disagreement and methodological implications for value theory. Futures 147, 103120. https://doi.org/10.1016/j.futures.2023.103120.
  • Lundgren, B. (2023). In defense of ethical guidelines. AI & Ethics 3: 1013–1020. https://doi.org/10.1007/s43681-022-00244-7.
  • Lundgren, B. (2023). An unrealistic and undesirable alternative to the right to be forgotten. Telecommunications Policy 47(1): 102446. https://doi.org/10.1016/j.telpol.2022.102446.
  • Robertson, I. (2023). The unbearable rightness of seeing? Conceptualism, enactivism, and skilled engagement. Synthese 202, 173. https://doi.org/10.1007/s11229-023-04385-y