Research
My research addresses the ethical, political, and epistemic tensions that arise in technology policy, across three areas: (1) the philosophy of AI, especially the ethical and societal implications of developing autonomous systems that challenge responsibility ascription and governance; (2) the ethics of influence in digital spaces, including cases in which foreign governments engage in speech using new technologies; and (3) war and technology, namely the normative and conceptual challenges arising from our use of new technologies in gray zone warfare.
Philosophy of AI: This stream focuses on the ethical and societal implications of advanced machine learning (ML) systems, especially those whose behaviors challenge norms of responsibility attribution or complicate existing governance mechanisms. In Machine Agency (MIT Press, 2025), James Mattingly and I examine the difference between machines that can act, and so are agents, and those that cannot, and so are not, and we explore why this distinction matters not just conceptually but socially and ethically. I am working on a second book, AI and the Problem of Holding Responsible, which explores the ways in which AI is testing the limits of current legal notions, including intellectual property and product liability, and challenging moral norms for ascribing praise and blame. The project also has a historical dimension: I examine the political history of responsibility, including creative means of holding responsible, such as truth and reconciliation commissions, and invite a reconsideration of what responsibility is for, and for whom. In short, I argue that current technology demands a reimagination not just of responsibility but of our mechanisms of governance.
Ethics of influence in digital spaces: I am interested in the tensions that arise when people communicate in the social and political spheres, especially where the boundaries between permissible and impermissible influence are unclear, and where epistemic factors directly impact democratic decision-making or institutional integrity. I am currently working on two new projects in this stream. In ‘Mind Games’, I collaborate with computer scientists and psychologists to assess the persuasive capacities of LLMs using theory of mind tasks as benchmarks. This project involves experiments with human participants as well as computational models, and we expect to publish the initial results in 2025. In another emerging project, I am examining whom we hold epistemically responsible for the ‘speech’ of LLMs, especially disinformation. I ultimately argue in favor of distinguishing between epistemic responsibility and moral responsibility, but suggest that we ought not attribute speaker status to AI systems.
Technology and war: My work on technology and gray zone warfare includes a moral study of targeted killing (or drone warfare) and human shielding, in which I defend the adequacy and moral force of international humanitarian law. In ‘Impermissible Targeting of Human Shields’, I demonstrate that the conventional basis for determining which human shields are targetable is unjustifiable on both epistemological and conceptual grounds. On the view I defend, all shields presumptively count as non-combatants who cannot be targeted. In future work, I would like to explore the ethics and politics of using AI, specifically computer vision, to improve targeting practices in armed conflicts that satisfy jus ad bellum requirements. Such use enhances our ability to satisfy the principle of distinction, but it may also erode moral responsibility for a decision that many argue humans, not artificial systems, ought to author.
PUBLICATIONS
BOOKS
- Mattingly, James & Cibralic, Beba (co-first authors) (2025). Machine Agency. Cambridge, MA: MIT Press. https://mitpress.mit.edu/9780262549981/philosophy-agency-and-ai/
PEER-REVIEWED PUBLICATIONS
- Nyrup, Rune & Cibralic, Beba (co-first authors) (2024). “Idealism, realism, pragmatism: three modes of theorising within secular AI ethics”, in Barry Solaiman & I. Glenn Cohen (eds.), Research Handbook on Health, AI and the Law. Edward Elgar Publishing. pp. 203-218.
- Cibralic, Beba (2024). “A Topography of Information-Based Foreign Influence”, in Mitt Regan & Aurel Sari (eds.), Hybrid Threats and Grey Zone Conflict: The Challenge to Liberal Democracies. New York, NY: Oxford University Press. pp. 157-178.
- Cibralic, Beba (2024). “Influence, War, and Ethics”, Journal of National Security Law and Policy 14 (1):29-54.
- Cibralic, Beba (2023). “Impermissible Targeting of Human Shields”, ARSP (Archiv für Rechts- und Sozialphilosophie) 109 (2):171-194.
- Cibralic, Beba & Mattingly, James (co-first authors) (2022). “Machine agency and representation”, AI and Society 39 (1):345-352.
BOOK REVIEWS
- Cibralic, Beba (2022). Review of Kate Darling’s New Breed. Essays in Philosophy.
- Cibralic, Beba (2021). Review of German A. Duarte and Justin Michael Battin’s Reading Black Mirror. London School of Economics Review of Books blog.
POLICY REPORTS
- Cibralic, Beba (2020). “Climate Colonialism Workshop Report.” Available upon request.
- Cibralic, Beba (first author) & Connelly, Aaron (2018). “Russia’s Disinformation Game in Southeast Asia.” Lowy Interpreter. https://www.lowyinstitute.org/the-interpreter/russias-disinformation-game-southeast-asia
- Cibralic, Beba (first author) & Flitton, Daniel (2018). “Trump-Putin: Beyond Election Meddling.” Lowy Interpreter. https://www.lowyinstitute.org/the-interpreter/trump-putin-beyond-election-meddling
PUBLIC PHILOSOPHY
- Leverhulme Centre (Cambridge) (2022). Group submission to the Future of Life Institute WorldBuilder Contest on AI Futures (2nd place). https://worldbuild.ai/core-central/
- Táíwò, Olúfẹ́mi & Cibralic, Beba (co-first authors) (2022). “If the west can harbour Ukrainians, it can accept the many climate refugees to come.” The Guardian. https://www.theguardian.com/commentisfree/2022/apr/01/ukraine-war-west-immigration-climate-refugees
- Cibralic, Beba (2021). “Astrology: Informative, Harmful, or Just Plain Fun?” Aesthetics for Birds. https://aestheticsforbirds.com/2021/08/19/astrology-informative-harmful-or-just-plain-fun/
- Cibralic, Beba (2021). “Epistemic Norms and Failures of Reporting.” Social Epistemology Review and Reply Collective 10 (5): 1-5. https://wp.me/p1Bfg0-5NTi
- Cibralic, Beba (2020). “‘Caliphate’ and the Problem of Testimony.” Social Epistemology Review and Reply Collective 9 (12): 33-36. https://wp.me/p1Bfg0-5zi
- Táíwò, Olúfẹ́mi & Cibralic, Beba (co-first authors) (2020). “The Case for Climate Reparations.” Foreign Policy. https://foreignpolicy.com/2020/10/10/case-for-climate-reparations-crisis-migration-refugees-inequality/
- Cibralic, Beba & Lang, JJ (2020). “Why do we get sad when robots die?” The Outline. https://theoutline.com/post/8514/why-am-i-sad-robot-died