• About me
  • Research
  • Teaching

Research

My research in philosophy addresses the ethical, political, and epistemic tensions raised by technology policy in three areas: (1) the philosophy of AI, especially the ethical and societal implications of developing autonomous systems that challenge responsibility ascription and governance; (2) the ethics of influence in digital spaces, including when foreign governments use new technologies to engage in speech; and (3) war and technology, namely the normative and conceptual challenges arising from the use of new technologies in gray zone warfare.

Philosophy of AI: This stream focuses on the ethical and societal implications of advanced machine learning (ML) systems, especially those whose behavior challenges norms of responsibility attribution or complicates existing governance mechanisms. In Machine Agency (MIT Press, 2025), James Mattingly and I examine what distinguishes machines that can act, and so are agents, from those that cannot, and we explore why this distinction matters not just conceptually but socially and ethically. I am working on a second book, AI and the Problem of Holding Responsible, which explores the ways in which AI is testing the limits of current legal notions, including intellectual property and product liability, and challenging moral norms for ascriptions of praise and blame. The project also has a historical dimension: I examine the political history of responsibility, including creative means of holding responsible, such as truth and reconciliation commissions, and invite a reconsideration of what responsibility is for, and for whom. In short, I argue that current technology demands a reimagination not just of responsibility, but of mechanisms of governance.


Ethics of Influence in Digital Spaces: I am interested in the tensions that arise when people communicate in the social and political spheres, especially where the boundaries between permissible and impermissible influence are unclear, and where epistemic factors directly affect democratic decision-making or institutional integrity. I am currently working on two new projects in this stream. In ‘Mind Games’, I collaborate with computer scientists and psychologists to assess the persuasive capacities of LLMs, using theory of mind tasks as benchmarks. The project involves experiments with human participants as well as computational models, and we expect to publish initial results in 2025. In another emerging project, I examine whom we hold epistemically responsible for the ‘speech’ of LLMs, especially disinformation. I ultimately argue for distinguishing between epistemic responsibility and moral responsibility, but suggest that we ought not attribute speaker status to AI systems.

Technology and War: My work on technology and gray zone warfare includes a moral study of targeted killing (or drone warfare) and human shielding, in which I defend the adequacy and moral force of international humanitarian law. In ‘Impermissible Targeting of Human Shields’, I demonstrate that the conventional basis for determining which human shields are targetable is unjustifiable on both epistemological and conceptual grounds. On the view that I defend, all shields presumptively count as non-combatants who cannot be targeted. In future studies, I would like to explore the ethics and politics of using AI, specifically computer vision, to improve targeting practices in armed conflicts that satisfy jus ad bellum requirements. Such use enhances our ability to satisfy the principle of distinction, but it may also erode moral responsibility for decisions that many argue humans, not artificial systems, ought to author.

My work in AI policy and governance builds on this conceptual and normative work and also considers the broader socio-economic and geopolitical implications of developing and diffusing AI systems. Currently at RAND, and previously with the Center for Security and Emerging Technology and the Center for Democracy and Technology, I have produced research aimed at helping policymakers understand the potential impacts of AI. Much of this work is non-public.

PUBLICATIONS
BOOKS
  • Mattingly, James & Cibralic, Beba (co first authors), Machine Agency (MIT Press, Feb 5, 2025) https://mitpress.mit.edu/9780262549981/philosophy-agency-and-ai/

PEER-REVIEWED PUBLICATIONS
  • Moore, Jared, Overmark, Rasmus, Cooper, Ned, Cibralic, Beba, Jones, Cameron. “Do Large Language Models Have a Planning Theory of Mind? Evidence from a Multi-Step Persuasion Task”. CogSci 2025 (Forthcoming, COLM, Fall 2025). 
  • Fleisher W, Cibralic B, Basl J, Ricks V, and Smith M. “Responsibility and Accountability in an Algorithmic Society.” (Forthcoming, Philosophy and Technology, Fall 2025). 
  • Nyrup, Rune & Cibralic, Beba (co first authors) (2024). “Idealism, realism, pragmatism: three modes of theorising within secular AI ethics”, in Barry Solaiman & I. Glenn Cohen (eds.), Research Handbook on Health, AI and the Law. Edward Elgar Publishing. pp. 203-218.
  • Cibralic, Beba (2024). “A Topography of Information-Based Foreign Influence”, in Mitt Regan & Aurel Sari (eds.), Hybrid Threats and Grey Zone Conflict: The Challenge to Liberal Democracies. New York, NY: Oxford University Press. pp. 157-178.
  • Cibralic, Beba (2024). “Influence, War, and Ethics”, Journal of National Security Law and Policy 14 (1):29-54.
  • Cibralic, Beba (2023). “Impermissible Targeting of Human Shields”, ARSP (Archiv für Rechts- und Sozialphilosophie) 109 (2):171-194.
  • Cibralic, Beba & Mattingly, James (co first authors) (2022). “Machine agency and representation”, AI and Society 39 (1):345-352.

BOOK REVIEWS
  • Cibralic, Beba, 2022. Review of Kate Darling’s New Breed. Essays in Philosophy.
  • Cibralic, Beba, 2021. Review of German A. Duarte and Justin Michael Battin’s Reading Black Mirror. London School of Economics Review of Books blog.

POLICY & RAND REPORTS
  • Boudreaux, Benjamin and Cibralic, Beba (co first authors), 2025. “Artificial Intelligence and the Social Contract: Foundations for Social and Economic Policy Under Technological Transformation”. RAND Corporation. https://www.rand.org/pubs/perspectives/PEA3888-1.html
  • Welburn et al., 2025. “Rethinking Social and Economic Policy in the Age of General-Purpose Artificial Intelligence: Navigating the Cascading Impacts of AI Adoption.” RAND Corporation, https://www.rand.org/pubs/research_reports/RRA3888-2.html
  • Cibralic et al., 2025. “Building Blocks in Responsible AI.” Center for Democracy and Technology, https://cdt.org/wp-content/uploads/2025/09/2025-09-17-AI-Gov-Lab-Issue-Brief-Responsible-AI-final-2.pdf
  • Cibralic et al., 2024. “On the Responsible Development and Use of Chem-Bio AI Models: Comments on Evaluations, Mitigations, and Emerging Trends”. RAND Corporation, https://www.rand.org/pubs/perspectives/PEA3674-1.html
  • Toner et al., 2024. “Through the Chat Window and Into the Real World: Preparing for AI Agents.” Center for Security and Emerging Technology, https://cset.georgetown.edu/publication/through-the-chat-window-and-into-the-real-world-preparing-for-ai-agents/
  • Cibralic, Beba, 2020. “Climate Colonialism Workshop Report.” Available upon request.
  • Cibralic, Beba (first author) & Connelly, Aaron, 2018. “Russia’s Disinformation Game in Southeast Asia.” Lowy Interpreter, https://www.lowyinstitute.org/the-interpreter/russias-disinformation-game-southeast-asia
  • Cibralic, Beba (first author) & Flitton, Daniel, 2018. “Trump-Putin: Beyond Election Meddling.” Lowy Interpreter, https://www.lowyinstitute.org/the-interpreter/trump-putin-beyond-election-meddling

PUBLIC PHILOSOPHY
  • Boudreaux, Benjamin and Cibralic, Beba (co first authors), 2025. “Will AI Help or Hurt Democracy? There’s a User Manual for That”. USA Today.
  • Leverhulme Centre (Cambridge) Group Submission to Future of Life Institute WorldBuilder Contest on AI Futures, 2022 (2nd place). https://worldbuild.ai/core-central/ 
  • Táíwò, Olúfẹ́mi & Cibralic, Beba (co first authors), 2022. “If the west can harbour Ukrainians, it can accept the many climate refugees to come.” Guardian, https://www.theguardian.com/commentisfree/2022/apr/01/ukraine-war-west-immigration-climate-refugees
  • Cibralic, Beba, 2021. “Astrology: Informative, Harmful, or Just Plain Fun?” Aesthetics for Birds, https://aestheticsforbirds.com/2021/08/19/astrology-informative-harmful-or-just-plain-fun/
  • Cibralic, Beba, 2021. “Epistemic Norms and Failures of Reporting.” Social Epistemology Review and Reply Collective 10 (5): 1-5. https://wp.me/p1Bfg0-5NTi. 
  • Cibralic, Beba, 2020. “‘Caliphate’ and the Problem of Testimony.” Social Epistemology Review and Reply Collective 9 (12): 33-36. https://wp.me/p1Bfg0-5zi.
  • Táíwò, Olúfẹ́mi & Cibralic, Beba (co first authors), 2020. “The Case for Climate Reparations.” Foreign Policy, https://foreignpolicy.com/2020/10/10/case-for-climate-reparations-crisis-migration-refugees-inequality/
  • Cibralic, Beba & Lang, JJ, 2020. “Why do we get sad when robots die?” The Outline, https://theoutline.com/post/8514/why-am-i-sad-robot-died



