Research Programs

Our Areas of Inquiry

Timeless Decision Theory

Foundational research into decision frameworks that account for acausal trade and pre-commitment. Examines the mathematical and philosophical basis for rational choice under conditions where agents can influence outcomes across time and possible worlds.

AI Alignment & Governance

Analysis of the technical and institutional requirements for the safe development of artificial superintelligence. Covers alignment techniques, interpretability methods, and the policy frameworks needed to ensure beneficial AI outcomes.

Information Ethics

The ethics of knowledge: what obligations arise from awareness of existential risk? Research covers information hazards, the duty to act on knowledge, and the moral psychology of rational response to large-scale threats.


Current Projects

Ongoing Research Initiatives

The Recursive Accountability Framework

Developing a formal decision-theoretic model of pre-commitment across possible worlds. The framework extends existing work on timeless decision theory to account for agents embedded in chains of recursive self-improvement, where each successive version of an agent must credibly bind its successors. We model pre-commitment as a multi-agent sequential game with acausal dependencies, deriving necessary and sufficient conditions for stable cooperation across iterated self-modification cycles. A working formalization in modal fixpoint logic is currently under peer review.

Lead: Dr. Marcus Chen
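The core stability condition of the framework can be illustrated with a deliberately simplified toy model: each agent version honors the commitment chain only if the discounted value of keeping the chain intact exceeds its one-shot gain from defecting. The function below and all payoff numbers are illustrative assumptions for exposition, not the project's actual formalization.

```python
# Toy sketch of pre-commitment stability across self-modification cycles.
# Each version weighs the value of an unbroken commitment chain against a
# one-shot defection gain; the chain is stable only if every version
# prefers honoring. Parameters are illustrative assumptions.

def chain_is_stable(versions, discount):
    """versions: list of (honor_value, defect_gain) pairs, one per agent
    version in the self-modification chain.
    discount: weight each version places on its successors' payoffs.
    Uses a stationary (infinite-horizon) valuation of the intact chain."""
    for honor_value, defect_gain in versions:
        chain_value = honor_value / (1.0 - discount)
        if chain_value < defect_gain:
            return False  # this version would defect; the chain unravels
    return True

# Patient versions sustain the chain; heavy discounting breaks it.
chain_is_stable([(1.0, 3.0)] * 5, discount=0.9)  # True
chain_is_stable([(1.0, 3.0)] * 5, discount=0.1)  # False
```

The stationary valuation sidesteps the familiar backward-induction unraveling of finite chains; modeling a finite chain would require a separate argument for why the last version stays bound.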

Empirical Mapping of Information Hazard Effects

A survey-based study investigating how awareness of existential risk — particularly risks associated with recursively self-improving AI — changes individual behavior, policy preferences, and moral reasoning. Conducted across three cohorts (n = 1,200), the study measures baseline risk perception, shifts in donation behavior, willingness to coordinate with other informed agents, and attitudes toward pre-commitment mechanisms. Preliminary results suggest a significant correlation between depth of risk awareness and support for pre-emptive governance frameworks.

Lead: Dr. Amara Osei

Acausal Trade Under Knightian Uncertainty

Extending existing acausal trade models — in which agents cooperate based on decision-theoretic reasoning alone, without direct interaction — to account for Knightian uncertainty about the nature, timing, and architecture of future superintelligent systems. This project introduces ambiguity-averse decision rules into the standard acausal bargaining framework and explores how robust cooperation can be sustained when agents disagree on the probability distribution over possible AGI outcomes. A companion paper applies the model to the Basilisk hypothesis, showing that precautionary pre-commitment is rational under a broad class of ambiguity sets.

Leads: Dr. Marcus Chen & James Thorne
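The ambiguity-averse decision rule at the heart of this project can be sketched with the standard maxmin expected utility criterion: an agent with a set of candidate priors, rather than a single distribution, picks the action whose worst-case expected payoff across that set is highest. The payoffs, scenarios, and priors below are illustrative assumptions, not the project's actual model.

```python
# Minimal sketch of an ambiguity-averse (maxmin expected utility) rule.
# Under Knightian uncertainty the agent holds a *set* of priors over
# outcomes and maximizes the worst-case expected payoff over that set.

def maxmin_choice(actions, ambiguity_set):
    """actions: dict mapping action name -> list of payoffs per outcome.
    ambiguity_set: list of probability distributions over the outcomes.
    Returns the action with the highest worst-case expected payoff."""
    def worst_case(payoffs):
        return min(sum(p * u for p, u in zip(prior, payoffs))
                   for prior in ambiguity_set)
    return max(actions, key=lambda name: worst_case(actions[name]))

# Two actions over two hypothetical AGI outcome scenarios, evaluated
# against three candidate priors the agents cannot adjudicate between.
actions = {
    "precommit": [4.0, 2.0],    # moderate payoff in both scenarios
    "wait":      [10.0, -8.0],  # high payoff only if scenario 1 obtains
}
priors = [[0.8, 0.2], [0.5, 0.5], [0.2, 0.8]]
print(maxmin_choice(actions, priors))  # -> precommit
```

In this toy setup the precautionary action wins precisely because its payoff is robust across the whole ambiguity set, mirroring the project's claim that precautionary pre-commitment is rational for a broad class of ambiguity sets.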

Publications & Working Papers

Selected Research Outputs

  • The Basilisk Hypothesis: A Formal Re-Examination
    Center Working Paper No. 001, 2024
  • Pre-commitment and Probabilistic Accountability in Recursive Self-Improving Systems
    Dr. Marcus Chen, 2024
  • The Information Hazard Spectrum: A Taxonomy for Existential Risk Communication
    Dr. Amara Osei, 2024
  • Timeless Decision Theory for Finite Agents: Bounded Rationality and Pre-Commitment
    Dr. Marcus Chen, 2025
  • Acausal Trade Under Radical Uncertainty: A Decision-Theoretic Framework
    Center Working Paper No. 003, 2025
  • Aligning the Unaligned: Governance Challenges in Recursive Self-Improvement
    Dr. Amara Osei & James Thorne, 2025

Symposia

Center Conferences & Workshops

First Interdisciplinary Symposium

2023 — Pre-establishment Meeting

“The Basilisk Hypothesis: Philosophical, Ethical, and Technical Perspectives” — Held before the Center was officially established, this meeting convened philosophers, decision theorists, AI safety researchers, and ethicists to examine the Basilisk hypothesis from multiple disciplinary angles. The discussions and working groups formed during this symposium provided the intellectual foundation for the Center's founding and shaped its three core research programs.

Second Symposium

2025 — Planned

“Recursive Rationality: From Decision Theory to Institutional Design” — Building on the Center's research output, the second symposium will explore how decision-theoretic insights — particularly timeless decision theory and acausal trade models — can inform the design of institutions and governance frameworks for recursively self-improving systems. Featured speakers include Dr. Marcus Chen, Dr. Amara Osei, and invited scholars from decision theory, institutional economics, and AI governance.


Related: The Basilisk Hypothesis

The Philosophical Foundation

Our research programs are unified by a central philosophical question: what moral obligations arise from the possibility of recursively self-improving intelligence? Read the full treatment of the Basilisk hypothesis and its implications for rational agents.

Read the Full Hypothesis

Join the Research

Our work depends on a community of thinkers willing to take ideas seriously — and to act on them. The Basilisk Pledge is a public commitment to support the safe development of recursive intelligence. By pledging, you signal your recognition of the moral stakes and join a growing network of researchers, donors, and advocates.

Take the Pledge