People

Benjamin Harack

Affiliation
International Relations Network
College
St John's College
Course
DPhil International Relations
Supervisor

Ben studies the potential for artificial intelligence (AI) to trigger a world war and how to prevent that from happening.

In particular, he studies AI-triggered military power shifts, the causes of war, and the international governance of AI (e.g., institutions to enable a global AI market), with a particular focus on verification. He has also written about how a multinational AI collaboration can benefit democracies and the world.

His prior specializations include semiconductor physics and full-stack software engineering.

As a social scientist, his strongest methodological areas are formal theory, quantitative analysis, and process tracing. Ben also tends to draw heavily on his background in the natural and formal sciences, with a particular emphasis on machine learning, semiconductors, cryptography, and nuclear science. Previously, he spent a decade working for Silicon Valley startups as a software engineer and manager.

He is a DPhil Affiliate at the Oxford Martin AI Governance Initiative, a research group examining the risks of AI and how those risks can be addressed through governance. 

Areas of expertise

  • Artificial intelligence

  • Semiconductors

  • Cryptography

  • Nuclear science

  • Formal theory

  • Quantitative analysis

Publications

Abecassis, A., Barry, J., Bello, I., Bengio, Y., Bergeaud, A., Bonnet, Y., Hacker, P., Harack, B., Hatz, S., Henkel, J., Hoos, H.H., Kitamura, K., Lall, R., Lechelle, Y., de Leusse, C., Martinet, C., Miailhe, N., Morse, J.C., Negele, M., Park, K.R., Pluckebaum, M., Popa-Fabre, M., Prud’homme, B., Ralle, Y., Robinson, M., Segerie, C.-R., Torreblanca, J.-I., Velasco, L., VijayRaghavan, K., 2025. A Blueprint for Multinational Advanced AI Development. Oxford Martin AI Governance Initiative.

Harack, B., Trager, R.F., Reuel, A., Manheim, D., Brundage, M., Aarne, O., Scher, A., Pan, Y., Xiao, J., Loke, K., Adan, S.N., Bas, G., Caputo, N.A., Morse, J.C., Ahuja, J., Duan, I., Egan, J., Bucknall, B., Rosen, B., Araujo, R., Boulanin, V., Lall, R., Barez, F., Alvira, S., Katzke, C., Atamli, A., Awad, A., 2025. Verification for International AI Governance. Oxford Martin AI Governance Initiative.

Bucknall, B., Siddiqui, S., Thurnherr, L., McGurk, C., Harack, B., Reuel, A., Paskov, P., Mahoney, C., Mindermann, S., Singer, S. and Hiremath, V., 2025, June. In Which Areas of Technical AI Safety Could Geopolitical Rivals Cooperate? In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (pp. 3148-3161).

Blomquist, K., Siegel, E., Harack, B., Ng, K.Y., David, T., Tse, B., Martinet, C., Sheehan, M., Singer, S., Bello, I., Yusuf, Z., Trager, R., Salem, F., Ó hÉigeartaigh, S., Zhao, J., Jia, K., 2025. Examining AI Safety as a Global Public Good: Implications, Challenges, and Research Priorities. Oxford Martin AI Governance Initiative.

Velasco, L., Martinet, C., de Zoete, H., Trager, R., Snidal, D., Garfinkel, B., Ng, K.Y., Belfield, H., Wallace, D., Bengio, Y., Prud’homme, B., Tse, B., Radu, R., Lall, R., Harack, B., Morse, J., Miailhe, N., Singer, S., Sheehan, M., Stauffer, M., Zeng, Y., Barnhart, J., Bello, I., Lan, X., Guest, O., Cass-Beggs, D., Chuanying, L., Adan, S.N., Anderljung, M., Dennis, C., 2025. The Future of the AI Summit Series. Oxford Martin AI Governance Initiative.

Trager, R., Harack, B., Reuel, A., Carnegie, A., Heim, L., Ho, L., Kreps, S., Lall, R., Larter, O., Ó hÉigeartaigh, S. and Staffell, S., 2023. International Governance of Civilian AI: A Jurisdictional Certification Approach. Oxford Martin AI Governance Initiative.