BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Talks.cam//talks.cam.ac.uk//
X-WR-CALNAME:Talks.cam
BEGIN:VEVENT
SUMMARY:Artificial General Intelligence Control & Non-Proliferation Treaty
 : A Blueprint for the Global Governance of Advanced Machine Intellect - D.
  A. Floudas (Hughes Hall\, University of Cambridge)
DTSTART:20240813T160000Z
DTEND:20240813T180000Z
UID:TALK220852@talks.cam.ac.uk
CONTACT:Demetrius Floudas
DESCRIPTION:_Overview:_\n\nWe propose the following legal framework: machin
 e intellect agents of a significantly higher capability than current model
 s should be treated similarly to Weapons of Mass Destruction. A new intern
 ational agency (along the lines of the IAEA) must be invested with inspec
 tion powers and a UN Security Council-backed mandate to guarantee safe go
 vernan
 ce and\ncurtail infringements.\n\n\n_Abstract:_\n\nThe urgent need for an 
 AI control and non-proliferation treaty\, along with an international agen
 cy to enforce it\, is not merely a matter of prudent governance—it is an
  imperative for human survival. The rapid\, unpredictable\, and dual-use n
 ature of AI\, coupled with the global dynamics of its development\, presen
 ts a unique challenge that our existing international frameworks are ill-e
 quipped to handle. The catastrophic potential of AI misuse\, and its abili
 ty to eventually acquire agency and escape human control\, poses an existe
 ntial risk for mankind that we cannot afford to ignore or underestimate.\n
 \nThe international community's current efforts\, while well-intentioned\,
  fall woefully short of addressing the magnitude of this challenge. The AI
  Convention\, set to be signed in September 2024 by 57 countries including
  major players like the EU\, USA\, and Britain\, has been diluted into a s
 et of general principles that lack real teeth. Similarly\, the EU AI Act\,
  despite its laudable intentions\, fails to adequately address the rapidly
  evolving hazards posed by advanced AI systems. These initiatives\, focuse
 d primarily on regulating everyday AI applications\, demonstrate a danger
 ous lack of foresight regarding the truly catastrophic risks on the h
 orizon.\n\nGiven the unparalleled perils\, the world must implement unprec
 edented mitigation measures. Every day that passes without comprehensive g
 lobal controls increases the risk of an eventual catastrophic event. The w
 indow for effective action is closing\, and gradual\, incremental measure
 s are a luxury we can no longer afford.\n\n\nThe proposal is clear and un
 equivocal: non-biological br
 ains of significantly higher capability than current models must be treate
 d similarly to Weapons of Mass Destruction. This necessitates a global AI 
 Control & Non-Proliferation Treaty that would prohibit any further develop
 ment of advanced AI systems on a for-profit basis and place AI control un
 der an international agency with sweeping powers.\n\nThis agency\, model
 led on the International Atomic Energy Agency (IAEA)\, would be investe
 d with unlimited inspection powers over any potentially relevant facilit
 ies world
 wide. Crucially\, it would have a UN Security Council-backed mandate to cu
 rtail infringements\, including the authorisation to use military force ag
 ainst violators. Such a regime would effectively remove commercial firms\,
  criminals\, and private entities from the equation of advanced AI develop
 ment.\n\nThe IAEA's approach to nuclear non-proliferation and safety offer
 s a blueprint for this new AI governance body. Its rigorous safeguards sys
 tem\, which has consistently verified states' compliance w
 ith the Non-Proliferation Treaty\, could be adapted and enhanced for AI ov
 ersight.\n\nThis proposal will face fierce resistance from tech compani
 es currently at the forefront of AI development. These entities\, havin
 g invested billions in research and development\, would likely view this
  move as anathema to their business models and future prospects. Howeve
 r\,
  the potential pushback from the tech industry pales in comparison to the 
 risks of failing to implement such a system. Without robust global control
 s\, the planet faces a future where AI development becomes an uncontrolled
  arms race\, with nations and corporations competing to create ever more p
 owerful systems without adequate safety precautions. \n\nThis is not a cal
 l for the cessation of AI research and development\, but rather a proposal
  to create a system for its careful\, controlled\, and deliberate advancem
 ent under strict international oversight. The proposed AI Control & Non-
 Proliferation Treaty and its enforcing agency may represent a fighting cha
 nce to harness the immense potential of AI while safeguarding against its 
 existential risks.  \n\n\n________________________________________\n\n\n\n
 *About the speaker:*\n\n_Demetrius A. Floudas is a transnational lawyer\, 
 a legal adviser specializing in tech and an AI regulatory & policy theoris
 t. With extensive experience\, he has counseled governments\, corporations
 \, and start-ups on regulatory aspects of policy and technology. He serves
  as an Adjunct Professor at the Law Faculty of Immanuel Kant Baltic Federa
 l University\, where he lectures on Artificial Intelligence Regulation. Ad
 ditionally\, he is a Fellow of the Hellenic Institute of International & F
 oreign Law and a Senior Adviser at the Cambridge Existential Risks Initiat
 ive. Floudas has contributed policy commentary and analysis on Foreign Af
 fairs & International Relations to numerous international think-tanks an
 d organizations\, and his insights frequently appear in the media world
 wide ("BBC":https://youtu.be/NpPXnOLz87E TV & "Radio":https://www.faceb
 ook.com/HughesHallCambridge/posts/112566695522194\, Voice of America\,
  "Financial Times":https://www.ft.com/content/368cbc3a-1ebf-3580-b1a3-5
 a2903a4b9c0/\, Daily Telegraph\, Washington Post\, Politico and others)
 ._\n\n_He is currently involved in the European AI Offi
 ce’s Plenary drafting the Code of Practice for General-Purpose Artificia
 l Intelligence and a member of the EU AI Working Group for AI Systemic Ri
 sk
 s. He also participates in the Department for Science\, Innovation & Techn
 ology Focus Group on an independent UK AI Safety Office and is a Reviewer 
 of the Draft UNESCO Guidelines for the Use of AI Systems in Courts and Tri
 bunals._\n\n\n\n\n*The lecture will be followed by refreshments*\n\n*This 
 talk is open to all members of the University\, upon prior registration:*\
 n\n
LOCATION:Stephen Hawking Building\, West Road
END:VEVENT
END:VCALENDAR
