Live: Eliezer Yudkowsky - Is Artificial General Intelligence too Dangerous to Build?

Genre: Science & Technology

License: Creative Commons Attribution license (reuse allowed)

Family friendly? Yes

Shared April 19, 2023

Live from the Center for Future Mind and the Gruber Sandbox at Florida Atlantic University, join us for an interactive Q&A with Yudkowsky about AI safety! Eliezer Yudkowsky discusses his rationale for ceasing the development of AIs more sophisticated than GPT-4. Dr. Mark Bailey of the National Intelligence University will moderate the discussion.

An open letter published on March 22, 2023 calls for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." In response, Yudkowsky argues that this proposal does not do enough to protect us from the risks of losing control of superintelligent AI.

Eliezer Yudkowsky is a decision theorist from the U.S. and leads research at the Machine Intelligence Research Institute. He has been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field of alignment.

Dr. Mark Bailey is the Chair of the Cyber Intelligence and Data Science Department, as well as the Co-Director of the Data Science Intelligence Center, at the National Intelligence University.