Future of Life Institute Podcast
A podcast by the Future of Life Institute

230 Episodes
Vincent Boulanin on the Dangers of AI in Nuclear Weapons Systems
Published: 1 December 2022
Robin Hanson on Predicting the Future of Artificial Intelligence
Published: 24 November 2022
Robin Hanson on Grabby Aliens and When Humanity Will Meet Them
Published: 17 November 2022
Ajeya Cotra on Thinking Clearly in a Rapidly Changing World
Published: 10 November 2022
Ajeya Cotra on how Artificial Intelligence Could Cause Catastrophe
Published: 3 November 2022
Ajeya Cotra on Forecasting Transformative Artificial Intelligence
Published: 27 October 2022
Alan Robock on Nuclear Winter, Famine, and Geoengineering
Published: 20 October 2022
Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity
Published: 13 October 2022
Philip Reiner on Nuclear Command, Control, and Communications
Published: 6 October 2022
Daniela and Dario Amodei on Anthropic
Published: 4 March 2022
Anthony Aguirre and Anna Yelizarova on FLI's Worldbuilding Contest
Published: 9 February 2022
David Chalmers on Reality+: Virtual Worlds and the Problems of Philosophy
Published: 26 January 2022
Rohin Shah on the State of AGI Safety Research in 2021
Published: 2 November 2021
Future of Life Institute's $25M Grants Program for Existential Risk Reduction
Published: 18 October 2021
Filippa Lentzos on Global Catastrophic Biological Risks
Published: 1 October 2021
Susan Solomon and Stephen Andersen on Saving the Ozone Layer
Published: 16 September 2021
James Manyika on Global Economic and Technological Trends
Published: 7 September 2021
Michael Klare on the Pentagon's view of Climate Change and the Risks of State Collapse
Published: 30 July 2021
Avi Loeb on UFOs and if they're Alien in Origin
Published: 9 July 2021
Avi Loeb on 'Oumuamua, Aliens, Space Archeology, Great Filters, and Superstructures
Published: 9 July 2021
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.