AI Safety Fundamentals: Alignment
Podcast by BlueDot Impact
Categories:
83 Episodes
Is Power-Seeking AI an Existential Risk?
Published: 13.5.2023 -
Where I Agree and Disagree with Eliezer
Published: 13.5.2023 -
Supervising Strong Learners by Amplifying Weak Experts
Published: 13.5.2023 -
Measuring Progress on Scalable Oversight for Large Language Models
Published: 13.5.2023 -
Least-To-Most Prompting Enables Complex Reasoning in Large Language Models
Published: 13.5.2023 -
Summarizing Books With Human Feedback
Published: 13.5.2023 -
Takeaways From Our Robust Injury Classifier Project [Redwood Research]
Published: 13.5.2023 -
Red Teaming Language Models With Language Models
Published: 13.5.2023 -
High-Stakes Alignment via Adversarial Training [Redwood Research Report]
Published: 13.5.2023 -
AI Safety via Debate
Published: 13.5.2023 -
Robust Feature-Level Adversaries Are Interpretability Tools
Published: 13.5.2023 -
Introduction to Logical Decision Theory for Computer Scientists
Published: 13.5.2023 -
Debate Update: Obfuscated Arguments Problem
Published: 13.5.2023 -
Discovering Latent Knowledge in Language Models Without Supervision
Published: 13.5.2023 -
Feature Visualization
Published: 13.5.2023 -
Toy Models of Superposition
Published: 13.5.2023 -
Understanding Intermediate Layers Using Linear Classifier Probes
Published: 13.5.2023 -
Acquisition of Chess Knowledge in AlphaZero
Published: 13.5.2023 -
Careers in Alignment
Published: 13.5.2023 -
Embedded Agents
Published: 13.5.2023
Listen to resources from the AI Safety Fundamentals: Alignment course! https://aisafetyfundamentals.com/alignment