Biological Anchors: A Trick That Might Or Might Not Work

AI Safety Fundamentals: Alignment - Podcast by BlueDot Impact

I've been trying to review and summarize Eliezer Yudkowsky's recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we're up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra is talking about and what's going on. The Open Philanthropy Project ("Open Phil") is a big effective altruist foundation interested in funding AI safety. It's got $20 billion, probably the majority of money in the field, so its deci...