After Orthogonality: Virtue-Ethical Agency and AI Alignment

Preface

This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices [1]: networks of actions, action-dispositions, action-evaluation criteria, …
References
This article was originally published at The Gradient. For the full piece, read the original article.