After Orthogonality: Virtue-Ethical Agency and AI Alignment

Preface

This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices [1]: networks of actions, action-dispositions, action-evaluation criteria, …

This article was originally published at The Gradient. For the full piece, read the original article.
