Saturday, 10 December 2011

Moral Luck and the Infinite Future

In my previous posts I talked about Thomas Nagel's 'moral luck' and the problem it poses for consequentialism: if the morality of an action depends on its consequences, doesn't that mean that how moral our actions are is determined to some extent by chance?  Let's consider how this works for a moment.  We can control the morality of our actions only to the extent that we can predict or control the consequences of those actions.  If we know exactly what will happen when we take a particular action, then we can act in a way that we know is moral, right?  Well, sure, but that's not a situation that occurs in real life.

Most of us can make a pretty good guess at the short-term consequences of our actions. If we are very clever, we can sometimes predict the consequences of our actions over a period of a few years.  Nobody, however, can accurately predict what's going to happen in a few hundred years (outside of some very specific scientific claims), and the future just keeps on going.  Consider the total consequences of an action, which are infinite or near-infinite, and consider the percentage of those consequences that we can predict.  It's an infinitesimally small percentage, which means that we have near-total uncertainty as to how moral any action we take is.

Applying moral luck to the infinite future means that consequentialism suddenly becomes much, much less practical.  A simple corollary: not only can we not control the morality of our own actions, we can't judge the morality of other people's actions either.  How can a utilitarian say that Stalin's actions were wrong when the full consequences aren't yet clear?  You might pick an arbitrary period - say, fifty years - over which to consider the results of Stalin's actions, but I can see no way of justifying why one might pick fifty years rather than fifty thousand.


(I should mention I'm talking exclusively about 'resultant moral luck'.  Nagel identifies three other kinds of moral luck that are less important.)


  1. This is a bit of a strawman. Most sane utilitarians are fairly clear that a person should act with foreseeable, likely consequences in mind and do not assign moral culpability for unforeseeable consequences. So that's that for resultant moral luck.

    As for constitutive and circumstantial moral luck, don't these problems equally apply to other ethical theories as well?

  2. Sure, most utilitarians think people should act on the foreseeable consequences. But why? From a utilitarian perspective, an act where the foreseeable consequences are morally beneficial is no more likely to be moral, in the long term, than an act where the foreseeable consequences are morally harmful. I think I've put forth at least the outline of a case for this in my post.

    Constitutive and circumstantial moral luck may well apply equally to other ethical theories; I'm not very interested here in those varieties of moral luck.