In my last posts I talked about Thomas Nagel's 'moral luck' and the problem it posed for consequentialism: if the morality of an action is dependent on the consequences, doesn't that mean that how moral our actions are is determined to some extent by chance? Let's consider how this works for a moment. We can control the morality of our actions to the exact same extent that we can predict or control the consequences of those actions. If we know exactly what will happen when we take a particular action, then we can act in a way that we know is moral, right? Well, sure, but that's not a situation that occurs in real life.
Most of us can make a pretty good guess at the short-term consequences of our actions. If we are very clever, we can sometimes predict the consequences of our actions over a period of a few years. Nobody, however, can accurately predict what's going to happen in a few hundred years (outside of some very specific scientific claims), and the future just keeps on going. Consider the total consequences of an action, which are infinite or near-infinite, and consider the percentage of those consequences that we can predict. It's a vanishingly small percentage, which means that we face near-total uncertainty about how moral any action we take is.
Applying moral luck to the infinite future means that consequentialism suddenly becomes much, much less practical. A simple corollary: not only can we not control the morality of our own actions, we can't judge the morality of other people's actions either. How can a utilitarian say that Stalin's actions were wrong when the full consequences aren't yet clear? You might pick an arbitrary period - say, fifty years - over which to consider the results of Stalin's actions, but I can see no way of justifying why one might pick fifty years rather than fifty thousand.
(I should mention that I'm talking exclusively about 'resultant moral luck'. Nagel identifies three other kinds of moral luck, but they're less important for this argument.)