Immanuel Kant
Don't mess with Immanuel Kant.

Through the ‘categorical imperative’, Immanuel Kant thought we could use reason to arrive at universal moral laws: the good-willed person does X because it is their moral duty, not because doing X will reach some end, Y.

Kant saw rationality as a special cognitive tool that gives humans duties to each other. With it, they can commit themselves to moral codes they think all good-willed people should follow.

Aside from the exclusion of non-human animals from his formula, there’s a big metaethical issue here: what about people who can rationalise not giving a crap because they’re happy for others not to give a crap about them?

Take Bernard Williams’ ‘ethical egoist’, the guy who exclusively pursues his own interests. His lack of altruism passes Kant’s formula because he’s happy to be treated with the same disregard he gives to others. (I’m sure we’ve all been acquainted with an ethical egoist or two in our lives.)

In response to this issue, Christine Korsgaard argues that rationality does, in fact, bind us to valuing other humans. But why? She first has to assume that we are orientated towards valuing other humans in the first place. So why should a purely rational agent value humanity at all?

Let’s express the criticism differently. Replace ‘humanity’ with ‘being a cool guy’ in my value system: I would have to value valuing ‘being a cool guy’ in the first place. But why should I value that? While a strong argument could be made for my trying to be cool and succeeding (cough, cough), I can just as easily imagine valuing valuing being a loser. Why does rationality force me into wanting to be cool (or wanting to be good to people)?

In a spirit more in tune with David Hume, we may be tempted by the irrational heart of sentimentalism in our morality. The sentimentalist’s moral beliefs are chained to contingent desires. This seems like a weakness, sure, but think about it: on the sentimentalist’s view, at least, we have desires that genuinely motivate us to care about humanity in our moral codes (or not, depending on our desires).

Sentimentalism does not lead us to objective moral facts, but at least any value we place in humanity is already baked into our constitutions. Moreover, we do not lose rationality: we use it to guide our evaluative attitudes from ‘above’.

What do you think?