Imagine the taste of an overripe banana. How did your mind store this data?

In the past we’ve spoken a lot about artificial minds. But human minds are fascinating in themselves, too!

In science and technology, people build AI to perform human-mind functions (e.g. processing natural language) and non-human-mind functions (e.g. detecting complex trends in high volumes of data). But how do human minds perform these tasks? If we want to build the equivalent of a complete human mind, we may need to know how our mental functions work before they can be replicated artificially.

David Hume had an interesting theory. His aim was to demystify the mind and bring it into the remit of science without appealing to rationalism.

Rationalists use tools such as intuition and deduction (e.g. through logic) to teach us about the external world. But these tools cannot give us substantive knowledge of it, Hume said; hence his famous rallying call against metaphysics in An Enquiry Concerning Human Understanding (1748):

'If we take in our hand any volume—of divinity or school metaphysics, for instance—let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion.'

Part of the reason for this position was that he did not trust human reasoning to deduce real knowledge about the world.

'The intense view of these manifold contradictions and imperfections in human reason has so wrought upon me, and heated my brain, that I am ready to reject all belief and reasoning, and can look upon no opinion even as more probable or likely than another.'

Instead, we must gain understanding empirically (i.e. from experience, not from reason).

So how does the mind fit into this picture? How do we acquire knowledge? Hume writes that the mind holds perceptions from experience: impressions (e.g. the taste of a banana) and ideas, which are their copies and combinations (e.g. thoughts, beliefs, and memories about bananas). The latter require fuller mental representation (e.g. imagining an overripe banana, or abstracting to the thought of food); the former can be simple sensations.

Newton’s scientific model offers an analogy here: our perceptions are the particles, and the ways we mentally relate them are the forces. In tune with this analogy, we construct ideas from impressions through the forces of resemblance (e.g. the imagined unicorn derives from impressions of horses and horns), contiguity (e.g. we associate ideas that occur close together in space or time), and cause and effect (e.g. we learn which kinds of events regularly precede which others).
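For readers who think in code, Hume’s picture can be caricatured as a tiny data model. The sketch below is purely illustrative and not anything Hume proposed or a real cognitive architecture; the class names and the `combine` helper are inventions for this post. Impressions are raw perceptions; ideas are copies or combinations of impressions, tagged with whichever of the three associative forces formed them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Impression:
    """A raw perception from experience, e.g. 'taste of overripe banana'."""
    content: str

@dataclass(frozen=True)
class Idea:
    """A copy or combination of impressions, plus the relation that formed it."""
    content: str
    sources: tuple   # the impressions it was copied/combined from
    relation: str    # one of Hume's three associative forces

def combine(impressions, content, relation):
    """Form a complex idea from impressions via one associative force."""
    allowed = {"resemblance", "contiguity", "cause_and_effect"}
    if relation not in allowed:
        raise ValueError(f"unknown relation: {relation}")
    return Idea(content, tuple(impressions), relation)

# A unicorn is no impression at all: it is an idea assembled by
# resemblance out of impressions we really have had.
horse = Impression("shape of a horse")
horn = Impression("shape of a horn")
unicorn = combine([horse, horn], "a unicorn", "resemblance")

print(unicorn.relation)      # resemblance
print(len(unicorn.sources))  # 2
```

The point of the toy model is only that every `Idea` must trace back, through `sources`, to some `Impression`: nothing enters the mind that did not first arrive through experience.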

A final thought: Hume denies that we can know causal connections outside the mind to be necessarily real; our beliefs about them rest on habit and probability. We can therefore say we enjoy free will (liberty) in a law-governed world (necessity), because that necessity does not undermine, inside our minds, the power to act or not to act. Great! Can we, then, construct an artificial mind that experiences the world with liberty, in a way that’s not caused by the code we programmed into it?