Perceiving the outside world is not enough to act optimally. How do we move from perception to decision? The problem is set out simply in a recent review (Maloney & Zhang, 2010). Each state of the world (w) translates, after Bayesian inference, into a distribution of inferred sensory states (x). The decision problem consists in choosing an action a = d(x) as a function of the sensory state x, where d is the decision rule. Actions have positive or negative consequences that depend on the actual state of the world, according to a gain (or cost) function G(a, w). A rational strategy consists in choosing the action that maximizes the expected gain E[G(a, w)].
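This decision rule can be sketched in a few lines. The sketch below assumes a discrete toy setting: a posterior over two world states (standing in for the Bayesian inference step) and a made-up gain matrix G(a, w) with three candidate actions; the numbers are illustrative, not from the cited work.

```python
import numpy as np

# Assumed posterior P(w | x) over two world states, given some sensory state x.
posterior = np.array([0.7, 0.3])

# Hypothetical gain matrix G[a, w]: rows are actions, columns are world states.
G = np.array([[ 1.0, -1.0],   # action 0 pays off only in state 0
              [-1.0,  1.0],   # action 1 pays off only in state 1
              [ 0.0,  0.0]])  # action 2 is a safe "abstain" option

# Expected gain E[G(a, w)] for each action, and the rational choice d(x).
expected_gain = G @ posterior
best_action = int(np.argmax(expected_gain))
# With this posterior, action 0 wins: 0.7*(+1) + 0.3*(-1) = 0.4
```

The same code covers the signal-detection case by restricting G to the first two rows: responding "w" or "~w" with a symmetric ±1 payoff.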
From this very general model, we can deduce, for example, the whole of signal-detection theory. The states of the world are binary (w or ~w) and the gain function is very simple: +1 if the response matches the state of the world, -1 otherwise. However, the theory can be used to model much more complex situations. For example, an interesting task involves asking the subject to point a finger very quickly at a target (a green disk), while imposing a penalty if the finger touches a red circle (Trommershauser, Maloney & Landy, 2008). The results indicate that such motor decisions, despite their speed and automaticity, approach optimality. Subjects adequately take into account the uncertainty associated with their own movements, the distribution of stimuli, and the resulting costs. They also manage to arbitrate between the time devoted to decision-making and the time devoted to movement, always optimizing the cost function imposed by the experimenter (Battaglia & Schrater, 2007). Finally, they take into account, in a near-optimal way, the more or less noisy intermediate sensory feedback they receive about the trajectory of their ongoing movement (Kording & Wolpert, 2004).
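The logic of the pointing experiment can be illustrated with a minimal one-dimensional sketch. All numbers here are assumptions for illustration (the actual experiments were two-dimensional): a reward region standing in for the green target, an adjacent penalty region for the red circle, and Gaussian motor noise on the endpoint. The optimal strategy is to aim not at the target's center but slightly shifted away from the penalty zone, exactly the bias Trommershauser and colleagues observed in subjects.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_in(lo, hi, mu, sigma):
    """Probability that a Gaussian endpoint (mean mu, sd sigma) lands in [lo, hi]."""
    return phi((hi - mu) / sigma) - phi((lo - mu) / sigma)

def expected_gain(mu, sigma=0.2):
    # Assumed geometry: target [0, 1] rewards +100; penalty zone [-0.5, 0] costs 500.
    return 100 * p_in(0.0, 1.0, mu, sigma) - 500 * p_in(-0.5, 0.0, mu, sigma)

# Grid search over aim points: the maximum lies right of the target center (0.5),
# shifted away from the penalty region.
aims = [i / 1000 for i in range(-500, 1501)]
best_aim = max(aims, key=expected_gain)
```

Increasing sigma (a clumsier movement) pushes the optimal aim point further from the penalty zone, which is the signature of subjects taking their own motor uncertainty into account.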