Computational anatomy is an emerging discipline at the interface of geometry, statistics, image analysis and medicine, whose aim is to model the biological variability of organs. We are interested, for example, in the mean shape of an organ and its variations within a population, so as to describe and quantify normal and pathological variations or evolutions. This lecture focuses on the statistical dimension of computational anatomy, and describes the geometric foundations that have enabled significant algorithmic advances in recent years.
To analyze shape variability, we generally identify geometric primitives that describe the anatomy locally (curves, surfaces, deformations) and attempt to model their statistical distribution in the population. A major difficulty is that these geometric objects generally belong to non-linear spaces, whereas statistics has essentially been developed within a linear, vector-space framework. For example, adding or subtracting two curves does not really make sense, so we cannot easily speak of their average. We therefore need to redefine the mathematical framework in which we develop our algorithms.
However, the spaces in which shapes live are often locally Euclidean, and an infinitesimal measure of distance (a metric) can be used to give them the structure of a Riemannian manifold. This makes it possible to measure directions, angles, intrinsic distances and shortest paths (geodesics), thus generalizing the geometry of flat space to curved spaces, of which the sphere and the horse saddle are the simplest examples. On this basis, we can redefine consistent statistical notions. For example, the Fréchet mean is the set of points minimizing the sum of the squared distances to the observations. Once the mean has been computed, we can then unfold the manifold linearly onto its tangent space at this point and return to classical statistical notions for higher-order moments. In a way, we have corrected for the non-linearity of our space.
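To make this concrete, here is a minimal sketch of a Fréchet mean computation on the simplest curved space mentioned above, the unit sphere. It uses the standard Riemannian exponential and logarithm maps of the sphere and a fixed-point gradient descent: at each step, we map the observations to the tangent space at the current estimate, average them there, and shoot back along the geodesic. The function names and the NumPy implementation are illustrative choices, not a reference implementation from the lecture.

```python
import numpy as np

def log_map(p, q):
    # Riemannian logarithm on the unit sphere:
    # the tangent vector at p whose geodesic reaches q.
    cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_t)          # geodesic (arc-length) distance
    if theta < 1e-12:
        return np.zeros_like(p)
    u = q - cos_t * p                 # component of q orthogonal to p
    return theta * u / np.linalg.norm(u)

def exp_map(p, v):
    # Riemannian exponential: follow the geodesic from p
    # in the tangent direction v for length ||v||.
    n = np.linalg.norm(v)
    if n < 1e-12:
        return p.copy()
    return np.cos(n) * p + np.sin(n) * (v / n)

def frechet_mean(points, n_iter=100, tol=1e-12):
    # Gradient descent for the sum of squared geodesic distances:
    # average the observations in the tangent space at the current
    # estimate, then move along the resulting tangent vector.
    p = points[0].copy()
    for _ in range(n_iter):
        v = np.mean([log_map(p, q) for q in points], axis=0)
        if np.linalg.norm(v) < tol:
            break
        p = exp_map(p, v)
    return p
```

For observations spread symmetrically around a pole, the iteration converges to that pole, which is exactly the "unfolding onto the tangent space" described above: once at the mean, the tangent-space coordinates of the data can be fed to ordinary linear statistics (covariance, PCA, and so on).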