Amphithéâtre Marguerite de Navarre, Site Marcelin Berthelot
Open to all

In the second lesson, we set out to answer some fundamental questions: what is the optimal information that can be obtained from measurements made on a single copy of a quantum system? How can this information be quantified? More generally, what can we say about a system if we have a finite number N of copies? How does information about the state of the system increase with N? This problem is reminiscent of the estimation of a random variable in classical probability theory. The aim is to infer, from the measurement results, the values of the parameters that define the state.

We began the lesson with a reminder of classical information theory, introducing the Fisher information, the Cramér-Rao bound and the maximum likelihood estimation method. In its simplest form, classical estimation theory associates with every measurement result x of a random variable X, obeying a probability distribution p(x|θ) that depends on an a priori unknown parameter θ, an estimator θ̂(x). The variance of θ̂(x) over a large number of measurements quantifies the precision of the estimate. If the mean of θ̂(x) over an infinite number of measurements coincides with the true value of the parameter θ, the estimator is said to be "unbiased". The variance of an unbiased estimator is bounded from below (Cramér-Rao bound) by the quantity 1/I(θ), where I(θ), called the Fisher information, is equal to the mean value of the square of the derivative with respect to θ of the log-likelihood function ln p(x|θ). The greater the Fisher information, the smaller the lower bound on the variance of the estimate; in other words, the more information the statistical law potentially contains for estimating θ. An estimate is optimal if its variance reaches the Cramér-Rao bound.

From these properties we showed the additivity of the Fisher information associated with independent measurements made on a set of N identical systems, which immediately leads to the well-known 1/√N scaling of the standard deviation of the optimal estimate of the parameter θ when N measurements of the random variable are made. We then introduced the estimator θ̂(x) based on the maximum likelihood principle, which corresponds to the value of θ for which the derivative of the likelihood function with respect to θ vanishes, and showed that this estimator becomes optimal in the limit of an infinite number of measurements. We concluded these reminders with a simple example, that of a binomial statistic (a coin toss), showing that the well-known properties of this statistic can be recovered directly from an analysis based on Fisher information.
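
To make the summary self-contained, the quantities mentioned above can be written explicitly (a brief recap in LaTeX notation, where θ̂ denotes the estimator):

\[
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial \ln p(x\mid\theta)}{\partial\theta}\right)^{2}\right],
\qquad
\operatorname{Var}(\hat\theta) \;\ge\; \frac{1}{I(\theta)} \quad\text{(Cramér-Rao bound)},
\]

and, for N independent measurements, the additivity of the Fisher information gives

\[
I_N(\theta) = N\, I(\theta)
\quad\Longrightarrow\quad
\Delta\theta \;\ge\; \frac{1}{\sqrt{N\, I(\theta)}} \;\propto\; \frac{1}{\sqrt{N}}.
\]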

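As an illustration of the coin-toss example (a minimal numerical sketch, not part of the original lesson material; the parameter values are arbitrary), the following Python fragment simulates repeated experiments of N tosses, applies the maximum likelihood estimator θ̂ = k/N, and compares the empirical standard deviation of the estimate with the Cramér-Rao prediction 1/√(N I(θ)), using the Bernoulli Fisher information I(θ) = 1/[θ(1−θ)].

import numpy as np

rng = np.random.default_rng(0)

theta = 0.3      # true (a priori unknown) probability of heads
N = 1000         # number of tosses per experiment
trials = 5000    # number of repeated experiments

# Each experiment yields k heads out of N Bernoulli(theta) tosses;
# the maximum likelihood estimator is the empirical frequency k/N.
k = rng.binomial(N, theta, size=trials)
theta_hat = k / N

# Fisher information per toss for a Bernoulli variable: I(theta) = 1 / (theta (1 - theta))
fisher_per_toss = 1.0 / (theta * (1.0 - theta))

# Cramér-Rao lower bound on the standard deviation for N independent tosses
cr_bound = 1.0 / np.sqrt(N * fisher_per_toss)

print(f"empirical std of theta_hat : {theta_hat.std(ddof=1):.5f}")
print(f"Cramér-Rao lower bound     : {cr_bound:.5f}")

The two printed numbers essentially coincide, illustrating that for the binomial statistic the maximum likelihood estimator saturates the Cramér-Rao bound and that its standard deviation decreases as 1/√N.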