The Monte Carlo method approximates expectations by empirical averages of independent samples, which is equivalent to approximating integrals, potentially in very high dimension. It is used in physics to simulate systems with a large number of degrees of freedom. Its history goes back to Buffon's needle in the 18th century.
It consists in computing the expectation of a random variable as an empirical average of independent, identically distributed samples. Convergence is guaranteed by the strong law of large numbers, and by the central limit theorem the rescaled error converges to a normal distribution. In high dimension, this makes it possible to approximate integrals with an error of order 1/√n, independent of the dimension, which is much faster than a Riemann sum on a regular grid. However, Monte Carlo estimation with a uniform measure can be inefficient for probability distributions that concentrate on manifolds of lower dimension than the ambient space. One must then build a model that concentrates the sampling on these manifolds.
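The dimension-independent 1/√n rate can be illustrated with a minimal sketch, assuming an example integrand of our own choosing (not from the source): f(x) = ‖x‖² averaged over the unit cube [0,1]^d, whose exact expectation is d/3.

```python
import numpy as np

def monte_carlo_mean(f, d, n, rng):
    """Estimate E[f(X)] for X uniform on [0,1]^d with n i.i.d. samples."""
    x = rng.random((n, d))   # n independent uniform points in [0,1]^d
    return f(x).mean()       # empirical average approximating the expectation

# Example integrand (an assumption for illustration): f(x) = sum_i x_i^2,
# whose exact mean over the unit cube is d/3.
f = lambda x: (x ** 2).sum(axis=1)

rng = np.random.default_rng(0)
d, n = 10, 100_000
est = monte_carlo_mean(f, d, n, rng)
exact = d / 3
print(est, exact)  # the error shrinks like 1/sqrt(n), regardless of d
```

By the central limit theorem, the standard error of the estimate is σ/√n with σ the standard deviation of f(X), so doubling the precision requires four times as many samples, but the cost does not grow with d, unlike a regular grid whose number of nodes is exponential in the dimension.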