Just two quick plots.
For maximum simulated likelihood estimation and for some other cases, we need to integrate the likelihood function with respect to a distribution that reflects unobserved heterogeneity. When numerical integration is too difficult, we can instead integrate by simulating draws from the underlying distribution.
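As a minimal sketch of the idea (with a toy integrand standing in for a conditional likelihood, chosen here only because its exact expectation is known in closed form), the integral over the heterogeneity distribution is approximated by a sample average over simulated draws:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for a conditional likelihood as a function of the
# unobserved heterogeneity b; E[exp(-b**2 / 2)] = 1 / sqrt(2) for b ~ N(0, 1)
def f(b):
    return np.exp(-0.5 * b**2)

# simulate the heterogeneity distribution and average the integrand
draws = rng.standard_normal(100_000)
approx = f(draws).mean()
exact = 1 / np.sqrt(2)
```

With 100,000 pseudo-random draws the Monte Carlo error here is already small, but the accuracy improves only at the rate of one over the square root of the number of draws.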
However, using pseudo-random draws from the underlying distribution can be an inefficient way to integrate, and there are several ways of speeding up the integration or of increasing the accuracy for the same computational cost.
One possibility is to use sequences that mimic random draws from the underlying distribution but cover the underlying space more evenly; examples of such low-discrepancy sequences are Sobol and Halton sequences.
To figure out how this works, I converted a function for Halton sequences that was coded in C to Python, and used the inverse transform from the distributions in scipy.stats to generate a few examples.
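The following is a plain reimplementation of the Halton construction, not the C-translated function from the post: each dimension applies the radical-inverse function in a distinct prime base, and the resulting uniform points are then mapped through the inverse cdf (`ppf` in scipy.stats) to get draws from a target distribution.

```python
import numpy as np
from scipy import stats

def halton(dim, n_sample, primes=(2, 3, 5, 7, 11, 13)):
    """Halton sequence of n_sample points in dim dimensions.

    Dimension d uses the radical inverse of the point index in
    base primes[d]; the index starts at 1 to skip the zero point.
    """
    seq = np.empty((n_sample, dim))
    for d in range(dim):
        base = primes[d]
        for i in range(n_sample):
            f, result = 1.0, 0.0
            k = i + 1
            while k > 0:
                f /= base
                result += f * (k % base)
                k //= base
            seq[i, d] = result
    return seq

# quasi-random points on the unit square
u = halton(2, 500)
# inverse-transform to the standard normal distribution
z = stats.norm.ppf(u)
```

The same `ppf` step works for any scipy.stats distribution, which is how the log-normal and t examples below can be generated from the same uniform sequence.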
The first figure contrasts Halton sequences with draws from a pseudo-random number generator, on the left side for the uniform and on the right side for the normal distribution. In both cases the Halton sequence spreads its points more evenly than the pseudo-random numbers: it has fewer gaps and fewer spots with tight bunching of points.
The second figure just shows a few more examples, log-normal and t-distribution, and the normal distribution projected onto the unit circle, which, if I remember correctly, is just the von Mises distribution. The last plot shows the Halton sequence with 5000 points, which covers the unit square in a very regular pattern. All the other plots have 500 points.
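The projection onto the unit circle can be sketched as follows; this is an assumption about the construction used in the plot, and the mean vector here is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# bivariate normal draws with a nonzero mean (arbitrary choice),
# so the projected points are not uniform on the circle
xy = rng.standard_normal((500, 2)) + np.array([1.0, 0.0])

# project each point onto the unit circle by dividing by its norm
on_circle = xy / np.linalg.norm(xy, axis=1, keepdims=True)
```

Each row of `on_circle` lies exactly on the unit circle, so the scatter plot traces out the circle with point density reflecting the directional distribution of the draws.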