# Import plotting library
import matplotlib.pyplot as plt
# Import numerical Python library
import numpy as np
# Import normal distribution from SciPy
from scipy.stats import norm
# Set interactive plots for Jupyter
%matplotlib widget
# Set default figure size
plt.rcParams["figure.figsize"] = (8, 4)

Bias and consistency are two core mathematical concepts in estimation theory. In this notebook, we will demonstrate these concepts using Monte Carlo simulations.[1]
Problem Formulation
Suppose we are given a coin that is possibly biased, i.e., a coin toss has an unknown probability $p$ of coming up head. Our task is to estimate $p$ based on a sequence of coin tosses. More precisely, define a generic random variable that indicates whether the outcome of a coin flip is head, i.e.,

$$\mathsf{X} := \begin{cases} 1 & \text{if the coin comes up head,} \\ 0 & \text{otherwise.} \end{cases} \tag{1}$$

Our goal is to estimate the expectation of $\mathsf{X}$,

$$p := E[\mathsf{X}] = \Pr(\mathsf{X} = 1), \tag{2}$$

given an i.i.d. sample of $\mathsf{X}$,

$$(\mathsf{X}_1, \mathsf{X}_2, \dots, \mathsf{X}_n). \tag{3}$$
This problem, while seemingly simplistic, provides an excellent ground for understanding the principles of bias and consistency in estimation theory.
A convenient way to obtain the i.i.d. sample is to simulate the process of coin tosses:
# Initialize random number generator with seed
rng = np.random.default_rng(seed=0)
# Generate the probability of head randomly
p = rng.random()
# Set the number of coin tosses we want to simulate
n = 5000
# Use the rng.choice function to simulate n coin tosses,
# with "H" (heads) and "T" (tails) as possible outcomes,
# and the probability of heads and tails given by p and 1-p, respectively.
coin_tosses = rng.choice(["H", "T"], size=n, p=[p, 1 - p])
coin_tosses == "H"

In the above code:
- `p` keeps the value of the unknown probability $p$, which is uniformly randomly picked from the unit interval $[0, 1)$.
- `coin_tosses` is an array of `"H"` and `"T"`, which denote the outcomes "head" and "tail" respectively.
To obtain the i.i.d. sample in (3), simply run

coin_tosses == "H"

which gives an array of 1 (`True`) and 0 (`False`) obtained by element-wise equality comparison with `"H"`.
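Since NumPy treats Boolean values as the integers 1 (`True`) and 0 (`False`) in arithmetic, the comparison can be used directly as the numerical sample. A minimal sketch, assuming `coin_tosses` from above:

# View the Boolean sample as 0/1 integers and inspect the first ten tosses
x = (coin_tosses == "H").astype(int)
print(coin_tosses[:10])
print(x[:10])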
M-Estimation
A natural way to estimate $p$ is the M-estimate (sample average estimate),

$$\hat{\mathsf{p}} := \frac{1}{n} \sum_{i=1}^{n} \mathsf{X}_i,$$

which is the fraction (empirical distribution) of the coin tosses coming up heads.

The estimate can be implemented as follows using the method `mean` of a NumPy array:
(coin_tosses == "H").mean()

Is the estimate good? The following is one desirable property:

An estimator $\hat{\mathsf{p}}$ of $p$ is said to be unbiased iff $E[\hat{\mathsf{p}}] = p$. Is the sample average estimate unbiased?
YOUR ANSWER HERE
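One way to gauge the bias numerically is to repeat the whole $n$-toss experiment many times and average the resulting estimates. A minimal sketch, assuming `rng`, `p`, and `n` from above; the repetition count `k` is an arbitrary choice:

# Repeat the n-toss experiment k times and average the k estimates
k = 1000  # number of independent repetitions (arbitrary choice)
tosses = rng.random((k, n)) < p  # Boolean array: True marks a head
phat_reps = tosses.mean(axis=1)  # one sample average estimate per repetition
print(phat_reps.mean(), p)  # the average of the estimates should be close to p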
An unbiased estimate, while correct in expectation, can be far away from the ground truth, especially for large variance and continuous random variables. It is desirable for an estimate to be nearly correct with high probability:
An estimator $\hat{\mathsf{p}}$ of $p$ from a random sample $(\mathsf{X}_1, \dots, \mathsf{X}_n)$ is said to be consistent iff

$$\lim_{n \to \infty} \Pr(|\hat{\mathsf{p}} - p| \leq \epsilon) = 1 \qquad \text{for all } \epsilon > 0,$$

namely, the estimate converges to the ground truth in probability.[3]
YOUR ANSWER HERE
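Consistency can also be probed directly by Monte Carlo: fix a tolerance and estimate $\Pr(|\hat{\mathsf{p}} - p| \leq \epsilon)$ for a few sample sizes. A minimal sketch, assuming `rng` and `p` from above; `eps`, the sample sizes, and the repetition count are arbitrary choices:

# Estimate Pr(|phat - p| <= eps) for increasing sample sizes
eps = 0.02  # tolerance (arbitrary choice)
for n_tosses in [10, 100, 1000, 10000]:
    tosses = rng.random((1000, n_tosses)) < p  # 1000 repetitions of n_tosses tosses
    phat_reps = tosses.mean(axis=1)
    print(n_tosses, (abs(phat_reps - p) <= eps).mean())  # should approach 1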
To illustrate consistency by Monte Carlo simulation, the following code generates and plots the estimate $\hat{\mathsf{p}}$ against the sample size $n$.
size = 5000
n = np.arange(1, size + 1)
phat = (coin_tosses == "H").cumsum() / n # use first n tosses to estimate
sigma = (p * (1 - p) / n) ** 0.5 # true standard deviations of the estimates
# Create Figure 1, or clear it if it exists
plt.figure(1, clear=True)
# plot the ground truth p
plt.axhline(p, color="red", label=r"$p$")
# fill the region 2 sigma away from p
plt.fill_between(
n, p - 2 * sigma, p + 2 * sigma, color="red", alpha=0.2, label=r"$p\pm 2\sigma$"
)
# plot the estimates phat
plt.plot(n, phat, marker=".", color="blue", linestyle="", markersize=1, label=r"$\hat{\mathsf{p}}$")
# configure the plot
plt.ylim([0, 1])
plt.xlim([0, n.size])
plt.title(r"Plot of ${\hat{p}}$ vs sample size")
plt.xlabel("sample size")
plt.ylabel("probability")
plt.legend()
plt.show()

For a consistent estimator, it is expected that
- The estimates (blue dots) converge to the ground truth $p$ (red line) as the sample size increases.
- The fluctuation of the estimates shrinks at the rate at which the standard deviation $\sigma = \sqrt{p(1-p)/n}$ drops.
In addition, observe that the estimates mostly fall within 2 standard deviations of the ground truth:

The sample average estimate falls within 2 standard deviations of the ground truth over 95% of the time,

$$\Pr(|\hat{\mathsf{p}} - p| \leq 2\sigma) \gtrsim 95\%.$$

The interval $[\hat{\mathsf{p}} - 2\sigma, \hat{\mathsf{p}} + 2\sigma]$ is referred to as the 95%-confidence interval estimate of $p$.
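In practice $p$, and hence $\sigma$, is unknown, so the interval is usually formed by plugging $\hat{\mathsf{p}}$ into the standard deviation formula. A minimal sketch, assuming `coin_tosses` and `p` from above:

# Plug-in 95%-confidence interval from the observed tosses
phat_n = (coin_tosses == "H").mean()
sigma_hat = (phat_n * (1 - phat_n) / coin_tosses.size) ** 0.5  # estimated sigma
print(phat_n - 2 * sigma_hat, phat_n + 2 * sigma_hat)  # interval endpoints
print(p)  # ground truth for comparison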
The above follows from the Central Limit Theorem: as $n$ goes to $\infty$, the normalized error $\frac{\hat{\mathsf{p}} - p}{\sigma}$ converges in distribution to a standard Gaussian.
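The Gaussian approximation can be checked empirically by histogramming the normalized errors over many repetitions and comparing against the standard normal density plotted below. A minimal sketch, assuming `rng`, `p`, and `norm` from above; the sizes are arbitrary choices:

# Histogram of normalized errors vs. the standard normal density
n0, k0 = 1000, 10000  # tosses per run and number of runs (arbitrary choices)
tosses = rng.random((k0, n0)) < p
sigma0 = (p * (1 - p) / n0) ** 0.5  # true standard deviation for n0 tosses
z = (tosses.mean(axis=1) - p) / sigma0  # normalized errors
plt.figure(4, clear=True)
plt.hist(z, bins=50, density=True, alpha=0.5, label="empirical")
x0 = np.linspace(-4, 4, 81)
plt.plot(x0, norm.pdf(x0), color="red", label="standard normal pdf")
plt.legend()
plt.show()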
plt.figure(num=2, clear=True)
# plot the standard normal distribution
x = np.linspace(-4, 4, 8 * 10 + 1)
plt.plot(x, norm.pdf(x), color="red", label=r'$\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$')
# Fill the area under curve within certain number of standard deviations from the mean
for i in range(3, 0, -1):
plt.fill_between(
x,
norm.pdf(x),
alpha=2 ** (-i),
color="blue",
label=rf"$\Pr(|\hat{{p}}-p|\leq{i}\sigma)\approx {(norm.cdf(i)-0.5)*200:.3g}\%$",
where=(abs(x) <= i),
)
# Label the plot
plt.title(r"Standard normal distribution for $\frac{{\hat{p}}-{p}}{\sigma}$ as $n\to \infty$")
plt.xlabel(r"x")
plt.ylabel(r"probability density")
plt.legend()
plt.show()

See Also
For more details about the law of large numbers and the central limit theorem, see the following video by Prof. Robert Gallager:
A Coin Tossing Game
To understand the concept of bias in estimation, imagine playing the following coin-tossing game:

- You win if a coin toss comes up head.
- You get to choose 1 out of $m$ coins, where coin $i$ has an unknown probability $p_i$ of coming up head.
- You can flip each coin $n$ times before making your choice.
How to play the game?
A particular strategy for playing the game is to
- estimate the chance $p_i$ by the empirical probability $\hat{\mathsf{p}}_i$ of head for each coin $i$, and
- select the coin $J := \arg\max_i \hat{\mathsf{p}}_i$ (with ties broken arbitrarily).

It is easy to see that the chance of winning by the given strategy is $E[p_J]$. Is the strategy optimal? Can a player evaluate or estimate the chance of winning without knowing the $p_i$'s? Is the following a good estimate:

$$\max_i \hat{\mathsf{p}}_i?$$
How is the strategy related to data-mining?
Suppose $\hat{\mathsf{p}}_i$ is the empirical accuracy of the classifier $i$. A common model selection strategy is to

- choose the classifier $J$ defined above that has the highest empirical accuracy, and
- estimate its performance by $\hat{\mathsf{p}}_J = \max_i \hat{\mathsf{p}}_i$, as illustrated by the sketch below.
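To make the analogy concrete, the following toy sketch treats each classifier's unknown true accuracy as a Bernoulli success probability and selects by empirical accuracy on a validation set; the accuracies `true_acc` and validation size `n_val` are illustrative assumptions:

# Toy model selection: pick the classifier with the highest empirical accuracy
true_acc = np.array([0.70, 0.72, 0.74])  # unknown true accuracies (illustrative)
n_val = 50  # validation set size (illustrative)
# Each row marks which validation predictions are correct for one classifier
correct = rng.random((len(true_acc), n_val)) < true_acc.reshape(-1, 1)
emp_acc = correct.mean(axis=1)  # empirical accuracy of each classifier
J = emp_acc.argmax()  # selected classifier
print(J, emp_acc[J], true_acc[J])  # reported accuracy tends to exceed the true one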
How to evaluate the estimate?
Consider the simple case $m = 2$, $n = 1$, and $p_1 = p_2 = 0.5$. Tossing each coin once gives the following four equally likely events:

$$(\mathrm{H}, \mathrm{H}), \quad (\mathrm{H}, \mathrm{T}), \quad (\mathrm{T}, \mathrm{H}), \quad (\mathrm{T}, \mathrm{T}).$$

For the above simple case, compute $E[\max_i \hat{\mathsf{p}}_i]$ and the winning probability $E[p_J]$. Is $\max_i \hat{\mathsf{p}}_i$ an unbiased estimate of the winning probability?
YOUR ANSWER HERE
Instead of carrying out the exact analysis, which involves order statistics, we will conduct a Monte Carlo simulation of the coin-tossing game. The simulation can be verified by hand-calculating the closed-form solution for small cases such as $m = 2$, $n = 1$, and $p_1 = p_2 = 0.5$ above.
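For such a small case, the exact values can also be cross-checked by brute-force enumeration of the outcomes. A minimal sketch for the simple case above; the tie-breaking rule toward the first coin is an arbitrary choice:

from itertools import product

# Enumerate the four outcomes of tossing each of the two coins once
p_simple = [0.5, 0.5]  # head probabilities in the simple case
E_max, E_win = 0.0, 0.0
for x1, x2 in product([0, 1], repeat=2):  # 1 marks a head
    prob = (p_simple[0] if x1 else 1 - p_simple[0]) * (
        p_simple[1] if x2 else 1 - p_simple[1]
    )
    E_max += prob * max(x1, x2)  # with n = 1, the empirical probability is the outcome
    J = 0 if x1 >= x2 else 1  # coin with the highest empirical probability
    E_win += prob * p_simple[J]
print(E_max, E_win)  # E[max phat] = 0.75 exceeds the winning probability 0.5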
The following initializes the list p_list of probabilities of head for different coins:
m = 2
p_list = np.array([0.4] * (m - 1) + [0.6])
# To generate the probability randomly instead, use
# p_list = rng.random(m)
p_list

Instead of generating a sequence of coin tosses, we will simulate the estimate $\hat{\mathsf{p}}_i$ directly using the binomial distribution, since

$$n \hat{\mathsf{p}}_i \sim \operatorname{Binomial}(n, p_i).$$
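As a quick sanity check that the binomial shortcut agrees with explicit tosses in distribution, one can compare the two simulation methods; the toss count, head probability, and repetition count below are illustrative choices:

# Compare explicit coin tosses with the binomial shortcut
n_check, p_check, reps = 10, 0.6, 100000  # illustrative choices
explicit = (rng.random((reps, n_check)) < p_check).sum(axis=1)  # heads per run
shortcut = rng.binomial(n_check, p_check, reps)  # direct binomial draws
print(explicit.mean(), shortcut.mean())  # both close to n_check * p_check = 6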
size = 10  # maximum number of tosses per coin
n = np.arange(1, size + 1)  # numbers of tosses to simulate
k = 100000  # number of independent repetitions per setting
# Draw k samples of the number of heads in n tosses for each coin,
# then divide by the number of tosses to get the empirical probabilities
phat = rng.binomial(
    n.reshape(-1, 1, 1), p_list.reshape(1, -1, 1), (size, m, k)
) / n.reshape(-1, 1, 1)
max_phat = phat.max(axis=1)  # maximum empirical probability over the m coins
max_phat

max_phat is a 2-dimensional array of samples of $\max_i \hat{\mathsf{p}}_i$:
- The first axis indexes samples obtained from different numbers of tosses.
- The second axis indexes the k independent samples for the same number of tosses.
The k independent samples can be used to approximate $E[\max_i \hat{\mathsf{p}}_i]$ by a sample average as follows.
E_max_phat = max_phat.mean(axis=-1)  # Monte Carlo average over the k samples
E_max_phat

Similarly, the winning probability $E[p_J]$ can be approximated as follows:
# For each repetition, look up the true head probability of the chosen coin
# (the one with the highest empirical probability), then average over repetitions
win_prob = p_list[phat.argmax(axis=1)].mean(axis=-1)
win_prob

The following plot compares the probabilities as a function of $n$.
plt.figure(3, clear=True)
plt.axhline(p_list.max(), color="red", label=r"$\max_i p_i$")
plt.plot(
n,
E_max_phat,
linestyle="--",
marker=".",
color="blue",
markersize=10,
label=r"$E[\max_i{\hat{p}}_i]$",
)
plt.plot(
n,
win_prob,
linestyle=":",
marker="x",
color="green",
markersize=10,
label="winning probability",
)
plt.ylim([0, 1])
plt.xlim([n[0], n[-1]])
plt.title(r"Plot of $E[\max_i{\hat{p}}_i]$ vs $n$")
plt.xlabel("$n$")
plt.ylabel("probability")
plt.legend()
plt.show()

Compare the chance of winning with $\max_i p_i$ in general, for different $p_i$'s.
Hint

Change p_list to explore the non-uniform cases where the $p_i$'s may not be equal. Be careful about the deterministic case where $p_i \in \{0, 1\}$ for all $i$.
YOUR ANSWER HERE
Compare the chance of winning with $E[\max_i \hat{\mathsf{p}}_i]$. Is $\max_i \hat{\mathsf{p}}_i$ a biased estimate of the chance of winning? If so, is it overly optimistic, i.e., does it have an expectation larger than the chance of winning?
YOUR ANSWER HERE
Is $\max_i \hat{\mathsf{p}}_i$ a consistent estimate of the chance of winning?
YOUR ANSWER HERE