We will now make use of the approximation of the binomial distribution by the \(z\)-distribution given in Section 7.1: Using the Normal Distribution to Approximate the Binomial Distribution. As usual, the confidence interval will switch the roles of population and sample quantities. The recipe will be laid out first, then we will connect it to what you know about the binomial distribution.
First, some definitions. Let \(X\) be the number of items in a population of size \(N\) that have a given quality (e.g., the number of females in a population, or the number of people at the U of S wearing yellow sweaters). Then the proportion of the population having the given quality is
\[ p = \frac{X}{N} \]Given a sample from the population of size \(n\), the best estimate for \(p\) is:
\[\hat{p} = \frac{x}{n}\]where \(x\) is the number of items in the sample having the given quality. To go along with \(\hat{p}\) we also have
\[\hat{q} = 1 -\hat{p}\]which is the proportion of items in the sample without the given quality.
To compute a \(\cal{C}\) confidence interval for a proportion \(p\) we need to compute
\[ E = z_{\cal{C}} \sqrt{\frac{\hat{p} \hat{q}}{n}}\]and it must be true that both \(n\hat{p} \geq 5\) and \(n\hat{q} \geq 5\) (otherwise we need to use the binomial distribution directly).
With \(E\), the \(\cal{C}\) confidence interval for a proportion is given by
\[ \hat{p} - E < p < \hat{p} + E.\]To derive the proportion confidence interval formula, we’ll begin with the sampling theory given by the binomial distribution and the corresponding \(z\)-approximation. Then we’ll switch the roles of \(p\) and \(\hat{p}\). Let
\[x_{\rm pop} = \frac{n}{N} X = np\]be the mean, or expected value, of the number \(x\) of items of interest that you expect to find in a sample of size \(n\) randomly selected from a population in which a proportion \(p\) of the items are of interest. This works because \(p\) is also the probability of randomly selecting an item of interest (the probability of success) from the population, as in Chapter 4. The binomial distribution tells you the probability of getting different numbers \(x\) of items of interest in your sample, given \(p\). The binomial distribution that describes our situation is shown in Figure 8.7; it has a standard deviation of \(\sigma = \sqrt{npq}\).
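To see this sampling theory in action, here is a minimal simulation sketch in Python (the values of \(n\), \(p\) and the random seed are illustrative choices, not numbers from the text): it draws the count of items of interest in many simulated samples and compares the empirical mean and standard deviation with \(np\) and \(\sqrt{npq}\).

```python
import numpy as np

# Illustrative values (not from the text): sample size n and population proportion p.
n, p = 500, 0.12
q = 1 - p

rng = np.random.default_rng(0)

# Count of items of interest in each of 100,000 simulated samples of size n.
counts = rng.binomial(n, p, size=100_000)

print("simulated mean of x:     ", counts.mean())       # close to n*p = 60
print("theoretical mean n*p:    ", n * p)
print("simulated std of x:      ", counts.std())        # close to sqrt(n*p*q)
print("theoretical sqrt(n*p*q): ", np.sqrt(n * p * q))
```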
Moving to the normal approximation, we have the picture of Figure 8.8.
Figure 8.8 says:
\[\begin{eqnarray*} \mu - z_{\cal{C}} \sigma \:\: < & x & < \:\: \mu + z_{\cal{C}} \sigma \\ np - z_{\cal{C}} \sqrt{npq} \:\: < & x & < \:\: np + z_{\cal{C}} \sqrt{npq} \end{eqnarray*}\]with a (frequentist) probability of \(\cal{C}\). This is our sampling theory. Divide by \(n\):
\[\begin{eqnarray*} p - z_{\cal{C}} \sqrt{\frac{pq}{n}} \:\: < & \frac{x}{n} & < \:\: p + z_{\cal{C}} \sqrt{\frac{pq}{n}}\\ p - z_{\cal{C}} \sqrt{\frac{pq}{n}} \:\: < & \hat{p} & < \:\: p + z_{\cal{C}} \sqrt{\frac{pq}{n}} \end{eqnarray*}\]Swapping the roles of the population and sample quantities, we arrive at the confidence interval formula:
\[\hat{p} - z_{\cal{C}} \sqrt{\frac{\hat{p}\hat{q}}{n}} \:\: < \:\: p \:\: < \:\: \hat{p} + z_{\cal{C}}\sqrt{\frac{\hat{p}\hat{q}}{n}}.\]Time for a worked example.
Example 8.3 : A sample of 500 nursing applications included 60 men. Find the 90% confidence interval of the true proportion of men who applied to the nursing program.
Solution: From the t Distribution Table, look up
\[z_{\cal{C}} = 1.65\]and compute
\[\hat{p} = \frac{x}{n} = \frac{60}{500} = 0.12\]\[\hat{q} = 1 -\hat{p} = 1 - 0.12 = 0.88\]
Then compute
\[E = z_{\cal{C}} \sqrt{\frac{\hat{p}\hat{q}}{n}} = 1.65\sqrt{\frac{(0.12)(0.88)}{500}} = 0.024\]so that
\[\hat{p}-E < p < \hat{p}+E\]\[0.12-0.024 < p < 0.12+0.024\]\[0.096 < p < 0.144\]is the confidence interval with 90% confidence.
▢
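The arithmetic in Example 8.3 can be double-checked with a short Python sketch (the variable names here are introduced for illustration; the \(z\) value is the table value 1.65 used in the solution above):

```python
from math import sqrt

# Example 8.3: 60 men among 500 nursing applications, 90% confidence.
x, n = 60, 500
p_hat = x / n                # 0.12
q_hat = 1 - p_hat            # 0.88
z_C = 1.65                   # z for 90% confidence, as read from the table

E = z_C * sqrt(p_hat * q_hat / n)
print("E =", round(E, 3))                                   # 0.024
print(round(p_hat - E, 3), "< p <", round(p_hat + E, 3))    # 0.096 < p < 0.144
```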
Sample size needed for a poll
Measuring proportions is what pollsters do. For example, in an election you might want to know how many people will vote for liberals (items of interest) and how many will vote for conservatives (items not of interest).[1] In a newspaper you might see: “The poll says that 72\(\%\) of the voters will vote liberal. The poll is considered accurate to 2 percentage points 19 times out of 20.” This means that the 95\(\%\) confidence interval (19/20 = 0.95) of the proportion of liberal voters is \(0.72 \pm 0.02\) (note how proportions are presented as percentages in the newspaper). The error here is \(E = 0.02\). Before the pollster starts telephoning people, she must know how many people to phone to arrive at that goal error of 2\(\%\). That is, she needs to know the required sample size \(n\). In general, the minimum sample size needed to attain a goal error \(E\) on a \(\cal{C}\) confidence interval is
\[n = \hat{p}\hat{q}\left( \frac{z_{\cal{C}}}{E} \right)^{2}.\]Here \(\hat{p}\) and \(\hat{q}\) could come from a previous survey, if one is available. If there is no such survey, or if you want to be sure of ending up with an error equal to or less than the goal \(E\), then use \(\hat{p} = \hat{q} = 0.5\); see Figure 8.9.
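The reason \(\hat{p} = \hat{q} = 0.5\) is the safe choice is that the product \(\hat{p}\hat{q}\) can never exceed its value there:
\[\hat{p}\hat{q} = \hat{p}(1-\hat{p}) = \frac{1}{4} - \left(\hat{p} - \frac{1}{2}\right)^{2} \leq \frac{1}{4},\]so plugging in 0.5 gives the largest \(n\) the formula can ever call for, guaranteeing the goal error is met no matter what the true proportion turns out to be.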
Example 8.4 : We want to estimate, with 95\(\%\) confidence, the proportion of people who own a home computer. A previous study gave an answer of 40\(\%\). For a new study we want an error of 2\(\%\). How many people should we poll?
Solution: From the question we have:
\[\hat{p}=0.40, \hspace{.25in} \hat{q}=0.60\]\[E = 0.02, \hspace{.25in} {\cal{C}} = 0.95\]From the t Distribution Table (or the Standard Normal Distribution Table if you think about the areas correctly) we find
\[z_{\cal{C}} = z_{95\%} = 1.960.\]Therefore
\[n = \hat{p}\hat{q}\left( \frac{z_{\cal{C}}}{E} \right)^{2} = (0.40)(0.60)\left( \frac{1.960}{0.02} \right)^{2} = 2304.96,\]which we round up to a sample size of 2305 to ensure that \(E<0.02\).
▢
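Here is a minimal Python sketch of the sample-size calculation (the function name min_sample_size is introduced for illustration; it simply evaluates the formula above and rounds up):

```python
from math import ceil

def min_sample_size(p_hat, z_C, E):
    """Minimum sample size so that the C-level proportion CI has error at most E."""
    q_hat = 1 - p_hat
    return ceil(p_hat * q_hat * (z_C / E) ** 2)

# Example 8.4: previous study gave p_hat = 0.40; want E = 0.02 at 95% confidence.
print(min_sample_size(0.40, 1.960, 0.02))   # 2305

# No previous study: use the worst case p_hat = 0.5 (see Figure 8.9).
print(min_sample_size(0.50, 1.960, 0.02))   # 2401
```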
- We assume here that there are only two parties. For the real-life situation of more than two parties we need the multinomial distribution, approximated by a multivariate normal distribution. That is a topic for multivariate statistics, but the principles are the same as what we cover here.