## [1] 0.8682238

Observe that the Normal approximation (with a continuity correction) is better than the Poisson approximation in this case. In the second case (𝑛 = 200, 𝑝 = 0.01) the actual Binomial probability and the Normal approximation of the probability are:
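The R calls are missing from the source. Assuming, as the quoted values suggest, that the probability under study is P(𝑋 ≤ 3), a reconstruction (only the Normal approximation's output survives below):

```r
pbinom(3, 200, 0.01)                        # the actual Binomial probability
pnorm(3.5, 200*0.01, sqrt(200*0.01*0.99))   # Normal approximation, continuity correction
```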
## [1] 0.856789

Observe that the Poisson approximation, which produces 0.8571235, is slightly closer to the target than the Normal approximation. The greater accuracy of the Poisson approximation for the case where 𝑛 is large and 𝑝 is small is more pronounced in the final case (𝑛 = 2000, 𝑝 = 0.001), where the target probability and the Normal approximation are:
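Again assuming the probability under study is P(𝑋 ≤ 3), a sketch of the computation (only the Normal approximation's output survives below):

```r
pbinom(3, 2000, 0.001)                          # the target Binomial probability
pnorm(3.5, 2000*0.001, sqrt(2000*0.001*0.999))  # Normal approximation, continuity correction
```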
## [1] 0.8556984

Compare the actual Binomial probability, which is equal to 0.8572138, to the Poisson approximation that produced 0.8571235. The Normal approximation, 0.8556984, is slightly off, but is still acceptable.

Exercises

Exercise 6.1. Consider the problem of establishing regulations concerning the maximum number of people who can occupy a lift. In particular, we would like to assess the probability of exceeding the maximal weight when 8 people are allowed to use the lift simultaneously, and compare that to the probability when 9 people are allowed into the lift. Assume that the total weight of 8 people chosen at random follows a Normal distribution with a mean of 560kg and a standard deviation of 57kg. Assume that the total weight of 9 people chosen at random follows a Normal distribution with a mean of 630kg and a standard deviation of 61kg.

1. What is the probability that the total weight of 8 people exceeds 650kg?
2. What is the probability that the total weight of 9 people exceeds 650kg?
3. What is the central region that contains 80% of the distribution of the total weight of 8 people?
4. What is the central region that contains 80% of the distribution of the total weight of 9 people?

Exercise 6.2. Assume 𝑋 ∼ Binomial(27, 0.32). We are interested in the probability P(𝑋 > 11).

1. Compute the (exact) value of this probability.
2. Compute a Normal approximation to this probability, without a continuity correction.
3. Compute a Normal approximation to this probability, with a continuity correction.
4. Compute a Poisson approximation to this probability.
Summary

Glossary

Normal Random Variable: A bell-shaped distribution that is frequently used to model a measurement. The distribution is marked with Normal(𝜇, 𝜎²).

Standard Normal Distribution: The Normal(0, 1) distribution. The distribution of a standardized Normal measurement.

Percentile: Given a percent 𝑝 ⋅ 100% (or a probability 𝑝), the value 𝑥 is the percentile of a random variable 𝑋 if it satisfies the equation P(𝑋 ≤ 𝑥) = 𝑝.

Normal Approximation of the Binomial: Approximating computations associated with the Binomial distribution by parallel computations that use the Normal distribution with the same expectation and standard deviation as the Binomial.

Poisson Approximation of the Binomial: Approximating computations associated with the Binomial distribution by parallel computations that use the Poisson distribution with the same expectation as the Binomial.

Discuss in the Forum

Mathematical models are used as tools to describe reality. These models are supposed to characterize the important features of the analyzed phenomena and provide insight. Random variables are mathematical models of measurements. Some people claim that there should be a perfect match between the mathematical characteristics of a random variable and the properties of the measurement it models. Others claim that a partial match is sufficient. What is your opinion?

When forming your answer to this question you may give an example of a situation from your own field of interest for which a random variable can serve as a model. Identify discrepancies between the theoretical model and the actual properties of the measurement. Discuss the appropriateness of using the model in light of these discrepancies.

Consider, for example, IQ testing. The scores of many IQ tests are modeled as having a Normal distribution with an expectation of 100 and a standard deviation of 15. The sample space of the Normal distribution is the entire line of real numbers, including the negative numbers. In reality, IQ tests produce only positive values.
Student Learning Objective

In this section we integrate the concept of data that is extracted from a sample with the concept of a random variable. The new element that connects these two concepts is the notion of a sampling distribution. The data we observe results from the specific sample that was selected. The sampling distribution, in a way similar to random variables, corresponds to all samples that could have been selected. (Or, stated in a different tense, to the sample that will be selected prior to the selection itself.) Summaries of the distribution of the data, such as the sample mean and the sample standard deviation, become random variables when considered in the context of the sampling distribution. In this section we investigate the sampling distribution of such data summaries. In particular, it is demonstrated that (for large samples) the sampling distribution of the sample average may be approximated by the Normal distribution. The mathematical theorem that establishes this approximation is called the Central Limit Theorem.

By the end of this chapter, the student should be able to:

- Comprehend the notion of a sampling distribution and simulate the sampling distribution of the sample average.
- Relate the expectation and standard deviation of a measurement to the expectation and standard deviation of the sample average.
- Apply the Central Limit Theorem to sample averages.

The Sampling Distribution

In Chapter the concept of a random variable was introduced. As part of the introduction we used an example that involved the selection of a random person from the population and the measuring of his/her height. Prior to the
action of selection, the height of that person is a random variable. It has the potential of obtaining any of the heights that are present in the population, which is the sample space of this example, with a distribution that reflects the relative frequencies of each of the heights in the population: the probabilities of the values. After the selection of the person and the measuring of the height we get a particular value. This is the observed value and is no longer a random variable. In this section we extend the concept of a random variable and define the concept of a random sample.

A Random Sample

The relation between the random sample and the data is similar to the relation between a random variable and the observed value. The data is the observed values of a sample taken from a population. The content of the data is known. The random sample, similarly to a random variable, is the data that will be selected when taking a sample, prior to the selection itself. The content of the random sample is unknown, since the sample has not yet been taken. Still, just like in the case of the random variable, one is able to say what the possible evaluations of the sample may be and, depending on the mechanism of selecting the sample, what are the probabilities of the different potential evaluations. The collection of all possible evaluations of the sample is the sample space of the random sample, and the probabilities of the different evaluations produce the distribution of the random sample. (Alternatively, if one prefers to speak in the past tense, one can define the sample space of a random sample to be the evaluations of the sample that could have taken place, with the distribution of the random sample being the probabilities of these evaluations.)

A statistic is a function of the data. Examples of statistics are the average of the data, the sample variance and standard deviation, the median of the data, etc. In each case a given formula is applied to the data; for each type of statistic a different formula is applied.

The same formula that is applied to the observed data may, in principle, be applied to random samples. Hence, for example, one may talk of the sample average, which is the average of the elements in the data. The average, considered in the context of the observed data, is a number and its value is known. However, if we think of the average in the context of a random sample then it becomes a random variable. Prior to the selection of the actual sample we do not know what values it will include. Hence, we cannot tell what the outcome of the average of the values will be. However, due to the identification of all possible evaluations that the sample can possess, we may say in advance what is the collection of values the sample average can have. This is the sample space of the sample average. Moreover, from the sampling distribution of the random sample one may identify the probability of each value of the sample average, thus obtaining the sampling distribution of the sample average.

The same line of argumentation applies to any statistic. Computed in the context of the observed data, the statistic is a known number that may, for example, be used to characterize the variation in the data. When thinking of a statistic in the context of a random sample it becomes a random variable. The distribution of the statistic is called the sampling distribution of the statistic.
Consequently, we may talk of the sampling distribution of the median, the sampling distribution of the sample variance, etc.

Random variables are also applied as models for uncertainty in future measurements in more abstract settings that need not involve a specific population. Specifically, we introduced the Binomial and Poisson random variables for settings that involve counting and the Uniform, Exponential, and Normal random variables for settings where the measurement is continuous. The notion of a sampling distribution may be extended to a situation where one is taking several measurements, each measurement taken independently of the others. As a result one obtains a sequence of measurements. We use the term "sample" to denote this sequence. The distribution of this sequence is also called the sampling distribution. If all the measurements in the sequence are Binomial then we call it a Binomial sample. If all the measurements are Exponential we call it an Exponential sample, and so forth.

Again, one may apply a formula (such as the average) to the content of the random sequence and produce a random variable. The term sampling distribution again describes the distribution that the random variable produced by the formula inherits from the sample. In the next subsection we examine an example of a sample taken from a population. Subsequently, we discuss examples that involve a sequence of measurements from a theoretical model.

Sampling From a Population

Consider taking a sample from a population. Let us use again for the illustration the file "pop1.csv", like we did in Chapter . The data frame produced from the file contains the sex and height of the 100,000 members of some imaginary population. Recall that in Chapter we applied the function "sample" to randomly sample the height of a single subject from the population. Let us apply the same function again, but this time in order to sample the heights of 100 subjects:
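The code block is missing from the source. A minimal reconstruction, consistent with the description that follows (the file name "pop1.csv" and the object names "pop.1" and "X.samp" are taken from the surrounding text):

```r
pop.1 <- read.csv("pop1.csv")          # the entire population
X.samp <- sample(pop.1$height, 100)    # a random sample of 100 heights
X.samp                                 # present the content of the sample
```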
In the first line of code we produce a data frame that contains the information on the entire population. In the second line we select a sample of size 100 from the population, and in the third line we present the content of the sample.

The first argument to the function "sample" that selects the sample is the sequence of length 100,000 with the list of heights of all the members of the population. The second argument indicates the sample size, 100 in this case. The outcome of the random selection is stored in the object "X.samp", which is a sequence that contains 100 heights.

Typically, a researcher does not get to examine the entire population. Instead, measurements on a sample from the population are made. In relation to the imaginary setting we simulate in the example, the typical situation is that the researcher does not have the complete list of potential measurement evaluations, i.e. the complete list of 100,000 heights in "pop.1$height", but only a sample of measurements, namely the list of 100 numbers that are stored in "X.samp" and are presented above. The role of statistics is to make inference on the parameters of the unobserved population based on the information that is obtained from the sample. For example, we may be interested in estimating the mean value of the heights in the population. A reasonable proposal is to use the sample average to serve as an estimate:
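A reconstruction of the estimate's computation (the call itself is not shown in the source):

```r
mean(X.samp)   # the sample average as an estimate of the population mean
```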
The actual estimate that we have obtained resulted from the specific sample that was collected. Had we collected a different subset of 100 individuals we would have obtained a different numerical value for the estimate. Consequently, one may wonder: Was it pure luck that we got such good estimates? How likely is it to get estimates that are close to the target parameter?

Notice that in realistic settings we do not know the actual value of the target population parameters. Nonetheless, we would still want to have at least a probabilistic assessment of the distance between our estimates and the parameters they try to estimate. The sampling distribution is the vehicle that may enable us to address these questions.

In order to illustrate the concept of the sampling distribution let us select another sample and compute its average:
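A sketch of this second draw (again a reconstruction; the output line that follows is the surviving result):

```r
X.samp.2 <- sample(pop.1$height, 100)   # a second sample of size 100
mean(X.samp.2)                          # its average
```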
## [1] 171.26

In each case we got a different value for the sample average. In the first of the last two iterations the result was more than 1 centimeter away from the population average, which is equal to 170.035, and in the second it was within the range of 1 centimeter. Can we say, prior to taking the sample, what is the probability of falling within 1 centimeter of the population mean?

Chapter discussed the random variable that emerges by randomly sampling a single number from the population presented by the sequence "pop.1$height". The distribution of the random variable resulted from the assignment of the probability 1/100,000 to each one of the 100,000 possible outcomes. The same principle applies when we randomly sample 100 individuals. Each possible outcome is a collection of 100 numbers and each collection is assigned equal probability. The resulting distribution is called the sampling distribution. The distribution of the average of the sample emerges from this distribution: With each sample one may associate the average of that sample. The probability assigned to that average outcome is the probability of the sample. Hence, one may assess the probability of falling within 1 centimeter of the population mean using the sampling distribution. Each sample produces an average that either falls within the given range or not. The probability of the sample average falling within the given range is the proportion of samples for which this event happens among the entire collection of samples.

However, we face a technical difficulty when we attempt to assess the sampling distribution of the average and the probability of falling within 1 centimeter of the population mean. Examination of the distribution of a sample of a single individual is easy enough. The total number of outcomes, which is 100,000 in the given example, can be handled with no effort by the computer. However, when we consider samples of size 100 we get that the total number of ways to select 100 numbers out of 100,000 numbers is on the order of 10^342 (1 followed by 342 zeros) and cannot be handled by any computer. Thus, the probability cannot be computed exactly. As a compromise we will approximate the distribution by selecting a large number of samples, say 100,000, to represent the entire collection, and use the resulting distribution as an approximation of the sampling distribution. Indeed, the larger the number of samples that we create the more accurate the approximation of the distribution is. Still, taking 100,000 repeats should produce approximations which are good enough for our purposes.

Consider the sampling distribution of the sample average. We simulated above a few examples of the average. Now we would like to simulate 100,000 such examples. We do this by first creating a sequence of the length of the number of evaluations we seek (100,000) and then writing a small program that produces each time a new random sample of size 100 and assigns the value of the average of that sample to the appropriate position in the sequence. Do first and explain later (the program is reconstructed below). In the first line we produce a sequence of length 100,000 that contains zeros. The function "rep" creates a sequence that contains repeats of its first argument a number of times that is specified by its second argument. In this example, the numerical value 0 is repeated 100,000 times to produce a sequence of zeros of the length we seek.
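The program itself is missing from the source; a reconstruction consistent with the explanations before and after (the names "X.bar" and "i", and the expression "1:n" with n = 100,000, are taken from the text):

```r
X.bar <- rep(0, 10^5)    # first line: a sequence of 100,000 zeros
n <- 10^5                # the number of iterations, n = 100,000
for (i in 1:n) {
  X.samp <- sample(pop.1$height, 100)  # a new random sample of size 100
  X.bar[i] <- mean(X.samp)             # store its average in position i
}
hist(X.bar)              # histogram of the simulated sampling distribution
```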
(Footnote: Running this simulation, and similar simulations of the same nature that will be considered in the sequel, demands more of the computer's resources than the examples that were considered up until now. Beware that running times may be long and, depending on the strength of your computer and your patience, too long. You may save time by running fewer iterations, replacing, say, "10^5" by "10^4". The results of the simulation will be less accurate, but will still be meaningful.)

The main part of the program is a "for" loop. The argument of the function "for" takes the special form "index.name in index.values", where index.name is the name of the running index and index.values is the collection of values over which the running index is evaluated. In each iteration of the loop the running index is assigned a value from the collection, and the expression that follows the brackets of the "for" function is evaluated with the given value of the running index.

In the given example the collection of values is produced by the expression "1:n". Recall that the expression "1:n" produces the collection of integers between 1 and n. Here, n = 100,000. Hence, in the given application the collection of values is a sequence that contains the integers between 1 and 100,000. The running index is called "i", and the expression is evaluated 100,000 times, each time with a different integer value for the running index "i".
The R system treats a collection of expressions enclosed within curly brackets as one entity. Therefore, in each iteration of the "for" loop, the lines that are within the curly brackets are evaluated. In the first line a random sample of size 100 is produced and in the second line the average of the sample is computed and stored in the 𝑖-th position of the sequence "X.bar". Observe that the specific position in the sequence is referred to by using square brackets. The program changes the original components of the sequence, from 0 to the average of a random sample, one by one. When the loop ends all values are changed and the sequence "X.bar" contains 100,000 evaluations of the sample average. The last line, which is outside the curly brackets and is evaluated after the "for" loop ends, produces a histogram of the averages that were simulated. The histogram is presented in the lower panel of Figure .

Compare the distribution of the sample average to the distribution of the heights in the population that was presented first in Figure and is currently presented in the upper panel of Figure . Observe that both distributions are centered at about 170 centimeters. Notice, however, that the range of values of the sample average lies essentially between 166 and 174 centimeters, whereas the range of the distribution of the heights themselves is between 127 and 217 centimeters. Broadly speaking, the sample average and the original measurement are centered around the same location but the sample average is less spread out.

Specifically, let us compare the expectation and standard deviation of the sample average to the expectation and standard deviation of the original measurement:
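A reconstruction of the comparison (only the last output line survives below):

```r
mean(pop.1$height)   # expectation of the original measurement
sd(pop.1$height)     # standard deviation of the original measurement
mean(X.bar)          # expectation of the sample average
sd(X.bar)            # standard deviation of the sample average
```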
## [1] 1.118308

Observe that the expectation of the population and the expectation of the sample average are practically the same, while the standard deviation of the sample average is about 10 times smaller than the standard deviation of the population. This result is not accidental and actually reflects a general phenomenon that will be seen below in other examples.

We may use the simulated sampling distribution in order to compute an approximation of the probability of the sample average falling within 1 centimeter of the population mean. Let us first compute the relevant probability and then explain the details of the computation:
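A reconstruction of the computation described in the next paragraph (the expression is quoted there verbatim):

```r
mean(abs(X.bar - mean(pop.1$height)) <= 1)   # relative frequency of averages within 1 cm
```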
Recall that the sequence "X.bar" contains 100,000 simulated sample averages. This sequence represents the distribution of the sample average. The expression "abs(X.bar - mean(pop.1$height)) <= 1" produces a sequence of logical "TRUE" or "FALSE" values, depending on the value of the sample average being less or more than one unit away from the population mean. The application of the function "mean" to the output of the last expression results in the computation of the relative frequency of TRUEs, which corresponds to the probability of the event of interest.

Example 7.1. A poll for the determination of the support in the population for a candidate was described in Example . The proportion in the population of supporters was denoted by 𝑝. A sample of size 𝑛 = 300 was considered in order to estimate the size of 𝑝. We identified that the distribution of 𝑋, the number of supporters in the sample, is Binomial(300, 𝑝). This distribution is the sampling distribution of 𝑋. One may use the proportion of supporters in the sample, the number of supporters in the sample divided by 300, as an estimate of the parameter 𝑝. The sampling distribution of this quantity, 𝑋/300, may be considered in order to assess the discrepancy between the estimate and the actual value of the parameter.

Theoretical Models

Sampling distributions can also be considered in the context of theoretical distribution models. For example, take a measurement 𝑋 ∼ Binomial(10, 0.5) from the Binomial distribution. Assume 64 independent measurements are produced with this distribution: 𝑋1, 𝑋2, … , 𝑋64. The sample average in this case corresponds to the distribution of the random variable produced by averaging these 64 random variables:

𝑋̄ = (𝑋1 + 𝑋2 + ⋯ + 𝑋64)/64 .
(Footnote to Example 7.1: Mathematically speaking, the Binomial distribution is only an approximation to the sampling distribution of 𝑋. Actually, the Binomial is an exact description of the distribution only in the case where each subject has the chance to be represented in the sample more than once. However, only when the size of the sample is comparable to the size of the population would the Binomial distribution fail to be an adequate approximation to the sampling distribution.)

Again, one may wonder: what is the distribution of the sample average 𝑋̄ in this case? We can approximate the distribution of the sample average by simulation. The function "rbinom" produces a random sample from the Binomial distribution. The first argument to the function is the sample size, which we take in this example to be equal to 64. The second and third arguments are the parameters of the Binomial distribution, 10 and 0.5 in this case. We can use this function in the simulation:
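A reconstruction of the simulation (the description in the next paragraph matches this structure):

```r
X.bar <- rep(0, 10^5)            # a sequence of zeros to hold the averages
for (i in 1:10^5) {
  X.samp <- rbinom(64, 10, 0.5)  # 64 Binomial(10, 0.5) measurements
  X.bar[i] <- mean(X.samp)       # replace a zero by the sample average
}
hist(X.bar)                      # histogram of the simulated averages
```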
Observe that in this code we created a sequence of length 100,000 with evaluations of the sample average of 64 Binomial random variables. We start with a sequence of zeros, and in each iteration of the "for" loop a zero is replaced by the average of a random sample of 64 Binomial random variables.
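The summary computations are not shown in the source; presumably:

```r
mean(X.bar)   # expectation of the simulated sample averages
sd(X.bar)     # standard deviation of the simulated sample averages
```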
## [1] 0.1973577

The histogram of the sample average is presented in the lower panel of Figure . Compare it to the distribution of a single Binomial random variable that appears in the upper panel. Notice, once more, that the centers of the two distributions coincide but the spread of the sample average is smaller. The sample space of a single Binomial random variable is composed of integers. The sample space of the average of 64 Binomial random variables, on the other hand, contains many more values and is closer to the sample space of a random variable with a continuous distribution.

Recall that the expectation of a Binomial(10, 0.5) random variable is E(𝑋) = 10 ⋅ 0.5 = 5 and the variance is V(𝑋) = 10 ⋅ 0.5 ⋅ 0.5 = 2.5 (thus, the standard deviation is √2.5 = 1.581139). Observe that the expectation of the sample average that we got from the simulation is essentially equal to 5 and the standard deviation is 0.1982219.

One may prove mathematically that the expectation of the sample mean is equal to the theoretical expectation of its components:

E(𝑋̄) = E(𝑋) .

The results of the simulation for the expectation of the sample average are consistent with the mathematical statement. The mathematical theory of probability may also be used in order to prove that the variance of the sample average is equal to the variance of each of the components, divided by the sample size:

V(𝑋̄) = V(𝑋)/𝑛 ,

where 𝑛 is the number of observations in the sample. Specifically, in the Binomial example we get that V(𝑋̄) = 2.5/64, since the variance of a Binomial component is 2.5 and there are 64 observations. Consequently, the standard deviation is √(2.5/64) = 0.1976424, in agreement, more or less, with the results of the simulation (that produced 0.1982219 as the standard deviation).

Consider the problem of identifying the central interval that contains 95% of the distribution. For the Normal distribution we were able to use the function "qnorm" in order to compute the percentiles of the theoretical distribution. A function that can be used for the same purpose for a simulated distribution is the function "quantile". The first argument to this function is the sequence of simulated values of the statistic, "X.bar" in the current case. The second argument is a number between 0 and 1, or a sequence of such numbers:

## 2.5% 97.5%
## 4.609375 5.390625

We used the sequence "c(0.025,0.975)" as the input to the second argument. As a result we obtained the output 4.609375, which is the 2.5%-percentile of the sampling distribution of the average, and 5.390625, which is the 97.5%-percentile of the sampling distribution of the average.

It is of interest to compare these percentiles to the parallel percentiles of the Normal distribution with the same expectation and the same standard deviation as the average of the Binomials:
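A reconstruction of the two percentile computations (the "quantile" call, whose output appears above, and the "qnorm" call, whose output follows):

```r
quantile(X.bar, c(0.025, 0.975))                 # percentiles of the simulated distribution
qnorm(c(0.025, 0.975), mean(X.bar), sd(X.bar))   # matching Normal percentiles
```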
## [1] 4.613266 5.386894

Observe the similarity between the percentiles of the distribution of the average and the percentiles of the Normal distribution. This similarity is a reflection of the Normal approximation of the sampling distribution of the average, which is formulated in the next section under the title: The Central Limit Theorem.

Example 7.2. The distribution of the number of events of radioactive decay in a second was modeled in Example according to the Poisson distribution. A quantity of interest is 𝜆, the expectation of that Poisson distribution. This quantity may be estimated by measuring the total number of decays over a period of time and dividing the outcome by the number of seconds in that period of time. Let 𝑛 be this number of seconds. The procedure just described corresponds to taking the sample average of Poisson(𝜆) observations for a sample of size 𝑛. The expectation of the sample average is 𝜆 and the variance is 𝜆/𝑛, leading to a standard deviation of size √(𝜆/𝑛). The Central Limit Theorem states that the sampling distribution of this average corresponds, approximately, to the Normal distribution with this expectation and standard deviation.

Law of Large Numbers and Central Limit Theorem

The Law of Large Numbers and the Central Limit Theorem are mathematical theorems that describe the sampling distribution of the average for large samples.

The Law of Large Numbers

The Law of Large Numbers states that, as the sample size becomes larger, the sampling distribution of the sample average becomes more and more concentrated about the expectation.

Let us demonstrate the Law of Large Numbers in the context of the Uniform distribution. Let the distribution of the measurement 𝑋 be Uniform(3, 7). Consider three different sample sizes 𝑛: 𝑛 = 10, 𝑛 = 100, and 𝑛 = 1000. Let us carry out a simulation similar to the simulations of the previous section. However, this time we run the simulation for the three sample sizes in parallel:
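A reconstruction of the simulation (the sequence names are taken from the next paragraph):

```r
unif.10   <- rep(0, 10^5)
unif.100  <- rep(0, 10^5)
unif.1000 <- rep(0, 10^5)
for (i in 1:10^5) {
  unif.10[i]   <- mean(runif(10, 3, 7))     # average of 10 Uniform(3,7) measurements
  unif.100[i]  <- mean(runif(100, 3, 7))    # average of 100 measurements
  unif.1000[i] <- mean(runif(1000, 3, 7))   # average of 1000 measurements
}
```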
Observe that we have produced 3 sequences of length 100,000 each: "unif.10", "unif.100", and "unif.1000". The first sequence is an approximation of the sampling distribution of an average of 10 independent Uniform measurements, the second approximates the sampling distribution of an average of 100 measurements, and the third the distribution of an average of 1000 measurements. The distribution of a single measurement in each of the examples is Uniform(3, 7). Consider the expectation of the sample average for the three sample sizes:
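Presumably the three expectations were computed as follows (only the last output line survives):

```r
mean(unif.10)
mean(unif.100)
mean(unif.1000)
```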
## [1] 4.999975

For all sample sizes the expectation of the sample average is equal to 5, which is the expectation of the Uniform(3, 7) distribution. Recall that the variance of the Uniform(𝑎, 𝑏) distribution is (𝑏 − 𝑎)²/12. Hence, the variance of the given Uniform distribution is V(𝑋) = (7 − 3)²/12 = 16/12 ≈ 1.3333. The variances of the sample averages are:
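Presumably computed as (again only the last output line survives):

```r
var(unif.10)
var(unif.100)
var(unif.1000)
```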
## [1] 0.001335726

Notice that the variances decrease as the sample sizes increase. The decrease is according to the formula V(𝑋̄) = V(𝑋)/𝑛.

The variance is a measure of the spread of the distribution about the expectation. The smaller the variance, the more concentrated the distribution is around the expectation. Consequently, in agreement with the Law of Large Numbers, the larger the sample size, the more concentrated the sampling distribution of the sample average is about the expectation.

The Central Limit Theorem (CLT)

The Law of Large Numbers states that the distribution of the sample average tends to be more concentrated as the sample size increases. The Central Limit Theorem (CLT in short) provides an approximation of this distribution.
The deviation between the sample average and the expectation of the measurement tends to decrease with the increase in sample size. In order to obtain a refined assessment of this deviation one needs to magnify it. The appropriate way to obtain the magnification is to consider the standardized sample average, in which the deviation of the sample average from its expectation is divided by the standard deviation of the sample average:

𝑍 = (𝑋̄ − E(𝑋̄)) / √V(𝑋̄) .
Recall that the expectation of the sample average is equal to the expectation of a single random variable (E(𝑋̄) = E(𝑋)) and that the variance of the sample average is equal to the variance of a single observation, divided by the sample size (V(𝑋̄) = V(𝑋)/𝑛). Consequently, one may rewrite the standardized sample average in the form:

𝑍 = (𝑋̄ − E(𝑋)) / √(V(𝑋)/𝑛) = √𝑛(𝑋̄ − E(𝑋)) / √V(𝑋) .

The second equality follows from moving the square root of 𝑛, which divides the term in the denominator, up to the numerator. Observe that with the increase of the sample size the decreasing difference between the average and the expectation is magnified by the square root of 𝑛.

The Central Limit Theorem states that, with the increase in sample size, the sample average converges (after standardization) to the standard Normal distribution.

Let us examine the Central Limit Theorem in the context of the example of the Uniform measurement. In Figure you may find the (approximated) density of the standardized average for the three sample sizes, based on the simulation that we carried out previously (as red, green, and blue lines). Alongside these densities you may also find the theoretical density of the standard Normal distribution (as a black line). Observe that the four curves are almost one on top of the other, suggesting that the approximation of the distribution of the average by the Normal distribution is good even for a sample size as small as 𝑛 = 10.

However, before jumping to the conclusion that the Central Limit Theorem applies to any sample size, let us consider another example. In this example we repeat the same simulation that we did with the Uniform distribution, but this time we take Exponential(0.5) measurements instead:
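A reconstruction of the parallel simulation for Exponential measurements (the sequence names are hypothetical, following the pattern of the Uniform example):

```r
exp.10   <- rep(0, 10^5)
exp.100  <- rep(0, 10^5)
exp.1000 <- rep(0, 10^5)
for (i in 1:10^5) {
  exp.10[i]   <- mean(rexp(10, 0.5))     # average of 10 Exponential(0.5) measurements
  exp.100[i]  <- mean(rexp(100, 0.5))    # average of 100 measurements
  exp.1000[i] <- mean(rexp(1000, 0.5))   # average of 1000 measurements
}
```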
The expectation of an Exponential(0.5) random variable is E(𝑋) = 1/𝜆 = 1/0.5 = 2 and the variance is V(𝑋) = 1/𝜆² = 1/(0.5)² = 4. Observe below that the expectations of the sample averages are equal to the expectation of the measurement, and the variances of the sample averages follow the relation V(𝑋̄) = V(𝑋)/𝑛:
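Presumably computed as (only the last output line survives below):

```r
mean(exp.10); mean(exp.100); mean(exp.1000)   # expectations of the sample averages
var(exp.10); var(exp.100); var(exp.1000)      # variances of the sample averages
```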
## [1] 0.004011308

This is in agreement with the decrease predicted by the theory. However, when one examines the densities of the sample averages in Figure one may see a clear distinction between the sampling distribution of the average for a sample of size 10 and the Normal distribution (compare the red curve to the black curve). The match between the green curve, which corresponds to a sample of size 𝑛 = 100, and the black line is better, but not perfect. When the sample size is as large as 𝑛 = 1000 (the blue curve) the agreement with the Normal curve is very good.

Applying the Central Limit Theorem

The conclusion of the Central Limit Theorem is that the sampling distribution of the sample average can be approximated by the Normal distribution, regardless of the distribution of the original measurement, provided that the sample size is large enough. This statement is very important, since it allows us, in the context of the sample average, to carry out probabilistic computations using the Normal distribution even if we do not know the actual distribution of the measurement. All we need to know for the computation are the expectation of the measurement, its variance (or standard deviation), and the sample size.

The theorem can be applied whenever probability computations associated with the sampling distribution of the average are required. The computation of the approximation is carried out by using the Normal distribution with the same expectation and the same standard deviation as the sample average. An example of such a computation was conducted in the subsection where the central interval that contains 95% of the sampling distribution of a Binomial average was required. The 2.5%- and the 97.5%-percentiles of the Normal distribution with the same expectation and variance as the sample average produced boundaries for the interval. These boundaries were in good agreement with the boundaries produced by the simulation. More examples will be provided in the Solved Exercises of this chapter and the next one.

With all its usefulness, one should treat the Central Limit Theorem with a grain of salt. The approximation may be valid for large samples, but may be poor for small samples, as the Exponential example above demonstrated.
Exercises

Exercise 7.1. The file "pop2.csv" contains information associated with the blood pressure of an imaginary population of size 100,000. The file can be found on the internet. The variables in this file are:

- id: A numerical variable. A 7-digit number that serves as a unique identifier of the subject.
- sex: A factor variable. The sex of each subject. The values are either "MALE" or "FEMALE".
- age: A numerical variable. The age of each subject.
- bmi: A numerical variable. The body mass index of each subject.
- systolic: A numerical variable. The systolic blood pressure of each subject.
- diastolic: A numerical variable. The diastolic blood pressure of each subject.
- group: A factor variable. The blood pressure category of each subject. The value is "NORMAL" when both the systolic blood pressure is within its normal range (between 90 and 139) and the diastolic blood pressure is within its normal range (between 60 and 89). The value is "HIGH" if either measurement of blood pressure is above its normal upper limit, and it is "LOW" if either measurement is below its normal lower limit.

Our goal in this question is to investigate the sampling distribution of the sample average of the variable "bmi". We assume a sample of size 𝑛 = 150.

1. Compute the population average of the variable "bmi".
2. Compute the population standard deviation of the variable "bmi".
3. Compute the expectation of the sampling distribution for the sample average of the variable.
4. Compute the standard deviation of the sampling distribution for the sample average of the variable.
5. Identify, using simulations, the central region that contains 80% of the sampling distribution of the sample average.
6. Identify, using the Central Limit Theorem, an approximation of the central region that contains 80% of the sampling distribution of the sample average.

Exercise 7.2. A subatomic particle hits a linear detector at random locations. The length of the detector is 10 nm and the hits are uniformly distributed. The locations of 25 random hits, measured from a specified endpoint of the interval, are marked and the average of the locations is computed.

1. What is the expectation of the average location?
2. What is the standard deviation of the average location?
3. Use the Central Limit Theorem in order to approximate the probability that the average location is in the left-most third of the linear detector.
4. The central region that contains 99% of the distribution of the average is of the form 5 ± 𝑐. Use the Central Limit Theorem in order to approximate the value of 𝑐.

Summary

Glossary

Random Sample: The probabilistic model for the values of a measurement in the sample, before the measurement is taken.

Sampling Distribution: The distribution of a random sample.

Sampling Distribution of a Statistic: A statistic is a function of the data; i.e. a formula applied to the data. The statistic becomes a random variable when the formula is applied to a random sample. The distribution of this random variable, which is inherited from the distribution of the sample, is its sampling distribution.

Sampling Distribution of the Sample Average: The distribution of the sample average, considered as a random variable.

The Law of Large Numbers: A mathematical result regarding the sampling distribution of the sample average. It states that the distribution of the average of measurements is highly concentrated in the vicinity of the expectation of a measurement when the sample size is large.
The Central Limit Theorem: A mathematical result regarding the sampling distribution of the sample average. It states that the distribution of the average is approximately Normal when the sample size is large.

Discussion in the Forum

Limit theorems in mathematics deal with the convergence of some property to a limit as some indexing parameter goes to infinity. The Law of Large Numbers and the Central Limit Theorem are examples of limit theorems. The property they consider is the sampling distribution of the sample average. The indexing parameter that goes to infinity is the sample size 𝑛.

Some people say that the Law of Large Numbers and the Central Limit Theorem are useless for practical purposes. These theorems deal with a sample size that goes to infinity. However, all sample sizes one finds in reality are necessarily finite. What is your opinion?

When forming your answer to this question you may give an example of a situation from your own field of interest in which conclusions of an abstract mathematical theory are used in order to solve a practical problem. Identify the merits and weaknesses of the application of the mathematical theory. For example, in making statistical inference one frequently needs to make statements regarding the sampling distribution of the sample average. For instance, one may want to identify the central region that contains 95% of the distribution. The Normal distribution is used in the computation. The justification is the Central Limit Theorem.
Student Learning Objective

This section provides an overview of the concepts and methods that were presented in the first part of the book. We attempt to relate them to each other and put them in perspective. Some problems are provided. The solutions to these problems require combinations of many of the tools that were presented in previous chapters. By the end of this chapter, the student should be able to:

- Have a better understanding of the relation between descriptive statistics, probability, and inferential statistics.
- Distinguish between the different uses of the concept of variability.
- Integrate the tools that were given in the first part of the book in order to solve complex problems.

An Overview

The purpose of the first part of the book was to introduce the fundamentals of statistics and teach the concepts of probability which are essential for the understanding of the statistical procedures that are used to analyze data. These procedures are presented and discussed in the second part of the book.

Data is typically obtained by selecting a sample from a population and taking measurements on the sample. There are many ways to select a sample, but no method of selection should violate the most important characteristic that a sample should possess, namely that it represents the population it came from. In this book we concentrate on simple random sampling. However, the reader should be aware of the fact that other sampling designs exist and may be more appropriate in specific applications. Given the sampled data, the
main concern of the science of statistics is making inference on the parameters of the population on the basis of the data collected. Such inferences are carried out with the aid of statistics, which are functions of the data.

Data is frequently stored in the format of a data frame, in which the columns are the measured variables and the rows are the observations associated with the selected sample. The main types of variables are numeric, either discrete or not, and factors. We learned how one can produce data frames and read data into R for further analysis.

Statistics is geared towards dealing with variability. Variability may emerge in different forms and for different reasons. It can be summarized, analyzed, and handled with many tools. Frequently, the same tool, or tools that have much resemblance to each other, may be applied in different settings and for different forms of variability. In order not to lose track, it is important to understand in each scenario the source and nature of the variability that is being examined.

An important split in terms of the source of variability is between descriptive statistics and probability. Descriptive statistics examines the distribution of data. The frame of reference is the data itself. Plots, such as bar plots, histograms, and box plots; tables, such as frequency and relative frequency tables, as well as cumulative relative frequency tables; and numerical summaries, such as the mean, median, and standard deviation, can all serve to aid the understanding of the distribution of the given data set.

In probability, on the other hand, the frame of reference is not the data at hand but, instead, all data sets that could have been sampled (the sample space of the sampling distribution). One may use similar plots, tables, and numerical summaries in order to analyze the distribution of functions of the sample (statistics), but the meaning of the analysis is different. As a matter of fact, the relevance of the probabilistic analysis to the data actually sampled is indirect. The given sample is only one realization within the sample space among all possible realizations. In the probabilistic context there is no special role for the observed realization in comparison to all other potential realizations.

The fact that the relation between probabilistic variability and the observed data is not direct does not make the relation unimportant. On the contrary, this indirect relation is the basis for making statistical inference. In statistical inference the characteristics of the data may be used in order to extrapolate from the sampled data to the entire population. A probabilistic description of the distribution of the sample is then used in order to assess the reliability of the extrapolation. For example, one may try to estimate the value of population parameters, such as the population average and the population standard deviation, on the basis of the parallel characteristics of the data. The variability of the sampling distribution is used in order to quantify the accuracy of this estimation. (See Example 5 below.)

Statistics, like many other empirically driven forms of science, uses theoretical modeling for assessing and interpreting observational data. In statistics this modeling component usually takes the form of a probabilistic model for the measurements as random variables. In the first part of this book we have encountered several such models.
The model of simple sampling assumed that each subset of a given size from the population has equal probability of being selected as the sample. Other, more structured, models assumed a specific form for the distribution of the measurements. The examples we considered were the Binomial, the Poisson, the Uniform, the Exponential, and the Normal distributions. Many more models may be found in the literature and may be applied when appropriate. Some of these other models have R functions that can be used in order to compute the distribution and produce simulations.

A statistic is a function of sampled data that is used for making statistical inference. When a statistic, such as the average, is computed on a random sample then the outcome, from a probabilistic point of view, is a random variable. The distribution of this random variable depends on the distribution of the measurements that form the sample but is not identical to that distribution. Hence, for example, the distribution of an average of a sample from the Uniform distribution does not follow the Uniform distribution. In general, the relation between the distribution of a measurement and the distribution of a statistic computed from a sample that is generated from that distribution may be complex. Luckily, in the case of the sample average the relation is rather simple, at least for samples that are large enough.

The Central Limit Theorem provides an approximation of the distribution of the sample average that typically improves with the increase in sample size. The expectation of the sample average is equal to the expectation of a single measurement and the variance is equal to the variance of a single measurement, divided by the sample size. The Central Limit Theorem adds to this observation the statement that the distribution of the sample average may be approximated by the Normal distribution (with the same expectation and standard deviation as those of the sample average). This approximation is valid for practically any distribution of the measurement. The conclusion is, at least in the case of the sample average, that the distribution of the statistic depends on the underlying distribution of the measurements only through their expectation and variance but not through other characteristics of the distribution.

The conclusion of the theorem extends to quantities proportional to the sample average. Therefore, since the sum of the sample is obtained by multiplying the sample average by the sample size 𝑛, we get that the theorem can be used in order to approximate the distribution of sums. As a matter of fact, the theorem may be generalized much further. For example, it may be shown to hold for a smooth function of the sample average, thereby increasing the applicability of the theorem and its importance.

In the next section we will solve some practical problems. In order to solve these problems you are required to be familiar with the concepts and tools that were introduced throughout the first part of the book. Hence, we strongly recommend that you read again and review all the chapters of the book that preceded this one before moving on to the next section.

Integrated Applications

The main message of the Central Limit Theorem is that for the sample average we may compute probabilities based on the Normal distribution and obtain reasonable approximations, provided that the sample size is not too small.
All we need to figure out for the computations are the expectation and variance of the underlying measurement. Otherwise, the exact distribution of that measurement is irrelevant. Let us demonstrate the applicability of the Central Limit Theorem in two examples.

Example 1

A study involving stress is done on a college campus among the students. The stress scores follow a (continuous) Uniform distribution with the lowest stress score equal to 1 and the highest equal to 5. Using a sample of 75 students, find:

1. The probability that the average stress score for the 75 students is less than 2.
2. The 90th percentile for the average stress score for the 75 students.
3. The probability that the total of the 75 stress scores is less than 200.
4. The 90th percentile for the total stress score for the 75 students.

Solution: Denote by 𝑋 the stress score of a random student. We are given that 𝑋 ∼ Uniform(1, 5). We use the formulas E(𝑋) = (𝑎 + 𝑏)/2 and V(𝑋) = (𝑏 − 𝑎)²/12 in order to obtain the expectation and variance of a single observation, and then we use the relations E(𝑋̄) = E(𝑋) and V(𝑋̄) = V(𝑋)/𝑛 to translate these results to the expectation and variance of the sample average:
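A reconstruction of the computation (the object names "mu.bar" and "sig.bar" are quoted later in the text; the output below is the standard deviation of the average):

```r
a <- 1; b <- 5; n <- 75
mu.bar <- (a + b)/2               # expectation of the sample average
sig.bar <- sqrt((b - a)^2/12/n)   # standard deviation of the sample average
sig.bar
```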
## [1] 0.1333333

After obtaining the expectation and the variance of the sample average we can forget about the Uniform distribution and proceed only with the R functions that are related to the Normal distribution. By the Central Limit Theorem we get that the distribution of the sample average is approximately Normal(𝜇, 𝜎²), with 𝜇 = mu.bar and 𝜎 = sig.bar.

In Question 1.1 we are asked to find the value of the cumulative distribution function of the sample average at 𝑥 = 2:
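The calls for Questions 1.1 and 1.2 are missing from the source; a reconstruction (the output line that follows is the 90th percentile of Question 1.2):

```r
pnorm(2, mu.bar, sig.bar)     # Question 1.1: P(sample average < 2)
qnorm(0.9, mu.bar, sig.bar)   # Question 1.2: 90th percentile of the average
```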
## [1] 3.170874

The sample average is equal to the total sum divided by the number of observations, 𝑛 = 75 in this example. The total sum is less than 200 if, and only if, the average is less than 200/𝑛. Therefore, for Question 1.3:
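A reconstruction of the call (its output is not shown in the source):

```r
pnorm(200/75, mu.bar, sig.bar)   # Question 1.3: P(total < 200)
```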
Example 2

Consider again the same stress study that was described in Example 1 and answer the same questions. However, this time assume that the stress score may obtain only the values 1, 2, 3, 4 or 5, with the same likelihood for obtaining each of the values.

Solution: Denote again by 𝑋 the stress score of a random student. The modified distribution states that the sample space of 𝑋 is the set of integers {1, 2, 3, 4, 5}, with equal probability for each value. Since the probabilities must sum to 1 we get that P(𝑋 = 𝑥) = 1/5, for all 𝑥 in the sample space. In principle we may repeat the steps of the solution of the previous example, substituting the expectation and standard deviation of the continuous measurement by the discrete counterparts:
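A reconstruction of the discrete computation (variable names other than "mu.bar" and "sig.bar" are hypothetical; the output below is the standard deviation of the average):

```r
X.val <- 1:5
P.val <- rep(1/5, 5)
mu.X <- sum(X.val*P.val)                     # expectation of a single score
sig.X <- sqrt(sum((X.val - mu.X)^2*P.val))   # standard deviation of a single score
mu.bar <- mu.X                               # expectation of the sample average
sig.bar <- sig.X/sqrt(75)                    # standard deviation of the sample average
sig.bar
```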
## [1] 0.1632993

Notice that the expectation of the sample average is the same as before but the standard deviation is somewhat larger, due to the larger variance in the distribution of a single response. We may apply the Central Limit Theorem again in order to conclude that the distribution of the average is approximately Normal(𝜇, 𝜎²), with 𝜇 = mu.bar as before and with the new 𝜎 = sig.bar.

For Question 2.1 we compute that the cumulative distribution function of the sample average at 𝑥 = 2 is approximately equal to:

## [1] 4.570649e-10

and the 90%-percentile is:

## [1] 3.209276

which produces the answer to Question 2.2. Similarly to the solution of Question 1.3 we may conclude that the total sum is less than 200 if, and only if, the average is less than 200/𝑛. Therefore, for Question 2.3:

## [1] 0.02061342

Observe that in the current version of the question the score is integer-valued. Clearly, the sum of the scores is also integer-valued. Hence we may choose to apply the continuity correction for the Normal approximation, whereby we approximate the probability that the sum is less than 200 (i.e. is less than or equal to 199) by the probability that a Normal random variable is less than or equal to 199.5. Translating this event back to the scale of the average we get the approximation:
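A reconstruction of the corrected call (its output is not shown in the source):

```r
pnorm(199.5/75, mu.bar, sig.bar)   # Question 2.3 with a continuity correction
```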
Example 3

Suppose that a market research analyst for a cellular phone company conducts a study of their customers who exceed the time allowance included in their basic cellular phone contract. The analyst finds that for those customers who exceed the time included in their basic contract, the excess time used follows an Exponential distribution with a mean of 22 minutes. Consider a random sample of 80 customers and find:

1. The probability that the average excess time used by the 80 customers in the sample is longer than 20 minutes.
2. The 95th percentile for the average excess time for samples of 80 customers who exceed their basic contract time allowances.

Solution: Let 𝑋 be the excess time for customers who exceed the time included in their basic contract. We are told that 𝑋 ∼ Exponential(𝜆). For the Exponential distribution E(𝑋) = 1/𝜆. Hence, given that E(𝑋) = 22, we can conclude that 𝜆 = 1/22. For the Exponential we also have that V(𝑋) = 1/𝜆². Therefore:
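A reconstruction (the output below is the standard deviation of the sample average, 22/√80):

```r
lam <- 1/22
n <- 80
mu.bar <- 1/lam               # expectation of the sample average
sig.bar <- sqrt(1/lam^2/n)    # standard deviation of the sample average
sig.bar
```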
## [1] 2.459675

Like before, we can forget at this stage about the Exponential distribution and refer henceforth to the Normal distribution. In Question 3.1 we are asked to compute the probability above 𝑥 = 20. The total probability is 1. Hence, the required probability is the difference between 1 and the probability of being less than or equal to 𝑥 = 20:
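A reconstruction of the two computations (neither output is shown in the source; the Question 3.2 call is included for completeness under the same assumptions):

```r
1 - pnorm(20, mu.bar, sig.bar)   # Question 3.1: P(average > 20)
qnorm(0.95, mu.bar, sig.bar)     # Question 3.2: 95th percentile of the average
```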
Example 4

A beverage company produces cans that are supposed to contain 16 ounces of beverage. Under normal production conditions the expected amount of beverage in each can is 16.0 ounces, with a standard deviation of 0.10 ounces. As a quality control measure, each hour the QA department samples 50 cans from the production during the previous hour and measures the content in each of the cans. If the average content of the 50 cans is below a control threshold then production is stopped and the can-filling machine is re-calibrated.

1. Compute the probability that the amount of beverage in a random can is below 15.95 ounces.
2. Compute the probability that the sample average of the amount of beverage in 50 cans is below 15.95 ounces.
3. Find a threshold with the property that the probability of stopping the machine in a given hour is 5% when, in fact, the production conditions are normal.
4. Consider the data in the file "QC.csv". It contains measurement results for 8 hours. Assume that we apply the threshold that was obtained in Question 4.3. At the end of which of the hours did the filling machine need re-calibration?
5. Based on the data in the file "QC.csv", which of the hours contain measurements that are suspected outliers in comparison to the other measurements conducted during that hour?

Solution: The only information we have on the distribution of each measurement is its expectation (16.0 ounces under normal conditions) and its standard deviation (0.10, under the same conditions). We do not know, from the information provided in the question, the actual distribution of a measurement. (The fact that the production conditions are normal does not imply that the distribution of
the measurement is the Normal distribution!) Hence, the correct answer to Question 4.1 is that there is not enough information to calculate the probability.

When we deal with the sample average, on the other hand, we may apply the Central Limit Theorem in order to obtain at least an approximation of the probability. Observe that the expectation of the sample average is 16.0 ounces and the standard deviation is 0.1/√50. The distribution of the average is approximately the Normal distribution:

## [1] 0.000203476

Hence, we get that the probability of the average being less than 15.95 ounces is (approximately) 0.0002, which is the solution to Question 4.2.

In order to solve Question 4.3 we may apply the function "qnorm" in order to compute the 5%-percentile of the distribution of the average:

## [1] 15.97674

Consider the data in the file "QC.csv". Let us read the data into a data frame by the name "QC" and apply the function "summary" to obtain an overview of the content of the file:
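The calls for this part of the solution are missing from the source; a reconstruction (the two output lines above are, respectively, the results of the "pnorm" and "qnorm" calls, and the "summary" table itself did not survive the extraction):

```r
pnorm(15.95, 16, 0.1/sqrt(50))   # Question 4.2: P(average < 15.95)
qnorm(0.05, 16, 0.1/sqrt(50))    # Question 4.3: the 5%-percentile threshold
QC <- read.csv("QC.csv")         # read the quality-control data
summary(QC)                      # overview of the 8 hourly variables
```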
Observe that the file contains 8 quantitative variables that are given the names h1, …, h8. Each of these variables contains the 50 measurements conducted in the given hour. Observe that the mean is computed as part of the summary.

The threshold that we apply to monitor the filling machine is 15.97674. Clearly, the average of the measurements at the third hour, "h3", is below the threshold. Not enough significant digits of the average of the 8th hour are presented to be able to say whether that average is below or above the threshold. A more accurate presentation of the computed mean is obtained by the application of the function "mean" directly to the data:

## [1] 15.9736

Now we can see that this average is also below the threshold. Hence, the machine required re-calibration after the 3rd and the 8th hours, which is the answer to Question 4.4.

In Chapter it was proposed to use box plots in order to identify points that are suspected to be outliers. We can use the expression "boxplot(QC$h1)" in order to obtain the box plot of the data of the first hour and go through the names of the variables one by one in order to screen all the variables. Alternatively, we may apply the function "boxplot" directly to the data frame "QC" and get a plot with box plots of all the variables in the data frame plotted side by side:
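The call (a reconstruction; the resulting figure did not survive the extraction):

```r
boxplot(QC)   # side-by-side box plots of h1, ..., h8
```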
Example 5

A measurement follows the Uniform(0, b) distribution, for an unknown value of b. Two statisticians propose two distinct ways to estimate the unknown quantity b with the aid of a sample of size n = 100. Statistician A proposes to use twice the sample average (2X̄) as an estimate. Statistician B proposes to use the largest observation instead.

The motivation for the proposal made by Statistician A is that the expectation of the measurement is equal to E(X) = b/2. A reasonable way to estimate the expectation is to use the sample average X̄. Thereby, a reasonable way to estimate b, twice the expectation, is to use 2X̄. A motivation for the proposal made by Statistician B is that although the largest observation is indeed smaller than b, still it may not be much smaller than that value.

In order to choose between the two options they agreed to prefer the statistic that tends to have values that are closer to b (with respect to the sampling distribution). They also agreed to compute the expectation and variance of each statistic. The performance of a statistic is evaluated using the mean square error (MSE), which is defined as the sum of the variance and the squared difference between the expectation and b. Namely, if T is the statistic (either the one proposed by Statistician A or by Statistician B) then

MSE = V(T) + (E(T) − b)².

A smaller mean square error corresponds to a better, more accurate, statistic.

5.1. Assume that the actual value of b is 10 (b = 10). Use simulations to compute the expectation, the variance and the MSE of the statistic proposed by Statistician A.

5.2. Assume that the actual value of b is 10 (b = 10). Use simulations to compute the expectation, the variance and the MSE of the statistic proposed by Statistician B. (Hint: the maximal value of a sequence can be computed with the function "max".)

5.3. Assume that the actual value of b is 13.7 (b = 13.7). Use simulations to compute the expectation, the variance and the MSE of the statistic proposed by Statistician A.

5.4. Assume that the actual value of b is 13.7 (b = 13.7). Use simulations to compute the expectation, the variance and the MSE of the statistic proposed by Statistician B. (Hint: the maximal value of a sequence can be computed with the function "max".)

5.5. Based on the results in Questions 5.1–5.4, which of the two statistics seems to be preferable?

Solution

In Questions 5.1 and 5.2 we take the value of b to be equal to 10. Consequently, the distribution of a measurement is Uniform(0, 10). In order to generate the sampling distributions we produce two sequences, "A" and "B", both of length 100,000, with the evaluations of the statistics; the simulation is sketched below. Observe that in each iteration of the "for" loop a sample of size n = 100 from the Uniform(0, 10) distribution is generated. The statistic proposed by Statistician A ("2*mean(X.samp)") is computed and stored in the sequence "A" and the statistic proposed by Statistician B ("max(X.samp)") is computed and stored in the sequence "B". Consider the statistic proposed by Statistician A:
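The simulation, and the summaries for Statistician A's estimator, may be reconstructed along the following lines (a sketch built from the expressions quoted above; the original outputs are not reproduced):

A <- rep(0, 10^5)
B <- rep(0, 10^5)
for (i in 1:10^5) {
  X.samp <- runif(100, 0, 10)   # a sample of size n = 100 from Uniform(0, 10)
  A[i] <- 2*mean(X.samp)        # the statistic of Statistician A
  B[i] <- max(X.samp)           # the statistic of Statistician B
}
mean(A)                         # expectation of the statistic
var(A)                          # variance of the statistic
var(A) + (mean(A) - 10)^2       # the MSE, with b = 10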
For the case b = 13.7, the expectation and variance of Statistician B's statistic produce a mean square error of

0.01787562 + (13.56467 − 13.7)² = 0.03618937.

Once more, the mean square error of the statistic proposed by Statistician B is smaller. Considering the fact that the mean square error of the statistic proposed by Statistician B is smaller in both cases, we may conclude that this statistic seems to be better for the estimation of b in this setting of Uniformly distributed measurements.

Discussion in the Forum

In this course we have learned many subjects. Most of these subjects, especially for those that had no previous exposure to statistics, were unfamiliar. In this forum we would like to ask you to share with us the difficulties that you encountered.

What was the topic that was most difficult for you to grasp? In your opinion, what was the source of the difficulty?

When forming your answer to this question we would appreciate it if you could elaborate and give details of what the problem was. Pointing to deficiencies in the learning material and to confusing explanations will help us improve the presentation for future applications of this course.
Student Learning Objectives

The next section of this chapter introduces the basic issues and tools of statistical inference. These tools are the subject matter of the second part of this book. In Chapters – we use data on the specifications of cars in order to demonstrate the application of the tools for making statistical inference. In the third section of this chapter we present the data frame that contains this data. The fourth section reviews probability topics that were discussed in the first part of the book and are relevant for the second part. By the end of this chapter, the student should be able to:

Define key terms that are associated with inferential statistics.

Recognize the variables of the "cars.csv" data frame.

Revise concepts related to random variables, the sampling distribution and the Central Limit Theorem.

Key Terms

The first part of the book deals with descriptive statistics and with probability. In descriptive statistics one investigates the characteristics of the data by using graphical tools and numerical summaries. The frame of reference is the observed data. In probability, on the other hand, one extends the frame of reference to include all data sets that could have potentially emerged, with the observed data as one among many.

The second part of the book deals with inferential statistics. The aim of statistical inference is to gain insight regarding the population parameters from the observed data. The method for obtaining such insight involves the application of formal computations to the data. The interpretation of the outcome of these formal computations is carried out in the probabilistic context, in which one considers the application of these formal computations to all potential data sets. The justification for using the specific form of computation on the observed data stems from the examination of the probabilistic properties of the formal computations.

Typically, the formal computations will involve statistics, which are functions of the data. The assessment of the probabilistic properties of the computations will result from the sampling distribution of these statistics.

An example of a problem that requires statistical inference is the estimation of a parameter of the population using the observed data. Point estimation attempts to obtain the best guess of the value of that parameter. An estimator is a statistic that produces such a guess. One may prefer an estimator whose sampling distribution is more concentrated about the population parameter value over another estimator whose sampling distribution is less so. Hence, the justification for selecting a specific statistic as an estimator is a consequence of the probabilistic characteristics of this statistic in the context of the sampling distribution.

For example, a car manufacturer may be interested in the fuel consumption of a new type of car. In order to assess it, the manufacturer may apply a standard test cycle to a sample of 10 new cars of the given type and measure their fuel consumption. The parameter of interest is the average fuel consumption among all cars of the given type. The average consumption of the 10 cars is a point estimate of the parameter of interest.

An alternative approach for the estimation of a parameter is to construct an interval that is most likely to contain the population parameter. Such an interval, which is computed on the basis of the data, is called a confidence interval.
The sampling probability that the confidence interval will indeed contain the parameter value is called the confidence level. Confidence intervals are constructed so as to have a prescribed confidence level.

A different problem in statistical inference is hypothesis testing. The scientific paradigm involves the proposal of new theories and hypotheses that presumably provide a better description of the laws of Nature. On the basis of these hypotheses one may propose predictions that can be examined empirically. If the empirical evidence is consistent with the predictions of the new hypothesis but not with those of the old theory then the old theory is rejected in favor of the new one. Otherwise, the established theory maintains its status. Statistical hypothesis testing is a formal method, built on this paradigm, for determining which of the two hypotheses should prevail.

Each of the two hypotheses, the old and the new, predicts a different distribution for the empirical measurements. In order to decide which of the distributions is more in tune with the data, a statistic is computed. This statistic is called the test statistic. A threshold is set and, depending on where the test statistic falls with respect to this threshold, the decision is made whether or not to reject the old theory in favor of the new one.

This decision rule is not error proof, since the test statistic may fall by chance on the wrong side of the threshold. Nonetheless, by the examination of the sampling distribution of the test statistic one is able to assess the probability of making an error. In particular, the probability of erroneously rejecting the currently accepted theory (the old one) is called the significance level of the test. Indeed, the threshold is selected in order to assure a small enough significance level.

Returning to the car manufacturer: assume that the car in question is manufactured in two different factories. One may want to examine the hypothesis that the car's fuel consumption is the same for both factories. If 5 of the tested cars were manufactured in one factory and the other 5 in the other factory then the test may be based on the absolute value of the difference between the average consumption of the first 5 and the average consumption of the other 5.

The method of testing hypotheses is also applied in other practical settings where it is required to make decisions. For example, before a new treatment for a medical condition is approved for marketing by the appropriate authorities it must undergo a process of objective testing through clinical trials. In these trials the new treatment is administered to some patients while others obtain the (currently) standard treatment. Statistical tests are applied in order to compare the two groups of patients. The new treatment is released to the market only if it is shown to be beneficial with statistical significance and it is shown to have no unacceptable side effects.

In subsequent chapters we will discuss in more detail the computation of point estimates, the construction of confidence intervals, and the application of hypothesis testing. The discussion will be initiated in the context of a single measurement but will later be extended to settings that involve the comparison of measurements. An example of such analysis is the analysis of clinical trials, where the response of the patients treated with the new procedure is compared to the response of the patients that were treated with the conventional treatment.
This comparison involves the same measurement taken for two sub-samples. The tools of statistical inference – hypothesis testing, point estimation and the construction of confidence intervals – may be used in order to carry out this comparison.

Other comparisons may involve two measurements taken for the entire sample. An important tool for the investigation of the relations between two measurements, or variables, is regression. Models of regression describe the change in the distribution of one variable as a function of the other variable. Again, point estimation, confidence intervals, and hypothesis testing can be carried out in order to examine regression models. The variable whose distribution is being described is called the response; the variable that explains this distribution is called the explanatory variable.
The Cars Data Set

Statistical inference is applied to data in order to address specific research questions. We will demonstrate the different inferential procedures using a specific data set, with the aim of making the discussion of the different procedures more concrete. The same data set will be used for all the procedures that are presented in Chapters –.

This data set contains information on various models of cars and is stored in the CSV file "cars.csv". The file can be found on the internet at . You are advised to download this file to your computer and store it in the working directory of R.

2 The original "Automobiles" data set is accessible at the UCI Machine Learning Repository (). This data was assembled by Jeffrey C. Schlimmer, using as sources the 1985 Model Import Car and Truck Specifications and the 1985 Ward's Automotive Yearbook. The current file "cars.csv" is based on all 205 observations of the original data set. We selected 17 of the 26 variables available in the original source.

Let us read the content of the CSV file into an R data frame and produce a brief summary:
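A sketch of the reading step, assuming "cars.csv" is stored in the working directory (only part of the resulting output is shown below):

cars <- read.csv("cars.csv")  # read the car specifications into a data frame
summary(cars)                 # brief summary of all 17 variables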
## Mean :53.72 Mean :2556 Mean :126.9 Mean :104.3
## 3rd Qu.:55.50 3rd Qu.:2935 3rd Qu.:141.0 3rd Qu.:116.0
## Max. :59.80 Max. :4066 Max. :326.0 Max. :288.0
## NA's :2
## peak.rpm city.mpg highway.mpg price
## Min. :4150 Min. :13.00 Min. :16.00 Min. : 5118
## 1st Qu.:4800 1st Qu.:19.00 1st Qu.:25.00 1st Qu.: 7775
## Median :5200 Median :24.00 Median :30.00 Median :10295
## Mean :5125 Mean :25.22 Mean :30.75 Mean :13207
## 3rd Qu.:5500 3rd Qu.:30.00 3rd Qu.:34.00 3rd Qu.:16500

Observe that the first 6 variables are factors, i.e. they contain qualitative data that is associated with a categorization or with the description of an attribute. The last 11 variables are numeric and contain quantitative data.

Factors are summarized in R by listing the attributes and the frequency of each attribute value. If the number of attributes is large then only the most frequent attributes are listed. Numerical variables are summarized in R with the aid of the smallest and largest values, the three quartiles (Q1, the median, and Q3) and the average (mean).

The third factor variable, "num.of.doors", as well as several of the numerical variables, have a special category titled "NA's". This category gives the number of missing values among the observations. For a given variable, the observations for which a value of the variable is not recorded are marked as missing. R uses the symbol "NA" to identify a missing value.

Missing observations are a concern in the analysis of statistical data. If the relative frequency of missing values is substantial, and the reason for not obtaining the data for specific observations is related to the phenomenon under investigation, then naïve statistical inference may produce biased conclusions. In the "cars" data frame missing values are less of a concern, since their relative frequency is low.

One should be on the lookout for missing values when applying R to data, since the different functions may have different ways of dealing with missing
values. One should make sure that the appropriate way is applied for the specific analysis.

Consider the variables of the data frame "cars":

make: The name of the car producer (a factor).

fuel.type: The type of fuel used by the car, either diesel or gas (a factor).

num.of.doors: The number of passenger doors, either two or four (a factor).

body.style: The type of the car (a factor).

drive.wheels: The wheels powered by the engine (a factor).

engine.location: The location of the engine in the car (a factor).

wheel.base: The distance between the centers of the front and rear wheels in inches (numeric).

length: The length of the body of the car in inches (numeric).

width: The width of the body of the car in inches (numeric).

height: The height of the car in inches (numeric).

curb.weight: The total weight in pounds of a vehicle with standard equipment and a full tank of fuel, but with no passengers or cargo (numeric).

engine.size: The volume swept by all the pistons inside the cylinders in cubic inches (numeric).

horsepower: The power of the engine in horsepower (numeric).

peak.rpm: The top engine speed in rounds per minute (numeric).

city.mpg: The fuel consumption of the car in city driving conditions, measured as miles per gallon of fuel (numeric).

highway.mpg: The fuel consumption of the car in highway driving conditions, measured as miles per gallon of fuel (numeric).

price: The retail price of the car in US Dollars (numeric).
Two important examples of statistics are the sample average and the sample standard deviation. These are important examples, but clearly not the only ones. Given numerical data, one may compute the smallest value, the largest value, the quartiles, and the median. All are examples of statistics. Statistics may also be associated with factors. The frequency of a given attribute among the observations is a statistic. (An example of such a statistic is the frequency of diesel cars in the data frame.) As part of the discussion in the subsequent chapters we will consider these and other types of statistics.

Any statistic, when computed in the context of the data frame being analyzed, obtains a single numerical value. However, once a sampling distribution is being considered, one may view the same statistic as a random variable. A statistic is a function, or a formula, which is applied to the data frame. Consequently, when a random collection of data frames is the frame of reference, the application of the formula to each of the data frames produces a random collection of values, which is the sampling distribution of the statistic.

We distinguish in the text between the case where the statistic is computed in the context of the given data frame and the case where the computation is conducted in the context of the random sample. This distinction is emphasized by the use of small letters for the former and capital letters for the latter.

Consider, for example, the sample average. In the context of the observed data we denote the data values for a specific variable by x₁, x₂, …, xₙ. The sample average computed for these values is denoted by

x̄ = (x₁ + x₂ + ⋯ + xₙ)/n .

On the other hand, if the discussion of the sample average is conducted in the context of a random sample then the sample is a sequence X₁, X₂, …, Xₙ of random variables. The sample average is denoted in this context as

X̄ = (X₁ + X₂ + ⋯ + Xₙ)/n .

The same formula that was applied to the data values is applied now to the random components of the random sample. In the first context x̄ is an observed non-random quantity. In the second context X̄ is a random variable, an abstract mathematical concept.

A second example is the sample variance. When we compute the sample variance for the observed data we use the formula:
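The display itself is absent here; presumably it is the standard sample-variance formula,

s² = [(x₁ − x̄)² + (x₂ − x̄)² + ⋯ + (xₙ − x̄)²]/(n − 1) ,

and applying the same formula to the random sample X₁, X₂, …, Xₙ and to X̄ produces the corresponding random variable, denoted S².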
The Sampling Distribution

The sampling distribution may emerge as a random selection of samples from a particular population. In such a case the sampling distribution of the sample, and hence of the statistic, is linked to the distribution of the values of the variable in the population. Alternatively, one may assign a theoretical distribution to the measurement associated with the variable. In this other case the sampling distribution of the statistic is linked to the theoretical model.

Consider, for example, the variable "price" that describes the prices of the 205 car types (with 4 prices missing) in the data frame "cars". In order to define a sampling distribution one may imagine a larger population of car types, perhaps all the car types that were sold during the 80's in the United States, or some other frame of reference, with the car types that are included in the data frame considered as a random sample from that larger population.

The observed sample corresponds to car types that were sold in 1985. Had one chosen to consider car types from a different year then one may expect to obtain other evaluations of the price variable. The reference population, in this case, is the distribution of the prices of the car types that were sold during the 80's, and the sampling distribution is associated with a random selection of a particular year within this period and the consideration of the prices of the car types sold in that year. The data for 1985 is what we have at hand. But in the sampling distribution we take into account the possibility that we could have obtained data for 1987, for example, rather than the data we did get.

An alternative approach for addressing the sampling distribution is to consider a theoretical model. Referring again to the variable "price", one may propose an Exponential model for the distribution of the prices of cars. This model implies that car types in the lower spectrum of the price range are more frequent than cars with a higher price tag. With this model in mind, one may propose the sampling distribution to be composed of 205 unrelated copies from the Exponential distribution (or 201 if we do not want to include the missing values). The rate λ of the associated Exponential distribution is treated as an unknown parameter. One of the roles of statistical inference is to estimate the value of this parameter with the aid of the data at hand.

The sampling distribution is relevant also for factor variables. Consider the variable "fuel.type" as an example. In the given data frame the frequency of diesel cars is 20. However, had one considered another year during the 80's one may have obtained a different frequency, resulting in a sampling distribution. This type of sampling distribution refers to all car types that were sold in the United States during the 80's as the frame of reference.

Alternatively, one may propose a theoretical model for the sampling distribution. Imagine there is a probability p that a car runs on diesel (and a probability 1 − p that it runs on gas). Hence, when one selects 205 car types at random, one obtains that the distribution of the frequency of car types that run on diesel has the Binomial(205, p) distribution. This is the sampling distribution of the frequency statistic. Again, the value of p is unknown and one of our tasks is to estimate it from the data we observe.

In the context of statistical inference the use of theoretical models for the sampling distribution is the standard approach.
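To illustrate the Binomial model, one may simulate the sampling distribution of the diesel frequency for a hypothetical value of p (the value 20/205 below is only a plug-in guess motivated by the observed frequency; it is not part of the original text):

p <- 20/205                   # hypothetical value of the unknown parameter
freq <- rbinom(10^5, 205, p)  # simulated frequencies of diesel cars among 205
mean(freq)                    # compare to the expectation 205*p
sd(freq)                      # compare to sqrt(205*p*(1-p))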
There are situations, such as the application of surveys to a specific target population, where the consideration of the entire population as the frame of reference is more natural. But in most other applications the consideration of theoretical models is the method of choice. In this part of the book, where we consider statistical inference, we will always use the theoretical approach for modeling the sampling distribution.

Theoretical Distributions of Observations

In the first part of the book we introduced several theoretical models that may describe the distribution of an observation. Let us take the opportunity and review the list of models:

Binomial: The Binomial distribution is used in settings that involve counting the number of occurrences of a particular outcome. The parameters that determine the distribution are n, the number of observations, and p, the probability of obtaining the particular outcome in each observation. The expression "Binomial(n, p)" is used to mark the Binomial distribution. The sample space for this distribution is formed by the integer values {0, 1, 2, …, n}. The expectation of the distribution is np and the variance is np(1 − p). The functions "dbinom", "pbinom", and "qbinom" may be used in order to compute the probability, the cumulative probability, and the percentiles, respectively, for the Binomial distribution. The function "rbinom" can be used in order to simulate a random sample from this distribution.

Poisson: The Poisson distribution is also used in settings that involve counting. This distribution approximates the Binomial distribution when the number of examinations n is large but the probability p of the particular outcome is small. The parameter that determines the distribution is the expectation λ. The expression "Poisson(λ)" is used to mark the Poisson distribution. The sample space for this distribution is the entire collection of natural numbers {0, 1, 2, …}. The expectation of the distribution is λ and the variance is also λ. The functions "dpois", "ppois", and "qpois" may be used in order to compute the probability, the cumulative probability, and the percentiles, respectively, for the Poisson distribution. The function "rpois" can be used in order to simulate a random sample from this distribution.

Uniform: The Uniform distribution is used in order to model measurements that may have values in a given interval, with all values in this interval equally likely to occur. The parameters that determine the distribution are a and b, the two end-points of the interval. The expression "Uniform(a, b)" is used to identify the Uniform distribution. The sample space for this distribution is the interval [a, b]. The expectation of the distribution is (a + b)/2 and the variance is (b − a)²/12. The functions "dunif", "punif", and "qunif" may be used in order to compute the density, the cumulative probability, and the percentiles for the Uniform distribution. The function "runif" can be used in order to simulate a random sample from this distribution.

Exponential: The Exponential distribution is frequently used to model times between events. It can also be used in other cases where the outcome of the measurement is a positive number and where a smaller value is more likely than a larger value. The parameter that determines the distribution is the rate λ. The expression "Exponential(λ)" is used to identify the Exponential distribution.
The sample space for this distribution is the collection of positive numbers. The expectation of the distribution is 1/λ and the variance is 1/λ². The functions "dexp", "pexp", and "qexp" may be used in order to compute the density, the cumulative probability, and the percentiles, respectively, for the Exponential distribution. The function "rexp" can be used in order to simulate a random sample from this distribution.

Normal: The Normal distribution frequently serves as a generic model for the distribution of a measurement. Typically, it also emerges as an approximation of the sampling distribution of statistics. The parameters that determine the distribution are the expectation μ and the variance σ². The expression "Normal(μ, σ²)" is used to mark the Normal distribution. The sample space for this distribution is the collection of all numbers, negative or positive. The expectation of the distribution is μ and the variance is σ². The functions "dnorm", "pnorm", and "qnorm" may be used in order to compute the density, the cumulative probability, and the percentiles for the Normal distribution. The function "rnorm" can be used in order to simulate a random sample from this distribution.

Sampling Distribution of Statistics

Theoretical models describe the distribution of a measurement as a function of a parameter, or a small number of parameters. For example, in the Binomial case the distribution is determined by the number of trials n and by the probability of success in each trial p. In the Poisson case the distribution is a function of the expectation λ. For the Uniform distribution we may use the end-points of the interval, a and b, as the parameters. In the Exponential case the rate λ is a natural parameter for specifying the distribution, and in the Normal case the expectation μ and the variance σ² may be used for that role.

The general formulation of statistical inference problems involves the identification of a theoretical model for the distribution of the measurements. This theoretical model is a function of a parameter whose value is unknown. The goal is to produce statements that refer to this unknown parameter. These statements are based on a sample of observations from the given distribution. For example, one may try to guess the value of the parameter (point estimation), one may propose an interval which contains the value of the parameter with some prescribed probability (confidence interval) or one may test the hypothesis that the parameter obtains a specific value (hypothesis testing).

The vehicles for conducting the statistical inferences are statistics that are computed as functions of the measurements. In the case of point estimation these statistics are called estimators. In the case where the construction of an interval that contains the value of the parameter is the goal, the statistic is called a confidence interval. In the case of testing a hypothesis these statistics are called test statistics.

In all cases of inference, the relevant statistic possesses a distribution that it inherits from the sampling distribution of the observations. This distribution is the sampling distribution of the statistic. The properties of the statistic as a tool for inference are assessed in terms of its sampling distribution.
The sampling distribution of a statistic is a function of the sample size and of the parameters that determine the distribution of the measurements, but otherwise may be of complex structure.

In order to assess the performance of the statistics as agents of inference one should be able to determine their sampling distribution. We will apply two approaches for this determination. One approach is to use a Normal approximation. This approach relies on the Central Limit Theorem. The other approach is to simulate the distribution. This other approach relies on the functions available in R for the simulation of a random sample from a given distribution.

The Normal Approximation

In general, the sampling distribution of a statistic is not the same as the sampling distribution of the measurements from which it is computed. For example, if the measurements are from the Uniform distribution then the distribution of a function of the measurements will, in most cases, not possess the Uniform distribution. Nonetheless, in many cases one may still identify, at least approximately, what the sampling distribution of the statistic is.

The most important scenario where the limit distribution of the statistic has a known shape is when the statistic is the sample average or a function of the sample average. In such a case the Central Limit Theorem may be applied in order to show that, at least for a sample size not too small, the distribution of the statistic is approximately Normal.

In the case where the Normal approximation may be applied, a probabilistic statement associated with the sampling distribution of the statistic can be substituted by the same statement formulated for the Normal distribution. For example, the probability that the statistic falls inside a given interval may be approximated by the probability that a Normal random variable with the same expectation and the same variance (or standard deviation) as the statistic falls inside the given interval.

For the special case of the sample average one may use the fact that the expectation of the average of a sample of measurements is equal to the expectation of a single measurement and the fact that the variance of the average is the variance of a single measurement, divided by the sample size. Consequently, the probability that the sample average falls within a given interval may be approximated by the probability of the same interval according to the Normal distribution. The expectation that is used for the Normal distribution is the expectation of the measurement. The standard deviation is the standard deviation of the measurement, divided by the square root of the number of observations.

The Normal approximation of the distribution of a statistic is valid for cases other than the sample average or functions thereof. For example, it can be shown (under some conditions) that the Normal approximation applies to the sample median, even though the sample median is not a function of the sample average. On the other hand, one need not always assume that the distribution of a statistic is Normal. In many cases it is not, even for a large sample size. For example, the minimal value of a sample that is generated from the Exponential distribution can be shown to follow the Exponential distribution with an appropriate rate, regardless of the sample size.
In most cases we will simply rely on the Normal approximation for the sampling distribution of the statistic. However, every now and then we may want to check the validity of this approximation in order to reassure ourselves of its appropriateness. Computerized simulations can be carried out for that checking. The simulations are equivalent to those used in the first part of the book.

A model for the distribution of the observations is assumed each time a simulation is carried out. The simulation itself involves the generation of random samples from that model for the given sample size and for a given value of the parameter. The statistic is evaluated and stored for each generated sample. Thereby, via the generation of many samples, an approximation of the sampling distribution of the statistic is produced. A probabilistic statement inferred from the Normal approximation can be compared to the results of the simulation. A substantial disagreement between the Normal approximation and the outcome of the simulations is evidence that the Normal approximation may not be valid in the specific setting.

As an illustration, assume the statistic is the average price of a car. It is assumed that the price of a car follows an Exponential distribution with some unknown rate parameter λ. We consider the sampling distribution of the average of 201 Exponential random variables. (Recall that in our sample there are 4 missing values among the 205 observations.) The expectation of the average is 1/λ, which is the expectation of a single Exponential random variable. The variance of a single observation is 1/λ². Consequently, the standard deviation of the average is √((1/λ²)/201) = (1/λ)/√201 = (1/λ)/14.17745 ≈ 0.0705/λ.

In the first part of the book we found out that for Normal(μ, σ²), the Normal distribution with expectation μ and variance σ², the central region that contains 95% of the distribution takes the form μ ± 1.96σ (namely, the interval [μ − 1.96σ, μ + 1.96σ]). Thereby, according to the Normal approximation for the sampling distribution of the average price, we state that the region 1/λ ± 1.96 · 0.0705/λ should contain 95% of the distribution.

We may use simulations in order to validate this approximation for selected values of the rate parameter λ. Hence, for example, we may choose λ = 1/12,000 (which corresponds to an expected price of $12,000 for a car) and validate the approximation for that parameter value.

The simulation itself is carried out by the generation of a sample of size n = 201 from the Exponential(1/12000) distribution, using the function "rexp" for generating Exponential samples. The function for computing the average ("mean") is applied to each sample and the result stored. We repeat this process a large number of times (100,000 is the typical number we use) in order to produce an approximation of the sampling distribution of the sample average. Finally, we check the relative frequency of cases where the simulated average is within the given range. This relative frequency is an approximation of the required probability and may be compared to the target value of 0.95.

Let us run the proposed simulation for the sample size of n = 201 and for a rate parameter equal to λ = 1/12000. Observe that the expectation of the sample average is equal to 12,000 and the standard deviation is 0.0705 × 12,000. Hence:
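The following sketch carries out the simulation just described (the original code chunk is not shown; the variable names are illustrative):

lam <- 1/12000                  # the selected rate parameter
mu <- 1/lam                     # expectation of the sample average
sig <- (1/lam)/sqrt(201)        # standard deviation of the sample average
X.bar <- rep(0, 10^5)
for (i in 1:10^5) {
  X.samp <- rexp(201, lam)      # a sample of 201 Exponential prices
  X.bar[i] <- mean(X.samp)      # store the sample average
}
mean((X.bar >= mu - 1.96*sig) & (X.bar <= mu + 1.96*sig))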
## [1] 0.95002

Observe that the simulation produces 0.95002 as the probability of the interval. This result is close enough to the target probability of 0.95, suggesting that the Normal approximation is adequate in this example.

The simulation demonstrates the appropriateness of the Normal approximation for the specific value of the parameter that was used. In order to gain more confidence in the approximation we may want to consider other values as well. However, simulations in this book are used only for demonstration. Hence, in most cases where we conduct a simulation experiment, we conduct it only for a single evaluation of the parameters. We leave it to the curiosity of the reader to expand the simulations and try other evaluations of the parameters.

Simulations may also be used in order to compute probabilities in cases where the Normal approximation does not hold. As an illustration, consider the mid-range statistic. This statistic is computed as the average of the largest and the smallest values in the sample. This statistic is discussed in the next chapter.

Consider the case where we obtain 100 observations. Let the distribution of each observation be Uniform. Suppose we are interested, as before, in the central range that contains 95% of the distribution of the mid-range statistic. The Normal approximation does not apply in this case. Yet, if we specify the parameters of the Uniform distribution then we may use simulations in order to compute the range. As a specific example, let the distribution of an observation be Uniform(3, 7).
In the simulation we generate a sample of size n = 100 from this distribution⁷ and compute the mid-range for the sample.⁸ For the computation of the statistic we need to obtain the minimal and the maximal values of the sample. The minimal value of a sequence is computed with the function "min". The input to this function is a sequence and the output is the minimal value of the sequence. Similarly, the maximal value is computed with the function "max". Again, the input to the function is a sequence and the output is the maximal value in the sequence. The statistic itself is obtained by adding the two extreme values to each other and dividing the sum by two.

We produce, just as before, a large number of samples and compute the value of the statistic for each sample. The distribution of the simulated values of the statistic serves as an approximation of the sampling distribution of the statistic. The central range that contains 95% of the sampling distribution may be approximated with the aid of this simulated distribution. Specifically, we approximate the central range by the identification of the 0.025-percentile and the 0.975-percentile⁹ of the simulated distribution. Between these two values are 95% of the simulated values of the statistic.

The percentiles of a sequence of simulated values of the statistic can be identified with the aid of the function "quantile" that was presented in the first part of the book.¹⁰ The first argument to the function is a sequence of values and the second argument is a number p between 0 and 1. The output of the function is the p-percentile of the sequence. The p-percentile of the simulated sequence serves as an approximation of the p-percentile of the sampling distribution of the statistic. The second argument to the function "quantile" may be a sequence of values between 0 and 1. If so, the percentile for each value in the second argument is computed.

7 With the expression "runif(100,3,7)".
8 If the sample is stored in an object by the name "X" then one may compute the mid-range statistic with the expression "(max(X)+min(X))/2".
9 The p-percentile of a sequence is a number with the property that the proportion of entries with values smaller than that number is p and the proportion of entries with values larger than the number is 1 − p.
10 If the simulated values of the statistic are stored in a sequence by the name "mid.range" then the 0.025-percentile and the 0.975-percentile of the sequence can be computed with the expression "quantile(mid.range,c(0.025,0.975))".

Let us carry out the simulation that produces an approximation of the central region that contains 95% of the sampling distribution of the mid-range statistic for the Uniform distribution:
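Assembling the expressions quoted in the footnotes, the simulation presumably looks like this:

mid.range <- rep(0, 10^5)
for (i in 1:10^5) {
  X <- runif(100, 3, 7)                 # a sample of n = 100 from Uniform(3, 7)
  mid.range[i] <- (max(X) + min(X))/2   # the mid-range statistic
}
quantile(mid.range, c(0.025, 0.975))    # the central 95% range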
## 2.5% 97.5%
## 4.941024 5.058786

Observe that (approximately) 95% of the sampling distribution of the statistic is in the range [4.941024, 5.058786].

Simulations can also be used in order to compute the expectation, the standard deviation, or any other numerical summary of the sampling distribution of a statistic. All one needs to do is compute the required summary for the simulated sequence of values of the statistic, and thereby obtain an approximation of the required summary. For example, we may use the sequence "mid.range" in order to obtain the expectation and the standard deviation of the mid-range statistic of a sample of 100 observations from the Uniform(3, 7) distribution:
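Presumably via the following two calls (only the output of the second is reproduced below):

mean(mid.range)   # the expectation; practically equal to 5
sd(mid.range)     # the standard deviation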
## [1] 0.02778315

The expectation of the statistic is obtained by the application of the function "mean" to the sequence. Observe that it is practically equal to 5. The standard deviation is obtained by the application of the function "sd". Its value is approximately equal to 0.028.

Exercises

Magnetic fields have been shown to have an effect on living tissue and were proposed as a method for treating pain. In the case study presented here,¹¹ Carlos Vallbona and his colleagues sought to answer the question "Can the chronic pain experienced by postpolio patients be relieved by magnetic fields applied directly over an identified pain trigger point?"

A total of 50 patients experiencing post-polio pain syndrome were recruited. Some of the patients were treated with an active magnetic device and the others were treated with an inactive placebo device. All patients rated their pain before ("score1") and after application of the device ("score2"). The variable "change" is the difference between "score1" and "score2". The treatment condition is indicated by the variable "active". The value "1" indicates subjects receiving treatment with the active magnet and the value "2" indicates subjects treated with the inactive placebo.

This case study is taken from the Rice Virtual Lab in Statistics. More details on this case study can be found in the case study Magnets and Pain Relief that is presented on that site.

11 Vallbona, Carlos, Carlton F. Hazlewood, and Gabor Jurida. (1997). Response of pain to static magnetic fields in postpolio patients: A double-blind pilot study. Archives of Physical and Rehabilitation Medicine 78(11): 1200–1203.

Exercise 9.1. The data for the 50 patients is stored in the file "magnets.csv". The file can be found on the internet at . Download this file to your computer and store it in the working directory of R. Read the content of the file into an R data frame. Produce a summary of the content of the data frame and answer the following questions:

1. What is the sample average of the change in score between the patient's rating before the application of the device and the rating after the application?

2. Is the variable "active" a factor or a numeric variable?

3. Compute the average value of the variable "change" for the patients that received an active magnet and the average value for those that received an inactive placebo. (Hint: Notice that the first 29 patients received an active magnet and the last 21 patients received an inactive placebo. The sub-sequence of the first 29 values of the given variable can be obtained via the expression "change[1:29]" and the last 21 values are obtained via the expression "change[30:50]".)

4. Compute the sample standard deviation of the variable "change" for the patients that received an active magnet and the sample standard deviation for those that received an inactive placebo.

5. Produce a box plot of the variable "change" for the patients that received an active magnet and for the patients that received an inactive placebo. What is the number of outliers in each sub-sequence?

Exercise 9.2. In Chapter we will present a statistical test for testing if there is a difference between the patients that received the active magnets and the patients that received the inactive placebo in terms of the expected value
of the variable that measures the change. The test statistic for this problem is taken to be

(X̄₁ − X̄₂)/√(S₁²/29 + S₂²/21) ,

where X̄₁ and X̄₂ are the sample averages for the 29 patients that receive the active magnets and for the 21 patients that receive the inactive placebo, respectively. The quantities S₁² and S₂² are the sample variances for each of the two samples. Our goal is to investigate the sampling distribution of this statistic in a case where both expectations are equal to each other, and to compare this distribution to the observed value of the statistic.

Assume that the expectation of the measurement is equal to 3.5, regardless of the type of treatment the patient received. We take the standard deviation of the measurement for patients that receive an active magnet to be equal to 3, and for those that receive the inactive placebo we take it to be equal to 1.5. Assume that the distribution of the measurements is Normal and that there are 29 patients in the first group and 21 in the second.

1. Find the interval that contains 95% of the sampling distribution of the statistic.

2. Does the observed value of the statistic, computed for the data frame "magnets", fall inside or outside of the interval that is computed in 1?

Summary

Glossary

Statistical Inference: Methods for gaining insight regarding the population parameters from the observed data.

Point Estimation: An attempt to obtain the best guess of the value of a population parameter. An estimator is a statistic that produces such a guess. The estimate is the observed value of the estimator.

Confidence Interval: An interval that is most likely to contain the population parameter. The confidence level of the interval is the sampling probability that the confidence interval contains the parameter value.

Hypothesis Testing: A method for determining between two hypotheses, with one of the two being the currently accepted hypothesis. A determination is based on the value of the test statistic. The probability of falsely rejecting the currently accepted hypothesis is the significance level of the test.

Comparing Samples: Samples emerge from different populations or under different experimental conditions. Statistical inference may be used to compare the distributions of the samples to each other.

Regression: Relates different variables that are measured on the same sample. Regression models are used to describe the effect of one of the variables on the distribution of the other one. The former is called the explanatory variable and the latter is called the response.

Missing Value: An observation for which the value of the measurement is not recorded. R uses the symbol "NA" to identify a missing value.

Discuss in the Forum

A data set may contain missing values. A missing value is an observation of a variable for which the value is not recorded. Most statistical procedures delete observations with missing values and conduct the inference on the remaining observations.

Some people say that the method of deleting observations with missing values is dangerous and may lead to biased analysis. The reason is that missing values may contain information. What is your opinion?

When you formulate your answer to this question it may be useful to come up with an example from your own field of interest. Think of an example in which a missing value contains information relevant for inference or an example in which it does not.
In the former case try to assess the possible effects on the analysis that may emerge due to the deletion of observations with missing values. For example, the goal in some clinical trials is to assess the effect of a new treatment on the survival of patients with a life-threatening illness. The trial is conducted for a given duration, say two years, and the time of death of the patients is recorded. The time of death is missing for patients that survived the entire duration of the trial. Yet, one is advised not to ignore these patients in the analysis of the outcome of the trial.
Student Learning Objectives

The subject of this chapter is the estimation of the value of a parameter on the basis of data. An estimator is a statistic that is used for estimation. Criteria for selecting among estimators are discussed, with the goal of seeking an estimator that tends to obtain values that are as close as possible to the value of the parameter. Different examples of estimation problems are presented and analyzed. By the end of this chapter, the student should be able to:

Recognize issues associated with the estimation of parameters.

Define the notions of bias, variance and mean squared error (MSE) of an estimator.

Estimate parameters from data and assess the performance of the estimation procedure.

Estimating Parameters

Statistics is the science of data analysis. The primary goal in statistics is to draw meaningful and solid conclusions about a given phenomenon on the basis of observed data. Typically, the data emerges as a sample of observations. An observation is the outcome of a measurement (or of several measurements) that is taken for a subject that belongs to the sample. These observations may be used in order to investigate the phenomenon of interest. The conclusions are drawn from the analysis of the observations.

A key aspect of statistical inference is the association of a probabilistic model with the observations. The basic assumption is that the observed data emerges from some distribution. Usually, the assumption is that the distribution is linked to a theoretical model, such as the Normal, Exponential, Poisson, or any other model that fits the specifications of the measurement taken.
A standard setting in statistical inference is the presence of a sequence of observations. It is presumed that all the observations emerged from a common distribution. The parameters one seeks to estimate are summaries or characteristics of that distribution.

For example, one may be interested in the distribution of the price of cars. A reasonable assumption is that the distribution of the prices is the Exponential(λ) distribution. Given an observed sample of prices, one may be able to estimate the rate λ that specifies the distribution.

The target in statistical point estimation of a parameter is to produce the best possible guess of the value of the parameter on the basis of the available data. The statistic that tries to guess the value of the parameter is called an estimator. The estimator is a formula applied to the data that produces a number. This number is the estimate of the value of the parameter.

An important characteristic of a distribution, which is always of interest, is the expectation of the measurement, namely the central location of the distribution. A natural estimator of the expectation is the sample average. However, one may propose other estimators that make sense, such as the sample mid-range that was presented in the previous chapter. The main topic of this chapter is the identification of criteria that may help us choose which estimator to use for the estimation of which parameter.

In the next section we discuss issues associated with the estimation of the expectation of a measurement. The following section deals with the estimation of the variance and the standard deviation – summaries that characterize the spread of the distribution. The last section deals with the theoretical models of distribution that were introduced in the first part of the book. It discusses ways by which one may estimate the parameters that characterize these distributions.

Estimation of the Expectation

A natural candidate for the estimation of the expectation of a random variable on the basis of a sample of observations is the sample average. Consider, as an example, the estimation of the expected price of a car using the information in the data file "cars.csv". Let us read the data into a data frame named "cars" and compute the average of the variable "price":
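A sketch of these two steps (it assumes "cars.csv" is stored in the working directory):

cars <- read.csv("cars.csv")  # read the data into a data frame
mean(cars$price)              # try to compute the average price
## [1] NA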
The application of the function "mean" for the computation of the sample average produced a missing value. The reason is that the variable "price" contains 4 missing values. By default, when applied to a sequence that contains missing values, the function "mean" produces a missing value as output.

The behavior of the function "mean" in the presence of missing values is determined by the argument "na.rm".¹ If we want to compute the average of the non-missing values in the sequence we should specify the argument "na.rm" as "TRUE". This can be achieved by the inclusion of the expression "na.rm=TRUE" in the arguments of the function:
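That is (the resulting value is the one quoted in the discussion below):

mean(cars$price, na.rm=TRUE)
## [1] 13207.13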
1 The name of the argument stands for "NA remove". If the value of the argument is set to "TRUE" then the missing values are removed in the computation of the average. Consequently, the average is computed for the sub-sequence of non-missing values. The default specification of the argument in the definition of the function is "na.rm=FALSE", which implies a missing value for the mean when it is computed on a sequence that contains missing values.

The Accuracy of the Sample Average

How close is the estimated value of the expectation – the average price – to the expected price? There is no way of answering this question on the basis of the data we observed. Indeed, we think of the price of a random car as a random variable. The expectation we seek to estimate is the expectation of that random variable. However, the actual value of that expectation is unknown. Hence, not knowing the target value, how can we determine the distance between the computed average 13207.13 and that unknown value?

As a remedy, since we cannot answer the question we would like to address, we instead change the question. In the new formulation of the question we ignore the data at hand altogether. The new formulation considers the sample average as a statistic, and the question is formulated in terms of the sampling distribution of that statistic. The question, in its new formulation, is: How close is the sample average of the price, taken as a random variable, to the expected price?

Notice that in the new formulation of the question the observed average price x̄ = 13207.13 has no special role. The question is formulated in terms of the sampling distribution of the sample average (X̄). The observed average value is only one among many in the sampling distribution of the average.

The advantage of the new formulation of the question is that it can be addressed. We do have means for investigating the closeness of the estimator to the parameter and thereby producing meaningful answers. Specifically, consider the current case where the estimator is the sample average X̄. This estimator attempts to estimate the expectation E(X) of the measurement, which is the parameter. Assessing the closeness of the estimator to the parameter corresponds to the comparison between the distribution of the random variable, i.e. the estimator, and the value of the parameter.

For this comparison we may note that the expectation E(X) is also the expectation of the sample average X̄. Consequently, in this problem the assessment of the closeness of the estimator to the parameter is equivalent to the investigation of the spread of the distribution of the sample average about its expectation.

Consider an example of such an investigation. Imagine that the expected price of a car is equal to $13,000. A question one may ask is: how likely is it that the estimator's guess falls within $1,000 of the actual value? In other words, what is the probability that the sample average falls in the interval [12,000, 14,000]?

Let us investigate this question using simulations. Recall our assumption that the distribution of the price is Exponential. An expectation of 13,000 corresponds to a rate parameter of λ = 1/13,000. We simulate the sampling distribution of the estimator by the generation of samples of 201 Exponential random variables with this rate. The sample average is computed for each sample and stored. The sampling distribution of the sample average is approximated via the production of a large number of sample averages:
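A sketch of the simulation (the original code chunk is not shown; note that the text below refers to the expression "lam <- 1/13000", which we adopt here):

lam <- 1/13000                 # rate that corresponds to an expected price of $13,000
X.bar <- rep(0, 10^5)
for (i in 1:10^5) {
  X.samp <- rexp(201, lam)     # a sample of 201 Exponential prices
  X.bar[i] <- mean(X.samp)     # store the sample average
}
mean((X.bar >= 12000) & (X.bar <= 14000))  # probability of being within $1,000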
## [1] 0.72251

In the last line of the code we compute the probability of being within $1,000 of the expected price. Recall that the expected price in the Exponential case is the reciprocal of the rate λ. In this simulation we obtained 0.72251 as an approximation of the probability.

In the case of the sample average we may also apply the Normal approximation in order to assess the probability under consideration. In particular, if λ = 1/13,000 then the expectation of an Exponential observation is E(X) = 1/λ = 13,000 and the variance is V(X) = 1/λ² = (13,000)². The expectation of the sample average is equal to the expectation of the measurement, 13,000 in this example. The variance of the sample average is equal to the variance of the observation, divided by the sample size. In the current setting it is equal to (13,000)²/201. The standard deviation is equal to the square root of the variance.

The Normal approximation uses the Normal distribution in order to compute probabilities associated with the sample average. The Normal distribution that is used has the same expectation and standard deviation as the sample average:

pnorm(14000, 13000, 13000/sqrt(201)) - pnorm(12000, 13000, 13000/sqrt(201))  # reconstructed call
## [1] 0.7245391

The probability of falling within the interval [12,000, 14,000] is computed as the difference between the cumulative Normal probability at 14,000 and the cumulative Normal probability at 12,000.²

These cumulative probabilities are computed with the function "pnorm". Recall that this function computes the cumulative probability for the Normal distribution. If the first argument is 14,000 then the function produces the probability that a Normal random variable is less than or equal to 14,000. Likewise, if the first argument is 12,000 then the computed probability is the probability of being less than or equal to 12,000. The expectation of the Normal distribution enters in the second argument of the function and the standard deviation enters in the third argument.

The Normal approximation of falling in the interval [12,000, 14,000], computed as the difference between the two cumulative probabilities, produces 0.7245391 as the probability. Notice that the probability 0.72251 computed in the simulations is in agreement with the Normal approximation.

2 As a matter of fact, the difference is the probability of falling in the half-open interval (12000, 14000]. However, for continuous distributions the probability of the end-points is zero and they do not contribute to the probability of the interval.

If we wish to assess the accuracy of the estimator at other values of the parameter, say E(X) = 12,000 (which corresponds to λ = 1/12,000) or E(X) = 14,033 (which corresponds to λ = 1/14,033), all we need to do is change the expression "lam <- 1/13000" to the new value and rerun the simulation. Alternatively, we may use a Normal approximation with a modified interval, expectation, and standard deviation. For example, consider the case where the expected price is equal to $12,000. In order to assess the probability that the sample average falls within $1,000 of the parameter we should compute the probability of the interval [11,000, 13,000] and change the entries to the first argument of the function "pnorm" accordingly.
The new expectation is 12,000 and the new standard deviation is 12,000/√201:

## [1] 0.7625775

This time we get that the probability is, approximately, 0.763.

The fact that the computed value of the average 13,207.13 belongs to the interval [12,000, 14,000] that was considered in the first analysis but does not belong to the interval [11,000, 13,000] that was considered in the second analysis is irrelevant to the conclusions drawn from the analysis. In both cases the theoretical properties of the sample average as an estimator were considered and not its value at specific data.

In the simulation and in the Normal approximation we applied one method for assessing the closeness of the sample average to the expectation it estimates. This method involved the computation of the probability of being within $1,000 of the expected price. The higher this probability, the more accurate is the estimator.

An alternative method for assessing the accuracy of an estimator of the expectation may involve the use of an overall summary of the spread of the distribution. A standard method for quantifying the spread of a distribution about the expectation is the variance (or its square root, the standard deviation). Given an estimator of the expectation of a measurement, the sample average for example, we may evaluate the accuracy of the estimator by considering its variance. The smaller the variance the more accurate is the estimator.

Consider again the case where the sample average is used in order to estimate the expectation of a measurement. In such a situation the variance of the estimator, i.e. the variance of the sample average, is obtained as the variance of the measurement V(𝑋) divided by the sample size 𝑛:

V(𝑋̄) = V(𝑋)/𝑛 .

Notice that for larger sample sizes the estimator is more accurate. The larger the sample size 𝑛 the smaller is the variance of the estimator, in which case the values of the estimator tend to be more concentrated about the expectation. Hence, one may make the estimator more accurate by increasing the sample size. Another method for improving the accuracy of the average of measurements in estimating the expectation is the application of a more accurate measurement device. If the variance V(𝑋) of the measurement device decreases so does the variance of the sample average of such measurements.

In the sequel, when we investigate the accuracy of estimators, we will generally use overall summaries of the spread of their distribution around the target value of the parameter.

Comparing Estimators

Notice that the formulation of the accuracy of estimation that we use replaces the question: β€œHow close is the given value of the estimator to the unknown value of the parameter?” by the question: β€œHow close are the unknown (and random) values of the estimator to a given value of the parameter?” In the second formulation the question is completely academic and unrelated to actual measurement values. In this academic context we can consider different potential values of the parameter. Once the value of the parameter has been selected it can be treated as known in the context of the academic discussion. Clearly, this does not imply that we actually know the true value of the parameter.

The sample average is a natural estimator of the expectation of the measurement. However, one may propose other estimators.
For example, when the distribution of the measurement is symmetric about the expectation then the median of the distribution is equal to the expectation. The sample median, which is a natural estimator of the measurement median, is an alternative estimator of the expectation in such a case. Which of the two alternatives, the sample average or the sample median, should we prefer as an estimator of the expectation in the case of a symmetric distribution? The straightforward answer to this question is to prefer the better one, the one which is more accurate. As part of the solved exercises you are asked to compare the sample average to the sample median as estimators of the expectation. Here we compare the sample average to yet another alternative estimator – the mid-range estimator – which is the average between the smallest and the largest observations.

In the comparison between estimators we do not evaluate them in the context of the observed data. Rather, we compare them as random variables. The comparison deals with the properties of the estimators in a given theoretical context. This theoretical context is motivated by the realities of the situation as we know them. But, still, the frame of reference is the theoretical model and not the collected data.

Hence, depending on the context, we may assume in the comparison that the observations emerge from some distribution. We may specify parameter values for this distribution and select the appropriate sample size. After setting the stage we can compare the accuracy of one estimator against that of the other. Assessment at other parameter values in the context of the given theoretical model, or of other theoretical models, may provide insight and enhance our understanding regarding the relative merits and weaknesses of each estimator.

Let us compare the sample average to the sample mid-range as estimators of the expectation in a situation that we design. Consider a Normal measurement 𝑋 with expectation E(𝑋) = 3 and variance that is equal to 2. Assume that the sample size is 𝑛 = 100. Both estimators, due to the symmetry of the Normal distribution, are centered at the expectation. Hence, we may evaluate the accuracy of the two estimators using their variances. These variances are the measure of the spread of the distributions of each estimator about the target parameter value.

We produce the sampling distribution and compute the variances using a simulation. Recall that the distribution of the mid-range statistic was simulated in the previous chapter. In the computation of the mid-range statistic we used the function β€œmax” that computes the maximum value of its input and the function β€œmin” that computes the minimum value:
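A sketch of such a simulation (the iteration count of 10^5 is an assumption):

X.bar <- rep(0, 10^5)                    # storage for the sample averages
mid.range <- rep(0, 10^5)                # storage for the mid-range statistics
for (i in 1:10^5) {
  X <- rnorm(100, 3, sqrt(2))            # a sample of 100 Normal(3, 2) measurements
  X.bar[i] <- mean(X)
  mid.range[i] <- (max(X) + min(X))/2    # average of the largest and smallest values
}
var(X.bar)                               # variance of the sample average
var(mid.range)                           # variance of the mid-range estimator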
## [1] 0.1865454

We get that the variance of the sample average is approximately equal to 0.02. The variance of the mid-range statistic is approximately equal to 0.187, more than 9 times as large. We see that the accuracy of the sample average is better in this case than the accuracy of the mid-range estimator. Evaluating the two estimators at other values of the parameter will produce the same
relation. Hence, in the current example it seems as if the sample average is the better of the two.

Is the sample average necessarily the best estimator for the expectation? The next example will demonstrate that this need not always be the case.

Consider again a situation of observing a sample of size 𝑛 = 100. However, this time the measurement 𝑋 is Uniform and not Normal. Say 𝑋 ∼ Uniform(0.5, 5.5), i.e. 𝑋 has the Uniform distribution over the interval [0.5, 5.5]. The expectation of the measurement is equal to 3 like before, since E(𝑋) = (0.5 + 5.5)/2 = 3. The variance of an observation is V(𝑋) = (5.5 βˆ’ 0.5)2/12 = 2.083333, not much different from the variance that was used in the Normal case. The Uniform distribution, like the Normal distribution, is a distribution that is symmetric about its center. Hence, using the mid-range statistic as an estimator of the expectation makes sense. We re-run the simulations, using the function β€œrunif” for the simulation of a sample from the Uniform distribution and the parameters of the Uniform distribution instead of the function β€œrnorm” that was used before:
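A sketch of the modified simulation (again assuming 10^5 iterations):

X.bar <- rep(0, 10^5)
mid.range <- rep(0, 10^5)
for (i in 1:10^5) {
  X <- runif(100, 0.5, 5.5)              # a sample of 100 Uniform(0.5, 5.5) measurements
  X.bar[i] <- mean(X)
  mid.range[i] <- (max(X) + min(X))/2
}
var(X.bar)                               # variance of the sample average
var(mid.range)                           # variance of the mid-range estimator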
## [1] 0.001221015

Again, we get that the variance of the sample average is approximately equal to 0.02, which is close to the theoretical value. The variance of the mid-range statistic is approximately equal to 0.0012.

(Footnote 4: Observe that the mid-range of the Uniform(π‘Ž, 𝑏) distribution, the middle point between the maximum value of the distribution 𝑏 and the minimal value π‘Ž, is (π‘Ž + 𝑏)/2, which is equal to the expectation of the distribution.)

(Footnote 5: Actually, the exact value of the variance of the sample average is V(𝑋)/100 = 0.02083333. The results of the simulation are consistent with this theoretical computation.)

Observe that in the current comparison between the sample average and the mid-range estimator we get that the latter is a clear winner. Examination of other values of π‘Ž and 𝑏 for the Uniform distribution will produce the same relation between the two competitors. Hence, we may conclude that for the case of the Uniform distribution the sample average is an inferior estimator.

The last example may serve as yet another reminder that life is never simple. A method that is good in one situation may not be as good in a different situation. Still, the estimator of choice of the expectation is the sample average. Indeed, in some cases we may find that other methods may produce more accurate estimates. However, in most settings the sample average beats its competitors. The sample average also possesses other useful benefits. Its sampling distribution is always centered at the expectation it is trying to estimate. Its variance has a simple form, i.e. it is equal to the variance of the measurement divided by the sample size. Moreover, its sampling distribution can be approximated by the Normal distribution. Henceforth, due to these properties, we will use the sample average whenever estimation of the expectation is required.

Estimation of the Variance and Standard Deviation

The spread of the measurement about its expected value may be measured by the variance or by the standard deviation, which is the square root of the variance. The standard estimator for the variance of the measurement is the sample variance and the square root of the sample variance is the default estimator of the standard deviation.

The computation of the sample variance from the data is discussed in Chapter . Recall that the sample variance is computed via the formula:

𝑠2 = βˆ‘π‘›π‘–=1 (π‘₯𝑖 βˆ’ π‘₯Μ„)2/(𝑛 βˆ’ 1) ,
where π‘₯Μ„ is the sample average and 𝑛 is the sample size. The term π‘₯𝑖 βˆ’ π‘₯Μ„ is the deviation from the sample average of the 𝑖th observation and βˆ‘π‘› (π‘₯ βˆ’ π‘₯Μ„)2 𝑖 is the sum of the squares of deviations. It is pointed out in Chapter that the reason for dividing the sum of squares by (𝑛 βˆ’ 1), rather than 𝑛, stems from considerations of statistical inference. A promise was made that these reasonings will be discussed in due course. Now we want to deliver on this promise. Let us compare between two competing estimators for the variance, both considered as random variables. One is the estimator 𝑆2, which is equal to the formula for the sample variance applied to a random sample:
The computation of this statistic can be carried out with the function β€œvar”. The second estimator is the one obtained when the sum of squares is divided by the sample size (instead of the sample size minus 1):

βˆ‘π‘›π‘–=1 (𝑋𝑖 βˆ’ 𝑋̄)2/𝑛 = [(𝑛 βˆ’ 1)/𝑛] β‹… 𝑆2 .
Hence, the second estimator may be obtained by the multiplication of the first estimator 𝑆2 by the ratio (𝑛 βˆ’ 1)/𝑛. We seek to compare 𝑆2 and [(𝑛 βˆ’ 1)/𝑛]𝑆2 as estimators of the variance.

In order to make the comparison concrete, let us consider it in the context of a Normal measurement with expectation πœ‡ = 5 and variance 𝜎2 = 3. Let us assume that the sample is of size 20 (𝑛 = 20). Under these conditions we carry out a simulation. Each iteration of the simulation involves the generation of a sample of size 𝑛 = 20 from the given Normal distribution. The sample variance 𝑆2 is computed from the sample with the application of the function β€œvar”. The resulting estimate of the variance is stored in an object that is called β€œX.var”. (A sketch of this simulation is given below.) The content of the object β€œX.var”, at the end of the simulation, approximates the sampling distribution of the estimator 𝑆2.

Our goal is to compare the performance of the estimator of the variance 𝑆2 with that of the alternative estimator. In this alternative estimator the sum of squared deviations is divided by the sample size (𝑛 = 20) and not by the sample size minus 1 (𝑛 βˆ’ 1 = 19). Consequently, the alternative estimator is obtained by multiplying 𝑆2 by the ratio 19/20. The sampling distribution of the values of 𝑆2 is approximated by the content of the object β€œX.var”. It follows that the sampling distribution of the alternative estimator is approximated by the object β€œ(19/20)*X.var”, in which each value of 𝑆2 is multiplied by the appropriate ratio. The comparison between the sampling distribution of 𝑆2 and the sampling distribution of the alternative estimator is obtained by comparing β€œX.var” and β€œ(19/20)*X.var”.

Let us start with the investigation of the expectation of the estimators. Recall that when we analyzed the sample average as an estimator of the expectation of a measurement we obtained that the expectation of the sampling distribution of the estimator is equal to the value of the parameter it is trying to estimate. One may wonder: What is the situation for the estimators of the variance? Is it or is it not the case that the expectation of their sampling distribution equals the value of the variance? In other words, is the distribution of either estimator of the variance centered at the value of the parameter it is trying to estimate? Compute the expectations of the two estimators:
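A sketch of the simulation described above, together with the computation of the two expectations (the iteration count of 10^5 is an assumption):

X.var <- rep(0, 10^5)              # storage for the sample variances
for (i in 1:10^5) {
  X <- rnorm(20, 5, sqrt(3))       # a sample of size n = 20 from Normal(5, 3)
  X.var[i] <- var(X)               # the sample variance S^2
}
mean(X.var)                        # expectation of S^2 (essentially 3)
mean((19/20)*X.var)                # expectation of the alternative estimator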
## [1] 2.853809

Note that 3 is the value of the variance of the measurement that was used in the simulation. Observe that the expectation of 𝑆2 is essentially equal to 3, whereas the expectation of the alternative estimator, displayed above, is less than 3. Hence, at least in the example that we consider, the center of the distribution of 𝑆2 is located on the target value. On the other hand, the center of the sampling distribution of the alternative estimator is located off that target value.

As a matter of fact it can be shown mathematically that the expectation of the estimator 𝑆2 is always equal to the variance of the measurement. This holds true regardless of the actual value of the variance. On the other hand the expectation of the alternative estimator is always off the target value.

An estimator is called unbiased if its expectation is equal to the value of the parameter that it tries to estimate. We get that 𝑆2 is an unbiased estimator of the variance. Similarly, the sample average is an unbiased estimator of the expectation. Unlike these two estimators, the alternative estimator of the variance is a biased estimator.

The default is to use 𝑆2 as the estimator of the variance of the measurement and to use its square root as the estimator of the standard deviation of the
measurement. A justification that is frequently quoted for this selection is the fact that 𝑆2 is an unbiased estimator of the variance.

In the previous section, when comparing two competing estimators of the expectation, our main concern was the quantification of the spread of the sampling distribution of either estimator about the target value of the parameter. We used that spread as a measure of the distance between the estimator and the value it tries to estimate. In the setting of the previous section both estimators were unbiased. Consequently, the variance of the estimators, which measures the spread of the distribution about its expectation, could be used in order to quantify the distance between the estimator and the parameter. (Since, for unbiased estimators, the parameter is equal to the expectation of the sampling distribution.)

In the current section one of the estimators (𝑆2) is unbiased, but the other (the alternative estimator) is not. In order to compare their accuracy in estimation we need to figure out a way to quantify the distance between a biased estimator and the value it tries to estimate.

Towards that end let us recall the definition of the variance. Given a random variable 𝑋 with an expectation E(𝑋), we consider the square of the deviations (𝑋 βˆ’ E(𝑋))2, which measures the (squared) distance between each value of the random variable and the expectation. The variance is defined as the expectation of the squared distance: V(𝑋) = E[(𝑋 βˆ’ E(𝑋))2]. One may think of the variance as an overall measure of the distance between the random variable and the expectation.

Assume now that the goal is to assess the distance between an estimator and the parameter it tries to estimate. In order to keep the discussion on an abstract level let us use the Greek letter πœƒ (read: theta) to denote this parameter. The estimator is denoted by πœƒΜ‚ (read: theta hat). It is a statistic, a formula applied to the data. Hence, with respect to the sampling distribution, πœƒΜ‚ is a random variable. The issue is to measure the distance between the random variable πœƒΜ‚ and the parameter πœƒ.

(Footnote 7: As part of your homework assignment you are required to investigate the properties of 𝑆, the square root of 𝑆2, as an estimator of the standard deviation of the measurement. A conclusion of this investigation is that 𝑆 is a biased estimator of the standard deviation.)

(Footnote 8: The letter πœƒ is frequently used in the statistical literature to denote a parameter of the distribution. In the previous section we considered πœƒ = E(𝑋) and in this section we consider πœƒ = V(𝑋).)

(Footnote 9: Observe that we diverge here slightly from our promise to use capital letters to denote random variables. However, denoting the parameter by πœƒ and denoting the estimator of the parameter by πœƒΜ‚ is standard in the statistical literature. As a matter of fact, we will use the β€œhat” notation, where a hat is placed over a Greek letter that represents the parameter, in other places in this book. The letter with the hat on top will represent the estimator and will always be considered as a random variable. For Latin letters we will still use capital letters, with or without a hat, to represent a random variable and small letters to represent the evaluation of the random variable for given data.)

Motivated by the method that led to the definition of the variance we consider the deviations between the estimator and the parameter.
The squared deviations (πœƒΜ‚ βˆ’ πœƒ)2 may be considered in the current context as a measure of the (squared) distance between the estimator and the parameter. When we take the expectation of these squared deviations we get an overall measure of the distance between the estimator and the parameter. This overall distance is called the mean square error of the estimator and is denoted by MSE:

MSE = E[(πœƒΜ‚ βˆ’ πœƒ)2] .

The mean square error of an estimator is tightly linked to the bias and the variance of the estimator. The bias of an estimator πœƒΜ‚ is the difference between the expectation of the estimator and the parameter it seeks to estimate:

Bias = E(πœƒΜ‚) βˆ’ πœƒ .

In an unbiased estimator the expectation of the estimator and the estimated parameter coincide, i.e. the bias is equal to zero. For a biased estimator the bias is either negative, as is the case for the alternative estimator of the variance, or else it is positive. The variance of the estimator, Variance = V(πœƒΜ‚), is a measure of the spread of the sampling distribution of the estimator about its expectation. The link between the mean square error, the bias, and the variance is described by the formula:

MSE = Variance + (Bias)2 .

Hence, the mean square error of an estimator is the sum of its variance, the (squared) distance between the estimator and its expectation, and the square of the bias, the square of the distance between the expectation and the parameter. The mean square error is influenced both by the spread of the distribution about the expected value (the variance) and by the distance between the expected value and the parameter (the bias). The larger either of them becomes the larger is the mean square error, namely the distance between the estimator and the parameter.

Let us compare the mean square error of the estimator 𝑆2 and the mean square error of the alternative estimator [19/20]𝑆2. Recall that we have computed their expectations and found out that the expectation of 𝑆2 is essentially equal to 3, the target value of the variance. The expectation of the alternative estimator turned out to be equal to 2.853809, which is less than the target value. It turns out that the bias of 𝑆2 is zero (or essentially zero in the simulations) and the bias of the alternative estimator is 2.853809 βˆ’ 3 = βˆ’0.146191 β‰ˆ βˆ’0.15.

(Footnote 10: It can be shown mathematically that E([(𝑛 βˆ’ 1)/𝑛]𝑆2) = [(𝑛 βˆ’ 1)/𝑛]E(𝑆2). Consequently, the actual value of the expectation of the alternative estimator in the current setting is [19/20] β‹… 3 = 2.85 and the bias is βˆ’0.15. The results of the simulation are consistent with this fact.)

In order to compute the mean square errors of both estimators, let us compute their variances:
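A sketch of the variance computations, using the object β€œX.var” from the simulation above:

var(X.var)             # the variance of S^2
var((19/20)*X.var)     # the variance of the alternative estimator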
## [1] 0.8572649

Observe that the variance of 𝑆2 is essentially equal to 0.95 and the variance of the alternative estimator, displayed above, is essentially equal to 0.857. The estimator 𝑆2 is unbiased. Consequently, the mean square error of 𝑆2 is equal to its variance. The bias of the alternative estimator is βˆ’0.15. As a result we get that the mean square error of this estimator, which is the sum of the variance and the square of the bias, is essentially equal to

0.857 + (βˆ’0.15)2 = 0.857 + 0.0225 = 0.8795 .

Observe that the mean square error of the estimator 𝑆2, which is approximately equal to 0.95, is larger than the mean square error of the alternative estimator. Notice that even though the alternative estimator is biased it still has a smaller mean square error than the default estimator 𝑆2. Indeed, it can be proved mathematically that when the measurement has a Normal distribution then the mean square error of the alternative estimator is always smaller than the mean square error of the sample variance 𝑆2. Still, although the alternative estimator is slightly more accurate than 𝑆2 in the estimation of the variance, the tradition is to use 𝑆2. Obeying this tradition we will henceforth use 𝑆2 whenever estimation of the variance is required. Likewise, we will use 𝑆, the square root of the sample variance, to estimate the standard deviation.

In order to understand how it is that the biased estimator produced a smaller mean square error than the unbiased estimator let us consider the two components of the mean square error. The alternative estimator is biased but, on the other hand, it has a smaller variance. Both the bias and the variance contribute to the mean square error of an estimator. The price for reducing the bias in estimation is usually an increase in the variance and vice versa. The consequence of producing an unbiased estimator such as 𝑆2 is an inflated variance. A better estimator is an estimator that balances between the error that results from the bias and the error that results from the variance. Such is the alternative estimator.

We will use 𝑆2 in order to estimate the variance of a measurement. A context in which an estimate of the variance of a measurement is relevant is in the assessment of the variance of the sample mean. Recall that the variance of the sample mean is equal to V(𝑋)/𝑛, where V(𝑋) is the variance of the measurement and 𝑛 is the size of the sample. In the case where the variance of the measurement is not known one may estimate it from the sample using 𝑆2. It follows that the estimator of the variance of the sample average is 𝑆2/𝑛. Similarly, 𝑆/βˆšπ‘› can be used as an estimator of the standard deviation of the sample average.

Estimation of Other Parameters

In the previous two sections we considered the estimation of the expectation and the variance of a measurement. The proposed estimators, the sample average for the expectation and the sample variance for the variance, are not tied to any specific model for the distribution of the measurement. They may be applied to data whether or not a theoretical model for the distribution of the measurement is assumed.

In the cases where a theoretical model for the measurement is assumed one may be interested in the estimation of the specific parameters associated with this model. In the first part of the book we introduced the Binomial, the Poisson, the Uniform, the Exponential, and the Normal models for the distribution of measurements.
In this section we consider the estimation of the parameters that determine each of these theoretical distributions based on a sample generated from the same distribution. In some cases the estimators coincide with the estimators considered in the previous sections. In other cases the estimators are different.

Start with the Binomial distribution. We will be interested in the special case 𝑋 ∼ Binomial(1, 𝑝). This case involves the outcome of a single trial. The trial has two possible outcomes, one of them is designated as β€œsuccess” and the other as β€œfailure”. The parameter 𝑝 is the probability of the success. The Binomial(1, 𝑝) distribution is also called the Bernoulli distribution. Our concern is the estimation of the parameter 𝑝 based on a sample of observations from this Bernoulli distribution.

This estimation problem emerges in many settings that involve the assessment of the probability of an event based on a sample of 𝑛 observations. In each observation the event either occurs or not. A natural estimator of the probability of the event is its relative frequency in the sample. Let us show that this estimator can be represented as the average of a Bernoulli sample, so that estimating the probability amounts to using the sample average to estimate a Bernoulli expectation.

Consider an event. One may code a measurement 𝑋, associated with an observation, by 1 if the event occurs and by 0 if it does not. Given a sample of size 𝑛, one thereby produces 𝑛 observations with values 0 or 1. An observation has the value 1 if the event occurs for that observation or, else, the value is 0.

Notice that E(𝑋) = 1 β‹… 𝑝 = 𝑝. Consequently, the probability of the event is equal to the expectation of the Bernoulli measurement. It turns out that the parameter one seeks to estimate is the expectation of a Bernoulli measurement. The estimation is based on a sample of size 𝑛 of Bernoulli observations.

In Section it was proposed to use the sample average as an estimate of the expectation. The sample average is the sum of the observations, divided by the number of observations. In the specific case of a sample of Bernoulli observations, the sum of the observations is a sum of zeros and ones. The zeros do not contribute to the sum. Hence, the sum is equal to the number of times that 1 occurs, namely the frequency of the occurrences of the event. When we divide by the sample size we get the relative frequency of the occurrences. The conclusion is that the sample average of the Bernoulli observations and the relative frequency of occurrences of the event in the sample are the same. Consequently, the sample relative frequency of the event is also a sample average that estimates the expectation of the Bernoulli measurement.

We seek to estimate 𝑝, the probability of the event. The estimator is the relative frequency of the event in the sample. We denote this estimator by 𝑃̂. This estimator is a sample average of Bernoulli observations that is used in order to estimate the expectation of the Bernoulli distribution. From the discussion in Section one may conclude that this estimator is an unbiased estimator of 𝑝 (namely, E(𝑃̂) = 𝑝) and that its variance is equal to:

V(𝑃̂) = V(𝑋)/𝑛 = 𝑝(1 βˆ’ 𝑝)/𝑛 ,

where the variance of the measurement is obtained from the formula for the variance of a Binomial(1, 𝑝) distribution.

The second example of an integer-valued random variable that was considered in the first part of the book is the Poisson(πœ†) distribution.
Recall that πœ† is the expectation of a Poisson measurement. Hence, one may use the sample average of Poisson observations in order to estimate this parameter. The first example of a continuous distribution that was discussed in the first
part of the book is the Uniform(π‘Ž, 𝑏) distribution. This distribution is parameterized by π‘Ž and 𝑏, the end-points of the interval over which the distribution is defined. A natural estimator of π‘Ž is the smallest value observed and a natural estimator of 𝑏 is the largest value. One may use the function β€œmin” for the computation of the former estimate from the sample and use the function β€œmax” for the computation of the latter. Both estimators are slightly biased but have a relatively small mean square error.

Consider next the 𝑋 ∼ Exponential(πœ†) random variable. This distribution was applied in this chapter to model the distribution of the prices of cars. The distribution is characterized by the rate parameter πœ†. In order to estimate the rate one may notice the relation between it and the expectation of the measurement:

E(𝑋) = 1/πœ† ⟹ πœ† = 1/E(𝑋) .

The rate is equal to the reciprocal of the expectation. The expectation can be estimated by the sample average. Hence a natural proposal is to use the reciprocal of the sample average as an estimator of the rate:
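A minimal illustration; the sample β€œx” below is an assumed simulated sample of 201 Exponential observations with rate 1/13,000, chosen to match the car-price example of this chapter:

x <- rexp(201, 1/13000)    # an illustrative Exponential sample
lam.hat <- 1/mean(x)       # the estimated rate: the reciprocal of the sample average
lam.hat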
The final example that we mention is the Normal(πœ‡, 𝜎2) case. The parameter πœ‡ is the expectation of the measurement and may be estimated by the sample average 𝑋̄. The parameter 𝜎2 is the variance of a measurement, and can be estimated using the sample variance 𝑆2.

Exercises

Exercise 10.1. In Subsection we compared the average against the mid-range as estimators of the expectation of the measurement. The goal of this exercise is to repeat the analysis, but this time compare the average to the median as estimators of the expectation in symmetric distributions. Simulate the sampling distribution of the average and the median of a sample of size 𝑛 = 100 from the Normal(3, 2) distribution. Compute the expectation and the variance of the sample average and of the sample median. Which of the two estimators has a smaller mean square error? Simulate the sampling distribution of the average and the median of a sample of size 𝑛 = 100 from the Uniform(0.5, 5.5) distribution. Compute the expectation and the variance of the sample average and of the sample median. Which of the two estimators has a smaller mean square error?

Exercise 10.2. The goal in this exercise is to assess estimation of a proportion in a population on the basis of the proportion in the sample. The file β€œpop2.csv” was introduced in Exercise of Chapter . This file contains information associated with the blood pressure of an imaginary population of size 100,000. The file can be found on the internet. One of the variables in the file is a factor by the name β€œgroup” that identifies levels of blood pressure. The levels of this variable are β€œHIGH”, β€œLOW”, and β€œNORMAL”. The file β€œex2.csv” contains a sample of size 𝑛 = 150 taken from the given population. This file can also be found on the internet. It contains the same variables as in the file β€œpop2.csv”. The file β€œex2.csv” corresponds in this exercise to the observed sample and the file β€œpop2.csv” corresponds to the unobserved population. Download both files to your computer and answer the following questions: Compute the proportion in the sample of those with a high level of blood pressure. Compute the proportion in the population of those with a high level of blood pressure. Simulate the sampling distribution of the sample proportion and compute its expectation. Compute the variance of the sample proportion. It is proposed in Section that the variance of the sample proportion is V(𝑃̂) = 𝑝(1 βˆ’ 𝑝)/𝑛, where 𝑝 is the probability of the event (having a high blood pressure in our case) and 𝑛 is the sample size (𝑛 = 150 in our case). Examine this proposal in the current setting.

(Footnote 13: Hint: You may use the function summary or you may note that the expression β€œvariable==level” produces a sequence with logical β€œTRUE” or β€œFALSE” entries that identify entries in the sequence β€œvariable” that obtain the value β€œlevel”.)
10.7 Summary

Glossary

Point Estimation: An attempt to obtain the best guess of the value of a population parameter. An estimator is a statistic that produces such a guess. The estimate is the observed value of the estimator.

Bias: The difference between the expectation of the estimator and the value of the parameter. An estimator is unbiased if the bias is equal to zero. Otherwise, it is biased.

Mean Square Error (MSE): A measure of the concentration of the distribution of the estimator about the value of the parameter. The mean square error of an estimator is equal to the sum of the variance and the square of the bias. If the estimator is unbiased then the mean square error is equal to the variance.

Bernoulli Random Variable: A random variable that obtains the value β€œ1” with probability 𝑝 and the value β€œ0” with probability 1 βˆ’ 𝑝. It coincides with the Binomial(1, 𝑝) distribution. Frequently, the Bernoulli random variable emerges as the indicator of the occurrence of an event.

Discuss in the Forum

Performance of estimators is assessed in the context of a theoretical model for the sampling distribution of the observations. Given a criterion for optimality, an optimal estimator is an estimator that performs better than any other estimator with respect to that criterion. A robust estimator, on the other hand, is an estimator that is not sensitive to misspecification of the theoretical model. Hence, a robust estimator may be somewhat inferior to an optimal estimator in the context of an assumed model. However, if in actuality the assumed model is not a good description of reality then the robust estimator will tend to perform better than the estimator deemed optimal.

Some say that optimal estimators should be preferred while others advocate the use of more robust estimators. What is your opinion?

When you formulate your answer to this question it may be useful to come up with an example from your own field of interest. Think of an estimation problem and possible estimators that can be used in the context of this problem. Try to identify a model that is natural to this problem and ask yourself in what ways this model may err in its attempt to describe the real situation in the estimation problem.

As an example consider estimation of the expectation of a Uniform measurement. We demonstrated that the mid-range estimator is better than the sample average if indeed the measurements emerge from the Uniform distribution. However, if the modeling assumption is wrong then this may no longer be the case. If the distribution of the measurement in actuality is not symmetric or if the distribution is more concentrated in the center than in the tails then the performance of the mid-range estimator may deteriorate. The sample average, on the other hand, is not sensitive to a lack of symmetry in the distribution.
Student Learning Objectives

A confidence interval is an estimate of an unknown parameter by a range of values. This range contains the value of the parameter with a prescribed probability, called the confidence level. In this chapter we discuss the construction of confidence intervals for the expectation and for the variance of a measurement as well as for the probability of an event. In some cases the construction will apply the Normal approximation suggested by the Central Limit Theorem. This approximation is valid when the sample size is large enough. The construction of confidence intervals for a small sample is considered in the context of Normal measurements. By the end of this chapter, the student should be able to:

Define confidence intervals and confidence levels.

Construct a confidence interval for the expectation of a measurement and for the probability of an event.

Construct a confidence interval for the expectation and for the variance of a Normal measurement.

Compute the sample size that will produce a confidence interval of a given width.

Intervals for Mean and Proportion

A confidence interval, like a point estimator, is a method for estimating the unknown value of a parameter. However, instead of producing a single number, the confidence interval is an interval of numbers. The interval of values is calculated from the data. The confidence interval is likely to include the unknown population parameter. The probability of the event of inclusion is called the confidence level of the confidence interval.
This section presents a method for the computation of confidence intervals for the expectation of a measurement and a similar method for the computation of a confidence interval for the probability of an event. These methods rely on the application of the Central Limit Theorem to the sample average in the one case, and to the sample proportion in the other case.

In the first subsection we compute a confidence interval for the expectation of the variable β€œprice” and a confidence interval for the proportion of diesel cars. The confidence intervals are computed based on the data in the file β€œcars.csv”. In the subsequent subsections we discuss the theory behind the computation of the confidence intervals and explain the meaning of the confidence level. Subsection does so with respect to the confidence interval for the expectation and Subsection with respect to the confidence interval for the proportion.

Examples of Confidence Intervals

A point estimator of the expectation of a measurement is the sample average of the variable that is associated with the measurement. A confidence interval is an interval of numbers that is likely to contain the parameter value. A natural interval to consider is an interval centered at the sample average π‘₯Μ„. The interval is set to have a width that assures the inclusion of the parameter value with the prescribed probability, namely the confidence level.

Consider the confidence interval for the expectation. The structure of the confidence interval of confidence level 95% is [π‘₯Μ„ βˆ’ 1.96 β‹… 𝑠/βˆšπ‘›, π‘₯Μ„ + 1.96 β‹… 𝑠/βˆšπ‘›], where 𝑠 is the estimated standard deviation of the measurement (namely, the sample standard deviation) and 𝑛 is the sample size. This interval may be expressed in the form:

π‘₯Μ„ Β± 1.96 β‹… 𝑠/βˆšπ‘› .
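The code itself is not reproduced in this text. A reconstruction consistent with the line-by-line description that follows (the file β€œcars.csv” and the variable β€œprice” are as used throughout the book) might be:

cars <- read.csv("cars.csv")               # read the data into a data frame
x.bar <- mean(cars$price, na.rm=TRUE)      # the sample average of the price
s <- sd(cars$price, na.rm=TRUE)            # the sample standard deviation
n <- sum(!is.na(cars$price))               # the number of non-missing values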
In the first line of code the data in the file β€œcars.csv” is stored in a data frame called β€œcars”. In the second line the average π‘₯Μ„ is computed for the variable β€œprice” in the data frame β€œcars”. This average is stored under the name β€œx.bar”. Recall that the variable β€œprice” contains 4 missing values. Hence, in order to compute the average of the non-missing values we should set a β€œTRUE” value to the argument β€œna.rm”. The sample standard deviation β€œs” is computed in the third line by the application of the function β€œsd”. We set once more the argument β€œna.rm=TRUE” in order to deal with the missing values. Finally, in the last line we store the sample size β€œn”, the number of non-missing values.

Let us compute the lower and the upper limits of the confidence interval for the expectation of the price:

## [1] 12108.47

## [1] 14305.79

The lower limit of the confidence interval turns out to be $12,108.47 and the upper limit is $14,305.79. The confidence interval is the range of values between these two numbers.

Consider, next, a confidence interval for the probability of an event. The estimate of the probability 𝑝 is 𝑝̂, the relative proportion of occurrences of the event in the sample. Again, we construct an interval about this estimate. In this case, a confidence interval of confidence level 95% is of the form [𝑝̂ βˆ’ 1.96 β‹… βˆšπ‘Μ‚(1 βˆ’ 𝑝̂)/𝑛, 𝑝̂ + 1.96 β‹… βˆšπ‘Μ‚(1 βˆ’ 𝑝̂)/𝑛], where 𝑛 is the sample size. Observe that 𝑝̂ replaces π‘₯Μ„ as the estimate of the parameter and that 𝑝̂(1 βˆ’ 𝑝̂)/𝑛 replaces 𝑠2/𝑛 as the estimate of the variance of the estimator. The confidence interval for the probability may be expressed in the form:

𝑝̂ Β± 1.96 β‹… βˆšπ‘Μ‚(1 βˆ’ 𝑝̂)/𝑛 .
As an example, let us construct a confidence interval for the proportion of car types that use diesel fuel. The variable β€œfuel.type” is a factor that records the type of fuel the car uses, either diesel or gas:

##
## diesel    gas
##     20    185

Only 20 of the 205 types of cars run on diesel in this data set. The point estimate of the probability of such car types and the confidence interval for this probability are:
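A sketch of the computations (the table above can be produced with β€œtable(cars$fuel.type)”; 205 is the total number of car types):

p.hat <- mean(cars$fuel.type == "diesel")    # the sample proportion of diesel cars
p.hat
p.hat - 1.96*sqrt(p.hat*(1-p.hat)/205)       # lower limit of the confidence interval
p.hat + 1.96*sqrt(p.hat*(1-p.hat)/205)       # upper limit of the confidence interval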
Confidence Intervals for the Mean

In the previous subsection we computed a confidence interval for the expected price of a car and a confidence interval for the probability that a car runs on diesel. In this subsection we explain the theory behind the construction of confidence intervals for the expectation. The theory provides insight into the way confidence intervals should be interpreted. In the next subsection we will discuss the theory behind the construction of confidence intervals for the probability of an event.

Assume one is interested in a confidence interval for the expectation of a measurement 𝑋. For a sample of size 𝑛, one may compute the sample average 𝑋̄, which is the point estimator for the expectation. The expected value of the sample average is the expectation E(𝑋), for which we are trying to produce the confidence interval. Moreover, the variance of the sample average is V(𝑋)/𝑛, where V(𝑋) is the variance of a single measurement and 𝑛 is the sample size.

The construction of a confidence interval for the expectation relies on the Central Limit Theorem and on estimation of the variance of the measurement. The Central Limit Theorem states that the distribution of the (standardized) sample average 𝑍 = (𝑋̄ βˆ’ E(𝑋))/√V(𝑋)/𝑛 is approximately standard Normal for a large enough sample size. The variance of the measurement can be estimated using the sample variance 𝑆2.

Suppose that we are interested in a confidence interval with a confidence level of 95%. The value 1.96 is the 0.975-percentile of the standard Normal. Therefore, about 95% of the distribution of the standardized sample average is concentrated in the range [βˆ’1.96, 1.96]:

P(βˆ’1.96 ≀ (𝑋̄ βˆ’ E(𝑋))/√V(𝑋)/𝑛 ≀ 1.96) β‰ˆ 0.95 .
The event, the probability of which is being described in the last display, states that the absolute value of the deviation of the sample average from the expectation, divided by the standard deviation of the sample average, is no more than 1.96. In other words, the distance between the sample average and the expectation is at most 1.96 units of standard deviation. One may rewrite this event in a form that puts the expectation within an interval that is centered at the sample average:

{|𝑋̄ βˆ’ E(𝑋)| ≀ 1.96 β‹… √V(𝑋)/𝑛} ⟺ {𝑋̄ βˆ’ 1.96 β‹… √V(𝑋)/𝑛 ≀ E(𝑋) ≀ 𝑋̄ + 1.96 β‹… √V(𝑋)/𝑛} .

Clearly, the probability of the latter event is (approximately) 0.95 since we are considering the same event, each time represented in a different form. The second representation states that the expectation E(𝑋) belongs to an interval about the sample average: 𝑋̄ Β± 1.96√V(𝑋)/𝑛.

This interval is, almost, the confidence interval we seek. The difficulty is that we do not know the value of the variance V(𝑋), hence we cannot compute the interval in the proposed form from the data. In order to overcome this difficulty we recall that the unknown variance may nonetheless be estimated from the data:

𝑆2 β‰ˆ V(𝑋) ⟹ √V(𝑋)/𝑛 β‰ˆ 𝑆/βˆšπ‘› ,

where 𝑆 is the sample standard deviation. When the sample size is sufficiently large, so that 𝑆 is very close to the value of the standard deviation of an observation, we obtain that the interval 𝑋̄ Β± 1.96√V(𝑋)/𝑛 and the interval 𝑋̄ Β± 1.96 β‹… 𝑆/βˆšπ‘› almost coincide. Therefore:

P(𝑋̄ βˆ’ 1.96 β‹… 𝑆/βˆšπ‘› ≀ E(𝑋) ≀ 𝑋̄ + 1.96 β‹… 𝑆/βˆšπ‘›) β‰ˆ 0.95 .
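The code for the simulation that is described in the following paragraphs is not reproduced in the original text. A sketch of it, under the assumptions stated below (Exponential(1/13000) measurements, a sample size of 201 and 100,000 iterations), might look like:

X.bar <- rep(0, 10^5)                    # storage for the sample averages
S <- rep(0, 10^5)                        # storage for the sample standard deviations
for (i in 1:10^5) {
  X <- rexp(201, 1/13000)                # a sample of 201 Exponential prices
  X.bar[i] <- mean(X)
  S[i] <- sd(X)
}
LCL <- X.bar - 1.96*S/sqrt(201)          # lower confidence limits
UCL <- X.bar + 1.96*S/sqrt(201)          # upper confidence limits
mean((13000 >= LCL) & (13000 <= UCL))    # proportion of intervals that contain E(X)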
Suppose we are interested in a confidence interval for the expected price of a car. In the simulation we assume that the distribution of the price is Exponential(1/13000). (Consequently, E(𝑋) = 13,000.) We take the sample size to be equal to 𝑛 = 201 and compute the actual probability of the confidence interval containing the value of the expectation:

## [1] 0.94441

Below we will go over the code and explain the simulation. But, before doing so, notice that the actual probability that the confidence interval contains the expectation is about 0.944, which is slightly below the nominal confidence level of 0.95. Still, quoting the nominal value as the confidence level of the confidence interval is not too far from reality.

Let us look now at the code that produced the simulation. In each iteration of the simulation a sample is generated. The sample average and standard deviation are computed and stored in the appropriate locations of the sequences β€œX.bar” and β€œS”. At the end of all the iterations the content of these two sequences represents the sampling distribution of the sample average 𝑋̄ and the sample standard deviation 𝑆, respectively.

The lower and the upper end-points of the confidence interval are computed in the next two lines of code. The lower level of the confidence interval is stored in the object β€œLCL” and the upper level is stored in β€œUCL”. Consequently, we obtain the sampling distribution of the confidence interval. This distribution is approximated by 100,000 random confidence intervals that are generated by the sampling distribution. Some of these random intervals contain the value of the expectation, namely 13,000, and some do not. The proportion of intervals that contain the expectation is the (simulated) confidence level. The last expression produces this confidence level, which turns out to be equal to about 0.944.

The last expression involves a new element, the term β€œ&”, which calls for more explanations. Indeed, let us refer to the last expression in the code. This expression involves the application of the function β€œmean”. The input to this function contains two sequences with logical values (β€œTRUE” or β€œFALSE”), separated by the character β€œ&”. The character β€œ&” corresponds to the logical β€œAND” operator. This operator produces a β€œTRUE” if a β€œTRUE” appears at both sides. Otherwise, it produces a β€œFALSE”. (Compare this operator to the operator β€œOR”, that is expressed in R with the character β€œ|”, that produces a β€œTRUE” if at least one β€œTRUE” appears on either side.) In order to clarify the behavior of the terms β€œ&” and β€œ|” consider the following example:
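The original vectors β€œa” and β€œb” are not shown in this text; the values below are assumptions chosen to be consistent with the printed output, which corresponds to the β€œ|” expression (under this assumption β€œa & b” would produce β€œTRUE FALSE FALSE FALSE”):

a <- c(TRUE, TRUE, FALSE, FALSE)    # an assumed illustrative logical sequence
b <- c(TRUE, FALSE, TRUE, FALSE)    # a second assumed logical sequence
a & b                               # TRUE only where both entries are TRUE
a | b                               # TRUE where at least one entry is TRUE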
## [1] TRUE TRUE TRUE FALSE

The term β€œ&” produces a β€œTRUE” only if parallel components in the sequences β€œa” and β€œb” both obtain the value β€œTRUE”. On the other hand, the term β€œ|” produces a β€œTRUE” if at least one of the parallel components is β€œTRUE”. Observe, also, that the output of the expression that puts either of the two terms between two sequences with logical values is a sequence of the same length (with logical components as well).

The expression β€œ(13000 >= LCL)” produces a logical sequence of length 100,000 with β€œTRUE” appearing whenever the expectation is larger than the lower level of the confidence interval. Similarly, the expression β€œ(13000 <= UCL)” produces β€œTRUE” values whenever the expectation is less than the upper level of the confidence interval. The expectation belongs to the confidence interval if the value in both expressions is β€œTRUE”. Thus, the application of the term β€œ&” to these two sequences identifies the confidence intervals that contain the expectation. The application of the function β€œmean” to a logical vector produces the relative frequency of TRUE's in the vector. In our case this corresponds to the relative frequency of confidence intervals that contain the expectation, namely the confidence level.

We calculated before the confidence interval [12108.47, 14305.79] for the expected price of a car. This confidence interval was obtained via the application of the formula for the construction of confidence intervals with a 95% confidence level to the variable β€œprice” in the data frame β€œcars”. Casually speaking, people frequently refer to such an interval as an interval that contains the expectation with probability of 95%.

However, one should be careful when interpreting the confidence level as a probabilistic statement. The probability computations that led to the method for constructing confidence intervals were carried out in the context of the sampling distribution. Therefore, probability should be interpreted in the context of all data sets that could have emerged and not in the context of the given data set. No probability is assigned to the statement β€œThe expectation belongs to the interval [12108.47, 14305.79]”. The probability is assigned to the statement β€œThe expectation belongs to the interval 𝑋̄ Β± 1.96 β‹… 𝑆/βˆšπ‘›β€, where 𝑋̄ and 𝑆 are interpreted as random variables. Therefore the statement that the interval [12108.47, 14305.79] contains the expectation with probability of 95% is meaningless. What is meaningful is the statement that the given interval was constructed using a procedure that produces, when applied to random samples, intervals that contain the expectation with the assigned probability.

Confidence Intervals for a Proportion

The next issue is the construction of a confidence interval for the probability of an event. Recall that a probability 𝑝 of some event can be estimated by the observed relative frequency of the event in the sample, denoted 𝑃̂. The estimation is associated with the Bernoulli random variable 𝑋, that obtains the value 1 when the event occurs and the value 0 when it does not. In the estimation problem 𝑝 is the expectation of 𝑋 and 𝑃̂ is the sample average of this measurement. With this formulation we may relate the problem of the construction of a confidence interval for 𝑝 to the problem of constructing a confidence interval for the expectation of a measurement. The latter problem was dealt with in the previous subsection.
Specifically, the discussion regarding the steps in the construction – starting with an application of the Central Limit Theorem in order to produce an interval that depends on the sample average and its variance and proceeding by the replacement of the unknown variance by its estimate – still applies and may be taken as is. However, in the specific case we have a particular expression for the variance of the estimate 𝑃̂:

V(𝑃̂) = 𝑝(1 βˆ’ 𝑝)/𝑛 β‰ˆ 𝑃̂(1 βˆ’ 𝑃̂)/𝑛 .

The tradition is to estimate this variance by using the estimator 𝑃̂ for the unknown 𝑝 instead of using the sample variance. The resulting confidence interval of confidence level 0.95 takes the form:

𝑃̂ Β± 1.96 β‹… βˆšπ‘ƒΜ‚(1 βˆ’ 𝑃̂)/𝑛 .

Let us run a simulation in order to assess the confidence level of the confidence interval for the probability. Assume that 𝑛 = 205 and 𝑝 = 0.12. The simulation we run is very similar to the simulation of Subsection . In the first stage we produce the sampling distribution of 𝑃̂ (stored in the sequence β€œP.hat”) and in the second stage we compute the relative frequency in the simulation of the intervals that contain the actual value of 𝑝 that was used in the simulation:

## [1] 0.95182

In this simulation we obtained that the actual confidence level is approximately 0.952, which is slightly above the nominal confidence level of 0.95.

The formula 𝑋̄ Β± 1.96 β‹… 𝑆/βˆšπ‘› that is used for a confidence interval for the expectation and the formula 𝑃̂ Β± 1.96 β‹… {𝑃̂(1 βˆ’ 𝑃̂)/𝑛}1/2 for the probability both refer to confidence intervals with a confidence level of 95%. If one is interested in a different confidence level then the width of the confidence interval should be adjusted: a wider interval for a higher confidence level and a narrower interval for a smaller confidence level.

Specifically, if we examine the derivation of the formulae for confidence intervals we may notice that the confidence level is used to select the number 1.96, which is the 0.975-percentile of the standard Normal distribution (1.96 = qnorm(0.975)). The selected number satisfies that the interval [βˆ’1.96, 1.96] contains 95% of the standard Normal distribution by leaving out 2.5% in each of the two tails. For a different confidence level the number 1.96 should be replaced by a different number. For example, if one is interested in a 90% confidence level then one should use 1.645, which is the 0.95-percentile of the standard Normal distribution (qnorm(0.95)), leaving out 5% in each of the two tails. The resulting confidence interval for an expectation is 𝑋̄ Β± 1.645 β‹… 𝑆/βˆšπ‘› and the confidence interval for a probability is 𝑃̂ Β± 1.645 β‹… {𝑃̂(1 βˆ’ 𝑃̂)/𝑛}1/2.
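Returning to the coverage simulation for the proportion described above, a vectorized sketch (the use of β€œrbinom” to generate the Binomial counts in one call is an assumption; the original code may have used an explicit loop) might be:

P.hat <- rbinom(10^5, 205, 0.12)/205             # sampling distribution of the proportion
LCL <- P.hat - 1.96*sqrt(P.hat*(1-P.hat)/205)    # lower confidence limits
UCL <- P.hat + 1.96*sqrt(P.hat*(1-P.hat)/205)    # upper confidence limits
mean((0.12 >= LCL) & (0.12 <= UCL))              # the simulated confidence level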
Intervals for Normal Measurements

In the construction of the confidence intervals in the previous section it was assumed that the sample size is large enough. This assumption was used both in the application of the Central Limit Theorem and in the substitution of the unknown variance by its estimated value. For a small sample size the reasoning that was applied before may no longer be valid. The Normal distribution may not be a good enough approximation of the sampling distribution of the sample average and the sample variance may differ substantially from the actual value of the measurement variance.

In general, making inference based on small samples requires more detailed modeling of the distribution of the measurements. In this section we will make the assumption that the distribution of the measurements is Normal. This assumption may not fit all scenarios. For example, the Normal distribution is a poor model for the price of a car, which is better modeled by the Exponential distribution. Hence, a blind application of the methods developed in this section to variables such as the price when the sample size is small may produce dubious outcomes and is not recommended.

When the distribution of the measurements is Normal then the method discussed in this section will produce valid confidence intervals for the expectation of the measurement even for a small sample size. Furthermore, we will extend the methodology to enable the construction of confidence intervals for the variance of the measurement.

Before going into the details of the methods let us present an example of inference that involves a small sample. Consider the issue of fuel consumption. Two variables in the β€œcars” data frame describe the fuel consumption. The first, β€œcity.mpg”, reports the number of miles per gallon when the car is driven in urban conditions and the second, β€œhighway.mpg”, reports the miles per gallon in highway conditions. Typically, driving in city conditions requires more stopping and change of speed and is less efficient in terms of fuel consumption. Hence, one expects to obtain a reduced number of miles per gallon when driving in urban conditions compared to the number when driving in highway conditions.

For each car type we calculate the difference variable that measures the difference between the number of miles per gallon in highway conditions and the number in urban conditions. The cars are sub-divided between cars that run on diesel and cars that run on gas. Our concern is to estimate, for each fuel type, the expectation of the difference variable and to estimate the variance of that variable. In particular, we are interested in the construction of a confidence interval for the expectation and a confidence interval for the variance.

Box plots of the difference in fuel consumption between highway and urban conditions are presented in Figure . The box plot on the left hand side corresponds to cars that run on diesel and the box plot on the right hand side corresponds to cars that run on gas. Recall that 20 of the 205 car types use diesel and the other 185 car types use gas. One may suspect that the fuel consumption characteristics vary between the two types of fuel. Indeed, the measurement tends to have slightly higher values for vehicles that use gas.
FIGURE 11.1: Box Plots of Differences in MPG

We conduct inference for each fuel type separately. However, since the sample size for cars that run on diesel is only 20, one may have concerns regarding the application of methods that assume a large sample size to a sample size this small.

Confidence Intervals for a Normal Mean

Consider the construction of a confidence interval for the expectation of a Normal measurement. In the previous section, when dealing with the construction of a confidence interval for the expectation, we exploited the Central Limit Theorem in order to identify that the distribution of the standardized sample average (𝑋̄ βˆ’ E(𝑋))/√V(𝑋)/𝑛 is, approximately, standard Normal. Afterwards, we substituted the standard deviation of the measurement by the sample standard deviation 𝑆, which was an accurate estimator of the former due to the magnitude of the sample size.

In the case where the measurements themselves are Normally distributed one can identify the exact distribution of the standardized sample average, with the sample variance substituting the variance of the measurement: (𝑋̄ βˆ’ E(𝑋))/(𝑆/βˆšπ‘›). This specific distribution is called the Student's 𝑑-distribution, or simply the 𝑑-distribution.

The 𝑑-distribution is bell shaped and symmetric. Overall, it looks like the standard Normal distribution but it has wider tails. The 𝑑-distribution is characterized by a parameter called the number of degrees of freedom. In the current setting, where we deal with the standardized sample average (with the sample variance substituting the variance of the measurement) the number of degrees of freedom equals the number of observations associated with the estimation of the variance, minus 1. Hence, if the sample size is 𝑛 and if the measurement is Normally distributed then the standardized sample average (with 𝑆 substituting the standard deviation of the measurement) has a 𝑑-distribution on (𝑛 βˆ’ 1) degrees of freedom. We use 𝑑(π‘›βˆ’1) to denote this 𝑑-distribution.

The R system contains functions for the computation of the density, the cumulative probability function and the percentiles of the 𝑑-distribution, as well as for the simulation of a random sample from this distribution. Specifically, the function β€œqt” computes the percentiles of the 𝑑-distribution. The first argument to the function is a probability and the second argument is the number of degrees of freedom. The output of the function is the percentile associated with the probability of the first argument. Namely, it is a value such that the probability that the 𝑑-distribution is below the value is equal to the probability in the first argument.

For example, let β€œn” be the sample size. The output of the expression β€œqt(0.975,n-1)” is the 0.975-percentile of the 𝑑-distribution on (𝑛 βˆ’ 1) degrees of freedom. By definition, 97.5% of the 𝑑-distribution is below this value and 2.5% is above it. The symmetry of the 𝑑-distribution implies that 2.5% of the distribution is below the negative of this value. The middle part of the distribution is bracketed by these two values: [βˆ’qt(0.975,n-1), qt(0.975,n-1)], and it contains 95% of the distribution. Summarizing the above claims in a single formula produces the statement:

P(βˆ’qt(0.975,n-1) ≀ (𝑋̄ βˆ’ E(𝑋))/(𝑆/βˆšπ‘›) ≀ qt(0.975,n-1)) = 0.95 .
Notice that the equation associated with the probability is not an approximation but an exact relation. Rewriting the event described in the probability in the form of a confidence interval produces

𝑋̄ Β± qt(0.975,n-1) β‹… 𝑆/βˆšπ‘›

as a confidence interval for the expectation of the Normal measurement, with a confidence level of 95%. The structure of the confidence interval for the expectation of a Normal measurement is essentially identical to the structure proposed in the previous section. The only difference is that the number 1.96, the 0.975-percentile of the standard Normal distribution, is substituted by the corresponding percentile of the 𝑑-distribution.
Consider the construction of a confidence interval for the expected difference in fuel consumption between highway and urban driving conditions. In order to save writing we created two new variables: a factor called β€œfuel” that contains the data on the fuel type of each car, and a numerical vector called β€œdif.mpg” that contains the difference between highway and city fuel consumption for each car type.

We are interested in confidence intervals based on the data stored in the variable β€œdif.mpg”. One confidence interval will be associated with the level β€œdiesel” of the factor β€œfuel” and the other will be associated with the level β€œgas” of the same factor.

In order to compute these confidence intervals we need to compute, for each level of the factor β€œfuel”, the sample average and the sample standard deviation of the data points of the variable β€œdif.mpg” that are associated with that level. It is convenient to use the function β€œtapply” for this task. This function uses three arguments. The first argument is the sequence of values over which we want to carry out some computation. The second argument is a factor. The third argument is the name of a function that is used for the computation. The function β€œtapply” applies the function in the third argument to each sub-collection of values of the first argument. The sub-collections are determined by the levels of the second argument. Sounds complex, but it is straightforward enough to apply:
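The following sketch covers both the creation of the two variables and the two applications of β€œtapply”. It assumes that the data frame β€œcars” has been read from β€œcars.csv” and that the relevant columns are named β€œfuel.type”, β€œhighway.mpg” and β€œcity.mpg” (the column name β€œfuel.type” is an assumption based on the description above). Only the standard deviations are printed:

fuel <- factor(cars$fuel.type)                # assumed column name for the fuel type
dif.mpg <- cars$highway.mpg - cars$city.mpg   # highway minus city miles-per-gallon
x.bar <- tapply(dif.mpg, fuel, mean)          # sample averages, by fuel type
s <- tapply(dif.mpg, fuel, sd)                # sample standard deviations, by fuel type
s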
## diesel gas
## 2.781045 1.433607

Sample averages were computed in the first application of the function β€œtapply”: one average for cars that run on diesel and one for cars that run on gas. In both cases the average corresponds to the difference in fuel consumption. Similarly, the standard deviations were computed in the second application of the function. We obtain that the point estimates of the expectation for diesel and gas cars are 4.45 and 5.648649, respectively, and the point estimates of the standard deviation of the variable are 2.781045 and 1.433607. Let us compute the confidence interval for each type of fuel:
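A sketch of the computation; the sample sizes are obtained from the data, and only the output of the last expression, the upper boundaries, is shown below:

n <- tapply(dif.mpg, fuel, length)    # sample sizes: 20 for diesel, 185 for gas
x.bar + qt(0.025, n-1) * s/sqrt(n)    # lower boundaries of the intervals
x.bar + qt(0.975, n-1) * s/sqrt(n)    # upper boundaries of the intervals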
## diesel gas
## 5.751569 5.856598

The objects β€œx.bar” and β€œs” contain the sample averages and sample standard deviations, respectively. Both are sequences of length two, with the first component referring to β€œdiesel” and the second component referring to β€œgas”. The object β€œn” contains the two sample sizes, 20 for β€œdiesel” and 185 for β€œgas”. The expression next to last computes the lower boundary of each of the confidence intervals and the last expression computes the upper boundaries. The confidence interval for the expected difference in diesel cars is [3.148431, 5.751569] and the confidence interval for cars using gas is [5.440699, 5.856598].

The 0.975-percentiles of the 𝑑-distributions are computed with the expression β€œqt(0.975,n-1)”:

## [1] 2.093024 1.972941

The second argument of the function β€œqt” is a sequence with two components, the number 19 and the number 184. Accordingly, the first position in the output of the function is the percentile associated with 19 degrees of freedom and the second position is the percentile associated with 184 degrees of freedom.

Compare the resulting percentiles to the 0.975-percentile of the standard Normal distribution, which is essentially equal to 1.96. When the sample size is small, 20 for example, the percentile of the 𝑑-distribution is noticeably larger than the percentile of the standard Normal. However, for a larger sample size the percentiles more or less coincide. It follows that for a large sample the method proposed in the previous section and the method discussed in this subsection produce essentially the same confidence intervals.

11.3.2 Confidence Intervals for a Normal Variance

The next task is to compute confidence intervals for the variance of a Normal measurement. The main idea in the construction of a confidence interval is to identify the distribution of a random variable associated with the parameter of interest. A region that contains 95% of the distribution of the random variable (or, more generally, the central part of the distribution of probability equal to the confidence level) is identified. The confidence interval results from the reformulation of the event associated with that region. The new formulation puts the parameter between a lower limit and an upper limit. These lower and upper limits are computed from the data and they form the boundaries of the confidence interval.

We start with the sample variance, 𝑆² = βˆ‘α΅’(𝑋ᡒ βˆ’ 𝑋̄)²/(𝑛 βˆ’ 1), where the sum runs over the 𝑛 observations. The sample variance serves as a point estimator of the parameter of interest. When the measurements are Normally distributed, the random variable (𝑛 βˆ’ 1)𝑆²/𝜎² possesses a special distribution called the chi-square distribution. (Chi is the Greek letter πœ’, which is read β€œKai”.) This distribution is associated with the sum of squares of Normal variables. It is parameterized, just like the 𝑑-distribution, by a parameter called the number of degrees of freedom. This number is equal to (𝑛 βˆ’ 1) in the situation we discuss. The chi-square distribution on (𝑛 βˆ’ 1) degrees of freedom is denoted with the symbol πœ’Β²(π‘›βˆ’1).

The R system contains functions for the computation of the density, the cumulative probability function and the percentiles of the chi-square distribution, as well as for the simulation of a random sample from this distribution. Specifically, the percentiles of the chi-square distribution are computed with the aid of the function β€œqchisq”.
The first argument to the function is a probability and the second argument is the number of degrees of freedom. The output of the function is the percentile associated with the probability of the first argument. Namely, it is a value such that the probability that the chi-square distribution is below the value is equal to the probability in the first argument.

For example, let β€œn” be the sample size. The output of the expression β€œqchisq(0.975,n-1)” is the 0.975-percentile of the chi-square distribution on (𝑛 βˆ’ 1) degrees of freedom. By definition, 97.5% of the chi-square distribution is below this value and 2.5% is above it. Similarly, the expression β€œqchisq(0.025,n-1)” computes the 0.025-percentile of the chi-square distribution, with 2.5% of the distribution below this value. Notice that between these two percentiles, namely within the interval [qchisq(0.025,n-1), qchisq(0.975,n-1)], lie 95% of the chi-square distribution. We may summarize that for Normal measurements:

P( qchisq(0.025,n-1) ≀ (𝑛 βˆ’ 1)𝑆²/𝜎² ≀ qchisq(0.975,n-1) ) = 0.95 .
Since the chi-square distribution is not symmetric, in order to mark the region that contains 95% of the distribution we have to compute both the 0.025- and the 0.975-percentiles of the distribution. The event associated with the 95% region is rewritten in a form that puts the parameter 𝜎² in the center:

{(𝑛 βˆ’ 1)𝑆²/qchisq(0.975,n-1) ≀ 𝜎² ≀ (𝑛 βˆ’ 1)𝑆²/qchisq(0.025,n-1)} .

The left-most and the right-most expressions in this event mark the end points of the confidence interval. The structure of the confidence interval is:

[{(𝑛 βˆ’ 1)/qchisq(0.975,n-1)} Γ— 𝑆², {(𝑛 βˆ’ 1)/qchisq(0.025,n-1)} Γ— 𝑆²] .

Consequently, the confidence interval is obtained by the multiplication of the estimator of the variance by a ratio between the number of degrees of freedom (𝑛 βˆ’ 1) and an appropriate percentile of the chi-square distribution. The percentile on the left hand side is associated with the larger probability (making the ratio smaller) and the percentile on the right hand side is associated with the smaller probability (making the ratio larger).

Consider, specifically, the confidence intervals for the variance of the measurement β€œdif.mpg” for cars that run on diesel and for cars that run on gas. Here, the sizes of the samples are 20 and 185, respectively:
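A sketch of the ratio computation, reusing the sample sizes stored in β€œn”; only the output of the second expression is shown below:

(n-1)/qchisq(0.975, n-1)   # ratios used at the lower end points
(n-1)/qchisq(0.025, n-1)   # ratios used at the upper end points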
## [1] 2.133270 1.240478

The ratios that are used in the left hand side of the intervals are 0.5783456 and 0.8234295, respectively. Both ratios are less than one. On the other hand, the ratios associated with the other end of the intervals, 2.133270 and 1.240478, are both larger than one.

Let us compute the point estimates of the variance and the associated confidence intervals. Recall that the object β€œs” contains the sample standard deviations of the difference in fuel consumption for diesel and for gas cars. The object β€œn” contains the two sample sizes:
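A sketch of the computation; squaring β€œs” produces the point estimates of the variance, and only the output of the last expression, the upper boundaries, is shown below:

s^2                                  # point estimates of the variance
((n-1)/qchisq(0.975, n-1)) * s^2     # lower boundaries of the intervals
((n-1)/qchisq(0.025, n-1)) * s^2     # upper boundaries of the intervals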
## diesel gas
## 16.499155 2.549466

The variance of the difference in fuel consumption for diesel cars is estimated to be 7.734211, with a 95%-confidence interval of [4.473047, 16.499155], and for cars that use gas the estimated variance is 2.055229, with a confidence interval of [1.692336, 2.549466].

As a final example in this section let us simulate the confidence level of a confidence interval for the expectation and of a confidence interval for the variance of a Normal measurement. In this simulation we assume that the expectation is equal to πœ‡ = 3 and the variance is equal to 𝜎² = 3Β² = 9. The sample size is taken to be 𝑛 = 20. We start by producing the sampling distribution of the sample average 𝑋̄ and of the sample standard deviation 𝑆, and then compute the relative frequency with which the 𝑑-based interval covers the true expectation:
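A sketch of the simulation; the number of replications, 10^5, is an assumption:

mu <- 3; sig <- 3; n <- 20
x.bar <- rep(0, 10^5)   # will hold simulated sample averages
s <- rep(0, 10^5)       # will hold simulated sample standard deviations
for (i in 1:10^5) {
  x <- rnorm(n, mu, sig)
  x.bar[i] <- mean(x)
  s[i] <- sd(x)
}
lower <- x.bar + qt(0.025, n-1) * s/sqrt(n)   # lower end points of the intervals
upper <- x.bar + qt(0.975, n-1) * s/sqrt(n)   # upper end points of the intervals
mean((lower <= mu) & (mu <= upper))           # relative frequency of coverage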
## [1] 0.94934

The nominal confidence level of the interval is 95%, which is practically identical to the confidence level that was computed in the simulation. The confidence interval for the variance is obtained in a similar way. The only difference is that the end points of the intervals are computed with the chi-square based formula for the variance.
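A sketch that continues the simulation above; since the chi-square relation is exact for Normal measurements, the computed relative frequency of coverage should again come out close to 0.95:

lower.var <- (n-1)*s^2/qchisq(0.975, n-1)   # lower end points of the intervals
upper.var <- (n-1)*s^2/qchisq(0.025, n-1)   # upper end points of the intervals
mean((lower.var <= sig^2) & (sig^2 <= upper.var))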
11.4 Choosing the Sample Size

One of the more important contributions of Statistics to research is providing guidelines for the design of experiments and surveys. A well planned experiment may produce accurate enough answers to the research questions while optimizing the use of resources. On the other hand, poorly planned trials may fail to produce such answers or may waste valuable resources. Unfortunately, in this book we do not cover the subject of experiment design. Still, we would like to give a brief discussion of a narrow aspect of design: the selection of the sample size.

An important consideration at the stage of planning an experiment or a survey is the number of observations that should be collected. Indeed, a larger sample size is usually preferable from the statistical point of view. However, an increase in the sample size typically involves an increase in expenses. Thereby, one would prefer to collect the minimal number of observations that is still sufficient in order to reach a valid conclusion.

As an example, consider an opinion poll aimed at the estimation of the proportion in the population of those that support a specific candidate who considers running for office. How large must the sample be in order to assure, with high probability, that the percentage of supporters in the sample is within 5% of the percentage in the population? Within 2.5%?

A natural way to address this problem is via a confidence interval for the proportion. If the range of the confidence interval is no more than 0.05 (or 0.025 in the other case) then, with a probability equal to the confidence level, it is assured that the population relative frequency is within the given distance from the sample proportion.

Consider a confidence level of 95%. Recall that the structure of the confidence interval for the proportion is 𝑃̂ Β± 1.96 β‹… √(𝑃̂(1 βˆ’ 𝑃̂)/𝑛). The range of the confidence interval is 1.96 β‹… √(𝑃̂(1 βˆ’ 𝑃̂)/𝑛). How large should 𝑛 be in order to guarantee that the range is no more than 0.05? Or no more than 0.025?

The answer to this question depends on the magnitude of 𝑃̂(1 βˆ’ 𝑃̂), which is a random quantity. Luckily, one may observe that the maximal value of the quadratic function 𝑓(𝑝) = 𝑝(1 βˆ’ 𝑝) is 1/4. It follows that

1.96 β‹… √(𝑃̂(1 βˆ’ 𝑃̂)/𝑛) ≀ 1.96 β‹… √(1/(4𝑛)) = 0.98/βˆšπ‘› .

The requirement that 0.98/βˆšπ‘› is no more than 0.05 is met when 𝑛 β‰₯ (0.98/0.05)Β² = 384.16, namely when 𝑛 = 385. For a range of no more than 0.025 the requirement is 𝑛 β‰₯ (0.98/0.025)Β² = 1536.64.
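The arithmetic can be checked in R; the constant 0.98 = 1.96/2 comes from the worst-case bound above:

ceiling((0.98/0.05)^2)    # minimal n for a range of 0.05: 385
ceiling((0.98/0.025)^2)   # minimal n for a range of 0.025: 1537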
Consequently, 𝑛 = 1537 will do. Increasing the accuracy by 50%, namely cutting the range in half, requires a sample size that is 4 times as large. More examples that involve the selection of the sample size will be considered as part of the homework.

(The maximum of 𝑓(𝑝) = 𝑝(1 βˆ’ 𝑝) can be verified with calculus: the derivative is 𝑓′(𝑝) = 1 βˆ’ 2𝑝. Solving 𝑓′(𝑝) = 0 produces 𝑝 = 1/2 as the maximizer. Plugging this value into the function gives 1/4 as the maximal value.)

11.5 Exercises

Exercise 11.1. This exercise deals with an experiment that was conducted among students. The aim of the experiment was to assess the effect of rumors and prior reputation of the instructor on the evaluation of the instructor by her students. The experiment was conducted by Towler and Dipboye (Towler, A. and Dipboye, R. L. (1998). The effect of instructor reputation and need for cognition on student behavior. Poster presented at the American Psychological Society conference, May 1998). This case study is taken from the Rice Virtual Lab in Statistics. More details can be found in the case study β€œInstructor Reputation and Teacher Ratings” that is presented on that site.

The experiment involved 49 students who were randomly assigned to one of two conditions. Before viewing the lecture, students were given one of two β€œsummaries” of the instructor’s prior teaching evaluations. The first type of summary, i.e. the first condition, described the lecturer as a charismatic instructor. The second type of summary (second condition) described the lecturer as a punitive instructor. We code the first condition as β€œC” and the second condition as β€œP”. All subjects watched the same twenty-minute lecture given by the exact same lecturer. Following the lecture, subjects rated the lecturer.

The outcomes are stored in the file β€œteacher.csv”. Download this file to your computer and store it in the working directory of R. Read the content of the file into an R data frame. Produce a summary of the content of the data frame and answer the following questions:

1. Identify, for each variable in the file β€œteacher.csv”, the name and the type of the variable (factor or numeric).
2. Estimate the expectation and the standard deviation, among all students, of the rating of the teacher.
3. Estimate the expectation and the standard deviation of the rating only for students who were given a summary that describes the teacher as charismatic.
4. Construct a confidence interval of 99% confidence level for the expectation of the rating among students who were given a summary that describes the teacher as charismatic. (Assume the ratings have a Normal distribution.)
5. Construct a confidence interval of 90% confidence level for the variance of the rating among students who were given a summary that describes the teacher as charismatic. (Assume the ratings have a Normal distribution.)

Exercise 11.2. Twenty observations are used in order to construct a confidence interval for the expectation. In one case, the construction is based on the Normal approximation of the sample average, and in the other case it is constructed under the assumption that the observations are Normally distributed. Assume that in reality the measurement is distributed Exponential(1/4).

1. Compute, via simulation, the actual confidence level for the first case of a confidence interval with a nominal confidence level of 95%.
2. Compute, via simulation, the actual confidence level for the second case of a confidence interval with a nominal confidence level of 95%.
3. Which of the two approaches would you prefer?

Exercise 11.3. Insurance companies are interested in knowing the population percent of drivers who always buckle up before riding in a car.

1. When designing a study to determine this proportion, what is the minimal sample size that is required in order to be 99% confident that the population proportion is estimated to within an error of 0.03?
2. Suppose that the insurance companies did conduct the study by surveying 400 drivers. They found that 320 of the drivers claim to always buckle up. Produce an 80% confidence interval for the population proportion of drivers who claim to always buckle up.

11.6 Summary

Glossary

Confidence Interval: An interval that is most likely to contain the population parameter.

Confidence Level: The sampling probability that random confidence intervals contain the parameter value. The confidence level of an observed interval indicates that it was constructed using a formula that produces, when applied to random samples, such random intervals.

t-Distribution: A bell-shaped distribution that resembles the standard Normal distribution but has wider tails. The distribution is characterized by a positive parameter called the degrees of freedom.

Chi-Square Distribution: A distribution associated with the sum of squares of Normal random variables. The distribution obtains only positive values and it is not symmetric. The distribution is characterized by a positive parameter called the degrees of freedom.

Discuss in the Forum

When large samples are at hand one may make fewer a-priori assumptions regarding the exact form of the distribution of the measurement. General limit theorems, such as the Central Limit Theorem, may be used in order to establish the validity of the inference under general conditions. On the other hand, for small sample sizes one must make strong assumptions with respect to the distribution of the observations in order to justify the validity of the procedure.

It may be claimed that making statistical inferences when the sample size is small is worthless. How can one trust conclusions that depend on assumptions regarding the distribution of the observations, assumptions that cannot be verified? What is your opinion?

For illustration, consider the construction of a confidence interval. The confidence interval for the expectation is implemented with a specific formula. The confidence level of the interval is provable when the sample size is large, or when the sample size is small but the observations have a Normal distribution. If the sample size is small and the observations have a distribution different from the Normal, then the nominal confidence level may not coincide with the actual confidence level.
12 Testing Hypothesis

12.1 Student Learning Objectives

Hypothesis testing emerges as a crucial component in decision making where one of two competing options needs to be selected. Statistical hypothesis testing provides formal guidelines for making such a selection. This chapter deals with the formulation of statistical hypothesis testing and describes the associated decision rules. Specifically, we consider hypothesis testing in the context of the expectation of a measurement and in the context of the probability of an event. In subsequent chapters we deal with hypothesis testing in the context of other parameters as well. By the end of this chapter, the student should be able to:

- Formulate statistical hypotheses for testing.
- Test, based on a sample, hypotheses regarding the expectation of the measurement and the probability of an event.
- Identify the limitations of statistical hypothesis testing and the danger of misinterpretation of the test’s conclusions.

12.2 The Theory of Hypothesis Testing

Statistical inference is used in order to detect and characterize meaningful phenomena that may be hidden in an environment contaminated by random noise. Hypothesis testing is an important step, typically the first, in the process of making inferences. In this step one tries to answer the question: β€œIs there a phenomenon at all?” The basic approach is to determine whether the observed data can or cannot be reasonably explained by a model of randomness that does not involve the phenomenon.

In this section we introduce the structure and characteristics of statistical hypothesis testing. We start with an informal application of a statistical test and proceed with formal definitions. In the next section we discuss in more detail the testing of hypotheses on the expectation of a measurement and the testing of hypotheses on the probability of an event. More examples are considered in subsequent chapters.

12.2.1 An Example of Hypothesis Testing

The variable β€œprice” in the file β€œcars.csv” contains data on the prices of different types of cars that were sold in the United States during 1985. The average price of a car back then, the average of the variable β€œprice”, was $13,207. One may be interested in the question: do Americans today pay a different price for cars than they used to pay in the 80’s? Has the price of cars changed significantly since 1985?

The average price of a car in the United States in 2009 was $27,958. Clearly, this figure is higher than $13,207. However, in order to produce a fair answer to the question we have to take into account that, due to inflation, the prices of all products went up during these years. A more meaningful comparison will involve the current prices of cars in terms of 1985 Dollars. Indeed, if we take inflation into account then we get that, on average, the cost of today’s cars corresponds to an average price of $13,662 in 1985 values. This price is still higher than the prices in 1985, but not by as much. The question we are asking is: β€œIs the difference between $13,207 and $13,662 significant or not?”

In order to give a statistical answer to this question we carry out a statistical test. The specific test is conducted with the aid of the function β€œt.test”. Later we will discuss in more detail some of the arguments that may be used in this function.
Currently, we simply apply it to the data stored in the variable β€œprice” in order to test whether the expected price is different from $13,662, the average price of a car in 2009 adjusted for inflation. (The interpretation of adjusting prices for inflation is that our comparison will correspond to changes in the price of cars, relative to other items that enter into the computation of the Consumer Price Index.)
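A sketch of the call; reading the file assumes that β€œcars.csv” is stored in the working directory:

cars <- read.csv("cars.csv")     # read the data into a data frame
t.test(cars$price, mu = 13662)   # test H0: E(X) = 13662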
##
##  One Sample t-test
##
## data:  cars$price
## t = -0.8115, df = 200, p-value = 0.4181
## alternative hypothesis: true mean is not equal to 13662
## 95 percent confidence interval:
## 12101.80 14312.46
## sample estimates:
## mean of x
## 13207.13

The data in the file β€œcars.csv” is read into a data frame that is given the name β€œcars”. Afterwards, the data on prices of car types in 1985 is entered as the first argument to the function β€œt.test”. The other argument is the expected value that we want to test, the current average price of cars given in terms of 1985 Dollar values. The output of the function is reported under the title β€œOne Sample t-test”.

Let us read the report from the bottom up. The bottom part of the report describes the confidence interval and the point estimate of the expected price of a car in 1985, based on the given data. Indeed, the last line reports the sample average of the price, which is equal to 13,207.13. This number, the average of the 201 non-missing values of the variable β€œprice”, serves as the estimate of the expected price of a car in 1985. The 95% confidence interval of the expectation, the interval [12101.80, 14312.46], is presented on the 4th line from the bottom. (This is essentially the confidence interval for the expectation that was computed in the previous chapter. The interval computed there, [12108.47, 14305.79], is not identical to the interval that appears in the report. The reason for the discrepancy is that we previously used the 0.975-percentile of the Normal distribution, 1.96, whereas the confidence interval computed here uses the 0.975-percentile of the 𝑑-distribution on 201 βˆ’ 1 = 200 degrees of freedom. The latter is equal to 1.971896. Nonetheless, for all practical purposes, the two confidence intervals are the same.)

The information relevant to conducting the statistical test itself is given in the upper part of the report. Specifically, it is reported that the data in β€œcars$price” is used in order to carry out the test. Based on this data a test statistic is computed and obtains the value of β€œt = -0.8115”. This statistic is associated with the 𝑑-distribution with β€œdf = 200” degrees of freedom. The last quantity that is being reported is denoted the 𝑝-value and it obtains the value β€œp-value = 0.4181”.

The test may be carried out with the aid of the value of the 𝑑 statistic or, more directly, using the 𝑝-value. Currently we will use the 𝑝-value. The test itself examines the hypothesis that the expected price of a car in 1985 was equal to $13,662, the average price of a car in 2009 given in 1985 values. This hypothesis is called the null hypothesis. The alternative hypothesis is that the expected price of a car in 1985 was not equal to that figure. The specification of the alternative hypothesis is reported on the third line of the output of the function β€œt.test”.

One may decide between the two hypotheses on the basis of the size of the 𝑝-value. The rule of thumb is to reject the null hypothesis, and thus accept the alternative hypothesis, if the 𝑝-value is less than 0.05. In the current example the 𝑝-value is equal to 0.4181, which is larger than 0.05.
Consequently, we may conclude that the expected price of a car in 1985 was not significantly different from the current price of a car. In the rest of this section we give a more rigorous explanation of the theory and practice of statistical hypothesis testing.

12.2.2 The Structure of a Statistical Test of Hypotheses

The initial step in statistical inference in general, and in statistical hypothesis testing in particular, is the formulation of the statistical model and the identification of the parameter/s that should be investigated. In the current situation the statistical model may correspond to the assumption that the data in the variable β€œprice” are an instance of a random sample (of size 𝑛 = 201). The parameter that we want to investigate is the expectation of the measurement that produced the sample. The variance of the measurement is also relevant for the investigation.

After the statistical model has been set, one may split the process of testing a statistical hypothesis into three steps: (i) formulation of the hypotheses, (ii) specification of the test, and (iii) reaching the final conclusion. The first two steps are carried out on the basis of the probabilistic characteristics of the statistical model and in the context of the sampling distribution. In principle, the first two steps may be conducted at the planning stage, prior to the collection of the observations. Only the third step involves the actual data. In the example that was considered in the previous subsection the third step was applied to the data in the variable β€œprice” using the function β€œt.test”.

Formulating the hypotheses: A statistical model involves a parameter that is the target of the investigation. In principle, this parameter may obtain any value within a range of possible values. The formulation of the hypotheses corresponds to splitting the range of values into two sub-collections: a sub-collection that emerges in response to the presence of the phenomenon and a sub-collection that emerges when the phenomenon is absent. The sub-collection of parameter values where the phenomenon is absent is called the null hypothesis and is marked β€œπ»0”. The other sub-collection, the one reflecting the presence of the phenomenon, is denoted the alternative hypothesis and is marked β€œπ»1”.

For example, consider the price of cars. Assume that the phenomenon one wishes to investigate is the change in the relative price of a car in the 80’s as compared to prices today. The parameter of interest is the expected price of cars back then, which we denote by E(𝑋). The formulation of the statement that the expected price of cars has changed is β€œE(𝑋) β‰  13,662”. This statement corresponds to the presence of the phenomenon, to a change, and is customarily defined as the alternative hypothesis. On the other hand, the situation β€œE(𝑋) = 13,662” corresponds to no change in the price of cars. Hence, this situation corresponds to the absence of the phenomenon and is denoted the null hypothesis. In summary, in order to investigate the change in the relative price of cars we may consider the null hypothesis β€œπ»0 ∢ E(𝑋) = 13,662” and test it against the alternative hypothesis β€œπ»1 ∢ E(𝑋) β‰  13,662”.

A variation in the formulation of the phenomenon can change the definition of the null and alternative hypotheses.
For example, if the intention is to investigate a rise in the price of cars then the phenomenon will correspond to the expected price in 1985 being less than $13,662. Accordingly, the alternative hypothesis should be defined as 𝐻1 ∢ E(𝑋) < 13,662, with the null hypothesis defined as 𝐻0 ∢ E(𝑋) β‰₯ 13,662. Observe that in this case an expected price larger than $13,662 relates to the phenomenon of rising (relative) prices not taking place. On the other hand, if one wants to investigate a decrease in the price then one should define the alternative hypothesis to be 𝐻1 ∢ E(𝑋) > 13,662, with the null hypothesis being 𝐻0 ∢ E(𝑋) ≀ 13,662.

The type of alternative that was considered in the example, 𝐻1 ∢ E(𝑋) β‰  13,662, is called a two-sided alternative. The other two types of alternative hypotheses that were considered thereafter, 𝐻1 ∢ E(𝑋) < 13,662 and 𝐻1 ∢ E(𝑋) > 13,662, are both called one-sided alternatives.

In summary, the formulation of the hypotheses is a reflection of the phenomenon one wishes to examine. The setting associated with the presence of the phenomenon is denoted the alternative hypothesis and the complementary setting, the setting where the phenomenon is absent, is denoted the null hypothesis.

Specifying the test: The second step in hypothesis testing involves the selection of the decision rule, i.e. the statistical test, to be used in order to decide between the two hypotheses. The decision rule is composed of a statistic and a subset of values of the statistic that correspond to the rejection of the null hypothesis. The statistic is called the test statistic and the subset of values is called the rejection region. The decision is to reject the null hypothesis (and consequently choose the alternative hypothesis) if the test statistic falls in the rejection region. Otherwise, if the test statistic does not fall in the rejection region, then the null hypothesis is selected.

Return to the example in which we test between 𝐻0 ∢ E(𝑋) = 13,662 and 𝐻1 ∢ E(𝑋) β‰  13,662. One may compute the statistic:

𝑇 = (𝑋̄ βˆ’ 13,662)/(𝑆/βˆšπ‘›) ,
where 𝑋̄ is the sample average (of the variable β€œprice”), 𝑆 is the sample standard deviation, and 𝑛 is the sample size (𝑛 = 201 in the current example).

The sample average 𝑋̄ is an estimator of the expected price of the car. In principle, the statistic 𝑇 measures the discrepancy between the estimated value of the expectation (𝑋̄) and the expected value under the null hypothesis (E(𝑋) = 13,662). This discrepancy is measured in units of the (estimated) standard deviation of the sample average. (If the variance of the measurement V(𝑋) were known, one could have used 𝑍 = (𝑋̄ βˆ’ 13,662)/√(V(𝑋)/𝑛) as a test statistic. This statistic corresponds to the discrepancy of the sample average from the null expectation in units of its standard deviation, i.e. the 𝑧-value of the sample average. Since the variance of the observation is unknown, we use an estimator of the variance (𝑆²) instead.)

If the null hypothesis 𝐻0 ∢ E(𝑋) = 13,662 is true then the sampling distribution of the sample average 𝑋̄ should be concentrated about the value 13,662. Values of the sample average much larger or much smaller than this value may serve as evidence against the null hypothesis. By the same reasoning, if the null hypothesis holds true then the values of the sampling distribution of the statistic 𝑇 should tend to be in the vicinity of 0. Values with a relatively small absolute value are consistent with the null hypothesis. On the other hand, extremely positive or extremely negative values of the statistic indicate that the null hypothesis is probably false.

It is natural to set a value 𝑐 and to reject the null hypothesis whenever the absolute value of the statistic 𝑇 is larger than 𝑐. The resulting rejection region is of the form {|𝑇| > 𝑐}. The rule of thumb, again, is to take the threshold 𝑐 to be equal to the 0.975-percentile of the 𝑑-distribution on 𝑛 βˆ’ 1 degrees of freedom, where 𝑛 is the sample size. In the current example, the sample size is 𝑛 = 201 and the percentile of the 𝑑-distribution is qt(0.975,200) = 1.971896. Consequently, the subset {|𝑇| > 1.971896} is the rejection region of the test.

A change in the hypotheses that are being tested may lead to a change in the test statistic and/or the rejection region. For example, for testing 𝐻0 ∢ E(𝑋) β‰₯ 13,662 versus 𝐻1 ∢ E(𝑋) < 13,662 one may still use the same test statistic 𝑇 as before. However, only very negative values of the statistic are inconsistent with the null hypothesis. It turns out that the rejection region in this case is of the form {𝑇 < βˆ’1.652508}, where qt(0.05,200) = -1.652508 is the 0.05-percentile of the 𝑑-distribution on 200 degrees of freedom. On the other hand, the rejection region for testing between 𝐻0 ∢ E(𝑋) ≀ 13,662 and 𝐻1 ∢ E(𝑋) > 13,662 is {𝑇 > 1.652508}. In this case, qt(0.95,200) = 1.652508 is the 0.95-percentile of the 𝑑-distribution on 200 degrees of freedom.

Selecting the test statistic and deciding which rejection region to use specifies the statistical test and completes the second step.

Reaching a conclusion: After the stage is set, all that is left is to apply the test to the observed data. This is done by computing the observed value of the test statistic and checking whether or not the observed value belongs to the rejection region. If it does belong to the rejection region then the decision is to reject the null hypothesis. Otherwise, if the statistic does not belong to the rejection region, then the decision is to accept the null hypothesis.
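The three rejection thresholds quoted above are easy to reproduce:

qt(0.975, 200)   # 1.971896, threshold for the two-sided alternative
qt(0.05, 200)    # -1.652508, threshold for H1: E(X) < 13662
qt(0.95, 200)    # 1.652508, threshold for H1: E(X) > 13662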
Return to the example of testing the price of car types. The observed value of the 𝑇 statistic is part of the output of the application of the function β€œt.test” to the data. The value is β€œt = -0.8115”. As an exercise, let us recompute the value of the 𝑇 statistic directly from the data:
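A sketch of the computation; the β€œna.rm” and β€œis.na” handling accounts for the 4 missing price values mentioned below:

x.bar <- mean(cars$price, na.rm = TRUE)   # sample average
s <- sd(cars$price, na.rm = TRUE)         # sample standard deviation
n <- sum(!is.na(cars$price))              # number of non-missing observations
(x.bar - 13662)/(s/sqrt(n))               # the T statistic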
The observed value of the sample average is π‘₯Μ„ = 13207.13 and the observed value of the sample standard deviation is 𝑠 = 7947.066. The sample size (due to having 4 missing values) is 𝑛 = 201. The formula for the computation of the test statistic in this example is 𝑑 = [π‘₯Μ„ βˆ’ 13,662]/[𝑠/βˆšπ‘›]. Plugging the sample size and the computed values of the sample average and standard deviation into this formula produces:

## [1] -0.8114824

This value, after rounding, is equal to the value β€œt = -0.8115” that is reported in the output of the function β€œt.test”.

The critical threshold for the absolute value of the 𝑇 statistic on 201 βˆ’ 1 = 200 degrees of freedom is qt(0.975,200) = 1.971896. Since the absolute observed value (|𝑑| = 0.8114824) is less than the threshold, the value of the statistic does not belong to the rejection region (which is composed of absolute values larger than the threshold). Consequently, we accept the null hypothesis. This null hypothesis declares that the expected price of a car was equal to the current expected price (after adjusting for the change in the Consumer Price Index).
12.2.3 Error Types and Error Probabilities

The 𝑇 statistic was proposed for testing a change in the price of a car. This statistic measures the discrepancy between the sample average price of a car and the expected value of the sample average, where the expectation is computed under the null hypothesis. The structure of the rejection region of the test is {|𝑇| > 𝑐}, where 𝑐 is an appropriate threshold. In the current example the value of the threshold 𝑐 was set to be equal to qt(0.975,200) = 1.971896. In general, the specification of the threshold 𝑐 depends on the error probabilities that are associated with the test. In this section we describe these error probabilities.

The process of making decisions may involve errors. In the case of hypothesis testing one may specify two types of error. On the one hand, the case may be that the null hypothesis is correct (in the example, E(𝑋) = 13,662) but the data is such that the null hypothesis is rejected (here, |𝑇| > 1.971896). This error is called a Type I error. A different type of error occurs when the alternative hypothesis holds (E(𝑋) β‰  13,662) but the null hypothesis is not rejected (|𝑇| ≀ 1.971896). This other type of error is called a Type II error. A summary of the types of errors can be found in Table 12.1:

TABLE 12.1: Error Types

              𝐻0 is true         𝐻1 is true
Accept 𝐻0     Correct decision   Type II error
Reject 𝐻0     Type I error       Correct decision
In statistical testing of hypotheses the two types of error are not treated symmetrically. Rather, making a Type I error is considered more severe than making a Type II error. Consequently, the test’s decision rule is designed so as to assure an acceptable probability of making a Type I error. Reducing the probability of a Type II error is desirable, but is of secondary importance.

Indeed, in the example that deals with the price of car types the threshold was set as high as qt(0.975,200) = 1.971896 in order to reject the null hypothesis. It is not sufficient that the sample average is not equal to 13,662 (which corresponds to a threshold of 0); the sample average has to be significantly different from the expectation under the null hypothesis. The distance between the sample average and the null expectation should be relatively large in order to exclude 𝐻0 as an option. (The 𝑝-value in this example is 0.4181. The null hypothesis was accepted since this value is larger than 0.05. As a matter of fact, the test that uses the 𝑇 statistic as a test statistic and rejects the null hypothesis for absolute values larger than qt(0.975,n-1) is equivalent to the test that uses the 𝑝-value and rejects the null hypothesis for 𝑝-values less than 0.05. Below we discuss the computation of the 𝑝-value.)

The significance level of the evidence for rejecting the null hypothesis is based on the probability of the Type I error. The probabilities associated with the different types of error are presented in Table 12.2:

TABLE 12.2: Error Probabilities

              𝐻0 is true           𝐻1 is true
Accept 𝐻0                          Probability of Type II error
Reject 𝐻0     Significance level   Statistical power
Observe that the probability of a Type I error is called the significance level. The significance level is set at some pre-specified level such as 5% or 1%, with 5% being the most widely used level. In particular, setting the threshold in the example to be equal to qt(0.975,200) = 1.971896 produces a test with a 5% significance level.

This lack of symmetry between the two hypotheses suggests another interpretation of the difference between them. According to this interpretation the null hypothesis is the one in which the cost of making an error is greater. Thus, when one separates the collection of parameter values into two subsets, the subset that is associated with the more severe error is designated as the null hypothesis and the other subset becomes the alternative.

For example, a new drug must pass a sequence of clinical trials before it is approved for distribution. In these trials one may want to test whether the new drug produces a beneficial effect in comparison to the current treatment. Naturally, the null hypothesis in this case would be that the new drug is no better than the current treatment and the alternative hypothesis would be that it is better. Only if the clinical trials demonstrate a significant beneficial effect of the new drug would it be released for marketing.

In scientific research, in general, the currently accepted theory, the conservative explanation, is designated as the null hypothesis. A claim for novelty in the form of an alternative explanation requires strong evidence in order for it to be accepted and favored over the traditional explanation. Hence, the novel explanation is designated as the alternative hypothesis. It replaces the current theory only if the empirical data clearly supports it. The test statistic is a summary of the empirical data. The rejection region corresponds to values that are unlikely to be observed according to the current theory. Obtaining a value in the rejection region is an indication that the current theory is probably not adequate and should be replaced by an explanation that is more consistent with the empirical evidence.

The second type of error probability in Table 12.2 is the probability of a Type II error. Instead of dealing directly with this probability, the tradition is to consider the complementary probability, the probability of not making a Type II error. This complementary probability is called the statistical power:

Statistical Power = 1 βˆ’ Probability of Type II Error

The statistical power is the probability of rejecting the null hypothesis when the state of nature is the alternative hypothesis. (In comparison, the significance level is the probability of rejecting the null hypothesis when the state of nature is the null hypothesis.) When comparing two decision rules for testing hypotheses, both having the same significance level, the one that possesses the higher statistical power should be favored.

12.2.4 𝑝-Values

The 𝑝-value is another test statistic. It is associated with a specific test statistic and a structure of the rejection region. The 𝑝-value is equal to the significance level of the test in which the observed value of the statistic serves as the threshold.
In the current example, where 𝑇 is the underlying test statistic and the structure of the rejection region is of the form {|𝑇| > 𝑐}, the 𝑝-value is equal to the probability of rejecting the null hypothesis in the case where the threshold 𝑐 is equal to the observed absolute value of the 𝑇 statistic. In other words:

𝑝-value = P(|𝑇| > |𝑑|) = P(|𝑇| > |βˆ’0.8114824|) = P(|𝑇| > 0.8114824) ,

where 𝑑 = βˆ’0.8114824 is the observed value of the 𝑇 statistic and the computation of the probability is conducted under the null hypothesis. Specifically, under the null hypothesis 𝐻0 ∢ E(𝑋) = 13,662 we get that the distribution of the statistic 𝑇 = [𝑋̄ βˆ’ 13,662]/[𝑆/βˆšπ‘›] is the 𝑑-distribution on 𝑛 βˆ’ 1 = 200 degrees of freedom. The probability of the event {|𝑇| > 0.8114824} corresponds to the sum of the probabilities of both tails of the distribution. By the symmetry of the 𝑑-distribution this equals twice the probability of the upper tail:

P(|𝑇| > 0.8114824) = 2 β‹… P(𝑇 > 0.8114824) = 2 β‹… [1 βˆ’ P(𝑇 ≀ 0.8114824)] .

When we compute this probability in R we get:
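A sketch of the computation with the cumulative probability function β€œpt”; the result agrees with the reported 𝑝-value of 0.4181:

2*(1 - pt(0.8114824, 200))   # two-sided p-value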
The 𝑝-value is a function of the data. In the particular data set the computed value of the 𝑇 statistic was -0.8114824. For a different data set the evaluation of the statistic would have produced a different value. As a result, the threshold that would have been used in the computation would have been different, thereby changing the numerical value of the 𝑝-value. Being a function of the data, we conclude that the 𝑝-value is a statistic.

The 𝑝-value is used as a test statistic by comparing its value to the pre-defined significance level. If the significance level is 1% then the null hypothesis is rejected for 𝑝-values less than 0.01. Likewise, if the significance level is set at the 5% level then the null hypothesis is rejected for 𝑝-values less than 0.05.

The statistical test that is based directly on the 𝑇 statistic and the statistical test that is based on the 𝑝-value are equivalent to each other. The one rejects the null hypothesis if, and only if, the other does so. The advantage of using the 𝑝-value as the test statistic is that no further probabilistic computations are required: the 𝑝-value is compared directly to the significance level we seek. For the test that examines the 𝑇 statistic we still need to identify the threshold associated with the given significance level.

In the next two sections we extend the discussion of the 𝑑-test and give further examples of the use of the function β€œt.test”. We also deal with tests on probabilities of events and introduce the function β€œprop.test” for conducting such tests.

12.3 Testing Hypothesis on Expectation

Let us consider the variable β€œdif.mpg” that contains the difference in fuel consumption between highway and city conditions. This variable was considered in the previous chapter. Examine the distribution of this variable:
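A sketch of the three expressions described below; producing the bar plot by applying β€œplot” to a table is one natural way to display a variable that obtains integer values:

dif.mpg <- cars$highway.mpg - cars$city.mpg   # difference in miles-per-gallon
summary(dif.mpg)                              # numerical summary of the distribution
plot(table(dif.mpg))                          # bar plot of the distribution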
In the first expression we created the variable β€œdif.mpg” that contains the difference in miles-per-gallon. The difference is computed for each car type between highway driving conditions and urban driving conditions. The summary of this variable is produced in the second expression. Observe that the values of the variable range between 0 and 11, with 50% of the distribution concentrated between 5 and 7. The median is 6 and the mean is 5.532. The last expression produces the bar plot of the distribution. It turns out that the variable β€œdif.mpg” obtains integer values.

In this section we test hypotheses regarding the expected difference in fuel consumption between highway and city conditions.

Energy is required in order to move cars. For heavier cars more energy is required. Consequently, one may conjecture that the mileage per gallon for heavier cars is less than the mileage per gallon for lighter cars.

The relation between the weight of the car and the difference between the mileage-per-gallon in highway and city driving conditions is less clear. On the one hand, urban traffic involves frequent changes in speed in comparison to highway conditions. One may presume that this change in speed is a cause for reduced efficiency in fuel consumption. If this is the case then one may predict that heavier cars, which require more energy for acceleration, will be associated with a bigger difference between highway and city driving conditions in comparison to lighter cars.

On the other hand, heavier cars do fewer miles per gallon overall. The difference between two smaller numbers (the mileage per gallon in highway and in city conditions for heavier cars) may tend to be smaller than the difference between two larger numbers (the mileage per gallon in highway and in city conditions for lighter cars). If this is the case then one may predict that heavier cars will be associated with a smaller difference between highway and city driving conditions in comparison to lighter cars.

The average difference between highway and city conditions is approximately 5.53 for all cars. Divide the cars into two groups of equal size: one group composed of the heavier cars and the other group composed of the lighter cars. We will examine the relation between the weight of the car and the difference in miles per gallon between the two driving conditions by testing hypotheses separately for each weight group. For each such group we start by testing the null hypothesis 𝐻0 ∢ E(𝑋) = 5.53 against the two-sided alternative 𝐻1 ∢ E(𝑋) β‰  5.53, where 𝑋 is the difference between highway and city miles-per-gallon for cars that belong to the given weight group. After carrying out the test for the two-sided alternative we will discuss the results of applying tests with one-sided alternatives.

We start with the definition of the weight groups. The variable β€œcurb.weight” measures the weight of the cars in the data frame β€œcars”. Let us examine the summary of the content of this variable:

## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1488 2145 2414 2556 2935 4066

Half of the cars in the data frame weigh less than 2,414 lb and half the cars weigh more. The average weight of a car is 2,556 lb. Let us take 2,414 as a threshold and denote cars below this weight as β€œlight” and cars above this threshold as β€œheavy”:

## heavy
## FALSE TRUE
## 103 102

The variable β€œheavy” indicates for each car type whether its weight is above or below the threshold weight of 2,414 lb.
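A sketch of the commands that produce the summary and the indicator above; the column name β€œcurb.weight” follows the text:

summary(cars$curb.weight)          # summary of the curb weight
heavy <- cars$curb.weight > 2414   # TRUE for cars above the threshold weight
table(heavy)                       # counts of lighter and heavier car types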
The variable is composed of a sequence with as many components as the number of observations in the data frame β€œcars” (𝑛 = 205). Each component is a logical value: β€œTRUE” if the car is heavier than the threshold and β€œFALSE” if it is not. When we apply the function β€œtable” to this sequence we get that 102 of the cars are heavier than the threshold and 103 are not.

We would like to apply the 𝑑-test first to the subset of all cars with weight above 2,414 lb (cars that are associated with the value β€œTRUE” in the variable β€œheavy”), and then to all cars with weights not exceeding the threshold (cars associated with the value β€œFALSE”). (In the next chapters we will consider more direct ways for comparing the effect of one variable, curb.weight in this example, on the distribution of another variable, dif.mpg in this example. Here, instead, we investigate the effect indirectly, by investigating hypotheses on the expectation of the variable dif.mpg separately for heavier cars and for lighter cars.)

In the past we showed that one may address components of a sequence using their positions in the sequence. Here we demonstrate an alternative approach for addressing specific locations by using a sequence with logical components. In order to illustrate this second approach consider the two sequences:
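The values of β€œd” below are recoverable from the outputs that follow; the values of β€œw” are an assumption, chosen so that only its 4th and 6th components exceed 5:

w <- c(5, 3, 4, 6, 2, 9)       # hypothetical sequence of values
d <- c(13, 22, 0, 12, 6, 20)   # sequence to be indexed
d[w > 5]                       # components of d where w is above 5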
## [1] 12 20

The expression β€œw > 5” is a sequence of logical components, with the value β€œTRUE” at the positions where β€œw” is above the threshold and the value β€œFALSE” at the positions where β€œw” is below the threshold.

We may use the sequence with logical components as an index to the sequence of the same length β€œd”. The relevant expression is β€œd[w > 5]”. The output of this expression is the sub-sequence of elements from β€œd” that are associated with the β€œTRUE” values of the logical sequence. Indeed, β€œTRUE” values are present at the 4th and the 6th positions of the logical sequence. Consequently, the output of the expression β€œd[w > 5]” contains the 4th and the 6th components of the sequence β€œd”.

The operator β€œ!”, when applied to a logical value, reverses the value. A β€œTRUE” becomes β€œFALSE” and a β€œFALSE” becomes β€œTRUE”. Consider the code:
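A single expression, reusing the sequences from above:

d[!(w > 5)]   # components of d where w is less than or equal to 5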
## [1] 13 22 0 6

Observe that the sequence β€œ!(w > 5)” obtains a value of β€œTRUE” at the positions where β€œw” is less than or equal to 5. Consequently, the output of the expression β€œd[!(w > 5)]” contains all the values of β€œd” that are associated with components of β€œw” that are less than or equal to 5.

The variable β€œdif.mpg” contains data on the difference in miles-per-gallon between highway and city driving conditions for all the car types. The sequence β€œheavy” identifies the car types with curb weight above the threshold of 2,414 lb. The components of this sequence are logical, with the value β€œTRUE” at positions associated with the heavier car types and the value β€œFALSE” at positions associated with the lighter car types. Observe that the output of the expression β€œdif.mpg[heavy]” is the subsequence of differences in miles-per-gallon for the cars with curb weight above the given threshold. We apply the function β€œt.test” to this expression in order to conduct the 𝑑-test on the expectation of the variable β€œdif.mpg” for the heavier cars:

##
## One Sample t-test
##
## data: dif.mpg[heavy]
## t = -1.5385, df = 101, p-value = 0.127
## alternative hypothesis: true mean is not equal to 5.53
## 95 percent confidence interval:
## 4.900198 5.609606
## sample estimates:
## mean of x
## 5.254902

The target population is the heavier car types. Notice that we test the null hypothesis that the expected difference among the heavier cars is equal to 5.53 against the alternative hypothesis that the expected difference among heavier cars is not equal to 5.53. The null hypothesis is not rejected at the 5% significance level since the 𝑝-value, which is equal to 0.127, is larger than 0.05. Consequently, based on the data at hand, we cannot conclude that the expected difference in miles-per-gallon for heavier cars is significantly different from the average difference for all cars. Observe also that the estimate of the expectation, the sample mean, is equal to 5.254902, with a confidence interval of the form [4.900198, 5.609606].

Next, let us apply the same test to the lighter cars. The expression β€œdif.mpg[!heavy]” produces the subsequence of differences in miles-per-gallon for the lighter cars:
##
## One Sample t-test
##
## data: dif.mpg[!heavy]
## t = 1.9692, df = 102, p-value = 0.05164
## alternative hypothesis: true mean is not equal to 5.53
## 95 percent confidence interval:
## 5.528002 6.083649
## sample estimates:
## mean of x
## 5.805825

Again, the null hypothesis is not rejected at the 5% significance level, since a 𝑝-value of 0.05164 is still larger than 0.05. However, unlike the case for heavier cars, where the 𝑝-value was undeniably larger than the threshold, in this example it is much closer to the threshold of 0.05. Consequently, we may almost conclude that the expected difference in miles-per-gallon for lighter cars is significantly different from the average difference for all cars.

Why did we not reject the null hypothesis for the heavier cars but almost did so for the lighter cars? Both tests are based on the 𝑇 statistic, which measures the ratio between the deviation of the sample average from its expectation under the null and the estimate of the standard deviation of the sample average. The value of this statistic is β€œt = -1.5385” for heavier cars and it is β€œt = 1.9692” for lighter cars, an absolute value about 25% higher.

The deviation between the sample average for the heavier cars and the expectation under the null is 5.254902 βˆ’ 5.53 = βˆ’0.275098. On the other hand, the deviation between the sample average for the lighter cars and the expectation under the null is 5.805825 βˆ’ 5.53 = 0.275825. The two deviations are practically equal to each other in absolute value.

The estimator of the standard deviation of the sample average is 𝑆/βˆšπ‘›, where 𝑆 is the sample standard deviation and 𝑛 is the sample size. The sample sizes, 103 for lighter cars and 102 for heavier cars, are almost equal. Therefore, the reason for the difference in the values of the 𝑇 statistics for the two weight groups must be differences in the sample standard deviations. Indeed, we may compute the sample standard deviations for lighter and for heavier cars with the function β€œtapply”. (The function β€œtapply” applies the function that is given as its third argument, the function β€œsd” in this case, to each subset of values of the sequence that is given as its first argument, the sequence β€œdif.mpg” in the current application. The subsets are determined by the levels of the second argument, the sequence β€œheavy”. The output is the sample standard deviation of the variable β€œdif.mpg” for lighter cars, the level β€œFALSE”, and for heavier cars, the level β€œTRUE”.)
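The call described in the note above:

tapply(dif.mpg, heavy, sd)   # sample standard deviations, by weight group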
## FALSE TRUE
## 1.421531 1.805856

The important lesson to learn from this exercise is that a simple-minded notion of significance and statistical significance are not the same. A simple-minded assessment of the discrepancy from the null hypothesis would put the evidence from the data on lighter cars and the evidence from the data on heavier cars on the same level: in both cases the estimated value of the expectation is the same distance away from the null value.

Statistical assessment, however, conducts the analysis in the context of the sampling distribution. The deviation of the sample average from the expectation is compared to the standard deviation of the sample average. Consequently, in statistical testing of hypotheses a smaller deviation of the sample average from the expectation under the null may be more significant than a larger one, if the sampling variability of the former is much smaller than the sampling variability of the latter.

Let us proceed with the demonstration of the application of the 𝑑-test by testing one-sided alternatives in the context of the lighter cars. One may test the one-sided alternative 𝐻1 ∢ E(𝑋) > 5.53, stating that the expected value of the difference in miles-per-gallon among cars with curb weight of no more than 2,414 lb is greater than 5.53, by the application of the function β€œt.test” to the data on lighter cars. This data is the output of the expression β€œdif.mpg[!heavy]”. As before, we specify the null value of the expectation by the introduction of the argument β€œmu=5.53”. The fact that we are interested in testing the specific one-sided alternative is specified by the introduction of a new argument of the form β€œalternative="greater"”. The default value of the argument β€œalternative” is β€œ"two.sided"”, which produces a test of a two-sided alternative. By changing the value of the argument to β€œ"greater"” we produce a test for the appropriate one-sided alternative:
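A sketch of the call:

t.test(dif.mpg[!heavy], mu = 5.53, alternative = "greater")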
##
## One Sample t-test
##
## data: dif.mpg[!heavy]
## t = 1.9692, df = 102, p-value = 0.02582
## alternative hypothesis: true mean is greater than 5.53
## 95 percent confidence interval:
## 5.573323 Inf
## sample estimates:
## mean of x
## 5.805825

The value of the test statistic (t = 1.9692) is the same as in the test of the two-sided alternative, and so is the number of degrees of freedom associated with the statistic (df = 102). However, the 𝑝-value is smaller (p-value = 0.02582) compared to the 𝑝-value in the test for the two-sided alternative (p-value = 0.05164). The 𝑝-value for the one-sided test is the probability, under the sampling distribution, that the test statistic obtains values larger than the observed value of 1.9692. The 𝑝-value for the two-sided test is twice that figure, since it also involves the probability of being less than the negative of the observed value.

The estimated value of the expectation, the sample average, is unchanged. However, instead of producing a two-sided confidence interval for the expectation, the report produces a one-sided confidence interval of the form [5.573323, ∞). Such an interval corresponds to the smallest value that the expectation may reasonably obtain on the basis of the observed data.

Finally, consider the test of the other one-sided alternative, 𝐻1 ∢ E(𝑋) < 5.53:

##
## One Sample t-test
##
## data: dif.mpg[!heavy]
## t = 1.9692, df = 102, p-value = 0.9742
## alternative hypothesis: true mean is less than 5.53
## 95 percent confidence interval:
## -Inf 6.038328
## sample estimates:
## mean of x
## 5.805825

The alternative here is determined by the expression β€œalternative="less"”. The 𝑝-value is equal to 0.9742, which is the probability that the test statistic obtains values less than the observed value of 1.9692. Clearly, the null hypothesis is not rejected in this test.