Testing Hypothesis on Proportion
Consider the problem of testing hypothesis on the probability of an event. Recall that a probability 𝑝 of some event can be estimated by the observed relative frequency of the event in the sample, denoted 𝑃̂. The estimation is associated with the Bernoulli random variable 𝑋, that obtains the value 1 when the event occurs and the value 0 when it does not. The statistical model states that 𝑝 is the expectation of 𝑋. The estimator 𝑃̂ is the sample average of this measurement.
With this formulation we may relate the problem of testing hypotheses formulated in terms of 𝑝 to the problem of tests associated with the expectation of a measurement. For the latter problem we applied the 𝑡-test. A similar, though not identical, test is used for the problem of testing hypotheses on proportions.
Assume that one is interested in testing the null hypothesis that the probability of the event is equal to some specific value, say one half, versus the alternative hypothesis that the probability is not equal to this value. These hypotheses are formulated as 𝐻0 ∶ 𝑝 = 0.5 and 𝐻1 ∶ 𝑝 ≠ 0.5.
The sample proportion of the event 𝑃̂ is the basis for the construction of the test statistic. Recall that the variance of the estimator 𝑃̂ is given by V(𝑃̂) = 𝑝(1 − 𝑝)/𝑛. Under the null hypothesis we get that the variance is equal to V(𝑃̂) = 0.5(1 − 0.5)/𝑛. A natural test statistic is the standardized sample proportion:

𝑍 = (𝑃̂ − 0.5) / √(0.5(1 − 0.5)/𝑛) ,
that measures the ratio between the deviation of the estimator from its null expected value and the standard deviation of the estimator. The standard deviation of the sample proportion is used in the ratio.
If the null hypothesis that 𝑝 = 0.5 holds true then one gets that the value 0 is the center of the sampling distribution of the test statistic 𝑍. Values of the statistic that are much larger or much smaller than 0 indicate that the null hypothesis is unlikely. Consequently, one may consider a rejection region of the form {|𝑍| > 𝑐}, for some threshold value 𝑐. The threshold 𝑐 is set at a high enough level to assure the required significance level, namely the probability under the null hypothesis of obtaining a value in the rejection region. Equivalently, the rejection region can be written in the form {𝑍² > 𝑐²}.
As a result of the Central Limit Theorem one may conclude that the distribution of the test statistic is approximately Normal. Hence, Normal computations may be used in order to produce an approximate threshold or in order to compute an approximation for the 𝑝-value. Specifically, if 𝑍 has the standard
Normal distribution then 𝑍² has a chi-square distribution on one degree of freedom.
In order to illustrate the application of hypothesis testing for a proportion consider the following problem: In the previous section we obtained the curb weight of 2,414 lb as the sample median. The weights of half the cars in the sample were above that level and the weights of half the cars were below this level. If this level was actually the population median then the probability that the weight of a random car does not exceed this level would be equal to 0.5.
Let us test the hypothesis that the median weight of cars that run on diesel is also 2,414 lb. Recall that 20 out of the 205 car types in the sample have diesel engines. Let us use the weights of these cars in order to test the hypothesis.
The variable “fuel.type” is a factor with two levels, “diesel” and “gas”, that identify the fuel type of each car. The variable “heavy” identifies for each car whether its weight is above the level of 2,414 lb or not. Let us produce a 2 × 2 table that summarizes the frequency of each combination of weight group and fuel type:
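A minimal sketch of the commands that may produce such a table, assuming the data frame “cars” has been read from the file “cars.csv” and using the 2,414 lb threshold from the previous section:

cars <- read.csv("cars.csv")        # assumed file name, as used elsewhere in the book
heavy <- cars$curb.weight > 2414    # logical sequence: TRUE for cars heavier than the median
table(cars$fuel.type, heavy)        # 2 x 2 table of fuel type by weight group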
Originally the function “table” was applied to a single factor and produced a sequence with the frequencies of each level of the factor. In the current application the inputs to the function are two factors. The output is a table of frequencies. Each entry of the table corresponds to the frequency of a combination of levels, one from the first input factor and the other from the second input factor. In this example we obtain that 6 cars use diesel and their curb weight is below the threshold. There are 14 cars that use diesel and their curb weight is above the threshold. Likewise, there are 97 light cars that use gas and 88 heavy cars with gas engines.
The function “prop.test” produces statistical tests for proportions. The relevant information for the current application of the function is the fact that the frequency of light diesel cars is 6 among a total number of 20 diesel cars. The first entry to the function is the frequency of the occurrence of the event, 6 in this case, and the second entry is the total sample size, 20.
(To be more accurate, the variable “heavy” is not a factor but a sequence with logical components. Nonetheless, when the function “table” is applied to such a sequence it treats it as a factor with two levels, “TRUE” and “FALSE”.)
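The report below may be produced by a call of the following form (the first argument is the count of the event, the second is the sample size; the null probability 0.5 is the default):

prop.test(6, 20)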
##
##  1-sample proportions test with continuity correction
##
## data:  6 out of 20, null probability 0.5
## X-squared = 2.45, df = 1, p-value = 0.1175
## alternative hypothesis: true p is not equal to 0.5
## 95 percent confidence interval:
##  0.1283909 0.5433071
## sample estimates:
##   p
## 0.3
The function produces a report that is printed on the screen. The title identifies the test as a one-sample test of proportions. In later chapters we will apply the same function to more complex data structures and the title will change accordingly. The title also identifies the fact that a continuity correction is used in the computation of the test statistic.
The line under the title indicates the frequency of the event in the sample and the sample size. (In the current example, 6 diesel cars with weights below the threshold among a total of 20 diesel cars.) The probability of the event, under the null hypothesis, is described. The default value of this probability is “p = 0.5”, which is the proper value in the current example. This default value can be modified by replacing the value 0.5 by the appropriate probability.
The next line presents the information relevant for the test itself. The test statistic, which is essentially the square of the 𝑍 statistic described above, obtains the value 2.45. The sampling distribution of this statistic under the null hypothesis is, approximately, the chi-square distribution on 1 degree of freedom. The 𝑝-value, which is the probability that the chi-square distribution on 1 degree of freedom obtains a value above 2.45, is equal to 0.1175. Consequently, the null hypothesis is not rejected at the 5% significance level.
The continuity correction improves the Normal approximation of the Binomial distribution. Specifically, the test statistic with the continuity correction for testing 𝐻0 ∶ 𝑝 = 𝑝0 takes the form [|𝑝̂ − 𝑝0| − 0.5/𝑛]² / [𝑝0(1 − 𝑝0)/𝑛]. Compare this statistic with the statistic proposed in the text, which takes the form [𝑝̂ − 𝑝0]² / [𝑝0(1 − 𝑝0)/𝑛]. The latter statistic is used if the argument “correct = FALSE” is added to the function.
The bottom part of the report provides the confidence interval and the point estimate for the probability of the event. The confidence interval for the given data is [0.1283909, 0.5433071] and the point estimate is 𝑝̂ = 6/20 = 0.3.
It is interesting to note that although the deviation between the estimated proportion 𝑝̂ = 0.3 and the null value of the probability 𝑝 = 0.5 is relatively large still the null hypothesis was not rejected. The reason for that is the smallness of the sample, 𝑛 = 20, that was used in order to test the hypothesis. Indeed, as an exercise let us examine the application of the same test to a setting where 𝑛 = 200 and the number of occurrences of the event is 60:
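A call of the following form may produce the report below:

prop.test(60, 200)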
##
##  1-sample proportions test with continuity correction
##
## data:  60 out of 200, null probability 0.5
## X-squared = 31.205, df = 1, p-value = 2.322e-08
## alternative hypothesis: true p is not equal to 0.5
## 95 percent confidence interval:
##  0.2384423 0.3693892
## sample estimates:
##   p
## 0.3
The estimated value of the probability is the same as before, since 𝑝̂ = 60/200 = 0.3. However, the 𝑝-value is 2.322 × 10⁻⁸, which is way below the significance threshold of 0.05. In this scenario the null hypothesis is rejected with flying colors.
This last example is yet another demonstration of the basic characteristic of statistical hypothesis testing. The consideration is based not on the discrepancy of the estimator of the parameter from the value of the parameter under the null. Instead, it is based on the relative discrepancy in comparison to the sampling variability of the estimator. When the sample size is larger the variability is smaller. Hence, the chances of rejecting the null hypothesis for the same discrepancy increase.
Exercise 12.1. A common design of clinical trials involves using a placebo treatment as a control. A placebo treatment is a treatment that externally looks identical to the actual treatment but, in reality, does not have the active ingredients. The reason for using a placebo for control is the “placebo effect”: patients tend to react to the fact that they are being treated regardless of the actual beneficial effect of the treatment.
As an example, consider the trial for testing magnets as a treatment for pain that was described in Question [ex:Inference.1]. The patients that were randomly assigned to the control (the last 21 observations in the file “magnets.csv”) were treated with devices that looked like magnets but actually were not. The goal in this exercise is to test for the presence of a placebo effect in the case study “Magnets and Pain Relief” of Question [ex:Inference.1], using the data in the file “magnets.csv”.
Let 𝑋 be the measurement of change, the difference between the score of pain before the treatment and the score after the treatment, for patients that were treated with the inactive placebo. Express, in terms of the expected value of 𝑋, the null hypothesis and the alternative hypothesis for a statistical test to determine the presence of a placebo effect. The null hypothesis should reflect the situation that the placebo effect is absent.
Identify the observations that can be used in order to test the hypotheses.
Carry out the test and report your conclusion. (Use a significance level of 5%.)
Exercise 12.2. It is assumed, when constructing the 𝑡-test, that the measurements are Normally distributed. In this exercise we examine the robustness of the test to divergence from the assumption. You are required to compute the significance level of a two-sided 𝑡-test of 𝐻0 ∶ E(𝑋) = 4 versus 𝐻1 ∶ E(𝑋) ≠ 4. Assume there are 𝑛 = 20 observations and use a 𝑡-test with a nominal 5% significance level.
Consider the case where 𝑋 ∼ Exponential(1/4).
Consider the case where 𝑋 ∼ Uniform(0, 8).
Exercise 12.3. Assume that you are interested in testing 𝐻0 ∶ E(𝑋) = 20 versus 𝐻1 ∶ E(𝑋) ≠ 20 with a significance level of 5% using the 𝑡-test. Let the sample average, of a sample of size 𝑛 = 55, be equal to 𝑥̄ = 22.7 and the sample standard deviation be equal to 𝑠 = 5.4.
Do you reject the null hypothesis?
Use the same information. Only now you are interested in a significance level of 1%. Do you reject the null hypothesis?
Use the information in the presentation of the exercise. But now you are interested in testing 𝐻0 ∶ E(𝑋) = 24 versus 𝐻1 ∶ E(𝑋) ≠ 24 (with a significance level of 5%). Do you reject the null hypothesis?
Summary
Glossary
Hypothesis Testing: A method for deciding between two hypotheses, with one of the two being the currently accepted hypothesis. A determination is based on the value of the test statistic. The probability of falsely rejecting the currently accepted hypothesis is the significance level of the test.
Null Hypothesis (𝐻0): A sub-collection of possible parameter values that corresponds to the situation in which the phenomenon is absent. The established scientific theory that is being challenged. The hypothesis which is worse to erroneously reject.
Alternative Hypothesis (𝐻1): A sub-collection of possible parameter values that corresponds to the presence of the investigated phenomenon. The new scientific theory that challenges the currently established theory.
Test Statistic: A statistic that summarizes the data in the sample in order to decide between the two alternative hypotheses.
Rejection Region: A set of values that the test statistic may obtain. If the observed value of the test statistic belongs to the rejection region then the null hypothesis is rejected. Otherwise, the null hypothesis is not rejected.
Type I Error: The null hypothesis is correct but it is rejected by the test.
Type II Error: The alternative hypothesis holds but the null hypothesis is not rejected by the test.
Significance Level: The probability of a Type I error. The probability, com- puted under the null hypothesis, of rejecting the null hypothesis. The test is constructed to have a given significance level. A commonly used significance level is 5%.
Statistical Power: The probability, computed under the alternative hypoth- esis, of rejecting the null hypothesis. The statistical power is equal to 1 minus the probability of a Type II error.
𝑝-value: A form of a test statistic. It is associated with a specific test statistic and a structure of the rejection region. The 𝑝-value is equal to the signifi- cance level of the test in which the observed value of the statistic serves as the threshold.
Discuss in the forum
In statistical thinking there is a tendency towards conservatism. The investigators, enthusiastic to obtain positive results, may prefer favorable conclusions and may tend to over-interpret the data. It is the statistician’s role to add to the objectivity in the interpretation of the data and to advocate caution.
On the other hand, the investigators may say that conservatism and science are incompatible. If one is too cautious, if one is always protecting oneself against the worst-case scenario, then one will not be able to make bold new discoveries.
Which of the two approaches do you prefer?
When you formulate your answer to this question it may be useful to recall cases in your past in which you were required to analyze data or were exposed to other people’s analysis. Could the analysis benefit or be harmed by either of the approaches?
For example, many scientific journals will tend to reject a research paper unless the main discoveries are statistically significant (𝑝-value < 5%). Should one not also publish results that show a significance level of 10%?
Student Learning Objectives
The next 3 chapters deal with the statistical inference associated with the relation between two variables. The relation corresponds to the effect of one variable on the distribution of the other. The variable whose distribution is being investigated is called the response. The variable which may have an effect on the distribution of the response is called the explanatory variable.
In this section we consider the case where the explanatory variable is a factor with two levels. This factor splits the sample into two sub-samples. The statistical inference compares the distributions of the response variable in the two sub-samples. The statistical inference involves point estimation, confidence intervals, and hypothesis testing. R functions may be used in order to carry out the statistical inference. By the end of this chapter, the student should be able to:
Define estimators, confidence intervals, and tests for comparing the distribution of a numerical response between two sub-populations.
Apply the function “t.test” in order to investigate the difference between the expectations of the response variable in the two sub-samples.
Apply the function “var.test” in order to investigate the ratio between the variances of the response variable in the two sub-samples.
Comparing Two Distributions
Up until this point in the book we have been considering tools for the investigation of the characteristics of the distribution of a single measurement. In most applications, however, one is more interested in inference regarding the relationships between several measurements. In particular, one may want
to understand how the outcome of one measurement affects the outcome of another measurement.
A common form of a mathematical relation between two variables is when one of the variables is a function of the other. When such a relation holds then the value of the first variable is determined by the value of the second. However, in the statistical context relations between variables are more complex. Typically, a statistical relation between variables does not make one a direct function of the other. Instead, the distribution of values of one of the variables is affected by the value of the other variable. For a given value of the second variable the first variable may have one distribution, but for a different value of the second variable the distribution of the first variable may be different. In statistical terminology the second variable in this setting is called an explanatory variable and the first variable, with a distribution affected by the second variable, is called the response.
As an illustration of the relation between the response and the explanatory variable consider the following example. In a clinical trial, which is a precondition for the marketing of a new medical treatment, a group of patients is randomly divided into a treatment and a control sub-groups. The new treatment is anonymously administered to the treatment sub-group. At the same time, the patients in the control sub-group obtain the currently standard treatment. The new treatment passes the trial and is approved for marketing by the Health Authorities only if the response to the medical intervention is better for the treatment sub-group than it is for the control sub-group. This treatment-control experimental design, in which a response is measured under two experimental conditions, is used in many scientific and industrial settings.
In the example of a clinical trial one may identify two variables. One variable measures the response to the medical intervention for each patient that participated in the trial. This variable is the response variable, the distribution of which one seeks to investigate. The other variable indicates to which sub-group, treatment or control, each patient belongs. This variable is the explanatory variable. In the setting of a clinical trial the explanatory variable is a factor with two levels, “treatment” and “control”, that splits the sample into two sub-samples. The statistical inference compares the distribution of the response variable among the patients in the treatment sub-sample to the distribution of the response among the patients in the control sub-group.
The analysis of experimental settings such as the treatment-control trial is a special case that involves the investigation of the effect an explanatory variable may have on the response variable. In this special case the explanatory variable is a factor with two distinct levels. Each level of the factor is associated with a sub-sample, either treatment or control. The analysis seeks to compare the distribution of the response in one sub-sample with the distribution in the other sub-sample. If the response is a numeric measurement then the analysis may take the form of comparing the response’s expectation in one sub-group
to the expectation in the other. Alternatively, the analysis may involve comparing the variance. In a different case, if the response is the indicator of the occurrence of an event then the analysis may compare two probabilities, the probability of the event in the treatment group to the probability of the same event in the control group.
In this chapter we deal with statistical inference that corresponds to the comparison of the distribution of a numerical response variable between two sub-groups that are determined by a factor. The inference includes testing hypotheses, mainly the null hypothesis that the distribution of the response is the same in both sub-groups versus the alternative hypothesis that the distribution is not the same. Another element in the inference is point estimation and confidence intervals of appropriate parameters.
In the next chapter we will consider the case where the explanatory variable is numeric and in the subsequent chapter we describe the inference that is used in the case that the response is the indicator of the occurrence of an event.
Comparing the Sample Means
In this section we deal with the issue of statistical inference when comparing the expectation of the response variable in two sub-samples. The inference is used in order to address questions such as the equality of the two expectations to each other and, in the case they are not equal, the assessment of the difference between the expectations. For the first question one may use statistical hypothesis testing and for the assessment one may use point estimates and/or confidence intervals.
In the first subsection we provide an example of a test of the hypothesis that the expectations are equal. A confidence interval for the difference between expectations is given in the output of the report of the R function that applies the test. The second subsection considers the construction of the confidence interval and the third subsection deals with the theory behind the statistical test.
An Example of a Comparison of Means
In order to illustrate the statistical inference that compares two expectations let us return to an example that was considered in the previous chapter. The response of interest is the difference in miles-per-gallon between driving in highway conditions and driving in city conditions. This response is produced as the difference between the variable that measures miles-per-gallon in highway driving conditions and the variable that measures it in city driving conditions.
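As a minimal sketch, assuming the data frame “cars” has been read from the file “cars.csv” and that the two measurements are stored in columns named “highway.mpg” and “city.mpg” (the column names are an assumption), the response may be computed as:

cars <- read.csv("cars.csv")                   # assumed file name
dif.mpg <- cars$highway.mpg - cars$city.mpg    # assumed column names: highway minus city miles-per-gallon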
The object “heavy” was defined in the previous chapter as a sequence with logical components. A component had the value “TRUE” if the curb weight of the car type associated with this component was above the median level of 2,414 lb. The component obtained the value “FALSE” if the curb weight did not exceed that level. The logical sequence “heavy” was used in order to select the subsequences associated with each weight sub-group. Statistical inference was applied separately to each subsequence.
In the current analysis we want to examine directly the relation between the response variable “dif.mpg” and an explanatory factor variable “heavy”. In order to do so we redefine the variable “heavy” to be a factor:
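The redefinition may be carried out with a command of the following form, using the 2,414 lb threshold from the previous chapter:

heavy <- factor(cars$curb.weight > 2414)   # a factor with the levels "FALSE" and "TRUE"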
The variable “curb.weight” is numeric and the expression “cars$curb.weight > 2414” produces a sequence with logical “TRUE” or “FALSE” components. This sequence is not a factor. In order to produce a factor we apply the function “factor” to the sequence. The function “factor” transforms its input into a factor. Specifically, the application of this function to a sequence with logical components produces a factor with two levels that are given the names “TRUE” and “FALSE”.
We want to examine the relation between the response variable “dif.mpg” and the explanatory factor “heavy”. Towards that end we produce a plot of the relation with the function “plot” and test for the equality of the expectations of the response with the function “t.test”. First the plot:
It should be noted that the redefined sequence “heavy” is no longer a sequence with logical components. It cannot be used, for example, as an index to another sequence in order to select the components that are associated with the “TRUE” logical value.
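The plot may be produced by applying the function “plot” to the appropriate formula (using the objects defined above):

plot(dif.mpg ~ heavy)   # box plots of the response for each level of the factor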
Observe that the figure contains two box plots, one associated with the level “FALSE” of the explanatory factor and the other with the level “TRUE” of that factor. The box plots describe the distribution of the response variable for each level of the explanatory factor. Overall, the distribution of the response for heavier cars (cars associated with the level “TRUE”) tends to obtain smaller values than the distribution of the response for lighter cars (cars associated with the level “FALSE”).
The input to the function “plot” is a formula expression of the form: “response ~ explanatory.variable”. A formula identifies the role of variables. The variable to the left of the tilde character (~) in a formula is the response and the variable to the right is the explanatory variable. In the current case the variable “dif.mpg” is the response and the variable “heavy” is the explanatory variable.
Let us use a formal test in order to negate the hypothesis that the expectation of the response for the two weight groups is the same. The test is provided by the application of the function “t.test” to the formula “dif.mpg ~ heavy”:
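The report below may be produced by a call of the form:

t.test(dif.mpg ~ heavy)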
##
##  Welch Two Sample t-test
##
## data:  dif.mpg by heavy
## t = 2.4255, df = 191.561, p-value = 0.01621
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  0.1029150 0.9989315
## sample estimates:
## mean in group FALSE  mean in group TRUE
##            5.805825            5.254902
The function “t.test”, when applied to a formula that describes the relation between a numeric response and an explanatory factor with two levels, produces a special form of the 𝑡-test that is called the Welch Two Sample 𝑡-test. The statistical model associated with this test assumes the presence of two independent sub-samples, each associated with a level of the explanatory variable. The relevant parameters for this model are the two expectations and the two variances associated with the sub-samples.
The hypotheses tested in the context of the Welch test are formulated in terms of the difference between the expectation of the first sub-sample and the expectation of the second sub-sample. In the default application of the test the null hypothesis is that the difference is equal to 0 (or, equivalently, that the expectations are equal to each other). The alternative is that the difference is not equal to 0 (hence, the expectations differ).
The test is conducted with the aid of a test statistic. The computed value of the test statistic in this example is “t = 2.4255”. Under the null hypothesis the distribution of the test statistic is (approximately) equal to the 𝑡-distribution on “df = 191.561” degrees of freedom. The resulting 𝑝-value is “p-value = 0.01621”. Since the computed 𝑝-value is less than 0.05 we reject the null hypothesis with a significance level of 5% and declare that the expectations are not equal to each other.
The bottom part of the report presents point estimates and a confidence interval. The point estimates of the two expectations are the sub-sample averages. The estimated value of the expected difference in miles-per-gallon for lighter cars is 5.805825, which is the average of the measurements associated with the level “FALSE”. The estimated value of the expected difference for heavier cars is 5.254902, the average of measurements associated with the level “TRUE”.
The point estimate for the difference between the two expectations is the difference between the two sample averages: 5.805825 − 5.254902 = 0.550923. A confidence interval for the difference between the expectations is reported under the title “95 percent confidence interval:”. The computed value of the confidence interval is [0.1029150, 0.9989315].
In the rest of this section we describe the theory behind the construction of the confidence interval and the statistical test.
Confidence Interval for the Difference
Consider the statistical model that is used for the construction of the confidence interval. The main issue is that the model actually deals with two populations rather than one population. In previous theoretical discussions
we assumed the presence of a single population and a measurement taken for the members of this population. When the measurement was considered as a random variable it was denoted by a capital Latin letter such as 𝑋. Of concern were characteristics of the distribution of 𝑋 such as E(𝑋), the expectation of 𝑋, and V(𝑋), the variance.
In the current investigation two populations are considered. One population is the sub-population associated with the first level of the factor and the other population is associated with the second level. The measurement is taken for the members of both sub-populations. However, the measurement involves two random variables, one associated with the first sub-population and the other associated with the second sub-population. Moreover, the distribution of the measurement for one population may differ from the distribution for the other population. We denote the random variable associated with the first sub-population by 𝑋𝑎 and the one associated with the other sub-population by 𝑋𝑏.
Consider the example in which the measurement is the difference in miles-per- gallon between highway and city driving conditions. In this example 𝑋𝑎 is the measurement for cars with curb weight up to 2,414 lb and 𝑋𝑏 is the same measurement for cars with curb weight above that threshold.
The random variables 𝑋𝑎 and 𝑋𝑏 may have different distributions. Consequently, the characteristics of their distributions may also vary. Denote by E(𝑋𝑎) and E(𝑋𝑏) the expectations of the first and second random variable, respectively. Likewise, V(𝑋𝑎) and V(𝑋𝑏) are the variances of the two random variables. These expectations and variances are subjects of the statistical inference.
The sample itself may also be divided into two sub-samples according to the sub-population each observation originated from. In the example, one sub-sample is associated with the lighter car types and the other sub-sample with the heavier ones. These sub-samples can be used in order to make inference with respect to the parameters of 𝑋𝑎 and 𝑋𝑏, respectively. For example, the average of the observations from the first sub-sample, 𝑋̄𝑎, can serve as the estimator of the expectation E(𝑋𝑎) and the second sub-sample’s average 𝑋̄𝑏 may be used in order to estimate E(𝑋𝑏).
Our goal in this section is to construct a confidence interval for the difference in expectations E(𝑋𝑎) − E(𝑋𝑏). A natural estimator for this difference in expectations is the difference in averages 𝑋̄𝑎 − 𝑋̄𝑏. The average difference will also serve as the basis for the construction of a confidence interval.
Recall that the construction of the confidence interval for a single expectation was based on the sample average 𝑋̄. We exploited the fact that the distribution of 𝑍 = (𝑋̄ − E(𝑋))/√(V(𝑋)/𝑛), the standardized sample average, is approximately standard Normal. From this Normal approximation we obtained an approximate 0.95 probability for the event
{ − 1.96 ⋅ √V(𝑋)/𝑛 ≤ 𝑋̄ − E(𝑋) ≤ 1.96 ⋅ √V(𝑋)/𝑛} ,
where 1.96 = qnorm(0.975) is the 0.975-percentile of the standard Normal distribution. Substituting the sample variance 𝑆² for the unknown variance of the measurement and rewriting the event in a format that puts the expectation E(𝑋) in the center, between two boundaries, produced the confidence interval 𝑋̄ ± 1.96 ⋅ √(𝑆²/𝑛).
Similar considerations can be used in the construction of a confidence interval for the difference between expectations on the basis of the difference between sub-sample averages. The deviation {𝑋̄𝑎 − 𝑋̄𝑏} − {E(𝑋𝑎) − E(𝑋𝑏)} between the difference of the averages and the difference of the expectations that they estimate can be standardized. By the Central Limit Theorem one may obtain that the distribution of the standardized deviation is approximately standard Normal.
Standardization is obtained by dividing by the standard deviation of the estimator. In the current setting the estimator is the difference between the averages. The variance of the difference is given by

V(𝑋̄𝑎 − 𝑋̄𝑏) = V(𝑋̄𝑎) + V(𝑋̄𝑏) = V(𝑋𝑎)/𝑛𝑎 + V(𝑋𝑏)/𝑛𝑏 ,
where 𝑛𝑎 is the size of the sub-sample that produces the sample average 𝑋̄𝑎 and 𝑛𝑏 is the size of the sub-sample that produces the sample average 𝑋̄𝑏. Observe that both 𝑋̄𝑎 and 𝑋̄𝑏 contribute to the variability of the difference. The total variability is the sum of the two contributions. Finally, we use the fact that the variance of the sample average is equal to the variance of a single measurement divided by the sample size. This fact is used for both averages in order to obtain a representation of the variance of the estimator in terms of the variances of the measurement in the two sub-populations and the sizes of the two sub-samples.
The standardized deviation takes the form:

𝑍 = ({𝑋̄𝑎 − 𝑋̄𝑏} − {E(𝑋𝑎) − E(𝑋𝑏)}) / √(V(𝑋𝑎)/𝑛𝑎 + V(𝑋𝑏)/𝑛𝑏) .
When both sample sizes 𝑛𝑎 and 𝑛𝑏 are large then the distribution of 𝑍 is approximately standard Normal. As a corollary from the Normal approximation one gets that P(−1.96 ≤ 𝑍 ≤ 1.96) ≈ 0.95.
In the case where the sample size is small and the observations are Normally distributed we used the 𝑡-distribution instead. The percentile that was used in that case was qt(0.975,n-1), the 0.975-percentile of the 𝑡-distribution on 𝑛 − 1 degrees of freedom.
It can be proved mathematically that the variance of a difference (or a sum) of two independent random variables is the sum of the variances. The situation is different when the two random variables are correlated.
The values of the variances V(𝑋𝑎) and V(𝑋𝑏) that appear in the definition of 𝑍 are unknown. However, these values can be estimated using the sub-sample variances 𝑆𝑎² and 𝑆𝑏². When the sizes of both sub-samples are large then these estimators will produce good approximations of the unknown variances:

P(−1.96 ≤ ({𝑋̄𝑎 − 𝑋̄𝑏} − {E(𝑋𝑎) − E(𝑋𝑏)}) / √(𝑆𝑎²/𝑛𝑎 + 𝑆𝑏²/𝑛𝑏) ≤ 1.96) ≈ 0.95 .
The approximation results from the use of the sub-sample variances as a substi- tute for the unknown variances of the measurement in the two sub-populations. When the two sample sizes 𝑛𝑎 and 𝑛𝑏 are large then the probability of the given event will also be approximately equal to 0.95.
Finally, re-expressing the last event in a format that puts the parameter E(𝑋𝑎) − E(𝑋𝑏) in the center produces the confidence interval with boundaries of the form:

𝑋̄𝑎 − 𝑋̄𝑏 ± 1.96 ⋅ √(𝑆𝑎²/𝑛𝑎 + 𝑆𝑏²/𝑛𝑏) .
In order to illustrate the computations that are involved in the construction of a confidence interval for the difference between two expectations let us return to the example of difference in miles-per-gallon for lighter and for heavier cars. Compute the two sample sizes, sample averages, and sample variances:
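The quantities below may be obtained with the functions “table” and “tapply”, applied to the objects “dif.mpg” and “heavy” defined above:

table(heavy)                     # sub-sample sizes
tapply(dif.mpg, heavy, mean)     # sub-sample averages
tapply(dif.mpg, heavy, var)      # sub-sample variances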
## heavy
## FALSE  TRUE
##   103   102

##    FALSE     TRUE
## 5.805825 5.254902

##    FALSE     TRUE
## 2.020750 3.261114
Observe that there are 103 lighter cars and 102 heavier ones. These counts were obtained by the application of the function “table” to the factor “heavy”. The lighter cars are associated with the level “FALSE” and heavier cars are associated with the level “TRUE”.
The average difference in miles-per-gallon for lighter cars is 5.805825 and the variance is 2.020750. The average difference in miles-per-gallon for heavier cars is 5.254902 and the variance is 3.261114. These quantities were obtained by the application of the functions “mean” or “var” to the values of the variable “dif.mpg” that are associated with each level of the factor “heavy”. The application was carried out using the function “tapply”.
The computed values of the means are equal to the values reported in the output of the application of the function “t.test” to the formula “dif.mpg ~ heavy”. The difference between the averages is 𝑥̄𝑎 − 𝑥̄𝑏 = 5.805825 − 5.254902 = 0.550923. This value is the center of the confidence interval. The estimate of the standard deviation of the difference in averages is √(𝑠𝑎²/𝑛𝑎 + 𝑠𝑏²/𝑛𝑏) = √(2.020750/103 + 3.261114/102) ≈ 0.227135, and the resulting confidence interval is 0.550923 ± 1.96 ⋅ 0.227135 ≈ [0.105738, 0.996108].
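A minimal R check of this computation, relying on the quantities computed above:

se <- sqrt(2.020750/103 + 3.261114/102)    # estimated standard deviation of the difference
se                                         # approximately 0.227135
0.550923 + c(-1, 1) * qnorm(0.975) * se    # confidence interval for the difference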
The t-Test for Two Means
The statistical model that involves two sub-populations may be considered also in the context of hypothesis testing. Hypotheses can be formulated regarding the relations between the parameters of the model. These hypotheses can be tested using the data. For example, in the current application of the 𝑡-test, the null hypothesis is 𝐻0 ∶ E(𝑋𝑎) = E(𝑋𝑏) and the alternative hypothesis is
𝐻1 ∶ E(𝑋𝑎) ≠ E(𝑋𝑏). In this subsection we explain the theory behind this test.
Recall that the construction of a statistical test included the definition of a test statistic and the determination of a rejection region.
The confidence interval given in the output of the function “t.test” is [0.1029150, 0.9989315], which is very similar, but not identical, to the confidence interval that we computed. The discrepancy stems from the selection of the percentile. We used the percentile of the Normal distribution, 1.96 = qnorm(0.975). The function “t.test”, on the other hand, uses the percentile of the 𝑡-distribution, 1.972425 = qt(0.975,191.561). Using this value instead would give 0.550923 ± 1.972425 ⋅ 0.227135, which coincides with the interval reported by “t.test”. For practical applications the difference between the two confidence intervals is negligible.
The null hypothesis is rejected if, and only if, the test statistic obtains a value in the rejection region. The determination of the rejection region is based on the sampling distribution of the test statistic under the null hypothesis. The significance level of the test is the probability of rejecting the null hypothesis (i.e., the probability that the test statistic obtains a value in the rejection region) when the null hypothesis is correct (the distribution of the test statistic is the distribution under the null hypothesis). The significance level of the test is set at a given value, say 5%, thereby restricting the size of the rejection region.
In the previous chapter we considered the case where there is one population. For review, consider testing the hypothesis that the expectation of the measurement is equal to zero (𝐻0 ∶ E(𝑋) = 0) against the alternative hypothesis that it is not (𝐻1 ∶ E(𝑋) ≠ 0). A sample of size 𝑛 is obtained from this population. Based on the sample one may compute the test statistic:

𝑇 = 𝑋̄ / (𝑆/√𝑛) ,
where 𝑋̄ is the sample average and 𝑆 is the sample standard deviation. The rejection region of this test is {|𝑇| > qt(0.975,n-1)}, for “qt(0.975,n-1)” the 0.975-percentile of the 𝑡-distribution on 𝑛 − 1 degrees of freedom.
Alternatively, one may compute the 𝑝-value and reject the null hypothesis if the 𝑝-value is less than 0.05. The 𝑝-value in this case is equal to P(|𝑇| > |𝑡|), where 𝑡 is the computed value of the test statistic. The distribution of 𝑇 is the 𝑡-distribution on 𝑛 − 1 degrees of freedom.
A similar approach can be used in the situation where two sub-populations are involved and one wants to test the null hypothesis that the expectations are equal versus the alternative hypothesis that they are not. The null hypothesis can be written in the form 𝐻0 ∶ E(𝑋𝑎) − E(𝑋𝑏) = 0 with the alternative hypothesis given as 𝐻1 ∶ E(𝑋𝑎) − E(𝑋𝑏) ≠ 0.
It is natural to base the test statistic on the difference between the sub-sample averages 𝑋̄𝑎 − 𝑋̄𝑏. The 𝑇 statistic is the ratio between the deviation of the estimator from the null value of the parameter, divided by the (estimated) standard deviation of the estimator. In the current setting the estimator is the difference in sub-sample averages 𝑋̄𝑎 − 𝑋̄𝑏, the null value of the parameter, the difference between the expectations, is 0, and the (estimated) standard deviation of the estimator is √(𝑆𝑎²/𝑛𝑎 + 𝑆𝑏²/𝑛𝑏). It turns out that the test statistic

𝑇 = (𝑋̄𝑎 − 𝑋̄𝑏) / √(𝑆𝑎²/𝑛𝑎 + 𝑆𝑏²/𝑛𝑏)

obtains in our example the value 𝑡 = 0.550923/0.227135 ≈ 2.4255,
which, after rounding up, is equal to the value presented in the report that was produced by the function “t.test”.
The 𝑝-value is computed as the probability of obtaining values of the test statistic more extreme than the value that was obtained in our data. The computation is carried out under the assumptions of the null hypothesis. The limit distribution of the 𝑇 statistic, when both sub-sample sizes 𝑛𝑎 and 𝑛𝑏 are large, is standard Normal. In the case when the measurements are Normally distributed then a refined approximation of the distribution of the statistic is the 𝑡-distribution. Both the standard Normal and the 𝑡-distribution are symmetric about the origin.
The probability of obtaining a value in either tails for a symmetric distribution is equal to twice the probability of obtaining a value in the upper tail:
P(|𝑇| > 2.4255) = 2 × P(𝑇 > 2.4255) = 2 × [1 − P(𝑇 ≤ 2.4255)] .
The function “t.test” computes the 𝑝-value using the 𝑡-distribution. For the current data, the number of degrees of freedom that are used in this approximation is df = 191.561. When we apply the function “pt” for the computation of the cumulative probability of the 𝑡-distribution we get:
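A sketch of the computation, using twice the upper-tail probability of the 𝑡-distribution on 191.561 degrees of freedom:

2 * (1 - pt(2.4255, 191.561))   # approximately 0.0162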
The Welch 𝑡-test for the comparison of two means uses the 𝑡-distribution as an approximation of the null distribution of the 𝑇 test statistic. The number of degrees of freedom is computed by the formula: df = (𝑣𝑎 + 𝑣𝑏)² / {𝑣𝑎²/(𝑛𝑎 − 1) + 𝑣𝑏²/(𝑛𝑏 − 1)}, where 𝑣𝑎 = 𝑠𝑎²/𝑛𝑎 and 𝑣𝑏 = 𝑠𝑏²/𝑛𝑏.
which (after rounding) is equal to the reported 𝑝-value of 0.01621. This 𝑝-value is less than 0.05, hence the null hypothesis is rejected in favor of the alternative hypothesis that assumes an effect of the weight on the expectation.
Comparing Sample Variances
In the previous section we discussed inference associated with the comparison of the expectations of a numerical measurement between two sub-populations. Inference included the construction of a confidence interval for the difference between expectations and the testing of the hypothesis that the expectations are equal to each other.
In this section we consider a comparison between the variances of the measurement in the two sub-populations. For this inference we consider the ratio between the estimators of the variances and introduce a new distribution, the 𝐹-distribution, that is associated with this ratio.
Assume, again, the presence of two sub-populations, denoted 𝑎 and 𝑏. A numerical measurement is taken over a sample. The sample can be divided into two sub-samples according to the sub-population of origin. In the previous section we were interested in inference regarding the relation between the expectations of the measurement in the two sub-populations. Here we are concerned with the comparison of the variances.
Specifically, let 𝑋𝑎 be the measurement at the first sub-population and let 𝑋𝑏 be the measurement at the second sub-population. We want to compare V(𝑋𝑎), the variance in the first sub-population, to V(𝑋𝑏), the variance in the second sub-population. As the basis for the comparison we may use 𝑆𝑎² and 𝑆𝑏², the sub-sample variances, which are computed from the observations in the first and the second sub-sample, respectively.
Consider the confidence interval for the ratio of the variances. In an earlier chapter we discussed the construction of the confidence interval for the variance in a single sample. The derivation was based on the sample variance 𝑆², which serves as an estimator of the population variance V(𝑋). In particular, the distribution of the random variable (𝑛 − 1)𝑆²/V(𝑋) was identified as the chi-square distribution on 𝑛 − 1 degrees of freedom. A confidence interval for the variance was obtained as a result of the identification of a central region in the chi-square distribution that contains a prescribed probability.
In order to construct a confidence interval for the ratio of the variances we consider the random variable that is obtained as a ratio of the estimators of the variances:

{𝑆𝑎²/V(𝑋𝑎)} / {𝑆𝑏²/V(𝑋𝑏)} .
The distribution of this random variable is called the 𝐹-distribution. This distribution is characterized by the number of degrees of freedom associated with the estimator of the variance at the numerator and by the number of degrees of freedom associated with the estimator of the variance at the denominator. The number of degrees of freedom associated with the estimation of each variance is the number of observations used for the computation of the estimator, minus 1. In the current setting the numbers of degrees of freedom are 𝑛𝑎 − 1 and 𝑛𝑏 − 1, respectively.
The percentiles of the 𝐹-distribution can be computed in R using the function “qf”. For example, the 0.025-percentile of the distribution for the ratio between sample variances of the response for two sub-samples is computed by the expression “qf(0.025,dfa,dfb)”, where dfa = 𝑛𝑎 − 1 and dfb = 𝑛𝑏 − 1. Likewise, the 0.975-percentile is computed by the expression “qf(0.975,dfa,dfb)”. Between these two numbers lie 95% of the given 𝐹-distribution. Consequently, the probability that the random variable {𝑆𝑎²/V(𝑋𝑎)}/{𝑆𝑏²/V(𝑋𝑏)} obtains its values between these two percentiles is equal to 0.95. Rewriting this event in a format that places the ratio V(𝑋𝑎)/V(𝑋𝑏) in the center produces:

{ (𝑆𝑎²/𝑆𝑏²)/qf(0.975,dfa,dfb) ≤ V(𝑋𝑎)/V(𝑋𝑏) ≤ (𝑆𝑎²/𝑆𝑏²)/qf(0.025,dfa,dfb) } .
This confidence interval has a confidence level of 95%.
Next, consider testing hypotheses regarding the relation between the variances. Of particular interest is testing the equality of the variances. One may formulate the null hypothesis as 𝐻0 ∶ V(𝑋𝑎)/V(𝑋𝑏) = 1 and test it against the alternative hypothesis 𝐻1 ∶ V(𝑋𝑎)/V(𝑋𝑏) ≠ 1.
The statistic 𝐹 = 𝑆𝑎²/𝑆𝑏² can be used in order to test the given null hypothesis.
The 𝐹-distribution is obtained when the measurement has a Normal distribution. When the distribution of the measurement is not Normal then the distribution of the given random variable will not be the 𝐹-distribution.
Values of this statistic that are either much larger or much smaller than 1 are evidence against the null hypothesis and in favor of the alternative hypothesis. The sampling distribution, under the null hypothesis, of this statistic is the 𝐹(𝑛𝑎−1, 𝑛𝑏−1) distribution. Consequently, the null hypothesis is rejected either if 𝐹 < qf(0.025,dfa,dfb) or if 𝐹 > qf(0.975,dfa,dfb), where dfa = 𝑛𝑎 − 1 and dfb = 𝑛𝑏 − 1. The significance level of this test is 5%.
Given an observed value of the statistic, the 𝑝-value is computed as the significance level of the test which uses the observed value as the threshold. If the observed value 𝑓 is less than 1 then the 𝑝-value is twice the probability of the lower tail: 2 ⋅ P(𝐹 < 𝑓). On the other hand, if 𝑓 is larger than 1 one takes twice the upper tail as the 𝑝-value: 2 ⋅ P(𝐹 > 𝑓) = 2 ⋅ [1 − P(𝐹 ≤ 𝑓)]. The null hypothesis is rejected with a significance level of 5% if the 𝑝-value is less than 0.05.
In order to illustrate the inference that compares variances let us return to the variable “dif.mpg” and compare the variances associated with the two levels of the factor “heavy”. The analysis will include testing the hypothesis that the two variances are equal and an estimate and a confidence interval for their ratio.
The function “var.test” may be used in order to carry out the required tasks. The input to the function is a formula such as “dif.mpg ~ heavy”, with a numeric variable on the left and a factor with two levels on the right. The default application of the function to the formula produces the desired test and confidence interval:
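The call takes the form:

var.test(dif.mpg ~ heavy)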
##
##  F test to compare two variances
##
## data:  dif.mpg by heavy
## F = 0.61965, num df = 102, denom df = 101, p-value = 0.01663
## alternative hypothesis: true ratio of variances is not equal to 1
## 95 percent confidence interval:
##  0.4189200 0.9162126
## sample estimates:
## ratio of variances
##          0.6196502
Consider the report produced by the function. The observed value of the test statistic is “F = 0.61965”, and it is associated with the 𝐹-distribution on “num df = 102” and “denom df = 101” degrees of freedom. The test statistic can be used in order to test the null hypothesis 𝐻0 ∶ V(𝑋𝑎)/V(𝑋𝑏) = 1, which states that the two variances are equal, against the alternative hypothesis that they are not. The 𝑝-value for this test is “p-value = 0.01663”, which is less than 0.05.
Consequently, the null hypothesis is rejected and the conclusion is that the two variances are significantly different from each other. The estimated ratio of variances, given at the bottom of the report, is 0.6196502. The confidence interval for the ratio is reported also and is equal to [0.4189200, 0.9162126].
In order to relate the report to the theoretical discussion above let us recall that the sub-sample variances are 𝑠𝑎² = 2.020750 and 𝑠𝑏² = 3.261114. The sub-sample sizes are 𝑛𝑎 = 103 and 𝑛𝑏 = 102, respectively. The observed value of the statistic is the ratio 𝑠𝑎²/𝑠𝑏² = 2.020750/3.261114 = 0.6196502, which is the value that appears in the report. Notice that this is the estimate of the ratio between the variances that is given at the bottom of the report.
The 𝑝-value of the two-sided test is equal to twice the probability of the tail that is associated with the observed value of the test statistic as a threshold. The number of degrees of freedom is dfa = 𝑛𝑎 − 1 = 102 and dfb = 𝑛𝑏 − 1 = 101. The observed value of the ratio test statistic is 𝑓 = 0.6196502. This value is less than one. Consequently, the probability P(𝐹 < 0.6196502) enters into the computation of the 𝑝-value, which equals twice this probability:
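The computation may be carried out in R as follows, using twice the lower-tail probability of the 𝐹-distribution on 102 and 101 degrees of freedom:

2 * pf(0.6196502, 102, 101)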
## [1] 0.01662612
Compare this value to the 𝑝-value that appears in the report and see that, after rounding up, the two are the same.
For the confidence interval of the ratio compute the percentiles of the 𝐹-distribution:
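A sketch of the relevant computations:

qf(0.025, 102, 101)   # lower percentile of the F-distribution
qf(0.975, 102, 101)   # upper percentile of the F-distribution

Dividing the observed ratio 0.6196502 by the 0.975-percentile and by the 0.025-percentile should produce, respectively, the lower and the upper boundaries of the reported confidence interval [0.4189200, 0.9162126].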
Exercises
Exercise 13.1. In this exercise we would like to analyze the results of the trial that involves magnets as a treatment for pain. The trial is described in Question [ex:Inference.1]. The results of the trial are provided in the file “magnets.csv”.
Patients in this trial were randomly assigned to a treatment or to a control. The responses relevant for this analysis are either the variable “change”, which measures the difference in the score of pain reported by the patients before and after the treatment, or the variable “score1”, which measures the score of pain before a device is applied. The explanatory variable is the factor “active”. This factor has two levels, level “1” to indicate the application of an active magnet and level “2” to indicate the application of an inactive placebo.
In the following questions you are required to carry out tests of hypotheses. All tests should be conducted at the 5% significance level:
Is there a significant difference between the treatment and the control groups in the expectation of the reported score of pain before the application of the device?
Is there a significant difference between the treatment and the control groups in the variance of the reported score of pain before the application of the device?
Is there a significant difference between the treatment and the control groups in the expectation of the change in score that resulted from the application of the device?
Is there a significant difference between the treatment and the control groups in the variance of the change in score that resulted from the application of the device?
Exercise 13.2. It is assumed, when constructing the 𝐹-test for equality of variances, that the measurements are Normally distributed. In this exercise we want to examine the robustness of the test to divergence from the assumption. You are required to compute the significance level of a two-sided 𝐹-test of 𝐻0 ∶ V(𝑋𝑎) = V(𝑋𝑏) versus 𝐻1 ∶ V(𝑋𝑎) ≠ V(𝑋𝑏). Assume there are 𝑛𝑎 = 29 observations in one group and 𝑛𝑏 = 21 observations in the other group. Use an 𝐹-test with a nominal 5% significance level.
Consider the case where 𝑋 ∼ Normal(4, 42).
Consider the case where 𝑋 ∼ Exponential(1/4).
Exercise 13.3. The sample average in one sub-sample is 𝑥̄𝑎 = 124.3 and the sample standard deviation is 𝑠𝑎 = 13.4. The sample average in the second sub-sample is 𝑥̄𝑏 = 80.5 and the sample standard deviation is 𝑠𝑏 = 16.7. The size of the first sub-sample is 𝑛𝑎 = 15 and this is also the size of the second sub-sample. We are interested in the estimation of the ratio of variances V(𝑋𝑎)/V(𝑋𝑏).
Compute the estimate of the parameter of interest.
Construct a confidence interval, with a confidence level of 95%, for the value of the parameter of interest.
It is discovered that the size of each of the sub-samples is actually equal to 150, and not to 15 (but the values of the other quantities are unchanged). What is the corrected estimate? What is the corrected confidence interval?
Discuss in the forum
Statistics has an important role in the analysis of data. However, some claim that the more important role of statistics is in the design stage when one decides how to collect the data. Good design may improve the chances that the eventual inference of the data will lead to a meaningful and trustworthy conclusion.
Some say that the quantity of data that is collected is most important. Others say that the quality of the data is more important than the quantity. What is your opinion?
When formulating your answer it may be useful to come up with an example from your past experience where the quantity of data was not sufficient. Alternatively, you can describe a case where the quality of the data was less than satisfactory. How did these deficiencies affect the validity of the conclusions of the analysis of the data?
For illustration, consider surveys. Conducting a survey by telephone may be a fast way to collect a large number of responses. However, the quality of the responses may be lower than that of responses obtained by face-to-face interviews.
Formulas:
Test statistic for equality of expectations: 𝑡 = (𝑥̄𝑎 − 𝑥̄𝑏) / √(𝑠𝑎²/𝑛𝑎 + 𝑠𝑏²/𝑛𝑏).
Confidence interval: (𝑥̄𝑎 − 𝑥̄𝑏) ± qnorm(0.975) ⋅ √(𝑠𝑎²/𝑛𝑎 + 𝑠𝑏²/𝑛𝑏).
Test statistic for equality of variances: 𝑓 = 𝑠𝑎²/𝑠𝑏².
Student Learning Objectives
In the previous chapter we examined the situation where the response is nu- meric and the explanatory variable is a factor with two levels. This chapter deals with the case where both the response and the explanatory variables are numeric. The method that is used in order to describe the relations between the two variables is regression. Here we apply linear regression to deal with a linear relation between two numeric variables. This type of regression fits a line to the data. The line summarizes the effect of the explanatory variable on the distribution of the response.
Statistical inference can be conducted in the context of regression. Specifically, one may fit the regression model to the data. This corresponds to the point estimation of the parameters of the model. Also, one may produce confidence intervals for the parameters and carry out hypotheses testing. Another issue that is considered is the assessment of the percentage of variability of the response that is explained by the regression model.
By the end of this chapter, the student should be able to:
Produce scatter plots of the response and the explanatory variable.
Explain the relation between a line and the parameters of a linear equation. Add lines to a scatter plot.
Fit the linear regression to data using the function “lm” and conduct statis- tical inference on the fitted model.
Explain the relations among 𝑅², the percentage of response variability explained by the regression model, the variability of the regression residuals, and the variance of the response.
Points and Lines
In this section we consider the graphical representation of the response and the explanatory variables on the same plot. The data associated with both variables is plotted as points in a two-dimensional plane. Linear equations can be represented as lines on the same two-dimensional plane. This section prepares the background for the discussion of the linear regression model. The actual model of linear regression is introduced in the next section.
The Scatter Plot
Consider two numeric variables. A scatter plot can be used in order to display the data in these two variables. The scatter plot is a graph in which each observation is represented as a point. Examination of the scatter plot may reveal relations between the two variables.
Consider an example. A marine biologist measured the length (in millimeters) and the weight (in grams) of 10 fish that were collected in one of her expeditions. The results are summarized in a data frame that is presented in Table [tab:Regression_1]. Notice that the data frame contains 10 observations. The variable 𝑥 corresponds to the length of the fish and the variable 𝑦 corresponds to the weight.
Let us display this data in a scatter plot. Towards that end, let us read the length data into an object by the name “x” and the weight data into an object by the name “y”. Finally, let us apply the function “plot” to the formula that relates the response “y” to the explanatory variable “x”:
The scatter plot that is produced by the last expression is presented in the figure.
A scatter plot is a graph that displays jointly the data of two numerical vari- ables. The variables (“x” and “y” in this case) are represented by the 𝑥-axis and the 𝑦-axis, respectively. The 𝑥-axis is associated with the explanatory variable and the 𝑦-axis is associated with the response.
Each observation is represented by a point. The 𝑥-value of the point corresponds to the value of the explanatory variable for the observation and the 𝑦-value corresponds to the value of the response. For example, the first observation is represented by the point (𝑥 = 4.5, 𝑦 = 9.5). The two rightmost points have an 𝑥 value of 4.5. The higher of the two has a 𝑦 value of 9.5 and is therefore the point associated with the first observation. The lower of the two has a 𝑦 value of 8.0, and is thus associated with the 8th observation. Altogether there are 10 points in the plot, corresponding to the 10 observations in the data frame.
Let us consider another example of a scatter plot. The file “cars.csv” contains data regarding characteristics of cars. Among the variables in this data frame are the variable “horsepower” and the variable “engine.size”. Both variables are numeric.
The variable “engine.size” describes the volume, in cubic inches, that is swept by all the pistons inside the cylinders. The variable “horsepower” measures the power of the engine in units of horsepower. Let us examine the relation between these two variables with a scatter plot: |
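A plausible sketch of the two expressions described in the next paragraph, using the file name “cars.csv” mentioned above:

cars <- read.csv("cars.csv")                # read the data into a data frame named "cars"
plot(horsepower ~ engine.size, data=cars)   # scatter plot of the response versus the explanatory variable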
In the first line of code we read the data from the file into an R data frame that is given the name “cars”. In the second line we produce the scatter plot with “horsepower” as the response and “engine.size” as the explanatory variable. Both variables are taken from the data frame “cars”. The plot that is produced by the last expression is presented in Figure .
Consider the expression “plot(horsepower~engine.size, data=cars)”. Both the response variable and the explanatory variable that are given in this expression do not exist in the computer’s memory as independent objects, but only as variables within the object “cars”. In some cases, however, one may refer to these variables directly within the function, provided that the argument “data=data.frame.name” is added to the function. This argument informs the function in which data frame the variables can be found, where data.frame.name is the name of the data frame. In the current example, the variables are located in the data frame “cars”.
Examine the scatter plot in Figure . One may see that the values of the response (horsepower) tend to increase with the increase in the values of the explanatory variable (engine.size). Overall, the increase tends to follow a linear trend, a straight line, although the data points are not located exactly on such a line.
Linear Equation
Linear regression describes linear trends in the relation between a response and an explanatory variable. Linear trends may be specified with the aid of linear equations. In this subsection we discuss the relation between a linear equation and a linear trend (a straight line).
A linear equation is an equation of the form:
𝑦 = 𝑎 + 𝑏 ⋅ 𝑥 ,
where 𝑦 and 𝑥 are variables and 𝑎 and 𝑏 are the coefficients of the equation. The coefficient 𝑎 is called the intercept and the coefficient 𝑏 is called the slope.
A linear equation can be used in order to plot a line on a graph. With each value on the 𝑥-axis one may associate a value on the 𝑦-axis: the value that satisfies the linear equation. The collection of all such pairs of points, all possible 𝑥 values and their associated 𝑦 values, produces a straight line in the two-dimensional plane. |
As an illustration consider the three lines in Figure . The green line is produced via the equation 𝑦 = 7 + 𝑥; the intercept of the line is 7 and the slope is 1. The blue line is the result of the equation 𝑦 = 14 − 2𝑥. For this line the intercept is 14 and the slope is −2. Finally, the red line is produced by the equation 𝑦 = 8.97. The intercept of the line is 8.97 and the slope is equal to 0.
The intercept describes the value of 𝑦 when the line crosses the 𝑦-axis. Equivalently, it is the result of the application of the linear equation for the value
𝑥 = 0. Observe in Figure that the green line crosses the 𝑦-axis at the level
𝑦 = 7. Likewise, the blue line crosses the 𝑦-axis at the level 𝑦 = 14. The red line stays constantly at the level 𝑦 = 8.97, and this is also the level at which it crosses the 𝑦-axis.
The slope is the change in the value of 𝑦 for each unit change in the value of 𝑥. Consider the green line. When 𝑥 = 0 the value of 𝑦 is 𝑦 = 7. When 𝑥 changes to 𝑥 = 1 then the value of 𝑦 changes to 𝑦 = 8. A change of one unit in 𝑥 corresponds to an increase of one unit in 𝑦. Indeed, the slope for this line is 𝑏 = 1. As for the blue line, when 𝑥 changes from 0 to 1 the value of 𝑦 changes from 𝑦 = 14 to 𝑦 = 12; a decrease of two units. This decrease is associated with the slope 𝑏 = −2. Lastly, for the constant red line there is no change in the value of 𝑦 when 𝑥 changes its value from 𝑥 = 0 to 𝑥 = 1. Therefore, the slope is 𝑏 = 0. A positive slope is associated with an increasing line, a negative slope is associated with a decreasing line and a zero slope is associated with a constant line.
Lines can be considered in the context of scatter plots. Figure contains the scatter plot of the data on the relation between the length of fish and their weight. A regression line is the line that best describes the linear trend of the relation between the explanatory variable and the response. Neither of the lines in the figure is the regression line, although the green line is a better description of the trend than the blue line. The regression line is the best description of the linear trend.
The red line is a fixed line that is constructed at a level equal to the average value of the variable 𝑦. This line partly reflects the information in the data. The regression line, which we fit in the next section, reflects more of the information by including a description of the trend in the data.
Lastly, let us see how one can add lines to a plot in R. Functions to produce plots in R can be divided into two categories: high level and low level plotting functions. High level functions produce an entire plot, including the axes and the labels of the plot. The plotting functions that we encountered in the past such as “plot”, “hist”, “boxplot” and the like are all high level plotting functions. Low level functions, on the other hand, add features to an existing plot.
An example of a low level plotting function is the function “abline”. This function adds a straight line to an existing plot. The first argument to the function is the intercept of the line and the second argument is the slope of the line. Other arguments may be used in order to specify the characteristics of the line. For example, the argument “col=color.name” may be used in order to change the color of the line from its default black color. A plot that is very similar to the plot in Figure may be produced with the following code:
1. Run the expression “mean(y)” to obtain 𝑦̄ = 8.97 as the value of the sample average.
2. The actual plot in Figure is produced by a slightly modified code. First an empty plot is produced with the expression
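A sketch of the expressions that produce the scatter plot and add the three lines, assuming “x” and “y” hold the fish data:

plot(y ~ x)                     # scatter plot of the data
abline(7, 1, col="green")       # the line y = 7 + x
abline(14, -2, col="blue")      # the line y = 14 - 2x
abline(mean(y), 0, col="red")   # a constant line at the level of the average of y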
Initially, the scatter plot is created and the lines are added to the plot one after the other. Observe that the color of the first line that is added is green; it has an intercept of 7 and a slope of 1. The second line is blue, with an intercept of 14 and a slope of −2. The last line is red, and its constant value is the average of the variable 𝑦.
In the next section we discuss the computation of the regression line, the line that describes the linear trend in the data. This line will be added to scatter plots with the aid of the function “abline”.
Linear Regression
Data that describes the joint distribution of two numeric variables can be represented with a scatter plot. The 𝑦-axis in this plot corresponds to the re- sponse and the 𝑥-axis corresponds to the explanatory variable. The regression line describes the linear trend of the response as a function of the explanatory variable. This line is characterized by a linear equation with an intercept and a slope that are computed from the data.
In the first subsection we present the computation of the regression linear |
equation from the data. The second subsection discusses regression as a statistical model. Statistical inference can be carried out on the basis of this model. In the context of the statistical model, one may consider the intercept and the slope of the regression model that is fitted to the data as point estimates of the model’s parameters. Based on these estimates, one may test hypotheses regarding the regression model and construct confidence intervals for parameters.
Fitting the Regression Line
The R function that fits the regression line to data is called “lm”, an acronym for Linear Model. The input to the function is a formula, with the response variable to the left of the tilde character and the explanatory variable to the right of it. The output of the function is the fitted linear regression model.
Let us apply the linear regression function to the data on the weight and the length of fish. The output of the function is saved by us in an object called “fit”. Subsequently, the content of the object “fit” is displayed:
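Presumably the expressions were of the form:

fit <- lm(y ~ x)   # fit the linear regression of the weight on the length
fit                # display the fitted model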
##
## Call:
## lm(formula = y ~ x) ##
## Coefficients:
## (Intercept) x
## 4.616 1.427
When displayed, the output of the function “lm” shows the formula that was used by the function and provides the coefficients of the regression linear equation. Observe that the intercept of the line is equal to 4.616. The slope of the line, the coefficient that multiplies “x” in the linear equation, is equal to 1.427.
One may add the regression line to the scatter plot with the aid of the function “abline”:
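A sketch of the two expressions described in the next paragraph:

plot(y ~ x)   # scatter plot of the data
abline(fit)   # add the fitted regression line to the plot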
The first expression produces the scatter plot of the data on fish. The second expression adds the regression line to the scatter plot. When the input to the graphical function “abline” is the output of the function “lm” that fits |
the regression line, then the result is the addition of the regression line to the existing plot. The line that is added is the line characterized by the coefficients that are computed by the function “lm”. The coefficients in the current setting are 4.616 for the intercept and 1.427 for the slope.
The scatter plot and the added regression line are displayed in Figure . Observe that the line passes through the points, balancing between the points that are above the line and the points that are below. The line captures the linear trend in the data.
Examine the line in Figure . When 𝑥 = 1 then the 𝑦 value of the line is slightly above 6. When the value of 𝑥 is equal to 2, a change of one unit, then the value of 𝑦 is below 8, and is approximately equal to 7.5. This observation is consistent with the fact that the slope of the line is 1.427. The value of 𝑥 is decreased by 1 when changing from 𝑥 = 1 to 𝑥 = 0. Consequently, the value of
𝑦 when 𝑥 = 0 should decrease by 1.427 in comparison to its value when 𝑥 = 1. The value at 𝑥 = 1 is approximately 6. Therefore, the value at 𝑥 = 0 should be approximately 4.6. Indeed, we do get that the intercept is equal to 4.616.
The coefficients of the regression line are computed from the data and are hence statistics. Specifically, the slope of the regression line is computed as the ratio between the covariance of the response and the explanatory variable, divided by the variance of the explanatory variable. The intercept of the re- gression line is computed using the sample averages of both variables and the computed slope.
Start with the slope. The main ingredient in the formula for the slope, the numerator in the ratio, is the covariance between the two variables. The covariance measures the joint variability of two variables. Recall that the formula for the sample variance of the variable 𝑥 is equal to:

∑(𝑥𝑖 − 𝑥̄)²/(𝑛 − 1) ,

the sum of squared deviations from the average, divided by the number of observations minus 1. The sample covariance between 𝑥 and 𝑦 replaces the squared deviation of 𝑥 by the product of the deviation of 𝑥 and the deviation of 𝑦:

∑(𝑥𝑖 − 𝑥̄)(𝑦𝑖 − 𝑦̄)/(𝑛 − 1) .
The function “cov” computes the sample covariance between two numeric variables. The two variables enter as arguments to the function and the sample covariance is the output. Let us demonstrate the computation by first applying the given function to the data on fish and then repeating the computations without the aid of the function:
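A sketch of the two computations; both produce the value printed below:

cov(x, y)                                              # the function "cov"
sum((x - mean(x)) * (y - mean(y))) / (length(x) - 1)   # the same computation carried out manually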
## [1] 2.386111
In both cases we obtained the same result. Notice that the sum of products of deviations in the second expression was divided by 9, which is the number of observations minus 1.
The slope of the regression line is the ratio between the covariance and the variance of the explanatory variable.
The regression line passes through the point (𝑥̄, 𝑦̄), a point that is determined by the means of both the explanatory variable and the response. It follows that the intercept should obey the equation:
𝑦̄ = 𝑎 + 𝑏 ⋅ 𝑥̄ ⟹ 𝑎 = 𝑦̄ − 𝑏 ⋅ 𝑥̄ ,
The left-hand-side equation corresponds to the statement that the value of the regression line at the average 𝑥̄ is equal to the average of the response 𝑦̄. The right-hand-side equation is the solution to the left-hand-side equation.
One may compute the coefficients of the regression model manually by first computing the slope as the ratio between the covariance and the variance of the explanatory variable. The intercept can then be obtained by the equation that uses the computed slope and the averages of both variables:
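A sketch of the computation; the value printed below is the slope:

b <- cov(x, y) / var(x)      # slope: covariance divided by the variance of x
b
a <- mean(y) - b * mean(x)   # intercept: obtained from the averages and the computed slope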
## [1] 1.427385
Applying the manual method we obtain, after rounding, the same coefficients that were produced by the application of the function “lm” to the data.
As an exercise, let us fit the regression model to the data on the relation between the response “horsepower” and the explanatory variable “engine.size”. Apply the function “lm” to the data and present the results:
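Presumably the expressions were of the form (the object name “fit.power” is taken from the text that follows):

fit.power <- lm(horsepower ~ engine.size, data=cars)
fit.power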
##
## Call:
## lm(formula = horsepower ~ engine.size, data = cars) ##
## Coefficients:
## (Intercept)  engine.size
##      6.6414       0.7695
The fitted regression model is stored in an object called “fit.power”. The intercept in the current setting is equal to 6.6414 and the slope is equal to 0.7695.
Observe that one may refer to variables that belong to a data frame, provided that the name of the data frame is entered as the value of the argument “data” in the function “lm”. Here we refer to variables that belong to the data frame “cars”.
Next we plot the scatter plot of the data and add the regression line:
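A sketch of the plotting expressions:

plot(horsepower ~ engine.size, data=cars)   # scatter plot of the data
abline(fit.power)                           # add the fitted regression line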
The output of the plotting functions is presented in Figure . Again, the regression line describes the general linear trend in the data. Overall, with the increase in engine size one observes an increase in the power of the engine.
Inference
Up to this point we have been considering the regression model in the context of descriptive statistics. The aim in fitting the regression line to the data was to characterize the linear trend observed in the data. Our next goal is to deal with regression in the context of inferential statistics. The goal here is to produce statements on characteristics of an entire population on the basis of the data contained in the sample.
The foundation for statistical inference in a given setting is a statistical model that produces the sampling distribution in that setting. The sampling distri- bution is the frame of reference for the analysis. In this context, the observed sample is a single realization of the sampling distribution, one realization among infinitely many potential realizations that never take place. The set- ting of regression involves a response and an explanatory variable. We provide a description of the statistical model for this setting.
The relation between the response and the explanatory variable is such that the value of the latter affects the distribution of the former. Still, the value of the response is not uniquely defined by the value of the explanatory variable. This principle also holds for the regression model of the relation between the response 𝑌 and the explanatory variable 𝑋. According to the model of linear regression the value of the expectation of the response for observation 𝑖, E(𝑌𝑖), is a linear function of the value of the explanatory variable for the same observation. Hence, there exist an intercept 𝑎 and a slope 𝑏, common for all observations, such that if 𝑋𝑖 = 𝑥𝑖 then
E(𝑌𝑖) = 𝑎 + 𝑏 ⋅ 𝑥𝑖 .
The regression line can thus be interpreted as the average trend of the response
in the population. This average trend is a linear function of the explanatory variable.
The intercept 𝑎 and the slope 𝑏 of the statistical model are parameters of the sampling distribution. One may test hypotheses and construct confidence intervals for these parameters based on the observed data and in relation to the sampling distribution.
Consider testing hypotheses. A natural null hypothesis to consider is the hypothesis that the slope is equal to zero. This hypothesis corresponds to the statement that the expected value of the response is constant for all values of the explanatory variable. In other words, the hypothesis is that the explanatory variable does not affect the distribution of the response. One may formulate this null hypothesis as 𝐻0 ∶ 𝑏 = 0 and test it against the alternative 𝐻1 ∶ 𝑏 ≠ 0 that states that the explanatory variable does affect the distribution of the response.
A test of the given hypotheses can be carried out by the application of the function “summary” to the output of the function “lm”. Recall that the function “lm” was used in order to fit the linear regression to the data. In particular, this function was applied to the data on the relation between the size of the engine and the power that the engine produces. The function fitted a regression line that describes the linear trend of the data. The output of the function was saved in an object by the name “fit.power”. We apply the function “summary” to this object: |
3. According to the model of linear regression, the only effect of the explanatory variable on the distribution of the response is via the expectation. If such an effect is also excluded, according to the null hypothesis, then the so-called explanatory variable does not affect the distribution of the response at all.
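The report is produced by the expression:

summary(fit.power)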
##
## Residual standard error: 23.31 on 201 degrees of freedom
##   (2 observations deleted due to missingness)
## Multiple R-squared:  0.6574, Adjusted R-squared:  0.6556
## F-statistic: 385.6 on 1 and 201 DF,  p-value: < 2.2e-16
The output produced by the application of the function “summary” is long and detailed. We will discuss this output in the next section. Here we concentrate on the table that goes under the title “Coefficients:”. The said table is made of 2 rows and 4 columns. It contains information for testing, for each of the coefficients, the null hypothesis that the value of the given coefficient is equal to zero. In particular, the second row may be used in order to test this hypothesis for the slope of the regression line, the coefficient that multiplies the explanatory variable.
Consider the second row. The first value on this row is 0.76949, which is equal (after rounding) to the slope of the line that was fitted to the data in the previous subsection. However, in the context of statistical inference this value is the estimate of the slope of the population regression line, the realization of the estimator of the slope.
The second value is 0.03919. This is an estimate of the standard deviation of the estimator of the slope. The third value is the test statistic. This statistic is the ratio between the deviation of the sample estimate of the parameter (0.76949) from the value of the parameter under the null hypothesis (0), di- vided by the estimated standard deviation (0.03919): (0.76949 − 0)/0.03919 = 0.76949/0.03919 = 19.63486, which is essentially the value given in the report.
The last value is the computed 𝑝-value for the test. It can be shown that the sampling distribution of the given test statistic, under the null distribution which assumes no slope, is asymptotically the standard Normal distribution. If the distribution of the response itself is Normal then the distribution of the statistic is the 𝑡-distribution on 𝑛 − 2 degrees of freedom. In the current situation this corresponds to 201 degrees of freedom. The computed 𝑝-value is extremely small, practically eliminating the possibility that the slope is equal to zero.
The first row presents information regarding the intercept. The estimated intercept is 6.64138 with an estimated standard deviation of 5.23318. The value of the test statistic is 1.269 and the 𝑝-value for testing the null hypothesis that the intercept is equal to zero against the two sided alternative is 0.206.
4. The estimator of the slope is obtained via the application of the formula for the computation of the slope, the ratio between the covariance and the variance of the explanatory variable, to the data in the sample.
5. Our computation involves rounding errors, hence the small discrepancy between the value we computed and the value in the report.
6. Notice that the “horsepower” measurement is missing for two observations. These observations are deleted from the analysis, leaving a total of 𝑛 = 203 observations. The number of degrees of freedom is 𝑛 − 2 = 203 − 2 = 201.
In this case the null hypothesis is not rejected since the 𝑝-value is larger than 0.05.
The report contains an inference for the intercept. However, one is advised to take this inference in the current case with a grain of salt. Indeed, the intercept is the expected value of the response when the explanatory variable is equal to zero. Here the explanatory variable is the size of the engine and the response is the power of that engine. The power of an engine of size zero is a quantity that has no physical meaning! In general, unless the intercept is in the range of observations (i.e. the value 0 is in the range of the observed explanatory variable) one should treat the inference on the intercept cautiously. Such inference requires extrapolation and is sensitive to the misspecification of the regression model.
Apart from testing hypotheses one may also construct confidence intervals for the parameters. A crude confidence interval may be obtained by taking 1.96 standard deviations on each side of the estimate of the parameter. Hence, a confidence interval for the slope is approximately equal to 0.76949 ± 1.96 × 0.03919 = [0.6926776, 0.8463024]. In a similar way one may obtain a confidence interval for the intercept: 6.64138 ± 1.96 × 5.23318 = [−3.615653, 16.89841].
Alternatively, one may compute confidence intervals for the parameters of the linear regression model using the function “confint”. The input to this function is the fitted model and the output is a confidence interval for each of the parameters:
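The expression, applied to the fitted model of the previous subsection, is:

confint(fit.power)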
## 2.5 % 97.5 %
## (Intercept) -3.6775989 16.9603564
## engine.size 0.6922181 0.8467537
Observe the similarity between the confidence intervals that are computed by the function and the crude confidence intervals that were produced by us. The small discrepancies that do exist between the intervals result from the fact that the function “confint” uses the 𝑡-distribution whereas we used the Normal approximation.
7. The warning message that was made in the context of testing hypotheses on the intercept should be applied also to the construction of confidence intervals. If the value 0 is not in the range of the explanatory variable then one should be careful when interpreting a confidence interval for the intercept.
R-squared and the Variance of Residuals
In this section we discuss the residuals between the values of the response and their estimated expected value according to the regression model. These residuals are the regression model equivalent of the deviations between the observations and the sample average. We use these residuals in order to compute the variability that is not accounted for by the regression model. Indeed, the ratio between the total variability of the residuals and the total variability of the deviations from the average serves as a measure of the variability that is not explained by the explanatory variable. R-squared, which is equal to 1 minus this ratio, is interpreted as the fraction of the variability of the response that is explained by the regression model.
We start with the definition of residuals. Let us return to the artificial example that compared the length of fish to their weight. The data for this example was given in Table and was saved in the objects “x” and “y”. The regression model was fitted to this data by the application of the function “lm” to the formula “y~x” and the fitted model was saved in an object called “fit”. Let us apply the function “summary” to the fitted model:
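The expression is:

summary(fit)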
The given report contains a table with estimates of the regression coefficients and information for conducting hypothesis testing. The report contains other information that is associated mainly with the notion of the residuals from the regression line. Our current goal is to understand what that other information is.
The residual from regression for each observation is the difference between the value of the response for the observation and the estimated expectation of the response under the regression model. An observation is a pair (𝑥𝑖 , 𝑦𝑖 ), with
𝑦𝑖 being the value of the response. The expectation of the response according to the regression model is 𝑎 + 𝑏 ⋅ 𝑥𝑖 , where 𝑎 and 𝑏 are the coefficients of the model. The estimated expectation is obtained by using, in the formula for the expectation, the coefficients that are estimated from the data. The residual is the difference between 𝑦𝑖 and 𝑎 + 𝑏 ⋅ 𝑥𝑖 .
Consider an example. The first observation on the fish is (4.5, 9.5), where 𝑥1 =
4.5 and 𝑦1 = 9.5. The estimated intercept is 4.6165 and the estimated slope is 1.4274. The estimated expectation of the response for the first observation is equal to 4.6165 + 1.4274 ⋅ 4.5 = 11.0398. The residual is the difference between the observed response and this estimated expectation: 9.5 − 11.0398 = −1.5398.
The residuals for the other observations are computed in the same manner. The values of the intercept and the slope are kept the same but the values of the explanatory variable and the response are changed. |
Figure presents the scatter plot of the data points, together with the regression line in black and the line of the average in red. A vertical arrow extends from each data point to the regression line. The point where each arrow hits the regression line is associated with the estimated value of the expectation for that point. The residual is the difference between the value of the response at the origin of the arrow and the value of the response at the tip of its head. Notice that there are as many residuals as there are observations.
The function “residuals” computes the residuals. The input to the function is the fitted regression model and the output is the sequence of residuals. When we apply the function to the object “fit”, which contains the fitted regression model for the fish data, we get the residuals:
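The expression is:

residuals(fit)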
## 1 2 3 4 5 6
Indeed, 10 residuals are produced, one for each observation. In particular, the residual for the first observation is -1.5397075, which is essentially the value that we obtained.
Return to the report produced by the application of the function “summary” to the fitted regression model. The first component in the report is the formula that identifies the response and the explanatory variable. The second component, the component that comes under the title “Residuals:”, gives a summary of the distribution of the residuals. This summary includes the smallest and the largest values in the sequence of residuals, as well as the first and third quartiles and the median. The average is not reported since the average of the residuals from the regression line is always equal to 0.
The table that contains information on the coefficients was discussed in the previous section. Let us consider the last 3 lines of the report.
The first of the three lines contains the estimated value of the standard deviation of the response from the regression model. If the expectations of the measurements of the response are located on the regression line then the variability of the response corresponds to the variability about this line. The resulting variance is estimated by the sum of squares of the residuals from the regression line, divided by the number of observations minus 2. A division by the number of observations minus 2 produces an unbiased estimator of this variance.
9. The discrepancy between the value that we computed and the value computed by the function results from rounding errors. We used the values of the coefficients that appear in the report. These values are rounded. The function “residuals” uses the coefficients without rounding.
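A sketch of the manual computation that produces the value printed below:

sqrt(sum(residuals(fit)^2) / 8)   # sum of squared residuals divided by n - 2 = 8, then the square root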
## [1] 2.790787
The last computation is a manual computation of the estimated standard deviation. It involves squaring the residuals and summing the squares. This sum is divided by the number of observations minus 2 (10 − 2 = 8). Taking the square root produces the estimate. The value that we get for the estimated standard deviation is 2.790787, which coincides with the value that appears in the first of the last 3 lines of the report.
The second of these lines reports the R-squared of the linear fit. In order to explain the meaning of R-squared let us consider Figure once again. The two plots in the figure present the scatter plot of the data together with the regression line and the line of the average. Vertical black arrows that represent the residuals from the regression are added to the upper plot. The lower plot contains vertical red arrows that extend from the data points to the line of the average. These arrows represent the deviations of the response from the average.
Consider two forms of variation. One form is the variation of the response from its average value. This variation is summarized by the sample variance, the sum of the squared lengths of the red arrows divided by the number of observations minus 1. The other form of variation is the variation of the response from the fitted regression line. This variation is summarized by the sample variance of the residuals, the sum of squared lengths of the black arrows divided by the number of observations minus 1. The ratio between these two quantities gives the relative variability of the response that remains after fitting the regression line to the data.
The line of the average is a straight line. The deviations of the observations from this straight line can be thought of as residuals from that line. The variability of these residuals, the sum of squares of the deviations from the average divided by the number of observations minus 1, is equal to the sample variance.
The regression line is the unique straight line that minimizes the variability of its residuals. Consequently, the variability of the residuals from the regression, the sum of squares of the residuals from the regression divided by the number of observations minus 1, is the smallest residual variability produced by any straight line. It follows that the sample variance of the regression residuals is less than the sample variance of the response. Therefore, the ratio between the variance of the residuals and the variance of the response is less than 1.
R-squared is the difference between 1 and the ratio of the variances. Its value
is between 0 and 1 and it represents the fraction of the variability of the response that is explained by the regression line. The closer the points are to the regression line the larger the value of R-squared becomes. On the other hand, the less there is a linear trend in the data the closer to 0 is the value of R-squared. In the extreme case of R-squared equal to 1 all the data points are positioned exactly on a single straight line. In the other extreme, a value of 0 for R-squared implies no linear trend in the data.
Let us compute manually the difference between 1 and the ratio between the variance of the residuals and the variance of the response:
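A sketch of the computation:

1 - var(residuals(fit)) / var(y)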
## [1] 0.3297413
Observe that the computed value of R-squared is the same as the value “Multiple R-squared: 0.3297” that is given in the report.
The report provides another value of R-squared, titled Adjusted R-squared. The difference between the adjusted and unadjusted quantities is that in the former the sample variance of the residuals from the regression is replaced by an unbiased estimate of the variability of the response about the regression line. The sum of squares in the unbiased estimator is divided by the number of observations minus 2. Indeed, when we re-compute the ratio using the unbiased estimate, the sum of squared residuals divided by 10 − 2 = 8, we get:
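A sketch of the computation:

1 - (sum(residuals(fit)^2) / 8) / var(y)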
## [1] 0.245959
The value of this adjusted quantity is equal to the value “Adjusted R-squared: 0.246” in the report.
Which value of R-squared to use is a matter of personal taste. In any case, for a larger number of observations the difference between the two values becomes negligible.
The last line in the report produces an overall goodness of fit test for the re- gression model. In the current application of linear regression this test reduces to a test of the slope being equal to zero, the same test that is reported in the second row of the table of coefficients. The 𝐹 statistic is simply the square of the 𝑡 value that is given in the second row of the table. The sampling dis- tribution of this statistic under the null hypothesis is the 𝐹-distribution on 1 and 𝑛 − 2 degrees of freedom, which is the sampling distribution of the square of the test statistic for the slope. The computed 𝑝-value, “p-value: 0.08255” is |
identical (after rounding) to the 𝑝-value given in the second row of the table of coefficients.
Return to the R-squared coefficient. This coefficient is a convenient measure of the goodness of fit of the regression model to the data. Let us demonstrate this point with the aid of the “cars” data. In Subsection we fitted a regression model to the power of the engine as a response and the size of the engine as an explanatory variable. The fitted model was saved in the object called “fit.power”. A report of this fit, the output of the expression “summary(fit.power)” was also presented. The null hypothesis of zero slope was clearly rejected. The value of R-squared for this fit was 0.6574. Consequently, about 2/3 of the variability in the power of the engine is explained by the size of the engine.
Consider trying to fit a different regression model for the power of the engine as a response. The variable “length” describes the length of the car (in inches). How well would the length explain the power of the car? We may examine this question using linear regression: |
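A plausible sketch, in which the object name “fit.length” is an assumption and not taken from the text:

fit.length <- lm(horsepower ~ length, data=cars)   # fit the regression of power on length
summary(fit.length)                                # inference on the slope and the value of R-squared
plot(horsepower ~ length, data=cars)               # scatter plot of the data
abline(fit.length)                                 # add the regression line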
From the examination of the figure we may see that indeed there is a linear trend in the relation between the length and the power of the car. Longer cars tend to have more power. Testing the null hypothesis that the slope is equal to zero produces a very small 𝑝-value and leads to the rejection of the null hypothesis.
The length of the car and the size of the engine are both statistically signifi- cant in their relation to the response. However, which of the two explanatory variables produces a better fit?
An answer to this question may be provided by the examination of the values of R-squared, the fraction of the variance of the response explained by each of the explanatory variables. The R-squared for the size of the engine as an explanatory variable is 0.6574, which is approximately equal to 2/3. The value of R-squared for the length of the car as an explanatory variable is 0.308, less than 1/3. It follows that the size of the engine explains twice as much of the variability of the power of the engine as does the length of the car, and is therefore the better explanatory variable.
Exercises
Exercise 14.1. Figure presents 10 points and three lines. One of the lines is colored red and one of the points is marked as a red triangle. The points in the plot refer to the data frame in Table . The three lines correspond to the following linear equations:
1. 𝑦 = 4
2. 𝑦 = 5 − 2𝑥
3. 𝑦 = 𝑥
You are asked to match the marked line to the appropriate linear equation and match the marked point to the appropriate observation:
Which of the three equations, 1, 2 or 3, describes the line marked in red?
The point marked with a red triangle represents which of the observations? (Identify the observation number.)
E(𝑌𝑖) = 2.13 ⋅ 𝑥𝑖 − 3.60 .
What is the value of the intercept and what is the value of the slope in the linear equation that describes the model?
Assume that 𝑥1 = 5.5, 𝑥2 = 12.13, 𝑥3 = 4.2, and 𝑥4 = 6.7. What is the expected value of the response of the 3rd observation?
Exercise 14.3. The file “aids.csv” contains data on the number of diagnosed cases of Aids and the number of deaths associated with Aids among adults |
and adolescents in the United States between 1981 and 2002. The file can be found on the internet at .
The file contains 3 variables: The variable “year” that tells the relevant year, the variable “diagnosed” that reports the number of Aids cases that were diag- nosed in each year, and the variable “deaths” that reports the number of Aids related deaths in each year. The following questions refer to the data in the file:
Consider the variable “deaths” as response and the variable “diagnosed” as an explanatory variable. What is the slope of the regression line? Produce a point estimate and a confidence interval. Is it statistically significant (namely, significantly different than 0)?
Plot the scatter plot that is produced by these two variables and add the regression line to the plot. Does the regression line provide a good description of the trend in the data?
Consider the variable “diagnosed” as the response and the variable “year” as the explanatory variable. What is the slope of the regression line? Produce a point estimate and a confidence interval. Is the slope in this case statistically significant?
Plot the scatter plot that is produced by the latter pair of variables and add the regression line to the plot. Does the regression line provide a good description of the trend in the data?
Exercise 14.4. Below are the percents of the U.S. labor force (excluding |
Produce the scatter plot of the data and add the regression line. Is the regression model reasonable for this data?
Compute the sample averages and the sample standard deviations of both variables. Compute the covariance between the two variables.
Using the summaries you have just computed, recompute the coefficients of the regression model.
Exercise 14.5. Assume a regression model was fit to some data that describes the relation between the explanatory variable 𝑥 and the response 𝑦. Assume that the coefficients of the fitted model are 𝑎 = 2.5 and 𝑏 = −1.13, for the intercept and the slope, respectively. The first 4 observations in the data are (𝑥1, 𝑦1) = (5.5, 3.22), (𝑥2, 𝑦2) = (12.13, −6.02), (𝑥3, 𝑦3) = (4.2, −8.3), and (𝑥4, 𝑦4) =
(6.7, 0.17).
What is the estimated expectation of the response for the 4th observation?
What is the residual from the regression line for the 4th observation?
Exercise 14.6. In Chapter we analyzed an example that involved the difference between fuel consumption in highway and city driving conditions |
as the response. The explanatory variable was a factor that was produced by splitting the cars into two weight groups. In this exercise we would like to revisit this example. Here we use the weight of the car directly as an explanatory variable. We also consider the size of the engine as an alternative explanatory variable and compare between the two regression models.
Fit the regression model that uses the variable “curb.weight” as an explanatory variable. Is the slope significantly different than 0? What fraction of the standard deviation of the response is explained by a regression model involving this variable?
Fit the regression model that uses the variable “engine.size” as an explanatory variable. Is the slope significantly different than 0? What fraction of the standard deviation of the response is explained by a regression model involving this variable?
Which of the two models fits the data better?
Summary
Glossary
Regression: Relates different variables that are measured on the same sample. Regression models are used to describe the effect of one of the variables on the distribution of the other one. The former is called the explanatory variable and the latter is called the response.
Linear Regression: The effect of a numeric explanatory variable on the distribution of a numeric response is described in terms of a linear trend.
Scatter Plot: A plot that presents the data in a pair of numeric variables. The axes represents the variables and each point represents an observation.
Intercept: A coefficient of a linear equation. Equals the value of 𝑦 when the line crosses the 𝑦-axis.
Slope: A coefficient of a linear equation. The change in the value of 𝑦 for each unit change in the value of 𝑥. A positive slope corresponds to an increasing line and a negative slope corresponds to a decreasing line.
Covariance: A measure of the joint variability of two numeric variables. It is equal to the sum of the products of the deviations from the mean, divided by the number of observations minus 1.
Residuals from Regression: The residual differences between the values of the response for the observation and the estimated expectations of the response under the regression model (the predicted response).
R-Squared: The difference between 1 and the ratio between the variance of the residuals from the regression and the variance of the response. Its value is between 0 and 1 and it represents the fraction of the variability of the response that is explained by the regression line.
Discuss in the Forum
The topic for discussion in the Forum of Chapter was mathematical models and how good they should fit reality. In this Forum we would like to return to the same subject, but consider it specifically in the context of statistical models.
Some statisticians prefer complex models, models that try to fit the data as closely as one can. Others prefer a simple model. They claim that although simpler models are more remote from the data, they are easier to interpret and thus provide more insight. What do you think? Which type of model is better to use?
When formulating your answer to this question you may think of a situation that involves inference based on data conducted by yourself for the sake of others. What would be the best way to report your findings and explain them to the others?
Student Learning Objectives
Chapters and introduced statistical inference that involves a response and an explanatory variable that may affect the distribution of the response. In both chapters the response was numeric. The two chapters differed in the data type of the explanatory variable. In Chapter the explanatory variable was a factor with two levels that splits the sample into two sub-samples. In Chapter the explanatory variable was numeric and produced, together with the response, a linear trend. The aim in this chapter is to consider the case where the response is a Bernoulli variable. Such a variable may emerge as the indicator of the occurrence of an event associated with the response or as a factor with two levels. The explanatory variable is a factor with two levels in one case or a numerical variable in the other case.
Specifically, when the explanatory variable is a factor with two levels then we may use the function “prop.test”. This function was used in Chapter for the analysis of the probability of an event in a single sample. Here we use it in order to compare between two sub-samples. This is similar to the way the function “t.test” was used for a numeric response for both a single sample and for the comparison between sub-samples. For the case where the explanatory variable is numeric we may use the function “glm”, acronym for Generalized Linear Model, in order to fit an appropriate regression model to the data.
By the end of this chapter, the student should be able to:
Produce mosaic plots of the response and the explanatory variable.
Apply the function “prop.test” in order to compare the probability of an event between two sub-populations
Define the logistic regression model that relates the probability of an event in the response to a numeric explanatory variable.
Fit the logistic regression model to data using the function “glm” and produce statistical inference on the fitted model. |
Comparing Sample Proportions
In this chapter we deal with a Bernoulli response. Such a response has two levels, “TRUE” or “FALSE”, and may emerge as the indicator of an event. Alternatively, it may be associated with a factor with two levels and correspond to the indication of one of the two levels. Such a response was considered in Chapters and , where confidence intervals and tests for the probability of an event were discussed in the context of a single sample. In this chapter we discuss the investigation of relations between a response of this form and an explanatory variable.
We start with the case where the explanatory variable is a factor that has two levels. These levels correspond to two sub-populations (or two settings). The aim of the analysis is to compare between the two sub-populations (or between the two settings) the probability of the event.
The discussion in this section is parallel to the discussion in Section . That section considered the comparison of the expectation of a numerical response between two sub-populations. We denoted these sub-populations 𝑎 and 𝑏 with expectations E(𝑋𝑎) and E(𝑋𝑏), respectively. The inference used the average 𝑋̄𝑎, which was based on a sub-sample of size 𝑛𝑎, and the average 𝑋̄𝑏, which was based on the other sub-sample of size 𝑛𝑏. The sub-sample variances 𝑆²𝑎 and 𝑆²𝑏 participated in the inference as well. The application of a test for the equality of the expectations and a confidence interval were produced by the application of the function “t.test” to the data.
The inference problem, which is considered in this chapter, involves an event. This event is being examined in two different settings that correspond to two different sub-population 𝑎 and 𝑏. Denote the probabilities of the event in each of the sub-populations by 𝑝𝑎 and 𝑝𝑏. Our concern is the statistical inference associated with the comparison of these two probabilities to each other.
Natural estimators of the probabilities are 𝑃𝑎̂ and 𝑃𝑏̂ , the sub-sample proportions of occurrence of the event. These estimators are used in order to carry out the inference. Specifically, we consider here the construction of a confidence interval for the difference 𝑝𝑎 − 𝑝𝑏 and a test of the hypothesis that the probabilities are equal.
The methods for producing the confidence intervals for the difference and for testing the null hypothesis that the difference is equal to zero are similar in principle to the methods that were described in Section
for making parallel inferences regarding the relations between expectations. However, the derivations of the tools that are used in the current situation are not identical to the derivations of the tools that were used there. The main differences between the two cases are the replacement of the sub-sample averages by the sub-sample proportions, a difference in the way the standard deviations of the statistics are estimated, and the application of a continuity correction. We do not discuss in this chapter the theoretical details associated with the derivations. Instead, we demonstrate the application of the inference in an example.
The variable “num.of.doors” in the data frame “cars” describes the number of doors a car has. This variable is a factor with two levels, “two” and “four”. We treat this variable as a response and investigate its relation to explanatory variables. In this section the explanatory variable is a factor with two levels and in the next section it is a numeric variable. Specifically, in this section we use the factor “fuel.type” as the explanatory variable. Recall that this variable identifies the type of fuel, diesel or gas, that the car uses. The aim of the analysis is to compare the proportion of cars with four doors between cars that run on diesel and cars that run on gas.
Let us first summarize the data in a 2 × 2 frequency table. The function “table” may be used in order to produce such a table:
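Presumably the expression is, with the explanatory factor as the first variable and the response as the second (the order that is required below by “prop.test”):

table(cars$fuel.type, cars$num.of.doors)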
When the function “table” is applied to a combination of two factors then the output is a table of joint frequencies. Each entry in the table contains the frequency in the sample of the combination of levels, one from each variable, that is associated with the entry. For example, there are 16 cars in the data set that have the level “four” for the variable “num.of.doors” and the level “diesel” for the variable “fuel.type”. Likewise, there are 3 cars that are associated with the combination “two” and “diesel”. The total number of entries to the table is 16 + 3 + 98 + 86 = 203, which is the number of cars in the data set, minus the two missing values in the variable “num.of.doors”.
A graphical representation of the relation between the two factors can be obtained using a mosaic plot. This plot is produced when the input to the function “plot” is a formula where both the response and the explanatory variables are factors: |
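A sketch of the expression:

plot(num.of.doors ~ fuel.type, data=cars)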
The mosaic plot describes the distribution of the explanatory variable and the distribution of the response for each level of the explanatory variable. In the current example the explanatory variable is the factor “fuel.type” that has 2 levels. The two levels of this variable, “diesel” and “gas”, are given at the 𝑥-axis. A vertical rectangle is associated with each level. These 2 rectangles split the total area of the square. The total area of the square represents the total relative frequency (which is equal to 1). Consequently, the area of each rectangle represents the relative frequency of the associated level of the explanatory factor.
A rectangle associated with a given level of the explanatory variable is further divided into horizontal sub-rectangles that are associated with the response factor. In the current example each darker rectangle is associated with the level “four” of the response “num.of.door” and each brighter rectangle is associated with the level “two”. The relative area of the horizontal rectangles within each vertical rectangle represent the relative frequency of the levels of the response within each subset associated with the level of the explanatory variable.
Looking at the plot one may appreciate the fact that diesel cars are less frequent than cars that run on gas. The graph also displays the fact that the relative frequency of cars with four doors among diesel cars is larger than the relative frequency of four doors cars among cars that run on gas.
The function “prop.test” may be used in order to test the hypothesis that, at the population level, the probability of the level “four” of the response within the sub-population of diesel cars (the height of the leftmost darker rectangle in the theoretical mosaic plot that is produced for the entire population) is equal to the probability of the same level of the response within the sub-population of cars that run on gas (the height of the rightmost darker rectangle in that
theoretical mosaic plot). Specifically, let us test the hypothesis that the two probabilities of the level “four”, one for diesel cars and one for cars that run on gas, are equal to each other.
The output of the function “table” may serve as the input to the function “prop.test”. The Bernoulli response variable should be the second variable in the input to the table whereas the explanatory factor is the first variable in the table. When we apply the test to the data we get the report:
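The expression, whose arguments appear in the “data:” line of the report below, is:

prop.test(table(cars$fuel.type, cars$num.of.doors))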
##
## 2-sample test for equality of proportions with continuity
## correction
##
## data:  table(cars$fuel.type, cars$num.of.doors)
## X-squared = 5.5021, df = 1, p-value = 0.01899
## alternative hypothesis: two.sided
## 95 percent confidence interval:
## 0.1013542 0.5176389
## sample estimates:
## prop 1 prop 2
## 0.8421053 0.5326087
The two sample proportions of cars with four doors among diesel and gas cars are presented at the bottom of the report and serve as estimates of the sub-population probabilities. Indeed, the relative frequency of cars with four doors among diesel cars is equal to 𝑝̂𝑎 = 16/(16 + 3) = 16/19 = 0.8421053. Likewise, the relative frequency of cars with four doors among cars that run on gas is equal to 𝑝̂𝑏 = 98/(98 + 86) = 98/184 = 0.5326087. The confidence interval for the difference in the probability of a car with four doors between the two sub-populations, 𝑝𝑎 − 𝑝𝑏, is reported under the title “95 percent confidence interval” and is given as [0.1013542, 0.5176389].
The null hypothesis, which is the subject of this test, is 𝐻0 ∶ 𝑝𝑎 = 𝑝𝑏. This hypothesis is tested against the two-sided alternative hypothesis 𝐻1 ∶ 𝑝𝑎 ≠ 𝑝𝑏. The test itself is based on a test statistic that obtains the value X-squared = 5.5021. This test statistic corresponds essentially to the deviation between the estimated value of the parameter (the difference in sub-sample proportions of the event) and the theoretical value of the parameter (𝑝𝑎 − 𝑝𝑏 = 0). This devi- ation is divided by the estimated standard deviation and the ratio is squared.
2. The function “prop.test” was applied in Section in order to test that the probability of an event is equal to a given value (“p = 0.5” by default). The input to the function was a pair of numbers: the total number of successes and the sample size. In the current application the input is a 2 × 2 table. When applied to such input the function carries out a test of the equality of the probability of the first column between the rows of the table.
The statistic itself is produced via a continuity correction that makes its null distribution closer to the limiting chi-square distribution on one degree of free- dom. The 𝑝-value is computed based on this limiting chi-square distribution.
Notice that the computed 𝑝-value is equal to p-value = 0.01899. This value is smaller than 0.05. Consequently, the null hypothesis is rejected at the 5% significance level in favor of the alternative hypothesis. This alternative hypothesis states that the sub-population probabilities are different from each other.
Logistic Regression
In the previous section we considered a Bernoulli response and a factor with two levels as an explanatory variable. In this section we use a numeric variable as the explanatory variable. The discussion in this section is parallel to the discussion in Chapter , which presented the topic of linear regression. However, since the response is not of the same form (it is the indicator of a level of a factor and not a regular numeric response), the tools that are used are different. Instead of using linear regression we use another type of regression that is called Logistic Regression.
Recall that linear regression involved fitting a straight line to the scatter plot of data points. This line corresponds to the expectation of the response as a function of the explanatory variable. The estimated coefficients of this line are computed from the data and used for inference.
In logistic regression, instead of the consideration of the expectation of a numerical response, one considers the probability of an event associated with the response. This probability is treated as a function of the explanatory variable. Parameters that determine this function are estimated from the data and are used for inference regarding the relation between the explanatory variable and the response. Again, we do not discuss the theoretical details involved in the derivation of logistic regression. Instead, we apply the method to an example.
We consider the factor “num.of.doors” as the response and the probability of a car with four doors as the probability of the response. The length of the car will serve as the explanatory variable. Measurements of lengths of the cars are stored in the variable “length” in the data frame “cars”.
First, let us plot the relation between the response and the explanatory vari- able: |
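A sketch of the plotting expression:

plot(num.of.doors ~ length, data=cars)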
The plot is a type of a mosaic plot and it is produced when the input to the function “plot” is a formula with a factor as a response and a numeric variable as the explanatory variable. The plot presents, for interval levels of the explanatory variable, the relative frequencies of each interval. It also presents the relative frequency of the levels of the response within each interval level of the explanatory variable.
In order to get a better understanding of the meaning of the given mosaic plot one may consider the histogram of the explanatory variable. |
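A sketch of the expression that produces the histogram:

hist(cars$length)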
The histogram splits the range of values of the explanatory variable into intervals. These intervals are the basis for rectangles. The height of each rectangle represents the frequency of cars with lengths that fall in the given interval.
The mosaic plot in Figure is constructed on the basis of this histogram. The 𝑥-axis in this plot corresponds to the explanatory variable “length”. The total area of the square in the plot is divided between 7 vertical rectangles. These vertical rectangles correspond to the 7 rectangles in the histogram of Figure , turned on their sides. Hence, the width of each rectangle in Figure corresponds to the height of the parallel rectangle in the histogram. Consequently, the area of the vertical rectangles in the mosaic plot represents the relative frequency of the associated interval of values of the explanatory variable.
The rectangle that is associated with each interval of values of the explanatory variable is further divided into horizontal sub-rectangles that are associated with the response factor. In the current example each darker rectangle is associated with the level “four” of the response “num.of.door” and each brighter rectangle is associated with the level “two”. The relative area of the horizontal rectangles within each vertical rectangle represent the relative frequency of the levels of the response within each interval of values of the explanatory variable.
From the examination of the mosaic plot one may identify relations between the explanatory variable and the relative frequency of an identified level of the response. In the current example one may observe that the relative frequency of the cars with four doors is, overall, increasing with the increase in the length of cars.
Logistic regression is a method for the investigation of relations between the probability of an event and explanatory variables. Specifically, we use it here for making inference on the number of doors as a response and the length of the car as the explanatory variable.
Statistical inference requires a statistical model. The statistical model in logistic regression relates the probability 𝑝𝑖 , the probability of the event for observation 𝑖, to 𝑥𝑖 , the value of the explanatory variable for that observation. The relation between the two is given by the formula:
𝑝𝑖 = 𝑒^(𝑎+𝑏⋅𝑥𝑖) / [1 + 𝑒^(𝑎+𝑏⋅𝑥𝑖)] ,
where 𝑎 and 𝑏 are coefficients common to all observations. Equivalently, one may write the same relation in the form:
log(𝑝𝑖/[1 − 𝑝𝑖]) = 𝑎 + 𝑏 ⋅ 𝑥𝑖 ,
which states that the relation between a (function of the) probability of the event and the explanatory variable is a linear trend.
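A sketch of the code that fits such a model with the function “glm” and produces the report below (the object name “fit.doors” follows the naming used later in this section):

```r
fit.doors <- glm(num.of.doors == "four" ~ length,
                 family = binomial, data = cars)
summary(fit.doors)
```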
##
## Call:
## glm(formula = num.of.doors == "four" ~ length, family = binomial,
##     data = cars)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -2.1646  -1.1292   0.5688   1.0240   1.6673
##
## Coefficients:
##              Estimate Std. Error z value Pr(>|z|)
## (Intercept) -13.14767    2.58693  -5.082 3.73e-07 ***
## length        0.07726    0.01495   5.168 2.37e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 278.33  on 202  degrees of freedom
## Residual deviance: 243.96  on 201  degrees of freedom
##   (2 observations deleted due to missingness)
## AIC: 247.96
##
## Number of Fisher Scoring iterations: 3
Generally, the function “glm” can be used in order to fit regression models in cases where the distribution of the response has special forms. Specifically, when the argument “family=binomial” is used then the model that is being used is the model of logistic regression. The formula that is used in the function involves a response and an explanatory variable. The response may be a sequence with logical “TRUE” or “FALSE” values, as in the example. Alternatively, it may be a sequence with “1” or “0” values, “1” corresponding to the event occurring to the subject and “0” corresponding to the event not occurring. The argument “data=cars” is used in order to inform the function that the variables are located in the given data frame.
The “glm” function is applied to the data and the fitted model is stored in the object “fit.doors”.
A report is produced when the function “summary” is applied to the fitted model. Notice the similarities and the differences between the report presented here and the reports for linear regression that are presented in Chapter . Both reports contain estimates of the coefficients 𝑎 and 𝑏 and tests for the equality of these coefficients to zero. When the coefficient 𝑏, the coefficient that represents the slope, is equal to 0 then the probability of the event and the explanatory variable are unrelated. In the current case we may note that the null hypothesis
𝐻0 ∶ 𝑏 = 0, the hypothesis that claims that there is no relation between the explanatory variable and the response, is clearly rejected (𝑝-value 2.37 × 10−7).
The estimated values of the coefficients are −13.14767 for the intercept 𝑎 and 0.07726 for the slope 𝑏. One may produce confidence intervals for these coefficients by the application of the function “confint” to the fitted model:
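For the fitted model above this amounts to:

```r
confint(fit.doors)
```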
Exercises
Exercise 15.1. This exercise deals with a comparison between Mediterranean diet and low-fat diet recommended by the American Heart Association in the context of risks for illness or death among patients that survived a heart attack. This case study is taken from the Rice Virtual Lab in Statistics. More details on this case study can be found in the case study “Mediterranean Diet and Health” that is presented in that site.
The subjects, 605 survivors of a heart attack, were randomly assigned to follow either (1) a diet close to the “prudent diet step 1” of the American Heart Association (AHA) or (2) a Mediterranean-type diet consisting of more bread
and cereals, more fresh fruit and vegetables, more grains, more fish, fewer delicatessen food, less meat.
The subjects’ diet and health condition were monitored over a period of four years. Information regarding deaths, development of cancer or the development of non-fatal illnesses was collected. The information from this study is stored in the file “diet.csv”. The file “diet.csv” contains two factors: “health” that describes the condition of the subject, either healthy, suffering from a non-fatal illness, suffering from cancer, or dead; and “type” that describes the type of diet, either Mediterranean or the diet recommended by the AHA. The file can be found on the internet at . Answer the following questions based on the data in the file:
Produce a frequency table of the two variables. Read off from the table the number of healthy subjects that are using the Mediterranean diet and the number of healthy subjects that are using the diet recommended by the AHA.
Test the null hypothesis that the probability of keeping healthy following a heart attack is the same for those that use the Mediterranean diet and for those that use the diet recommended by the AHA. Use a two-sided alternative and a 5% significance level.
Compute a 95% confidence interval for the difference between the two probabilities of keeping healthy.
Exercise 15.2. Cushing’s syndrome is a disorder that results from a tumor (adenoma) in the pituitary gland that causes the production of high levels of cortisol. The symptoms of the syndrome are the consequence of the elevated levels of this steroid hormone in the blood. The syndrome was first described by Harvey Cushing in 1932.
The file “coshings.csv" contains information on 27 patients that suffer from Cushing’s syndrome. The three variables in the file are “tetra”, “pregn”, and “type”. The factor “type” describes the underlying type of syndrome, coded as “a“, (adenoma),”b” (bilateral hyperplasia), “c” (carcinoma) or “u” for unknown. The variable “tetra” describe the level of urinary excretion rate (mg/24hr) of Tetrahydrocortisone, a type of steroid, and the variable “pregn” describes urinary excretion rate (mg/24hr) of Pregnanetriol, another type of steroid. The file can be found on the internet at . Answer the following questions based on the information in this file:
Plot the histogram of the variable “tetra” and the mosaic plot that describes the relation between the variable “type” as a response and the variable “tetra”. What is the information that is conveyed |
by the second vertical rectangle from the right (the third from the left) in the mosaic plot?
Test the null hypothesis that there is no relation between the variable “tetra” as an explanatory variable and the indicator of the type being equal to “b” as a response. Compute a confidence interval for the parameter that describes the relation.
Repeat the analysis from 2 using only the observations for which the type is known. (Hint: you may fit the model to the required subset by the inclusion of the argument “subset=(type!="u")” in the function that fits the model.) Which of the analyses do you think is more appropriate?
Glossary
Mosaic Plot: A plot that describes the relation between a response factor and an explanatory variable. Vertical rectangles represent the distribution of the explanatory variable. Horizontal rectangles within the vertical ones represent the distribution of the response.
Logistic Regression: A type of regression that relates between an explana- tory variable and a response of the form of an indicator of an event.
Discuss in the forum
In the description of the statistical models that relate one variable to the other we used terms that suggest a causality relation. One variable was called the “explanatory variable” and the other was called the “response”. One may get the impression that the explanatory variable is the cause for the statistical behavior of the response. Contrary to this interpretation, some say that all that statistics does is to examine the joint distribution of the variables, but causality cannot be inferred from the fact that two variables are statistically related. What do you think? Can statistical reasoning be used in the determination of causality?
As part of your answer it may be useful to consider a specific situation where the determination of causality is required. Can any of the tools that were discussed in the book be used in a meaningful way to aid in the process of such determination?
Notice that the last 3 chapters dealt with statistical models that related an explanatory variable to a response. We considered tools that can be used when both variables are factors and when both are numeric. Other tools may be used when one of the variables is a factor and the other is numeric. An
analysis that involves one variable as the response and the other as the explanatory variable can be reversed, possibly using a different statistical tool, with the roles of the variables exchanged. Usually, a significant statistical finding will still be significant when the roles of the response and the explanatory variable are reversed.
Student Learning Objective
This chapter concludes this book. We start with a short review of the topics that were discussed in the second part of the book, the part that dealt with statistical inference. The main part of the chapter involves the statistical analysis of two case studies. The tools that will be used for the analysis are those that were discussed in the book. We close this chapter and this book with some concluding remarks. By the end of this chapter, the student should be able to:
Review the concepts and methods for statistical inference that were pre- sented in the second part of the book.
Apply these methods to requirements of the analysis of real data.
Develop a resolve to learn more statistics.
A Review
The second part of the book dealt with statistical inference; the science of making general statements on an entire population on the basis of data from a sample. The basis for the statements are theoretical models that produce the sampling distribution. Procedures for making the inference are evaluated based on their properties in the context of this sampling distribution. Procedures with desirable properties are applied to the data. One may attach to the output of this application summaries that describe these theoretical properties.
In particular, we dealt with two forms of making inference. One form was estimation and the other was hypothesis testing. The goal in estimation is to determine the value of a parameter in the population. Point estimates or confidence intervals may be used in order to fulfill this goal. The properties
of point estimators may be assessed using the mean square error (MSE) and the properties of the confidence interval may be assessed using the confidence level.
The target in hypothesis testing is to decide between two competing hypotheses. These hypotheses are formulated in terms of population parameters. The decision rule is called a statistical test and is constructed with the aid of a test statistic and a rejection region. The default hypothesis among the two is rejected if the test statistic falls in the rejection region. The major property a test must possess is a bound on the probability of a Type I error, the probability of erroneously rejecting the null hypothesis. This restriction is called the significance level of the test. A test may also be assessed in terms of its statistical power, the probability of rightfully rejecting the null hypothesis.
Estimation and testing were applied in the context of single measurements and for the investigation of the relations between a pair of measurements. For single measurements we considered both numeric variables and factors. For numeric variables one may attempt to conduct inference on the expectation and/or the variance. For factors we considered the estimation of the probability of obtaining a level, or, more generally, the probability of the occurrence of an event.
We introduced statistical models that may be used to describe the relations between variables. One of the variables was designated as the response. The other variable, the explanatory variable, is identified as a variable which may affect the distribution of the response. Specifically, we considered numeric variables and factors that have two levels. If the explanatory variable is a factor with two levels then the analysis reduces to the comparison of two sub-populations, each one associated with a level. If the explanatory variable is numeric then a regression model may be applied, either linear or logistic regression, depending on the type of the response.
The foundations of statistical inference are the assumptions that we make in the form of statistical models. These models attempt to reflect reality. However, one is advised to apply healthy skepticism when using the models. First, one should be aware of what the assumptions are. Then one should ask oneself how reasonable these assumptions are in the context of the specific analysis. Finally, one should check, as much as one can, the validity of the assumptions in light of the information at hand. It is useful to plot the data and compare the plot to the assumptions of the model.
Case Studies
Let us apply the methods that were introduced throughout the book to two examples of data analysis. Both examples are taken from the Rice Virtual Lab in Statistics and can be found in their Case Studies section. The analysis of these case studies may involve any of the tools that were described in the second part of the book (and some from the first part). It may be useful to read again Chapters – before reading the case studies.
Physicians’ Reactions to the Size of a Patient
Overweight and obesity are common in many of the developed countries. In some cultures, obese individuals face discrimination in employment, education, and relationship contexts. The current research, conducted by Mikki Hebl and Jingping Xu, examines physicians’ attitude toward overweight and obese patients in comparison to their attitude toward patients who are not overweight.
The experiment included a total of 122 primary care physicians affiliated with one of three major hospitals in the Texas Medical Center of Houston. These physicians were sent a packet containing a medical chart similar to the one they view upon seeing a patient. This chart portrayed a patient who was displaying symptoms of a migraine headache but was otherwise healthy. Two variables (the gender and the weight of the patient) were manipulated across six different versions of the medical charts. The weight of the patient, described in terms of Body Mass Index (BMI), was average (BMI = 23), overweight (BMI = 30), or obese (BMI = 36). Physicians were randomly assigned to receive one of the six charts, and were asked to look over the chart carefully and complete two medical forms. The first form asked physicians which of 42 tests they would recommend giving to the patient. The second form asked physicians to indicate how much time they believed they would spend with the patient, and to describe the reactions that they would have toward this patient.
In this presentation, only the question on how much time the physicians believed they would spend with the patient is analyzed. Although three patient weight conditions were used in the study (average, overweight, and obese) only the average and overweight conditions will be analyzed. Therefore, there
are two levels of patient weight (average and overweight) and one dependent variable (time spent).
The data for the given collection of responses from 72 primary care physicians is stored in the file “discriminate.csv”. We start by reading the content of the file into a data frame by the name “patient” and presenting the summary of the variables:
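A possible form of the code (the file is assumed to be in the working directory):

```r
patient <- read.csv("discriminate.csv")
summary(patient)
```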
## weight time
## BMI=23:33 Min. : 5.00
Observe that of the 72 “patients”, 38 are overweight and 33 have an average weight. The time spent with the patient, as predicted by physicians, is distributed between 5 minutes and 1 hour, with an average of 27.82 minutes and a median of 30 minutes.
It is a good practice to have a look at the data before doing the analysis. In this examination one should see that the numbers make sense and one should identify special features of the data. Even in this very simple example we may want to have a look at the histogram of the variable “time”:
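The histogram may be produced with:

```r
hist(patient$time)
```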
A feature in this plot that catches attention is the fact that there is a high concentration of values in the interval between 25 and 30. Together with the fact that the median is equal to 30, one may suspect that, as a matter of fact, a large number of the values are actually equal to 30. Indeed, let us produce a table of the response:
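For example:

```r
table(patient$time)
```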
##
## 5 15 20 25 30 40 45 50 60
## 1 10 15 3 30 4 5 2 1
Notice that 30 of the 72 physicians marked “30” as the time they expect to spend with the patient. This is the middle value in the range, and may just be the default value one marks if one needs to complete a form and does not really place much importance on the question that was asked.
The goal of the analysis is to examine the relation between the weight of the patient and the physician’s response. The explanatory variable is a factor with two levels. The response is numeric. A natural tool to use in order to examine this relation is the 𝑡-test, which is implemented with the function “t.test”.
First we plot the relation between the response and the explanatory variable and then we apply the test:
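A sketch of the code (a box plot of the response by the two weight groups, followed by the 𝑡-test):

```r
boxplot(time ~ weight, data = patient)
t.test(time ~ weight, data = patient)
```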
## t = 2.8516, df = 67.174, p-value = 0.005774
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##   1.988532 11.265056
## sample estimates:
## mean in group BMI=23 mean in group BMI=30
##             31.36364             24.73684
Nothing seems problematic in the box plot. The two distributions, as they are reflected in the box plots, look fairly symmetric.
When we consider the report that is produced by the function “t.test” we may observe that the 𝑝-value is equal to 0.005774. This 𝑝-value is computed in testing the null hypothesis that the expectations of the response for both types of patients are equal against the two sided alternative. Since the 𝑝-value is less than 0.05 we do reject the null hypothesis.
The estimated value of the difference between the expectation of the response for a patient with BMI=23 and a patient with BMI=30 is 31.36364 − 24.73684 ≈
6.63 minutes. The confidence interval is (approximately) equal to [1.99, 11.27]. Hence, it looks as if the physicians expect to spend more time with the average weight patients.
After analyzing the effect of the explanatory variable on the expectation of the response one may want to examine the presence, or lack thereof, of such effect on the variance of the response. Towards that end, one may use the function “var.test”:
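For the current data this may take the form:

```r
var.test(time ~ weight, data = patient)
```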
##
## 	F test to compare two variances
##
## data:  time by weight
## F = 1.0443, num df = 32, denom df = 37, p-value = 0.8931
## alternative hypothesis: true ratio of variances is not equal to 1
## 95 percent confidence interval:
##  0.5333405 2.0797269
## sample estimates:
## ratio of variances
##           1.044316
In this test we do not reject the null hypothesis that the two variances of the response are equal since the 𝑝-value is larger than 0.05. The sample variances are almost equal to each other (their ratio is 1.044316), with a confidence interval for the ratio that essentially ranges between 1/2 and 2.
The production of 𝑝-values and confidence intervals is just one aspect in the
analysis of data. Another aspect, which typically is much more time consuming and requires experience and healthy skepticism is the examination of the assumptions that are used in order to produce the 𝑝-values and the confidence intervals. A clear violation of the assumptions may warn the statistician that perhaps the computed nominal quantities do not represent the actual statistical properties of the tools that were applied.
In this case, we have noticed the high concentration of the response at the value “30”. What is the situation when we split the sample between the two levels of the explanatory variable? Let us apply the function “table” once more, this time with the explanatory variable included:
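For example:

```r
table(patient$time, patient$weight)
```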
Not surprisingly, there is still a high concentration at the level “30”. But one can see that only 2 of the responses in the “BMI=30” group are above that value, in comparison to a much more symmetric distribution of responses for the other group.
The simulations of the significance level of the one-sample 𝑡-test for an Exponential response that were conducted in the exercises of the chapter on hypothesis testing may cast some doubt on how trustworthy the nominal 𝑝-values of the 𝑡-test are when the measurements are skewed. The skewness of the response for the group “BMI=30” is a reason to worry.
We may consider a different test, which is more robust, in order to validate the significance of our findings. For example, we may turn the response into a factor by setting one level for values larger than or equal to “30” and a different level for values less than “30”. The relation between the new response and the explanatory variable can be examined with the function “prop.test”. We first plot and then test:
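A sketch of the code, matching the call reported in the output below:

```r
plot(table(patient$time >= 30, patient$weight))
prop.test(table(patient$time >= 30, patient$weight))
```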
##
## 	2-sample test for equality of proportions with continuity
## 	correction
##
## data:  table(patient$time >= 30, patient$weight)
## X-squared = 3.7098, df = 1, p-value = 0.05409
## alternative hypothesis: two.sided
## 95 percent confidence interval:
##  -0.515508798 -0.006658689
## sample estimates:
##    prop 1    prop 2
## 0.3103448 0.5714286
The mosaic plot presents the relation between the explanatory variable and the new factor. The level “TRUE” is associated with a value of the predicted time spent with the patient being 30 minutes or more. The level “FALSE” is associated with a prediction of less than 30 minutes.
The computed 𝑝-value is equal to 0.05409, which almost reaches the significance level of 5%. Notice that the probabilities that are being estimated by the function are the probabilities of the level “FALSE”. Overall, one may see the outcome of this test as supporting evidence for the conclusion of the 𝑡-test. However, the 𝑝-value provided by the 𝑡-test may overemphasize the evidence in the data.
5One may propose splitting the response into two groups, with one group being associated with values of “time” strictly larger than 30 minutes and the other with values less than or equal to 30. The resulting 𝑝-value from the expression “prop.test(table(patient$time>30,patient$weight))” is 0.01276. However, the number of subjects in one of the cells of the table is equal only to 2, which is problematic in the context of the Normal approximation that is used by this test.
Physical Strength and Job Performance
The next case study involves an attempt to develop a measure of physical ability that is easy and quick to administer, does not risk injury, and is related to how well a person performs the actual job. The current example is based on a study by Blakely et al., published in the journal Personnel Psychology.
There are a number of very important jobs that require, in addition to cognitive skills, a significant amount of strength to be able to perform at a high level. Construction workers, electricians and auto mechanics all require strength in order to carry out critical components of their jobs. An interesting applied problem is how to select the best candidates from amongst a group of applicants for physically demanding jobs in a safe and cost effective way.
The data presented in this case study, which may be used for the development of a method for selection among candidates, were collected from 147 individuals working in physically demanding jobs. Two measures of strength were gathered from each participant: grip strength and arm strength. A piece of equipment known as the Jackson Evaluation System (JES) was used to collect the strength data. The JES can be configured to measure the strength of a number of muscle groups. In this study, grip strength and arm strength were measured. The outcomes of these measurements were summarized in two scores of physical strength called “grip” and “arm”.
Two separate measures of job performance are presented in this case study. First, the supervisors for each of the participants were asked to rate how well their employee(s) perform on the physical aspects of their jobs. This measure is summarized in the variable “ratings”. Second, simulations of physically demanding work tasks were developed. The summary score of these simulations is given in the variable “sims”. Higher values of either measure of performance indicate better performance.
The data for the 4 variables and 147 observations is stored in “job.csv”. We start by reading the content of the file into a data frame by the name “job”, presenting a summary of the variables, and their histograms:
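A possible form of the code (the file is assumed to be in the working directory):

```r
job <- read.csv("job.csv")
summary(job)
hist(job$grip)
hist(job$arm)
hist(job$ratings)
hist(job$sims)
```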
All variables are numeric. Examination of the 4 summaries and histograms does not produce interesting findings. All variables are, more or less, symmetric, with the distribution of the variable “ratings” tending perhaps to be more uniform than the other three.
The main analyses of interest are attempts to relate the two measures of physical strength “grip” and “arm” with the two measures of job performance, “ratings” and “sims”. A natural tool to consider in this context is a linear regression analysis that relates a measure of physical strength as an explanatory variable to a measure of job performance as a response.
Let us consider the variable “sims” as a response. The first step is to plot a scatter plot of the response and explanatory variable, for both explanatory variables. To the scatter plot we add the line of regression. In order to add the regression line we fit the regression model with the function “lm” and then apply the function “abline” to the fitted model. The plot for the relation between the response and the variable “grip” is produced by the code:
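A sketch of that code (the name “sims.grip” for the fitted model is our choice and is not fixed by the text):

```r
plot(sims ~ grip, data = job)
sims.grip <- lm(sims ~ grip, data = job)
abline(sims.grip)
```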
The plot that is produced by the last code is presented on the upper-right panel of Figure .
Both plots show similar characteristics. There is an overall linear trend in the relation between the explanatory variable and the response. The value of the response increases with the increase in the value of the explanatory variable (a positive slope). The regression line seems to follow, more or less, the trend that is demonstrated by the scatter plot.
A more detailed analysis of the regression model is possible by the application of the function “summary” to the fitted model. First the case where the explanatory variable is “grip”:
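Assuming the model was stored in the object “sims.grip” as in the sketch above:

```r
summary(sims.grip)
```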
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.295 on 145 degrees of freedom
## Multiple R-squared:  0.4094, Adjusted R-squared:  0.4053
## F-statistic: 100.5 on 1 and 145 DF,  p-value: < 2.2e-16
Examination of the report reveals a clear statistical significance for the effect of the explanatory variable on the distribution of the response. The value of R-squared, the ratio of the variance of the response explained by the regression, is 0.4094. The square root of this quantity, √0.4094 ≈ 0.64, is the proportion of the standard deviation of the response that is explained by the explanatory variable. Hence, about 64% of the variability in the response can be attributed to the measure of the strength of the grip.
For the variable “arm” we get:
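A parallel sketch for the variable “arm” (again, the object name “sims.arm” is our choice):

```r
sims.arm <- lm(sims ~ arm, data = job)
plot(sims ~ arm, data = job)
abline(sims.arm)
summary(sims.arm)
```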
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.226 on 145 degrees of freedom
## Multiple R-squared:  0.4706, Adjusted R-squared:  0.467
## F-statistic: 128.9 on 1 and 145 DF,  p-value: < 2.2e-16
This variable is also statistically significant. The value of R-squared is 0.4706. The proportion of the standard deviation that is explained by the strength of the arm is √0.4706 ≈ 0.69, which is slightly higher than the proportion explained by the grip.
Overall, the explanatory variables do a fine job in the reduction of the variability of the response “sims” and may be used as substitutes of the response in order to select among candidates. A better prediction of the response based on the values of the explanatory variables can be obtained by combining the information in both variables. The production of such a combination is not discussed in this book, though it is similar in principle to the methods of linear regression that are presented in Chapter . The produced score takes the form:
The scatter plot that includes the regression line can be found at the lower-left panel of Figure . Indeed, the linear trend is more pronounced for this scatter plot and the regression line is a better description of the relation between the response and the explanatory variable. A summary of the regression model produces the report:
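A sketch of the code that may produce this plot and report; it assumes that the combined score has already been added to the data frame as a variable named “score”, and it uses the object name “sims.score” that appears later in the chapter:

```r
sims.score <- lm(sims ~ score, data = job)
plot(sims ~ score, data = job)
abline(sims.score)
summary(sims.score)
```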
##
## Call:
## lm(formula = sims ~ score, data = job)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -3.1890 -0.7390 -0.0698  0.7411  2.8636
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  0.07479    0.09452   0.791     0.43
##
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.14 on 145 degrees of freedom
## Multiple R-squared:  0.5422, Adjusted R-squared:  0.539
## F-statistic: 171.7 on 1 and 145 DF,  p-value: < 2.2e-16
Indeed, the score is highly significant. More importantly, the R-squared coefficient that is associated with the score is 0.5422, which corresponds to a ratio of the standard deviation that is explained by the model of √0.5422 ≈ 0.74. Thus, almost 3/4 of the variability is accounted for by the score, so the score is a reasonable means of guessing what the results of the simulations will be. This guess is based only on the results of the simple tests of strength that are conducted with the JES device.
Before putting the final seal on the results let us examine the assumptions of the statistical model. First, with respect to the two explanatory variables. Does each of them really measure a different property or do they actually measure the same phenomenon? In order to examine this question let us look at the scatter plot that describes the relation between the two explanatory variables. This plot is produced using the code:
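A possible form of the code is:

```r
plot(arm ~ grip, data = job)
```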
It is presented in the lower-right panel of Figure . Indeed, one may see that the two measurements of strength are not independent of each other but tend to produce an increasing linear trend. Hence, it should not be surprising that the relation of each of them with the response produces essentially the same goodness of fit. The computed score gives a slightly improved fit, but still, it basically reflects either of the original explanatory variables.
In light of this observation, one may want to consider other measures of strength that represent features of strength not captured by these two variables. Namely, measures that show less of a joint trend than the two considered.
Another element that should be examined are the probabilistic assumptions that underlie the regression model. We described the regression model only in terms of the functional relation between the explanatory variable and the expectation of the response. In the case of linear regression, for example, this relation was given in terms of a linear equation. However, another part of the model corresponds to the distribution of the measurements about the line of regression. The assumption that led to the computation of the reported
𝑝-values is that this distribution is Normal.
A method that can be used in order to investigate the validity of the Normal assumption is to analyze the residuals from the regression line. Recall that these residuals are computed as the difference between the observed value of the response and its estimated expectation, namely the fitted regression line. The residuals can be computed via the application of the function “residuals” to the fitted regression model.
Specifically, let us look at the residuals from the regression line that uses the score that is combined from the grip and arm measurements of strength. One may plot a histogram of the residuals:
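A sketch of the code for the two panels (the Quantile-Quantile plot uses the expression given in the footnote):

```r
hist(residuals(sims.score))
qqnorm(residuals(sims.score))
```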
The produced histogram is presented on the upper panel. The histogram portrays a symmetric distribution that may result from Normally distributed observations. A better method to compare the distribution of the residuals to the Normal distribution is to use the Quantile-Quantile plot. This plot can be found on the lower panel. We do not discuss here the method by which this plot is produced. However, we do say that any deviation of the points from a straight line is an indication of violation of the assumption of Normality. In the current case, the points seem to be on a single line, which is consistent with the assumptions of the regression model.
The next task should be an analysis of the relations between the explanatory variables and the other response “ratings”. In principle one may use the same steps that were presented for the investigation of the relations between the explanatory variables and the response “sims”. But of course, the conclusion may differ. We leave this part of the investigation as an exercise to the students.
Summary
Concluding Remarks
The book included a description of some elements of statistics, elements that we thought are simple enough to be explained as part of an introductory course to statistics and are the minimum that is required for any person that is involved in academic activities of any field in which the analysis of data is required. Now, as you finish the book, it is as good a time as any to say some words regarding the elements of statistics that are missing from this book.
9Generally speaking, the plot is composed of the empirical percentiles of the residuals, plotted against the theoretical percentiles of the standard Normal distribution. The current plot is produced by the expression “qqnorm(residuals(sims.score))”.
One element is more of the same. The statistical models that were presented are as simple as a model can get. A typical application will require more complex models. Each of these models may require specific methods for estimation and testing. The characteristics of inference, e.g. significance or confidence levels, rely on assumptions that the models are assumed to possess. The user should be familiar with computational tools that can be used for the analysis of these more complex models. Familiarity with the probabilistic assumptions is required in order to be able to interpret the computer output, to diagnose possible divergence from the assumptions and to assess the severity of the possible effect of such divergence on the validity of the findings.
Statistical tools can be used for tasks other than estimation and hypothesis testing. For example, one may use statistics for prediction. In many applications it is important to assess what the values of future observations may be and in what range of values they are likely to occur. Statistical tools such as regression are natural in this context. However, the required task is not testing or estimation of the values of parameters, but the prediction of future values of the response.
A different role of statistics is in the design stage. We hinted in that direction when we talked, in the chapter on confidence intervals, about the selection of a sample size in order to assure a confidence interval with a given accuracy. In most applications, the selection of the sample size emerges in the context of hypothesis testing and the criterion for selection is the minimal power of the test, a minimal probability to detect a true finding. Yet, statistical design is much more than the determination of the sample size. Statistics may have a crucial input in the decision of how to collect the data. With an eye on the requirements for the final analysis, an experienced statistician can make sure that the data that is collected is indeed appropriate for that final analysis. Too often is the case where a researcher steps into the statistician's office with data that he or she collected and asks, when it is already too late, for help in the analysis of data that cannot provide a satisfactory answer to the research question the researcher tried to address. It may be said, with some exaggeration, that good statisticians are required for the final analysis only in the case where the initial planning was poor.
Last, but not least, is the theoretical mathematical theory of statistics. We tried to introduce as little as possible of the relevant mathematics in this course. However, if one seriously intends to learn and understand statistics then one must become familiar with the relevant mathematical theory. Clearly, deep knowledge in the mathematical theory of probability is required. But apart from that, there is a rich and rapidly growing body of research that deals with the mathematical aspects of data analysis. One cannot be a good statistician unless one becomes familiar with the important aspects of this theory.
I should have started the book with the famous quotation: “Lies, damned lies, and statistics”. Instead, I am using it to end the book. Statistics can be used and can be misused. Learning statistics can give you the tools to tell the difference between the two. My goal in writing the book is achieved if reading it will mark for you the beginning of the process of learning statistics and not the end of the process.
Discussion in the Forum
In the second part of the book we have learned many subjects. Most of these subjects, especially for those that had no previous exposure to statistics, were unfamiliar. In this forum we would like to ask you to share with us the difficulties that you encountered.
What was the topic that was most difficult for you to grasp? In your opinion, what was the source of the difficulty?
When forming your answer to this question we will appreciate if you could elaborate and give details of what the problem was. Pointing to deficiencies in the learning material and confusing explanations will help us improve the presentation for the future editions of this book. |
Chapter 1
Exercise 1.1
According to the information in the question the polling was conducted among 500 registered voters. The 500 registered voters correspond to the sample.
The percentage, among all registered voters of the given party, of those that prefer a male candidate is a parameter. This quantity is a characteristic of the population.
It is given that 42% of the sample prefer a female candidate. This quantity is a numerical characteristic of the data, of the sample. Hence, it is a statistic.
The voters in the state that are registered to the given party is the target population. |
The number of days in which 5 customers were waiting is 3, since the frequency of the value “5” in the data is 3. That can be seen from the table by noticing that the number below the value “5” is 3. It can also be seen from the bar plot by observing that the height of the bar above the value “5” is equal to 3.
The number of waiting customers that occurred the largest number of times is 1. The value “1” occurred 8 times, more than any other value. Notice that the bar above this value is the highest.
The value “0”, which occurred only once, occurred the least number of times.
Chapter 2
Exercise 2.1
The relative frequency of direct hits of category 1 is 0.3993. Notice that the cumulative relative frequency of category 1 and 2 hits, the sum of the relative frequency of both categories, is 0.6630. The relative frequency of category 2 hits is 0.2637. Consequently, the relative frequency of direct hits of category 1 is 0.6630 - 0.2637 = 0.3993.
The relative frequency of direct hits of category 4 or more is 0.0769. Observe that the cumulative relative frequency of the value “3” is 0.6630 + 0.2601 = 0.9231. This follows from the fact that the cumulative relative frequency of the value “2” is 0.6630 and the relative frequency of the value “3” is 0.2601. The total cumulative relative frequency is 1.0000. The relative frequency of direct hits of category 4 or more is the difference between the total cumulative relative frequency and the cumulative relative frequency of 3 hits: 1.0000 - 0.9231 = 0.0769.
Exercise 2.2
The total number of cows that were involved in this study is 45. The object “freq” contains the table of frequencies of the cows, divided according to the number of calves that they had. The cumulative frequency of all the cows that had 7 calves or less, which includes all cows in the study, is reported under the number “7” in the output of the expression “cumsum(freq)”. This number is 45.
The number of cows that gave birth to a total of 4 calves is 10. Indeed, the cumulative frequency of cows that gave birth to 4 calves or less is 28. The cumulative frequency of cows that gave birth to 3 calves or less is 18. The frequency of cows that gave birth to exactly 4 calves is the difference between these two numbers: 28 - 18 = 10.
The relative frequency of cows that gave birth to at least 4 calves is 27/45 = 0.6. Notice that the cumulative frequency of cows that gave at most 3 calves is 18. The total number of cows is 45. Hence, the number of cows with 4 or more calves is the difference between these two numbers: 45 - 18 = 27. The relative frequency of such cows is the ratio between this number and the total number of cows: 27/45 = 0.6.
Chapter 3
Exercise 3.1
Consider the data “x1”. From the summary we see that it is distributed in the range between 0 and slightly below 5. The central 50% of the distribution are located between 2.5 and 3.8. The mean and median are approximately equal to each other, which suggests an approximately symmetric distribution. Consider the histograms in Figure . Histograms 1 and 3 correspond to distributions in the appropriate range. However, the distribution in Histogram 3 is concentrated in lower values than suggested by the given first and third quartiles. Consequently, we match the summary of “x1” with Histogram 1.
Consider the data “x2”. Again, the distribution is in the range between 0 and slightly below 5. The central 50% of the distribution are located between 0.6 and 1.8. The mean is larger than the median, which suggests a distribution skewed to the right. Therefore, we match the summary of “x2” with Histogram 3.
For the data in “x3” we may note that the distribution is in the range between 2 and 6. The histogram that fits this description is Histogram 2.
The box plot is essentially a graphical representation of the information presented by the function “summary”. Following the rationale of matching the summary with the histograms we may obtain that Histogram 1 should be matched with Box-plot 2 in Figure , Histogram 2 matches Box-plot 3, and Histogram 3 matches Box-plot 1. Indeed, it is easier to match the box plots with the summaries. However, it is a good idea to practice the direct matching of histograms with box plots.
The data in “x1” fits Box-plot 2 in Figure . The value 0.000 is the smallest value in the data and it corresponds to the smallest point in the box plot. Since this point is below the bottom whisker it follows that it is an outlier. More directly, we may note that the inter-quartile range is equal to 𝐼𝑄𝑅 = 3.840 − 2.498 = 1.342. The lower threshold is equal to 2.498 − 1.5 × 1.342 = 0.485, which is larger than the given value. Consequently, the given value 0.000 is an outlier.
Observe that the data in “x3” fits Box-plot 3 in Figure . The value 6.414 is the largest value in the data and it corresponds to the endpoint of the upper whisker in the box plot and is not an outlier. Alternatively, we may note that the inter-quartile range is equal to 𝐼𝑄𝑅 = 4.690 − 3.391 = 1.299. The upper threshold is equal to 4.690 + 1.5 × 1.299 = 6.6385, which is larger than the given value. Consequently, the given value 6.414 is not an outlier.
We created an object “x.val” that contains the unique values of the data and an object “freq” that contains the frequencies of the values. The object “rel.freq” contains the relative frequencies, the ratios between the frequencies and the number of observations. The average is computed as the sum of the products of the values with their relative frequencies. It is stored in the object “x.bar” and obtains the value 4.666667.
An alternative approach is to reconstruct the original data from the frequency table. A simple trick that will do the job is to use the
function “rep”. The first argument to this function is a sequence of values. If the second argument is a sequence of the same length that contains integers then the output will be composed of a sequence that contains the values of the first sequence, each repeated the number of times indicated by the second argument. Specifically, if we enter to this function the unique values “x.val” and the frequencies “freq” then the output will be the sequence of values of the original sequence “x”:
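For example:

```r
x <- rep(x.val, freq)
mean(x)
```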
Observe that when we apply the function “mean” to “x” we get again the value 4.666667.
In order to compute the sample standard deviation we may compute first the sample variance and then take the square root of the result:
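A sketch of the computation (the name “var.x” for the intermediate result is arbitrary):

```r
var.x <- sum((x.val - x.bar)^2 * freq) / (sum(freq) - 1)
sqrt(var.x)
```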
Notice that the expression “sum((x.val-x.bar)^2*freq)” computes the sum of squared deviations. The expression “(sum(freq)-1)” produces the number of observations minus 1 (𝑛 − 1). The ratio of the two gives the sample variance.
Alternatively, had we produced the object “x” that contains the data, we may apply the function “sd” to get the sample standard deviation:
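Namely:

```r
sd(x)
```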
Recall that the object “x.val” contains the unique values of the data. The expression “cumsum(rel.freq)” produces the cumulative relative frequencies. The function “data.frame” puts these two variables into a single data frame and provides a clearer representation of the results.
Notice that more than 50% of the observations have value 4 or less. However, strictly less than 50% of the observations have value 2 or less. Consequently, the median is 4. (If the value of the cumulative relative frequency at 4 would have been exactly 50% then the median would have been the average between 4 and the value larger than 4.)
If we produce the values of the data “x” then we may apply the function “summary” to it and obtain the median this way as well.
As for the inter-quartile range (IQR) notice that the first quartile is 2 and the third quartile is 6. Hence, the inter-quartile range is equal to 6 - 2 = 4. The quartiles can be read directly from the output of the function “summary” or can be obtained from the data frame of the cumulative relative frequencies. For the latter, observe that more than 25% of the data are less or equal to 2 and more than 75% of the
data are less or equal to 6 (with strictly less than 75% less or equal to 4).
In order to answer the last question we conduct the computation: (10 − 4.666667)/2.425914 = 2.198484. We conclude that the value 10 is approximately 2.1985 standard deviations above the mean. |
We obtain an expectation E(𝑌) = 3.3333.
The values that the random variable 𝑌 obtains are the numbers 0, 1, 2, …, 5, with probabilities {1/21, 2/21, … , 6/21}, respectively. The expectation is equal to E(𝑌) = 3.333333. The variance is obtained by the multiplication of the squared deviations from the expectation of the values by their respective probabilities and the summation of the products. Let us carry out the computation in R:
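A possible form of the computation (the object names are arbitrary):

```r
y.val <- 0:5
p.y <- (1:6) / 21                    # probabilities 1/21, ..., 6/21
E.y <- sum(y.val * p.y)              # expectation, 3.333333
V.y <- sum((y.val - E.y)^2 * p.y)    # variance
E.y
V.y
```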
Exercise 4.2
An outcome of the game of chance may be represented by a sequence of length three composed of the letters “H” and “T”. For example, the sequence “THH“ corresponds to the case where the first toss produced a”Tail“, the second a “Head” and the third a “Head”.
With this notation we obtain that the possible outcomes of the game are {HHH, THH, HTH, TTH, HHT, THT, HTT, TTT}. All outcomes are equally likely. There are 8 possible outcomes and only one of them corresponds to winning. Consequently, the probability of winning is 1/8.
Consider the previous solution. One loses if any other of the outcomes occurs. Hence, the probability of losing is 7/8.
Denote the gain of the player by 𝑋. The random variable 𝑋 may obtain two values: 10 - 2 = 8 if the player wins and -2 if the player loses. The probabilities of these values are {1/8, 7/8}, respectively. Therefore, the expected gain, the expectation of 𝑋, is:
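In R the computation may be carried out as:

```r
8 * (1/8) + (-2) * (7/8)   # expected gain: -0.75
```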
Chapter 5
Exercise 5.1
The Binomial distribution is a reasonable model for the number of people that develop high fever as a result of the vaccination. Let 𝑋 be the number of people that do so in a given day. Hence, 𝑋 ∼ Binomial(500, 0.09). According to the formula for the expectation in the Binomial distribution, since 𝑛 = 500 and 𝑝 = 0.09, we get that:
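E(𝑋) = 𝑛 ⋅ 𝑝 = 500 × 0.09 = 45 .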
P(𝑋 > 40) = 1 − P(𝑋 ≤ 40) .
The probability can be computed with the aid of the function “pbinom” that produces the cumulative probability of the Binomial distribution:
```r
1 - pbinom(40,500,0.09)
``` |
The probability that the number of people that will develop a reaction is between 45 and 50 (inclusive) is the difference between P(𝑋 ≤ 50) and P(𝑋 < 45) = P(𝑋 ≤ 44). Apply the function “pbinom” to get:
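Namely:

```r
pbinom(50, 500, 0.09) - pbinom(44, 500, 0.09)
```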
The first plot, that corresponds to 𝑋1 ∼ Negative-Binomial(2, 0.5), fits Barplot 3. Notice that the distribution tends to obtain smaller values and that the probability of the value “0” is equal to the probability of the value “1”.
The second plot, the one that corresponds to 𝑋2 ∼ Negative-Binomial(4, 0.5), is associated with Barplot 1. Notice that the distribution tends to obtain larger values. For example, the probability of the value “10” is substantially larger than zero, where for the other two plots this is not the case.
The third plot, the one that corresponds to 𝑋3 ∼ Negative-Binomial(8, 0.8), matches Barplot 2. Observe that this distribution tends to produce smaller probabilities for the small values as well as for the larger values. Overall, it is more concentrated than the other two.
Barplot 1 corresponds to a distribution that tends to obtain larger values than the other two distributions. Consequently, the expectation of this distribution should be larger. The conclusion is that the pair E(𝑋) = 4, V(𝑋) = 8 should be associated with this distribution.
Barplot 2 describes a distribution that produces smaller probabilities for the small values as well as for the larger values and is more concentrated than the other two. The expectations of the two remaining distributions are equal to each other and the variance of the pair E(𝑋) = 2, V(𝑋) = 2.5 is smaller. Consequently, this is the pair that should be matched with this barplot.
This leaves only Barplot 3, that should be matched with the pair
E(𝑋) = 2, V(𝑋) = 4.
Chapter 6
Exercise 6.1
Let 𝑋 be the total weight of 8 people. By the assumption, 𝑋 ∼ Normal(560, 57²). We are interested in the probability P(𝑋 > 650). This probability is equal to the difference between 1 and the probability P(𝑋 ≤ 650). We use the function “pnorm” in order to carry out the computation:
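Namely:

```r
1 - pnorm(650, 560, 57)
```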
We get that the probability that the total weight of 8 people exceeds 650kg is equal to 0.05717406.
Let 𝑌 be the total weight of 9 people. By the assumption, 𝑌 ∼ Normal(630, 61²). We are interested in the probability P(𝑌 > 650). This probability is equal to the difference between 1 and the probability P(𝑌 ≤ 650). We use again the function “pnorm” in order to carry out the computation:
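For example:

```r
1 - pnorm(650, 630, 61)
```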
We get that the probability that the total weight of 9 people exceeds 650kg is much higher and is equal to 0.3715054.
Again, 𝑋 ∼ Normal(560, 57²), where 𝑋 is the total weight of 8 people. In order to find the central region that contains 80% of the distribution we need to identify the 10%-percentile and the 90%-percentile of 𝑋. We use the function “qnorm” in the code:
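Namely:

```r
qnorm(c(0.1, 0.9), 560, 57)
```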
The requested region is the interval [486.9516, 633.0484].
As before, 𝑌 ∼ Normal(630, 61²), where 𝑌 is the total weight of 9 people. In order to find the central region that contains 80% of the distribution we need to identify the 10%-percentile and the 90%-percentile of 𝑌. The computation this time produces:
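For example:

```r
qnorm(c(0.1, 0.9), 630, 61)
```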
Exercise 6.2
The probability P(𝑋 > 11) can be computed as the difference between 1 and the probability P(𝑋 ≤ 11). The latter probability can be computed with the function “pbinom”:
The Normal approximation with continuity correction proposes
P(𝑋 > 11) ≈ 0.1190149.
The Poisson approximation replaces the Binomial distribution by the Poisson distribution with the same expectation. The expectation of the Binomial distribution is used as the parameter of the approximating Poisson distribution.
Chapter 7
Exercise 7.1
After placing the file “pop2.csv” in the working directory one may produce a data frame with the content of the file and compute the average of the variable “bmi” using the code:
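A possible form of the code (the name “pop.2” for the data frame is our choice):

```r
pop.2 <- read.csv("pop2.csv")
mean(pop.2$bmi)
```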
It turns out that the standard deviation of the measurement is 4.188511.
In order to compute the expectation under the sampling distribution of the sample average we conduct a simulation. The simulation produces (an approximation of) the sampling distribution of the sample average:
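A sketch of such a simulation (the name “X.samp” for the intermediate sample is arbitrary; the structure follows the description in the next paragraph):

```r
X.bar <- rep(0, 10^5)                 # a vector of 100,000 zeros
for (i in 1:10^5) {
  X.samp <- sample(pop.2$bmi, 150)    # a random sample of size 150
  X.bar[i] <- mean(X.samp)            # store the sample average
}
mean(X.bar)
```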
Initially, we produce a vector of zeros of the given length (100,000). In each iteration of the “for” loop a random sample of size 150 is selected from the population. The sample average is computed and stored in the sequence “X.bar”. At the end of all the iterations all the zeros are replaced by evaluations of the sample average.
The expectation of the sampling distribution of the sample average is computed by the application of the function “mean” to the sequence that represents the sampling distribution of the sample average. The result for the current simulation is 24.98681, which is very similar to the population average 24.98446.
The standard deviation of the sample average under the sampling distribution is computed using the function “sd”:
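Namely:

```r
sd(X.bar)
```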