The resulting standard deviation is 0.3422717. Recall that the standard deviation of a single measurement is equal to 4.188511 and that the sample size is 𝑛 = 150. The ratio between the standard deviation of the measurement and the square root of 150 is 4.188511/√150 = 0.3419905, which is similar in value to the standard deviation of the sample average[^4]. The central region that contains 80% of the sampling distribution of the sample average can be identified with the aid of the function “quantile”:
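A sketch of the call described above, assuming the simulated sample averages were stored in a sequence named “X.bar” (the object name is an assumption):

```r
# 10%- and 90%-percentiles of the simulated sampling distribution
quantile(X.bar, c(0.1, 0.9))
```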
The value 24.54972 is the 10%-percentile of the sampling distribution. To the left of this value are 10% of the distribution. The value 25.42629 is the 90%-percentile of the sampling distribution. To the right of this value are 10% of the distribution. Between these two values are 80% of the sampling distribution. The Normal approximation, which is the conclusion of the Central Limit Theorem, substitutes the sampling distribution of the sample average with the Normal distribution that has the same expectation and standard deviation. The percentiles are computed with the function “qnorm”:
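A hedged sketch of the computation, again assuming the simulated averages are stored in “X.bar”, so that its mean and standard deviation approximate those of the sampling distribution:

```r
# Normal approximation of the same central 80% region,
# using the expectation and standard deviation of the sample average
qnorm(c(0.1, 0.9), mean(X.bar), sd(X.bar))
```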
Observe that we used the expectation and the standard deviation of the sample average in the function. The resulting interval is [24.54817, 25.42545], which is similar to the interval [24.54972, 25.42629] that was obtained via simulations.

Exercise 7.2

Denote by 𝑋 the distance from the specified endpoint of a random hit. Observe that 𝑋 ∼ Uniform(0, 10). The 25 hits form a sample 𝑋1, 𝑋2, … , 𝑋25 from this distribution and the sample average 𝑋̄ is the average of these random locations. The expectation of the average is equal to the expectation of a single measurement. Since E(𝑋) = (𝑎 + 𝑏)/2 = (0 + 10)/2 = 5 we get that E(𝑋̄) = 5. The variance of the sample average is equal to the variance of a single measurement, divided by the sample size. The variance of the Uniform distribution is V(𝑋) = (𝑏 − 𝑎)²/12 = (10 − 0)²/12 = 8.333333. The standard deviation of the sample average is equal to the standard deviation of a single measurement, divided by the square root of the sample size. The sample size is 𝑛 = 25. Consequently, the standard deviation of the average is √8.333333/√25 = 0.5773503. The left-most third of the detector is the interval to the left of 10/3. The distribution of the sample average, according to the Central Limit Theorem, is Normal. The probability of being less than 10/3 for the Normal distribution may be computed with the function “pnorm”:
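A sketch of the call, with the expectation and standard deviation taken from the derivation above:

```r
# P(sample average < 10/3) under the Normal approximation,
# with expectation 5 and standard deviation sqrt(8.333333/25)
pnorm(10/3, 5, sqrt(8.333333/25))  # 0.001946209
```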
The expectation and the standard deviation of the sample average are used in computation of the probability. The probability is 0.001946209, about 0.2%. The central region in the Normal(𝜇, 𝜎²) distribution that contains 99% of the distribution is of the form 𝜇 ± qnorm(0.995) ⋅ 𝜎, where “qnorm(0.995)” is the 99.5%-percentile of the Standard Normal distribution. Therefore, 𝑐 = qnorm(0.995) ⋅ 𝜎:
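Using the standard deviation of the sample average computed above, the half-width 𝑐 can be sketched as:

```r
# half-width of the central 99% region for the sample average
qnorm(0.995) * 0.5773503  # approximately 1.4872
```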
The variable “change” contains the difference between the patient’s rating before the application of the device and the rating after the application. The sample average of this variable is reported as the “Mean” for this variable and is equal to 3.5. The variable “active” is a factor. Observe that the summary of this variable lists the two levels of the variable and the frequency of each level. Indeed, the levels are coded with numbers but, nonetheless, the variable is a factor[^12]. Based on the hint we know that the expressions “change[1:29]” and “change[30:50]” produce the values of the variable “change” for the patients that were treated with active magnets and with inactive placebo, respectively. We apply the function “mean” to these sub-sequences:
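A sketch of the two calls, assuming the data frame is named “magnets” as elsewhere in these solutions:

```r
mean(magnets$change[1:29])   # active magnets: 5.241379
mean(magnets$change[30:50])  # inactive placebo: 1.095238
```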
The sample average for the patients that were treated with active magnets is 5.241379 and the sample average for the patients that were treated with inactive placebo is 1.095238. We apply the function “sd” to these sub-sequences:
The sample standard deviation for the patients that were treated with active magnets is 3.236568 and the sample standard deviation for the patients that were treated with inactive placebo is 1.578124. We apply the function “boxplot” to each sub-sequence: The first box-plot corresponds to the sub-sequence of the patients that received an active magnet. There are no outliers in this plot. The second box-plot corresponds to the sub-sequence of the patients that received an inactive placebo. Three values, the values “3”, “4”, and “5”, are associated with outliers. Let us see the total number of observations that received these values:
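One way to count these observations is to tabulate the placebo sub-sequence (a sketch, assuming the “magnets” data frame name):

```r
# frequency of each value of "change" in the placebo sub-sequence
table(magnets$change[30:50])
```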
One may see that a single observation obtained the value “3”, another one obtained the value “5”, and 2 observations obtained the value “4”, a total of 4 outliers[^13]. Notice that the single point that is associated with the value “4” actually represents 2 observations and not one.
Observe that each iteration of the simulation involves the generation of two samples. One sample is of size 29 and it is generated from the Normal(3.5, 3²) distribution and the other sample is of size 21 and it is generated from the Normal(3.5, 1.5²) distribution. The sample average and the sample variance are computed for each sample. The test statistic is computed based on these averages and variances and it is stored in the appropriate position of the sequence “test.stat”. The values of the sequence “test.stat” at the end of all the iterations represent the sampling distribution of the statistic. The application of the function “quantile” to the sequence gives the 0.025-percentile and the 0.975-percentile of the sampling distribution, which are -2.014838 and 2.018435. It follows that the interval [−2.014838, 2.018435] contains about 95% of the sampling distribution of the statistic. In order to evaluate the statistic for the given data set we apply the same steps that were used in the simulation for the computation of the statistic:
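A hedged sketch of these steps (the object names are assumptions; the data frame is called “magnets” as elsewhere in these solutions):

```r
x.bar.a <- mean(magnets$change[1:29])   # sample average, first 29 patients
x.bar.b <- mean(magnets$change[30:50])  # sample average, last 21 patients
s2.a <- var(magnets$change[1:29])       # sample variance, first group
s2.b <- var(magnets$change[30:50])      # sample variance, second group
(x.bar.a - x.bar.b)/sqrt(s2.a/29 + s2.b/21)  # the test statistic: 5.985601
```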
In the first line we compute the sample average for the first 29 patients and in the second line we compute it for the last 21 patients. In the third and fourth lines we do the same for the sample variances of the two types of patients. Finally, in the fifth line we evaluate the statistic. The computed value of the statistic turns out to be 5.985601, a value that does not belong to the interval [−2.014838, 2.018435].
Chapter 10

Exercise 10.1

We simulate the sampling distribution of the average and the median in a sample generated from the Normal distribution. In order to do so we copy the code that was used in Subsection , replacing the object “mid.range” by the object “X.med” and using the function “median” in order to compute the sample median instead of the computation of the mid-range statistic: The sequence “X.bar” represents the sampling distribution of the sample average and the sequence “X.med” represents the sampling distribution of the sample median. We apply the function “mean” to these sequences in order to obtain the expectations of the estimators:
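A hedged sketch of the simulation. The Normal(3, 2) measurement and the sample size of 100 are assumptions, chosen to be consistent with the expectation 3 and the variance 0.020 = 2/100 that are reported below:

```r
X.bar <- rep(0, 10^5)
X.med <- rep(0, 10^5)
for (i in 1:10^5) {
  X <- rnorm(100, 3, sqrt(2))  # assumed: Normal measurement, E = 3, V = 2
  X.bar[i] <- mean(X)
  X.med[i] <- median(X)
}
mean(X.bar)  # expectation of the sample average, essentially 3
mean(X.med)  # expectation of the sample median, essentially 3
```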
The expectation of the measurement is equal to 3. Observe that the expectations of the estimators are essentially equal to the expectation[^13]. Consequently, both estimators are unbiased estimators of the expectation of the measurement. In order to obtain the variances of the estimators we apply the function “var” to the sequences that represent their sampling distributions:
Observe that the variance of the sample average is essentially equal to 0.020 and the variance of the sample median is essentially equal to 0.0312. The mean square error of an unbiased estimator is equal to its variance. Hence, these numbers represent the mean square errors of the estimators. It follows that the mean square error of the sample average is less than the mean square error of the sample median in the estimation of the expectation of a Normal measurement. We repeat the same steps as before for the Uniform distribution. Notice that we use the parameters 𝑎 = 0.5 and 𝑏 = 5.5 the same way we did in Subsection . These parameters produce an expectation E(𝑋) = 3 and a variance V(𝑋) = 2.083333:
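A hedged sketch of the Uniform version of the simulation, again assuming samples of size 100 (consistent with the reported variance 0.021 ≈ 2.083333/100):

```r
X.bar <- rep(0, 10^5)
X.med <- rep(0, 10^5)
for (i in 1:10^5) {
  X <- runif(100, 0.5, 5.5)  # E(X) = 3, V(X) = 25/12 = 2.083333
  X.bar[i] <- mean(X)
  X.med[i] <- median(X)
}
var(X.bar)  # essentially 0.021
var(X.med)  # essentially 0.061
```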
Observe that 0.021 is, essentially, the value of the variance of the sample average[^15]. The variance of the sample median is essentially equal to 0.061. The variance of each of the estimators is equal to its mean square error. This is the case since the estimators are unbiased. Consequently, we again obtain that the mean square error of the sample average is less than that of the sample median.

Exercise 10.2

Assuming that the file “ex2.csv” is saved in the working directory, one may read the content of the file into a data frame and produce a summary of the content of the data frame using the code:
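A sketch of the code described:

```r
ex2 <- read.csv("ex2.csv")  # read the file into a data frame
summary(ex2)                # summarize its content
```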
Notice that the expression “ex2$group == "HIGH"” produces a sequence of length 150 with logical entries. The entry is equal to “TRUE” if the equality holds and it is equal to “FALSE” if it does not[^17]. When the function “mean” is applied to a sequence with logical entries it produces the relative frequency of the TRUEs in the sequence. This corresponds, in the current context, to the sample proportion of the level “HIGH” in the variable “ex2$group”. Make sure that the file “pop2.csv” is saved in the working directory. In order to compute the proportion in the population we read the content of the file into a data frame and compute the relative frequency of the level “HIGH” in the variable “group”:
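A sketch of the computation:

```r
pop2 <- read.csv("pop2.csv")  # read the population file
mean(pop2$group == "HIGH")    # relative frequency of "HIGH": 0.28126
```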
We get that the proportion in the population is 𝑝 = 0.28126. The simulation of the sampling distribution involves a selection of a random sample of size 150 from the population and the computation of the proportion of the level “HIGH” in that sample. This procedure is iterated 100,000 times in order to produce an approximation of the distribution:
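A sketch of the simulation described in the next paragraph:

```r
P.hat <- rep(0, 10^5)
for (i in 1:10^5) {
  X <- sample(pop2$group, 150)  # a random sample of size 150
  P.hat[i] <- mean(X == "HIGH") # the sample proportion of "HIGH"
}
mean(P.hat)  # essentially equal to p = 0.28126
```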
Observe that the sampling distribution is stored in the object “P.hat”. The function “sample” is used in order to sample 150 observations from the sequence “pop2$group”. The sample is stored in the object “X”. The expression “mean(X == "HIGH")” computes the relative frequency of the level “HIGH” in the sequence “X”. At the last line, after the production of the sequence “P.hat” is completed, the function “mean” is applied to the sequence. The result is the expected value of the estimator 𝑃̂, which is equal to 0.2812307. This expectation is essentially equal to the probability 𝑝 = 0.28126.[^18] The application of the function “var” to the sequence “P.hat” produces:
There are two variables: The variable “condition” is a factor with two levels, “C” that codes the Charismatic condition and “P” that codes the Punitive condition. The second variable is “rating”, which is a numeric variable. The sample average for the variable “rating” can be obtained from
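A hedged sketch, assuming the data frame is named “teacher” (the name is an assumption):

```r
mean(teacher$rating)  # the sample average of "rating"
sd(teacher$rating)    # the sample standard deviation of "rating"
```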
Observe that the sample average is equal to 2.428567 and the sample standard deviation is equal to 0.5651949. The sample average and standard deviation for each sub-sample may be produced with the aid of the function “tapply”. We apply the function given in the third argument, first “mean” and then “sd”, to the variable “rating” in the first argument, over each level of the factor “condition” in the second argument:
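A hedged sketch of the two calls, again assuming the data frame name “teacher”:

```r
tapply(teacher$rating, teacher$condition, mean)  # group averages
tapply(teacher$rating, teacher$condition, sd)    # group standard deviations
```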
We obtain that the average for the condition “C” is 2.613332 and the standard deviation is 0.5329833. You may note that the rating given by students that were exposed to the description of the lecturer as charismatic is higher on average than the rating given by students that were exposed to a less favorable description. The box plots of the ratings for the two conditions are presented in Figure .
The 99% confidence interval for the expectation is computed by the formula 𝑥̄ ± qt(0.995,n-1) ⋅ 𝑠/√𝑛. Only 0.5% of the 𝑡-distribution on (𝑛 − 1) degrees of freedom resides above the percentile “qt(0.995,n-1)”. Using this percentile leaves out a total of 1% in both tails and keeps 99% of the distribution inside the central interval. For the students that were exposed to Condition “C”, 𝑥̄ = 2.613332, 𝑠 = 0.5329833, and 𝑛 = 25:
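A sketch of the computation with the values given above:

```r
n <- 25
# 99% confidence interval for the expectation, Condition "C"
2.613332 + qt(c(0.005, 0.995), n - 1) * 0.5329833/sqrt(n)
```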
Only 5% of the chi-square distribution on (𝑛 − 1) degrees of freedom is above the percentile “qchisq(0.95,n-1)” and 5% are below the percentile “qchisq(0.05,n-1)”. For the students that were exposed to Condition “C”, 𝑠 = 0.5329833, and 𝑛 = 25:
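A sketch of the resulting 90% confidence interval for the variance, using the chi-square percentiles named above:

```r
n <- 25
s <- 0.5329833
# 90% confidence interval for the variance, Condition "C"
(n - 1) * s^2 / qchisq(c(0.95, 0.05), n - 1)
```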
We compute the confidence level for a confidence interval with a nominal confidence level of 95%. Observe that using the Normal approximation of the sample average corresponds to the application of the Normal percentile in the construction of the confidence interval.
The expectation of the measurement is equal to 4. This number belongs to the confidence interval 90.47% of the times. Consequently, the actual confidence level is 90.47%. Using the same sampling distribution that was produced in the solution to Question 1 we now compute the actual confidence level of a confidence interval that is constructed under the assumption that the measurement has a Normal distribution:
Based on the assumption we used the percentiles of the 𝑡-distribution. The actual confidence level is 91.953% ≈ 92%, still short of the nominal 95% confidence level. It would be preferable to use the (incorrect) assumption that the observations have a Normal distribution than to apply the Normal approximation to such a small sample. In the current setting the former produced a confidence level that is closer to the nominal one. In general, using the percentiles of the 𝑡-distribution will produce wider and more conservative confidence intervals than those produced under the Normal approximation of the average. To be on the safe side, one typically prefers the more conservative confidence intervals.
Chapter 12

Exercise 12.1

The null hypothesis of no placebo effect corresponds to the expectation of the change being equal to 0 (𝐻0 ∶ E(𝑋) = 0). The alternative hypothesis may be formulated as the expectation not being equal to 0 (𝐻1 ∶ E(𝑋) ≠ 0). This corresponds to a two-sided alternative. Observe that a negative expectation of the change still corresponds to the placebo having an effect. The observations that can be used in order to test the hypothesis are those associated with patients that were treated with the inactive placebo, i.e. the last 21 observations. We extract these values from the data frame using the expression “magnets$change[30:50]”. In order to carry out the test we read the data from the file “magnets.csv” into the data frame “magnets” and apply the function “t.test” to these values:
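A sketch of the call described:

```r
magnets <- read.csv("magnets.csv")
# test H0: E(X) = 0 against the two-sided alternative,
# using the 21 placebo observations
t.test(magnets$change[30:50], mu = 0)
```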
We compute the test statistic “T” from the sample average “X.bar” and the sample standard deviation “S”. In the last expression we compute the probability that the absolute value of the test statistic is larger than “qt(0.975,19)”, which is the threshold that should be used in order to obtain a significance level of 5% for Normal measurements. We obtain that the actual significance level of the test is 0.08047, which is substantially larger than the nominal significance level. We repeat essentially the same simulations as before. We only change the distribution of the sample from the Exponential to the Uniform distribution:
The actual significance level of the test is 0.05163, much closer to the nominal significance level of 5%. A possible explanation for the difference between the two cases is that the Uniform distribution is symmetric like the Normal distribution, whereas the Exponential is skewed. In any case, for larger sample sizes one may expect the Central Limit Theorem to kick in and produce more satisfactory results, even for the Exponential case.
Observe that the absolute value of the statistic (3.708099) is larger than the threshold for rejection (2.004879). Consequently, we reject the null hypothesis. We recompute the percentile of the 𝑡-distribution:
Again, the absolute value of the statistic (3.708099) is larger than the threshold for rejection (2.669985). Consequently, we reject the null hypothesis. In this question we should recompute the test statistic:
Chapter 13

Exercise 13.1

The score of pain before the application of the device is measured in the variable “score1”. This variable is used as the response. We apply the function “t.test” in order to test the equality of the expectation of the response in the two groups. First we read in the data from the file into a data frame and then we apply the test:
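A hedged sketch of the two steps; using the factor “active” as the grouping variable is an assumption based on the earlier description of the data:

```r
magnets <- read.csv("magnets.csv")
# compare the expectation of "score1" between the two groups
t.test(magnets$score1 ~ magnets$active)
```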
The computed 𝑝-value is 0.6806, which is above 0.05. Consequently, we do not reject the null hypothesis that the expectations in the two groups are equal. This should not come as a surprise, since patients were assigned to the groups randomly and without knowing to which group they belong. Prior to the application of the device, the two groups should look alike. Again, we use the variable “score1” as the response. Now we apply the function “var.test” in order to test the equality of the variances of the response in the two groups:
The computed 𝑝-value is 0.3687, which is once more above 0.05. Consequently, we do not reject the null hypothesis that the variances in the two groups are equal. This fact is reassuring. Indeed, prior to the application of the device, the two groups have the same characteristics. Therefore, any subsequent difference between the two groups can be attributed to the difference in the treatment. The difference in score between the treatment and the control groups is measured in the variable “change”. This variable is used as the response for the current analysis. We apply the function “t.test” in order to test the equality of the expectation of the response in the two groups:
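A hedged sketch of the call, with the factor “active” again assumed as the grouping variable:

```r
# compare the expectation of "change" between the two groups
t.test(magnets$change ~ magnets$active)
```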
The computed 𝑝-value is 3.86 × 10⁻⁷, which is much below 0.05. Consequently, we reject the null hypothesis that the expectations in the two groups are equal. The conclusion is that, according to this trial, magnets do have an effect on the expectation of the response[^9]. Once more we consider the variable “change” as the response. We apply the function “var.test” in order to test the equality of the variances of the response in the two groups:
The computed 𝑝-value is 0.001535, which is much below 0.05. Consequently, we reject the null hypothesis that the variances in the two groups are equal. Hence, magnets also affect the variance of the response.

Exercise 13.2

We simulate the sampling distribution of the sample standard deviation for two samples, one sample of size 𝑛𝑎 = 29 and the other of size 𝑛𝑏 = 21. Both samples are simulated from the given Normal distribution:
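A hedged sketch of the simulation. The particular Normal parameters are an assumption; under the null hypothesis both samples share one Normal distribution, and the distribution of the variance ratio does not depend on which one:

```r
n.a <- 29
n.b <- 21
F.stat <- rep(0, 10^5)
for (i in 1:10^5) {
  S.a <- var(rnorm(n.a, 3.5, 3))  # sample variance, first sample
  S.b <- var(rnorm(n.b, 3.5, 3))  # sample variance, second sample
  F.stat[i] <- S.a/S.b            # the variance-ratio statistic
}
# actual significance level: probability of landing outside the
# central 95% region of the F-distribution on 28 and 20 df
mean(F.stat < qf(0.025, n.a - 1, n.b - 1)) +
  mean(F.stat > qf(0.975, n.a - 1, n.b - 1))
```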
We compute the test statistic “F” as the ratio of the two sample variances “S.a” and “S.b”. The last expression computes the probability that the test statistic is either less than “qf(0.025,n.a-1,n.b-1)”, or larger than “qf(0.975,n.a-1,n.b-1)”. The term “qf(0.025,n.a-1,n.b-1)” is the 0.025-percentile of the 𝐹-distribution on 28 and 20 degrees of freedom and the term “qf(0.975,n.a-1,n.b-1)” is the 0.975-percentile of the same 𝐹-distribution. The result of the last expression is the actual significance level of the test. We obtain that the actual significance level of the test when the measurements are Normally distributed is 0.05074, which is in agreement with the nominal significance level of 5%. Indeed, the nominal significance level is computed under the assumption that the distribution of the measurement is Normal. We repeat essentially the same simulations as before. We only change the distribution of the samples from the Normal to the Exponential distribution:
The actual significance level of the test is 0.27596, which is much higher than the nominal significance level of 5%. Through this experiment we may see that the 𝐹-test is not robust to the divergence from the assumed Normal distribution of the measurement. If the distribution of the measurement is skewed (the Exponential distribution is an example of such a skewed distribution) then the actual significance level may deviate substantially from the nominal one.
The estimate is equal to the ratio of the sample variances 𝑠²𝑎/𝑠²𝑏. It obtains the value 0.6438381. Notice that the information regarding the sample averages and the sizes of the sub-samples is not relevant for the point estimation of the parameter. We use the formula:
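A hedged sketch of the confidence interval for the ratio of variances, using the standard 𝐹-based formula; the sub-sample sizes 29 and 21 are an assumption carried over from the earlier parts of the exercise:

```r
ratio <- 0.6438381  # the estimated ratio of the sample variances
n.a <- 29
n.b <- 21
# 95% confidence interval for the ratio of the variances
c(ratio/qf(0.975, n.a - 1, n.b - 1), ratio/qf(0.025, n.a - 1, n.b - 1))
```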
Chapter 14

Exercise 14.1

The line marked in red is increasing and at 𝑥 = 0 it seems to obtain the value 𝑦 = 0. An increasing line is associated with a linear equation with a positive slope coefficient (𝑏 > 0). The only equation with that property is Equation 3, for which 𝑏 = 1. Notice that the intercept of this equation is 𝑎 = 0, which agrees with the fact that the line passes through the origin (𝑥, 𝑦) = (0, 0). If the 𝑥-axis and the 𝑦-axis were on the same scale then one would expect the line to be tilted at a 45-degree angle. However, here the axes are not on the same scale, so the tilting is different. The 𝑥-value of the point marked with a red triangle is 𝑥 = −1. The 𝑦-value is below 5. The observation that has an 𝑥-value of -1 is Observation 6. The 𝑦-value of this observation is 𝑦 = 4.3. Notice that there is another observation with the same 𝑦-value, Observation 3. However, the 𝑥-value of that observation is 𝑥 = 1.6. Hence it is the point that is on the same level as the marked point, but it is placed to the right of it.

Exercise 14.2

The intercept is equal to 𝑎 = −3.60 and the slope is equal to 𝑏 = 2.13. Notice that the slope is the coefficient that multiplies the explanatory variable and the intercept is the coefficient that does not multiply the explanatory variable. The value of the explanatory variable for the 3rd observation is 𝑥3 = 4.2. When we use this value in the regression formula we obtain that:
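The evaluation of the regression formula at 𝑥3 = 4.2 can be carried out directly:

```r
# a + b * x3 with a = -3.60, b = 2.13, x3 = 4.2
-3.60 + 2.13 * 4.2  # 5.346
```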
Exercise 14.3

After saving the file “aids.csv” in the working directory of R we read its content into a data frame by the name “aids”. We then produce a summary of the fit of the linear regression model with “deaths” as a response and “diagnosed” as the explanatory variable:
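A sketch of the steps described (the object name “fit.deaths” is an assumption):

```r
aids <- read.csv("aids.csv")
fit.deaths <- lm(deaths ~ diagnosed, data = aids)  # fit the model
summary(fit.deaths)                                # report the fit
```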
The estimated value of the slope is 0.6073. The computed 𝑝-value associated with this slope is 1.81 × 10⁻¹⁴, which is much smaller than the 5% threshold. Consequently, the slope is statistically significant. For confidence intervals apply the function “confint” to the fitted model:
We get that the confidence interval for the slope is [0.5422759, 0.672427]. A scatter plot of the two variables is produced by the application of the function “plot” to the formula that involves these two variables. The regression line is added to the plot using the function “abline” with the fitted model as an input:
Observe that the points are placed very close to a line that characterizes the linear trend of the regression. We fit a linear regression model with “diagnosed” as a response and “year” as the explanatory variable and save the fit in the object “fit.diagnosed”. A summary of the model is produced by the application of the function “summary” to the fit:
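A sketch of the fit described:

```r
fit.diagnosed <- lm(diagnosed ~ year, data = aids)  # fit the model
summary(fit.diagnosed)                              # report the fit
```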
Observe that the points do not follow a linear trend. It seems that the number of diagnosed cases increased at an exponential rate during the first years after Aids was discovered. The trend changed in the mid 90’s with a big drop in the number of diagnosed Aids patients. This drop may be associated with the administration of therapies such as AZT to HIV infected subjects that reduced the number of such subjects that developed Aids. In the late 90’s there seems to be yet another change in the trend, with a possible increase in the numbers. The line of linear regression misses all these changes and is a poor representation of the historical development. The take-home message from this exercise is to not use models blindly. Good advice is to plot the data. An examination of the plot would have revealed that the linear model is a poor fit for this data.
Exercise 14.4

We read the data in the table into R. The variable “year” is the explanatory variable and the variable “percent” is the response. The scatter plot is produced using the function “plot” and the regression line, fitted to the data with the function “lm”, is added to the plot using the function “abline”:
Observe that a linear trend is a reasonable description of the reduction in the percentage of workers that belong to labor unions in the post World War II period. We compute the averages, standard deviations and the covariance:
The average of the variable “year” is 1969.143 and the standard deviation is 18.27957. The average of the variable “percent” is 25.84286 and the standard deviation is 7.574927. The covariance between the two variables is −135.6738. The slope of the regression line is the ratio between the covariance and the variance of the explanatory variable. The intercept is the solution of the equation that states that the value of the regression line at the average of the explanatory variable is the average of the response:
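A sketch of the computation, using the summaries reported above:

```r
b <- -135.6738/18.27957^2     # slope: covariance / variance of "year"
a <- 25.84286 - b * 1969.143  # intercept: line passes through the averages
c(intercept = a, slope = b)   # roughly 825.4 and -0.406
```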
Exercise 14.5

The estimate for the expected value for the 𝑖th observation is obtained by the evaluation of the expression 𝑎 + 𝑏 ⋅ 𝑥𝑖 , where 𝑎 and 𝑏 are the coefficients of the fitted regression model and 𝑥𝑖 is the value of the explanatory variable for the 𝑖th observation. In our case 𝑖 = 4 and 𝑥4 = 6.7: 𝑎 + 𝑏 ⋅ 𝑥4 = 2.5 − 1.13 ⋅ 𝑥4 = 2.5 − 1.13 ⋅ 6.7 = −5.071 . Therefore, the estimated expectation of the response is −5.071. The residual from the regression line is the difference between the observed value of the response and the estimated expectation of the response. For the 4th observation we have that the observed value of the response is 𝑦4 = 0.17. The estimated expectation was computed in the previous question. Therefore, the residual from the regression line for the 4th observation is:
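The residual can be computed directly from the values above:

```r
# observed response minus estimated expectation for the 4th observation
0.17 - (2.5 - 1.13 * 6.7)  # 0.17 - (-5.071) = 5.241
```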
The 𝑝-value associated with the slope, 1.77 × 10⁻⁶, is much smaller than the 5% threshold, indicating a significant (negative) trend. The value of R-squared, the fraction of the variability of the response that is explained by the regression model, is 0.1066. The standard deviation is the square root of the variance. It follows that the fraction of the standard deviation of the response that is explained by the regression is √0.1066 = 0.3265. Following our own advice, we plot the data and the regression model:
One may observe that although there seems to be an overall downward trend, there is still a lot of variability about the line of regression. We now fit and summarize the regression model with the size of engine as the explanatory variable: The regression slope is negative. The 𝑝-value is 0.000633, which is statistically significant. The value of R-squared is 0.05603. Consequently, the fraction of the standard deviation of the response that is explained by the current regression model is √0.05603 = 0.2367.
Again, there is variability about the line of regression. Of the two models, the model that uses the curb weight as the explanatory variable explains a larger portion of the variability in the response. Hence, unless other criteria tell us otherwise, we will prefer this model over the model that uses the size of engine as an explanatory variable.

Chapter 15

Exercise 15.1

First we save the file “diet.csv” in the working directory of R and read its content. Then we apply the function “table” to the two variables in the file in order to obtain the requested frequency table:

## healthy 239 273
## illness  25   8

The resulting table has two columns and 4 rows. The third row corresponds to healthy subjects. Of these, 239 subjects used the AHA recommended diet and 273 used the Mediterranean diet. We may also plot this data using a mosaic plot:
Examining this plot one may appreciate the fact that the vast majority of the subjects were healthy and the relative proportion of healthy subjects among users of the Mediterranean diet is higher than the relative proportion among users of the AHA recommended diet. In order to test the hypothesis that the probability of keeping healthy following a heart attack is the same for those that use the Mediterranean diet and for those that use the diet recommended by the AHA we create a 2 × 2 table. This table compares the response of being healthy or not to the type of diet as an explanatory variable. A sequence with logical components, “TRUE” for healthy and “FALSE” for not, is used as the response. Such a sequence is produced via the expression “diet$health=="healthy"”. The table may serve as input to the function “prop.test”:
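A hedged sketch of the steps described; “diet$type” is an assumed name for the column that records which diet each subject used:

```r
healthy <- diet$health == "healthy"      # TRUE for healthy subjects
freq.table <- table(healthy, diet$type)  # the 2 x 2 frequency table
prop.test(freq.table)                    # compare the two probabilities
```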
The function “prop.test” conducts the test that compares between the probabilities of keeping healthy. In particular, the computed 𝑝-value for the test is 0.0001361, which is less than 0.05. Therefore, we reject the null hypothesis that both diets have the same effect on the chances of remaining healthy following a heart attack. The confidence interval for the difference in probabilities is equal to [0.1114300, 0.3313203]. The point estimate of the difference between the probabilities is 𝑝̂𝑎 − 𝑝̂𝑏 = 0.6881720 − 0.4667969 ≈ 0.22 in favor of the Mediterranean diet. The confidence interval proposes that differences as low as 0.11 or as high as 0.33 are not excluded by the data.
The mosaic plot describes the distribution of the 4 levels of the response within the different intervals of values of the explanatory variable. The intervals coincide with the intervals that are used in the construction of the histogram. In particular, the third vertical rectangle from the left in the mosaic is associated with the third interval from the left in the histogram[^6]. This interval is associated with the range of values between 20 and 30. The height of the given interval in the histogram is 2, which is the number of patients with “tetra” levels that belong to the interval. There are 4 shades of grey in the first vertical rectangle from the left. Each shade is associated with a different level of the response. The lightest shade of grey, the uppermost one, is associated with the level “u”. Notice that this is also the shade of grey of the entire third vertical rectangle from the left. The conclusion is that the 2 patients that are associated with this rectangle have Tetrahydrocortisone levels between 20 and 30 and have an unknown type of syndrome. We fit the logistic regression to the entire data in the data frame “cushings” using the function “glm”, with the “family=binomial” argument. The response is the indicator that the type is equal to “b”. The fitted model is saved in an object called “cushings.fit.all”. The application of the function “summary” to the fitted model produces a report that includes the test of the hypothesis of interest:
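A sketch of the fit described (the file name “cushings.csv” is an assumption):

```r
cushings <- read.csv("cushings.csv")
# logistic regression of the indicator of type "b" on "tetra"
cushings.fit.all <- glm(type == "b" ~ tetra,
                        family = binomial, data = cushings)
summary(cushings.fit.all)
```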
The test of interest examines the coefficient that is associated with the explanatory variable “tetra”. The estimated value of this parameter is −0.04220. The 𝑝-value for testing that the coefficient is 0 is equal to 0.418. Consequently, since the 𝑝-value is larger than 0.05, we do not reject the null hypothesis that states that the response and the explanatory variable are statistically unrelated.
Specifically, the confidence interval for the coefficient that is associated with the explanatory variable is equal to [−0.1776113, 0.04016772]. If we want to fit the logistic model to a partial subset of the data, say all the observations with values of the response other than “u”, we may apply the argument “subset”[^7]. Specifically, adding the expression “subset=(type!="u")” would do the job[^8]. We repeat the same analysis as before. The only difference is the addition of the given expression to the function that fits the model to the data. The fitted model is saved in an object we call “cushings.fit.known”:
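A sketch of the modified fit:

```r
# refit the model, excluding observations with an unknown type
cushings.fit.known <- glm(type == "b" ~ tetra, family = binomial,
                          data = cushings, subset = (type != "u"))
summary(cushings.fit.known)
```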
The estimated value of the coefficient when considering only subjects with a known type of the syndrome is slightly changed to −0.02276. The new 𝑝-value, which is equal to 0.620, is larger than 0.05. Hence, yet again, we do not reject the null hypothesis.
For the modified confidence interval we apply the function “confint”. We now get [−0.1537617, 0.06279923] as a confidence interval for the coefficient of the explanatory variable. We started by fitting the model to all the observations. Here we use only the observations for which the type of the syndrome is known. The practical implication of using all observations in the fit is equivalent to announcing that the type of the syndrome for observations of an unknown type is not type “b”. This is not appropriate and may introduce bias, since the type may well be “b”. It is more appropriate to treat the observations associated with the level “u” as missing observations and to delete them from the analysis. This approach is the approach that was used in the second analysis.
“Aural skills” are the core skills used by all people involved in music. Many schools and departments of music reserve curricular space for aural skills in classes called “aural skills,” “ear training” (or “ear training and sight singing”), “musicianship,” or other terms. While the word “aural” indicates that we think of these skills as relating to the ear, in many ways they focus more on the brain. These skills belong in two big categories. First, we are developing internalized knowledge and physical structures. For example, we internalize the feeling of conducting a measure “in three” so that we can use that feeling to identify what’s going on in music; and we internalize the sounds of the different notes in a scale and their relationships so that we can draw on these sounds in our own music-making or music-imagining. Second, we are developing habits, and especially habits of attention. When we read music from notation, for example, if we have developed certain eye-movement habits and procedures, we will be much faster and more accurate. We all have lots of practice listening to music, but we can develop habits of listening for specific aspects of the music that relate to our goals—whether they are to write it down, improvise over it, or something else. Now, we should be honest: there’s no way to actually meet our goal of addressing all the “core skills used by all people involved in music.” There are definitely core skills that we have left out. Some of that is due to our own ignorance, particularly of the needs of musicians and music thinkers who focus on repertoires and practices that we’re less familiar with. It is our intention that over time, and with feedback and collaboration, we will address more of what we have left out by accident. Some things, however, are purposefully omitted because if we included everything, the book would be too long and complicated to be useful. Some of what’s in this text may be less useful to certain people than others. 
But we have done our best, based on our own experiences, to make sure the skills described in this book are broadly useful for as wide a variety of musicians as possible.
We decided to create a new aural skills book for two reasons. First, we wanted there to be a reasonably comprehensive Open Educational Resource (OER) textbook available to make aural training more accessible. We hope some instructors and students find the book useful simply because it is freely available, easy to modify and use, and compatible with accessibility aids like screen readers. While the current version of the text focuses on the foundations of aural skills (often associated with university-level classes named “Aural Skills 1”) and therefore isn’t yet ready to replace a textbook designed for multiple semesters of study, our long-term goal is to support an entire aural curriculum with open resources. Second, there are some values that we do not feel are yet adequately represented in current textbooks. Here are the most important values we have striven to align the text with: Empowerment. It’s difficult for teachers to get away from their role as judges and gatekeepers when the most visible end result of a class or assignment is a numerical or letter grade. But as much as possible, we have tried to make sure that our focus is on helping you develop skills and knowledge that will empower you by making you a more fluent musician, a more sophisticated and sensitive listener, and more confident and creative. Each chapter has specific goals for things you’ll be able to do by the end of your studies, and we focus on those goals rather than constant assessment and judgment. We hope that’s satisfying and even enjoyable. Creativity. It’s crucial to us that the focus is on you and your music-making. Many aural skills classes focus only on listening to or reading music that was already created by other people. That’s important, and we’ll do it too, but engaging that music in creative ways, and creating your own music, is at least as important.
Engaging with pre-existing music in ways that involve your own creativity, and creating your own music through improvisation and composition, can be both more fun and more useful to your learning than simply repeating strict dictation and sight-reading of other people’s music. Developmental inclusivity. Current models of aural skills instruction tend to reward students who have already achieved a degree of success with certain skills, particularly familiarity with staff notation, the ability to imagine sound internally, voice and piano experience, and knowledge of music theory. These are definitely strengths to be celebrated, but so are creativity, familiarity with multiple styles, and more. We have tried to embrace a wider vision of desirable skills. And for all the skills we address, we have striven to teach the foundations that everyone needs to be successful. Musical inclusivity. Like you, we are inevitably both empowered and limited by our own experiences and identities. But we seek for our resource to be useful and empowering across a broad range of musical repertoires and practices. It is true that we focus our instruction on several culturally-specific practices, including pitch structures based on keys and triads and a time signature-based understanding of meter. But we have done our best to do so in a manner that is as broadly applicable as possible, that acknowledges and shows respect for other ways of doing things, and that doesn’t prioritize one subset of this repertoire or composer demographic over another. Holistic assessment. Sometimes we need to focus in on a specific skill or idea in order to refine it. But when aural skills classes simply grade the details over and over in excruciating detail (“1 point per note, 1 point per beat”), it’s difficult for students to connect what they’re doing to their broader musicianship, and it’s also natural for students to internalize an impossible standard of perfection.
We always try to keep in mind the broader goals that we are aiming for, particularly in the assessments we provide, and to allow different kinds of stumbles and failures as a natural part of the learning process.
This OER aural skills text is designed to support aural skills instruction at introductory levels, particularly college/university/conservatory classes with names like “Aural Skills 1,” “Ear Training 1,” “Musicianship 1,” “Sight-Singing 1,” etc. The text may also be useful for teachers of high school AP Music Theory or other pre-college classes, though it has not been tailored specifically to the needs of students studying for the AP Music Theory exam. Before we start, we should emphasize that this textbook—while nominally complete and usable in December 2022—is still under fairly active development. Fortunately, we still think this book can be useful to everybody in some way. If you do decide to actually adopt the book, welcome! As we teach from the text ourselves and continue its development, we will try to add advice and possible curricular layouts here. For now, our focus has been on developing the core materials of the text, so our advice is limited. We do suggest combining and reordering chapters as appropriate: it may help to think of them in three groups, with chapters 1–7 covering largely preliminary/foundational skills, chapters 8–13 applying those skills to real-world tasks (including, but not limited to, sight singing and dictation), and chapters 14–15 adding some more difficult foundational skills (form and harmony). We can also point you to some additional OER aural skills resources down below. For those who want to stick with their traditional texts, we hope that you find parts of this book helpful anyway. We feel that many aural skills textbooks do a poor job of teaching some fundamental skills like internal auditory imagery and how to approach “chunking.” Perhaps you’ll find some nice additions to your curriculum in these or other sections, or at least have a resource to send students who struggle with some of these skills. We’ve also done our best to come up with some really creative, applicable, and fun exercises throughout the text.
Even if you don’t adopt this as your primary text, you’re welcome and encouraged to look through our activities, find some you really like, and bring them into your classroom. As the book matures and you get used to its approach, we hope you find more and more that you want to incorporate. If you ever find that you have suggestions or feedback for us, please don’t hesitate to share it. We request that you offer your feedback at . If you decide to use the book in any way, please let Tim Chenette know at
Most instructors will be aware that many existing aural skills curricula and texts are derived primarily from the content and ordering of a music theory curriculum, filtered through two tasks: sight singing and dictation. We are trying to move away from that model, for reasons too numerous to express here. As you consider how this book may be of use in your own teaching, we draw your attention to the following differences between this text and previous aural skills texts you may be familiar with. Accessibility. With their focus on notated music, heavy reliance on working memory, and limited vision of what student success looks like, aural skills classes are notoriously inaccessible to students who don’t have certain specific abilities and habits. This text has been designed with accessibility in mind, with a particular focus on trying to make everything transparent to those who use screen readers. In addition, we present different visions of what aural skills look like, from improvisation to playback to transcription to sight reading, offering success to students with different backgrounds and goals. Finally, we explicitly recognize that students bring different abilities and experiences to the class: for example, when we call for the use of the voice, we try to offer alternative approaches for those whose brain-voice connection is problematic. Inevitably, there are accessibility challenges that we have not yet addressed, but we welcome suggestions and commit to making aural skills acquisition accessible for as wide a range of individuals as possible. A focus on aural fundamentals. Common aural skills tasks like sight singing and dictation require or benefit from an array of abilities, including hearing with reference to key and meter, hearing sound in your head (“audiating”), directing your attention to different parts of the music, and relating music to internalized models.
In fact, many instructors say they use sight singing and dictation largely in order to teach students these abilities. We’re convinced, however, that students who don’t have a foundation in each of these areas won’t develop them automatically. So the initial chapters of this text draw students’ attention to these important skills and give them strategies for improvement. Connections outside of the classroom. We want aural skills to be something musicians do all the time—not just when they’re in the aural skills classroom. We have a series of chapters dedicated to different manifestations of real-world aural skills: improvisation, playback, transcription, sight reading, and leading or participating in an ensemble. In addition, activities in all chapters relate to such real-world concerns as tuning, communication, conducting, and more. Even traditional classroom activities such as sight-singing and dictation include activities and explanations that connect them to real benefits that musicians can experience outside the classroom. Different outcomes. The array of different real-world activities in this text allows some flexibility. Different instructors may care about different manifestations of aural skills and choose different chapters to emphasize, or they may offer students some choice. For example, playback, transcription, and dictation all rely on a core of listening skills: perhaps, given a choice, some jazz musicians would prefer to focus on transcription while some music therapists would focus on playback. Going beyond notation. The traditional aural skills tasks of sight singing and dictation center “traditional” staff notation. While such notation is a useful skill for many musicians, many pedagogues are exploring how to decenter notation in order to get beyond a focus on pitch and rhythm, embrace oral traditions or traditions with other notation systems, and center creativity rather than replication. 
In addition, there are many people who struggle with or cannot use notation. We use different methods of describing rhythms in our Rhythm Skills chapter, offer Playback as an alternative to Transcription and Dictation (though we also feel that Playback builds skills relevant to these), and offer Improvisation as a way of welcoming in student creativity. Welcoming instruments. Many instructors and students have the idea that using an instrument in aural skills is “cheating.” We agree that certain tasks benefit from using the voice, but we are also aware of the extent to which we rely on instrument-based imagery when understanding music. We want to invite students to build and access such imagery, too, so many of the activities in the text specifically call for or welcome different kinds of instruments. Learning in groups. So much of what musicians do is collaborative, particularly in ensembles. In addition, “core” classes like aural skills are often important in building cohesion and community among music students. We embrace group activities throughout the text, and include a chapter explicitly dedicated to applying aural skills in an ensemble. Empowerment. We have done our best to avoid language about how one “must” do things, in favor of offering paths to new skills. We hope this invites students to bring their own goals and internal motivation into the process of aural skills acquisition. Learning, not judgment. Standard aural skills tasks (sight singing, dictation, interval and chord drills) are easy to grade according to a standard of “perfection.” We find this tends to judge students on the abilities they bring to class instead of focusing them on learning, and as a result many students develop fixed mindsets that they are “bad” or “good” at aural skills. Instead of focusing on easy assessment and judgment of student abilities, we have designed every activity in the text as a way to engage in learning. 
Each activity has a listed goal, presented as the desired outcome of the activity rather than something students need to already possess in order to be successful. It’s our hope that this results in a more welcoming, more teaching-based (as opposed to judgment-based), more fun, more creative, and more applicable aural skills curriculum. We welcome your thoughts—and your reports on how it’s working for you!
“Traditional” grading of dictations, interval drills, and sight singing often looks something like “one point per pitch, one point per beat for rhythm.” This standard suggests from the beginning that perfection is expected, which can lead students to develop fixed mindsets—which are bad for learning. It can also be difficult to apply this standard to many of the activities in this text, which embrace group work and creative, open-ended activities designed primarily to foster learning rather than being straightforward to grade. As we and other instructors implement this book, one of
We think the exercises in this text are actually good training for traditional dictations and sight-singing hearings, so if you want to (or are required to) stick with those and traditional grading, perhaps our exercises will simply give you some ideas for in-class activities. If we embrace the idea that students will come to class with different abilities and goals and that it’s okay for them to leave it with different levels of achievement in different areas, then our primary goal may be to encourage active engagement in learning activities. Grading simply based on participation may be appropriate in these circumstances. We also like incorporating self-reflection: for example, a group of students might engage in some creative, collaborative activity, write self-reflections, make goals for next time, and then repeat those steps in the next class. An instructor can grade participation, quality of self-reflection, and whether those goals are achieved or not. Since the activities in this text are each directed towards a particular goal, we think standards-based grading, criterion-referenced grading, and pass-fail grading that emphasizes outcomes are all a good fit. (There are good resources on the internet for exploring these different approaches to assessment.) For example, an instructor might grade an activity pass-fail based on whether the intended goal is achieved, and offer multiple opportunities for students to attempt the exercise.
If you’re looking to adopt Open Educational Resources (OERs) as much as possible in your aural skills curriculum, you may be wondering what else is out there. Here’s the list of what we and our wonderful librarians have found; if you know of other resources, please send them our way and we’ll add them to the list! by Gotham, Gullings, Hamm, Hughes, Jarvis, Lavengood, and Peterson has an , as well as Examples for Sight-Counting and Sight-Singing “” and “” in development by Levi Langolf. by André Mount includes many public domain melodies, formatted to permit either dictation or sight reading, as well as downloadable Sibelius and MusicXML files for these melodies.
singing anthology, downloadable as a pdf or as source code (LaTeX and Lilypond), with printed copies available for a low price. by Adam J. Kolek is a large graded series of recordings for dictation; the answer key can be requested from the author.
Welcome to this text! We hope you find it a creative, fun, and empowering approach to acquiring aural skills. As you embark on your formal education in aural skills, there’s one thing we’ve learned from our years of teaching that we’d like you to know: Everyone is different and hears music differently. People with absolute pitch are often mystified when they learn that not everyone can identify pitch names without context. People who play a lot of music by ear on the guitar are often surprised to hear that others struggle to hear chord progressions. Be respectful and curious about the different ways your peers hear music. And be kind to yourself, too: you will inevitably encounter activities where you feel less successful than your peers, but remember that everyone is different and everyone experiences this feeling at some point. We’re all learning together. You come to this class with strengths. Many strengths can also lead to weaknesses. For example, if you have absolute pitch, you may have avoided situations that would require you to learn to hear relationships among pitches. If you are really good at picking up music by ear, you may have avoided learning to read notation. If you are an excellent sight reader, you may be very uncomfortable with improvisation without the security of knowing what you’re “supposed” to do. If you listen to music in the background every minute of every day, you may have difficulty listening to music with focus and intention. Take a moment and review: what are your musical strengths? What are your weaknesses? Then remind yourself: your current strengths and weaknesses are not facts about you that will never change. Instead, your strengths are your toolbox: the things you know how to use. Your weaknesses are your education plan: we engage in education in order to learn and improve, not in order to pat ourselves on the back for things we can already do. (At least, that should be our goal. Easier said than done.) 
Set an intention now to explore your areas of weakness with curiosity and dedication, and be alert to the temptation to try to avoid them. It’s a common misperception that people simply have a “good” or “bad” ear and there’s nothing we can do to change that. This is not true. Students do come to their formal education with different experiences and abilities. But aural skills acquisition is about building knowledge structures and practicing helpful habits. These are things absolutely everyone can do. Welcome to the journey!
If you’ve found this text independently, rather than through a class— well, wow. We’re honored you’re here! We hope you find this resource helpful! We need to start with the same warning we gave instructors in a different section: while this text officially reached a nominal state of completeness in December 2022, it is still under active development. But hey, it’s out there, and it’s free. So we hope you find it rewarding, feel free to send us your feedback, and watch for future changes that should make it even better! In a classroom setting, instructors are likely to mix and match the chapters a bit. That may not be easy to do on your own, but we do want you to know that you don’t have to read from (virtual) cover to cover for the book to work. If you have something in particular you want to work on, feel free to skip to that chapter. If not, you can either go in order or jump back and forth between chapters. It may help to know that chapters 1–7 cover the most foundational skills, and should work well together. You might work from chapters 1–3 at first, then 4–7, jumping between chapters as you like. The remaining chapters
Absolute pitch, often called perfect pitch, refers to the ability to name pitches you hear without reference to an instrument. It’s a really cool and useful ability, and some aural skills teachers explicitly seek to teach it. Unfortunately, while there’s some debate, most of the research on the subject seems to suggest that after childhood it’s not possible to develop extensive absolute pitch. Fortunately, you don’t need absolute pitch to be successful as a musician. Absolute pitch either comes in different varieties or, maybe, exists on a spectrum. Some people are extremely accurate and quick at naming any pitch at any time. Other people are great with the “white notes” of the piano but a little slower and less accurate with the “black notes.” Others can identify the open strings of the guitar, or the orchestral tuning A of an oboe. Others, even without musical training, can’t necessarily name pitches but can sing their favorite songs at or very close to the pitch level they’ve learned them at from recordings. People with absolute pitch should celebrate the ability, because it really is useful. But there are also good reasons to develop other ways of hearing. For example, sometimes a choir goes flat, or a Baroque ensemble uses a different tuning standard, or a string quartet adjusts the tuning of individual notes to make them more expressive, or a pop song is tuned a quarter-tone up because it improves the “sound”; in these situations, it’s nice to be able to hear the relationships among notes rather than identifying each one. Many people with absolute pitch also find part of their range seems to “shift” at some point in middle age, making it more challenging or even sometimes unpleasant to rely on at this point. And hearing relationships between musical tones is just such an important listening skill that it’s worth learning for its own sake.
Fortunately, you can definitely learn relative pitch—that is, the skill of identifying how pitches relate to each other in some way. This is helpful as an additional, complementary skill for people with absolute pitch, and it’s probably the best way for people without absolute pitch to learn to hear music in more detail. This is the approach we take in this text—listening for relationships, particularly within a key but also in intervals and triads.
This book uses a fair number of Spotify playlists, particularly in the exercises. We do this because it is a relatively straightforward and simple way to curate music for you to work with in a manner that satisfies copyright. However, we know that Spotify does not pay artists what we think they are worth. We encourage you to purchase albums and tracks when you find something you really like to work with or listen to. In terms of usability, you have a few options: If you do not have a Spotify account, you will only be able to listen to a sample of each song, and you won’t be able to control which portion this is. This will work okay for a few exercises, but most exercises will benefit from being able to listen to the beginning or some other specific portion of the song. If you do not wish to sign up for a Spotify account, you may wish to gain access to the songs some other way. If you have a Spotify account, you can either log in to Spotify in your browser or press the Spotify logo on an embedded playlist or song to open it in the Spotify application. From there, you should be able to listen to
I cannot sufficiently express my thanks to my parents, and , who taught me my earliest music lessons. From them, I learned that thinking about music and making music are inseparable. I wish everyone were so lucky as to have the music education I had as a child. Though he is listed as a “collaborator,” was an integral part of the creation of this book. I have long admired Danny’s creative and empowering vision of aural skills, and reached out to him early in the process of creating this text. Since then we have met regularly, shared ideas, co-edited text, co-created activities, and more. As I was on sabbatical in fall 2022 and Danny was not, I had more time to write text and in the end we decided to distinguish between an “author” and a “collaborator.” However, Danny’s authorial voice is recognized in many chapters, and his invaluable advice and ideas have been influential throughout the book. was another early collaborator on the text. Sarah brought important expertise on cognitive science and particularly a sophisticated understanding of how the brain’s various representations of music and musical ideas work together. While Sarah eventually needed to focus on other projects, she was integral to the planning process, and I hope to get her involved again in the future! My thanks also to Textbook Assistants Meghan Hatfield and Ryan Becker. Both are incredibly creative, dedicated, and passionate about sharing music. Meghan and Ryan created graphics, populated playlists, expanded sketchy text and activity descriptions, and gave their own suggestions for how the textbook could be better. Finally, a huge thank-you to the wonderful folks at the Utah State University Merrill Cazier Library who have supported this project, and particularly Stephanie Western, OER Program Manager. A project like this takes a village, and I’m very grateful to those who have been a part of it! -Timothy Chenette
One of my beloved grandparents used to say, “The most important choice you make in life is what you choose to pay attention to.” Attention is often compared to a spotlight: whatever we’re paying attention to seems to be brighter, louder, somehow more “present” than everything else. This is particularly clear in vision. For example, look at the image of a series of tiles below. While these tiles are theoretically all identical, you can probably focus your attention on different shapes among them and make them seem to “pop out” from the image.
Our attentional control is probably most obvious in vision, since we can actually experience our eyes moving around and even use eye-tracking software to detect where we’re focusing our attention. Too bad our ears don’t swivel (unless you’re a cat)! But just as with vision, we can focus on one sound or one component of a sound, and we can move that focus around. It’s just much harder to observe. Consider, for a moment, all the situations where musicians need to exercise our auditory focus of attention. When leading a band rehearsal, for example, a conductor who has just heard something wrong needs to direct their attention to the different parts to figure out where the problem is. When transcribing a tune, a jazz musician or arranger might need to direct their attention to specific notes to determine the exact chord being used at a given moment. Singers asked to make a difficult entrance might need to listen for specific notes at specific times in their ensemble. Keyboard players trying to bring out an inner voice might need to work hard to make sure they are hearing that above the more obvious melody. There are many more. Essentially, one of the most important things a musician does is to pay close attention, and all of our aural skills rely on a foundation of attentional control.
Pay close attention to a piece of sounding music.
Direct their attention to different layers within a musical texture.
Direct their attention particularly to bass lines.
Direct their attention to aesthetic/affective aspects of a piece of music.
Our first goal is simply to build the skill of paying close attention. Unfortunately, our attentional capacities are limited: we simply cannot pay attention to everything available to us. Fortunately, there are strategies we can use when we want to maximize our ability to use the attentional capacities we have. For example, we can use stress-reduction and centering techniques to make sure we are as “fully present” as possible with what we are listening to. These include deep breaths, closing our eyes, and other mindfulness techniques. You may already have some idea of which of these work well for you when you are feeling stressed; if not, we encourage you to look around on the internet or elsewhere to find more ideas. There are, however, more music-specific techniques we can use. These rely on our ability to become invested in and engaged with what we are paying attention to. First, it can help to imagine yourself making the music you are hearing—singing or playing it on your primary instrument or the actual sound source/instrument. You may not know the exact notes, but if you can imagine making an analogous sound, this can involve you more deeply in what you hear. Second, the more you get yourself making predictions and reacting to the music you hear, the more deeply you will be involved in it.
below, with a focus on the oboe melody from 0:48–1:25 (though you may have a hard time turning it off at this point). Follow this melody with your attention, involving yourself in the sound: hear when it is more intense and when it is more at rest, and predict at each moment where it will go next. It may help to imagine yourself singing along with the melody. your eyes. Then listen to the first minute-and-a-half of the song. (If you have less time, the first 30 seconds already have many of the layers.) As you listen, direct your attention towards each about-to-enter layer, anticipating what it might sound like and where in the musical “texture” you will hear it, involving yourself actively in the sound as it unfolds. Sounds:
Sounds of a car starting
Piano octaves
Piano triads and vocal melody
Bass (two quick notes)
Downward slide (end of Verse 1, around 30 seconds)
Claps
Downward slide again
Sustained low bass and high synth line
High synth choir, increasing volume
Sudden drop to just voice and piano triads
Just an octave in the piano, with the voice (1:25)
Much of the music we interact with every day has multiple things going on at once. There might be any number of melodies and harmonizing lines, perhaps a chord progression, drums, and more. For most people, when there’s more than one thing going on in music, the easiest part of the texture to focus on is either the melody or the highest layer of the texture (often, these are the same). Thus, most of the attentional challenges we face in music are from situations where it would be useful to focus on some element of the music that is not the melody. Several factors can make it more difficult to follow non-melody lines. First, we may have a natural inclination to listen to the highest “voice.” Second, some parts of the texture—notably, bass lines—are more likely to leap around and therefore difficult to hear as a single melody. Third, other parts—notably, inner parts—may be less attention-grabbing because they do not have as many interesting movements, satisfying shapes, and more. Finally, most of us have sung or played plenty of melodies and thus have a certain amount of “feeling” for what kinds of things they will do, but the patterns and tendencies of inner voices and bass lines may be less familiar. Fortunately, the same strategies we used above for simply paying close attention to single melodies can also help direct our attention around a texture. The activities presented here are intended to give you practice and scaffolding for the skill of listening to different parts of a texture.
Goal: Practice directing attention to different parts of complex textures
Instructions: Some of the songs are listed below with specific lines and time-stamps to focus on. For the rest, just work on focusing your attention on the parts that aren’t the melody. See if you can pick one and hum along with it on a second listen-through!
Hallelujah, 2:34–3:04: Tenor line (on ‘doo’)
Schuyler Sisters, 1:54–2:15: The string section
Waving Through a Window, 2:47–3:47: Background vocals (on ‘oh’)
The Rose, 1:35–1:47: Alto countermelody
Try to follow each part individually.
Of all the non-melody lines we could listen to, bass lines may be both the most important and the most well-defined. Bass lines are important because they have long been recognized to give a kind of “foundation” to the music, and because they are strongly associated with chords—so following a bass line gets you a long way toward figuring out a chord progression. Bass lines, like inner voices, are not usually as prominent to most listeners as the highest voice. But they have an additional, unique challenge: they tend to leap, making them more difficult to follow. In a later chapter, we’ll work a bit more on learning to “think like a bass line.” For now, we’ll simply work on ways to direct our attention to them, through a series of practice activities.

Goal: Direct your attention to the bass line of different songs
Before you start: Headphones or high-quality speakers are recommended!
Instructions: Listen to the following songs. Pay close attention to following the bass line. On your second listen through, try humming along in a comfortable octave or tracing the line in the air or on paper. Bass lines leap a lot, so your movements or tracing may feel jerky. See if you can follow the bass line through the song, even as the texture of the song may become thicker.
Goal: Help those who have a lot of difficulty following the bass to find a more scaffolded way to practice this skill
Before you start: You’ll need to download the application and find a MIDI file of a song whose bass line you are interested in following.
Instructions: Open the MIDI file in MidiTrail. Listen to the bass line as MidiTrail plays the file, using the visualization to give your brain some idea of where the bass line is and what it is doing. Since MIDI playback often leaves much to be desired, once you feel comfortable following the MIDI version, find a recording of a more musical performance and see if you can transfer your bass-following skills to the recording.
Sometimes we think of ourselves as being able to hear an entire complex musical texture at once holistically, as a “Gestalt.” This, for example, is what a conductor uses when noticing the one wrong note in a chord that involves many instruments playing different notes—it’d be incredibly time-consuming to listen individually to each different one! Nevertheless, we are not able to focus on everything in a texture equally all at once. In sight, our visual attention has a primary focus, but there is also a broader visual field that our eyes and brains are aware of in less detail. As long as that visual field seems mostly familiar, our brains are pretty good at filling in the details and noticing surprising elements, but if what we’re looking at is confusing and unfamiliar, we may only be truly aware of whatever we’re focusing on. It seems like hearing is similar: as one scholar says, “the data suggest that the pitches of many tones can be processed simultaneously, but that listeners may only be consciously aware of a subset of between three and four at any one time” (Oxenham 2013, 21). How can you get better at listening to a whole ensemble or complex texture and figuring out what is going on? The best way is to give your brain lots of models by learning lots of music in a style similar to what you plan to listen to. For example, if you aim to be a band conductor, learn lots and lots of band music. Your brain will use its memories of other pieces it knows to fill in details of the parts of the sound it isn’t focused on, and the more models it has, the better it will do.
Many people fall in love with music because of how it makes them feel. For such people, the technical focus of a lot of the listening we do in formal music education can draw their attention away from this aspect of music that they most love. That’s a shame! We believe strongly in the importance of being able to listen for the technical details of music. But we also believe strongly that we should be able to direct our attention to more holistic aspects of music such as tension, emotion, and dramatic shape—both because this makes us more “musical” and because it is so much of what most people love about music. As we move forward in this text, never lose touch with your instinctive reactions to music. They are valuable in and of themselves. And when we can place them alongside our technical listening skills and see how they relate, our learning will be even more powerful.
Instructions: While listening to the various songs in this playlist, try air-tracing the tension curves and/or arc of the music.
Reflection: What do you notice about the more tense moments in the music? In juxtaposition, what are some key traits in music that you find calming, sad, or low? What aspects of the music differentiate the affects of “Run Free” by Hans Zimmer and “Romantic Flight” by John Powell? Take note of these characteristics so that you can add more feeling to the style and musicianship of your repertoire!

Option 2: Draw a visual representation of music
Instructions: Draw or paint the feelings that are represented in the songs of the playlist below. It may be helpful to start with a structure or frame such as a circle.
Reflection: What kinds of images does the music invoke in you? How did your visual
Across cultures, music is often associated with movement. This movement ranges from elaborate dances to simple toe taps and head nods. When we move to music in this way, we are aligning bodily motions with events and processes we hear in the music. Of course, to accurately align our movements, we have to anticipate when these events and processes will happen. Regardless of music education, most people can do this to at least some extent. Many cultures have also invented some kind of way for musicians to keep track of, coordinate, and communicate about time in music. These take different forms, including West African bell patterns, European time signatures, Indian talas, and more. Each of these is a complex, culturally-specific combination of innate bodily responses and invented concepts and systems. We’ll eventually focus on time signatures, but first we’ll focus on paying attention to your body’s natural responses to music, and on making these responses more detailed. Starting here has two benefits. First, it focuses us on a skill that is widely applicable (though in different ways) across many musical cultures rather than starting right in with cultural specificity. Second, it can actually clarify aspects of time signatures if we start with something a little more intuitive. The ways you’re inclined to move to music are, of course, already influenced by your cultural background. For example, maybe you have been brought up in a culture where public bodily motion is frowned upon, and you don’t feel comfortable with much more than a subtle head nod; or maybe you’ve been brought up in a culture where dance is embraced and it’s all you can do not to get up and groove when you hear certain music. Whatever your background, that’s ok! Embrace it and refer back to it as we get into the terms and systems people use to talk about time in music.
Identify whether there is a recurring temporal pattern in a piece of music
Entrain to a beat in sounding music (equal or unequal)
Identify a beat cycle (measure) in sounding music, including identifying where that cycle begins
Entrain to a simple or compound division of a beat
Given a tempo, conduct a beat pattern
Internally generate a metrical framework (beat, division, cycle)
Map felt patterns to appropriate meter signs
Use conventional stylistic markers to identify conventional meters (popular music backbeat, waltz, etc.)
Entrain to a hypermetric pattern using simple gestures
We’ll start simply by exploring your natural bodily responses to music. Though we call them “natural,” they are a combination of innate characteristics of your body with habits and patterns you’ve learned from the people and cultures that surround you. As we explore your natural bodily responses to music, we’ll also think about predictability, since this is a priority of the system of time signatures. You’ll likely find that some music doesn’t invite much movement. Often, this is because it’s hard to predict what will happen next and/or exactly when it will happen. Some of this music may be written or imagined with time signatures, but in a way that’s difficult to perceive. Some of it may be created without reference to a time signature at all. Figuring out how to describe this music with time signatures may require vagueness (“it’s free and improvisational”), a lot of work (“I decided where to put all the downbeats, but the time signatures are constantly changing”), or atypical uses of the system (“I wrote it out in 4/4, but don’t pay attention to the downbeats because it’s totally not in 4/4”). A lot of music, however, invites movement through a certain amount of predictability: we have a sense of when important events are likely to happen in the music, so we can move our bodies to align with these. When this is the case, the time signature system prioritizes regular/consistent motion that aligns with important events (long notes, chord changes, bass notes, etc.) in the music as often as possible. Regularity/consistency is particularly important here, so we encourage you to use bodily motions that are easy to keep at a consistent speed. Large-muscle motions are usually best because it’s fairly obvious when we start accidentally doing them at a different speed; these motions include swinging your whole arm, swaying your body, nodding your head, and even tapping your toes. 
If you use small-muscle actions like tapping your fingers, clapping your hands, or using a vocal syllable, you may find yourself accidentally tracking different speeds at different times (that is, following what some people call the “rhythm” rather than the “meter”).

Listen to the following songs. For each:
1. Allow your body to move to the music in some way. Which body part(s) are you moving? Are your motions repetitive (head nods, body sways, toe taps) or more interpretive (playing an “air instrument” or moving continuously through space)? How fast are you moving? Can you specify what in the music you are responding to?
2. If you weren’t engaging in repeated, consistent movements, try to find something you can do over and over to the music at roughly the same speed. If you can do this, what are you aligning with in the music? Optionally, use a metronome or metronome app’s tap function to determine the speed of your repetition in beats per minute.
3. If step 2 worked for you, try to find a slower way to move to the music. How is this different? Is it easier or harder? Then try to find a faster way to move to the music. How is this different? Is it easier or harder?
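The “tap tempo” feature mentioned in step 2 is just simple arithmetic: beats per minute is 60 divided by the average time between taps. A minimal sketch of that calculation (the function name `tap_tempo` is our own illustration, not any particular app’s API):

```python
def tap_tempo(tap_times):
    """Estimate beats per minute from a list of tap timestamps (in seconds).

    BPM = 60 / (average interval between consecutive taps).
    """
    if len(tap_times) < 2:
        raise ValueError("need at least two taps to estimate a tempo")
    # Differences between consecutive taps give the beat intervals.
    intervals = [later - earlier for earlier, later in zip(tap_times, tap_times[1:])]
    avg_interval = sum(intervals) / len(intervals)
    return 60.0 / avg_interval

# Taps half a second apart correspond to 120 BPM.
print(tap_tempo([0.0, 0.5, 1.0, 1.5]))  # → 120.0
```

Averaging over several taps, as real metronome apps do, smooths out the small timing wobbles of human tapping.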
Now we’re going to start connecting your natural physical response to music with the system and terminology of “time signatures.” This system was developed in European “classical” music and largely adopted (and occasionally adapted) by American “popular” music (broadly defined). We use this framework not because these perspectives are universal, but because they are useful when you are surrounded by these cultures. As we work to connect your bodily motion to time signatures, let’s start with the idea of a “beat.” This term has been defined in many different ways, but for us, it simply means the way you tend to move, in relatively consistent and repeating ways, to a piece of music upon first listening to it. Often, the majority of listeners will agree on the beat; other times, there may be two or even more possibilities. Studies suggest that we tend to gravitate to the range of 80–120 beats per minute (BPM); typically, if people disagree about the beat, it’s because there is no single way to move consistently to the music within that range. (That said, the majority of these studies were conducted on members of Western, Educated, Industrialized, Rich, and Democratic—that is, WEIRD—populations, so this isn’t necessarily universal.) We’ll continue to use bodily motion as we determine the beat. Bodily motions that tend to be particularly helpful in “entraining” to beats in the typical range include arm waves, head nods, and toe taps. Most often, every beat in a song is about the same length of time (unless there is a tempo change). However, there is also a significant amount of music where beats alternate in some pattern between different lengths, often with “long” and “short” beats in a length ratio of 3:2. These are sometimes called “isochronous” (same beat length) vs. “non-isochronous” or “mixed” (different beat lengths) meters.
Because mixed meters present additional challenges and are less common, we will include some examples in this chapter but leave detailed instruction for later study. Unless the beat is purposefully obscured, finding it often feels relatively natural. If it doesn’t, make sure you’re moving your body! But if you’re still not consistently finding a beat—or not finding the same beat as the majority of those around you (or your instructor)—it’s a good idea to work with someone one-on-one.

Goal: Use physical motion to entrain to (align our attention and motions with) a beat.
Instructions: For each of the songs below, find the beat. You are encouraged to do this with your body in some way. Head nods, foot taps, and hand taps are often particularly helpful in drawing your attention to the layer typically called the “beat.” If you find yourself continuously drawn to different lengths of time (“rhythm” rather than “meter”), however, you might try whole-body sways, which are more likely to stay at the same speed. Optionally, use a metronome or metronome app’s “tap” function to determine the song’s tempo (beats per minute).
Goal: Use physical motion to entrain to (align our attention and motions with) a beat, then determine whether beat lengths are roughly the same most of the time (isochronous) or whether they change lengths (mixed/non-isochronous).
Instructions: For each of the songs below, find the beat. Again, you are encouraged to do this with your body. Once you have found the beat, determine whether the beats are generally the same length throughout the piece (isochronous) or whether they change lengths (mixed/non-isochronous). If the beats change lengths, figure out how you would describe the repeating pattern of longer and shorter beats. For example, you might describe one as “long, short, short, long, short, short, long, short, short,” etc.
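The isochronous/mixed judgment above can be made concrete with a little arithmetic on beat durations. A sketch (our own illustration, with an arbitrary tolerance; real listening is fuzzier than this):

```python
def describe_beat_pattern(beat_lengths, tolerance=0.1):
    """Classify one cycle of beat durations (in seconds) as isochronous or mixed.

    Beats within `tolerance` (as a fraction) of the shortest beat are labeled
    "short"; anything noticeably longer is labeled "long". If every beat is
    "short" (i.e., all roughly equal), the cycle is isochronous.
    """
    shortest = min(beat_lengths)
    labels = ["short" if (b - shortest) / shortest <= tolerance else "long"
              for b in beat_lengths]
    kind = "isochronous" if all(label == "short" for label in labels) else "mixed"
    return kind, labels

# An even four-beat cycle vs. a "long, short, short" cycle (a 3:2:2 ratio,
# like the long/short patterns described in the text).
print(describe_beat_pattern([0.5, 0.5, 0.5, 0.5]))
print(describe_beat_pattern([0.75, 0.5, 0.5]))
```

The “long, short, short” description the activity asks for is exactly the label list this sketch produces.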
Goal: Gain skill in working with songs with multiple different possible beat layers or ambiguous beats.
Instructions: Each of the songs in the following playlist fits one of two descriptions:
The song may have multiple different possible interpretations. In most cases, this means two possible speeds you could move to it within our typical 80–120 beats per minute range. However, sometimes there are two competing interpretations of the beat.
Or the song’s beat may be heavily obscured by syncopation and complexity.
In each case, use your physical motions and your intuitions to figure out how you would move to, and describe, the beat. You will likely find others who
Beats often feel like they group into longer “cycles” that repeat. In non-isochronous/mixed-meter songs, these cycles are often defined by a repeating pattern of longer and shorter beats. In isochronous songs, these cycles are often defined by accompaniment patterns (including in the drums) or chord changes. These cycles of beats are typically called “measures.” Most often, we will find beat measures that are 2 or 3 beats long, though other lengths, like 5 beats, are possible. 2-beat measures are called duple, and 3-beat measures triple. If you’re familiar with time signatures, you’ve almost certainly heard of a meter called 4/4 (“four-four”). This time signature indicates a quadruple, or four-beat, cycle. However, that four-beat measure can almost always be heard as two two- beat cycles; that is, the two following examples aren’t necessarily different in what they communicate to a performer.
We’ll discuss some of the reasons why you might choose one or the other when we talk about culturally-specific expectations below, but for now, we’ll never ask you to choose between them. This is why, in the activity below, we refer to “duple or quadruple” (2 or 4) measures as a single category. If this activity is difficult, it may help to think through the factors that contribute to the perception of measures. These include chord changes, accompaniment rhythmic patterns (such as guitar strumming patterns or piano left-hand patterns), repeating rhythmic patterns, and bass (low) notes.
Goal: Identify measures as either duple/quadruple (2 or 4 beats long) or triple (3 beats long). (Again, the decision to write or describe something as a measure of 2 vs. a measure of 4 is a personal decision and in some cases incorporates cultural factors, such as the use of 4/4 meter as a “default” in much popular music.)
Instructions:
One culturally-specific way of keeping track of measures is with the conducting patterns used by the conductors of ensembles like bands, choirs, and orchestras. These specific ways of waving your arm and hand can be useful if you need to lead such an ensemble, but they also are useful in giving you a physical model for what measures of different lengths might “feel like.” Internalized physical models are a very powerful tool to draw on in your listening and music-making, and those embodied, internalized models will be our focus here: if you wish to learn about the art of conducting for its own sake, including cueing, communicating character, clarifying ictus, and more, this is not the best source. All of the conducting patterns assume that the cycle has a “beginning” beat, called a “downbeat,” represented by a downward arm motion. They also assume an “ending” beat, called an “upbeat,” represented by an upward arm motion. Of course, because this is a cycle, the ending simply prepares for another beginning, and the upbeat often feels not like a point of rest but rather a moment of anticipation as the arm rises in preparation for the next downbeat. In between the downbeat and the upbeat are different numbers and directions of hand-waves to represent the number of beats in the cycle or “measure.” Conducting patterns, in their basic form, thus communicate two pieces of timekeeping information: the beat (represented by each wave of the arm/hand) and the cycle (represented by the pattern and especially the return of each downbeat). The simplest conducting pattern is for a two-beat cycle, shown with a simple down-up pattern, typically inflected with a slight curve away from the body to look like a J (left arm) or backward J (right arm):
For additional beats (5, 6, 7, etc.), we simply add more “ins” or more “outs.” As a rule of thumb, we switch from “ins” to “outs” at the moment in the middle of the cycle that feels most significant for communication. It’s important to note, though, that if you’re working from time signatures (discussed in more depth below), a 6, 9, or 12 on the top of a time signature most often does not indicate 6, 9, or 12 waves of the hand (beats), but rather 6, 9, or 12 beat divisions (discussed below), and thus 2, 3, or 4 beats.
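That rule of thumb—6, 9, or 12 on top usually means beat divisions, not beats—can be written out explicitly. A sketch of the common convention (simplified; the function name is ours, and tempo or style can override this mapping in real scores):

```python
def conducted_beats(numerator):
    """Map a time-signature numerator to (conducted beats per measure, beat type),
    following the common convention described in the text: 6, 9, and 12 on top
    usually indicate compound beats (3 divisions each), so 2, 3, or 4 conducted
    beats; 2, 3, and 4 on top usually indicate that many simple beats.
    """
    compound = {6: 2, 9: 3, 12: 4}  # numerator -> conducted beats
    if numerator in compound:
        return compound[numerator], "compound"
    if numerator in (2, 3, 4):
        return numerator, "simple"
    raise ValueError(f"no single default convention for a numerator of {numerator}")

print(conducted_beats(12))  # → (4, 'compound')
print(conducted_beats(3))   # → (3, 'simple')
```

So a conductor reading 12/8 waves four (compound) beats, not twelve, even though the numerator is 12.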
Goal: Make connections between your embodied experience of conducting and sounding music.
Instructions: Listen to the songs below. Use movement to find the beat and measure, and determine the measure length. Conduct along using the appropriate conducting pattern. If you find this easy, see if you can jump right into conducting without thinking through beat and measure first, but be sure you have someone around who can help determine whether your choices work with the music.
Here’s something confusing. We just said that conducting patterns have a “beginning” and an “end”—but it’s actually very common that music doesn’t start at the “beginning” of the cycle (downbeat), and it’s rare for the last note of a song to be struck on the ending beat of the cycle (upbeat). What’s the difference between a downbeat and the beginning of the music? A downbeat marks where we expect particularly significant, noticeable events in the music to happen. These include chord changes, bass drums and other very low sounds, stressed syllables of text, and more. Sometimes these things happen right at the beginning of the music, but sometimes there’s a short bit of music that leads into the first particularly significant, noticeable event (downbeat). Where there’s a lead-in like this, we call the music before the first downbeat a pickup or anacrusis (plural “anacruses”). The song “Happy Birthday to You,” for example, begins with a pickup. The most-stressed syllable, “birth-” of “birthday,” comes on the first downbeat; this is also often where an accompanying instrument such as a guitar or piano will play its first chord. “Happy,” then, is a pickup, which in this case lasts one beat. Perhaps determining whether music starts on a pickup or a downbeat is simple and intuitive to you, but perhaps not, and that’s ok—here’s some advice. Factors that contribute to cycles include chord changes, accompaniment rhythmic patterns (such as guitar strumming patterns or piano left-hand patterns), repeating rhythms in the melody, and bass (low) notes. Above all, instead of just counting out the cycle to see if it feels right, make a significant bodily motion—say, a particularly forceful downbeat with your arm, or a sway of the entire body—on what you think is each downbeat. Using your internalized bodily habits, it may become clear whether you are aligning with the cycle or conflicting with it. 
And if not, as always, work with someone one-on-one; they can help you understand what it feels like to align your bodily motion with downbeats.

Listen to the following songs. As before, find the beat and the beginnings of cycles/measures. Once you feel comfortable tracking the measures, go back to the beginning of the melody and determine: does it start with a downbeat or a pickup? If you find yourself getting more comfortable with this skill, skip the step where you find the beat and the beginnings of cycles/measures; instead, just listen from the beginning of a song, and as soon as you hear the melody starting, see if you can determine whether it begins on a pickup or a downbeat.
We will call these “divisions of the beat” or “beat divisions.” Most of the time, the beats will seem to contain either 3 or 2 divisions. Beats of 3 subdivisions are called compound beats, while beats of 2 subdivisions are called simple beats. If you’re able to find a beat with consistent accuracy but finding the division is difficult, it may help to practice one method of counting or moving for simple beats and one for compound beats. Because divisions are often rather quick, speaking (such as “1-la-li-2-la-li” for compound and “1-and-2-and” for simple, where the numbers represent the beats of the cycle/measure) or small-muscle motions (such as tapping three fingers in succession for compound and two in succession for simple) are probably best. Pay close attention to whether events occur on your taps/syllables or not to determine which is correct. Notably, beat divisions are not usually indicated in conducting patterns since these patterns just show beats and cycles. This means if you want to keep track of a beat, its cycle, and its division, you will either need to add information to the conducting pattern or use a different method. Counting can be helpful: for example, in 1-la-li-2-la-li-1-la-li-2-la-li, the numbers are used to indicate beats and keep track of the cycle, and the fact that there are three syllables associated with each beat (number-la-li) shows that the beats are compound, not simple. Keep in mind, however, that counting does not necessarily activate your internalized physical motions. It may help to focus on each portion in turn (beat, cycle, division) physically before putting it all together into such a composite.
In the previous section, we pretended that every beat divides evenly into either two or three parts. That’s not true. Even within the musical cultures we’re talking about in this text, there’s a third category: swing. In “swung” music, associated primarily with jazz, the beat divides into two unequal parts, with the first part longer and the second part shorter. Nevertheless, when written down, swung music is typically written as if it is in simple meter, and performers simply know to take what look like two even divisions of the beat and make them unequal. Often, the first division of the beat is roughly twice as long as the second. When that’s the case, swing may not be easy to distinguish from compound meter. When listening to jazz, we are more likely to describe the music as “swung”; when listening to music that is not jazz, we are more likely to describe the music as “compound” unless it clearly draws on jazz traditions. It’s also common, however, for the length ratio between the first and second divisions of the beat to be somewhere between 1:1 and 2:1. When this is the case, we typically describe the music using the terminology of simple meter, including time signatures.
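The ratios just described suggest a rough classifier for beat divisions. A sketch (our own simplification of what is really a fuzzy, style-dependent judgment; the 1.2 and 1.8 thresholds are arbitrary illustrations):

```python
def classify_division(first, second):
    """Roughly label a two-part beat division from the durations of its parts.

    Near 1:1 -> "even" (simple); near or above 2:1 -> "compound-or-swing"
    (the ratio alone can't distinguish them -- style and genre decide);
    anything in between -> "swing".
    """
    ratio = first / second
    if ratio < 1.2:
        return "even"
    if ratio > 1.8:
        return "compound-or-swing"
    return "swing"

print(classify_division(0.25, 0.25))  # equal halves → even
print(classify_division(0.3, 0.2))    # ratio 3:2 → swing
print(classify_division(0.34, 0.17))  # ratio 2:1 → compound-or-swing
```

Note that the hardest case in the text—a roughly 2:1 ratio—comes back as “compound-or-swing” here precisely because, as the paragraph says, duration alone can’t settle it; whether the music is jazz-influenced does.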
Goal: Distinguish even from swung divisions of the beat.
Instructions: Listen to the following songs. Begin by finding the beat. Then determine whether the beat division is simple, compound, or swung. When distinguishing compound and swing, pay attention to both technical aspects of the music (for example, does an instrument ever articulate three equal divisions of the beat?) and cultural aspects (is the music jazz-influenced or not?).