A pump that has been in operation for 15 years pumps a constant 450 gpm through 65 feet of dynamic head. The pump uses 6,537 kW‐Hr of electricity per month at a cost of $0.095 per kW‐Hr. The old pump efficiency has dropped to 50%. Assuming a new pump that operates at 90% efficiency is available for $10,270, how long would it take to pay for replacing the old pump?
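One way to set up this payback calculation (a sketch, not an official solution; it assumes the replacement pump moves the same flow against the same head, so that energy use scales inversely with pump efficiency):

```r
old_kwh <- 6537                         # kWh per month at 50% efficiency
new_kwh <- old_kwh * (0.50 / 0.90)      # same water horsepower at 90% efficiency
savings <- (old_kwh - new_kwh) * 0.095  # dollars saved per month
payback_months <- 10270 / savings       # months to recover the purchase price
round(payback_months, 1)
## [1] 37.2
```

That is, roughly 37 months, or a little over three years.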
A utility has annual operating expenses of $4.7 million and a need for $2.1 million in capital improvements. The current water rate is $1.30 per CCF. Last year the utility sold 7270 AF of water and did not meet their capital budget need. How much does the utility need to raise rates in order to cover both the operational and capital requirements? (Round your answer to the nearest cent.)
A 300 hp well operates 6 hours a day and flows 1,700 gpm. The electricity cost is $0.118 per kW‐Hr. The well is also dosed with a 55% calcium hypochlorite tablet chlorinator to a dosage of 1.65 ppm. The tablets cost $1.20 per pound. The labor burden associated with the well maintenance is $60 per day. What is the total operating expense for this well in one year?
A water treatment manager has been asked to prepare a cost comparison between gas chlorine and a chlorine generation system using salt. Gas chlorine is $3.40 per pound and salt is $0.50 per pound. It takes approximately 4 pounds of salt to create 1 gallon of 1.75% chlorine with a specific gravity of 1.20. Assuming that the plant is dosing 12.5 MGD to a dosage of 2.75 ppm, what would be the annual cost of each? Which one is more cost effective?
STATE WATER RESOURCES CONTROL BOARD EXAM FORMULA SHEETS

The following pages include the State Water Resources Control Board exam formula sheets for the Treatment, Distribution, and Wastewater exams. They are included so you can use them to solve the problems in this text. They will be provided to you when you take your state exams. Being familiar with them and using them to solve problems now will help you later on the exam.
Acronyms [typical units]

AST = activated sludge tank
BOD = biochemical oxygen demand [mg/L]
DO = dissolved oxygen [mg/L]
DLR = digester loading rate
ET = evapotranspiration
F/M = food to microorganism ratio
HLR = hydraulic loading rate
hp = horsepower
HRT = hydraulic residence time or detention time [d]
kW = kilowatt
MCRT = mean cell residence time [d]
Mgal = million gallons
MLSS = mixed liquor suspended solids [mg/L]
MLVSS = mixed liquor volatile suspended solids [mg/L]
OLR = organic loading rate
RAS = return activated sludge
RBC = rotating biological contactor
RP = removal percentage
SS = suspended solids [mg/L]
TDH = H_dynamic = total dynamic head [ft]
TF = trickling filter
VS = volatile solids
WAS = waste activated sludge
WOR = weir overflow rate
SLR = solids loading rate [lb/d]
P_water = Q x H_dynamic; where P_water = water power, Q = flow rate, and H_dynamic = total dynamic head. If horsepower is desired as a unit for water power, with gal/min (gpm) for flow rate and feet for total dynamic head, then:

P_water [hp] = (Q [gpm] x H_dynamic [ft]) / 3,960
The target audience for this book is college students who are required to learn statistics, students with little background in mathematics and often no motivation to learn more. It is assumed that the students do have basic skills in using computers and have access to one. Moreover, it is assumed that the students are willing to actively follow the discussion in the text, to practice, and more importantly, to think. Teaching statistics is a challenge. Teaching it to students who are required to learn the subject as part of their curriculum is an art mastered by few. In the past I have tried to master this art and failed. In desperation, I wrote this book. This book uses the basic structure of a generic introduction-to-statistics course. However, in some ways I have chosen to diverge from the traditional approach. One divergence is the introduction of R as part of the learning process. Many have used statistical packages or spreadsheets as tools for teaching statistics. Others have used R in advanced courses. I am not aware of attempts to use R in introductory-level courses. Indeed, mastering R requires much investment of time and energy that may be distracting and counterproductive for learning more fundamental issues. Yet, I believe that if one restricts the application of R to a limited number of commands, the benefits that R provides outweigh the difficulties that R engenders. Another departure from the standard approach is the treatment of probability as part of the course. In this book I do not attempt to teach probability as a subject matter, but only specific elements of it which I feel are essential for understanding statistics. Hence, Kolmogorov's Axioms are out, as are attempts to prove basic theorems and a Balls-and-Urns type of discussion. On the other hand, emphasis is given to the notion of a random variable and, in that context, the sample space.
The first part of the book deals with descriptive statistics and provides probability concepts that are required for the interpretation of statistical inference. Statistical inference is the subject of the second part of the book. The first chapter is a short introduction to statistics and probability. Students are required to have access to R right from the start. Instructions regarding the installation of R on a PC are provided.
The second chapter deals with data structures and variation. Chapter 3 provides numerical and graphical tools for presenting and summarizing the distribution of data. The fundamentals of probability are treated in Chapters 4 to 7. The concept of a random variable is presented in Chapter 4 and examples of special types of random variables are discussed in Chapter 5. Chapter 6 deals with the Normal random variable. Chapter 7 introduces the sampling distribution and presents the Central Limit Theorem and the Law of Large Numbers. Chapter 8 summarizes the material of the first seven chapters and discusses it in the statistical context. Chapter 9 starts the second part of the book and the discussion of statistical inference. It provides an overview of the topics that are presented in the subsequent chapters. The material of the first half is revisited. Chapters 10 to 12 introduce the basic tools of statistical inference, namely point estimation, estimation with a confidence interval, and the testing of statistical hypotheses. All these concepts are demonstrated in the context of a single measurement. Chapters 13 to 15 discuss inference that involves the comparison of two measurements. The context where these comparisons are carried out is that of regression, which relates the distribution of a response to an explanatory variable. In Chapter 13 the response is numeric and the explanatory variable is a factor with two levels. In Chapter 14 both the response and the explanatory variable are numeric, and in Chapter 15 the response is a factor with two levels. Chapter 16 ends the book with the analysis of two case studies. These analyses require the application of the tools that are presented throughout the book. This book was originally written for a pair of courses in the University of the People. As such, each part was restricted to 8 chapters.
Due to lack of space, some important material, especially the concepts of correlation and statistical independence, was omitted. In future versions of the book I hope to fill this gap. Large portions of this book, mainly in the first chapters and some of the quizzes, are based on material from the online book "Collaborative Statistics" by Barbara Illowsky and Susan Dean (Connexions, March 2, 2010). Most of the material was edited by this author, who is the only person responsible for any errors that were introduced in the process of editing. Case studies that are presented in the second part of the book are taken from the Case Studies section of the Rice Virtual Lab in Statistics. The responsibility for mistakes in the analysis of the data, if such mistakes are found, is my own. I would like to thank my mother Ruth who, apart from giving birth, feeding and educating me, has also helped to improve the pedagogical structure of this text. I would like to thank also Gary Engstrom for correcting many of the mistakes in English that I made. This book is open source and may be used by anyone who wishes to do so, under the conditions of the Creative Commons Attribution License (CC-BY 3.0). Jerusalem, June 2011.
Student Learning Objectives

This chapter introduces the basic concepts of statistics. Special attention is given to concepts that are used in the first part of this book, the part that deals with graphical and numeric statistical ways to describe data (descriptive statistics) as well as the mathematical theory of probability that enables statisticians to draw conclusions from data. The course applies the widely used freeware programming environment for statistical analysis, known as R. In this chapter we will discuss the installation of the program and present very basic features of that system. By the end of this chapter, the student should be able to:

- Recognize key terms in statistics and probability.
- Install the R program on an accessible computer.
- Learn and apply a few basic operations of the computational system R.

Why Learn Statistics?

You are probably asking yourself the question, "When and where will I use statistics?". If you read any newspaper or watch television, or use the Internet, you will see statistical information. There are statistics about crime, sports, education, politics, and real estate. Typically, when you read a newspaper article or watch a news program on television, you are given sample information. With this information, you may make a decision about the correctness of a statement, claim, or "fact". Statistical methods can help you make the "best educated guess". Since you will undoubtedly be given statistical information at some point in your life, you need to know some techniques to analyze the information thoughtfully. Think about buying a house or managing a budget. Think about your chosen profession. The fields of economics, business, psychology, education, biology, law, computer science, police science, and early childhood development require at least one course in statistics. Included in this chapter are the basic ideas and words of probability and statistics.
In the process of learning the first part of the book, and more so in the second part of the book, you will understand that statistics and probability work together.

Statistics

The science of statistics deals with the collection, analysis, interpretation, and presentation of data. We see and use data in our everyday lives. To be able to use data correctly is essential to many professions and is in your own best self-interest.
In Figure this data is presented in a graphical form (called a bar plot). A bar plot consists of a number axis (the x-axis) and bars (vertical lines) positioned above the number axis. The length of each bar corresponds to the number of data points that obtain the given numerical value. In the given plot, the frequency of average time (in hours) spent sleeping per night is presented, with hours of sleep on the horizontal x-axis and frequency on the vertical y-axis. Think of the following questions: Would the bar plot constructed from data collected from a different group of people look the same as or different from the example? Why? If the same experiment were carried out in a different group of the same size and age as the one used for the example, do you think the results would be the same? Why or why not? Where does the data appear to cluster? How could you interpret the clustering? The questions above ask you to analyze and interpret your data. With this example, you have begun your study of statistics. In this course, you will learn how to organize and summarize data. Organizing and summarizing data is called descriptive statistics. Two ways to summarize data are by graphing and by numbers (for example, finding an average). In the second part of the book you will also learn how to use formal methods for drawing conclusions from "good" data. The formal methods are called inferential statistics. Statistical inference uses probabilistic concepts to determine if conclusions drawn are reliable or not. Effective interpretation of data is based on good procedures for producing data and thoughtful examination of the data. In the process of learning how to interpret data you will probably encounter what may seem to be too many mathematical formulae that describe these procedures. However, you should always remember that the goal of statistics is not to perform numerous calculations using the formulae, but to gain an understanding of your data.
The calculations can be done using a calculator or a computer. The understanding must come from you. If you can thoroughly grasp the basics of statistics, you can be more confident in the decisions you make in life.

Probability

Probability is the mathematical theory used to study uncertainty. It provides tools for the formalization and quantification of the notion of uncertainty. In particular, it deals with the chance of an event occurring. For example, if the different potential outcomes of an experiment are equally likely to occur then the probability of each outcome is taken to be one divided by the number of potential outcomes. As an illustration, consider tossing a fair coin. There are two possible outcomes – a head or a tail – and the probability of each outcome is 1/2. If you toss a fair coin 4 times, the outcomes may not necessarily be 2 heads and 2 tails. However, if you toss the same coin 4,000 times, the outcomes will be close to 2,000 heads and 2,000 tails. It is very unlikely to obtain more than 2,060 tails and it is similarly unlikely to obtain less than 1,940 tails. This is consistent with the expected theoretical probability of heads in any one toss. Even though the outcomes of a few repetitions are uncertain, there is a regular pattern of outcomes when the number of repetitions is large. Statistics exploits this pattern regularity in order to make extrapolations from the observed sample to the entire population. The theory of probability began with the study of games of chance such as poker. Today, probability is used to predict the likelihood of an earthquake, of rain, or whether you will get an "A" in this course. Doctors use probability to determine the chance of a vaccination causing the disease the vaccination is supposed to prevent. A stockbroker uses probability to determine the rate of return on a client's investments. You might use probability to decide to buy a lottery ticket or not.
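A quick simulation of the coin-tossing experiment, in the spirit of the computerized demonstrations used later in the book (the seed value is arbitrary and only fixes the random draw so the run is reproducible):

```r
set.seed(1)                 # fix the random seed so the run is reproducible
tosses <- sample(c("H", "T"), size = 4000, replace = TRUE)
table(tosses)               # both counts come out close to 2,000
mean(tosses == "H")         # the proportion of heads comes out close to 1/2
```

Running the simulation several times with different seeds shows the regular pattern: the individual counts change, but they stay near 2,000.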
Although probability is instrumental for the development of the theory of statistics, in this introductory course we will not develop the mathematical theory of probability. Instead, we will concentrate on the philosophical aspects of the theory and use computerized simulations in order to demonstrate probabilistic computations that are applied in statistical inference.

Key Terms

In statistics, we generally want to study a population. You can think of a population as an entire collection of persons, things, or objects under study. To study the larger population, we select a sample. The idea of sampling is to select a portion (or subset) of the larger population and study that portion (the sample) to gain information about the population. Data are the result of sampling from a population. Because it takes a lot of time and money to examine an entire population, sampling is a very practical technique. If you wished to compute the overall grade point average at your school, it would make sense to select a sample of students who attend the school. The data collected from the sample would be the students' grade point averages. In presidential elections, opinion poll samples of 1,000 to 2,000 people are taken. The opinion poll is supposed to represent the views of the people in the entire country. Manufacturers of canned carbonated drinks take samples to determine if the manufactured 16-ounce containers do indeed contain 16 ounces of the drink. From the sample data, we can calculate a statistic. A statistic is a number that is a property of the sample. For example, if we consider one math class to be a sample of the population of all math classes, then the average number of points earned by students in that one math class at the end of the term is an example of a statistic. The statistic can be used as an estimate of a population parameter. A parameter is a number that is a property of the population.
Since we considered all math classes to be the population, the average number of points earned per student over all the math classes is an example of a parameter. One of the main concerns in the field of statistics is how accurately a statistic estimates a parameter. The accuracy really depends on how well the sample represents the population. The sample must contain the characteristics of the population in order to be a representative sample. Two words that come up often in statistics are average and proportion. If you were to take three exams in your math classes and obtained scores of 86, 75, and 92, you calculate your average score by adding the three exam scores and dividing by three (your average score would be 84.3 to one decimal place). If, in your math class, there are 40 students and 22 are men and 18 are women, then the proportion of men students is 22/40 and the proportion of women students is 18/40. Average and proportion are discussed in more detail in later chapters.

The R Programming Environment

The R Programming Environment is a widely used open source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and exotic statistical methods and it is probably the most popular system in the academic world for the development of new statistical tools. We will use R in order to apply the statistical methods that will be discussed in the book to some example data sets and in order to demonstrate, via simulations, concepts associated with probability and its application in statistics. The demonstrations in the book involve very basic R programming skills and the applications are implemented using, in most cases, simple and natural code. A detailed explanation will accompany the code that is used. Learning R, like the learning of any other programming language, can be achieved only through practice.
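As a first bit of practice, the average and proportion computations mentioned under Key Terms can be reproduced in R in a few lines:

```r
scores <- c(86, 75, 92)   # three exam scores
mean(scores)              # the average score
## [1] 84.33333
22/40                     # proportion of men in the class
## [1] 0.55
18/40                     # proportion of women in the class
## [1] 0.45
```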
Hence, we strongly recommend that you not only read the code presented in the book but also run it yourself, in parallel to the reading of the provided explanations. Moreover, you are encouraged to play with the code: introduce changes in the code and in the data and see how the output changes as a result. One should not be afraid to experiment. At worst, the computer may crash or freeze. In both cases, restarting the computer will solve the problem … You may download R from the R project home page and install it on the computer that you are using. In addition to installing R we recommend that you also install RStudio. RStudio is an integrated development environment (IDE) for R. It includes a console and a syntax-highlighting editor that supports direct code execution, as well as tools for plotting, history, debugging and workspace management.
The screenshot in Figure shows the RStudio interface. There are four panels. The top-left panel is the editor, the bottom-left panel shows the R-Console. The two right panels show the R-environment, history, file browser, plot panel and help viewer, among others.

Some Basic R Commands

R is an object-oriented programming system. During the session you may create and manipulate objects by the use of functions that are part of the basic installation. You may also use the R programming language. Most of the functions that are part of the system are themselves written in the R language and one may easily write new functions or modify existing functions to suit specific needs. Let us start by opening RStudio. It is good practice to have a separate folder for each project where you will store your data and R-code. We have a folder called IntroStat. When starting an R or RStudio session it is good practice to set the current working directory to your project directory using the "setwd" function. Now we are ready to run our first R-command. Type in the R-Console panel, immediately after the ">" prompt, the expression "1+2" and then hit the Return key. (Do not include the double quotation marks in the expression that you type!) The result is:

## [1] 3

The prompt ">" indicates that the system is ready to receive commands. Writing an expression, such as "1+2", and hitting the Return key sends the expression to be executed. The execution of the expression may produce an object, in this case an object that is composed of a single number, the number "3". Whenever required, the R system takes an action. If no other specifications are given regarding the required action then the system will apply the preprogrammed action. This action is called the default action. In the case of hitting the Return key after the expression that we wrote, the default is to display the produced object on the screen.
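A minimal first session might look like this (the working-directory path is illustrative; replace it with the location of your own project folder):

```r
# Set the working directory to your project folder (the path below is
# illustrative; substitute the location of your own IntroStat folder):
# setwd("~/IntroStat")

1 + 2
## [1] 3
```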
Next, let us demonstrate R in a more meaningful way by using it in order to produce the bar-plot of Figure . First we have to input the data. We will produce a sequence of numbers that form the data. For that we will use the function "c" that combines its arguments and produces a sequence with the arguments as the components of the sequence. Write the appropriate expression at the prompt and hit Return. The result should look like this:

## [1] 5.0 5.5 6.0 6.0 6.0 6.5 6.5 6.5 6.5 7.0 7.0 8.0 8.0 9.0

The function "c" is an example of an R function. A function has a name, "c" in this case, that is followed by brackets that include the input to the function. We call the components of the input the arguments of the function. Arguments are separated by commas. A function produces an output, which is typically an R object. In the current example an object of the form of a sequence was
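The expression itself was lost in this copy; reconstructed from the output shown above, it combines the fourteen values with the function "c":

```r
c(5, 5.5, 6, 6, 6, 6.5, 6.5, 6.5, 6.5, 7, 7, 8, 8, 9)
## [1] 5.0 5.5 6.0 6.0 6.0 6.5 6.5 6.5 6.5 7.0 7.0 8.0 8.0 9.0
```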
created and, according to the default application of the system, was sent to the screen and not saved. If we want to create an object for further manipulation then we should save it and give it a name. For example, if we want to save the vector of data under the name "X" we may write an assignment expression at the prompt (and then hit Return). The arrow that appears after the "X" is produced by typing the less-than key "<" followed by the minus key "-". This arrow is the assignment operator. Observe that you may save typing by calling and editing lines of code that were processed in an earlier part of the session. One may browse through the lines using the up and down arrows on the right-hand side of the keyboard and use the right and left arrows to move along the line presented at the prompt. For example, the last expression may be produced by finding first the line that used the function "c" with the up and down arrows and then moving to the beginning of the line with the left arrow. At the beginning of the line all one has to do is type "X <-" and hit the Return key. Notice that no output was sent to the screen. Instead, the output from the "c" function was assigned to an object that has the name "X". A new object by the given name was formed and it is now available for further analysis. In order to verify this you may write "X" at the prompt and hit Return:

## [1] 5.0 5.5 6.0 6.0 6.0 6.5 6.5 6.5 6.5 7.0 7.0 8.0 8.0 9.0

The content of the object "X" is sent to the screen, which is the default output. Notice that we have not changed the given object, which is still in the memory. The object "X" is in the memory, but it is not saved on the hard disk. With the end of the session the objects created in the session are erased unless specifically saved.
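The assignment and the verification described above can be sketched in full as:

```r
X <- c(5, 5.5, 6, 6, 6, 6.5, 6.5, 6.5, 6.5, 7, 7, 8, 8, 9)  # assign; no output is printed
X                                                           # display the content of X
## [1] 5.0 5.5 6.0 6.0 6.0 6.5 6.5 6.5 6.5 7.0 7.0 8.0 8.0 9.0
```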
You can save objects with the "save" function. To see whether this worked, delete the object "X" from your environment using the function "rm", load your saved object using the "load" function, and print its contents to check whether we successfully restored it:
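A sketch of this round trip (the file name "X.RData" is illustrative; any file name will do):

```r
X <- c(5, 5.5, 6, 6, 6, 6.5, 6.5, 6.5, 6.5, 7, 7, 8, 8, 9)
save(X, file = "X.RData")   # write X to a file on disk
rm(X)                       # remove X from the environment
load("X.RData")             # restore the saved object
X
## [1] 5.0 5.5 6.0 6.0 6.0 6.5 6.5 6.5 6.5 7.0 7.0 8.0 8.0 9.0
file.remove("X.RData")      # tidy up the illustrative file
```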
We used a capital letter to name the object. We could have used a small letter just as well, or practically any combination of letters. However, you should note that R distinguishes between capital and small letters. Hence, typing "x" in the console window and hitting Return will produce an error message:

## Error in eval(expr, envir, enclos): object 'x' not found

An object named "x" does not exist in the R system and we have not created such an object. The object "X", on the other hand, does exist. Names of functions that are part of the system are fixed, but you are free to choose names for the objects that you create. For example, if one wants to create an object by the name "my.vector" that contains the numbers 3, 7, 3, 3, and -5, then one may write the expression "my.vector <- c(3,7,3,3,-5)" at the prompt and hit the Return key. If we want to produce a table that contains a count of the frequency of the different values in our data we can apply the function "table" to the object "X" (which is the object that contains our data). Notice that the output of the function "table" is a table of the different levels of the input vector and the frequency of each level. This output is yet another type of an object. The bar-plot of Figure can be produced by the application of the function "plot" to the object that is produced as an output of the function "table", that is, by writing plot(table(X)).
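The two commands described above, sketched in full (the counts in the comment follow from the data stored in "X"):

```r
X <- c(5, 5.5, 6, 6, 6, 6.5, 6.5, 6.5, 6.5, 7, 7, 8, 8, 9)
table(X)    # counts: 5 and 5.5 once each, 6 three times, 6.5 four times,
            # 7 twice, 8 twice, and 9 once

my.vector <- c(3, 7, 3, 3, -5)
my.vector
## [1]  3  7  3  3 -5
```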
This plot is practically identical to the plot in Figure . The only difference is in the names given to the axes. These names were changed in Figure for clarity. Clearly, if one wants to produce a bar-plot of other numerical data, all one has to do is replace in the expression plot(table(X)) the object X by an object that contains the other data. For example, to plot the data in my.vector you may use plot(table(my.vector)).

Exercises

Exercise 1.1. A potential candidate for a political position in some state is interested in knowing her chances of winning the primaries of her party and being selected as the party's candidate for the position. In order to examine the opinions of her party's voters she hires the services of a polling agency. The polling is conducted among 500 registered voters of the party. One of the questions that the pollsters asked refers to the willingness of the voters to vote for a female candidate for the job. Forty-two percent of the people asked said that they prefer to have a woman running for the job. Thirty-eight percent said that the candidate's gender is irrelevant. The rest prefer a male candidate. Which of the following is (i) a population, (ii) a sample, (iii) a parameter and (iv) a statistic:

- The 500 registered voters.
- The percentage, among all registered voters of the given party, of those that prefer a male candidate.
- The number 42% that corresponds to the percentage of those that prefer a female candidate.
- The voters in the state that are registered to the given party.

Exercise 1.2. The number of customers that wait in front of a coffee shop at the opening was reported during 25 days. The results were:
Identify:

- The number of days in which 5 customers were waiting.
- The number of waiting customers that occurred the largest number of times.
- The number of waiting customers that occurred the least number of times.

Summary

Glossary

Data: A set of observations taken on a sample from a population.

Statistic: A numerical characteristic of the data. A statistic estimates the corresponding population parameter. For example, the average number of contributions to the course's forum for this term is an estimate for the average number of contributions in all future terms (parameter).

Statistics: The science that deals with processing, presentation and inference from data.

Probability: A mathematical field that models and investigates the notion of randomness.

R functions introduced in this chapter:

- setwd(dir) is used to set the working directory to dir.
- c() is a generic function which combines its arguments.
- save(..., file) writes R objects to the specified file.
- load() reloads datasets written with the function save.
- rm() can be used to remove objects.
- table() constructs a table of counts of the different values.
Discuss in the forum

A sample is a subgroup of the population that is supposed to represent the entire population. In your opinion, is it appropriate to attempt to represent the entire population only by a sample? When you formulate your answer to this question it may be useful to come up with an example of a question from your own field of interest that one may want to investigate. In the context of this example you may identify a target population which you think is suited for the investigation of the given question. The appropriateness of using a sample can be discussed in the context of the example question and the population you have identified.
Student Learning Objectives

In this chapter we deal with issues associated with the data that is obtained from a sample. The variability associated with this data is emphasized and critical thinking about the validity of the data is encouraged. A method for the introduction of data from an external source into R is proposed and the data types used by R for storage are described. By the end of this chapter, the student should be able to:

- Recognize potential difficulties with sampled data.
- Import data into R.
- Organize your work in R-scripts.
- Create and interpret frequency tables.

The Sampled Data

The aim in statistics is to learn the characteristics of a population on the basis of a sample selected from the population. An essential part of this analysis involves consideration of variation in the data.

Variation in Data

Variation is given a central role in statistics. To some extent the assessment of variation and the quantification of its contribution to uncertainties in making inference is the statistician's main concern. Variation is present in any set of data. For example, 16-ounce cans of beverage may contain more or less than 16 ounces of liquid. In one study, eight 16-ounce cans were measured, with the following results (in ounces):
15.8, 16.1, 15.2, 14.8, 15.8, 15.9, 16.0, 15.5

Measurements of the amount of beverage in a 16-ounce can may vary because the conditions of measurement varied or because the exact amount, 16 ounces of liquid, was not put into the cans. Manufacturers regularly run tests to determine if the amount of beverage in a 16-ounce can falls within the desired range.

Variation in Samples

Two random samples from the same population, while resembling the population, will nonetheless be different from each other. Suppose Doreen and Jung both decide to study the average amount of time students sleep each night and use all students at their college as the population. Doreen may decide to sample randomly 50 students from the entire body of college students. Jung also collects a random sample of 50 students. The randomness in the sampling will result in Doreen's sample consisting of different students than Jung's sample and therefore exhibiting different variation in sleeping patterns, purely arising from sampling variation (as opposed to variation arising from error in measuring sleep for a given student). Both samples will however resemble the population. If Doreen and Jung took larger samples (i.e. the number of data values is increased), their samples and their characteristics, for example the average amount of time a student sleeps, would be closer to the actual population value. Still, their samples would most probably be different from each other. The size of a sample (often called the number of observations) plays a key role in determining the uncertainty regarding what we can learn from our data about the population. The examples you have seen in this book so far have been small for convenience, but are usually too small to learn something with relative certainty about a population. Samples of only a few hundred observations, or even smaller, can however be sufficient.
In polling, samples that consist of 1200 to 1500 observations are considered large enough and good enough if the survey is random and is well done. The theory of statistical inference, which is the subject matter of the second part of this book, provides justification for these claims as well as techniques to quantify the uncertainty arising from random sampling and to make decisions about sample sizes. Finally, for the reasons outlined above, if an investigator collects data it will often vary somewhat from the data someone else is collecting for the same purpose. However, if two or more investigators are collecting data from the same population using sound methods, the conclusions drawn from the samples should nonetheless be broadly consistent.
Frequency The primary way of summarizing the variability of data is via the frequency distribution. Consider an example. Twenty students were asked how many hours they worked per day. Their responses, in hours, are listed below:
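The counts shown in the table output below can be reproduced in R. The specific sequence of responses is not given in the text, so the vector below is a hypothetical ordering that is consistent with the reported frequencies:

```r
# Hypothetical responses (hours worked per day) for the 20 students;
# any ordering with the same counts produces the same table.
work.hours <- c(5, 6, 3, 3, 2, 4, 7, 5, 2, 3, 5, 6, 5, 4, 4, 3, 5, 2, 5, 3)

# Tabulate the frequency of each distinct value
table(work.hours)
```

Applying "table" to the sequence yields the frequencies 3, 5, 3, 6, 2 and 1 for the values 2 through 7.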
## work.hours
## 2 3 4 5 6 7
## 3 5 3 6 2 1
Recall that the function "table" takes as input a sequence of data and produces as output the frequencies of the different values. We may have a clearer understanding of the meaning of the output of the function "table" if we present the outcome as a frequency table that lists the different data values in ascending order together with their frequencies. To that end we may apply the function "data.frame" to the output of the "table" function and obtain:
The first column of the resulting data frame lists the data values and the second column lists their frequencies: there are 3 students who work 2 hours, 5 students who work 3 hours, etc. The total of the frequency column, 20, represents the total number of students included in the sample. The function "data.frame" transforms its input into a data frame, which is the standard way of storing statistical data. We will introduce data frames in more detail in a section below. A relative frequency is the fraction of times a value occurs. To find the relative frequencies, divide each frequency by the total number of students in the sample – 20 in this case. Relative frequencies can be written as fractions, percents, or decimals. As an illustration let us compute the relative frequencies in our data:
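A minimal sketch of the computation, assuming the 20 responses are stored in a vector named "work.hours" (a hypothetical name, consistent with the output shown below):

```r
# Hypothetical data consistent with the frequencies reported in the text
work.hours <- c(5, 6, 3, 3, 2, 4, 7, 5, 2, 3, 5, 6, 5, 4, 4, 3, 5, 2, 5, 3)

freq <- table(work.hours)   # frequencies of the distinct values
freq / sum(freq)            # relative frequencies (each frequency divided by 20)
```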
## work.hours
## 2 3 4 5 6 7
## 0.15 0.25 0.15 0.30 0.10 0.05
We stored the frequencies in an object called "freq". The content of the object is the frequencies 3, 5, 3, 6, 2 and 1. The function "sum" sums the components of its input. The sum of the frequencies is the sample size, the total number of students that responded to the survey, which is 20. Hence, when we apply the function "sum" to the object "freq" we get 20 as an output. The outcome of dividing an object by a number is a division of each element in the object by the given number. Therefore, when we divide "freq" by "sum(freq)" (the number 20) we get a sequence of relative frequencies. The first entry of this sequence is 3/20 = 0.15, the second entry is 5/20 = 0.25, and the last entry is 1/20 = 0.05. The sum of the relative frequencies should always be equal to 1:
## [1] 1
The cumulative relative frequency is the accumulation of previous relative frequencies. To find the cumulative relative frequencies, add all the previous relative frequencies to the relative frequency of the current value. Alternatively, we may apply the function "cumsum" to the sequence of relative frequencies:
## 2 3 4 5 6 7
## 0.15 0.40 0.55 0.85 0.95 1.00
Observe that the cumulative relative frequency of the smallest value 2 is the relative frequency of that value (0.15). The cumulative relative frequency of the second value 3 is the sum of the relative frequency of the smaller value (0.15) and the relative frequency of the current value (0.25), which produces a total of 0.15 + 0.25 = 0.40. Likewise, for the third value 4 we get a cumulative relative frequency of 0.15 + 0.25 + 0.15 = 0.55. The last entry of the cumulative relative frequency column is one, indicating that one hundred percent of the data has been accumulated. The computation of the cumulative relative frequency was carried out with the aid of the function "cumsum".
This function takes as an input argument a numerical sequence and produces as output a numerical sequence of the same length with the cumulative sums of the components of the input sequence. Critical Evaluation Inappropriate methods of sampling and data collection may produce samples that do not represent the target population. A naïve application of statistical analysis to such data may produce misleading conclusions. Consequently, it is important to evaluate critically the statistical analyses we encounter before accepting the conclusions that are obtained as a result of these analyses. Common problems in data that one should be aware of include: Problems with Samples: A sample should be representative of the population. A sample that is not representative of the population is biased. Biased samples may produce results that are inaccurate and not valid. Data Quality: Avoidable errors may be introduced to the data via inaccurate handling of forms, mistakes in the input of data, etc. Data should be cleaned from such errors as much as possible. Self-Selected Samples: Responses only by people who choose to respond, such as call-in surveys, are often biased. Sample Size Issues: Samples that are too small may be unreliable. Larger samples, when possible, are better. In some situations, small samples are unavoidable and can still be used to draw conclusions. Examples: Crash testing cars, medical testing for rare conditions. Undue Influence: Collecting data or asking questions in a way that influences the response. Causality: A relationship between two variables does not mean that one causes the other to occur. They may both be related (correlated) because of their relationship to a third variable. Self-Funded or Self-Interest Studies: A study performed by a person or organization in order to support their claim. Is the study impartial? Read the study carefully to evaluate the work.
Do not automatically assume that the study is good, but do not automatically assume the study is bad either. Evaluate it on its merits and the work done. Misleading Use of Data: Improperly displayed graphs and incomplete data. Confounding: Confounding in this context means confusing. It occurs when the effects of multiple factors on a response cannot be separated. Confounding makes it difficult or impossible to draw valid conclusions about the effect of each factor. Reading External Data into R In the examples so far the size of the data set was very small and we were able to input the data directly into R with the use of the function "c". In more practical settings the data sets to be analyzed are much larger and it is very inefficient to enter them manually. In this section we learn how to read data from a file in the Comma Separated Values (CSV) format. The file "ex1.csv" contains data on the sex and height of 100 individuals. This file is given in the CSV format. The file can be found on the internet at . We will discuss the process of reading data from a file into R and use this file as an illustration. Saving the File and Setting the Working Directory Before the file is read into R you should obtain a copy of the file, store it in some directory on the computer, and read the file from that directory. We recommend that you create a special directory in which you keep all the material associated with this course. In the explanations provided below we assume that the directory in which the file is stored is called "IntroStat". Files in the CSV format are ordinary text files. They can be created manually or as a result of converting data stored in a different format into this particular format. A common way to produce, browse and edit CSV files is by the use of standard electronic spreadsheet programs such as Excel or Calc. The Excel spreadsheet is part of Microsoft's Office suite.
The Calc spreadsheet is part of the free LibreOffice suite. Note, however, that you should never edit raw data files directly. Keep them in a separate directory and never overwrite them with changes. Any changes you make to the data should be retraceable and documented through R-scripts, and the changed data should be saved under a different name. Opening a CSV file with a spreadsheet program displays a spreadsheet with the content of the file. Values in the cells of the spreadsheet may be modified directly. (However, when saving, one should pay attention to save the file in the CSV format.) Similarly, new CSV files may be created by entering the data in an empty spreadsheet. The first row should include the name of the variable, preferably as a single character string with no empty spaces. The following rows may contain the data values associated with this variable. When saving, the spreadsheet should be saved in the CSV format by the use of the "Save As" dialog, choosing there the option of CSV in the "Save as Type" selection. After saving a file with the data in a directory, R should be notified where the file is located in order to be able to read it. A simple way of doing so is by setting the directory that contains the file as R's working directory. The working directory is the first place R searches for files. Files produced by R are saved in that directory. In RStudio one may set the working directory of the active R session to be some target directory in the "Files" panel. The dialog is opened by clicking on "More" on the left hand side of the toolbar at the top of the Files panel. In the menu that opens, selecting the option "Set As Working Directory" will start the dialog. (See Figure .) Browsing via this dialog window to the directory of choice, selecting it, and approving the selection by clicking the "OK" button in the dialog window will set the directory of choice as the working directory of R.
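The same thing can be done from the R console with "setwd". The path below is a hypothetical location for the course directory; adjust it to wherever "IntroStat" actually resides on your computer:

```r
# Set the working directory to the (hypothetical) course directory
setwd("~/IntroStat")

# Confirm where R will look for files such as "ex1.csv"
getwd()

# List the files that R can currently see in the working directory
list.files()
```

Once the working directory is set, files in "IntroStat" can be referred to by name alone, without the full path.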
A full statistical analysis will typically involve a non-negligible number of steps. Retracing these steps in retrospect is often difficult without proper documentation. So while working from the R console is a good way to develop the steps of an analysis, you will need a way to document your work in order for the analysis to be reproducible. Rather than working only from the R console (and changing the working directory every time that R is opened
manually), it is better to organize your R code in so-called scripts. R-scripts are plain text files which contain the R commands that perform your analysis. Executing these scripts will perform your full analysis and thereby make your work reproducible. Figure shows how to create an R-script in RStudio. This is done by clicking on the first button on the main toolbar in RStudio. This will pop up a menu where you can click on R-script. RStudio will then create a new empty R-script in the editing panel as shown in Figure . Here you can add the R commands that will perform your analysis. Make sure to save your new script. Finally, note that you can select one or several lines in your R-script and click on the "Run" button in the toolbar to execute them in the R console. In the rest of this book we assume that a designated directory is set as R's working directory and that all external files that need to be read into R, such as "ex1.csv" for example, are saved in that working directory. Once a working directory has been set, the history of subsequent R sessions is stored in that directory. Hence, if you choose to save the image of the session when you end the session, then objects created in the session will be loaded the next time the R Console is opened.
The function "read.csv" takes as an input argument the address of a CSV file and produces as output a data frame object with the content of the file. Notice that the address is placed between double quotes. If the file is located in the working directory then giving the name of the file as the address is sufficient. Consider the content of the R object "ex.1" that was created by the previous expression, by inspecting the first part using the function "head()": If the file is located in a different directory then the complete address, including the path to the file, should be provided. The file need not reside on the computer. One may provide, for example, a URL (an internet address) as the address. Thus, instead of saving the file of the example on the computer, one may read its content into an R object by using the line of code "ex.1 <- " instead of the code that we provide and the working method that we recommend to follow.
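A minimal sketch of the reading-and-inspecting step (assuming "ex1.csv" has been saved in the working directory):

```r
# Read the CSV file from the working directory into a data frame
ex.1 <- read.csv("ex1.csv")

# Display the first few rows of the data frame
head(ex.1)
```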
## id sex height
The object "ex.1", the output of the function "read.csv", is a data frame. Data frames are the standard tabular format for storing statistical data. The columns of the table are called variables and correspond to measurements. In this example the three variables are: id: A 7-digit number that serves as a unique identifier of the subject. sex: The sex of each subject. The values are either "MALE" or "FEMALE". height: The height (in centimeters) of each subject. A numerical value. When the values of the variable are numerical we say that it is a quantitative variable or a numeric variable. On the other hand, if the variable has qualitative or level values we say that it is a factor. In the given example, sex is a factor and height is a numeric variable. The rows of the table are called observations and correspond to the subjects. In this data set there are 100 subjects, with subject number 1, for example, being a female of height 182 cm and identifying number 5696379. Subject number 98, on the other hand, is a male of height 195 cm and identifying number 9383288. Data Types The columns of R data frames represent variables, i.e. measurements recorded for each of the subjects in the sample. R associates with each variable a type that characterizes the content of the variable. The two major types are: Factors, or Qualitative Data. The type is "factor". Quantitative Data. The type is "numeric". Factors are the result of categorizing or describing attributes of a population. Hair color, blood type, ethnic group, the car a person drives, and the street a person lives on are examples of qualitative data. Qualitative data are generally described by words or letters. For instance, hair color might be black, dark brown, light brown, blonde, gray, or red. Blood type might be AB+, O-, or B+.
Qualitative data are not as widely used as quantitative data because many numerical techniques do not apply to qualitative data. For example, it does not make sense to find an average hair color or blood type. Quantitative data are always numbers and are usually the data of choice because there are many methods available for analyzing such data. Quantitative data are the result of counting or measuring attributes of a population. Amount of money, pulse rate, weight, number of people living in your town, and the number of students who take statistics are examples of quantitative data. Quantitative data may be either discrete or continuous. All data that are the result of counting are called quantitative discrete data. These data take on only certain numerical values. If you count the number of phone calls you receive for each day of the week, you may get results such as 0, 1, 2, 3, etc. On the other hand, data that are the result of measuring on a continuous scale are quantitative continuous data, assuming that we can measure accurately. Measuring angles in radians may result in numbers such as π/6, π/3, π/2, π, 3π/4, etc. If you and your friends carry backpacks with books in them to school, the numbers of books in the backpacks are discrete data and the weights of the backpacks are continuous data. The data are the number of books students carry in their backpacks. You sample five students. Two students carry 3 books, one student carries 4 books, one student carries 2 books, and one student carries 1 book. The numbers of books (3, 4, 2, and 1) are the quantitative discrete data. The data are the weights of the backpacks with the books in them. You sample the same five students. The weights (in pounds) of their backpacks are 6.2, 7, 6.8, 9.1, 4.3. Notice that backpacks carrying three books can have different weights. Weights are quantitative continuous data because weights are measured. The data are the colors of backpacks.
Again, you sample the same five students. One student has a red backpack, two students have black backpacks, one student has a green backpack, and one student has a gray backpack. The colors red, black, black, green, and gray are qualitative data. The distinction between continuous and discrete numeric data is usually not reflected in the statistical methods that are used in order to analyze the data. Indeed, R does not distinguish between these two types of numeric data and stores them both as "numeric". Consequently, we will also not worry about the specific categorization of numeric data and treat them as one. On the other hand, emphasis will be given to the difference between numeric data and factors. One may collect data as numbers and report it categorically. For example, the quiz scores for each student are recorded throughout the term. At the end of the term, the quiz scores are reported as A, B, C, D, or F. On the other hand, one may code categories of qualitative data with numerical values and report the values. The resulting data should nonetheless be treated as a factor. By default, R saves variables that contain non-numeric values as factors. Otherwise, the variables are saved as numeric. The variable type is important because different statistical methods are applied to different data types. Hence, one should make sure that the variables that are analyzed have the appropriate type, and in particular that factors which use numbers to denote their levels are labeled as factors. Otherwise R will treat them as quantitative data. Exercises Exercise 2.1. Consider the following relative frequency table on hurricanes that have made direct hits on the U.S. between 1851 and 2004 ( ). Hurricanes are given a strength category rating based on the minimum wind speed generated by the storm. Some of the entries in the table are missing. TABLE 2.1: Frequency of Hurricane Direct Hits
What is the relative frequency of direct hits of category 1? What is the relative frequency of direct hits of category 4 or more? Exercise 2.2. The number of calves that were born to some cows during their productive years was recorded. The data was entered into an R object by the name "calves". Refer to the following R code:
2.5 Summary Glossary Population: The collection, or set, of all individuals, objects, or measurements whose properties are being studied. Sample: A portion of the population under study. A sample is representative if it characterizes the population being studied. Frequency: The number of times a value occurs in the data. Relative Frequency: The ratio between the frequency and the size of the data. Cumulative Relative Frequency: The term applies to an ordered set of data values from smallest to largest. The cumulative relative frequency is the sum of the relative frequencies for all values that are less than or equal to the given value. Data Frame: A tabular format for storing statistical data. Columns correspond to variables and rows correspond to observations. Variable: A measurement that may be carried out over a collection of subjects. The outcome of the measurement may be numerical, which produces a quantitative variable; or it may be non-numeric, in which case a factor is produced. Observation: The evaluation of a variable (or variables) for a given subject. CSV Files: A digital format for storing data frames. Factor: Qualitative data that is associated with categorization or the description of an attribute. Quantitative: Data generated by numerical measurements. R functions introduced in this chapter: data.frame() creates data frames, tightly coupled collections of variables which share many of the properties of matrices and of lists, used as the fundamental data structure by most of R's modeling software. head() and tail() return the first or last parts of a data frame. sum() and cumsum() return the sum and cumulative sum of the values present in their arguments. read.csv(file) reads a file in table format and creates a data frame from it. Discuss in the forum Factors are qualitative data that are associated with categorization or the description of an attribute.
On the other hand, numeric data are generated by numerical measurements. A common practice is to code the levels of factors using numerical values. What do you think of this practice? In the formulation of your answer to the question you may think of an example of a factor variable from your own field of interest. You may describe a benefit or a disadvantage that results from the use of numerical values to code the levels of this factor.
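As a small illustration of the point raised in the discussion (the variable name and coding below are made up for the example), a numerically coded factor can be declared explicitly with the function "factor" so that R does not mistake it for quantitative data:

```r
# Hypothetical survey responses coded 1 = "single", 2 = "married", 3 = "other"
status.codes <- c(1, 2, 2, 3, 1, 2, 1)

# Without a declaration R treats the codes as numbers; mean() is meaningless here
mean(status.codes)

# Declaring the variable as a factor attaches the intended labels to the codes
status <- factor(status.codes, levels = c(1, 2, 3),
                 labels = c("single", "married", "other"))
table(status)
```

After the declaration, functions that distinguish between variable types will treat "status" as qualitative data rather than as numbers.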
Student Learning Objectives This chapter deals with numerical and graphical ways to describe and display data. This area of statistics is called descriptive statistics. You will learn to calculate and interpret these measures and graphs. By the end of this chapter, you should be able to: Use histograms and box plots in order to display data graphically. Calculate measures of central location: mean and median. Calculate measures of the spread: variance, standard deviation, and inter-quartile range. Identify outliers, which are values that do not fit the rest of the distribution. Displaying Data Once you have collected data, what will you do with it? Data can be described and presented in many different formats. For example, suppose you are interested in buying a house in a particular area. You may have no clue about the house prices, so you may ask your real estate agent to give you a sample data set of prices. Looking at all the prices in the sample is often overwhelming. A better way may be to look at the median price and the variation of prices. The median and variation are just two ways that you will learn to describe data. Your agent might also provide you with a graph of the data. A statistical graph is a tool that helps you learn about the shape of the distribution of a sample. The graph can be a more effective way of presenting data than a mass of numbers because we can see where data clusters and where there are only a few data values. Newspapers and the Internet use graphs to show trends and to enable readers to compare facts and figures quickly.
Statisticians often start the analysis by graphing the data in order to get an overall picture of it. Afterwards, more formal tools may be applied. In the previous chapters we used the bar plot, where bars that indicate the frequencies of values in the data are placed over these values. In this chapter our emphasis will be on histograms and box plots, which are other types of plots. Some of the other types of graphs that are frequently used, but will not be discussed in this book, are the stem-and-leaf plot, the frequency polygon (a type of broken line graph) and the pie chart. The types of plots that will be discussed and the types that will not are all tightly linked to the notion of frequency of the data that was introduced in Chapter and are intended to give a graphical representation of this notion. Histograms The histogram is a frequently used method for displaying the distribution of continuous numerical data. An advantage of a histogram is that it can readily display large data sets. A rule of thumb is to use a histogram when the data set consists of 100 values or more. One may produce a histogram in R by the application of the function "hist" to a sequence of numerical data. Let us read into R the data frame "ex.1" that contains data on the sex and height and create a histogram of the heights:
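A sketch of such a script, assuming "ex1.csv" has been saved in the working directory:

```r
# Read the data on sex and height into a data frame
ex.1 <- read.csv("ex1.csv")

# Plot a histogram of the 100 recorded heights
hist(ex.1$height)
```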
The data set, which is the content of the CSV file "ex1.csv", was used in Chapter in order to demonstrate the reading of data that is stored in an external file into R. The first line of the above script reads the data from "ex1.csv" into a data frame object named "ex.1" that maintains the data internally in R. The second line of the script produces the histogram. We will discuss below the code associated with this second line. A histogram consists of contiguous boxes. It has both a horizontal axis and a vertical axis. The horizontal axis is labeled with what the data represents (the height, in this example). The vertical axis presents frequencies and is labeled "Frequency". By examination of the histogram one can appreciate the shape of the data, the center, and the spread of the data. The histogram is constructed by dividing the range of the data (the x-axis) into equal intervals, which are the bases for the boxes. The height of each box represents the count of the number of observations that fall within the interval. For example, consider the box with the base between 160 and 170. There is a total of 19 subjects with height larger than 160 but no more than 170 (that is, 160 < height ≤ 170). Consequently, the height of that box is 19. The input to the function "hist" should be a sequence of numerical values. In principle, one may use the function "c" to produce a sequence of data and apply the histogram plotting function to the output of the sequence-producing function. However, in the current case we already have the data stored in the data frame "ex.1"; all we need to learn is how to extract that data so it can be used as input to the function "hist" that plots the histogram. Notice the structure of the input that we have used in order to construct the histogram of the variable "height" in the "ex.1" data frame.
One may address the variable "variable.name" in the data frame "dataframe.name" using the format "dataframe.name$variable.name". Indeed, when we type the expression "ex.1$height" we get as output the values of the variable "height" from the given data frame:
In some books a histogram is introduced as a form of a density. In densities the area of the box represents the frequency or the relative frequency. In the current example the height would have been 19/10 = 1.9 if the area of the box represented the frequency, and it would have been (19/100)/10 = 0.019 if the area of the box represented the relative frequency. However, in this book we follow the default of R, in which the height represents the frequency. The extracted sequence of values may serve as input to any function that expects a numeric sequence, such as "hist". (But also other functions, for example, "sum" and "cumsum".) There are 100 observations in the variable "ex.1$height". So many observations cannot be displayed on the screen on one line. Consequently, the sequence of the data is wrapped and displayed over several lines. Notice that the square brackets on the left hand side of each line indicate the position in the sequence of the first value on that line. Hence, the number on the first line is "[1]". The number on the second line is "[16]", since the second line starts with the 16th observation in the display given in the book. Notice that the numbers in the square brackets on your R Console window may be different, depending on the setting of the display on your computer. Box Plots The box plot, or box-whisker plot, gives a good graphical overall impression of the concentration of the data. It also shows how far from most of the data the extreme values are. In principle, the box plot is constructed from five values: the smallest value, the first quartile, the median, the third quartile, and the largest value. The median, the first quartile, and the third quartile will be discussed here, and then once more in the next section. The median, a number, is a way of measuring the "center" of the data. You can think of the median as the "middle value," although it does not actually have to be one of the observed values.
It is a number that separates ordered data into halves. Half the values are the same size or smaller than the median and half the values are the same size or larger than it. For example, consider the following data that contains 14 values:
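The 14 values themselves are not listed here; the vector below reconstructs them from the lower and upper halves that are given in the discussion that follows:

```r
# The 14 ordered data values (reconstructed from the two halves in the text)
x <- c(1, 1, 2, 2, 4, 6, 6.8, 7.2, 8, 8.3, 9, 10, 10, 11.5)

# With an even number of values the median averages the two middle ones
median(x)
```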
Since there is an even number of values, the median is the average of the 7th and 8th ordered values: (6.8 + 7.2)/2 = 7. The median is 7. Half of the values are smaller than 7 and half of the values are larger than 7. Quartiles are numbers that separate the data into quarters. Quartiles may or may not be part of the data. To find the quartiles, first find the median or second quartile. The first quartile is the middle value of the lower half of the data. In this example, the lower half of the data is:
1, 1, 2, 2, 4, 6, 6.8 The middle value of the lower half is 2. The number 2, which is part of the data in this case, is the first quartile, which is denoted Q1. One-fourth of the values are the same as or less than 2 and three-fourths of the values are more than 2. The upper half of the data is: 7.2, 8, 8.3, 9, 10, 10, 11.5 The middle value of the upper half is 9. The number 9 is the third quartile, which is denoted Q3. Three-fourths of the values are less than 9 and one-fourth of the values are more than 9. Outliers are values that do not fit with the rest of the data and lie outside of the normal range. Data points with values that are much too large or much too small in comparison to the vast majority of the observations will be identified as outliers. In the context of the construction of a box plot we identify potential outliers with the help of the inter-quartile range (IQR). The inter-quartile range is the distance between the third quartile (Q3) and the first quartile (Q1), i.e., IQR = Q3 − Q1. A data point that is larger than the third quartile plus 1.5 times the inter-quartile range will be marked as a potential outlier. Likewise, a data point smaller than the first quartile minus 1.5 times the inter-quartile range will also be so marked. Outliers may have a substantial effect on the outcome of statistical analysis, therefore it is important that one is alerted to the presence of outliers. In the running example we obtained an inter-quartile range of size 9 − 2 = 7. The upper threshold for defining an outlier is 9 + 1.5 × 7 = 19.5 and the lower threshold is 2 − 1.5 × 7 = −8.5. All data points are within the two thresholds, hence there are no outliers in this data. In the construction of a box plot one uses a vertical rectangular box and two vertical "whiskers" that extend from the ends of the box to the smallest and largest data values that are not outliers. Outlier values, if any exist, are marked as points above or below the endpoints of the whiskers.
The smallest and largest non-outlier data values label the endpoints of the axis. The first
quartile marks one end of the box and the third quartile marks the other end of the box. The central 50% of the data fall within the box. One may produce a box plot with the aid of the function "boxplot". The input to the function is a sequence of numerical values and the output is a plot. As an example, let us produce the box plot of the 14 data points that were used as an illustration: Observe that the end points of the whiskers are 1, for the minimal value, and 11.5, for the largest value. The end values of the box are 9, for the third quartile, and 2, for the first quartile. The median 7 is marked inside the box. Next, let us examine the box plot for the height data:
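A sketch of the plotting commands just described, together with the numerical summary of the heights (assuming "ex1.csv" is in the working directory):

```r
# Box plot of the 14 illustrative data points (values taken from the text)
x <- c(1, 1, 2, 2, 4, 6, 6.8, 7.2, 8, 8.3, 9, 10, 10, 11.5)
boxplot(x)

# Box plot and numerical summary of the height variable
ex.1 <- read.csv("ex1.csv")
boxplot(ex.1$height)
summary(ex.1$height)
```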
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 117.0 158.0 171.0 170.1 180.2 208.0
The function "summary", when applied to a numerical sequence, produces the minimal and maximal entries, as well as the first, second and third quartiles (the second is the Median). It also computes the average of the numbers (the Mean), which will be discussed in the next section. Let us compare the results with the box plot for the height data. Observe that the median 171 coincides with the thick horizontal line inside the box, that the lower end of the box coincides with the first quartile 158.0, and that the upper end coincides with 180.2, which is the third quartile. The inter-quartile range is 180.2 − 158.0 = 22.2. The upper threshold is 180.2 + 1.5 × 22.2 = 213.5. This threshold is larger than the largest observation (208.0). Hence, the largest observation is not an outlier and it marks the end of the upper whisker. The lower threshold is 158.0 − 1.5 × 22.2 = 124.7. The minimal observation (117.0) is less than this threshold. Hence it is an outlier and it is marked as a point below the end of the lower whisker. The second smallest observation is 129. It lies above the lower threshold and marks the end point of the lower whisker. Measures of the Center of Data The two most widely used measures of the central location of the data are the mean (average) and the median. To calculate the average weight of 50 people one should add together the 50 weights and divide the result by 50. To find the median weight of the same 50 people, one may order the data and locate a number that splits the data into two equal parts. The median is generally a better measure of the center when there are extreme values or outliers because it is not affected by the precise numerical values of the outliers. Nonetheless, the mean is the most commonly used measure of the center. We shall use small Latin letters such as x to mark the sequence of data.
In such a case we may mark the sample mean by placing a bar over the π‘₯: π‘₯Μ„ (pronounced β€œπ‘₯ bar”). The mean can be calculated by averaging the data points directly, or it can be calculated from the relative frequencies of the values that are present in the data. In the latter case one multiplies each distinct value by its relative frequency and then sums the products across all values. To see that both ways of calculating the mean are the same, consider the data:
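The data listing itself does not survive in this extract. A sequence consistent with the relative frequencies quoted below (three 1's, two 2's, one 3, and five 4's; an assumed reconstruction of the original data) shows that both computations agree:

```r
# A sequence consistent with the relative frequencies 3/11, 2/11, 1/11, 5/11
# (an assumed reconstruction of the book's data)
x <- c(1, 1, 1, 2, 2, 3, 4, 4, 4, 4, 4)
mean(x)                                   # direct average: 30/11
sum(c(1, 2, 3, 4) * c(3, 2, 1, 5) / 11)   # values weighted by relative frequency
```

Both expressions produce 30/11, roughly 2.727.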
Alternatively, we may note that the distinct values in the sample are 1, 2, 3, and 4 with relative frequencies of 3/11, 2/11, 1/11 and 5/11, respectively. The alternative method of computation produces:
FIGURE 3.2: Three Histograms
Consider the following data set: 4, 5, 6, 6, 6, 7, 7, 7, 7, 7, 7, 8, 8, 8, 9, 10 This data produces the uppermost histogram in Figure 3.2. Each interval has width one and each value is located at the middle of an interval. The histogram displays a symmetrical distribution of data. A distribution is symmetrical if a vertical line can be drawn at some point in the histogram such that the shapes to the left and to the right of the vertical line are mirror images of each other. Let us compute the mean and the median of this data:
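The code itself is elided from this extract; given the sixteen values listed above and the functions β€œmean” and β€œmedian” named below, it presumably resembles:

```r
# The symmetric data set from the text
x <- c(4, 5, 6, 6, 6, 7, 7, 7, 7, 7, 7, 8, 8, 8, 9, 10)
mean(x)    # 7
median(x)  # 7
```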
## [1] 7
The mean and the median are each 7 for these data. In a perfectly symmetrical distribution, the mean and the median are the same. The functions β€œmean” and β€œmedian” were used in order to compute the mean and median. Both functions expect a numeric sequence as an input and produce the appropriate measure of centrality of the sequence as an output. The histogram for the data: 4, 5, 6, 6, 6, 7, 7, 7, 7, 7, 7, 8 is not symmetrical and is displayed in the middle of Figure 3.2. The right-hand side seems β€œchopped off” compared to the left side. The shape of the distribution is called skewed to the left because it is pulled out towards the left. Let us compute the mean and the median for this data:
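The elided computation, reconstructed from the twelve values listed above, would look like:

```r
x <- c(4, 5, 6, 6, 6, 7, 7, 7, 7, 7, 7, 8)  # reassigning x to the new data
mean(x)    # 77/12, smaller than the median
median(x)  # 7
```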
## [1] 7
(Notice that the original data is replaced by the new data when the object x is reassigned.) The median is still 7, but the mean is less than 7. The relation between the mean and the median reflects the skewing. Consider yet another set of data: 6, 7, 7, 7, 7, 7, 7, 8, 8, 8, 9, 10 The histogram for this data is also not symmetrical and is displayed at the bottom of Figure 3.2. Notice that it is skewed to the right. Compute the mean and the median:
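Reconstructed from the values listed above, the elided computation would be:

```r
x <- c(6, 7, 7, 7, 7, 7, 7, 8, 8, 8, 9, 10)
mean(x)    # 91/12, larger than the median
median(x)  # 7
```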
## [1] 7
The median is yet again equal to 7, but this time the mean is greater than 7. Again, the mean reflects the skewing. In summary, if the distribution of data is skewed to the left then the mean is less than the median. If the distribution of data is skewed to the right then the median is less than the mean. Examine the data on the height in β€œex.1”:
Measures of the Spread of Data
One measure of the spread of the data is the inter-quartile range that was introduced in the context of the box plot. However, the most important measure of spread is the standard deviation. Before dealing with the standard deviation let us discuss the calculation of the variance. If π‘₯𝑖 is a data value for subject 𝑖 and π‘₯Μ„ is the sample mean, then π‘₯𝑖 βˆ’ π‘₯Μ„ is called the deviation of subject 𝑖 from the mean, or simply the deviation. In a data set, there are as many deviations as there are data values. The variance is in principle the average of the squares of the deviations.
## [1] 20
Pay attention to the fact that we did not write the β€œ+” at the beginning of the second line. That symbol was produced by R when moving to the next line to indicate that the expression is not complete yet and will not be executed. Only after inputting the right bracket and hitting the Return key does R carry out the command and create the object β€œx”. When you execute this example yourself on your own computer make sure not to copy the β€œ+” sign. Instead, if you hit the Return key after the last comma on the first line, the plus sign will be produced by R as a new prompt and you can go on typing in the rest of the numbers. The function β€œlength” returns the length of the input sequence. Notice that we have a total of 20 data points. The next step involves the computation of the deviations:
From a more technical point of view, observe that the expression that computed the deviations, β€œx - x.bar”, involved the subtraction of a single value (x.bar) from a sequence with 20 values (x). The expression resulted in the subtraction of the value from each component of the sequence. This is an example of the general way in which R operates on sequences. The typical behavior of R is to apply an operation to each component of the sequence. As yet another illustration of this property, consider the computation of the squares of the deviations:
## [1] 0.7158911
If the variance is produced as a result of dividing the sum of squares by the number of observations minus one then the variance is called the sample variance. The function β€œvar” computes the sample variance and the function β€œsd” computes the standard deviation. The input to both functions is the sequence of data values and the outputs are the sample variance and the standard deviation, respectively:
## [1] 0.7158911
In the computation of the variance we divide the sum of squared deviations by the number of deviations minus one and not by the number of deviations. The reason for that stems from the theory of statistical inference that will be discussed in Part II of this book. Unless the size of the data is small, dividing by 𝑛 or by 𝑛 βˆ’ 1 does not introduce much of a difference. The variance is a squared measure and does not have the same units as the data. Taking the square root solves the problem. The standard deviation measures the spread in the same units as the data. The sample standard deviation, 𝑠, is either zero or is larger than zero. When 𝑠 = 0, there is no spread and the data values are equal to each other. When 𝑠 is a lot larger than zero, the data values are very spread out about the mean. Outliers can make 𝑠 very large. The standard deviation is a number that measures how far data values are from their mean. For example, if the data contains the value 7 and if the mean of the data is 5 and the standard deviation is 2, then the value 7 is one standard deviation from its mean because 5 + 1 Γ— 2 = 7. We say, then, that 7 is one standard deviation larger than the mean 5 (or also say β€œto the right of 5”). If the value 1 was also part of the data set, then 1 is two standard deviations smaller than the mean (or two standard deviations to the left of 5) because 5 βˆ’ 2 Γ— 2 = 1. The standard deviation, when first presented, may not be too simple to interpret. By graphing your data, you can get a better β€œfeel” for the deviations and the standard deviation. You will find that in symmetrical distributions, the standard deviation can be very helpful but in skewed distributions, the standard deviation is less so. The reason is that the two sides of a skewed distribution have different spreads. In a skewed distribution, it is better to look at the first quartile, the median, the third quartile, the smallest value, and the largest value.
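The 20-value sequence used in the text is not reproduced in this extract. A short hypothetical sequence (an assumption made for illustration) shows the whole computation end to end:

```r
# Hypothetical data, standing in for the book's 20-value sequence
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
x.bar <- mean(x)                 # the sample mean
dev <- x - x.bar                 # deviations from the mean
sum(dev^2) / (length(x) - 1)     # sample variance, computed by hand: 32/7
var(x)                           # the built-in function agrees
sd(x)                            # standard deviation: square root of the variance
```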
Exercises
Exercise 3.1. Three sequences of data were saved in 3 R objects named β€œx1”, β€œx2” and β€œx3”, respectively. The application of the function β€œsummary” to each of these objects is presented below:
1. What is the mean (π‘₯Μ„) of the data?
2. What is the sample standard deviation of the data?
3. What is the median of the data?
4. What is the inter-quartile range (IQR) of the data?
5. How many standard deviations away from the mean is the value 10?
Summary
Glossary
Median: A number that separates ordered data into halves: half the values are the same number or smaller than the median and half the values are the same number or larger than the median. The median may or may not be part of the data.
Quartiles: The numbers that separate the data into quarters. Quartiles may or may not be part of the data. The second quartile is the median of the data.
Outlier: An observation that does not fit the rest of the data.
Interquartile Range (IQR): The distance between the third quartile (Q3) and the first quartile (Q1). IQR = Q3 βˆ’ Q1.
Mean: A number that measures the central tendency. A common name for mean is β€œaverage.” The term β€œmean” is a shortened form of β€œarithmetic mean.” By definition, the mean for a sample (denoted by π‘₯Μ„) is
π‘₯Μ„ = (Sum of all values in the sample) / (Number of values in the sample).
(Sample) Variance: Mean of the squared deviations from the mean; the square of the standard deviation. For a set of data, a deviation can be represented as π‘₯ βˆ’ π‘₯Μ„, where π‘₯ is a value of the data and π‘₯Μ„ is the sample mean. The sample variance is equal to the sum of the squares of the deviations divided by the sample size minus one:
𝑠² = βˆ‘(π‘₯ βˆ’ π‘₯Μ„)² / (𝑛 βˆ’ 1).
Discuss in the forum
An important practice is to check the validity of any data set that you are supposed to analyze in order to detect errors in the data and outlier observations. Recall that outliers are observations with values outside the normal range of values of the rest of the observations.
It is said by some that outliers can help us understand our data better. What is your opinion? When forming your answer to this question you may give an example of how outliers may provide insight or, alternatively, how they may obstruct our understanding. For example, consider the price of a stock that tends to go up or down by at most 2% within each trading day. A sudden 5% drop in the price of the stock may be a reason to reconsider our position with respect to this stock.
Commonly Used Symbols
The symbol βˆ‘ means to add or to find the sum.
𝑛 = the number of data values in a sample.
π‘₯Μ„ = the sample mean.
𝑠 = the sample standard deviation.
𝑓 = frequency.
𝑓/𝑛 = relative frequency.
π‘₯ = numerical value.
Student Learning Objective
This section extends the notion of variability that was introduced in the context of data to other situations. The variability of the entire population and the concept of a random variable is discussed. These concepts are central for the development and interpretation of statistical inference. By the end of the chapter the student should:
- Consider the distribution of a variable in a population and compute parameters of this distribution, such as the mean and the standard deviation.
- Become familiar with the concept of a random variable.
- Understand the relation between the distribution of the population and the distribution of a random variable produced by sampling a random subject from the population.
- Identify the distribution of the random variable in simple settings and compute its expectation and variance.
Different Forms of Variability
In the previous chapters we examined the variability in data. In the statistical context, data is obtained by selecting a sample from the target population and measuring the quantities of interest for the subjects that belong to the sample. Different subjects in the sample may obtain different values for the measurement, leading to variability in the data. This variability may be summarized with the aid of a frequency table, a table of relative frequency, or via the cumulative relative frequency. A graphical display of the variability in the data may be obtained with the aid of the bar plot, the histogram, or the box plot.
46 4 Probability
Numerical summaries may be computed in order to characterize the main features of the variability. We used the mean and the median in order to identify the location of the distribution. The sample variance, or better yet the sample standard deviation, as well as the inter-quartile range were all described as tools to quantify the overall spread of the data. The aim of all these graphical representations and numerical summaries is to investigate the variability of the data. The subject of this chapter is to introduce two other forms of variability, variability that is not associated, at least not directly, with the data that we observe. The first type of variability is the population variability. The other type of variability is the variability of a random variable. The notions of variability that will be presented are abstract, they are not given in terms of the data that we observe, and they have a mathematical-theoretical flavor to them. At first, these abstract notions may look to you like a waste of your time and may seem unrelated to the subject matter of the course. The opposite is true. The very core of statistical thinking is relating observed data to theoretical and abstract models of a phenomenon. Via this comparison, and using the tools of statistical inference that are presented in the second half of the book, statisticians can extrapolate insights or make statements regarding the phenomenon on the basis of the observed data. Thereby, the abstract notions of variability that are introduced in this chapter, and are extended in the subsequent chapters up to the end of this part of the book, are the essential foundations for the practice of statistics. The first notion of variability is the variability that is associated with the population. It is similar in its nature to the variability of the data.
The difference between these two types of variability is that the former corresponds to the variability of the quantity of interest across all members of the population and not only for those that were selected into the sample. In earlier chapters we examined the data set β€œex.1”, which contained data on the sex and height of a sample of 100 observations. In this chapter we will consider the sex and height of all the members of the population from which the sample was selected. The size of the relevant population is 100,000, including the 100 subjects that composed the sample. When we examine the values of the height across the entire population we can see that different people may have different heights. This variability of the heights is the population variability. The other abstract type of variability, the variability of a random variable, is a mathematical concept. The aim of this concept is to model the notion of randomness in measurements or the uncertainty regarding the outcome of a measurement. In particular, we will initially consider the variability of a random variable in the context of selecting one subject at random from the population. Imagine we have a population of size 100,000 and we are about to select at random one subject from this population. We intend to measure the height of the subject that will be selected. Prior to the selection and measurement we are not certain what value of height will be obtained. One may associate the notion of variability with uncertainty: different subjects that may be selected may obtain different evaluations of the measurement, and we do not know beforehand which subject will be selected. The resulting variability is the variability of a random variable. Random variables can be defined for more abstract settings. Their aim is to provide models for randomness and uncertainty in measurements. Simple examples of such abstract random variables will be provided in this chapter.
More examples will be introduced in the subsequent chapters. The more abstract examples of random variables need not be associated with a specific population. Still, the same definitions that are used for the example of a random variable that emerges as a result of sampling a single subject from a population will apply to the more abstract constructions. All types of variability, the variability of the data we dealt with before as well as the other two types of variability, can be displayed using graphical tools and characterized with numerical summaries. Essentially the same types of plots and numerical summaries, possibly with some modifications, may and will be applied. A point to remember is that the variability of the data relates to a concrete list of data values that is presented to us. In contrast to the case of the variability of the data, the other types of variability are not associated with quantities we actually get to observe. We get to see the data for the sample, but not the data for the rest of the population. Yet, we can still discuss the variability of a population that is out there, even though we do not observe the list of measurements for the entire population. (The example that we give in this chapter of a population was artificially constructed and serves for illustration only. In the actual statistical context one does not obtain measurements from the entire population, only from the subjects that went into the sample.) The discussion of the variability in this context is theoretical in its nature. Still, this theoretical discussion is instrumental for understanding statistics.
Let us review with the aid of an example some of the numerical summaries that were used for the characterization of the variability of data. Recall the file β€œex1.csv” that contains data on the height and sex of 100 subjects. (The data file can be obtained from .) We read the content of the file into a data frame by the name β€œex.1” and apply the function β€œsummary” to the data frame: We saw in the previous chapter that, when applied to a numeric sequence, the function β€œsummary” produces the smallest and largest values in the sequence, the three quartiles (including the median) and the mean. If the input of the same function is a factor then the outcome is the frequency in the data of each of the levels of the factor. Here β€œsex” is a factor with two levels. From the summary we can see that 54 of the subjects in the sample are female and 46 are male. Notice that when the input to the function β€œsummary” is a data frame, as is the case in this example, then the output is a summary of each of the variables of the data frame. In this example two of the variables are numeric (β€œid” and β€œheight”) and one variable is a factor (β€œsex”). Recall that the mean is the arithmetic average of the data, which is computed by summing all the values of the variable and dividing the result by the number of observations. Hence, if 𝑛 is the number of observations (𝑛 = 100 in this example) and π‘₯𝑖 is the value of the variable for subject 𝑖, then one may write the mean in a formula form as
where π‘₯Μ„ corresponds to the mean of the data and the symbol β€œβˆ‘π‘› π‘₯ ” corre- 𝑖 sponds to the sum of all values in the data. The median is computed by ordering the data values and selecting a value that splits the ordered data into two equal parts. The first and third quartile are obtained by further splitting each of the halves into two quarters. 4.3 A Population 49 Let us discuss the variability associated with an entire target population. The file β€œpop1.csv” that contains the population data can be found on the internet (). It is a CSV file that contains the information on sex and height of an entire adult population of some imaginary city. (The data in β€œex.1” corresponds to a sample from this city.) Read the population data into R and examine it: The object β€œpop.1” is a data frame of the same structure as the data frame β€œex.1”. It contains three variables: a unique identifier of each subject (id), the sex of the subject (sex), and its height (height). Applying the function β€œsummary” to the data frame produces the summary of the variables that it contains. In particular, for the variable β€œsex”, which is a factor, it produces the frequency of its two categories – 48,888 female and 51,112 – a total of 100,000 subjects. For the variable β€œheight”, which is a numeric variable, it produces the extreme values, the quartiles, and the mean.
A bar is placed above each value of height that appears in the population, with the height of the bar representing the frequency of the value in the population. One may read from the graph, or obtain from the numerical summaries, that the variable takes integer values in the range between 117 and 217 (heights are rounded to the nearest centimeter). The distribution is centered at 170 centimeter, with the central 50% of the values spreading between 162 and 178 centimeters. The mean of the height in the entire population is equal to 170 centimeter. This mean, just like the mean for the distribution of data, is obtained by the summation of all the heights in the population divided by the population size. Let us denote the size of the entire population by 𝑁. In this example 𝑁 = 100,000. (The size of the sample for the data was called 𝑛 and was equal to 𝑛 = 100 in the parallel example that deals with the data of a sample.) The mean of an entire population is denoted by the Greek letter πœ‡ and is read β€œmu”. (The average for the data was denoted π‘₯Μ„.) The formula of the population mean is:
πœ‡ = (βˆ‘ π‘₯𝑖)/𝑁 .
Observe the similarity between the definition of the mean for the data and the definition of the mean for the population. In both cases the arithmetic average is computed. The only difference is that in the case of the mean of the data the computation is with respect to the values that appear in the sample, whereas for the population all the values in the population participate in the computation. In actual life, we will not have all the values of a variable in the entire population. Hence, we will not be able to compute the actual value of the population mean. However, it is still meaningful to talk about the population mean because this number exists, even though we do not know what its value is. As a matter of fact, one of the issues in statistics is to try to estimate this unknown quantity on the basis of the data we do have in the sample. A characteristic of the distribution of an entire population is called a parameter. Hence, πœ‡, the population average, is a parameter. Other examples of parameters are the population median and the population quartiles. These parameters are defined exactly like their data counterparts, but with respect to the values of the entire population instead of the observations in the sample alone. Another example of a parameter is the population variance. Recall that the sample variance was defined with the aid of the deviations π‘₯𝑖 βˆ’ π‘₯Μ„, where π‘₯𝑖 is the value of the measurement for the 𝑖th subject and π‘₯Μ„ is the mean for the data. In order to compute the sample variance these deviations were squared to produce the squared deviations. The squares were summed up and then divided by the sample size minus one (𝑛 βˆ’ 1). The sample variance, computed from the data, was denoted 𝑠². The population variance is defined in a similar way. First, the deviations from the population mean π‘₯𝑖 βˆ’ πœ‡ are considered for each of the members of the population.
These deviations are squared and the average of the squares is computed. We denote this parameter by 𝜎² (read β€œsigma squared”). A minor difference between the sample variance and the population variance is that for the latter we divide the sum of squared deviations by the population size (𝑁) and not by the population size minus one (𝑁 βˆ’ 1):
𝜎² = (sum of the squares of the deviations in the population)/(number of values in the population) = βˆ‘(π‘₯𝑖 βˆ’ πœ‡)² / 𝑁 .
The standard deviation of the population, yet another parameter, is denoted by 𝜎 and is equal to the square root of the variance. The standard deviation summarizes the overall variability of the measurement across the population. Again, the typical situation is that we do not know what the actual value of the standard deviation of the population is. Yet, we may refer to it as a quantity and we may try to estimate its value based on the data we do have from the sample. For the height of the subjects in our imaginary city we get that the variance is equal to 𝜎² = 126.1576. The standard deviation is equal to 𝜎 = √126.1576 = 11.23199. These quantities can be computed in this example from the data frame β€œpop.1” with the aid of the functions β€œvar” and β€œsd”, respectively.
Random Variables
In the previous section we dealt with the variability of the population. Next we consider the variability of a random variable. As an example, consider taking a sample of size 𝑛 = 1 from the population (a single person) and measuring his/her height.
Footnote: Observe that the function β€œvar” computes the sample variance. Consequently, the sum of squares is divided by 𝑁 βˆ’ 1. We can correct that when computing the population variance by multiplying the result by 𝑁 βˆ’ 1 and dividing by 𝑁. Notice that the difference between the two quantities is negligible for a large population.
Henceforth we will use the functions β€œvar” and β€œsd” to compute the variance and standard deviation of populations without the application of the correction.
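The correction described in the footnote can be sketched as follows, using a small hypothetical population in place of the 100,000 heights in β€œpop.1” (an assumption made for illustration):

```r
# Hypothetical population values, standing in for pop.1$height
height <- c(160, 165, 170, 175, 180)
N <- length(height)
mu <- sum(height) / N                 # the population mean
pop.var <- sum((height - mu)^2) / N   # divide by N, not N - 1
pop.var                               # 50
var(height) * (N - 1) / N             # correcting "var" gives the same value
sqrt(pop.var)                         # the population standard deviation sigma
```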
## [1] 162
The first argument of the function is the given sequence of heights. When we set the second argument to 1 then the function selects one of the entries of the sequence at random, with each entry having the same likelihood of being selected. Specifically, in this example an entry that contains the value 162 was selected. Let us run the function again:
## [1] 160
In this instance an entry with a different value was selected. Try to run the command several times yourself and see what you get. Would you necessarily obtain a different value in each run? Now let us enter the same command without pressing the Return key: Can you tell, before pressing the key, what value you will get? The answer to this question is of course β€œNo”. There are 100,000 entries with a total of 94 distinct values. In principle, any of the values may be selected and there is no way of telling in advance which of the values will turn out as the outcome. A random variable is the future outcome of a measurement, before the measurement is taken. It does not have a specific value, but rather a collection of potential values with a distribution over these values. After the measurement is taken and the specific value is revealed, the random variable ceases to be a random variable! Instead, it becomes data. Although one is not able to say what the outcome of a random variable will turn out to be, one may still identify patterns in this potential outcome. For example, knowing that the distribution of heights in the population ranges between 117 and 217 centimeter, one may say in advance that the outcome of the measurement must also be in that interval. Moreover, since there is a total of 3,476 subjects with height equal to 168 centimeter, and since the likelihood of each subject to be selected is equal, the likelihood of selecting a subject of this height is 3,476/100,000 = 0.03476. In the context of random variables we call this likelihood probability.
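The random selection described above is presumably carried out with R's β€œsample” function (the call itself is elided from this extract). A sketch with a small hypothetical stand-in for the 100,000 population heights:

```r
# A hypothetical stand-in for the population heights (an assumption)
height <- c(162, 160, 168, 170, 168, 175, 168, 181, 168, 168)
sample(height, 1)     # one entry selected at random; each is equally likely
mean(height == 168)   # relative frequency of 168: the probability of sampling it
```

In this stand-in population the probability of sampling 168 is 5/10 = 0.5; with the full population it is 3,476/100,000 = 0.03476.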
In the same vein, the frequency of subjects with height 192 centimeter is 488, and therefore the probability of measuring such a height is 0.00488. The frequency of subjects with height 200 centimeter or above is 393, hence the probability of obtaining a measurement in the range between 200 and 217 centimeter is 0.00393.
Sample Space and Distribution
Let us turn to the formal definition of a random variable: a random variable refers to numerical values, typically the outcome of an observation, a measurement, or a function thereof. A random variable is characterized via the collection of potential values it may obtain, known as the sample space, and the likelihood of obtaining each of the values in the sample space (namely, the probability of the value). In the given example, the sample space contains the 94 integer values that are marked in Figure . The probability of each value is the height of the bar above the value, divided by the total frequency of 100,000 (namely, the relative frequency in the population). We will denote random variables with capital Latin letters such as 𝑋, π‘Œ, and 𝑍. Values they may obtain will be marked by small Latin letters such as π‘₯, 𝑦, 𝑧. For the probability of values we will use the letter β€œP”. Hence, if we denote by 𝑋 the measurement of height of a random individual that is sampled from the given population then: P(𝑋 = 168) = 0.03476 and P(𝑋 β‰₯ 200) = 0.00393 . Consider, as yet another example, the probability that the height of a random person sampled from the population differs from 170 centimeter by no more than 10 centimeters. (In other words, that the height is between 160 and 180 centimeters.) Denote by 𝑋 the height of that random person. We are interested in the probability P(|𝑋 βˆ’ 170| ≀ 10). The random person can be any of the subjects of the population with equal probability. Thus, the sequence of the heights of the 100,000 subjects represents the distribution of the random variable 𝑋:
Notice that the object β€œX” is a sequence of length 100,000 that stores all the heights of the population. The probability we seek is the relative frequency, in this sequence, of values between 160 and 180. First we compute the probability and then explain the method of computation:
## [1] 0.64541
We get that the height of a person randomly sampled from the population is between 160 and 180 centimeters with probability 0.64541. Let us produce a small example that will help us explain the computation of the probability. We start by forming a sequence with 10 numbers: The goal is to compute the proportion of numbers that are in the range [4, 6] (or, equivalently, {|π‘Œ βˆ’ 5| ≀ 1}). The function β€œabs” computes the absolute value of its input argument. When the function is applied to the sequence β€œY-5” it produces a sequence of the same length with the distances between the components of β€œY” and the number 5:
## [1] 1.3 1.9 1.6 1.6 0.5 0.7 1.5 0.3 1.1 0.3
Compare the resulting output to the original sequence. The first value in the input sequence is 6.3. Its distance from 5 is indeed 1.3. The fourth value in the input sequence is 3.4. The difference 3.4 - 5 is equal to -1.6, and when the absolute value is taken we get a distance of 1.6. The function β€œ<=” expects an argument to the right and an argument to the left. It compares each component on the left with the parallel component on the right and returns a logical value, β€œTRUE” or β€œFALSE”, depending on whether the relation that is tested holds or not:
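The sequence β€œY” itself is not shown in this extract, but one sequence consistent with the distances printed above (an assumed reconstruction) lets the comparison be reproduced:

```r
# A sequence consistent with the printed distances (assumed reconstruction)
Y <- c(6.3, 6.9, 6.6, 3.4, 5.5, 4.3, 6.5, 4.7, 6.1, 5.3)
abs(Y - 5) <= 1         # TRUE wherever the value lies in [4, 6]
mean(abs(Y - 5) <= 1)   # proportion of TRUE values: 0.4
```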
The last input in the sequence β€œY” is 5.3, which is within the range. Therefore, the last output of the logical expression is β€œTRUE”. Next, we compute the proportion of β€œTRUE” values in the sequence:
## [1] 0.4
When a sequence with logical values is entered into the function β€œmean” then the function replaces the TRUE’s by 1 and the FALSE’s by 0. The average then produces the relative frequency of TRUE’s in the sequence, as required. Specifically, in this example there are 4 TRUE’s and 6 FALSE’s. Consequently, the output of the final expression is 4/10 = 0.4. The computation of the probability that the sampled height falls within 10 centimeter of 170 is based on the same code. The only differences are that the input sequence β€œY” is replaced by the sequence of population heights β€œX”, the number β€œ5” is replaced by the number β€œ170”, and the number β€œ1” is replaced by the number β€œ10”. In both cases the result of the computation is the relative proportion of the times that the values of the input sequence fall within the given range of the indicated number. The probability function of a random variable is defined for any value that the random variable may obtain and produces the distribution of the random variable. The probability function may emerge as a relative frequency, as in the given example, or it may be the result of theoretical modeling. Examples of theoretical random variables are presented mainly in the next two chapters. Consider an example of a random variable. The sample space and the probability function specify the distribution of the random variable. For example, assume it is known that a random variable 𝑋 may obtain the values 0, 1, 2, or 3. Moreover, imagine that it is known that P(𝑋 = 1) = 0.25, P(𝑋 = 2) = 0.15, and P(𝑋 = 3) = 0.10. What is P(𝑋 = 0), the probability that 𝑋 is equal to 0? The sample space, the collection of possible values that the random variable may obtain, is the collection {0, 1, 2, 3}.
Observe that the sum over the positive values is:

P(X > 0) = P(X = 1) + P(X = 2) + P(X = 3) = 0.25 + 0.15 + 0.10 = 0.50 .

It follows, since the sum of probabilities over the entire sample space is equal to 1, that P(X = 0) = 1 − 0.5 = 0.5. The table below summarizes the distribution of the random variable X:

Value:        0     1     2     3
Probability:  0.50  0.25  0.15  0.10

Observe the similarity between the probability function and the notion of relative frequency that was discussed in a previous chapter. Both quantities describe a distribution. Both are non-negative and sum to 1. Likewise, notice that one may define the cumulative probability the same way the cumulative relative frequency is defined:
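As a sketch (the object names "x" and "p" are illustrative), the cumulative probabilities of this distribution can be produced in R with the function "cumsum":

```r
x <- 0:3
p <- c(0.50, 0.25, 0.15, 0.10)
cumsum(p)  # 0.50 0.75 0.90 1.00
```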
Ordering the values of the random variable from smallest to largest, the cumulative probability at a given value is the sum of the probabilities of the values less than or equal to the given value.

Knowledge of the probabilities of a random variable (or of its cumulative probabilities) enables the computation of other probabilities that are associated with the random variable. For example, considering the random variable X defined above, we may calculate the probability of X falling in the interval [0.5, 2.3]. Observe that the given range contains two values from the sample space, 1 and 2; therefore:

P(0.5 ≤ X ≤ 2.3) = P(X = 1) + P(X = 2) = 0.25 + 0.15 = 0.40 .

Likewise, we may produce the probability of X obtaining an odd value:

P(X = odd) = P(X = 1) + P(X = 3) = 0.25 + 0.10 = 0.35 .

Observe that both {0.5 ≤ X ≤ 2.3} and {X = odd} refer to subsets of values of the sample space. Such subsets are called events. In both examples the probability of the event was computed by the summation of the probabilities associated with the values that belong to the event.

Expectation and Standard Deviation

We may characterize the center of the distribution of a random variable and the spread of the distribution in ways similar to those used for the characterization of the distribution of data and the distribution of a population. The expectation marks the center of the distribution of a random variable. It is the counterpart of the data average x̄ and the population average μ, which were used to mark the location of the distribution of the data and of the population, respectively. Recall that the average of the data can be computed as the weighted average of the values that are present in the data, with weights given by the relative frequencies. Specifically, we saw for the data
𝑛 π‘₯Μ„ = βˆ‘π‘–=1 π‘₯𝑖 = βˆ‘ (π‘₯ Γ— (𝑓 /𝑛)) . 𝑛 π‘₯ In the first representation of the arithmetic mean, the average is computed by the summation of all data points and dividing the sum by the sample size. In the second representation, that uses a weighted sum, the sum extends over all the unique values that appear in the data. For each unique value the value is multiplied by the relative frequency of the value in the data. These multiplications are summed up to produce the mean. The expectation of a random variable is computed in the spirit of the second formulation. The expectation of a random variable is marked with the letter β€œE” and is define via the equation: E(𝑋) = βˆ‘ (π‘₯ Γ— P(π‘₯)) . π‘₯ In this definition all the unique values of the sample space are considered. For each value a product of the value and the probability of the value is taken. The expectation is obtained by the summation of all these products. In this definition the probability P(π‘₯) replaces the relative frequency 𝑓π‘₯ /𝑛 but otherwise, the definition of the expectation and the second formulation of the mean are identical to each other. Consider the random variable 𝑋 with distribution that is described in Table . In order to obtain its expectation we multiply each value in the sample space by the probability of the value. Summation of the products produces the expectation (see Table : E(𝑋) = 0 Γ— 0.5 + 1 Γ— 0.25 + 2 Γ— 0.15 + 3 Γ— 0.10 = 0.85 . ## Expectation = sum(Value*Probability) = 0.85 In the example of height we get that the expectation is equal to 170.035 centimeter. Notice that this expectation is equal to πœ‡, the mean of the pop- ulation. This is no accident. The expectation of a potential measurement of
a randomly selected subject from a population is equal to the average of the measurement across all subjects.

The sample variance (s²) is obtained as the sum of the squared deviations from the average, divided by the sample size (n) minus 1:

s² = ∑ᵢ₌₁ⁿ (xᵢ − x̄)² / (n − 1) = [n/(n − 1)] × ∑ₓ ((x − x̄)² × (fₓ/n)) .
In this formulation one considers each of the unique values that are present in the data. For each value the deviation between the value and the average is computed. These deviations are then squared and multiplied by the relative frequency. The products are summed up. Finally, the sum is multiplied by the ratio between the sample size n and n − 1 in order to correct for the fact that in the sample variance the sum of squared deviations is divided by the sample size minus 1 and not by the sample size.

In a similar way, the variance of a random variable may be defined via the probabilities of the values that make up the sample space. For each such value one computes the deviation from the expectation. This deviation is then squared and multiplied by the probability of the value. The products are summed up in order to produce the variance:

V(X) = ∑ₓ ((x − E(X))² × P(x)) .

Notice that the formula for the computation of the variance of a random variable is very similar to the second formulation for the computation of the sample variance. Essentially, the mean of the data is replaced by the expectation of the random variable and the relative frequency of a value is replaced by the probability of the value. Another difference is that the correction factor is not used for the variance of a random variable.

## Var = sum(p*(x-sum(p*x))^2) = 1.0275
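A minimal R sketch of these computations for the random variable X (the object names "x" and "p" are illustrative):

```r
x <- 0:3
p <- c(0.50, 0.25, 0.15, 0.10)
EX <- sum(x * p)           # expectation: 0.85
VX <- sum((x - EX)^2 * p)  # variance: 1.0275
sqrt(VX)                   # standard deviation: about 1.0137
```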
As an example consider the variance of the random variable X. The computation of the variance of this random variable is carried out in the table below. The sample space, the values that the random variable may obtain, are given in the first column and the probabilities of the values are given in the second column. In the third column the deviation of the value from the expectation E(X) = 0.85 is computed for each value. The 4th column contains the squares of these deviations and the 5th and last column contains the products of the squared deviations and the probabilities:

Value  Probability  Deviation  Squared Deviation  Product
0      0.50         -0.85      0.7225             0.361250
1      0.25          0.15      0.0225             0.005625
2      0.15          1.15      1.3225             0.198375
3      0.10          2.15      4.6225             0.462250

The variance is obtained by summing up the products in the last column. In the given example:

V(X) = (0 − 0.85)² × 0.5 + (1 − 0.85)² × 0.25 + (2 − 0.85)² × 0.15 + (3 − 0.85)² × 0.10 = 1.0275 .

The standard deviation of a random variable is the square root of the variance. The standard deviation of X is √V(X) = √1.0275 = 1.013657.

In the example that involves the height of a subject selected from the population at random we obtain that the variance is 126.1576, equal to the population variance, and that the standard deviation is 11.23199, the square root of the variance.

Other characterizations of the distribution that were computed for data, such as the median, the quartiles, etc., may also be defined for random variables.

Probability and Statistics

Modern science may be characterized by a systematic collection of empirical measurements and the attempt to model laws of nature using mathematical language. The drive to deliver better measurements led to the development of more accurate and more sensitive measurement tools. Nonetheless, at some point it became apparent that measurements may not be perfectly reproducible and that any repeated measurement of presumably the exact same phenomenon will typically produce variability in the outcomes. On the other hand, scientists also found that there are general laws that govern this variability in repetitions.
For example, it was discovered that the average of several independent repeats of the measurement is less variable and more reproducible than each of the single measurements themselves.

Probability was first introduced as a branch of mathematics in the investigation of uncertainty associated with gambling and games of chance. During the early 19th century probability began to be used in order to model variability in measurements. This application of probability turned out to be very successful. Indeed, one of the major achievements of probability was the development of the mathematical theory that explains the phenomenon of reduced variability that is observed when averages are used instead of single measurements. In a later chapter we discuss the conclusions of this theory.

Statistics studies methods for inference based on data. Probability serves as the mathematical foundation for the development of statistical theory. In this chapter we introduced the probabilistic concept of a random variable. This concept is key for understanding statistics. In the rest of Part I of this book we discuss the probability theory that is used for statistical inference. Statistical inference itself is discussed in Part II of the book.

Exercises

Exercise 4.1. A table presents the probabilities of the random variable Y. These probabilities are a function of the number p, the probability of the value "0". Answer the following questions:

1. What is the value of p?
2. P(Y < 3) = ?
3. P(Y = odd) = ?
4. P(1 ≤ Y < 4) = ?
5. P(|Y − 3| < 1.5) = ?
6. E(Y) = ?
7. V(Y) = ?
8. What is the standard deviation of Y?

Exercise 4.2. One invests $2 to participate in a game of chance. In this game a coin is tossed three times. If all tosses end up "Head" then the player wins $10. Otherwise, the player loses the investment.

1. What is the probability of winning the game?
2. What is the probability of losing the game?
3. What is the expected gain for the player that plays this game?
(Notice that the expectation can obtain a negative value.)

4.7 Summary

Glossary

Random Variable: The probabilistic model for the value of a measurement, before the measurement is taken.

Sample Space: The set of all values a random variable may obtain.

Probability: A number between 0 and 1 which is assigned to a subset of the sample space. This number indicates the likelihood of the random variable obtaining a value in that subset.

Expectation: The central value of a random variable. The expectation of the random variable X is marked by E(X).

Variance: The (squared) spread of a random variable. The variance of the random variable X is marked by V(X). The standard deviation is the square root of the variance.

Discussion in the Forum

Random variables are used to model situations in which the outcome, before the fact, is uncertain. One component of the model is the sample space. The sample space is the list of all possible outcomes. It includes the outcome that took place, but also all the other outcomes that could have taken place but never materialized. The rationale behind the consideration of the sample space is the intention to put the outcome that took place in context. What do you think of this rationale?

When forming your answer to this question you may give an example of a situation from your own field of interest for which a random variable can serve as a model. Identify the sample space for that random variable and discuss the importance (or lack thereof) of the correct identification of the sample space.

For example, consider a factory that produces car parts that are sold to car makers. The role of the QA personnel in the factory is to validate the quality of each batch of parts before shipment to the client. To achieve that, a sample of parts may be subjected to a battery of quality tests. Say that 20 parts are selected for the sample.
The number of those among them that do not pass the quality testing may be modeled as a random variable. The sample space for this random variable consists of the numbers 0, 1, 2, …, 20. The number 0 corresponds to the situation where all parts in the sample passed the quality testing. The number 1 corresponds to the case where 1 part did not pass and the other 19 did. The number 2 describes the case where 2 of the 20 did not pass and 18 did pass, etc.
5 Random Variables

Student Learning Objective

This chapter introduces some important examples of random variables. The distributions of these random variables emerge as mathematical models of real-life settings. In two of the examples the sample space is composed of integers. In the other two examples the sample space is made of a continuum of values. For random variables of the latter type one may use the density, which is a type of histogram, in order to describe the distribution. By the end of the chapter the student should:

- Identify the Binomial, Poisson, Uniform, and Exponential random variables, relate them to real-life situations, and memorize their expectations and variances.
- Relate the plot of the density/probability function and the cumulative probability function to the distribution of a random variable.
- Become familiar with the R functions that produce the density/probability of these random variables and their cumulative probabilities.
- Plot the density and the cumulative probability function of a random variable and compute probabilities associated with random variables.

Discrete Random Variables

In the previous chapter we introduced the notion of a random variable. A random variable corresponds to the outcome of an observation or a measurement prior to the actual making of the measurement. In this context one can talk of all the values that the measurement may potentially obtain. This collection of values is called the sample space. To each value in the sample space one may associate the probability of obtaining this particular value. Probabilities
are like relative frequencies. All probabilities are positive and the sum of the probabilities that are associated with all the values in the sample space is equal to one. A random variable is defined by the identification of its sample space and of the probabilities that are associated with the values in the sample space.

For each type of random variable we will first identify the sample space — the values it may obtain — and then describe the probabilities of the values. Examples of situations in which each type of random variable may serve as a model of a measurement will be provided. The R system provides functions for the computation of probabilities associated with specific types of random variables. We will use these functions in this and in subsequent chapters in order to carry out computations associated with the random variables and in order to plot their distributions.

The distribution of a random variable, just like the distribution of data, can be characterized using numerical summaries. For the latter we used summaries such as the mean and the sample variance and standard deviation. The mean is used to describe the central location of the distribution, and the variance and standard deviation are used to characterize the total spread. Parallel summaries are used for random variables. In the case of a random variable the name expectation is used for the central location of the distribution, and the variance and the standard deviation (the square root of the variance) are used to summarize the spread. In all the examples of random variables we will identify the expectation and the variance (and, thereby, also the standard deviation).

Random variables are used as probabilistic models of measurements. Theoretical considerations are used in many cases in order to define random variables and their distribution.
A random variable for which the values in the sample space are separated from each other, say the values are integers, is called a discrete random variable. In this section we introduce two important integer-valued random variables: the Binomial and the Poisson random variables. These random variables may emerge as models in contexts where the measurement involves counting the number of occurrences of some phenomenon. Many other models, apart from the Binomial and Poisson, exist for discrete random variables. An example of such a model, the Negative-Binomial model, will be considered in a later section. Depending on the specific context that involves measurements with discrete values, one may select the Binomial, the Poisson, or one of these other models to serve as a theoretical approximation of the distribution of the measurement.

The Binomial Random Variable

The Binomial random variable is used in settings in which a trial that has two possible outcomes is repeated several times. Let us designate one of the outcomes as "Success" and the other as "Failure". Assume that the probability of success in each trial is given by some number p that is larger than 0 and smaller than 1. Given a number n of repeats of the trial and given the probability of success, the actual number of trials that produce "Success" as their outcome is a random variable. We call such a random variable Binomial. The fact that a random variable X has such a distribution is marked by the expression "X ∼ Binomial(n, p)".

As an example consider tossing 10 coins. Designate "Head" as success and "Tail" as failure. For fair coins the probability of "Head" is 1/2. Consequently, if X is the total number of "Heads" then X ∼ Binomial(10, 0.5), where n = 10 is the number of trials and p = 0.5 is the probability of success in each trial. It may happen that all 10 coins turn up "Tail". In this case X is equal to 0.
It may also be the case that one of the coins turns up "Head" and the others turn up "Tail". The random variable X will obtain the value 1 in such a case. Likewise, for any integer between 0 and 10 it may be the case that the number of "Heads" that turn up is equal to that integer, with the other coins turning up "Tail". Hence, the sample space of X is the set of integers {0, 1, 2, …, 10}. The probability of each outcome may be computed by an appropriate mathematical formula that will not be discussed here.

The probabilities of the various possible values of a Binomial random variable may be computed with the aid of the R function "dbinom" (which uses the mathematical formula for the computation). The input to this function is a sequence of values, the value of n, and the value of p. The output is the sequence of probabilities associated with each of the values in the first input.

For example, let us use the function in order to compute the probability that the given Binomial obtains an odd value. A sequence that contains the odd values in the Binomial sample space can be created with the expression "c(1,3,5,7,9)". This sequence can serve as the input in the first argument of the function "dbinom". The other arguments are "10" and "0.5", respectively:

## [1] 0.009765625 0.117187500 0.246093750 0.117187500 0.009765625

Observe that the output of the function is a sequence of the same length as the first argument. This output contains the Binomial probabilities of the values in the first argument. In order to obtain the probability of the event
## [1] 0.5

Observe that the probability of obtaining an odd value in this specific case is equal to one half. Another example is to compute the probabilities of all the potential values of a Binomial(10, 0.5) random variable:

## [1] 0.0009765625 0.0097656250 0.0439453125 0.1171875000 0.2050781250
## [6] 0.2460937500 0.2050781250 0.1171875000 0.0439453125 0.0097656250
## [11] 0.0009765625

The expression "start.value:end.value" produces a sequence of numbers that starts with the number "start.value" and proceeds in jumps of size one until reaching the number "end.value". In this example, "0:10" produces the sequence of integers between 0 and 10, which is the sample space of the current Binomial example. Entering this sequence as the first argument to the function "dbinom" produces the probabilities of all the values in the sample space.

One may display the distribution of a discrete random variable with a bar plot similar to the one used to describe the distribution of data. In this plot a vertical bar representing the probability is placed above each value of the sample space. The height of the bar is equal to the probability. A bar plot of the Binomial(10, 0.5) distribution is provided in the accompanying figure. Another useful function is "pbinom", which produces the cumulative probability of the Binomial:
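The R expressions that produced the outputs above and below are not shown in this excerpt; they were presumably along the lines of:

```r
# probabilities of the odd values of a Binomial(10, 0.5)
dbinom(c(1, 3, 5, 7, 9), 10, 0.5)
# probability of the event {X is odd}
sum(dbinom(c(1, 3, 5, 7, 9), 10, 0.5))  # 0.5
# probabilities of all values in the sample space
dbinom(0:10, 10, 0.5)
# cumulative probabilities P(X <= x) for x = 0, 1, ..., 10
pbinom(0:10, 10, 0.5)
```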
## [1] 0.0009765625 0.0107421875 0.0546875000 0.1718750000 0.3769531250
## [6] 0.6230468750 0.8281250000 0.9453125000 0.9892578125 0.9990234375
## [11] 1.0000000000

The output of the function "pbinom" is the cumulative probability P(X ≤ x) that the random variable is less than or equal to the input value. Observe that this cumulative probability is obtained by summing all the probabilities associated with values that are less than or equal to the input value. Specifically, the cumulative probability at x = 3 is obtained by the summation of the probabilities at x = 0, x = 1, x = 2, and x = 3:

P(X ≤ 3) = 0.0009765625 + 0.009765625 + 0.0439453125 + 0.1171875 = 0.171875

The numbers in the sum are the first 4 values from the output of the function "dbinom(x,10,0.5)", which computes the probabilities of the values of the sample space.

In principle, the expectation of the Binomial random variable, like the expectation of any other (discrete) random variable, is obtained from the application of the general formulae:

E(X) = ∑ₓ (x × P(X = x)) ,    V(X) = ∑ₓ ((x − E(X))² × P(x)) .

However, in the specific case of the Binomial random variable, in which the probability P(X = x) obeys the specific mathematical formula of the Binomial distribution, the expectation and the variance reduce to the specific formulae:

E(X) = np ,    V(X) = np(1 − p) .

Hence, the expectation is the product of the number of trials n with the probability of success in each trial p. In the variance the number of trials is multiplied by the product of the probability of success (p) with the probability of failure (1 − p).

As an illustration, let us compute for the given example the expectation and the variance according to the general formulae for the expectation and variance of random variables and compare the outcome to the specific formulae for the expectation and variance of the Binomial distribution:
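The discussion below refers to objects named "X.val", "P.val", and "EX"; a sketch of the computation consistent with those names:

```r
X.val <- 0:10                    # sample space of the Binomial(10, 0.5)
P.val <- dbinom(X.val, 10, 0.5)  # probabilities of the values
EX <- sum(X.val * P.val)         # expectation: 5
sum((X.val - EX)^2 * P.val)      # variance: 2.5
```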
## [1] 2.5

This agrees with the specific formulae for Binomial variables, since 10 × 0.5 = 5 and 10 × 0.5 × (1 − 0.5) = 2.5.

Recall that the general formula for the computation of the expectation calls for the multiplication of each value in the sample space by the probability of that value, followed by the summation of all the products. The object "X.val" contains all the values of the random variable and the object "P.val" contains the probabilities of these values. Hence, the expression "X.val*P.val" produces the product of each value of the random variable times the probability of that value. Summation of these products with the function "sum" gives the expectation, which is saved in an object called "EX".

The general formula for the computation of the variance of a random variable involves the product of the squared deviation associated with each value with the probability of that value, followed by the summation of all the products. The expression "(X.val-EX)^2" produces the sequence of squared deviations from the expectation for all the values of the random variable. Summation of the products of these squared deviations with the probabilities of the values (the outcome of "(X.val-EX)^2*P.val") gives the variance.

When the value of p changes (without changing the number of trials n) then the probabilities that are assigned to each of the values of the sample space of the Binomial random variable change, but the sample space itself does not. For example, consider rolling a die 10 times and counting the number of times that the face 3 was obtained. Having the face 3 turn up is a "Success". The probability p of a success in this example is 1/6, since the given face is one out of 6 equally likely faces. The resulting random variable, which counts the total number of successes in 10 trials, has a Binomial(10, 1/6) distribution. The sample space is yet again equal to the set of integers {0, 1, …, 10}.
However, the probabilities of the values are different. These probabilities can again be computed with the aid of the function "dbinom":
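A sketch of that computation:

```r
# probabilities of the values of a Binomial(10, 1/6)
dbinom(0:10, 10, 1/6)
```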
In the accompanying figure the probabilities for the Binomial(10, 1/6), the Binomial(10, 1/2), and the Binomial(10, 0.6) distributions are plotted side by side. In all 3 distributions the sample space is the same, the integers between 0 and 10. However, the probabilities of the different values differ. (Note that all bars should be placed on top of the integers. For clarity of presentation, the bars associated with the Binomial(10, 1/6) are shifted slightly to the left and the bars associated with the Binomial(10, 0.6) are shifted slightly to the right.)

The expectation of the Binomial(10, 0.5) distribution is equal to 10 × 0.5 = 5. Compare this to the expectation of the Binomial(10, 1/6) distribution, which is 10 × (1/6) = 1.666667, and to the expectation of the Binomial(10, 0.6) distribution, which equals 10 × 0.6 = 6. The variance of the Binomial(10, 0.5) distribution is 10 × 0.5 × 0.5 = 2.5. The variance when p = 1/6 is 10 × (1/6) × (5/6) = 1.388889 and the variance when p = 0.6 is 10 × 0.6 × 0.4 = 2.4.

Example 5.1. As an application of the Binomial distribution consider a pre-election poll. A candidate is running for office and is interested in knowing the percentage of support in the general population for their candidacy. Denote the probability of support by p. In order to estimate the percentage, a sample of size 300 is selected from the population. Let X be the count of supporters in the sample. A natural model for the distribution of X is the Binomial(300, p) distribution, since each subject in the sample may be a supporter ("Success") or may not be a supporter ("Failure"). The probability that a subject supports the candidate is p and there are n = 300 subjects in the sample.

Example 5.2. As another example consider the procedure for quality control that is described in the Discussion Forum of the previous chapter. According to the procedure 20 items are tested and the number of faulty items is recorded.
If p is the probability that an item is identified as faulty, then the distribution of the total number of faulty items may be modeled by the Binomial(20, p) distribution.

In both examples one may be interested in making statements about the probability p based on the sample. Statistical inference relates the actual count obtained in the sample to the theoretical Binomial distribution in order to make such statements.

The Poisson Random Variable

The Poisson distribution is used as an approximation of the total number of occurrences of rare events. Consider, for example, the Binomial setting that involves n trials with p the probability of success in each trial. If p is small but n is large, then the number of successes X has, approximately, the Poisson distribution.

The sample space of the Poisson random variable is the unbounded collection of integers: {0, 1, 2, …}. Any integer value is assigned a positive probability. Hence, the Poisson random variable is a convenient model when the maximal number of occurrences of the events is a priori unknown or is very large. For example, one may use the Poisson distribution to model the number of phone calls that enter a switchboard in a given interval of time or the number of malfunctioning components in a shipment of some product.

The Binomial distribution was specified by the number of trials n and the probability of success in each trial p. The Poisson distribution is specified by its expectation, which we denote by λ. The expression "X ∼ Poisson(λ)" states that the random variable X has a Poisson distribution with expectation E(X) = λ.

The function "dpois" computes the probability, according to the Poisson distribution, of values that are entered as the first argument to the function. The expectation of the distribution is entered in the second argument. The function "ppois" computes the cumulative probability.
Consequently, we can compute the probabilities and the cumulative probabilities of the values between 0 and 10 for the Poisson(2) distribution via:
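A sketch of those computations:

```r
x <- 0:10
dpois(x, 2)  # Poisson(2) probabilities of the values 0, 1, ..., 10
ppois(x, 2)  # cumulative probabilities P(X <= x)
```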
we have based the computation in R on the first 11 values of the distribution only, instead of the infinite sequence of values. A more accurate result may be obtained by the consideration of the first 101 values:
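A sketch of the truncated computation of the expectation and the variance (the object names are illustrative):

```r
x <- 0:100           # the first 101 values of the sample space
p <- dpois(x, 2)     # their Poisson(2) probabilities
EX <- sum(x * p)     # expectation: approximately 2
sum((x - EX)^2 * p)  # variance: approximately 2
```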
## [1] 2

In the last expression we have computed the variance of the Poisson distribution and obtained that it is equal to the expectation. This result can be validated mathematically: for the Poisson distribution it is always the case that the variance is equal to the expectation, namely to λ.

In the accompanying figure you may find the probabilities of the Poisson distribution for λ = 0.5, λ = 1 and λ = 2. Notice once more that the sample space is the same for all the Poisson distributions. What varies when we change the value of λ are the probabilities. Observe that as λ increases, the probability of larger values increases as well.

Example 5.3. A radioactive element decays by the release of subatomic particles and energy. The decay activity is measured in terms of the number of decays per second in a unit mass. A typical model for the distribution of the number of decays is the Poisson distribution. Observe that the number of decays in a second is an integer and, in principle, it may obtain any integer value larger than or equal to zero. The event of a radioactive decay of an atom is a relatively rare event. Therefore, the Poisson model is likely to fit this phenomenon.

Example 5.4. Consider an overhead power line suspended between two utility poles. During rain, drops of water may hit the power line. The total number of drops that hit the line in a one-minute period may be modeled by a Poisson random variable.

Continuous Random Variable

Many types of measurements, such as height, weight, angle, temperature, etc., may in principle have a continuum of possible values. Continuous random variables are used to model uncertainty regarding the future values of such measurements. The main difference between discrete random variables, which are the type we have examined thus far, and continuous random variables, which are added now to the list, is in the sample space, i.e., the collection of possible outcomes.
The former type is used when the possible outcomes are separated from each other, as the integers are. The latter type is used when the possible outcomes are the entire line of real numbers or when they form an interval (possibly an open-ended one) of real numbers.

The difference between the two types of sample spaces implies differences in the way the distribution of the random variables is described. For discrete random variables one may list the probability associated with each value in the sample space using a table, a formula, or a bar plot. For continuous random variables, on the other hand, probabilities are assigned to intervals of values, and not to specific values. Hence, densities are used in order to display the distribution. Densities are similar to histograms, with areas under the plot corresponding to probabilities. We will provide a more detailed description of densities as we discuss the different examples of continuous random variables.

For continuous random variables integration replaces summation, and the density replaces the probability, in the computation of quantities such as the probability of an event, the expectation, and the variance. Hence, if the expectation of a discrete random variable is given by the formula E(X) = ∑ₓ (x × P(x)), which involves the summation over all values of the product between the value and the probability of the value, then for a continuous random variable the definition becomes:

E(X) = ∫ (x × f(x)) dx ,

where f(x) is the density of X at the value x.

³The number of decays may also be considered in the Binomial(n, p) setting. The number n is the total number of atoms in the unit mass and p is the probability that an atom decays within the given second. However, since n is very large and p is very small, we get that the Poisson distribution is an appropriate model for the count.
Therefore, in the expectation of a continuous random variable one multiplies the value by the density at the value. This product is then integrated over the sample space. Likewise, the formula V(𝑋) = βˆ‘ ((π‘₯ βˆ’ E(𝑋))Β² Γ— P(π‘₯)) for the variance is replaced by: V(𝑋) = ∫ ((π‘₯ βˆ’ E(𝑋))Β² Γ— 𝑓(π‘₯))𝑑π‘₯ . Nonetheless, the intuitive interpretation of the expectation as the central value of the distribution that identifies its location, and the interpretation of the standard deviation (the square root of the variance) as the summary of the total spread of the distribution, are still valid. In this section we will describe two types of continuous random variables: Uniform and Exponential. In the next chapter another example – the Normal distribution – will be introduced.

The Uniform Random Variable

The Uniform distribution is used in order to model measurements that may have values in a given interval, with all values in this interval equally likely to occur. For example, consider a random variable 𝑋 with the Uniform distribution over the interval [3, 7], denoted by β€œπ‘‹ ∼ Uniform(3, 7)”. The density function at given values may be computed with the aid of the function β€œdunif”. For instance, let us compute the density of the Uniform(3, 7) distribution over the integers {0, 1, … , 10}:

## [1] 0.00 0.00 0.00 0.25 0.25 0.25 0.25 0.25 0.00 0.00 0.00

Notice that for the values 0, 1, and 2, and the values 8, 9 and 10 that are outside of the interval, the density is equal to zero, indicating that such values cannot occur in the given distribution. The values of the density at integers inside the interval are positive and equal to each other. The density is not restricted to the integers: it is equal to 1/4 at every point of the interval [3, 7].
A plot of the Uniform(3, 7) density is given in Figure in the form of a solid line. Observe that the density is positive over the interval [3, 7], where its height is 1/4. Area under the curve of the density corresponds to probability. Indeed, the fact that the total probability is one is reflected in the total area under the curve being equal to 1. Over the interval [3, 7] the density forms a rectangle. The base of the rectangle is the length of the interval, 7 βˆ’ 3 = 4. The height of the rectangle is thus equal to 1/4 in order to produce a total area of 4 Γ— (1/4) = 1. The function β€œpunif” computes the cumulative probability of the Uniform distribution. The probability P(𝑋 ≀ 4.73), for 𝑋 ∼ Uniform(3, 7), is given by:

## [1] 0.4325

This probability corresponds to the marked area to the left of the point π‘₯ = 4.73 in Figure . The area of the marked rectangle is equal to the length of the base, 4.73 βˆ’ 3 = 1.73, times the height of the rectangle, 1/(7 βˆ’ 3) = 1/4. Indeed:
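The R calls that produced the outputs quoted above were lost from this copy of the text; based on the functions the passage names, they were presumably of the following form:

```r
dunif(0:10, 3, 7)          # density at the integers 0, 1, ..., 10
punif(4.73, 3, 7)          # cumulative probability P(X <= 4.73)
(4.73 - 3) * (1/(7 - 3))   # base times height of the marked rectangle
```

The last line carries out the rectangle computation by hand and agrees with the value returned by "punif".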
1.73 Γ— (1/4) = 0.4325 is the area of the marked rectangle and is equal to the probability. Let us use R in order to plot the density and the cumulative probability functions of the Uniform distribution. We first produce a large number of points in the region we want to plot. The points are produced with the aid of the function β€œseq”. The output of this function is a sequence with equally spaced values. The starting value of the sequence is the first argument in the input of the function and the last value is the second argument. The argument β€œlength=1000” sets the length of the sequence, 1,000 values in this case:
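The listing that followed here was lost in extraction; based on the description in the text, it presumably read:

```r
x <- seq(0, 10, length = 1000)  # 1,000 equally spaced points between 0 and 10
den <- dunif(x, 3, 7)           # the Uniform(3, 7) density at each point
plot(x, den)                    # scatter plot of the 1,000 points
```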
The object β€œden” is a sequence of length 1,000 that contains the density of the Uniform(3, 7) evaluated over the values of β€œx”. When we apply the function β€œplot” to the two sequences we get a scatter plot of the 1,000 points. A scatter plot is a plot of points. Each point in the scatter plot is identified by its horizontal location on the plot (its β€œπ‘₯” value) and by its vertical location on the plot (its β€œπ‘¦β€ value). The horizontal value of each point is determined by the first argument to the function β€œplot” and the vertical value is determined by the second argument. For example, the first value in the sequence β€œx” is 0. The value of the Uniform density at this point is 0. Hence, the first value of the sequence β€œden” is also 0. A point that corresponds to these values is produced in the plot. The horizontal value of the point is 0 and the vertical value is 0. In a similar way the other 999 points are plotted. The last point to be plotted has a horizontal value of 10 and a vertical value of 0. The number of points that are plotted is large, and they overlap each other in the graph, producing the impression of a continuum. In order to obtain nicer looking plots we may choose to connect the points to each other with segments and to use smaller points. This may be achieved by the addition of the argument β€œtype=l”, with the letter l for line, to the plotting function:
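The improved plotting call described above was presumably the following (the setup lines are repeated so the snippet is self-contained):

```r
x <- seq(0, 10, length = 1000)
den <- dunif(x, 3, 7)
plot(x, den, type = "l")  # "l" for line: the points are joined by segments
</imports-placeholder>
```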
The expectation of a Uniform(π‘Ž, 𝑏) random variable is E(𝑋) = (π‘Ž + 𝑏)/2, the middle of the interval, and the variance is V(𝑋) = (𝑏 βˆ’ π‘Ž)Β²/12, with the standard deviation being the square root of this value. Specifically, for 𝑋 ∼ Uniform(3, 7) we get that V(𝑋) = (7 βˆ’ 3)Β²/12 = 1.333333. The standard deviation is equal to √1.333333 = 1.154701.

Example 5.5. In Example we considered rain drops that hit an overhead power line suspended between two utility poles. The number of drops that hit the line can be modeled using the Poisson distribution. The position between the two poles where a rain drop hits the line can be modeled by the Uniform distribution. The rain drop can hit any position between the two utility poles, and hitting one position along the line is as likely as hitting any other position.

Example 5.6. Meiosis is the process in which a diploid cell that contains two copies of the genetic material produces a haploid cell with only one copy (sperm or egg, depending on the sex). The resulting molecule of genetic material is a linear molecule (chromosome) that is composed of consecutive segments: a segment that originated from one of the two copies followed by a segment from the other copy, and vice versa. The border points between segments are called points of crossover. The Haldane model for crossovers states that the position of a crossover between two given loci on the chromosome corresponds to the Uniform distribution and the total number of crossovers between these two loci corresponds to the Poisson distribution.

The Exponential Random Variable

The Exponential distribution is frequently used to model times between events: for example, times between incoming phone calls, or the time until a component malfunctions. We denote the Exponential distribution via β€œπ‘‹ ∼ Exponential(πœ†)”, where πœ† is a parameter that characterizes the distribution and is called the rate of the distribution. The overlap between the parameter used to characterize the Exponential distribution and the one used for the Poisson distribution is deliberate. The two distributions are tightly interconnected.
As a matter of fact, it can be shown that if the distribution of the time between occurrences of a phenomenon is the Exponential distribution with rate πœ†, then the total number of occurrences of the phenomenon within a unit interval of time has a Poisson(πœ†) distribution. The sample space of an Exponential random variable contains all non-negative numbers. Consider, for example, 𝑋 ∼ Exponential(0.5). The density of the distribution in the range between 0 and 10 is presented in Figure . Observe that in the Exponential distribution smaller values are more likely to occur in comparison to larger values. This is indicated by the density being larger in the vicinity of 0. The density of the Exponential distribution given in the plot is positive, but only barely so, for values larger than 10.
The density of the Exponential distribution can be computed with the aid of the function β€œdexp”. The cumulative probability can be computed with the function β€œpexp”. For illustration, assume 𝑋 ∼ Exponential(0.5). Say one is interested in the computation of the probability P(2 < 𝑋 ≀ 6) that the random variable obtains a value that belongs to the interval (2, 6]. The required probability is indicated as the marked area in Figure . This area can be computed as the difference between the probability P(𝑋 ≀ 6), the area to the left of 6, and the probability P(𝑋 ≀ 2), the area to the left of 2:

## [1] 0.3180924

The difference is the probability of belonging to the interval, namely the area marked in the plot. The expectation of 𝑋, when 𝑋 ∼ Exponential(πœ†), is given by the equation: E(𝑋) = 1/πœ† , and the variance is given by: V(𝑋) = 1/πœ†Β² . The standard deviation is the square root of the variance, namely 1/πœ†. Observe that the larger the rate, the smaller the expectation and the standard deviation. In Figure the densities of the Exponential distribution are plotted for πœ† =
0.5, πœ† = 1, and πœ† = 2. Notice that with the increase in the value of the parameter then the values of the random variable tends to become smaller. This inverse relation makes sense in connection to the Poisson distribution. Recall that the Poisson distribution corresponds to the total number of occurrences in a unit interval of time when the time between occurrences has an Exponential distribution. A larger expectation πœ† of the Poisson corresponds to a larger number of occurrences that are likely to take place during the unit interval of time. The larger is the number of occurrences the smaller are the time intervals between occurrences. Example 5.7. Consider Examples and that deal with rain dropping on a power line. The times between consecutive hits of the line may be mod- eled by the Exponential distribution. Hence, the time to the first hit has an Exponential distribution. The time between the first and the second hit is also Exponentially distributed, and so on. Example 5.8. Return to Example that deals with the radio activity of some element. The total count of decays per second is model by the Poisson distribution. The times between radio active decays is modeled according to the Exponential distribution. The rate πœ† of that Exponential distribution is equal to the expectation of the total count of decays in one second, i.e. the expectation of the Poisson distribution.
Exercises

Exercise 5.1. A particular measles vaccine produces a reaction (a fever higher than 102 degrees Fahrenheit) in each vaccinee with probability of 0.09. A clinic vaccinates 500 people each day.

1. What is the expected number of people that will develop a reaction each day?
2. What is the standard deviation of the number of people that will develop a reaction each day?
3. In a given day, what is the probability that more than 40 people will develop a reaction?
4. In a given day, what is the probability that the number of people that will develop a reaction is between 45 and 50 (inclusive)?
Exercise 5.2. The Negative-Binomial distribution is yet another example of a discrete, integer valued, random variable. The sample space of the distribution is the collection of all non-negative integers {0, 1, 2, …}. The fact that a random variable 𝑋 has this distribution is marked by β€œπ‘‹ ∼ Negative-Binomial(π‘Ÿ, 𝑝)”, where π‘Ÿ and 𝑝 are parameters that specify the distribution. Consider 3 random variables from the Negative-Binomial distribution:

𝑋1 ∼ Negative-Binomial(2, 0.5)
𝑋2 ∼ Negative-Binomial(4, 0.5)
𝑋3 ∼ Negative-Binomial(8, 0.8)

The bar plots of these random variables are presented in Figure , reorganized in a random order.

1. Produce bar plots of the distributions of the random variables 𝑋1, 𝑋2, 𝑋3 in the range of integers between 0 and 15 and thereby identify the pair of parameters that produced each one of the plots in Figure . Notice that the bar plots can be produced with the aid of the function β€œplot” and the function β€œdnbinom(x,r,p)”, where β€œx” is a sequence of integers and β€œr” and β€œp” are the parameters of the distribution. Pay attention to the fact that you should use the argument β€œtype = h” in the function β€œplot” in order to produce the vertical bars.

2. Below is a list of pairs that include an expectation and a variance. Each of the pairs is associated with one of the random variables 𝑋1, 𝑋2, and 𝑋3:

E(𝑋) = 4, V(𝑋) = 8.
E(𝑋) = 2, V(𝑋) = 4.
E(𝑋) = 2, V(𝑋) = 2.5.

Use Figure in order to match each random variable with its associated pair. Do not use numerical computations or formulae for the expectation and the variance of the Negative-Binomial distribution in order to carry out the matching. Use, instead, the structure of the bar plots.

Summary

Glossary

Binomial Random Variable: The number of successes among 𝑛 repeats of independent trials with a probability 𝑝 of success in each trial. The distribution is marked as Binomial(𝑛, 𝑝).
Poisson Random Variable: An approximation to the number of occurrences of a rare event, when the expected number of events is πœ†. The distribution is marked as Poisson(πœ†).
Density: A histogram-like curve that describes the distribution of a continuous random variable. The area under the curve corresponds to probability.

Uniform Random Variable: A model for a measurement with equally likely outcomes over an interval [π‘Ž, 𝑏]. The distribution is marked as Uniform(π‘Ž, 𝑏).

Exponential Random Variable: A model for times between events. The distribution is marked as Exponential(πœ†).

Discuss in the Forum

This unit deals with two types of discrete random variables, the Binomial and the Poisson, and two types of continuous random variables, the Uniform and the Exponential. Depending on the context, these types of random variables may serve as theoretical models of the uncertainty associated with the outcome of a measurement. In your opinion, is it or is it not useful to have a theoretical model for a situation that occurs in real life? When forming your answer to this question you may give an example of a situation from your own field of interest for which a random variable, possibly one of the types presented in this unit, can serve as a model. Discuss the importance (or lack thereof) of having a theoretical model for the situation. For example, the Exponential distribution may serve as a model for the time until an atom of a radioactive element decays by the release of subatomic particles and energy. The decay activity is measured in terms of the number of decays per second. This number is modeled as having a Poisson distribution. Its expectation is the rate of the Exponential distribution. For the radioactive element Carbon-14 (ΒΉβ΄C) the decay rate is 3.8394 Γ— 10⁻¹² particles per second. Computations that are based on the Exponential model may be used in order to date ancient specimens.
Student Learning Objective

This chapter introduces a very important bell-shaped distribution known as the Normal distribution. Computations associated with this distribution are discussed, including the percentiles of the distribution and the identification of intervals of subscribed probability. The Normal distribution may serve as an approximation to other distributions. We demonstrate this property by showing that under appropriate conditions the Binomial distribution can be approximated by the Normal distribution. This property of the Normal distribution will be picked up in the next chapter, where the mathematical theory that establishes the Normal approximation is demonstrated. By the end of this chapter, the student should be able to:

Recognize the Normal density and apply R functions for computing Normal probabilities and percentiles.

Associate the distribution of a Normal random variable with that of its standardized counterpart, which is obtained by centering and re-scaling.

Use the Normal distribution to approximate the Binomial distribution.

The Normal Random Variable

The Normal distribution is the most important of all distributions that are used in statistics. In many cases it serves as a generic model for the distribution of a measurement. Moreover, even in cases where the measurement is modeled by other distributions (i.e. Binomial, Poisson, Uniform, Exponential, etc.) the Normal distribution emerges as an approximation of the distribution of numerical characteristics of the data produced by such measurements.
The Normal Distribution

A Normal random variable has a continuous distribution over the sample space of all numbers, negative or positive. We denote the Normal distribution via β€œπ‘‹ ∼ Normal(πœ‡, 𝜎²)”, where πœ‡ = E(𝑋) is the expectation of the random variable and 𝜎² = V(𝑋) is its variance. Consider, for example, 𝑋 ∼ Normal(2, 9). The density of the distribution is presented in Figure . Observe that the distribution is symmetric about the expectation 2. The random variable is more likely to obtain its value in the vicinity of the expectation. Values much larger or much smaller than the expectation are substantially less likely.
The density of the Normal distribution can be computed with the aid of the function β€œdnorm”. The cumulative probability can be computed with the function β€œpnorm”. For illustrating the use of the latter function, assume that 𝑋 ∼ Normal(2, 9). Say one is interested in the computation of the probability P(0 < 𝑋 ≀ 5) that the random variable obtains a value that belongs to the interval (0, 5]. The required probability is indicated by the marked area in Figure . This area can be computed as the difference between the probability P(𝑋 ≀ 5), the area to the left of 5, and the probability P(𝑋 ≀ 0), the area to the left of 0:
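The call itself was lost in extraction; given the functions the text names, it was presumably:

```r
# P(0 < X <= 5) for X ~ Normal(2, 9), i.e. mean 2 and sd sqrt(9) = 3
pnorm(5, 2, 3) - pnorm(0, 2, 3)
```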
The difference is the probability of being inside the interval, which turns out to be approximately equal to 0.589. Notice that the expectation πœ‡ of the Normal distribution is entered as the second argument to the function. The third argument to the function is the standard deviation, i.e. the square root of the variance. In this example, the standard deviation is √9 = 3. Figure displays the densities of the Normal distribution for the combinations πœ‡ = 0, 𝜎² = 1 (the red line); πœ‡ = 2, 𝜎² = 9 (the black line); and πœ‡ = βˆ’3, 𝜎² = 1/4 (the green line). Observe that the smaller the variance, the more concentrated the distribution of the random variable is about the expectation.
FIGURE 6.2: The Normal Distribution for Various Values of πœ‡ and 𝜎²

Example 6.1. IQ tests are a popular (and controversial) means for measuring intelligence. They are produced as a (weighted) average of the responses to a long list of questions, designed to test different abilities. The score of the test across the entire population is set to be equal to 100 and the standard deviation is set to 15. The distribution of the score is Normal. Hence, if 𝑋 is the IQ score of a random subject then 𝑋 ∼ Normal(100, 15Β²).

Example 6.2. Any measurement that is produced as a result of the combination of many independent influencing factors is likely to possess the Normal distribution. For example, the height of a person is influenced both by genetics and by the environment in which that person grew up. Both the genetic and the environmental influences are a combination of many factors. Thereby, it should not come as a surprise that the heights of people in a population tend to follow the Normal distribution.

The Standard Normal Distribution

The standard Normal distribution is a Normal distribution of standardized values, which are called 𝑧-scores. A 𝑧-score is the original measurement measured in units of the standard deviation from the expectation. For example, if the
expectation of a Normal distribution is 2 and the standard deviation is 3 = √9, then the value 0 is 2/3 standard deviations smaller than (or to the left of) the expectation. Hence, the 𝑧-score of the value 0 is βˆ’2/3. The calculation of the 𝑧-score emerges from the equation: (0 =) π‘₯ = πœ‡ + 𝑧 β‹… 𝜎 (= 2 + 𝑧 β‹… 3) The 𝑧-score is obtained by solving the equation 0 = 2 + 𝑧 β‹… 3 ⟹ 𝑧 = (0 βˆ’ 2)/3 = βˆ’2/3 . In a similar way, the 𝑧-score of the value π‘₯ = 5 is equal to 1, following the solution of the equation 5 = 2 + 𝑧 β‹… 3, which leads to 𝑧 = (5 βˆ’ 2)/3 = 1. The standard Normal distribution is the distribution of a standardized Normal measurement. The expectation of the standard Normal distribution is 0 and the variance is 1. When 𝑋 ∼ 𝑁(πœ‡, 𝜎²) has a Normal distribution with expectation πœ‡ and variance 𝜎², the transformed random variable 𝑍 = (𝑋 βˆ’ πœ‡)/𝜎 has the standard Normal distribution, 𝑍 ∼ 𝑁(0, 1). The transformation corresponds to the re-expression of the original measurement in terms of a new β€œzero” and a new unit of measurement. The new β€œzero” is the expectation of the original measurement and the new unit is the standard deviation of the original measurement. Computation of probabilities associated with a Normal random variable 𝑋 can be carried out with the aid of the standard Normal distribution. For example, consider the computation of the probability P(0 < 𝑋 ≀ 5) for 𝑋 ∼ 𝑁(2, 9), which has expectation πœ‡ = 2 and standard deviation 𝜎 = 3. Consider 𝑋's standardized values: 𝑍 = (𝑋 βˆ’ 2)/3. The boundaries of the interval [0, 5], namely 0 and 5, have standardized 𝑧-scores of (0 βˆ’ 2)/3 = βˆ’2/3 and (5 βˆ’ 2)/3 = 1, respectively. Clearly, the original measurement 𝑋 falls between the original boundaries (0 < 𝑋 ≀ 5) if, and only if, the standardized measurement 𝑍 falls between the standardized boundaries (βˆ’2/3 < 𝑍 ≀ 1).
Therefore, the probability that 𝑋 obtains a value in the range [0, 5] is equal to the probability that 𝑍 obtains a value in the range [βˆ’2/3, 1]. The function β€œpnorm” was used in the previous subsection in order to compute the probability that 𝑋 obtains values between 0 and 5. The computation produced the probability 0.5888522. We can repeat the computation by the application of the same function to the standardized values:

## [1] 0.5888522

The value that is being computed, the area under the graph of the standard Normal distribution, is presented in Figure . Recall that 3 arguments were specified in the previous application of the function β€œpnorm”: the π‘₯ value, the
expectation, and the standard deviation. In the given application we did not specify the last two arguments, only the first one. (Notice that the output of the expression β€œ(5-2)/3” is a single number and, likewise, the output of the expression β€œ(0-2)/3” is also a single number.) Most R functions have many arguments, which enables flexible application in a wide range of settings. For convenience, however, default values are set for most of these arguments. These default values are used unless an alternative value for the argument is set when the function is called. The default value of the second argument of the function β€œpnorm”, which specifies the expectation, is β€œmean=0”, and the default value of the third argument, which specifies the standard deviation, is β€œsd=1”. Therefore, if no other value is set for these arguments the function computes the cumulative distribution function of the standard Normal distribution.

Computing Percentiles

Consider the issue of determining the range that contains 95% of the probability for a Normal random variable. We start with the standard Normal distribution. Consult Figure . The figure displays the standard Normal distribution with the central region shaded. The area of the shaded region is 0.95. We may find the 𝑧-values of the boundaries of the region, denoted in the figure as 𝑧0 and 𝑧1, by the investigation of the cumulative distribution function. Indeed, in order to have 95% of the distribution in the central region one should leave out 2.5% of the distribution in each of the two tails. That is, 0.025 should be the area of the unshaded region to the right of 𝑧1 and, likewise, 0.025 should be the area of the unshaded region to the left of 𝑧0. In other words, the cumulative probability up to 𝑧0 should be 0.025 and the cumulative probability up to 𝑧1 should be 0.975.
In general, given a random variable 𝑋 and given a percent 𝑝, the π‘₯ value with the property that the cumulative distribution up to π‘₯ is equal to the probability 𝑝 is called the 𝑝-percentile of the distribution. Here we seek the 2.5%-percentile and the 97.5%-percentile of the standard Normal distribution.
FIGURE 6.4: Central 95% of the Standard Normal Distribution

The percentiles of the Normal distribution are computed by the function β€œqnorm”. The first argument to the function is a probability (or a sequence of probabilities); the second and third arguments are the expectation and the standard deviation of the Normal distribution. The default values of these arguments are set to 0 and 1, respectively. Hence, if these arguments are not provided, the function computes the percentiles of the standard Normal distribution. Let us apply the function in order to compute 𝑧1 and 𝑧0:
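The calls were presumably (a reconstruction; only the second output survives in the text below):

```r
qnorm(0.975)  # z1: the 97.5%-percentile of the standard Normal
qnorm(0.025)  # z0: the 2.5%-percentile
```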
## [1] -1.959964

Observe that 𝑧1 is practically equal to 1.96 and 𝑧0 = βˆ’1.96 = βˆ’π‘§1. The fact that 𝑧0 is the negative of 𝑧1 results from the symmetry of the standard Normal distribution about 0. As a conclusion we get that for the standard Normal distribution 95% of the probability is concentrated in the range [βˆ’1.96, 1.96]. The problem of determining the central range that contains 95% of the distribution can be addressed in the context of the original measurement 𝑋 (see Figure ). We seek in this case an interval centered at the expectation 2,
which is the center of the distribution of 𝑋, unlike 0, which was the center of the standardized values 𝑍. One way of solving the problem is via the application of the function β€œqnorm” with the appropriate values for the expectation and the standard deviation:
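Presumably the calls were (only the second output survives in the text below):

```r
qnorm(0.975, 2, 3)  # x1: approximately 7.879892
qnorm(0.025, 2, 3)  # x0: approximately -3.879892
```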
## [1] -3.879892

Hence, we get that π‘₯0 = βˆ’3.88 has the property that the total probability to its left is 0.025 and π‘₯1 = 7.88 has the property that the total probability to its right is 0.025. The total probability in the range [βˆ’3.88, 7.88] is 0.95. An alternative approach for obtaining the given interval exploits the interval that was obtained for the standardized values. An interval [βˆ’1.96, 1.96] of standardized 𝑧-values corresponds to an interval [2 βˆ’ 1.96 β‹… 3, 2 + 1.96 β‹… 3] of the original π‘₯-values:
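The back-transformation can be carried out directly (a reconstruction of the lost listing):

```r
2 + 1.96 * 3  # upper end of the interval: 7.88
2 - 1.96 * 3  # lower end of the interval: -3.88
```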
Hence, we again produce the interval [βˆ’3.88, 7.88], the interval that was obtained before as the central interval that contains 95% of the distribution of the Normal(2, 9) random variable. In general, if 𝑋 ∼ 𝑁(πœ‡, 𝜎²) is a Normal random variable then the interval [πœ‡ βˆ’ 1.96 β‹… 𝜎, πœ‡ + 1.96 β‹… 𝜎] contains 95% of the distribution of the random variable. Frequently one uses the notation πœ‡ Β± 1.96 β‹… 𝜎 to describe such an interval.

Outliers and the Normal Distribution

Consider, next, the computation of the interquartile range in the Normal distribution. Recall that the interquartile range is the length of the central interval that contains 50% of the distribution. This interval starts at the first quartile (Q1), the value that splits the distribution so that 25% of the distribution is to the left of the value and 75% is to the right of it. The interval ends at the third quartile (Q3), where 75% of the distribution is to the left and 25% is to the right. For the standard Normal distribution the third and first quartiles can be computed with the aid of the function β€œqnorm”:
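Presumably the calls were (only the second output survives in the text below):

```r
qnorm(0.75)  # third quartile of the standard Normal: 0.6744898
qnorm(0.25)  # first quartile: -0.6744898
```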
## [1] -0.6744898

Observe that for the standard Normal distribution 75% of the distribution is to the left of the value 0.6744898, which is the third quartile of this distribution. Likewise, 25% of the standard Normal distribution is to the left of the value -0.6744898, which is the first quartile. The interquartile range is the length of the interval between the third and the first quartiles. In the case of the standard Normal distribution this length is equal to 0.6744898 βˆ’ (βˆ’0.6744898) = 1.348980. In Chapter we considered box plots as a means for the graphical display of numerical data. The box plot includes a vertical rectangle that starts at the first quartile and ends at the third quartile, with the median marked within the box. The rectangle contains 50% of the data. Whiskers extend from the ends of this rectangle to the smallest and to the largest data values that are not outliers. Outliers are values that lie outside of the normal range of the data. Outliers are identified as values that are more than 1.5 times the interquartile range away from the ends of the central rectangle. Hence, a value is an outlier if it is larger than the third quartile plus 1.5 times the interquartile range or if it is less than the first quartile minus 1.5 times the interquartile range. How likely is it to obtain an outlier value when the measurement has the standard Normal distribution? We obtained that the third quartile of the standard Normal distribution is equal to 0.6744898 and the first quartile is minus this value. The interquartile range is the difference between the third and first quartiles. The upper and lower thresholds for defining outliers are:

## [1] 2.697959

## [1] -2.697959

Hence, a value larger than 2.697959 or smaller than -2.697959 would be identified as an outlier.
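The threshold computations described above were presumably of the following form:

```r
iqr <- qnorm(0.75) - qnorm(0.25)  # interquartile range: 1.348980
qnorm(0.75) + 1.5 * iqr           # upper outlier threshold: 2.697959
qnorm(0.25) - 1.5 * iqr           # lower outlier threshold: -2.697959
```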
The probability of being less than the upper threshold 2.697959 in the standard Normal distribution is computed with the expression β€œpnorm(2.697959)”. The probability of being above the threshold is 1 minus that probability, which is the outcome of the expression β€œ1-pnorm(2.697959)”. By the symmetry of the standard Normal distribution we get that the probability of being below the lower threshold -2.697959 is equal to the probability of being above the upper threshold. Consequently, the probability of obtaining an outlier is equal to twice the probability of being above the upper threshold:
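Putting the pieces together (a reconstruction of the lost listing):

```r
2 * (1 - pnorm(2.697959))  # probability of an outlier: roughly 0.007
```

So under the standard Normal model, only about 0.7% of observations would be flagged as outliers by the box-plot rule.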
Approximation of the Binomial Distribution

The Normal distribution emerges frequently as an approximation of the distribution of data characteristics. The probability theory that mathematically establishes such an approximation is called the Central Limit Theorem and is discussed in the next chapter.
Approximate Binomial Probabilities and Percentiles

Consider, for example, the probability of obtaining between 1940 and 2060 heads when tossing 4,000 fair coins. Let 𝑋 be the total number of heads. The tossing of a coin is a trial with two possible outcomes: β€œHead” and β€œTail.” The probability of a β€œHead” is 0.5 and there are 4,000 trials. Let us call obtaining a β€œHead” in a trial a β€œSuccess”. Observe that the random variable 𝑋 counts the total number of successes. Hence, 𝑋 ∼ Binomial(4000, 0.5). The probability P(1940 ≀ 𝑋 ≀ 2060) can be computed as the difference between the probability P(𝑋 ≀ 2060) of being less than or equal to 2060 and the probability P(𝑋 < 1940) of being strictly less than 1940. However, 1939 is the largest integer that is still strictly less than the integer 1940. As a result we get that P(𝑋 < 1940) = P(𝑋 ≀ 1939). Consequently, P(1940 ≀ 𝑋 ≀ 2060) = P(𝑋 ≀ 2060) βˆ’ P(𝑋 ≀ 1939). Applying the function β€œpbinom” for the computation of the Binomial cumulative probability, namely the probability of being less than or equal to a given value, we get that the probability in the range between 1940 and 2060 is equal to

## [1] 0.9442883

This is an exact computation. The Normal approximation produces an approximate evaluation, not an exact computation. The Normal approximation replaces Binomial computations by computations carried out for the Normal distribution. The computation of a probability for a Binomial random variable is replaced by computation of a probability for a Normal random variable that has the same expectation and standard deviation as the Binomial random variable. Notice that if 𝑋 ∼ Binomial(4000, 0.5) then the expectation is E(𝑋) = 4,000 Γ— 0.5 = 2,000 and the variance is V(𝑋) = 4,000 Γ— 0.5 Γ— 0.5 = 1,000, with the standard deviation being the square root of the variance.
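The exact Binomial computation quoted above was presumably:

```r
# exact: P(1940 <= X <= 2060) for X ~ Binomial(4000, 0.5)
pbinom(2060, 4000, 0.5) - pbinom(1939, 4000, 0.5)
```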
Repeating the same computation that we conducted for the Binomial random variable, but this time with the function β€œpnorm” that is used for the computation of the Normal cumulative probability, we get:
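The Normal computation was presumably:

```r
mu  <- 4000 * 0.5              # expectation: 2000
sig <- sqrt(4000 * 0.5 * 0.5)  # standard deviation: sqrt(1000)
pnorm(2060, mu, sig) - pnorm(1939, mu, sig)
```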
## [1] 0.9442441

Observe that in this example the Normal approximation of the probability (0.9442441) agrees with the Binomial computation of the probability (0.9442883) up to 3 significant digits. Normal computations may also be applied in order to find approximate percentiles of the Binomial distribution. For example, let us identify the central region that contains (approximately) 95% of the distribution of a Binomial(4000, 0.5) random variable. Towards that end we can identify the boundaries of the region for the Normal distribution with the same expectation and standard deviation as that of the target Binomial distribution:
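Presumably the calls were (only the first output survives in the text below):

```r
qnorm(0.025, 2000, sqrt(1000))  # lower boundary: approximately 1938.02
qnorm(0.975, 2000, sqrt(1000))  # upper boundary: approximately 2061.98
```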
## [1] 1938.02

qnorm(0.975, 2000, sqrt(1000))
## [1] 2061.98

After rounding to the nearest integer we get the interval [1938, 2062] as a proposed central region. In order to validate the proposed region we may repeat the computation under the actual Binomial distribution:

qbinom(0.025, 4000, 0.5)
## [1] 1938

qbinom(0.975, 4000, 0.5)
## [1] 2062

Again, we get the interval [1938, 2062] as the central region, in agreement with the one proposed by the Normal approximation. Notice that the function "qbinom" produces the percentiles of the Binomial distribution. It may not come as a surprise to learn that "qpois", "qunif", and "qexp" compute the percentiles of the Poisson, Uniform, and Exponential distributions, respectively.

The ability to approximate one distribution by another, when computational tools for both distributions are handy, may seem to be of questionable importance. Indeed, the significance of the Normal approximation is not so much in its ability to approximate the Binomial distribution as such. Rather, the important point is that the Normal distribution may serve as an approximation to a wide class of distributions, with the Binomial distribution being only one example. Computations that are based on the Normal approximation will be valid for all members of that class of distributions, including cases where we do not have the computational tools at our disposal, or even cases where we do not know what the exact distribution of the member is! As promised, a more detailed discussion of the Normal approximation in a wider context will be presented in the next chapter.

On the other hand, one should not assume that every distribution is well approximated by the Normal distribution. For example, the distribution of wealth in a population tends to be skewed, with more than 50% of the people possessing less than 50% of the wealth and a small percentage of the people possessing the majority of the wealth. The Normal distribution is not a good model for such a distribution. The Exponential distribution, or distributions similar to it, may be more appropriate.
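As a cross-check outside of R (a sketch, not part of the original session), Python's standard-library statistics.NormalDist exposes the same Normal percentile function that qnorm provides:

```python
from statistics import NormalDist

# Normal distribution with the Binomial(4000, 0.5) expectation and SD.
approx = NormalDist(mu=2000, sigma=1000**0.5)
lower = approx.inv_cdf(0.025)  # about 1938.02
upper = approx.inv_cdf(0.975)  # about 2061.98
print(round(lower), round(upper))  # the central region [1938, 2062]
```

The inv_cdf method inverts the cumulative distribution function, so it plays the role of qnorm here.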
In principle, the Normal approximation is valid when n, the number of independent trials in the Binomial distribution, is large. When n is relatively small the approximation may not be so good. Indeed, take X ~ Binomial(30, 0.3) and consider the probability P(X ≤ 6). Compare the actual probability to the Normal approximation:

pnorm(6, 30*0.3, sqrt(30*0.3*0.7))
## [1] 0.1159989

The Normal approximation, which is equal to 0.1159989, is not too close to the actual probability, which is equal to 0.1595230.

A naïve application of the Normal approximation to the Binomial(n, p) distribution may not be so good when the number of trials n is small. Yet, a small modification of the approximation may produce much better results. In order to explain the modification, consult the figure that presents the bar plot of the Binomial distribution with the density of the approximating Normal distribution superimposed on top of it. The target probability is the sum of the heights of the bars that are painted in red. In the naïve application of the Normal approximation we used the area under the Normal density that lies to the left of the bar associated with the value x = 6. Alternatively, you may associate with each bar located at x the area under the Normal density over the interval [x − 0.5, x + 0.5]. The resulting correction to the approximation uses the Normal probability of the event {X ≤ 6.5}, which is the area shaded in red. The application of this approximation, which is called the continuity correction, produces:

pnorm(6.5, 30*0.3, sqrt(30*0.3*0.7))
## [1] 0.1596193

Observe that the corrected approximation is much closer to the target probability, which is 0.1595230, and is substantially better than the uncorrected approximation, which was 0.1159989. Generally, it is recommended to apply the continuity correction to the Normal approximation of a discrete distribution.

Another situation where the Normal approximation may fail is when p, the probability of "Success" in the Binomial(n, p) distribution, is too close to 0 (or too close to 1). Recall that for large n the Poisson distribution emerged as an approximation of the Binomial distribution in such a setting. One may expect that when n is large and p is small, the Poisson distribution may produce a better approximation of a Binomial probability.
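The continuity correction can likewise be sketched with the Python standard library (a cross-check under the same Binomial(30, 0.3) setting, not the book's R code), comparing the exact probability with the uncorrected and corrected Normal approximations:

```python
from math import comb
from statistics import NormalDist

n, p = 30, 0.3
# Exact P(X <= 6) by direct summation of the Binomial mass function.
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(7))
# Normal distribution matching the Binomial expectation and SD.
norm = NormalDist(mu=n * p, sigma=(n * p * (1 - p)) ** 0.5)
naive = norm.cdf(6)        # about 0.1160, not close to the exact value
corrected = norm.cdf(6.5)  # about 0.1596, much closer to exact ~0.1595
print(exact, naive, corrected)
```

Shifting the cutoff from 6 to 6.5 accounts for the half-unit of bar width that the discrete distribution places at x = 6.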
When the Poisson distribution is used for the approximation we call it a Poisson Approximation. Let us consider an example that analyzes three Binomial distributions. The expectation in all three distributions is equal to 2, but the number of trials, n, varies. In the first case n = 20 (and hence p = 0.1), in the second n = 200 (and p = 0.01), and in the third n = 2,000 (and p = 0.001). In all three cases we are interested in the probability of obtaining a value less than or equal to 3.

The Poisson approximation replaces computations conducted under the Binomial distribution with computations under a Poisson distribution that has the same expectation as the Binomial. Since in all three cases the expectation is equal to 2, the same Poisson approximation is used for the three probabilities:

ppois(3, 2)
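As a standard-library Python cross-check (a sketch, not the book's R session), the Poisson probability of being at most 3, together with the three Binomial probabilities it approximates, can be computed directly:

```python
from math import comb, exp, factorial

def binom_cdf(q, n, p):
    # P(X <= q) for X ~ Binomial(n, p), summed directly.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(q + 1))

# Poisson(2) probability of being at most 3: the common approximation.
pois = exp(-2) * sum(2**k / factorial(k) for k in range(4))  # about 0.8571

# The three Binomial probabilities, all with expectation n*p = 2.
binoms = [binom_cdf(3, n, 2 / n) for n in (20, 200, 2000)]
print(pois, binoms)
```

As n grows with n*p held at 2, the Binomial probabilities move closer to the single Poisson value, which is the point of the three-case comparison.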