Research Project: Statistics

Question 1

The monthly unemployment rate in the United States is published by the Bureau of Labor Statistics. Publishing a monthly rate requires the Bureau to collect data continuously from a sample of the population, since it would be impractical, costly, and time-consuming to count every unemployed individual each month. As a result, the Bureau conducts a monthly survey known as the Current Population Survey (CPS) to determine the level of unemployment in the country. The survey uses a sample of approximately 60,000 households, which translates to roughly 110,000 individuals each month, a large sample compared with typical public opinion surveys that cover fewer than 2,000 people. The CPS sample is selected to represent the entire US population.

To select the sample, all counties and independent cities in the United States are grouped into about 2,000 sampling units. The Bureau then selects approximately 800 of these sampling units to represent each state and the District of Columbia. The design is state-based and reflects urban and rural regions as well as different types of farming and industrial areas. Each month, one-fourth of the households in the sample are replaced so that no household is interviewed for more than four consecutive months. As a consequence, approximately 75 percent of the sample remains the same from month to month, and about 50 percent remains the same from year to year. This procedure strengthens the reliability of the estimates the Bureau develops from month to month and from year to year. The data are collected by the Census Bureau, which uses highly trained and experienced interviewers to contact the approximately 60,000 eligible households.

Question 2

William Sealy Gosset (1876 to 1937) and Carl Friedrich Gauss (1777 to 1855) were among the great scientists of their eras, and their achievements and influence are still evident in the modern world. Carl Gauss is considered one of the greatest German mathematicians of the nineteenth century. His discoveries and scholarly writings left a lasting mark on geodesy, astronomy, and physics. William Sealy Gosset, by contrast, was an English statistician best known for his work on the development of Student's t-distribution. Gosset attended Winchester College before proceeding to New College to study mathematics and chemistry, and on graduating in 1899 he joined Arthur Guinness and Son in Ireland. Although Gauss was initially discouraged from attending school, in the expectation that he would follow the family trade, his uncle recognized his potential and had him enrolled. While in school, Gauss showed himself to be a mathematical prodigy, and as a result of his talent he was sponsored to pursue further studies at Caroline College and later at the University of Göttingen.

Gauss was a genius who, while still a young student, discovered that a regular polygon with 17 sides could be drawn using only a compass and a straightedge. While at the university, Gauss produced a proof that every polynomial equation has at least one root or solution, the fundamental theorem of algebra. Gosset, in contrast, gained much of his knowledge through empirical study and trial and error, spending most of his time in the biometric laboratory. Most of Gauss's discoveries were explicitly published and acknowledged; in 1801, for example, he made a discovery in a quite different field, a method of determining and locating the orbits of newly discovered asteroids. Gosset's employer, by contrast, prohibited employees from publishing papers of any kind, which led Gosset to write under the pen name "Student" to avoid detection by the employer. Both Gosset and Gauss devoted their lives to scholarship, and their ideas and inventions form part of the foundation of modern mathematics and statistics.

Question 3

The U.S. Constitution requires a census to be conducted in the manner directed by law. Section 141 of the U.S. Code (Title 13) authorizes the Census Bureau to conduct the decennial census of population every 10 years, and the same section authorizes the Secretary to obtain information necessary for or related to any other census. The Constitution permits the inclusion of questions in the decennial census beyond those needed for a simple count of the number of people. The census data provide detailed statistics essential for economic development, strategic planning, and reapportionment of the House of Representatives. In the United States, the data serve as the primary source of fundamental benchmark statistics on population and housing trends in the nation.

The census data provide facts critical to the government for policy planning, administration, and national planning. The data allow the government to develop socio-economic policies that enhance the welfare of the population, and they supply information required to analyze changing patterns of urban movement and concentration in relation to variables such as education and occupation. The census is crucial to national development because it allows the government to compare statistics, especially socio-economic data, and to identify trends and conditions in different regions. Accurate data enable the government to plan how more than $400 billion per year is allocated to development projects such as schools and hospitals, and more than $4.2 trillion over a period of 10 years for projects such as roads. The data are also essential for reapportioning the U.S. House of Representatives, since the census figures determine the number of seats a state holds. In addition, residents use census data to support community initiatives involving environmental regulation, living standards, and consumer advocacy, among others.

Question 4

The concepts of self-representing and non-self-representing areas form part of the sampling and estimation procedures used by the Current Population Survey (CPS). To understand these two concepts, it helps to look at how the sample is built. Each month, trained interviewers collect information from a scientifically selected sample of approximately 60,000 eligible households. The sample is designed to represent the civilian noninstitutional population and includes 10,000 households beyond the 50,000 in the regular CPS sample in order to meet the survey's reliability requirements. The 2009 sample comprised 824 sample areas, which were selected after dividing the entire United States into 2,025 primary sampling units (PSUs). A PSU usually consists of a county or a number of contiguous counties, and most metropolitan areas constitute distinct PSUs.

To improve the efficiency of the survey sample, the 2,025 PSUs are grouped into strata within each state. PSUs that form a stratum by themselves are referred to as self-representing and are commonly the most populous PSUs in each state. Other strata are formed by combining PSUs that are similar in characteristics such as population growth and the distribution of occupation, industry, and sex. In states with a State Children's Health Insurance Program (SCHIP) sample, the self-representing PSUs are usually the same for both the regular CPS and SCHIP, and in most states the same non-self-representing sample PSUs are in the sample for both. However, to improve the reliability of the SCHIP estimates, the SCHIP non-self-representing PSUs are selected independently of the regular CPS sample PSUs, with replacement. The technique for stratifying PSUs for SCHIP in these states is similar to the other stratification, except that the stratification variable used is the number of people under the age of 18 with household income below twice the poverty level.
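A simplified, purely hypothetical sketch of this kind of stratified selection (the PSU names and counts below are invented for illustration and are not CPS data): the most populous PSU forms a stratum by itself and is always included (self-representing), while the remaining PSUs form a stratum from which one PSU is sampled with probability proportional to its population (non-self-representing).

import random

random.seed(42)

# Hypothetical PSUs within one state: (name, population)
psus = [("Metro A", 2_500_000), ("County B", 300_000),
        ("County C", 250_000), ("County D", 200_000)]

# Self-representing: the largest PSU is a stratum by itself and is always selected
self_representing = max(psus, key=lambda psu: psu[1])

# Non-self-representing: sample one PSU from the rest, proportional to population
others = [psu for psu in psus if psu != self_representing]
weights = [population for _, population in others]
sampled = random.choices(others, weights=weights, k=1)[0]

print("Self-representing:", self_representing[0])
print("Sampled non-self-representing:", sampled[0])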

Question 5

In statistics, the plus or minus sign indicates a pair of possible values, one the negation of the other; it is used to express a confidence interval or the error in a measurement, often a standard error or margin of error. In the Boston Globe survey discussed here, the reported figure of 47 percent carried a margin of error of plus or minus 4 percent, an inclusive range of values considered consistent with the data. The plus or minus 4 percent represents the sampling error of the survey, so the result is accurate to within +/- 4 percentage points. The margin of error is this large because the survey rests on a relatively small sample. To narrow the margin of error, or to keep it fixed while raising the confidence level, the sample size must be increased; for instance, to achieve a confidence level of 99 percent in a presidential poll with the same margin of error, a much larger sample is required, which also gives the survey greater power to detect real differences.
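As a rough illustration (not part of the original survey documentation), the usual margin of error for a sample proportion is z * sqrt(p * (1 - p) / n). A minimal Python sketch, assuming a hypothetical proportion of 0.47 and illustrative sample sizes:

import math

def margin_of_error(p, n, z=1.96):
    # z = 1.96 corresponds to a 95 percent confidence level
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll estimate of 47 percent with roughly 600 respondents
print(round(margin_of_error(0.47, 600), 3))            # about 0.04, i.e. +/- 4 points
# Raising the confidence level to 99 percent (z ~ 2.576) widens the interval
print(round(margin_of_error(0.47, 600, z=2.576), 3))   # about 0.052
# Quadrupling the sample roughly halves the margin of error
print(round(margin_of_error(0.47, 2400), 3))           # about 0.02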

Question 6

Student's t-distribution, commonly known simply as the t-distribution, is the probability distribution used to estimate population parameters when the sample size is small or the population variance is unknown. It arises when estimating the mean of a normally distributed population from a small sample. Whereas the normal distribution describes the full population, the t-distribution describes means of samples drawn from that population when the population standard deviation must itself be estimated. The t-distribution is different for each sample size, and the larger the sample, the more closely it resembles the normal distribution. In statistics, the key roles of the t-distribution are to assess the statistical significance of the difference between two sample means, to construct confidence intervals for population means, and to support linear regression analysis.
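A minimal sketch (using SciPy, which the original text does not mention) of how the t-distribution's two-sided 95 percent critical value shrinks toward the normal value of about 1.96 as the sample size grows:

from scipy import stats

for n in (5, 10, 30, 100, 1000):
    df = n - 1                          # degrees of freedom for a one-sample mean
    t_crit = stats.t.ppf(0.975, df)     # 97.5th percentile of the t-distribution
    print(n, round(t_crit, 3))          # approaches 1.96 as n grows

print(round(stats.norm.ppf(0.975), 3))  # normal critical value, about 1.96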

The naming of the t-distribution remains a point of curiosity among scholars, since the inventor published it under the name "Student" rather than his own. The t-distribution was developed by William Gosset in 1908 while he was working for the Guinness brewery in Ireland. Gosset was very attentive to the problem of small samples: samples used to assess the biochemical characteristics of barley could be as small as three observations, and Gosset wanted reliable methods for drawing conclusions from such small samples. His employer, however, required employees to publish scientific findings under pen names rather than their own names. Since Gosset wrote under the pen name "Student", the result became known as Student's t-distribution, concealing his identity. Another account of the naming holds that the company was striving to conceal from its competitors its use of the t-test in evaluating the quality of raw materials. Through the work of Ronald Fisher, a British statistician, it eventually became well known that the Student distribution was the work of Gosset.

Question 7

There have been many debates regarding probabilities and odds, including the question of whether a coin that produces 550 heads and 450 tails in 1,000 tosses can still be fair. In terms of basic probability, such an outcome is possible for a fair coin, although it is not common. It is essential to understand that for the observed frequency of an event to reveal its true probability, the number of trials needs to be very large, not merely in the tens, hundreds, or thousands. When we flip a fair coin, the chance of getting a head or a tail on any toss is 50 percent. In 1,000 flips it is still possible to observe 550 heads and 450 tails, and such an outcome does not by itself imply that the coin has a 55 percent probability of turning up heads.
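To put a number on how often a fair coin would produce a split at least as lopsided as 550 heads in 1,000 flips, here is a small illustrative sketch (added for this discussion) using SciPy's binomial distribution:

from scipy import stats

n, p = 1000, 0.5
# Probability of 550 or more heads from a fair coin
print(stats.binom.sf(549, n, p))    # roughly 0.0009: rare, but not impossible
# Probability of exactly 550 heads
print(stats.binom.pmf(550, n, p))   # roughly 0.0002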

Fundamentally, it would take hundreds of thousands or even millions of trials to average out random fluctuation and obtain results that closely reflect the true probability of the event. It is also essential to appreciate that the flips are independent events: the outcome of the first trial does not influence the outcome of the second. For instance, if a coin lands tails five times in a row, the probability of getting heads on the next flip remains 50 percent. The assumption that a certain result is bound to occur because it has failed to appear in previous attempts is inaccurate, especially in probability. It is possible, though extremely unlikely, to get 800 heads or even 0 heads, and it is even possible for the coin to land heads or tails 100 times in a row, although we usually disregard such outcomes because their probability is vanishingly small.
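As an illustrative simulation of these two points, independence and the need for very many trials (a sketch added here, not part of the original discussion), the observed proportion of heads from independent fair flips drifts toward 0.5 only as the number of flips becomes very large:

import random

random.seed(0)
heads = 0
for i in range(1, 1_000_001):
    heads += random.random() < 0.5      # one independent fair flip
    if i in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
        print(i, heads / i)             # the proportion of heads approaches 0.5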

Question 8

Ronald Fisher is one of the great statisticians whose innovations revolutionized the field of statistics. He was born in 1890 in England and is famous for being a chief architect of the neo-Darwinian synthesis. The British biologist and mathematician pioneered the systematic application of statistics to the natural sciences. In particular, Fisher developed techniques for extracting the most information possible from empirical results. He is credited with the analysis of variance technique, famously known as ANOVA, which demonstrates how a limited number of experiments can efficiently support a general conclusion by considering several variables at the same time. Fisher also contributed to extreme value theory, which illustrates how to predict the most severe possible form of an accident or catastrophe based on past occurrences, and he established the p-value as a rigorous numerical measure of how surprising an observed sample would be if a stated hypothesis were true.

Ronald Fisher's prolific output benefited from his experience at an agricultural research institute and, indirectly, from a physical limitation that was evident throughout his life: near blindness that kept him from serving in the First World War. This circumstance pushed Fisher to focus his investigations on abstract, theoretical questions, yet his intuition anticipated numerous later achievements in science, from the classification of blood groups to the statistical analysis of texts. The key principles of Fisher's theory are set out in his works The Genetical Theory of Natural Selection, which presents a mathematical analysis of evolution, and Statistical Methods for Research Workers. In The Genetical Theory of Natural Selection, Fisher showed that small differences can produce significant changes in the history of a species. He devoted the last part of his academic career, beginning in 1929, to genetic research.

Question 9

Statistical inference is the process of drawing conclusions from data. It entails making deductions about a parameter that we seek to estimate or measure. In many instances scientists have numerous measurements of a quantity, such as the mass of the electron, and must choose the most appropriate way to combine them. One key approach to statistical inference is Bayesian estimation, which combines reasonable expectations or prior judgments with new observations or experimental outcomes. Another is the likelihood approach, in which prior probabilities are avoided and one seeks the parameter value most likely to have generated the observed empirical outcomes. In fully parametric inference, the distribution is assumed to have a specific mathematical form, fully defined by a family of probability distributions with only a finite number of unknown parameters; for example, an investigator may presume that the values are normally distributed with unknown mean and variance. Non-parametric inference eschews this assumption and is used to estimate quantities of a distribution whose functional form is unknown; in short, the assumptions made about the process producing the data are much weaker than in parametric statistics. For instance, every continuous probability distribution has a median, which can be estimated using the sample median, and the sample median has good properties when the data originate from simple random sampling.
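A brief sketch of that last point (an added illustration, not from the original text): estimating the median of a skewed distribution with the sample median from a simple random sample.

import math
import random
import statistics

random.seed(1)
# Exponential distribution with rate 1: its true median is ln(2), about 0.693
sample = [random.expovariate(1.0) for _ in range(10_000)]
print(statistics.median(sample))    # close to 0.693
print(math.log(2))                  # the true median, for comparison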

Inferential statistics are used to test hypotheses and make projections using sample data that represent a larger population; they make propositions about the population using data collected from it through sampling. Given a hypothesis about the population from which we intend to draw conclusions, statistical inference first selects a statistical model of the process that produced the data and then deduces propositions from that model. The conclusion of a statistical analysis usually takes one of the following forms: a point estimate, a credible interval, a confidence interval, or a clustering of the data points into groups.
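A minimal sketch of these outputs (a hypothetical example added here, using simulated measurements), producing a point estimate, a 95 percent confidence interval, and a hypothesis test from a single sample:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.2, scale=1.0, size=50)    # hypothetical measurements

estimate = sample.mean()                            # point estimate of the mean
sem = stats.sem(sample)                             # standard error of the mean
ci = stats.t.interval(0.95, df=len(sample) - 1, loc=estimate, scale=sem)

# Test the hypothesis that the population mean equals 5
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(estimate, ci, p_value)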

Question 10

The Central Limit Theorem is a vital result in the field of statistics. It describes the behaviour of the collection of means formed from a large number of random samples drawn from a population. In short, the theorem states that the sampling distribution of the sample mean approaches the normal distribution as the sample size increases, irrespective of the shape of the population distribution. Therefore, the sample means will be approximately normally distributed whether the population is positively skewed, negatively skewed, or even binomial. Two key consequences of the theorem are that the average of the sample means equals the population mean, and that the standard deviation of the sample means, the standard error, equals the population standard deviation divided by the square root of the sample size.

The mean, one key quantity in the Central Limit Theorem, measures the central tendency of a random variable or of the probability distribution that characterizes it. For a discrete probability distribution with random variable X, the mean equals the sum of every possible value weighted by the probability of that value. The arithmetic mean, or average, of a data set describes the central value of a set of numbers and is obtained by dividing the sum of the values by the number of values. The standard deviation, in turn, measures dispersion or variation around the mean: a low standard deviation indicates that the data points tend to be close to the mean, whereas a high standard deviation indicates that the data points are spread over a wide range of values. A consequence of the Central Limit Theorem is that when many measurements of a quantity are averaged, the distribution of the average tends toward the normal distribution, and when a measured variable is the combined effect of many uncorrelated influences, its random error also tends to be approximately normal, whatever the distributions of the individual contributions.
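A short simulation sketch of both points (an illustration added here, using a strongly right-skewed exponential population with mean 1 and standard deviation 1):

import random
import statistics

random.seed(7)
n = 30              # size of each sample
num_samples = 5000  # number of samples whose means we collect

sample_means = [
    statistics.fmean(random.expovariate(1.0) for _ in range(n))
    for _ in range(num_samples)
]

print(statistics.fmean(sample_means))   # close to the population mean, 1.0
print(statistics.stdev(sample_means))   # close to 1 / sqrt(30), about 0.183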
