A parameter describes a characteristic of a population; it may characterize the population completely or only partially. When the analyst does not know the population parameters, he or she uses a statistic to estimate them. A statistic is a function of the sample values of the variables and serves as a descriptive measure of the sample. Two examples of parameters and statistics follow:
- A researcher conducts a study to estimate the mean height of women aged 20 years and above. From a random sample of size 45, the researcher finds a sample mean height of 63.9 inches. Here, the sample mean is the statistic and the population mean is the parameter.
- An experimenter wants to estimate the average farm size in Kansas. The mean of a sample of 40 farms turns out to be 731 acres. Here too, the population mean is the parameter and the sample mean is the statistic ("Parameters & Statistics", 2016).
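The distinction in the examples above can be sketched in a few lines of Python. The numbers here are simulated, not from either study: a hypothetical population of heights is generated, its mean is the parameter, and the mean of a random sample of 45 (the sample size from the height example) is the statistic that estimates it.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: 10,000 simulated heights (inches), centred near 64.
population = [random.gauss(64.0, 2.5) for _ in range(10_000)]
mu = statistics.mean(population)   # the parameter (population mean)

# A random sample of size 45, as in the height example above.
sample = random.sample(population, 45)
x_bar = statistics.mean(sample)    # the statistic (sample mean)

print(f"parameter mu = {mu:.2f}, statistic x-bar = {x_bar:.2f}")
```

The statistic varies from sample to sample, while the parameter is a fixed (usually unknown) feature of the population.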
The three main components that make up an observed test statistic in null hypothesis testing are the mean, the standard deviation and the proportion of the population. These values may be specified in advance; if they are not, the researcher must estimate them from the sample. The main function of each component is to define the null hypothesis of the study, and their values allow the experimenter to calculate the test statistic. The value of the test statistic, in turn, determines whether the researcher rejects or fails to reject the null hypothesis ("Inferential Statistics - Hypothesis Testing", 2016).
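As a minimal sketch of how these components combine, the one-sample t statistic below is built from a hypothetical sample (all numbers are illustrative) and a null-hypothesis mean: the sample mean and standard deviation are estimated from the data, and the standardized distance from the null value is the observed test statistic.

```python
import math
import statistics

# Hypothetical sample and null value (all numbers are illustrative).
sample = [5.1, 4.9, 5.6, 5.3, 4.7, 5.8, 5.2, 5.0, 5.4, 4.8]
mu_0 = 5.0                         # mean specified by the null hypothesis

x_bar = statistics.mean(sample)    # sample mean (estimated component)
s = statistics.stdev(sample)       # sample standard deviation (estimated)
n = len(sample)

# Observed t statistic: how far the sample mean lies from the null
# value, measured in estimated standard errors of the mean.
t = (x_bar - mu_0) / (s / math.sqrt(n))
print(f"x-bar = {x_bar:.3f}, s = {s:.3f}, t = {t:.3f}")
```

Comparing t against a critical value (or converting it to a p-value) then gives the reject / fail-to-reject decision.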
An odds ratio is a measure of association between an outcome and an exposure: it compares the odds that the outcome occurs given a particular exposure with the odds of the outcome in its absence. The experimenter uses the odds ratio in case-control studies and in cohort and cross-sectional studies.
The two related concepts of odds in favour of an event and odds against an event differ from the odds ratio described above. The odds in favour of an event are the ratio of the number of successes (favourable outcomes) of the event to the number of failures of that event; the odds against an event are the reverse, the ratio of the number of failures to the number of successes ("Odds in Favor and Odds against an Event. Calculating - Odds from Probabilities, Probabilities from Odds", 2016).
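A small worked example, with an invented 2x2 case-control table, shows both ideas: the odds ratio is the ratio of the exposure odds among cases to the exposure odds among controls, while odds in favour of / against an event are simple success-failure ratios.

```python
# Hypothetical 2x2 case-control table (counts are illustrative):
#                 exposed   unexposed
#   cases            30         70
#   controls         10         90
a, b = 30, 70   # cases: exposed, unexposed
c, d = 10, 90   # controls: exposed, unexposed

# Odds of exposure among cases and among controls.
odds_cases = a / b
odds_controls = c / d

# Odds ratio: association between exposure and outcome.
odds_ratio = odds_cases / odds_controls
print(f"odds ratio = {odds_ratio:.2f}")

# Odds in favour of / against an event: successes vs failures.
successes, failures = 3, 7
odds_in_favour = successes / failures   # 3 : 7
odds_against = failures / successes     # 7 : 3
```

Here the odds ratio exceeds 1, suggesting (in this made-up table) a positive association between exposure and outcome.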
An asymmetrical relation among variables is one in which a change in the independent variable produces a change in the dependent variable, but not the reverse. Linear regression analysis studies exactly this kind of additive, linear relationship of a dependent variable with an independent variable. Hence, linear regression best reflects an asymmetrical relationship among variables ("Introduction to linear regression analysis", 2016).
A symmetrical relationship between variables is one in which the two variables change together, with neither designated as cause and neither as effect. Correlation is a measure of association between two variables and depicts the linear relationship between them without distinguishing dependent from independent. Hence, correlation reflects a symmetric relationship between the two variables.
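The contrast between the two paragraphs above can be checked numerically. With illustrative paired data, the correlation is identical whichever variable comes first, while the regression slope changes when the roles of dependent and independent variable are swapped.

```python
import math

# Illustrative paired data.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

def centred_sums(u, v):
    """Centred cross-product and sums of squares for two equal-length lists."""
    mu = sum(u) / len(u)
    mv = sum(v) / len(v)
    suv = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    suu = sum((a - mu) ** 2 for a in u)
    svv = sum((b - mv) ** 2 for b in v)
    return suv, suu, svv

sxy, sxx, syy = centred_sums(x, y)

# Correlation is symmetric in its two arguments.
r_xy = sxy / math.sqrt(sxx * syy)
syx, syy2, sxx2 = centred_sums(y, x)
r_yx = syx / math.sqrt(syy2 * sxx2)

# Regression is asymmetric: the slope of y on x divides by sxx,
# the slope of x on y divides by syy.
b_y_on_x = sxy / sxx
b_x_on_y = sxy / syy
print(f"r = {r_xy:.4f}, slope y~x = {b_y_on_x:.4f}, slope x~y = {b_x_on_y:.4f}")
```

Swapping x and y leaves r unchanged but gives a different slope, which is the asymmetry the regression model encodes.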
In multiple regression analysis, each partial regression coefficient indicates the slope of the relationship between the dependent variable and one independent variable, holding all the other independent variables constant; the partial regression coefficients thus serve as the weights attached to the independent variables. Because the model set-up of multiple linear regression differs from that of simple linear regression, the partial regression coefficient of an independent variable generally differs from the coefficient obtained when that variable is used alone in a simple linear regression (Onlinestatbook.com, 2016).
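This difference can be seen directly by fitting both models on the same (invented) data. The sketch below solves the two-predictor normal equations by hand for the partial coefficients and compares the coefficient of the first predictor with its simple-regression counterpart; because the two predictors are correlated, the two values differ.

```python
import statistics

# Illustrative data: two correlated predictors and one response.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]
y  = [3.1, 3.9, 7.2, 7.8, 11.1, 11.9]

def centred_cross(u, v):
    """Centred cross-product sum of two equal-length lists."""
    mu, mv = statistics.mean(u), statistics.mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v))

s11, s22 = centred_cross(x1, x1), centred_cross(x2, x2)
s12 = centred_cross(x1, x2)
s1y, s2y = centred_cross(x1, y), centred_cross(x2, y)

# Partial regression coefficients from the two-predictor normal equations.
det = s11 * s22 - s12 ** 2
b1_partial = (s1y * s22 - s2y * s12) / det
b2_partial = (s2y * s11 - s1y * s12) / det

# Simple regression coefficient of y on x1 alone.
b1_simple = s1y / s11

print(f"partial b1 = {b1_partial:.4f}, simple b1 = {b1_simple:.4f}")
```

The partial coefficient credits x1 only with variation not shared with x2, while the simple coefficient absorbs both, so the two weights do not coincide.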
In statistics, robust confidence intervals are confidence intervals to which some robust modification has been applied. A data set may contain outlier values, and the analyst can calculate a robust confidence interval for a parameter by discarding the outlier observations, or by drawing resamples with bootstrap methods.
Researchers call a confidence interval conservative if, under the approximations used, it gives a slightly wider range than the stated level of confidence requires, so that its actual coverage is at least the nominal level. A liberal confidence interval depicts the opposite picture: it is narrower than the stated confidence level warrants. One can obtain robust intervals by modifying conservative and liberal intervals ("Confidence Interval", 2016).
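The bootstrap route mentioned above can be sketched in a few lines. The data set here is invented and includes one obvious outlier; a percentile bootstrap interval for the median (a statistic that resists the outlier) is built by resampling with replacement.

```python
import random
import statistics

random.seed(1)

# Illustrative sample with one outlier (12.0).
data = [4.8, 5.1, 5.3, 4.9, 5.2, 5.0, 5.4, 4.7, 5.1, 12.0]

# Percentile bootstrap: resample with replacement many times and
# collect the statistic of interest for each resample.
B = 5000
boot_medians = sorted(
    statistics.median(random.choices(data, k=len(data)))
    for _ in range(B)
)

# Approximate 95% interval from the 2.5th and 97.5th percentiles.
lo = boot_medians[int(0.025 * B)]
hi = boot_medians[int(0.975 * B)]
print(f"bootstrap 95% CI for the median: [{lo:.2f}, {hi:.2f}]")
```

Because the median ignores the extreme value, the resulting interval stays near the bulk of the data, which is the sense in which the procedure is robust.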
To derive the contrast weights in planned comparisons, the statistician attends only to a few a priori (predicted) hypotheses and does not give serious concern to family-wise error. The contrasts in planned comparisons generally involve the comparison of two averages. The statistician derives the contrast weights by assigning a weight of zero to the group means not involved in the comparison and assigning the values +1 and -1 to the means being compared ("www.upa.pdx.edu", 2016).
There is a difference between orthogonal and non-orthogonal contrasts. In ANOVA, orthogonal contrasts are comparisons between the groups of a factor with at least three levels that are linearly independent of one another. Non-orthogonal contrasts, used in non-orthogonal designs, are contrasts that are correlated with one another.
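The weighting scheme and the orthogonality check can both be illustrated with a hypothetical four-group study: weights of +1 and -1 mark the means being compared, zeros mark the means left out, and two contrasts are orthogonal (with equal group sizes) when the dot product of their weight vectors is zero.

```python
# Planned-comparison contrast weights for four group means
# (hypothetical example): compare group 1 with group 2, and
# the average of groups 1-2 with the average of groups 3-4.
c1 = [1, -1, 0, 0]      # group 1 vs group 2
c2 = [1, 1, -1, -1]     # groups 1+2 vs groups 3+4

# Weights in a valid contrast sum to zero.
assert sum(c1) == 0 and sum(c2) == 0

# Two contrasts are orthogonal when their dot product is zero
# (assuming equal group sizes).
dot = sum(a * b for a, b in zip(c1, c2))
print("orthogonal" if dot == 0 else "non-orthogonal")
```

Replacing c2 with, say, [1, 0, -1, 0] would give a non-zero dot product with c1, making the pair non-orthogonal (correlated) contrasts.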
The analyst uses omnibus tests in statistics to test whether the overall unexplained variance is significantly smaller than the explained variance of the data set. In an ANOVA, the contrasts accompanying the omnibus F-test can be of two types: planned contrasts and post hoc contrasts. Planned contrasts describe the hypotheses the experimenter set out to test, while post hoc contrasts test the reliability of unexpected results. In an omnibus F-test, the contrasts show whether the prediction is precise enough to be translated directly into contrast coefficients. The number of orthogonal contrasts available is one less than the number of levels of the independent variable ("www.utdallas.edu", 2016).
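The explained-versus-unexplained comparison behind the omnibus test can be computed by hand for a one-way ANOVA with invented data: the between-groups sum of squares is the explained part, the within-groups sum of squares the unexplained part, and F is their ratio after dividing each by its degrees of freedom.

```python
import statistics

# Three hypothetical groups (one factor, three levels).
groups = [
    [4.1, 4.8, 5.0, 4.5],
    [5.9, 6.3, 6.1, 5.7],
    [5.0, 5.2, 4.9, 5.3],
]

grand = statistics.mean(x for g in groups for x in g)
k = len(groups)                     # number of levels
n = sum(len(g) for g in groups)     # total observations

# Between-groups (explained) and within-groups (unexplained) sums of squares.
ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

# Omnibus F: explained variance per df over unexplained variance per df.
F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {F:.2f}")
```

With three levels, k - 1 = 2 orthogonal contrasts are available, matching the rule stated above.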
Statisticians use the standardized mean difference as a summary statistic in planned comparisons when the studies all assess the same outcome but measure it in different ways. The standardized mean difference expresses the size of the intervention effect in each study relative to the variation observed in that study ("handbook.cochrane.org", 2016). These properties are the advantages of using standardized means to perform planned comparisons.
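A common form of the standardized mean difference is Cohen's d, sketched below on invented treatment and control outcomes: the raw mean difference is divided by the pooled standard deviation, yielding an effect size in standard-deviation units that is comparable across measurement scales.

```python
import math
import statistics

# Hypothetical treatment and control outcomes (illustrative numbers).
treatment = [6.2, 5.8, 6.5, 6.0, 6.4, 5.9]
control = [5.1, 5.4, 4.9, 5.3, 5.0, 5.2]

m_t, m_c = statistics.mean(treatment), statistics.mean(control)
s_t, s_c = statistics.stdev(treatment), statistics.stdev(control)
n_t, n_c = len(treatment), len(control)

# Pooled standard deviation of the two groups.
s_pooled = math.sqrt(
    ((n_t - 1) * s_t ** 2 + (n_c - 1) * s_c ** 2) / (n_t + n_c - 2)
)

# Standardized mean difference (Cohen's d): the effect expressed
# in standard-deviation units.
d = (m_t - m_c) / s_pooled
print(f"d = {d:.2f}")
```

Because d is unit-free, studies that measured the same outcome on different scales can still be compared or pooled.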
In a factorial design, the experimenter samples the dependent variable under every combination of the levels of the experimental factors; the different factors are therefore cross-classified. The main types of means in such a design are the marginal means of the main effect of each factor and the cell means of the interaction effect. The means of the main effects are calculated in a similar way to one another, while they differ from the means of the interaction effects of the study ("msu.edu/course/lin/875/apr1", 2016).
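These different kinds of means can be computed explicitly for a hypothetical 2x2 factorial data set: the cell means belong to the interaction, the marginal means to the main effects, and the grand mean averages over everything.

```python
import statistics

# Hypothetical 2x2 factorial data: factor A (levels a1, a2) crossed
# with factor B (levels b1, b2); a list of observations in each cell.
cells = {
    ("a1", "b1"): [4.0, 4.2], ("a1", "b2"): [5.8, 6.0],
    ("a2", "b1"): [5.1, 4.9], ("a2", "b2"): [7.9, 8.1],
}

# Cell (interaction) means: one mean per A x B combination.
cell_means = {k: statistics.mean(v) for k, v in cells.items()}

# Marginal (main-effect) means: average over the levels of the other factor.
a_means = {a: statistics.mean(x for (ka, _), v in cells.items() if ka == a
                              for x in v)
           for a in ("a1", "a2")}
b_means = {b: statistics.mean(x for (_, kb), v in cells.items() if kb == b
                              for x in v)
           for b in ("b1", "b2")}

grand_mean = statistics.mean(x for v in cells.values() for x in v)
print(cell_means, a_means, b_means, grand_mean)
```

The two sets of marginal means are computed the same way, just over different factors, whereas the cell means keep every factor combination separate; that is the sense in which main-effect means resemble each other but differ from interaction means.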
(2016). Onlinestatbook.com. Retrieved 22 June 2016.
Confidence Interval. (2016). Stattrek.com. Retrieved 22 June 2016.
handbook.cochrane.org. (2016). Handbook.cochrane.org. Retrieved 22 June 2016.
Inferential Statistics - Hypothesis Testing. (2016). Sphweb.bumc.bu.edu. Retrieved 22 June 2016.
Introduction to linear regression analysis. (2016). People.duke.edu. Retrieved 22 June 2016.
Odds in Favor and Odds against an Event. Calculating - Odds from Probabilities, Probabilities from Odds. (2016). Futureaccountant.com.