# Is A Z Test Parametric

### Because of the central limit theorem, many test statistics are approximately normally distributed for large samples. Therefore, many statistical tests can be conveniently performed as approximate Z-tests if the sample size is large or the population variance is known.

(Source: www.slideshare.net)


A Z-test can be performed when T is a statistic that is approximately normally distributed under the null hypothesis. For example, if the observed data X1, ..., Xn are (i) independent, (ii) have a common mean μ, and (iii) have a common variance σ², then the sample average X̄ has mean μ and variance σ²/n.

A t-test is used when the sample size is small (n < 50) and the population variance is unknown. There is no universal threshold at which the sample size is considered large enough to justify use of the plug-in test.

Generally, one appeals to the central limit theorem to justify the assumption that a test statistic is approximately normally distributed. In some situations, it is possible to devise a test that properly accounts for the variation in plug-in estimates of nuisance parameters.

Next, calculate the z-score, which is the distance from the sample mean to the population mean in units of the standard error: z = (x̄ − μ)/(σ/√n). From this, a one-sided p-value can be computed for the null hypothesis that the 55 students are comparable to a simple random sample from the population of all test-takers.

Another way of stating things is that with probability 1 − 0.014 = 0.986, a simple random sample of 55 students would have a mean test score within 4 units of the population mean. A deficiency of this analysis is that it does not consider whether an effect size of 4 points is meaningful.
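The 55-student calculation can be sketched in Python. Only n = 55, the 4-point difference, and p ≈ 0.014 appear in the text above; the population standard deviation of 12 is an assumed value chosen for illustration:

```python
import math

# Hypothetical inputs: only n = 55 and the ~4-point difference appear in
# the text; the population SD of 12 is an assumed value for illustration.
n = 55
diff = 4.0          # |sample mean - population mean|
sigma = 12.0        # assumed population standard deviation

se = sigma / math.sqrt(n)        # standard error of the mean
z = diff / se                    # z-score: distance in standard-error units

# Two-sided p-value from the standard normal survival function.
p_two_sided = math.erfc(z / math.sqrt(2))
print(f"z = {z:.2f}, two-sided p = {p_two_sided:.3f}")
```

With these assumed inputs the two-sided p-value comes out close to the 0.014 quoted above.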

(Source: www.coursehero.com)

This shows that if the sample size is large enough, very small differences from the null value can be highly statistically significant. Another class of Z-tests arises in maximum likelihood estimation of the parameters in a parametric statistical model.

Maximum likelihood estimates are approximately normal under certain conditions, and their asymptotic variance can be calculated in terms of the Fisher information. The maximum likelihood estimate divided by its standard error can be used as a test statistic for the null hypothesis that the population value of the parameter equals zero.
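A minimal sketch of this Wald-type construction for a Bernoulli proportion, where the MLE and Fisher information have closed forms. The data (62 successes in 100 trials) and the null value p0 = 0.5 are hypothetical, and the null here is a nonzero value rather than the generic H0: parameter = 0:

```python
import math

# Wald-type z test for a Bernoulli proportion (hypothetical example).
# MLE: p_hat = k / n; Fisher information I(p) = n / (p (1 - p)),
# so the asymptotic SE of p_hat is sqrt(p_hat (1 - p_hat) / n).
k, n = 62, 100        # assumed data: 62 successes in 100 trials
p0 = 0.5              # assumed null value

p_hat = k / n
se = math.sqrt(p_hat * (1 - p_hat) / n)   # plug-in standard error
z = (p_hat - p0) / se                     # estimate / SE, shifted by p0

p_two_sided = math.erfc(abs(z) / math.sqrt(2))
print(f"p_hat = {p_hat:.2f}, z = {z:.2f}, p = {p_two_sided:.3f}")
```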

Consider, for example, the following problem: the owner of a betting company wants to verify whether a customer is cheating.

Suppose now that we cannot make any assumptions about the data, so that we cannot approximate the binomial with a Gaussian. We then solve the problem with a chi-square test applied to a 2×2 contingency table.

In both cases, we obtained a p-value less than 0.05, which leads us to reject the hypothesis of equal probability. Since the calculated chi-square is greater than the tabulated chi-square, we conclude by rejecting the null hypothesis H0 (in agreement with the p-value and with the parametric test).
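The chi-square test on a 2×2 contingency table can be sketched as follows. The counts are hypothetical, since the actual betting data are not given above:

```python
# Chi-square test of independence on a 2x2 contingency table.
# The counts below are hypothetical; the actual data are not given above.
observed = [[70, 30],   # e.g. customer: wins, losses
            [50, 50]]   # e.g. reference group: wins, losses

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
total = sum(row_totals)

chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / total
        chi2 += (observed[i][j] - expected) ** 2 / expected

critical = 3.841  # tabulated chi-square, df = 1, alpha = 0.05
print(f"chi2 = {chi2:.3f}, reject H0: {chi2 > critical}")
```

With these counts the calculated chi-square exceeds the tabulated value, so H0 is rejected.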

(Source: www.slideshare.net)

Next, the test statistic should be calculated, and the results and conclusion stated. Assume an investor wishes to test whether the average daily return of a stock is greater than 1%.

A simple random sample of 50 returns is calculated and has an average of 2%. The alternative hypothesis is that the mean return is greater than 1%.

The value for z is calculated by subtracting the average daily return selected for the test, 1% in this case, from the observed average of the samples, and dividing the result by the standard error.

The t-test was developed by Prof. W. S. Gosset in 1908, who published statistical papers under the pen name ‘Student’.
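The investor's z-statistic can be sketched as below. The sample mean (2%), hypothesized mean (1%), and n = 50 come from the text; the standard deviation of 2.5% is an assumed value, since the text does not state one:

```python
import math

# One-sample z test for the investor example. The sample mean (2%),
# hypothesized mean (1%), and n = 50 come from the text; the standard
# deviation of 2.5% is assumed for illustration.
n = 50
sample_mean = 0.02
mu0 = 0.01
sigma = 0.025          # assumed

z = (sample_mean - mu0) / (sigma / math.sqrt(n))
print(f"z = {z:.2f}")  # compare with 1.645 for a one-sided 5% test
```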

Uses:

1. Compare two means of small independent samples.
2. Compare a sample mean with a population mean.
3. Compare two proportions of small independent samples.

Assumptions:

1. Samples are randomly selected.
2. The data are quantitative.
3. The variable follows a normal distribution.
4. The sample variances are approximately equal in both groups under study.
5. The samples are small, usually fewer than 30 observations.

The one-sample t-test compares the mean of a single group of observations with a specified value. The two-sample t-test compares the means of two independent random samples drawn from normal populations with unknown but equal variances; we test the null hypothesis that the two population means are equal against an appropriate one-sided or two-sided alternative.
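The two-sample case can be sketched with a pooled-variance t statistic, using hypothetical data and only the standard library:

```python
import math

# Pooled two-sample t test (equal-variance assumption), hypothetical data.
x = [12.1, 11.4, 12.8, 13.0, 11.9, 12.5]
y = [11.0, 10.8, 11.6, 10.9, 11.4, 11.2]

def mean(v):
    return sum(v) / len(v)

def var(v):  # unbiased sample variance
    m = mean(v)
    return sum((u - m) ** 2 for u in v) / (len(v) - 1)

n1, n2 = len(x), len(y)
# Pooled variance: weighted average of the two sample variances.
sp2 = ((n1 - 1) * var(x) + (n2 - 1) * var(y)) / (n1 + n2 - 2)
t = (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
print(f"t = {t:.3f}, df = {df}")
```

The resulting t is compared against the Student t distribution with n1 + n2 − 2 degrees of freedom.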

(Source: slideinshare.blogspot.com)

The paired t-test is used when the same individuals are studied more than once under different circumstances, e.g. measurements made on the same people before and after an intervention.

Assumptions:

1. The outcome variable should be continuous.
2. The differences between pre- and post-measurements should be normally distributed.

Here, t = d̄ / (SD/√n), where d is the difference between the two paired measurements X1 and X2, d̄ is the mean of d, SD is the standard deviation of the differences, and n is the sample size.

Analysis of variance (ANOVA) was given by Sir Ronald Fisher. The principal aim of statistical models is to explain the variation in measurements.
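The paired t formula above can be sketched as follows, with hypothetical before/after measurements on the same subjects:

```python
import math
import statistics

# Paired t test: t = mean(d) / (sd(d) / sqrt(n)), with hypothetical
# before/after measurements on the same six subjects.
before = [140, 152, 138, 145, 150, 148]
after  = [132, 147, 135, 139, 143, 141]

d = [a - b for a, b in zip(after, before)]   # per-subject differences
n = len(d)
d_bar = statistics.mean(d)
sd = statistics.stdev(d)                     # sample SD of the differences
t = d_bar / (sd / math.sqrt(n))
print(f"mean diff = {d_bar:.2f}, t = {t:.3f}, df = {n - 1}")
```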

If the various experimental groups differ in terms of only one factor at a time, a one-way ANOVA is used, e.g. a study to assess the effectiveness of four different antibiotics on S. aureus.
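A one-way ANOVA F statistic for a design like this can be sketched as below; the four groups and their response values are hypothetical:

```python
# One-way ANOVA F statistic, hypothetical response data for four groups
# (e.g. four antibiotics).
groups = [
    [24, 27, 25, 26],
    [30, 31, 29, 33],
    [22, 20, 23, 21],
    [28, 26, 27, 29],
]

def mean(v):
    return sum(v) / len(v)

grand = mean([x for g in groups for x in g])
k = len(groups)
n_total = sum(len(g) for g in groups)

# Between-group and within-group sums of squares.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

df_between, df_within = k - 1, n_total - k
f = (ss_between / df_between) / (ss_within / df_within)
print(f"F = {f:.2f} on ({df_between}, {df_within}) df")
```

A large F (relative to the F distribution on these degrees of freedom) indicates that the group means differ by more than within-group variation explains.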

If the various groups differ in terms of two or more factors at a time, then a two-way ANOVA is performed.

Correlation is a technique for investigating the relationship between two quantitative, continuous variables. Pearson's correlation coefficient (r) is a measure of the strength of the association between the two variables.

Assumptions:

1. Subjects selected for the study, each with a pair of values of X and Y, are chosen by a random sampling procedure.
2. Both variables X and Y are assumed to follow a normal distribution.

(Source: www.slideshare.net)

Steps:

- The first step in studying the relationship between two continuous variables is to draw a scatter plot of the variables to check for linearity.
- The correlation coefficient should not be calculated if the relationship is not linear.
- For correlation purposes alone, it does not matter on which axis the variables are plotted.

The nearer the scatter of points is to a straight line, the stronger the association between the variables.
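Pearson's r can be computed directly from its definition, covariance divided by the product of the standard deviations; the (x, y) pairs below are hypothetical:

```python
import math

# Pearson correlation coefficient r, with hypothetical (x, y) pairs
# that lie close to a straight line.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
sx = math.sqrt(sum((a - mx) ** 2 for a in x))
sy = math.sqrt(sum((b - my) ** 2 for b in y))
r = cov / (sx * sy)
print(f"r = {r:.4f}")
```

Because the points are nearly collinear, r comes out very close to 1.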

Assumptions:

1. The sample must be randomly selected.
2. The data must be quantitative.
3. Samples should be larger than 30.
4. The data should follow a normal distribution.
5. The sample variances should be almost the same in both groups under study.

Note: if the SD of the populations is known, a Z-test can be applied even if the sample is smaller than 30.

Example of a two-tailed test: in a test of significance, when one wants to determine whether the mean IQ of malnourished children is different from that of well-nourished children, without specifying higher or lower, the p-value includes extreme results at both ends of the scale, and the test is called a two-tailed test. Example of a one-tailed test: when one wants to know specifically whether a result is larger or smaller than what would occur by chance, the significance level or p-value applies to one end only. For example, if we want to know whether the malnourished have a lower mean IQ than the well-nourished, the result will lie at one end (tail) of the distribution, and the test is called a one-tailed (single-tailed) test.
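The difference between the two can be sketched numerically: for the same observed z, the two-tailed p-value is twice the one-tailed value. The z of 2.0 below is hypothetical:

```python
import math

# One- vs two-tailed p-values from the same z statistic. phi is the
# standard normal CDF, written via erf from the standard library.
def phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

z = 2.0  # hypothetical observed z

p_one_tailed = 1.0 - phi(z)               # e.g. H1: mean IQ is lower only
p_two_tailed = 2.0 * (1.0 - phi(abs(z)))  # H1: mean IQ differs either way
print(f"one-tailed p = {p_one_tailed:.4f}, two-tailed p = {p_two_tailed:.4f}")
```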

Tests of significance play an important role in conveying the results of any research, and thus the choice of an appropriate statistical test is very important, as it can decide the outcome of a study. Hence, the emphasis placed on tests of significance in clinical research must be tempered with an understanding that they are tools for analyzing data and should never be used as a substitute for knowledgeable interpretation of outcomes.

(Source: www.slideshare.net)
