**Comment 1**

**T-test**

The t-statistic is used in a t-test when running a hypothesis test, and it is used together with a p-value. The p-value tells us the probability that results at least this extreme could have happened by chance if the null hypothesis were true. A key difference with the t-test is that the population standard deviation must be estimated from the sample (the population standard deviation is unknown until estimated).

**T-test is used when**: the **sample size is below** 30 and the population standard deviation is **unknown**.
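A minimal sketch of computing a one-sample t-statistic by hand, using only the standard library. The sample scores and the hypothesized mean of 70 are made-up numbers for illustration; the point is that the standard deviation is estimated from the sample itself.

```python
import math

def one_sample_t(sample, mu0):
    """One-sample t statistic: (mean - mu0) / (s / sqrt(n))."""
    n = len(sample)
    mean = sum(sample) / n
    # Sample standard deviation divides by n - 1 (Bessel's correction)
    # because the population standard deviation is unknown and estimated.
    s = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))
    return (mean - mu0) / (s / math.sqrt(n))

scores = [72, 78, 81, 69, 75, 77, 74, 80]  # hypothetical small sample (n < 30)
t = one_sample_t(scores, 70)
```

The resulting t value would then be compared against a t-distribution with n − 1 degrees of freedom to get a p-value.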

**Z-test**

The z-score (also known as a standard score) lets us calculate the probability of a score occurring within a normal frequency distribution. It also lets us compare scores drawn from different normal distributions: converting the scores in a normal distribution to z-scores produces a standard normal distribution, and standardizing a score in this way lets us find its probability. In a normal frequency distribution (bell curve), the z-score is the number of standard deviations a value lies above or below the population mean. Z-scores are used to compare an individual result to a normal population (Statistics How To, 2018), for example comparing one student's test score to the rest of the class.
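The standardization described above is one line of arithmetic. A small sketch, with a hypothetical class mean of 70 and standard deviation of 10:

```python
def z_score(x, mu, sigma):
    """Number of standard deviations x lies above (+) or below (-) the mean."""
    return (x - mu) / sigma

# Hypothetical class: mean 70, standard deviation 10; a student scores 85.
z = z_score(85, 70, 10)  # 1.5 standard deviations above the mean
```

A z of 1.5 can then be looked up in a standard normal table to find the probability of scoring at or above 85.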

**Z-test** – lets us decide whether the sample mean differs from the population mean. Four things must be known when using a z-test: 1. the population mean, 2. the population standard deviation, 3. the sample mean, and 4. the sample size.

**Z-test is used when**: the **sample size is above** 30 and the standard deviation of the population is known. **If the sample size is small, even with a known population standard deviation, a t-test is used instead.**
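Given the four quantities listed above, the z statistic itself is a short calculation. A sketch with invented numbers (population mean 100, population standard deviation 15, and a sample of 50 with mean 104):

```python
import math

def z_test(sample_mean, pop_mean, pop_sd, n):
    """One-sample z statistic; needs the four known quantities."""
    return (sample_mean - pop_mean) / (pop_sd / math.sqrt(n))

# Hypothetical values: population mean 100, population SD 15,
# sample of n = 50 with sample mean 104.
z = z_test(104, 100, 15, 50)
```

Note that, unlike the t-test sketch, the population standard deviation is passed in directly rather than estimated from the sample.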

**Comment 2**

Good morning, Instructor and classmates. The statistical procedure for testing whether or not chance is a credible explanation of an experimental finding is called hypothesis testing. One of the key concepts in hypothesis testing is the significance level, or alpha level, which specifies the probability threshold below which chance is considered an unreasonable explanation for the evidence (Banerjee, 2009).

An alpha level of 0.05 is the conventional standard for a two-tailed test. The alpha level is chosen based on how strong the individual wants the evidence to be. The alpha level is also the p-value threshold that researchers accept as significant, and it can be interpreted as the probability of making a Type I error. Setting a more stringent (smaller) alpha, such as .01 or .001, decreases the probability of making a Type I error but increases the likelihood of making a Type II error, which suggests that an alpha level of .05 is a satisfying compromise between Type I and Type II errors.
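The decision rule being described is simply a comparison of the p-value against the chosen alpha. A minimal sketch, using a made-up p-value of 0.03 to show how the same result can be significant at one level but not at a stricter one:

```python
def reject_null(p_value, alpha=0.05):
    """Reject the null hypothesis when the p-value falls below alpha."""
    return p_value < alpha

p = 0.03  # hypothetical p-value from some test
significant_at_05 = reject_null(p, alpha=0.05)  # True at the conventional level
significant_at_01 = reject_null(p, alpha=0.01)  # False at the stricter level
```

This is why tightening alpha reduces Type I errors: fewer p-values clear the bar, so fewer true nulls get rejected by chance.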

For some experiments, the consequences of wrongly rejecting the null hypothesis are extremely serious; for example, if rejecting a true null hypothesis could lead to death, injury, or serious disease, then you want to do your best to avoid a Type I error, that is, the situation where the null hypothesis is true but you reject it. Since the significance level is the probability of making a Type I error, for such experiments we want to make the level smaller than in the standard situation. If you cannot tolerate a 5% (0.05) chance of being wrong, use a lower significance level, such as 0.01 or 0.001; 0.001 is common if there is a possibility of death or serious disease.

The first kind of error, rejecting a null hypothesis that is actually true, is called a Type I error. Type I errors are equivalent to false positives. For example, consider a drug such as cyclophosphamide (Cytoxan), which is used to treat various types of cancer; here the null hypothesis is that the drug has no effect on the cancer. If the null hypothesis is true (Cytoxan does not combat the cancer) but it is rejected, the drug is falsely claimed to have a positive effect on treating the cancer, which can have dangerous results.

A Type II error is equivalent to a false negative. A Type II error would occur if the test showed that cyclophosphamide (Cytoxan) has no effect on treating the cancer when in actuality it does. In conclusion, even though Type I and Type II errors are part of the process of hypothesis testing and cannot be completely eliminated, minimizing the probability of one type of error causes the probability of the other type to increase.
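The meaning of alpha as the Type I error rate can be checked with a quick simulation: draw many samples from a population where the null hypothesis really is true, run a z-test on each, and count how often the null is (wrongly) rejected. All the numbers below (sample size, trial count, seed) are arbitrary choices for the sketch; the rejection rate should come out close to 0.05.

```python
import math
import random

def type_i_error_rate(trials=10000, n=30, seed=0):
    """Simulate samples where H0 is TRUE (mean = 0, SD = 1) and count
    how often a two-tailed z-test at alpha = 0.05 wrongly rejects H0."""
    random.seed(seed)
    crit = 1.96  # two-tailed critical z value for alpha = 0.05
    rejections = 0
    for _ in range(trials):
        sample = [random.gauss(0, 1) for _ in range(n)]  # H0 true: mean is 0
        z = (sum(sample) / n) / (1 / math.sqrt(n))  # known population SD = 1
        if abs(z) > crit:
            rejections += 1
    return rejections / trials

rate = type_i_error_rate()  # expected to land near 0.05
```

Rerunning with `crit` set to 2.576 (the 0.01 critical value) would drive the rate down toward 1%, at the cost of missing more real effects, which is exactly the Type I / Type II tradeoff described above.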