R Language Learning Notes 5: Hypothesis Testing of Parameters

Posted by A2xA on Thu, 23 Dec 2021 16:30:56 +0100

5. Hypothesis Testing of Parameters

First, make an assumption about an unknown parameter of the population or about the form of its distribution; then, using the information provided by a drawn sample, construct an appropriate statistic, test the stated assumption, and make a statistical judgment about whether to accept or reject it.

5.1 Hypothesis testing and the P value of a test

5.1.1 Concept and steps of hypothesis testing

Basic idea of hypothesis testing

  • Proof by contradiction based on probability: a small-probability event is almost impossible to occur in a single trial.
  • To test a hypothesis H0, first assume that H0 is true. Under this assumption, construct an event A whose probability of occurring is very small if H0 is true. Now carry out a trial: if event A occurs (a small-probability event has occurred), there is sufficient reason to reject "H0 is true"; conversely, if event A does not occur, there is no sufficient reason to reject H0, and H0 is accepted.
  • Accepting / rejecting H0 ≠ H0 is true / false; rather, H0 is judged to be true or false with a certain degree of reliability, based on the information provided by the sample.
  • In general, the proposition that is uncertain and cannot be affirmed lightly is taken as the alternative hypothesis H1, and the proposition that should not be denied without sufficient reason is taken as the null hypothesis H0 (it should be rejected only when the evidence is sufficient; otherwise it is retained).

Two types of errors

1) Type I error: rejecting a true hypothesis
         P(reject H0 | H0 is true) = α
2) Type II error: accepting a false hypothesis
         P(accept H0 | H0 is false) = β
The only way to reduce both types of error at the same time is to increase the sample size (see the simulation sketch below).
Usually only the maximum probability of a Type I error, α, is controlled, without regard for β; such a hypothesis testing problem is called a significance test, and α is the significance level of the test.
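A minimal simulation sketch (illustrative, not part of the original notes): when H0 is in fact true, a test carried out at level α = 0.05 commits a Type I error in roughly 5% of repeated samples.

set.seed(1)
alpha <- 0.05
reject <- replicate(10000, {
  x <- rnorm(20, mean = 0, sd = 1)        # H0: mu = 0 is true here
  t.test(x, mu = 0)$p.value < alpha       # TRUE = Type I error
})
mean(reject)                              # close to alpha = 0.05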

Testing steps

1) State the null hypothesis H0 and the alternative hypothesis H1;
2) Choose a test statistic W and determine its distribution;
3) At a given significance level, determine the rejection region of H0 in terms of the statistic W;
4) Compute the value of the test statistic at the observed sample;
5) Decision: if the value of the statistic falls in the rejection region, reject H0; otherwise accept H0.

5.1.2 The P value of a test

P value of a test: in a hypothesis testing problem, the smallest significance level at which the null hypothesis H0 would be rejected.
The P value measures the degree of doubt cast on the null hypothesis by the data: the smaller the P value, the more suspect the null hypothesis and the stronger the case for rejecting it.
If α ≥ P, reject H0 at significance level α; if α < P, retain H0 at significance level α (see the sketch below).
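A minimal illustration in R (the observed statistic value is made up, not from the notes): for a two-sided Z test, the P value is the smallest α at which H0 is rejected.

z <- 2.1                          # hypothetical observed Z statistic
p <- 2 * (1 - pnorm(abs(z)))      # two-sided P value, about 0.036
p <= 0.05                         # TRUE : reject H0 at alpha = 0.05
p <= 0.01                         # FALSE: retain H0 at alpha = 0.01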

5.2 Tests of the parameters of a single normal population

5.2.1 Hypothesis tests of the mean μ

1) Test of μ when the variance σ² is known: Z test

Hypothesis testing problem          Rejection region
H0: μ = μ0, H1: μ ≠ μ0              { |Z| > z_{1-α/2} }
H0: μ ≤ μ0, H1: μ > μ0              { Z > z_{1-α} }
H0: μ ≥ μ0, H1: μ < μ0              { Z < -z_{1-α} }

For example: the radiation of a microwave oven when the door is closed is an important quality indicator. Suppose this indicator follows the normal distribution N(μ, 0.1²), and its mean should not exceed 0.12. To check the quality of recent products, 25 microwave ovens produced by a factory were randomly selected, and the average radiation with the door closed was found to be 0.13. Is the radiation of the ovens produced by this factory too high when the door is closed? (α = 0.05)

Suppose H0: μ ≤ 0.12, H1: μ > 0.12

> z.test(0.13,25,0.1,0.05,u0=0.12,alternative = "greater")
$mean
[1] 0.13

$z
[1] 0.5

$p.value
[1] 0.6915

$conf.int
[1] 0.0908 0.1692

Since P = 0.6915 > α = 0.05, the null hypothesis is accepted, and the radiation with the oven door closed is not considered too high.
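Note that z.test() is not a base R function; it is a helper presumably defined earlier in these notes. A minimal base R sketch of the same one-sided Z test, using the conventional upper-tail P value (the helper may report its P value under a different convention):

xbar <- 0.13; n <- 25; sigma <- 0.1; mu0 <- 0.12
z <- (xbar - mu0) / (sigma / sqrt(n))                    # 0.5
p <- 1 - pnorm(z)                                        # about 0.31, still > 0.05
ci <- xbar + c(-1, 1) * qnorm(0.975) * sigma / sqrt(n)   # two-sided 95% CI: 0.0908 to 0.1692
z; p; ci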

2) Test of μ when the variance σ² is unknown: t test

Hypothesis testing problem          Rejection region
H0: μ = μ0, H1: μ ≠ μ0              { |T| > t_{1-α/2}(n-1) }
H0: μ ≤ μ0, H1: μ > μ0              { T > t_{1-α}(n-1) }
H0: μ ≥ μ0, H1: μ < μ0              { T < -t_{1-α}(n-1) }

For example, a workshop uses a packaging machine to pack refined salt, with a rated net mass of 500 g per bag; the net mass of each bag packed by the machine is X ~ N(μ, σ²). One day 9 bags were randomly selected, with net masses (g) of 490, 506, 508, 502, 498, 511, 510, 515, 512. Is the packaging machine working normally? (α = 0.05)

Suppose H0: μ = 500, H1: μ ≠ 500

> x<- c(490,506,508,502,498,511,510,515,512)
> t.test(x,mu=500)

	One Sample t-test

data:  x
t = 2.2, df = 8, p-value = 0.06
alternative hypothesis: true mean is not equal to 500
95 percent confidence interval:
 499.7 511.8
sample estimates:
mean of x 
    505.8 

Since p-value = 0.06 > α, the null hypothesis is accepted: the packaging machine is considered to be working normally.

5.2.2 Test of the variance σ²: chi-square test

Hypothesis testing problem          Rejection region
H0: σ² = σ0², H1: σ² ≠ σ0²          { χ² ≥ χ²_{1-α/2}(n-1) or χ² ≤ χ²_{α/2}(n-1) }
H0: σ² ≤ σ0², H1: σ² > σ0²          { χ² ≥ χ²_{1-α}(n-1) }
H0: σ² ≥ σ0², H1: σ² < σ0²          { χ² ≤ χ²_{α}(n-1) }

Example: to check a batch of fuses, 10 fuses are drawn and the time (s) required to melt under a strong current is measured: 42, 65, 75, 78, 59, 71, 57, 68, 54, 55. Assuming that the melting time follows a normal distribution, can we conclude that the variance of the melting time does not exceed 80? (α = 0.05)

Suppose H0: σ² ≤ 80, H1: σ² > 80

> x<-c(42,65,75,78,59,71,57,68,54,55)
> chisq.var.test(x,80,0.05,alternative = "greater")
$var
[1] 121.8

$chi2
[1] 13.71

$p.value
[1] 0.8668

$conf.int
[1]  57.64 406.02

Since P = 0.8668 > α, the null hypothesis is accepted, and the variance of the melting time is considered not to exceed 80.
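Like z.test(), chisq.var.test() is a helper from these notes rather than base R. A minimal base R sketch of the one-sided chi-square test for a variance, using the conventional upper-tail P value (the helper's P value convention may differ):

x <- c(42, 65, 75, 78, 59, 71, 57, 68, 54, 55)
sigma0_sq <- 80
n <- length(x)
chi2 <- (n - 1) * var(x) / sigma0_sq     # about 13.71
p <- 1 - pchisq(chi2, df = n - 1)        # about 0.13, > 0.05
chi2; p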

5.3 Tests of the parameters of two normal populations

5.3.1 Comparison of means: t test

Premise: σ1² = σ2²

Hypothesis testing problem          Rejection region
H0: μ1 = μ2, H1: μ1 ≠ μ2            { |T| > t_{1-α/2}(n1+n2-2) }
H0: μ1 ≤ μ2, H1: μ1 > μ2            { T > t_{1-α}(n1+n2-2) }
H0: μ1 ≥ μ2, H1: μ1 < μ2            { T < -t_{1-α}(n1+n2-2) }

For example, two machine tools A and B each machine a certain bearing, and the bearing diameters follow the normal distributions N(μ1, σ1²) and N(μ2, σ2²). Several bearings are selected from those machined and their diameters are measured; the results are shown in the table below. Assuming σ1² = σ2², is there any significant difference between the machining accuracy of the two machine tools? (α = 0.05)

Population    Sample size    Diameter
X (A)         8              20.5 19.8 19.7 20.4 20.1 20 19 19.9
Y (B)         7              20.7 19.8 19.5 20.8 20.4 19.6 20.2

Suppose H0: μ1 = μ2, H1: μ1 ≠ μ2

> x<-c(20.5, 19.8 ,19.7 ,20.4, 20.1, 20 ,19 ,19.9)
> y<-c(20.7, 19.8, 19.5, 20.8, 20.4, 19.6, 20.2)
> t.test(x,y,var.equal = T)

	Two Sample t-test

data:  x and y
t = -0.85, df = 13, p-value = 0.4
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.7684  0.3327
sample estimates:
mean of x mean of y 
    19.93     20.14 

Since P = 0.4 > α = 0.05, the null hypothesis is accepted, and there is considered to be no significant difference in machining accuracy between the two machine tools.

5.3.2 Comparison of variances: F test

Hypothesis testing problem          Rejection region
H0: σ1² = σ2², H1: σ1² ≠ σ2²        { F ≥ F_{1-α/2}(n1-1, n2-1) or F ≤ F_{α/2}(n1-1, n2-1) }
H0: σ1² ≤ σ2², H1: σ1² > σ2²        { F ≥ F_{1-α}(n1-1, n2-1) }
H0: σ1² ≥ σ2², H1: σ1² < σ2²        { F ≤ F_{α}(n1-1, n2-1) }

Example: with the same data as in the previous example, are the variances of the bearing diameters machined by the two machine tools the same?

Suppose H0: σ1² = σ2², H1: σ1² ≠ σ2²

> var.test(x,y)

	F test to compare two variances

data:  x and y
F = 0.79, num df = 7, denom df = 6, p-value = 0.8
alternative hypothesis: true ratio of variances is not equal to 1
95 percent confidence interval:
 0.1393 4.0600
sample estimates:
ratio of variances 
            0.7932 

Since P = 0.8 > α = 0.05, the null hypothesis is accepted, and the variances of the bearing diameters machined by the two machine tools are considered the same.

5.4 t-test of paired data

Paired data: the two samples have equal sizes, and apart from the means there is no other difference between them.

For example: are the second test scores of the same group of students in a class higher than the first?

Let Zi = Xi - Yi, i = 1, 2, ..., n, and let μ = μ1 - μ2; then Zi ~ N(μ, σ²), where σ² is the variance of the differences.
Hypothesis testing problem          Rejection region
H0: μ = μ0, H1: μ ≠ μ0              { |T| > t_{1-α/2}(n-1) }
H0: μ ≤ μ0, H1: μ > μ0              { T > t_{1-α}(n-1) }
H0: μ ≥ μ0, H1: μ < μ0              { T < -t_{1-α}(n-1) }

Example: in the bleaching process for knitwear, the effect of temperature on breaking strength must be considered. To compare the effects of 70 °C and 80 °C, 8 trials were run at each of the two temperatures; the data (in N) are shown in the table below. Experience suggests that temperature does not affect the variability of the breaking strength. Is there a significant difference between the average breaking strengths at 70 °C and 80 °C? (α = 0.05)

Strength at 70 °C    20.5 18.8 19.8 20.9 21.5 19.5 21.0 21.2
Strength at 80 °C    17.7 20.3 20.0 18.8 19.0 20.1 20.0 19.1

Suppose H0: μ = μ0 = 0, H1: μ ≠ 0, where μ = μ1 - μ2
1) Method 1:

> x<-c(20.5 ,18.8, 19.8, 20.9 ,21.5 ,19.5, 21.0 ,21.2)
> y<-c(17.7, 20.3, 20.0, 18.8, 19 ,20.1, 20.0 ,19.1)
> t.test(x,y,paired = TRUE)

	Paired t-test

data:  x and y
t = 1.8, df = 7, p-value = 0.1
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -0.3214  2.3714
sample estimates:
mean of the differences 
                  1.025 

2) Method 2:

onesamp(dset, x="unsprayed", y="sprayed", xlab=NULL, ylab=NULL, dubious=NULL, conv=NULL, dig=2)

dset is a data frame or matrix with two columns; x is the name of the column in the "predictor" role and y is the name of the column in the "response" role.

> z<-data.frame(x,y)
> onesamp(z,x='y',y='x')

 x 0.9411 0.8876 1.61 

	One Sample t-test

data:  d
t = 1.8, df = 7, p-value = 0.1
alternative hypothesis: true mean is not equal to 0
95 percent confidence interval:
 -0.3214  2.3714
sample estimates:
mean of x 
    1.025 
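The onesamp() function is not part of base R (it presumably comes from a package or script loaded earlier in these notes). The same result can be reproduced in base R as a one-sample t-test on the differences:

d <- x - y
t.test(d, mu = 0)     # same t, df and p-value as t.test(x, y, paired = TRUE)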

5.5 Test of a single-sample proportion

Let each observation in the sample follow binom(1, p); then T = the sample sum follows binom(n, p).

5.5.1 Exact test of the proportion p

Hypothesis testing problem          Rejection region
H0: p = p0, H1: p ≠ p0              { T ≤ c1 or T ≥ c2 }, c1 < c2
H0: p ≤ p0, H1: p > p0              { T ≥ c }
H0: p ≥ p0, H1: p < p0              { T ≤ c' }

The critical values c can be determined from the binomial distribution (or via the F distribution); binom.test() carries out the test of the null hypothesis, as sketched below.
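A minimal sketch (variable names are illustrative, not from the notes) of how a one-sided critical value can be read off the binomial distribution with qbinom():

# Critical value c for H0: p <= p0 vs H1: p > p0, with T ~ binom(n, p0):
# the smallest c such that P(T >= c | p0) <= alpha
n <- 12; p0 <- 0.4; alpha <- 0.05
c_crit <- qbinom(1 - alpha, n, p0) + 1    # reject H0 when T >= c_crit
c_crit                                    # 9
1 - pbinom(c_crit - 1, n, p0)             # attained size, at most alpha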

5.5.2 Approximate test of the proportion p (n > 30)

When the sample size is large, the sampling distribution of the sample proportion is approximately normal.

Hypothesis testing problem          Rejection region
H0: p = p0, H1: p ≠ p0              { |Z| > z_{1-α/2} }
H0: p ≤ p0, H1: p > p0              { Z > z_{1-α} }
H0: p ≥ p0, H1: p < p0              { Z < -z_{1-α} }

For example, the high-quality rate of a product has been maintained at 40%. Recently, the supervision department randomly inspected 12 products and found 5 high-quality ones. At the α = 0.05 level, can we conclude that the high-quality rate is still 40%?

Suppose H0: p = p0, H1: p ≠ p0. Because n = 12 < 30, an exact test is appropriate.

> binom.test(c(5,7),p=0.4)

	Exact binomial test

data:  c(5, 7)
number of successes = 5, number of trials = 12, p-value = 1
alternative hypothesis: true probability of success is not equal to 0.4
95 percent confidence interval:
 0.1517 0.7233
sample estimates:
probability of success 
                0.4167           

You can also use prop.test() to perform an approximate test, but it issues a warning:

> prop.test(5,12,p=0.4,correct = T)

	1-sample proportions test with continuity correction

data:  5 out of 12, null probability 0.4
X-squared = 0, df = 1, p-value = 1
alternative hypothesis: true p is not equal to 0.4
95 percent confidence interval:
 0.1818 0.6941
sample estimates:
     p 
0.4167 

Warning message:
In prop.test(5, 12, p = 0.4, correct = T) : Chi-squared approximation may be incorrect
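For reference, a sketch of the large-sample Z statistic that underlies the normal approximation (without continuity correction; purely illustrative here, since n = 12 is small):

n <- 12; x <- 5; p0 <- 0.4
phat <- x / n
z <- (phat - p0) / sqrt(p0 * (1 - p0) / n)   # about 0.12
2 * (1 - pnorm(abs(z)))                      # two-sided P value, about 0.91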

5.6 Test of two-sample proportions

X and Y are independent of each other and the populations are large; when n1 and n2 are large, the sample proportions p̂1 and p̂2 are approximately normally distributed.
Hypothesis testing problem          Rejection region
H0: p1 = p2, H1: p1 ≠ p2            { |Z| > z_{1-α/2} }
H0: p1 ≤ p2, H1: p1 > p2            { Z > z_{1-α} }
H0: p1 ≥ p2, H1: p1 < p2            { Z < -z_{1-α} }

Example: 102 male students and 135 female students were randomly selected from a university and asked whether they had a computer at home. The results showed that 23 of the male students and 25 of the female students had a computer at home. At the α = 0.05 level, can the proportions of male and female students with a computer at home be considered the same?

Suppose H0: p1=p2, H1: p1 ≠ p2

> prop.test(c(23,25),c(102,135))

	2-sample test for equality of proportions with continuity correction

data:  c(23, 25) out of c(102, 135)
X-squared = 0.36, df = 1, p-value = 0.5
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.07256  0.15317
sample estimates:
prop 1 prop 2 
0.2255 0.1852 

Since p-value = 0.5 > 0.05, the null hypothesis is accepted: the proportions of male and female students with a computer at home are considered the same.
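For reference, a sketch of the two-sample Z statistic behind this comparison, using the pooled proportion and no continuity correction (its square equals the uncorrected chi-squared statistic that prop.test(..., correct = FALSE) reports):

x1 <- 23; n1 <- 102; x2 <- 25; n2 <- 135
p1 <- x1 / n1; p2 <- x2 / n2
pool <- (x1 + x2) / (n1 + n2)
z <- (p1 - p2) / sqrt(pool * (1 - pool) * (1/n1 + 1/n2))   # about 0.76
2 * (1 - pnorm(abs(z)))                                    # two-sided P value, about 0.44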

Topics: R Language