Uasstatlan2018 PDF
Notes:
• Closed book.
• Mobile phones are not allowed for any purpose, so please keep yours in your bag.
• Each problem is weighted differently; the weights are also a guide for managing your time on the problems.
2. Adjusted R2 is
a. A measure of goodness of fit
b. Adjusted for the use of additional explanatory variables
c. Measures the percentage of the total variation of the dependent variable explained by the regression
d. Generally a better measure of goodness of fit than R2 in multiple regression
e. All of the above
3. Other things being equal, as we increase the number of explanatory variables in regression,
a. R2 increases
b. R2 decreases
c. R2 can increase or decrease
d. There is no effect on R2
e. The probability of multicollinearity decreases
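The behavior behind question 3 can be checked numerically. The sketch below (simulated data, made up for illustration) fits OLS with and without an extra pure-noise regressor and confirms that R² never falls when a variable is added:

```python
import numpy as np

# Simulated data: y depends on x1 only; x2 is pure noise by construction.
rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an OLS fit of y on X (X includes the intercept column)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

ones = np.ones((n, 1))
r2_one = r_squared(np.hstack([ones, x1[:, None]]), y)
r2_two = r_squared(np.hstack([ones, x1[:, None], x2[:, None]]), y)
print(r2_one <= r2_two)  # → True: R^2 cannot decrease when a regressor is added
```

Because the smaller model's column space is nested inside the larger one's, the larger fit's SSE can only be smaller or equal, so R² can only rise or stay the same.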
4. If you estimate an earnings equation for 100 individuals with the number of years of schooling as one
explanatory variable and the number of months of school as a second explanatory variable you may
a. Get estimates that are not BLUE
b. Suffer from serial correlation
c. Have unequal variances of the error terms
d. Suffer from multicollinearity
e. Both b and d
7. If we are interested in using dummy variables to capture the effect of different months on stock returns
we should use
a. Twelve dummy variables, one for each month of the year
b. One dummy variable
c. Eleven dummy variables
d. As many dummy variables as we like
e. We cannot use dummy variables to capture this effect
8. Which of the following is equivalent to the “ratio of actuals to moving average”?
a. The product of seasonal and irregular components
b. The product of seasonal and cyclical components
c. The product of trend and irregular components
d. The product of cyclical and irregular components
e. The product of trend and cyclical components
9. In a multiple regression model, to make valid statistical inferences, which of the following is an
unnecessary assumption for the error terms?
a. with mean 0
b. with equal variance
c. independent of each other
d. normally distributed
e. with variance 1
10. The use of a dummy variable for gender, years of experience, and the interaction between that dummy
variable and years of experience to model wages assumes that
a. The slope coefficient for males and females will be the same
b. The intercept terms for males and females will be the same
c. Both the slope coefficient and the intercept terms will be the same
d. Both the slope coefficients and the intercept terms will be different
e. Multicollinearity is a problem
11. If we are estimating a U-shaped relationship such as the total cost curve in economics, we can best
capture this relationship by using
a. Dummy variables
b. Simple regression
c. A lagged-dependent variable
d. A quadratic regression model
e. This relationship cannot be modeled using regression analysis
12. If we use a log-log linear model to estimate the demand for ice cream, the slope coefficients would
be
a. Elasticities
b. Identical to the results we would get from a nonlog model
c. Always positive
d. An example of multicollinearity
e. Impossible to interpret
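A quick way to see why log-log slopes are elasticities (question 12) is to simulate demand with a known constant elasticity and recover it by OLS; the price range, scale, and the −1.3 elasticity below are all assumed values for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
true_elasticity = -1.3                       # assumed for this illustration
price = rng.uniform(1.0, 10.0, size=200)
quantity = 5.0 * price ** true_elasticity * np.exp(rng.normal(scale=0.05, size=200))

# OLS of log(quantity) on log(price): the slope estimates
# d log(quantity) / d log(price), i.e. the price elasticity of demand.
X = np.column_stack([np.ones_like(price), np.log(price)])
beta, *_ = np.linalg.lstsq(X, np.log(quantity), rcond=None)
print(beta[1])  # close to -1.3
```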
15. Which of the following is the nonparametric counterpart of the F test to compare the means of three
or more independent populations?
a. The sign test
b. The Mann–Whitney U test
c. The Wilcoxon signed-rank test
d. Spearman’s rank correlation test
e. Kruskal–Wallis test
16. If the sales for a company exhibit constant growth over time, the best method for forecasting would be
a. A linear time trend
b. A log-linear time trend
c. A simple moving average
d. A first-order autoregressive model
e. A centered moving average
17. The estimated regression equation is ŷ = 800 + 15x1 − 50x2. What can be concluded?
a. Y increases 15 units as x1 increases by 1 unit
b. Y increases 15 units as x1 increases by 1 unit holding x2 constant
c. Y is expected to increase 15 units as x1 increases by 1 unit
d. Y is expected to increase 15 units as x1 increases by 1 unit holding x2 constant
e. None is correct
19. If we are interested in examining if the slope coefficient is negative and significant, the null
hypothesis should be
a. H0: β ≤ 0
b. H0: β ≥ 0
c. H0: β=0
d. H0: β=1
e. H0:β = −1
QUESTION II (15% point)
A statistics lecturer wants to know whether his students score higher on the final exam than on the mid exam, since they have more spare time before the final exam than before the mid exam. The mid-exam and final-exam scores of ten students are used as a sample, and the sample is assumed to come from a non-normal distribution. The scores are given in the table below.
a. Find the Spearman rank correlation coefficient between the mid-exam and final-exam scores (6 points)
b. Test the lecturer’s hypothesis (4 points)
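Since the sample is assumed non-normal, a nonparametric paired test is appropriate for part b. The sketch below applies the sign test, whose large-sample z statistic appears in the Selected Formulas; the paired scores are hypothetical, since the exam's table is not reproduced in this text:

```python
import math

# Hypothetical paired scores (stand-ins for the exam's table)
mid   = [60, 72, 55, 80, 65, 70, 58, 75, 62, 68]
final = [68, 75, 50, 85, 70, 78, 60, 73, 70, 74]

# Sign test: S = number of students whose final score exceeds the mid score
diffs = [f - m for m, f in zip(mid, final)]
nonzero = [d for d in diffs if d != 0]
S = sum(1 for d in nonzero if d > 0)
n = len(nonzero)

# Large-sample z from the Selected Formulas: z = ((S - .5) - .5n) / (.5 * sqrt(n))
z = ((S - 0.5) - 0.5 * n) / (0.5 * math.sqrt(n))
print(S, n, round(z, 3))  # → 8 10 1.581
# Reject H0 (no improvement) at alpha = .05, one tail, if z > 1.645
```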
QUESTION IV (30% point)
The Dean of the Economics and Business School (EBS) is interested in the extent to which the final test score can predict students’ eventual success in sales. The accompanying table records monthly sales (in million rupiah) and final test scores for a random sample of eight students:
a. Draw a scatter diagram depicting the relationship between final test score and sales (2 points)
b. The descriptive statistics for the two variables are given in the table below. Calculate the correlation coefficient (2 points)
c. At the 0.05 significance level, is there a positive association between the variables? Does it support your result in part a? (2 points)
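Part c can be answered with the t statistic for a correlation coefficient, t = r·√((n−2)/(1−r²)), compared with the one-tail critical value on n − 2 degrees of freedom. The sample correlation below is a stand-in value, since the exam's data table is not reproduced in this text:

```python
import math

n = 8
r = 0.75            # hypothetical sample correlation, for illustration only

# t statistic for H0: rho = 0 vs H1: rho > 0, with n - 2 degrees of freedom
t = r * math.sqrt((n - 2) / (1 - r ** 2))
print(round(t, 3))  # → 2.777
# Compared with the one-tail critical value t(0.05, 6) = 1.943, this
# hypothetical r would indicate a significant positive association.
```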
Later, the dean wants to assess the importance of factors that may help in predicting success in sales. For a random sample of 50 students, the following model is fitted:
y = b0 + b1X1 + b2X2 + b3X3 + e, where
y = sales (million rupiah)
X1= grade point average (GPA)
X2= score on aptitude test
X3= dummy variable taking the value 1 if a student’s letters of motivation are strong and 0
otherwise.
The following is part of the output obtained using statistical software:

Source    df    Sum of Squares    Mean Square    R square
Model      3           641.04          213.68       0.356
Error     46          1159.66           25.21
Total     49          1800.70
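As a sanity check on the output above, R square and the overall F statistic follow directly from the sums of squares (the F critical values quoted in the comment are taken from the α = 0.05 F table later in this paper):

```python
# Reproducing the ANOVA output from its sums of squares.
ssr, sse, sst = 641.04, 1159.66, 1800.70
msr, mse = ssr / 3, sse / 46          # df_model = 3, df_error = 50 - 3 - 1 = 46

r_square = ssr / sst
f_stat = msr / mse
print(round(r_square, 3), round(f_stat, 2))  # → 0.356 8.48
# r_square reproduces the 0.356 in the output; F is well above both
# F(3, 40) = 2.84 and F(3, 60) = 2.76, which bracket the df = 46 case,
# so the regression is significant overall at the 0.05 level.
```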
Parameter    Estimate    Standard Error of Estimate
Intercept       6.512
X1              3.502       2.419
X2              0.491       0.107
X3             10.327       4.213
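Individual significance can be judged from the t ratios (estimate divided by standard error); the sketch below computes them from the table above:

```python
# t ratios for the coefficients reported above (estimate / standard error).
estimates = {"X1": (3.502, 2.419), "X2": (0.491, 0.107), "X3": (10.327, 4.213)}
t_ratios = {name: est / se for name, (est, se) in estimates.items()}
for name, t in t_ratios.items():
    print(name, round(t, 2))
# With 46 error df, the two-tail 0.05 critical value lies between the
# df = 30 (2.042) and df = 60 (2.000) entries of the t table, so X2 and X3
# are individually significant while X1 is not.
```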
Period    Sales    Ratio to Moving Average
2013:1    153      -
2013:2    156      -
2013:3    153      4.080
2013:4    147      3.843
2014:1    159      3.987
2014:2    160      4.169
2014:3 147 4.000
2014:4 147 3.933
2015:1 152 3.897
2015:2 160 3.891
2015:3 169 3.919
2015:4 176 4.000
2016:1 176 3.966
2016:2 179 3.945
2016:3 184 -
2016:4 181 -
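For reference, a sketch of the classical ratio-to-moving-average computation on the quarterly sales above; each ratio isolates the product of the seasonal and irregular components, and ratios computed with a centered four-quarter moving average come out near 1.0:

```python
# Quarterly sales from the table above, 2013:1 through 2016:4.
sales = [153, 156, 153, 147, 159, 160, 147, 147,
         152, 160, 169, 176, 176, 179, 184, 181]

# 4-quarter moving averages, then center adjacent pairs on the quarters.
ma4 = [sum(sales[i:i + 4]) / 4 for i in range(len(sales) - 3)]
centered = [(a + b) / 2 for a, b in zip(ma4, ma4[1:])]

# Ratios exist for 2013:3 through 2016:2, matching the "-" entries above.
ratios = [s / c for s, c in zip(sales[2:], centered)]
print([round(r, 3) for r in ratios[:4]])  # → [1.0, 0.953, 1.032, 1.044]
```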
Parameter    Estimate    Standard Error of Estimate
Intercept      0.2         0.36418
t              0.297       0.05329
Selected Formulas
$$\hat{y} = \exp\left(b_0 + b_1 t + \frac{S_e^2}{2}\right)$$

$$\sum Y = na + b \sum X \qquad \sum XY = a \sum X + b \sum X^2$$

$$r = \frac{\sum (X - \bar{X})(Y - \bar{Y})}{(n-1)\, S_X S_Y}$$

$$S_{Y.X} = \sqrt{\frac{\sum (Y - \hat{Y})^2}{n-2}} = \sqrt{\frac{SSE}{n-2}}$$

$$SST = \sum (Y - \bar{Y})^2 = \sum Y^2 - n\bar{Y}^2$$

$$SSR = \sum (\hat{Y} - \bar{Y})^2 = a \sum Y + b \sum XY - n\bar{Y}^2$$

$$SSE = \sum (Y - \hat{Y})^2 = \sum e^2 = \sum Y^2 - a \sum Y - b \sum XY$$

Confidence interval for the mean of Y, and prediction interval for an individual Y:

$$\hat{Y} \pm t \, S_{Y.X} \sqrt{\frac{1}{n} + \frac{(X - \bar{X})^2}{\sum (X - \bar{X})^2}} \qquad \hat{Y} \pm t \, S_{Y.X} \sqrt{1 + \frac{1}{n} + \frac{(X - \bar{X})^2}{\sum (X - \bar{X})^2}}$$

$$R_{adj}^2 = 1 - \frac{SSE/(n-k-1)}{SST/(n-1)}; \quad k = \text{number of independent variables}$$

$$r_s = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} \qquad t = r_s \sqrt{\frac{n-2}{1 - r_s^2}}$$

$$H = \left( \frac{12}{n(n+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} \right) - 3(n+1)$$

$$z = \frac{(S - 0.5) - 0.5n}{0.5\sqrt{n}}$$
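A numeric sketch of the Spearman formulas above, on made-up samples of ten observations each (no tied values, so the simple ranking below suffices):

```python
import math

# Hypothetical paired observations, for illustration only.
x = [86, 97, 99, 100, 101, 103, 106, 110, 112, 113]
y = [2, 20, 28, 27, 50, 29, 7, 17, 6, 12]

def ranks(v):
    """Rank each value from 1 (smallest) to n (largest); assumes no ties."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

n = len(x)
d = [a - b for a, b in zip(ranks(x), ranks(y))]
rs = 1 - 6 * sum(di ** 2 for di in d) / (n * (n ** 2 - 1))

# t statistic from the formula sheet: t = rs * sqrt((n - 2) / (1 - rs^2))
t = rs * math.sqrt((n - 2) / (1 - rs ** 2))
print(round(rs, 3), round(t, 3))  # → -0.176 -0.505
```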
Normal Standard Z Distribution:
Z 0.00 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09
0.0 0.0000 0.0040 0.0080 0.0120 0.0160 0.0199 0.0239 0.0279 0.0319 0.0359
0.1 0.0398 0.0438 0.0478 0.0517 0.0557 0.0596 0.0636 0.0675 0.0714 0.0753
0.2 0.0793 0.0832 0.0871 0.0910 0.0948 0.0987 0.1026 0.1064 0.1103 0.1141
0.3 0.1179 0.1217 0.1255 0.1293 0.1331 0.1368 0.1406 0.1443 0.1480 0.1517
0.4 0.1554 0.1591 0.1628 0.1664 0.1700 0.1736 0.1772 0.1808 0.1844 0.1879
0.5 0.1915 0.1950 0.1985 0.2019 0.2054 0.2088 0.2123 0.2157 0.2190 0.2224
0.6 0.2257 0.2291 0.2324 0.2357 0.2389 0.2422 0.2454 0.2486 0.2517 0.2549
0.7 0.2580 0.2611 0.2642 0.2673 0.2704 0.2734 0.2764 0.2794 0.2823 0.2852
0.8 0.2881 0.2910 0.2939 0.2967 0.2995 0.3023 0.3051 0.3078 0.3106 0.3133
0.9 0.3159 0.3186 0.3212 0.3238 0.3264 0.3289 0.3315 0.3340 0.3365 0.3389
1.0 0.3413 0.3438 0.3461 0.3485 0.3508 0.3531 0.3554 0.3577 0.3599 0.3621
1.1 0.3643 0.3665 0.3686 0.3708 0.3729 0.3749 0.3770 0.3790 0.3810 0.3830
1.2 0.3849 0.3869 0.3888 0.3907 0.3925 0.3944 0.3962 0.3980 0.3997 0.4015
1.3 0.4032 0.4049 0.4066 0.4082 0.4099 0.4115 0.4131 0.4147 0.4162 0.4177
1.4 0.4192 0.4207 0.4222 0.4236 0.4251 0.4265 0.4279 0.4292 0.4306 0.4319
1.5 0.4332 0.4345 0.4357 0.4370 0.4382 0.4394 0.4406 0.4418 0.4429 0.4441
1.6 0.4452 0.4463 0.4474 0.4484 0.4495 0.4505 0.4515 0.4525 0.4535 0.4545
1.7 0.4554 0.4564 0.4573 0.4582 0.4591 0.4599 0.4608 0.4616 0.4625 0.4633
1.8 0.4641 0.4649 0.4656 0.4664 0.4671 0.4678 0.4686 0.4693 0.4699 0.4706
1.9 0.4713 0.4719 0.4726 0.4732 0.4738 0.4744 0.4750 0.4756 0.4761 0.4767
2.0 0.4772 0.4778 0.4783 0.4788 0.4793 0.4798 0.4803 0.4808 0.4812 0.4817
2.1 0.4821 0.4826 0.4830 0.4834 0.4838 0.4842 0.4846 0.4850 0.4854 0.4857
2.2 0.4861 0.4864 0.4868 0.4871 0.4875 0.4878 0.4881 0.4884 0.4887 0.4890
2.3 0.4893 0.4896 0.4898 0.4901 0.4904 0.4906 0.4909 0.4911 0.4913 0.4916
2.4 0.4918 0.4920 0.4922 0.4925 0.4927 0.4929 0.4931 0.4932 0.4934 0.4936
2.5 0.4938 0.4940 0.4941 0.4943 0.4945 0.4946 0.4948 0.4949 0.4951 0.4952
2.6 0.4953 0.4955 0.4956 0.4957 0.4959 0.4960 0.4961 0.4962 0.4963 0.4964
2.7 0.4965 0.4966 0.4967 0.4968 0.4969 0.4970 0.4971 0.4972 0.4973 0.4974
2.8 0.4974 0.4975 0.4976 0.4977 0.4977 0.4978 0.4979 0.4979 0.4980 0.4981
2.9 0.4981 0.4982 0.4982 0.4983 0.4984 0.4984 0.4985 0.4985 0.4986 0.4986
3.0 0.4987 0.4987 0.4987 0.4988 0.4988 0.4989 0.4989 0.4989 0.4990 0.4990
Student t Distribution:
For the degrees of freedom (df) in the left margin, the table gives the critical value t such that the probability of exceeding it is α.

df      0.1      0.05     0.025    0.01     0.005
1 3.0777 6.3137 12.7062 31.8210 63.6559
7 1.4149 1.8946 2.3646 2.9979 3.4995
8 1.3968 1.8595 2.3060 2.8965 3.3554
9 1.3830 1.8331 2.2622 2.8214 3.2498
10 1.3722 1.8125 2.2281 2.7638 3.1693
11 1.3634 1.7959 2.2010 2.7181 3.1058
12 1.3562 1.7823 2.1788 2.6810 3.0545
15 1.3406 1.7531 2.1315 2.6025 2.9467
20 1.3253 1.7247 2.0860 2.5280 2.8453
24 1.3178 1.7109 2.0639 2.4922 2.7970
25 1.3163 1.7081 2.0595 2.4851 2.7874
30 1.3104 1.6973 2.0423 2.4573 2.7500
60 1.2958 1.6706 2.0003 2.3901 2.6603
120 1.2886 1.6576 1.9799 2.3578 2.6174
Chi Square (χ2) Distribution:
For the degrees of freedom (df) in the left margin, the table gives the critical value χ² such that the probability of exceeding it is α.

df      0.975    0.95     0.90     0.10     0.05     0.025    0.01
1 0.0010 0.0039 0.0158 2.7055 3.8415 5.0239 6.6349
2 0.0506 0.1026 0.2107 4.6052 5.9915 7.3778 9.2103
3 0.2158 0.3518 0.5844 6.2514 7.8147 9.3484 11.3449
4 0.4844 0.7107 1.0636 7.7794 9.4877 11.1433 13.2767
5 0.8312 1.1455 1.6103 9.2364 11.0705 12.8325 15.0863
6 1.2373 1.6354 2.2041 10.6446 12.5916 14.4494 16.8119
7 1.6899 2.1674 2.8331 12.0170 14.0671 16.0128 18.4753
8 2.1797 2.7326 3.4895 13.3616 15.5073 17.5345 20.0902
9 2.7004 3.3251 4.1682 14.6837 16.9190 19.0228 21.6661
12 4.4038 5.2260 6.3038 18.5493 21.0261 23.3367 26.2170
15 6.2621 7.2609 8.5468 22.3071 24.9958 27.4884 30.5780
20 9.59077 10.85080 12.44260 28.41197 31.41042 34.16958 37.56627
30 16.79076 18.49267 20.59924 40.25602 43.77295 46.97922 50.89218
F Distribution with α = 0.05
The columns give the numerator degrees of freedom, the rows give the denominator degrees of freedom, and the table entry is the critical value of F.
1 2 3 4 5 6 7 8 9 10 12 15 20
1 161.4 199.5 215.7 224.6 230.2 234 236.8 238.9 240.5 241.9 243.9 245.9 248
2 18.51 19.00 19.16 19.25 19.30 19.33 19.35 19.37 19.38 19.40 19.41 19.43 19.45
3 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.70 8.66
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.86 5.80
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.62 4.56
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.94 3.87
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.51 3.44
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.22 3.15
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.01 2.94
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.85 2.77
11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.79 2.72 2.65
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.62 2.54
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.60 2.53 2.46
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.53 2.46 2.39
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.40 2.33
16 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.42 2.35 2.28
17 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.38 2.31 2.23
18 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.34 2.27 2.19
19 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.31 2.23 2.16
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.20 2.12
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.01 1.93
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.92 1.84
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.84 1.75
120 3.92 3.07 2.68 2.45 2.29 2.18 2.09 2.02 1.96 1.91 1.83 1.75 1.66
Critical Values of TL and TU for the Wilcoxon Rank Sum Test:
n2 TL TU TL TU TL TU TL TU TL TU TL TU TL TU TL TU
4 6 18 11 25 17 33 23 43 31 53 40 64 50 76 61 89
5 6 11 12 28 18 37 25 47 33 58 42 70 52 83 64 96
6 7 23 12 32 19 41 26 52 35 63 44 76 55 89 66 104
7 7 26 13 35 20 45 28 56 37 68 47 81 58 95 70 110
8 8 28 14 38 21 49 29 61 39 73 49 87 60 102 73 117
9 8 31 15 41 22 53 31 65 41 78 51 93 63 108 76 124
10 9 33 16 44 24 56 32 70 43 83 54 98 66 114 79 131
n2 TL TU TL TU TL TU TL TU TL TU TL TU TL TU TL TU
3 6 15 11 21 16 29 23 37 31 46 39 57 49 68 60 80
4 7 17 12 24 18 32 25 41 33 51 42 62 52 74 63 87
5 7 20 13 27 19 37 26 46 35 56 45 67 55 80 66 94
6 8 22 14 30 20 40 28 50 37 61 47 73 57 87 69 101
7 9 24 15 33 22 43 30 54 39 66 49 79 60 93 73 107
8 9 27 16 36 24 46 32 58 41 71 52 84 63 99 76 114
9 10 29 17 39 25 50 33 63 43 76 54 90 66 105 79 121
10 11 31 18 42 26 54 35 67 46 80 57 95 69 111 83 127
Source: From F. Wilcoxon and R. A. Wilcox, “Some Rapid Approximate Statistical Procedures” (1964), p. 28. Reproduced with the permission of American Cyanamid Company.
Copyright 2011 Cengage Learning. All Rights Reserved. May not be copied, scanned, or duplicated, in whole or in part. Due to electronic rights, some third party content may be suppressed from the eBook and/or eChapter(s).
Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. Cengage Learning reserves the right to remove additional content at any time if subsequent rights restrictions require it.
TABLE 10 Critical Values for the Wilcoxon Signed Rank Sum Test
(a) α = .025 one-tail; α = .05 two-tail    (b) α = .05 one-tail; α = .10 two-tail
n    TL   TU   TL   TU
6 1 20 2 19
7 2 26 4 24
8 4 32 6 30
9 6 39 8 37
10 8 47 11 44
11 11 55 14 52
12 14 64 17 61
13 17 74 21 70
14 21 84 26 79
15 25 95 30 90
16 30 106 36 100
17 35 118 41 112
18 40 131 47 124
19 46 144 54 136
20 52 158 60 150
21 59 172 68 163
22 66 187 75 178
23 73 203 83 193
24 81 219 92 208
25 90 235 101 224
26 98 253 110 241
27 107 271 120 258
28 117 289 130 276
29 127 308 141 294
30 137 328 152 313
Source: From F. Wilcoxon and R. A. Wilcox, “Some Rapid Approximate Statistical Procedures” (1964), p. 28. Reproduced with the permission of
American Cyanamid Company.
Critical Values of Spearman’s Rank Correlation Coefficient:
5 .900 — —
6 .829 .886 .943
7 .714 .786 .893
8 .643 .738 .833
9 .600 .683 .783
10 .564 .648 .745
11 .523 .623 .736
12 .497 .591 .703
13 .475 .566 .673
14 .457 .545 .646
15 .441 .525 .623
16 .425 .507 .601
17 .412 .490 .582
18 .399 .476 .564
19 .388 .462 .549
20 .377 .450 .534
21 .368 .438 .521
22 .359 .428 .508
23 .351 .418 .496
24 .343 .409 .485
25 .336 .400 .475
26 .329 .392 .465
27 .323 .385 .456
28 .317 .377 .448
29 .311 .370 .440
30 .305 .364 .432
Source: From E. G. Olds, “Distribution of Sums of Squares of Rank Differences for Small Samples,” Annals of Mathematical Statistics
9 (1938). Reproduced with the permission of the Institute of Mathematical Statistics.