Here is a full lecture on the F-test which I gave today

By Gary Brooks, 2014-04-16 21:35

    F-Tests in Econometrics

There are three types of F-tests commonly used in economics.

    (1) zero slopes
    (2) linear restrictions
    (3) Chow test for regression stability

    All three of these tests can be easily computed using the GRETL software. I will show you how to do this later in class. Let's begin with zero slopes.

(1) zero slopes F-test

Suppose that we have the following regression:

    Y_t = β_0 + β_1 X_{1t} + ... + β_{K-1} X_{K-1,t} + ε_t

for t = 1,...,N.

    Clearly, we have K coefficients and N observations. The zero slopes F-test is concerned with the following hypothesis:

    H_0: β_1 = 0, ..., β_{K-1} = 0

    H_a: β_j ≠ 0 for some j

    How can we test this hypothesis? To test this, we consider two separate regressions.

    (Restricted)    Y_t = β_0 + ε_t^R

    (Unrestricted)  Y_t = β_0 + β_1 X_{1t} + ... + β_{K-1} X_{K-1,t} + ε_t^{UR}

    You should study these two equations carefully. Note that the restricted equation simply assumes that all of the slope β's are zero (except for the constant β_0). The error term in the first equation is ε_t^R, while the error term in the unrestricted equation is ε_t^{UR}.

    When we run these regressions, we will get estimated residuals ε̂_t^R and ε̂_t^{UR}. In general, these estimated residuals will be different. However, if H_0 is true, then it makes sense to believe that ε̂_t^R ≈ ε̂_t^{UR}. That is, they will be nearly the same. Therefore, if H_0 is true then the following logic holds:

    (ε̂_t^R)^2 ≈ (ε̂_t^{UR})^2

    Σ_{t=1}^{N} (ε̂_t^R)^2 ≈ Σ_{t=1}^{N} (ε̂_t^{UR})^2

    SSR_R - SSR_UR ≈ 0, where SSR_R = Σ_{t=1}^{N} (ε̂_t^R)^2

We next normalize this statistic (since any change in the units of measurement of Y_t will also change the units of measurement of ε_t). We do this by dividing by SSR_UR.

    It follows that if H_0 is true, then {SSR_R - SSR_UR}/SSR_UR ≈ 0. We can now define the F statistic in the following way:

    F̂ = [(SSR_R - SSR_UR)/(K-1)] / [SSR_UR/(N-K)]
We multiply the statistic by (N-K)/(K-1) because it allows us to determine the distribution of the statistic for any linear model and any amount of data. If H_0 is true, then the statistic F̂ is distributed as an F_{K-1, N-K} distribution. Textbooks on econometrics will have F distribution tables. You can also find such tables on the Internet.
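As a concrete illustration of the mechanics above, here is a minimal pure-Python sketch for the simplest case of a single regressor (so K = 2). The function names are my own, and in practice GRETL computes this statistic for you:

```python
# Zero-slopes F-test for the simple model Y_t = b0 + b1*X_t + e_t (K = 2).
# Illustrative sketch in pure Python; no econometrics library required.

def ols_simple(y, x):
    """Fit y = b0 + b1*x by least squares and return (b0, b1)."""
    n = len(y)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    return ybar - b1 * xbar, b1

def zero_slopes_f(y, x):
    """F statistic for H0: b1 = 0, i.e. ((SSR_R - SSR_UR)/(K-1)) / (SSR_UR/(N-K))."""
    n, k = len(y), 2
    b0, b1 = ols_simple(y, x)
    ssr_ur = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))
    ybar = sum(y) / n
    ssr_r = sum((yi - ybar) ** 2 for yi in y)  # restricted fit is just the mean
    return ((ssr_r - ssr_ur) / (k - 1)) / (ssr_ur / (n - k))
```

A value of zero_slopes_f(y, x) that is large relative to the F_{1, N-2} critical value leads us to reject H_0.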

    Our decision rule for the test is as follows:

If F̂ is large, then we should reject H_0. If, on the other hand, F̂ is small, then we should not reject H_0. What do we mean by large and small here? This is given by the distribution of F̂. On the next page is a drawing of the F distribution (thanks to my daughter Sonya -- age 13 -- for making this drawing) with degrees of freedom df_1 = K-1 and df_2 = N-K. Our notion of small would be a value of F̂ that lands us in the aquamarine colored area. Our notion of large would be any value of F̂ that lands us in the red region.

Our rule becomes: reject H_0 if F̂ is larger than the critical value, and do not reject H_0 if F̂ is smaller than the critical value. Another way of saying this is to reject H_0 if the p-value of F̂ is less than 0.05 and do not reject H_0 if the p-value of F̂ is larger than 0.05.

    Reject H_0 if F̂ > critical value.    Do not reject H_0 if F̂ < critical value.

Note that the critical value will change whenever N and K change. Note also that if we do not reject H_0, then we are saying that our regression is not any better at explaining Y_t than Ȳ, the mean of Y. If we reject H_0, then we are saying that the regression model is better at explaining the variation in Y_t than Ȳ.
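The decision rule itself is mechanical. A one-line sketch, assuming the critical value has already been looked up for the relevant degrees of freedom:

```python
# Decision rule: reject H0 exactly when the computed F-hat exceeds the
# critical value (the helper name is mine; the rule is from the lecture).
def f_test_decision(f_hat, critical_value):
    return "reject H0" if f_hat > critical_value else "do not reject H0"
```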

(2) linear restrictions F-test

    To see how we encounter linear restrictions in econometrics, suppose that we have the following Cobb-Douglas production function.

    Y_t = A L_t^α K_t^β e^{u_t}

We usually just assume β = 1 - α. However, this may not be justified by the data. We should at least test whether α + β = 1. Such a test is a test of linear restrictions. In fact, it is a test of exactly ONE restriction, namely α + β = 1.

    First we convert the above model into a linear model by taking the natural logarithm of both sides to get

    log(Y_t) = log(A) + α log(L_t) + β log(K_t) + u_t

We can write this again as

    log(Y_t) = β_0 + β_1 log(L_t) + β_2 log(K_t) + u_t

    where β_0 = log(A), β_1 = α, and β_2 = β.

Our hypothesis is H_0: β_2 = 1 - β_1, with the alternative hypothesis H_a: β_2 ≠ 1 - β_1. How can we test this hypothesis? To test H_0 we need to run two regressions.

    (Restricted)    {log(Y_t) - log(K_t)} = β_0 + β_1 {log(L_t) - log(K_t)} + u_t^R
    (obtained by substituting β_2 = 1 - β_1 into the model above and rearranging)

    (Unrestricted)  log(Y_t) = β_0 + β_1 log(L_t) + β_2 log(K_t) + u_t^{UR}
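To make the restricted regression concrete, here is a small sketch (the helper name and list-based interface are my own assumptions) that builds the transformed variables one would actually regress:

```python
import math

# Variables for the restricted Cobb-Douglas regression, which imposes
# b2 = 1 - b1: regress log(Y_t) - log(K_t) on log(L_t) - log(K_t).
# Assumes Y, L, K are equal-length lists of positive observations.
def restricted_variables(Y, L, K):
    y_star = [math.log(yt) - math.log(kt) for yt, kt in zip(Y, K)]
    x_star = [math.log(lt) - math.log(kt) for lt, kt in zip(L, K)]
    return y_star, x_star
```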

We run these two regressions and get estimated residuals û_t^R and û_t^{UR}. Once again, we know that if H_0 is true then the estimated residuals û_t^R ≈ û_t^{UR}. We can therefore form the following F-test statistic:

    F̂ = [(SSR_R - SSR_UR)/1] / [SSR_UR/(N-K)]
In this case, the unrestricted regression has N-K degrees of freedom. In addition, there is

    only 1 restriction used in the restricted regression. Therefore, we multiply the statistic by

    the ratio (N-K)/1 and get the F statistic.

Our decision rule is to reject H_0 if F̂ > the critical value. We will not reject H_0 if the calculated F̂ < the critical value. Note that the F distribution is now F_{1, N-K}.

    The critical value shown in the figure can be determined by looking it up in an F-table or on the Internet. The critical value changes whenever K or N changes.
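All three tests in this lecture plug into the same formula, so a single helper suffices. This is a sketch with illustrative names, where q is the number of restrictions and n - k the unrestricted degrees of freedom:

```python
# General F statistic: q restrictions, and an unrestricted fit with
# n - k degrees of freedom. For the zero-slopes test q = K - 1; for the
# Cobb-Douglas restriction q = 1; for the Chow test below, q = K and
# k = 2K, so that n - k = N - 2K.
def f_stat(ssr_r, ssr_ur, q, n, k):
    return ((ssr_r - ssr_ur) / q) / (ssr_ur / (n - k))
```

For example, with hypothetical sums of squared residuals SSR_R = 120 and SSR_UR = 100 from a one-restriction test with N = 50 and K = 3, f_stat(120.0, 100.0, 1, 50, 3) returns approximately 9.4.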

This leads us to the final type of F-test.

(3) Chow Test for Regression Stability

    The last F-test concerns the issue of whether the regression is stable or not. We begin by assuming a regression model

    Y_t = β_0 + β_1 X_{1t} + ... + β_{K-1} X_{K-1,t} + ε_t

for t = 1, ..., N.

The question we ask is whether our estimated model will be roughly the same, whether

    we estimate it over the first half of the data or the second half of the data.

Consider splitting our data into two equal parts.

    t = 1,2,...,N/2 and t = (N/2)+1, (N/2) + 2, ..., N.

Both intervals have N/2 observations of data.

    We next consider two regressions, one on the first data set, the other on the second.


    (Unrestricted Regression on data 1)
    Y_t = β_0^1 + β_1^1 X_{1t} + ... + β_{K-1}^1 X_{K-1,t} + ε_t^1    for t = 1, 2, ..., N/2

    (Unrestricted Regression on data 2)
    Y_t = β_0^2 + β_1^2 X_{1t} + ... + β_{K-1}^2 X_{K-1,t} + ε_t^2    for t = (N/2)+1, (N/2)+2, ..., N

    Our hypothesis is that:

    H_0: β_0^1 = β_0^2, β_1^1 = β_1^2, ..., β_{K-1}^1 = β_{K-1}^2

    H_a: β_j^1 ≠ β_j^2 for some j

This leads to K constraints in H_0. We run a third regression over the entire data set and impose the K constraints in H_0. Therefore we can write this regression as

    (Restricted Regression on all data)
    Y_t = β_0 + β_1 X_{1t} + ... + β_{K-1} X_{K-1,t} + ε_t^R    for t = 1, 2, ..., N

Once again we form an F statistic using the SSR from the three regressions above.

    F̂ = [(SSR_R - (SSR_UR1 + SSR_UR2))/K] / [(SSR_UR1 + SSR_UR2)/(N-2K)]
We divide by K because there are K constraints in H_0. We multiply by (N-2K) because the two unrestricted regressions each have (N/2) - K degrees of freedom. Adding these two together gives N - 2K.

The F-statistic above is distributed with an F distribution having K and N-2K degrees of

    freedom. The graph of the F distribution is shown below.

Like before, we will reject H_0 if F̂ > the critical value. We will not reject H_0 if the calculated F̂ < the critical value. Note that the F distribution is now F_{K, N-2K}.

This means we should reject H_0 if the p-value for F̂ is less than 0.05, and we should not reject H_0 if the p-value for F̂ > 0.05.
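Here is a minimal pure-Python sketch of the whole Chow procedure for the simplest case of one regressor (K = 2) and an even N. The function names are my own; GRETL can perform this test directly:

```python
# Chow test for the simple model Y_t = b0 + b1*X_t + e_t (K = 2),
# splitting the N observations into two equal halves.

def ssr_of_fit(y, x):
    """Fit y = b0 + b1*x by OLS and return the sum of squared residuals."""
    n = len(y)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
         sum((xi - xbar) ** 2 for xi in x)
    b0 = ybar - b1 * xbar
    return sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))

def chow_f(y, x):
    """Chow F statistic with K and N - 2K degrees of freedom (here K = 2)."""
    n, k = len(y), 2
    h = n // 2
    ssr_1 = ssr_of_fit(y[:h], x[:h])  # unrestricted fit on the first half
    ssr_2 = ssr_of_fit(y[h:], x[h:])  # unrestricted fit on the second half
    ssr_r = ssr_of_fit(y, x)          # restricted fit on the whole sample
    return ((ssr_r - (ssr_1 + ssr_2)) / k) / ((ssr_1 + ssr_2) / (n - 2 * k))
```

As before, we reject H_0 when this statistic exceeds the F_{K, N-2K} critical value.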
