    Geographically Weighted Regression

    A Tutorial on using GWR in ArcGIS 9.3

    Martin Charlton

    A Stewart Fotheringham

Introduction

    Geographically Weighted Regression (GWR) is a powerful tool for exploring spatial heterogeneity. Spatial heterogeneity exists when the structure of the process being modelled varies across the study area. We term a simple linear model such as

    y_i = β₀ + β₁ x_i + ε_i

    a global model: the relationship between y and x is assumed to be constant across the study area, so at every possible location in the study area the values of β₀ and β₁ are the same. The residuals ε_i from this model are assumed to be independent and normally distributed with a mean of zero (this is sometimes termed iid: independent and identically distributed).
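    As a minimal illustration of what a global model implies (this sketch is not part of the tutorial workflow and uses simulated data), the Python snippet below generates data from a single pair of coefficients and recovers them by ordinary least squares. In a GWR model the coefficients would instead be allowed to vary with the location (u_i, v_i) of each observation.

        import numpy as np

        # Simulate data from a *global* model: one beta0 and one beta1 for every location.
        rng = np.random.default_rng(0)
        n = 200
        x = rng.uniform(0, 10, n)
        y = 2.0 + 0.5 * x + rng.normal(0, 1, n)   # iid N(0, 1) residuals

        # Ordinary least squares recovers the single global coefficient pair.
        X = np.column_stack([np.ones(n), x])
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(beta_hat)   # roughly [2.0, 0.5], the same everywhere in the study area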

    This short tutorial is designed to introduce you to the operation of the Geographically Weighted Regression tool in ArcGIS 9.3. It assumes that you understand both regression and Geographically Weighted Regression (GWR) techniques. A separate ESRI White Paper is available which outlines the theory underlying GWR.

Modelling the Determinants of Educational Attainment in Georgia

    We use a simple example: modelling the determinants of educational attainment in the counties of the State of Georgia. The dependent variable in this example is the proportion of residents with a Bachelor's degree or higher in each county (PctBach). The four independent variables that we shall use are:

    Proportion of elderly residents in each county: PctEld

    Proportion of residents who are foreign born: PctFB

    Proportion of residents who are living below the poverty line: PctPov

    Proportion of residents who are ethnic black: PctBlack

    The spatial variation in each of the variables should be mapped by way of initial data exploration. There are some clear patterns in the educational attainment variable: high values around Atlanta and Athens. This is perhaps not surprising since the campuses of Georgia Institute of Technology, Georgia State University, Kennesaw State University, and Georgia Perimeter College are around Atlanta, and the University of Georgia (which has the largest enrolment of all the universities in Georgia) is located in Athens. Mapping the individual independent variables suggests that there might be some relationships with the variation in educational attainment, and some initial analysis also suggests that these variables are reasonable as predictors. The proportion of elderly is included because concentrations of educational attainment are usually associated with concentrations of the young rather than the old, so we would expect increased proportions of the elderly to have a negative influence on educational attainment. It is suspected that recent migrants, anxious for their children to succeed, may place a higher value on further education. Educational attainment is generally associated with affluence, so we would expect those parts of the State with higher proportions of residents living below the poverty line to have lower proportions of those educated to degree level. Higher proportions of ethnic black residents in the population are sometimes associated with poorer access to grad schools and lower interest in higher education.


    Before any analysis with regression takes place, we will have undertaken some initial statistical analysis to determine the characteristics of each of the variables which are proposed for the model. Some summary statistics for the variables in the exercise are presented in Table 1.

    [Table 1 about here]

    The correlation analysis shown in Table 2 reveals some initial associations.

    [Table 2 about here]

    Most of the associations with PctBach are in the expected direction. One interesting correlation is that between PctBlack and PctPov: there is some collinearity here (r = 0.74), but probably not enough for us to worry about at this stage.

    The attribute table for the Georgia shapefile is shown in Figure 1. You will notice that there are some other variables in the file which we will not use. The AreaKey item contains the FIPS codes for the counties in Georgia. The X and Y columns contain the coordinates in a UTM projection suitable for Georgia.

    [Figure 1 about here]

Getting started: OLS Regression

    GWR is not a panacea for all regression ills and it should not be the automatic first choice in any regression modelling exercise. We will begin by fitting an 'ordinary' linear regression model: this is 'ordinary' in the sense that it is the default regression model in packages such as SPSS or R, and the estimation of the coefficients is by Ordinary Least Squares. The residuals are assumed to be independently and identically normally distributed around a mean of zero. The residuals are also assumed to be homoscedastic: that is, any samples taken at random from the residuals will have the same mean and variance.

    There is an OLS regression modelling tool in the Spatial Statistics Tools in Arc Toolbox. You may need to uncheck the Hide Locked Tools option for Arc Toolbox before you can see the tool listed. The form to specify the model structure for this example is shown in Figure 2. You should save both the coefficients and diagnostics to separate DBF tables for later scrutiny.

    [Figure 2 about here]

    Clicking on the [OK] button will run the tool. The results of the OLS analysis are shown in Figure 3.

    [Figure 3 about here]
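    For readers who wish to cross-check the global model outside ArcGIS, a minimal sketch using the open-source statsmodels package is given below. This is an assumption about the reader's toolchain rather than part of the ArcGIS workflow, and the file name is hypothetical; the column names match the Georgia attribute table described above.

        import pandas as pd
        import statsmodels.api as sm

        # Attribute table exported from the Georgia shapefile (hypothetical file name).
        df = pd.read_csv("georgia_counties.csv")

        y = df["PctBach"]
        X = sm.add_constant(df[["PctEld", "PctFB", "PctPov", "PctBlack"]])

        ols = sm.OLS(y, X).fit()
        print(ols.summary())      # coefficients, t-statistics, r-squared, AIC
        print(ols.rsquared_adj)   # adjusted r-squared, comparable with the OLS tool's report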

    A useful place to start is with the model diagnostics. There are a number of different goodness-of-fit measures: the r² is 0.53 and the adjusted r² is 0.51. The r² measures the proportion of the variation in the dependent variable which is accounted for by the variation in the model, and the possible values range from 0 to 1. Values closer to 1 indicate that the model has a better predictive performance. However, its value can be influenced by the number of variables in the model: increasing the number of variables will never decrease the r². The adjusted r² is a preferable measure since it contains some adjustment for the number of variables in the model. In the model we have just fitted, the value of 0.51 indicates that it accounts for about half the variation in the dependent variable. This suggests that perhaps some variables have been omitted from the model, or the form of the model is not quite right: we are failing to account for 49% of the variation in educational attainment with our model.
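    The adjustment is a simple function of the sample size and the number of predictors. One common form of the calculation is sketched below as general background; it is not necessarily the exact formula used by the ArcGIS tool.

        def adjusted_r2(r2, n, k):
            # n observations, k predictors (the intercept is not counted in k).
            return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

        # With the rounded values reported above (r2 = 0.53, n = 175 counties, k = 4 predictors)
        # this gives about 0.52, consistent with the reported adjusted r2 of 0.51.
        print(adjusted_r2(0.53, 175, 4))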


    A slightly different measure of goodness-of-fit is provided by the Akaike Information Criterion (AIC). Unlike the r², the AIC is not an absolute measure: it is a relative measure and can be used to compare different models which have the same dependent variable. It is a measure of the relative distance between the model that has been fitted and the unknown 'true' model. Models with smaller values of the AIC are preferable to models with higher values (where 5 is less than 10 and -10 is less than -5); however, if the difference in the AIC between two models is less than about 3 or 4, they are held to be equivalent in their explanatory power. The AIC formula contains log terms and sometimes the values can be unexpectedly large or negative; this is not important, as it is the difference between the AICs that we are interested in. The AIC in this case is 969.82.
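    For reference, one common way of writing the AIC and its small-sample correction (AICc) for an OLS model is sketched below. Different packages include different additive constants, which is another reason why only differences between models fitted to the same data are meaningful; the constants used by the ArcGIS tool are not stated here, so treat this as background rather than a reproduction of the reported 969.82.

        import math

        def aic_ols(rss, n, k):
            # One common OLS form (up to an additive constant): n*ln(RSS/n) + 2k,
            # where k counts all estimated parameters including the intercept.
            return n * math.log(rss / n) + 2 * k

        def aicc(aic, n, k):
            # Standard small-sample correction added to the AIC.
            return aic + (2 * k * (k + 1)) / (n - k - 1)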

    We have fitted an OLS model to spatial data. It is likely that there will be some structure in the residuals. We have not taken this into account in the model, which may be one contributory factor towards its rather indifferent performance. The value of the Jarque-Bera statistic indicates that the residuals appear not to be normally distributed. The OLS tool prints a warning that we should test to determine whether the residuals appear to be spatially autocorrelated.

    We now examine the model coefficient estimates which are shown in Table 3 along with the t-statistics for each estimated coefficient. The signs on the coefficient estimates are as expected, with the exception of PctBlack (we have already noted a raised correlation between it and PctPov).

    [Table 3 about here]

    The t-statistics test the hypothesis that the value of an individual coefficient estimate is not significantly different from zero. With the exception of PctEld, the coefficient estimates are all statistically significant (that is, their values are sufficiently large for us to assume that they are not zero in the population from which our sample data have been drawn). The Variance Inflation Factors are all reasonably small, so there is no strong evidence of variable redundancy.

    In completing the OLS model form we specified DBF output tables for the coefficient estimates and the regression diagnostics. These may be examined at leisure: the coefficient estimates from the OLS model are shown in Figure 4 and the diagnostics table is shown in Figure 5.

    [Figure 4 about here]

    In the diagnostics DBF table shown in Figure 5, those statistics which have been discussed above are highlighted.

    [Figure 5 about here]

    The output feature class attribute table shown in Figure 6 contains three extra columns in addition to the original observed data.

    [Figure 6 about here]

    The column headed PCTBACH contains the observed dependent variable values, and the columns headed PCTELD, PCTFB, PCTPOV and PCTBLACK contain the values for the independent variables in the model. The column headed Estimated contains the predicted y values given the model coefficients and the data for each observation; the predicted y values are sometimes known as the fitted values. The residual is the difference between the observed values of the dependent variable (in this case in the column headed PCTBACH) and the fitted values; these are found in the column headed Residual. Finally, the column headed StdResid contains standardised values of the residuals: these have a mean of zero and a standard deviation of 1. Observations of interest are those which have positive standardised residuals greater than 2 (model underprediction) or negative standardised residuals less than -2 (model overprediction).

    The report from the OLS tool advised that we should carry out a test to determine whether there is spatial autocorrelation in the residuals. If the residuals are sufficiently autocorrelated then the results of the OLS regression analysis are unreliable: autocorrelated residuals are not iid, so one of the underlying assumptions of OLS regression has been violated. An appropriate test statistic is Moran's I: this is a measure of the level of spatial autocorrelation in the residuals. The tool is available under Spatial Statistics Tools / Analyzing Patterns / Spatial Autocorrelation and is shown in Figure 7.

    [Figure 7 about here]

    The Input Feature Class should be the Output Feature Class specified in the OLS Regression tool. The Input Field should be Residual (the results are the same if you use StdResid instead). The other choices should be left as their defaults.
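    Outside ArcGIS, an equivalent check can be run with the open-source PySAL libraries. The sketch below is an assumption about available packages (geopandas, libpysal and esda) and about file and field names, not part of the tutorial's ArcGIS workflow; the contiguity weights it builds will not exactly match the tool's distance-based defaults, so the statistics will differ somewhat.

        import geopandas as gpd
        import libpysal
        from esda.moran import Moran

        # Hypothetical export of the OLS output feature class, with a Residual column.
        gdf = gpd.read_file("ols_output.shp")

        # Queen contiguity weights between counties (a modelling choice; the ArcGIS tool
        # defaults to inverse-distance weighting).
        w = libpysal.weights.Queen.from_dataframe(gdf)
        w.transform = "r"   # row-standardise the weights

        mi = Moran(gdf["Residual"], w)
        print(mi.I, mi.p_norm)   # Moran's I and its p-value under the normal approximation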

    The report from the tool is shown in Figure 8.

    [Figure 8 about here]

    The value of Moran's I for the OLS model is 0.14, and the p-value for the hypothesis that this value is not significantly different from zero is 0.26 (Z = 1.14). We would normally accept the hypothesis that autocorrelation is not present in the residuals given this value of p, but the graphical output warns that although the pattern is "somewhat clustered" it may also be due to "random chance". However, we have made the assumption here that the model structure is spatially stationary; in other words, we assume that the process we are modelling is homogeneous. Although we have a model that performs moderately well with reasonably random residuals, we nevertheless would be justified in attempting to improve the reliability of the predictions from the model by using GWR. We will also be able to map the values of the county-specific coefficient estimates to examine whether the process appears to be spatially heterogeneous.

Geographically Weighted Regression

    The ArcGIS 9.3 GWR tool is an exploratory tool. It can be found in Spatial Statistics Tools / Modeling Spatial Relationships / Geographically Weighted Regression. The model choices are specified in a form. The choices we use in this example are shown in Figure 9.

    [Figure 9 about here]

    The Input feature class will be the same as that which was specified in the OLS model. The Output feature class will contain the coefficient estimates and their associated standard errors, as well as a range of observation-specific diagnostics. The Dependent variable and the Explanatory variable(s) will be those which were specified for the OLS model. There are also a number of options which need some initial thought from the user.

    There are two possible choices for the Kernel type: FIXED or ADAPTIVE. A spatial kernel is used to provide the geographic weighting in the model. A key coefficient in the kernel is the bandwidth: this controls the size of the kernel. Which kernel is chosen largely depends on the spatial configuration of the features in the Input feature class. If the observations are reasonably regularly positioned in the study area (perhaps they are the mesh points of a regular grid) then a FIXED kernel is appropriate; if the observations are clustered so that the density of observations varies around the study area, then an ADAPTIVE kernel is appropriate. If you are not sure which to use, ADAPTIVE will cover most applications.
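    To make the distinction concrete, the sketch below implements two commonly used kernel weighting schemes: a fixed Gaussian kernel and an adaptive bisquare kernel. These particular functional forms are standard in the GWR literature but are an assumption here; the kernel used internally by the ArcGIS tool is not documented in this tutorial.

        import numpy as np

        def fixed_gaussian_weights(dists, bandwidth):
            # Fixed kernel: the same distance bandwidth is used at every regression point.
            return np.exp(-0.5 * (dists / bandwidth) ** 2)

        def adaptive_bisquare_weights(dists, n_neighbours):
            # Adaptive kernel: the bandwidth is the distance to the n-th nearest neighbour,
            # so the kernel shrinks where data are dense and grows where they are sparse.
            b = np.sort(dists)[n_neighbours - 1]
            w = (1 - (dists / b) ** 2) ** 2
            w[dists > b] = 0.0   # observations beyond the bandwidth receive zero weight
            return w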

    There are three choices for the Bandwidth method: AICc, CV and BANDWIDTH COEFFICIENT. The first two choices allow you to use an automatic method for finding the bandwidth which gives the best predictions; the third allows you to specify a bandwidth. The AICc method finds the bandwidth which minimises the AICc value (the AICc is the corrected Akaike Information Criterion, which has a correction for small sample sizes). The CV method finds the bandwidth which minimises a cross-validation score. In practice there isn't much to choose between the two methods, although the AICc is our preferred method. The AICc is computed from (a) a measure of the divergence between the observed and fitted values and (b) a measure of the complexity of the model. The complexity¹ of a GWR model depends not just on the number of variables in the model, but also on the bandwidth. This interaction between the bandwidth and the complexity of the model is the reason for our preference for the AICc over the CV score.

    There may be some modelling contexts where you wish to supply your own bandwidth. In this case, the Bandwidth method is BANDWIDTH COEFFICIENT. If you have chosen a FIXED kernel, the coefficient will be a distance which is in the same units as the coordinate system you are using for the feature class. Thus if your coordinates are in metres, this will be a distance in metres; if they are in miles, the distance will be in miles. If you are using geographic coordinates in decimal degrees, this value will be in degrees: large values (90 for example) will create very large kernels which will cover considerable parts of the earth's surface, and the geographical weights will be close to 1 for every observation! If you have chosen an ADAPTIVE kernel the bandwidth is a count of the number of nearest observations to include under the kernel; the spatial extent of the kernel will change to keep the number of observations in the kernel constant. In general you should have good reasons for specifying an a priori bandwidth, and for most applications allowing the GWR tool to choose an 'optimal' bandwidth is good practice.

    In the example described here, we have chosen an ADAPTIVE kernel whose bandwidth will be found by minimising the AICc value.
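    An open-source counterpart to this step, for readers without ArcGIS, is the mgwr package by some of the same authors. The sketch below (with hypothetical file and column names) searches for an adaptive bandwidth by AICc minimisation and then fits the local model; exact estimates will not necessarily match the ArcGIS tool because kernel and search details differ.

        import numpy as np
        import geopandas as gpd
        from mgwr.sel_bw import Sel_BW
        from mgwr.gwr import GWR

        gdf = gpd.read_file("georgia_counties.shp")       # hypothetical shapefile name
        coords = np.column_stack([gdf["X"], gdf["Y"]])     # projected UTM coordinates
        y = gdf[["PctBach"]].values
        X = gdf[["PctEld", "PctFB", "PctPov", "PctBlack"]].values

        # Adaptive kernel: the selected bandwidth is a neighbour count, chosen by AICc.
        bw = Sel_BW(coords, y, X, fixed=False).search(criterion="AICc")

        results = GWR(coords, y, X, bw, fixed=False).fit()
        print(bw, results.aicc, results.R2)   # bandwidth, corrected AIC and goodness of fit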

    There are a number of optional Additional coefficients which are for more advanced users of GWR. One of the features of GWR is that while a model can be fitted to data collected at one set of locations, coefficients may also be estimated at locations at which no data have been collected (for example, the mesh points of a raster) or at other locations for which the ys and xs are known (for example, a model can be fitted to a calibration set of data and then used to estimate coefficients and predictions for a validation set).

    [Figure 10 about here]

    The GWR tool will create a report and a DBF table which contains the diagnostic statistics that are also listed in the report shown in Figure 10. The report is the first place to start when interpreting the results from a GWR exercise, as it provides not only a list of the coefficients which have been used by the tool, but also a set of important diagnostic statistics. Recall that the bandwidth of the model has been estimated for an adaptive kernel, using AICc minimisation. The Neighbours value is the number of nearest neighbours that have been used in the estimation of each set of coefficients. In this case it is 121: this is large in comparison with the number of observations in the dataset (175), and means that under each kernel there are about 70% of the data. There may be some evidence of spatial variation in the coefficient estimates.

    The ResidualSquares value is the sum of the squared residuals; this is used in several subsequent calculations. The EffectiveNumber is a measure of the complexity of the model: it is equivalent to the number of parameters in the OLS model, is usually larger than the OLS value, and is usually not an integer. It is also used in the calculation of several diagnostics. Sigma is the square root of the normalised residual sum of squares. The AICc is the corrected Akaike Information Criterion, and together with R2 (the r²) and R2Adjusted (the adjusted r²) it provides some indication of the goodness of fit of the model. These diagnostics are also saved in a DBF table whose name is that of the output feature class with the suffix _supp.

    ¹ We use the term complexity here as a shorthand for the number of parameters in the model. In an OLS regression model, the number of parameters is one more than the number of independent variables (the intercept is also a parameter). In a GWR model the equivalent measure is known as the effective number of parameters; it is usually much larger than that for an OLS model with the same variables and need not be an integer.
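    These report quantities are linked by some standard GWR formulae involving the residual sum of squares and the effective number of parameters. The sketch below follows versions given in Fotheringham et al. (2002); the ArcGIS implementation may use slightly different conventions (for example in how the error variance is estimated), so treat it as illustrative rather than as a reproduction of the tool's arithmetic.

        import math

        def gwr_sigma(rss, n, enp):
            # Sigma: square root of the residual sum of squares normalised by the
            # effective degrees of freedom (n minus the effective number of parameters).
            return math.sqrt(rss / (n - enp))

        def gwr_aicc(rss, n, enp):
            # Corrected AIC for GWR as commonly stated, using the maximum-likelihood
            # error estimate sqrt(rss / n); conventions differ between implementations,
            # so only differences between models fitted to the same data are meaningful.
            sigma_ml = math.sqrt(rss / n)
            return (2 * n * math.log(sigma_ml)
                    + n * math.log(2 * math.pi)
                    + n * (n + enp) / (n - 2 - enp))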

    We start by comparing the fit of the OLS and GWR models. We'll refer to the OLS model as the global model and the GWR model as the local model. The global adjusted r² is 0.51 and the local adjusted r² is 0.62, which suggests that there has been some improvement in model performance. Our preferred measure of model fit is the AICc: the global model's value is 969.82, and the local model's value is 937.94. The difference of 31.88 is strong evidence of an improvement in the fit of the model to the data.²

    Visualising the GWR output

    The attribute table for the output feature class contains the coefficient estimates, their standard errors, and a range of diagnostic statistics. Descriptions of the main column headings in this table are given in Table 4.

    [Table 4 about here]

    Mapping the values of StdResid (the standardised residual) is a good starting point; these are shown in Figure 11. There are two questions of interest: (a) where are the unusually high or low residuals, and (b) are the residuals spatially autocorrelated? Not surprisingly, those counties with the large universities (University of Georgia, Georgia Southern University) have very large positive residuals (StdResid > 3), and there are large positive residuals for those counties in and around Atlanta which contain major university campuses. We would expect that, given the variables we have in the model, the model would underpredict the levels of educational attainment in these counties. Two counties have noticeable over-prediction and would certainly warrant closer inspection to discover possible reasons for this.

    [Figure 11 about here]
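    If the GWR output feature class is exported, a quick residual map can also be drawn outside ArcGIS. The sketch below assumes geopandas and matplotlib are available, uses a hypothetical file name, and assumes the standardised residual column is named StdResid as in the attribute table described above.

        import geopandas as gpd
        import matplotlib.pyplot as plt

        gdf = gpd.read_file("gwr_output.shp")   # hypothetical export of the output feature class

        # Diverging colour map centred on zero, so under- and over-prediction stand out.
        ax = gdf.plot(column="StdResid", cmap="RdBu_r", vmin=-3, vmax=3,
                      legend=True, edgecolor="grey", linewidth=0.3)
        ax.set_title("GWR standardised residuals")
        ax.set_axis_off()
        plt.show()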

    The report from the Spatial Autocorrelation tool used on the GWR residuals is shown in Figure 12. Moran's I for the residuals is 0.04 (p = 0.74), so there is little evidence of any autocorrelation in them. Any spatial dependencies which might have been present in the residuals for the global model have been removed by the geographical weighting in the local model.

    [Figure 12 about here]

    The local coefficient estimates should also be mapped. Figure 13 shows the variation in the coefficient estimates for the PctFB variable. The estimated value for the global model was 2.54, with a standard error of 0.28 (95% CI: 2.00 - 3.09). The map for the local coefficients reveals that the influence of this variable in the model varies considerably over Georgia, with a strong north-south trend. The range of the local coefficient is from 0.67 in the southernmost counties to 3.84 in the northernmost counties: evidence which points to heterogeneity in the model structure within Georgia.

    ² As a general rule of thumb, if the AICc difference between the two models is less than about 4 there is little to choose between them; if the difference between them is greater than about 10 there is little evidence in support of the model with the larger AICc. For further discussion of issues in using the AICc see Burnham and Anderson (2002).

    [Figure 13 about here]

    The global coefficient and all the local coefficients for this variable are positive: there is agreement between the two models on the direction of the influence of this variable. There may be some cases where most of the local coefficients have one sign, but for a few observations the sign changes. How can a variable have a positive influence in the model in some areas but a negative influence in other areas?

    As the values of the coefficients change sign, they will pass through zero. The coefficients themselves are estimates and have a standard error, so some of them will be so close to zero that any variation in the variables concerned will not influence the local variation in the model. In an OLS model it is conventional to test whether coefficients are different from zero using a t test. Carrying out such tests in GWR is perhaps a little more contentious and raises the problem of multiple significance testing. It would be inappropriate to compute local t statistics and carry out 174 individual significance tests. Not only are the local results highly dependent, but the problem with carrying out multiple significance tests is that we would expect, with a 5% level of significance, that some 8 or 9 would be significant at random. Fotheringham et al. (2002) suggest using a Bonferroni correction to the significance level; this may well be overly conservative, and a test procedure such as the Benjamini-Hochberg False Discovery Rate might be more appropriate (Thissen et al., 2002). However, answers to these problems continue to be the subject of research and future publication.
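    As an illustration of the kind of adjustment being discussed (not a procedure prescribed by the tutorial), the sketch below applies the Benjamini-Hochberg step-up rule to a set of two-sided p-values computed from local pseudo-t values; the example t values and the use of a normal approximation are assumptions made purely for the sketch.

        import numpy as np
        from scipy import stats

        def bh_reject(pvals, alpha=0.05):
            # Benjamini-Hochberg step-up rule: return a boolean array of rejections.
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            thresholds = alpha * np.arange(1, m + 1) / m
            below = p[order] <= thresholds
            reject = np.zeros(m, dtype=bool)
            if below.any():
                k = np.nonzero(below)[0].max()    # largest rank i with p_(i) <= i*alpha/m
                reject[order[: k + 1]] = True     # reject all hypotheses up to that rank
            return reject

        # Example: local pseudo-t values for one variable (estimate / standard error),
        # converted to two-sided p-values with a normal approximation.
        t_local = np.array([2.8, 0.4, -1.9, 3.5, 1.1])
        p_local = 2 * stats.norm.sf(np.abs(t_local))
        print(bh_reject(p_local, alpha=0.05))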


Further Reading

    The definitive text on GWR is:

    Fotheringham, AS, Brunsdon, C, and Charlton, ME, 2002, Geographically Weighted Regression: The Analysis of Spatially Varying Relationships, Chichester: Wiley

    A useful text on model selection is:

    Burnham, KP and Anderson, DR, 2002, Model Selection and Multimodel Inference: a practical information-theoretic approach, 2nd edition, New York: Springer

    An excellent text on data issues is:

    Belsley, DA, Kuh, E, and Welsch, R, 1980, Regression Diagnostics: identifying influential data and sources of collinearity, Hoboken, NJ: Wiley

    An implementation of the Benjamini-Hochberg False Discovery Rate procedure:

    Thissen, D, Steinberg, L, and Kuang, D, 2002, Quick and easy implementation of the Benjamini-Hochberg procedure for controlling the false positive rate in multiple comparisons, Journal of Educational and Behavioral Statistics, 27(1), 77-83


    Table 1: Summary Statistics

    Variable    Mean    Std Deviation    Median    Minimum    Maximum
    PctBach     10.95    5.70             9.40      4.20      37.50
    PctEld      11.74    3.08            12.07      1.46      22.96
    PctFB        1.13    1.23             0.72      0.04       6.74
    PctPov      19.34    7.25            18.60      2.60      35.90
    PctBlack    27.39   17.38            27.64      0.00      79.64

