NOT FOR DISTRIBUTION
The Link between Productivity, Earnings, and Alternative Job Opportunities in a Survey
NORC/University of Chicago
We acknowledge the helpful input of Kyle Fennell, Steven Padlow, Andrey Pryjma, Rick
Swedlund and Alec Levenson.
A major goal of personnel economics is determining the relationship between incentives and employee performance. Papers that have overcome the twin impediments of poor direct measures of performance and of incentives have found that incentives act to sort low-performance workers from high-performance workers, rather than inducing them to work harder. This paper advances the literature by examining a dataset from a single firm and constructing both types of measures. The dataset has direct measures of productivity, namely the hours expended by field interviewers in completing survey cases. It also has measures of one particular type of incentive: alternative employment opportunities in the local labor market.
We ask two questions:
1. How important are earnings and alternative employment opportunities in
attracting a pool of high productivity workers?
2. How important are earnings and alternative employment opportunities in retaining
high productivity workers?
The paper proceeds as follows. We begin by providing a brief overview of the literature, then turn to describing the structure of the labor market in question. This is followed by a description of the dataset, together with some basic facts and analytical results.

Theoretical Motivation
There is a vast literature in labor economics on wage setting and productivity. A good theoretical overview is provided by Huang and Cappelli (2006), who summarize the basic challenge faced by a firm using a simple principal-agent model. Briefly, the model assumes that there is a large labor pool and that firms (or principals) are identical and of the same size. Each principal has to hire one worker (or agent) from a pool of potential candidates to complete a project. The outcome is stochastic.
The challenge for the principal is to determine how to induce the agent to put forth the appropriate amount of effort. The simplest assumption is that all agents have the same cognitive ability, but that they are very different in how much they are willing to work,
particularly in teams. In this simple model, principals decide on a compensation package, as well as how much to monitor, depending on their costs of screening, monitoring, and providing compensation. As a result, some principals screen and hire only cooperative workers; others hire at random. Knowing this, workers choose where to apply for jobs. After the labor market clears and all workers and principals are matched, production starts on a project. Workers decide to work hard or to shirk, and they can consume the benefits of shirking immediately, regardless of whether it is eventually detected. Meanwhile, principals monitor workers to catch shirking. Finally, when the project is finished, principals pay wages to workers not found shirking and withhold them from those caught shirking – the equivalent of firing them.
As noted in the introduction, the major challenge to testing this theory has been a lack of adequate data. There have been a number of “insider econometrics” studies of single firms (see Sicherman 1996 and Bartel et al. 2004); probably the most famous empirical study is that by Lazear (2000). He examined a single company, Safelite, and found that when the firm’s incentive structure changed from hourly wage rates to piecework,
productivity soared. Much of the gain was due to selection: poor workers left the firm.
This study has a similar focus, albeit examining a very different industry: the survey research industry. Most survey research firms know how much it costs to screen and monitor workers, because they keep a close eye on the costs of their field management staff. The open question, however, is how much high relative wages increase worker productivity (either because they result in a more cooperative pool of hires, or because the cost of shirking and losing a job is increased). The word relative is important, because wages paid by the survey firm have to be valued in the context of alternative labor market opportunities, which typically means the local labor market.
The Market for Field Interviewers
The market for field interviewers is very similar to a spot market. Workers are hired as intermittent employees and are retained for the period of the survey. Field and regional managers are involved in screening, hiring, and monitoring interviewers during this period. Before interviewing begins, there is typically a training period which covers general
skills, like locating, confidentiality protection, gaining respondent cooperation, as well as specific skills like the administering of a particular questionnaire.
On the supply side, workers are overwhelmingly female and hold a high school diploma, although some have some college education. The target labor pool for field interviewers consists of individuals looking for short-term, flexible work: stay-at-home mothers with child care, unemployed students, teachers who have evenings and weekends available, civil servants who want extra cash, and retirees who want something engaging to do on a limited basis – earning enough that it does not disrupt their social security and pension checks, while providing social interaction and daily purpose. Because of both the investment in training and the exigencies of survey timing, interviewers are expected to commit at least 25 hours a week to their work.
On the demand side, the market is dominated by the Census Bureau, but there are a few other large employers, such as Westat, RTI, Nielsen, Temple University, Mathematica, and NORC.
The critical feature of the market for this firm, however, is that the hourly wage rate offered to interviewers is determined by their experience and their education, and is independent of where they live. This fact, together with the fact that most field interviewers are unlikely to move their residences in search of higher wages, permits us to examine the impact of the substantial geographic variation in the different labor markets on workforce quality and retention.
a) Administrative Data
Our dataset on field interviewers includes 19,517 weekly observations on 1,231 field interviewers for nine major surveys fielded by NORC during the period 2004-2006. These data include, for each interviewer, weekly hours per case and costs per case for the entire time that they were employed at NORC on any of the nine surveys: the 2003 National Immigrant Survey; the 2005 Survey of Consumer Finances; the National Longitudinal Surveys of Youth (1997 cohort, rounds 7, 8, and 9, 2004-2006); the 2004 and 2006 General Social Survey; the 2005 National Social Life, Health, & Aging Project; and the 2005 Residential Energy Consumption Survey. Information is also available on each interviewer's hourly pay rate and separation date. Finally, there is information about the geographic location of each interviewer's place of residence: we use the zip code level of detail.
The best measure of productivity is hours per case. This is routinely used as a measure of performance in government contracts, and is the major metric on which cost estimates are based. One clear concern with using hours per case as a measure of productivity is that different surveys have different degrees of complexity, and hence hours per case differ substantially across surveys. In order to mitigate this problem, we norm the hours per case for each survey. In addition, there is enormous respondent heterogeneity, so a single week's average is probably not sufficient to accurately gauge interviewer productivity. As a result, we use the average over a seven-week period. Finally, there is substantial selection bias: field managers actively monitor the productivity of field interviewers and often give the hardest cases to the best interviewers. However, these actions typically occur later in the survey period. In order to mitigate this effect, we constructed measures of interviewer productivity based solely on each interviewer's first seven weeks in the field.
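The construction described above can be sketched as follows. This is an illustrative implementation under stated assumptions, not the paper's actual code: record layout, function name, and the use of the survey median as the norming factor are hypothetical stand-ins.

```python
# Hypothetical sketch of the productivity measure: hours per case, normed
# within each survey (here, divided by the survey's median so projects of
# different complexity are comparable), then averaged over an interviewer's
# first seven weeks to limit selection bias from later case assignment.
from collections import defaultdict
from statistics import median


def normalized_productivity(weekly_records):
    """weekly_records: iterable of (interviewer_id, survey_id, week, hours_per_case).
    Returns {interviewer_id: mean of (hours_per_case / survey median),
    computed over weeks 1-7 only}."""
    # Norming factor: median hours per case observed on each survey.
    by_survey = defaultdict(list)
    for _, survey, _, hpc in weekly_records:
        by_survey[survey].append(hpc)
    survey_median = {s: median(v) for s, v in by_survey.items()}

    # Keep only each interviewer's first seven weeks in the field.
    by_interviewer = defaultdict(list)
    for fi, survey, week, hpc in weekly_records:
        if week <= 7:
            by_interviewer[fi].append(hpc / survey_median[survey])
    return {fi: sum(v) / len(v) for fi, v in by_interviewer.items()}
```

A value below 1 then marks an interviewer who completes cases faster than is typical for that survey, and a value above 1 marks one who is slower.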
Table 1: Productivity (Hours per Case) in First Seven Weeks
75th Percentile Median 25th Percentile
(percentiles of productivity: a higher percentile means fewer hours per case)
Project 1 8.87 11.06 14.11
Project 2 9.41 12.65 16.65
Project 3 15.47 19.50 22.80
Project 4 5.31 6.44 7.61
Project 5 4.86 5.59 6.94
Project 6 4.66 5.36 6.34
Project 7 8.81 11.20 14.74
Project 8 5.76 7.53 9.37
Project 9 12.30 16.66 20.75
An examination of Table 1 reveals that there is substantial heterogeneity in interviewer performance. In project one, for example, a field interviewer at the cusp of the top quartile in productivity averaged 8.87 hours per case, about 80% of the time that the median interviewer posted. By contrast, a field interviewer at the cusp of the bottom quartile took almost a third longer, at over 14 hours per case. Although the median hours per case varies substantially across projects, these relative relationships stay roughly the same: the top performers deliver cases in about 20-25% less time than the median interviewers; the bottom quartile takes about 20-30% more time.
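The relative comparisons above follow directly from Table 1's Project 1 row; a quick check of the arithmetic:

```python
# Table 1, Project 1: hours per case at the top-quartile cusp, the median,
# and the bottom-quartile cusp.
top, med, bottom = 8.87, 11.06, 14.11

assert round(top / med, 2) == 0.80     # top quartile: ~80% of the median time
assert round(bottom / med, 2) == 1.28  # bottom quartile: ~28% longer
```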
The attrition across projects is quite high. Depending on the project, between 25 and 50% of the interviewers have left by the seventh week – and a surprisingly large number
of those who were trained did not even do one week’s work.
Table 2: Weeks in First Spell (Censored at Seven Weeks)
Weeks completed: 0 1 2 3 4 5 6 7 | Proportion Leaving Before Seven Weeks | Project Length
Project 1 5 7 10 11 12 9 6 58 50.85% 21
Project 2 3 5 7 6 10 5 2 66 36.54% 23
Project 3 5 8 14 5 6 7 4 127 27.84% 49
Project 4 8 8 7 6 11 3 7 102 32.89% 34
Project 5 8 8 5 6 7 8 12 111 32.73% 34
Project 6 12 5 1 5 4 4 4 110 24.14% 40
Project 7 8 6 4 5 7 4 8 79 34.71% 40
Project 8 7 2 9 1 6 5 4 81 29.57% 20
Project 9 1 7 3 3 10 11 4 96 28.89% 32
That there is substantial variation in interviewer characteristics is evident from an examination of Figure 1, in which each interviewer's hours per case in the first seven weeks is plotted against their duration on the project; both the weeks of retention and the hours per case have been normalized to the maximum length of the project. Although we have zip code level of detail, the labels on the graph identify the state in which each interviewer lives in order to illustrate the geographic variation we will exploit in what follows.
Geographic Variation in Worker Quality and Retention
b) LEHD Data
There are several dimensions that need to be considered in constructing measures of alternative employment opportunities. One is geographical: what is available in a particular area. Another is demographic: the opportunities available to workers of a particular age or sex. A third is skill based: the industries in which a field interviewer is likely to get additional work. There are similar considerations in choosing which measures to use in the analysis. One is quantity based: the amount of workforce turnover and the net number of new jobs available. Another is wage based: the earnings associated with jobs or the earnings available for new hires. A third is a measure of demand pressures: the ratio of new hire earnings to incumbent worker earnings.
Each of these measures is available from a new program at the Census Bureau, the Longitudinal Employer-Household Dynamics (LEHD) program, which is a partnership between the Census Bureau and 44 states. This program releases a set of thirty employment and earnings measures tabulated at a detailed level of demographic (two sex and eight age categories), geographic (county, metropolitan area, and Workforce Investment Area), and industry (4-digit NAICS) information. These Quarterly Workforce Indicators (QWI) measures provide, for instance, information on the level of employment, job creation and destruction, accessions (both hires and recalls), and separations. Moreover, because these measures are derived from micro-data on individual workers and firms, the QWI are provided at a level of previously unavailable demographic, geographic, and industry detail.
For this analysis we use workforce turnover and net job openings as measures of the quantity of alternative job openings, and the earnings of new hires and of incumbent workers as measures of the quality of those openings. We focus on the measures at the county level, for both sexes and for the age groups 22-34, 35-44, 45-54, and 55-64. As well as examining these measures for all industries in each county, we also examine them for a subset of detailed industries that are likely benchmarks for workers with field interviewer characteristics and interests: retail trade; real estate; administration; and other services.
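The third class of measure described earlier, demand pressure, is a simple ratio of the two earnings series. A minimal sketch, assuming QWI-style county rows with hypothetical field names (the actual QWI variable names differ):

```python
# Illustrative computation of a demand-pressure measure: the ratio of average
# new-hire earnings to average incumbent-worker earnings in each county.
# 'earn_new_hire' and 'earn_incumbent' are stand-in names, not QWI variables.
def demand_pressure(qwi_rows):
    """qwi_rows: list of dicts with 'county', 'earn_new_hire', 'earn_incumbent'.
    Returns {county: new-hire / incumbent earnings}; a ratio closer to 1
    suggests a tighter local market that must pay new hires like incumbents."""
    return {r["county"]: r["earn_new_hire"] / r["earn_incumbent"]
            for r in qwi_rows}
```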
Table 3 gives some indication, for a single state, of the differences across counties and by demographic characteristics in local labor markets.
Table 3: Earnings Variation for Women in Illinois by Age and Selected County:
All Industries 2005: Quarter 3
County 65 Plus 55-64 45-54 35-44 25-34 22-24 All Ages
Adams $1,225 $2,226 $2,468 $2,459 $2,102 $1,541 $2,065
Alexander $1,136 $1,983 $1,872 $1,996 $1,679 $1,065 $1,764
Bond $1,417 $2,062 $2,401 $2,548 $2,143 $1,681 $2,111
Boone $1,715 $2,746 $3,058 $3,101 $2,625 $1,370 $2,588
Brown $1,080 $1,887 $2,307 $2,406 $2,260 $1,544 $2,063
Bureau $1,566 $2,077 $2,274 $2,146 $1,835 $1,393 $1,911
Calhoun $1,074 $1,378 $1,461 $1,414 $1,397 $1,487 $1,284
Carroll $1,202 $1,891 $2,036 $1,901 $1,685 $1,249 $1,682
Cass $1,342 $2,036 $2,194 $2,195 $2,032 $1,737 $1,984
Champaign $1,487 $2,491 $2,806 $2,640 $2,223 $1,475 $2,232
Christian $1,197 $1,782 $2,186 $2,112 $1,835 $1,371 $1,801
Clark $1,132 $1,806 $2,108 $2,204 $1,993 $1,538 $1,834
Clay $1,325 $1,782 $1,854 $1,929 $1,748 $1,314 $1,725
Clinton $1,235 $1,769 $2,077 $2,009 $1,829 $1,294 $1,703
Coles $1,196 $2,101 $2,415 $2,217 $1,871 $1,222 $1,912
Cook $2,004 $3,200 $3,595 $3,585 $3,013 $1,898 $3,057
Variations in Turnover Rates
Adams 5.5% 4.4% 5.6% 6.4% 10.3% 17.7% 9.2%
Alexander 4.3% 5.0% 5.8% 8.7% 12.9% 20.0% 9.3%
Bond 7.7% 5.1% 6.4% 9.2% 11.2% 16.4% 9.7%
Boone 10.3% 6.6% 5.7% 7.1% 10.4% 21.4% 9.9%
Brown 6.6% 5.4% 5.6% 6.2% 7.4% 7.8% 7.6%
Bureau 5.2% 5.1% 5.9% 7.9% 10.0% 18.4% 9.6%
Calhoun 2.7% 4.4% 8.0% 7.9% 13.6% 38.1% 12.2%
Carroll 6.4% 6.3% 6.0% 8.8% 12.5% 20.3% 10.8%
Cass 7.1% 4.4% 4.9% 7.1% 8.8% 14.2% 8.2%
Champaign 6.7% 4.5% 5.2% 7.4% 11.5% 20.1% 10.5%
Christian 4.2% 4.9% 5.7% 6.1% 11.0% 14.9% 8.9%
Clark 11.0% 5.2% 6.2% 9.4% 13.4% 29.6% 11.8%
Clay 9.9% 5.5% 5.6% 9.3% 10.0% 14.9% 9.9%
Clinton 4.6% 3.9% 5.5% 6.5% 9.6% 17.4% 9.2%
Coles 5.3% 7.6% 6.9% 9.5% 13.7% 23.9% 12.5%
Cook 8.4% 6.3% 6.9% 8.5% 12.2% 18.7% 10.5%
This variation across counties translates into very different alternative job opportunities facing field interviewers. Even within the state of Illinois, Cook County differs substantially from other counties both in the quantity of job openings, as measured by job flows and job turnover rates, and in the quality of jobs, as measured by the earnings of incumbent workers and of new hires in the other industries in which field interviewers are likely to be seeking work.
Figures 2 and 3 provide further visual illustration of these differences for the particular labor markets faced by the field interviewers.
[Figure: Geographic Variation in Quantity and Quality of Jobs – All Industries; vertical axis: turnover rate in all industries, 0 to 0.4]