Chapter 17: Unemployment
[In: Kenneth G. Dau-Schmidt, Seth D. Harris and Orly Lobel (eds), Labor and Employment Law and Economics, Volume 2. Edited by Gerrit De Geest]
1 Defining and measuring unemployment
For decades, unemployment posed an embarrassment for economic theory because, as Davidson (1990) aptly notes, ‘the Walrasian assumption that markets clear necessarily assumes away the possibility of unemployment’. The near absence of theoretical guidance on the nature and definition of unemployment made some comments by Sinclair (1987) remarkably cogent: ‘Unemployment is like an elephant: easier to recognize than to define. Definitions abound. Practices differ between countries, too; politicians beset by a sharp rise in unemployment on a given definition sometimes yield to the temptation to redefine their problem away.’
It would be cavalier and incorrect to suggest that, since the above observations were written, economists have come to fully understand unemployment or that unemployment is now a settled field within labor economics. Nevertheless, the advances have been substantial – the definitions are clearer, the explanations less ad hoc. As we will see, advances in data collection and the measurement of unemployment have generally led advances in the economic theory of unemployment, although many economic theorists would probably resist this suggestion.
This section begins with a typology of unemployment that labor economists have used throughout the post-World War II period. It then describes how unemployment is measured, a method that was pioneered in the United States by the Bureau of Labor Statistics and has been gradually accepted and adopted by most advanced Western economies. As will be seen, the measurement of unemployment inevitably involved specifying a definition of unemployment and related labor force concepts.
A Types of unemployment
In 1945, William H. Beveridge introduced a typology of unemployment that economists and policy makers have found useful ever since. Beveridge suggested that a worker may be unemployed for one of three reasons, which he termed frictional, structural and demand-deficient.
Frictional unemployment is unemployment that exists because it takes time and effort for job seekers to find job vacancies and for employers with job vacancies to find suitable workers. Frictional unemployment, then, is caused by search costs or, more generally, ‘transaction costs’. Frictional unemployment may be voluntary if a worker leaves a job in order to search for a better job. A job seeker who has just entered or re-entered the labor force – a recent college graduate or someone who has taken some time out of the labor force to care for children – often experiences frictional unemployment.
It is unlikely that society could ever eliminate frictional unemployment because information is not free. As long as the demand for labor is dynamic and adjusting, and as long as workers are mobile and changing their preferences, frictional unemployment will exist. However, frictional unemployment could be reduced because it is generated by transaction costs, such as the costs of search and negotiation, which could be reduced. In particular, improved search technologies like the internet hold the promise of dramatically improving the exchange of information and reducing frictional unemployment because they promise to make it easier and cheaper for employers to let workers know when they have vacancies and for job seekers to find out about those vacancies.
Structural unemployment occurs because workers differ in their skills, abilities and locations and because jobs differ in their skill requirements and locations. Suppose a school district opens a new position for a math teacher because the state has increased the high school graduation requirement in math and computer technology. The district searches for a math teacher but cannot find anyone with the needed training, skills and interests. Meanwhile, it receives dozens of applications from teachers who would like to teach English and history. The problem is one of mismatch between the job vacancy and the available job seekers. For this reason, structural unemployment is often called ‘mismatch’ unemployment. Also, it is sometimes referred to as the ‘round peg–square hole’ problem because it is like trying to fit a worker of one kind (a round peg) into a job of another kind (a square hole).
The underlying cause of structural unemployment is heterogeneous workers and heterogeneous jobs. But it is called ‘structural’ because it is caused by so-called structural shifts – changes in demand for goods and services, technological change and changes in the location of people and industry. Government programs have attempted to address structural unemployment by policies such as training subsidies and moving assistance. Section 4 on training programs and re-employment policies examines the effectiveness of such policies.
Demand-deficient unemployment is unemployment that arises over the business cycle as a result of macroeconomic fluctuations. All workers and all jobs could be alike and all transaction costs could be eliminated, but there could still be demand-deficient unemployment. Specifically, if the number of job vacancies in an economy is less than the number of job seekers, then demand-deficient unemployment exists.
Estimating the extent of frictional, structural and demand-deficient unemployment has been controversial. Because frictional unemployment is arguably inevitable, it has come to be identified with the ‘natural rate of unemployment’ (a term coined by Milton Friedman) which over time has become known as the Non-accelerating Inflation Rate of Unemployment (NAIRU) (Friedman 1968; Gordon 1997). Most empirical work on the NAIRU has had a macroeconomic focus and defines the NAIRU as the rate of unemployment at which there is neither upward nor downward pressure on the general price level (DeLong 2002). Upward pressure on prices can result from bottlenecks in production or tight labor markets; downward pressure from excess capacity or slack labor markets. The controversy over the NAIRU has focused on whether this unemployment rate is stable or variable. Robert Gordon (1997) has been a staunch defender of the hypothesis that the NAIRU is a constant 6 per cent. Ball and Mankiw (2002), among others, suggest that the NAIRU has varied over the years from 4.5 per cent in the 1960s, to a peak of 7.5 per cent in the early 1980s, then gradually back to 4.5 per cent by 2000. Appealing as it may be to think of the NAIRU as a universal constant, a variable NAIRU makes much sense in view of the changes in the composition of the labor force and improvements in the labor exchange system (Balducchi et al. 2004).
B An illustration
Table 17.1 gives an example that illustrates frictional, structural, and demand-deficient unemployment. Suppose we have a very simple labor market in which jobs and workers are of just two kinds: Type I and Type II (Type I might be skilled workers and jobs, whereas Type II might be unskilled workers and jobs). On the demand side of the labor market, employers have seven Type I vacancies and five Type II vacancies. On the supply side of the market, six Type I workers are looking for work and six Type II workers are looking for work. An employment agency (it could be the public Employment Service or a private agency) takes job orders from firms that have job vacancies to fill and then tries to match those job orders with unemployed job seekers who have applied for jobs.
Table 17.1 Job vacancies and job seekers in a simple labor market

| Job vacancies (V) || Job seekers (U) ||
| Type I | Type II | Type I | Type II |
| 7 | 5 | 6 | 6 |
In the table, a total of 12 workers (six Type I and six Type II) are looking for work, so total unemployment is 12. How would we characterize the unemployment of these 12 unemployed job seekers – as frictional, structural or demand deficient? Note first that the number of vacancies (V) equals the number of job seekers (U), so there is no demand-deficient unemployment. Next, it is clear that 11 workers are frictionally unemployed. The agency could match the six Type I job seekers with six of the seven existing Type I job vacancies. Then the agency could match five of the six Type II job seekers with the five existing Type II job vacancies. That would leave one Type II job seeker without a job. He or she is structurally unemployed because one Type I job vacancy remains to be filled, but the Type II job seeker cannot fill it. There is a mismatch between the vacancy and the skills of the Type II worker. In principle, this remaining Type II worker could be retrained and could then fill the open Type I vacancy. Again, we will examine the effectiveness of retraining below.
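The bookkeeping of this illustration can be sketched in a few lines of Python. This is a minimal sketch using the figures of the example; the matching rule (own-type matches first, residual mismatch counted as structural) is a simplification of what a real employment agency does:

```python
# Figures from the Table 17.1 example: 7 Type I and 5 Type II vacancies;
# 6 Type I and 6 Type II job seekers.
vacancies = {"Type I": 7, "Type II": 5}
seekers = {"Type I": 6, "Type II": 6}

total_V = sum(vacancies.values())
total_U = sum(seekers.values())

# Demand-deficient unemployment exists only if seekers outnumber vacancies overall.
demand_deficient = max(total_U - total_V, 0)

# Frictional: seekers who can be matched to a vacancy of their own type.
frictional = sum(min(vacancies[t], seekers[t]) for t in seekers)

# Structural: seekers left unmatched despite enough vacancies in the aggregate.
structural = total_U - frictional - demand_deficient

print(frictional, structural, demand_deficient)  # 11 1 0
```

The one structurally unemployed worker corresponds to the Type II job seeker facing an unfilled Type I vacancy.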
It would not be far-fetched to suggest that Beveridge’s typology has had a greater influence on the measurement and, ultimately, the theory of unemployment than any other conceptualization of unemployment. The reason is that his approach is essentially practical and grounded in an understanding of the real workings of the labor market. Beveridge’s approach sets a straightforward agenda: first, measure unemployment, then try to understand its origins.
C Measuring unemployment
Measuring unemployment necessarily involves defining unemployment, and decades of debate over the definition of unemployment were ultimately settled through efforts to measure (or estimate) the extent of unemployment in the labor market (National Commission on Employment and Unemployment Statistics 1979). In order to determine whether a worker is unemployed, we must ask the worker questions about his or her labor force status. To the extent the debate over the definition of unemployment continues, it is defined by the questions asked of workers to determine whether they are unemployed and the categories to which the workers are assigned.
To understand unemployment (and the labor force more generally), we need to define some concepts which are illustrated in Figure 17.1. The outer rectangle of the figure represents the total population of the US in January 2004 which was estimated to be about 280 million people. The total population is every living person in the United States, but not everyone in the US population has the potential to be part of the labor force. In particular, three groups are excluded: anyone under age 16; inmates of penal institutions, psychiatric institutions, old-age homes and tuberculosis sanitariums; and anyone in the military. Once we exclude these three groups, we have what is known as the civilian noninstitutional population, which is represented by the inner rectangle of the figure.
During the second week of each month, the Bureau of Labor Statistics surveys about 65 000 households (nearly 250 000 individuals) as part of the Current Population Survey (CPS) (US Department of Labor 2007). The CPS includes a series of questions that divide everyone in the civilian noninstitutional population into one of three groups: Employed (E), Unemployed (U) and Not in the Labor Force (N). An individual interviewed for the CPS is counted as employed (E) if he or she (a) worked for pay for as little as one hour during the middle week of the month, (b) was temporarily absent from a regular job because of illness, vacation, a strike, bad weather or other specified reasons, or (c) worked in a family business (not for pay) for 15 hours or more. In January 2004, 138.6 million American workers were employed.
Note that anyone who works for just one hour for pay is employed according to this definition. Someone who works 35 hours or more during the week is considered a full-time worker. Someone who works less than 35 hours per week is considered part time. But both are employed. The CPS asks questions of part-time workers to determine whether they are voluntarily or involuntarily working part time. Most part-time workers want to work part time, but about 35–40 per cent would prefer full-time work (during recessions, the proportion of part-time workers who want to work full time tends to rise).
Next, an individual is counted as unemployed (U) if she did not work during the survey week but (a) was available for work and said she had looked for a job in the last four weeks, or (b) was waiting to report for a scheduled job. The key here is that the individual is not automatically counted as unemployed simply because she didn’t work during the week. She must have been looking for a job (or be waiting to report for one) or she is counted as not in the labor force. In January 2004, 8.3 million workers were unemployed.
Finally, an individual is counted as not in the labor force (N) if she did not work during the week and was not available for work or not looking for work. Not in the labor force workers include retired people, homemakers and those engaged in caring for their children, students who are not working, the ‘voluntarily idle’ and ‘discouraged workers’. Discouraged workers are those who did not have a job and had not looked for one in the last month but said they wanted a job. Again, someone must have looked for a job to be counted as unemployed.
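The three-way classification described above can be summarized as a small decision function. This is a deliberate simplification of the CPS questionnaire (the actual survey instrument is far more detailed), and the argument names are invented for illustration:

```python
def classify(worked_for_pay_hours, temporarily_absent, unpaid_family_hours,
             available_for_work, looked_last_4_weeks, awaiting_scheduled_job):
    """Assign a CPS-style labor force status: 'E', 'U' or 'N' (simplified sketch)."""
    # Employed: any paid work at all, a temporary absence from a regular job,
    # or 15+ unpaid hours in a family business.
    if worked_for_pay_hours >= 1 or temporarily_absent or unpaid_family_hours >= 15:
        return "E"
    # Unemployed: available and actively searching, or waiting to report
    # for a scheduled job.
    if (available_for_work and looked_last_4_weeks) or awaiting_scheduled_job:
        return "U"
    # Everyone else -- including discouraged workers -- is not in the labor force.
    return "N"

print(classify(1, False, 0, False, False, False))  # 'E': one paid hour is enough
print(classify(0, False, 0, True, True, False))    # 'U': available and searching
print(classify(0, False, 0, True, False, False))   # 'N': wants a job but did not look
```

The last call shows why discouraged workers fall outside the labor force: without active search, availability alone does not make someone unemployed.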
Counting discouraged workers as not in the labor force rather than unemployed has been a continuing source of controversy (Bregger and Haugen 1995). Some argue that these people are able and available for work and want jobs but have given up looking after concluding that their prospects are bleak. Why not classify them as part of the labor force, and hence unemployed? Supporters of the existing classification respond with questions of their own: if these workers haven’t looked for work in the last month, how do they really know jobs are not available? Are they really willing to work and available for work?
As something of a compromise, the BLS now routinely reports the number of workers not in the labor force who say they want a job. In January 2004, there were 4.7 million such American workers and including them among the unemployed would have increased the ranks of the unemployed from 8.3 million to over 13 million – an increase of more than 50 per cent.
D The unemployment rate and related labor force statistics
The above definitions give us what we need to define some important socioeconomic statistics. The first is the civilian labor force (usually referred to as the labor force, or L) which is the sum of individuals who are employed and those who are unemployed. In the notation of the figure: L = E + U. These are people who are either working or are able, available and looking for work. Note that unemployed individuals are considered part of the labor force because they are able, available and seeking work.
The second definition is the civilian labor force participation rate (usually referred to as the labor force participation rate, or lfpr) which is the proportion of the civilian noninstitutional population that is in the labor force. In the notation of the figure: lfpr = L / (L + N) = L / (E + U + N). Literally, the lfpr is the proportion of all individuals eligible to work (that is, the noninstitutional population) who are either working or trying to find work. The lfpr, then, is one measure of the extent to which an economy’s population is contributing (or attempting to contribute) to formal market work. A country where the lfpr is 0.1 would be a country of leisure; a country where it is 0.9 would be a country of workaholics. During the last century, the extent of labor force attachment in the United States has strengthened overall, weakening somewhat for men and strengthening greatly for women – especially married women.
The third key statistic is the civilian unemployment rate (usually referred to simply as the unemployment rate, or ur) which is the proportion of the labor force that is currently unemployed stated as a percentage. In the notation of the figure: ur = U / (E + U) = U / L. The unemployment rate is one of the most closely watched economic statistics the US government generates.
A final statistic that follows from the earlier definitions is the employment–population ratio (E/P), which is the proportion of the civilian noninstitutional population currently employed. In the notation of the figure: E/P = E / (L + N) = E / (E + U + N). The employment–population ratio is closely related to the lfpr: adding the number of unemployed workers to the numerator of E/P gives the lfpr. The employment–population ratio is an important measure of the relationship between living standards, on one hand, and the need of a population to work, on the other.
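These definitions translate directly into code. In the sketch below, E and U are the January 2004 figures from the text (in millions); N is a hypothetical figure chosen purely for illustration, since the chapter does not report it:

```python
# Labor force statistics built from the definitions above (millions of people).
E = 138.6   # employed (January 2004, from the text)
U = 8.3     # unemployed (January 2004, from the text)
N = 75.0    # not in the labor force -- hypothetical, for illustration only

L = E + U                  # civilian labor force: L = E + U
lfpr = L / (E + U + N)     # labor force participation rate
ur = U / L                 # unemployment rate
ep = E / (E + U + N)       # employment-population ratio

print(f"L = {L:.1f} million, ur = {100 * ur:.1f}%")  # ur comes out near 5.7 per cent
print(f"lfpr = {lfpr:.3f}, E/P = {ep:.3f}")
```

Note that lfpr and E/P differ only in the numerator: adding U to the numerator of E/P gives the lfpr, exactly as the text states.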
Estimates of the incidence and duration of unemployment are essential products of the CPS and have fueled an important debate over the understanding of unemployment. Suppose the measured unemployment rate is 5 per cent over the course of a year. At one extreme, this could imply that every labor force participant is unemployed for 5 per cent of the year (or about two and one-half weeks). At the other extreme, it could mean that all unemployment falls on just 5 per cent of the labor force, each of whom suffers 26 weeks of unemployment during the year. The truth, of course, is somewhere in between, but how seriously we view an unemployment rate of 5 per cent depends crucially on whether the labor market is closer to one extreme or the other. The debate over understanding and interpreting the incidence and duration of unemployment is longstanding and shows no sign of abating (Clark and Summers 1979; Akerlof and Main 1980, 1981; Sider 1985; Baker 1992; Shimer 2007; Elsby et al. 2007).
Clearly, the longer the typical unemployment spell, the more hardship an unemployed worker suffers, and the median unemployment spell in the United States has shown a definite upward trend over the last four decades. In the late 1960s, the typical unemployment spell lasted less than five weeks; by the late 2000s, it had nearly doubled to more than nine weeks. This trend has important implications for income replacement and re-employment policy, as discussed below.
Recent years have seen two significant improvements in the measurement of unemployment dynamics in the United States. Following the work of Dunn et al. (1989) and Davis et al. (1996), BLS now tracks the creation and destruction of jobs through its Business Employment Dynamics program (US Department of Labor quarterly). Also, job vacancies, neglected for decades, have been estimated and reported since 2001 by the Job Openings and Labor Turnover survey (US Department of Labor monthly).
2 Modeling unemployment
Starting in the 1970s, something of a revolution occurred in the modeling of unemployment. From the 1940s into the 1960s, economists used the Keynesian model to ‘explain’ unemployment, but this model was a stopgap, an ad hoc model not based on a well-developed set of assumptions about human behavior. By the 1960s, economic theorists took seriously the challenge unemployment posed for the workings of the labor market and devised several possible solutions. This section describes the two that have had the most influence on empirical work in labor economics – the job-search/reservation-wage model and the job-matching/trade-frictions model.
A Job-search and the reservation wage model
The key to the job-search and job-matching models is that they relax the assumption of perfect information in the labor market and admit uncertainty. In particular, the models assume that workers face a distribution of wage offers and workers know all the features of that wage distribution, but uncertainty is present because workers do not know which employer offers which wage. As a result, workers have an incentive to search, which is costly because it takes time and effort to apply to an employer for a job. In a modeling sense, the search process is highly mechanical and stylized – it amounts to drawing a firm (or wage offer) at random from the distribution of wage offers. Just as in probability theory, uncertain outcomes are modeled by the analogy of drawing a ball from an urn. To make this more concrete, consider Table 17.2 which shows a distribution of wage offers that a worker might face. If search were costless, then the worker would search until he or she found one of the three firms offering a wage of $16.00. Indeed, all workers would do this, so the high-wage firms would be inundated with applicants and all other firms would find that workers would never accept their wage offers. The high-wage firms would then lower their wage offers, the low-wage firms would raise theirs, and without search costs the distribution of wages would disappear (or ‘degenerate’).
Table 17.2 A distribution of possible wage offers

| Hourly wage ($) | Number of firms | Probability of wage – P(w) |
The job-search models invented by Stigler (1962), McCall (1970) and Mortensen (1970) avoid the problem of workers searching ad infinitum by assuming a worker must incur some cost each time she ‘searches’ for a job (that is, each time she applies for a job by making a random draw from the wage-offer distribution). The cost could be time and effort by the worker or a fee paid to an employment agency, but it is a real cost that implicitly lowers the wage by the amount of the per-search cost. The worker now faces an economic problem: She must weigh the expected benefit of making another job application (that is, the possibility that the next draw from the wage-offer distribution will be higher than the last) against the cost of applying.
How should a worker proceed? Two basic decision rules have been proposed. Both are technical, so only the flavor of each will be given here. The first was the optimal sample size rule suggested by George Stigler (1962) which assumes the worker decides before search begins how many ‘searches’ to make. The worker does this by calculating the maximum expected wage offer for one search, two searches and so on, then choosing the number of searches that maximizes the difference between the expected wage resulting from searching n times and the cost of searching n times.
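A minimal sketch of Stigler’s optimal sample size rule follows. The wage-offer distribution and the per-search cost below are hypothetical (Table 17.2’s figures are not reproduced here); the worker picks the n that maximizes the expected best offer net of total search costs:

```python
# Hypothetical discrete wage-offer distribution and per-search cost.
wages = [10.0, 12.0, 14.0, 16.0]
probs = [0.4, 0.3, 0.2, 0.1]
cost_per_search = 0.25

def expected_max(n):
    """Expected highest wage among n independent draws from the distribution.
    Uses P(max = w) = F(w)**n - F(w_prev)**n for a discrete distribution."""
    cdf = 0.0
    total = 0.0
    for w, p in zip(wages, probs):
        prev = cdf
        cdf += p
        total += w * (cdf ** n - prev ** n)
    return total

# Choose the sample size n that maximizes expected best offer minus search costs.
best_n = max(range(1, 21), key=lambda n: expected_max(n) - n * cost_per_search)
print(best_n)  # optimal number of searches for this distribution
```

Because the marginal gain from one more draw shrinks as n grows, the rule stops once that gain falls below the per-search cost; with these numbers the worker plans five searches.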
The second rule is the reservation-wage rule (or sequential-decision rule) suggested by McCall (1965) and Mortensen (1970) which most subsequent work has followed. Under the reservation-wage rule, the worker decides on a minimum wage that would be acceptable – a reservation wage, wr – and begins to search if that reservation wage is greater than the value of the current state. The worker samples one employer at a time (sequentially), rejecting any offer less than wr and accepting the first wage at or above wr.
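The sequential rule can be illustrated by simulation, again with a hypothetical wage-offer distribution. A higher reservation wage yields a higher accepted wage but a longer expected search (that is, a longer frictional unemployment spell):

```python
import random

# Hypothetical discrete wage-offer distribution.
wages = [10.0, 12.0, 14.0, 16.0]
probs = [0.4, 0.3, 0.2, 0.1]

def search(reservation_wage, rng):
    """Draw offers one at a time; accept the first at or above the reservation
    wage. Returns (accepted wage, number of draws)."""
    draws = 0
    while True:
        draws += 1
        offer = rng.choices(wages, weights=probs)[0]
        if offer >= reservation_wage:
            return offer, draws

rng = random.Random(0)
for wr in (10, 12, 14, 16):
    results = [search(wr, rng) for _ in range(10_000)]
    avg_wage = sum(w for w, _ in results) / len(results)
    avg_len = sum(d for _, d in results) / len(results)
    print(f"w_r = {wr}: avg accepted wage {avg_wage:.2f}, avg search length {avg_len:.2f}")
```

With a reservation wage of $10, every first offer is accepted; with $16, the worker waits (on average) about ten draws for one of the rare high-wage firms.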
B Job-matching and the trade-frictions approach
The job-search/reservation-wage model has been enormously influential among empirical labor economists and has motivated innumerable papers on the duration of unemployment. It ‘solves’ the problem of explaining unemployment by relaxing one of the key assumptions of the Walrasian model – that transaction costs are zero. However, Rothschild (1973) and others criticized it early on for being too narrow and, in particular, for focusing on just one side (the supply side) of just one market (the labor market).
The job-matching approach (also known as the trade-frictions approach) is an alternative to the job-search/reservation-wage model and was developed by Diamond (1982), Mortensen (1982) and Pissarides (1990). It differs in two essentials from the job-search/reservation-wage model. First, the job-matching approach considers both sides of the labor market by adding equations for the number of jobs offered by employers. Second, it takes the intensity of job search (rather than the reservation wage) as the worker’s object of choice. In the job-matching model, workers flow through various labor market states with rates of transition between states depending (in part) on the search behavior of workers. Unemployed workers search randomly across firms for job vacancies and firms with vacancies randomly select from the pool of applicants. Each unemployed worker chooses search intensity – the number of firms contacted – in an effort to maximize expected lifetime utility. Increasing search intensity raises the probability of re-employment but is also costly. The model generates a steady-state equilibrium by equating the flows into and out of each labor market state.
A highly simplified version of the job-matching model can give some of its flavor. Suppose a fixed number of homogeneous workers in the labor market (L) move between two states – employment and unemployment. At a given time, E workers have jobs and U are unemployed (so L = E + U). In each period, a fraction of workers (s) with jobs separate from their job and become unemployed. Also in each period, a fraction of unemployed workers (m) finds a job match and gains re-employment.
In steady-state equilibrium, the flow of workers from employment (E) to unemployment (U) must equal the flow from unemployment (U) to employment (E). That is, we have the steady-state condition, sE = mU. Using this steady-state condition and the identity L = E + U, the steady-state number of employed and unemployed workers can be solved as:

E = [m / (s + m)]L and U = [s / (s + m)]L
Also, the steady-state unemployment rate (ur = U / L) can be found:

ur = U / L = s / (s + m)
For example, suppose the separation rate (s) is 0.01 per two-week period (as the empirical literature suggests), the re-employment rate (m) is 0.15 per two-week period and there are 125 million workers in the labor force (L). Then in steady-state equilibrium there are 117 million employed workers (E), 7.8 million unemployed workers (U) and the steady-state unemployment rate (ur) is 6.2 per cent.
Although the above model is extremely spare, it can among other things illustrate the mechanism by which re-employment policies might work. For example, intensive job-search assistance should increase the re-employment probability (m) of unemployed workers. If job-search assistance caused m to rise from 0.15 to 0.2, then steady-state employment (E) would rise to 119 million, steady-state unemployment (U) would fall to 6 million and the steady-state unemployment rate would fall to 4.8 per cent. Note that the increase in employment and the decrease in unemployment occur even though the labor force and (implicitly) the total number of available jobs are fixed. The increase in job-search intensity results in more of the existing job vacancies being filled and, hence, increased employment.
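The arithmetic of the two scenarios above can be checked with a short sketch; the function simply evaluates the steady-state conditions, and the figures are those given in the text:

```python
def steady_state(s, m, L):
    """Steady-state stocks implied by the flow-balance condition s*E = m*U
    together with the identity L = E + U."""
    E = m / (s + m) * L
    U = s / (s + m) * L
    ur = s / (s + m)
    return E, U, ur

# Baseline from the text: s = 0.01, m = 0.15, L = 125 million.
E, U, ur = steady_state(0.01, 0.15, 125)
print(round(E), round(U, 1), round(100 * ur, 1))  # about 117 million, 7.8 million, 6.2 per cent

# Job-search assistance raises the re-employment rate to m = 0.20.
E2, U2, ur2 = steady_state(0.01, 0.20, 125)
print(round(E2), round(U2, 1), round(100 * ur2, 1))  # about 119 million, 6.0 million, 4.8 per cent
```

The unemployment rate falls even though L is fixed, because a higher m shifts the flow balance toward employment.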
From the standpoint of empirical work, the job-matching/trade-frictions model has been less influential than the earlier job-search/reservation-wage model. But the advantages of the job-matching model – its consideration of the employer’s side of the labor market and its ability to accommodate many types of workers and jobs – have made it attractive for simulating a variety of labor market policies and for generalizing effects estimated in experimental settings to the entire labor market (Davidson and Woodbury 1993, 2000, 2002; Heckman et al. 1999).
3 Unemployment insurance
Unemployment insurance (UI) programs pay a weekly benefit for a limited period of time to workers who have lost their job through no fault of their own and are actively seeking work. In the United States, Congress established the UI system when it passed the Social Security Act in 1935. The US system of UI is unusual because it is a federal-state system – each state administers its own UI program, setting its own benefit levels and tax rates subject only to broad federal guidelines and oversight by the US Department of Labor (Advisory Council on Unemployment Compensation 1995, 1996).
Traditionally, UI has been considered a social insurance program to alleviate the hardship that arises during a cyclical downturn (Haber and Murray 1966; Blaustein 1993; Advisory Council on Unemployment Compensation 1995). Although it also assists displaced and dislocated workers, the duration of UI benefits is not long enough to support a dislocated worker through a period of training longer than six months (and indeed, many states disqualify workers from receiving UI if they are in school or training). Many frictionally unemployed workers are also eligible for UI. Whether it makes sense for frictionally unemployed workers to receive UI depends on the extent to which the longer job search enabled by UI results in a better job match. The available evidence suggests that longer job searches generally do not yield substantially better matches (Johnson and Klepinger 1994; Decker et al. 2001; Black et al. 2003).
This section describes the eligibility requirements and benefit levels of UI programs in the United States and then the financing of the program. The discussion provides background for the current controversy over goals of the UI system – whether it should serve mainly as an income replacement program for workers with a strong attachment to the labor force or if it should play a more aggressive role in fighting poverty by transferring income from higher-wage to lower-wage workers.
A Eligibility and benefit determination
To be eligible for UI benefits, an unemployed worker must satisfy two sets of criteria. The first are called ‘monetary’ eligibility criteria and pertain to a worker’s earnings history. The second are ‘nonmonetary’ eligibility criteria which pertain to the conditions that led a claimant to leave the last employer and whether the claimant is now looking for work.
In most states, a worker’s monetary eligibility is based on earnings in a ‘base period’, which is the first four of the five completed quarters before the claim is filed. The quarter of the base period in which earnings were highest is referred to as the ‘high quarter’ and to be eligible for benefits a worker must typically have had some specified minimum earnings in the high quarter and one and one-half times that amount in the full base period.
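The monetary test can be sketched as a small function. The $1,000 high-quarter threshold below is hypothetical (actual minimums vary by state), and the sketch reads the ‘one and one-half times’ requirement as applying to high-quarter earnings, which is one common state formula:

```python
def monetarily_eligible(quarterly_earnings, min_high_quarter=1000.0):
    """Simplified sketch of a typical state's monetary eligibility test.

    quarterly_earnings: earnings in the four base-period quarters
    (the first four of the last five completed quarters).
    """
    high_quarter = max(quarterly_earnings)          # the 'high quarter'
    base_period_total = sum(quarterly_earnings)
    # Must clear a minimum in the high quarter AND have total base-period
    # earnings of at least 1.5 times the high-quarter earnings.
    return (high_quarter >= min_high_quarter
            and base_period_total >= 1.5 * high_quarter)

print(monetarily_eligible([3000, 0, 0, 0]))           # False: earnings too concentrated
print(monetarily_eligible([3000, 2000, 1500, 1000]))  # True: steady base-period earnings
```

The first claimant fails despite substantial earnings because the 1.5× rule screens out workers whose earnings fall in a single quarter, a proxy for weak labor force attachment.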
Thirty-four US states automatically adjust the maximum benefit amount by linking it to the state’s average weekly wage. A typical approach is to set the maximum at between 50 and 60 per cent of the average weekly wage. Illinois, for example, sets it at 50 per cent. States that do not take this approach often see their UI replacement rates slip below the national average and experience repeated political wrangling over the maximum benefit amount.
In all but one state, the maximum potential duration of benefits is 26 weeks. The exception is Massachusetts where the maximum is 30 weeks. In nine states (including two large states – Illinois and New York) all eligible workers can receive up to 26 weeks of benefits, but in most states potential duration may be much less. For example, in California, Florida and Texas, workers may be eligible for as few as 14, nine or ten weeks respectively. In these ‘variable duration’ states, potential benefit duration is determined by a formula that provides greater duration to workers whose earnings have been more stable, as measured by the ratio of base-period earnings to high-quarter earnings.
To be eligible for benefits, a UI claimant must also satisfy three sets of ‘nonmonetary’ eligibility criteria. First, she must have left her last job due to lack of work and through no fault of her own (these are known as ‘separation criteria’). Accordingly, a worker who quits voluntarily or is discharged for cause is usually ineligible for UI, although there are some exceptions for workers who quit for good cause. Second, a worker must be currently available for and seeking full-time work (these are known as ‘nonseparation criteria’ or the ‘UI work test’). Accordingly, a worker who is unavailable for full-time work due to childcare responsibilities, decides not to search for work because he or she believes jobs are unavailable or takes a vacation is also ineligible for UI. Third, a worker must not be receiving ‘disqualifying income’, the definition of which varies from state to state, but which often includes severance pay, partial disability pay and vacation pay.
Nonmonetary eligibility criteria appear straightforward, but they are difficult to implement and enforce. Consider first the separation criteria. Should all base period separations be considered or only the most recent? Several states consider separations in addition to the most recent under some circumstances. What reasons should be included as ‘good causes’ for quitting a job voluntarily? In most states, good cause is restricted to issues directly related to work or the employer, but many states also allow other reasons, such as leaving a job to accept another job that does not materialize or leaving a job due to sexual harassment. States vary widely in how permissive they are in allowing additional ‘good causes’ (US Department of Labor 2008b; UWC 2007).
In most states, a claimant who quits voluntarily or is discharged for cause cannot receive UI benefits for the duration of the current unemployment spell. In order to requalify, a worker who quits voluntarily must typically earn some multiple of his or her weekly benefit amount.
Consider next the nonseparation criteria. For most workers, satisfying the nonseparation criteria entails registering with the state’s public labor exchange, which goes by a different name in each state, being available for ‘suitable’ work, and actively searching for full-time work. Registering with the public labor exchange is unambiguous, but suitable work can be variously defined. Most states simply state that a worker must be available to accept any work, which is likely to exclude many workers from eligibility.
The requirement to seek full-time work has been controversial because of its implications for the eligibility of part-time workers. For example, a worker who works 30 hours per week over a period of years and is laid off would very likely be monetarily eligible for UI. Such a worker would have a strong attachment to the labor force by most reasonable definitions, but unless she were seeking full-time employment (meaning employment of at least 35 hours per week), she would not satisfy the nonseparation criteria for eligibility. This problem has led some observers to conclude that the UI system is outmoded in that it suits the needs only of traditional full-time workers and fails to take account of the needs of single household heads with childcare responsibilities. The Advisory Council on Unemployment Compensation (1995) recommended that workers who satisfy a state’s monetary eligibility criteria should not be denied benefits solely because they are seeking part-time work. Sixteen states have modified their eligibility requirements along these lines (US Department of Labor 2008b).
Finally, most states reduce weekly benefits by the amount of any disqualifying income, which includes severance pay, salary continuation, back pay, wages in lieu of notice, vacation and holiday pay, and pension income received from a base-period employer. UI agencies throughout the country have found these provisions difficult to administer and enforce because they depend on the worker reporting the income. As a result, they appear to be a frequent source of payment errors (Woodbury 2002).
B UI benefit adequacy
A central policy concern in UI is whether the program is adequate. That is, is a large enough proportion of the unemployed covered by UI? Does it replace enough of the lost earnings of unemployed workers? Does it do so for long enough?
To address the first question, economists have long tracked the UI recipiency rate – the percentage of unemployed workers receiving UI benefits. During the 1950s the national recipiency rate averaged about 50 per cent. It then fell gradually to a low of 30 per cent in 1984, but has since recovered to a range of 35 to 40 per cent. The determinants of UI recipiency are only partially understood (Vroman 2001), but the long decline in recipiency coincided with states tightening their eligibility requirements. Also, interstate differences in recipiency are highly correlated with the proportion of workers who are union members and who consequently receive assistance in claiming UI benefits.
A variety of reforms have been suggested to increase UI eligibility and recipiency (Stettner et al. 2004; Peterson 2008). In addition to relaxing the requirement that a UI recipient must seek full-time employment, many states have implemented an ‘alternative base period’ so that workers who lose a job after a relatively short (and recent) spell of employment can qualify for UI. For example, consider a worker who started working in October 2007, was laid off in June 2008 and filed a claim on June 30, 2008. For this worker, the base period would be the four calendar quarters of 2007 because the second quarter of 2008 was not yet complete. Only three of her roughly nine months of earnings would be included in the base period and she would be ineligible for UI benefits. By waiting until July 1, 2008, she would have two quarters of earnings in a conventionally defined base period and (potentially) satisfy the monetary eligibility criteria for UI. Many states now recognize an ‘alternative base period’, defined as the last four completed quarters before the claim is filed, which may be used to make benefits available to workers who would otherwise be ineligible. For a detailed treatment of the alternative base period, see Vroman (1995).
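The quarter arithmetic in this example can be made concrete with a short sketch (the helper names are ours; quarters are represented as (year, quarter-number) pairs):

```python
from datetime import date

def quarter(d):
    """Calendar quarter containing date d, as a (year, quarter) pair."""
    return (d.year, (d.month - 1) // 3 + 1)

def prev_quarter(q):
    year, n = q
    return (year, n - 1) if n > 1 else (year - 1, 4)

def base_periods(claim_date):
    """Conventional base period: the first four of the last five completed
    calendar quarters.  Alternative base period: the last four completed
    quarters.  Quarters are returned newest first."""
    q = prev_quarter(quarter(claim_date))  # most recent completed quarter
    last_five = []
    for _ in range(5):
        last_five.append(q)
        q = prev_quarter(q)
    return {"conventional": last_five[1:5], "alternative": last_five[0:4]}

# Filing June 30, 2008: the conventional base period is the four quarters
# of 2007, so the worker's Q1 2008 earnings are excluded.
june = base_periods(date(2008, 6, 30))
# Filing July 1, 2008: the conventional base period shifts to Q2 2007 -
# Q1 2008, picking up two quarters of the worker's earnings.
july = base_periods(date(2008, 7, 1))
```

The alternative base period is simply `last_five[0:4]`: it drops the ‘lag’ quarter, which is what makes benefits available sooner to recently employed workers.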
Does UI replace enough of unemployed workers’ lost earnings? To address this issue, economists and social insurance experts have long used the UI replacement rate defined as a worker’s weekly benefit amount divided by her average weekly high-quarter earnings and expressed as a percentage. Although the replacement rate gives a rough impression of the extent to which UI benefits replace earnings lost due to unemployment, it is a weak measure of adequacy for at least two reasons. First, the denominator of the replacement rate is based on high-quarter earnings. Although high-quarter earnings accurately represent the regular earnings of stably employed workers, they are an upward-biased indicator of earnings for workers whose employment is irregular. Second, UI benefits are taxed as income by the federal government and most states that have an income tax. As a result, the numerator of the replacement rate should be adjusted downward, but in practice this is rarely done.
Most states have a statutory replacement rate of 0.5, but because the weekly benefit amount is capped, high-wage workers’ wages are replaced at less than 50 per cent. Nevertheless, the Department of Labor estimates that the national average replacement rate in 2006 was 47 per cent (US Department of Labor 2008c). Most states are within three percentage points of this average, although there are a few exceptions. In general, the program appears to come close to its stated goal of replacing about half of pre-layoff earnings.
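A minimal sketch of the replacement-rate calculation shows how the benefit cap pushes high-wage workers below the statutory 50 per cent (the $400 maximum benefit is a hypothetical figure, not any state’s actual cap):

```python
def weekly_benefit(high_quarter_earnings, max_benefit=400.0):
    """Statutory replacement of half the average weekly high-quarter wage
    (13 weeks per quarter), capped at a hypothetical maximum benefit."""
    return min(max_benefit, 0.5 * high_quarter_earnings / 13)

def replacement_rate(weekly_benefit_amount, high_quarter_earnings):
    """Weekly benefit amount divided by average weekly high-quarter
    earnings, expressed as a percentage."""
    return 100.0 * weekly_benefit_amount / (high_quarter_earnings / 13)

# Below the cap, the statutory 50 per cent holds exactly:
mid_wage = replacement_rate(weekly_benefit(6500), 6500)     # 50.0
# A high-wage worker is held to $400 per week, so replacement falls to 20%:
high_wage = replacement_rate(weekly_benefit(26000), 26000)  # 20.0
```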
Finally, is the potential duration of UI benefits long enough? As discussed above, the UI program of every state except Massachusetts provides a maximum of 26 weeks of benefits to eligible claimants. These ‘regular state’ benefits are the first tier of the program and are financed entirely from payroll taxes collected from each state’s employers. In 1970, Congress enacted the Extended Unemployment Compensation Act, which was intended to extend the duration of benefits by 50 per cent (up to 13 weeks) automatically for workers who have exhausted their regular benefits in states where the labor market has deteriorated as a result of recession. This program, known as ‘standby Extended Benefits’ (or simply ‘EB’), can be thought of as a second tier of the UI program. Unlike regular state benefits, EB is funded half from state UI payroll taxes and half from the federal UI payroll tax.
Congress revised the triggers that activate EB in 1981, making it much harder for EB to activate (Vroman and Woodbury 2004; Wenger and Walters 2006). As a result, during the 1990–91 recession, standby EB activated in only ten states, but never in several states that were hard hit by the recession (notably California, New York and Pennsylvania). Similarly, during the recession of 2001–02, EB activated in only four states (Alaska, Idaho, Oregon and Washington). During the downturn of 2008, EB had activated only in Alaska and Rhode Island as of July 2008. In the main, EB is inactive.
Because EB has become inactive, Congress authorized ‘emergency’ extended benefits during 1991–94 and again during 2002–04. Such ‘emergency’ benefits can be thought of as a third tier of the program, funded wholly from federal revenues. Legislation to extend benefits by 50 per cent (up to 13 weeks) in all states and by 100 per cent (up to 26 weeks) in high-unemployment states was introduced in Congress during the spring of 2008.
The percentage of UI recipients who exhaust their regular benefits offers a ready gauge of the adequacy of the potential duration of benefits. Not surprisingly, the exhaustion rate rises when aggregate economic conditions deteriorate and falls during a recovery (US Department of Labor 2008a). For example, the exhaustion rate rose from 31.8 per cent in 2000 (the peak of the late 1990s expansion) to 43.8 per cent in 2003 following the 2001–02 recession (a post-World War II high). In addition, though, the exhaustion rate appears to have trended upward since 1970. A trend line estimated over exhaustion rates shows that the predicted exhaustion has increased from 30 per cent in 1973 to 38 per cent in 2008.
Exhaustion rates have trended upward partly because state duration provisions have become less generous over time and partly because unemployment spell durations have increased since the 1980s. Woodbury and Rubin (1997) show that states with greater average potential duration of benefits have lower exhaustion rates, and states with higher total unemployment rates have higher regular exhaustion rates.
C Disincentive effects of UI and optimal UI benefits
For the United States, studies estimating the effects of higher weekly benefit amounts and longer potential duration of benefits were well developed by the 1990s and have been reviewed by Decker (1997) and Woodbury and Rubin (1997). Decker’s summary suggests that the elasticity of unemployment duration with respect to changes in the replacement rate is between 0.75 and 1.0. Accordingly, a 25 per cent increase in the average replacement rate (from 0.50 to 0.625) would increase the median spell of unemployment by up to 25 per cent (from about eight weeks to about ten weeks).
Unemployment duration also appears to respond to changes in the potential duration of UI benefits (Woodbury and Rubin 1997). The most convincing estimates suggest an additional week of potential benefit duration lengthens unemployment duration by 0.2 to 0.3 week. Accordingly, a typical 13-week emergency extension would lengthen unemployment spells by about 2.5 to 4.0 weeks. Clearly, these disincentive effects are large enough to be important in setting UI policy. Indeed, if benefits are so generous that they reduce workers’ motivation to gain re-employment, the UI program’s goal of providing adequate benefits may collide with the objective of preserving work incentives.
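The back-of-the-envelope arithmetic in the two paragraphs above can be reproduced directly (the elasticity of 0.75–1.0 and the 0.2–0.3 week response are the ranges cited in the text):

```python
def new_median_spell(median_weeks, pct_rise_in_replacement_rate, elasticity):
    """Predicted median unemployment spell after a proportional rise in
    the replacement rate, given a duration elasticity."""
    return median_weeks * (1 + elasticity * pct_rise_in_replacement_rate)

def spell_lengthening(extra_weeks_of_benefits, low=0.2, high=0.3):
    """Predicted range of added unemployment duration from extending
    potential benefit duration, at 0.2-0.3 week per extra week."""
    return (extra_weeks_of_benefits * low, extra_weeks_of_benefits * high)

# 25% rise in the replacement rate, elasticity 0.75 to 1.0:
lower_bound = new_median_spell(8.0, 0.25, 0.75)  # 9.5 weeks
upper_bound = new_median_spell(8.0, 0.25, 1.0)   # 10.0 weeks
# A 13-week emergency extension lengthens spells by roughly 2.6 to 3.9
# weeks, which the text rounds to 'about 2.5 to 4.0 weeks'.
extension_effect = spell_lengthening(13)
```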
Starting with two pioneering contributions by Baily (1978) and Flemming (1978), a literature has developed investigating whether UI programs are ‘optimal’ in balancing their income replacement benefits against their work disincentives. See Karni (1999) for a review of the first 20 years of contributions. The questions addressed in this literature have been diverse: some papers focus on replacement rates (are they too high or too low?), some focus on the time path of benefits (should benefits be constant, rise or fall over the spell of unemployment?), some focus on the welfare effects of these programs (what is the deadweight loss associated with existing programs compared with optimal programs or no program at all?), and some focus on the potential duration of benefits (should benefits be offered for shorter or longer time periods?).
The diversity of questions posed and models used in this literature makes a brief summary difficult, but two generalizations seem appropriate. First, the prevailing view offered by these papers is that current programs are poorly designed and overly generous. For example, Layard et al. (1991) and Millard and Mortensen (1997) traced many of Europe’s dual problems of high unemployment and long average duration of unemployment to increases in UI program generosity. They suggested reducing the potential duration of UI benefits, discarding policies that impose employment-adjustment costs on firms, and instituting subsidies to offset recruiting and training costs incurred by firms. Second, the policy implications of the optimal UI literature are often tenuous because the assumptions underlying the models used are often very strong. For example, most of the studies that attempt to measure the welfare loss from current UI programs assume that workers are risk neutral, in which case no welfare gain can result from government-provided insurance.
A few papers in the optimal UI literature have quite different policy implications from those just mentioned. For example, Wang and Williamson (1996, 2002) and Davidson and Woodbury (1997, 2002) develop models based on the modern insurance literature, which finds that optimal insurance contracts take the form of deductible policies, where coverage is not provided for losses below a certain level (Rothschild and Stiglitz 1976; Shavell 1979; Raviv 1979). The implication is that an optimal UI system is characterized by a fairly low replacement rate (about 50 per cent, consistent with the existing system) and a long potential duration of benefits (at least one year, and certainly longer than the existing 26 weeks). Under such a system, workers who suffer large losses because they have a particularly difficult time finding re-employment would be better compensated than under the current system. They would nevertheless have an incentive to search for employment because the replacement rate is relatively low. The implications are consistent with O’Leary’s (1998) consumer-theoretic approach to examining UI benefit adequacy, which suggests that the current UI system over-compensates short spells of unemployment and under-compensates long spells.
D Financing unemployment insurance
States finance UI by collecting a payroll tax from employers. This UI payroll tax is essential to the federal role in the UI system. The Social Security Act provides for a payroll tax (the Federal Unemployment Tax), which is currently 6.2 per cent of the first $7000 of a worker’s earnings in a calendar year. However, employers in states with a federally approved UI program (that is, one that meets the broad guidelines stated in the Act) are credited 5.4 per cent and as a result pay only a 0.8 per cent Federal payroll tax. This is the incentive whereby the Federal government induced all the states to adopt a UI program (Blaustein 1993, chapter 6). Virtually all employers are required to pay the UI payroll tax. Exceptions exist for agricultural, domestic and very small employers (including the self-employed), but most employers are ‘liable’ and must pay the tax.
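The tax-and-credit mechanics just described reduce, for each worker, to a simple calculation:

```python
def federal_ui_tax(annual_earnings, state_program_approved=True):
    """Federal unemployment payroll tax per worker: 6.2% of the first
    $7000 of calendar-year earnings, reduced by the 5.4% credit for
    employers in states with a federally approved UI program."""
    taxable = min(annual_earnings, 7000.0)
    rate = 0.062 - (0.054 if state_program_approved else 0.0)
    return taxable * rate

# For a full-time worker, the net federal tax is 0.8% of $7000, about $56;
# without the credit it would be about $434 -- the gap that originally
# induced every state to adopt a UI program.
with_credit = federal_ui_tax(40000)
without_credit = federal_ui_tax(40000, state_program_approved=False)
```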
Like any tax, the UI payroll tax has two parts: a base and a rate. The federally required minimum tax base (or ‘taxable payroll’) is the first $7000 of a worker’s earnings in a calendar year, but most states have a higher tax base. A growing number of states – 18 in 2008 – automatically adjust their taxable wage base by specifying it as a percentage of the state’s average weekly wage. Most of these states have a wage base exceeding $20 000. In the majority of states that do not automatically adjust, the wage base rarely exceeds $10 000, and never exceeds $14 000 (US Department of Labor 2008b).
It follows that most states’ taxable payrolls are far below the average annual earnings of full-time workers, which are of the order of $40 000. Hence, the UI taxable wage base is quite narrow, and financing a given level of benefits requires higher tax rates than would a broader base. The taxable wage base has not always been so narrow. At the outset of the program in 1936, the wage base was the same as Social Security’s and covered about 93 per cent of earnings (Hamermesh 1977, p. 72). Only in the 1960s did the UI wage base start to erode significantly relative to payrolls. In 2007, about 13 per cent of total wages paid by taxable employers were covered (US Department of Labor 2008c).
The low taxable wage base creates an incentive for employers to prefer high-wage to low-wage workers and shifts the distribution of employment away from low-wage and toward high-wage workers (Hamermesh 1977, pp. 72–5). Similarly, an employer who employs a succession of, for example, four workers to fill a given job in a calendar year will pay more in UI payroll taxes than an employer who fills a similar job with just one worker. Accordingly, the low taxable wage base tends to work against the employment prospects of low-wage, high-turnover workers.
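The turnover penalty described above can be seen in a stylized state tax calculation (the $7000 base and 5.4 per cent rate are illustrative figures):

```python
def state_ui_tax(worker_annual_earnings, wage_base=7000.0, rate=0.054):
    """State UI payroll tax owed by an employer: the rate applied to each
    worker's earnings up to the taxable wage base."""
    return sum(min(earnings, wage_base) * rate
               for earnings in worker_annual_earnings)

# One worker filling a job all year: only the first $7000 is taxed.
one_worker = state_ui_tax([28000])
# Four successive workers filling the same job: all $28 000 falls under
# the wage base, so the employer pays four times as much in UI tax for
# the same total payroll.
four_workers = state_ui_tax([7000, 7000, 7000, 7000])
```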
In every state, each employer’s tax rate is ‘experience rated’, meaning the tax rate depends on the extent to which that employer has laid off workers who have claimed and received UI benefits in the past. To implement experience rating, states must ‘charge’ UI benefits received by a worker to the employer who is in some sense responsible for that worker’s unemployment. Most, but not all, benefits are charged to an employer and result in payments to the UI trust fund. Some benefits, though, are not charged to any employer (are ‘noncharged’) – those paid to workers who have quit voluntarily with good cause, dependants’ allowances, the federal share (50 per cent) of standby extended benefits and emergency extended benefits. Other benefits are charged to employers that have gone out of business, which makes it impossible to collect UI payroll taxes (these are known as ‘inactively charged’ benefits). Finally, some benefits are charged to an employer but are uncollectable because the employer is at the maximum UI payroll tax rate (such benefits are ‘ineffectively charged’). In this last case, further layoffs may be charged to the employer, but they do not result in higher tax rates or larger payments by the employer. In practice, experience rating applies only to employers who have been liable (paying UI payroll taxes) for at least two years. New employers pay a ‘standard rate’ until they have enough experience to be rated.
E Policy issues in financing UI
In every state, the UI payroll tax rate is capped at between 5.4 and 10 per cent, so employers with layoff experience do not face a higher tax rate once their reserve or benefit ratio reaches a certain level. Experience rating is incomplete: if the payroll tax were not capped, many high-layoff employers would face higher tax rates as a result of their layoff experience. Because it is capped, these employers pay less into the UI system than the workers they lay off draw.
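Why the cap blunts the incentive can be seen in a stylized reserve-ratio tax schedule (all parameters below are hypothetical, chosen only to illustrate the mechanics):

```python
def experience_rated_rate(reserve_ratio, min_rate=0.001, max_rate=0.054,
                          slope=0.5):
    """Stylized experience rating: the tax rate falls as the employer's
    reserve ratio (its trust-fund balance divided by its taxable payroll)
    rises, bounded below by a floor and above by the statutory cap."""
    rate = max_rate - slope * reserve_ratio
    return min(max_rate, max(min_rate, rate))

# A low-layoff employer with a healthy reserve pays a low rate:
low_layoff = experience_rated_rate(0.10)
# Once the cap binds, further layoffs no longer raise the rate -- the
# marginal incentive to avoid layoffs disappears:
capped_a = experience_rated_rate(-0.5)
capped_b = experience_rated_rate(-5.0)   # same rate despite far more layoffs
```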
The original rationale for experience rating the UI payroll tax was to create a financial incentive for employers to avoid layoffs and hence stabilize employment (Witte 1962). Indeed, several studies have estimated large impacts of experience rating on temporary layoffs (Topel 1984, 1985). For example, Card and Levine (1994) find that complete experience rating would reduce temporary layoffs by 50 per cent in a recession. That high-layoff employers evade the incentive to avoid layoffs once they reach the maximum tax rate has been a longstanding concern to policy makers and employers.
Another UI payroll tax issue of longstanding concern is that low-layoff employers tend to subsidize high-layoff employers through the UI system (Becker 1972a, 1972b; Deere 1991; Laurence 1993; Anderson and Meyer 1993a, 1993b). A study of UI cross-subsidies during 1985–95 in Missouri, Pennsylvania and Washington State found that benefit payments to laid-off construction workers were between 28 and 65 per cent greater than the payroll taxes paid by the construction industry. Overall, between 23 and 26 per cent of UI payroll taxes were shifted across employers. Under a tax regime with no experience rating (that is, a single tax rate paid by all employers in a state), cross-subsidies would have been even greater: between 38 and 46 per cent of payroll taxes (Woodbury 2007).
F Trust fund adequacy
Unlike Social Security, UI is not a pay-as-you-go system. Rather, each state places UI payroll taxes it collects in a trust fund from which benefits are paid. The intent is to ‘forward-fund’ UI so that, in a recession, funds required to pay benefits will be available and UI will serve as an automatic stabilizer.
The most useful measure of UI trust fund adequacy is the average high-cost multiple. This gives the number of years for which a state’s existing trust fund would be adequate if the state paid benefits at a rate equal to the average of the three highest-cost years in the last 20 years. Formally, it is the trust fund as a percentage of state taxable wages divided by the average ratio of benefits paid to taxable wages in the three highest-cost years in the last 20 years. Standards for trust fund adequacy have been elusive. The Advisory Council on Unemployment Compensation (1995) recommended that states maintain an average high-cost multiple of one. But Emsellem et al. (2002) consider states with a high-cost multiple of 0.75 to be adequately funded.
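The average high-cost multiple follows directly from this definition:

```python
def avg_high_cost_multiple(trust_fund, taxable_wages, benefit_ratio_history):
    """Average high-cost multiple: the trust fund as a percentage of
    taxable wages, divided by the mean of the three highest annual
    benefits-to-taxable-wages ratios (in per cent) over the last 20
    years."""
    reserve_ratio = 100.0 * trust_fund / taxable_wages
    three_highest = sorted(benefit_ratio_history, reverse=True)[:3]
    return reserve_ratio / (sum(three_highest) / 3.0)

# A state whose reserves equal one year of benefits at its average
# high-cost payout rate has a multiple of 1.0 -- the Advisory Council's
# recommended standard.  (Illustrative numbers.)
history = [2.0, 2.0, 2.0] + [1.0] * 17   # 20 years of benefit ratios (%)
multiple = avg_high_cost_multiple(20.0, 1000.0, history)   # 1.0
```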
Before 1975, the average high-cost multiple for the United States as a whole never fell below one. Since then it has been as high as one only in 1989 and 1990, although it hovered near 0.9 during 1995–2000 (US Department of Labor 2008a). The average high-cost multiple fell to 0.36 following the recession of 2001–02 and had recovered to 0.51 by 2006 (the most recent available year).
G Tradeoffs in the UI system
The most important objective of the US system of Unemployment Insurance is the provision of temporary, partial wage replacement … to involuntarily unemployed individuals who have demonstrated a prior attachment to the labor force. This support should help to meet the necessary expenses of these workers as they search for employment that takes advantage of their skills and experience.
The goal, then, has been to increase the stability of income and consumption of workers who have a history of labor force attachment.
A potentially conflicting view is that UI can and should serve as an essential component of an income transfer system that redistributes income to the working poor (Stettner et al. 2004; Peterson 2008). This view has been implicit in various policy discussions of UI since the growth of antipoverty programs in the 1960s. For example, Corson and Nicholson (1982) criticized one of the extended UI benefit programs as ‘target inefficient’ because it paid ‘substantial’ benefits to the nonpoor. Hamermesh (1982) replied that ‘this poverty-fighter’s view of UI … is quite inconsistent with the origins and goals of the UI program …’.
Interest in the potential for UI to serve as an antipoverty program (or as an important part of a larger antipoverty strategy) has been heightened by the reforms of the American welfare system adopted in 1996. As former welfare recipients have flowed into the labor market following those reforms, the effectiveness of UI has been measured increasingly by its ability to provide income support for a growing number of unemployed former welfare recipients who generally have earnings that are quite low. For example, the US General Accounting Office (2000, p. 7) noted that ‘few states have adjusted their UI programs to eliminate practices that may present difficulties to low-wage workers, particularly these new workers [former welfare recipients]’. Other research – for example, Gustafson and Levine (1998) and Kaye (2001) – has raised specific concerns about the extent to which nonmonetary eligibility criteria prevent former welfare recipients from obtaining UI. Clearly, the effectiveness of UI is being judged by its ability to provide income support to workers who have not traditionally been within its ambit.
4 Training programs and re-employment policies
Despite criticism and occasional relatively minor modifications, unemployment insurance is well developed, stable and widely accepted in all developed Western nations. In contrast, other programs to assist unemployed workers have been transient, changing as views of what might be effective have changed. This section begins with a brief discussion of the volatile history of re-employment policy in the United States. It then reviews evidence on the losses of dislocated workers. Finally, it considers the important advances that have been made in evaluating re-employment programs and briefly reviews what the evidence shows.
A A brief history
Re-employment policy in the United States can be dated to the passage in 1933 of the Wagner-Peyser Act, a little known but important act that established the United States Employment Service (ES). The ES is a public labor exchange, or employment agency, that takes job applications from unemployed workers, takes job orders from employers, and then attempts to match the unemployed workers with the job vacancies. Like UI, the ES is a federal-state system; that is, each state administers its own ES program, but the US Department of Labor funds and oversees the state programs (Balducchi et al. 1997).
The ES represents a ‘work-first’ approach to re-employment because it presumes job applicants are ‘job ready’ – that is, have the skills and training to be successful in one or another job (Fagnoni 2000). If a worker is unemployed, the solution is to find work.
Until the 1960s, federal policy emphasized job placement and assumed that workers were job ready. But in the 1960s, some labor economists and policy makers began to question the work-first approach. Their argument had two parts. First, many workers, especially blacks, had never received adequate schooling or training that could be expected to lead to a good job. Second, dislocated workers, although they had once qualified for a job, had skills that were dated or specific to the job they had lost. Could they reasonably be expected to compete for a new job without some updated skills? For both groups, work was not the solution to unemployment because the workers were not job ready. Rather, they needed training, or so it was argued.
As a result, Congress enacted two programs in 1962 that can be thought of as ‘second chance’ training programs. Such programs are intended to help two kinds of workers. ‘Disadvantaged workers’ are those who were poorly served by public education when they were young and/or never completed high school. ‘Dislocated workers’ are those who have lost, as a result of structural change, a job they held for many years.
The first large federally funded training program was the Manpower Development and Training Act (MDTA, 1962–73) which gave classroom and on-the-job training to disadvantaged workers. The second was Trade Adjustment Assistance (TAA), which started in 1962 and still exists today. TAA originally provided only income support – no training – to individuals who had lost their jobs as a result of international trade, but it now provides both income support and training grants to dislocated workers (Decker and Corson 1995). MDTA and TAA are significant because they moved away from the ‘work-first’ approach to re-employment.
In 1973, the Comprehensive Employment and Training Act (CETA, 1973–82) attempted to consolidate the various federal training programs that existed in 1973. It was a mix of programs for disadvantaged workers (Title I, which included classroom training, on-the-job training and work experience), programs to help dislocated workers (Title II) and public service employment programs which were essentially job creation programs (Title IV). CETA reaffirmed the federal emphasis on training – as opposed to simply placing workers in jobs – and added to it job creation, which embodied the idea that the government should serve as the employer of last resort.
By the late 1970s, CETA had gained a reputation as a wasteful program for two reasons. First, its training components were ineffective because they often trained workers for jobs that did not exist. Second, its job creation component was seen as creating useless or ‘make-work’ jobs rather than jobs engaging workers in producing useful goods and services. Congress responded by enacting the Job Training Partnership Act (JTPA, 1982–98).
JTPA scaled back CETA by eliminating public service employment (CETA’s job-creation component) and reducing income support for workers during training. It also refocused CETA in two ways. First, it shifted administration of federally funded training programs to the states. JTPA established a ‘Private Industry Council’, comprising local employers and training service providers, within each region of a state. These councils decided what kinds of training would be provided and tried to ensure that training would lead to jobs. Second, JTPA created ‘performance standards’ or measures of the success and effectiveness of training programs. In order to continue to be funded, a training program needed to meet the performance standards specified by the state. Although in principle a good idea, the performance standards adopted under JTPA were of questionable value (Heckman et al. 2002). For example, one performance standard required that a certain percentage of a training program’s participants be placed in a job. This created an incentive for training providers to select trainees who were likely to complete training and get a job – that is, to select the best applicants to the training program – a practice known as ‘creaming’. As a result, many selected workers were job ready, whereas people who might most need and benefit from training would not be admitted to the training program.
In the 1990s, Congress and the Clinton Administration made further efforts to consolidate and ‘rationalize’ employment and training policy by eliminating JTPA and replacing it with the Workforce Investment Act (WIA, 1998). WIA embodies two main changes in re-employment policy (Balducchi et al. 2004). First, it requires that states provide most federally funded employment and training services through a system of One-Stop Centers, which provide all re-employment services (or information about and referral to such services) at a single location. The intent of One-Stop Centers is to offer an attractive, logically organized office that directs any job seeker to information, assistance or programs needed to gain employment. Moreover, One-Stop Centers encourage coordination of services by collecting the day-to-day operations of various re-employment programs under a single manager.
Second, WIA replaces the JTPA programs for economically disadvantaged and dislocated workers with consolidated programs for adults, dislocated workers and youth. That is, WIA de-emphasizes the differences among the groups needing assistance. In particular, WIA provides three levels of services: (1) ‘Core’, including basic services such as job search assistance; (2) ‘Intensive’, including workshops, assessment and other services that require staff assistance; and (3) ‘Training’, for eligible workers. As part of this overhaul, the Private Industry Councils that existed under JTPA have been replaced with Workforce Investment Boards. This change is significant because Workforce Investment Boards have responsibility in principle for overseeing all re-employment services and government-funded training in their region, whereas Private Industry Councils were concerned mainly with providing training under JTPA.
WIA, then, has represented a move away from second-chance training and toward a work-first approach. It has consolidated federally funded training programs and attempted to bring about what has long been viewed as desirable – the centralization of information and other re-employment services in One-Stop Centers.
From this summary, it should be clear that re-employment policy in the US has seesawed between the work-first approach that started with the ES in 1933 and the train-first approach that started with MDTA and TAA in 1963. With the adoption of WIA, US policy has not abandoned second-chance training, although it has clearly returned to an emphasis on the work-first approach.
B Earnings losses of dislocated workers
Before turning to a discussion of re-employment program evaluation, it is important to understand the nature of the losses suffered by dislocated workers. A dislocated worker is a worker who loses a long-term job, usually due to structural change. Job losses in manufacturing industries like steel, automobiles and textiles have received media attention and the attention of politicians because the permanent job losses in those industries have been devastating to many workers and communities.
Is it possible to estimate the losses suffered by dislocated workers? Many researchers have tried and the line of work by Jacobson et al. (1993a, 1993b, 1993c) is perhaps the most convincing. Jacobson et al. examined the long-term effects of dislocation on the earnings of workers with at least six years of seniority who were permanently laid off from declining companies in western Pennsylvania between 1980 and 1986 (the period during which the basic steel industry in and around Pittsburgh essentially collapsed). To gauge what would have happened to the earnings of these workers if they had not been dislocated, Jacobson et al. developed a comparison group of stably employed workers in the same region. Their idea was to compare the actual earnings of the dislocated workers with the earnings they could have expected if they had not been dislocated.
The earnings paths of the two groups are shown in Figure 17.2. Earnings of the stably employed workers follow the path SS’. Earnings of the dislocated workers follow the path DD’. The figure illustrates four points. First, the gap between stably employed and dislocated workers is constant until one to two years before dislocation (region 1 in the figure). Second, in the two years before dislocation, the gap between the stably employed and the dislocated workers widens (region 2 in the figure). The dip in the earnings of workers who will soon lose their jobs was discovered by Ashenfelter (1978, 1979) and hence is called ‘Ashenfelter’s Dip’. The dip occurs because employers of soon-to-be-dislocated workers are in financial trouble, so they temporarily lay workers off and reduce workers’ hours before closing down altogether. Third, when dislocated workers lose their jobs, their earnings drop sharply for a time, then recover somewhat (region 3 in the figure). Fourth, after about a year, the earnings gap between stably employed and dislocated workers again becomes constant, but the gap is larger and remains larger than before dislocation (region 4 in the figure).
This general pattern implies that the losses suffered by dislocated workers are far larger than labor economists believed before Jacobson et al. did their research. Before the Jacobson et al. study, most economists thought that virtually all of the losses of dislocated workers occurred around the time of job loss (that is, in region 3 of Figure 17.2). But the Jacobson et al. study makes it clear that two additional sources of earnings loss are important as well. The first occurs during the period leading up to the permanent job loss (region 2 in the figure) when earnings fall from their previous level. The second is the long-term earnings loss suffered even after a dislocated worker gains re-employment (region 4 in the figure). Note again that the gap between the earnings of stably employed and dislocated workers is larger in region 4 than in region 1. This wider gap represents permanently lower earnings of the dislocated workers and suggests that dislocated workers suffer long-term earnings losses that they never recover. Jacobson et al. calculated that the total loss suffered by the average dislocated worker – the sum of the three sources of income loss – is about $100 000 in today’s dollars.
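The logic of this decomposition can be made concrete with a small numerical sketch. All of the figures below are hypothetical, chosen only to mimic the qualitative pattern in Figure 17.2; none come from the Jacobson et al. study. The key idea is that each period's loss is the widening of the gap between the two earnings paths beyond its constant region-1 baseline.

```python
# Hypothetical quarterly earnings (dollars) for a comparison group of
# stably employed workers and a group of dislocated workers.
# Region 1: quarters 0-1, region 2 (the dip): 2-3,
# region 3 (drop and partial recovery): 4-5, region 4 (long run): 6-7.
stable     = [7000, 7000, 7000, 7000, 7000, 7000, 7000, 7000]
dislocated = [6000, 6000, 5500, 5000, 2500, 3500, 4500, 4500]

# Constant gap in region 1, before any effect of dislocation appears
baseline_gap = stable[0] - dislocated[0]

# Per-quarter loss = widening of the gap beyond the region-1 baseline
losses = [(s - d) - baseline_gap for s, d in zip(stable, dislocated)]

dip_loss      = sum(losses[2:4])  # region 2: Ashenfelter's Dip
drop_loss     = sum(losses[4:6])  # region 3: sharp drop, partial recovery
long_run_loss = sum(losses[6:])   # region 4: permanent gap (and ongoing)

total = dip_loss + drop_loss + long_run_loss
print(dip_loss, drop_loss, long_run_loss, total)  # → 1500 6000 3000 10500
```

In this toy example the region-4 loss accrues every quarter after re-employment, which is why, over a working lifetime, the long-term component dominates the total, as Jacobson et al. found.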
Many earlier studies used a different approach to gauge the earnings losses of dislocated workers: they compared the earnings of dislocated workers at the time of job loss with their earnings after re-employment (for reviews of these studies, see Hamermesh 1989; Leigh 1989, 1990). Jacobson et al. show why this comparison may be misleading: dislocated workers suffer some earnings loss even before they are dislocated and further losses after they become re-employed. Also, earlier studies do not estimate the long-term losses of dislocated workers because they do not make use of a comparison group of stably employed workers.
Why are the earnings of dislocated workers permanently lower? The theory of human capital offers a plausible answer (Becker 1975). Workers who have six or more years of tenure in a given job typically have accumulated considerable human capital specific to that job. A significant part of their earnings may be a return to the specific skills in which they and their employers have invested. When the worker permanently loses his or her job, the value of those specific skills is destroyed – the same skills are useless in other jobs precisely because they were specific to the former employer. Even for a relatively young worker who is willing to invest further in general or specific skills, it may take years to accumulate human capital that will allow her to regain her earlier earnings. The permanent losses suffered by dislocated workers stem mainly from lost specific human capital.
To summarize, the earnings losses of dislocated workers come from three sources: (a) the earnings dip that occurs before dislocation, (b) the earnings drop at the time of dislocation and (c) the long-term earnings loss that occurs because dislocated workers’ earnings are permanently lower than they would have been without the dislocation. The last of these sources of earnings loss is in fact the largest of the three, probably because dislocation leads to the loss of specific human capital.
C How effective are training programs?
The modern evaluation of training programs and re-employment policy started with the contributions of Ashenfelter (1978, 1979) and Heckman (1979). Ashenfelter argued convincingly that, in evaluating training programs, it is essential to compare the post-training earnings and employment of trainees with the earnings and employment of a group of otherwise similar workers who did not receive training. The alternative – comparing the post-training earnings of workers with their pre-training earnings – is misleading because workers who get training are usually suffering a spell of unusually low earnings from which they would recover even without training. Heckman’s contribution was to recognize that individuals who enroll in and complete training may differ from those who do not – that is, they ‘self-select’ into training – and failing to take account of this self-selection may again give misleading findings about the effectiveness of training.
For all the attention paid to retraining dislocated workers and for all the federal- and state-sponsored dislocated worker programs that have been conducted, convincing evidence on the effectiveness of training for dislocated workers is virtually nonexistent. Only one training program for dislocated workers – the Texas Worker Adjustment Demonstration – has used randomization to assign workers to training or a control group, and evaluation of this program was hampered by small samples (Bloom 1990). However, two convincing studies of the effectiveness of training for disadvantaged workers have been performed, both using randomized trials: the National Supported Work Demonstration, which was conducted in the mid 1970s (Manpower Demonstration Research Corporation 1980; Couch 1992), and the National JTPA Evaluation conducted from the late 1980s through the mid 1990s (Orr et al. 1996; US General Accounting Office 1996). Because the two programs focused on similar groups of disadvantaged workers and obtained essentially similar findings, the discussion here summarizes only the JTPA evaluation.
Recall that Title IIA of JTPA provided specific employment and training services to four groups of disadvantaged workers: adult men, adult women, young out-of-school women and young out-of-school men. The National JTPA Evaluation was accomplished by randomly assigning individuals either to training or to a control group in 16 sites throughout the United States. The training and services given varied from group to group, but included mainly classroom training in occupational skills (often at a community college or vocational training center) and on-the-job training. The kinds of training services given to participants in the National JTPA evaluation varied greatly from site to site, as is characteristic of real-world training programs.
The results of the National JTPA evaluation are illustrated in Figure 17.3 (adult men and women) and Figure 17.4 (young males and females). The vertical axis of each figure shows annual earnings and the horizontal axis is ‘year relative to assignment’, with year 0 being the year in which individuals were assigned either to training (the treatment group) or to the control group. In each figure, the lines with squares and with crosses show the annual earnings of workers in the control groups, and the lines with diamonds and with triangles show the earnings of workers who were randomly assigned to training. For all four groups, the treatment and the control groups had essentially similar earnings in the three years leading up to the experiment, which suggests that randomization of workers was successful in all four cases.
Before the experiment, the earnings of both treatment and control men and women dropped (Figure 17.3). This is another manifestation of Ashenfelter’s Dip: adults who were eligible for JTPA training had fallen on hard times in the years before they applied for training. Ashenfelter’s Dip is important because a researcher who ignored it might be tempted to do a ‘before–after’ evaluation of training. For example, comparing the earnings of workers in the treatment group in the year of assignment (year zero, when earnings were about $4500) with their earnings two years later (year two, when earnings were about $7800) would lead to the conclusion that JTPA increased the earnings of adult men by about $3300 a year – an enormous ‘effect’.
Random assignment gives a different and more convincing answer for the impact of JTPA on the earnings of adult men. Comparing the difference between the earnings of the treatment and control groups in the years following the experiment suggests that two years after training that difference was about $500 and three years after training it was about $700 (Figure 17.3). The evidence does suggest that JTPA training improved the earnings of adult men, but the estimated effect – $500 to $700 a year – is substantially less than the effect suggested by a before–after comparison ($3300).
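The contrast between the two estimators can be sketched with a toy calculation using round numbers close to those read off Figure 17.3 (these are approximations for illustration, not the published estimates). The before–after estimator attributes the entire post-assignment recovery to training, while the experimental estimator nets out the recovery that the control group achieves without training.

```python
# Approximate annual earnings of adult men, read roughly off Figure 17.3
# (illustrative values only, not the study's published estimates).
treatment = {0: 4500, 2: 7800}  # year of assignment and two years later
control   = {0: 4500, 2: 7300}  # controls also recover, without training

# Naive before-after estimator: compares trainees with themselves,
# so the rebound from Ashenfelter's Dip looks like a training effect.
before_after = treatment[2] - treatment[0]

# Experimental estimator: treatment-control difference in the follow-up
# year, made credible by random assignment in year 0.
experimental = treatment[2] - control[2]

print(before_after, experimental)  # → 3300 500
# The gap between the two (2800 here) is recovery that would have
# happened anyway, i.e., the rebound from Ashenfelter's Dip.
```

The same arithmetic explains why ignoring the dip overstates program effects by roughly a factor of five or more in this setting.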
The earnings patterns of adult women look similar to those of adult men, although the amounts are lower. In years two, three and four following training, women in the treatment group had higher earnings than those in the control group by $400 to $700. Training again seems to have been successful in raising the earnings of adult women.
Figure 17.4 shows the earnings paths of young males and females in the JTPA Evaluation. Because these workers were young, Ashenfelter’s Dip does not show up in their earnings paths. Also, JTPA training does not appear to have had any effect on the earnings of young males or females. The results of the National JTPA Evaluation can be summarized simply. JTPA training was effective in raising the earnings of disadvantaged adult males and of AFDC women in the short term and the long term. Further, JTPA had no effect on the earnings of disadvantaged young males or females in either the short term or the long term.
Why did JTPA work for some groups and not for others? The groups for whom JTPA was not effective – young males and females – were disadvantaged. They had dropped out of high school and in some cases had been in trouble with the law. These individuals had not done well in school and probably are not well suited to training, at least until they gain some maturity and direction.
Why didn’t training have a greater impact for adult men and women than it did? The training investments made by JTPA were not very large – at most $3000 per participant. Compared with the investment American college students typically make, that is small indeed. Small investments generally yield small expected returns and government training programs would need to make much larger investments in training adults to obtain a larger effect on earnings, as LaLonde (1995) has argued.
D Intensive job search assistance
Over the last 25 years, several states and the US Department of Labor have conducted a variety of social experiments evaluating innovative ways of getting workers back to work. The experiments have focused on recipients of UI and stem from two main concerns: first, that UI recipients take longer than necessary to become re-employed (that is, UI provides an incentive to search less intensely rather than to search longer for a better job match); and second, that the UI system has been handling more and more dislocated workers, but the system is ill-suited to serve them.
The Labor Department has focused mainly on a policy known as intensive Job Search Assistance (JSA) for these workers (Corson et al. 1985, 1989; Johnson and Klepinger 1994; Klepinger et al. 2002). Compared with training programs, JSA is quite inexpensive. Whereas the training programs discussed in the last section cost between $3000 and $9000 per participant, JSA typically costs only about $1000 per participant. JSA provides four kinds of services to unemployed workers: job counseling and assessment, job-search workshops, job clubs and miscellaneous help in finding a job (including preparing a résumé and using job listings).
In all the JSA experiments, new UI claimants were randomly assigned to either a control group or a treatment group. All the JSA experiments examined the effectiveness of giving unemployed workers services that might shorten the spell of unemployment or reduce the likelihood of UI benefit exhaustion. However, each of the JSA experiments had other components and purposes as well. In particular, the JSA experiments enforced the requirement that UI claimants must search actively for work – known as the work-search test.
The JSA experiments are unanimous in showing that JSA reduces the duration of insured unemployment by roughly one-half week. But the experiments also offer interesting evidence on the reasons for this basic finding. In three of the experiments, UI claimants were subjected to increased enforcement of the work-search test without any job counseling, workshops or other aspects of JSA. For example, one treatment in the Maryland UI Work Search Demonstration (Klepinger et al. 2002) strengthened the work-search test by telling claimants their job-search contacts would be checked and verified. Another treatment required claimants to make more than the usual number of job contacts per week. Both policies shortened spells by more than half a week, suggesting the importance of enforcing the work-search test.
These experiments show that JSA programs do shorten unemployment spells. But why? To some extent, re-employment services may help workers find a job faster. But mainly, JSA appears to be more like a stick than a carrot. It raises the cost of remaining on UI by telling UI recipients, in effect, ‘report to the public labor exchange and show you are serious about getting a job or this is the last UI check you will get’. And rather than report, many UI recipients either find a job or stop claiming benefits. In the end, the JSA experiments probably say more about the effectiveness of enforcing the work-search test than about the effectiveness of JSA. From the standpoint of policy, the JSA experiments have been important because they led directly to adoption of the Worker Profiling and Reemployment Services program in 1996 (Reich 1997).
Source: Adapted from Joll et al. (1983).