Friday, March 23, 2018

Contingent Valuation and the Deepwater Horizon Spill

Economists are often queasy about the idea that preferences can be measured by surveys. It's easy for someone to say that they value organic fruits and vegetables, for example, but when they go to the grocery, how do they actually spend their money?

However, in some contexts, prices are not readily available. A common example is an oil spill, like the BP Deepwater Horizon Oil Spill in the Gulf of Mexico in 2010, or the Exxon Valdez oil spill in Alaska back in 1989. We know that such spills impose economic costs on those who use the waters directly, like the tourism and fishing industries. But is there some additional cost for "non-use" value? Can I put a personal value on protecting the environment in a place I have never visited, and am not likely to visit? There are various ways to measure these kinds of environmental damages. For example, one can include the costs of clean-up and remediation. But another method is to try to design a survey instrument that would get people to reveal the value that they place on this environmental damage, which is called a "contingent valuation" survey.

Such a survey has been completed for the BP Deepwater Horizon Oil spill. Richard C. Bishop and 19 co-authors provide a quick overview in "Putting a value on injuries to natural assets: The BP oil spill" (Science, April 21, 2017, pp. 253-254). For all the details, like the actual surveys used and how they were developed, you can go to the US Department of the Interior website (go to this link, and then type "total value" into the search box).

The challenge for a contingent valuation study is that it would obviously be foolish just to walk up to people and ask: "What's your estimate of the dollar value of the damage from the BP oil spill?" If the answers are going to be plausible, they need some factual background and some context. Also, they need to suggest, albeit hypothetically, that the person answering the survey would need to pay something directly toward the cost. As Bishop et al. write:
"The study interviewed a large random sample of American adults who were told about (i) the state of the Gulf before the 2010 accident; (ii) what caused the accident; (iii) injuries to Gulf natural resources due to the spill; (iv) a proposed program for preventing a similar accident in the future; and (v) how much their household would pay in extra taxes if the program were implemented. The program can be seen as insurance, at a specified cost, that is completely effective against a specific set of future, spill-related injuries, with respondents told that another spill will take place in the next 15 years. They were then asked to vote for or against the program, which would impose a one-time tax on their household. Each respondent was randomly assigned to one of five different tax amounts: $15, $65, $135, $265, and $435 ..." 
Developing and testing the survey instrument took several years. The survey was administered to a nationally-representative random sample of households by 150 trained interviewers. There were 3,646 respondents. They write: "Our results confirm that the survey findings are consistent with economic decisions and would support investing at least $17.2 billion to prevent such injuries in the future to the Gulf of Mexico’s natural resources."

One interesting feature of the survey design is that it was produced in two forms: a "smaller set of injuries" and a "larger set of injuries" version.
"To test for sensitivity to the scope of the injury, respondents were randomly assigned to different versions of the questionnaire, describing different sets of injuries and different tax amounts for the prevention program. The smaller set of injuries described the number of miles of oiled marshes, of dead birds, and of lost recreation trips that were known to have occurred early in the assessment process. The larger set included the injuries in the smaller set plus injuries to bottlenose dolphins, deep-water corals, snails, young fish, and young sea turtles that became known as later injury studies were completed  ..." 

Here's a sample of the survey results. The top panel looks at those who received the survey with the smaller set of injuries. It shows, across a range of tax amounts, how much taking steps to avoid the damage would personally (hypothetically) cost the person taking the survey. You can see that a majority were willing to pay $15, but willingness to pay to prevent the oil spill declined as the cost went up. Willingness to pay was higher for the larger set of injuries, but at least to my eye, not a whole lot larger.
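A referendum-style survey like this is often summarized with a nonparametric lower bound on mean willingness to pay. Here is a minimal Python sketch of that calculation. The five bid amounts are the survey's actual tax levels, but the acceptance shares are hypothetical placeholders, not the study's results:

```python
# Nonparametric (Turnbull-style) lower bound on mean willingness to pay (WTP)
# from referendum-style responses. The bid amounts are the survey's five
# one-time tax levels; the "yes" shares are HYPOTHETICAL placeholders, not
# the study's actual results.
bids = [15, 65, 135, 265, 435]
share_yes = [0.60, 0.50, 0.40, 0.30, 0.20]  # fraction voting "for" at each bid

# Treat the yes-shares as a survival function for WTP, assume no one would
# pay more than the highest bid, and value each interval at its lower end.
survival = share_yes + [0.0]
wtp_lower_bound = sum(bid * (survival[i] - survival[i + 1])
                      for i, bid in enumerate(bids))
print(round(wtp_lower_bound, 2))  # 135.0 dollars per household with these shares
```

Multiplying a per-household estimate like this by the number of US households is what produces aggregate figures on the order of the study's $17.2 billion.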

It should be self-evident why the contingent valuation approach is controversial. Does the careful and extensive process of constructing and carrying out the survey lead to more accurate results? Or does it in some ways shape or predetermine the results? The authors seem to take some comfort in the fact that their estimate of $17.2 billion is roughly the same as the value of the Consent Decree signed in April 2016, which called for $20.8 billion in total payments. But is it possible that the survey design was tilted toward getting an answer similar to what was likely to emerge from the legal process? And if the legal process is getting about the same result, then the contingent valuation survey method is perhaps a useful exercise--but not really necessary, either.

I'll leave it to readers to weigh the issue for themselves. For those interested in digging deeper into the contingent valuation debates, some useful starting points might be:

The Fall 2012 issue of the Journal of Economic Perspectives had a three-paper symposium on contingent valuation with a range of views.

H. Spencer Banzhaf has just published "Constructing Markets: Environmental Economics and the Contingent Valuation Controversy," which appears in the Annual Supplement 2017 issue of the History of Political Economy (pp. 213-239). He provides a thoughtful overview of the origins and use of contingent valuation methods from the early 1960s ("estimated the economic value of outdoor recreation in the Maine woods") up to the Exxon Valdez spill in 1989.

Harro Maas and Andrej Svorenčík tell the story of how Exxon organized a group of researchers in opposition to contingent valuation methods in the aftermath of the 1989 oil spill in "'Fraught with Controversy': Organizing Expertise against Contingent Valuation," appearing in the History of Political Economy earlier in 2017 (49:2, pp. 315-345).

Also, Daniel McFadden and Kenneth Train edited a 2017 book called Contingent Valuation of Environmental Goods, with 11 chapters on various aspects of how to do and think about contingent valuation studies. Thanks to Edward Elgar Publishing, individual chapters can be freely downloaded.

Thursday, March 22, 2018

Opioids: Brought to You by the Medical Care Industry

There's a lot of talk about the opioid crisis, but I'm not confident that most people have internalized just how awful it is. To set the stage, here are a couple of figures from the 2018 Economic Report of the President.

The dramatic rise in overdose deaths, from about 7,000-8,000 per year in the late 1990s to more than 40,000 in 2016, is of course just one reflection of a social problem that includes much more than deaths.

However, the nature of the opioid crisis is shifting. The rise in overdose deaths from 2000 up to about 2010 was mainly due to prescription drugs. The more recent rise in overdose deaths is due to heroin and synthetic opioids like fentanyl.

It seems clear that the roots of the current opioid crisis are in prescribing behavior: to be blunt about it, US health care professionals made the decisions that created this situation. The Centers for Disease Control and Prevention notes on its website: "Sales of prescription opioids in the U.S. nearly quadrupled from 1999 to 2014, but there has not been an overall change in the amount of pain Americans report. During this time period, prescription opioid overdose deaths increased similarly."

The CDC also offers a striking chart showing differences in opioid prescriptions across states. Again from the website: "In 2012, health care providers in the highest-prescribing state wrote almost 3 times as many opioid prescriptions per person as those in the lowest prescribing state. Health issues that cause people pain do not vary much from place to place, and do not explain this variability in prescribing."
Some states have more opioid prescriptions per person than others. A color-coded CDC map shows the number of opioid prescriptions per 100 people in each of the fifty states plus the District of Columbia in 2012, by quartile:
52-71: HI, CA, NY, MN, NJ, AK, SD, VT, IL, WY, MA, CO
72-82.1: NH, CT, FL, IA, NM, TX, MD, ND, WI, WA, VA, NE, MT
82.2-95: AZ, ME, ID, DC, UT, PA, OR, RI, GA, DE, KS, NV, MO
96-143: NC, OH, SC, MI, IN, AR, LA, MS, OK, KY, WV, TN, AL
Data from IMS, National Prescription Audit (NPA), 2012.
But although the roots of the opioid crisis lie in this rise in prescriptions, the problem of opioid abuse itself is more complex. What seems to have happened in many cases is that opioids were prescribed so freely that there was a ready supply for friends and family, and to sell. Here's one more chart from the CDC, this one showing where those who abuse opioids get their drugs. Three of the categories are: given by a friend or relative for free; stolen from a friend or relative; and bought from a friend or relative.
Source of Opioid Pain Reliever Most Recently Used, by Frequency of Past-Year Nonmedical Use
For example, a study published in JAMA Surgery in November 2017 found that among patients who were prescribed opioids for pain relief after surgery, 67-92% ended up not using their full prescription.

This narrative of how the medical profession fueled the opioid crisis has gotten some pushback from doctors. For example, Nabarun Dasgupta, Leo Beletsky, and Daniel Ciccarone wrote "The Opioid Crisis: No Easy Fix to Its Social and Economic Determinants" in the February 2018 issue of the American Journal of Public Health (pp. 182-186). After briskly acknowledging the evidence, the paper veers into "the urgency of integrating clinical care with efforts to improve patients’ structural environment. Training health care providers in “structural competency” is promising, as we scale up partnerships that begin to address upstream structural factors such as economic opportunity, social cohesion, racial disadvantage, and life satisfaction. These do not typically figure into the mandate of health care but are fundamental to public health. As with previous drug crises and the HIV epidemic, root causes are social and structural and are intertwined with genetic, behavioral, and individual factors. It is our duty to lend credence to these root causes and to advocate social change."

Frankly, that kind of essay seems to me an attempt to obscure the fact that the health care profession made extraordinarily poor decisions. We had root causes back in 1999. We have root causes now. It isn't the root causes that brought the opioid crisis down on us.

As another example, Sally Satel contributed an essay on "The Myth of What’s Driving the Opioid Crisis: Doctor-prescribed painkillers are not the biggest threat," to Politico (February 21, 2018). She makes a number of reasonable points. The current rise in opioid deaths is being driven by heroin and fentanyl, not prescription opioids. Only a very small percentage of those who are prescribed opioids become addicts, and many of those had previous addiction problems.

As Satel readily acknowledges:
In turn, millions of unused pills end up being scavenged from medicine chests, sold or given away by patients themselves, accumulated by dealers and then sold to new users for about $1 per milligram. As more prescribed pills are diverted, opportunities arise for nonpatients to obtain them, abuse them, get addicted to them and die. According to SAMHSA, among people who misused prescription pain relievers in 2013 and 2014, about half said that they obtained those pain relievers from a friend or relative, while only 22 percent said they received the drugs from their doctor. The rest either stole or bought pills from someone they knew, bought from a dealer or “doctor-shopped” (i.e., obtained multiple prescriptions from multiple doctors). So diversion is a serious problem, and most people who abuse or become addicted to opioid pain relievers are not the unwitting pain patients to whom they were prescribed.
But her argument is that even though it was true 5-10 years ago that three-quarters of the heroin addicts showing up at treatment centers said they had got their start using prescription opioids, more recent evidence is that addicts are starting with heroin and fentanyl directly. Ultimately, Satel writes:
What we need is a demand-side policy. Interventions that seek to reduce the desire to use drugs, be they painkillers or illicit opioids, deserve vastly more political will and federal funding than they have received. Two of the most necessary steps, in my view, are making better use of anti-addiction medications and building a better addiction treatment infrastructure.
This specific recommendation makes practical sense, and it sure beats a ritual invocation of "root causes," but I confess it still rubs me the wrong way. We didn't have these demand-side interventions back in 1999, either, but the number of drug overdoses was much lower. Sure, the nature of the opioid crisis has shifted in recent years. But prescription opioids are still being prescribed at triple the level of 1999. And given that the medical profession lit the flame of the current opioid crisis, it seems evasive to seek a reduced level of blame by pointing out that the wildfire has now spread to other opioids.

For a list of possible policy steps, one starting point is the President's Commission on Combating Drug Addiction and the Opioid Crisis, which published its report in November 2017. The 56 recommendations make heavy use of terms like "collaborate," "model statutes," "accountability," "model training program," "best practices," "a data-sharing hub," "community-based stakeholders," "expressly target Drug Trafficking Organizations," "national outreach plan," "incorporate quality measures," "the adoption of process, outcome, and prognostic measures of treatment services," "prioritize addiction treatment knowledge across all health disciplines," "telemedicine," "utilizing comprehensive family centered approaches," "a comprehensive review of existing research programs," "a fast-track review process for any new evidence-based technology," etc. etc. There are probably some good suggestions embedded here, like fossils sunk deeply into a hillside. Hope someone can disinter them.

Tuesday, March 20, 2018

The Distribution and Redistribution of US Income

The Congressional Budget Office has published the latest version of its occasional report on "The Distribution of Household Income, 2014" (March 2018). It's an OK place to start for a fact-based discussion of the subject. Here is one figure in particular that caught my eye.

The vertical axis of the figure is a Gini coefficient, which is a common way of summarizing the extent of inequality in a single number. A coefficient of 1 would mean that one person owned everything. A coefficient of zero would mean complete equality of incomes.
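For concreteness, here is a minimal Python sketch of the Gini coefficient computed straight from its definition: half the mean absolute difference between all pairs of incomes, divided by the mean. The O(n²) double loop is for clarity, not efficiency:

```python
def gini(incomes):
    """Gini coefficient: sum of |xi - xj| over all ordered pairs,
    divided by 2 * n^2 * mean income."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(x - y) for x in incomes for y in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))    # one person has everything -> 0.75
                               # (approaches 1 as the population grows)
```

With a finite population of n people, "one person owns everything" yields (n-1)/n rather than exactly 1, which is why the four-person example prints 0.75.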

In this figure, the top line shows the Gini coefficient based on market income, rising over time.

The green line shows the Gini coefficient when social insurance benefits are included: Social Security, the value of Medicare benefits, unemployment insurance, and workers' compensation. Inequality is lower with such benefits taken into account, but still rising. It's worth remembering that almost all of this change is due to Social Security and Medicare, which is to say that it is a reduction in inequality because of benefits aimed at the elderly.

The dashed line then adds a reduction in inequality due to means-tested transfers. As the report notes, the largest of these programs are "Medicaid and the Children’s Health Insurance Program (measured as the average cost to the government of providing those benefits); the Supplemental Nutrition Assistance Program (formerly known as the Food Stamp program); and Supplemental Security Income." What many people think of as "welfare," which used to be called Aid to Families with Dependent Children (AFDC) but for some years now has been called Temporary Assistance to Needy Families (TANF), is included here, but it's smaller than the programs just named. 

Finally, the bottom purple line also includes the reduction in inequality due to federal taxes, which here include not just income taxes, but also payroll taxes, corporate taxes, and excise taxes.

A few thoughts: 

1) As the figure shows, the reduction in inequality from programs aimed at the elderly--Social Security and Medicare--is about as large as the total reduction in inequality from means-tested spending and federal taxes combined.

2) Moreover, a large share of the reduction in inequality shown in this figure is a result of "in-kind" programs that do not put any cash in the pockets of low-income people. This is true of the health care programs, like Medicare, Medicaid, and the Children's Health Insurance Program, as well as of the food stamp program. These programs do benefit people by covering a share of health care costs or helping buy food, but they don't help to pay for other costs like the rent, heat, or electricity.

3) Contrary to popular belief, federal taxes do help to reduce the level of inequality. This figure shows the average tax rate paid by those in different income groups. The calculation includes all federal taxes: income, payroll, corporate, and excise. It is the average amount paid out of total income, which includes both market income and Social Security benefits. 

4) Finally, to put some dollar values on the Gini coefficient numbers, here is the average income for each of these groups in 2014. (Remember, this includes both cash and in-kind payments from the government, and all the different federal taxes.)
Figure 8. Average Income After Transfers and Taxes, by Income Group, 2014 (dollars)
Lowest Quintile 31,100
Second Quintile 44,500
Middle Quintile 62,300
Fourth Quintile 87,700
Highest Quintile 207,300
81st to 90th Percentiles 120,400
91st to 95th Percentiles 159,100
96th to 99th Percentiles 251,500
Top 1 Percent 1,178,600
Source: Congressional Budget Office.
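As a rough check on how these averages map into Gini numbers, one can compute a grouped Gini from the five quintile means. This sketch assumes everyone in a quintile earns that quintile's average, so it is a lower bound on the true coefficient (it ignores all within-quintile inequality, especially the top 1 percent):

```python
# Grouped Gini (lower bound) from the CBO quintile averages above.
# Assumes incomes are equal within each quintile, which understates the
# true coefficient.
quintile_means = [31100, 44500, 62300, 87700, 207300]  # dollars, 2014

total = sum(quintile_means)
pop_share = 1 / len(quintile_means)  # each quintile is 20% of households
gini, cum_prev = 1.0, 0.0
for m in quintile_means:  # must be sorted in ascending order
    cum = cum_prev + m / total            # cumulative income share (Lorenz curve)
    gini -= pop_share * (cum_prev + cum)  # trapezoid rule: G = 1 - sum p*(C[i-1]+C[i])
    cum_prev = cum

print(round(gini, 2))  # about 0.37
```

The result comes in below the post-tax-and-transfer Gini shown in the CBO figure, as a grouped lower bound should: the long upper tail inside the top quintile is what pushes the true coefficient higher.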
(I'm a long-standing fan of CBO reports. But in the shade of this closing parenthesis, I'll add in passing that the format of this report has changed, and I think it's a change for the worse. Previous versions had more tables, where you could run your eye down columns and across rows to see patterns. This version is nearly all figures and bar charts. It's quite possible that I'm more in favor of seeing underlying numbers and tables than the average reader. And it's true that you can go to the CBO website and see the numbers behind each figure. But in this version of the report, it's harder (for me) to see some of the patterns that were compactly summarized in a few tables in previous reports, but are now spread out over figures and bar graphs on different pages.)

Monday, March 19, 2018

What if Country Bonds Were Linked to GDP Growth?

What if countries could have some built-in flexibility in repaying their debts: specifically, what if the repayment of the debt was linked to whether the domestic economy was growing? Thus, the burden of debt payments would fall in a recession, which is when government sees tax revenues fall and social expenditures rise. Imagine, for example, how the situation of Greece with government debt would have been different if the country's lousy economic performance had automatically restructured its debt burden in a way that reduced current payments. Of course, the tradeoff is that when the economy is going well, debt payments are higher--but presumably also easier to bear.
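To make the mechanism concrete, here is a stylized sketch of how a GDP-linked coupon might compare with a fixed coupon over a boom-and-bust growth path. The linear indexation rule, the sensitivity parameter, and the growth numbers are all illustrative assumptions, not terms from any actual instrument:

```python
# Stylized comparison of fixed vs. GDP-linked debt service. All numbers and
# the linear indexation rule are illustrative assumptions, not real contract
# terms.
principal = 100.0
fixed_rate = 0.05      # conventional bond: 5% coupon no matter what
base_rate = 0.05       # GDP-linked bond: 5% when growth is on trend
sensitivity = 1.0      # coupon moves one-for-one with the growth gap
trend_growth = 0.02    # baseline growth assumed at issuance

for year, growth in enumerate([0.03, 0.02, -0.02, 0.01, 0.04], start=1):
    # Coupon rises above base in booms, falls (floored at zero) in recessions.
    linked_rate = max(0.0, base_rate + sensitivity * (growth - trend_growth))
    print(f"year {year}: growth {growth:+.0%}  "
          f"fixed coupon {principal * fixed_rate:.2f}  "
          f"linked coupon {principal * linked_rate:.2f}")
```

In the recession year the linked coupon falls to a fifth of the fixed one, which is the automatic-stabilizer property discussed below; the price is higher payments in the good years.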

There have been some experiments along these lines in recent decades, but the idea is now gaining substantial interest. James Benford, Jonathan D. Ostry, and Robert Shiller have edited a 14-paper collection, Sovereign GDP-Linked Bonds: Rationale and Design (March 2018, Centre for Economic Policy Research, available with free registration here).

For a taste of the arguments, here are a few thoughts from the opening essay: "Overcoming the obstacles to adoption of GDP-linked debt," by Eduardo Borensztein, Maurice Obstfeld, and Jonathan D. Ostry.  They provide an overview of issues like: Would borrowers have to pay higher interest rates for GDP-linked borrowing? Or would the reduced risk of default counterbalance other risks? What measure of GDP would be used as part of such a debt contract? They write:
"Elevated sovereign debt levels have become a cause for concern for countries across the world. From 2007 to 2016, gross debt levels shot up in advanced economies – from 24 to 89% of GDP in Ireland, from 35 to 99% of GDP in Spain, and from 68 to 128% of GDP in Portugal, for example. The increase was generally more moderate in emerging economies, from 36 to 47% of GDP on average, but the upward trend continues. ...

"GDP-linked bonds tie the value of debt service to the evolution of GDP and thus keep it better aligned with the overall health of the economy. As public sector revenues are closely related to economic performance, linking debt service to economic growth acts as an automatic stabiliser for debt sustainability. ... While most efforts to reform the international financial architecture over the past 15 years have aimed at facilitating defaults, for example through a sovereign debt restructuring framework (SDRM), the design of a sovereign debt structure that is less prone in the first place to defaults and their associated costs would be a more straightforward policy initiative. GDP-linked debt is an attractive instrument for this purpose because it can ensure that debt stays in step with the growth of the economy in the long run and can create fiscal space for countercyclical policies during recessions. ...
"The first lesson is to ensure that the payout structure of the instrument reflects the state of the economy and is free from complexities or delays that can make payments stray from their link to the economic situation. To date, GDP-linked debt has been issued primarily in the context of debt restructuring operations, from the Brady bond exchanges that began in 1989 to the more recent cases of Greece and Ukraine. ...  This feature, however, gave rise to structures that were not ideal from the point of view of debt risk management. For example, some specifications provided for large payments if GDP crossed certain arbitrary thresholds or were a function of the distance to GDP from those thresholds. In addition, some payout formulas were sensitive to the exchange rate, failed to take inflation into account, or were affected by revisions of population or national account statistics. All these mechanisms resulted in payments that were  disconnected from the business cycle and the state of public finances, detracting from the value of these GDP-linked instruments for risk management (see Borensztein 2016).
"The second lesson is that the specification of the payout formula can strengthen the integrity of the instruments. GDP statistics are supplied by the sovereign, and there is no realistic alternative to this arrangement. This fact is often held up as an obstacle to wide market acceptance of the instruments. However, the misgivings seem to have been exaggerated, as under-reporting of GDP growth is not a politically attractive idea for a policymaker whose success will be judged on the strength of economic performance. ... 
"[T]he main source of reluctance regarding the use of GDP-linked debt, or insurance instruments more generally, may not stem from markets but from policymakers. Politicians tend to have relatively short horizons, and would not find debt instruments attractive that offer insurance benefits in the medium to long run but are costlier in the short run, as they include an insurance premium driven by the domestic economy’s correlation with the global business cycle. In addition, if the instruments are not well understood, they may be perceived as a bad choice if the economy does well for some time. The value of insurance may come to be appreciated only years later, when the country hits a slowdown or a recession, but by then the politician may be out of office. While this problem is not ever likely to go away completely, multilateral institutions might be able to help by providing studies on the desirability of instruments for managing country risk, and how to support their market development, in analogy to work done earlier in the millennium promoting emerging markets’ domestic-currency sovereign debt markets."
Back in 2015, the Ad Hoc London Term Sheet Working Group decided to produce a hypothetical model of how a specific contract for a GDP-linked government bond might work, with the idea that the framework could then be adapted and applied more broadly. This volume has a short and readable overview of the results by two members of the working group, in "A Term Sheet for GDP-linked bonds," by Yannis Manuelides and Peter Crossan. I'll just add that in the introduction to the book, Robert Shiller characterizes the London Term Sheet approach in this way:
"The kind of index-linked bond described in the London Term Sheet in this volume is close to a conventional bond, in that it has a fixed maturity date and a balloon payment at the end. The complexities described in the Term-Sheet are all about inevitable details and questions, such as how the coupon payments should be calculated for a GDP-linked bond that is issued on a specific date within the quarter, when the GDP data are issued only quarterly. The term sheet is focused on a conceptually simple concept for a GDP-linked  bond, as it should be. It includes, as a special case, the even simpler concept – advocated recently by me and my Canadian colleague Mark Kamstra – of a perpetual GDP-linked bond, if one sets the time to maturity to infinity. Perpetual GDP-linked bonds are an analogue of shares in corporations, but with GDP replacing corporate earnings as a source of dividends. However, it seems there are obstacles to perpetual bonds and these obstacles might slow the acceptance of GDP-linkage. The term-sheet here gets the job done with finite maturity, shows how a GDP-linkage can be done in a direct and simple way, and should readily be seen as appealing.
"The London Term Sheet highlighted in this volume describes a bond which is simple and attractive, and the chapters in this volume that spell out other considerations and details of implementation, have the potential to reduce the human impact of risks of economic crisis, both real crises caused by changes in technology and environment, and events better described as financial crises. The time has come for sovereign GDP-linked bonds. With this volume they are ready to go."

Friday, March 16, 2018

An NCAA Financial Digression During March Madness

I'm an occasional part of the audience for college sports, both the big-time televised events like basketball's March Madness and college football bowl games, as well as sometimes going to baseball and women's volleyball and softball games here at the local University of Minnesota. I enjoy the athletes and the competition, but I try not to kid myself about the financial side.

Big-time colleges and universities do receive substantial sports-related revenues. But the typical school has sports-related expenses that eat up all of that revenue and more besides. For data, a useful starting point is the annual NCAA Research report called "Revenues and Expenses, 2004-2016," prepared by Daniel Fulks. This issue was released in 2017; the 2018 version will presumably be out in a few months.

For the uninitiated, some terminology may be useful here. The focus here is on Division I athletics, which is made up of about 350 schools that tend to have large student attendance, large participation in intercollegiate athletics, and lots of scholarships. Division I is then divided into three groups. The Football Bowl Subdivision comprises the most prominent schools, whose football teams participate in bowl games at the end of the season. In the FBS group, Alabama beat Georgia 26-23 for the championship in January. The Football Championship Subdivision comprises medium-level football programs. Last season, North Dakota State beat James Madison 17-13 in the championship game at this level. And the Division I schools without football programs include many well-known universities that have scholarship athletes and prominent programs in other sports: Gonzaga and Marquette are two examples.

Since 2014, the Football Bowl Subdivision is further divided into two groups, the Autonomy Group and the Non-Autonomy Group. The Autonomy Group is the 65 schools most identified with big-time athletics. They are in the "Power Five" conferences: the Atlantic Coast Conference, Big Ten, Big 12, Pac-12, and Southeastern Conference. Under the 2014 agreement, they have autonomy to alter some rules for the group as a whole: for example, this group of schools offers scholarships that cover the "full cost" of attending the university, which pays the athletes a little more, and coaches are no longer (officially) allowed to take a scholarship away because a player isn't performing as hoped. The Non-Autonomy schools are allowed to follow these rule changes, but are not required to do so.

With this in mind, here are some facts from the NCAA report about the big-time Football Bowl Subdivision schools.
Net Generated Revenues. The median negative net generated revenue for the AG is $3,600,000 (i.e., the median loss for a program in the AG), which must be supplemented by the institution; for the NA is $19,900,000; and for all FBS is $14,400,000. ...
Financial Haves and Have-nots. A total of 24 programs in the AG showed positive net generated revenues (profits), with a median of $10,000,000, while the remaining 41 of the AG lost a median of $10,000,000; the 64 NA programs lost a median of $20,000,000; the total FBS loss is a median of $18,000,000. Net losses for women's programs were $14,000,000 for AG, $6,500,000 for NA, and $9,000,000 for FBS.
For the Football Championship Subdivision schools, the magnitude of the losses is smaller, but the pattern remains the same:
Net Generated Revenues. The result is a median net loss for the subdivision of $12,550,000; men's programs = $5,022,000 and women's programs = $4,089,000. These medians are up only slightly from 2015. ...
Losses per Sport: Highest losses incurred were in gymnastics and basketball for women's programs and football and basketball for the men.
And for the non-football Division I schools, where the big-time revenue sport is usually basketball, the pattern of losses continues:
Median Losses. The median net loss for the 95 schools in this subdivision was $12,595,000 for the 2016 reporting year, compared with $11,764,000 in 2015, and $5,367,000 in the 2004 base year. ... 
Programmatic Results. Five men's basketball programs reported positive net generated revenues, with a median of $1,742,000, while the remaining 90 schools reported a median negative net generated revenue of $1,573,000. The median loss for women's basketball was $1,415,000. These losses are up slightly from 2015 and more than double from 2004.

There's an ongoing dispute about whether big-time colleges and universities should pay their players. When I listen to sports-talk radio, a usual comment is along these lines: "These college athletes are making millions of dollars for their institutions. They deserve to be paid, and more than just a scholarship and some meal money." I'm sympathetic. But the economist in me always rebels against the assumption that there is a Big Rock Candy Mountain made of money just waiting to be handed out.  I want to know where the money is going to come from, and how the wages will be determined.

The median school is losing money on athletics. I know of no evidence that donations from alumni are sufficient to counterbalance these losses. So if the payment for athletes is going to come from schools, there will be a tradeoff. Should costs be cut by eliminating sports that don't generate revenue (and the scholarships for those athletes)? The NCAA Report notes that salaries are about one-third of total expenses for college sports programs, and maybe some of that money could be redistributed to student-athletes. It seems implausible that the median school is going to substantially increase its subsidies to the athletics department.

What if the money for paying students came from outside sponsors? Some decades ago, top college athletes sometimes were compensated via make-work or no-show jobs. It would be interesting to observe how a single rich alum, or a group of local businesses, could collaborate with a coaching staff to raise money for paying athletes--and what the athletes might need to endorse in return.

It's easy to say that student-athletes should get "more," but it's not obvious that they would or should all get the same. For example, would all student-athletes get the same pay, regardless of revenue generated by their sport? Even within a single sport, would the star players get the same pay as the backups? Would the amount of pay be the same for first-years and seniors? Would the pay be adjusted year-to-year, depending on athletic performance? Would players get bonuses for championships or big wins?

I don't have a clear answer to the economic issues here, and so I will now turn off this portion of my brain and return to watching the games in peace. For those who want more, Allen R. Sanderson and John J. Siegfried wrote a thoughtful article, "The Case for Paying College Athletes," which appeared in the Winter 2015 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

Thursday, March 15, 2018

The Skeptical View in Favor of an Antitrust Push

Is the US economy as a whole experiencing notably less competition? Of course, pointing to a few industries where the level of competition seems to have declined (like airlines or banking) does not prove that competition as a whole has declined. In his essay, "Antitrust in a Time of Populism," Carl Shapiro offers a skeptical view on whether overall US competition has declined in a meaningful way, but combines this critique with an argument for the ways in which antitrust enforcement should be sharpened. The essay is forthcoming in the International Journal of Industrial Organization, which posted a pre-press version in late February. A non-typeset version is available at Shapiro's website.

(Full disclosure: Shapiro was my boss for a time in the late 1980s and into the 1990s as a Co-editor and then Editor of the Journal of Economic Perspectives, where I have labored in the fields as Managing Editor since 1987.)

Shapiro points to a wide array of articles and reports from prominent journalistic outlets and think tanks that claim that the US is experiencing a wave of anti-competitive behavior. He writes:
"Until quite recently, few were claiming that there has been a substantial and widespread decline in competition in the United States since 1980. And even fewer were suggesting that such a decline in competition was a major cause of the increased inequality in the United States in recent decades, or the decline in productivity growth observed over the past 20 years. Yet, somehow, over the past two years, the notion that there has been a substantial and widespread decline in competition throughout the American economy has taken root in the popular press. In some circles, this is now the conventional wisdom, the starting point for policy analysis rather than a bold hypothesis that needs to be tested. ...
"I would like to state clearly and categorically that I am looking here for systematic and widespread evidence of significant increases in concentration in well-defined markets in the United States. Nothing in this section should be taken as questioning or contradicting separate claims regarding changes in concentration in specific markets or sectors, including some markets for airline service, financial services, health care, telecommunications, and information technology. In a number of these sectors, we have far more detailed evidence of increases in concentration and/or declines in competition."
Shapiro makes a number of points about competition in markets. For example, imagine that national restaurant chains are better-positioned to take advantage of information technology and economies of scale than local producers. As a result, national restaurant chains expand and locally-owned eateries decline. A national measure of concentration will show that the big firms have a larger share of the market. But focusing purely on the competition issues, local diners may have essentially the same number of choices that they had before.

A number of the overall measures of the growth of larger firms don't show much of a rise. As one example, Shapiro points to an article in the Economist magazine which divided the US economy into 893 industries, and found that the share of the four largest firms in each industry had on average risen from 26% to 32%. Set aside for a moment the issues of whether this is national or local, or whether it takes international competition into account. Most of those who study competition would say that a market where the four largest firms combine to have either 26% or 32% of the market is still pretty competitive. For example, say the top four firms each have 8% of the market. Then every remaining firm has at most 8%, which means this market must have at least a dozen competitors.
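That lower-bound arithmetic can be sketched in a few lines. The function name and the assumption that the top four firms split the concentration ratio evenly are mine, for illustration only:

```python
import math

def min_firm_count(cr4: float) -> int:
    """Lower bound on the number of firms in a market with a four-firm
    concentration ratio of cr4, assuming the top four split it evenly
    and no other firm is larger than any of them."""
    top_share = cr4 / 4            # share of each of the top four firms
    remainder = 1.0 - cr4          # market share left for all other firms
    # Each remaining firm holds at most top_share, so covering the
    # remainder takes at least ceil(remainder / top_share) more firms.
    return 4 + math.ceil(remainder / top_share)

print(min_firm_count(0.32))  # CR4 of 32% -> at least 13 firms
print(min_firm_count(0.26))  # CR4 of 26% -> at least 16 firms
```

Even at the higher 32% figure, such a market must contain more than a dozen firms, which is why most analysts would not treat it as highly concentrated.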

The most interesting evidence for a fall in competition, in Shapiro's view, involves corporate profits. Here's a figure showing corporate profits over time as a share of GDP.

And here's a figure showing the breakdown of corporate profits by industry.
Thus, there is evidence that profit levels have risen over time. In particular, they seem to have risen in the Finance & Insurance sector and in the Health Care & Social Assistance area. But as Shapiro emphasizes, antitrust law does not operate on a presupposition that "big is bad" or "profits are bad." The linchpin of US antitrust law is whether consumers are benefiting.

Thus, it is a distinct possibility that large national firms in some industries are providing lower-cost services to consumers and taking advantage of economies of scale. They earn high profits, because it's hard for small new firms without these economies of scale to compete. Shapiro writes:
"Simply saying that Amazon has grown like a weed, charges very low prices, and has driven many smaller retailers out of business is not sufficient. Where is the consumer harm? I presume that some large firms are engaging in questionable conduct, but I remain agnostic about the extent of such conduct among the giant firms in the tech sector or elsewhere. ... As an antitrust economist, my first question relating to exclusionary conduct is whether the dominant firm has engaged in conduct that departs from legitimate competition and maintains or enhances its dominance by excluding or weakening actual or potential rivals. In my experience, this type of inquiry is highly fact-intensive and may necessitate balancing procompetitive justifications for the conduct being investigated with possible exclusionary effects. ...
"This evidence leads quite naturally to the hypothesis that economies of scale are more important, in more markets, than they were 20 or 30 years ago. This could well be the result of technological progress in general, and the increasing role of information technology in particular. On this view, today’s large incumbent firms are the survivors who have managed to successfully obtain and exploit newly available economies of scale. And these large incumbent firms can persistently earn supra-normal profits if they are protected by entry barriers, i.e., if smaller firms and new entrants find it difficult and risky to make the investments and build the capabilities necessary to challenge them."
What should be done? Shapiro suggests that tougher merger and cartel enforcement, focused on particular practices and situations, makes a lot of sense. As one example, he writes:

"One promising way to tighten up on merger enforcement would be to apply tougher standards to mergers that may lessen competition in the future, even if they do not lessen competition right away. In the language of antitrust, these cases involve a loss of potential competition. One common fact pattern that can involve a loss of future competition occurs when a large incumbent firm acquires a highly capable firm operating in an adjacent space. This happens frequently in the technology sector. Prominent examples include Google’s acquisition of YouTube in 2006 and DoubleClick in 2007, Facebook’s acquisition of Instagram in 2012 and of the virtual reality firm Oculus VR in 2014, and Microsoft’s acquisition of LinkedIn in 2016. ... Acquisitions like these can lessen future competition, even if they have no such immediate impact."
Shapiro also makes the point that a certain amount of concern about large companies mixes together a range of public concerns: worries about whether consumers are being harmed by a lack of competition are mixed together with worries about whether citizens are being harmed by big money in politics, worries about rising inequality of incomes and wealth, and worries about how locally-owned firms may suffer from an onslaught of national chain competition. He argues that these issues should be considered separately.
"I would like to emphasize that the role of antitrust in promoting competition could well be undermined if antitrust is called upon or expected to address problems not directly relating to competition. Most notably, antitrust is poorly suited to address problems associated with the excessive political power of large corporations. Let me be clear: the corrupting power of money in politics in the United States is perhaps the gravest threat facing democracy in America. But this profound threat to democracy and to equality of opportunity is far better addressed through campaign finance reform and anti-corruption rules than by antitrust. Indeed, introducing issues of political power into antitrust enforcement decisions made by the Department of Justice could dangerously politicize antitrust enforcement. Antitrust also is poorly suited to address issues of income inequality. Many other public policies are far superior for this purpose. Tax policy, government programs such as Medicaid, disability insurance, and Social Security, and a whole range of policies relating to education and training spring immediately to mind. So, while stronger antitrust enforcement will modestly help address income inequality, explicitly bringing income distribution into antitrust analysis would be unwise."

In short, where anticompetitive behavior is a problem, by all means go after it--and go after it more aggressively than the antitrust authorities have done in recent decades. But other concerns over big business need other remedies. 

Tuesday, March 13, 2018

Interview with Jean Tirole: Competition and Regulation

"Interview: Jean Tirole" appears in the most recent issue of Econ Focus from the Federal Reserve Bank of Richmond (Fourth Quarter 2017, pp. 22-27). The interlocutor is David S. Price. Here are a few comments that jumped out at me.

How did Tirole end up in the field of industrial organization?
"It was totally fortuitous. I was once in a corridor with my classmate Drew Fudenberg, who's now a professor at MIT. And one day he said, "Oh, there's this interesting field, industrial organization; you should attend some lectures." So I did. I took an industrial organization class given by Paul Joskow and Dick Schmalensee, but not for credit, and I thought the subject was very interesting indeed.
"I had to do my Ph.D. quickly. I was a civil servant in France. I was given two years to do my Ph.D. (I was granted three at the end.) It was kind of crazy."
Why big internet firms raise competition concerns
"[N]ew platforms have natural monopoly features, in that they exhibit large network externalities. I am on Facebook because you are on Facebook. I use the Google search engine or Waze because there are many people using it, so the algorithms are built on more data and predict better. Network externalities tend to create monopolies or tight oligopolies.
"So we have to take that into account. Maybe not by breaking them up, because it's hard to break up such firms: Unlike for AT&T or power companies in the past, the technology changes very fast; besides, many of the services are built on data that are common to all services. But to keep the market contestable, we must prevent the tech giants from swallowing up their future competitors; easier said than done of course ...
"Bundling practices by the tech giants are also of concern. A startup that may become an efficient competitor to such firms generally enters within a market niche; it's very hard to enter all segments at the same time. Therefore, bundling may prevent efficient entrants from entering market segments and collectively challenging the incumbent on the overall technology.
"Another issue is that most platforms offer you a best price guarantee, also called a "most favored nation" clause or a price parity clause. You as a consumer are guaranteed to get the lowest price on the platform, as required from the merchants. Sounds good, except that if all or most merchants are listed on the platform and the platform is guaranteed the lowest price, there is no incentive for you to look anywhere else; you have become a "unique" customer, and so the platform can set large fees to the merchant to get access to you. Interestingly, due to price uniformity, these fees are paid by both platform and nonplatform users — so each platform succeeds in taxing its rivals! That can sometimes be quite problematic for competition.
"Finally, there is the tricky issue of data ownership, which will be a barrier to entry in AI-driven innovation. There is a current debate between platform ownership (the current state) and the prospect of a user-centric approach. This is an underappreciated subject that economists should take up and try to make progress on."

The economics of two-sided platforms
"We get a fantastic deal from Google or credit card platforms. Their services are free to consumers. We get cashback bonuses, we get free email, Waze, YouTube, efficient search services, and so on. Of course there is a catch on the other side: the huge markups levied on merchants or advertisers. But we cannot just conclude from this observation that Google or Visa are underserving monopolies on one side and are preying against their rivals on the other side. We need to consider the market as a whole.
"We have learned also that platforms behave very differently from traditional firms. They tend to be much more protective of consumer interests, for example. Not by philanthropy, but simply because they have a relationship with the consumers and can charge more to them (or attract more of them and cash in on advertising) if they enjoy a higher consumer surplus. That's why they allow competition among applications on a platform, that's why they introduce rating systems, that's why they select out nuisance users (a merchant who wants to be on the platform usually has to satisfy various requirements that are protective of consumers). Those mechanisms — for example, asking collateral from participants to an exchange or putting the money in an escrow until the consumer is satisfied — screen the merchants. The good merchants find the cost minimal, and the bad ones are screened out.
"That's very different from what I call the "vertical model" in which, say, a patent owner just sells a license downstream to a firm and then lets the firm exercise its full monopoly power.
"I'm not saying the platform model is always a better model, but it has been growing for good reason as it's more protective of consumer interest. Incidentally, today the seven largest market caps in the world are two-sided platforms."

Monday, March 12, 2018

The Distressingly Weak Lessons of Research on Gun Control

If you want to know what actual research on the effects of various gun control policies has to say, the RAND Corporation has your back. It has published a lengthy report: "The Science of Gun Policy: A Critical Synthesis of Research Evidence on the Effects of Gun Policies in the United States," by a team of 17 researchers led by Andrew R. Morral. A smaller group led by Morral also published "The Magnitude and Sources of Disagreement Among Gun Policy Experts." And there's also a nice accessible website with a summary of results and links to these more detailed studies. They write:
The 13 classes of gun policies considered in this research are as follows:

1. background checks
2. bans on the sale of assault weapons and high-capacity magazines
3. stand-your-ground laws
4. prohibitions associated with mental illness
5. lost or stolen firearm reporting requirements
6. licensing and permitting requirements
7. firearm sales reporting and recording requirements
8. child-access prevention laws
9. surrender of firearms by prohibited possessors
10. minimum age requirements
11. concealed-carry laws
12. waiting periods
13. gun-free zones.

The eight outcomes considered in this research are

1. suicide
2. violent crime
3. unintentional injuries and deaths
4. mass shootings
5. officer-involved shootings
6. defensive gun use
7. hunting and recreation
8. gun industry.
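Crossing these two lists yields the evidence grid behind the report's "more than 100 combinations of policies and outcomes." A small sketch, with the list items abbreviated as short labels of my own choosing:

```python
from itertools import product

# The 13 policy classes and 8 outcomes from the RAND report
# (labels abbreviated here).
policies = [
    "background checks", "assault weapon / magazine bans",
    "stand-your-ground", "mental illness prohibitions",
    "lost/stolen reporting", "licensing and permitting",
    "sales reporting and recording", "child-access prevention",
    "surrender by prohibited possessors", "minimum age",
    "concealed-carry", "waiting periods", "gun-free zones",
]
outcomes = [
    "suicide", "violent crime", "unintentional injuries and deaths",
    "mass shootings", "officer-involved shootings", "defensive gun use",
    "hunting and recreation", "gun industry",
]

# Every (policy, outcome) pair is one cell of the evidence matrix the
# researchers tried to fill with methodologically rigorous studies.
cells = list(product(policies, outcomes))
print(len(cells))  # 13 x 8 = 104 combinations
```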
They focus on high-quality studies published since 2003. They write:
"[W]e produced research syntheses that describe the quality and findings of the best available scientific evidence. Each synthesis defines the class of policies being considered; presents and rates the available evidence; and describes what conclusions, if any, can be drawn about the policy’s effects on outcomes. In many cases, we were unable to identify any research that met our criteria for considering a study as providing minimally persuasive evidence for a policy’s effects. Studies were excluded from this review if they offered only correlational evidence for a possible causal effect of the law, such as showing that states with a specific law had lower firearm suicides at a single point in time than states without the law. Correlations like these can occur for many reasons other than the effects of a single law, so this kind of  evidence provides little information about the effects attributable to specific laws. We did not exclude studies on the basis of their findings, only on the basis of their methods for isolating causal effects. For studies that met our inclusion criteria, we summarize key findings and methodological weaknesses, when present, and provide our consensus judgment on the overall strength of the available scientific evidence."
One main result is that the actual evidence is pretty thin. "Of more than 100 combinations of policies and outcomes, we found that surprisingly few were the subject of methodologically rigorous investigation." For example, evidence on four of the eight outcomes was "essentially unavailable," including defensive gun use, officer-involved shootings, hunting and recreation, and effects on the gun industry. None of the studies of waiting periods or of licensing and permitting requirements has reached more than inconclusive results. There are no methodologically sound studies at all on the effects of gun-free zones or requirements for reporting of lost or stolen firearms. I'll just list the study's overall conclusions here:
Conclusion 1. Available evidence supports the conclusion that child-access prevention laws, or safe storage laws, reduce self-inflicted fatal or nonfatal firearm injuries among youth. There is moderate evidence that these laws reduce firearm suicides among youth and limited evidence that the laws reduce total (i.e., firearm and nonfirearm) suicides among youth.
Conclusion 2. Available evidence supports the conclusion that child-access prevention laws, or safe storage laws, reduce unintentional firearm injuries or unintentional firearm deaths among children. In addition, there is limited evidence that these laws may reduce unintentional firearm injuries among adults. ...
Conclusion 3. There is moderate evidence that background checks reduce firearm suicides and firearm homicides, as well as limited evidence that these policies can reduce overall suicide and violent crime rates.
Conclusion 4. There is moderate evidence that stand-your-ground laws may increase state homicide rates and limited evidence that the laws increase firearm homicides in particular.

Conclusion 5. There is moderate evidence that laws prohibiting the purchase or possession of guns by individuals with some forms of mental illness reduce violent crime, and there is limited evidence that such laws reduce homicides in particular. There is also limited evidence these laws may reduce total suicides and firearm suicides. ...

Conclusion 6. There is limited evidence that before implementation of a ban on the sale of assault weapons and high-capacity magazines, there is an increase in the sales and prices of the products that the ban will prohibit.

Conclusion 7. There is limited evidence that a minimum age of 21 for purchasing firearms may reduce firearm suicides among youth.

Conclusion 8. No studies meeting our inclusion criteria have examined required reporting of lost or stolen firearms, required reporting and recording of firearm sales, or gun-free zones. ...

Conclusion 9. The modest growth in knowledge about the effects of gun policy over the past dozen years reflects, in part, the reluctance of the U.S. government to sponsor work in this area at levels comparable to its investment in other areas of public safety and health, such as transportation safety. ...

Conclusion 10. Research examining the effects of gun policies on officer-involved shootings, defensive gun use, hunting and recreation, and the gun industry is virtually nonexistent.

Conclusion 11. The lack of data on gun ownership and availability and on guns in legal and illegal markets severely limits the quality of existing research. ...

Conclusion 12. Crime and victimization monitoring systems are incomplete and not yet fulfilling their promise of supporting high-quality gun policy research in the areas we investigated. ...

Conclusion 13. The methodological quality of research on firearms can be significantly improved.
Of course, absence of evidence is not evidence of absence--that is, just because there is a lack of evidence on certain policies or outcomes doesn't prove that those policies don't work. But it does suggest that a degree of humility might be appropriate on all sides. Speaking as a hopelessly out-of-touch academic, I'd hope there could be bipartisan consensus on building up the data and evidence so that better studies can be done, but maybe this is a situation where neither side wishes to take the risk that their presuppositions might be rebutted. Or at least when gun control laws are passed, the law could include a specific provision for exactly how those laws will be meaningfully evaluated a few years down the road.

Follow-up on 3/13/18: Faithful reader DK reminds me that Congress blocked the public health authorities from doing research into gun control issues back in the 1990s, as the New York Times just reported. I'm pretty much always in favor of additional research, and I don't like research being limited. That said, it seems pretty clear to me as someone who has never fired a gun and tends to favor additional gun controls that most public health researchers then and now have been so stridently anti-gun that their research was not trustworthy. I also tend to view gun policy as a social science issue, which is best tackled with social science research methods like those considered in the RAND report. It's not clear to me that public health researchers have the tools or expertise to address it appropriately.

Saturday, March 10, 2018

About those Tariff Exemptions for Canada and Mexico ...

I wrote a few days ago with some skepticism about the claim of a "national security" justification for President Trump's steel and aluminum tariffs. When the tariffs were actually imposed, Trump decided to exempt Canada and Mexico.

At a political level, the exemptions for Canada and Mexico make sense. As I mentioned in the earlier post, the US has treaty commitments with Canada going back to the 1950s to integrate their defense-related industrial bases, and there is even a North American Technology and Industrial Base Organization (NATIBO). This is part of the reason why Canada is by far the largest source of US aluminum imports (aluminum imports from Canada are about the same as the combined imports from the next 10-largest exporters to the US). Canada is also the largest source of US steel imports, while Mexico is fourth. And of course, the US is part of the North American Free Trade Agreement with Canada and Mexico, too. Even if Trump wants to renegotiate that agreement, it doesn't make sense to do it haphazardly.

So now we are imposing import tariffs on steel and aluminum on the basis that they are vital to US national security, but the tariffs don't actually affect the main source of steel and aluminum imports, which is Canada.

Moreover, the exemptions for Canada and Mexico make it even less likely that the tariffs can benefit the US economy. Here's why:

The entire purpose of import tariffs is to reduce the extent of foreign competition so that domestic producers can charge more and earn higher profits. (Otherwise, there would be no point to enacting them.) Of course, domestic users of steel and aluminum will pay those higher prices. But at least with a tariff imposed against all trading partners, the higher prices paid by US consumers of steel and aluminum go to two places: either higher revenues for US steel and aluminum producers or higher revenue for the US Treasury. Foreign producers don't benefit.

With Canada and Mexico now exempted from the tariffs, the higher prices paid by US consumers of steel and aluminum now go three places: 1) higher revenues for US steel and aluminum producers, 2) higher revenues for Canadian and Mexican steel and aluminum producers, who will also benefit from the higher price; and 3) higher revenues for the US Treasury.

To understand how strange this is, imagine that someone in Congress proposed this policy to "help" the US steel and aluminum industries. Start by imposing a tax on US domestic users of steel and aluminum, based on how much they used. Then some of the revenues from that tax would be rebated to US producers of steel and aluminum, some would be sent to Canadian and Mexican producers of steel and aluminum, and the rest would be kept by the federal government.
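A back-of-the-envelope sketch of these flows may help. The price, tariff rate, quantities, and the assumption that the tariff is fully passed through to US buyers are all hypothetical, chosen only to make the three buckets visible:

```python
def tariff_flows(price, tariff_rate, domestic_qty, exempt_qty, taxed_qty):
    """Split the extra amount US buyers pay under the tariff into the
    three buckets described above (full pass-through assumed)."""
    markup = price * tariff_rate   # per-unit price increase paid by US buyers
    return {
        "US producers": markup * domestic_qty,            # higher price on domestic output
        "exempt foreign producers": markup * exempt_qty,  # Canada/Mexico keep the markup
        "US Treasury": markup * taxed_qty,                # tariff collected on taxed imports
    }

# Hypothetical: $100/ton base price, 25% tariff; quantities in millions of tons.
flows = tariff_flows(price=100, tariff_rate=0.25,
                     domestic_qty=80, exempt_qty=15, taxed_qty=5)
for bucket, amount in flows.items():
    print(f"{bucket}: ${amount:,.0f} million")
```

Spelled out this way, the middle bucket is the oddity: part of what US buyers pay simply transfers to Canadian and Mexican producers, reaching neither US firms nor the Treasury.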

As I have commented before in the context of tire tariffs imposed by the Obama administration some years ago, this way of trying to assist the US steel and aluminum industry seems literally insane once you spell it out in this way. It's hard to imagine that even the steel and aluminum industries would favor it. But it accurately describes the economic effect of steel and aluminum tariffs with a Canada and Mexico exemption.

Friday, March 9, 2018

Some Economics of Place-Based Policies

When it comes to public policies for helping the poor, economists have tended to favor a focus on individuals who are poor, rather than on places that have a higher share of poor people. This seemed like a better way to target scarce public resources. There was some fear that if the focus shifted to places, much of the benefit would flow to homeowners who lived in those places--and who would thus see an improvement in property values--or to local building contractors, rather than helping the poor directly. Also, a healthy economy will see a flow of people moving toward destinations that are more attractive, while place-based support of locations that aren't doing well would tend to hinder such migration.

But some economists are rethinking the merits of place-based policies. Benjamin Austin, Edward Glaeser, and Lawrence H. Summers have written "Saving the heartland: Place-based policies in 21st century America," for the Spring 2018 issue of the Brookings Papers on Economic Activity. As they argue, we seem to have entered a time when geographic mobility is down and when regional convergence of incomes has dropped off. They write:
"America’s western frontier may have closed at the end of the 19th century, but there was still a metropolitan frontier where workers from depressed areas could find a more prosperous future. Five facts collectively suggest that this geographic escape valve has tightened: declining geographic mobility, increasingly inelastic housing supplies in high income areas, declining income convergence, increased sorting by skill across space, and persistent pockets of non-employment. Together these facts suggest that even if income differences across space have declined, the remaining economic differences may be a greater source of concern. Consequently, it may be time to target pro-employment policies towards our most distressed areas. ...

"We divide the U.S. into three regions: the prosperous coasts, the western heartland and the eastern heartland. The coasts have high incomes, but the western heartland also benefits from natural resources and high levels of historical education. America’s social problems, including non-employment, disability, opioid-related deaths and rising mortality, are concentrated in America’s eastern heartland, states from Mississippi to Michigan, generally east of the Mississippi and not on the Atlantic coast. The income and employment gaps between three regions are not converging, but instead seem to be hardening ..."
The paper has a bunch of figures showing differences across these three regions. Here are figures on economic growth, the share of prime-age men not working, and mortality rates for men.

What would place-based policies look like? As the authors point out, such policies can be explicit or implicit. For example, an infrastructure policy like the Tennessee Valley Authority is explicitly aimed at a certain geographic region. However, an infrastructure project like the federal highway system, or a program like flood insurance, will clearly have specific geographic effects for those closer to highways or at higher risk of floods, without actually naming a certain geographic area. After mulling the options, they suggest that targeted employment subsidies may be the best bet. They write:
"The best case for geographic targeting of policies is that a dollar spent fighting non-employment in a high not working rate area will do more to reduce non-employment than a dollar spent fighting non-employment in a low not working rate area. The empirical evidence for heterogeneous labor supply responses to demand shocks or public interventions is limited, but broadly supportive of the view that reducing the not working rate in some parts of the country is easier than in other parts of the country. ...  While infrastructure remains an important investment for America, targeting infrastructure spending towards distressed areas risks producing projects with limited value for users. By contrast, enhanced spending on employment subsidies in high not working rate areas, and perhaps the U.S. as a whole, seems like a more plausible means of reducing not working rates."
For those interested in this approach, here's an earlier discussion of "What Do We Know about Subsidized Employment Programs?" (April 25, 2016).

Wednesday, March 7, 2018

The National Security Argument for Steel and Aluminum Tariffs

The reason behind the tariffs that President Trump has announced for steel and aluminum is an unusual one. The legal justification for the tariffs is based on Section 232 of the Trade Expansion Act of 1962, which gives the President the power to impose tariffs if "national security" is at stake.

As Chad Bown of the Peterson Institute for International Economics has pointed out, this specific justification for import tariffs has led to a total of 28 investigations in the 56 years since the law was enacted. The most recent investigation into whether national security should lead to import tariffs was 17 years ago, in 2001; the most recent time national security actually led to imports being limited was 32 years ago, when President Reagan used this argument to limit imports of certain machine tools.

However, the argument that it might sometimes be necessary to limit imports because of national security has a venerable history. Adam Smith, the intellectual godfather of free trade arguments, listed national defense as an exception in Book IV of The Wealth of Nations. Smith wrote:
"There seem, however, to be two cases in which it will generally be advantageous to lay some burden upon foreign, for the encouragement of domestic, industry. The first is, when some particular sort of industry is necessary for the defence of the country. The defence of Great Britain, for example, depends very much upon the number of its sailors and shipping. The act of navigation, therefore, very properly endeavours to give the sailors and shipping of Great Britain the monopoly of the trade of their own country in some cases by absolute prohibitions and in others by heavy burdens upon the shipping of foreign countries."
By all means, the national security argument deserves serious consideration. But are the steel and aluminum tariffs actually about national security in the sense of military strength? Or is it "national security" in a more generic and rhetorical sense, really meaning that if it's good for steel industry profits, then it's good for "national security"?

(Side note: Of course, this second argument is essentially similar to the line attributed long ago to Charles Wilson, a former head of General Motors who was nominated to be Secretary of Defense in 1953. Wilson was widely mocked for saying, "What's good for General Motors is good for the country." That's not actually what he said, as I explain in "What's Good for General Motors ..." (October 23, 2012). But the sentiment that corporate profits for favored industries are vital to national security was certainly common enough, then and now.)

The US Department of Commerce has put forward the "national security" justification for the steel and aluminum tariffs in two January 2018 reports: "The Effect of Imports of Steel on the National Security" (January 11, 2018) and "The Effect of Imports of Aluminum on the National Security" (January 17, 2018).

As the reports point out, Section 232 allows for a broad definition of "national security." It quotes from a report back in 2001, the last time the national security justification for tariffs was considered (although not ultimately used), to note that “in addition to the satisfaction of national defense requirements, the term “national security” can be interpreted more broadly to include the general security and welfare of certain industries, beyond those necessary to satisfy national defense requirements that are critical to the minimum operations of the economy and government.” These reports have a lot of detail on levels of steel and aluminum imports and the difficulties of US steel and aluminum companies. But the details about just how national security is being affected are harder to find and to pin down. Here, I'll first give some details on the steel industry from the US Department of Commerce report, and then turn to the aluminum industry report.

Background on the US Steel Industry

The report sums up its case in this sentence: "It is these three factors – displacement of domestic steel by excessive imports and the consequent adverse impact on the economic welfare of the domestic steel industry, along with global excess capacity in steel – that the Secretary has concluded create a persistent threat of further plant closures that could leave the United States unable in a national emergency to produce sufficient steel to meet national defense and critical industry needs."

How much steel is actually used by the US Department of Defense? The answer is 3% of domestic US production. The report says: "The U.S. Department of Defense (DoD) has a large and ongoing need for a range of steel products that are used in fabricating weapons and related systems for the nation’s defense. DoD requirements – which currently require about three percent of U.S. steel production – are met by steel companies that also support the requirements for critical infrastructure and commercial industries."

What about the "critical industries" more broadly? The answer is about half of domestic production. The report says: "[T]here are 16 designated critical infrastructure sectors in the United States, many of which use high volumes of steel (see Appendix I). The 16 sectors include chemical production, communications, dams, energy, food production, nuclear reactors, transportation systems, water, and waste water systems. ... The updated analysis in Appendix I shows that 49.1 percent of domestic steel consumption in 2007 was used in critical industries."

The report has a LOT to say about steel production in China. But when you look at US imports of steel, this table taken from the report shows that Canada is at the top and China is 11th. US steel imports from 2011 to 2017 are up considerably overall, especially from Brazil, South Korea, Mexico, Russia, Turkey, Germany, and Taiwan. But over that time, US steel imports from China are down by about one-third.

How much is US production capacity dropping off? Here's the figure from the report. There's a rise before the Great Recession and a fall after, but current steel capacity is about the same as it was in the early 2000s.  
Logically speaking, the number of jobs in the US steel industry shouldn't be part of the national security argument. After all, steel, like pretty much every other industry, is continually making more use of automation and robots. But here's what the report shows about steel industry jobs: a big drop from about 2000-2003, but not much change since then.

For me, it's hard to look at these kinds of figures and see a national security crisis in the making, at least in the sense of military strength. The report even notes that while steel prices are low all around the world, "Notwithstanding these effects, prices for steel in the U.S. remained substantially higher than in any other area. However, relative to prices between 2010 and 2013, prices are still relatively depressed."

The report does give a few examples of specific types of steel products that are important for defense production and where there are few domestic suppliers. The report notes:
"This is not a hypothetical situation. The Department of Defense already finds itself without domestic suppliers for some particular types of steel used in defense products, including tire rod steel used in military vehicles and trucks. ... In the case of critical infrastructure, the United States is down to only one remaining producer of electrical steel in the United States (AK Steel – which is highly leveraged). Electrical steel is necessary for power distribution transformers for all types of energy – including solar, nuclear, wind, coal, and natural gas – across the country. If domestic electrical steel production, as well as transformer and generator production, is not maintained in the U.S., the U.S. will become entirely dependent on foreign producers to supply these critical materials and products."
The report also notes that steel producers have reduced their capability to ramp up production in a national emergency:
"[D]omestic steel producers have a shrinking ability to meet national security production requirements in a national emergency. The U.S. Department of Commerce, Census Bureau regularly surveys plant capacity, and has found that steel producers are quickly shedding production capacity that could be used in a national emergency. The Census Bureau defines national emergency production as the “greatest level of production an establishment can expect to sustain for one year or more under national emergency conditions.” From 2011 to 2017, steel producers increased the utilization of the surge capacity they would have during a national emergency from 54.2 percent to 68.2 percent ...  As steel producers use more of this emergency capacity, there is an increasingly limited ability to ramp up steel production to meet national security needs during a national emergency."
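To make the surge-capacity arithmetic in that quote concrete, here is a minimal sketch in Python. The utilization rates (54.2 percent in 2011, 68.2 percent in 2017) come from the report; the absolute capacity figure is a made-up placeholder, since the report does not attach a single tonnage to these rates.

```python
# Sketch of the surge-capacity arithmetic in the Commerce report.
# Utilization rates (54.2% in 2011, 68.2% in 2017) are from the report;
# the capacity level below is a hypothetical placeholder for illustration.

def surge_headroom(emergency_capacity_tons, utilization_rate):
    """Tons of additional output available before hitting the 'greatest
    level of production an establishment can expect to sustain for one
    year or more under national emergency conditions.'"""
    return emergency_capacity_tons * (1 - utilization_rate)

capacity = 100_000_000  # hypothetical emergency capacity, tons/year

headroom_2011 = surge_headroom(capacity, 0.542)  # 45.8 million tons of slack
headroom_2017 = surge_headroom(capacity, 0.682)  # 31.8 million tons of slack

# Holding capacity fixed, the slack available for an emergency ramp-up
# shrank by roughly 30 percent over those six years.
shrinkage = 1 - headroom_2017 / headroom_2011
```

The point of the sketch: even if total emergency capacity stays constant, rising peacetime utilization of that capacity directly erodes the room left to surge production in a crisis.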
As the report notes, ramping up steel production in the patterns that occurred during the Vietnam War or World War II would take time and effort. Of course, the notion that the US steel industry should be continually prepared to ramp up at high speed for the equivalent of World War II is a questionable one. Let's pause there for a moment, and turn to the aluminum report.

Background on the US Aluminum Industry

As the aluminum report explains: "Aluminum originates from bauxite, an ore typically found in the topsoil of various tropical and subtropical regions; the United States is not a significant source of bauxite as it cannot be economically extracted here. Once mined, aluminum within the bauxite ore is chemically extracted in a refinery into alumina, an aluminum oxide compound. In a second step, the alumina is smelted to produce pure aluminum metal."

One of the ironies here is that the US is worried about the national security importance of an industry that depends entirely on imported raw materials. Moreover, as the report notes: "The U.S. Government does not maintain any strategic stockpile of bauxite, alumina, aluminum ingots, billets or any semi-finished aluminum products such aluminum plate."

How much aluminum is used by the US Department of Defense? The report blacks out this information. It reads: "The U.S. Department of Defense (DoD) and its contractors use a small percentage of U.S. aluminum production. The DoD “Top Down” estimate of average annual demand for aluminum during peacetime is XXXXX, or XXXXX percent of total U.S. demand." However, later in the discussion, in the specific category of high-purity aluminum, the report reads: "The U.S. manufacturers of products based on aluminum require 250,000 metric tons of high-purity aluminum a year. Approximately 90 percent of this is for commercial aerospace and other applications. Ten percent is used to support the manufacture of defense-related products." In other words, defense-related uses come to roughly 25,000 metric tons of high-purity aluminum per year.

As with steel, the main source of US aluminum imports is Canada. In fact, this outcome is the result of long-standing policies. The report notes:
"The U.S. in 2016 relied on imports for 89 percent of its primary aluminum requirements, up from 64 percent in 2012. Canada, which is highly integrated with the U.S. defense industrial base and considered a reliable supplier, is the leading source of imports. With Canadian smelters operating at near full capacity and with the vast majority of their production already going to customers in the United States, there is limited ability for Canada to replace other suppliers. ... 
"The U.S. and Canadian defense industrial bases are integrated. This cooperative relationship has existed since 1956 and is codified in a number of bilateral defense agreements. For example in 1987, DoD (all Services), the Defense Logistics Agency (DLA), the Office of the Secretary of Defense (OSD), and the Canadian Department of National Defence (DND) joined together to form a North American Technology and Industrial Base Organization (NATIBO). NATIBO is chartered to promote a cost effective, healthy technology and industrial base that is responsive to the national and economic security needs of the United States and Canada." 
How low is the price of aluminum? Here's a graph from the report showing aluminum prices since 1998. Prices peaked during the commodity boom in the lead-up to the Great Recession, then crashed, but presently are above where they were in the late 1990s and early 2000s. In other words, it's hard to make the case that prices have fallen below their usual historical range, rather than being pretty much in the middle of that range. 

Where is the world's aluminum produced? The report says: "Because aluminum production is highly energy intensive, the world’s leading producers are generally the countries with the lowest energy costs (including Canada, Russia, the United Arab Emirates (UAE), and Bahrain). The exception is China, where electricity costs are actually higher than those of the United States ($614 per metric ton of aluminum produced in China versus $532 per metric ton in the United States); China’s overall production costs were equal to that of U.S. producers."

What jumps out at me from the previous table is that the US capacity utilization rate in aluminum production is so very low.

The report does note two particular issues that seem to me potentially relevant to national security in the military sense. One is the particular area of "high-purity aluminum":
"The U.S. currently has five [aluminum] smelters remaining, only two smelters that are operating at full capacity. Only one of these five smelters produces high-purity aluminum required for critical infrastructure and defense aerospace applications, including types of high performance armor plate and aircraft-grade aluminum products used in upgrading F-18, F-35, and C-17 aircraft. Should this one U.S. smelter close, the U.S. would be left without an adequate domestic supplier for key national security needs. The only other high-volume producers of high-purity aluminum are located in the UAE and China (internal use only)."
A somewhat related issue is that many of the high-tech uses of aluminum involve research into new alloys and their properties. But aluminum industry R&D seems to have died off:
"At this time most aluminum companies cannot afford to fund research. The importance of research in this industry is clear, however. More than 90 percent of all alloys currently used in the aerospace industry were developed through Alcoa’s research. ... Of the three remaining companies with U.S. smelting operations in 2016, Alcoa is the only company to report spending on Research and Development over the past five years in its financial statements; Century Aluminum and Noranda reported zero spending on R&D since 2012."
Some Thoughts about the National Security Argument for Protection

Imagine for a moment that you were firmly convinced that the US faced a national security problem with steel and aluminum--and I mean in the specific sense of being related to military and critical industry needs, not in the generic sense of just thinking some industries should have bigger profits. What would you propose? Here are some ideas: 
  • Focus on the specific areas where the dependence on imports of steel and aluminum is most concerning, like the areas of steel tire rods, electrical steel, and high-purity aluminum mentioned in the reports. 
  • Undertake a crash R&D program to find ways of substituting for steel and aluminum in various applications, and also to reuse and recycle existing steel and aluminum where possible. 
  • Stockpile bauxite and other raw materials, so as not to be vulnerable to import disruptions. 
  • Take all the subsidies that are proposed or enacted for favored noncarbon energy sources like solar and wind, and adapt them to apply to steel and aluminum: maybe tax cuts for these industries; or government guarantees that these companies could borrow large sums at subsidized or zero interest rates; or the Department of Defense and other government purchasers would buy steel and aluminum from US producers at above-market prices; or government could pay steel companies to keep unused excess capacity that could be ramped up quickly. 
  • Make contingency plans that would redirect steel and aluminum from noncritical uses to national security uses, if needed. 
  • Avoid undercutting Canada, which is both a key US ally and the largest outside supplier of steel and aluminum.
Just to be clear, I'm not advocating everything on this list of ideas. I'm saying that someone who is seriously concerned that the domestic production of steel and aluminum raises national security concerns should be considering all of these ideas, and advocating for at least some of them.

If the response to national security concerns over steel and aluminum is just "slap on tariffs, help domestic industry earn higher profits, and just kinda sorta hope that domestic industry uses those profits to build up capacity and specialized products and R&D"--well, if the national security concerns are legitimate, that seems like a remarkably sloppy and unserious way to address them.

Speaking of being serious, one frustration for any economist reading these reports is that at no point do they acknowledge that import tariffs or quotas have any costs to consumers and other industrial users of these products. After all, the key mechanism by which import restrictions benefit domestic firms is by allowing them to charge higher prices to buyers.

I care a considerable amount about national security. But waving the words "national security" should not exempt anyone from an actual consideration of actual costs, benefits, and alternative strategies. 
There is zero question in the mind of any economist that import tariffs will offer short-run benefits to the domestic steel and aluminum industries. Whether they benefit the country overall--either in the military or the economic sense of "national security"--is considerably more dubious. The inevitable trade retaliation from other countries will only worsen these tradeoffs.

Finally, one sometimes hears the argument that these steel and aluminum tariffs are just an opening bid in the renegotiation of trade agreements. In this telling, the steel and aluminum tariffs could be bartered away for concessions in other parts of trade agreements. Maybe this is true. But if the tariffs are later bargained away or discarded, I would conclude that the national security justification for their existence was not made sincerely in the first place.