Monday, May 30, 2016

Allocation of Scarce Elevators

In a perfect world, an elevator would always be waiting for me, and it would always take me to my desired floor without stopping along the way. But economics is about scarce resources. What about the problem of scarce elevators?

Jesse Dunietz offers an accessible overview of how such decisions are made in "The Hidden Science of Elevators: How powerful algorithms decide when that elevator car is finally going to come pick you up," Popular Mechanics (May 24, 2016). For those who want all the details, Gina Barney and Lutfi Al-Sharif have just published the second edition of Elevator Traffic Handbook: Theory and Practice, which with its 400+ pages seems to be the definitive book on this subject (although when I checked, still zero reviews of the book on Amazon). Some of the tome can be sampled here via Google. For example, it notes at the start: 
"The vertical transportation problem can be summarised as the requirement to move a specific number of passengers from their origin floors to their respective destination floors with the minimum time for passenger waiting and travelling, using the minimum number of lifts, core space, and cost, as well as using the smallest amount of energy." 
This problem of allocating elevators is complex in detail: not just the basics like the number and size of elevators, the total number of passengers, and the height of the building, but also questions of the usual timing of peak passenger loads. Moreover, the problem is complex because the interests diverge: passengers prefer short wait and travel times, time costs that fall on them, while building owners prefer lower elevator costs, which they pay directly. It turns out that many people would rather have a shorter waiting time for an elevator, even if it might mean a longer travel time once inside the elevator. But although the problem of allocating elevators may not have a single best answer, some answers are better than others.  

Of course, in the early days of elevators, they often had an actual human operator. When automated elevators arrived, and up until about a half-century ago, Dunietz explains in Popular Mechanics that many of them operated rather like a bus route: that is, they went up and down between floors on a preset timetable. This meant that passengers just had to wait for the elevator to cycle around to their floor, and the elevator ran even when it was empty. 

In the mid-1960s, the "elevator algorithm" was developed. Dunietz describes it with two rules:
  1. As long as there's someone inside or ahead of the elevator who wants to go in the current direction, keep heading in that direction.
  2. Once the elevator has exhausted the requests in its current direction, switch directions if there's a request in the other direction. Otherwise, stop and wait for a call.
Not only is this algorithm still pretty common for elevators, but it is also used to govern the motion of disk drive heads when facing read and write requests--and the algorithm has its own Wikipedia entry.
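The two rules can be sketched in a few lines of Python. This is only my illustrative rendering of the logic Dunietz describes--the function name and state representation are invented for the example, not taken from any real controller:

```python
def next_move(current_floor, direction, requests):
    """One decision step of the classic 'elevator algorithm' (SCAN).

    direction is +1 (up), -1 (down), or 0 (idle); requests is the set of
    floors with a pending hall call or a rider's destination.
    Returns the direction the car should head next.
    """
    if not requests:
        return 0  # Rule 2, second clause: no calls anywhere -- stop and wait.
    if direction == 0:
        # An idle car starts toward its pending requests.
        direction = 1 if max(requests) > current_floor else -1
    # Rule 1: keep going while any request lies ahead in the current direction.
    ahead = [f for f in requests if (f - current_floor) * direction > 0]
    if ahead:
        return direction
    # Rule 2: requests exhausted in this direction -- reverse for the rest.
    return -direction
```

Running this step after step reproduces the sweep-up, sweep-down motion that makes the algorithm efficient for both elevators and disk heads.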

However, if you think about how the elevator algorithm works in tall buildings, you realize that it will spend a lot of time in the middle floors, and the waits at the top and the bottom can be extreme. Moreover, if a building has a bunch of elevators all responding to the same signals, all the elevators tend to bunch up near the middle floors, even leapfrogging each other and trying to answer the same signals. So the algorithm was tweaked so that only one elevator would respond to any given signal. Buildings were sometimes divided, so that some elevators only ran to certain groups of floors. Also, when an elevator was not in use, it would automatically return to the lobby (or some other high-departure floor).

By the 1970s, it became possible to encode the rules for allocating elevators into software, which could be tweaked and adjusted. For example, it became possible to use "estimated time of arrival" calculations (for example, here), which figure out which car can respond to a call first. Such algorithms can also take energy use, length of journey, or other factors into account.
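A minimal version of such an ETA dispatcher might look like the sketch below. The cost model (seconds per floor, seconds per stop) and the data layout are my own illustrative assumptions, not any vendor's actual algorithm:

```python
def pick_car(call_floor, cars, sec_per_floor=3, sec_per_stop=10):
    """Assign a hall call to the car with the lowest estimated time of arrival.

    Each car is a dict with its current 'floor' and the list of 'stops' it is
    already committed to. The ETA here is deliberately crude: travel time plus
    a fixed penalty per committed stop, ignoring direction constraints.
    Returns the index of the winning car.
    """
    def eta(car):
        travel = abs(car["floor"] - call_floor) * sec_per_floor
        return travel + len(car["stops"]) * sec_per_stop
    return min(range(len(cars)), key=lambda i: eta(cars[i]))
```

The appeal of the software formulation is visible here: weighting energy use or journey length just means adding terms to eta(), with no change to the machinery.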
Another big step forward in the last decade or so is "destination dispatch": when you call the elevator, you also tell it which floor you are going to. The elevator system can then group together people heading for similar floors. An article by Melanie D.G. Kaplan back in 2012 talks about how this kind of system created huge gains for the Marriott Marquis in Times Square in New York City. Before this system, people could wait 20-30 minutes for an elevator to show up. After the system was installed, there can still be some minutes of waiting at peak times, but as one measure, the number of written complaints about elevator delays went from five per week (!) to zero.
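The core grouping idea behind destination dispatch can be sketched in a few lines. Real systems solve a much richer assignment problem; this batching function is only my toy illustration of the principle:

```python
def batch_by_destination(destinations, car_capacity=4):
    """Group waiting passengers (given as a list of requested floors) into
    car-sized batches of contiguous destinations, so each car makes a few
    clustered stops instead of stopping on many scattered floors.
    """
    ordered = sorted(destinations)
    return [ordered[i:i + car_capacity]
            for i in range(0, len(ordered), car_capacity)]
```

With six passengers bound for floors 12, 3, 11, 4, 12, and 3 and three-person cars, one car serves the 3-4 band and the other serves 11-12, instead of both cars climbing the full height of the building.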

The latest thing, as one might expect, is "machine learning"--that is, define for the elevator system what "success" looks like, and then let the elevator system experiment and learn how to allocate elevators not just at a given moment in time, but to remember how elevator traffic evolves from day to day and adjust for that as well. The definition of "success" may vary across buildings: for example, "success" in a system of hospital elevators might mean that urgent health situations get an immediate elevator response, even if waiting time for others is increased. The machine learning approach leads to academic papers like "The implementation of reinforcement learning algorithms on the elevator control system," and to ongoing research published in places like the proceedings of the annual conferences of the International Society of Elevator Engineers or the IEEE Transactions on Automation Science and Engineering.
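To give a flavor of the reinforcement learning formulation, here is a generic tabular Q-learning update. The state names and the reward choice (negative total passenger waiting time) are illustrative assumptions of mine; a real elevator controller would use a far richer state and typically function approximation rather than a lookup table:

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: nudge the value of (state, action)
    toward the observed reward plus the discounted value of the best
    action available from the next state. For elevators, reward might be
    minus the total seconds passengers spent waiting during the step.
    q is a dict of dicts: q[state][action] -> estimated value.
    """
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
```

Note that the definition of "success" lives entirely in the reward: a hospital system could add a large bonus for serving urgent calls immediately, and the identical update rule would learn a different dispatching policy.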

From an economic point of view, it will be intriguing to see how the machine learning rules evolve. In particular, it will be interesting to see if the machine learning rules that address the various tradeoffs of wait time, travel time, handling peak loads, and energy cost can be formulated in terms of the marginal costs and benefits framework that economists prefer--and whether the rules for elevator traffic find a use in organizing other kinds of traffic, from cars to online data. 

Friday, May 27, 2016

US Corporate Stock: The Transition in Who Owns It

It used to be that most US corporate stock was held by taxable US investors. Now, most corporate stock is owned by a mixture of tax-deferred retirement accounts and foreign investors. Steven M. Rosenthal and Lydia S. Austin describe the transition in "The Dwindling Taxable Share of U.S. Corporate Stock," which appeared in Tax Notes (May 16, 2016, pp. 923-934), and is available here at the website of the ever-useful Tax Policy Center.

The gray area in the figure below shows the share of total US corporate equity owned by taxable accounts. A half-century ago in the late 1960s, more than 80% of all corporate stock was held in taxable accounts; now, it's around 25%. The blue area shows the share of US corporate stock held by retirement plans, which is now about 35% of the total. The area above the blue line at the top of the figure shows the share of US corporate stock owned by foreign investors, which has now risen to 25%.

A few quick thoughts here:

1) These kinds of statistics require doing some analysis and extrapolation from various Federal Reserve data sources. Those who want details on methods should head for the article. But the results here are reasonably consistent with previous analysis.

2) The figures here are all about ownership of US corporate stock; that is, they don't have anything to say about US ownership of foreign stock.

3) One dimension of the shift described here is that the ownership of US stock is moving from taxable to less-taxable forms. Stock returns accumulate untaxed in retirement accounts until the funds are actually withdrawn and spent, which can happen decades later and (because post-retirement income is lower) at a lower tax rate. Foreigners who own US stock pay very little in US income tax--instead, they are responsible for taxes back in their home country.

4) There is an ongoing dispute about how to tax corporations. Economists are fond of pointing out that a corporation is just an organization, so when it pays taxes the money must come from some actual person, and the usual belief is that it comes from investors in the firm. If this is true, then cutting corporate taxes a half-century ago would have tended to raise the returns for taxable investors. However, cutting corporate taxes now would tend to raise returns for untaxed or lightly-taxed retirement funds and foreign investors. The tradeoffs of raising or lowering corporate taxes have shifted.

Thursday, May 26, 2016

Lessons for the Euro from Early American History

The euro is still a very young currency. When watching the struggles of the European Union over the euro, it's worth remembering that it took the US dollar a long time to become a functional currency. Jeffry Frieden looks at "Lessons for the Euro from Early American Monetary and Financial Experience," in a contribution written for the Bruegel Essay and Lecture Series published May 2016. Frieden's lecture on the paper can be watched here. Here's how Frieden starts:
"Europe’s central goal for several decades has been to create an economic union that can provide monetary and financial stability. This goal is often compared, both by those that aspire to an American-style fully federal system and by those who would like to stop short of that, to the long-standing monetary union of the United States. The United States, after all, created a common monetary policy, and a banking union with harmonised regulatory standards. It backs the monetary and banking union with a series of automatic fiscal stabilisers that help soften the potential problems inherent in inter-regional variation.
Easy celebration of the successful American union ignores the fact that it took an extremely long time to accomplish. In fact, the completion of the American monetary, fiscal, and financial union is relatively recent. Just how recent depends on what one counts as an economic and monetary union, and how one counts. Despite some early stops and starts, the United States did not have an effective national currency until 75 years after the Constitution was adopted, starting with the National Banking Acts of 1863 and 1864. And only after another fifty years did the country have a central bank. Financial regulations have been fragmented since the founding of the Republic; many were federalised in the 1930s, but many remain decentralised. And most of the fiscal federalist mechanisms touted as prerequisites for a successful monetary union date to the 1930s at the earliest, and in some cases to the 1960s. The creation and completion of the American monetary and financial union was a long, laborious and politically conflictual process.
Frieden focuses in particular on some seminal events from the establishment of the US dollar. For example, there's a discussion of "Assumption," the policy under which Alexander Hamilton had "the Federal government recognise the state debts and exchange them for Federal obligations, which would be serviced. This meant that the Federal governments would assume the debts of the several states and pay them off at something approaching face value." But after the establishment of a federal market for debt, the US government in the 1840s decided that it would not assume the debts of bankrupt states. A variety of other episodes are put into a broader context. In terms of overall lessons from early US experience for Europe as it seeks to establish the euro, the essay suggests that while Europe has created the euro, existing European institutions are not yet strong enough to sustain it:

One of the problems that Europe has faced in the past decade is the relative weakness of European institutions. Americans and foreigners had little reason to trust the willingness or ability of the new United States government to honour its obligations. Likewise, many in Europe and elsewhere have doubts about the seriousness with which EU and euro-area commitments can be taken. Just as Hamilton and the Americans had to establish the authority and reliability of the central, Federal, government, the leaders of the European Union, and of its member states, have to establish the trustworthiness of the EU’s institutions. And the record of the past ten years points to an apparent inability of the region’s political leaders to arrive at a conclusive resolution of the debt crisis that has bedevilled Europe since 2008. ...
The central authorities – the Federal government in the American case, the institutions of the euro area and the EU in the European case – have to establish their ability to address crucial monetary and financial issues in a way acceptable to all member states. This requires some measure of responsibility for the behaviour of the member states themselves, which the central authority must counter-balance against the moral hazard that it creates.  In the American case, the country dealt with these linked problems over a sixty-year period. Assumption established the seriousness of the central government, but also created moral hazard. The refusal to assume the debts of defaulting states in the 1840s established the credibility of the Federal government’s no-bailout commitment. Europe today faces both of these problems, and the attempt to resolve them simultaneously has so far failed. Proposals to restructure debts are rejected as creating too much moral hazard, but the inability to come up with a serious approach to unsustainable debts has sapped the EU of most of its political credibility. Both aspects of central policy are essential: the central authorities must instil faith in the credibility of their commitments, and do so without creating unacceptable levels of moral hazard.
This is not, of course, to suggest that the European Union should assume the debts of its member states. Europe’s national governments have far greater capacity, and far greater resources, than did the nascent American states. But the lack of credibility of Europe’s central institutions is troubling, and is reminiscent of the poor standing of the new United States before 1789.
The US monetary and financial architecture evolved over decades, but in a country that was somewhat tied together with a powerful origin story--and nevertheless had to fight a Civil War to remain a single country. The European Union monetary and financial organization is evolving, too, but I'm not confident that the pressures of a globalized 21st century economy will give them decades to resolve the political conflicts, build the institutions, and create the credibility that the euro needs if it is to be part of broadly shared economic stability and growth in Europe.

Wednesday, May 25, 2016

Interview with Matthew Gentzkow: Media, Brands, Persuasion

Douglas Clement has another of his thoughtful and revealing interviews with economists, this one with Matthew Gentzkow. It appeared online in The Region, a publication of the Federal Reserve Bank of Minneapolis, on May 23, 2016. For a readable overview of Gentzkow's work, a useful starting point is an essay by Andrei Shleifer titled "Matthew Gentzkow, Winner of the 2014 Clark Medal," published in the Winter 2015 issue of the Journal of Economic Perspectives. The Clark medal, for those not familiar with it, is a prestigious award given each year by the American Economic Association "to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge." Here are some answers from Gentzkow in the interview with Clement that caught my eye.

It seems to me that many discussions of politics neglect the entertainment factor. Politics isn't just about 30-page position papers and carefully worded statements. For lots of citizens and voters--and yes, for lots of politicians, too--it's a fun activity for observers and participants. Thus, when you think about how the spread of television (or newer media) affects voting, it's not enough just to talk about how media affect the information available to voters. It also matters if the new media just give the voters an alternative and nonpolitical source of entertainment. Here's a comment from Gentzkow on his research in this area:
I started thinking about this huge, downward trend that we’ve seen since about the middle of the 20th century in voter turnout and political participation. It’s really around the time that TV was introduced that that trend in the time series changes sharply, so I thought TV could have played a role.
Now, a priori, you could easily imagine it going either way. There’s a lot of evidence before and since that in many contexts, giving people more information has a very robust positive effect on political participation and voting. So, if you think of TV as the new source of information, a new technology for delivering political information, you might expect the effect to be positive. And, indeed, many people at the time predicted that this would be a very good thing for political participation.
On the other hand, TV isn’t just political information; it’s also a lot of entertainment. And in that research, I found that what seemed to be true is that the more important effect of TV is to substitute for—crowd out—a lot of other media like newspapers and radio that on net had more political content. Although there was some political content on TV, it was much smaller, and particularly much smaller for local or state level politics, which obviously the national TV networks are not going to cover.
So, we see that when television is introduced, indeed, voter turnout starts to decline. We can use this variation across different places and see that that sharp drop in voter turnout coincides with the timing of when TV came in. That drop is especially big in local elections. A lot of new technologies … are pushing people toward paying less attention to local politics, local issues, local communities.
People in different geographic areas show, on average, different consumption patterns. For example, Coke is more popular in some places, and Pepsi in others. Or imagine that someone moves from an area with high average health care spending to one with low average health care spending. Gentzkow and co-authors looked at people who moved from one geographic area to another, and how certain aspects of their consumption changed. Were people's preferences firmly established based on their previous location? Or did their preferences shift when they were in a new location? Here's how Gentzkow describes the differences between shifts in consumption related to brand preferences and shifts related to health care:
Well, imagine watching somebody move, first looking at how their brand preferences change; say they move from a Coke place to a Pepsi place and you see how their soft drink preferences change. Then imagine somebody moving from a place where there’s low spending on health care to a place with high spending, and you see how things change. In what way are those patterns different?
The first thing you can look at is how big the jump is when somebody moves. That’s sort of a direct measure of how important is the stuff you are carrying with you relative to the factors that are specific to the places. How important is your brand capital relative to the prices and the advertising? Or in a health care context, how important are the fixed characteristics of people that are different across places, relative to the doctors, the hospitals and the treatment styles across places. It turns out the jumps are actually very similar. In both cases, you close about half the gap between the place you start and the place you’re going, and so the share due to stuff people carry with them—their preference capital or their individual health—is about the same.
What’s very different and was a huge surprise to me, not what I would have guessed, is that with brands, you see a slow-but-steady convergence after people move; so, movers steadily buy more and more Pepsi the longer they live there. But in a health care context, we don’t see that at all; your health care consumption changes a discrete amount when you first move, but the trend is totally flat thereafter—it doesn’t converge at all.
Gentzkow's results on shifts in health care patterns may have some special applicability to thinking about how people react to finding themselves in a different and lower-spending health care system. Say that the change to this new system wasn't the result of a geographic shift--say, moving from a high-cost metro area where average spending on health care might be triple what it is in a low-cost area--but instead involved a change in policy. These results might imply that the policy reform would bring down health spending in a one-time jump, but then spending for the group that was used to being at a higher level would not continue to fall, as might have been predicted. 

Finally, here's an observation in passing from Gentzkow about social media. Are the new media a source of concern because they are not interactive enough (say, as compared to personal communication) or because they are too interactive and therefore addicting (say, as compared to television)? Here's Gentzkow:
A lot of people are complaining about social media now. But think back to what they were saying back when kids were all watching TV: “It’s this passive thing where kids sit there and zone out, and they’re not thinking, they’re alone, they’re not communicating!” Now, suddenly, a thing that kids are spending lots of their time doing is interacting with other kids. They’re writing text messages and posts and creating pictures and editing them on Instagram. It’s certainly not passive; it’s certainly not solitary. It has its own risks perhaps, but not the risks that worried people about TV. I think there’s a tendency, no matter what the new technology is, to wring our hands about its terrible implications. Kind of amazing how people have turned on a dime from worrying about one thing to worrying about its exact opposite.

Tuesday, May 24, 2016

The Tradeoffs of Parking Spots

Sometimes it seems as if every proposal for a new residential or commercial building in an urban or suburban area is neatly packaged with a dispute over parking. Will the new development provide a minimum number of parking spaces? Will it be harder for those already in the area to find parking? How should the flow of drivers in and out of the parking area be arranged? Of course, all of these questions presume that cars and drivers need and deserve to be placed front and center in development decisions.

Donald Shoup, an urban economist who focuses on parking issues, discusses this focus on parking in "Cutting the Cost of Parking Requirements," an essay in the Spring 2016 issue of Access, a magazine on surface transportation research published by a number of University of California schools. Shoup starts this way:

At the dawn of the automobile age, suppose Henry Ford and John D. Rockefeller had hired you to devise policies to increase the demand for cars and gasoline. What planning regulations would make a car the obvious choice for most travel? First, segregate land uses (housing here, jobs there, shopping somewhere else) to increase travel demand. Second, limit density at every site to spread the city, further increasing travel demand. Third, require ample off-street parking everywhere, making cars the default way to travel.
American cities have unwisely embraced each of these car-friendly policies, luring people into cars for 87 percent of their daily trips. Zoning ordinances that segregate land uses, limit density, and require lots of parking create drivable cities but prevent walkable neighborhoods. Urban historians often say that cars have changed cities, but planning policies have also changed cities to favor cars over other forms of transportation.
Minimum parking requirements create especially severe problems. In The High Cost of Free Parking, I argued that parking requirements subsidize cars, increase traffic congestion and carbon emissions, pollute the air and water, encourage sprawl, raise housing costs, degrade urban design, reduce walkability, damage the economy, and exclude poor people. To my knowledge, no city planner has argued that parking requirements do not have these harmful effects. Instead, a flood of recent research has shown they do have these effects. We are poisoning our cities with too much parking. ...
Parking requirements reduce the cost of owning a car but raise the cost of everything else. Recently, I estimated that the parking spaces required for shopping centers in Los Angeles increase the cost of building a shopping center by 67 percent if the parking is in an aboveground structure and by 93 percent if the parking is underground.

Developers would provide some parking even if cities did not require it, but parking requirements would be superfluous if they did not increase the parking supply. This increased cost is then passed on to all shoppers. For example, parking requirements raise the price of food at a grocery store for everyone, regardless of how they travel. People who are too poor to own a car pay more for their groceries to ensure that richer people can park free when they drive to the store. ...
A single parking space, however, can cost far more to build than the net worth of many American households. In recent research, I estimated that the average construction cost (excluding land cost) for parking structures in 12 American cities in 2012 was $24,000 per space for aboveground parking, and $34,000 per space for underground parking.
Shoup discusses California legislation that seeks to put a cap on minimum parking requirements. You can imagine how welcome this idea is. Another one of Shoup's parking projects is discussed by Helen Fessenden in "Getting Unstuck," which asks "Can smarter pricing provide a way out of clogged highways, packed parking, and overburdened mass transit?" Fessenden's article appears in the Fourth Quarter 2015 issue of Econ Focus, which is published by the Federal Reserve Bank of Richmond. On the subject of parking, she writes:

Economist Don Shoup at the University of California, Los Angeles has spent decades researching the inefficiencies of the parking market — including the high cost of minimum parking requirements — but he is probably best known for his work on street parking. In 2011, San Francisco applied his ideas in a pilot project to set up "performance pricing" zones in its crowded downtown, and similar projects are now underway in numerous other cities — including, later this spring, in D.C. ...
"I had always thought parking was an unusual case because meter prices deviated so much from the market prices," says Shoup. "The government was practically giving away valuable land for free. Why not set the price for on-street parking according to demand, and then use the money for public services?"
Taking a cue from this argument, San Francisco converted its fixed-price system for on-street parking in certain zones into "performance parking," in which rates varied by the time of day according to demand. In its initial run, the project, dubbed SFpark, equipped its meters with sensors and divided the day into three different price periods, with the option to adjust the rate in 25-cent increments, with a maximum price of $6 an hour. The sensors then gathered data on the occupancy rates on each block, which the city analyzed to see whether and how those rates should be adjusted. Its goal was to set prices to achieve target occupancy — in this case, between 60 percent and 80 percent — at all times. There was no formal model to predict pricing; instead, the city adjusted prices every few months in response to the observed occupancy to find the optimal rates.
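The adjustment rule described above is simple enough to state as code. This sketch uses the 25-cent step, $6 hourly cap, and 60-80 percent occupancy target from the article; the $0.25 floor is my own assumption:

```python
def adjust_rate(rate, occupancy, step=0.25, floor=0.25, cap=6.00):
    """One SFpark-style periodic adjustment for a single block: raise the
    hourly rate a step when observed occupancy is above the target band,
    lower it a step when below, and leave it alone in between.
    """
    if occupancy > 0.80:
        rate += step
    elif occupancy < 0.60:
        rate -= step
    return min(max(rate, floor), cap)
```

Iterating this every few months is the whole "model-free" search for the market-clearing price: no demand forecast, just feedback from the occupancy sensors.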
The results: In the first two years of the project, the time it took to find a spot fell by 43 percent in the pilot areas, compared with a 13 percent fall on the control blocks. Pilot areas also saw less "circling," as vehicle miles traveled dropped by 30 percent, compared with 6 percent on the control blocks. Perhaps most surprising was that the experiment didn't wind up costing drivers more, on net, because demand was more efficiently dispersed. Parking rates went up 31 percent of the time, dropped in another 30 percent of cases, and stayed flat for the remaining 39 percent. The overall average rate actually dropped by 4 percent.
A summary of the 2014 evaluation report for the SFpark pilot study is available here.

For many of us, parking spots are just a taken-for-granted part of the scenery. Shoup makes you see parking in a different way. Space is scarce in urban areas, and in many parts of suburban areas, too. Parking uses space. Next time you are circling a block looking for parking, or navigating a city street that is made narrower because cars are parked on both sides, or walking down a sidewalk corridor between buildings on one side and parked cars on the other, or wending your way in and out of a parking ramp, it's worth recognizing the tradeoffs of requiring and underpricing parking spaces.

Monday, May 23, 2016


The American College of Physicians has officially endorsed "telemedicine," which refers to using technology to connect a health care provider and a patient who aren't in the same place. An official statement of the ACP policy recommendations and a background position paper, written by Hilary Daniel and Lois Snyder Sulmasy, appear in the Annals of Internal Medicine (November 17, 2015, volume 163, number 10). The same issue includes an editorial on "The Hidden Economics of Telemedicine," by David Asch, emphasizing that some of the most important costs and benefits of telemedicine are not about delivering the same care in an alternative way. For starters, here are some comments from the background paper (with footnotes and references omitted for readability):
Telemedicine can be an efficient, cost-effective alternative to traditional health care delivery that increases the patient's overall quality of life and satisfaction with their health care. Data estimates on the growth of telemedicine suggest a considerable increase in use over the next decade, increasing from approximately 350 000 to 7 million by 2018. Research analysis also shows that the global telemedicine market is expected to grow at an annual rate of 18.5% between 2012 and 2018. ... [B]y the end of 2014, an estimated 100 million e-visits across the world will result in as much as $5 billion in savings for the health care system. As many as three quarters of those visits could be from North American patients. ...

Telemedicine has been used for over a decade by Veterans Affairs; in fiscal year 2013, more than 600 000 veterans received nearly 1.8 million episodes of remote care from 150 VHA medical centers and 750 outpatient clinics. ... The VHA's Care Coordination/Home Telehealth program, with the purpose of coordinating care of veteran patients with chronic conditions, grew 1500% over 4 years and saw a 25% reduction in the number of bed days, a 19% reduction in numbers of hospital readmissions, and a patient mean satisfaction score of 86% ... 
The Mayo Clinic telestroke program uses a “hub-and-spoke” system that allows stroke patients to remain in their home communities, considered a “spoke” site, while a team of physicians, neurologists, and health professionals consult from a larger medical center that serves as the “hub” site. A study on this program found that a patient treated in a telestroke network, consisting of 1 hub hospital and 7 spoke hospitals, reduced costs by $1436 and gained 0.02 years of quality-adjusted life-years over a lifetime compared with a patient receiving care at a rural community hospital ... 
The Antenatal and Neonatal Guidelines, Education and Learning System program at the University of Arkansas for Medical Sciences used telemedicine technologies to provide rural women with high-risk pregnancies access to physicians and subspecialists at the University of Arkansas. In addition, the program operated a call center 24 hours a day to answer questions or help coordinate care for these women and created evidence-based guidelines on common issues that arise during high-risk pregnancies. The program is widely considered to be successful and has reduced infant mortality rates in the state. ...
An analysis of cost savings during a telehealth project at the University of Arkansas for Medical Sciences between 1998 and 2002 suggested that 94% of participants would have to travel more than 70 miles for medical care. ...  Beyond the rural setting, telemedicine may aid in facilitating care for underserved patients in both rural and urban settings. Two thirds of the patients who participated in the Extension for Community Healthcare Outcomes program were part of minority groups, suggesting that telemedicine could be beneficial in helping underserved patients connect with subspecialists they would not have had access to before, either through direct connections or training for primary care physicians in their communities, regardless of geographic location.
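The telestroke figures quoted above ($1,436 in reduced costs and 0.02 quality-adjusted life-years gained) illustrate a standard piece of cost-effectiveness logic. As a minimal sketch, with the two figures taken from the quote and the decision rule being the conventional one from health economics (not spelled out in the paper excerpt):

```python
# Sketch of the cost-effectiveness logic behind the quoted telestroke figures:
# an intervention that both lowers lifetime cost and adds quality-adjusted
# life-years (QALYs) is called "dominant" -- no cost-per-QALY tradeoff arises.
delta_cost = -1436.0   # dollars (telestroke network vs. rural hospital, from the quote)
delta_qaly = 0.02      # QALYs gained over a lifetime (from the quote)

if delta_cost <= 0 and delta_qaly > 0:
    verdict = "dominant: cheaper and more effective"
else:
    # Otherwise one would report an incremental cost-effectiveness ratio (ICER).
    verdict = f"ICER: ${delta_cost / delta_qaly:,.0f} per QALY"

print(verdict)
```

In this case the intervention is dominant, which is why the study can report both numbers without needing a cost-per-QALY threshold.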
Most of this seems reasonable enough, except for that pesky estimate in the first paragraph that the global savings from telemedicine will amount to $5 billion per year. The US health care system alone has average spending of more than $8 billion per day, every day of the year. Thus, this vision of telemedicine is that it will mostly just rearrange existing care--reach out to bring some additional people into the system, help reduce health care expenditures on certain conditions with better follow-up--but not be a truly disruptive force.
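The back-of-envelope comparison runs as follows. Assuming (approximately, for the mid-2010s) US national health spending of roughly $3 trillion per year, the quoted $5 billion in projected global savings is a tiny fraction of US spending alone:

```python
# Back-of-envelope scale check on the projected telemedicine savings.
# Assumed figure: US national health spending of roughly $3 trillion per year
# (approximate, mid-2010s); the $5 billion is the quoted global e-visit estimate.
us_annual_spending = 3.0e12   # dollars per year (approximate assumption)
projected_savings = 5.0e9     # dollars per year, worldwide (from the quoted estimate)

per_day = us_annual_spending / 365
share = projected_savings / us_annual_spending

print(f"US spending per day: ${per_day / 1e9:.1f} billion")        # roughly $8 billion
print(f"Savings as share of US annual spending: {share:.2%}")      # well under 1%
```

On these assumptions, the projected global savings amount to less than a single day of US health care spending.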

In his editorial essay in the same issue, David Asch points out: "If there is something fundamentally different about telemedicine, it is that many of the costs it increases or decreases have been off the books." He offers a number of examples:

"Some patients who would have visited the physician face to face instead have a telemedicine "visit." They potentially gain a lot. There are no travel costs or parking fees. They might have to wait, but presumably they wait at home or at work where they can do something else (like many of us do when placed on hold). There is no waiting at all in asynchronous settings (the photograph of your rash is sent to your dermatologist, but you do not need a response right away). The costs avoided do not appear on the balance sheets of insurance companies or providers ...  However, the costs avoided are meaningful even if they are not counted in official ways. There are the patients who would have forgone care entirely because the alternative was not a face-to-face visit but no visit. There are no neurologists who treat movement disorders in your region. The emergency department in your area could not possibly have a stroke specialist available at all times. ...  We leave patients out when we ask how telemedicine visits compare with face-to-face visits: all of the patients who, without telemedicine, get no visit at all.
Savings for physicians, hospitals, and other providers are potentially enormous. Clinician-patient time in telemedicine is almost certainly shorter, requiring less of the chitchat that is hard to avoid in face-to-face interactions. There is no check-in at the desk. There is no need to devote space to waiting rooms (in some facilities, waiting rooms occupy nearly one half of usable space). No one needs to clean a room; heat it; or, in the long run, build it. That is the real opportunity of telemedicine. ...

On the other hand, payers worry that if they reimburse for telemedicine, then every skin blemish that can be photographed risks turning from something that patients used to ignore into a payable insurance claim. Indeed, it is almost certainly true that if you make it easy to access care by telemedicine, telemedicine will promote too much care. However, the same concern could be reframed this way: An advantage of requiring face-to-face visits is that their inconvenience limits their use. Do we really want to ration care by inconvenience, or do we want to find ways to deliver valuable care as conveniently and inexpensively as possible?"
I find myself wondering about ways in which telemedicine will be more disruptive. For example, consider the combination of telemedicine with technologies that enable remote monitoring of blood pressure, or blood sugar, or whether medications are being taken on schedule. Or consider telemedicine not just as a method of communicating with members of the American College of Physicians, but also as a way of communicating with nursing professionals, those who know about providing at-home care, various kinds of physical and mental therapists, along with social workers and others. I suspect there will be a wave of jobs for the "telemedicine gatekeeper" who can answer the questions most people ask first, and who then has access to resources for follow-up concerns. My guess is that these kinds of changes will be considerably more disruptive to traditional medical practice than a worldwide cost savings of $5 billion would seem to imply.

Homage: I ran across a mention of these reports at the always-interesting Marginal Revolution website.

Saturday, May 21, 2016

Rising Tuition Discount Rates at Private Colleges

Colleges and universities announce a certain price for tuition, but based on financial aid calculations, they often charge a lot less. The difference is the "institutional tuition discount rate." The National Association of College and University Business Officers (NACUBO) has just released a report with the average discount rate for 2015-16 based on a survey of 401 private nonprofit colleges (that is, not including branches of state university systems and not including for-profit colleges and universities), along with how that rate has evolved over time.
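To make the definition concrete, here is an illustrative calculation with hypothetical numbers (not figures from the NACUBO survey): the discount rate is institutional grant aid divided by the gross tuition revenue the college would collect at its sticker price.

```python
# Illustrative institutional tuition discount rate calculation.
# All numbers below are hypothetical, chosen only to show the arithmetic.
sticker_tuition = 40_000        # published tuition per student (hypothetical)
students = 1_000                # enrolled students (hypothetical)
institutional_aid = 18_000_000  # total grant aid funded by the college (hypothetical)

gross_tuition_revenue = sticker_tuition * students
discount_rate = institutional_aid / gross_tuition_revenue
net_tuition_per_student = (gross_tuition_revenue - institutional_aid) / students

print(f"Discount rate: {discount_rate:.1%}")                        # 45.0%
print(f"Net tuition per student: ${net_tuition_per_student:,.0f}")  # $22,000
```

The point of the measure is that a college with a 45% discount rate is, on average, collecting only 55 cents of every sticker-price dollar it announces.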

The two lines in the figure imply that the level of financial help a student receives as a freshman, when making a choice between colleges, is going to be more than the financial help received in later years. Beware! More broadly, a strategy of charging ever-more to parents who can afford it, while offering ever-larger discounts to those who can't, does not seem like a sustainable long-run approach.