
Thursday, October 31, 2013

Planned Obsolescence, as Myth or Reality

Taking a picture with an iPhone 5C on the day it went on sale last month. (Justin Sullivan/Getty Images)

The new iPhone is out. That means, as I wrote in a column for the coming issue of The New York Times Magazine, that conspiracy theories again abound about ways that older models start to become more unattractive and dysfunctional around the time that a shiny new upgrade is available.

Among the evidence that Apple is supposedly engaging in the great capitalist sin of “planned obsolescence” - that is, deliberately limiting the useful life of a product so that consumers will be forced to replace it - is the slowing of older models. The iPhone 5S and 5C models coincided with the release of a new iPhone operating system, which happens to make the iPhone 4 and 4S very sluggish. When my iPhone 4 notified me that the operating system was available for download, there was no warning that the software might affect the speed of my model.

There’s also the matter of the battery, which, like all rechargeable batteries, has a finite number of charges and generally runs down much faster by the time service providers offer a subsidized upgrade. Apple makes it difficult for customers to remove and replace iPhone batteries at home, since the batteries are sealed into the phone body with special five-point screws. Having your battery replaced by Apple instead costs $79, just $20 less than the typical subsidized price for a new iPhone 5C.

These are ways in which your old phone looks unattractive not only compared with the new models, but compared with itself just a year or two earlier.

Of course, lots of these signs of “planned obsolescence” have alternative and more benign explanations, related to design, efficiency and innovation. Sure, software upgrades may make older phones run more slowly, but that could be a side effect rather than the primary intention; newer software does more sophisticated stuff (3-D maps! Photo filters! AirDrop!) intended to take advantage of the hardware capabilities of the newest phones, and these more sophisticated features happen to be quite taxing on previous-generation hardware.

Likewise, the fact that batteries are hard to replace could be justified by Apple’s commitment to design aesthetics. iPhones are sleeker and lighter with batteries screwed in, rather than manufactured with clunky, detachable phone tumors. And if Apple expects users to want to upgrade in two years as hardware innovations become available, it doesn’t make sense for the company to include batteries that last much longer than that. Consumers may not be willing to pay higher prices for that additional longevity if it’s not useful to them. Just as a clothing retailer probably doesn’t want to sell an infinitely durable pair of skinny jeans if its customers are likely to switch to bell bottoms next year anyway, Apple probably doesn’t want to equip its phones with a much longer-lasting - and potentially much costlier - battery.

Point being, it’s actually very hard to infer a company’s motives for designing a feature a certain way, and whether that decision was intended to hasten degradation of older products, as some insist that the famously secretive Apple is doing. (Apple declined to comment when I called about accusations of planned obsolescence.)

I spoke with a lot of technology experts for the Magazine column, and their interpretations of Apple’s design decisions were all over the map. Some suggested that yes, Apple is deliberately limiting its technology’s lifespan to harvest more sales from its existing user base. Others said no - the brand hit that Apple would take for doing this would be too damaging, and Apple knows it. Since the column was published, I have likewise seen plenty of reader emails and technology blog posts insisting that Apple is either obviously engaging in planned obsolescence or obviously not.

But the answer is not particularly obvious. The best one can do is look at whether Apple would even have the incentive to cause its products to deteriorate more quickly over time. Economic theory is somewhat ambiguous on this point; it really depends on your assumptions about the competitiveness of the high-end smartphone market.

In a notable paper from 1986, Jeremy Bulow asserted that a monopolist not threatened by entry would have an incentive to produce goods with “inefficiently short useful lives.” But if consumers have the option to switch to good substitutes - as arguably they do now in the smartphone market - the incentives could run in the opposite direction. Your company might capture a larger share of the market if consumers believe your products are more durable.

“If people are rational and forward-looking and are able to anticipate the shenanigans that company might pull, they will take that into account when buying the thing originally,” said Austan Goolsbee, an economics professor at the University of Chicago’s Booth School of Business.

On the other hand, if consumers faced substantial “switching costs” if they wanted to flee to your competitor, that could also increase your incentives to limit durability. For example, iPhone users would lose the iOS-compatible apps they’ve bought if they switched to Android phones. These network effects could increase Apple’s incentives to force its existing customers to upgrade by making older models gradually become more dysfunctional - but again, that’s assuming Apple believes it can practice such Machiavellian scheming without damaging its brand too much.

Already Apple is accused of planned obsolescence (and even sued for it, in Brazil) more than most. That’s partly a function of just how big a player it is, and how suspicious consumers become when a luxury product so closely associated with excellence doesn’t meet their expectations. But these sorts of market pressures, trade-offs and concerns about public perceptions exist in other industries too - particularly for any company whose market power makes people suspect it is capable of arm-twisting customers into upgrades. Successful video game companies have received blowback every time they release new consoles that are not backward-compatible with old games, for example; such design decisions could be explained by planned obsolescence, or they could be explained by other considerations related to quality and price trade-offs.

These companies know, after all, that not offering backward-compatibility might drive loyal customers to switch to a competitor in a fit of pique. As I said, economic theory is somewhat ambiguous on when planned obsolescence is actually in a company’s best interest. (As with other economic questions, there are too many other-other-other-other hands!)

The best way to render an older model effectively obsolete is not to make it self-destruct, of course, but to introduce a new product that people really want. The phrase “planned obsolescence” was popularized in the 1950s by the industrial designer Brooks Stevens, who intended it to refer not to building things that deteriorate easily, but “instilling in the buyer the desire to own something a little newer, a little better, a little sooner than is necessary.” Today the term has come to be associated with conspiracies to degrade older products, but in the past it was more closely associated with innovation in new ones. Of course, innovation is expensive, and not easy to come by.



Slower Growth: It’s Not Just Government

Jared Bernstein is a senior fellow at the Center on Budget and Policy Priorities in Washington and a former chief economist to Vice President Joseph R. Biden Jr.

At an all-day meeting (gasp!) but let me dash this off quickly.  There’s a bit of a meme I hear growing that sure, when it comes to economic growth right now, the government sector is a real drag, but otherwise we’re doing pretty well.  The Federal Reserve policy statement on Wednesday kinda sorta goes there:

“Taking into account the extent of federal fiscal retrenchment over the past year, the Committee sees the improvement in economic activity and labor market conditions since it began its asset purchase program as consistent with growing underlying strength in the broader economy.”

I disagree.  The chart below shows year-over-year changes in real growth in gross domestic product for the total economy and minus the government.  You can see how the stimulus (increased government spending) helped offset some of the Great Recession back in August 2009.  And lately, when you take out the fiscal drag, the private economy is clearly growing faster.

But it too has decelerated, and of course, despite silly declarations to the contrary (“the government doesn’t create jobs!” - except that there are currently 22 million government jobs), the two sectors are highly interdependent.  And they’re both (a) decelerating and (b) growing too slowly.

Two Pictures of Economic Growth
Sources: Bureau of Economic Analysis; author’s analysis.
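The comparison in the chart is simple to sketch. The levels below are made-up illustrative figures, not actual BEA data; the point is only the mechanics of computing year-over-year growth for the total economy and for the economy minus government.

```python
# Sketch of the year-over-year growth comparison described above.
# The quarterly levels below are illustrative, not actual BEA figures.

def yoy_growth(levels):
    """Year-over-year percent change for quarterly data (lag of 4 quarters)."""
    return [100.0 * (levels[i] / levels[i - 4] - 1.0)
            for i in range(4, len(levels))]

# Toy quarterly levels: total real GDP and its government component.
total_gdp = [15000, 15100, 15150, 15200, 15250, 15320, 15400, 15480]
government = [3000, 3010, 3005, 3000, 2990, 2980, 2970, 2960]

# "Minus the government" series: subtract the government component.
private = [t - g for t, g in zip(total_gdp, government)]

print(yoy_growth(total_gdp))  # total-economy YoY growth, percent
print(yoy_growth(private))    # ex-government YoY growth, percent
```

With government spending flat or shrinking in the toy data, the ex-government series grows faster than the total, which is the pattern the post describes.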


Wednesday, October 30, 2013

The Perils of a Free Trade Pact With Europe


Simon Johnson, former chief economist of the International Monetary Fund, is the Ronald A. Kurtz Professor of Entrepreneurship at the M.I.T. Sloan School of Management and co-author of “White House Burning: The Founding Fathers, Our National Debt, and Why It Matters to You.”

Currently slightly beneath most people’s radar, but coming soon to the fore is a potential free trade agreement with Europe. Negotiations started in October and, after a delay because of the government shutdown, may now pick up speed. Some parts of this potential agreement make sense, but there is also an important trap to be avoided: European requests on financial services.

In a recent paper, “Financial Services in the Transatlantic Trade and Investment Partnership,” my Peterson Institute colleague Jeffrey J. Schott and I review the history of finance in free trade agreements and examine the reasonable options for any potential deal between the United States and the European Union.

In this instance, the United States Treasury has the right general idea: don’t let discussions over this free-trade agreement divert attention from completing the Dodd-Frank financial reforms and then figuring out what else is needed to make the American financial system safer. The big banks, naturally, would like American regulators to become enmeshed in constraints set by the negotiations with Europe. This is a trap that must be avoided.

The overall rationale for a free trade agreement with the European Union is that while many traditional trade barriers are low - such as tariffs (a form of tax on imports) and quotas - there are still regulatory impediments that limit some forms of trade. Removing such restrictions can make sense in some sectors - for example, the United States and the European Union have reached agreement on standards for organic food, telecommunications and aviation.

Whatever you think of this argument in general, it does not work well for finance - either at this moment or, I would suggest, in general.

The European banking system currently has numerous significant problems, including large holdings of sovereign debt, which remains under intermittent pressure. While a wave of “stress tests” is under way to examine actual and potential losses of these banks, progress in cleaning up balance sheets - including recognizing losses - has been slow over the last four years.

And there is a definite temptation for European politicians and regulators to go easy on banks, including by allowing them to operate with less bank capital (meaning relatively less equity and relatively more debt) and with easier rules than would otherwise be the case. The Europeans also continue to cling to the idea of very large banks as a form of national champions, despite all the indications that they have become instead a form of national millstone around the neck of the real (i.e., nonfinancial) economy.

None of this should make the United States feel confident about the strength of the European financial system today or going forward. This is the Europeans’ own business, of course - although the spillover effects on the rest of the world are not good. But the European Commission is also proposing that as part of any free trade agreement, national rules on finance should be mutually recognized, implying that American regulators should regard European financial companies as well regulated.

The European officials making this case also previously asserted, for example, that European government debt should be regarded as being just as safe as United States government debt (this was in discussions about the technical parameters of the so-called Volcker rule, which is designed to limit proprietary trading). The recent Greek debt restructuring indicates that this argument was completely wrong.

To be fair, the Europeans are trying to improve the regulation and supervision of their banks, including through the creation of elements of a banking union. But this will be an uphill battle, in large part because national governments do not want to cede sovereignty on this issue.

The global megabanks like the idea of committing the United States to treat Europe’s rules as equal to its own. This would allow high-risk activities to be placed in places with lighter regulation - just as A.I.G.’s highly risky financial products activities were centered on London before their spectacular blow-up in September 2008.

Finance should be regulated on a national basis. Some defenders of big banks say this will give an unfair competitive advantage to European banks, because they already have fewer restrictions.

Nothing could be further from the truth. How have the Europeans used their supposed “advantages” to date, including the lower capital requirements they had under the previous Basel II framework? They built bigger banks that funneled credit in irresponsible fashion and made huge mistakes in assessing risks.

Europe offers only cautionary tales in terms of how executives at big banks can lose control of their businesses. Governments step in to provide backstops, but this only worsens the problem of moral hazard - no one has an incentive to be careful.

Financial reforms in the United States are already on a precarious track. Disappointingly little has been achieved in the last five years; perhaps the Treasury secretary, Jacob Lew, will move the process forward more decisively.

The United States should not derail the reform process entirely by binding it to the failed European banking approach.






Tuesday, October 29, 2013

Work Now, and Let Uncle Sam Pay You Later


Casey B. Mulligan is an economics professor at the University of Chicago. He is the author of “The Redistribution Recession: How Labor Market Distortions Contracted the Economy.”

Notwithstanding quirks in the Social Security system, public policy has sharply reduced the reward to work since 2007.

On Monday, the Economix blogger Nancy Folbre helped explain some of the complex factors that determine the rewards of working. Among other things, she noted the roles of work experience and Social Security benefits, both of which are examples of future consequences of working in the present.

This week I will examine Social Security and Medicare benefits from her forward-looking perspective, and in a future post I will examine work experience.

Professor Folbre says the payment of Social Security payroll taxes confers a benefit on the taxpayer in the form of additional Social Security benefits later in life. Indeed, the Social Security Administration calls the payroll taxes “contributions,” although employers are subject to penalties and even prosecution if they fail to deliver the “contributions” on time and in the legally prescribed amounts.

Technically, a worker’s lifetime history of taxable earnings, rather than the taxes themselves, traditionally determines a person’s old-age benefits, with more lifetime earnings sometimes resulting in more benefits (this book by the longtime Social Security actuary Robert J. Myers has all the details).

A classic paper by Martin Feldstein and Andrew Samwick quantified the link between lifetime earnings and old-age benefits as it stood in 1990, assuming that Social Security rules would be unchanged over the next several decades. They found that secondary earners - in their view, spouses with significantly less lifetime earnings than the other partner - would receive no future old-age benefits as a consequence of working, but that the value of benefits to patient, primary earners nearing retirement could be significant, especially if they were married.

If payroll taxes were always the same share of taxable earnings, we could ignore the distinction between the two for the purposes of quantifying incentives to work. But payroll tax rates have varied over time, most recently with the partial payroll tax holiday of 2011 and 2012 (interestingly, the Obama administration refers to the two-point reduction as a “tax cut”). Because the payroll tax rates are higher now than in 2012, a person moving earnings from 2012 to this year would increase his payroll tax but not increase his Social Security benefits.
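The asymmetry described above is easy to put in numbers. The rates below are the employee share of the Social Security payroll tax in 2012 (the "holiday" rate) and 2013; the earnings amount shifted between years is hypothetical.

```python
# Illustration of the point above: moving earnings from 2012 to 2013
# raises the worker's payroll tax, while lifetime taxable earnings,
# which drive future benefits, are unchanged.

HOLIDAY_RATE = 0.042  # employee OASDI rate in 2012, under the tax holiday
NORMAL_RATE = 0.062   # employee OASDI rate in 2013, after the holiday ended

def payroll_tax(earnings, rate):
    """Employee-side Social Security payroll tax on a given amount of earnings."""
    return earnings * rate

# Suppose a worker shifts $10,000 of taxable earnings from 2012 to 2013.
shifted = 10_000
extra_tax = payroll_tax(shifted, NORMAL_RATE) - payroll_tax(shifted, HOLIDAY_RATE)

print(round(extra_tax, 2))  # 200.0 - two percentage points of the shifted amount
```

The extra tax buys no extra benefits, since benefits are keyed to taxable earnings rather than taxes paid, which is why the post treats the entire rate cut as a change in work incentives.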

That’s why I count the entire payroll tax cut as an increase in incentives for as long as the cut lasted, even if the rest of the payroll tax confers the benefits that Professor Folbre contends. If all we wanted to know was the amount by which incentives changed over the last 10 years or so, Professor Folbre’s assertion about the future pension benefits conferred would hardly be relevant, unless we thought that the link between present earnings and future benefits had been changing during that time frame.

I agree with Professor Folbre that the best quantitative estimate of marginal tax rates would account for the future consequences of working in the present, but writing in 2013 I am not willing to follow Professors Feldstein and Samwick and assume that Social Security rules will remain unchanged for the remaining lifetimes of today’s workers. In one way or another, we can expect health benefits or cash benefits for the elderly, or both, to be taxed or means-tested more than they are under current law.

Democrats have suggested means-testing Social Security and Medicare, with the likely result that people who worked and saved more during their lifetimes will find themselves with fewer benefits from those programs, compared with people who worked and saved less. Republicans have proposed means-testing Medicare, as part of transforming it to a health insurance premium-support program. The common denominator here is means-testing and the marginal tax rates that go with it.

Professor Folbre is unwilling to assume that “taxpayers derive no marginal benefits from programs such as Social Security.” But that’s hardly relevant for understanding how incentives evolve over time. Based on the considerations cited above, my guess is that the effect of working in the present on future Social Security and Medicare benefits was once somewhat positive (primary earners) or zero (secondary earners), and for primary earners has become less positive (or even negative) over time. By approximating these changes as zero, my work has thereby understated the amount by which marginal labor income tax rates have increased since 2007.

Regardless of whether redistribution is achieved by collecting more taxes from families with high incomes, providing more subsidies to families with low incomes, or both, an essential consequence is the same: a reduction in the reward for activities and efforts that raise incomes. New and revised federal programs do exactly that, in myriad ways, and will be doing so for the foreseeable future.



#perfmatters

Our Performance event was a big success. If you didn't get a chance to watch Jake Archibald, Eitan Konigsburg, Adam Grossman and Colt McAnlis last night, you can check out the recording below.

Thanks to everyone who attended or followed along online!



Better (and More) Congressional Member Information

Over the summer, as lawmakers in Washington, D.C., haggled over the budget, energy, immigration and other issues, we were at work upgrading the information about members contained in the Congress API.

Part of that process involved adding new elements to some of our member responses, but just as important is that with this update, the API is now part of a larger congressional data infrastructure effort that extends outside The Times.

First, we've added more detail to the member and member list responses, which now include a broader range of social media identifiers and other data that will help make it easier to connect to other sources of information. In particular, member responses now include Twitter and Facebook account names as well as the Facebook “id” used by the Graph API.

We've also added more details about lawmakers' websites, including the address and the URL of the RSS feed, if one is present. There are a number of official congressional sites that do not use RSS, and if you want to retrieve press releases from them (and those sites with feeds), we have released a Ruby gem that does just that.

In response to user requests, we've also added new responses for the current members of the House and the Senate. These two responses, detailed in the member list documentation, include members who are serving in a given chamber as of the date of the request. For earlier congresses, the response provides the “final” membership list for that chamber and congress.
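A minimal sketch of consuming the new member fields might look like the following. The endpoint path and the sample values here are assumptions for illustration, based only on the fields this post describes (Twitter and Facebook account names, the Facebook Graph id, and the site and RSS URLs), not a definitive reference to the API schema.

```python
# Hypothetical sketch of working with a Congress API member response.
# The URL path and sample field values are illustrative assumptions.

import json

API_ROOT = "https://api.nytimes.com/svc/politics/v3/us/legislative/congress"

def member_url(member_id, fmt="json"):
    """Build a request URL for a single member (hypothetical path)."""
    return f"{API_ROOT}/members/{member_id}.{fmt}"

# Sample response fragment with the fields this post says were added.
sample = json.loads("""
{
  "id": "M000001",
  "twitter_account": "RepExample",
  "facebook_account": "repexample",
  "facebook_id": "123456789",
  "url": "http://example.house.gov",
  "rss_url": "http://example.house.gov/rss.xml"
}
""")

def social_links(member):
    """Pull out the social identifiers, tolerating sites without an RSS feed."""
    return {
        "twitter": member.get("twitter_account"),
        "facebook": member.get("facebook_account"),
        "facebook_graph_id": member.get("facebook_id"),
        "rss": member.get("rss_url"),  # may be None: not all sites offer RSS
    }

print(member_url("M000001"))
print(social_links(sample)["twitter"])  # RepExample
```

Using `.get()` rather than indexing keeps the code robust for the official sites that, as noted above, do not publish an RSS feed.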

There is even more at work underneath the hood of our congressional data. The bulk of our data comes from official sites run by the House of Representatives, the Senate and the Library of Congress. And we're not the only people who are interested in compiling that data: organizations such as GovTrack and the Sunlight Foundation have been retrieving congressional data for years.

Instead of each maintaining separate code for fetching this data, last year Josh Tauberer of GovTrack, Eric Mill of Sunlight and I created a single repository for code that we'd all work on and try to incorporate into our organizations. The “unitedstates” organization also hosts other federal government data collection efforts, but our initial efforts focused on legislative data produced by the Library of Congress.

While we've all incorporated at least some of this data into our respective systems, at The Times we've most heavily relied on the member data assembled in another repository under the project. Using a single source helps us to avoid duplicative work as well as inconsistencies between sites and services.

Finally, in response to a request posted in our forums, the Congress API now supports JSONP requests. As always, let us know if you have questions or features you'd like to see.



The Beginning of the End of the Financial Crisis

On Oct. 14, 2008, Treasury Secretary Henry M. Paulson announced emergency financial measures. Agency heads looking on were, from left, Ben S. Bernanke of the Federal Reserve; Sheila Bair of the Federal Deposit Insurance Corporation; Timothy F. Geithner of the Federal Reserve Bank of New York; John Dugan, Comptroller of the Currency; Christopher Cox of the Securities and Exchange Commission, and John M. Reich of the Office of Thrift Supervision. (Mark Wilson/Getty Images)

Phillip Swagel is a professor at the School of Public Policy at the University of Maryland and was assistant secretary for economic policy at the Treasury Department from 2006 to 2009.

Five years later, it is clear that the decisive actions to stabilize the financial system were those of Oct. 14, 2008, when the United States government put taxpayer money into banks and guaranteed their lending. With American markets closed for the Columbus Day holiday, the chief executives of nine large banks trooped past waiting television cameras into the Treasury to be told - or in a few cases, persuaded - that they would receive $125 billion in taxpayer money from the $700 billion TARP fund and that the Federal Deposit Insurance Corporation would use emergency authority to guarantee bank debt and business checking accounts, neither of which were covered by the F.D.I.C.’s usual deposit insurance. The nine firms together accounted for about half of the assets and deposits in the United States banking system; another $125 billion was to be allocated to the 8,000-plus institutions that made up the rest of the system.

Shoring up banks en masse was meant to assure market participants that the United States government would not allow the banking system to collapse - a serious fear after the failures of Lehman, the American International Group, Washington Mutual and Wachovia over the preceding weeks. There was no promise that every institution would be saved: indeed, some banks were denied access to the two programs and were allowed to fail (and some went bust despite getting taxpayer money). On the whole, however, the Treasury and F.D.I.C. actions arrested the mounting financial market panic in the wake of the Sept. 15 bankruptcy of Lehman Brothers. The Great Recession was not averted, as the economy plunged in late 2008 and early 2009, with consequences still felt in a subpar labor market. But Oct. 14, 2008, was the beginning of the end of the financial crisis.

The causes of the crisis and policy response are the focus of a symposium on the financial crisis being held Tuesday in Chicago, hosted by former Treasury Secretary Henry M. Paulson Jr., whose Paulson Institute is affiliated with the University of Chicago, and by David Axelrod, the former adviser to President Obama who now heads the university’s Institute of Politics. I took part in a session Tuesday morning on the economics of the crisis. The event can be followed using the Twitter hashtag #fiveyearsafter.

Treasury’s capital injections were meant to give banks receiving the funds a buffer against further losses, supporting lending and bolstering confidence in both institutions and the financial system as a whole. Banks accepting TARP money paid a 5 percent dividend on the funds until the Treasury was paid back; this yield rises to 9 percent after five years (meaning that banks still holding onto their TARP funds will soon pay more). The capital injections included modest restrictions on executive compensation, but the government remained a mostly silent partner, taking a seat on a company’s board only for banks in serious distress.
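The dividend step-up can be sketched with simple arithmetic. In this minimal sketch, the 5 and 9 percent rates come from the program terms described above; the principal amount and the function name are hypothetical, for illustration only:

```python
# Annual dividend a bank owes on TARP capital it has not yet repaid.
# Per the program terms: 5% for the first five years, 9% thereafter.
def tarp_dividend(principal, years_held):
    rate = 0.05 if years_held <= 5 else 0.09
    return principal * rate

# A hypothetical bank still holding $1 billion of TARP capital:
print(tarp_dividend(1e9, 4))  # year 4: $50 million per year
print(tarp_dividend(1e9, 6))  # year 6: $90 million per year
```

The jump from $50 million to $90 million a year on the same capital is the "soon pay more" incentive to repay the Treasury before the five-year mark.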

It is hard to imagine, given the subsequent unpopularity of TARP, but the offer of funding initially was hugely popular, with some banks faxing in applications as soon as Treasury posted the form. TARP’s chief, Assistant Treasury Secretary Neel Kashkari, drew on talented staff from throughout the government to set up a $250 billion investment fund within weeks, evaluating banks and taking the legal and administrative steps to get the money into the financial system. A meeting of the TARP investment committee (on which I served) became a nightly ritual at the Treasury.

While some observers complained that banks were paying less for TARP money than Warren Buffett had wrung out of Goldman Sachs for a September 2008 investment, this was an intentional policy decision to foster broad uptake that would stabilize the banking system as a whole. A further criticism in the fall of 2008 was that banks were sitting on TARP money. This was a puzzling complaint. Banks had every incentive to make loans, since otherwise they would lose money on the 5 percent dividend. And the volume of loans depends not just on banks’ willingness to lend but also on the demand for loans, which was declining given the weak economy. In any case, one could not say that a bank was using its existing resources to make loans but not TARP funds; all dollars inside a bank are green. These facts, however, did not stop this variety of criticism.

While some of the Treasury investments went bad, the TARP bank program on the whole has been a financial success, even returning a profit for taxpayers. As of Sept. 30, 2013, the gain was $28 billion on just over $245 billion invested. The Treasury Department, under Secretary Timothy F. Geithner and now under Secretary Jacob J. Lew, has adeptly handled the process of selling off government stakes in banks to maximize the taxpayer return while avoiding disruptions to financial markets.

The F.D.I.C., through its Temporary Liquidity Guarantee Program, provided a three-year guarantee on bank debt. The program complemented the TARP capital injections by ensuring that participating banks had not just more capital but also assured access to funding, and could thus avoid the crippling runs that were the death knell of firms starting with Bear Stearns in March 2008.

Research by two University of Chicago professors indicates that the Columbus Day interventions had a positive return to society of $84 billion to $107 billion, reflecting a balance between the value to firms of the reduced likelihood that they would fail and the increased risk for taxpayers from providing the guarantee. Most of the benefit in this calculation actually arises from the debt guarantees rather than the capital injections, but the TARP capital was essential, since the F.D.I.C. would not have offered the guarantee and put its deposit insurance fund at risk without the extra protection against losses from TARP.

Among individual banks, the biggest winners of the government rescue were the weakest firms, notably Citigroup, Morgan Stanley and Goldman Sachs, which had relatively tenuous funding bases and thus were aided the most by the government guarantee on their borrowing. Stronger banks like JPMorgan Chase, in contrast, were disadvantaged by having their competitors propped up. In retrospect, JPMorgan’s chief executive, Jamie Dimon, can be seen as an economic patriot for supporting the Columbus Day actions to stabilize the financial system as a whole even though they hurt his company.

The actions taken on Columbus Day 2008 remain contentious, in particular because of concerns about bailouts and of banks that are too big to fail. It is astonishing for the United States government to tell investors, in effect, that it stands behind private firms in any industry. The 2010 Dodd-Frank financial regulatory reform law includes provisions such as increased capital and liquidity requirements that aim to make financial markets safer, and new authority for government officials to take over failing institutions with the hope of avoiding the panic that developed in the fall of 2008. Until the next crisis, however, it will be difficult to know whether these measures will be effective, either in allowing for a better policy response or in avoiding a crisis in the first place. In the meantime, the Treasury capital injections and F.D.I.C. funding guarantees stand as the most extraordinary measures taken five years ago to deal with the financial crisis, and the most effective.



Monday, October 28, 2013

A New View of the Corporate Income Tax


Bruce Bartlett held senior policy roles in the Reagan and George H.W. Bush administrations and served on the staffs of Representatives Jack Kemp and Ron Paul. He is the author of the forthcoming book “The Benefit and the Burden: Tax Reform - Why We Need It and What It Will Take.”

One of the least well-known aspects of tax policy-making is the distribution table, which is produced by the Joint Committee on Taxation, a Congressional committee, for every major tax bill. The tables show how the legislation affects taxpayers at different income levels. It is a generally understood, if unstated, rule that tax cuts should be evenly distributed in percentage terms while tax increases should primarily fall on the well to do.

A lot of the complexity of the tax code results from efforts to make the distribution tables look right. There are two key problems: those with low incomes generally don’t pay federal income taxes, while the wealthy pay a lot. Below is a typical distribution table for federal income taxes in 2013 from the Tax Policy Center, a joint venture of the Urban Institute and the Brookings Institution, that generally follows the Treasury’s methodology. These data do not include other taxes such as the payroll tax, corporate tax or estate tax.

Tax Policy Center

The table shows the share of total income taxes paid; a negative number indicates that a particular income group gets a net refund from programs like the earned income tax credit. In the aggregate, households earning less than $50,000 pay no federal income taxes; those making more than $1 million pay 34.2 percent of all federal income taxes. Thus a reduction in the top tax rate, a widely shared Republican goal, will necessarily cut taxes for the wealthy considerably.

Raising taxes on this group to offset that cut, so that the wealthy don’t benefit too much, means identifying tax provisions that primarily benefit them and restricting them in some way. This inevitably adds complexity to the tax code.

Calculating the distribution of the federal income tax is relatively straightforward, with the raw data coming directly from federal tax returns. But calculating the distribution of the corporate income tax is much more difficult. That is because corporations are artificial entities and all taxes must ultimately be paid by people. The question is who?

For many years, economists assumed that the corporate tax is paid almost entirely by shareholders. This is unquestionably true when a corporate income tax is first introduced. But over time, corporations adjust their affairs so as to minimize the tax, causing the burden to be shifted. For example, companies may try to raise prices to compensate for the corporate income tax, thus shifting some of the burden onto consumers.

Most economists don’t believe that much, if any, of the corporate tax is shifted onto consumers this way, because corporations face competition from noncorporate businesses, such as sole proprietorships and partnerships, and from businesses based in countries with higher or lower corporate taxes. Competition sets prices for goods and services without regard to the corporate tax rate.

While economists still believe that the bulk of corporate income taxes is paid by the owners of capital, in recent years they have come to believe that workers ultimately pay much of the tax in the form of lower wages. This results from lower capital investment due to a higher cost of capital, which reduces productivity and hence wages, and because capital investment moves to other countries where corporate income taxes are lower.

Economists have known about these effects for a long time; the trick has been estimating the effect precisely enough to incorporate the burden of the corporate tax into distribution tables. The Joint Committee on Taxation now believes that it understands the incidence of the corporate income tax well enough to do so and issued a study explaining its new methodology on Oct. 16.

The table shows the impact on the distribution of aggregate taxes, including the payroll and other taxes, by including the corporate tax, which was previously excluded from the calculation. The new methodology increases the overall tax burden by $216 billion, the revenue raised by the corporate income tax, an increase of 10.4 percent overall.

Joint Committee on Taxation

This is an important development, because cutting the corporate income tax is a bipartisan goal for tax reform. According to the Organization for Economic Cooperation and Development, the United States has the highest statutory corporate tax rate among advanced economies. This is widely believed to reduce investment in the United States, costing jobs and income for Americans.

Politically, it is now easier to show that a cut in the corporate tax rate will have benefits that are broadly shared, especially by those with incomes below $30,000. Conversely, it means that the Obama administration’s plan to raise new revenue by closing corporate tax loopholes will have a harder time gaining traction, because much of the burden will fall on those with low incomes.

A bigger problem for cutting the corporate tax may be new projections from the Congressional Budget Office showing that federal corporate tax receipts are expected to fall in coming years: rising interest rates will increase corporate interest expenses, and wages and depreciation allowances will grow as the economy and corporate investment expand.

Congressional Budget Office


If Prices Go Up, Incomes May Lag

Mark Wilson/Getty Images

My Sunday article about the benefits of moderate inflation drew a lot of skeptical responses, many about the premise that rising prices would lead to rising wages.

This is an old and well-established concern. “People dislike inflation because they assume that wages will not keep pace,” Robert Shiller wrote in a 1996 paper describing the results of a multinational survey.

And it is important for two reasons. First, it highlights one of the most powerful arguments against increasing inflation while the economy is weak.

If the Fed drives up inflation, prices would rise first. Even if wages follow, the very people who most need help would feel the short-term pain most acutely. It would feel something like a temporary national sales tax.

Second, there’s no guarantee that incomes would keep pace with higher inflation.

To be clear, inflation by definition increases total income. Someone ends up holding the new money. The question is about distribution: Are workers able to secure the raises necessary to keep pace with inflation, or does the extra money simply pad profits?

In some cases, it is clear that incomes will not rise in the short term.

Dick Diamond, a reader in Bay City, Ore., wrote to say that his pension rises at a fixed rate of 2 percent per year. “Inflation will wipe out any gains and if more than 2 percent will harm me,” he wrote.

This is a common problem. The nation’s largest pension fund, the California Public Employees’ Retirement System, also increases payments by up to 2 percent a year.

The federal minimum wage, now $7.25 an hour, does not adjust to keep pace with inflation. After accounting for inflation, minimum-wage workers now make much less than they did in the 1970s.

There is also reason to worry that increased foreign competition has eroded the bargaining power of American workers.

The investment banker and think tanker Daniel Alpert argues in a new book, “The Age of Oversupply,” that globalization is suppressing domestic wages, so that rising inflation would simply punish workers.

“Cheaper credit through monetary easing doesn’t yield much in an era when cheap capital already exists in abundance,” Mr. Alpert writes. “And policies that seek to stimulate growth run up against the fact that there is a huge oversupply of global labor and productive capacity.”

But against these concerns stands the bleak reality of the present situation: profound and enduring unemployment, slow growth, rising income inequality.

There are theoretical reasons to think that a little more inflation could alleviate each of these problems. We have the experience of the last five years as evidence that current economic policies have not done so.



Sunday, October 27, 2013

The Marginal Tax Rate Mess


Nancy Folbre is professor emerita of economics at the University of Massachusetts, Amherst.

After years of partisan debate over marginal tax rates on the rich, it seems we are now destined for even more acrimony over implicit marginal tax rates on the poor. When families receiving such means-tested benefits as food stamps or housing subsidies earn more income, their benefits are reduced. That’s what means-testing means.

The reduction in benefits is accurately described as an implicit tax. The only way to avoid such an implicit tax is either to provide universal benefits or no benefits at all. On a fundamental level, means-tested programs represent an uncomfortable compromise between those who want governments to help their citizens, those who don’t, and all those in between.

Almost anyone anywhere on the political spectrum would, relieved of opportunities for strategic maneuver, agree that the current configuration of means-tested programs (including the Affordable Care Act) is not nearly as equitable or efficient as it could be. The same criticism should be leveled at the current state of debate over means-tested programs, seldom characterized by open discussion of conceptual differences or clear articulation of basic assumptions.

The conceptual mess is, hopefully, easier to clean up than the policy mess. Pursuing this hope, I compare my views with those of my fellow Economix blogger Casey Mulligan, whose views on most matters economic are diametrically opposed to mine.

I advocate increased public investments in health, education and employment through largely universal programs. Professor Mulligan, both in his recent posts and his book “The Redistribution Recession,” attributes both unemployment and sluggish economic growth to excessively generous public assistance.

But Professor Mulligan and I share an interest in labor supply that distinguishes us from Keynesian economists more preoccupied with labor demand. I am going to start by building on that shared interest, by explaining our differences on one introductory question: What are implicit marginal tax rates, and why should we worry about them?

In subsequent posts, I’ll address some related issues, including the definition of labor supply, levels of spending on means-tested programs, impacts on incentives to paid employment and the relative importance of labor supply compared with labor demand. (If these are the issues you care about most, please hold your fire until I get to them.)

As a result of losing eligibility for means-tested benefits, low-income and middle-income families sometimes experience much higher marginal effective tax rates (sometimes exceeding 90 percent) than those at the top of the income distribution. Phase-outs for any one program may not be large, but participation in several programs creates a cumulative effect.

The most recent estimates from the Congressional Budget Office conclude that more than 20 percent of low- and moderate-income taxpayers face marginal tax rates of 40 percent or more, based on the effect of their earnings on federal and state individual income taxes, federal payroll taxes and the phasing out of benefits from the Supplemental Nutrition Assistance Program.
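The stacking effect is easy to see with hypothetical numbers; the specific percentages below are illustrative, not taken from the C.B.O. analysis. Each program's phase-out acts like an extra tax on the next dollar earned:

```python
# Effective marginal tax rate on one additional dollar of earnings,
# summing explicit taxes and benefit phase-out rates.
def effective_mtr(components):
    return sum(components.values())

# Hypothetical rates for a low-income working family:
rates = {
    "federal income tax": 0.15,
    "state income tax":   0.05,
    "payroll tax":        0.0765,
    "SNAP phase-out":     0.24,   # benefits fall 24 cents per extra dollar
}
print(f"{effective_mtr(rates):.1%}")  # combined rate: just over 51 percent
```

No single component looks punishing on its own, yet the family keeps less than half of each additional dollar, which is exactly the cumulative effect described above.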

Geographical variation is huge. For instance, researchers from the Urban Institute and Brookings Institution estimate that in 2008, a single parent with two children participating in Temporary Assistance for Needy Families and the Supplemental Nutrition Assistance Program (food stamps) who moved from no employment to poverty-level earnings would actually receive more in benefits in New Jersey but experience a significant reduction in benefits in Hawaii.

High effective marginal tax rates rightfully distress the families most directly affected by them. The complexity of means-tested public assistance programs makes it difficult to calculate the gains from additional hours of employment accurately, which is inherently discouraging.

But families are probably able to identify points where they “fall off a cliff,” as when a small increase in income renders them ineligible for any Medicaid assistance.

Means-tested benefits are also politically divisive, fostering resentment among those who believe they will never personally benefit from them.

These general criticisms, however, neither rely on nor complement the assumptions that Professor Mulligan makes in his quantitative estimates. As he sets out to examine the impact of marginal effective tax rates on a variety of different employment outcomes, Professor Mulligan relies on measures of the combined impact of taxes and benefit reductions, following the common practice of including Social Security taxes paid by both employers and employees.

But as a recent C.B.O. report on marginal effective tax rates notes, inclusion of Social Security taxes is problematic, because the payments made by both workers and employers offer future benefits in the form of retirement income and survivors insurance. We don’t consider expenditures on private insurance as a tax payment, so why should expenditures on public insurance be considered as such?

Nor are these explicit taxes the only ones that purchase public insurance or help finance direct benefits to taxpayers. Safety-net programs such as Temporary Assistance for Needy Families and the Supplemental Nutrition Assistance Program offer potential benefits to those who may need them in the future. State income taxes help finance education for many taxpayers’ families. The only tax payments that become “unavailable” to the worker are those that will be spent entirely on other people whom the taxpayer cares nothing about.

Of course, it would also be very difficult to actually measure marginal taxes net of marginal benefits. But the assumption that taxpayers derive no marginal benefits even from programs such as Social Security seems implausible to me.

If people don’t derive any utility from the taxes they pay, it is difficult to explain why many have voted for the politicians who put those taxes into place.

This reasoning suggests that benefit reductions might actually reduce utility more than tax payments, but there are other reasons that they may reduce it less. The marginal federal and state income tax rate goes up as families earn more and enter a higher tax bracket, but the marginal implicit tax rate goes down once taxpayers have exceeded the earnings at which they are no longer eligible for means-tested benefits.

In other words, low- or middle-income families may see their tax rates go up as they lose eligibility for benefits, but if they continue to work and earn more they may well reach a point at which this rate will decline.
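A toy benefit schedule makes the point; all of the numbers here are hypothetical. The implicit rate is high only inside the phase-out band and falls back to the explicit tax rate once benefits are exhausted:

```python
# Net income under a 20% income tax plus a $5,000 benefit that phases
# out at 50 cents per dollar earned between $10,000 and $20,000.
def net_income(earnings):
    tax = 0.20 * earnings
    if earnings <= 10_000:
        benefit = 5_000
    else:
        benefit = max(0, 5_000 - 0.5 * (earnings - 10_000))
    return earnings - tax + benefit

# Share of the next $100 of earnings lost to taxes and benefit cuts.
def marginal_rate(earnings, step=100):
    kept = net_income(earnings + step) - net_income(earnings)
    return 1 - kept / step

print(f"{marginal_rate(5_000):.0%}")   # below the phase-out band
print(f"{marginal_rate(15_000):.0%}")  # tax plus benefit reduction
print(f"{marginal_rate(25_000):.0%}")  # benefits exhausted, rate falls back
```

In this sketch the rate is 20 percent below the band, jumps to 70 percent inside it, and returns to 20 percent above it, which is the decline that a family continuing to earn more would eventually reach.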

Any worker who can estimate her own net effective marginal tax rate (a detailed calculation that exceeds the capacity and curiosity of many economists with Ph.D.’s) can also figure out that the labor market often rewards effort and experience more generously in the long run than in the short run. As any college student seeking an internship can explain, it is economically rational to work many hours for a zero wage if that effort will improve future job market opportunities.

Workers’ perceptions of their future opportunities in the labor market may affect their labor supply as much as, if not more than, their current marginal effective tax rate.

In making his argument that means-tested benefits have discouraged paid employment, deepened recession and prolonged unemployment, Professor Mulligan often describes his model as a simple “textbook analysis of labor supply.” For reasons I’ll lay out next week, that is a big reason that I believe his model is incorrect.



Saturday, October 26, 2013

Nudging Girls Toward Computer Science

My It’s the Economy column on Sunday looks at why traditional economic incentives alone don’t seem to be enough to encourage more women (or men, for that matter) to go into highly lucrative computer science jobs, which can often provide great flexibility to boot.

Part of the issue, it seems, is exposure. Most people don’t come into contact with computer scientists or engineers in their daily lives, and don’t really understand what they do. American schools don’t do a great job of teaching computer science skills either.

Trying to remedy this are numerous nonprofit and educational organizations, among them Code.org, which lobbies to get more computer science classes in schools. Others try to provide computer science lessons outside of a traditional school setting. Girls Who Code, for example, has eight-week boot camps that teach middle and high school girls programming skills - in languages like Java, PHP, and Python - as well as algorithms, Web design, robotics, and mobile app development.

But access to coding lessons isn’t the only factor in improving the talent pipeline. Role models (real and fictional) are important, too. Take a guess, for instance, as to what career aspiration is named most frequently on applications to Girls Who Code.

Nope, not electrical engineer, software developer, or really anything directly related to computer science or coding. In fact, many of the applicants don’t even know these jobs exist, or what computer science is. (Typically they’re applying because a teacher or family friend urged them to.)

The answer is forensic scientist. Not because any of the girls actually know forensic scientists, mind you, but because they’ve seen “C.S.I.” or maybe “Bones,” “NCIS,” “Crossing Jordan,” “Law & Order: S.V.U.,” or “Rizzoli and Isles,” or some other show in which a cool chick in a white lab coat uses scientific know-how to save the day. These shows have been credited with helping turn forensic science from a primarily male occupation into a primarily female one.

The second most common career aspiration that the Girls Who Code applicants name is medicine. Doctors, unlike programmers, are people the girls have been exposed to and whose work even much younger children can understand. They’ve met doctors in their personal lives, and have also seen glamorous yet relatable female physicians on TV (think “Grey’s Anatomy,” “Saving Hope,” “The Mindy Project,” “Scrubs”).

There is also statistical evidence suggesting that role models encourage women’s interest and persistence in the sciences. Women’s enrollment and attrition rates in STEM college majors (science, technology, engineering and mathematics) are infamously bad. The reasons often cited include feelings of isolation, lack of support, or maybe just caring too much about grades (there is typically less grade inflation in STEM fields than in the humanities or social sciences). One study found, though, that women who took math and science classes from female professors performed better, were more likely to take future math and science courses, and were more likely to graduate with a STEM degree.

Acting on this finding is challenging, of course, since the implied solution presents a Catch-22: How do you rapidly increase the number of female STEM professors if there are so few women in the STEM pipeline?

There are other efforts to pair up girls or young women with high-achieving technologists. Girls Who Code introduces its pupils to potential mentors who work at high-powered places like Google and Goldman Sachs. Similarly, one of the most important functions of the Anita Borg Institute’s annual Grace Hopper Celebration of Women in Computing, I’ve been told, is to expose women studying computer science to successful women in their field.

But still, these kinds of efforts are difficult to scale up. Which is one reason why so many in the industry are pinning their hopes on Hollywood to do some of the heavy lifting, just as it did to popularize forensic science. Pop culture, after all, is a much more scalable form of propaganda than one-on-one introductions at schools, conferences or summer programs.

Right now there’s very little representation of computer science or engineering occupations on TV or in movies, and even less representation of female characters in these fields. To be sure, over the years there have been isolated examples of female characters with sophisticated computing skills, including Chloe O’Brian on “24”; Skye in a new ABC series called “Agents of S.H.I.E.L.D.” (which a reader actually alerted me to after I wrote the magazine column); Trinity in “The Matrix” movies; and the lead character in a recently announced MTV pilot called “Eye Candy.” But these heroines, or more often supporting characters, are still few and far between.

Creating a hit TV show with a science-minded heroine is easier said than done, of course. It’s hard to make a TV show popular, even harder to force audiences to care about a particular character on that show, and probably triply hard to get a very specific audience (that is, impressionable little girls) emotionally invested in that character on that popular TV show. (Lisbeth Salander from the “Girl With the Dragon Tattoo” may be the most famous female hacker character of recent years, but the books and films she appears in are probably not appropriate for the 12-year-old girls whose career paths the tech industry and women’s groups most want to influence.)

Cumulative media messages about career options matter all the same. Even if there is no one breakout programmer babe who adorns tweens’ bedroom walls, having little girls repeatedly see tech-savvy heroines save the day while simultaneously earning a decent paycheck (on TV, if not in their daily lives) can still help shape career trajectories.



Friday, October 25, 2013

How the Budget Debate Could Help the Economy

Jared Bernstein is a senior fellow at the Center on Budget and Policy Priorities in Washington and a former chief economist to Vice President Joseph R. Biden Jr.

I remain truly and deeply disturbed by the immediate pivot from the debt-ceiling debacle and government shutdown to yet another set of budget negotiations, with not even a head fake toward dealing with the slogging economy.

Still, is there any lemonade to be made out of this lemon of a budget conference that’s about to get under way? I think there is.

First, it is a good sign that all sides are eschewing the elusive “grand bargain,” in which the Democrats accept significant cuts to entitlements and the Republicans accept significant new tax revenues. That’s good news, because that route leads to gridlock, and presents a real risk of reducing essential income and health supports for economically vulnerable retirees and others who depend on Social Security, Medicare and Medicaid.

On this latter point, while there are obviously wealthy beneficiaries of Social Security and Medicare who would be fine without those programs, far more depend on them. Kathy Ruffing, a colleague at the Center on Budget and Policy Priorities, points out that excluding Social Security benefits, the poverty rate among the elderly would be an astounding 44 percent. Including those benefits, it is 9 percent. They lift 22 million people out of poverty.

In the hands of this Congress, a “grand bargain” could easily be a disaster for these folks.

Second, there seems to be a bit of a consensus forming around replacing some of the sequester cuts that are partly responsible for the fiscal drag that’s been dampening economic growth. To remind you, sequestration is indiscriminately lowering both defense and non-defense discretionary spending by $109 billion per year.

The Congressional Budget Office recently estimated that those cuts will cost the job market around 800,000 jobs by the end of next year. So replacing some or all of the cuts would surely help. The question is: replace them with what? Tax revenues are most likely off the table (though at least one prominent Republican member gave an opening on revenues Friday), and replacing one set of cuts with another shouldn’t be expected to do much good.

Unless, that is, we back-load the replacement cuts. Replacing a year (two would be better) of sequestration cuts with cuts that unfolded over many more years would be helpful both to the current economy and to deficits in future years.

But there’s an important wrinkle. The replacement cuts will most likely have to come from the mandatory side of the budget, which includes entitlements. Now, there are definitely entitlement savings (say, from Medicare) in the president’s budget that do not affect beneficiaries, like reducing the amount that Medicare spends on drugs by allowing the program to use its clout to get better bargains from drug companies. And there’s other wasteful spending on this side of the budget, like farm subsidies, that could also contribute.

While Democrats should be open to this kind of trade-off, they should do so only to replace non-defense cuts. Here’s how Bob Greenstein of the Center on Budget and Policy Priorities put it the other day:

“I’m comfortable in replacing some or all of non-defense cuts with well-designed savings from mandatory programs like entitlements, but not on the defense side; one doesn’t want to create a precedent for undoing defense sequestration with domestic entitlement cuts.”

If Republicans want to replace sequestered defense spending, be my guest: go find some new tax revenue, close a loophole, whatever. But no entitlement cuts, even inoffensive ones that don’t ding beneficiaries, to replace defense cuts.

So, I know this is all pretty down in the weeds, and I also know that by sounding a bit optimistic about a deal like this, I’m seriously lowering expectations. So let me be very clear: all we’re talking about here is bending ourselves into a fiscal pretzel to undo some of the damage the Congress has already done. No one should mistake any of this for actually tackling the real economic challenges we face.

But if the parties can agree to offset a year or two of sequestration with back-loaded savings that protect economically vulnerable beneficiaries, and if they can get to all this while cordoning off the radical right who are probably eager for another shutdown or default threat, then, in today’s benighted Washington, that would actually be an advance.



How No-Strings Aid Affects the Poor

GiveDirectly is an unusual charity. Donors give money. GiveDirectly, well, gives it directly to the poor. It does not tell them how to spend it or when to spend it. And it does not give money to a specific group, like mothers or farmers or the elderly.

Development economists think that such so-called “unconditional cash transfers” might be a powerful and in some cases underutilized tool to help reduce poverty. That is partly because they trust the recipients of the money to use it in the way they deem best. Another aid group might give them specific goods or services - shoes, medical care or education, for instance. But maybe what they really need is a sewing machine to help their small-scale businesses get off the ground. Or maybe a recipient would really like to attend a family member’s funeral or pay for a wedding. Unconditional cash transfers let her do that and trust that she is doing what is best for her.

But new evidence from a randomized controlled trial that GiveDirectly carried out in western Kenya shows that recipients are not just spending their transfers on a one-time boost to consumption that leaves their overall well-being unchanged. Instead, the approach seems to have a powerful impact on their quality of life.

In the trial, GiveDirectly sent certain poor rural households money through M-Pesa, the Kenyan mobile money system. There was variation in whether it gave the money to the household’s wife or husband, whether it went out in a lump sum or installments and in the size of the transfer. (Read about how GiveDirectly conducted the study here.) The study’s authors - Johannes Haushofer of the Jameel Poverty Action Lab at the Massachusetts Institute of Technology and Jeremy Shapiro, one of the founders of GiveDirectly - then surveyed the recipients and compared them with households that did not receive any transfers.

Here are some of the headline results:

  • The value of the recipients’ “assets” - like farm animals and metal roofs - increased 58 percent.
  • The recipients were 23 percentage points more likely to have a metal roof rather than a thatch roof, and the value of their livestock increased by half.
  • The transfer reduced recipients’ hunger, increasing food consumption 20 percent and reducing the likelihood of a respondent going hungry the week before by 30 percent.
  • The recipients invested some of their money, increasing their revenue from animal husbandry and other small-scale enterprises.
  • The recipients were happier, more satisfied and felt less stress, with big transfers even reducing the recipients’ levels of cortisol, a stress hormone.
  • But not all poverty indicators improved, with little evidence of an effect on health or education.

The study does not look at the efficacy of cash transfers in relation to other interventions, like giving cash transfers for a specific business purpose or making in-kind donations. But it does look at dozens of other variables, like the likelihood of having a child vaccinated and the value of a household’s birds. There was more on GiveDirectly in a recent article from The New York Times Magazine. An earlier Economix post also looked at unconditional transfers.
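The logic of such a randomized trial is worth spelling out: because households were assigned to receive transfers at random, treated and untreated households are comparable on average, so the difference in mean outcomes between the two groups estimates the transfer’s effect. A minimal sketch on simulated data (all numbers here are made up for illustration, not drawn from the Haushofer-Shapiro study):

```python
import random

random.seed(0)

# Hypothetical outcome: food consumption in arbitrary units.
# Control households draw from a baseline distribution; treated
# households draw from the same distribution shifted up by an
# assumed true effect of the cash transfer.
TRUE_EFFECT = 20.0
control = [random.gauss(100, 15) for _ in range(500)]
treated = [random.gauss(100 + TRUE_EFFECT, 15) for _ in range(500)]

def mean(xs):
    return sum(xs) / len(xs)

# With random assignment, the simple difference in group means is an
# unbiased estimate of the average treatment effect.
estimate = mean(treated) - mean(control)
print(f"estimated effect: {estimate:.1f} (true effect: {TRUE_EFFECT})")
```

With 500 households per arm, the estimate lands close to the true effect; the study’s real analysis is more elaborate (covariates, multiple outcomes), but this comparison is its core.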





Thursday, October 24, 2013

The Midterm Grade for HealthCare.gov

President Obama delivering remarks about the problematic debut of HealthCare.gov in the White House Rose Garden on Monday. (Mark Wilson/Getty Images)

Uwe E. Reinhardt is an economics professor at Princeton. He has some financial interests in the health care field.

I was challenged by people who commented on my last post to opine on the troubled rollout of the federal health insurance exchange HealthCare.gov, and I will oblige.

The best place to start is President Obama’s remarks in the Rose Garden of the White House on Monday.

Shortly before the president’s appearance, White House officials let it be known that the “president will directly address the technical problems with HealthCare.gov - troubles he and his team find unacceptable.” But in that Rose Garden appearance, the president did not explain what the technical problems with HealthCare.gov were, though he did acknowledge their existence and stated “there is no excuse” for them.

He then promised that in a techno-surge he would recruit the best information technology talent in the country to come to the rescue and fix the problems. It made me wonder why the A-team, as the White House now calls it, was not enlisted in the first place.

President Obama taught constitutional law at the University of Chicago Law School. How would he have graded a student’s performance on, say, a term paper or test that the professor viewed as “unacceptable,” especially when there was “no excuse” for the paper’s deficiencies?

One would hope that the grade would have been F, even under modern grade inflation. I certainly would affix that grade to such inexcusably deficient work.

But who exactly should be assigned the F for the troubled rollout of HealthCare.gov?

At the Rose Garden ceremony, President Obama noted, “There’s no sugar coating it, the Web site has been too slow, people are getting stuck during the application process, and I think it’s fair to say that nobody is more frustrated by that than I am.”

That makes it sound as if the president was surprised and then angered by the poor performance of HealthCare.gov. Indeed, in a television interview Tuesday with Dr. Sanjay Gupta on CNN, the secretary of health and human services, Kathleen Sebelius, appeared to suggest as much, even though HealthCare.gov is reported to have crashed in a test days before its Oct. 1 debut, when only 100 people tried to register simultaneously.

As someone who has lectured on corporate governance and served on corporate boards, I find Secretary Sebelius’s statement astounding. Is this how the project was managed? They knew the Web site was not working and yet decided to go ahead with it anyway, without the president’s personal O.K. for so strategic and risky a decision?

Once elected, a president becomes chief executive of a giant federal enterprise. Anyone familiar with corporate management would have thought that for as ambitious and technically complex a project as the initial rollout of HealthCare.gov - so important to many uninsured Americans and so politically important to the White House - the chief executive would have remained in very close touch with the management team overseeing the project and thus would have been briefed daily, or at least weekly, on its progress and especially on any problems with it.

Woe to the members of the management team in a corporation if problems with a project are hidden from the chief executive when they become known, exposing the chief executive to embarrassing public relations surprises. Heads would roll. The board, however, would assign the blame for such problems not primarily to the management team but to the chief executive himself or herself, who hired and supervised the team.

From that perspective, the blame for the disastrous rollout of HealthCare.gov goes to its entire management team, to be sure, but primarily to the chief executive atop that project. In my view, not only does the proverbial buck stop on the chief executive’s desk; for the management of this particular project, the grade of F goes there as well.

It is worth reminding readers, however, that grades on midterm papers or tests do not constitute the overall grade in a course. Students receiving an F on a midterm paper or test often end up with a respectable overall course grade, spurred on in part by that very failure.

Similarly, with enormous effort and, one hopes, constant future supervision by the chief executive, there is hope that the technical problems encountered so far can be fixed in time, with the celebrated A-team of software experts now on the scene.

Finally, it bears emphasizing that the ill-fated rollout of HealthCare.gov should not be taken as a commentary on the concept of health insurance exchanges in general, nor on the Affordable Care Act.

The idea of using means-tested public subsidies to help low-income Americans purchase competitively offered private health insurance sold through health-insurance exchanges has been popular among policy analysts and policy makers of both political parties since the 1970s. Any such exchange will have to have roughly the same kind of architecture and tasks as those required for HealthCare.gov, as is shown in the sketch below.

Particular versions of this general construct were built into the Clinton health plan in the 1990s and the Medicare Prescription Drug, Improvement and Modernization Act of 2003 (Part D of Medicare). It was also part of the health reform plan proposed by Senator John McCain, Republican of Arizona, during the presidential campaign of 2008 and of the Patients’ Choice Act proposed by Senator Tom Coburn, Republican of Oklahoma, in 2009.

Indeed, it has been the foundation of every health reform proposal in the United States other than the single-payer Medicare for All idea since the 1970s. And it would be the core of the defined contribution plan now being proposed by Representative Paul Ryan, Republican of Wisconsin, for the Medicare program.

Now, it may be argued that private electronic health insurance exchanges - for example, eHealthInsurance.com - have long been available to Americans in the market for individually purchased private health insurance, obviating the need for a new HealthCare.gov. That, however, would be an unfair comparison.

EHealthInsurance.com is a purely passive exchange that merely lists the policies, and estimates of their premiums, for the sundry health insurers listed on the exchange. It does not grant subsidies toward the purchase of health insurance or establish eligibility for those subsidies, nor does it guarantee prices. It simply refers interested individuals to insurers to purchase policies, which are not community rated but actuarially priced. Such an exchange can be quite simple.

If one wants to couple means-tested federal or state government contributions toward private coverage - as the health-reform plans proposed by both parties do - then by necessity the insurance exchange must ping and interact with numerous other Web sites, each running software of various languages and vintages.

The sketch below illustrates that construct, but only for the most important linkages that must be pinged. HealthCare.gov probably has to ping still other sites. Such an exchange is incomparably more difficult to establish and prone to computer glitches than is, say, eHealthInsurance.com.
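One way to see why so many linkages make the system fragile: a subsidy determination only succeeds if every remote check succeeds, so end-to-end reliability is roughly the product of each dependency’s availability. The sketch below is purely illustrative - the service names and eligibility rules are hypothetical stand-ins, not HealthCare.gov’s actual interfaces:

```python
# Hypothetical stand-ins for the remote checks an exchange must make
# (identity proofing, income verification, residency status, etc.).

def check_identity(applicant):
    return applicant.get("ssn") is not None

def check_income(applicant):
    return applicant.get("income") is not None

def check_residency(applicant):
    return applicant.get("citizen_or_lawful_resident", False)

CHECKS = [check_identity, check_income, check_residency]

def eligible_for_subsidy(applicant):
    # Every check must succeed before a subsidy can even be computed;
    # each one is a separate remote dependency.
    return all(check(applicant) for check in CHECKS)

def pipeline_availability(per_service_uptime, n_services):
    # If each service is up independently with the given probability,
    # the whole chain works only when all of them do.
    return per_service_uptime ** n_services

applicant = {"ssn": "000-00-0000", "income": 25000,
             "citizen_or_lawful_resident": True}
print(eligible_for_subsidy(applicant))
print(round(pipeline_availability(0.99, 10), 3))  # about 0.904
```

Even with each of ten dependencies up 99 percent of the time, the chained pipeline works only about 90 percent of the time - a rough intuition for why HealthCare.gov is so much harder to build than a passive listing site.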

But several states did manage to establish such complex health insurance exchanges on time under the Affordable Care Act, with only minor rollout glitches of the sort one would expect. Somehow they managed.

With proper management and more energetic work earlier on, and untainted by the political desiderata reported to have affected the architecture of HealthCare.gov, that Web site’s management team should have been able to achieve the same success. It did not, hence the midterm grade F.



Wednesday, October 23, 2013

A Very Expensive Tea Party

DESCRIPTION

Simon Johnson, former chief economist of the International Monetary Fund, is the Ronald A. Kurtz Professor of Entrepreneurship at the M.I.T. Sloan School of Management and co-author of “White House Burning: The Founding Fathers, Our National Debt, and Why It Matters to You.”

The recent government shutdown and confrontation over the federal debt ceiling gained the Republicans nothing, at best - and may have cost them politically as a party. But it slowed the economy and undermined confidence in public finances in a way that will have a significant negative impact on future budgets of the United States. None of this should make for an appealing strategy, but Tea Party Republicans are giving every indication that they want to do the same thing again early next year. Their more moderate colleagues need to take a firmer hand.

On the political gains from recent tactics, it is hard to find any good news for the Republican side as a whole. Representative Thomas H. Massie, Republican of Kentucky, got it right when he said, “Goose egg, nothing, we got nothing,” in terms of policy changes. And opinion polls moved more sharply against Republicans than some had expected. Prominent Republicans including Senators John McCain of Arizona and the minority leader, Mitch McConnell of Kentucky, have now come out strongly against further shutdowns.

Unfortunately, they do not control Republicans in the House of Representatives.

The shutdown and debt ceiling brinkmanship did real damage to the economy. The immediate and direct costs are nicely summarized in a blog post by James H. Stock - an academic economist on the president’s Council of Economic Advisers. His assessment is that the effect is a

0.25 percentage point reduction in the annualized G.D.P. growth rate in the fourth quarter and a reduction of about 120,000 private sector jobs in the first two weeks of October (estimates use indicators available through Oct. 12th).

This is actually lower than the impact expected by some private-sector forecasters; after talking with people I trust, I would not be surprised if the overall impact ends up being closer to a 0.5 percentage point reduction in the fourth-quarter growth rate (annualized, as in the quotation from Mr. Stock).

Does the country make up this growth later, for example because federal workers can now pay their bills? Probably not, because there is a persistent effect in terms of increasing uncertainty about public finances and about economic performance - and this will depress both some kinds of consumption and many forms of productive investment.

I’ve explained the point about uncertainty before - and I say the same on Capitol Hill at every opportunity. If people really believe that the government could default on its debts or otherwise not make payments to which it is committed, that introduces a huge element of uncertainty into many economic calculations. When you are less certain about what is going to happen tomorrow, you tend to postpone big irreversible decisions - like buying a new car or building a factory.

Scott Baker, Nick Bloom and Steven Davis have done really interesting work on the general issue of what causes policy uncertainty - and what kind of impact this can have. You can follow their daily data online; the latest available is from Oct. 23. Last week uncertainty increased and has now fallen back somewhat.

(I also recommend their interesting retrospective series on news coverage mentions of the terms “government shutdown” and “debt ceiling”; this confirms that the tactic has been much more prominent in the news recently than at any time since the mid-1980s, with the exception of the Gingrich shutdown in 1995-96.)

Look also at the Gallup Economic Confidence Index, which has fallen sharply to a level not seen since the last debt ceiling showdown in August 2011. (Thanks to Mr. Bloom for emphasizing this series.)

Members of the Tea Party movement express concern about the longer-run federal budget - and the potential negative impact of future debt levels. But their tactics are directly worsening the budget over exactly the time horizon that they say they care about.

The latest forecasts from the Congressional Budget Office (released in September) show a short-term improvement in the budget, i.e., a lower deficit, and then debt levels rising further down the road, with the debt-to-G.D.P. ratio reaching around 100 percent by about 2040.

The major long-term issue the United States faces is rising health-care costs (not just the part that the federal government pays for), but an important part of our projected future deficits is interest costs, i.e., what the government needs to pay holders of its debt.

The United States dollar is the world’s primary reserve currency and safe haven; the asset that major investors, such as central banks and big international companies, actually buy is United States Treasury debt. In the short term, when Congress acts in a crazy and irresponsible fashion that makes the world feel more unstable, investors “seek safety” and actually buy American government debt, pushing down yields (bond prices and yields move inversely to each other). The United States is the only country in the history of the world that has this feature; most countries, when they act irresponsibly, see their bond yields go up.
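The inverse relationship between bond prices and yields follows directly from discounting: a bond’s price is the present value of its fixed payments, so a higher discount rate (yield) means a lower price, and vice versa. A quick sketch with a hypothetical 10-year bond (annual compounding; the coupon and yields are illustrative, not actual Treasury figures):

```python
def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of a bond's coupons plus principal,
    discounted at the prevailing yield (annual compounding)."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t
                     for t in range(1, years + 1))
    pv_principal = face / (1 + yield_rate) ** years
    return pv_coupons + pv_principal

# A hypothetical 10-year bond, $100 face value, 3% coupon:
print(round(bond_price(100, 0.03, 0.03, 10), 2))  # yield = coupon: par (100.0)
print(round(bond_price(100, 0.03, 0.02, 10), 2))  # yield falls -> price rises
print(round(bond_price(100, 0.03, 0.04, 10), 2))  # yield rises -> price falls
```

So when safe-haven buyers bid Treasury prices up, the implied yield falls - which is why a crisis of Congress’s own making can, perversely, make federal borrowing temporarily cheaper.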

Over a longer period of time, of course, investors get the message: United States Treasury debt is not so safe and cannot be trusted as in the past. They should look for alternative assets. The euro may bounce back. The British pound, Swiss franc and Japanese yen have all been contenders in the past. And the most realistic threat over the next 20 years is probably the rising international role of China’s renminbi.

Perhaps unwittingly, the Tea Party is helping to fulfill the prophecies of my Peterson Institute colleague Arvind Subramanian, who has long predicted that the renminbi will eclipse the dollar - and that China is likely to surpass the United States in terms of economic weight and political clout. Speeding up such a transition will directly increase the interest cost of the national debt and run exactly counter to what Tea Party representatives claim they want to do. The change would make the longer-run public finances of the United States worse, not better.

In a parliamentary democracy, this kind of careless approach would condemn the responsible party to a long period of fruitless opposition, like that experienced by Britain’s Labour Party in the 1980s and early 1990s.

In the American system, with its carefully conceived checks and balances, an organized and well-funded minority can do a lot more damage - as we have just been reminded. The only force that can rein in Tea Party extremism - and get the nation off the road to fiscal ruin - is resurgence among Republican moderates. Unfortunately, their recent performance has not been impressive.