Recommendations for Further Reading
October 28, 2016

Catastrophe insurance, housing discrimination, and tuned-out voters

Insurance contracts with "parametric triggers" – meaning payouts are based on observable phenomena like Richter-scale readings or hurricane wind speed – are less expensive because there is limited scope for moral hazard.

harvepino/Bigstock

 

Smorgasbord

More than 40 million people living in the United States were born in other countries, and almost an equal number have at least one foreign-born parent. Together, the first generation (foreign-born) and second generation (children of the foreign-born) comprise almost one in four Americans. It comes as little surprise, then, that many U.S. residents view immigration as a major policy issue facing the nation. Not only does immigration affect the environment in which everyone lives, learns, and works, but it also interacts with nearly every policy area of concern, from jobs and the economy, education, and health care, to federal, state, and local government budgets.

Blau and Mackie (2016)

That’s the beginning of The Economic and Fiscal Consequences of Immigration, a National Academy of Sciences report edited by Francine D. Blau and Christopher Mackie. September 2016. A “prepublication copy” can be downloaded here. The report can be usefully read in combination with the Symposium on Immigration in the Autumn 2016 issue of the Journal of Economic Perspectives.

¤

Olivier Blanchard asks, “Do DSGE Models Have a Future?” in the Peterson Institute for International Economics Policy Brief 16-11, August 2016:

For those who are not macroeconomists, or for those macroeconomists who lived on a desert island for the last 20 years, here is a brief refresher. DSGE stands for ‘dynamic stochastic general equilibrium.’ The models are indeed dynamic, stochastic, and characterize the general equilibrium of the economy. They make three strategic modeling choices: First, the behavior of consumers, firms, and financial intermediaries, when present, is formally derived from microfoundations. Second, the underlying economic environment is that of a competitive economy, but with a number of essential distortions added, from nominal rigidities to monopoly power to information problems. Third, the model is estimated as a system, rather than equation by equation as in the previous generations of macroeconomic models. … [C]urrent DSGE models are best seen as large-scale versions of the New Keynesian model, which emphasizes nominal rigidities and a role for aggregate demand. … There are many reasons to dislike current DSGE models. First: They are based on unappealing assumptions. Not just simplifying assumptions, as any model must, but assumptions profoundly at odds with what we know about consumers and firms. ... Second: Their standard method of estimation, which is a mix of calibration and Bayesian estimation, is unconvincing. ... Third: While the models can formally be used for normative purposes, normative implications are not convincing. ... Fourth: DSGE models are bad communication devices. A typical DSGE paper adds a particular distortion to an existing core. It starts with an algebra-heavy derivation of the model, then goes through estimation, and ends with various dynamic simulations showing the effects of the distortion on the general equilibrium properties of the model. … I see the current DSGE models as seriously flawed, but they are eminently improvable and central to the future of macroeconomics.

Blanchard (2016)

¤

Nadim Ahmad and Paul Schreyer of the OECD ask, “Are GDP and Productivity Measures Up to the Challenges of the Digital Economy?” in the International Productivity Monitor, published by the Ontario-based Centre for the Study of Living Standards, Spring 2016. From the abstract:

Recent years have seen a rapid emergence of disruptive technologies with new forms of intermediation, service provision and consumption, with digitalization being a common characteristic. These include new platforms that facilitate peer-to-peer transactions, such as AirBnB and Uber, new activities such as crowd sourcing, a growing category of the ‘occasional self-employed’ and prevalence of ‘free’ media services, funded by advertising and ‘Big data’. Against a backdrop of slowing rates of measured productivity growth, this has raised questions about the conceptual basis of GDP, and whether current compilation methods are adequate. This article frames the discussion under an umbrella of the Digitalized Economy, covering also statistical challenges where digitalization is a complicating feature such as the measurement of international transactions and knowledge-based assets. It delineates between conceptual and compilation issues and highlights areas where further investigations are merited. The overall conclusion is that, on balance, the accounting framework for GDP looks to be up to the challenges posed by digitalization. Many practical measurement issues remain, however, in particular concerning price changes and where digitalization meets internationalization.

Ahmad and Schreyer (2016)

¤

Theodore Talbot and Owen Barder discuss Payouts for Perils: Why Disaster Aid Is Broken, and How Catastrophe Insurance Can Help to Fix It in Center for Global Development Policy Paper 087, July 2016:

[T]he principle is simple: rather than transferring risk to a re-insurer, an insurance firm creates a single company (a ‘special purpose vehicle’, or SPV) whose sole purpose is to hold this risk. The SPV sells bonds to investors. The investors lose the face value of those bonds if the hazard specified in the bond contracts hits, but earn a stream of payments (the insurance premiums) until it does, or the bond’s term expires. This gives any actor—insurer, re-insurer, or a sovereign risk pool like the schemes in the Pacific, Caribbean, and Sub-Saharan Africa, which we discuss below—a way to transfer risks from their balance sheets to investors. … Tying contracts to external, observable phenomena such as Richter-scale readings for the extent of earthquakes or median surface temperature for droughts means that risk transfer can be specifically tailored to the situation. There are three varieties of triggers: parametric, modelled-loss, and indemnity. Parametric triggers are the easiest to calculate based on natural science data—satellite data reporting a hurricane’s wind speed is transparent, publicly available, and cannot be affected by the actions of the insured or the insurer. When a variable exceeds an agreed threshold, the contract’s payout clauses are invoked. Because neither the insured nor the insurer can affect the parameter, there is no cost of moral hazard, since the risks—the probabilities of bad events happening—cannot be changed. Modelled losses provide estimates of damage based on economic models. Indemnity coverage is based on insurance claims and loss adjustment and is the most expensive to operate and takes the most time to pay out (or not).

Talbot and Barder (2016)
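
The payout logic of a parametric trigger is simple enough to sketch in a few lines. Here is a minimal illustration in Python; the wind-speed threshold and face value are invented for the example, and real contracts specify the data source, the measurement window, and often graduated payouts rather than the all-or-nothing rule shown here.

```python
# Minimal sketch of a parametric trigger, with a hypothetical wind-speed
# threshold and face value chosen purely for illustration.

def parametric_payout(observed_wind_kmh: float,
                      trigger_kmh: float = 250.0,
                      face_value: float = 100_000_000.0) -> float:
    """Pay the full face value if the observed parameter (e.g., a
    hurricane's satellite-reported wind speed) meets the agreed trigger;
    otherwise pay nothing. Neither the insured nor the insurer can move
    the parameter, which is why moral hazard is limited."""
    return face_value if observed_wind_kmh >= trigger_kmh else 0.0

print(parametric_payout(180.0))  # below the trigger: investors keep earning premiums
print(parametric_payout(265.0))  # at or above the trigger: bondholders lose the face value
```

Because the trigger is a published physical measurement, verifying a claim requires no loss adjustment, which is a large part of why parametric contracts pay out faster and cost less to run than indemnity coverage.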

¤

Darrell Duffie discusses “Financial Regulatory Reform After the Crisis: An Assessment.” “In the United States, the most toxic systemic financial firms were investment banks that relied heavily on run-prone wholesale short-term financing of their securities inventories. A large fraction of this funding was obtained from unstable money market mutual funds. A substantial amount of this money-fund liquidity was arranged in the overnight repo market, which was discovered by regulators to rely precariously on two U.S. clearing banks for trillions of dollars of intra-day credit. The core plumbing of American securities financing markets was a model of disrepair. … The FSB [Financial Stability Board] summarized progress within ‘four core elements’ of financial-stability regulation: 1. Making financial institutions more resilient. 2. Ending ‘too-big-to-fail.’ 3. Making derivatives markets safer. 4. Transforming shadow banking. At this point, only the first of these core elements of the reform, ‘making financial institutions more resilient,’ can be scored a clear success, although even here much more work remains to be done.” The paper was presented at the 2016 European Central Bank Forum on Central Banking, held June 27–29, 2016. A conference program, with links to papers and video, is at https://www.ecbforum.eu/en/content/programme/overview/programme-overview.html.

¤

The Council of Economic Advisers has published an “issue brief” titled Benefits of Competition and Indicators of Market Power:

[C]ompetition appears to be declining in at least part of the economy. This section reviews three sets of trends that are broadly suggestive of a decline in competition: increasing industry concentration, increasing rents accruing to a few firms, and lower levels of firm entry and labor market mobility. The U.S. Census Bureau tracks revenue concentration by industry, and one measurement it provides of such concentration is the share of revenue earned by the 50 largest firms in the industry. … [T]he majority of industries have seen increases in the revenue share enjoyed by the 50 largest firms between 1997 and 2012. Several industry-specific studies have found consistent results over longer periods of time.

CEA (2016)
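
The concentration measure the brief leans on, the share of industry revenue earned by the 50 largest firms (often written CR50), is easy to compute. A minimal sketch in Python, using randomly generated firm revenues as a stand-in for the Census data:

```python
# Minimal sketch of the 50-firm concentration ratio (CR50) cited by the CEA.
# The revenues below are simulated; a Pareto draw stands in for the skewed
# firm-size distributions seen in real industries.

import random

def concentration_ratio(revenues: list[float], top_n: int = 50) -> float:
    """Share of total industry revenue accruing to the top_n largest firms."""
    top = sorted(revenues, reverse=True)[:top_n]
    return sum(top) / sum(revenues)

random.seed(0)
industry = [random.paretovariate(1.5) for _ in range(2000)]
print(f"CR50: {concentration_ratio(industry):.1%}")
```

As the brief itself notes, rising CR50 is only broadly suggestive of declining competition: a concentration ratio says nothing directly about entry barriers or markups.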

The chair of the CEA, Jason Furman, provides some additional context in his September 16, 2016, lecture Beyond Antitrust: The Role of Competition Policy in Promoting Inclusive Growth.

¤

Steven Boivie, Michael K. Bednar, Ruth V. Aguilera, and Joel L. Andrus ask, Are Boards Designed to Fail? The Implausibility of Effective Board Monitoring:

In fact, most academic research, popular press accounts, and even U.S. legislation all echo the sentiment and deeply held belief that boards should be able to actively monitor and control management. ... Given the research reviewed in this article, we are pessimistic about the possibility of boards being able to effectively monitor managers on an ongoing basis in many circumstances. ... Given the size and complexity of many modern firms, we believe some firms may effectively be ‘too big to monitor’, and that successful monitoring by boards may be highly unlikely in many large public firms. It might be time to concede that our conception of boards as all-encompassing monitors is doubtful ... Consequently, we believe that future research and theorizing needs to focus on boards as advice-giving bodies, or bodies that get involved in punctuated events, and look to other corporate governance mechanisms to secure monitoring.

Boivie et al. (2016)

 

Symposia

Richard E. Baldwin has edited Brexit Beckons: Thinking Ahead by Leading Economists, with short and readable contributions by 19 economists. From Baldwin’s overview essay:

The 23 June 2016 Brexit referendum saw British voters reject membership of the European Union. This VoxEU eBook presents 19 essays written by leading economists on a wide array of topics and from a broad range of perspectives. … [T]he key point is that UK policy in many areas has been made at the EU level for decades. Leaving the EU thus means that the UK will have to replace EU policies, rules, and agreements with British policies, rules, and agreements. As we shall see, this will prove a massively complex task. … Charles Wyplosz suggests that Brexit would be an opportunity for the EU to re-evaluate the degree of centralisation that has been reached so far. He argues for a simultaneous bidirectional change of authority implemented in such a way that each country gives a little and takes a little in order to arrive at a package that is both politically acceptable and economically efficient. The problem is that this sort of root-and-branch rethinking was tried ten years ago at the European Convention. The result—the Constitutional Treaty—was rejected by several members, some via referendums. … No one can anticipate where the Brexit vote will take the UK and the EU. The alternative that seems most sensible from an economic perspective is the Norway option. It may well be that the UK government could make this palatable, despite the free movement of people, by bundling it together with a very thorough set of policies to help the UK citizens who have been left behind by globalisation, technological advances, and European integration. Maybe we could call it the ‘EEA plus anti-exclusion option’ (EEA+AE). If this came to pass, the main economic policy outcome of the Brexit vote would be simple. The UK would end up with more influence over its trade, agricultural, and regional policies, but less influence over the rules and regulation governing its industrial and service sectors.

Baldwin (2016)

¤

 

Recent research using audit studies has found evidence of ongoing housing discrimination in the United States, according to a symposium published by the U.S. Department of Housing and Urban Development.

Vlad Kyryl/Bigstock

Cityscape, a magazine of the U.S. Department of Housing and Urban Development, has published a nine-paper symposium on “Housing Discrimination Today.” Sun Jung Oh and John Yinger ask in the lead article, “What Have We Learned From Paired Testing in Housing Markets?” From the abstract:

Fair housing audits or tests, which compare the way housing agents treat equally qualified homeseekers in different racial or ethnic groups, are an important tool both for enforcing fair housing laws and for studying discriminatory behavior in housing markets. This article explains the features of two types of housing audits: in-person paired audits and correspondence audits, which are usually conducted over the Internet. … The studies reviewed include four national studies in the United States based on in-person audits and many studies based on correspondence audits in the United States and in several European countries. This article also reviews audit-based evidence about the causes of discrimination in housing markets. Despite variation in methods, sample sizes, and locations, audit studies consistently find evidence of statistically significant discrimination against homeseekers who belong to a historically disadvantaged racial or ethnic group. The 2012 national audit study found, for example, that the share of audits in which a White homebuyer was shown more available houses than an equally qualified Black homebuyer was 9 percentage points higher than the share in which the Black homebuyer was shown more houses than his or her White counterpart. In the United States, housing discrimination against Black and Hispanic homeseekers appears to have declined in some types of agent behavior, such as whether the advertised unit is shown to a customer, but to have increased in others, such as steering Black and Hispanic homeseekers toward minority neighborhoods.

Oh and Yinger (2015)
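
The headline 9-percentage-point figure comes from a simple “net measure”: score each paired audit by which tester was shown more homes, then take the difference between the share of audits favoring the White tester and the share favoring the Black tester. A minimal sketch of the calculation, using fabricated audit records:

```python
# Net measure of discrimination from paired audits. The four audit records
# below are fabricated purely to show the calculation.

audits = [
    {"white_shown": 3, "black_shown": 1},
    {"white_shown": 2, "black_shown": 2},  # equal treatment
    {"white_shown": 1, "black_shown": 2},
    {"white_shown": 4, "black_shown": 2},
]

white_favored = sum(a["white_shown"] > a["black_shown"] for a in audits)
black_favored = sum(a["black_shown"] > a["white_shown"] for a in audits)
net_measure = (white_favored - black_favored) / len(audits)
print(f"Net measure: {net_measure:+.0%}")  # +25% on these fabricated records
```

Netting the two shares against each other is what lets audit studies separate systematic differential treatment from the random variation that makes any single pairing uninformative.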

 

Negative Interest Rates and Neo-Fisherism

Benoît Cœuré gave a lecture on July 28, 2016, at the European Central Bank entitled “Assessing the Implications of Negative Interest Rates”:

In June 2014, following in the footsteps of the Danish National Bank, the European Central Bank (ECB) became the first major central bank to lower one of its key policy rates into negative territory. The rate of interest on our deposit facility is now –0.4% while the rate on our main refinancing operations is zero. … It is difficult to know how long these low interest rates will persist, but it seems possible that they will be low for quite some time. That certainly is the view of financial markets, where the return on government bonds is negative for a range of countries, even at long maturities. … Central bankers should however be mindful of a potential ‘economic lower bound’, at which the detrimental effects of low rates on the banking sector outweigh their benefits, and further rate cuts risk reversing the expansionary monetary policy stance. … The current conditions of financial intermediation suggest, however, that the economic lower bound is safely below the current level of the deposit facility rate and that the impact of negative rates, combined with the APP and forward guidance, has clearly been net positive. … Finally, what about the risks to financial stability? The ongoing economic recovery should help bolster the income and earnings position of euro area households and non-financial corporations, thereby mitigating the risks associated with a continued debt overhang which persists in some countries. … [T]he best way to counter any potentially emerging risk in any market segment is targeted action by the macroprudential authorities.

Cœuré (2016)

¤

Stephen Williamson provides an overview of Neo-Fisherism: A Radical Idea, or the Most Obvious Solution to the Low-Inflation Problem?, published in the Regional Economist of the Federal Reserve Bank of St. Louis:

A well-established empirical regularity, and a key component of essentially all mainstream macroeconomic theories, is the Fisher effect—a positive relationship between the nominal interest rate and inflation. … Many macroeconomists have interpreted the Fisher relation … as involving causation running from inflation to the nominal interest rate … Market interest rates are determined by the behavior of borrowers and lenders in credit markets, and these borrowers and lenders care about real rates of interest. … But, what if we turn this idea on its head, and we think of the causation running from the nominal interest rate targeted by the central bank to inflation? This, basically, is what Neo-Fisherism is all about. Neo-Fisherism says … that if the central bank wants inflation to go up, it should increase its nominal interest rate target, rather than decrease it, as conventional central banking wisdom would dictate.

Williamson (2016)
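
The relation at the center of the debate fits on one line. With $i_t$ the nominal interest rate, $r_t$ the real interest rate, and $\pi_t^e$ expected inflation, the Fisher relation is

```latex
% Fisher relation: nominal rate = real rate + expected inflation
i_t = r_t + \pi_t^e
```

The conventional reading runs causation from right to left: higher expected inflation pushes up nominal rates. The Neo-Fisherian reading observes that if $r_t$ is ultimately pinned down by real factors, a central bank that pegs $i_t$ at a higher level must eventually see $\pi_t^e$ rise to satisfy the same equation.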

 

Interviews

Douglas Clement interviews Matthew Gentzkow about media economics for the Federal Reserve Bank of Minneapolis:

I started thinking about this huge, downward trend that we’ve seen since about the middle of the 20th century in voter turnout and political participation. It’s really around the time that TV was introduced that that trend in the time series changes sharply, so I thought TV could have played a role. Now, a priori, you could easily imagine it going either way. There’s a lot of evidence before and since that in many contexts, giving people more information has a very robust positive effect on political participation and voting. So, if you think of TV as the new source of information, a new technology for delivering political information, you might expect the effect to be positive. And, indeed, many people at the time predicted that this would be a very good thing for political participation. On the other hand, TV isn’t just political information; it’s also a lot of entertainment. And in that research, I found that what seemed to be true is that the more important effect of TV is to substitute for—crowd out—a lot of other media like newspapers and radio that on net had more political content. … So, we see that when television is introduced, indeed, voter turnout starts to decline. We can use this variation across different places and see that that sharp drop in voter turnout coincides with the timing of when TV came in. … That drop is especially big in local elections. A lot of new technologies … are pushing people toward paying less attention to local politics, local issues, local communities.

Clement (2016)

In the Winter 2015 issue of the Journal of Economic Perspectives, Andrei Shleifer provides an overview of Gentzkow’s work in Matthew Gentzkow, Winner of the 2014 Clark Medal.
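
Gentzkow's identification strategy exploits the staggered rollout of television across local markets. The toy simulation below mimics that logic with fake data (the 2-point turnout drop is built in by construction); it is not his data or his exact event-study specification.

```python
# Toy simulation of the staggered-rollout comparison Gentzkow describes:
# TV reached different markets in different years, so turnout can be
# compared before and after each market's own introduction date.
# All numbers here are fabricated.

import random

random.seed(1)

def simulated_turnout(tv_year: int, election_year: int) -> float:
    """Hypothetical county turnout: ~60% baseline, minus 2 points once TV arrives."""
    baseline = 0.60 + random.gauss(0, 0.01)
    return baseline - 0.02 * (election_year >= tv_year)

tv_years = [random.choice([1948, 1952, 1956]) for _ in range(500)]  # 500 fake counties
elections = range(1944, 1961, 4)

pre = [simulated_turnout(tv, y) for tv in tv_years for y in elections if y < tv]
post = [simulated_turnout(tv, y) for tv in tv_years for y in elections if y >= tv]
print(f"Turnout change after TV: {sum(post)/len(post) - sum(pre)/len(pre):+.3f}")
# ≈ -0.020, recovering the effect built into the fake data
```

Because introduction dates differ across markets, a common national trend cannot mimic a drop that lines up with each market's own TV arrival, which is the heart of the research design.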

 

In a recent interview, Matthew Gentzkow explained that TV can be a good way to stay informed, but also offers many entertaining distractions that can leave voters less attentive to politics, especially local politics.

sbc2758/Bigstock

¤

Aaron Steelman interviews Erik Hurst for the Richmond Fed on an array of topics in labor, household, and urban economics. For example:

Many urban models historically assumed that agglomeration benefits usually came from the firm side. Someone might want to be close to the center city, for instance, because most firms are located in the center city. So the spillover for the household was the commuting time to where the firms were, and the firms chose to locate near each other because of agglomeration benefits. I have always been interested in it from another angle. When we all come together as individuals, we may create agglomeration forces that produce positive or negative consumption amenities. Thinking about it this way, when a lot of high-income people live together, maybe there are better schools because of peer effects or higher taxes. Or maybe there are more restaurants because restaurants are generally a luxury good. Or maybe there’s less crime because there is an inverse relationship between neighborhood income and crime, which empirically seems to hold. So, while we value proximity to firms, that’s not the only thing we value. How important are these consumption amenities? And more importantly, how do these consumption amenities evolve over time ...

Steelman (2016)

 

Discussion Starters

Juliette Cubanski, Tricia Neuman, Shannon Griffin, and Anthony Damico provide a “Data Note” about “Medicare Spending at the End of Life: A Snapshot of Beneficiaries Who Died in 2014 and the Cost of Their Care”:

Of the 2.6 million people who died in the U.S. in 2014, 2.1 million, or eight out of 10, were people on Medicare, making Medicare the largest insurer of medical care provided at the end of life. Spending on Medicare beneficiaries in their last year of life accounts for about 25% of total Medicare spending on beneficiaries age 65 or older. … The share of total traditional Medicare spending on beneficiaries who died at some point during the year has dropped over time, from 18.6% in 2000 to 13.5% in 2014 … This drop is likely due to a combination of factors affecting total traditional Medicare spending over time and spending on decedents, including: growth in the number of Medicare beneficiaries overall, particularly in recent years as the baby boom generation ages on to Medicare, which means more younger, healthier beneficiaries, on average; longer life expectancy, which means people are living longer and dying at older ages … lower average per capita spending on older decedents compared to younger decedents … and slower growth in the rate of annual per capita spending for decedents than survivors …

Cubanski et al. (2016)

¤

Lawrence F. Katz and Alan B. Krueger provide evidence concerning The Rise and Nature of Alternative Work Arrangements in the United States, 1995–2015 in a Princeton University Industrial Relations Section working paper:

[W]e conducted the RAND-Princeton Contingent Worker Survey (RPCWS), a version of the CWS, as part of the RAND American Life Panel (ALP) in October and November of 2015. … A comparison of our survey results from the 2015 RPCWS to the 2005 BLS [Bureau of Labor Statistics] CWS indicates that the percentage of workers engaged in alternative work arrangements—defined as temporary help agency workers, on-call workers, contract company workers, and independent contractors or freelancers—rose from 10.7 percent in February 2005 to 15.8 percent in late 2015. The increase over the last decade is particularly noteworthy given that the BLS CWS showed hardly any change in the percent of workers engaged in alternative work arrangements from 1995 to 2005. … A striking implication of these estimates is that 94 percent of the net employment growth in the U.S. economy from 2005 to 2015 appears to have occurred in alternative work arrangements.

Katz and Krueger (2016)
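
The 94 percent figure is mechanical arithmetic once the two shares are combined with total employment in each year. A minimal sketch, where only the 10.7 and 15.8 percent shares come from the paper; the employment totals are hypothetical, chosen so the arithmetic lands near the published figure:

```python
# Arithmetic behind the "94 percent of net employment growth" claim.
# The shares are from Katz and Krueger; the employment totals (in millions)
# are hypothetical, chosen for illustration.

share_2005, share_2015 = 0.107, 0.158
emp_2005, emp_2015 = 140.0, 149.1  # hypothetical totals, millions

alt_2005 = share_2005 * emp_2005   # workers in alternative arrangements, 2005
alt_2015 = share_2015 * emp_2015
net_growth = emp_2015 - emp_2005

print(f"Growth in alternative arrangements: {alt_2015 - alt_2005:.1f}M")
print(f"Net employment growth:              {net_growth:.1f}M")
print(f"Implied share of net growth:        {(alt_2015 - alt_2005) / net_growth:.0%}")
```

Because the alternative-arrangement share rose five percentage points while total employment grew only modestly, almost all of the decade's net job growth can be accounted for by these arrangements, which is the striking implication the authors flag.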