Cato Op-Eds

Individual Liberty, Free Markets, and Peace

The Pakistani public is headed to the polls on July 25 to vote in the third consecutive election since 2008. While it remains difficult to predict which political party will emerge victorious, one thing is clear: Pakistan’s youth will most likely determine the winner.

Pakistan is in the middle of a youth bulge. According to Pakistan’s National Human Development Report, 64 percent of the population is between the ages of 15 and 29. This population is concerned with completing their education; securing a job that offers financial stability; having the ability to change jobs if needed (indicating a desire for an economy that is not only strong but also diverse); being able to marry and have children; having the ability to buy a house, a car, and other material comforts; and being able to emigrate and/or study abroad.

But do Pakistan’s major political parties have the capacity to address the youth’s concerns? Not really.

All major political parties—Pakistan Muslim League–N (PML–N), Pakistan Tehreek-e-Insaf (PTI), and Pakistan Peoples Party (PPP)—have long understood the importance of the youth, and have tried various techniques to appeal to young voters. When campaigning for the 2013 general elections, PML–N introduced a program that provided free laptops to poor students to increase their access to technology as part of a larger initiative to improve the quality of education. PPP sought to engage the youth in policymaking by creating youth councils, while PTI appealed to the youth directly, urging young people to join PTI and create a “Naya (New) Pakistan” free of corruption. The 2018 campaign season has also been filled with appeals to the youth, with political parties (even religious ones) hiring DJs to “raise the passion of people.” But the political parties’ manifestos don’t meet the passion of the rallies.

PML–N’s 2018 manifesto describes: a self-employment scheme for youths that includes low-interest loans and increased access to community banks; the creation of low- to medium-skilled jobs in the agricultural sector; and an emphasis on vocational training. The manifesto states that PML–N is making youth representation in democratic forums a top priority. Yet the manifesto is blatantly Punjab-centric. For example, the vocational training programs it cites are all sourced from Punjab: TEVTA, the Technical Education and Vocational Training Authority in Punjab; the PSDF, or Punjab Skills Development Fund, which is designed to provide free vocational training to poor and vulnerable populations; and the PVTC, the Punjab Vocational Training Council, which focuses on vocational teacher training. What about the youth in other provinces and tribal areas?

PPP’s 2018 manifesto has a broader scope. While it goes into more detail on reforming and modernizing education, improving access to quality education, revitalizing sports, and increasing technical and vocational programs, it fails to provide actual policies and programs that can achieve these lofty goals. For example, the manifesto states that PPP aims to extend regulated internship programs to all young people to increase their work experience, making them more appealing when they enter the workforce. Yet no details have been provided on this program. Will it be based on a quota system? Will students be able to get university credit for internships?

Similar to PPP’s manifesto, PTI’s 2018 manifesto lists a number of noteworthy goals but fails to provide any implementation details. For example, PTI’s manifesto calls for doubling the size of existing skill development and vocational training programs but fails to explain how. The manifesto states that PTI will launch a national program to provide practical training to graduates in public and private organizations but fails to name any specific organizations it has been in touch with regarding such a program. PTI also wants to establish a liaison under the Ministry of Foreign Affairs to promote foreign placement of Pakistani talent but does not discuss what a PTI-led government would do to reduce the visa restrictions that Pakistani nationals face worldwide.

Pakistan’s National Human Development Report found that 80 percent of Pakistan’s youth has voted in the past, and reports indicate that Wednesday’s election won’t be much different. While youth involvement in Pakistan’s political processes has evolved over time, one thing is clear: Pakistan’s political parties need to not only engage the youth but also focus on how they can meet the youth’s demands in a fiscally responsible way. For now, none of the parties seem to have a clear idea of how to deal with the country’s youth bulge. 

Last week, the Washington Post picked up on an article in Police Quarterly that showed clearance rates for property and violent crimes increased in Colorado and Washington following their legalization of marijuana for recreational purposes. The clearance rate is the percentage of reported crimes that result in an arrest for those crimes. These data support what Cato and other pro-legalization advocates have been saying for years: if the government ends the drug war, it frees up police resources to solve other crimes and perform other functions more necessary to public well-being than prosecuting drug crimes. Of course, these data are not conclusively causal, and different agencies may react differently to legalization in their jurisdictions, but they are a good sign for reform that academics can measure as more states legalize.

On a related note, my colleague Jeff Miron published a piece today examining the budgetary impact of ending drug prohibition. You can find that here.

Over the weekend Treasury Secretary Steven Mnuchin made some remarks that could be interpreted as positive for trade liberalization:

Treasury Secretary Steven Mnuchin is “very hopeful” the US can make progress brokering separate free trade deals with the European Union and Japan during a weekend summit in Buenos Aires.

“I’m encouraged by the EU’s trade agreement with Japan,” Mnuchin said Saturday in an interview with CNN at the sidelines of the G-20 meeting in Argentina.

The EU and Japan signed a massive trade deal earlier this week, cutting or eliminating tariffs on nearly all goods. The deal is in contrast to escalating trade disputes between the US and several of its major allies, including the European Union.

The EU-Japan agreement, which covers 600 million people and almost a third of the global economy, will remove tariffs on European exports such as cheese and wine. It will also reduce barriers on Japanese automakers and electronic firms in the European Union.

President Donald Trump has imposed tariffs on a range of foreign goods from Europe, Canada, Mexico and other trading partners, and is threatening even more action.

Mnuchin said he is still reviewing the details of the EU-Japan agreement, but stressed that any free trade deal with the EU would have to go beyond cutting tariffs on goods.

“This has to be about dropping non-tariff barriers and subsidies as well. This has to be a deal with its entirety,” he said.

Elsewhere, it was reported that he said: “If Europe believes in free trade, we’re ready to sign a free trade agreement.”

If you haven’t been following trade policy for the last two years, you might see this as a positive and constructive approach by the Trump administration towards trade liberalization. But the broader context makes clear that this is not the case. Among other things, the Trump administration has imposed new tariffs on the EU, Japan, and others; and while there have been offhand remarks about trade liberalization (see similar remarks from President Trump and National Economic Council Director Larry Kudlow here), the administration has not made any formal efforts to get such a process started. In short, and contrary to Mnuchin’s statements, the Trump administration does not seem the least bit ready to sign a new free trade agreement, with the EU or anyone else (it is, however, revisiting some older trade agreements).

Of course, the Trump administration could, if it wanted to, negotiate free trade agreements with the EU, Japan, and others. These agreements are not a panacea for eliminating protectionism, but they do achieve significant liberalization. As long as expectations on both sides are kept at reasonable levels (in terms of timing and scope), deals are possible. Through these agreements, most tariffs on trade between the parties could be eliminated, and some non-tariff barriers could be reduced (subsidies, by contrast, are rarely addressed in bilateral deals).

However, aside from occasional offhand remarks, the Trump administration is not taking any steps towards starting these negotiations, and instead is making the possibility of deals less likely through its confrontational and unjustified Section 232 tariffs on steel and aluminum (and possibly soon, on cars). As the EU and Japan have just shown, these trade deals are possible. It remains to be seen whether the Trump administration is willing and able to negotiate them.

The federal government spends an unreal amount of taxpayer money cleaning up nuclear weapons sites. In this study at Downsizing Government, I noted that between 1990 and 2016, Congress spent $152 billion on nuclear cleanup, and it continues to spend about $6 billion more every year.

Where does the money go? About $5 billion has been spent at a facility in South Carolina called the Savannah River Site. In the study, I said, “The facility has a negligent safety culture, and environmental issues such as water contamination plagued it for years. Cleanup costs have soared. The construction of a mixed oxide fuel facility at the site was supposed to cost $5 billion, but the price tag has soared to $17 billion.”

The Wall Street Journal provided an update on the Savannah River boondoggle today:

The U.S. Energy Department says it is spending $1.2 million a day on a partially built South Carolina nuclear facility that it wants to abandon due to soaring costs.

Congress has continued funding construction of the plant, which would be used to dispose of surplus weapons-grade plutonium, despite a series of reviews casting doubt on the financial logic involved.

… The recent jousting marks the latest twist for the troubled Mixed-Oxide Fuel Fabrication Facility. In 2007, U.S. officials said the so-called MOX plant would cost $4.8 billion and be completed by 2016. DOE officials today estimate it would cost $17.2 billion and take until 2048, assuming $350 million a year in federal funding.

… In 2014, the Energy Department concluded that plutonium could be disposed far more cheaply using a different method, known as “dilute and dispose.” The shift is opposed by South Carolina officials and members of the state’s congressional delegation, including Republican Sen. Lindsey Graham.

… From 2014 to 2016, Congress gave the Energy Department the same message: Keep building the MOX plant. Last year, Congress authorized the energy secretary to stop construction if evidence showed another method would cost less than half as much.

In May, Energy Secretary Rick Perry invoked the provision and prepared to halt construction in June. South Carolina sued, and U.S. District Judge J. Michelle Childs granted a preliminary injunction June 7 in the state’s favor, pending further litigation.

For more on energy spending, see


On numerous occasions, President Trump has described America’s asylum laws as the most accepting in the world—or, in his words, the “dumbest.” “When people, with or without children, enter our Country, they must be told to leave… only country in the World that does this!” he tweeted this month. But many other countries are much more accepting of asylum seekers than the United States is. In fact, the United States ranks 50th in the world in net increase in asylees, refugees, and people in similar situations as a share of its population since 2012.

The United Nations High Commissioner for Refugees (UNHCR) publishes data on the number of refugees and asylum seekers in each country. From 2012 to 2017, UNHCR finds that the United States accepted a net increase of 654,128 asylees, refugees, and people in similar circumstances. That amounted to 0.2 percent of the U.S. population in 2017. As the Figure below shows, 49 other countries had higher rates of acceptance than the United States did. The average rate of acceptance for the top 50 countries was 1.2 percent of the population—six times higher than the U.S. rate.

Figure: Top 50 refugee-asylee receiving nations

In absolute terms, the United States does rank in the top 10, but it is important to control for the size of the population of the receiving country both to understand the likely effects of the absolute numbers on the country and to allow a legitimate comparison across countries. This is the same reason why per capita Gross Domestic Product (GDP) is a better measure of how wealthy people in a country are than just aggregate GDP. The Chinese are not seven times wealthier than Canadians because China’s GDP is seven times larger. In fact, Canadians are five times wealthier because Canada’s per capita GDP is five times larger. To understand how wealthy or how accepting a country is, the population of the country is as relevant as the size of its aggregate wealth or the absolute number of immigrants it accepts.
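The per-capita logic described above amounts to a simple division. Here is a minimal sketch, assuming a rough 2017 U.S. population of 325 million (an approximation for illustration, not a figure from the UNHCR data):

```python
# Sketch of the per-capita acceptance comparison; the population figure
# is an assumption for illustration, not taken from the UNHCR dataset.
us_net_increase = 654_128       # net increase in asylees/refugees, 2012-2017
us_population = 325_000_000     # approximate 2017 U.S. population (assumption)

us_rate = us_net_increase / us_population * 100
print(f"U.S. rate: {us_rate:.1f}% of population")                        # ≈ 0.2%

top50_avg_rate = 1.2            # average rate for the top 50 countries
print(f"Top-50 average: {top50_avg_rate / us_rate:.0f}x the U.S. rate")  # ≈ 6x
```

The same division, applied to each country’s net increase and population, produces the rates ranked in the figure.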

The more accepting nations include Australia and most of Western and Northern Europe—Sweden, Austria, Germany, Denmark, Switzerland, Italy, Norway, Finland, Belgium, the Netherlands, and France. The average rate for these countries was 0.7 percent—3.3 times the U.S. rate. But the list also includes many countries that are much less wealthy than the United States. Lebanon, which has accepted asylees equal to an astounding 14 percent of its population just since 2012, has a per capita GDP of $8,400—about one-seventh that of the United States—but it has accepted asylees at 73 times the U.S. rate.

President Trump is simply incorrect that other countries don’t accept refugees and asylees, including those who come in unannounced. In fact, four dozen other countries are dealing with more significant asylee populations than the United States is. Some of the difference between the United States and other countries could be explained by UNHCR shifts in methodology in who is counted as a refugee or asylee. As I have explained before, however, the United States has been one of the least welcoming wealthy countries in terms of net total immigration as a share of the country’s population in recent years. America should reform its immigration laws, but it should do so to make them more welcoming, not less.

Table: Countries with net increases in refugee-asylee populations

Venezuelans are fleeing their home country in large numbers due to the economic failure of socialism as well as the increasing authoritarianism of the Venezuelan government.  The economic collapse there (inflation reached tens of thousands of percent this year) and the escalating brutality of the Maduro dictatorship are creating a crisis unlike any faced in South America in decades – if ever.  This blog post will provide some information on the scale of the Venezuelan exodus and some suggestions for what other countries can do to mitigate problems caused by the flow of refugees and asylum seekers.


The roots of the current collapse of Venezuela run deep. Hugo Chavez became the president of Venezuela in 1999 and immediately set about concentrating economic power in the government and political power in himself personally.  He instituted tight government controls on capital and exchange rates and pursued an increasingly irresponsible monetary policy that created chaotic financial market conditions, which he then used to justify nationalizations of businesses and confiscations of private property.  Revenues from the Venezuelan oil industry helped keep the government and economy afloat while the private economy suffered under increasingly harsh and punitive restrictions.  Chavez died in 2013 and was succeeded by Nicolas Maduro, who continued Chavez’s economic policies and accelerated the concentration of political power in himself.  The collapse of oil prices beginning in 2014 exposed the economic damage wrought by Chavez and Maduro: inflation took off, GDP shrank, and Maduro’s regime responded with increasingly brutal police crackdowns that continue today.  Most watchers of Venezuela conclude that the current death spiral began in 2015, the year after oil prices began to decline.

The Scale of the Exodus

The number of people who have left Venezuela is staggering.  Estimates of how many Venezuelans have left their home country usually range from 1.6 million to 4 million.  The International Organization for Migration (IOM) estimates that about 2 million Venezuelans were living outside of Venezuela as of June 2018, a number that has increased by more than a million since 2015 but is still likely an underestimate.  For instance, the number of Venezuelans living in Colombia, Peru, Chile, Brazil, Ecuador, Argentina, and Uruguay in June 2018 was over 1.85 million, up by a little less than one million since 2017.

To try to reconcile conflicting and confusing estimates, I combined a few different sources and made some simple assumptions.  First, I made a few conservative assumptions when estimating the number of Venezuelans in Argentina, Uruguay, and Brazil.  I estimated that the 2018 number of Venezuelans in Argentina and Uruguay was unchanged from 2017.  For Brazil, I relied on recent news reports to estimate that there was a net 22,000 increase in the number of Venezuelans there in 2018 over 2017.  I then added the additional one million Venezuelans living in those countries to the 1.64 million Venezuelans who were estimated to be living outside of their home country in 2017.  Thus, I estimate that 2.61 million Venezuelans are living abroad in mid-2018 (Figure 1).

Figure 1: Venezuelans living abroad
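The combination of sources described above reduces to simple addition. A minimal sketch with the rounded figures from this post (the country-level adjustments are folded into one aggregate here, an approximation for illustration):

```python
# Reconstruction of the mid-2018 estimate using rounded figures from the post.
abroad_2017 = 1.64e6         # Venezuelans estimated living abroad in 2017
net_increase = 0.97e6        # assumed net 2017-2018 increase across the seven
                             # South American countries (approximation)

abroad_mid_2018 = abroad_2017 + net_increase
print(f"{abroad_mid_2018 / 1e6:.2f} million Venezuelans abroad")  # ≈ 2.61 million
```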

The emigrant Venezuelan population is equal to about 7.6 percent of all Venezuelan nationals (Figure 2).  The economic collapse in Venezuela began in 2015, the year after the oil price started declining.  The percent of Venezuelans living abroad increased from 2.2 percent in 2015 to 7.6 percent in 2018 – a 3.5-fold increase.  The Syrian refugee crisis, which began with the start of the Syrian civil war in 2011, is the biggest in recent history.  It boosted the number of Syrians living abroad by 4.3-fold after four years of civil war.

Figure 2: Venezuelans and Syrians living abroad as a percent of their respective populations

Venezuela has a much larger population than Syria, so it will take longer for a fifth of its people to flee the country if it ever gets to that point.  However, the number of Venezuelans living outside of their country could meet or exceed the number of Syrians in a similar position in the next couple of years if trends continue (Figure 3).  According to a recent poll, about half of Venezuelans between ages 18 and 24 said they wanted to leave Venezuela, as did 55 percent of upper-middle-class respondents.  If those polls are accurate, then the duration of the economic crisis in Venezuela will determine whether the exodus reaches Syrian refugee-level proportions.

Figure 3: Syrians and Venezuelans Living Abroad

As of mid-2018, I estimate that about 71 percent of the Venezuelans who have fled are in other South American countries (Figure 4).  About 12 percent have made it to Canada or the United States, 5 percent are in Central America, Mexico, or the Caribbean, and 13 percent are in other parts of the world.

Figure 4: Destination countries

In the United States, 65,621 Venezuelans have applied for asylum at ports of entry since February 2014, picking up substantially in 2016 and 2017 (Figure 5).  The U.S. federal government reacted to this by cutting the number of tourist B-visas that it issues to Venezuelans, aided most recently by additional restrictions put on Venezuelans through President Trump’s so-called travel ban, but the number of asylum seekers continued to grow at least through the end of 2017 (Figure 6).

Figure 5: Venezuelan asylum seekers
Figure 6: Venezuelan asylum seekers and B-Visa issuances

How Venezuela’s Neighbors are Reacting

About 71 percent of Venezuelans who have fled have gone to other countries in South America.  These countries have reacted in myriad ways to the influx of Venezuelans, mainly by issuing work and residency permits to some of them while nations bordering Venezuela are stepping up border security and deploying troops.  Other nations not mentioned do not have a special policy for admitting Venezuelans.   

In the course of writing this blog post, the Migration Policy Institute published a wonderful short paper by Luisa Feline Freier and Nicolas Parent on the Venezuelan emigration crisis.  Many of my comments in this section are based on their excellent work.


Colombia initially offered a Special Stay Permit to Venezuelans as well as Border Mobility Cards which allowed free travel between the two countries.  In February 2018, Colombia stopped issuing both permits due to worries that the influx of Venezuelans was too great.  Now, many are entering illegally in dangerous circumstances.


Brazil created a temporary residency program for Venezuelans in 2017.


Peru created the Temporary Stay Permit (PTP) for Venezuelans in January 2017.  The administrative backlog for the PTP is huge so many Venezuelans are applying for asylum instead. 


The Venezuelan emigration crisis is going to worsen before it improves.  If the labor market and economic integration of Syrian refugees outside of Syria since 2011 can offer any lessons to South America, they are:

  1. Allow Venezuelans to legally work in host countries so that their employment and labor force participation rates rise.
  2. Deregulate labor markets generally because more legal work opportunities will reduce Venezuelan labor market competition with locals. 
  3. Remember that legal employment reduces the net cost of social services and charity as well as increases feelings of belonging and contentment among the emigrants.

Special thanks to Maria Rey for her help on this.

We do not need another rift between communities in our divided nation. But that is what Congress gave us with a provision in last year’s tax bill that imposed a patchwork of divisions spread across every state.

The Tax Cuts and Jobs Act created a complex new tax structure called “Opportunity Zones.” The law tasked governors with carving up their states into tax-favored O zones and tax-disfavored areas we can call NO zones. If investors and developers put a hotel in an O zone, they receive a federal capital gains tax break, but if they put the same project in a NO zone, no such luck.

Vanessa Brown Calder and I discuss Opportunity Zones in The Hill. But pictures are better than words in showing what an unfair mess Congress has created. The U.S. Treasury has posted a national map accessible here, but you get a better idea with these maps of various cities from Bloomberg.

On their way to work, members of Congress pass powerful lettering on the Supreme Court, “Equal Justice Under Law.” So why did they think it was OK to impose unequal tax rules on neighborhoods across the nation?

Since the 1960s, the federal government has made a hash of micromanaging local development through HUD and other spending bureaucracies. I fear O zones will accelerate federal meddling into local affairs on the tax side. Will the government start tying social-engineering regulations to the O zone tax rules like they have with spending aid to local governments?

Some features of federal tax law have differential effects on the states as a byproduct of the tax system’s structure. But the O zones are purposeful geographic discrimination. Aside from the unfairness, the new tax loopholes will fuel a 50-state lobbying frenzy by landowners and developers to be included in the O zones rather than the NO zones. Is it just coincidence that the founder of Quicken Loans owns lots of property in Detroit’s new O zones?

Below is the new O zone map for Washington, D.C., with the favored zones in yellow. If you own property at 5300 East Capitol St NE, federal tax law has just made you a winner. If you own property across the street at 5300 East Capitol St SE, you are a loser. Local governments make lots of such winner/loser decisions, but we don’t need the federal government compounding the problem with its powerful and corrupting tentacles.

The best parts of the Republican tax law were a step forward for equal treatment, such as the capping of state and local tax deductions. It is unfortunate that a big new loophole goes in the opposite direction.    

Vanessa has further thoughts on O zones here.

Federal tax rules inducing local corruption? Check out the LIHTC.

A few weeks ago, President Trump surpassed his 500th day in office. That’s a good vantage point to appraise his economic policies to Make American Great Again.

Over at the Library of Economics and Liberty’s Econlog, I offer my assessment. It’s not good.

This may seem surprising, given current economic conditions. But economic policy isn’t merely about the current moment; it is predominantly about improving economic conditions long-term. Aside from a couple of provisions in the December 2017 tax law, President Trump has done precious little in that regard and much to harm the economy long-term, from borrow-and-spend fiscal policy, to harmful trade and immigration policies, to disinterest in serious regulatory reform, to his refusal to face the country’s dreary long-term fiscal challenges.

From my conclusion:

MAGAnomics appears to be little more than an impulsive dislike of free trade and immigration, a hazy desire for less regulation, disinterest in (or perhaps a lack of courage to face) the nation’s long-term fiscal problems, and a desire to temporarily lower taxes without making the hard choices necessary to fiscally balance those cuts and make them enduring. In other words, MAGAnomics is a slogan supporting a few weak and many harmful initiatives, not a serious collection of policies thoughtfully designed to strengthen the nation’s economic health.

Take a look and see if you agree.

In a Regulation article in 2013, Jonathan Lesser described how subsidies to renewable energy generators can actually increase electricity prices by reducing the profits, and thus the long-run supply, of unsubsidized conventional alternatives like natural gas generators.

According to Catherine Wolfram of the University of California, Berkeley Haas School of Business, Lesser’s predictions have become reality. Natural gas generators in the Pennsylvania-New Jersey-Maryland (PJM) regional electricity market have not received revenues sufficient to cover their capital costs in most years since 2009. Under such circumstances, existing plants will eventually cease operation and no new plants will be built. Higher prices and uncertain supply are inevitable.

Calpine, an operator of natural gas plants, asked the Federal Energy Regulatory Commission (FERC) to require PJM to fix the generation capacity market—a government created market that pays firms for reserve generation capacity—to account for the subsidized competitors. Last month, FERC agreed with Calpine that the capacity market is currently “unjust and unreasonable” and issued an order requiring PJM to extend a price floor, which so far only applies to natural gas generators, to all resource types.

However, the FERC order falls short of the first best option: eliminating subsidies to all resources. Federal regulators, Congress, and states should work to repeal the regulations, mandates, and subsidies that complicate the capacity market. An even bolder move would be to mimic Texas, which has no capacity market; generators are paid only for the energy they generate. 

Written with research assistance from David Kemp.

Yesterday, Chris Edwards and I co-authored a piece for The Hill on “opportunity zones.” Opportunity zones were one element of last year’s tax reform law.

They’re more or less what would happen if the Low-Income Housing Tax Credit (LIHTC) and Community Development Block Grant (CDBG) produced offspring: opportunity zones both aim at generating economic development in declining areas (similar to CDBG) and use the tax code to incentivize public-private partnerships (like LIHTC).

There are other similarities to CDBG and LIHTC. Opportunity zones may benefit investors and developers more than they benefit the poor, which makes them like LIHTC.

The law has no provision to measure opportunity zones’ effectiveness, and measuring effectiveness would be hard anyway, which makes opportunity zones like CDBG. Currently, advocates simply cite the number of projects built with CDBG or LIHTC funding, which doesn’t tell a savvy information-consumer whether the programs are meeting their objectives.

As a result, opportunity zones will likely run on auto-pilot, while special interest groups claim they are effective based on the number of projects funded through the new tax mechanism. We won’t know how many of those projects would have been built anyway.

Lawyers, accountants, and financial advisors will make money advising investors and developers on program rules, who will then make money deferring and reducing their capital gains taxes.

There’s nothing wrong with cutting taxes, but opportunity zones are the wrong way to accomplish that. And national policy shouldn’t play favorites or pretend Congress or even state governors know where businesses or people should locate. (Hint: the best places for businesses and poor people to locate probably aren’t declining areas.)

Rather than federal “help”, states can create their own state-wide opportunity zones by reforming their own tax codes and fixing their zoning, occupational licensing, and childcare regulations. Zoning regulations keep low-skilled workers trapped in declining places and excluded from economic opportunity, and occupational licensing makes it harder to relocate to new economic opportunities. 

Local reforms would really help poor workers, and regardless of whether they brought declining places back, they would improve poor workers’ ability to locate in non-declining places where the jobs are. Opportunity zones? Not so much.

Last month, we summarized evidence for the long-term stability of Greenland’s ice cap, even in the face of dramatically warmed summer temperatures. We drew particular attention to the heat in northwest Greenland at the beginning of the previous (as opposed to the current) interglacial. A detailed ice core shows around 6,000 years of summer temperatures averaging 6-8°C (11-14°F) warmer than the 20th century average, beginning around 118,000 years ago. Despite six millennia of temperatures likely warmer than anything we could produce in a mere 500 years, Greenland only lost about 30% of its ice. That translates to only about five inches of sea level rise per century from meltwater.

We also cited evidence that after the beginning of the current interglacial (nominally 10,800 years ago) it was also several degrees warmer than the 20th century, but not as warm as it was at the beginning of the previous interglacial.

Not so fast. Work just published online in the Proceedings of the National Academy of Sciences by Jamie McFarlin (Northwestern University) and several coauthors now shows July temperatures averaged 4-7°C (7-13°F) warmer than the 1952-2014 average over northwestern Greenland from 8 to 10 thousand years ago. She also had some less precise data for maximum temperatures in the last interglacial, and they are in agreement with (maybe even a tad warmer than) what was found in the ice core data mentioned in the first paragraph.

Award McFarlin some serious hard-duty points. Her paleoclimate indicator was the assemblage of midges buried in the annual sediments under Wax Lips Lake (we don’t make this stuff up), a small freshwater body in northwest Greenland between the ice cap and Thule Air Base, on the shore of the channel between Greenland and Ellesmere Island. Midges are horrifically irritating, tiny biting flies that infest most high-latitude summer locations. They’re also known as no-see-ums, and they are just as nasty now as they were thousands of years ago.

Getting the core samples from Wax Lips Lake means being out there during the height of midge season.

She acknowledges the seeming paradox of the ice core data: how could it have been so warm even as Greenland retained so much of its ice? Her (reasonable) hypothesis is that it must have snowed more over the ice cap—recently demonstrated to be occurring for the last 200 years in Antarctica as the surrounding ocean warmed a tad. 

The major moisture source for snow in northwesternmost Greenland is the Arctic Ocean and the broad passage between Greenland and Ellesmere. The only way it would snow enough to compensate for the two massive warmings that have now been detected is for the water to have been warmer, increasing the amount of moisture in the air. As we noted in our last Greenland piece, the Arctic Ocean was periodically ice-free for millennia after the ice age.

McFarlin’s results are further consistent, at least in spirit, with other research showing northern Eurasia to have been much warmer than previously thought at the beginning of the current interglacial.

Global warming apocalypse scenarios are driven largely by the rapid loss of massive amounts of Greenland ice, but the evidence keeps coming in that, in toto, it’s remarkably immune to extreme changes in temperature, and that an ice-free Arctic Ocean has been common in both the current and the last interglacial period. 

Federal Reserve Chairman Jerome Powell was before the Senate Banking Committee today to present the semiannual Monetary Policy Report to Congress. Unfortunately, there was little discussion of monetary policy during the proceedings.

The Senators spent nearly all of their time asking the Chairman about the recent stress tests, changes to the tax code, and concerns over additional tariffs. On tariffs, Powell deserves credit for plainly stating that “in general, countries that have remained open to trade and haven’t erected barriers, including tariffs, have grown faster, have had higher incomes, [and] higher productivity, and countries that have…gone in a more protectionist direction have done worse.”

While many Senators ignored monetary policy, the one notable exception came when Senator Pat Toomey asked whether the flattening yield curve on bonds would cause the Fed to adjust either its path for interest rates increases or the pace of its balance sheet reduction.

A flattening yield curve means the difference, or spread, between short- and long-term bond yields is narrowing. When short-term yields rise above long-term yields, the curve has inverted. The concern behind Toomey’s question is that, in the past, an inverted yield curve has typically signaled a coming recession.
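The mechanics are simple enough to sketch. Here is a minimal illustration (the yields below are hypothetical numbers for demonstration, not market data):

```python
# Term spread and inversion check, as described above: the spread is the
# long-term yield minus the short-term yield; a negative spread means the
# yield curve has inverted.

def spread(short_yield: float, long_yield: float) -> float:
    """Term spread in percentage points (e.g., the '2s10s' spread)."""
    return long_yield - short_yield

def is_inverted(short_yield: float, long_yield: float) -> bool:
    """An inverted curve: short-term yields exceed long-term yields."""
    return spread(short_yield, long_yield) < 0

# A flat-but-positive curve: short rates have risen close to long rates.
print(spread(2.60, 2.85))
# An inverted curve: short rates now sit above long rates.
print(is_inverted(3.10, 2.85))
```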

Rather than responding directly to what the flatter yield curve potentially means for normalizing monetary policy, Powell delivered his weakest answer of the day. He admitted that the Fed has discussed yield curve dynamics in policy meetings, that “different people think about it different ways,” and that he tries to understand the yield curve in terms of what it says about neutral interest rates. He ignored the part of the question about whether or not the narrowing spread was signaling a potential economic slowdown—something not lost on seasoned Fed watchers.

While the Senators’ questions left a lot to be desired on the monetary front, the Chairman’s prepared remarks were a bit more encouraging. There, as David Beckworth notes, Powell once again highlighted the FOMC’s use of monetary policy rules when setting policy. It was only a year ago that the Fed added a new section on monetary policy rules to its semiannual report. That the Fed has continued to update and expand that section in subsequent reports is welcome news. However, Powell discusses monetary policy rules as useful only insofar as they guide FOMC decisions on the path of interest rates. Because interest rates do not accurately reflect the stance of monetary policy, this laser focus on them can be problematic.

To truly improve the Fed’s performance, Powell should move beyond policy rules that fixate on interest rates and instead explore a monetary regime that would enhance macroeconomic stability.

Powell will be on the Hill again tomorrow, before the House Committee on Financial Services.

The heat and humidity are now on the rise again after a quite pleasant respite. But the last heatwave was exceedingly uncomfortable and prompted an examination of just how miserable Mid-Atlantic summers can be. My own weather equipment, in Marshall, VA, showed that the maximum heat index—a weighted combination of temperature and humidity that’s akin to heat stress—topped out at an astounding 125°F late in the afternoon of July 3.
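For readers curious how such numbers are produced, here is a minimal sketch using the National Weather Service’s Rothfusz regression; we are assuming this is representative of what station firmware computes, since the exact formula a given instrument uses may differ (and the NWS applies further adjustments at very low humidity or very high temperatures):

```python
# Heat index per the NWS Rothfusz regression, valid roughly for
# temperatures >= 80 F and relative humidity >= 40%.

def heat_index(temp_f: float, rel_humidity: float) -> float:
    """Approximate heat index (deg F) from temperature (deg F) and RH (%)."""
    T, RH = temp_f, rel_humidity
    return (-42.379 + 2.04901523 * T + 10.14333127 * RH
            - 0.22475541 * T * RH - 0.00683783 * T ** 2
            - 0.05481717 * RH ** 2 + 0.00122874 * T ** 2 * RH
            + 0.00085282 * T * RH ** 2 - 0.00000199 * T ** 2 * RH ** 2)

# A 95 F afternoon at 55% humidity already "feels like" roughly 109 F.
print(round(heat_index(95, 55)))
```

Note how strongly humidity leverages the result: holding temperature fixed, raising the humidity pushes the index up rapidly, which is how readings in the 120s and beyond arise.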

This wasn’t a nationwide event, unlike the dust-bowl summers of 1934 and 1936. Instead, as shown on climatologist Roy Spencer’s blog, the unusual heat was rather circumscribed, with a fairly even distribution of above and below-normal temperatures across North America.

It’s worth having a look at the national history of very hot temperatures, shown below:

Figure 1. Despite warmer global average temperatures, there’s no trend in extremely hot days in the US record.

The heat of the 1930s has yet to be topped. In our region, none of the recent heat holds a melting candle to the summer of 1930, which was also exceedingly dry. Except for a few locations that got hit-or-miss thunderstorms, much of the Mid-Atlantic saw less than an inch of rain between June 20th and the end of August, with reports of a mere 10% of normal rain being common.

Here’s how hot it was. Leander McCormick Observatory is Charlottesville’s long-term climate station. For 23 days, beginning on July 19, 1930, the high temperature averaged 100°. Most Mid-Atlantic stations see about one such day per year. During that heatwave, on July 20, Woodstock, in the heart of the Shenandoah Valley, set the all-time credible Virginia record with 109°. (There is a 110° reading at Balcony Falls VA in 1954, but it’s not consistent with nearby temperatures.)

Urban Washington, DC was largely without air conditioning, and residents took to the parks to sleep. But that’s not a safe option now, and it’s also not clear that we have enough grid power to handle that much heat. The hottest days in the eastern U.S. come perilously close to bringing down the electrical grid.

Lack of, or loss of, air conditioning in a major urban heatwave kills people. This happened in Chicago in 1995, with 739 excess deaths as the heat index went astronomical. Nearby southern Wisconsin and eastern Iowa saw values above 130°, and one location (Appleton, WI) hit an astounding 148° at 5pm on July 13, the most uncomfortable heat ever measured in the western hemisphere. That was an official airport reading made on calibrated instruments.

A peculiarity of urban heatwaves, at least in the continental U.S., is that as they become more frequent—which they must, thanks mostly to urban sprawl, as well as a slight nudge from carbon dioxide—heat-related deaths begin to decline. This was noted both in Chicago, post-1995, and in France, post-2003, as subsequent temperature extremes resulted in far fewer fatalities than heat/death models would have predicted.

The response to extreme heat is both political and personal. Because of the Chicago tragedy, cities nationwide developed heat emergency plans, which include both publicity and cooling centers. The French decided that—très gauche—American-style air conditioning wasn’t so bad after all, as they descended in droves upon big-box stores to buy units for granny’s room.

The decline in heat-related mortality is therefore a function of adaptation. Two of the hottest cities in the US are Phoenix and Tampa, and they also have some of the oldest (and therefore most susceptible) populations. Only in Seattle, where heatwaves are very rare, is there increasing heat-related mortality. And as urban heat becomes more frequent nationwide, heat-related mortality should decline as long as the power stays on.

As a historian of the Cold War, I have a passing knowledge of a number of meetings between Soviet/Russian leaders and U.S. presidents. Some are famous for getting relations off on the wrong foot (e.g. Kennedy and Khrushchev at Vienna in 1961); others set the stage for great breakthroughs, but were seen as failures at the time (e.g. Reagan and Gorbachev at Reykjavik in 1986); still others are largely forgotten (e.g. Johnson and Kosygin at Glassboro, NJ in 1967). It is impossible to predict how we will remember the first substantive meeting between Donald Trump and Vladimir Putin.

We can see, however, what President Trump wants us to remember. “I think we have great opportunities together as two countries that, frankly,…have not been getting along very well for the last number of years,” Trump said at the opening of the meeting in Helsinki. “I think we will end up having an extraordinary relationship.” 

President Trump has long said, going back to his campaign, that it is important to have good relations with Russia. I agree. I’ve never seen meetings between American leaders and senior government officials and their foreign counterparts as a “reward” for good or bad behavior. It’s called diplomacy. If this first meeting does set a tone for cooperation between the two countries, historians might eventually judge it worthwhile.

Unfortunately, the context surrounding this meeting is not conducive to long-term success. Credible evidence of Russian interference in the 2016 election, affirmed in detail as recently as Friday, casts a long shadow, and makes it very difficult to make progress on matters of mutual interest. Any genuine breakthrough will immediately run afoul of U.S. domestic politics. That President Trump continues to dismiss the Mueller investigation as a “rigged witchhunt” and mostly blames his predecessor for failing to call the Russian election hack to the attention of the American people merely confirms a widespread perception that he doesn’t take it seriously.

In addition, on the heels of last week’s NATO summit, and the G-7 meeting last month, there is the unsettling fact that President Trump seems to prefer meeting with autocrats than with leaders of democracies. We saw that again today, with President Trump praising Vladimir Putin effusively days after he humiliated European leaders. He also spoke warmly of their mutual friend, China’s Xi Jinping. Last month, the president joked about how North Koreans “sit up at attention” when Kim Jong Un speaks, and he would like “my people to do the same.” He seems particularly impressed by how others are able to stifle domestic dissent. This behavior and rhetoric plays into his critics’ warnings about Donald Trump’s authoritarian instincts, and today’s meeting does nothing to ease such concerns.

President Trump’s idiosyncrasies notwithstanding, however, I will be paying attention to what, if anything, emerges from his meeting with Vladimir Putin. Possible outcomes include an agreement to discuss nuclear arms control, tamping down the civil war in Syria, and perhaps some resolution on Ukraine. But we’d all be advised to wait a bit before rendering a definitive judgment.

As regular Alt-M readers know, I’ve been saying for over a year now that, despite their promise to “normalize” monetary policy, Fed officials have been determined to maintain the Fed’s post-crisis “floor” system of monetary control, in which changes to the Fed’s monetary policy stance are mainly achieved by means of adjustments to the rate of interest the Fed pays on banks’ excess reserve balances, or the IOER rate, for short.

Until recently the Fed’s intentions had to be inferred by reading between the lines of its official press releases, or by referring to personal preferences expressed by leading Fed officials. But with today’s release of the Fed’s official Monetary Policy Report by the Board of Governors, it’s no longer necessary to speculate. The section “Interest on Reserves and Its Importance for Monetary Policy,” on pp. 44-46, leaves hardly any room for doubt that the Board of Governors still regards the IOER rate as “the principal tool the FOMC [sic] uses to anchor the federal funds rate,” and that it plans to keep on doing so after it “normalizes” monetary policy by completing its ongoing balance sheet unwind and by further raising its fed funds rate target upper limit by another percentage point or so.[1]

An Awkward Start

Having already spilled several gallons of ink criticizing the Fed’s floor system, on these pages and in Floored!, my forthcoming book on the subject, I don’t see the point of reviewing those criticisms here, by way of a comprehensive reply to the Board’s recent remarks defending that arrangement. Still I can’t resist pointing out some especially galling aspects of those remarks, starting with this opening passage:

The financial crisis that began in 2007 triggered the deepest recession in the United States since the Great Depression. In response, the Federal Open Market Committee (FOMC) cut its target for the federal funds rate to nearly zero by late 2008. Other short-term interest rates declined roughly in line with the federal funds rate. Additional monetary stimulus was necessary to address the significant economic downturn and the associated downward pressure on inflation. The FOMC undertook other monetary policy actions to put downward pressure on longer-term interest rates, including large-scale purchases of longer-term Treasury securities and agency-guaranteed mortgage-backed securities.

These policy actions made financial conditions more accommodative and helped spur an economic recovery that has become a long-lasting economic expansion.

Although the passage itself doesn’t refer to interest on reserves, its purpose is to introduce a discussion devoted to singing the praises of that policy instrument. It’s in light of that intention that the passage raises my hackles. For what the Fed’s report doesn’t say is that, when the Fed introduced IOER in early October 2008, it did so, not because it thought “monetary stimulus was necessary to address the significant economic downturn and the associated downward pressure on inflation,” but because it was determined to prevent its then-ongoing emergency lending from having any stimulus effect, and from thereby becoming a source of unwanted upward pressure on inflation! IOER was, in other words, originally intended to serve as a contractionary monetary policy measure, just when monetary expansion was desperately needed.

And boy did it work! NGDP, which had been growing, albeit at a snail’s pace, went into a tailspin. Nor was that all. The Fed’s IOER rate — first set at 75 basis points, briefly lowered to 65 bps, then quickly raised to 100 basis points, and finally lowered again (in early December 2008) to 25 basis points, where it remained for the duration of the crisis — was designed to prop up the fed funds rate by encouraging banks to accumulate excess reserves. So when the Fed finally determined that the U.S. economy could use a little stimulus after all, it had no choice but to resort to “other monetary policy actions to put downward pressure on longer-term interest rates, including large-scale purchases of longer-term Treasury securities and agency-guaranteed mortgage-backed securities.”

But we mustn’t be too hard on the authors of the report. After all, it would have been awkward for them to laud the Fed’s floor system after first pointing out how, during the last months of 2008 and the start of 2009, that system played an important part in bringing the U.S. economy to its knees.

Not a Popular System

Another irksome passage in the Board’s report is the one declaring that “Interest on reserves is a monetary policy tool used by all of the world’s major central banks.” Yes, and no. Plenty of central banks pay interest on bank reserves. But the policy the report defends isn’t simply that of paying interest on bank reserve balances, including excess reserve balances. It’s that of using the IOER rate as the Fed’s chief instrument of monetary control, which is the essence of a “floor” operating system. And that means setting an IOER rate high enough to encourage banks to stock up on excess reserves, instead of trading them for other assets.

Although the central banks of several other nations have employed floor systems in the past, today, besides the Fed itself, only the Bank of England and the ECB still rely on floor systems — or something close. Most central banks now rely on “corridor” systems of some kind, in which the central bank’s IOER (“deposit”) rate sets a lower bound on movements in its policy rate, and open-market operations are routinely employed to keep the actual policy rate at a target set somewhere between that lower bound and an upper bound consisting of the central bank’s own lending rate. Finally, a number of other central banks that either used floor systems before the crisis or adopted such systems during it, including the Swiss National Bank, the Bank of Japan, Norges Bank, and the Reserve Bank of New Zealand, switched to “tiered” or “quota” systems afterwards. In a tiered system, reserves may earn interest at a rate that makes them seem attractive relative to other safe assets, but they do so only up to a fixed limit. Beyond that limit they earn only a relatively modest return — if not a zero or negative return. Because the marginal opportunity cost of reserves remains positive in tiered systems, such systems operate more like corridor systems than like a floor system.
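The distinction between a floor and a tiered system comes down to what the marginal unit of reserves earns. A minimal sketch, using hypothetical rates and quota purely for illustration:

```python
# Contrast of reserve remuneration under a "floor" system versus a
# "tiered" (quota) system, as described above: under a floor, every
# dollar of reserves earns the full IOER rate; under a tiered system,
# balances beyond the quota earn a lower (here zero) rate, so the
# marginal opportunity cost of holding extra reserves stays positive.

def floor_interest(reserves: float, ioer: float) -> float:
    """Interest earned under a floor system: flat rate on all balances."""
    return reserves * ioer

def tiered_interest(reserves: float, quota: float,
                    tier1_rate: float, tier2_rate: float = 0.0) -> float:
    """Interest earned under a tiered system: full rate only up to the quota."""
    within_quota = min(reserves, quota)
    excess = max(reserves - quota, 0.0)
    return within_quota * tier1_rate + excess * tier2_rate

# With 150 of reserves against a quota of 100, the 50 above the quota
# earns nothing at the margin under the tiered schedule.
print(round(floor_interest(150, 0.02), 2))
print(round(tiered_interest(150, 100, 0.02), 2))
```

The point of the design choice: because the last dollar of reserves earns less than alternative safe assets, banks under a tiered system still have an incentive to trade excess reserves away, which is why such systems behave more like corridors than floors.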

Just How Low Has the Fed Really Gone?

But of all the irritating claims of the Board’s report, the one that has gone furthest in putting me in high dudgeon is this one:

The rate of interest the Federal Reserve pays on banks’ reserve balances is far lower than the rate that banks can earn on alternative safe assets, including most U.S. government or agency securities, municipal securities, and loans to businesses and consumers. Indeed, the bank prime rate — the base rate that banks use for loans to many of their customers — is currently around 300 basis points above the level of interest on reserves.

To which the following footnote is appended:

The Congress’s authorization allows the Federal Reserve to pay interest on deposits maintained by depository institutions at a rate not to exceed the “general level of short-term interest rates.” The Federal Reserve Board’s Regulation D defines short-term interest rates for the purposes of this authority as “rates on obligations with maturities of no more than one year, such as the primary credit rate and rates on term federal funds, term repurchase agreements, commercial paper, term Eurodollar deposits, and other similar instruments.” The rate of interest on reserves has been well within a range of short-term interest rates as defined in Board regulations.

Where to begin?

It’s absurd, first of all, to treat interest rates on “loans to businesses and consumers,” the prime rate included, as rates on safe assets. But don’t take my word for it: consider what two Fed senior economists, one of whom works at the Board of Governors, have to say on the subject, in a Liberty Street Economics post entitled, “What Makes a Safe Asset?” Safe assets, they write,

are those with a very high likelihood of repayment, and are easy to value and trade …. As a result, safe assets typically trade at a premium, known in the academic literature as a “convenience yield,” which reflects the nonpecuniary benefits investors receive for holding them …

In today’s financial system, the prime example of a safe asset is U.S. Treasury securities. These securities are considered to have zero credit risk, can be easily sold, and can be used as collateral either to raise funding or to post as margin in derivatives positions. … Treasuries’ safe asset status translates into an average yield reduction of 73 basis points. This yield spread can be interpreted as a measure of the convenience yield embedded in Treasuries.

However, Treasuries differ significantly in maturity and that affects their safe asset characteristics. Treasury bills (T-bills) have the shortest maturities and are often thought of as “money-like” assets, that is, assets similar to physical currency. Because of this moneyness, yields on short-term T-bills are typically lower than those on comparable assets….

The private sector can also create safe assets. For example, many of the benefits ascribed to public safe assets are also attributed to private short-term debt of certain issuers. An important difference between public and private safe assets, however, is that the reliability of private safe assets can come into question.

Stretch the notion as much as you like: you will never get “safe assets” to include even the safest bank loans. That is, you won’t be able to do it unless you are a Fed official trying to claim that the Fed’s IOER rate has been “far lower than the rate that banks can earn on alternative safe assets.”

Nor is it possible to justify comparing the Fed’s IOER rate — a rate on assets (reserves) of essentially zero maturity — to rates on otherwise safe longer-term assets. Instead, to sustain the claim that the Fed’s IOER rate has been low relative to that on assets of comparable safety, including comparably low exposure to interest-rate (or duration) risk, Fed officials would have to show that the IOER rate is below rates on safe assets with very short (if not zero) maturities. That rules out comparisons to  Treasury and agency bonds and notes, leaving only Treasury bills. Even then the comparison is a bit unfair, as even the shortest-term Treasury bills have longer terms — and are therefore less liquid and safe — than bank reserves.

But let that pass. Instead, let’s just consider how the report’s assertion that the Fed’s IOER rate “is far lower than the rate that banks can earn on alternative safe assets” stacks up against the record regarding yields on various Treasury bills. Let FRED do the talking:

As the chart shows, throughout most of its existence the IOER rate has been well above not just rates on shorter-term Treasury bills but also those on 1-year T-bills; indeed, for a long interval banks had to hold T-bills of 2-year maturities or longer to earn as much interest as excess reserves paid. And while the situation isn’t nearly so bad today, it remains the case that reserves pay more than one-month Treasury bills. That’s not “far lower than the rate that banks can earn on alternative safe assets.” It’s not even a little lower. It’s higher. Nor could things be otherwise, because having a floor system means having an IOER rate that’s high enough “to remove the opportunity cost to commercial banks of holding reserve balances,” which it wouldn’t be if it were really “far lower than the rate that banks can earn on alternative safe assets.”

“D” for Deception

And what about that footnote? It just adds insult to injury by showing the lengths to which the Fed has been willing to go to twist and bend the law authorizing it to pay interest on bank reserves. As the note correctly observes, that law requires that the Fed’s IOER rate not exceed “the general level of short-term interest rates.” Since the IOER rate is itself, as we’ve seen, a rate on a riskless zero-maturity asset, any reasonable interpretation of the statute would have it refer to the general level of rates on other short-term, riskless assets, such as 4-week Treasury bills or, perhaps, overnight Treasury-secured repos.

So, in preparing Regulation D, how did the Fed define short-term rates for the purpose of implementing the statute? Why, as “rates on obligations with maturities of no more than one year, such as the primary credit rate and rates on term federal funds, term repurchase agreements, commercial paper, term Eurodollar deposits, and other similar instruments” (my emphasis). If you can’t see how self-serving — not to say dishonest — the Fed’s definition is, please read it again, carefully, bearing in mind what “term” rates are and that the Fed’s “primary credit rate” is what’s more commonly known as its “discount” rate — that is, “the interest rate charged to commercial banks and other depository institutions on loans they receive from their regional Federal Reserve Bank’s lending facility–the discount window.”

That Regulation D refers to “term” rates rather than overnight rates, when the latter are obviously more appropriate, is the least of it. The inclusion on the Fed’s list of comparable rates of the Fed’s primary credit rate is the real kicker. First of all, that rate isn’t a market rate but one that the Fed itself administers. What’s more, the Fed has long had a policy of setting it well “above the usual level of short-term market interest rates” (my emphasis again). These days, for example, it sets it “at a rate 50 basis points above the Federal Open Market Committee’s (FOMC) target rate for federal funds.” Because the IOER rate once defined the upper limit of the FOMC’s fed funds target rate range, and is now set 5 basis points below that limit, any interest rate that the Fed pays on reserves is bound to be lower than the Fed’s primary credit rate. Thus the Fed has cleverly interpreted and implemented the statute in a manner that allows it to claim that it is obeying the law requiring that its IOER rate not exceed “the general level of short-term interest rates” no matter how it sets that rate, including when it sets it well above truly comparable market-determined short-term rates!
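The arithmetic behind that last point is worth making explicit. Using illustrative numbers (the target level below is hypothetical; only the 50 bp and 5 bp offsets come from the discussion above):

```python
# Why IOER can never exceed the primary credit rate under the Fed's own
# rate-setting conventions: the primary credit (discount) rate sits 50 bp
# above the fed funds target, while IOER sits 5 bp below the target range's
# upper limit. The gap is fixed by construction, whatever the target is.

target_upper = 2.00                    # hypothetical fed funds target upper limit, %
ioer = target_upper - 0.05             # IOER set 5 bp below the upper limit
primary_credit = target_upper + 0.50   # primary credit rate 50 bp above the target

# IOER is lower than the primary credit rate by 55 bp, regardless of the
# target level -- so the "comparison" in Regulation D can never bind.
print(ioer, primary_credit)
```

This is precisely why including the primary credit rate on Regulation D’s list of comparable short-term rates renders the statutory ceiling toothless.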

Now I hope you’re at least starting to see why the Fed’s report has got my goat.

[1] “Sic” because it is the Board of Governors, rather than the FOMC, that sets the IOER rate. Concerning this anomalous exception to the rule assigning responsibility for the conduct of monetary policy to the FOMC, see my January 10, 2018 testimony before the Monetary Policy and Trade Subcommittee of the House Financial Services Committee.

[Cross-posted from]

As a physician licensed to prescribe narcotics, I am legally permitted to prescribe the powerful opioid methadone (also known by the brand name Dolophine) to my patients suffering from severe, intractable pain that hasn’t been adequately controlled by other, less powerful painkillers. Most patients I encounter who might fall into that category are likely to be terminal cancer patients. I’ve often wondered why I am approved to prescribe methadone to my patients as a treatment for pain, but I am not allowed to prescribe methadone to taper my patients off of a physical dependence they may have developed from long-term opioid use, so as to help them avoid the horrible acute withdrawal syndrome. I am also not permitted to prescribe methadone as a medication-assisted treatment for addiction. These last two uses of the drug require special licensing and permits and must comply with strict federal guidelines.

The synthetic opioid methadone was invented in Germany in 1937. By the 1960s, methadone was found to be effective as medication-assisted treatment for heroin addiction, and by the 1970s methadone treatment centers were established throughout the US, providing specialized and highly structured care for patients suffering from substance use disorder. The Narcotic Addict Treatment Act of 1974 codified the methadone clinic structure. Today, methadone clinics are strictly regulated by the Drug Enforcement Administration, the National Institute on Drug Abuse, the Substance Abuse and Mental Health Services Administration, and the Food and Drug Administration. These regulations establish guidelines for the establishment, structure, and operation of methadone clinics, in most cases requiring patients to obtain their methadone in person at one fixed site. After a period of time, some of these patients are allowed to take methadone home from the facility to self-administer while they remain closely monitored. This onerous regulatory system has led to an undersupply of methadone treatment facilities for patients in need. Furthermore, the need for patients to travel, often long distances, each day to the clinic to receive their daily dose has been an obstacle to obtaining and complying with the treatment program.

Earlier this month, addiction specialists from the Boston University School of Medicine and Public Health and the Massachusetts Department of Public Health argued in the New England Journal of Medicine that community physicians interested in treating substance use disorder should be allowed to prescribe methadone to patients they see in their offices and clinics. Doctors have been allowed to prescribe the opioid buprenorphine for medication-assisted treatment of addiction for years, and in recent years nurse practitioners and physicians’ assistants have been able to obtain waivers that allow them to engage in medication-assisted treatment as well.

The authors noted that methadone has been legally prescribed by primary care providers to treat opioid addiction in other countries for many years— in Canada since 1963, in the UK since 1968, and in Australia since 1970, for example. They state, 

Methadone prescribing in primary care is standard practice and not controversial in these places because it benefits the patient, the care team, and the community and is viewed as a way of expanding the delivery of an effective medication to an at-risk population.

Policymakers serious about addressing the ever-increasing overdose rate from (mostly) heroin and fentanyl afflicting our population should take a serious look at reforming the antiquated regulations that hamstring the use of methadone to treat addiction.


In the few days since President Trump nominated him to be an Associate Justice on the Supreme Court, Judge Brett Kavanaugh has seen his life put under the microscope. It turns out that the U.S. Court of Appeals for the D.C. Circuit judge really likes baseball, volunteers to help the homeless, and has strong connections to the Republican Party – especially the George W. Bush administration. More consequentially, Kavanaugh is an influential judge with solid conservative credentials. For libertarians, Kavanaugh’s record includes much to applaud, especially when it comes to reining in the power of regulatory authorities. However, at least one of Kavanaugh’s concurrences reveals arguments that should concern those who value civil liberties. Members of the Senate Committee on the Judiciary should press Kavanaugh on these arguments at his upcoming confirmation hearing.

In 2015, Kavanaugh wrote a solo concurrence in the denial of rehearing en banc in Klayman v. Obama (full opinion below), in which the plaintiffs challenged the constitutionality of the National Security Agency’s (NSA) bulk telephony metadata program. According to Kavanaugh, this program was “entirely consistent” with the Fourth Amendment, which protects against unreasonable searches and seizures.

The opening of the concurrence is ordinary enough, with Kavanaugh mentioning that the NSA’s program is consistent with the Third Party Doctrine. According to this doctrine, people don’t have a reasonable expectation of privacy in information they volunteer to third parties, such as phone companies and banks. This allows law enforcement to access details about your communications and your credit card purchases without search warrants. My colleagues have been critical of the Third Party Doctrine, filing an amicus brief taking aim at the doctrine in the recently decided Fourth Amendment case Carpenter v. United States.

Because the Third Party Doctrine remains binding precedent, Kavanaugh argues, the government’s collection of telephony metadata is not a Fourth Amendment search. Regardless of one’s opinion of the Third Party Doctrine, this is a reasonable interpretation of Supreme Court precedent from an appellate judge.

Yet in the next paragraph the concurrence takes an odd turn. Kavanaugh argues that even if the government’s collection of millions of Americans’ telephony metadata did constitute a search, it would nonetheless not run afoul of the Fourth Amendment:

Even if the bulk collection of telephony metadata constitutes a search,[…] the Fourth Amendment does not bar all searches and seizures. It bars only unreasonable searches and seizures. And the Government’s metadata collection program readily qualifies as reasonable under the Supreme Court’s case law. The Fourth Amendment allows governmental searches and seizures without individualized suspicion when the Government demonstrates a sufficient “special need” – that is, a need beyond the normal need for law enforcement – that outweighs the intrusion on individual liberty. Examples include drug testing of students, roadblocks to detect drunk drivers, border checkpoints, and security screening at airports. […] The Government’s program for bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States. See THE 9/11 COMMISSION REPORT (2004). In my view, that critical national security need outweighs the impact on privacy occasioned by this program. The Government’s program does not capture the content of communications, but rather the time and duration of calls, and the numbers called. In short, the Government’s program fits comfortably within the Supreme Court precedents applying the special needs doctrine.

This paragraph includes two points worth unpacking: 1) that the collection of telephony metadata is permitted under the “Special Needs” Doctrine, and 2) that the 9/11 Commission Report buttresses the claim that “The Government’s program for bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States.”

Kavanaugh asserts that the NSA’s program serves a special need, and is therefore exempt from the Fourth Amendment’s warrant requirement. The so-called Special Needs Doctrine usually applies when government officials are acting in a manner beyond what is associated with ordinary criminal law enforcement. Justice Blackmun explained the justification for the doctrine in his New Jersey v. T.L.O. (1985) concurrence:

Only in those exceptional circumstances in which special needs, beyond the normal need for law enforcement, make the warrant and probable cause requirement impracticable, is a court entitled to substitute its balancing of interests for that of the Framers.

Kavanaugh’s concurrence includes a few notable examples of the Special Needs Doctrine, such as drug tests for high school athletes and drunk driving roadblocks. Unlike Klayman, which concerned the indiscriminate bulk collection of millions of citizens’ telephony metadata, these cases involved limited searches specific to an isolated government interest.

In United States v. United States District Court (1972) – the so-called “Keith Case” – the Supreme Court rejected the government’s argument that “the special circumstances applicable to domestic security surveillances necessitate a further exception to the warrant requirement.”

The Supreme Court did not find the government’s arguments persuasive:

But we do not think a case has been made for the requested departure from Fourth Amendment standards. The circumstances described do not justify complete exemption of domestic security surveillance from prior judicial scrutiny. Official surveillance, whether its purpose be criminal investigation or ongoing intelligence gathering, risks infringement of constitutionally protected privacy of speech. Security surveillances are especially sensitive because of the inherent vagueness of the domestic security concept, the necessarily broad and continuing nature of intelligence gathering, and the temptation to utilize such surveillances to oversee political dissent. We recognize, as we have before, the constitutional basis of the President’s domestic security role, but we think it must be exercised in a manner compatible with the Fourth Amendment. In this case we hold that this requires an appropriate prior warrant procedure.

Kavanaugh’s argument that the NSA’s domestic spying can override Fourth Amendment protections thanks to “special needs” is at odds with the Supreme Court’s holding in the Keith Case. If the Court expanded special needs to cover the bulk collection of telephony metadata, it would be the most expansive application of the doctrine to date.

It’s important to consider why Kavanaugh believes “bulk collection of telephony metadata serves a critically important special need – preventing terrorist attacks on the United States.”

In making this claim, Kavanaugh cited the 2004 9/11 Commission Report. That report does not directly recommend the bulk collection surveillance at issue in Klayman, nor does it argue that such a program would have prevented the 9/11 attacks.

In fact, the Privacy and Civil Liberties Oversight Board’s (PCLOB) 2014 report on the NSA’s bulk telephony surveillance program, published before Kavanaugh’s Klayman concurrence, found that the program was not a critically important part of the ongoing War on Terror:

Based on the information provided to the Board, we have not identified a single instance involving a threat to the United States in which the telephone records program made a concrete difference in the outcome of a counterterrorism investigation. Moreover, we are aware of no instance in which the program directly contributed to the discovery of a previously unknown terrorist plot or the disruption of a terrorist attack. And we believe that in only one instance over the past seven years has the program arguably contributed to the identification of an unknown terrorism suspect. In that case, moreover, the suspect was not involved in planning a terrorist attack and there is reason to believe that the FBI may have discovered him without the contribution of the NSA’s program.

Even in those instances where telephone records collected under Section 215 offered additional information about the contacts of a known terrorism suspect, in nearly all cases the benefits provided have been minimal — generally limited to corroborating information that was obtained independently by the FBI.

Kavanaugh’s assertion that the NSA’s invasive surveillance program is justified on national security grounds is simply not supported by the 9/11 Commission Report or the PCLOB’s report.

If the Senate does vote to confirm Kavanaugh, as is widely expected, he will likely be on the bench for decades. In that time, he will hear cases involving warrantless surveillance justified on national security grounds. This surveillance may involve facial recognition, drones, and other emerging surveillance methods. That a potential Supreme Court justice might view such warrantless surveillance as justified because of a national security-based “special needs” exception to the Fourth Amendment should worry everyone who values civil liberties. Members of the Senate Committee on the Judiciary must ask Kavanaugh to better explain his reasoning in Klayman.

Klayman v. Obama by Matthew Feeney on Scribd

Nationwide transit ridership in May 2018 was 3.3 percent less than in the same month of 2017. May transit ridership fell in 36 of the nation’s 50 largest urban areas. Ridership in the first five months of 2018 was lower than the same months of 2017 in 41 of the 50 largest urban areas. Buses, light rail, heavy rail, and streetcars all lost riders. 

These numbers are from the Federal Transit Administration’s monthly data report. I’ve posted an enhanced spreadsheet that has annual totals in columns GY through HO, mode totals for major modes in rows 2123 through 2129, agency totals in rows 2120 through 3129, and urban area totals for the nation’s 200 largest urban areas in rows 3131 through 3330.
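The year-on-year comparisons throughout this post are simple same-month percentage changes. A minimal sketch of the calculation (the ridership figures below are made-up placeholders, not actual FTA numbers):

```python
# Year-on-year percentage change in monthly transit ridership.
# The figures here are illustrative placeholders, not actual FTA data.

def yoy_change(current, prior):
    """Percentage change from the prior year's month to the current one."""
    return (current - prior) / prior * 100

# Hypothetical nationwide unlinked trips for May 2017 and May 2018 (millions)
may_2017 = 859.0
may_2018 = 830.7

print(f"{yoy_change(may_2018, may_2017):+.1f}%")  # prints -3.3%
```

The same formula, applied agency by agency or urban area by urban area against the spreadsheet's monthly columns, produces the gains and losses cited below.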

Declines in 2018 continue a trend that began in 2014. Year-on-year monthly ridership has fallen in 21 of the last 24 months, including each of the last seven. The principal cause is likely the growth of Uber, Lyft, and other ride-hailing services, but whatever the cause, there seems to be no positive future for public transit.

Among the urban areas that saw ridership increase, it grew by 1.2 percent in Houston, 2.2 percent in Seattle, 2.4 percent in Denver, 1.2 percent in Portland, 5.0 percent in Indianapolis, 7.8 percent in Providence, 7.2 percent in Nashville, and an incredible 63.1 percent in Raleigh. Most of the growth in Raleigh was students carried by North Carolina State University’s bus system.

On a percentage basis, the biggest losers were Miami, Boston, Cleveland, Kansas City, and Milwaukee, all of which saw about 11 percent fewer riders in May 2018 than May 2017. Ridership fell 9.2 percent in Phoenix, 8.0 percent in Jacksonville, 7.2 percent in Virginia Beach-Norfolk, 6.4 percent in Dallas-Fort Worth, 5.9 percent in Atlanta, and 5.6 percent in Philadelphia.

Numerically, the biggest losses were in New York, whose transit systems carried 12.7 million fewer riders in May 2018 than 2017; Boston, -4.1 million; Los Angeles, -2.4 million; Philadelphia, -1.7 million; and Miami, -1.4 million. Chicago, Washington, Atlanta, and Phoenix all lost more than half a million monthly riders.

Some people have argued that ridership is declining because of cuts to transit services. Others have concluded that the cuts to transit service “mostly followed, and not led falling ridership.” The posted spreadsheet includes data for vehicle-revenue miles of service that could support either view.

Transit service in both Houston and Seattle grew by 2.6 percent, supporting Houston’s 1.2 percent and Seattle’s 2.2 percent ridership gains. Indianapolis’ 5.0 percent increase in ridership was supported by a 9.9 percent increase in service. Service declined 2.0 percent in New York and 3.7 percent in Los Angeles, either reflecting or contributing to falling ridership in those urban areas.

However, ridership declined 2.5 percent in San Diego despite a 10.9 percent increase in service. Ridership in San Jose fell by 4.2 percent despite a 2.4 percent increase in service. Jacksonville’s 8.0 percent loss of riders came in spite of a 2.6 percent increase in service.

It seems clear that service levels are only one of the factors influencing transit ridership. Moreover, there appear to be rapidly diminishing returns to service: large service increases are needed to get small ridership gains. On the other hand, ridership declines reduce agency revenues, forcing reductions in service, leading to further ridership declines: a classic death spiral.
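The death-spiral dynamic can be illustrated with a toy feedback model. Every parameter value below is an invented assumption for the sake of illustration; nothing in the FTA data pins down these elasticities:

```python
# Toy "death spiral" model: an initial ridership loss triggers service
# cuts, which cause further ridership losses, and so on. All parameter
# values are illustrative assumptions, not estimates.

def simulate_spiral(riders, shock, elasticity, cut_share, years):
    """Start with a one-time ridership shock, then let service cuts
    (a fraction of the latest loss) and ridership responses feed back."""
    riders *= 1 - shock
    loss = shock
    for _ in range(years):
        service_cut = cut_share * loss        # agency trims service
        loss = elasticity * service_cut       # riders respond to the cut
        riders *= 1 - loss
    return riders

# With these assumed values, a 3% initial loss compounds to roughly a
# 4% total loss before the feedback dies out.
print(round(simulate_spiral(100.0, 0.03, 0.5, 0.5, 10), 1))  # prints 96.0
```

When the product of the elasticity and the cut share reaches 1, each round of cuts fully reproduces the previous loss and ridership never stabilizes: the spiral in its pure form.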

Transit industry leaders must be hoping for some kind of catastrophe that will send gasoline prices above $4 a gallon, for that is probably the only thing that could save the industry from its current trajectory. That is unlikely, and the industry is not worth saving any other way.

The Senate Judiciary Committee recently voted in favor of a bill that would update copyright law and apply new regulations to interactive streaming services, such as Spotify. The Music Modernization Act (MMA) addresses the issues of non-payment to copyright holders—the basis of a $1.6 billion lawsuit against Spotify—and undefined, unenforceable music property rights stemming from the lack of a comprehensive database recording the ownership of copyrights. In the current issue of Regulation, Thomas Lenard and Lawrence White recount the history of music copyright law and discuss some of the shortcomings of the MMA.

The New York Times quotes one supporter of the Act as stating, “This is going to revolutionize the way songwriters get paid in America.” But the MMA primarily incorporates streaming services into the existing framework through which distributors of music obtain permission from and provide compensation to music copyright holders.

A key provision of the MMA is that the Register of Copyrights would designate a Musical Licensing Collective (MLC) with two primary functions. The first is to serve as a collective rights organization that grants licenses for interactive streaming, receives royalties from streaming services, and remits the royalties to copyright holders. The second function is to create and manage a database of rights holders.

The revolutionary aspect of the MMA is the creation of such a database. Currently, the music industry lacks a comprehensive database that keeps track of copyrights, which has created the problem of nonpayment and limited music distributors’ ability to negotiate with individual copyright holders. Lenard and White contend that the database-building function of the MLC may be necessary because the economies of scale in managing such a database might be large enough to create a natural monopoly (though nongovernmental groups are already developing open source and blockchain initiatives to solve these problems).

However, Lenard and White argue that by linking the MLC’s database function with its role as a collective rights organization, the MMA simply extends a regulatory regime that limits competition. As it stands, the music copyright system largely consists of compulsory licenses and rates set by administrative or judicial proceedings. The MLC as outlined in the MMA would be a government-enforced monopoly with the same anticompetitive practices.

As Lenard and White state,

Whenever an opportunity for pro-competitive reform of music licensing arises, policymakers seem to revert to the traditional regulatory model that discourages competition. They never miss an opportunity…to miss an opportunity. The MMA— with its reliance on compulsory licensing, blanket licensing by a marketing collective, and regulated rates—is the latest of several recent examples.

Instead of extending the current anticompetitive regulations to streaming services, policymakers should update the music copyright registration system and allow a competitive copyright market to develop through which those copyrights are traded. Those changes would provide greater benefits for music creators, distributors, and consumers.

Written with research assistance from David Kemp.

Readers who watched the Cato forum last November on prosecutorial fallibility and accountability, or my coverage at Overlawyered, may recall the story of how a Federal Trade Commission enforcement action devastated a thriving company, LabMD, following a push from a spurned vendor. Company founder and president Mike Daugherty, who took part on the Cato panel, wrote a book about the episode entitled The Devil Inside the Beltway: The Shocking Exposé of the U.S. Government’s Surveillance and Overreach into Cybersecurity, Medicine and Small Business.

Last month two separate federal appeals courts issued rulings offering, when combined, some consolation for Daugherty and his now-shuttered company. True, a panel of the D.C. Circuit Court of Appeals, finding qualified immunity, disallowed the company’s claims that FTC staffers had violated its constitutional rights by acting in conscious retaliation for its criticism of the agency. On the other hand, an Eleventh Circuit panel sided with the company and (quoting TechFreedom) “decisively rejected the FTC’s use of broad, vague consent decrees, ruling that the Commission may only bar specific practices, and cannot require a company ‘to overhaul and replace its data-security program to meet an indeterminable standard of reasonableness.’”

As usual, John Kenneth Ross’s coverage at the Institute for Justice’s Short Circuit newsletter is worth reading, both descriptions appearing in the same roundup since they were decided in such quick succession:

Allegation: Days after LabMD, a cancer-screening lab, publicly criticized the FTC’s yearslong investigation into a 2008 data breach at the lab, FTC staff recommend prosecuting the lab. Two staffers falsely represent to their superiors that sensitive patient data spread across the internet. (It hadn’t.) The FTC prosecutes; the lab lays off all workers and ceases operations. District court: Could be the staffers were unconstitutionally retaliating for the criticism. D.C. Circuit: Reversed. Qualified immunity. (Click here for some long-form journalism on the case.)…

Contrary to company policy, a billing manager at LabMD—a cancer-screening lab—installs music-sharing application on her work computer; a file containing patient data gets included in the music-sharing folder. In 2008 a cybersecurity firm finds it and tells LabMD the file has spread across the internet. (Which is false.) When LabMD declines to hire the cybersecurity firm, the firm reports the breach to the FTC, which prosecutes the case before its own FTC judge. LabMD does not settle; the expense of fighting forces the company to shutter. The FTC orders LabMD to adopt “reasonably designed” cybersecurity measures. Eleventh Circuit: The FTC’s vague order is unenforceable because it doesn’t tell LabMD how to improve its cybersecurity.

Our friend Berin Szóka of TechFreedom sums it up: “The court could hardly have been more clear: the FTC has been acting unlawfully for well over a decade.” He continues by calling this “a true David and Goliath story”:

Well over sixty companies, many of them America’s biggest corporations, have simply rolled over when the FTC threatened to sue them [over data security practices]. … Only Mike Daugherty, the entrepreneur who started and ran LabMD, had the temerity to see this case through all the way to a federal court. …After losing his business and a decade of his life, Daugherty is a hero to anyone who’s ever gotten the short end of the regulatory stick.