Cato Op-Eds

Individual Liberty, Free Markets, and Peace

Sen. Dianne Feinstein has introduced the Bot Disclosure and Accountability Act, a proposal to regulate social media bots in a roundabout fashion. The bill has several shortcomings.

Automation of social media use exists on a continuum, from simple software that allows users to schedule posts throughout the day, to programs that scrape and share information about concert ticket availability or automatically respond to climate change skeptics. Bots may provide useful services, or flood popular topics with nonsense statements in an effort to derail debate. They often behave differently across different social media platforms; Reddit bots serve different functions than Twitter bots.

What level of automation renders a social media account a bot? Sen. Feinstein isn’t sure, so she’s relinquishing that responsibility to the Federal Trade Commission:

The term ‘‘automated software program or process intended to impersonate or replicate human activity online’’ has the meaning given the term by the [Federal Trade] Commission

If Congress wants to regulate Americans’ use of social media management software, it should do so itself. Instead, it would hand the hard and controversial work of defining a bot to the FTC, dodging democratic accountability in the process. Moreover, the bill demands that the FTC define bots “broadly enough so that the definition is not limited to current technology,” virtually guaranteeing initial overbreadth.

While the responsibility of defining bots is improperly passed to the FTC, the enforcement of Feinstein’s proposed bot disclosure regulations is accomplished through a further, even less desirable delegation. The Bot Disclosure and Accountability Act compels social media firms to adopt policies requiring the operators of automated accounts to “provide clear and conspicuous notice of the automated program.” Platforms would need to continually “identify, assess, and verify whether the activity of any user of the social media website is conducted by an automated software program”, and “remove posts, images, or any other online activity” of users that fail to disclose their use of automated account management software. Failure to reasonably follow this rubric is to be considered an unfair or deceptive trade practice.

This grossly infringes on the ability of private firms, from social media giants like Facebook to local newspapers that solicit readers’ comments, to manage their digital real estate as they see fit, while tipping the balance of private content moderation against free expression. Social media firms already work to limit the malicious use of bots on their platforms, but no method of bot identification is foolproof. If failure to flag or remove automated accounts is met with FTC censure, social media firms will be artificially incentivized to remove more than necessary.

The bill also separately, and more stringently, regulates automation in social media use by political campaigns, PACs, and labor unions. No candidate or political party may make any use of bots, however the FTC defines the term, while political action committees and labor unions are prohibited from using or purchasing automated posting software to disseminate messages advocating for the election of any specific candidate. It is as if Congress banned parties and groups from using megaphones at rallies. Would that prohibition reduce political speech? No doubt it would. How then can the prohibitions in this bill comport with the constitutional demand to make no law abridging the freedom of speech? They cannot.

Feinstein’s bill attempts to automate the process of regulating social media bots. In doing so, it dodges the difficult questions that attend regulation, like what, exactly, should be regulated, and foists the burden of enforcement on a collection of private firms ill-equipped to integrate congressional mandates into their content moderation processes. Automation may provide for the efficient delivery of many services, but regulation is not among them. Most importantly, the bill does not simply limit spending on bots. It prohibits political (and only political) speech by banning the use of an instrument for speaking to the public. Online bots may worry Americans, but this blanket prohibition of speech should worry us more.

Yesterday, the Supreme Court ruled that credit card provider American Express’ long-standing policy of including anti-“steering” clauses in its contracts with merchants was not anti-competitive.

Steering is the practice whereby merchants discourage customers from paying with comparatively high-cost cards like Amex and encourage them to use Visa or Mastercard instead. Importantly, anti-steering agreements do not limit merchants’ ability to favor debit cards or cash in their dealings with customers. The majority opinion, drafted by Justice Clarence Thomas, argues that there was no evidence of consumer harm in the form of higher prices or reduced output as a result of Amex’ anti-steering requirements.

The Court notes that the credit-card market is a market for transaction services and is two-sided, involving two distinct and interdependent sets of customers: merchants (who sell goods and pay fees to credit-card providers) and buyers (who use credit cards to pay for merchants’ goods).

Two-sided markets require a somewhat more sophisticated analysis in antitrust cases because of the interdependence of each side. For example, in a one-sided market, increasing merchant fees might be regarded as evidence of an abuse of monopoly power. In a two-sided market, such a fee increase can in fact be pro-competitive if it leads credit-card providers to offer better service to the other side of the market (buyers) thus increasing their purchases from merchants.

The peculiarities of two-sided markets have been noted by a long line of economists, including the 2014 Nobel Prize laureate Jean Tirole. His analysis, among others, was cited by Justice Thomas in his opinion.

Indeed, the evidence suggests that competition in the credit-card market is thriving. Take-up of cards has boomed since the 1970s. There is an endless variety of offerings among the various providers, with different (or no) annual charges, a diverse menu of rewards programs, and of course a range of interest rates on outstanding balances. In March, Amex in fact announced that it would cut its merchant fees to better compete with lower-cost providers. Since 2004, Amex fees have dropped by 10 percent.

However, even allowing that anti-steering agreements are not anti-competitive, could it be true nonetheless that anti-steering agreements hurt particular groups of consumers? Brookings’ Aaron Klein thinks so. In a piece published in the wake of yesterday’s ruling, he argues that “[the U.S.] payment system […] rewards the wealthy while penalizing the poor” and “functions as a hidden method of increasing income inequality.”

Klein’s argument is that anti-steering agreements lead merchants to increase prices. But because only Amex cardholders secure the countervailing benefits –  in the form of rewards programs and an increased ability to use their cards in daily purchases – and because Amex cardholder incomes tend to be higher than average, there is a regressive impact on lower-income households. They get the higher prices induced by Amex fees but cannot share in the benefits.

Leave aside for a moment that anti-steering agreements, as Justice Thomas observed, only apply to credit cards and not to other forms of payment, which means that merchants can in fact price-discriminate by setting minimum purchase thresholds and surcharges for using credit cards (as many do). Even ignoring this possibility, which Klein leaves unacknowledged, his claim of regressivity is not as straightforward as might at first be apparent, for several reasons.

1. Merchants’ decision to sign anti-steering agreements is voluntary.

Merchants do not forcibly sign anti-steering agreements with Amex. They only do so if they believe it is in their benefit to accept Amex cardholders, which is a function, first, of the fees charged by Amex compared to other card networks, and, second, of the share of purchases paid for with Amex cards which would not happen if the merchants refused to take Amex.

The qualification about purchases that would not otherwise happen is important because many cardholders multi-home, meaning that they hold Visa or Mastercard as well as Amex for instances in which one might not be accepted. My dry cleaner does not take Amex, but that has not yet deprived her of my custom.

Thus, merchants will only agree to the anti-steering conditions if the income earned from Amex purchases which could not otherwise be earned exceeds the cost of higher Amex fees. Many merchants will find accepting Amex worthwhile, but some will not. Indeed, there are still many outlets where all credit cards but Amex (and possibly Discover) are accepted.

In a competitive market with many providers and low switching and search costs (all of which broadly characterize most American retail markets), merchants will ask for the same price, adjusted for convenience factors such as the acceptance of Amex cards. Those who buy at Amex-positive outlets will pay a premium for it, while those who forgo the opportunity to buy with Amex can opt for other providers, who – other things being equal – will offer lower prices.

2. Amex cardholders and non-cardholders are different people, buying different things.

As Klein’s piece notes, lower-income households tend to have fewer credit card options, if any, than higher-income households. But to infer that the worse-off are therefore subsidizing the better-off because all pay the same price, but only cardholders get the rewards, is one simplification too many.

Households with different incomes differ not only in their likelihood of holding an Amex card, but also in the kinds of stores they patronize and the kinds of products they buy. This means that merchants have some means to make Amex cardholders internalize the price externality they would otherwise impose on non-cardholders, by passing on the Amex fees mainly, or exclusively, to those products that Amex cardholders buy.

Any pass-through will be crude because there is always overlap between the products bought by each of the two groups, but the homogeneous price increase for all customers posited by Klein is unlikely to reflect merchants’ reaction in practice.

3. There may be progressive redistribution among Amex cardholders.

Klein objects to the supposedly regressive impact of anti-steering agreements on those who do not hold Amex cards. But the existence of Amex rewards programs made possible by the anti-steering rules may be progressive, that is, it may redistribute benefits from higher-income to lower-income cardholders.

To see how this can be the case, consider that Amex cards tend to come with a “welcome bonus” involving, usually, a disproportionate amount of rewards (air miles, or gas points, or something else) for the first $1,000 spent on the card. These rewards are presumably as costly for Amex to give as any other rewards, so that the welcome gift must be subsidized from Amex’ ongoing business. The more households spend on their card above and beyond the first $1,000, the more they are subsidizing other cardholders. Furthermore, because card expenditure is broadly correlated with income, it is plausible that higher-income cardholders subsidize lower-income cardholders, for whom the first $1,000 is a bigger share of lifetime card expenditure.

This redistribution would offset the regressivity of steering restrictions on other buyers, to the extent there is any.

4. Innovation depends on a share of initial customers’ paying higher prices.

Innovations are always costlier at first. F.A. Hayek noted in The Constitution of Liberty that, without intending it, the rich who can afford new innovations help to make them progressively more accessible by encouraging investment and competition in the provision of the innovative good or service. From the personal computer to the smartphone to the transatlantic passenger flight, modern life is rich with examples of such gradual expansion in people’s access to new goods and services.

Credit cards are no exception. Justice Thomas noted that, since the 1950s when credit cards were first introduced, merchant fees have dropped by more than half. Because of the network effects characteristic of two-sided markets, every decrease in fees has likely disproportionately increased the amount of merchants accepting cards and the number of cardholders. The result is higher welfare for all and more widely shared affluence.

But this virtuous cycle depends on the ability of competitors to offer different price and quality options to prospective customers. Amex’ business model of higher merchant fees in exchange for more generous rewards programs is one form that such competition can take.

Klein concludes his piece by noting that financial technology can resolve the “inefficiencies of the current payment system [which] cry out for new financial technological solutions.” In this he is doubtless correct, if the history of financial innovation is anything to go by. Yet, to blame American Express for the remaining imperfections in the payment system would not just be wrong — it would be counterproductive.


In a decision that many First Amendment faithful might find too good to be true, in NIFLA v. Becerra, the Court delivered a solid victory for freedom of speech and against government agents who would force people to speak state-approved messages. Despite the hype to the contrary – and activists from both sides on the courthouse steps – this was NOT an abortion case.  The Court was able to separate the First Amendment principles at stake from that fraught subject.

Reiterating its previous rulings on similar provisions controlling speech based on its content, the Court held that any content-based speech regulation – in this case a California law that compels delivery of particular scripts regarding the availability of abortion services (but that could equally be applied to speech about adoption and prenatal services) – is presumptively unconstitutional. To regulate the content of speech, the government must show that it has the most important of reasons for regulating the speech in question, and that it is only prohibiting or mandating speech to the extent necessary to achieve that highly important and specific purpose. California failed to show that “compelling” interest, namely why it was necessary to single out pro-life pregnancy centers and conscript them into delivering the state’s message about low-cost abortion services.

Curiously, instead of showing why its law might be able to survive strict judicial scrutiny, California argued for an almost nonexistent level of scrutiny based on the clinic employees’ status as “professionals.” It also argued that the script the pro-life centers were forced to recite conveyed merely “factual” information. Justice Clarence Thomas, in his majority opinion, explained that there is no separate category of “professional speech” that deserves lesser First Amendment protection – and attempts by the U.S. Court of Appeals for the Ninth Circuit to create and enshrine such a category were misplaced and wrong.

As we argued in our brief, if government had a freer hand to commandeer “professional speech,” then a vast amount of speech could be compelled based on nothing more than an unsupported whim that the information might be “helpful” to those who hear it. Just like describing the California law as a regulation of “professional speech” couldn’t save it, neither could arguing that the disclosures were merely factual and uncontroversial, instituted to combat consumer misinformation. Even under that deferential standard, Justice Thomas wrote that compelled speech cannot be “unjustified or unduly burdensome.” California offered nothing more than hypotheticals to justify the need for its law, but the font size and number of languages it requires are not just burdensome, but threaten to drown out any message the crisis-pregnancy centers may want to convey.

This was an absolute win for the First Amendment. Not only did the Court refuse to create a new category of speech and designate it to receive less than full constitutional protection, it also repudiated the idea that the deferential standard the Court established in Zauderer v. Office of Disciplinary Counsel (1985) – to allow certain compulsions of purely factual information in a commercial context – can save compelled disclosures that impose a burden on the speaker and are anything less than uncontroversial.

In his brief concurrence to the Court’s opinion today in Trump v. Hawaii, the decision Ilya discusses just below, Justice Kennedy adds an important point about limits on presidential power, even where the president has wide discretion, as here. Thus, he writes:

There are numerous instances in which the statements and actions of Government officials are not subject to judicial scrutiny or intervention. That does not mean those officials are free to disregard the Constitution and the rights it proclaims and protects.

And he adds:

Indeed, the very fact that an official may have broad discretion, discretion free from judicial scrutiny, makes it all the more imperative for him or her to adhere to the Constitution and to its meaning and its promise.

Put plainly, broad discretion does not afford the president access to any means toward the ends for which discretion is given. Kennedy illustrates the point with First Amendment religion and speech guarantees. He could also have invoked the Korematsu case the Court raised, as Ilya mentions. There the president enjoyed wide discretion to conduct the nation’s foreign affairs during World War II, but the means he employed in that case—incarcerating innocent Japanese-Americans—ran afoul of the Constitution’s due process guarantees, thus properly requiring judicial intervention in an area otherwise beyond the Court’s authority.

The Supreme Court upheld President Trump’s travel ban in a 5-4 decision. The travel ban undermines a core principle of the U.S. immigration system since 1965: that the law will not discriminate against immigrants based on nationality or place of birth. The president has rewritten our immigration laws as he sees fit based on the thinnest national security pretext, setting a dangerous precedent for the future.

The ban entirely lacks any reasonable basis in the facts. Nationals of the targeted countries have not carried out any deadly terrorist attacks in the United States, and they are also much less likely to commit other crimes in the United States. Nor are their governments less able or willing than others to share information or adopt certain identity management protocols. 

We now know that the report that supposedly provided the detailed, “extensive,” and “thorough” analysis of every country in the world was just 17 pages, giving barely a tenth of a page to summarize facts relating to 200 countries. It was great to see Justice Sotomayor raise this fact in her dissent (p. 19), possibly as a result of a letter that Cato adjunct Ilya Somin sent to the court summarizing a post that I wrote on the subject.

As a matter of policy, no president should be given such broad power to determine immigration law. While the travel ban currently affects only a small share of immigrants and foreign travelers, all legal immigrants should be concerned that the president will wield this power against them next. Congress should immediately intervene to preserve its power to determine immigration policy.

It’s no surprise that the Supreme Court allowed Travel Ban 3.0 to remain in place, particularly given that the justices allowed Ban 2.0 to go into effect a year ago and this one last fall. This third version specifically carves out those with green cards, provides for waivers for those with special cases (family, medical emergencies, business ties, etc.), and also was tailored based on national-security considerations, to which the Court typically defers. One can disagree, as I do, with some of the policy judgments inherent in this executive action, but as a matter of law, the president – any president – gets a wide berth here.

The Court considered the president’s statements regarding this policy but ultimately had to apply a deferential standard; given the legitimate justifications explicitly set out in the “proclamation” announcing Travel Ban 3.0, the Court could not preference campaign rhetoric and tweets over legal documents in this context. “While we of course ‘do not defer to the Government’s reading of the First Amendment,’” Chief Justice John Roberts’s majority opinion says, citing Holder v. Humanitarian Law Project (2010), “the Executive’s evaluation of the underlying facts is entitled to appropriate weight, particularly in the context of litigation involving ‘sensitive and weighty interests of national security and foreign affairs.’”

Moreover, Congress set out a very deferential statutory regime. The majority opinion explains:

By its plain language, §1182(f) grants the President broad discretion to suspend the entry of aliens into the United States. The President lawfully exercised that discretion based on his findings—following a worldwide, multi-agency review—that entry of the covered aliens would be detrimental to the national interest. And plaintiffs’ attempts to identify a conflict with other provisions in the INA, and their appeal to the statute’s purposes and legislative history, fail to overcome the clear statutory language.

As the chief justice goes on to say, the dissenting justices don’t even try to make a case on that score.

In short, we can and should debate this or any other aspect of the Trump administration’s immigration policy, but not all political disputes can or should be resolved in the courts.

One final note: the majority opinion’s penultimate page takes issue with Justice Sonia Sotomayor’s invocation of Korematsu v. United States in her dissenting opinion:

Whatever rhetorical advantage the dissent may see in doing so, Korematsu has nothing to do with this case. The forcible relocation of U. S. citizens to concentration camps, solely and explicitly on the basis of race, is objectively unlawful and outside the scope of Presidential authority. But it is wholly inapt to liken that morally repugnant order to a facially neutral policy denying certain foreign nationals the privilege of admission.

Most importantly, “Korematsu was gravely wrong the day it was decided, has been overruled in the court of history, and—to be clear—’has no place in law under the Constitution.’” (citing Justice Robert Jackson’s dissent). This isn’t technically an overrule because that question wasn’t presented, but I would still advise lawyers not to cite Korematsu as good law in their briefing.

On a Saturday afternoon in Rochester, New Hampshire, Jehovah’s Witness Walter Chaplinsky addressed the City Marshal as “a God damned racketeer” and “a damned Fascist.” He was convicted of violating a state law that prohibited offensive words in public. The United States Supreme Court upheld the conviction and identified certain categories of speech that could be constitutionally restricted, including a class of speech called “fighting words.”

Writing for the Court, Justice Frank Murphy stated that “fighting words” are “no essential part of any exposition of ideas, and are of such slight social value as a step to truth that any benefit that may be derived by them is clearly outweighed by the social interest in order and morality.” In Hate: Why We Should Resist It with Free Speech, Not Censorship, Strossen explains the “fighting words” doctrine that grew from Chaplinsky:

“Fighting words” constitute a type of punishable incitement: when speakers intentionally incite imminent violence against themselves (in contrast with third parties), which is likely to happen immediately. In the fighting words situation the speaker hurls insulting language directly at another person, intending to instigate that person’s imminent violent reaction against the speaker himself/herself, and that violence is likely to occur immediately (64).

The government could, consistent with the First Amendment, punish such speech.

With Chaplinsky v. New Hampshire (1942), the Court’s “fighting words” jurisprudence began. Since Chaplinsky, the Court has overturned every fighting words conviction that has been brought before it.

This unraveling began with Terminiello v. Chicago in 1949. Father Arthur Terminiello was arrested for “breach of the peace” under a Chicago ordinance after delivering a speech in which he criticized various political and racial groups. The Court held that the ordinance unconstitutionally infringed upon Terminiello’s right to free expression. Justice Douglas explores the function of speech in the Court’s opinion:

It may indeed best serve its high purpose when it induces a condition of unrest, creates dissatisfaction with conditions as they are, or even stirs people to anger. Speech is often provocative and challenges. It may strike at prejudices and preconceptions and have profound unsettling effects as it presses for acceptance of an idea. That is why freedom of speech, though not absolute… is nevertheless protected against censorship or punishment, unless shown likely to produce a clear and present danger of a serious substantive evil that rises far above public inconvenience, annoyance, or unrest.

Strossen argues freedom of speech “is essential for forming and communicating thoughts, as well as for expressing emotions,” and also “facilitates the search for truth” (21). When speech is provoking, it often spurs debate, followed by introspection and reassessment, conditions conducive to social and intellectual growth.

In 1971, the Court again limited the “fighting words” doctrine in Cohen v. California. A California statute prohibiting the display of offensive messages barred then nineteen-year-old Paul Robert Cohen from wearing a jacket embellished with the words “Fuck the Draft.” The Court ruled that the statute violated freedom of expression as protected by the First Amendment. Cohen limited fighting words to those that involved a “direct personal insult.”

In writing the opinion for Cohen, Justice Harlan also echoes Strossen’s concern that censorship unleashes government to silence certain ideas, therefore undermining liberty and democracy and subverting equality: “…we cannot indulge the facile assumption that one can forbid particular words without also running a substantial risk of suppressing ideas in the process. Indeed, governments might soon seize upon the censorship of particular words as a convenient guise for banning the expression of unpopular views.”

Finally, in R.A.V. v. St. Paul (1992), the Supreme Court overturned St. Paul, Minnesota’s Bias-Motivated Crime Ordinance, which prohibited the display of a symbol which one knows or has reason to know “arouses anger, alarm or resentment in others on the basis of race, color, creed, religion or gender.” Justice Scalia, writing the opinion of the Court, states that the ordinance prohibits protected speech only because of the subjects the speech addresses, rendering it unconstitutional. Scalia explains: “the First Amendment does not permit St. Paul to impose special prohibitions on those speakers who express views on disfavored subjects … St. Paul has no such authority to license one side of a debate to fight freestyle, while requiring the other to follow Marquis of Queensberry rules.” This decision indicates that the Court is unlikely to countenance government restrictions on hate speech.

In Chaplinsky, the “fighting words” were uttered directly into the face of the victim. On Twitter such abuse is shared among strangers separated by space and time. That might put the final nail in the coffin of the “fighting words” doctrine. While some social media feuds may in some sense spur real-world violence, the delay between online provocation and terrestrial reaction is more than sufficient to foreclose fighting words designations based on threats of imminent violence. Online fora also give Americans a chance to engage in thoughtful and ultimately productive “counter-speech,” which “encompasses any speech that counters a message with which one disagrees” (158).

However, it is important to note that “certain ‘hate speech’ could satisfy even the current strict standard. Imagine, for example, a member of the Ku Klux Klan personally insulting a Black Lives Matter activist with racist epithets, or vice versa. Such individually targeted, deliberately provocative ‘hate speech’ presumably could be punished under the fighting words doctrine” (64).

Nonetheless, the narrowing of the fighting words doctrine is ultimately a good thing. Government’s ability to identify new categories of speech to regulate often leads to dangerous mission creep. The health of our institutions depends on free expression, and we must be wary of attempts to enforce ideological conformity. Because, as Justice Harlan observes in Cohen, “…it is nevertheless often true that one man’s vulgarity is another’s lyric.”

The U.S. Fifth Circuit Court of Appeals recently vacated an Obama-era rule that applied the “fiduciary rule” to Individual Retirement Account advisers, and struck the final blow to a regulation that has faced legal challenges since President Trump initiated a review of the rule last year. The Court determined that the rule constituted an overreach of the Department of Labor’s authority to regulate employee benefit plans.

Though the rule is now dead, its genesis is in a decades-long fight over how financial advisers and brokers are regulated. Advisers who provide financial advice for a fee, but do not sell financial products, are required to recommend the “best” financial product to clients. But brokers who buy and sell stocks and bonds for investors are only required to recommend “suitable” products if their advice is “solely incidental” to their service as a broker. The Department of Labor’s rule would have elevated all professionals who work with retirement plans, including brokers, to the level of a fiduciary.

Supporters of the rule argue that brokers, and other financial professionals working for commissions, have a conflict of interest with their clients. For example, advising clients to purchase investments that yield higher fees and commissions may benefit the broker at the expense of their clients. The rule’s proponents contend that without it investors cannot be sure that their interests align with those of their broker. As the New York Times recently declared, “Retirement investors, you’re back on your own.”

However, in the current issue of Regulation, I review a working paper that undermines this perception. In their December 2017 paper “The Misguided Beliefs of Financial Advisors,” economists Juhani T. Linnainmaa, Brian T. Melzer, and Alessandro Previtero analyze trading and portfolio information for advisers and clients in Canada who are not subject to fiduciary duty. If the “conflict-of-interest” perception of financial advisers is correct, the analysis would show systemic differences between advisers’ accounts and their clients’ accounts.

Instead, the authors found that the advisers’ personal investments are similar to their clients’. Both groups of accounts have net returns that are about 3 percent less than the market. And the advisers actually have less diversified portfolios and pay more in fees for their accounts relative to their clients. In fact, the authors argue that advisers are not steering their clients wrong because of a conflict of interest, but instead are simply misguided: contrary to the evidence, advisers believe that active management is a better strategy than passive management.

Perhaps investors should be worried about the advice they receive from financial advisers, but not because those advisers are trying to dupe them. Requiring advisers to provide their clients with the “best” recommendations achieves nothing if the advisers’ own perceptions are misguided.

House Republicans worked over the weekend to revise the Ryan immigration “compromise” bill in an attempt to bring enough Republicans on board to pass it.  Many restrictionist Republicans in Congress voted against the harsher Securing America’s Future (SAF) Act last week because it granted a path to legal status, though not citizenship, for some Dreamers; they consider a grant of legal status of any kind for Dreamers to be amnesty.  The political problem is that Ryan’s “compromise” bill strengthens the charge of amnesty because it offers a path to citizenship for a small number of Dreamers.  As a result, House Republicans are considering a national E-Verify mandate to get the restrictionists on board with the Ryan “compromise” bill (they are also considering an agricultural guest worker visa program so that Republicans from agricultural districts aren’t dissuaded by E-Verify). 

This political horse-trading is unlikely to work, but E-Verify has serious problems today that could grow into worse ones tomorrow.  People should consider these problems before climbing on the E-Verify bandwagon.

E-Verify is a federal electronic employment-eligibility verification system that allows employers to check the identities of new hires against government databases to confirm that they are legally eligible to work.  The system is intended to exclude illegal immigrants from the workforce and thereby reduce the incentive to immigrate here in the first place.  E-Verify is mandated for all new hires in a few states, and for some other categories of employers, but not nationwide.  If Congress ever mandates E-Verify nationwide, then all native-born Americans would also have to be run through E-Verify and get government permission to work in order for the system to have a chance of meeting its objective.  Any interior immigration enforcement system that seeks to reduce the employment of illegal immigrants, including E-Verify, will have to be used against legal immigrants and native-born Americans too. 

E-Verify has severe systematic problems and will not do much to turn off the wage magnet that attracts illegal immigrants.  That is a problem for several reasons, but what worries me most is what would happen after Congress mandates E-Verify and then realizes that it doesn’t work.  What steps would Congress then take to make electronic employment verification “work” as intended? 

In such a situation, I predict that Congress would try more invasive methods to make E-Verify effective, or would build a more invasive successor program.  Congress created the I-9 program as part of the 1986 Immigration Reform and Control Act (IRCA) to cut down on the hiring of illegal immigrants.  The I-9 requirement obliged employers to check new workers’ credentials and keep the paperwork on file for the government but, crucially, did not require approval by a government bureaucracy.  Although the I-9 requirement lowered the relative wages of illegal immigrant workers by increasing the legal risk faced by employers, it did not switch off the wage magnet. 

Recognizing the limited impact of the I-9, Congress created the Basic Pilot Program for electronic employment-eligibility verification in 1996, which eventually evolved into E-Verify.  If Congress ever mandates E-Verify nationwide, its many problems and the low rate of compliance in states where it is currently mandated will become obvious.  The Legal Workforce Act, the standard legislative proposal for a nationwide E-Verify mandate, creates new pilot programs for electronic verification – almost as if its authors anticipate that E-Verify will fail to live up to expectations.  The details of these future pilot programs are fuzzy, but the legislative mandate is broad.   

Should Congress mandate E-Verify nationally for all new hires, the next logical step after E-Verify’s failure would be to add a biometric component.  This would help close some of the holes in the system but likely result in more innovative workarounds.  The RIDE program is already adding pictures to E-Verify so employers can check driver’s license photos against job applicants.  The next obvious step for Congress would be to add additional biometrics like fingerprints which some of E-Verify’s biggest supporters already favor.  Just as the Social Security number grew from a personal identifier for retirement benefits to a financial identification tool to the primary government number used for legal employment, E-Verify’s uses will also grow over time.  Such a system could be used for all types of quick legal checks from purchasing firearms to buying cigarettes or renting an apartment. 

Adding new biometric identification tools to E-Verify is worrying for civil liberties, privacy, and identity-theft reasons, but its expansion to non-immigration enforcement purposes should prompt some reflection even among its most ardent supporters.   

The best thing about E-Verify is that it doesn’t work very well.  The worst thing about E-Verify is how Congress will react once it realizes that the system has failed to live up to its promise.  James Madison wrote, “Perhaps it is a universal truth that the loss of liberty at home is to be charged to provisions against danger, real or pretended, from abroad.”  While not true in all cases, Madison’s observation certainly rings true here, as interior immigration enforcement programs like E-Verify also restrain the freedom of American citizens.  Without a doubt, future programs designed to overcome E-Verify’s many flaws will go even further.

Last week the Supreme Court issued its ruling in Carpenter v. United States, with a five-member majority holding that the government’s collection of at least seven days’ worth of cell site location information (CSLI) is a Fourth Amendment search. The American Civil Liberties Union’s Nathan Wessler and the rest of Carpenter’s team deserve congratulations; the ruling is a win for privacy advocates and reins in a widely used surveillance method. But while the ruling is welcome, it remains narrow, leaving law enforcement with many tools that can be used to uncover intimate details about people’s private lives without a warrant, including persistent aerial surveillance, license plate readers, and facial recognition.


Timothy Carpenter and others were involved in a string of armed robberies of cell phone stores in Michigan and Ohio in 2010 and 2011. Police arrested four suspects in 2011. One of these suspects identified 15 accomplices and handed over some of their cell phone numbers to the Federal Bureau of Investigation. Carpenter was one of these accomplices.

Prosecutors sought Carpenter’s cell phone records pursuant to the Stored Communications Act. They did not need to demonstrate probable cause (the standard required for a search warrant). Rather, they merely had to demonstrate to judges that they had “specific and articulable facts showing that there are reasonable grounds to believe” that the data they sought were “relevant and material to an ongoing criminal investigation.”

Carpenter’s two wireless carriers, MetroPCS and Sprint, complied with the judges’ orders, producing 12,898 location points over 127 days. Using this information prosecutors were able to charge Carpenter with a number of federal offenses related to the armed robberies. 

Before trial Carpenter sought to suppress the CSLI data, arguing that the warrantless seizure of the data violated the Fourth Amendment, which protects “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” The district court denied Carpenter’s motion to suppress. He was found guilty and sentenced to almost 116 years in prison.

Carpenter appealed to the Court of Appeals for the Sixth Circuit, which affirmed his conviction.

The Doctrines

Since the 1967 Supreme Court case Katz v. United States courts have deployed the “reasonable expectation of privacy test” to determine whether law enforcement officers have conducted a Fourth Amendment search. According to this test, outlined in Justice Harlan’s solo Katz concurrence, officers have conducted a Fourth Amendment “search” if they violate a suspect’s subjective expectation of privacy that society is prepared to accept as reasonable.  

The Sixth Circuit determined that Carpenter did not have a reasonable expectation of privacy in his physical location as revealed by CSLI. This determination is consistent with the so-called “Third Party Doctrine” developed by the Supreme Court in United States v. Miller (1976) and Smith v. Maryland (1979). According to the Third Party Doctrine, people don’t have a reasonable expectation of privacy in information they voluntarily surrender to third parties such as banks and phone companies.

The Ruling

In an opinion written by Chief Justice Roberts and joined by Justices Ginsburg, Breyer, Kagan, and Sotomayor, the Court sided with Carpenter without jettisoning the “reasonable expectation of privacy” test or the Third Party Doctrine. The opinion is a narrow one, holding that the warrantless acquisition of historical CSLI does violate a reasonable expectation of privacy in physical location. In addition, the Court noted that the Third Party Doctrine remains in place, even if it doesn’t extend to CSLI.

According to the Court, this is because of the “unique nature” of cell-site records:

But while the third-party doctrine applies to telephone numbers and bank records, it is not clear whether its logic extends to the qualitatively different category of cell-site records. After all, when Smith was decided in 1979, few could have imagined a society in which a phone goes wherever its owner goes, conveying to the wireless carrier not just dialed digits, but a detailed and comprehensive record of the person’s movements.

We decline to extend Smith and Miller to cover these novel circumstances. Given the unique nature of cell phone location records, the fact that the information is held by a third party does not by itself overcome the user’s claim to Fourth Amendment protection.

The Court’s majority opinion makes a number of points to emphasize the revealing nature of CSLI, including the ubiquity of cell phones among American adults and the fact that, “when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.” The opinion goes on to discuss how government officials can “travel back in time” to retrace cell phone users’ behavior.

Surveillance Tools Left Available

The Carpenter ruling will have an immediate impact on law enforcement. Last year, law enforcement sent 125,000 CSLI requests to AT&T and Verizon. While presumably many of these requests were related to CSLI data revealing suspects’ movements for less than a week, it’s worth considering this comment from Laura Moy, Deputy Director of Georgetown Law’s Center on Privacy & Technology and former CSLI analyst for the Manhattan District Attorney.

However, law enforcement can still conduct intrusive and revealing warrantless searches using a wide range of technologies. In the Carpenter majority opinion the Court noted, “we hold that an individual maintains a legitimate expectation of privacy in the record of his physical movements as captured through CSLI.”

But our physical movements can be tracked without CSLI. The majority’s mention of CSLI allowing government officials to “travel back in time” reminded me of the Baltimore Police Department’s use of persistent aerial surveillance equipment, which its developer described as, “a live version of Google Earth, only with TiVo capabilities.” He wasn’t joking.

License plate readers are also useful tools for tracking physical movements. Immigration and Customs Enforcement (ICE), the federal agency responsible for deportations, has access to more than 2 billion license plate images, allowing its agents to engage in near real-time tracking and to access years-worth of location data. The license plate data available to ICE includes images from 24 of the US’ top 30 most populous metropolitan areas.

Law enforcement agencies across the country are pursuing real-time facial recognition capability. Other emerging police technologies, such as body cameras and drones, may soon be regularly outfitted with real-time facial recognition capability. This capability will provide another means by which police can track our physical movements.

Persistent aerial surveillance, license plate readers, and facial recognition remain a serious concern despite the Court’s ruling in Carpenter.

Yet the fact that the Carpenter ruling is narrow should not detract from its significance. It’s one of the most important Fourth Amendment Supreme Court cases in years and lays the groundwork for future cases involving a range of surveillance tools. 

For more on Carpenter, listen to Cato senior fellow Julian Sanchez and senior fellow in constitutional studies Ilya Shapiro discuss the case in a recent Cato Daily Podcast.

Should cryptocurrencies be regulated like securities? Financial regulators have been pondering this question for some time. In a Briefing Paper published today by the Cato Institute’s Center for Monetary and Financial Alternatives, I suggest that securities regulation would only seem warranted in certain clearly circumscribed cases. For the most part, cryptocurrencies should be treated like commodities.

It has been nearly a decade since “Satoshi Nakamoto” laid the intellectual foundations for Bitcoin, the first cryptocurrency platform. Since then, more than 1,600 peer-to-peer networks have emerged to disrupt established intermediaries. Cryptocurrencies, even in the comparatively bearish first half of 2018, have an aggregate market capitalization of nearly $300 billion.

While policymakers’ attention has gradually turned to designing an appropriate regulatory framework for this emerging technology, policy uncertainty persists. On one hand, some policymakers recognize the potential for cryptocurrencies to increase competition, reduce transaction costs and improve capital formation opportunities for firms. On the other, statements from regulators at the SEC and CFTC, the two agencies most closely monitoring the development of cryptocurrencies, have been unclear, equivocal, and sometimes outright contradictory.

In April, former CFTC chairman Gary Gensler suggested that ether, the cryptocurrency of the Ethereum network, should be treated like a security. This designation would have forced onerous new registration requirements on platforms that hold ether in custody and for trading, while making access to Ethereum by retail buyers more difficult. Furthermore, because the Ethereum platform provides the infrastructure for many other cryptocurrencies, such a move would have compromised the viability of large parts of the cryptocurrency market.

Fortunately, these risks have receded into the background since William Hinman, Director of the SEC’s Division of Corporation Finance, argued in a recent speech that ether, in its present form, wouldn’t qualify as a security because the Ethereum platform is heavily decentralized. The SEC’s classic test for a security defines it as (1) an investment of money (2) in a common enterprise (3) with the expectation of profits (4) from the efforts of others. In Hinman’s opinion, developments since Ethereum’s initial offering in 2014 mean that the platform presently fails to meet criteria (2) and (4).

The next step is to give Hinman’s welcome pronouncement regulatory heft. In the Briefing Paper released today, I propose that the SEC and CFTC formally establish a distinction between functional cryptocurrencies, such as Bitcoin and Ethereum, and promises of cryptocurrencies to be delivered in the future. Cryptocurrencies in the first category do not meet the criteria for a security and should be regulated as commodities. Those in the second category may be securities, depending on the individual circumstances of each issue.

The launch of cryptocurrencies is often preceded by what is, somewhat misleadingly, called an initial coin offering (ICO). An ICO involves the exchange of money today for the delivery of units of cryptocurrency in the future, where the funds are used to build a new platform. ICOs are a way for startups to raise capital, so in some circumstances they may tick the four boxes in the SEC’s security test. In particular, when buyers in an ICO are able to trade their holdings before the launch of the application, the contracts could constitute securities. In other cases, however, ICO agreements may simply be advance purchases of a good or service and not tradable before the platform goes live. Those agreements more closely resemble forward contracts and should be regulated like them.

In the paper, I propose just such a two-tier regulatory structure for ICOs, recognizing that some of them may fall under the securities laws, but that this will be determined by the circumstances of each case.

Apart from being consistent with Director Hinman’s position, the suggested approach balances consumer protection and the duties of financial regulators with an open environment for cryptocurrency innovation. It recognizes that most of the fraud about which the SEC has expressed concern happens at the ICO stage, so buyers might benefit from increased disclosures then. But it also takes account of the fact that excessive regulation of functional cryptocurrencies would stifle the market and throw a spanner in its further development, with few countervailing benefits in the form of market stability or consumer protection.

It is time for policy to catch up to the exciting development of cryptocurrency markets. But catching up shouldn’t mean smothering the technology with regulation, nor crudely applying the securities laws to all cryptocurrencies. The reality of this emerging market favors a more judicious approach.


President Trump recently held an event with some of the relatives of people killed by illegal immigrants in the United States.  Afterward, the White House sent out a press release with some statistics to back up the President’s claims about the scale of illegal immigrant criminality.  The President’s claims are in quotes and my responses follow.

According to a 2011 government report, the arrests attached to the criminal alien population included an estimated 25,000 people for homicide.

“Criminal aliens” are defined as non-U.S.-citizen foreigners, a category that includes both legal immigrants who have not naturalized and illegal immigrants.  The 25,064 homicide arrests he referred to occurred from August 1955 through April 2010 – a 55-year period.  During that time, there were about 934,000 homicides in the United States.  (As a side note, I had to estimate the number of homicides for 1955–1959 by working backward.)  Assuming that those 25,064 arrested aliens actually were convicted of 25,064 homicides, criminal aliens would have been responsible for 2.7 percent of all murders during that period.  Over the same period, non-citizens averaged about 4.6 percent of the U.S. resident population each year.  According to that simple back-of-the-envelope calculation, non-citizen residents were underrepresented among murderers.
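The back-of-the-envelope calculation above can be reproduced in a few lines, using only the figures quoted in this post:

```python
# A sketch of the back-of-the-envelope calculation described above,
# using the article's figures: 25,064 criminal-alien homicide arrests
# and an estimated 934,000 total U.S. homicides over 1955-2010.
alien_homicide_arrests = 25_064
total_homicides = 934_000

alien_share = 100 * alien_homicide_arrests / total_homicides
print(f"Criminal-alien share of all murders: {alien_share:.1f}%")  # 2.7%

# Non-citizens averaged roughly 4.6 percent of the resident population,
# so a 2.7 percent share implies underrepresentation among murderers.
noncitizen_population_share = 4.6
print(alien_share < noncitizen_population_share)  # True
```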

In Texas alone, within the last seven years, more than a quarter million criminal aliens have been arrested and charged with over 600,000 criminal offenses.  

We recently published a research brief examining the Texas data on criminal convictions and arrests by immigration status and crime.  In 2015, Texas police made 815,689 arrests of native-born Americans, 37,776 arrests of illegal immigrants, and 20,323 arrests of legal immigrants. For every 100,000 people in each subgroup, there were 3,578 arrests of natives, 2,149 arrests of illegal immigrants, and 698 arrests of legal immigrants.  The arrest rate for illegal immigrants was 40 percent below that of native-born Americans. The arrest rates for all immigrants and for legal immigrants were 65 percent and 81 percent below that of native-born Americans, respectively.  The homicide arrest rate for native-born Americans was about 5.4 per 100,000 natives, roughly 46 percent higher than the illegal immigrant homicide arrest rate of 3.7 per 100,000.  Relatedly, United States Citizenship and Immigration Services recently released data showing that the arrest rate for DACA recipients was about 46 percent below that of the resident non-DACA population.

More important than arrests are convictions.  Native-born Americans were convicted of 409,063 crimes, illegal immigrants were convicted of 13,753 crimes, and legal immigrants were convicted of 7,643 crimes in Texas in 2015. Thus, there were 1,749 criminal convictions of natives for every 100,000 natives, 782 criminal convictions of illegal immigrants for every 100,000 illegal immigrants, and 262 criminal convictions of legal immigrants for every 100,000 legal immigrants. As a percentage of their respective populations, there were 56 percent fewer criminal convictions of illegal immigrants than of native-born Americans in Texas in 2015. The criminal conviction rate for legal immigrants was about 85 percent below the native-born rate.

Criminal Conviction Rates by Immigration Status in Texas, 2015

Murder understandably garners the most attention.  There were 951 total homicide convictions in Texas in 2015. Of those, native-born Americans were convicted of 885 homicides, illegal immigrants were convicted of 51 homicides, and legal immigrants were convicted of 15 homicides. The homicide conviction rate for native-born Americans was 3.88 per 100,000, 2.9 per 100,000 for illegal immigrants, and 0.51 per 100,000 for legal immigrants.  In 2015, homicide conviction rates for illegal and legal immigrants were 25 percent and 87 percent below those of natives, respectively.

Homicide Conviction Rates by Immigration Status in Texas, 2015
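The relative-rate comparisons in the preceding paragraphs can be sanity-checked directly from the per-100,000 figures quoted above; the small discrepancies against the quoted percentages (e.g., 80.5 vs. 81) reflect rounding in the published rates. A minimal sketch:

```python
# Reproduce the Texas relative-rate comparisons from the per-100,000
# rates quoted in this post (Texas, 2015).
def pct_below(native_rate: float, group_rate: float) -> float:
    """Percent by which group_rate falls below native_rate."""
    return 100 * (1 - group_rate / native_rate)

# Arrests per 100,000: natives 3,578; illegal 2,149; legal 698
print(f"{pct_below(3578, 2149):.1f}")  # 39.9 -> "40 percent below"
print(f"{pct_below(3578, 698):.1f}")   # 80.5 -> quoted as 81 percent

# Criminal convictions per 100,000: natives 1,749; illegal 782; legal 262
print(f"{pct_below(1749, 782):.1f}")   # 55.3 -> quoted as 56 percent
print(f"{pct_below(1749, 262):.1f}")   # 85.0 -> "about 85 percent"

# Homicide convictions per 100,000: natives 3.88; illegal 2.9; legal 0.51
print(f"{pct_below(3.88, 2.9):.1f}")   # 25.3 -> "25 percent below"
print(f"{pct_below(3.88, 0.51):.1f}")  # 86.9 -> "87 percent below"
```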

Murderers should be punished severely no matter where they are from or what their immigration status is.  There are murderers and criminals in any large population, including illegal immigrants.  But we should not tolerate the peddling of misleading statistics without context.  What matters is how dangerous these subpopulations are relative to each other, so that the government can allocate resources to prevent the greatest number of murders possible.  Harsher enforcement of immigration law is thus a very inefficient way to reduce crime, because it targets a population that is less likely to murder or commit other crimes than native-born Americans are.  Illegal immigrants, non-citizens, and legal immigrants are all less likely to be incarcerated, convicted, or arrested for crimes than native-born Americans are. 

“A crisis is a terrible thing to waste,” is a phrase coined by Stanford economist Paul Romer. Politicians are always in search of new crises to address—new fires to put out—with rapid and decisive action. In their passion to appear heroic to their constituents they often act in haste, not taking the time to develop a deep and nuanced understanding of the issue at hand, insensitive to the notion that their actions might actually exacerbate the crisis.

An example of that lack of understanding was made apparent in a press release by the office of House Majority Whip Steve Scalise (R-LA) on June 22 supporting legislation that packages together over 70 bills (H.R.6) aimed at addressing the opioid (now mostly heroin and fentanyl) overdose crisis. The bills mostly double down on the same feckless—often deleterious—policies that government is already using to address the crisis. The release stated, “Whip Scalise highlighted a Slidell, Louisiana family whose son was born addicted to opioids, a syndrome called NAS, as a result of his mother’s battle with addiction.” 

The press release quoted Representative Scalise:

I highlight Kemper, a young boy from my district in Slidell, Louisiana. He was born addicted to opioids because his mother, while she was pregnant, was addicted to opioids herself…this example highlights something the Centers for Disease Control has noted. That is once every 25 minutes in America a baby is born addicted to opioids. Once every 25 minutes. That’s how widespread it is, just for babies that are born.

Before crowing that the “House Takes Action to Combat the Opioid Crisis,” as the press release was titled, Representative Scalise should get his science right. No baby is ever born addicted to opioids. As medical science has known for years, there is a difference between addiction and physical dependence—on a molecular level. Drs. Nora Volkow and Thomas McLellan of the National Institute on Drug Abuse pointed out in a 2016 article in the New England Journal of Medicine that addiction is a disease, and “genetic vulnerability accounts for at least 35 to 40% of the risk associated with addiction.” Addiction features compulsive drug use in spite of harmful, self-destructive consequences.

Physical dependence, on the other hand, is very different. As with many other classes of drugs, including antidepressants like Prozac or Lexapro, long-term use of opioids is associated with the development of a physical dependence on the drug. Abruptly stopping the drug can lead to severe withdrawal symptoms. A physically dependent patient needs the drug in order to function while avoiding withdrawal. Dependence is addressed by gradually reducing the dosage of the drug over a safe time frame. Once the dependence is overcome, such a patient will not have a compulsion to resume the drug.

NAS stands for Neonatal Abstinence Syndrome, a withdrawal syndrome resulting from physical dependence developed by the fetus due to the transplacental transmission of drugs being used by the mother during pregnancy. A combination of gradual opioid tapering with soothing and supportive measures resolves cases of NAS due to opioids.

In addition to opioids, cocaine can cause neonatal withdrawal syndrome, and alcohol has been long known to be a cause. In fact, much worse than NAS, Fetal Alcohol Spectrum Disorders (FASD) include head and face deformities, heart defects, and cognitive disabilities (none of which are sequelae in opioid-dependent newborns).

The distinction between addiction and physical dependence is important for a number of reasons. Because many people see addiction as a vice rather than a disease, stigmatizing a baby as “addicted” can result in the child being seen, and raised, as manipulative and “bad.” Unlike babies born with fetal alcohol syndrome, these babies usually grow up to be normal, healthy children. The conflation has also led some to advocate the forced treatment of opioid-dependent pregnant women, a violation of their right to informed consent that addiction specialists and medical ethicists alike consider an ethical violation.

The distinction is also important because the tendency of politicians and many in the media to use the words “addiction” and “physical dependence” interchangeably can conflate the two distinct conditions and feed the sense of urgency about the opioid overdose problem. This leads to policies that are not evidence-based and have unintended consequences. 

For example, multiple Cochrane systematic reviews of chronic non-cancer pain patients on long-term opioids have found an addiction rate of roughly 1 percent. And a January 2018 study of 568,000 patients prescribed opioids for acute post-surgical pain found a total “misuse” rate of 0.6 percent. While it is true that most heroin addicts began their opioid abuse with diverted prescription opioids, as cheaper heroin and fentanyl have flooded the market in response to the cutback in prescription opioids, more and more non-medical users are beginning with heroin. A recent study found that 33 percent of heroin addicts entering rehab in 2015 reported heroin as their gateway drug. Yet the government’s continued clampdown on the manufacture and prescription of opioids causes many patients—including those in hospitals—to suffer needlessly, while the overdose rate continues to surge.

The overdose crisis will only be properly addressed once it is widely understood that it is primarily due to non-medical users accessing illicit drugs in a booming black market fueled by drug prohibition. Until then, members of Congress would be well-advised to stop the hysterical rhetoric and learn some science.

David Boaz blogged today on the Washington Post story about a lawsuit regarding DC childcare regulations. DC is set to require directors of child-care facilities to obtain a bachelor’s degree in early childhood development, and assistant teachers and home-care providers to have Child Development Associate (CDA) certificates in the same subject.

The WaPo write-up follows the usual boilerplate for these discussions: on the one hand, providers say complying with the regulations will be burdensome and increase costs; on the other hand, the government talks up the educational benefits of the new regulations. This all implies there is a trade-off between quality and cost.

But is there? Not necessarily. This is a classic example of the government’s argument failing to consider the market for childcare as a whole.

Yes, requiring child-care workers to achieve higher qualification levels could result in more highly trained formal caregivers, who can help children to develop from an educational perspective. Such a regulation might also provide a “quality assurance” effect for some particularly conscientious parents.

But the effect on the quality of care received by the whole population of children is ambiguous. Regulation that restrains the supply of formal care will raise its price. If the price of formal childcare rises, then some parents will substitute toward more informal forms of care, or even stay home to care for their own children. By the government’s own definitions of “quality” (which may differ considerably from parents’ perceptions), there will be substitution into lower-quality settings as childcare becomes more expensive.

Previous academic work suggests the price effects of these types of regulations are large. Diana Thomas and Devon Gorry estimated that even the more modest requirement that lead teachers have a high school diploma increases childcare prices by between 25 and 46 percent. Hotz and Xiao likewise find that increasing the required education of center directors by one year reduces the number of childcare centers in the average market by between 3.2 and 3.8 percent. This effect manifests itself overwhelmingly in low-income areas, with quality improvements (proxied here by accreditation of the center) occurring in high-income areas.

In other words, the real trade-off is not quality vs. cost, but better quality for those rich enough to still afford formal care vs. less accessible care and higher prices for the poor. And that means lower quality and fewer options for the least well off – widening, rather than narrowing, supposed educational inequalities. Given that average annual full-time infant care in DC already costs upwards of $23,000 per year, one would think the government would be sensitive to these concerns about affordability.

The top left-hand story on the front page of the Metro section of today’s Washington Post:

Lawyers for the District argued Wednesday for the dismissal of a lawsuit that challenges city regulations requiring some child-care workers to obtain associate degrees or risk losing their jobs….

The requirements … stipulate that child-care center directors must earn bachelor’s degrees and assistant teachers and home-care providers must earn Child Development Associate (CDA) certificates.

Meanwhile, just across the page, in the top right-hand space:

About 1,000 teachers in D.C. Public Schools — a quarter of the educator workforce — lack certification the city requires to lead a classroom, according to District education leaders.

So how about this compromise: the child-care licensing requirement will go into effect, but it will be enforced by the crack management team at DC Public Schools?

The Court today reached the right result for the wrong reason. The majority extends the “reasonable expectation of privacy” to cell-site location data and thereby carves an exception to the third-party doctrine—as well as making various caveats about not reaching different technologies, security-related investigations, and other hypothetical situations. Good enough for restricting law-enforcement overreach in some cases, but just adding cautionary barnacles to a rusty and outmoded Fourth Amendment hull.

It’s Justice Neil Gorsuch who’s right that we need to go to the theory of the matter and the people’s right to be secure in their “persons, houses, papers, and effects” based not on privacy expectations but on property rights, contract law, and statutory protections (all of which can certainly be applied in the modern digital age). This very much aligns with what Cato argued in our amicus brief. Gorsuch styles his opinion as a dissent because he adjudges that Carpenter’s lawyers didn’t “preserve” such arguments, but it’s a concurrence in all but name.

If the Court doesn’t follow that philosophical line, going back to first principles rather than reinventing the Fourth Amendment with each technological revolution, its jurisprudence in this area will never escape the artificial muddle epitomized by the unsatisfying majority opinion here. 

[This may be updated and my colleagues are likely to have more analysis.]

Thirty years ago, NASA scientist James Hansen put greenhouse-effect warming on the map with his strident testimony indicating that global temperatures could then confidently be related to changes in atmospheric carbon dioxide. Two years ago, he made another prediction: several meters of sea level rise in this century. He told Scientific American:

Consequences [of climate change] include sea level rise of several meters, which we estimate would occur this century or at latest next century, if fossil fuel emissions continue at a high level. That would mean loss of all coastal cities, most of the world’s large cities and all their history.

There’s only one way to accomplish this: melt a substantial portion of Greenland’s ice. In fact, as early as 2004 he wrote that Greenland could lose a substantial portion of its ice in 100 years with the warming of this century, causing a total sea level rise of nearly 20 feet.

Fortunately, there are ways to test the hypothesis that Greenland is about to shed like a calico cat in the summer.  It turns out Greenland has experienced multiple millennia of heat at times during the last 125,000 years.

The last two million years or so have been punctuated by (at least) four major ice ages. The reigning theory is that they are driven by slight but predictable variations in earth’s orbit around the sun, as well as the opening of the circumpolar Southern Ocean and the rise of the Himalayas. Our orbit is an ellipse in which the relation to the sun changes over time. Right now, earth is closest to the sun in Northern Hemisphere winter, and furthest away in summer.  Under some conditions, that would lead to an ice age, but the seasonal distances between earth and sun also precess with time, and our orbit is right now quite circular. That makes the winter-summer difference small, preventing another ice age.

At the end of each of the last two ice ages, earth and sun lined up in a position where they are closest in Northern Hemisphere summer, and the orbit was highly elliptical, which means excess sunshine in the high latitudes, warming Greenland, and northern Eurasia and North America—a lot.

The Arctic tundra holds many secrets, including the fact that it was once forested. It’s now too cold for trees, but we also know how warm it has to be for trees to survive. Dying trees buried in highly acidic peat are preserved remarkably intact. Here, for example, is a log, radiocarbon-dated at 6,000 years old, that looks like new wood:

We became interested in this years ago, with the 2000 publication by Glen MacDonald of UCLA and several colleagues, showing that from roughly 7,000 to 9,000 years ago, “mean July temperatures along the northern coastline of Russia may have been 2.5° to 7.0°C [3.6° to 12.6°F] warmer than modern.” This is consistent with the 2016 finding of Jason Briner (University of Buffalo) that the difference between the warmest and coldest postglacial millennia is 5.4 +/- 1.8°F in Arctic Canada and Greenland. Last year, dating buried wood and cones, Leif Kullman found high-latitude summer temperatures at least 3.6°C [6.5°F] warmer than today, between roughly 6,500 and 11,200 (!) years ago.

What did all this mean for Greenland? According to a very recent paper by Lisbeth Nielsen of the University of Copenhagen, all that warming melted enough ice to raise sea level by roughly 0.15–1.2 m (0.5–4.0 ft) over several thousand years. That’s a far cry from Hansen’s 20 feet in a hundred years, and it’s telling that the 2016 Briner finding and Hansen’s forecast were concurrent.

It’s also noteworthy that, due to all this arctic warming, “Arctic sea ice cover was strongly reduced during most of the early Holocene and there appear to have been periods of ice free summers in the central Arctic Ocean.” And yet this creature survived:

All this recent research is consistent with a landmark 2013 paper by Dorthe Dahl-Jensen and several colleagues from the University of Copenhagen, who drilled a core through the Greenland ice to the beginning of the previous interglacial, 125,000 years ago, through the millennia known as the Eemian warm period.

If one thinks it was warm at the beginning of the current interglacial, the beginning of the last one was sweltering at high northern latitudes, given the Dahl-Jensen data. It used to be thought that a 6,000-year period, centered around 118,000 years ago, was around 3.6–5.4°F (2–3°C) warmer in summer than the 20th-century average for Greenland. But data in their core showed it averaged 10.8–14.4°F (6–8°C) warmer in summer for 6,000 years. For all of that, they estimate that Greenland lost about 30 percent of its ice, which would raise sea level about 1.38 inches per century over those six millennia. Not a Hansenian 20 feet in a hundred years, but about six-thousandths of that.

Quantitatively, here’s why Hansen’s hypothesis is wrong.

Assume that the six-millennia Eemian warmth averaged around the lower end of Dahl-Jensen’s estimate, some 6°C warmer in summer. That means the melting heat load (Greenland’s ice melts every summer) over Dahl-Jensen’s core region (northwestern Greenland) was:

6000 summers X 6°C = 36,000 degree-summers.

It’s doubtful we are going to keep warming our atmosphere with increased carbon dioxide for anywhere near 1,000 years, and climate models project summer warming around Greenland of, at the top of the range, around 5°C. But let’s assume these pessimistic parameters.

Although this is likely a huge exaggeration of the heat that humans could possibly unload on Greenland, the resulting heat load is

1000 summers X 5°C = 5,000 degree-summers.

Therefore, even in a 1,000-year worst-case scenario, the heat load is only about one-seventh of the Eemian’s, so we would melt only a small fraction of Greenland’s ice compared to the loss over the 6,000-year Eemian.
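The degree-summers arithmetic above can be checked in a few lines. This is just the text’s back-of-the-envelope comparison, with the 6,000-summer/6°C Eemian figure and the 1,000-summer/5°C worst case taken directly from the paragraphs above:

```python
# Back-of-the-envelope comparison of summer heat loads on Greenland,
# using the figures from the text (a rough illustration, not a climate model).

def degree_summers(n_summers: int, warming_c: float) -> float:
    """Cumulative summer heat load: number of summers times degrees C of warming."""
    return n_summers * warming_c

eemian = degree_summers(6000, 6.0)      # Dahl-Jensen's lower estimate
worst_case = degree_summers(1000, 5.0)  # pessimistic human-caused scenario

print(eemian)               # 36000.0
print(worst_case)           # 5000.0
print(worst_case / eemian)  # ~0.139, about one-seventh of the Eemian load
```

The ratio is what carries the argument: even under pessimistic assumptions, the cumulative heat delivered to Greenland is a small fraction of what the Eemian delivered.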

That’s a far, far cry from Jim Hansen’s 20 feet in 100 years. When his alarming sea-level rise hypothesis comes up (as it will) around the 30th anniversary of his 1988 testimony on June 23, rest assured that thousands of years of ice core data—real data instead of a speculative hypothesis—show the Greenland-driven disaster scenario to be simply untrue.

For years, the Justice Department’s Bureau of Alcohol, Tobacco, Firearms and Explosives has maintained that “bump stocks”—devices that allow a firearm to reciprocate slightly and assist in “bump firing”—are not “machineguns.” From 2007 to 2017, spanning multiple administrations (including the current one), the ATF issued 10 different opinion letters confirming that the devices were not “machineguns” or “machine gun conversions,” and thus did not fall under the purview of the National Firearms Act of 1934 and Gun Control Act of 1968, two federal laws which heavily regulate machine gun ownership.

Under federal law, a “machinegun” is a device “which shoots … automatically more than one shot … by a single function of the trigger.” With language so clear, the provision was never considered ambiguous by a reviewing court over 80 years of decisions—and the ATF’s interpretation remained consistent. It is for this reason that bump stocks, and crank-operated “Gatling guns,” while having a high rate of fire, have never been considered “machineguns.” (Yes, virtually anyone can own a Gatling gun under federal law.) What could change the state of such settled law, then? Political expediency.

After the October 2017 mass shooting in Las Vegas, where the shooter used a bump stock, President Trump made clear that he intended to see the devices banned “without going through Congress.” The administration then announced that it intended to “clarify” the NFA and GCA to include bump stocks within the statutory definition of “machinegun.” The issue is, of course, that no amount of “clarification” can lawfully make a statute say something it does not. That, however, did not seem to deter the ATF when it published a Notice of Proposed Rulemaking in March, threatening to stretch statutory language beyond the point of tearing, all in an attempt to use an 83-year-old law to do away with bump stocks.

Our Constitution requires that new laws be brought through Congress, not shoehorned into old ones by executive agencies. In that light, we have filed a regulatory comment, expressing our view that the ATF’s new “interpretation” is an attempt to force new restrictions as a matter of political expediency, not a good faith interpretation of existing law. The president undoubtedly has the authority to direct the actions of his principal officers, but when those directions urge the reversal of longstanding previous interpretations based on an unambiguous statute, they smell more of an attempt to improperly change the law than a valid exercise of constitutional authority.

The most fascinating phenomenon in American politics is the increasingly anti-immigration stance of politicians like Donald Trump, which contrasts with increasingly pro-immigrant public opinion.  Gallup has asked the same poll question on immigration since 1965: “In your view, should immigration be kept at its present level, increased, or decreased?”  Gallup’s question does not separate legal from illegal immigration, which likely means that answers to it undercount support for increasing legal immigration.  Gallup recently released its 2018 poll results.  Support for increasing legal immigration is at 28 percent – the highest point ever (Figure 1).  Support for increasing immigration is just one point below support for decreasing it – well within the 3-point margin of error (95% CI).

Figure 1

Gallup: Should Immigration Be Kept at Its Present Level, Increased, or Decreased?


Sources: Gallup.

The Gallup trend is the clearest and best for those of us who support increasing immigration but the General Social Survey shows a similar directional trend – although not nearly so dramatic (Figure 2).

Figure 2

GSS: Should Immigration Be Kept at Its Present Level, Increased, or Decreased?


Source: General Social Survey.

If the public is increasingly pro-immigration, why is the GOP so opposed to it?  The cause can’t be radically divergent opinions across partisan lines: according to the Gallup poll, 65 percent of Republicans think immigration is good for the country, compared to 85 percent of Democrats.

Another possibility is that anti-immigration voters care a lot more about the issue than pro-immigration voters and are willing to change their votes based on it.  For pro-immigration voters, immigration just isn’t their biggest issue.  The Gallup poll hints at this as 55 percent of those who are dissatisfied with the current immigration levels want to cut the numbers while only 22 percent who are dissatisfied want to increase the numbers.

Another issue is causality, as anti-immigration politicians could be pushing moderate Americans into a more pro-immigration position.  The crude language used by nativists, such as President Trump’s description of illegal immigrants as an infestation, can turn off a lot of voters in the same way that the Prop 187 campaign in California in the mid-1990s convinced many white voters not to support the GOP.  This is exactly the worry that Reihan Salam, a moderate restrictionist, has voiced. The spokesman for a political issue matters, and Trump is not a very good one.

Another potential explanation is the “locus of brutality,” a riff on the locus-of-control literature, which says voters are more supportive of liberalized immigration when they perceive it to be controlled.  Under that theory, border chaos, illegal immigration, refugee surges, and the perception of immigrant-induced disorder increase support for restriction.  Thus, countries with open immigration are mostly able to maintain those policies so long as immigration appears orderly.  Since disorder usually arises from poor government laws, more regulation can make immigration more chaotic and create demand for still more legislation in an endless cycle.  That locus-of-control pattern can be countered by the brutality of immigration enforcement: voters become more pro-immigration when they are confronted with the government’s brutal enforcement of immigration laws.  Prison camps for immigrant children thus create support for liberalization.

My final theory is that this is the last gasp of nativism.  Political movements that are terminally ill due to shifting public opinion often go all out, as each election may be their last chance to win.  Think George Wallace and segregation.  During the 2016 campaign, then-Senator Jeff Sessions said that the election was the “last chance for Americans to get control of their government.”  When it comes to public opinion trends on cutting immigration, he is probably correct.

The public is becoming increasingly pro-immigration.  The Democratic Party increasingly reflects that changing public opinion, while the Republican Party captures an increasing percentage of the shrinking but still sizable anti-immigration bloc.  There will come a point, should public opinion continue to move in favor of increasing immigration, where both parties adopt this position.

The House is scheduled to vote tomorrow on the Border Security and Immigration Reform Act, the supposed GOP compromise bill. The authors claim in their bill summary that “the overall number of visas issued will not change,” yet that is simply incorrect. In fact, the proposal would reduce legal immigration by at least 1.4 million over 20 years.

The bill would reduce the number of legal immigrants in five ways: 1) eliminating the diversity visa lottery, 2) ending sponsorship of married adult children of U.S. citizens, 3) ending sponsorship of siblings of U.S. citizens, 4) restricting asylum claims, and 5) indirectly by restricting overall immigration, which will lead to fewer sponsorships of spouses, minor children, and parents of naturalized citizens years later. The bill partially offsets these effects by increasing employer-sponsored immigration and by granting permanent residency to some Dreamers in the United States, but the net effect is still strongly negative.

Table 1 breaks down the cuts to legal immigration by category over the 20-year period from 2020 to 2039. The net effect is a reduction in legal immigration of 1.4 million including Dreamers or 2.1 million not counting Dreamers toward the total. This is a cut of 7 percent or 10 percent in the number of legal immigrants that would have been allowed to enter under current law.

Table 1: Difference in the number of legal immigrants

Eliminating the diversity visa lottery (#1) would reduce legal immigration by about 1 million over 2 decades; ending the sponsorship of married adult children of citizens (#2) by about 465,000, ending the sponsorship of siblings of citizens (#3) by 1.4 million; restricting asylum (#4) by about 260,000; and limiting overall immigration (#5) would indirectly reduce sponsorships of spouses, minor children, and parents of U.S. citizens by almost 218,000. The bill would increase employer-sponsored immigration by almost 1.3 million and would provide permanent residence to about 700,000 Dreamers.
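The category figures above can be tallied to recover the net effect. A minimal sketch using the approximate numbers from the text (the category labels in the dictionaries are shorthand for the five cuts and two increases just described):

```python
# Tallying the bill's estimated 20-year effects on legal immigration,
# using the approximate category figures from the text.

cuts = {
    "diversity visa lottery": 1_000_000,
    "married adult children of citizens": 465_000,
    "siblings of citizens": 1_400_000,
    "asylum restrictions": 260_000,
    "indirect immediate-relative effects": 218_000,
}
increases = {
    "employer-sponsored immigration": 1_300_000,
    "Dreamers granted permanent residence": 700_000,
}

net_with_dreamers = sum(increases.values()) - sum(cuts.values())
net_without_dreamers = increases["employer-sponsored immigration"] - sum(cuts.values())

print(f"net including Dreamers: {net_with_dreamers:,}")    # about -1.34 million
print(f"net excluding Dreamers: {net_without_dreamers:,}") # about -2.04 million
```

Rounded, these match the post’s headline figures of a 1.4 million cut counting Dreamers and a 2.1 million cut not counting them.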

In an earlier post, we calculated the number of Dreamers likely to legalize and then receive permanent residence under the House bill. Because the bill’s requirements are exceptionally restrictive, much more so than the already quite limited DACA program, we estimated that only 698,620 would end up receiving permanent residence. We also used our previous estimate of the effects of this bill’s asylum restriction, which would require asylum seekers to prove their claims at the border under a much higher standard. We estimated that it would cut the number of asylees in half, but the effect could be even more severe.

We used a statistical model to calculate how many fewer immediate relatives—spouses, minor children, and parents—of U.S. citizens would be sponsored if the other categories are cut.[i] Because immigrants can naturalize and marry a foreigner or sponsor their parents, they trigger increases in the immediate relative category after 5 years. We discounted the effect of Dreamers on increases in this category by 20 percent because the vast majority of them cannot sponsor their parents because their parents crossed the border illegally (some parents, who entered with legal visas, could be sponsored).

Table 2 compares single-year flows in 2019 and 2029, showing how, after just 10 years (and after the Dreamers have all received their green cards), the annual cut would be 11 percent relative to 2019.  Going forward to 2039, the annual cut would grow to 12 percent. Table 3 at the end of the post shows how the bill would affect each immigration category from 2019 to 2039.

Table 2

It is important to note that almost all of the visa increases in the House bill would go to people already in the United States, whether Dreamers or employer-sponsored immigrants, who are almost always already here on temporary worker visas. By contrast, the cuts to legal immigration fall primarily on people outside the United States, particularly in the family-sponsored categories and the diversity visa lottery. In other words, this legislation would do more to reduce overall population growth than these numbers suggest.

In addition to reducing legal immigration, after 2019, the bill cancels the applications of people in the categories for married adult children and siblings of U.S. citizens who have waited in line for, in many cases, decades: nearly 3 million people. This adds unfairness to the economic harms associated with cutting immigration. 

Reducing legal opportunities to immigrate to the United States will only encourage more illegal immigration, going against one of the stated goals of the bill. Moreover, canceling the applications of legal immigrants would cause immigrants in the future to lose faith in the legal immigration system. This would further incentivize people to seek illegal avenues to come to the United States. 

At a time when U.S. population growth is at its lowest level since the Great Depression and the fertility rate has plunged to the lowest level on record, the United States needs more people, not fewer. Congress should increase the number of immigrant visas and create other lawful avenues of entry, such as more temporary work visas, that have proven to reduce illegal entry. Cuts to legal immigration and a lack of legal work visas, by contrast, harm the U.S. economy and encourage more people to enter the country unlawfully.


[i] We used a vector autoregressive (VAR) model to estimate the effects of the cuts to legal immigration on the immediate relative (IR) category (spouses, minor children, and parents of U.S. citizens). A VAR models the relationship among multiple variables of interest as they evolve together over time.  After estimating a VAR, one can impose a shock to one variable in a single year and trace out how that shock affects each variable over a forecast horizon; this is an impulse response function (IRF).  The IRF thus allows one to assess the impact of a shock to non-IR immigrant inflows on IR inflows over time, taking into account the dynamic, interrelated nature of the two series. We delayed the effect of the cut until six years after the shock because the law requires at least five years of residency prior to naturalizing and sponsoring. Because the two series are likely related across time, we computed orthogonalized IRFs, which ensure that the shocks to each variable are independent of one another, i.e., that no feedback loop exists between the two sets of shocks.  Additionally, each variable is expressed as its natural logarithm to account for non-stationarity in each time series. Credit to Andrew Forrester for his assistance with this model.
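The footnote’s approach can be sketched in miniature. The snippet below simulates two log inflow series from an assumed VAR(1) process, re-estimates the coefficients by least squares, and traces out a (non-orthogonalized) impulse response. The data, lag order, and coefficients are illustrative assumptions only, not the actual model or figures used in the post.

```python
import numpy as np

# Illustrative sketch only: simulated data and an assumed VAR(1), not the
# actual model, lag order, or series behind the post's estimates.
rng = np.random.default_rng(0)

A = np.array([[0.6, 0.0],    # assumed coefficients: non-IR inflows (row 0)
              [0.3, 0.5]])   # IR inflows respond to lagged non-IR inflows
T = 500
y = np.zeros((T, 2))         # columns: log non-IR inflows, log IR inflows
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Estimate y_t = A_hat @ y_{t-1} + e_t by least squares.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

# Impulse response to a one-unit shock in non-IR inflows (column 0):
# the effect after h periods is column 0 of A_hat raised to the h-th power.
horizon = 10
irf = [np.linalg.matrix_power(A_hat, h)[:, 0] for h in range(horizon + 1)]
# irf[h][1] traces how the non-IR shock feeds into IR inflows over time.
```

An orthogonalized IRF, as the footnote describes, would additionally decompose the residual covariance (e.g., by Cholesky factorization) so that the shocks to the two series are uncorrelated.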