Cato Op-Eds

Individual Liberty, Free Markets, and Peace
Subscribe to Cato Op-Eds feed

This month, Attorney General Jeff Sessions announced that the Department of Justice would institute a “zero-tolerance policy” along the Southwest border, stating that he wants to criminally prosecute 100 percent of all illegal entries. Sessions claimed that “a crisis has erupted at our Southwest Border that necessitates an escalated effort to prosecute those who choose to illegally cross our border.” Yet the “crisis” amounts to a flow of illegal immigration 96 percent lower than the level in the 1980s and lower than just two years ago.

Because we cannot know how many border crossers actually evade capture, the best measure of illegal entries is the number of crossers that Border Patrol apprehends. Of course, more agents result in more apprehensions for the entire Border Patrol, which is why it is important to control for the level of enforcement by focusing on the number each agent arrests. More attempted crossings generally translate into more apprehensions for the average agent. Figure 1 presents the average number of monthly apprehensions along the Southwest border per Border Patrol agent. The average monthly apprehensions per agent in Fiscal Year 2018 was less than 2, which is 95.5 percent lower than the rate in the peak year of 1986.
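The per-agent rate in Figure 1 is a simple ratio of apprehensions to agents. The sketch below shows the calculation; the input counts are illustrative placeholders, not Border Patrol's actual figures.

```python
# A sketch of the per-agent metric behind Figure 1. The input figures here
# are illustrative placeholders, not Border Patrol's actual counts.

def apprehensions_per_agent(monthly_apprehensions, agents):
    """Average monthly Southwest apprehensions per Border Patrol agent."""
    return monthly_apprehensions / agents

def percent_decline(earlier_rate, later_rate):
    """Percent decline from an earlier per-agent rate to a later one."""
    return (1 - later_rate / earlier_rate) * 100

# For example, ~31,000 monthly apprehensions spread across ~16,600 agents
# works out to a per-agent rate just under 2.
rate = apprehensions_per_agent(31_000, 16_600)
```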

Figure 1: Average Monthly Southwest Apprehensions Per Border Patrol Agent, FY 1980 to FY 2018


Source: Agents—Border Patrol & TRAC; Apprehensions—Border Patrol (1980-2017 & 2018)

The 1.9 average monthly apprehension rate for each agent so far for 2018 is exactly the average rate for the last decade. How does the Department of Justice find a crisis in these figures? It states, “the Department of Homeland Security reported a 203 percent increase in illegal border crossings from March 2017 to March 2018.”

To show why comparing one month in 2018 to the same month in 2017 is misleading, Figure 2 compares the first six months of each fiscal year since 2010. I bolded FY 2018 in black just so that it is visible among the thicket of near parallel lines. It is obvious that FY 2017 (orange), not 2018, is the abnormal year. Every other year followed the same pattern: lower apprehensions in the fall and winter, higher apprehensions in the spring and summer. 2017 broke this pattern. People came in higher numbers in the fall and winter of FY 2017—i.e. October 2016 to February 2017—while fewer came in the spring of 2017. Now the pattern has simply returned to normal.

Figure 2: Average Southwest Apprehensions Per Border Patrol Agent by Month, FY 2010 to FY 2018

Source: Border Patrol (agents); Apprehensions—Border Patrol (2010-2017 & 2018)

This fits with the hypothesis that I proposed last August: that Trump’s campaign rhetoric had a major effect on border crossings. People moved up their travel plans to hedge against the possibility that President Trump would institute major reforms to border security. In other words, Trump caused an increase in illegal immigration starting before the election and a decrease after his inauguration, but no net change in total arrivals. I predicted that the prior trend would return once migrants and asylum seekers realized that the hype was overblown. This is exactly what has happened.

Sessions should not use the anomalous months of 2017 to argue that the border crossings in 2018 are at “crisis” levels. There is simply no evidence to support this view.  

One of the jobs of a think tanker is to synthesize information from other sources and put it in the context of his or her particular field. Hard data are particularly important to our work because data are measurable outcomes from policy and practice in the real world. No one cares what anyone at Cato “feels.” Feelings have their place, of course. Measuring the feelings of a particular group or groups of people can be useful in the aggregate because people will act in accordance with those feelings, but those feelings make up just another metric on which we collect data to explain the world. Reliable data and provable outcomes are fundamental to shaping and forming effective public policy.

As irritating as strict libertarians may find it, several bodies within the federal government are very good at collecting and analyzing data. One of these bodies is the U.S. Sentencing Commission. From the USSC website:

The U.S. Sentencing Commission, a bipartisan, independent agency located in the judicial branch of government, was created by Congress in 1984 to reduce sentencing disparities and promote transparency and proportionality in sentencing.

The Commission collects, analyzes, and distributes a broad array of information on federal sentencing practices.  The Commission also continuously establishes and amends sentencing guidelines for the judicial branch and assists the other branches in developing effective and efficient crime policy.

The Commission publishes data on the impacts of sentencing, the levels of recidivism among different populations of the formerly incarcerated, and other inputs and outputs related to our federal carceral system. For criminal justice researchers of all levels, the Commission provides detailed and easily accessible information about how federal policy and law translate into practice and outcomes.

Another component of the Commission’s work is passing along recommendations about sentencing law to Congress. Reasonable people may disagree with these recommendations, but the Commission clearly bases them on the best available data it has collected and analyzed. Individuals of all ideological persuasions should want any nominee to the Commission to share this dedication to data collection and evidence-based practices. Instead, President Trump nominated William “Bill” Otis.

Otis is an adjunct professor at Georgetown University Law Center and spent many years in the Justice Department. He has consistently lambasted bipartisan efforts to reduce sentences and remains a stalwart proponent of the “tough on crime” rhetoric of the 1980s, warning of great crime waves that will follow widespread sentencing reduction.

Otis marshals no empirical evidence for his claims—because there isn’t any. And that’s the problem.

To be clear, I’m not worried about Otis’s nomination because he’s conservative. Plenty of conservatives work on criminal justice issues, and some have led the way on reforms. Republican governors and GOP-controlled legislatures in Georgia, Texas, and other “red” states have passed significant criminal justice reforms that reduced prison and jail populations while also reducing crime rates. When presented with evidence-based opportunities to help individuals and save public money, many realized criminal justice reform could be a conservative cause.

The problem is, as Julie Stewart, the founder of Families Against Mandatory Minimums, wrote in 2015, “Otis is impervious to facts and evidence.” Put another way, Bill Otis is interested in the politics, not the policy, of criminal justice.

The mountains of data that support less carceral policies and alternatives to incarceration have not swayed Otis’s rhetoric at all. When a man who had benefited from a reduced sentence for a crack cocaine conviction brutally murdered a woman and two children, Otis was quick to blame the shortened sentence:

Three people, including two children, are dead today because of early release from a duly imposed, lawful and fully deserved federal drug trafficking sentence.

How many times were we lectured that those released under lowered sentencing rules would be only “low level, non-violent offenders?” I don’t know, exactly. Hundreds if not thousands.

Question:  How many more lives are the congressmen and senators who support the [Sentencing Reform and Corrections Act] willing to see sacrificed for their “we’ve-been-too-tough” agenda?

An exact number, please, gentlemen.  We want to remember who you are on election day.  And we will.

It’s Willie Horton all over again.

Yes, a few people who are incarcerated for drugs may do horrible things when they get out. Most of them, even those who commit new offenses, will not. According to several Commission reports, the most common reason for re-arrest among federal drug offenders is a violation of supervision policies—that is, getting drunk or committing some other minor infraction—and certainly not murder.

The latest data from the Sentencing Commission show no statistical difference in recidivism between those released early under new drug sentencing guidelines and those who served the longer sentences:

The recidivism rates were virtually identical for offenders who were released early through retroactive application of the [Fair Sentencing Act] Guideline Amendment and offenders who had served their full sentences before the FSA guideline reduction retroactively took effect. Over a three-year period following their release, the “FSA Retroactivity Group” and the “Comparison Group” each had a recidivism rate of 37.9 percent.

These data coincide with previous data from the Commission that measured the effects of crack cocaine sentencing retroactivity that found no significant statistical difference between recidivism among those who got out early and those who served the full sentence.

None of this is to say that policymakers should not find ways to reduce recidivism, no matter how serious the offenses. But Otis’s belief that serving long drug sentences will make convicted individuals more lawful citizens is at odds with what liberals, conservatives, progressives, and libertarians in criminal justice have found in years of research and in measuring the effects of new policies put into practice. The push for reducing incarceration is not a conspiracy of groups taken in by George Soros and Al Sharpton, as Otis suggested in 2016, but a broad coalition of individuals, organizations, and lawmakers who look at the evidence and formulate policy accordingly.

The Sentencing Commission provides, among many other things, a trove of information that can teach us more about how our policies do and do not work. Otis’s nomination signals a return to reactionary politics based on what some people think and feel, rather than what they can show and prove.

I was in the courtroom for this morning’s argument in Trump v. Hawaii, otherwise known as the “travel ban” case. Recall that this is Travel Ban 3.0, which is the most detailed executive action regarding entry restrictions yet. Indeed, Solicitor General Noel Francisco called it the most detailed immigration proclamation ever (in contrast to earlier ones by President Carter regarding Iranians and President Reagan regarding Cubans).

It’s an odd case: as Neal Katyal, lawyer for Hawaii and the other state and private challengers, put it, if Donald Trump hadn’t made all his various campaign statements and tweets about Muslim bans, “we wouldn’t be here.” In other words, “no president has ever said anything like this.”

In a normal case involving an executive action over national security, no court would ever second-guess the president. But this isn’t a normal case or a typical president, so the Supreme Court struggled mightily over a travel ban that, all sides seem to agree, wouldn’t be a legal controversy if any other president had implemented it. Indeed, the whole course of the litigation would’ve been different if Travel Ban 1.0—the one President Trump signed his first week in office without interagency process or guidance to the line agents who were supposed to implement it, causing chaos at airports—had been skipped and we’d gone straight to the more fully lawyered 2.0. I doubt there would’ve been quite as much judicial resistance, or as much willingness to treat this president differently from the presidency.

But that’s a historical counterfactual, so you go to court with the facts you have.

Of course, it’s not that unusual for a court to apply a law to factual circumstances that were never contemplated. Here, the relevant immigration provision gives the executive wide discretion to deny entry to any class of foreigners upon finding that their entry would be detrimental to the national interest, and it’s not hard to square that with other provisions regarding nondiscrimination in granting visas. Courts don’t get to review that kind of determination.

That really should be the end of it, even if one thinks, as I do, that the travel ban doesn’t do much for national security and has a greater symbolic than practical effect. And it should be the end of it regardless of whether one thinks that in his heart of hearts Donald Trump has anti-Muslim animus.

Chief Justice John Roberts will try mightily to cobble together a coalition to make this case go away on jurisdictional or other narrow grounds. Justice Neil Gorsuch seems ready to join him (presumably Justice Clarence Thomas too), while Justice Samuel Alito was clearly with the government on the merits. Justice Elena Kagan was the only one on the left who raised pointed questions of Katyal; given her views on administrative law and the breadth of the immigration statute here, she’s “gettable” for some sort of technical compromise. To do so, the Court would likely have to finesse Sale v. Haitian Centers Council (1993), in which it found claims against immigration-related executive actions to be justiciable (before recognizing the executive’s broad discretion in this area).

Given that weird cases make for bad law, we can only hope that, however the Court rules, no strong precedent is set.

I wrote last month that new regulations and taxes in California’s legalized marijuana regime are likely to result in a situation in which

a few people are going to get rich in the California marijuana industry, and fewer small growers are going to earn a modest but comfortable income. Just one of the many ways that regulation contributes to inequality.

Now the East Bay Express in Oakland offers a further look at the problem:

Ask the people who grow, manufacture, and sell cannabis about the end of prohibition and you’ll hear two stories. One is that legalization is ushering a multibillion-dollar industry into the light. Opportunities are boundless and green-friendly cities like Oakland are going to benefit enormously. There will be thousands of new jobs, millions in new tax revenue, and a drop in crime and incarceration.

But increasingly you’ll hear another story. The state of California and the city of Oakland blew it. The new state and city cannabis regulations are too complicated, permits are too difficult and time consuming to obtain, taxes are too high, and commercial real estate is scarce and expensive. As a result, many longtime cannabis entrepreneurs are either giving up or they’re burrowing back into the underground economy, out of the taxman’s reach, and unfortunately, further away from the social benefits legal pot was supposed to deliver….

Some longtime farmers, daunted by the regulated market’s heavy expenses, taxes, and low-profit predictions, have shrugged and gone back to the black market where they can continue to grow as they always have: illegally but free of hassle from the state’s new pot bureaucrats armed with pocket protectors and clipboards.

Not all the complaints in the two-part investigation are about taxes and overregulation. Some, especially in part 1, are about “loopholes” in the regulations that allow large corporations to get into the marijuana business and about “dramatic changes to Humboldt County’s cannabis culture, which had an almost pagan worship of a plant that created an alternative lifestyle in the misty hills north of the ‘Redwood Curtain.’”

But there’s plenty of evidence that regulations are more burdensome on newer and smaller companies than on large, established companies. Indeed, regulatory processes are often “captured” by the affected interest groups. The Wall Street Journal confirmed this just yesterday, reporting that “some of the restrictions [in Europe’s GDPR online privacy regulations] are having an unintended consequence: reinforcing the duopoly of Facebook Inc. and Alphabet Inc.’s Google.”

Several weeks ago, the United States and Korea reached an “agreement in principle” on an amended Korea-US Free Trade Agreement (KORUS FTA). This amendment process was minor enough that the Trump administration believed it could undertake it without having Congress vote on the changes (there will be a consultation with Congress on some tariff changes, as described here). Congress could object, as it does have the ultimate constitutional power over trade, but so far there are no signs that it plans to do so.

In an op-ed on the new KORUS, we described the result as follows: “the KORUS renegotiation looks like a minor tweak to U.S. trade relationships, rather than the wholesale ‘populist’ revolution that is sometimes indicated by Trump’s tweets.” In this blog post, we offer a more detailed assessment of the KORUS changes that have been reported  so far.

However, keep in mind that there is no final text of the amended agreement yet, so our analysis is necessarily a bit tentative. Specific wording can be important to understanding the implications of a provision, and there may be additional items that have not been reported yet. (In addition, statements by President Trump suggest the deal may be held up by other issues).

The outcomes of KORUS 2.0 can be grouped into two categories: (1) new issues that were not covered by the existing KORUS, and were negotiated as something akin to side deals to the talks, and (2) amendments or modifications to the current text. We examine each in turn.

With regard to the side deals, the biggest (and most negative) economic impact will arise from the export restrictions on steel that Korea agreed to. Pursuant to these restrictions, Korea would cap steel exports to the U.S. at 70 percent of the average volume from the past three years on a product-by-product basis. This was in exchange for a permanent exemption from the Trump administration’s Section 232 “national security” tariffs on steel. The impact of these quotas/tariffs will be some degree of price increase for U.S. consumers, with the amount of the increase depending on exactly how the measures are implemented. In terms of the impact on Korea, Korean producers may actually benefit, now that they have avoided the tariffs. Their sales to the U.S. will now be at higher prices, and they may find other markets for their steel to replace the lost volume in the U.S.
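Based on the reported terms, each product-level cap would be computed roughly as follows. The 70 percent share is the only number taken from the reports; the tonnage figures are invented for illustration.

```python
# Sketch of the reported steel quota formula: for each product category,
# Korea's cap is 70% of its average export volume over the prior three years.
# The tonnage figures below are hypothetical.

def steel_quota(volumes_last_three_years, share=0.70):
    """Quota = share x three-year average export volume, per product."""
    return share * sum(volumes_last_three_years) / len(volumes_last_three_years)

# A hypothetical product that shipped 1.0m, 1.2m, and 1.1m tonnes over
# three years would be capped at 0.70 x 1.1m = 770,000 tonnes.
cap = steel_quota([1.0e6, 1.2e6, 1.1e6])
```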

There are also provisions on currency manipulation. Media reporting on the currency provisions suggests they are non-binding. It sounds like the provisions are similar to those agreed to in a side letter to the Trans Pacific Partnership (TPP). Adding these currency provisions is not particularly significant, as the Trump administration is mostly just carrying over an Obama-era policy. However, the Trump administration may be pushing for binding currency provisions as part of the renegotiated NAFTA. This would be a bigger deal, as there have never been such detailed provisions on this issue in trade agreements, and U.S. attempts to promote such provisions in additional agreements would have significant implications. The specific terms will be important for determining the impact.

Turning to the amendments and modifications to the existing KORUS, the outcomes on automobile exports and truck imports stand out.

Under the existing KORUS, U.S.-based auto manufacturers can export up to 25,000 vehicles (per manufacturer) to Korea per year that will be deemed compliant with Korean safety standards simply by meeting U.S. standards. Through the renegotiation, this quota has now been increased to 50,000 vehicles per manufacturer. On its face, this is a good market-opening provision, and a positive development for increasing access to the Korean market. However, the real economic value is not clear. In 2017, U.S. passenger vehicle and light truck exports to Korea totaled only 52,607 units. Ford and General Motors shipped fewer than 10,000 vehicles each. Given the low volume of U.S. exports in these products, increasing the quota may not have much impact. (And to put these figures in perspective, Canada leads the way as a destination for U.S. exports with 912,277 units, and China is second at 267,473 units.)

With regard to light trucks, it appears that the administration took a more protectionist tack, extending until 2041 a 25% U.S. tariff that was supposed to be phased out by 2021. While there will be no immediate impact, because Korea does not currently export trucks to the U.S., this change could delay any future export plans. It has been suggested that the reason Korea has not yet sold light trucks on the U.S. market is because the existing tariff has effectively blocked the possibility of exports. In an interview with CNBC, USTR Robert Lighthizer said: “The Koreans don’t ship trucks to the United States right now and the reason they don’t is because of this tariff,” and “They were going to start next year – we would have seen massive truck shipments. So, that’s put off for two decades.” This modification can therefore be seen as an attempt by the Trump administration to prevent trucks produced in Korea from being sold in the United States. However, even if the tariff had been removed as scheduled, any trucks produced for the U.S. market after 2021 may very well have been produced in the Korean companies’ existing North American factories. As a result, the claim that “massive truck shipments” have been blocked is a bit misleading.

Other reported KORUS renegotiation results sound minor, although, again, a full assessment will have to wait for the release of the text. For instance, there appears to be a new agreement on environmental testing standards for autos. This could refer to Korea’s Fuel Economy and Greenhouse Gas Standards, which are updated every five years by the Korean Ministry of Environment. Through the negotiations, Korea has agreed to base the update of these standards for the 2021-2025 period on “global trends, including U.S. standards” and increase the number of eco-innovation credits available for auto imports to meet the fuel economy and greenhouse gas requirements. In addition, there was an agreement on harmonizing the testing requirements on gasoline engine vehicle exports so that these products will not have to be tested twice. As a result, U.S. emissions testing will be seen as equivalent to Korean testing requirements.

And Korea agreed to include American companies in a “national drug reimbursement program,” which offers premium pricing for certain new drugs. This change has been pushed by the Pharmaceutical Research and Manufacturers of America (PhRMA), which has argued that U.S. companies have been negatively affected by Korea’s low drug prices.

In addition to these changes, vague announcements were made with regard to introducing more transparency to certain dispute procedures, and changes to Korean customs inspection procedures.

Overall, from what we know so far, the KORUS renegotiation looks like a minor tweak to U.S. trade relationships, rather than the wholesale revolution that is sometimes indicated by Trump’s tweets. That is probably for the best. However, KORUS has been a somewhat minor point on the Trump administration’s trade agenda, so we should not take too much comfort from this. It may be that the administration simply wanted to focus its more aggressive trade actions on other countries. The U.S. trade relationship with each country is different. The two big items coming next on the agenda are the NAFTA renegotiation and the U.S.-China trade relationship. The resolution of these will tell us more about whether the administration can figure out a way to put together a coherent trade strategy that does not unravel decades of trade liberalization.

Last week the White House announced that Richard Clarida will be nominated to become Vice Chair of the Federal Reserve Board. More than a month ago, Clarida became the front-runner for the role. He is widely seen as a centrist and a pragmatist holding mostly conventional views on monetary policy. Mostly.

As Vice Chair, Clarida will be the third pillar of the Fed’s new leadership, joining Chair Jerome Powell and recently announced incoming NY Fed President John Williams. Having been an economics professor at Columbia University since 1988 and a Global Strategic Advisor at Pacific Investment Management Company (PIMCO) since 2006, Clarida provides a complement to both Powell’s largely business background and Williams’ career inside the Fed.

With a couple of mutual research interests, Clarida and Williams will likely work well together. They’ve both explored the natural rate of interest (r*) — Williams is the coauthor of the widely cited r* estimates and Clarida has examined natural rates from an international perspective. Another area of mutual interest is price level targeting. As I have noted previously, Williams is an advocate of the Fed adopting such a target while Clarida has also explored its merits for monetary policy.

At first blush this may be concerning, given the shortcomings of price level targeting. However, the evolution of Clarida’s post-crisis thinking on monetary policy, including towards price level targeting, shows that he may be persuaded by the superior merits of nominal GDP level targeting.

In 2010, Clarida presented a paper at the Boston Fed conference, Revisiting Monetary Policy in a Low Inflation Environment. The paper discussed what economists had learned throughout the 2000s, with a particular focus on what they ought to learn after years of low inflation (a subject with renewed saliency in recent years).

He also discussed the large-scale asset purchases of the Fed’s quantitative easing program, casting doubt on much of the literature of the day, which tended to find positive but limited effects of such purchases on reducing bond yields. Clarida, on the other hand, thought large-scale asset purchases could be quite powerful. He had two main points, one flawed and one overlooked.

The first was that a determined central bank, prepared to buy the requisite amount of securities up to the outstanding stock, could always put a ceiling on the yield (or, put another way, a floor underneath the price) of the securities it targeted. Now, this proposal puts the central bank squarely into the credit allocation business, which is a role it ought to avoid.

However, the second, subtle point in his framework that should not be ignored is that Clarida recommends the central bank fully commit to an outcome rather than announce various mechanical steps. This goal-oriented strategy suggests that Clarida may indeed become receptive to the benefits of nominal GDP level targeting — a point to which I will return.

But why did Clarida suggest focusing on securities’ yields at the time, rather than consider changing the central bank’s nominal target?

He explained that adopting a price level target, a possible alternative to the Fed’s then “stable prices” mandate and its current 2% inflation target, was not a time-consistent policy. That is to say, a central bank would commit to level targeting while policy was below the trend line but then fail to run the expansionary policy needed to reacquire that trend line. Clarida believed that, while level targeting was attractive in theory, a central bank could not credibly commit itself to future actions. Modern Fed parlance would call this forward guidance and, to Clarida, that was not a sufficiently robust strategy because it lacked the “proper commitment technology” to satisfy markets and the public that the central bank would indeed execute its promises in the future.

But by 2016, as the recovery from the Great Recession proved to be weaker than expected, Clarida’s thinking about forward guidance and the viability of a level target had changed.

At a Brookings conference early that year, which focused on whether the US was ready for the next recession, Clarida said that although textbooks and economic theory suggest forward guidance should not work, in practice it does. He also suggested, or perhaps wondered, whether this meant a price level targeting strategy could work (his slides are here).

He rightly pointed out that a price level target has an advantage, which inflation-rate targeting lacks, of making up for past monetary policy failures. Level targeting corrects for the bygones problem in growth rate targeting, making up for past mistakes rather than embedding those errors in current policy.
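A toy calculation illustrates the bygones point. Assume a 2 percent target and a price level of 100 that stays flat for a year (0 percent inflation): an inflation-targeting central bank simply resumes aiming for 2 percent from the new, lower level, while a level-targeting bank must run roughly 4 percent inflation to regain the promised path. All numbers here are illustrative.

```python
# Toy comparison of inflation (growth-rate) targeting vs. price level
# targeting after a one-year shock of 0% inflation.

TARGET = 0.02          # 2% per year
start_level = 100.0

# The path the level-targeting bank promised: 100 -> 102 -> 104.04 ...
promised_level_year2 = start_level * (1 + TARGET) ** 2

# Shock: inflation comes in at 0%, so the price level is stuck at 100.
level_after_shock = start_level

# Inflation targeting treats bygones as bygones: aim for 2% from wherever
# the price level happens to be, so the shortfall is never made up.
inflation_targeting_year2 = level_after_shock * (1 + TARGET)

# Level targeting must catch back up to the promised path, which requires
# roughly 4% inflation in year two.
catch_up_inflation = promised_level_year2 / level_after_shock - 1
```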

Incidentally, this was not the first time he had suggested a price level target for the Fed.

In a Global Perspectives note at PIMCO published in 2014, Clarida endorsed a price level target. He believed such a target would be an improvement for the new Yellen Fed over the Evans Rule, which had been in effect for more than two years. Promising to leave rates at the zero lower bound until either inflation was above 2.5% or the unemployment rate was below 6.5% was not enough to guide policy going forward. These thresholds were not goals and therefore were insufficient anchors for monetary policy (indeed the Fed abandoned the Evans Rule the following month).

Clarida saw the weakness in the Fed’s communication strategy of putting thresholds on inflation and unemployment and proposed a price level target as an alternative.

As mentioned, a price level target is not the proper alternative for the Fed’s target because it can make a central bank procyclical and thus amplify, rather than dampen, the business cycle. A price level targeting central bank runs the danger of tightening policy because of an adverse supply shock and over-easing because of a productivity boom. Nevertheless, Clarida was right to criticize the kind of open-ended policy that characterized the Evans Rule and this kind of thinking will be a welcome addition to the Board.

Clarida now seems predisposed to three views about monetary policy that could significantly influence the Fed’s actions going forward:

  1. That a central bank fully committed to reaching a nominal target is superior to one focused on mechanical operations.
  2. That employing forward guidance is indeed an effective tool for conducting monetary policy.
  3. That level targeting can make up for past errors in monetary policy in a way that growth rate targeting cannot.

Combined, I think these views point to Clarida being more amenable to a nominal GDP target than even he may presently admit. After all, nominal GDP level targeting requires two things of a central bank to work in practice: first a central bank must credibly pledge to keep nominal GDP growing along a stable trend line and then it must be prepared to do whatever is necessary to achieve that level of nominal growth.

Clarida has already expressed the importance of both of these elements. In addition, he has repeatedly shown a willingness to let his thinking evolve when presented with new information. Therefore, he may yet be persuaded on the shortcomings of price level targeting in favor of a superior option.

Clarida may have said little about nominal GDP targeting to date — but with his nomination, the Fed may be getting a nominal GDP target advocate for the future.


Democrats are plugging new energy into an old idea: a federal “Jobs Guarantee” program. Senator Cory Booker previously introduced legislation for a pilot in high-unemployment communities. Now Senator Bernie Sanders will announce a plan guaranteeing a job or training paying $15 an hour and health-care benefits to every American worker “who wants or needs one,” in a host of public infrastructure, caregiving, and environmental upkeep projects.

The scheme, seemingly based on a recommendation from the Levy Economic Institute, comes with grandiose purported benefits. It would, we are told, eliminate involuntary unemployment, deliver a living wage, boost GDP, reduce the cost of recessions, raise labor market standards, reduce environmental degradation, reduce racial inequality, and much else besides. If it sounds too good to be true, that’s because it is. There are severe problems with this idea, which can be loosely grouped under three “c’s”: costs, crowd out and corruption.

Costs
The Levy Economics Institute calculates that up to 16 million people could take part in such a program today (including the unemployed, those working part time while seeking full-time work, and individuals currently inactive who might move into the labor market). Given that the federal government would have to pay $15 an hour for full-time jobs, plus benefits equal to 20 percent of wages, total labor costs per worker would be $37,440 per year. That’s before the cost of materials and the administration of the program itself. Even assuming some opt for part-time positions, and ignoring the non-labor program costs, we are talking about a gross cost of up to around 2.4 percent of GDP, significantly higher than the existing Medicaid program (2 percent of GDP).
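As a sanity check, the per-worker figure and the share of GDP can be reproduced in a few lines. The roughly $20 trillion GDP figure below is my assumption rather than a number from the text, so this is a back-of-the-envelope sketch, not a reproduction of the Levy calculations:

```python
# Back-of-the-envelope check of the jobs guarantee cost figures.
# Assumptions: $15/hour, 2,080 hours in a full-time year, benefits equal
# to 20 percent of wages, 16 million participants (the Levy estimate),
# and US GDP of roughly $20 trillion (my assumption, not from the text).

HOURLY_WAGE = 15
FULL_TIME_HOURS = 2_080        # 40 hours x 52 weeks
BENEFITS_RATE = 0.20
PARTICIPANTS = 16_000_000
GDP = 20e12                    # assumed

annual_wage = HOURLY_WAGE * FULL_TIME_HOURS                # $31,200
labor_cost_per_worker = annual_wage * (1 + BENEFITS_RATE)  # $37,440

gross_labor_cost = labor_cost_per_worker * PARTICIPANTS
share_of_gdp = gross_labor_cost / GDP

print(f"Annual wage: ${annual_wage:,}")
print(f"Labor cost per worker: ${labor_cost_per_worker:,.0f}")
print(f"Gross labor cost: {share_of_gdp:.1%} of GDP")
```

At full-time participation for all 16 million workers the gross labor cost comes to roughly 3 percent of GDP; the 2.4 percent figure cited above reflects the assumption that some participants opt for part-time positions.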

The net cost on these assumptions will be lower, of course. People who take jobs will require less in welfare payments and pay some of the cost back in taxes. Some workers might wisely consider it risky to tie their employment fortunes to the whims of politicians and their willingness to fund this program, and so remain in the private sector. But even taking all this into account, and assuming the policy generates the macroeconomic bounty that the Levy researchers expect, they still think the annual net cost will be between 0.8 and 2 percent of GDP, with the program employing up to 10 percent of the workforce. That would in itself be a huge new commitment to finance at a time when the long-term fiscal outlook is already dire and the short-term deficit is already expected to balloon to over 5 percent of GDP in the coming years.

Crowd out
In reality, the fiscal costs are likely to be much, much higher, and the economic welfare losses even more significant, because a public jobs guarantee program would significantly crowd out productive private-sector activity in the labor market and the broader economy. This type of policy will radically alter the behavior of both workers and businesses, and so the supply of and demand for labor.

The Census shows that more than 70 million Americans who worked in 2016 earned under $32,500 (the full-time job-guarantee salary would be $31,200). Not all of them would seek out positions on the jobs guarantee program, but a large proportion would, especially those employed in uncertain roles with low levels of job security.

In fact, some workers earning even more than $31,200 might consider leaving their jobs to pursue guaranteed roles if they perceive better working conditions or an easier work life. (Asked under what conditions someone would be fired from such a role, the Levy Institute paper suggests that you would be sacked for failing to show up for work, but that your performance would not be judged by “private sector ‘efficiency criteria,’” for example.) It is not inconceivable, then, that over 25 percent of the labor force could find itself part of the scheme.

This crowd-out is likely to be particularly acute in low productivity regions, and (ironically) after economic downturns. A nationwide jobs guarantee program paying $15 an hour will be particularly attractive to workers in low wage regions, and by setting a de facto wage floor the program will prevent private investment in regions on the basis of cheap labor.

Though no doubt there would be some demand spillovers from well-paid jobs, the net consequence is highly likely to be weaker private-sector job creation in poor regions, which has been the experience of countries such as Britain with a nationwide minimum wage and national public-sector pay bargaining. Proponents of the scheme see “higher labor standards” as a good thing, but absent productivity improvements, policies that raise labor costs significantly will reduce the quantity of labor demanded.

There’s good reason to expect the policy would reduce the efficiency and productive potential of the economy too. Taxes will eventually need to be raised to cover the net cost of the program. In infrastructure and caregiving, costs will rise, because nobody would work in these directly substitutable sectors for less than the wage and conditions offered by the job guarantee program. This will waste resources, and there is highly likely to be overinvestment in relatively low-value ventures and programs simply to keep workers employed, especially given that the explicit aim is to provide employment rather than to deliver projects at low cost.

Throwing resources at regions with higher unemployment, and doing so after recessions, will work directly against market signals and deter the mobility of labor (in geographic and industrial terms) and capital to their most productive uses given prevailing market conditions. This is important: yes, employment is highly likely to have some positive externalities, but the real driver of better living standards over time is productivity improvement, discovered through market-based activity.

Proponents of this policy seem to put an enormous weight on the idea that time out of the labor market has huge scarring consequences which could be ameliorated by any type of temporary employment. But the literature on this shows that temporary jobs do not provide the workers with skills to improve longer-term labor market outcomes.

Corruption and incentives

As if all these consequences were not bad enough, such a program would be ripe for corruption and political interference at the government, provider, and individual levels. Senator Sanders’ plan would be administered by the Department of Labor, with local and state governments submitting projects to regional offices for consideration. There is a huge question mark over whether projects would be considered on economic grounds when there might be an incentive to fund make-work schemes that aid particular politicians, or to put resources toward “public good” causes or NGOs more in line with the ethos of the governing party. For Democrats this might mean environmental projects; for Republicans it might mean, say, a wall on the southern border.

NGOs and local public bodies themselves will have incentives to apply for federal funds for projects that would otherwise have occurred anyway, and to maximize the number of applications. Pork barrel projects would proliferate. What is more, at the individual level, the guarantee coupled with the purported unwillingness to judge worker performance on a commercial basis will incentivize low levels of work effort on the margin.


The Jobs Guarantee, then, is an extremely large and costly endeavor, one that would have major economic consequences and risk a far-reaching federal politicization of the labor market and public project delivery.

The US does have serious labor market issues to contend with - not least depressed labor-force participation and a weak productivity outlook - but are things really so bad that they require such a risky and extensive policy response?

Well-paid jobs and low levels of real unemployment are outcomes desired by all. But attempting to achieve that through this program amounts to cracking a nut with a sledgehammer, undermining what matters far more for living standards: efficiency and productivity. 

This morning in Jesner v. Arab Bank, the Supreme Court split 5-4 along conventional ideological lines to confirm that it is up to Congress, not the judiciary, to decide whether and when American courts should entertain international human rights cases against foreign defendants. It thus continues the course of its 2013 Kiobel v. Royal Dutch Petroleum case, about which I wrote here at the time:

Today the U.S. Supreme Court unanimously and decisively buried the misguided, decades-long hope of some lawyers and academics that they could turn the Alien Tort Statute (ATS) into a wide-ranging method of hauling overseas damage claims into American courts. All nine Justices agreed with the Second Circuit that the statute does not grant jurisdiction for our courts to hear a controversy over alleged assistance in human rights violations outside the U.S. against non-U.S. plaintiffs by a non-U.S. business. A majority of five justices reiterated and relied on our law’s strong traditional presumption against extraterritoriality, that is to say, presumption against applying the law to actions that take place in other countries. While parting from this reasoning, four concurring justices nonetheless endorsed a view of ATS as applicable extraterritorially only to very extreme misconduct comparable to piracy, and also as sharply limited by considerations of comity with foreign sovereigns.

It is a good day for a realistic and modest sense of what United States courts of justice can successfully do, namely: do justice within the United States.

But in Kiobel, as Kenneth Anderson noted in the Cato Supreme Court Review that year, the Court ducked the question it had originally agreed to decide: may foreign corporations be sued in U.S. courts under the ATS, or only individuals? The correct answer is that Congress, not the courts, should decide. Issues of foreign affairs are peculiarly the province of the political branches, which can weigh (and take responsibility for) the dangers of engendering friction with foreign sovereigns by extending liability. (Jordan, an important U.S. ally, has for years been riled by the attempt to go after Arab Bank over handling transactions, including some in New York, that allegedly facilitated terrorist acts abroad.)

The only time Congress chose affirmatively to create such a cause of action, in a 1991 statute providing torture victims a right to sue over abuse abroad, it placed significant limits on the right, among which was providing that only individuals could be sued. Parallel restrictions should be read into other, unenumerated causes of action under the ATS, said Justice Anthony Kennedy in his opinion for the majority today; that means that unless Congress says so, the statute would enable holding individual wrongdoers liable but not imputing their liability to an organization. Writing separately in partial or full concurrence, Justices Gorsuch, Alito, and Thomas would have gone further to make clear that courts should simply not get into the business of inventing causes of action in this area, especially given the ATS’s history as an early American enactment meant to reduce rather than exacerbate diplomatic tensions. 

Not too many years ago, whole sectors of American legal academia were besotted with notions of “universal jurisdiction” in which misbehavior taking place in Africa, Latin America, or Southeast Asia could be sued over in American courts – in practice, often, in certain West Coast federal courts that welcomed such suits. The Court’s retreat from that proposition has been steady and prudent. Despite the dissent by Justice Sonia Sotomayor, no one has immunized business miscreants against anything. The Court has simply made it clear that if the United States courts are to become a sort of human rights policeman to the world, it is Congress that will need to decide to fit them out for that task. 



Toronto Police Chief Mark Saunders said that there is no evidence that yesterday’s “van incident,” in which Alek Minassian murdered 10 people and injured 15 others on a busy sidewalk, was a terrorist attack.  To count as terrorism, Minassian’s motivations must have been political, religious, or social in nature, beyond simply a desire to terrorize or murder others.  His motives are so far unclear, with much speculation about his social awkwardness and possible anti-women opinions but, so far, little about his political or religious views.  This could change as police and investigators uncover new facts.

Many in the media and government, prompted by Minassian’s mass murder, are commenting on terrorism in Canada with little context.  Using the methods employed in my recent terrorism risk analysis for the United States, I find that terrorism is rare in Canada.  Assuming that investigators eventually confirm that Minassian’s mass murder was not terrorism, as they currently claim, the annual chance of being murdered in a terrorist attack on Canadian soil over the last 25 years was about one in 60.4 million.  The annual chance of being injured in a terrorist attack on Canadian soil during that time was about one in 7.4 million.

Data and Methodology

This post examines 25 years of terrorism on Canadian soil, from 1993 through April 23, 2018.  Fatalities and injuries in terrorist attacks are the most important measures of the cost of terrorism.  The information sources are the Global Terrorism Database (GTD) at the University of Maryland, the RAND Corporation, and others.  I excluded three fatalities counted by the GTD because they were the terrorists themselves.  I grouped the deadly attackers into four broad ideologies: Islamist, anti-Muslim, anti-government, and unknown/other.  GTD descriptions of the attackers, news stories, and Wikipedia were my guides in grouping the attacks by ideology, a task made easy by how few terrorist attacks occurred in Canada from 1993 to the present.  The number of Canadian residents and non-terrorist murders in each year comes from Statistics Canada.
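The “one in N per year” figures used throughout follow from dividing cumulative resident-years by the casualty count.  In the sketch below, the roughly 845 million resident-years total is back-calculated from the post’s own results rather than taken from Statistics Canada, so treat it as an illustrative assumption:

```python
# "Annual chance" expressed as one-in-N: cumulative resident-years
# divided by the casualty count.  The ~845 million resident-years for
# Canada over 1993-2018 is back-calculated from the post's totals
# (e.g., 114 injuries -> about 1 in 7.4 million), not an official figure.

RESIDENT_YEARS = 845_000_000  # assumed

def one_in(casualties, resident_years=RESIDENT_YEARS):
    """Annual chance of being a casualty, expressed as 1 in N."""
    return resident_years / casualties

print(f"Murdered in a terrorist attack: 1 in {one_in(14):,.0f}")   # ~60.4 million
print(f"Injured in a terrorist attack:  1 in {one_in(114):,.0f}")  # ~7.4 million
```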

Terrorism Risk in Canada

Terrorists murdered 14 people on Canadian soil from 1993 through April 23, 2018.  Islamists murdered 3 of the victims, an anti-government terrorist murdered 3, suspected terrorists of an unknown ideology murdered 2, and an anti-Muslim terrorist named Alexandre Bissonnette murdered 6 in a shooting at a Quebec mosque last year (Figure 1).  Of the 63 terrorist attacks in Canada during that time, counted according to the GTD’s wide definition of terrorism, only 7 resulted in a fatality.  In other words, 89 percent of terrorist attacks in Canada during the last 25 years killed nobody.

Figure 1

Murders in Canadian Terrorist Attacks by the Ideology of the Attacker, 1993-2018


Sources: Global Terrorism Database at the University of Maryland, RAND Corporation, ESRI, and author’s calculations.

Although most of the recorded terrorist attacks in Canada targeted small groups, like Muslims or the police, it is useful to get a sense of the relative danger by looking at the annual chance of being murdered by a terrorist inspired by each ideology.  The annual chance of being murdered by an Islamist terrorist was the same as that of being murdered by an anti-government terrorist: about one in 281.7 million per year.  The annual chance of being murdered by a terrorist with an unknown ideology was about one in 422.5 million per year.  The greatest risk, though still tiny, was that of being murdered by Alexandre Bissonnette in his mosque attack last year, at one in 140.8 million per year over the 25 years.

There were 114 injuries in terrorist attacks on Canadian soil from 1993 through April 23, 2018 (Table 1).  Terrorists with unknown or other ideologies caused almost 68 percent of those injuries.  Alexandre Bissonnette, the anti-Muslim terrorist, was personally responsible for 17 percent of all injuries in terrorist attacks during this time in Canada.  Islamist terrorists were responsible for about 11 percent of injuries while anti-abortion and anti-government terrorists were responsible for 4 and 2 percent of all injuries, respectively. 

Table 1

Injuries in Canadian Terrorist Attacks by the Ideology of the Attacker, 1993-2018

Ideology           Injuries   Annual Chance of Being Injured   Percent of All Injuries
Unknown/Other            77   1 in 10,973,614                  67.5
Anti-Muslim              19   1 in 44,472,016                  16.7
Islamist                 12   1 in 70,414,026                  10.5
Anti-Abortion             4   1 in 211,242,077                  3.5
Anti-Government           2   1 in 422,484,154                  1.8
Total                   114   1 in 7,412,003                  100.0

Sources: Global Terrorism Database at the University of Maryland, RAND Corporation, ESRI, and author’s calculations.

Comparison to Murder

Fatalities and injuries in terrorist attacks are so rare that a comparison with non-terrorist murder helps put the danger in perspective.  There were about 14,807 murders in Canada from 1993 through April 23, 2018.  Because the number of murders has not been reported for 2016-2018, I assumed that the count in each of those years equaled the 2015 figure.  The annual chance of being murdered outside of a terrorist attack was about one in 57,000 from 1993 through 2018 – about 1,058 times greater than the chance of being killed in a terrorist attack.


The chance of being murdered in a terrorist attack in Canada over the last 25 years was small.  By comparison, the annual chance of being murdered in a terrorist attack in the United States over that time was about 25 times greater than in Canada.  The annual chance of being murdered in a terrorist attack in Canada also appears to be lower than in Europe, and the chance of dying in a non-terrorist homicide in Canada was over 1,000 times greater.  Alek Minassian’s horrific mass murder does not appear to be a terrorist attack based on the information available at this time, but if it does turn out to be terrorism, it would be the deadliest attack on Canadian soil since December 6, 1989, when Marc Lepine murdered 14 people and injured 14 others in an attack inspired by his anti-feminism.  The murder or death of innocent people is tragic no matter the circumstances, and the perpetrator should be punished to the fullest extent of the law.  Regardless, Canadians can take some comfort in the fact that the chance of being murdered in a terrorist attack in Canada is small in absolute terms, relative to the risk in other developed nations, and compared to the chance of being murdered in a non-terrorist homicide.





In an April 21 editorial, the New York Times succumbs to the false narrative, reverberating in the media echo chamber, that blames the opioid overdose crisis on doctors overprescribing opioids to their patients in pain. Even worse, the Times perpetuates a significant component of that narrative: the myth that such overprescribing can essentially be traced to nothing more than a single 1980 letter to the editor in the New England Journal of Medicine, in which researchers at Boston University touted the low addictive potential of opioids when prescribed in the medical setting.

In fact, numerous studies before and after that now “infamous” letter continue to demonstrate the low addictive potential of medically prescribed opioids. For example, 2010 and 2012 Cochrane systematic analyses show chronic non-cancer pain patients on opioids have a roughly 1 percent addiction rate, and a January 2018 study by researchers at Harvard and Johns Hopkins of more than 568,000 “opioid naïve” patients over 8 years who were given opioids for acute postoperative pain showed a total “misuse” rate of 0.6 percent. In a 2016 New England Journal of Medicine article, Dr. Nora Volkow, the Director of the National Institute on Drug Abuse, stated, “Addiction occurs in only a small percentage of patients exposed to opioids—even those with preexisting vulnerabilities.” Furthermore, researchers at the University of North Carolina followed 2.2 million North Carolina residents prescribed opioids in 2015 and found an overdose rate of just 0.022 percent—and 61 percent of those overdoses involved multiple other drugs.

The Times then offers the same restrictive strategy—only more so—that is doomed to fail because it is based on a false premise. The editors even suggest that opioids be restricted to terminal cancer patients. Look at where this approach has gotten us thus far.

The prescription of opioids to patients peaked in 2010, with high-dose prescriptions down 41 percent since that time. A report last week from IQVIA showed opioid prescriptions dropped 10 percent in the last year, and high-dose prescriptions dropped 16 percent. The Drug Enforcement Administration ordered a 25 percent reduction in opioid production in 2017 and another 20 percent reduction this year. And since 2010, OxyContin has only been available in an abuse-deterrent form and many other opioids are likewise being reformulated. 

Yet the overdose rate continues to climb, and the majority of overdoses are due to fentanyl and heroin while the overdose rate from prescription opioids has stabilized or even slightly receded. The great majority of overdoses involve multiple drugs. In New York City in 2016, 75 percent of overdoses were from heroin or fentanyl and 97 percent of overdoses involved multiple drugs—46 percent of the time it was cocaine.

The opioid overdose crisis has always been primarily a manifestation of nonmedical users accessing drugs in a dangerous black market caused by drug prohibition. 

Policymakers must disabuse themselves of the false narrative they continue to embrace. It is the driving force behind a policy that has returned us to the “opioiphobia” of the Nixon era. It is making patients needlessly suffer and increasing the death rate by driving nonmedical users to more dangerous and deadly alternatives.




Investment adviser Ray Lucia conducted seminars that ran afoul of the Securities and Exchange Commission. The SEC fined him $300,000 and, more importantly, barred him from working in the field after an SEC administrator determined, in a quasi-judicial proceeding that the SEC investigated, prosecuted, and adjudicated without any appreciable oversight, that he had misled prospective clients. He made a federal case out of this rather apparent separation-of-powers violation, which culminated in today’s Supreme Court argument in Lucia v. SEC. (For more background, see here.)

The central issue in the case is whether the SEC’s administrative law judges (ALJs) are “officers of the United States” such that they must be appointed by and ultimately accountable to the president. That question, in turn, depends on how much discretion they have and whether their rulings change parties’ legal rights—or, given certain precedent, on how one would write a rule that distinguishes officers from mere employees.

After argument, Lucia remains hard to predict. A couple of justices (Stephen Breyer, Sonia Sotomayor) seemed dubious that the operations of a significant part of the government could be jeopardized based on what seemed to them to be legalistic technicalities. A couple of others (John Roberts, Neil Gorsuch) seemed to recognize the constitutional problem with the way administrative law judges are appointed but wonder how to write the opinion and apply the proper remedy. The rest seemed genuinely puzzled as to what to do with this case, and weren’t fully satisfied with any of the arguments presented.

None of the justices seemed particularly interested in the “removal power” aspect of the case, which the United States raised in its brief and Cato also covered in our brief. But all were troubled by the potential ramifications of ruling one way or another, because so many federal decisions could potentially be affected.

However this ends up, the case shows two things: (1) the importance of appointing judges and justices who take constitutional structure seriously, because that’s the ultimate guarantor of liberty, and (2) the incredible and unaccountable sprawl of the administrative state. 

A decision is expected by the end of June.

Minerva Dairy, based in Ohio, is America’s oldest family-owned cheese and butter dairy. It has been producing artisanal, slow-churned butter in small batches since 1935. The dairy has gotten along by selling via its website and regional distributors in several states. This model has worked fine everywhere except Wisconsin, which requires butter makers to jump through a series of cumbersome and expensive hoops to sell butter in the state.

Of course, Wisconsin is America’s Dairyland, with many large dairy producers who naturally want to limit their competition. At the behest of these large producers, the state requires every batch of butter sold in the state to be “graded” by a specifically state-licensed grader—all of whom live in Wisconsin, except for a handful in neighboring Illinois—who must taste-test every single batch. Because Minerva’s butter is produced in multiple small batches over the course of each day, the law would effectively require the dairy to keep a licensed tester on-site at all times, which is cost-prohibitive. The state admits that the grading scheme has nothing to do with public health or nutrition, but claims that its grades—based largely on taste—inform consumers.

The fact that Wisconsin is trying to shape the taste of butter isn’t even the most absurd part of this case. The criteria used to grade the butter are a ludicrous mad-lib of meaningless jargon not even the state’s experts understand. The law purports to identify such flavor characteristics as “flat,” “ragged-boring,” and “utensil.” (All commonplace terms spoken by consumers in dairy aisles across the nation, certainly.) This terminology hearkens to a freshman—not even sophomore—term paper on the semiotics of postmodern agrarian literature. To claim that a grade calculated with reference to meaningless nonsense serves the purpose of informing anyone illustrates the danger inherent in judges’ dutifully deferring to government rationales for silly laws that burden people who are just trying to make an honest living.

Our friends at the Pacific Legal Foundation represent Minerva in a lawsuit that challenges the butter-grading law on grounds that it burdens interstate commerce in violation of the Commerce Clause, and also hurts small dairies’ Fourteenth Amendment rights to due process and equal protection of the laws. Minerva lost at the district court when the judge applied a toothless “rational basis” test to the law in question, giving little weight to the serious concerns described above. Must the judiciary rubber-stamp every legislative folly?

Because laws that abrogate constitutional rights warrant meaningful judicial oversight, Cato filed an amicus brief in Minerva Dairy v. Brancel in the U.S. Court of Appeals for the Seventh Circuit. Wisconsin’s law directly burdens small dairies’ right to participate in the state’s butter market, and thus their economic liberty, for no sane or rational reason. There are simply no benefits to consumers from forcing producers like Minerva to pay considerable sums to have an irrational process deposit a random letter on product packaging. It curdles the mind to argue otherwise.

Over the weekend, USA Today reported that state and local law enforcement have acquired far less military equipment this year than they had at this point last year. This decline came in spite of President Trump’s executive order last August that removed some administrative hurdles to getting some of that equipment that had been implemented by the Obama Administration. The story contains a number of plausible explanations for the decline, including decreased demand from local departments and certain items not being available. There may be several factors at play in what appears to be a dramatic decrease in acquisition, but whatever the underlying reasons, the reduction is the latest evidence that much of the pro-police rhetoric and actions of the Trump Administration are less about improving police efficacy than they are about promoting the administration’s hollow posturing.

Recall that local police aren’t always jazzed about aggressive enforcement of federal immigration policy. Police who depend on the trust of the community to solve crimes and intervene in personal crises have been vocal about Trump’s impact on local crime enforcement. There are troubling signs that domestic violence and sexual assault are being underreported in Latino communities because of the distrust the administration is sowing between those communities and law enforcement. Over-the-top actions, such as raiding courthouses and seizing victims seeking protection and justice, erode public safety by enabling abusers and rapists to prey on their victims without fear of arrest or criminal charge. The cruel irony is that so much of this damage is done under the guise of restoring “law and order.”

Time will tell whether this decrease in military gear acquisition represents a genuine shift in local police priorities or is simply a lull that may end at the first sign of potential unrest in American cities. But some critics may take comfort that making it easier for police departments to attain new weapons of war didn’t automatically lead them to do it. Hopefully, the growing evidence that police militarization is detrimental to public safety and harmful to community relations is starting to sink in with local police, even if Washington isn’t listening. 

Cato published my recent Immigration Research and Policy Brief that relied on Texas state criminal data to compare the conviction rates of native-born Americans, legal immigrants, and illegal immigrants. The Texas data were of such high quality that I was even able to compare conviction rates by type of crime. The result was that in 2015 the criminal conviction and arrest rates for illegal immigrants were below those of native-born Americans for virtually all crimes, including homicide, sexual assault, and larceny. This is further evidence that illegal immigrants are less crime-prone than native-born Americans. Although I also had Texas conviction data for 2016, I had to limit my Brief to convictions in 2015 because there were no statewide estimates of the illegal immigrant population for 2016.

Since Cato published my Brief in February, the estimable Center for Migration Studies has published an updated estimate of the number of illegal immigrants in Texas for 2016. The following graphs and numbers are the conviction rates for native-born Americans, legal immigrants, and illegal immigrants in the state of Texas in 2016. The conviction rate is the number of convictions for each group (natives, legal immigrants, and illegal immigrants) divided by the number of Texas residents in that group, multiplied by 100,000. The final multiplication produces the conviction rate per 100,000 residents in each subpopulation, which is how criminologists and governments portray incarceration, crime, and conviction rates. This is the best way to portray relative crime rates because it controls for the different sizes of the subpopulations.
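The rate calculation just described is simple enough to sketch. The counts in the example below are hypothetical placeholders, since only the resulting rates are reported here:

```python
# Conviction rate per 100,000 residents of a subpopulation.

def conviction_rate(convictions, population):
    """Convictions per 100,000 residents."""
    return convictions / population * 100_000

# Hypothetical illustration (not the actual Texas counts): 423,200
# convictions among 20 million natives would reproduce the reported
# native rate of 2,116 per 100,000.
print(f"{conviction_rate(423_200, 20_000_000):,.0f} per 100,000")
```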

The criminal conviction rate for native-born Americans in Texas was 2,116 per 100,000 natives in 2016 (Figure 1). The native-born criminal conviction rate was thus 2.4 times as high as the criminal conviction rate for illegal immigrants in that year and 7.2 times as high as that of legal immigrants. 

Figure 1

Criminal Conviction Rates by Immigration Status in Texas, 2016

Sources: Author’s analysis of Texas Department of Public Safety data, the American Community Survey, and the Center for Migration Studies.

The homicide conviction rate for native-born Americans in Texas in 2016 was 4.1 per 100,000 natives, about 2.4 times as great as the 1.8 per 100,000 rate for illegal immigrants and 5.7 times as great as the 0.7 per 100,000 rate for legal immigrants (Figure 2). The 2016 homicide conviction rates in Texas differ considerably from the 2015 findings, in which the native rate was only 25 percent higher than the illegal immigrant rate.

Figure 2

Homicide Conviction Rates by Immigration Status in Texas, 2016

Sources: Author’s analysis of Texas Department of Public Safety data, the American Community Survey, and the Center for Migration Studies.

The sexual assault conviction rate for illegal immigrants is also below that of natives, but only by 16 percent, while the rate for legal immigrants is 87 percent below that of natives (Figure 3). The larceny conviction rate for native-born Americans is 6.1 times as high as that of illegal immigrants and 12.5 times as high as that of legal immigrants (Figure 4).

Figure 3

Sexual Assault Conviction Rates by Immigration Status in Texas, 2016


Sources: Author’s analysis of Texas Department of Public Safety data, the American Community Survey, and the Center for Migration Studies.

Figure 4

Larceny Conviction Rates by Immigration Status in Texas, 2016


Sources: Author’s analysis of Texas Department of Public Safety data, the American Community Survey, and the Center for Migration Studies.

The 2016 criminal conviction rates in Texas are similar to those of 2015, with one major exception: the illegal immigrant homicide conviction rate is far lower. There were 31 convictions of illegal immigrants for homicide in Texas in 2016 but 51 in 2015. Illegal and legal immigrants had lower homicide, sexual assault, larceny, and overall criminal conviction rates than native-born Americans in 2016.




Marc Thiessen, a columnist at the Washington Post, is highly upset that the Senate Foreign Relations Committee may not approve President Trump’s nomination of Mike Pompeo to be Secretary of State:

For the first time in the history of the republic [since the committee started recording votes in 1925], it appears increasingly likely that a majority of the Senate Foreign Relations Committee will vote against the president’s nominee for secretary of state. If this happens, it would be a black mark not on Mike Pompeo’s record, but on the reputation of this once-storied committee.

Thiessen seems to think that the role of the Senate Foreign Relations Committee, and by extension the United States Senate, is to approve a president’s nominees. But of course, the Constitution provides that “The President … shall nominate, and by and with the Advice and Consent of the Senate, shall appoint Ambassadors, other public Ministers and Consuls, Judges of the supreme Court, and all other Officers of the United States.” The Heritage Foundation’s Guide to the Constitution affirms that “the Senate has complete and final discretion in whether to accept or approve a nomination.” The Foreign Relations Committee is today considering whether to consent to this nomination. The Senate as a whole may choose to reject the negative recommendation and consent to the nomination. (See also the novel and movie Advise and Consent, on TCM this Friday.)

It’s not that members of the committee don’t have legitimate grounds on which to withhold consent. Sen. Rand Paul, a key player as he is likely to be the only Republican on the committee to oppose the nomination, says:

Director Pompeo has not learned the lessons of regime change and wants regime change in Iran….

President Trump sought to break with the foreign policy mistakes of the last two administrations. Yet now he picks for Secretary of State and CIA Director people who embody them, defend them, and, I’m afraid, will repeat them. I will not support their nominations.

One need not agree with that criticism to acknowledge that it’s a reasonable concern on which to reject a nominee.

Thiessen is a former speechwriter to Secretary of Defense Donald Rumsfeld and President George W. Bush, which might give him an executive-branch view of Congress’s role. Before that, however, he served for six years as spokesman and senior policy advisor to Senate Foreign Relations Committee Chairman Jesse Helms, whose willingness to use his position to block presidential nominees was well known. He mentions Helms’s support of President Clinton’s nomination of Madeleine Albright for Secretary of State, but omits the nominees Helms blocked or tried to block, such as Massachusetts governor William Weld and former senator Carol Moseley-Braun.

Thiessen concludes his excoriation of the Senate Foreign Relations Committee with a flourish: Assuming he is confirmed by the Senate, Pompeo “would be more than justified in determining that the State Department is best served by working closely with the appropriators and Senate leadership, and bypassing a committee that can’t make policy, can’t legislate and can’t lead.”

His real complaint, however, is not that the committee can’t lead. It is that the Senate Foreign Relations Committee won’t blindly follow.

President Trump’s campaign promise to ban all Muslim immigration will play an important role in the arguments against his “travel ban” executive order at the Supreme Court this week. While Trump later clarified that the “Muslim ban” actually referred to more targeted policies, such as the ban on certain countries and other “extreme vetting” measures, he consistently argued that the goals of the Muslim ban and these other policies were the same. It is now apparent that these policies are working.

91 Percent Drop in Muslim Refugees

During the campaign, Trump referred to Muslim refugees as a “Trojan horse” that could bring down the United States from the inside. Not surprisingly, then, Muslim refugees have seen their numbers slashed most dramatically. From October 2015 through December 2016 (“FY 2016”), monthly arrivals of Muslim refugees averaged 3,076 (Figure 1). From January 2017 through October 2017 (“FY 2017”), they fell to 951 per month. During the first six months of FY 2018, they have fallen to just 275 per month, 91 percent below their rate in FY 2016. Sunni Muslims have seen their numbers cut by 98 percent and Shi’ite Muslims by 86 percent.

Figure 1: Average Monthly Muslim Refugee Arrivals

At the same time, however, President Trump is not keeping his promise to prioritize Christian refugees. Their numbers have plummeted as well, falling 63 percent from 2016 to 2018. Nonetheless, Muslims were disproportionately affected: in 2016, one in two refugees was Muslim, while just one in six was in 2018.

26 Percent Drop in Immigrants from Majority Muslim Countries

The State Department issues visas to immigrants (i.e., permanent residents) and to nonimmigrants (temporary visitors, guest workers, and students). It does not record the religious affiliation of immigrant visa applicants, but its data indicate a substantial decline in immigrant visa approvals for nationals of the 48 majority Muslim countries, more than a quarter below the prior rate.

In Fiscal Year 2016 (“FY 2016”), immigrants from majority Muslim countries averaged 9,787 permanent residency visas per month. The State Department has only published monthly data starting in March 2017, but from March 2017 to September 2017 (“FY 2017”), monthly issuances fell to an average of 8,366. From October 2017 to February 2018 (“FY 2018”), they averaged just 7,241, 26 percent below the rate for FY 2016. The share of immigrants from majority Muslim countries also fell from 19 percent to 16 percent.
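The 26 percent figure follows directly from the two monthly averages just cited; a minimal sanity check (the helper function here is mine, not anything from the State Department data):

```python
def pct_decline(before, after):
    """Percentage decline from `before` to `after`, to the nearest whole percent."""
    return round((before - after) / before * 100)

# Average monthly immigrant visas, majority Muslim countries
fy2016 = 9_787
fy2018 = 7_241
print(pct_decline(fy2016, fy2018))  # -> 26
```

The same formula reproduces the refugee figures above: a fall from 3,076 to 275 monthly arrivals is a 91 percent decline.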

Figure 2: Average Monthly Immigrant Visa Issuances for Nationals of Majority Muslim Countries

32 Percent Drop in Temporary Visa Issuances from Majority Muslim Countries

Temporary visa applicants also do not tell the State Department their religious affiliation, but the department’s data show a large decline in approvals for nationals of the 48 majority Muslim countries, nearly a third below the prior rate. In Fiscal Year 2016 (“FY 2016”), travelers from majority Muslim countries averaged 71,407 visa approvals per month. The State Department has only published monthly data starting in March 2017, but monthly issuances declined from March 2017 to September 2017 (“FY 2017”), and from October 2017 to February 2018 (“FY 2018”) they averaged 32 percent below the FY 2016 rate. The share of travelers from these countries also fell from 8.3 percent to 7.5 percent.

Figure 3: Average Monthly Nonimmigrant Visa Issuances for Nationals of Majority Muslim Countries

As I have noted before, during the last decade, majority Muslim countries have never, even during the recession, seen temporary visa issuances fall by more than 1 percent in a single year, and immigrant visas have never fallen by more than 7 percent. From 2007 to 2016, temporary visa approvals for nationals of these countries actually grew 8 percent annually and permanent visas 9 percent annually. Compared to those expected increases, the declines are even more remarkable.

“Travel Ban” Countries: 60 Percent Drop in Both Temporary & Immigrant Visas

Of the 48 majority Muslim countries, 33 saw immigrant visas fall, while 45 saw temporary visa numbers decline (excepting Albania, Kosovo, and the Gambia). Some nationalities were much more negatively affected than others. President Trump has, at various times, placed eight majority Muslim countries on his “travel ban” lists: Iraq (January 2017 to March 2017), Sudan (January 2017 to September 2017), Chad (September 2017 to April 2018), Iran, Libya, Somalia, Syria, and Yemen (all January 2017 to now). These countries have seen much more severe declines in immigrant and nonimmigrant visa issuances.

Nationals of the eight majority Muslim travel ban countries have seen immigrant visa issuances fall from 2,654 per month in FY 2016 to 918 per month in FY 2018, a 65 percent decline. These nationals saw their nonimmigrant visa approvals fall from 5,851 per month in FY 2016 to 2,279 in FY 2018, a 61 percent decline. Despite the president removing Iraq and Sudan from the list, their visa numbers did not recover. The declines for each country started before the Supreme Court allowed the travel ban to take partial effect in June and then full effect in December, but they fell more steeply after each ruling.

Figure 4: Average Monthly Immigrant and Nonimmigrant Visa Issuances to “Travel Ban” Nationals

Causes of the Decline

The decreases in Muslim arrivals have multiple causes. Refugee admissions are entirely controlled by presidential proclamations. President Trump initially suspended the refugee program, and then, when blocked by the courts, he simply cut the refugee limit in half. His administration has failed even to achieve this target. The administration also directly controls the type of refugees admitted, so the entire decline in Muslim refugee numbers and their share of total arrivals is a consequence of policy choices. President Trump promised to “prioritize” Christian refugees, and while he has cut the number of Christians as well, he has increased their share of total numbers.

Travel ban countries also explain more than two thirds (68 percent) of the decline in immigrant visa issuances from 2016 to 2018, implying that the explicit singling out of those nationalities, including the ones subsequently removed from the list, had a major effect on their ability to obtain or willingness to apply for visas. Travel ban countries also account for 16 percent of the decrease in temporary or nonimmigrant visa issuances from 2016 to 2018. This implies, however, that other causes may be more important in driving that trend.
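The 68 percent attribution can be reproduced from the monthly averages cited earlier, a back-of-the-envelope sketch comparing the drop for the eight travel ban countries with the drop for all 48 majority Muslim countries:

```python
# Average monthly immigrant visa issuances cited earlier in the piece
all_decline = 9_787 - 7_241   # all 48 majority Muslim countries, FY16 -> FY18
ban_decline = 2_654 - 918     # the eight "travel ban" countries, FY16 -> FY18

share = ban_decline / all_decline * 100
print(round(share))  # -> 68
```

In other words, eight countries account for roughly two thirds of the drop across all 48, even though they supplied only about a quarter of the FY 2016 total.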

The State Department rolled out a new “extreme vetting” form, the DS-5535, that requires far more documentation from visa applicants, in the State Department’s words, “who have been deemed to warrant additional scrutiny” (i.e., Muslim applicants). This form, in conjunction with other policies, has resulted in more Muslim visa applications disappearing into the “administrative processing” queue, according to a new report from the American Immigration Lawyers Association (AILA). “Administrative processing” is code for “meets the requirements but subject to further security screening.” Unfortunately, the State Department publishes no figures on the frequency of this phenomenon.

Anecdotal reports of visa denials for Muslim applicants began to receive attention in 2017, but the State Department fails to publish visa refusal figures by country of origin, so we cannot put hard numbers behind the reports. Muslim travelers may also want to avoid the United States during President Trump’s presidency. His rhetoric may have scared off some visitors, and stories of lengthy detentions and other mistreatment of Muslims at airports and border checkpoints may discourage others.


President Trump appears to be fulfilling his campaign promise. The United States is accepting the fewest Muslim refugees in decades, and immigration from the Muslim world has received an unprecedented cut under his administration. On the campaign trail, President Trump assured voters that the Muslim ban would be a “temporary ban.” In the coming months, we will find out how temporary these policies discouraging Muslim immigration turn out to be.

In honor of April 22, consider this picture:

It’s taken from Zhu et al.’s 2016 paper, “Greening of the Earth and its Drivers.” “Leaf Area Index” is a measure of the density of vegetation cover. It’s positive over most of the planet, and especially so in the purple-shaded regions, which are mainly the tropical rainforest, the most diverse and revered of our ecosystems.

Only 4% of the surface shows the opposite, a significant “browning.” The causes of the planetary greening? In a word, “us.” According to Zhu et al., 91% of the greening is attributable to human activity. Perhaps it’s best to simply quote from the paper:

Factorial simulations with multiple global ecosystem models suggest that CO2 fertilization effects explain 70% of the observed greening trend, followed by nitrogen deposition (9%), climate change (8%) and land cover change (LCC) (4%). CO2 fertilization effects explain most of the greening trends in the tropics, whereas climate change resulted in greening of the high latitudes and the Tibetan Plateau.

Again, Happy Earth Day!

The dramatic news that CIA Director Mike Pompeo met in secret with North Korean leader Kim Jong-un over the Easter weekend has renewed hopes that one of the world’s most dangerous standoffs might be resolved without war. President Donald Trump confirmed via Twitter that details for a summit meeting were “being worked out” and predicted “Denuclearization will be a great thing for World, but also for North Korea!” The good feelings continued during the week, with Kim announcing on Saturday that the North no longer needs to conduct nuclear or missile tests.

Americans should welcome such prospects, but South Koreans have reason to be wary. They have the most to lose from conflict on the peninsula, a real possibility if negotiations fail. After all, President Trump has an uneven track record when it comes to making promises and following through. And even as he boasts of his success in getting the North to the negotiating table, he has also said that he wouldn’t attend the summit if he thought it wasn’t worth it. Unsurprisingly, the South Koreans have also been engaged in direct talks with the North. South Korean President Moon Jae-in will meet Kim next week. There has even been speculation that talks could end the Korean War. For now, however, and thinking beyond secret meetings and high-level summits, South Korea’s future, in a very real sense, still hinges on the decisions and actions of men and women living in the United States.

Although that has been the case for decades, it can’t ever be a comforting feeling, and that sentiment informed an essay just published in the New York Times. As I note, the process mostly played out well for the United States, for South Korea, and for regional stability:

Under American tutelage, South Korea eventually evolved from a desperately poor autocracy to one of the wealthiest democracies on the planet. American taxpayers continue to spend billions of dollars a year to help maintain regional security. A similar process played out in other parts of Asia and in Europe, where the American security umbrella, including tens of thousands of military personnel, provided room for those countries’ leaders to build strong democracies and economies.

American leaders argued that such policies served the cause of global peace and security. They also reasoned that the substantial costs would be tolerable. And, so long as American productivity and workers’ wages were rising, it seemed that Uncle Sam could ensure a decent standard of living at home and security around the world.

Of course, the costs associated with defending others were measured in more than American treasure. The Korean War Memorial in Washington, DC, honors the 5.9 million Americans who served in the military between June 1950 and July 1953, and especially the 54,246 who gave their lives in that conflict. The Vietnam Memorial, a short distance away, contains the names of 58,318 Americans who made the ultimate sacrifice on behalf of a government in Saigon that struggled to command the respect of the people of South Vietnam, and that collapsed soon after the United States withdrew its support.

This pattern has persisted into the present day. When Iraqi troops were routed by Islamic State fighters in 2014, some took this as proof that U.S. forces should have remained in Iraq. Fear of government collapse in Afghanistan has led two successive U.S. presidents to increase the number of U.S. troops there. President Trump quickly walked back his suggestion that a few thousand troops in Syria would leave any time soon. Supporters of an on-the-ground (and de facto permanent) presence in such places often point to South Korea, Japan, and Germany as examples to emulate. By this logic, Americans win just so long as we never leave.

It is becoming harder, however, as I note in the Times:

for America to maintain this global posture. Eventually, it may become impossible, in part because we helped create the conditions that allowed other countries to prosper and grow….

Americans should be debating how to manage that transition in a way that avoids destabilizing the rest of the world. Unfortunately, if the current administration’s maneuvers between the two Koreas are any indication, this is the last thing on the minds of policymakers.

But while many in Washington resist the suggestion that the United States should revisit its approach to the world, other countries’ leaders are rethinking their dependence on others. As Constanze Stelzenmüller explained in a recent paper for the Brookings Institution, Europeans, in particular, have an “existential” interest in “preserving an international order that safeguards peace and globalization.”

I conclude:

Transitioning to a world with many capable actors won’t be easy. It will require a deft hand to unwind defense arrangements, and patience as others find their way. Given their own domestic spending priorities and continued uncertainty about whether the United States will recommit to the old model, most American allies are likely to take a wait-and-see attitude. A gentle nudge might be needed to move them from comfortable adolescence to empowered adulthood.


The alternative is a renewed commitment to discourage self-reliance among allies. That will be an undertaking far more onerous than any the United States has attempted since World War II — and one that is unlikely to work.

You can read the whole thing here. And let me know what you think by tweeting me @capreble.


With teacher strikes and demonstrations in several states tied not just to teacher compensation, but also the belief that public schooling has been starved for resources, it is worth looking at the spending data. Not trying to say what “fair” teacher pay is, or the degree to which spending may affect test scores; just seeing what we’ve been spending, and how it has changed over the years.

Let’s start with relatively recent history, the only span of years for which the federal government has readily available, total per-pupil spending data for public K-12 schools at the state level. (These data were assembled by pulling from the version of this table for every year and adjusting for inflation.) We want to look at total spending because taxpayers don’t just spend money for operating costs such as teacher salaries, but also on things like new school buildings, expenditures only included in total cost tabulations.

Look at the colorful figure below (every state is a line) and you will see that inflation-adjusted spending generally went up on average (the bold, black line), from $11,132 in 99-00 to $13,187 in the 2014-15 school year, an 18 percent real increase. Of course, as you can see, some states spent a lot more at the outset, and boosted spending much more over time, than others.
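The 18 percent figure follows directly from the two endpoint values, which are already inflation-adjusted in the federal data; a minimal check:

```python
# Real per-pupil spending, national average (federal data cited above)
start, end = 11_132, 13_187   # 99-00 and 14-15 school years

real_increase = (end - start) / start * 100
print(round(real_increase))  # -> 18
```

Because both figures are in constant dollars, the simple ratio is the real growth rate; no further inflation adjustment is needed.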

What you will also notice is a steady climb in spending between 99-00 and 07-08, a decline until 12-13, then an uptick. That’s the effect of the Great Recession and the slow recovery from it. The 12-13 trough saw an average per-pupil expenditure of $12,789, a bit more than the amount that was spent in 04-05, just 8 years earlier. As many people worried about public school resources have accurately noted, there was a drop in K-12 spending, and on average it has not yet returned to the peak spending of 07-08. It also, however, did not take a deep dive.

Of course, we didn’t only start spending money on public schools in 99-00; we’ve been spending for decades. Looking at that, as you can see below, when we say we hit a funding “peak” in 07-08, we mean peak! The drop in spending since the Great Recession has been an anomaly. Since 1919-20, inflation-adjusted total per-pupil outlays have shot up from $609 to over $13,000. (Note that “year” skips 10 years after 1919, and changes in 2-year increments between 1929 and 1969.) It is a stretch to assume education funding from almost 100 years ago is comparable to today even adjusting for inflation, but just go back to 70-71: per-pupil spending has more than doubled, from $5,926 to $13,119!

Seeing the long-term funding picture provides some crucial context that is too often neglected in discussions of public K-12 funding broadly: It had escalated for decades, and only for a few years did it drop.

That said, let’s look at total spending since 99-00 in the states that have seen the most-discussed labor unrest—Arizona, Colorado, Kentucky, Oklahoma, and West Virginia. All are on the lower-end of spending. But have they been cutting?

In Arizona, spending has been a bit all over the map since 99-00, but was at a low point in 13-14, with just a slight uptick the following year.

Colorado, perhaps appropriately, looks more like a mountain than Arizona, with big spending growth until 07-08, followed by a big drop. By the end of the period the state was basically right back where it began, spending $10,900 in 99-00, and $10,815 in 14-15.

How about Kentucky? The trend there is less dramatic than in Arizona or Colorado: spending increased almost 30 percent over just eight years and then dropped only a relatively small amount afterwards, giving back around $1,000 of a more than $2,600 boost.

Next we go to Oklahoma, site of a long teacher labor action that started with a bang and ended with a bit of a whimper…and a promise of a $6,000 boost in salary for every teacher. In the Sooner State we see a bit of a roller coaster, but since 99-00 spending is up from $8,310 to $9,114. Of course that is down from a peak of $9,675 in 07-08, but the general trend has clearly been a rising one.

Finally, there’s West Virginia, the state that started the recent wave of strikes. Here again, the trend since 1999-00 is not one of cuts, but spending increases, with a zenith actually in 09-10, rather than 07-08. Indeed, in the decade between 99-00 and 09-10 per-pupil spending in West Virginia rose 22 percent, or by nearly $2,400. Yes, it generally dropped thereafter, but only down to about $12,000 in 12-13, and it had rebounded to $12,437 by 14-15.

Bottom line: Many states have seen decreasing per-pupil expenditures for public schools since the Great Recession. But how deep varies from state to state, and it comes on the heels of nearly a century of almost unremitting spending growth.

Scott Pruitt, the Administrator of the U.S. Environmental Protection Agency (EPA), is loathed by most researchers and environmentalists, but he may yet emerge as science’s unlikely redeemer.

Pruitt is one of the least popular people in America. Before coming to DC, he was the attorney general of Oklahoma, where he described himself as “a leading advocate against the EPA’s activist agenda,” a claim he made good on by suing the Agency no fewer than 14 times.

But Pruitt, who in public appears reasonable, quietly spoken, and polite, denies having declared war on the environment, only on the EPA’s scientific protocols. The 1970 Clean Air Act requires the agency, when proposing new regulations, to use criteria that “accurately reflect the latest scientific knowledge.”

Governmental skepticism of science has a long pedigree. On launching Medicare on June 15, 1966, LBJ berated the National Institutes of Health for having published lots of papers that had benefited no patients. Earlier, in his Farewell Address, Eisenhower had warned of US public policy becoming “the captive of a scientific-technological elite.” Now Pruitt maintains that skeptical tradition by challenging the EPA’s science, and by extension much of the way research is performed in the U.S. today.

Pruitt has forbidden the EPA from referencing papers that do not allow free access to their underlying data and methods. Non-scientists are often astonished to learn that, in many academic disciplines, there is no obligation on researchers, when submitting papers for publication, to make their original data available.

Non-scientists, moreover, rarely grasp how poor many peer-reviewed papers are. John Ioannidis of Stanford University is, sadly, famous for his 2005 study entitled “Why Most Published Research Findings Are False,” in which he indeed explained why most published research findings are false. Why? Because the authors misused statistics. Consciously.

In a 2016 paper entitled “The Natural Selection of Bad Science,” published in no less a journal than Royal Society Open Science, Paul Smaldino (University of California, Merced) and Richard McElreath (Max Planck Institute, Leipzig, Germany) showed that researchers will select “methods of analysis … to further publication rather than discovery.” Smaldino and McElreath further chronicled how, over the last half century, lone statisticians have protested in vain against the institutional abuse of statistics by entire scientific disciplines. But everybody (authors, editors, university presidents, funding agencies, et alia) has an incentive to maximize publication rates, and if publication has to trump discovery, so be it.

In challenging the way the EPA does science, Pruitt is actually challenging the conflicts of interest now affecting many disciplines. When he discovered that scientists on just three of the EPA’s Science Advisory Boards had, over the previous three years, collectively received research grants from the Agency totaling no less than $77 million (thus incentivizing them to exaggerate environmental problems), he declared that members of the Science Advisory Boards had to be genuinely independent of the Agency. From the criticism that decision attracted, it is obvious that many researchers cannot see how receiving money can indeed generate a conflict of interest.

The source of many of science’s problems today was identified in 2016 by David Sarewitz of Arizona State University, who, in an essay entitled “Saving Science,” singled out peer review as the villain. In the days before science was funded by the federal government (i.e., before 1940 in the U.S.), scientists were embedded in the real worlds of industry and of health foundations, where they were judged by discovery, and where, in the process we call technology, their claims of discovery were tested against reality. In Sarewitz’s words, it is “technology that keeps science honest.”

But scientists’ claims of discovery are today increasingly tested not against reality; rather, they are judged by their peers. And peers have their paradigms. And those paradigms can be wrong. So dietary fat, for example, was for decades demonized by the deft application of statistics by researchers anxious to be funded, published, and promoted by their peers. And salt was claimed, falsely, to be a major cause of population-level hypertension, while the principal cause of drug overdoses is claimed to be prescription opioids rather than the policies restricting them.

Pruitt’s conduct and ethics at the EPA have been and will be criticized, and his attacks on the Agency’s failings in science have been dismissed as the self-serving acts of a Trump partisan. But by highlighting science’s systematic shortfalls, Pruitt might be doing it a favor.