Cato Op-Eds

Individual Liberty, Free Markets, and Peace

As a practicing physician, I have long been frustrated with the Electronic Health Record (EHR) system the federal government required health care practitioners to adopt by 2014 or face economic sanctions. This manifestation of central planning compelled many doctors to scrap electronic record systems already in place because the planners determined they were not used “meaningfully.” Doctors were forced to buy a government-approved electronic health record system and to conform their decision-making and practice techniques to algorithms the central planners deem “meaningful.” Other professions and businesses make use of technology to enhance productivity and quality. This happens organically: electronic programs are designed to fit around the unique needs and goals of the particular enterprise. But in this instance it works the other way around: health care practitioners must conform to the needs and goals of the EHR. This disrupts the thinking process, slows productivity, interrupts the patient-doctor relationship, and increases the risk of error. As Twila Brase, RN, PHN, ably details in “Big Brother in the Exam Room,” things go downhill from there.

With painstaking, almost overwhelming detail that makes the reader feel the enormous complexity of the administrative state, Ms. Brase, who is president and co-founder of Citizens’ Council for Health Freedom (CCHF), traces the origins and motives that led to Congress passing the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009. The goal from the outset was for the health care regulatory bureaucracy to collect the private health data of the entire population and use it to create a one-size-fits-all standardization of the way medicine is practiced. This standardization is based upon population models, not individual patients. It uses the EHR design to nudge practitioners into surrendering their judgment to the algorithms and guidelines adopted by the regulators. Along the way, the meaningfully used EHR makes practitioners spend the bulk of their time entering data into forms and clicking boxes, providing the regulators with the data needed to generate further standardization.

Brase provides wide-ranging documentation of the way this “meaningful use” of the EHR has led to medical errors and the replication of false information in patients’ health records. She shows how the planners intend to morph the Electronic Health Record into a Comprehensive Health Record (CHR) through the continual addition of new data categories, delving into the details of lifestyle choices that arguably relate only indirectly to health: sexual proclivities, recreational behaviors, gun ownership, dietary choices. In effect, a meaningfully used Electronic Health Record is nothing more than a government health surveillance system. As the old saying goes, “He who pays the piper calls the tune.” If the third party—especially a third party with the monopoly police power of the state—is paying for health care, it may demand adherence to lifestyle choices that keep costs down.

All of this data collection and use is made possible by the Orwellian-named Health Insurance Portability and Accountability Act (HIPAA) of 1996. Most patients think of HIPAA as a guarantee that their health records will remain private and confidential. They think all those “HIPAA Privacy” forms they sign at their doctor’s office are there to ensure confidentiality. But, as Brase points out very clearly, HIPAA grants numerous exemptions from its confidentiality requirements for the purposes of collecting data and enforcing laws. As Brase puts it,

 It contains the word privacy, leaving most to believe it is what it says, rather than reading it to see what it really is. A more honest title would be “Notice of Federally Authorized Disclosures for Which Patient Consent Is Not Required.”

It should frighten any reader to learn just how exposed their personal medical information is to regulators in and out of government. Some of the data collected without patients’ knowledge is generated by what Brase calls “forced hospital experiments” in health care delivery and payment models, themselves conducted without patients’ knowledge. Brase documents how patients remain in the dark about being included in payment-model experiments, including whether they are being cared for by an Accountable Care Organization (ACO).

Again quoting Brase, 

Congress’s insistence that physicians install government health surveillance systems in the exam room and use them for the care of patients, despite being untested and unproven—and an unfunded mandate—is disturbing at so many levels—from privacy to professional ethics to the patient-doctor relationship. 

As the book points out, more and more private practitioners are opting out of this surveillance system. Some are opting out of the third-party payment system (including Medicare and Medicaid) and moving to a “Direct Care” cash-pay model, which exempts them from HIPAA and the government’s EHR mandate. Some are retiring early or leaving medical practice altogether. Many, if not most, are selling their practices to hospitals or large corporate clinics, transferring the risk of severe penalties for non-compliance to those larger entities.

Health information technology can and should be a good thing for patients and doctors alike. But when the government, rather than individual patients and doctors, decides what kind of technology that will be and how it will be used, health information technology can become a dangerous threat to liberty, autonomy, and health.

“Big Brother in the Exam Room” is the first book to catalog in meticulous detail the dangerous ways in which health information technology is being weaponized against us all. Everyone should read it.

It has been a whirlwind week of negotiations on the North American Free Trade Agreement (NAFTA), ending on Friday in apparent deadlock. Canada was not able to reach a deal with the United States on some of the remaining contentious issues, but that did not stop President Trump from submitting to Congress a notice of intent to sign a deal with Mexico that was agreed to earlier this week. This action allows the new trade agreement to be signed by the end of November, before Mexican President Enrique Peña Nieto leaves office. While a high degree of uncertainty remains, it is premature to ring the alarm for the end of NAFTA as we know it.

Why? First, there is still some negotiating latitude built into the Trade Promotion Authority (TPA) legislation, which outlines the process for how the negotiations unfold. The full text of the agreement has to be made public thirty days after the notice of intent to sign is submitted to Congress. This means that the parties have until the end of September to finalize the contents of the agreement. What we have now is just an agreement in principle, which can be thought of as a draft of the agreement, with a lot of little details still needing to be filled in. Therefore, it is not surprising that the notice submitted to Congress today left open the possibility of Canada joining the agreement “if it is willing” at a later date. Canadian Foreign Minister Chrystia Freeland will resume talks with U.S. Trade Representative Robert Lighthizer next Wednesday, and this should be seen as a sign that the negotiations are far from over.

Relatedly, TPA legislation does not provide a clear answer as to whether the President can split NAFTA into two bilateral deals. The original letter of intent to re-open NAFTA, which was submitted by Amb. Lighthizer in May 2017, notified Congress that the President intended to “initiate negotiations with Canada and Mexico regarding modernization of the North American Free Trade Agreement (NAFTA).” This can be read as signaling that not only were the negotiations supposed to be with both Canada and Mexico, but also that Congress only agreed to this specific arrangement.  In addition, it could be argued that TPA would require President Trump to “restart the clock” on negotiations with a new notice of intent to negotiate with Mexico alone. The bottom line, however, is that it is entirely up to Congress to decide whether or not it will allow for a vote on a bilateral deal with Mexico only, and so far, it appears that Congress is opposed to this. 

In fact, Congress has been fairly vocal about the fact that a NAFTA without Canada simply does not make sense. Canada and Mexico are the top destinations for U.S. exports and imports, with total trade reaching over $1 trillion annually. Furthermore, we don’t just trade things with each other in North America; we make things together. Taking Canada out of NAFTA is analogous to putting a wall in the middle of a factory floor. It has been estimated that every dollar of imports from Mexico includes forty cents of U.S. value added, and for Canada that figure is twenty-five cents for every dollar of imports—these are U.S. inputs in products that come back to the United States.

While President Trump may claim that he’s playing hardball with Canada by presenting an offer it cannot reasonably accept, we should approach such negotiating bluster with caution. The reality is that there is still plenty of time to negotiate, and Canada seems willing to come back to the table next week. At a press conference at the Canadian Embassy in Washington, D.C., after negotiations wrapped up for the week, Minister Freeland remarked that Canada wants a good deal, not just any deal, adding that a win-win-win was still possible. Negotiations are sure to continue amidst the uncertainty, and it will be a challenge to parse the signal from the noise. However, we should remain optimistic that a trilateral deal is within reach and take Friday’s news as just another step in that direction.

A Massachusetts statute prohibits ownership of “assault weapons,” the statutory definition of which includes the most popular semi-automatic rifles in the country, as well as “copies or duplicates” of any such weapons. As for what that means, your guess is as good as ours. A group of plaintiffs, including two firearm dealers and the Gun Owners’ Action League, challenged the law as a violation of the Second Amendment. Unfortunately, federal district court judge William Young upheld the ban.

Judge Young followed the lead of the Fourth Circuit’s decision in Kolbe v. Hogan (in which Cato filed a brief supporting a petition to the Supreme Court), which misconstrued a shred of the landmark 2008 District of Columbia v. Heller case to mean that the test for whether a class of weapons can be banned is whether the weapons are “like an M-16,” contravening the core of Heller: that all weapons in common civilian use are constitutionally protected. What’s worse, Judge Young seemed to go a step further, rejecting the argument that an “M-16” is a machine gun, unlike the weapons banned by Massachusetts, and deciding that semi-automatics are “almost identical to the M16, except for the mode of firing.” (The mode of firing is, of course, the principal distinction between automatic and semi-automatic firearms.)

The plaintiffs are appealing to the U.S. Court of Appeals for the First Circuit. Cato, joined by several organizations interested in the protection of our civil liberties and a group of professors who teach the Second Amendment, has filed a brief supporting the plaintiffs. We point out that the Massachusetts law classifies the common semi-automatic firearms used by police officers as “dangerous and unusual” weapons of war, alienating officers from their communities and undermining policing by consent.

Where for generations Americans needed to look no further than the belt of their local deputies for guidance in selecting a defensive firearm, Massachusetts’ restrictions now prohibit civilians from owning these very same arms. The firearms selected by experts for reliability and overall utility as defensive weapons would be unavailable for the lawful purpose of self-defense. According to Massachusetts, these law enforcement tools aren’t defensive but instead implements of war designed to inflict mass carnage.

At a time when tensions between police and the policed are a sensitive issue, Massachusetts sets up a framework in which the people can be fired upon by police with what the state fancies an instrument of war, a suggestion that only serves to drive a wedge between police and citizenry.

Further, the district court incorrectly framed the question as whether the banned weapons were actually used in defensive shootings, instead of following Supreme Court precedent and asking whether the arms were possessed for lawful purposes (as they unquestionably were). This skewing of legal frameworks is especially troublesome where the Supreme Court has remained silent on the scope of the right to keep and bear arms for the last decade, leading to a fractured and unpredictable state of the law.

Today, the majority of firearms sold in the United States for self-defense are illegal in Massachusetts. The district court erred in upholding this abridgment of Bay State residents’ rights. The Massachusetts law is unconstitutional on its face and the reasoning upholding it lacks legal or historical foundation.

Last weekend the Federal Reserve Bank of Kansas City hosted its annual symposium in Jackson Hole. Despite being the Fed’s largest annual event, the symposium has been “fairly boring” for years, in terms of what can be learned about the future of actual policy. This year’s program, Changing Market Structures and Implications for Monetary Policy, was firmly in that tradition—making Jerome Powell’s speech, his first there as Fed Chair, the main event. In it, he covered familiar ground, suggesting that the changes he has begun as Chair are likely to continue.

Powell constructed his remarks around a nautical metaphor of “shifting stars.” In macroeconomic equations, a variable carries a star superscript (*) to indicate that it is a fundamental structural feature of the economy. In Powell’s words, these starred values in conventional economic models are the “normal,” or “natural,” or “desired” values (e.g., u* for the natural rate of unemployment, r* for the neutral rate of interest, and π* for the optimal inflation rate). In these models the actual data are supposed to fluctuate around these stars. However, the models require estimates for many star values (the exception being desired inflation, which the Fed has chosen to be a 2% annual rate) because they cannot be directly observed and must therefore be inferred.

These models then use the gaps between actual values and the starred values to guide—or navigate, in Powell’s metaphor—the path of monetary policy. The most famous example is, of course, the Taylor Rule, which calls for interest rate adjustments depending on how far the actual inflation rate is from desired inflation and how far real GDP is from its estimated potential. Powell’s thesis is that as these fundamental values change, and particularly as the estimates become more uncertain—as the stars shift, so to speak—using them as guides to monetary policy becomes more difficult and less desirable.
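For readers who want the algebra, the original 1993 version of the Taylor Rule (a standard textbook statement, not one quoted in Powell’s speech) can be written as:

i = r* + π + 0.5(π − π*) + 0.5(y − y*)

where i is the prescribed federal funds rate, π is current inflation, and (y − y*) is the percentage gap between real GDP and its estimated potential. Two of the rule’s inputs, r* and y*, are precisely the unobservable stars Powell warns about.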

His thesis echoes a point he made during his second press conference as Fed Chair, when he said policymakers “can’t be too attached to these unobservable variables.” It also underscores Powell’s expressed desire to move the Fed in new directions: less wedded to formal models, open to a broader range of economic views, and potentially more open to using monetary policy rules. To be clear, while Powell has outlined these new directions, it remains to be seen how and whether such changes will actually be implemented.

A specific example of a new direction—and to my mind the most important comment in the Jackson Hole speech—was Powell’s suggestion that the Fed look beyond inflation in order to detect troubling signs in the economy. A preoccupation with inflation is a serious problem at the Fed, and one that had disastrous consequences in 2008. Indeed, Powell noted that the “destabilizing excesses” (a term he should have defined) in advance of the last two recessions showed up in financial market data rather than in inflation metrics.

While Powell is more open to monetary policy rules than his predecessors, he has yet to formally endorse them as anything other than helpful guides in the policymaking process. At Jackson Hole he remarked, “[o]ne general finding is that no single, simple approach to monetary policy is likely to be appropriate across a broad range of plausible scenarios.” Mark Spindel, a noted Fed watcher and co-author of a political history of the Fed, read this as a rejection of rule-based monetary policy. However, given the shifting-stars context of the speech, Powell’s comment should be interpreted as saying that as the uncertainty surrounding the stars increases, the usefulness of the policy rules that rely on those stars as inputs decreases. In other words, Powell is questioning the use of a mechanical rule, not monetary policy rules more generally.

Such an interpretation is very much in keeping with past statements made by Powell. For example, in 2015, as a Fed Governor, he said he was not in favor of a policy rule that was a simple equation for the Fed to follow in a mechanical fashion. Two years later, Powell said that traditional rules were backward looking, but that monetary policy needs to be forward looking and not overly reliant on past data. Upon becoming Fed Chair early this year, Powell made it a point to tell Congress he found monetary policy rules helpful—a sentiment he reiterated when testifying on the Hill last month.

The good news is that there is a monetary policy rule that is forward looking, not concerned with estimating the “stars,” and robust against an inflation fixation. I am referring, of course, to a nominal GDP level target, a monetary policy rule that has been gaining advocates.
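In stylized form (my sketch, not a formula from the speech or the symposium), a nominal GDP level target commits the central bank to keeping total nominal spending on a preannounced path:

N* = N₀(1 + g)^t,  where N = Py is nominal GDP

with policy easing when actual nominal spending N falls below the path and tightening when it rises above it. Because the target is the observable level of total spending itself, operating the rule requires no estimate of u*, r*, or potential output.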

Like in years past, there was not a lot of discussion about the future of actual monetary policy at the Jackson Hole symposium. But if Powell really is moving the Federal Reserve towards adopting a rule, he is also beginning to outline a framework that should make a nominal GDP rule the first choice.

[Cross-posted from Alt-M.org]

It would have been natural to assume that partisan gerrymandering would not return as an issue to the Supreme Court until next year at the earliest, the election calendar for this year being too far advanced. But yesterday a federal judicial panel ruled that North Carolina’s U.S. House lines were unconstitutionally biased toward the interests of the Republican Party and suggested that it might impose new lines for November’s vote, even though there would be no time in which to hold a primary for the revised districts. Conducting an election without a primary might seem like a radical remedy, but the court pointed to other offices for which the state of North Carolina provides for election without a preceding primary stage.

If the court takes such a step, it would seem inevitable that defenders of the map will ask for a stay of the ruling from the U.S. Supreme Court. In June, as we know, the Court declined to reach the big constitutional issues on partisan gerrymandering, instead finding ways to send the two cases before it (Gill v. Whitford from Wisconsin and Benisek v. Lamone from Maryland) back to lower courts for more processing. 

In my forthcoming article on Gill and Benisek in the Cato Supreme Court Review, I suggest that with the retirement of Justice Anthony Kennedy, who’d been the swing vote on the issue, litigators from liberal good-government groups might find it prudent to refrain for a while from steering the question back up to the high court, instead biding their time in hopes of new appointments. After all, Kennedy’s replacement, given current political winds, is likely to side with the conservative bloc. But a contrasting and far more daring tactic would be to take advantage of the vacancy to make a move in lower courts now. To quote Rick Hasen’s new analysis at Election Law Blog, “given the current 4-4 split on the Supreme Court, any emergency action could well fail, leaving the lower court opinion in place.” And Hasen spells out the political implications: “if the lower court orders new districts for 2018, and the Supreme Court deadlocks 4-4 on an emergency request to overturn that order, we could have new districts for 2018 only, and that could help Democrats retake control of the U.S. House.”

Those are very big “ifs,” however. As Hasen concedes, “We know that the Supreme Court has not liked interim remedies in redistricting and election cases close to the election, and it has often rolled back such changes.” Moreover, Justices Breyer and Kagan in particular have lately shown considerable willingness to join with conservatives where necessary to find narrow grounds for decision that keep the Court’s steps small and incremental, so as not to risk landmark defeats at the hands of a mobilized 5-4 conservative court. It would not be surprising if one or more liberal Justices join a stay of a drastic order in the North Carolina case rather than set up a 2019 confrontation in such a way as to ensure a maximally ruffled conservative wing.

Some of these issues might come up at Cato’s 17th annual Constitution Day Sept. 17 – mark your calendar now! – where I’ll be discussing the gerrymandering cases on the mid-afternoon panel.

In the first of this series of posts, I explained that the mere presence of fractional-reserve banks has little bearing on an economy’s rate of money growth, which mainly depends on the growth rate of its stock of basic (commodity or fiat) money. The one exception to this rule, I said, consists of episodes in which growth in an economy’s money stock, defined broadly to include the public’s holdings of readily redeemable bank IOUs as well as its holdings of basic money, is due in whole or in part to a decline in bank reserve ratios.
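A textbook money-multiplier identity (my illustration, not drawn from the earlier posts) makes that exception precise. Letting B stand for the stock of basic money, c for the public’s chosen ratio of currency to bank deposits, and r for the banks’ reserve ratio, the broad money stock is:

M = B × (1 + c)/(c + r)

Holding B and c constant, M grows only if the reserve ratio r falls; otherwise broad money simply tracks the stock of basic money.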

In a second post, I pointed out that, while falling bank reserve ratios might in theory be to blame for business booms, a look at some of the more notorious booms shows that they did not in fact coincide with any substantial decline in bank reserve ratios.

In this third and final post, I complete my critique of the “Fractional Reserves lead to Austrian Business Cycles” (FR=ABC) thesis by showing that, when reserve ratios in a fractional-reserve banking system do decline, the decline doesn’t necessarily result in a malinvestment boom.

Causes of Changed Bank Reserve Ratios

That historic booms haven’t typically been fueled by falling bank reserve ratios, meaning ratios of commercial bank reserves to commercial bank demand deposits and notes, doesn’t mean that those ratios never decline. In fact they may decline for several reasons. But when they do change, commercial bank reserve ratios usually change gradually rather than rapidly. Central banks, in contrast, and fiat-money-issuing central banks especially, can and sometimes do expand their balance sheets quite rapidly, if not dramatically. It’s for this reason that monetary booms are more likely to be fueled by central bank credit operations than by commercial banks’ decisions to “skimp” more than usual on reserves.

There are, however, some exceptions to the rule that reserve ratios tend to change only gradually. One of these stems from government regulations, changes in which can lead to reserve ratio changes that are both more substantial and more sudden. Thus, in the U.S. during the 1990s, changes to minimum bank reserve requirements and the manner of their enforcement led to a considerable decline in actual bank reserve ratios. In contrast, the Federal Reserve’s decision to begin paying interest on bank reserves starting in October 2008, followed by its various rounds of Quantitative Easing, caused bank reserve ratios to increase dramatically.

The other exception concerns cases in which fractional reserve banking is just developing. Obviously, as that happens, a switch from 100-percent reserves, or its equivalent, to some considerably lower fraction might take place over a relatively short time span. In England during the last half of the 17th century, for example, the rise first of the goldsmith banks and then of the Bank of England led to a considerable reduction in the demand for monetary gold, its place being taken by a combination of paper notes and readily redeemable deposits.

Yet even that revolutionary change involved a less rapid increase in the role of fiduciary media, with less significant cyclical implications, than one might first suppose, for several reasons. First, only a relatively small number of persons dealt with banks at first: for the vast majority of people, “money” still meant nothing other than copper and silver coins, plus (for the relatively well-heeled) the occasional gold guinea. Second, bank reserve ratios remained fairly high at first — the best estimates put them at around 30 percent or so — declining only gradually from that relatively high level. Finally, the fact that the change was as yet limited to England and one or two other economies meant that, instead of resulting in any substantial change in England’s money stock, level of spending, or price level, it led to a largely contemporaneous outflow of now-surplus gold to the rest of the world. By allowing paper to stand in for specie, in other words, England was able to export that much more precious metal. The same thing occurred in Scotland over the course of the next century, only to a considerably greater degree, thanks to the greater freedom enjoyed by Scotland’s banks. It was that development that caused Adam Smith to wax eloquent on the Scottish banking system’s contribution to Scottish economic growth.

Eventually, however, any fractional-reserve banking system tends to settle into a relatively “mature” state, after which, barring changes to government regulations, bank reserve ratios are likely to decline only gradually, if they decline at all, in response to numerous factors, including improvements in settlement arrangements, economies of scale, and changes in the liquidity or marketability of banks’ non-reserve assets. For this reason it’s perfectly absurd to treat the relatively rapid expansion of fiduciary media in a fractional-reserve banking system that’s just taking root as illustrating tendencies present within established fractional-reserve banking systems.

Yet that’s just what some proponents of 100-percent banking appear to do. For example, in a relatively recent blog post, Robert Murphy serves up the following “standard story of fractional reserve banking”:

Starting originally from a position of 100% reserve banking on demand deposits, the commercial banks look at all of their customers’ deposits of gold in their vaults, and take 80% of them, and lend them out into the community. This pushes down interest rates. But the original rich depositors don’t alter their behavior. Somebody who had planned on spending 8 of his 10 gold coins still does that. So aggregate consumption in the community doesn’t drop. Therefore, to the extent that the sudden drop in interest rates induces new investment projects that wouldn’t have occurred otherwise, there is an unsustainable boom that must eventually end in a bust.

Let pass Murphy’s unfounded — and by now repeatedly refuted — suggestion that fractional reserve banking started out with bankers lending customers’ deposits without the customers knowing it. And forget as well, for the moment, that any banker who funds loans using deposits that the depositors themselves intend to spend immediately will go bust in short order. The awkward fact remains that, once a fractional-reserve banking system is established, it cannot go on being established again and again; instead it settles down to a relatively stable reserve ratio. So instead of explaining how fractional reserve banking can give rise to recurring business cycles, the story Murphy offers accounts for only a single, never-to-be-repeated fractional-reserve-based cyclical event.

Desirable and Undesirable Reserve Ratio Changes

Finally, a declining banking system reserve ratio doesn’t necessarily imply excessive money creation, lending, or bank maturity mismatching. That’s because, notwithstanding what Murphy and others claim, competing commercial banks generally can’t create money, or loans, out of thin air. Instead, their capacity to lend, like that of other intermediaries, depends crucially on their success at getting members of the public to hold on to their IOUs. The more IOUs bankers’ customers are willing to hold on to, and the fewer they choose to cash in, the more the bankers can afford to lend. If, on the other hand, instead of holding on to a competing bank’s IOUs, the bank’s customers all decide to spend them at once, the bank will fail in short order, and will do so even if its ordinary customers never stage a run on it. All of this goes for the readily redeemable bank IOUs that make up the stock of bank-supplied money no less than for IOUs of other sorts. In other words, contrary to what Robert Murphy suggests in the passage quoted above, it matters a great deal to any banker whether persons who have exchanged basic money for his bank’s redeemable claims plan to go on spending, thereby drawing down those claims, or not.

Furthermore, as I show in part II of my book on free banking, in a free or relatively free banking system, meaning one in which there are no legal reserve requirements and banks are free to issue their own circulating currency, bank reserve ratios will tend to change mainly in response to changes in the public’s demand to hold on to bank-money balances. When people choose to increase their holdings of (that is, to put off spending) bank deposits or notes or both, the banks can profitably “stretch” their reserves further, making them support a correspondingly higher quantity of bank money. If, on the other hand, people choose to reduce their holdings of bank money by trying to spend them more aggressively, the banks will be compelled to restrict their lending and raise their reserve ratios. The stock of bank-created money will, in other words, tend to adjust so as to offset opposite changes in money’s velocity, thereby stabilizing the product of the two.
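In terms of the equation of exchange (again my shorthand rather than the book’s notation), the claim is that equilibrium changes in M offset opposite changes in V, stabilizing total spending:

MV = Py,  with M rising as V falls, and vice versa

When the public holds on to more bank money, V declines; banks can then safely lower their reserve ratio, raising M = B(1 + c)/(c + r) just enough to keep the product MV roughly constant.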

This last result, far from implying a means by which fractional-reserve banks might fuel business cycles, suggests on the contrary that the equilibrium reserve ratio changes in a free banking system can actually help to avoid such cycles. For according to Friedrich Hayek’s writings of the 1930s, in which he develops his theory of the business cycle most fully, avoiding such cycles is a matter of maintaining, not a constant money stock (M), but a constant “total money stream” (MV).

Voluntary and Involuntary Saving

Hayek’s view is, of course, distinct from Murray Rothbard’s, and also from that of many other Austrian critics of fractional reserve banking. But it is also more intuitively appealing. For the Austrian theory of the business cycle attributes unsustainable booms to occasions when bank-financed investment exceeds voluntary saving. Such booms are unsustainable because the unnaturally low interest rates with which they’re associated inevitably give way to higher ones consistent with the public’s voluntary willingness to save. But why should rates rise? They rise because lending in excess of voluntary savings means adding more to the “total money stream” than savers take out of that stream. Eventually that increased money stream will serve to bid up prices. Higher prices will in turn raise the demand for loans, pushing interest rates back up. The increase in rates then brings the boom to an end, launching the “bust” stage of the cycle.

If, in contrast, banks lend more only to the extent that doing so compensates for the public’s attempts to accumulate balances of bank money, the money stream remains constant. Consequently the increase in bank lending doesn’t result in any general increase in the demand for or prices of goods. There is, in this case, no tendency for either the demand for credit or interest rates to increase. The investment “boom,” if it can be called that, is therefore not self-reversing. Instead, it can go on for as long as the increased demand for fiduciary media persists, and perhaps forever.

As I’m not saying anything here that I haven’t said before, I have a pretty darn good idea what sort of counterarguments to anticipate. Among others, I expect to see claims to the effect that people who hold on to balances of bank money (or fiduciary media or “money substitutes” or whatever one wishes to call bank-issued IOUs that serve as regularly accepted means of exchange) are not “really” engaged in acts of voluntary saving, because they might choose to part with those balances at any time, or because a bank deposit balance or banknote is “neither a present nor a future good,” or something along these lines.

Balderdash. To “save” is merely to refrain from spending one’s earnings; and one can save by holding on to or adding to a bank deposit balance or redeemable banknote no less than by holding on to or accumulating Treasury bonds. That persons who choose to save by accumulating demand deposits do not commit themselves to saving any definite amount for any definite length of time does not make their decision to save any less real: so long as they hold on to bank-issued IOUs, they are devoting a quantity of savings precisely equal to the value of those IOUs to the banks that have them on their books. As Murray Rothbard himself might have put it — though he certainly never did so with regard to the case at hand — such persons have a “demonstrated preference” for not spending, that is, for saving, to the extent that they hold bank IOUs, where “demonstrated preference” refers to the (“praxeological”) insight that, regardless of what some outside expert might claim, people’s actual acts of choice supply the only real proof of what they desire or don’t desire. According to that insight, so long as someone holds a bank balance or IOU, he desires the balance or IOU, and not the things that could be had for it, or any part of it. That is, he desires to be a creditor to the bank against which he holds the balance or IOU.

And so long as banks expand their lending in accord with their customers’ demonstrated preference for such acts of saving, and no more, while contracting it as their customers’ willingness to direct their savings to them subsides, the banks’ lending will not contribute to business cycles, Austrian or otherwise.

Of course, real-world monetary systems don’t always conform to the ideal sort of banking system I’ve described, issuing more fiduciary media only to the extent that the public’s real demand for such media has itself increased. While free banking systems of the sort I theorize about in my book tend to approximate this ideal, real-world systems can and sometimes do create credit in excess of the public’s voluntary savings, occasionally without, though (as we’ve seen) most often with, the help of accommodative central banks. But that’s no reason to condemn fractional reserve banking. Instead it’s a reason for looking more deeply into the circumstances that sometimes allow banking and monetary systems to promote business cycles.

In other words, instead of repeating the facile cliché that fractional reserve banking causes business cycles, or condemning fiduciary media tout court, Austrian economists who want to put a stop to such cycles, and to do so without undermining beneficial bank undertakings, should inquire into the factors that sometimes cause banks to create more fiduciary media than their customers either want or need.

[Cross-posted from Alt-M.org]
