Who should run the banks?

Politicians have turned financial institutions into public utilities, at everybody’s peril.

The financial crisis whose tenth anniversary we have just commemorated led many people to worry about the seeming asymmetry of rewards in the banking system. It was said that the gains to banks in the run-up to 2008 were privatized, while the losses as boom turned to bust were socialized. Bankers reaped ever higher bonuses, while taxpayers picked up the tab.

This account is true as far as it goes, but it overlooks another asymmetry that arguably played a more important role in the crash. That is the fact that financial institutions are privately owned, but much of their activity falls under the control of the government. This effective separation of ownership from control, a decades-long process, has created a perverse incentive structure that makes the financial system more fragile.

Turning banks into an arm of the government

In a recent New York Times column, Andrew Ross Sorkin outlined ways in which credit card companies, by collecting and reporting data on customer purchases in much greater detail than they do at present, could “[push] for more responsible practices by the gun industry.”

As we will see, there is nothing new in the suggestion that financial services firms should act as surrogates of the government. Yet the assumption that a natural way to address the problem of gun violence is to force payment companies to gather, quite indiscriminately, sensitive private data shows the extent to which many people view banks and other financial firms as public utilities.

Sorkin’s is not an isolated voice. Following a school shooting in Florida last February, Bank of America and Citigroup – the second- and fourth-largest U.S. banks by assets – announced new requirements for the gun retailers they serve. The move drew heavy criticism from the National Rifle Association and Republican lawmakers. “Citigroup took $470 billion from the American taxpayer […] Bank of America took $340 billion. I don’t remember them saying, ‘Oh, we don’t want the money from taxpayers who believe in the Second Amendment [which protects the right to own guns],’” quipped Senator John Kennedy of Louisiana.

The senator’s critique was characteristic of conservative opposition to the banks’ new gun policy. It focused not on the banks’ freedom to run their business and the right of concerned customers to switch suppliers, but rather on the perceived injustice that beneficiaries of taxpayer generosity in times of crisis would act to undermine the constitutional rights of those very taxpayers. Conservative Kennedy and liberal Sorkin both echoed a long-standing view in American public policy: that financial institutions are not ordinary private firms, but privileged entities with a duty to act on behalf of the government.

America’s very political banks

For most of modern history, banks in most countries have not operated in anything resembling a free market. Bank charters, which authorize their holders to take deposits and lend them out, are typically government-granted. States are in some cases the biggest bank debtors, with the political muscle to extract lenient loan terms from their creditors. For centuries, usury laws heavily constrained the interest rates that banks could charge. More recently, the amount of capital and the types of assets that banks may hold on their balance sheets have become a matter of statutory regulation.

In these and other ways, governments determine the volume and allocation of credit in the economy. The result has been not an arm’s length relationship between banks and public authorities, but a tight partnership whereby banks acquiesce in financing governments and their clients, while governments shield banks from competition and underwrite bank balance sheets. Financial historians Charles Calomiris and Stephen Haber have called this quid pro quo “the game of bank bargains.”

The United States, paradoxically given its status as a beacon of free enterprise, illustrates the persistent yet changing nature of this game. A key reason why America historically had an unusually large number of small banks is that agricultural interests, which dominated American politics in the 19th century, pushed for branching restrictions in a bid to force banks to provide credit locally. In addition, before the creation of the Federal Reserve in 1913, banks could only issue notes up to the value of their U.S. Treasury bond holdings.

America’s banking system was thus inflexible, inefficient, and crisis-prone due to its fragmentation. Banks could not scale up and diversify their lending geographically, nor respond to seasonal fluctuations in the demand for money. Between 1833 and 1933, the United States suffered eleven major banking panics – sudden spikes in demand for currency notes relative to deposits. Canada, which by contrast had a smaller number of much larger, nationally active banks and no restrictions on note issuance, experienced far fewer bank failures. It is worth recalling that no Canadian bank failed during the last financial crisis, either.

The symbiosis between banks and public authorities played out in different ways for different institutions. Small banks lobbied politicians to preserve their local monopolies. Wall Street banks, which thanks to branching restrictions maintained a lucrative correspondent business, lobbied for the creation of the Federal Reserve not for the public’s benefit, but to keep this business while securing a source of emergency liquidity during panics. Far from stemming financial instability, regulation insulated banks from some of its dire consequences by transferring risks onto taxpayers.

How successful was New Deal financial regulation?

The decades from the 1930s to the late 1960s are remembered as a period of historically unusual financial stability. No major banking panics occurred, and the U.S. economy grew strongly, punctuated by frequent but shallow recessions.

It is difficult to establish the extent to which the bank regulation enacted after the Great Depression was responsible for the three decades of comparative calm that ensued. After all, the period following World War II was a time of growing prosperity across the West, including in countries whose banking systems shared little with America’s. Furthermore, stagflation in the 1970s put an end to both strong growth and bank stability: as interest rates rose to cope with double-digit inflation, banks’ cost of funds increased while the returns on their existing assets, such as home mortgages, remained fixed.

What is plain is that New Deal legislation strengthened the quid pro quo between U.S. banks and the government in ways that have proved decisive in more recent crises. Bank deposits became federally insured in 1933, creating a potential taxpayer liability in the event of bank failure – an implicit guarantee that politicians have since used to justify new regulations. In addition, housing finance was part-nationalized with the creation of Fannie Mae, a government agency tasked with boosting the liquidity of mortgage markets by buying up housing loans from banks. New Deal legislation also set an interest-rate ceiling on savings deposits that lasted into the 1980s, and an interest ban on demand deposits that was only repealed in 2011.

The New Deal seemed a sweet deal for U.S. bankers: their funding was subsidized by deposit insurance and interest caps, and Fannie Mae gave them a willing buyer for a large share of their loans. But it came with strings attached: banks became clients of the government and thus vulnerable to future demands to do the politicians’ bidding.

Those demands, predictably, have mounted. In 1970, Congress passed the Bank Secrecy Act (BSA) in a bid to fight money laundering by criminals. The BSA’s provisions and mandates have gradually expanded, especially after the 9/11 terrorist attacks. Estimates of the compliance cost to banks range between $4.8 billion and $8 billion.

BSA regulations are onerous because they require banks, money transmitters, securities dealers, insurance companies, and many others to file a report each time they process a transaction above a certain amount. The thresholds vary by institution, but they range between $2,000 and $10,000 and have not been adjusted for inflation since 1970 – the $10,000 threshold would amount to roughly $65,000 in today’s dollars. The political sensitivity of the crimes that the BSA purports to prevent, and the stiff penalties for non-compliance, mean that banks spend vast resources ensuring they stay on the right side of the law.

Another milestone in the U.S. government’s creeping takeover of banking came in 1977 with passage of the Community Reinvestment Act (CRA). This law aimed to promote credit to low-income and minority borrowers by mandating that banks lend in the communities where they take deposits. At the time, red-lining (the practice of denying credit to borrowers in certain geographies) was a widespread problem in America, particularly for blacks. The CRA sought to eliminate red-lining by placing a new mandate on banks. In the words of Senator William Proxmire, who championed the legislation: “Those who invest in new deposit facilities receive a semi-exclusive franchise […]. The Government limits […] entry […] restricts competition and [limits] the rate of interest payable on […] deposits. The Government provides deposit insurance through the FDIC […] The regulators have […] conferred substantial economic benefits on private institutions without extracting any meaningful quid pro quo for the public.”

One is reminded of that scene in The Godfather: “Some day, and that day may never come, I’ll call upon you to do a service for me.” U.S. banks have seen many days like that.

When bank branching was finally liberalized in the 1990s, the CRA became a tool with which activist groups could pressure banks wishing to grow or acquire another institution, since regulators must take CRA performance into account when deciding whether to authorize a bank’s expansion. Between 1992 and 2007, banks made $4.5 trillion in CRA lending commitments, some of them on terms they would not otherwise have offered. Pleasing regulators, it turns out, can sometimes be better for a bank’s bottom line than lending prudently on market terms.

Did we learn anything from the 2008 crisis?

Policymakers like to say that the measures taken to shore up bank capital and improve the quality of balance sheets mean a financial crisis like the one experienced a decade ago is inconceivable today. Yet despite these proclamations, much of the pre-crisis regulatory landscape is unchanged.

Bank deposits not only remain government-insured, but the limit on insurance was raised from $100,000 to $250,000 during the crisis. Thus, depositors have little incentive to monitor bank safety, and banks are not spurred to market themselves to potential depositors as well-capitalized and prudent. What is more, research by Calomiris and Sophia Chen shows that more generous deposit insurance increases bank risk-taking and financial fragility.

Mortgage lending, more than at any time since the 1930s, is a government activity. While banks hold more of the mortgages they issue on their balance sheets than they did in the run-up to 2008, that share is still less than a third. The rest is securitized by the government-sponsored enterprises (Fannie Mae and Freddie Mac) or other public entities. The CRA continues to encourage mortgage lending to low-income borrowers, and the median loan-to-value ratio on new mortgages remains at a historical high of 94 percent – a down payment of just 6 percent.

Complex regulation has encouraged concentration. The U.S. banking system has fewer banks than it did in 2008, and the largest ones are larger than ever. Concentration on its own is no problem. But it is difficult to take seriously policymakers’ claim that the government’s implicit guarantee has been broken when the failure of any major bank might expose the financial system to greater turmoil than during the last crash.

Politicians, especially since the crisis, claim to want to “make banking boring again.” But their actions say otherwise: what they really want is for banks to do their bidding. Revelations continue to emerge, for example, of the Obama administration’s drive to pressure banks into refusing service to controversial but lawful businesses such as gun shops and payday lenders, in what became known as Operation Choke Point.

What is most concerning about such overreach is that it has imposed its terms on the opposition. Most of those who criticize Choke Point do so by claiming that banks are essential facilities, like energy utilities, and thus should not be allowed to refuse service to customers. But that misses the point: banking is a competitive industry, not a natural monopoly. The real outrage of Choke Point is that the government directed private-sector firms to deny service on its behalf.

However misguided, this attitude is in keeping with a history of increasing government control of financial institutions. The trend, despite many deleterious consequences, has not stopped. It is difficult today to find a country that does not run its banking system on the assumption that it is an arm of the government.

There are, however, examples of countries with stable financial systems that lack America’s myriad interventions. Israel, New Zealand, and Panama have no government insurance of bank deposits. Canada historically lacked the barriers to consolidation prevalent in the United States, resulting in more diversified bank balance sheets. Many European countries have gone without schemes to promote mortgage lending. Indeed, it was jurisdictions that had turned homeownership into a political imperative, such as America and Spain, that experienced the most damaging housing crises from 2006 onward.

It is difficult to wean politicians off the tendency to use the banking system to further their own objectives. Banks are often happy to go along during the good times, as doing so increases their likelihood of a taxpayer-sponsored rescue in bad times. Yet if we carry on this way, we will soon find ourselves with a banking system as safe as the bailed-out insurer AIG was in 2008, and as innovative and customer-focused as the postal service. Not your grandmother’s boring banks.