Central Banks and the Folk Tales of Money

By Pierre Ortlieb.

On June 10th, 2018, Swiss voters participated in a referendum on the very nature of money creation in their small alpine republic. The so-called “Vollgeld Initiative,” or “sovereign money initiative,” on their ballots would have required Swiss commercial banks to fully back their “demand deposits” in central bank money, effectively stripping private banks of their power to create money through loans in the current fractional reserve banking system.

In the build-up to this poll, the Swiss National Bank (SNB) tailored its statements on credit creation to its audience: when speaking to the public, the SNB promulgated an outdated “loanable funds” model of money creation, while it adopted an endogenous theory of credit creation when speaking to market participants. This served to mollify both audiences, reassuring each of the SNB’s competence and sophistication. Yet the contradictory stories offered by the SNB are part of a broader trend: as central banks have expended tremendous effort on communicating their operations, different banks have offered different explanations of how money is created. This risks damaging public trust in money far more than any referendum could.

***

At the SNB’s annual General Shareholders Meeting in April 2018, Governor Thomas Jordan was confronted by two members of the audience who demanded he explain how money is created – Jordan’s understanding of credit, they argued, was flawed and antiquated. Faced with this line of questioning, Jordan replied that banks use sight deposits from other customers to create loans and credit. The audience members pushed back in disagreement, but Jordan did not waver.

On the surface, Jordan’s claim about money creation is the explanation one would find in most economics textbooks. In this common story, banks act as mere intermediaries in a credit creation process that transforms savings into productive investment.

However, only months earlier, Governor Jordan had told a different story in a speech delivered to the Zürich Macroeconomics Association. Facing an audience of economists and market professionals, Jordan embraced a portrayal of money creation that is more modern but starkly opposed to the folk-theoretical, or “textbook,” view. Jordan described how “deposits at commercial banks” are created: in the present-day financial system, when a bank issues a loan, “an individual bank increases deposits in the banking system and hence also the overall money supply.” This is the antithesis of the folk theory offered to the public – a glaring contradiction. And the SNB’s duplicity is part of a broader trend: central banks have been unable to provide a unified message on credit creation, both internally and among themselves.

For instance, in 2014, the Bank of England published a short, plainly-written paper describing in detail how commercial banks create money essentially out of nothing, by issuing loans to their customers. The Bank’s authors noted that “the reality of how money is created today differs from the description found in some economics textbooks,” and sought to correct what they perceived as a popular misinterpretation of the credit creation process. Norges Bank, Norway’s monetary authority, has made a similar push to clarify that credit creation is driven by commercial banks, rather than by printing presses in the basement of the central bank. Other central banks, however, such as the Bundesbank, have gone to great lengths to stress that monetary authorities exercise strict control over the money supply. As the economist Rüdiger Dornbusch noted, the German saving public “have been brought up to trust in the simple quantity theory” of money, “and they are not ready to believe in a new institution and new operating instructions.”
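The difference between the two stories can be made concrete in a toy sketch. The function names and figures below are invented for illustration, and real bank balance sheets are of course far more complex:

```python
# Toy contrast between the "loanable funds" textbook story and the
# endogenous-money story described by the Bank of England.
# All names and numbers are illustrative.

def loanable_funds_loan(deposits, loan):
    """Textbook story: the bank merely intermediates existing savings,
    so lending leaves the money supply unchanged."""
    if loan > deposits:
        raise ValueError("bank cannot lend more than its collected savings")
    return {"deposits": deposits, "money_supply_change": 0}

def endogenous_loan(deposits, loan):
    """Endogenous-money story: issuing a loan credits the borrower's
    account with a brand-new deposit, expanding the money supply."""
    return {"deposits": deposits + loan, "money_supply_change": loan}

print(loanable_funds_loan(100, 80))  # {'deposits': 100, 'money_supply_change': 0}
print(endogenous_loan(100, 80))      # {'deposits': 180, 'money_supply_change': 80}
```

In the first story a loan must be funded by savings collected beforehand; in the second, the act of lending itself creates the deposit.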

***

While this debate over the mechanics of money creation may seem arcane, it has crucial implications for central bank legitimacy. During normal times, trust in money is built through commonplace uses of it; money works best when it can be taken for granted. As the sociologist Benjamin Braun notes, a central bank’s legitimacy depends in part on its acting in line with a dominant, textbook theory of money that is familiar to its constituents.

Yet this is no longer the case. Since the financial crisis, this trust has been shaken, as exemplified by Switzerland’s referendum on sovereign money in June 2018. Faced with political pressure and uncertain macroeconomic environments, some central banks have had to be much more proactive about their communications, and much more frank about the murky nature of money. As quantitative easing has stoked public fears of price instability, monetary authorities including the Bank of England have sought to clarify who really produces money. In this sense, informing the public about the real source of money – bank loans – is worthwhile: it constrains what a central bank can reasonably be expected to do, and it reduces informational asymmetries between technical experts and lay citizens.

On the other hand, a number of central banks have taken the dangerous approach of simply tailoring their message to their audience: when speaking to technical experts, say one thing; when speaking to the public, say another. The Janus-faced SNB is a case in point. This rhetorical duplicity matters because it allows central banks both to assuage popular concerns over the stability of money, by fostering the illusion that they maintain control over price stability and monetary conditions, and to soothe markets with the impression that they possess a nuanced and empirically accurate framework of how credit creation works. For both audiences, this produces a sense of institutional commitment that sustains both public and market trust in money under conditions of uncertainty.

Yet this newfound duplicity in central bank communications is perilous: it risks further undermining public trust in money should central banks fail to walk this fine line. Continuing to play into folk theories of money as they drift further and further from the reality of credit creation will inevitably have unsettling ramifications. It might, for example, lead to the election of politicians keen to exploit and pressure central banks, or precipitate crises in the form of bank runs.

Germany’s far-right Alternative für Deutschland, for instance, was founded and achieved its initial popularity as a party opposed to the allegedly inflationary policies of the European Central Bank, which failed to adequately communicate the purpose of its expansionary post-crisis monetary program. Once a central bank’s strategies and the public’s understanding of money become discordant, they lose the ability to assure their constituents of the continued functioning of money, placing their own standing at risk.

Central banks would be better off engaging in a clear, concise, and careful communications program to inform the public of how credit actually works, even if the public might not really want to know how the sausage gets made. Otherwise, central banks risk further shaking public trust in critical monetary institutions.

About the Author: Pierre Ortlieb is a graduate student, writer, and researcher based in London. He is interested in political economy and central banks, and currently works at a public investment think tank (the views expressed herein do not represent those of his employer).

‘In Praise of Idleness’: the shorter working week is so much more than just a business fix

The January blues are a harsh reminder of the value of doing nothing. Bertrand Russell’s wistful paean to the redundancy of work shows that while this radical thinking is not new, it carries fresh challenges for today’s proponents of a shorter working week.

By Robert Magowan

A four day week is back on the political menu. It’s about time. Despite global advances in technology and productivity, most countries have seen the overall number of hours worked go up, not down. It’s been over sixty years since most countries introduced a full weekend and we ‘gained a day’.

Among this renewed chatter, one thinker above all tends to receive the deferent nod to history. John Maynard Keynes predicted in his 1930 essay Economic Possibilities for our Grandchildren that the march of technology would reduce the necessary working week to just 15 hours – and even that would only be to “satisfy the old Adam in most of us”.

It is worth remembering, though, that Keynes was far from alone. Bertrand Russell’s In Praise of Idleness, published two years later, mounts in a few short pages a resolute defence of idleness and an ambitious course for progress: “I think that there is far too much work done in the world, that immense harm is caused by the belief that work is virtuous, and that what needs to be preached in modern industrial countries is quite different to what has always been preached.”

Our attachment to work, Russell argues, stems originally and naturally from the necessity for sustenance that pre-industrial society entailed. Modern technology has severed the need for such an attachment, yet it sustains itself in a sense of morality – “the morality of slaves”. “The conception of duty, speaking historically, has been used by the holders of power to induce others to live for the interests of their masters rather than for their own”.

Thus Russell fumes at the reverence for labour, or as he calls it, “moving matter about” (hardly a conception that needs much revision in the service economy of cursor and email). For him, the work ethic is an instrument of the aristocratic class, helping maintain their own avoidance of work. Like Keynes’ Economic Possibilities, which lamented the “growing-pains of over-rapid changes” of the time, the piece is in many ways embarrassingly relevant today. The line “that the poor should have leisure has always been shocking to the rich”, for example, rings all too true in a political environment that was almost totally eclipsed by the ‘strivers’ vs ‘skivers’ debate just a few years ago. Modern methods of production should be sufficient to provide the “necessities and elementary comforts of life”, argues Russell, leaving the rest of our time to do with as we see fit.

To recall these utopian predictions is to recall just how far we have failed to realise what was once a central element of progress, indeed, how we fail now to even recognise it as such. Even major policy advances in the area, such as the EU’s Working Time Directive (with its focus on safety and productivity), have neglected any radical element of change. Idleness remains a vice, feared not just economically in terms of lost productive potential but even on the political left as a “lonely and unfulfilling vision”.

In this context, radical proposals for a four day week risk being reduced to little more than a business fix. Can employee motivation and concentration be improved sufficiently to increase marginal output? The boss of one of the most publicised recent trials, at an insurance firm in New Zealand, articulated this approach neatly in laying out his focus: “it’s productivity, productivity, productivity!”. Whilst this thinking may be helpful in establishing a sense of existing economic viability for the policy, it excludes its most crucial aspect – what we gain in our newfound free time.

Because of course, at this time of year more than ever, we know that doing nothing never means doing nothing. To recognise the true value of leisure and idleness means subverting material ideas about what it means to be productive. Most of us will have just returned from a Christmas break where jokes were made, opinions discussed, housework shared, books read, games invented, ideas pondered, friendships rejuvenated, attachments forged. No doubt for a brave few muscles were exercised and sports contested. This output isn’t the product of the invisible hand. This is the natural fruit of a shared indulgence in idleness safe in the knowledge that one’s security is guaranteed and one’s basic needs are met. As Bobby Kennedy argued in his critique of GDP as a measure of progress, it is this mass of economically ignored value that makes life worthwhile.

The modern economic challenges of working less

In Praise of Idleness also points to two major societal changes that make the task of delivering a shorter working week, if it is to be accepted, all the more challenging. First, the remnants of a leisure class have largely disappeared. It is no longer just America whose “men work long hours even when they are well off”, as Russell writes; Britain’s highest earners today work very long hours. Second, in Russell’s 1930s Britain, while earning was morally laudable, spending was deemed “frivolous”. Today, by contrast, consumption – when based on ‘earned’ income – is not only culturally acceptable but economically virtuous. Indeed, for some, the potential for material consumption to increase under a four day week is a convincing argument in its favour. Henry Ford, of course, was an early pioneer of this mantra, unilaterally granting workers a five day week in 1926 in the knowledge it would allow more time for weekend ‘leisure driving’.

The combined lesson of these two crucial differences is that long working hours are today more than just a moral kink underpinned by historical power. They have been fundamentally subsumed into our economic system.

Sixty years before Keynes and Russell, the economist William Stanley Jevons discovered that the Watt steam engine, which greatly improved the efficiency of coal use, had actually increased overall coal consumption as a result of heightened demand. The widespread tendency of the efficiency gains of technological progress to increase resource use rather than reduce it became known as the Jevons Paradox – and to this day it refuses to go away. The same has occurred with work. The slickest offices have delivered not shorter weeks but both higher output and bored staff. And perhaps worst of all, we are all in on the act. No longer can we simply wrench the moral lauding of work from Russell’s leisure class, who sustained it for their own benefit. As Aeron Davis has written, today’s elites are increasingly mere reckless opportunists, lacking the coherence and control to operate this system for their own or anyone else’s benefit. Russell’s “morality of slaves” is preserved not in service to a dominant people but to the sovereign blob that is ‘the economy’. Hegemony – cultural and economic – is what now sustains the work ethic. This is why it is so hard to imagine a world with less of it.
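The mechanism Jevons identified can be sketched in a few lines. A hypothetical constant-elasticity demand curve (my assumption, not Jevons’s own framework) is enough to show when efficiency gains raise total resource use:

```python
# Stylized Jevons paradox: if demand for an energy service is
# sufficiently price-elastic, an efficiency gain raises total
# resource use. The numbers and the constant-elasticity demand
# curve are illustrative assumptions.

def resource_use(efficiency, elasticity, base_price=1.0, base_demand=100.0):
    # The effective price of the service falls as efficiency rises...
    price = base_price / efficiency
    # ...demand for the service responds with constant elasticity...
    service_demand = base_demand * (price / base_price) ** (-elasticity)
    # ...and resource use is service demand divided by efficiency.
    return service_demand / efficiency

# With elasticity above 1, doubling efficiency increases resource use;
# below 1, it decreases it.
print(resource_use(efficiency=2.0, elasticity=1.5))  # above the baseline of 100
print(resource_use(efficiency=2.0, elasticity=0.5))  # below the baseline of 100
```

The paradox, in other words, hinges on how strongly demand responds once a resource becomes cheaper to use; the same logic applies to the "demand" for work.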

In Praise of Idleness is, therefore, a timely reminder that a shorter working week is a much more fundamental challenge than simply delivering the same for less – just another efficiency gain, another productivity drive. It is part of a wider reckoning, a revaluation of the purpose of work and progress – the makings of an ideological basis for a post-capitalist economy.

At this time of year In Praise of Idleness is also a reminder of something more obvious. Leisure – “its ease and security”, and the indulgence in learning, good nature and “active energies” that it allows for – was once among the ultimate goals of civilisation. Reading Russell, it seems only right it should return.

About the author
Robert Magowan is an MSc Economics and Governance student at Leiden University in The Netherlands. He is active in the Green Party of England and Wales and is currently researching the socioeconomic implications of a shorter working week.

Brexinomics

In analyzing the consequences of Brexit, economists have relied heavily on ‘scenario testing.’ But this tool may not be fit for purpose.

As the UK embarked on Brexit, economists were given ample opportunity to offer guidance on how the tricky process was going to go. An entire sub-discipline was even christened ‘Brexinomics’. This article looks at one tool economists have leaned on, known as scenario testing, and questions the reliance on it to navigate us through Brexit.

On June 23rd 2016, the UK decided to travel the unknown road known as Brexit. Economists were called on to provide some navigation for policy makers, the markets, and businesses. The task was (and still is) to tell policy makers how the economy will react to a ‘hard Brexit’, a ‘soft Brexit’, or any other new arrangement with the EU.

The Treasury and the Bank of England play the most influential roles as economic advisors to the government. One of the methods these organisations have particularly relied on is referred to generically as scenario testing.

What is scenario testing and what can it actually tell us?

Scenario testing is a broad term for the use of economic modelling to predict how a certain event will impact the rest of the economy. Analysts aim to predict the future impact on the economy of any number of events. Scenario tests are predominantly conducted using a global econometric model containing a large amount of data and equations that aim to describe the behaviour of the economy. Such models are used by governments and central banks alike.

Since the announcement of the Brexit referendum, scenario tests have been frequently cited by news outlets and senior politicians alike to paint a picture of what a post-Brexit world might look like. In 2016, HM Treasury published a document titled ‘HMT analysis: The immediate economic impact of leaving the EU’, which included the table below outlining the expected response of the economy to a leave vote in the referendum.

(Source: HM Treasury)

There are two shock scenario responses listed, corresponding to a moderate and a severe response of the economy to a leave vote in the referendum. The table shows that in the year following a leave vote (2017/18), the economy was expected to contract by around 3.6%. In the severe case the figure is -6%, which would have been worse than the financial crisis of 2009 in the UK, with its peak decline of -4.2%. In 2017, year-on-year growth was actually 1.7% (Office for National Statistics) – a marked difference from the projected values.
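The size of that miss can be stated directly in percentage points, as a back-of-the-envelope check on the figures above:

```python
# Back-of-the-envelope forecast errors, in percentage points, for
# HM Treasury's two shock scenarios against the 2017 outturn.
projected_shock = -3.6   # 'shock' scenario, GDP growth
projected_severe = -6.0  # 'severe shock' scenario, GDP growth
actual_2017 = 1.7        # ONS outturn

print(round(actual_2017 - projected_shock, 1))   # 5.3 pp miss (shock)
print(round(actual_2017 - projected_severe, 1))  # 7.7 pp miss (severe shock)
```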

Scenario tests, this suggests, are not suited to handling something like Brexit, and their poor performance in outlining the economic effects of a vote to leave the EU is, for two reasons, unsurprising.

A better use of scenario tests is to look at the overall impact on the economy of specific one-off economic events or policy changes, like a change in oil prices. A change in oil prices would typically start with a change in income for oil-exporting countries and a rise in costs for oil-importing countries. For the oil-importing countries, the initial increase in oil prices would then feed into inflation as the cost of production rises. These different impacts can be modelled by scenario tests, which can provide an estimate of the scale of the final economic impact of an initial increase in oil prices.
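A minimal sketch of such a scenario test might look as follows. The pass-through coefficients here are invented for illustration; real global econometric models contain thousands of estimated equations:

```python
# Minimal sketch of a scenario test for an oil-importing economy.
# Coefficients are assumptions chosen for illustration only.

def oil_price_scenario(oil_price_shock_pct,
                       energy_share=0.05,   # energy weight in the CPI (assumed)
                       second_round=0.3):   # wage/cost feedback (assumed)
    """Return the estimated rise in inflation, in percentage points,
    from a given percentage rise in oil prices."""
    # Direct effect: the shock scaled by energy's weight in consumption.
    direct = oil_price_shock_pct * energy_share
    # Second-round effect: a fraction of the direct effect feeds back
    # through wages and production costs.
    return direct * (1 + second_round)

# Under these assumptions, a 20% oil price rise adds about 1.3 pp
# to inflation.
print(oil_price_scenario(20.0))
```

The point is that for a well-defined, one-off shock, each link in the chain can be parameterised from historical data and the final impact scaled accordingly.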

However, the number and scale of policy unknowns in Brexit mean that a scenario test is always under-identified: we are dealing with an effectively infinite number of policy unknowns.

The second issue with scenario tests is that they work under the assumption of ceteris paribus – that all other things remain equal during and after the specific economic event being analysed. With something like Brexit there is no ceteris paribus, as all other things will not remain equal.

Brexit could create restrictions on migration, on trade, and on capital flows – and these could all happen simultaneously. Even if a scenario test could adequately model each of these events on its own, it cannot consider the interactions between simultaneous policy changes. Previous data is unlikely to show us what would happen were all three policy changes to occur at once, given that this has never happened before.
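The problem can be seen in a stylized example. Suppose the economy’s true GDP response includes an interaction term between simultaneous shocks; all coefficients below are invented for illustration:

```python
# Why summing single-shock scenario tests can mislead: a stylized
# GDP response with an interaction between simultaneous restrictions.
# All coefficients are invented for illustration.

def gdp_impact(migration=0, trade=0, capital=0):
    """Hypothetical GDP impact (in %) of restrictions, each coded 0 or 1."""
    individual = -2.0 * migration - 1.5 * trade - 1.0 * capital
    # Simultaneous restrictions compound one another.
    interaction = -1.0 * migration * trade * capital
    return individual + interaction

# Three separate single-shock scenario tests, summed...
separate = gdp_impact(migration=1) + gdp_impact(trade=1) + gdp_impact(capital=1)
# ...versus all three shocks at once.
joint = gdp_impact(migration=1, trade=1, capital=1)
print(separate, joint)  # -4.5 -5.5: the joint impact exceeds the sum
```

A model estimated only on episodes where shocks occurred one at a time has no way to recover the interaction term, so the summed scenarios understate the joint impact.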

Despite all these shortcomings, the media still regularly refer to the results of scenario tests conducted by key organisations. Scenario tests, it seems, still provide some comfort in predicting what a post-Brexit world might look like.

There are two amendments that economists need to make. The first is to limit the scope and expectations of what previous data can tell us. Given the unprecedented nature of Brexit, historical data may simply not contain the information to show us what would happen under any kind of Brexit deal; it is unlikely to accurately yield a percentage-point increase or decrease in GDP for any type of new arrangement with the EU.

The second amendment is to move on from empirical macroeconomic models and look to methods that provide a truer reflection of how individuals might behave in response to a new arrangement with the EU.

What are the alternatives?

A number of surveys have already been conducted asking individuals and businesses alike what they would do in ‘Brexit-like’ events, such as a restriction on migration leading to labour shortages. Piecing together information from such surveys is likely to provide a truer picture, as it captures actual behavioural responses from economic agents.


About the Author
Kanya Paramaguru is a PhD student at Brunel University London. Her current research focuses on using empirical time-series methods in Macroeconomics.

10 years after the financial crisis and its lasting effects on Americans

This year marks the 10th anniversary of the 2008 financial crisis. Although the crisis is remembered for foreclosures, bank failures and bailouts, many American citizens are still unaware of what caused it.

By Breshay Moore.

Understanding what caused the crisis is important if we are to prevent future crises and to think about what kind of financial system we want: one that serves people and invests in communities, or one that enriches a handful of wealthy bankers and money managers while making our economy less fair and less safe for the rest of us.

In simple terms, the financial crisis was a result of deregulation of the financial sector, and reckless and predatory practices by greedy financial players all across the board, from mortgage lenders to Wall Street traders to the largest credit rating agencies.

In the lead-up to the crisis, mortgage lenders engaged in fraudulent and deceptive sales practices to push toxic mortgage loans onto home buyers – loans they knew the borrowers could not afford. Predatory lenders particularly targeted people of color, especially women of color, for these higher-rate loans. Meanwhile, these risky mortgages were packaged and sold to investors around the world, becoming implanted throughout the financial system. When the economy went into recession in late 2007, defaults on mortgage payments increased and housing prices plummeted, resulting in billions of dollars in mortgage losses. This set off a chain reaction in the financial system because of the number of financial institutions with stakes in the housing market. This string of events shook the entire economy, fueling the worst recession in the US since the Great Depression.

Millions of families lost their homes or jobs. Median wealth among households fell tremendously: From 2005 to 2009, median wealth among Hispanic households fell by 66 percent, by 53 percent among Black households, by 31 percent among Asian households, and by 16 percent among white households. Millions of people also suffered major drops in income, property values, retirement savings, and general economic well-being. The crisis produced lasting effects. Families are still struggling economically, especially in communities of color.

After all the damage was done, no one was held accountable. Financial players made billions of dollars in bonuses and profits. Instead of helping the communities that were most affected, Congress and the Federal Reserve bailed out big banks with public money. We recently learned that 30 percent of the lawmakers and 40 percent of the top staffers involved in the congressional response to the crisis have since gone to work for Wall Street.

In 2010, in response to the crisis, President Barack Obama signed into law the Dodd–Frank Wall Street Reform and Consumer Protection Act, which created rules to protect consumers and regulate the financial industry. The law established the Consumer Financial Protection Bureau (CFPB) to promote transparency and fairness in the consumer-finance industry and to hold financial institutions accountable for engaging in predatory and discriminatory practices. This independent agency has done a lot for consumers, returning more than $12 billion in relief to more than 29 million cheated consumers.

In return for all the money Wall Street has poured into political campaigns and lobbying, President Trump and Congress have been working hard to undo rules that regulate the financial sector. Countless bills have been introduced and passed in Congress to deregulate banks and lenders. One of these bills, S. 2155, which became law in May, not only increases the risk of future financial disasters and bank bailouts but makes it easier for mortgage lenders to discriminate on the basis of race, ethnicity, and gender. Sixteen Democrats and an Independent supported the GOP in pushing this deregulatory bill. The vote did not go unnoticed, and public sentiment is not on their side. In fact, 88 percent of all likely voters – across party lines – support holding financial companies accountable if they discriminate against people because of their race or ethnicity. And 64 percent of voters think big banks and finance companies continue to require tough oversight to avoid another financial crisis.

The lack of restrictions on banks and other financial institutions puts consumers and the economy at risk. The 10th anniversary of the financial crisis should encourage us to redouble our efforts to push for changes to our financial system so that it works for us, not just for Wall Street.

Breshay Moore is a Senior at Towson University, studying Advertising and Public Relations. She was recently a Communications and Campaign intern for Americans for Financial Reform.

Victorian despite themselves: central banks in historical perspective

Central banks have not always been independent, inflation targeting bodies, and to treat them as such is to obscure their complex histories and alternative institutional constellations

By Pierre Ortlieb.

Donald Trump’s most recent feud with the Federal Reserve reached a new peak late last week as the U.S. President lambasted the institution’s policy stance. “I don’t have an accommodating Fed,” he noted. Commentary on Trump’s outburst is perhaps even more alarming than his words themselves. For instance, The Week noted that Trump’s encroachment on Fed independence was “essentially unprecedented”; imperiling the central bank’s status as a guardian of price stability was reckless, foolish. This reading of the history of central banks is misguided, however. Our current paradigm of independent central banks deploying their tools to maintain low inflation is a deeply contingent historical phenomenon and obscures central banks’ frequent role as publicly-controlled institutions and fiscal buttresses throughout their centuries of existence.

The contemporary notion of independent, conservative central banks was enshrined gradually over the 1990s, a decade in which over thirty countries – developed and developing – guaranteed the legal and operational independence of their monetary authorities. This institutionalization of inflation-averse central banks has come hand-in-hand with an aversion to “inflationary” deficit financing and fiscal expansionism, which has been restrained by an exclusive focus on price stability. This has come to be treated as the best practice approach to central banking, a paradigm which, until recently, was rarely questioned among policymakers. Reaction to Donald Trump’s comments has been emblematic of this.

Yet the history of central banks shows them to be far more intertwined with states and treasuries than current commentary or policy would suggest. At their founding, central banks frequently served not as constraints on the state, but rather as fiscal agents of the state. The inception of the Bank of England (BoE) in 1694, for example, was the result of a compromise that granted the state loans to finance its war with France, while the BoE was granted the right to issue and manage banknotes. As a result of this bargain, the market for public debt in the United Kingdom exploded in the 18th century, and government debt peaked at 260 percent of GDP during the Napoleonic wars. This both facilitated the expansion of Britain’s hegemonic financial position and enabled the industrial revolution, as borrowing at low risk made vast industrial development possible.

Direct state financing was, however, not the only means through which central banks fostered favorable monetary conditions and growth during this era. The use of various “gold devices” to manage credit conditions from within the straitjacket of the gold standard was commonplace. The Reichsbank, for example, granted interest-free loans to importers of gold and inhibited gold exports to establish de facto exchange controls and some degree of exchange rate flexibility.

Various central banks also pursued sectoral policies, lending government-subsidized credit at lower real interest rates to key developmental industries. The 1913 Federal Reserve Act, for instance, was designed such that it would improve the global competitiveness of New York financial institutions. It is important to note that at the time, these central banks were largely established as private institutions with government-backed monopolies; yet this did not alter the fact that, in practice, they served as crucial instruments for the expansion and development of Western economies. Beyond the US and the UK, central banks across Western Europe, such as the Banque de France (1800), the Bank of Spain (1874), and the Reichsbank (1876), served a similar initial function as developmental agents of their respective states.

Nevertheless, this was not a uniform or constant system. The existence of the gold standard itself constrained the use of monetary instruments to foster growth across developed economies during the late 19th century. Furthermore, Victorian-era British policy came to revolve around sound finance and fiscal discipline, as the use of a central bank to finance the national state was increasingly in tension with Britain’s central position in the international trading system. Inflationary fiscal deficits were seen as inhibiting growth and dampening international investment. This “Victorian model” focus on price stability produced a paradigm shift in the UK away from expansionary deficit financing towards more restrained policy.

Despite interludes, the use of central banks as macroeconomic instruments endured and emerged reinforced in the aftermath of the Great Depression and the Second World War. After 1945, governments across the Western world adopted full employment objectives as part of the consensus of “embedded liberalism,” a practice which often also involved nationalizing central banks, so they could serve as tools of macroeconomic policy. Credit allocation came to serve social goals, and central banks were given additional tasks such as managing capital flows to maintain low interest rates. In France, the Banque de France was brought under the umbrella of the National Credit Council, the institution charged with managing financial aspects of government industrial and modernization policies. While other countries employed different mechanisms in implementing this consensus, the overarching aim of monetary institutions serving social goals was broadly shared across developed countries in the postwar era, as it had been during the 19th century during the infancy of central banks.

This consensus of central banks undergirding fiscal policy fragmented and fell apart from the 1970s onwards. The experience of stagflation, the increasing influence of financial institutions in policymaking, as well as a growing academic consensus on the dangers of central bank collusion with governments, dismantled both the expansionary fiscal state and the subservient central bank. The “Volcker revolution” in the United States was a first step in the gradual, post-Nixon institutionalization of a price stability-focused, independent central bank. The Bank of England was granted operational independence in 1997 by Labour Chancellor Gordon Brown, while the ECB has been independent since its inception in 1998.

The current paradigm of independent, inflation-targeting central banks thus obscures the messy history of central banks as public institutions. Since their inception, monetary authorities have performed a variety of roles: while they served as guardians of price stability in Victorian England, they originally served as developmental and fiscal agents for expansionary states, and have frequently continued to do so in the centuries since. Treating central bank independence as an ahistorical best practice is misleading, and we should recall that there have been alternatives to the current framework. As some have heralded the end of the era of central bank independence, while others have underscored the benefits of re-politicizing monetary policy, it is worth bearing this history in mind.

About the Author: Pierre Ortlieb is a graduate student, writer, and researcher based in London. He is interested in political economy and central banks, and currently works at a public investment think tank (the views expressed herein do not represent those of his employer).

Is the falling wage share simply a statistical phenomenon?

Economists have suggested several competing theories to explain the declining wage share, which has been falling globally for several decades. In the case of the US, two specific factors can explain a significant part of the decline: the increase in economy-wide depreciation and the rise of imputed rents as a share of total GDP.

A number of economic studies in recent years have documented the declining wage share in many countries around the world. The wage share is the part of GDP that can be attributed to labor income (wages) and is usually assumed to fluctuate around 60% while most of the remaining part of GDP is accounted for by capital income, such as rents from housing, income from Intellectual Property Products, capital gains and stock dividends, etc.

The chart below shows that the US wage share has fallen from almost 58% two decades ago to just 53% as of today, a decline of about 5 percentage points. This phenomenon is concerning insofar as the majority of the population derives most of their income from wages, whereas capital income accounts for only a small share for most people. The reason is, of course, that capital ownership tends to be highly concentrated, with about 75% of total US wealth owned by the top 10%. While home ownership tends to be much more dispersed, it varies significantly from country to country. In the US, the home ownership rate exceeds 60%, but it remains significantly below its pre-crisis peak of almost 70%.

 


Source: BEA

There are several competing (but not mutually exclusive) hypotheses that have been put forward to explain the trend of the falling wage share. Some authors have focused on the decline of the bargaining power of labor, either as a result of eroding labor unions, or as a result of globalization as a number of low-wage countries like China entered the global economy during the late 1980s.

Alternatively, some papers have suggested that monopoly power in many industries has increased, thus putting downward pressure on the wage share as markups rise. Finally, some post-Keynesian economists have emphasized increasing financialization as a possible cause.

In what follows, I will focus on yet another explanation for part of the downward trend of the gross wage share. While GDP is usually defined as the sum of all income streams in the economy, there are several categories that national statistical offices include in the GDP calculation even though no income stream actually flows. Two items stand out because they are quantitatively the most important: depreciation of capital and imputed rents.

Depreciation of physical assets is included in the GDP calculation because it is counted as a cost of production to firms. From an accounting point of view, depreciation is the allocation of the cost of an asset over its useful lifespan. Since depreciation expenses can be offset against a firm’s taxable profits, firms might actually have an incentive to overstate annual depreciation expenses.
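The accounting logic can be made concrete with a minimal sketch of straight-line depreciation, the simplest allocation rule. The figures below are purely hypothetical and are not BEA methodology; they only illustrate how the same outlay generates much larger annual depreciation when embodied in a short-lived asset:

```python
def straight_line_depreciation(cost, lifespan_years):
    """Annual depreciation expense: the asset's cost spread evenly over its service life."""
    return cost / lifespan_years

# Hypothetical assets with the same purchase cost but very different lifespans
laptop_dep = straight_line_depreciation(cost=3_000, lifespan_years=3)      # 1000.0 per year
structure_dep = straight_line_depreciation(cost=3_000, lifespan_years=20)  # 150.0 per year
```

An economy shifting its capital stock toward short-lived equipment thus mechanically records more annual depreciation even with an unchanged investment outlay.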

US tax law allows firms to depreciate all kinds of assets used in the production process, ranging from nonresidential structures to industrial equipment, and even Intellectual Property Products (IPP). Over recent decades, depreciation as a share of GDP has increased mostly as a result of two factors.

First, companies increasingly rely on modern technologies that depreciate at a fairly rapid pace. Equipment like computers, smartphones, and software tends to become obsolete within just a couple of years: according to the BEA, each of these items has a lifespan of only two to three years before it must be replaced. Compare this with other industrial equipment, which has an average lifespan of at least half a decade, while non-residential structures easily approach a lifespan of two decades.

Second, US corporations have produced an increasing amount of intangible products (IPP) in recent decades. A patent, for example, is essentially a monopoly right granted by the government for a specified time period, usually 20 years in the US. As companies can depreciate the costs of a patent over its useful or legal life, they may again have an incentive to overstate these expenses, since they can be offset against tax liabilities. As the share of intangible assets has increased significantly, so have depreciation expenses caused by the production of IPP.

The chart below shows that depreciation as a share of GDP has increased from about 6% in the postwar period to almost 13% as of today, and this significant rise can account to some extent for the decline in the gross wage share. Most of the increase in depreciation can be explained by the rise of modern technologies, which tend to have significantly shorter lifespans and thus become obsolete much more quickly, as well as the increasing importance of IPP in today’s economy.

 


Source: BEA


The second large item that might have put downward pressure on the wage share is the increase in the rent share of GDP. In the case of the US, it is mostly imputed rents, the rent a homeowner would implicitly pay to himself, that have risen significantly.

 


Source: BEA

 

Imputed rents are included in GDP statistics even though no income stream actually flows to anyone; otherwise, a country that consists mostly of renters (like Germany) would have an “inflated” GDP figure relative to one of homeowners. Consider two countries, A and B, which are equal in every respect with one single exception. In country A, I live in your house and pay rent to you, and you live in my house and pay rent to me, whereas in country B we both live in our own houses. If we exclude imputed rents, country A would have a higher GDP than country B simply because we pay rent to each other, even though the two countries are equally productive (by assumption). Imputed rents are derived from the value of the underlying dwelling.
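The two-country thought experiment can be put in numbers. The figures below are hypothetical (a sketch, not national accounts data): measuring GDP as observed income streams only would record country A’s mutual rent payments but miss country B’s owner-occupied housing services, and imputing rents to owner-occupiers restores the symmetry:

```python
# Hypothetical, identical real economies: same output, same housing services.
production_income = 100.0  # wages + profits in both countries
annual_rent = 10.0         # market value of housing services per dwelling

# Country A: two renters pay each other, so rent shows up as observed income
gdp_a_observed = production_income + 2 * annual_rent

# Country B: two owner-occupiers, so no rent payment is observed
gdp_b_observed = production_income

# Imputing rents to owner-occupiers equalizes the two measures
gdp_b_with_imputation = production_income + 2 * annual_rent

print(gdp_a_observed, gdp_b_observed, gdp_b_with_imputation)  # 120.0 100.0 120.0
```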

As most advanced economies have experienced spectacular house price appreciation over the last couple of decades, mostly a result of supply-side restrictions rather than “speculative bubbles,” imputed rents have increased more or less in tandem. In the case of the US, imputed rents have surged from about 6% of GDP in the 1960s to about 8% of GDP as of today. The increase in the rent share of GDP, mostly a result of rising house prices being accompanied by increasing imputed rents, thus also puts some downward pressure on the gross wage share.

 

Source: Jorda, Schularick and Taylor (2016) Macrohistory database

 

We therefore have two items, economy-wide depreciation and imputed rents, which account for an increasingly large share of total GDP even though neither category represents an income stream in the classical sense.

It is exactly for that reason that some economists have argued that, from an inequality point of view, we should rather focus on the net wage share, meaning net of depreciation. Furthermore, as explained above, there is an argument to be made that such a net measure should also net out imputed rents (and potentially other imputed income streams included in the GDP calculations), thus focusing entirely on actual income streams. Doing so cancels out most, but not all, of the downward movement of the wage share observed across countries in recent decades. For the US, the graph below shows that the “net wage share,” adjusted for depreciation and imputed rents, is not significantly lower today than it was from the late 1960s onwards.
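The adjustment itself is simple arithmetic: divide wages by GDP net of depreciation and imputed rents. The sketch below combines the article’s approximate shares (58% wages, 6% depreciation, 6% imputed rents for the earlier period; 53%, 13%, 8% today) as rough, stylized numbers rather than precise point estimates:

```python
def net_wage_share(wage_share, depreciation_share, imputed_rent_share):
    """Wage share of GDP net of depreciation and imputed rents (all inputs in % of GDP)."""
    return 100 * wage_share / (100 - depreciation_share - imputed_rent_share)

# Stylized shares taken from the article's figures
earlier = net_wage_share(wage_share=58, depreciation_share=6, imputed_rent_share=6)
today = net_wage_share(wage_share=53, depreciation_share=13, imputed_rent_share=8)

print(round(earlier, 1))  # ~65.9
print(round(today, 1))    # ~67.1
```

On these rough numbers, the net measure is no lower today than in the earlier period, which is the point the graph below makes.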

 


Source: FRED

 

About the Author: Julius Probst is a PhD student at the Economic History department of Lund University in Sweden. His main research area is long-term economic growth with a special focus on economic geography.

 

Basic Income’s Politics Problem

The rising interest in Basic Income, and its appearance on the political agenda in countries ranging from Canada to South Africa to Finland, is driven by a number of economic and political factors. Old concerns about the administrative costs and paternalism of welfare bureaucracies remain, and these have been joined by observations about the increasing precariousness of the labor market, caused in part by increased automation; the growth of the informal labor sector in both the Global North and Global South; and the sense that, at least at a global level, the old problem of economic scarcity may have been overcome.

Basic Income, from this perspective, represents a potential way of dealing with the fallout of massive changes within the economic structure of countries whilst also allowing individuals to retain autonomy, standing in contrast to the often-homogenizing biopolitical structures of the post-World War II Western welfare states. It is also argued that its very simplicity imbues the program with a flexibility which would allow it to work in a wide variety of economic and political contexts. Recently, both the province of Ontario in Canada and the nation of Finland have experimented with welfare delivery reforms in the direction of basic income, whilst in South Africa there is a wide-ranging social push for the implementation of a basic income grant program.

This rising interest has, however, led to a number of questions from both skeptics of basic income and those open to it. Some of these concerns are practical matters specific to the political and bureaucratic systems of each government concerned with implementation. Another set of questions concerns the basic income project at a more general level, pertaining to the philosophical and ontological underpinnings of such a policy.

 

The Subject Complication:

To begin with, there is the problem of exactly whom the basic income will apply to; in other words, who is the subject of claims to social justice in the world of basic income. In most formulations, from Thomas Paine forward, a basic income is conceived of as having a condition of citizenship attached to it. Though this would be relatively straightforward in a world of limited interstate migration, the reality is that individuals and families currently exist in a wide variety of positions vis-à-vis the state they physically inhabit. In addition to citizens, there are permanent residents, refugees, students on visas, temporary foreign workers and more. The danger with a citizenship-conditional basic income, as it is unlikely that every country would implement such a policy at the same time, and certainly not at the same monetary level, is that it would further deepen the divide between citizen and non-citizen inhabitants of particular countries.

There is, of course, a counter-proposal: opening basic income to all living within a country, regardless of status. However, given the already fraught nature of immigration and integration policy, this would most likely prove politically damaging to both the government that implemented the policy and to Basic Income itself.

 

The Scarcity Complication:

This points to another potential issue with basic income, namely, that of scarcity. The compulsion to work in order to receive income, either within the market in a capitalist society or in some other arrangement, such as a communal obligation or kinship system in a non-capitalist one, has traditionally been justified on grounds of resource scarcity. In essence, the idea is that one must contribute one’s labor or resources, adding to the overall “pie” of the society, in order to make a claim on taking resources out later on.

However, in practice, societies tend to exempt certain classes of people from the labor compulsion, such as the elderly and children, if sufficient resources exist to allow these populations to exist as “free riders” of a sort. The argument for basic income, in relation to the scarcity question, is that, at least at the global level, scarcity has been overcome by technological advances, productivity gains and automation, such that a labor compulsion is no longer strictly necessary.

At the national level, however, even setting aside questions of effective governance and level of citizen trust in government which affect many states, governments may be capable of deploying resources, but not all governments have the same level of resources to deploy. Given the citizenship-focused nature of most basic income projects, the scarcity question will continue to trouble such proposals absent a mass nation-to-nation wealth redistribution or the establishment of a level of transnational government capable of effectively taking on the task of administering such a program.

 

The Sustainability Complication:

Finally, there is the question of whether or not a basic income program would be politically sustainable in the way the growth of the social democratic welfare state was in the 20th century. The key to that growth, and to the notion of decommodified consumption (via free-at-point-of-use services such as health care), was the mobilization of working-class political power resources, primarily trade unions and the left-wing parliamentary political parties usually associated with them. By contrast, until very recently, basic income tended to be a subject of interest to academics and policymakers rather than a concrete demand made by a mobilized political grouping, whether in the traditional sense of trade unions and political parties or in more modern social movements.

To some degree, this may be a legacy of libertarian strands of support for basic income, which were explicitly aimed at dismantling aspects of the welfare state that such movements viewed as their major achievements. With the exception of South Africa, where there was a broad social push for the BIG, even those social justice movements which exist outside of the traditional social democratic framework have not yet made basic income into a clearly articulated demand. Organizations explicitly concerned with labor issues, such as Fight for $15, have placed more emphasis on rights-at-work and raising wages than on the right-to-not-work implicit in at least the progressive, as opposed to libertarian, interpretation of basic income. In a way, this indicates that such organizations may still be stuck, to greater or lesser degrees, in the old social democratic model, with its emphasis on labor rights, albeit with some new elements.

 

A Way Forward:

With that said, the notion of basic income continues to express a certain truth about the collective stake in the commons and the ability to demand a just share of social wealth, free of restrictions or paternalistic impediments, and without the increasingly unnecessary compulsion to engage in the formal labor market. With both the increasing interconnectedness of global economic production and the increasing precarity of many forms of work, the case for basic income on both moral and practical grounds has rarely been more compelling than it is now.

However, in order for basic income to be implemented in a progressive fashion, a recognition and a concrete convergence of action (as opposed to a notional convergence of interest) must be had between basic income advocates and political movements both of the precariat and the traditional working class. Just as the welfare state of the 20th century was largely built on the political muscle of the workers of its time, if basic income is to be the welfare cornerstone of the 21st, it will need a similarly strong mobilization behind it.

By Carter Vance

 

Carter Vance has a Master of Arts from the Political Economy program at Carleton University in Ottawa, Canada. He has published work in Jacobin on water rights protests and has written on a variety of topics for publications such as Truthout and Inquiries Journal.

 

Why Left Economics is Marginalized

After the 2009 recession, Nobel Prize winner Paul Krugman wrote a New York Times article entitled “How Did Economists Get It So Wrong?”, wondering why economics has such a blind spot for failure and crisis. Krugman correctly pointed out that “the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.” However, by lumping the whole economics profession into one group, Krugman perpetuates the fallacy that economics is one uniform bloc, obscuring the fact that some economists, whose work is largely ignored, had indeed predicted the financial crisis. These economists were largely dismissed for not falling into what Krugman calls the “economics profession.”

So let’s acknowledge there are many types of economics, and seek to understand and apply them, before there’s another crisis.

 

Left economics understands power

Let’s take labor as an example. Many leftist economic thinkers view production as a social relation. The ability to gain employment is an outcome of societal structures like racism and sexism, and the distribution of earnings from production is inherently a question of power, not merely the product of a benign and objective “market” process. Labor markets are deeply intertwined with broader institutions (like the prison system), social norms (such as the gendered distribution of domestic care) and other systems (such as racist ideology) that affect employment and compensation. There is increasing evidence that the left’s view of labor is closer to reality, with research showing that many labor markets have monopsonistic qualities, which in simple terms means employees have difficulty leaving their jobs due to geography, non-compete agreements and other factors.

In contrast, mainstream economics positions labor as an input in the production process, which can be quantified and optimized, e.g. maximized for productivity or minimized for cost. Wages, in widely taught models, are equal to the value of a worker’s labor. These unrealistic assumptions don’t reflect what we actually observe in the world, and this theoretical schism has important political and policy implications. For some, a job and a good wage are rights; for others, businesses should do what’s best for profits and investors. Combative policy debates, like the need for stronger unions vs. anti-union right-to-work laws, are rooted in this divide.

 

The role of government

The left believes the government has a role to play in the economy beyond simply correcting “market failures.” Prominent leftist economists like Stephanie Kelton and Mariana Mazzucato argue for a government role in economic equity and shared prosperity through policies like guaranteed public employment and investment in innovation. The government shouldn’t merely mitigate market failures but should use its power to end poverty.

On the other hand, mainstream economics teaches that government crowds out private investment (research shows this isn’t true), that raising the minimum wage would reduce employment (wrong), and that putting money in the hands of capital leads to more economic growth (also no). As we have seen with the post-Trump tax cuts, such cuts lead to the further enrichment of the already deeply unequal.

 

Limitations to left economics: public awareness and lack of resources

History and historically entrenched power determine not only final outcomes but also the range of outcomes that are deemed acceptable. Structural inequalities have been ushered in by policies ranging from predatory international development (“free trade”) to domestic financial deregulation, while the poverty caused by these policies is blamed on the poor.

Policy is masked by theory or beliefs (e.g. about free trade), but the theory seems to be created to support opportunistic outcomes for those who hold the power to decide them. The purely rational-agent theories that undergird deregulation have been strongly advocated by particular (mostly conservative) groups such as the Koch Network, which have spent loads of money to have specific theoretical foundations taught in schools, preached in churches, and legitimized by think tanks.

There have been others who question the centrality of the rational agent, the holy grail of the free market; who believe in public rather than corporate welfare; and who see the need for government to not only regulate but to make markets and provide opportunity. This “alternative” history exists but is less present, its alternative-ness defined by sheer public awareness, the lack of which, perhaps, stems from a lack of capital.

Financial capital is an important factor in what becomes mainstream. I went through a whole undergraduate economics program at a top university without hearing the words “union” or “redistribution,” which now feels ludicrous. Then I went to The New School for Social Research for graduate school, which has been called the University in Exile, for exiled scholars of critical theory and classical economics. In the New School economics department, we study Marxist economics, Keynesian and post-Keynesian economics, Bayesian statistics, and ecological and feminist economics, among other topics. There are only a few other economics programs in the US that teach that there are different schools of thought in economics. But after finishing at the New School and considering doing a PhD there, I understood this problem on a personal level.

There’s barely any funding for PhDs, and most students have to pay their tuition, which is practically unheard of for an economics doctorate. Why? Two reasons. First, while those who treat economics like a science go on to be bankers and consultants, those who study economics as a social science might not make the kind of money to fund an endowment. Second, perhaps because of this lack of future payout, The New School is just one of many institutions that doesn’t deem heterodox economics valuable enough to warrant the funding that goes to other programs, in this case, like Parsons.

Unfortunately, a combination of these factors leaves mainstream economics schools well funded by opportunistic benefactors, whether they’re alumni or a lobbying group, while heterodox programs struggle or fail to support their students and their research.

 

The horizon for economics of the left

Using elements of different schools of thought, and defining the left of the economics world, is difficult. Race, class, and power, elements that define the left, are sticky, ugly, and stressful, and don’t provide easily quantifiable building blocks like mainstream economics does. Without unifying building blocks, we’re prone to continuing to produce graduates from fancy schools who go into the world believing that economics is a hard science and that the world can be understood with existing models in which human behavior can be easily predicted.

Ultimately the mainstream and the left in economics are not so different from the mainstream and the left politically, and there is room for a stronger consensus on non-mainstream economics that would bolster the left politically. It’s worth exploring and strengthening these connections because at the heart of our economic and political divides is a fundamental difference in opinion regarding how society at large should be organized. And whether we continue to promote wealth creation within a capitalistic system, or a distributive system that holds justice as a pinnacle, will determine the extent to which we can achieve a healthy, civilized society.

Fortunately, the political left in many ways is upholding, if not the theory and empirics, then the traditions and values of non-mainstream economics. Calls from the left to confront a half-century of neoliberal economic policy are more sustained, and perhaps more successful, than at other times in recent history, with some policies like the federal job guarantee making it to the mainstream. After 2008, the 99 percent, supported by mainstreamed research about inequality, began to organize.

There’s hope for change stemming from a new generation of economists, in particular, the thousands of young and aspiring economists researching and writing for groups like Rethinking Economics, the Young Scholars Initiative (YSI), Developing Economics, the Minskys (now Economic Questions), the Modern Money Network, and more. But ideas and policies are path dependent, and it will take a real progressive movement, supplemented by demands by students in schools, to bring left economics to the forefront.

By Amanda Novello.

 

A version of this post originally appeared on Data for Progress’ Econo-missed Q+A column, in response to a question about the marginalization of leftist voices in economics.

Amanda Novello (@NovelloAmanda) is a policy associate with the Bernard L. Schwartz Rediscovering Government Initiative at The Century Foundation. She was previously a researcher and Assistant Director at the Schwartz Center for Economic Policy Analysis at The New School for Social Research.