How Progressives Can Win Big: Casting out the Spirit of Defeatism, One Keystroke at a Time

By Steve Grumbine

 

Progressives, trigger warning: compassion required. When was the last time you heard Greens, Berniecrats, or indie voters not acknowledge the distinct and pressing need for election reform, campaign finance reform, and voting reform? More to the point, when haven’t they mentioned freeing third parties from the fringe of irrelevancy and putting them on the debate stage?

That is mostly what gets talked about, simply because it is low-hanging fruit.

It has long been known that our electoral system and methods of voting are corrupt, untrustworthy, and easily manipulated even by less-than-savvy politicians, state actors, and hackers. The answer to many of these issues is the same one we would need for any progressive reform to take place in America: enlightened, fiery, peaceful, and committed activists to propel a movement and ensure that the people rise, face their oppressors, and unify to demand that their needs be met.

What is not as well known, however, is how a movement, the government, and taxes work together to bring about massive changes in programs, new spending, and the always scary “National Debt” (it should be called “National Assets,” but I will speak to that later). In fact, this subject is so poorly understood by well-meaning people on all sides of the aisle that it ranks among the most important issues we face as a nation. Until we understand it, and have the confidence and precision necessary to destroy the myths and legends we have substituted for truth and knowledge, it must remain front and center in the movement.

Progressives, like most Americans, are almost religiously attached to the term “taxpayer dollar” and to the idea that their “hard-earned tax dollars” are being misappropriated. Often, the most difficult pill for people to swallow is the concept that our Federal Government is self-funding and creates the very money it “spends”. It isn’t spending your tax dollars at all. To demonstrate this, consider this simplified flow chart:

 

These truths bring on even more hand-wringing, because to the average voter they raise the question of where taxes, tax revenue, government borrowing, and the misleading idea of the “National Debt” (which is nothing more than the sum of every not-yet-taxed federal high-powered dollar in existence) fit into the federal spending picture. The answer is that they really don’t.

A terrible deception has been perpetrated on the American people. We have been led to believe that the US borrows its own currency from foreign nations, and that the money gathered from borrowing and collected through taxes funds federal spending. We have also been led to believe that gold is somehow the only real currency; that our nation is broke because we own little gold compared to the money we create; and that, because of that shortage of gold, we are on the precipice of some massive collapse.

The American people have been taught single-entry accounting instead of Generally Accepted Accounting Principles (GAAP)-approved double-entry accounting, in which every single asset has a corresponding liability, which means that every single dollar has a corresponding legal commitment. Every single dollar, by accounting identity, is nothing more than a tax credit waiting to be extinguished. Sadly, many see only that the government, the actual dollar creator, has debt; that it has liabilities; not that we the people have assets: assets we need more and more of as time goes on to achieve any semblance of personal freedom and relative security from harm.
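To make the double-entry point concrete, here is a minimal sketch in Python (purely illustrative; the names Ledger, government_spends, and taxes_collected are mine, not anyone’s actual accounting system). Every dollar the currency issuer creates is booked simultaneously as a non-government asset and a government liability, so the two columns always net to zero:

```python
# Double-entry sketch: every dollar created is at once a government
# liability and a non-government asset, so the books always balance.

class Ledger:
    def __init__(self):
        self.government_liabilities = 0  # the "National Debt" column
        self.nongovernment_assets = 0    # the dollars we hold

    def government_spends(self, amount):
        # Spending creates new dollars: both columns grow together.
        self.government_liabilities += amount
        self.nongovernment_assets += amount

    def taxes_collected(self, amount):
        # Taxation extinguishes dollars: both columns shrink together.
        self.government_liabilities -= amount
        self.nongovernment_assets -= amount

    def net_position(self):
        # By accounting identity, always zero.
        return self.nongovernment_assets - self.government_liabilities

ledger = Ledger()
ledger.government_spends(100)
ledger.taxes_collected(40)
print(ledger.nongovernment_assets)  # 60 dollars remain in private hands
print(ledger.net_position())        # 0: assets always equal liabilities
```

Notice what the sketch makes visible: calling the government side “debt” and the private side “assets” is describing the same entries from two directions.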

In other words, at the Federal level it is neither your tax dollars nor the dollars collected from sales of Treasury debt instruments that are spent. Every single dollar the Federal Government spends is new money.

Every dollar is keystroked into existence. Every single one of them. Which brings up the next question: “Where do our hard-earned tax dollars and borrowed dollars go if, in fact, they do not pay for spending on roads, schools, bombs, and propaganda?” We already know the answer. They are destroyed by the Federal Reserve when it marks down the Treasury’s accounts.

In her LA Times article “Congress can give every American a pony (if it breeds enough ponies)” (which you can find here), Professor Stephanie Kelton states quite plainly:

“Whoa, cowboy! Are you telling me that the government can just make money appear out of nowhere, like magic? Absolutely. Congress has special powers: It’s the patent-holder on the U.S. dollar. No one else is legally allowed to create it. This means that Congress can always afford the pony because it can always create the money to pay for it.”

That alone should raise eyebrows and cause you to reconsider a great many things you may have once thought. It may also send you falling back on old, neoclassical textbook understandings, which she deftly anticipates and answers with:

“Now, that doesn’t mean the government can buy absolutely anything it wants in absolutely any quantity at absolutely any speed. (Say, a pony for each of the 320 million men, women and children in the United States, by tomorrow.) That’s because our economy has internal limits. If the government tries to buy too much of something, it will drive up prices as the economy struggles to keep up with the demand. Inflation can spiral out of control. There are plenty of ways for the government to get a handle on inflation, though. For example, it can take money out of the economy through taxation.”

And there it is. The limitation everyone is wondering about. Where is the spending limit?

When we run out of real resources. Not pieces of paper or keystrokes. Real resources.

To compound your bewilderment, would it stretch your credulity too much to say that the birth of a dollar is congressional spending and the death of a dollar is when it is received as a tax payment, or in return for a Treasury debt instrument, and deleted? Would that make your head explode? Let the explosions begin, because that is exactly what happens.

Money is a temporary thing. Even in the old days we heard many wax poetic about how they took wheelbarrows of government- and bank-printed IOUs to the burn pile and set the dollar funeral pyre ablaze.

In the same LA Times piece, Professor Kelton goes on to say:

“Since none of us learned any differently, most of us accept the idea that taxes and borrowing precede spending – TABS. And because the government has to “find the money” before it can spend in this sequence, everyone wants to know who’s picking up the tab.

There’s just one catch. The big secret in Washington is that the federal government abandoned TABS back when it dropped the gold standard. Here’s how things really work:

  1. Congress approves the spending and the money gets spent (S)
  2. Government collects some of that money in the form of taxes (T)
  3. If 1 > 2, Treasury allows the difference to be swapped for government bonds (B)

In other words, the government spends money and then collects some money back as people pay their taxes and buy bonds. Spending precedes taxing and borrowing – STAB. It takes votes and vocal interest groups, not tax revenue, to start the ball rolling.”
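Kelton’s STAB sequence is simple enough to express as a toy calculation. The sketch below (hypothetical Python; the function name stab is mine, for illustration only) follows her three steps: spending comes first, taxes claw some of those dollars back, and the difference is offered for a swap into bonds:

```python
# Toy STAB sequence: Spending -> Taxes -> Bonds.
# The "deficit" is just the spending not yet taxed back,
# offered for a swap into interest-bearing Treasury bonds.

def stab(spending, tax_take):
    dollars_spent = spending                        # (S) Congress spends
    taxes_collected = tax_take                      # (T) some dollars return
    bonds_issued = dollars_spent - taxes_collected  # (B) the difference
    return bonds_issued

# If $100 is spent and $80 comes back in taxes,
# $20 remains available to swap for bonds.
print(stab(100, 80))  # 20
```

The point of the ordering is the whole argument: the bond sale at step (B) cannot logically happen until the dollars from step (S) exist to buy them.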

Let’s be clear: we are not talking about The Hobbit or The Lord of the Rings. We are not talking about Gandalf the Grey or Bilbo Baggins. We are not referencing “my precious!” It’s not gold, or some other commodity people like to hold, taste, and smell. It is simply a tally. Yet somehow we have convinced ourselves that there is a scarcity of dollars, when it is the resources that are scarce. We have created what attorney Steven Larchuk calls a “Dollar Famine”.

To quote Warren Mosler in his must-read book “The 7 Deadly Innocent Frauds of Economic Policy” (you can download a free copy right here):

“Next question: “So how does government spend when they never actually have anything to spend?”

Good question! Let’s now take a look at the process of how government spends.

Imagine you are expecting your $1,000 social security payment to hit your bank account which already has $500 in it, and you are watching your account on your computer screen. You are about to see how government spends without having anything to spend.

Presto! Suddenly your account statement that read $500 now reads $1,500. What did the government do to give you that money? It simply changed the number in your bank account from 500 to 1,500. It added a ‘1’ and a comma. That’s all.”

Keystrokes. Is it becoming clearer? Let’s go further for good measure. Mosler continues:

“It didn’t take a gold coin and hammer it into its computer. All it did was change a number in your bank account. It does this by making entries into its own spreadsheet, which is connected to the banking system’s spreadsheets.

Government spending is all done by data entry on its own spreadsheet, which we can call ‘The US dollar monetary system’.

There is no such thing as having to ‘get’ taxes or borrow to make a spreadsheet entry that we call ‘spending’. Computer data doesn’t come from anywhere. Everyone knows that!”
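Mosler’s “spending is data entry” description can be mimicked in a few lines. The sketch below (hypothetical Python, not a model of any actual Treasury or Fed system) makes his point visible: the recipient’s balance is simply marked up, and no source account is debited anywhere:

```python
# Spending as data entry: the balance is marked up directly.
# Note that no "government account" is debited anywhere in this code;
# the dollars do not "come from" somewhere else.

bank_accounts = {"you": 500}

def government_spends(accounts, recipient, amount):
    # No funds are "found" first; a number is simply changed.
    accounts[recipient] += amount

government_spends(bank_accounts, "you", 1000)  # the $1,000 payment hits
print(bank_accounts["you"])  # 1500
```

That is Mosler’s $500-to-$1,500 example in code: the “1” and the comma appear because a spreadsheet entry changed, not because a pile of dollars moved.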

So why do we allow people to tell us otherwise? Maybe it is too abstract. And on cue, Mosler explains this phenomenon via a sports analogy for those who are not comfortable with the straight economic narrative:

“Where else do we see this happen? Your team kicks a field goal and on the scoreboard the score changes from, say, 7 points to 10 points. Does anyone wonder where the stadium got those three points? Of course not! Or you knock down 5 pins at the bowling alley and your score goes from 10 to 15. Do you worry about where the bowling alley got those points? Do you think all bowling alleys and football stadiums should have a ‘reserve of points’ in a ‘lock box’ to make sure you can get the points you have scored? Of course not! And if the bowling alley discovers you ‘foot faulted’ and takes your score back down by 5 points, does the bowling alley now have more score to give out? Of course not!

We all know how ‘data entry’ works, but somehow this has gotten all turned around backwards by our politicians, media, and almost all of the prominent mainstream economists.”

Ouch! Mosler pointed out the obvious: the propaganda machine has polluted our understanding. So how is this expressed in economic language? Let’s let Warren finish the thought:

“When the federal government spends the funds don’t ‘come from’ anywhere any more than the points ‘come from’ somewhere at the football stadium or the bowling alley.

Nor does collecting taxes (or borrowing) somehow increase the government’s ‘hoard of funds’ available for spending.

In fact, the people at the US Treasury who actually spend the money (by changing numbers on bank accounts up) don’t even have the phone numbers of the people at the IRS who collect taxes (they change the numbers on bank accounts down), or the other people at the US Treasury who do the ‘borrowing’ (issue the Treasury securities). If it mattered at all how much was taxed or borrowed to be able to spend, you’d think they’d at least know each other’s phone numbers! Clearly, it doesn’t matter for their purposes.”

So why do progressives allow the narrative that the nation has run out of points to deter us from demanding that we leverage our resources to gain points, win the game of life, and have a robust New Deal: green energy, infrastructure, free college, student debt eradication, healthcare as a right, a federal job guarantee for those who want work, and expanded Social Security for those who do not want to or cannot work?

How has a movement so full of “revolutionaries” proven to be so “full of it,” believing that we must take points away from the 99% to achieve that which the federal government creates readily whenever people do something worth compensating? Why does the narrative that the nation is “broke” resonate with progressives? Why do they allow this narrative to sideline the entire movement?

I believe it is because progressives are beaten down. Many have forgotten what prosperity for all looks like or sounds like. Many are so financially broke and spiritually broken that the idea of hope seems like gaslighting. It feels like abuse. It strains credulity and forces people into the safe space of defeatism.

If they firmly reject hope, then they can at least predict failure, be correct, and feel victorious in self-defeating apathy. If the system is rigged, if the politicians are all bought off, if the voting machines are hacked, if the deep state controls everything, then we can tell ourselves we are too weak to unite, stand up, and demand economic justice, equality, a clean environment, a guaranteed job, healthcare, and security, and then we have a bad guy to blame.

Then we can sit at our computers, toss negative comments around social media, express our uninformed and uninspired defeatism about the system, and proclaim it truth by making it a self-fulfilling prophecy, congratulating ourselves on our 20/20 foresight as we perform the “progressive give-up strategy”. Or, if we want to achieve a Green New Deal, we can make a radical departure from the norm and own our power: we can embrace macroeconomic reality through the lens of a monetarily sovereign nation with a free-floating, non-convertible fiat currency and truly achieve the progressive prosperity we all deserve.

The choice is ours. It is in our hands.

 

For more of Steve’s work, check out Real Progressives on Facebook or Twitter.

PostCapitalism: A Guide to Our Future

By Hannah Temple

It is difficult to get through a day without encountering the idea that we as a species and a planet are at some kind of tipping point. Whether due to environmental, economic, or social factors (or a mix of them all), there is a growing collective of voices claiming that the fundamental ways in which we live our lives, often linked to the structures and incentives of capitalism, must change, and must change both radically and soon, if we are to protect the future of the human race. Paul Mason’s PostCapitalism: A Guide to Our Future adds another compelling voice to this increasingly hard-to-ignore din. However, what makes this book refreshingly different is the tangible picture it paints of our possible path to a “postcapitalist” world. Mason’s belief is that capitalism’s demise is in fact already happening, and that it is happening in ways we both know and like.

The book starts by looking at Kondratieff waves: the idea, developed by Nikolai Kondratieff in the 1920s, that capitalist economies experience waves or cycles of prosperity and growth, followed by a downswing characterised by regular recessions and usually ending with a depression. This is then followed by another phase of growth, and so on. Many people, especially those who benefit from the current economic model, argue that what we are experiencing is just another of these regular downswings and that we all just have to hunker down and ride the wave until the going gets good again. Mason, however, says that even a quick glance at whatever form of evidence takes your fancy (global GDP growth, interest rates, government debt to GDP, money in circulation, inequality, financialization, productivity) demonstrates that the fifth wave we should currently be riding has stalled and is refusing to take off.

The shift from the end of one wave to the start of a new one is always associated with some form of societal adaptation. Usually this comes through attacks on skills and wages, pressure on redistribution projects such as the welfare state, and business models evolving to grab what profit there is. However, if this de-skilling and wage reduction is successfully resisted, then capitalism is forced into a more fundamental mutation: the development of more radically innovative technologies and business models that can restore dynamism based on higher wages rather than exploitation. The 1980s saw the first adaptation stage in the history of long waves in which worker resistance collapsed. This allowed capitalism to find solutions through lower wages, lower-value models of production, and increasing financialization, and thus to rebalance the entire global economy in favour of capital. “Instead of being forced to innovate their way out of the crisis using technology, the 1 per cent simply imposed penury and atomization on the working class.”

This failure to resist the will of capital and the subsequent emergence of an increasingly atomised, poor and vulnerable global population is part of Mason’s explanation for our stalled fifth wave. The other half of the explanation comes from the nature of our recent technological innovations. Mason contends that the technologies of our time are fundamentally different from those of previous eras in that they are based on information. This is significant because information doesn’t work the way printing presses or telephones or steam engines work. Information turns all the basic tenets of capitalism (supply and demand, ownership, prices, competition) on their heads. Information technology essentially works to produce things that are increasingly cheap or even free. Think of music: from £10 for a CD in 1997, to 95p for an iTunes track in 2007, to effectively free via streaming services like Spotify in 2017. Over time, Mason claims, the market mechanism for setting prices for certain information-based goods will gradually drive them down until they reach essentially or even actually zero, eroding profits in the process.

Capitalism’s response to this shift has basically been to put up lots of walls and retreat into stagnant rentier activity rather than productivity or genuine innovation. Legal walls such as patents, tariffs, and intellectual property rights are used to try to maintain monopoly status so that profits can continue to be earned. Politics is following the same path, with some real walls as well as plenty of metaphorical ones in the form of disintegrating international agreements and partnerships, import tariffs, immigration caps, and so on. “With info-capitalism a monopoly is not just some clever tactic to maximise profit, it is the only way an industry can run. Today the main contradiction in modern capitalism is between the possibility of free, abundant socially-produced goods and a system of monopolies, banks and governments struggling to maintain control over power and information”.

However, what seems to be part of the problem is, according to Mason, a critical part of the solution. These new sharing, or “information” technologies, have led to what Mason sees as an already emerging postcapitalist sector of the economy. Time banks, peer-to-peer lending, open-source sharing like Linux and Wikipedia and other technologies are not based on a profit-making motive and instead enable individuals to do and share things of value socially, outside of the price system. This peer-to-peer activity represents an indication of the potential of non-market economies and what our future might look like.

Mason argues that we have now reached a juncture at which there are so many internal and external threats facing our existing system (climate change, migration, overpopulation, an ageing population, government debts) that we are in a position similar to that faced by feudalism before it dissolved into capitalism. The only way forward entails a break with business as usual. Mason emphasises that it is important to remember that capitalism is not a “natural” state of being, nor has it been around for very long. We live in a world in which its existence is seen as unquestionable, but we must take time to teach our brains how to imagine something new again. For Mason, in rather sci-fi fashion, this “something new” is called Project Zero.

Project Zero aims to harness the full capabilities of information technologies to:

– Develop a zero-carbon energy system
– Produce machines, services and products with zero marginal costs (profits)
– Reduce labour time as close as possible to zero

“We need to inject into the environment and social justice movements things that have for 25 years seemed the sole property of the right: willpower, confidence and design.”

Mason provides us with a comprehensive and exciting list of activities to crack on with in shaping our new world. Some of his ideas are excitingly fresh, such as the development of an open, accurate, and comprehensive computer simulation of current economic reality, using real-time data, to enable the planning of major changes. Others are more familiar, such as shifting the role of the state to be more inventive and supportive of human wellbeing by coordinating infrastructure, reshaping markets to favour sustainable, collaborative, and socially just outcomes, and reducing global debts. He also supports the introduction of a universal basic income, the expansion of collaborative business models with clear social outcomes, and the removal of market forces, particularly in the energy sector, in order to act swiftly to counter climate change. He calls for the socialisation of the finance system. This would involve nationalising central banks and setting them explicit sustainability targets, with an inflation target on the high side of the recent average to stimulate a “socially just form of financial repression”. It would also involve restructuring the banking system into a mixture of non-profit local and regional banks, credit unions and peer-to-peer lenders, a state-owned provider of financial services, and utilities earning capped profits. Complex financial activities should still be allowed, but should be separate and well regulated, rewarding innovation and punishing rent-seeking behaviour.

This push towards a system that rewards and encourages genuine innovation underlies most of Mason’s suggestions for our postcapitalist future. He contends that, if we continue down our current path, it will suffocate us and lead to a world of growing division, inequality, and war. We already have systems for valuing things without prices. Optimising the technologies we have available to expand these systems, allowing us to live more sustainable, equal, and happy lives, should, Mason argues, be the key focus for us all.

This book review of Paul Mason’s PostCapitalism by Hannah Temple was originally posted at Rethinking Economics.

Don’t Be Afraid of Robots: Technology is What We Make of It

Rapid technological change, if it is even happening, does not necessarily need to lead to mass unemployment or even major disruptions in people’s lives. In all cases, new technology is what society makes of it; that is, it should be used to broadly improve lives and work, not to reorient the world around the technology itself or redistribute wealth upwards. Ride-hailing services like Uber and the promise of self-driving cars illustrate both sides of this point; policies like a job guarantee provide a path forward.

Illustration: Heske van Doornen


By Kevin Cashman

There is a lot of talk of the rapid development of technology leading to changes in the way people work, as well as to mass automation and thus mass unemployment. However, the data generally don’t support this story (the most recent data being a notable but very limited exception). Nevertheless, the story has currency among the public and politicians, in part due to the novelty and allure of technology and the political power of its promoters. Throughout recent history, the promises of revolutionary technology have captivated imaginations but also fallen far short. Instead of flying cars, there are apps for refrigerators and for ordering cat food over the internet.

It is important to note that the gains from technological advancements do not necessarily need to go to the rich or lead to mass unemployment. If shared fairly, the gains could lead to social benefits, such as increased social services, and broad individual gains, such as more leisure time. And there can be concerted action to help those directly affected by technological change. While there are many policies that could be implemented, a job guarantee, where the government provides jobs to all those who need them, is the simplest and most straightforward way to deal with job loss. If people lose their jobs due to factors outside of their control, why not simply provide them with new jobs?

If the gains go to the top, it is important to point out that this is the result of deliberate policy, not a natural outcome. The rich and their allies in politics promote this redistribution to the top as inevitable (as the “future of work,” for example) whether or not advancements in technology pan out. Since advancements in technology do not fundamentally necessitate a change in social relations, this is intentionally deceptive at worst and wishful thinking at best. To see this dynamic, it is instructive to look at particular jobs and industries, for example the taxi, bus, and trucking industries.

The ability to use smartphones and the internet to mediate services is not particularly revolutionary or unique, but it does provide some benefits. Uber, the ride-hailing company, brought investment and these ideas to the taxi industry and quickly took over a large part of the market, despite many issues with its service and sustainability. In Uber’s case, appealing to the political power of affluent city residents and to the supposed innovation of its app was enough to excuse its blatant disregard for regulations, questionable safety record, exploitation of drivers, and unprofitability. In this sense, Uber’s investment allowed it to provide some benefits to its relatively wealthy passengers at the expense of the disabled, regular taxi drivers, and others. Most importantly, because it subsidizes every ride (Uber loses money on every ride taken), it was able to undercut the regulated taxi industry. The government’s lack of interest in maintaining fairness in the taxi industry effectively handed Uber the market.

How could this have been different? The taxi industry as a whole is not an industry with large margins or much investment. In part, this is due to underlying characteristics of the industry as well as regulation, including rules aimed at limiting the number of taxis operating in a city (which is good policy). To realize the benefits of technology, taxi commissions or groups of taxi drivers in various places could have developed their own apps and infrastructure to facilitate the ordering of cabs over the internet. This would have required substantial organization and money, which could have been facilitated and provided, respectively, by the government. The result could have been an app that allowed taxi authorities to continue to maintain standards for safety and operation while also providing the seamless service that certain groups of consumers desire. Indeed, competitor apps are being developed this way and existed before Uber, but they must now claw market share away from Uber. This is quite difficult because Uber is still subsidizing rides and keeping prices artificially low.

Let us now assume that rapid technological advancement is inevitable: self-driving cars and buses are finally right around the corner, as has been promised for years. (Indeed society could be on the cusp of this sort of technology, although the challenges shouldn’t be understated.) There would be massive benefits if self-driving vehicles are implemented successfully: increased mobility for the elderly, many fewer accidents, lower operating costs, increased productivity when in transit, etc.

Along with these benefits, there would be significant disruptions to the labor market. Ideas about how to approach these changes were discussed in a recent report, Stick Shift: Autonomous Vehicles, Driving Jobs, and the Future of Work.1 It discusses two questions that are central to evaluating rapid disruptions to the labor market: How fast will the technology develop? How much of an impact will it have?

Regarding the first question, and assuming that these technological hurdles are overcome,2 the report notes:

If the technology is successfully developed, the rate of the adoption and popularization of autonomous vehicles will depend greatly on whether necessary infrastructure is built, and whether and how regulation responds to these advances in technology. One of the inevitable debates will be between those who wish to ensure that autonomous vehicles are safe and reliable and those who want to get them to market as soon as possible. The outcome of this debate could greatly determine how the labor market is affected. Thorough vetting of the technology, along with phased rollouts, would allow time for workers to adjust to incoming shocks, and would dampen those shocks as well.

If the government were to assume the costs of building infrastructure for self-driving vehicles instead of the companies that are selling them, it would be fair for the government to also take a pro-active approach and develop a process to adequately assess the safety of those vehicles. This would somewhat mitigate the effects on the taxi industry and on bus drivers, especially in the early years of their use.

Proponents of self-driving vehicles also often forget to mention that technology will replace individual activities of workers but not necessarily all of the activities that encompass their jobs. Truckers, for instance, perform many other activities besides simply driving:

…in the trucking industry, there are many tasks that are difficult to imagine autonomous-vehicle technology being able to manage, which may limit their adoption or consign them or the driver to a secondary role. This includes many things that truck drivers are required to know, such as how to inspect the vehicle and cargo, perform maintenance and fix emergency problems, put on tire chains and deal with unpredictable weather, refuel the vehicle safely, and carry dangerous materials safely, to name a few.

If self-driving trucks took over the trucking industry, this suggests there would be many more support jobs in the trucking industry.

The other question is more pertinent given our assumptions. How much of an impact technology will have on society is entirely up to society. The question, then, is not how much of an impact self-driving cars will have on society, but where society needs self-driving cars and how they fit with social goals. There is a convincing argument that cars, self-driving or not, should have much less of a role in cities in the future. While taxis could have a role to play in the future, public transportation and good urban design should be the focus, thus eliminating much of the need for taxis. In this vein, employment in the taxi industry could decline, but in addition to greater social benefits from less vehicle use, employment would increase alongside increased investment in public transportation.

The social aspects of occupations are also important to consider when asking whether it might be desirable to transition to self-driving vehicles:

There is also the question of more socially oriented driving jobs. Bus drivers are one example. City bus drivers preserve order and safety on buses, provide information, ensure payment, and are generally considered community members and authority figures. School bus drivers have specific responsibilities related to the safety of the children they supervise. For these reasons, it may not be desirable or necessary to replace bus drivers, completely at least, even if the buses were fully autonomous.

In this sense, the elimination of these jobs would be akin to cuts in public services, and it would also eliminate some social benefits. The social aspects of jobs are rarely considered, but they are very important.

Here, a jobs guarantee would be useful, since it is a policy that prioritizes the social aspects of jobs and since social benefits are not prioritized in the private job market. Returning to the example of bus drivers, buses could be self-driving in the future but the “driver” need not be replaced. Rather, the position could be reoriented in a purely social role.

 

Whether technology will bring small changes, as in the case of Uber, or large changes, as in the case of self-driving vehicles, who benefits is entirely up to society. Gains from technology can be shared broadly with the right policies — just a few of which were described here — so there is no need to inherently fear the robots. A jobs guarantee is one of those policies, and it is perhaps the most important. (And it’s gaining traction in the mainstream.)  A broad coalition, focused on the appropriate use of technology and promoting a job guarantee, could keep the actual threat — those wanting to harness the benefits of technology for themselves — at bay. Whether or not robots and mass automation are around the corner, it’s good policy, too.


The Neoliberal Tale

“The tide of Totalitarianism which we have to counter is an international phenomenon and the liberal renaissance which is needed to meet it and of which first signs can be discerned here and there will have little chance of success unless its forces can join and succeed in making the people of all the countries of the Western World aware of what is at stake.” (Friedrich Hayek)

In the past year we’ve seen a number of references to the maladies that neoliberalism and globalization have brought upon Western societies (e.g., see here, here, and here). It is well known that over the past decades the levels of inequality and wealth concentration have continued to increase in capitalist economies, leading to the arrival of “outsiders” to the established political powers, such as Trump in the US and Macron in France, a turn to the right across Latin America, and Brexit.

Neoliberalism, one of the main elements to blame, is best known for the policies that have defined the world economy since the 1970s. Faithful devotees like Ronald Reagan and Margaret Thatcher, in the US and UK respectively, exported neoliberal policies to low- and middle-income countries through the Washington Consensus, under the pretense that they would bring about development.

Neoliberal policies did not turn out exactly as their creators envisioned. They wanted to reformulate the old liberal ideas of the 19th century into a deeper and more coherent social philosophy, something that was never actually accomplished. This article reviews some of the origins of neoliberalism.

The first time the term “neoliberalism” appeared, according to Horn and Mirowski (2009), was at the Colloque Walter Lippmann in Paris, in 1938. The Colloque was organized to debate the ideas presented in Lippmann’s recent book The Good Society in which he proposed an outline for government intervention in the economy, establishing the boundaries between laissez-faire – a mark of the old liberalism – and state interventionism.

Lippmann set the foundations for a renovation of liberal philosophy, and the Colloque was a first opportunity to discuss classical liberal ideas and to draw an initial line marking where the new liberal movement would, or should, differ from the old liberalism. It was a landmark that, in subsequent years, sparked several attempts to establish institutions to reshape liberalism, such as the Free Market Study at the University of Chicago and Friedrich Hayek’s Mont Pelerin Society (MPS).

The event also exposed major rifts within liberalism’s own ranks. Reservations and disagreements among free-market advocates were not uncommon. A notable example is Henry Simons of the Chicago School, whose position on monopolies, and on how they should be addressed, was a point of disagreement with fellow libertarians such as Hayek and Lionel Robbins, both then at the London School of Economics (LSE), and Ludwig von Mises.

Simons’s view that the government should nationalize or dismantle monopolies would nowadays be read as a leftist attack on corporations, but it fits perfectly within the classical liberal framework that Simons and Frank Knight, also of the University of Chicago, were following. Under their interpretation, any concentration of power that undermines the price system, and therefore threatens market freedom (and with it political and individual freedom), should be countered, even if that meant using the government to do it.

It becomes clear that the reformulation of liberal ideas into what we now know as neoliberalism was not a smooth and certain project. In fact, market advocates struggled to make themselves heard in a world dominated by the state interventionism of the Great Depression and the post-war period. The publication of Keynes’s General Theory in 1936, and the Keynesian revolution that followed in its wake, swept through economics departments everywhere and further undermined the libertarian view.

By the end of the 1930s and of Lippmann’s Colloque, however, the realization that neoliberalism would thrive only through a concerted collective effort by its representatives changed Hayek’s view of his own engagement in normative discourse. The establishment of the Chicago School program and the MPS in 1946–47 were both results of a transnational effort to shape public policy and to fit liberal ideas into a broader social philosophy. The main protagonists, besides Hayek, were Simons, Aaron Director, and the liberal-conservative Harold Luhnow, then director of the William Volker Fund and responsible for funding the projects.

The condition for success, as remarked in the epigraph, was to “join and succeed in making the people of all the countries of the Western World aware of what is at stake.” What was at stake? Social and political freedom. Hayek and many early neoliberals understood that any social philosophy or praxis crippling market mechanisms would invariably lead to a “slippery slope” towards totalitarianism.

It is important to note, though, that the causation runs from market freedom to social and political freedom, and not the other way around. As Burgin (2012) indicates, while market freedom is a precondition for a free democratic society, the latter may threaten market freedom. The free market should not be subject to popular vote, nor ruled over by any “populist” government (a common swear-word today), and there must exist mechanisms to prevent that from happening.

Once we have that in mind, it becomes less puzzling that Hayek, Milton Friedman, and the Chicago School were once associated with authoritarian governments such as Pinochet’s in Chile, one of the most violent dictatorships in Latin American history.

Several liberal economists who occupied important public positions in the Chilean dictatorship had been trained at the Chicago School. The famous “Chicago Boys” first experimented in Chile with what would later be applied in the US and UK and then exported to the rest of the developing world through the Washington Consensus.

In brief, the adoption of some form of authoritarian control over popular sovereignty was deemed acceptable in order to guarantee market sovereignty.

Nevertheless, in the discussions within the early neoliberal groups, the boundaries of disciplinary economics were crossed, and the formulation of neoliberalism, like the Chicago School and the MPS themselves, was grounded not in any scientific analytical basis but simply in political affiliation.

Its multidisciplinary character, dispersion, and uncertainty are some of the reasons it is hard to give a straightforward definition of what the term “neoliberalism” really means. To understand it, we have to keep in mind the set of “dualisms” (capitalism vs. socialism; Keynesianism vs. liberalism; freedom vs. collectivism; and so on) that marked the period. Its defenders (academics, entrepreneurs, journalists, etc.) did not know what their own agenda was; they only knew what they were supposed to oppose. Neoliberalism was born out of a “negative” effort.

It wasn’t until many years later that the division between normative and positive economics came to the surface, with Friedman and his book Capitalism and Freedom, published in 1962. The increasing participation of economists in the MPS, and Milton Friedman’s more active public policy advocacy, brought an end to Hayek’s intention to construct a new multidisciplinary social philosophy.

Economically, Friedman embraced laissez-faire; methodologically, he embraced empirical analysis and positive policy recommendations, moving ever further from the abstract notions of value and the moral discussions that earlier MPS fellows such as Hayek were concerned with. Neoliberalism lost its path on the way to its triumph; it became a “science” that offered legitimacy to a new credo, a new “illusion”.

As the shadows of neoliberalism became more intertwined with current neoclassical economics and Friedman’s monetarism, it not only lost its name but also gave birth to a corporate type of laissez-faire, one in which social relations are reduced to market mechanisms. Politics, education, health, employment: all of it could fit into a market process in which individuals maximize their own utility. There is nothing the government can do that the market cannot do better and more efficiently; monopolies, if anything, are to be blamed on government actions, while labor unions are disruptive to the economy’s wellbeing. Neoliberalism became a set of policies to be followed (privatization, deregulation, trade liberalization, tax cuts, and so on) on a crusade to commoditize every essential service, indeed every aspect of life itself.

Hayek believed that these ideas could spread and change the world. And they certainly did. What is worth noting is that there is no fatalistic understanding that neoliberalism was unavoidably a result of historical factors.

The rise of neoliberalism was not spontaneous but orchestrated and planned. It was a collective transnational movement to counteract the mainstream of its time; it originated in a period marked by wars, authoritarianism, and economic crisis; and it was grounded in political affiliations and supported by a dominant ruling class that funded its endeavors and transformed public opinion. These are the roots of what is now mainstream economic thought.

Reimagining “Right-to-Work”

“Everyone has the right to work, to free choice of employment, to just and favorable conditions of work and to protection against unemployment.” This is Article 23 of the Universal Declaration of Human Rights declared by the UN in 1948. It sounds like a pretty good right to me. I recently learned that in America, we have some states with “right-to-work” laws. That dumbfounded me. Why did unemployment exist in those states if they had a right to work?

It only took a few minutes of research to find out that the “right-to-work” laws some states have are nothing like the fundamental human right. What these laws actually do is defend a worker’s right not to be required to join a labor union to work at a company. This “right-to-work” doesn’t allow more choice; it allows less. Where’s the option to have a union that doesn’t allow free riders? If you don’t want union benefits, you can work for a company without a union. You still have your choice, but this law has destroyed mine. These laws really promote the right to work for less money, the right to work at a business with a racist union, the right to destroy what unions could and should stand for.

Research from the Economic Policy Institute shows that “right-to-work” laws have hurt unionization rates and weakened the unions that do exist. In such states, 7% of workers are represented by a union contract, versus 17% in non-“right-to-work” states. Wages are also lower in states with these laws, which makes sense, as unions enable collective bargaining for better pay. Nationally, unionized workers make 27% more than non-unionized workers. “Right-to-work” has been a disaster for the labor movement, which historically won us better pay for shorter hours and better working conditions. The movement that won us the weekend. That won women the right to vote. That won us the 8-hour work day.

To reinvent the labor movement we have a lot of ground to cover. One way to start would be with reimagining the “right-to-work” to be much more like the fundamental human right envisioned by the UN in 1948. This actual Right to Work, a Job Guarantee, would transform the labor market. It’d be up to communities to decide what jobs to guarantee as they see fit, but surely our communities could come up with something better than a new fast food joint. If you’re guaranteed a job serving your community, wages will go up across the board, as entry-level workers get real choice. Capitalists would be forced to make burger flipping more attractive than planting trees.

The real human right to work would mean you could quit working for your abusive boss, cross the street, and get hired. We could make job guarantee jobs come with awesome benefit plans that capitalists are forced to match. We could have contracts with our employers that require “just cause” for firing, rather than the one-sided “at-will” employment contracts we all toil under today. We could have unions that represent all of the workers in our businesses rather than just a small subset. It’s going to take a lot of work to get there, but coalescing around a shared demand, a good job for everyone, is a great start.

We can see the work that needs to be done. We need care workers to serve our aging population. We need police officers that actually serve our communities. We need solar panels harnessing the free energy of the sun. We need to capture the carbon capitalists have polluted the environment with. We need affordable housing so we don’t have people living on the streets. We need the right to build these things together, including everyone willing and able. We need to take care of those who aren’t able, for whatever reason. If we reimagine a true right to work, the unimaginable becomes possible.

Why You’re Not Getting a Raise

By Nikos Bourtzis.

 

Much of the developed world has experienced stubbornly low real wage growth since the financial crisis of 2007. Currently, the British people are seeing their earnings decline in real terms. Even in Germany, where unemployment keeps falling to record lows, wage growth is stagnating. This phenomenon has squeezed living standards and has been one of the main culprits behind the rise of anti-establishment movements. Faster pay rises are desperately needed for the global recovery to accelerate and for ordinary people to actually be a part of it. This piece explains why rising labor compensation has been relatively minuscule during the current economic upturn and how this phenomenon could be remedied.

A bit of history

The lack of meaningful pay rises is not a phenomenon that started with the financial crisis of 2007. It can be traced back to the 1970s and 1980s, when monetarism started sweeping into academia and politics. The stagflation of the 1970s, the simultaneous rise of inflation and unemployment, led some governments to abandon the Keynesian policies of the past, because those policies apparently could not deal with stagflation. Monetary policy became the preferred tool to control inflation, together with a revived notion that markets, left to their own devices, would bring about the best social outcomes. The Thatcher and Reagan governments are among the most famous examples of states adopting and implementing these beliefs. The first institution targeted for deregulation was the labor market. Wage increases were frozen and employment protection was scaled back, because it was believed that demand and supply forces would restore full employment. Instead, unemployment in the UK exploded after Thatcher came into office in 1979, rising to over 10% and never returning to its post-World War II lows of between 1% and 2%.

Labor unions are one of the most important institutions when it comes to pay rises. In most industrial countries, they are responsible for negotiating wages and working conditions between employers and employees. Union membership in OECD countries grew until the mid-1970s but then started dropping. With the rise of neoliberal governments in the West, organized labor came under attack: under free-market ideology, unions disrupt economic activity with strikes and demand higher-than-optimal wages, so their power needed to be kept in check. More important, though, is the shift in ideas about what the goals of the State should be. In the post-war period, an expressed purpose of governments was to keep aggregate demand at full-employment levels. The UK government, for example, stated full employment as its goal in its 1944 Employment Policy White Paper. That goal changed with the rise of neoliberalism.

When the commitment to keep employment high and stable was abandoned, and labor markets were deregulated, unemployment spiked in most countries and has never since fallen to levels that could be called full employment. Even during strong upturns, unemployment in most countries did not dip below 4%. As a result, labor unions, and workers in general, lost their biggest bargaining chip: when there is full employment, and jobs are abundant, workers have more power to demand higher wages and better working conditions. With the neoliberal policies of the Reagan administration, real wages in the US became decoupled from productivity, meaning workers stopped receiving their fair share of the output they produced. The same phenomenon has been observed in many other industrialized countries, such as the UK. The policies introduced in the 1980s were largely sustained and expanded up until 2008.

 

The Financial Crisis: A turn for the worse

The situation became even worse after the financial crisis erupted. For example, in both the US and the UK, wage growth slowed even further, as shown in the following figure, even as headline unemployment returned to pre-crisis levels.

Moving toward a low headline unemployment rate, though, does not mean full employment has been achieved. In the US, the U-6 measure of unemployment, which adds the underemployed and marginally attached to the headline rate, puts the real unemployment rate at 8.6%. Far from full employment! In the UK, the Office for National Statistics reports that the number of people employed on zero-hours contracts has risen by 400% since 2000, with most of the rise occurring after the financial crisis. The employment situation is thus more precarious than before the crisis, which further depresses wage growth.
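To make the distinction concrete, here is a toy calculation with made-up labor-force numbers (illustrative only, not actual BLS figures) showing how a U-6-style rate can sit well above the headline U-3 rate:

```python
# Hypothetical labor-force numbers in millions; illustrative only.
employed_full_time = 125.0
part_time_for_economic_reasons = 5.5  # underemployed: want full-time work
unemployed = 7.0                      # jobless and actively searching
marginally_attached = 1.6             # want work but not currently searching

labor_force = employed_full_time + part_time_for_economic_reasons + unemployed

# Headline (U-3): the unemployed as a share of the labor force.
u3 = unemployed / labor_force * 100

# U-6 adds the underemployed and the marginally attached to the numerator,
# and the marginally attached to the denominator.
u6 = ((unemployed + part_time_for_economic_reasons + marginally_attached)
      / (labor_force + marginally_attached) * 100)

print(f"U-3: {u3:.1f}%   U-6: {u6:.1f}%")  # U-3: 5.1%   U-6: 10.1%
```

The same job market can look close to full employment on one measure and far from it on another, which is exactly the point of the 8.6% U-6 figure above.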

 

Why is high wage growth important for the recovery?

It is essential to point out that one of the main reasons the current economic recovery has been weak is low wage growth. Wage income is the main propeller of consumer spending, which accounts for more than 60% of GDP in industrialized countries. Low wage growth means low consumer spending, and thus low GDP growth and employment. Currently, households are borrowing to keep their living standards stable, and that is what is keeping consumer spending going. This process, though, is unsustainable and cannot last long. When households can no longer afford to borrow, another financial crisis will almost certainly occur. That is why governments need to do everything in their power to restore wage growth.

What can be done?

The power of organized labor has been decimated since the 1980s. If workers cannot actually have a say in what happens in the workplace then they cannot fight for fair wages. This is why unions need to be strengthened and supported by governments. Employers should be forced to negotiate wages through collective bargaining and union coverage should be expanded above the current 50% OECD average. This will level the playing field between powerful employers and the currently weak labor class.

As mentioned before, productivity and real wages have been delinked since the 1980s. That is where the minimum wage could help. In the US, the real minimum wage fell after 1980 and has stayed relatively flat since then. With the liberalization “mania” sweeping the Western world, governments are freezing public-sector pay, and Greece even cut its minimum wage in the name of restoring public finances and growth. That is the exact opposite of what should be done. Wages drive consumption and growth; cutting them can only depress the economy. Raising the minimum wage would help sustain wage-based consumption, employment growth, and thus wage growth.

A sure way to speed up wage growth again is fiscal stimulus. Government spending lifts aggregate demand directly and effectively. If enough spending is injected into the economy, it will create enough jobs to bring full employment. The momentum and labor scarcity created by the stimulus will force wages up and give workers and labor unions more bargaining power. A Job Guarantee Program, if ever implemented, would effectively set a wage floor in the economy, since anyone offered less than the Job Guarantee wage could instead take a public-sector job at that wage.
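The wage-floor mechanism can be sketched in a few lines. The numbers are hypothetical (a uniform $15/hour guarantee wage is an assumption for illustration, not a feature of any specific proposal):

```python
JG_WAGE = 15.00  # hypothetical job-guarantee wage, dollars per hour

def accepted_wage(private_offer: float) -> float:
    """A worker never has to accept less than the guarantee wage,
    since a public-sector job at JG_WAGE is always available."""
    return max(private_offer, JG_WAGE)

# Sub-floor private offers go unfilled: the worker takes the guarantee.
offers = [9.50, 12.00, 15.00, 22.00]
outcomes = [accepted_wage(w) for w in offers]
print(outcomes)  # [15.0, 15.0, 15.0, 22.0]
```

Private employers can still pay more than the floor; they simply cannot hire below it, which is what pushes entry-level wages up across the board.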

The “curse” of low wage growth is not new, and it was definitely exacerbated by the financial crisis. Even though unemployment is currently falling in many countries, it is still well above full-employment levels. With workers’ rights under attack for some time now, unions do not have the power they once did to win strong pay growth. If the current recovery is to accelerate, and if ordinary people are to participate in it, wage growth has to rise substantially. The only way to do this is for labor unions to be strengthened and for governments to once again commit to full employment.

About the Author
Nikos Bourtzis is from Greece and recently graduated with a Bachelor in Economics from Tilburg University in the Netherlands. He will be pursuing a Master in Economics and Economic analysis at Groningen University. Research interests are heterodox macroeconomics, anti-cyclical policies, income inequality, and financial instability.

Economics as a Science?

By Johnny Fulfer.

Is economics a science?

Could it be? Should it be? The debate is as alive today as it was in the early twentieth century. This article reviews some of the key arguments in the discussion and provides a helpful backdrop against which to rethink the purpose of economics today.

In 1906, Irving Fisher argued that economics is no less scientific than physics or biology. All three aim to discover “scientific laws,” he explained. Even though they may not always be observable in reality, scientific laws are considered fundamental truths of nature. Newton’s first law of motion, for instance, cannot be observed directly: a body moves uniformly in a straight line only if certain conditions are met. The same holds true for economic science, Fisher concluded.

But not everyone agreed. The discipline was charged with unsound methods.

Specifically, economists were accused of using the deductive method without the necessary level of precision. Jacob Hollander addressed the charges in a 1916 essay. Scientific inquiry involves uniformity and sequence, Hollander maintained. Progression in science relies on the formation of hypotheses, which may at some point become ‘laws.’ Observation and inference are the first steps toward the creation of hypotheses. The final step in the scientific process is verification, which is required before we move from theory to law. Without verification, he argued, “speculation is an intellectual gymnastic, not a scientific process.”

Hollander’s work reveals one of the questions at the heart of this debate: Is verification required, and even possible, given the complexities of economic phenomena? Scholars have the disposition to rely on the works of previous thinkers, Hollander argued, without endeavoring to move beyond familiar perspectives.

This question lives on today.

In a 2013 opinion piece for the New York Times, Stanford economist Raj Chetty argues that science is nothing more than testing hypotheses with precision. Large macroeconomic questions, such as the cause of recessions or the origins of economic growth, “remain elusive.” This is no different from the large questions faced by the medical field, such as the pursuit of a cure for cancer, he explains. The primary limitation of economics, Chetty argues, is that economists have limited ability to run controlled experiments to test macroeconomic theories. High monetary costs and ethical standards make such controlled experiments impractical. And even if we could run one, its conclusions might not hold in the long run, as society changes.

In a 2016 essay, Duncan Foley added to the conversation. He argued that the distinctions between the social and natural sciences are not clear. Both come from the same scientific revolution, and both are influenced by values. The notion that scholars in the natural sciences “pursue truth” is a flawed assumption, Foley argues. Scholars in the natural and social sciences choose which problems to solve and the methodology they use.

This choice involves values since a scholar must value one research project more than another.

Examining the scientific nature of economics, John F. Henry explains that neoclassical economic theory holds a position of influence in society because of its universal and abstract character. Henry maintains that we should reexamine this assumption of universality: if economics is based on subjective values, how can it be considered universal? Should economists continue making “progress” toward a more scientific structure of knowledge? That leads us to ask how we define progress. There is no end to this debate.

It seems unproductive to continue asking such questions. Rather than debating whether economics is or is not a science, perhaps we should shift the discussion toward questions that ask why economics needs to be a science in the first place. Where does this desire to be ‘scientific’ come from, and why is it so important for economics to be considered scientific? Perhaps the real issue is the determination to make economics a science.

About the Author
Johnny Fulfer received a B.S. in Economics and a B.S. in History from Eastern Oregon University. He is currently pursuing an M.A. in History at the University of South Florida and has an interest in political economy, the history of economic thought, intellectual and cultural history, and the history of the human sciences and their relation to power in society.

Let’s face it: Monetary Policy is Failing

By Nikolaos Bourtzis.

Monetary policy has become the first line of defense against economic slowdowns, and it has taken the driver’s seat in combating the crisis that began in 2007. Headlines everywhere comment on central banks’ (CB) decision-making processes and reinforce the idea that central bankers are non-political economic experts we can rely on during downturns. They rarely mention, however, that central banks’ monetary policies have failed repeatedly and continue to operate on flawed logic. This piece reviews recent monetary policy efforts and explains why central bank operations deserve our skepticism, not our blind faith.

What central banks try to do

To set monetary policy, central banks usually target the interbank rate: the interest rate at which commercial banks borrow (or lend) reserves from one another. They do this by managing the level of reserves in the banking system to keep the interbank rate close to the target. By controlling how cheaply banks can borrow reserves, the central bank tries to persuade lending institutions to adjust their own interest rates in the same direction. In times of economic struggle, the central bank pushes rates down so that borrowing (and investing) becomes cheaper.
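As a rough sketch of that mechanism (the demand curve for reserves and the adjustment step below are invented for illustration, not a model of any actual central bank), the CB adds or drains reserves until the interbank rate sits at the target:

```python
def interbank_rate(reserves: float) -> float:
    """Hypothetical linear demand for reserves: more reserves in the
    system push the rate banks charge each other lower."""
    return max(0.0, 6.0 - 0.004 * reserves)  # rate in %, reserves in $bn

def steer_to_target(reserves: float, target: float,
                    step: float = 5.0, max_iters: int = 1000):
    """Open-market operations as a loop: buy assets (add reserves) when
    the rate is above target, sell assets (drain reserves) when below."""
    for _ in range(max_iters):
        rate = interbank_rate(reserves)
        if abs(rate - target) < 0.01:
            break
        reserves += step if rate > target else -step
    return reserves, interbank_rate(reserves)

# Starting from $1,000bn of reserves (rate = 2.0%), steer to a 1.5% target.
reserves, rate = steer_to_target(reserves=1000.0, target=1.5)
```

In this toy world, cutting the rate from 2.0% to 1.5% requires injecting $125bn of reserves. Real operations are far more complex, but the direction of the intervention is the same: the rate is administered by adjusting the quantity of reserves.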

This operation is based on the theory that lower interest rates discourage saving and promote investment, even during a downturn. That’s the old “loanable funds” story. According to the neoclassical economists in charge at most central banks, rigidities in the short run sometimes prevent interest rates from responding to exogenous shocks. For example, if the private sector suddenly decides to save more, interest rates might not fall in response. This produces a mismatch between savings and investment: too much saving and too little investment. As a result, unemployment arises, since aggregate demand is lower than aggregate supply. In the long run these mismatches should disappear, and the loanable funds market should clear at the “natural” interest rate, which guarantees full employment and a stable price level. But to speed things up, the CB tries to push the market rate of interest toward that “natural” rate through its interventions.
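The loanable-funds story can be written down with hypothetical linear schedules (the coefficients here are invented for illustration): saving rises with the interest rate, investment falls, and the “natural” rate is where the two cross:

```python
def savings(r: float) -> float:
    return 100 + 800 * r   # S(r): desired saving increases with the rate

def investment(r: float) -> float:
    return 180 - 800 * r   # I(r): desired investment decreases with the rate

# The "natural" rate clears the market, S(r*) = I(r*):
#   100 + 800r = 180 - 800r  =>  r* = 80 / 1600 = 0.05
r_star = (180 - 100) / (800 + 800)
print(f"natural rate: {r_star:.1%}")  # natural rate: 5.0%
```

If desired saving shifts up (say to 120 + 800r), the clearing rate falls to 3.75%; in the neoclassical account, short-run rigidities keep the market rate from falling that far on its own, and that gap is what the CB tries to close.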

Recent Attempts in Monetary Policy

However, interest rate cuts failed miserably to kick-start the recovery during the Great Recession. That prompted the use of unconventional tools. First came Quantitative Easing (QE). Under this policy, central banks buy long-term government bonds and/or other financial instruments (such as corporate bonds) from banks, financial institutions, and investors, flooding banks with reserves to lend out and financial markets with cash, which is then expected to eventually filter down to the real economy. But this did not work either. The US (the first country to implement QE in response to the crash) is experiencing its longest and weakest recovery in decades, and Japan has been stagnating for almost two decades, even though it started QE in the early 2000s.

Second came the argument that the “natural rate” is in negative territory: Larry Summers’ secular stagnation hypothesis. The logic is that if QE cannot raise inflation enough, negative nominal rates must be imposed so that real rates can drop into negative territory. Since markets cannot do that on their own, central banks have to do the job. First came Sweden and Denmark, then Switzerland and the Eurozone, and last but not least, Japan.

Not surprisingly, the policy had the opposite of the intended effect. Savings rates went up instead of down, and businesses did not start borrowing more; they actually hoarded more cash. Some savers are even taking money out of their bank accounts and putting it in safe deposit boxes or under their mattresses! The graph below shows how savings rates rose in countries that implemented negative rates, with companies following suit by holding more cash.


Central bankers seem to be doing the same thing over and over again, while expecting a different outcome. That’s the definition of insanity! Of course, they cannot admit they failed. That would most definitely bring chaos to financial markets, which are addicted to monetary easing. Almost every time central bankers provide a weaker response than expected, the stock market falls.

There is too much private debt

So how did we get here? To understand why monetary policy has failed to lift economies out of crises, we have to talk about private debt.

Private debt levels are sky-high in almost every developed country. As more and more debt piles up, it becomes more costly to service: interest payments take up an ever larger share of disposable income, hurting consumption. Moreover, you cannot convince consumers and businesses to borrow if they are up to their eyeballs in debt, even when rates are essentially zero. And some banks are drowning in non-performing loans, so why would they lend more money when no one is creditworthy enough to borrow? Even if private debt levels were not sky-high, firms only borrow when capacity needs to expand, and during recessions, low consumer spending means low capacity utilization, so investing in more capacity makes no sense for firms.

How to move forward

So, now what? Should we abolish central banks? God no! Central banks do play an important role. They are needed as a lender of last resort for banks and the government. But they should not try to fight the business cycle. Tinkering with interest rates and buying up financial instruments encourages speculation and accumulation of debt, which further increases the likelihood of financial crises. The recent pick-up in economic activity is again driven by private debt and even the Bank of England is worried that this is unsustainable and might be the trigger of the next financial crisis.

The success of monetary policy depends on market mechanisms. Since this is an unreliable channel that promotes economic activity through excessive private debt growth, governments should be in charge of dealing with the business cycle. The government is the only institution that can pump money into the economy effectively to boost demand when it is needed. But due to the current misguided fears of large deficits, governments have not provided the necessary fiscal response. Investment requires as little uncertainty as possible to take place, and only fiscal policy can reduce that uncertainty. Admittedly, in previous decades, monetary responses might have been responsible for restoring some business confidence, as shown in the figure below.

This effect, though, cannot always be relied upon during severe slumps. And no doubt, more attention needs to be given to private debt, which has reached unprecedented levels.

Monetary policy has obviously failed to produce a robust recovery in most countries. It might even have contributed to bringing about the financial crisis of 2008. But central bankers refuse to learn their lesson and keep doing the same thing again and again. They do not understand that their policies have failed to kick-start our economies because the private sector is drowning in debt. It's time to put governments back in charge of economic stabilization and let them open their spending spigots. A large fiscal stimulus is needed if our economies are to recover. Even a Debt Jubilee should not be ruled out!

About the Author
Nikos Bourtzis is from Greece and recently graduated with a Bachelor's in Economics from Tilburg University in the Netherlands. He will be pursuing a Master's in Economics and Economic Analysis at Groningen University. His research interests are heterodox macroeconomics, anti-cyclical policies, income inequality, and financial instability.