When It Comes to Market Liquidity, What If the Private Dealing System Is Not “The Only Game in Town” Anymore? (Part 1)

A Tribute to Value Investing

“Investors persist in trading despite their dismal long-run trading record partly because the argument seduces them that because prices are as likely to go up as down (or as likely to go down as up), trading based on purely random selection rules will produce neutral performance… Apparently, this idea is alluring; nonetheless, it is wrong. The key to understanding the fallacy is the market-maker.”

–Jack Treynor (writing under the pseudonym Walter Bagehot) in “The Only Game in Town”


By Elham Saeidinezhad | Value investing, also called “value-based dealing,” is suffering its worst run in at least two centuries. The COVID-19 pandemic intensified a decade of struggles for this popular strategy of buying cheap stocks in often unpopular enterprises and selling them when the price reverts to “fundamental value.” Such a statement might seem a mere nuisance to followers of the Capital Asset Pricing Model (CAPM). For liquidity whisperers, however, such as “Money Viewers,” this development flags a structural shift in the financial market: the capital market is moving away from being a private dealing system towards becoming a public one. In this future, the Fed, a government agency, would be the market liquidity provider of first resort, even in the absence of systemic risk. As soon as there is a security sell-off or a hike in the funding rate, it will be the Fed, rather than Berkshire Hathaway, that uses its balance sheet and expands the monetary base to purchase cheap securities from the dealers and absorb the trade imbalances. The resulting expansion in the Fed’s balance sheet, and in its monetary liabilities, would also alter the money market. The excess reserves floating around could transform the money market, and the payment system, from a credit system into a money-centric one. In Part 1, I lay out the theoretical reasons blinding CAPM disciples from envisioning such a brave new future. In Part 2, I will explain why value investors are singing their farewell song in the market.

Jack Treynor, writing under the pseudonym Walter Bagehot, developed a model to show that security dealers rely on value-investing funds to provide continuous market liquidity. Security dealers are willing to supply market liquidity at any time because they expect value-based dealers’ support during a market sell-off or upon hitting their finance limit. A sell-off occurs when a large volume of securities is sold and absorbed into the balance sheets of security dealers in a short period of time. A finance limit is reached when a security dealer’s access to funding liquidity is curtailed. In these circumstances, security dealers expect value investors to act as market liquidity providers of near last resort by purchasing dealers’ excess inventories. It is this interdependence that makes a private dealing system the pillar of market-liquidity provision.

In CAPM, however, such interconnectedness is neither required nor recognized. Instead, CAPM asserts that the risk-return tradeoff determines asset prices. This seemingly clean intuition, however, has generated real confusion. The “type” of risk that produces return has been the subject of intense debate, even among the model’s founders. Sharpe and Schlaifer argued that market risk (the covariance) is the essential insight of CAPM for stock pricing. They reasoned that all investors have the same information and the same risk preferences. As long as portfolios are diversified enough, there is no need to price security-specific risks, as the market has already reached equilibrium: prices are already a reflection of assets’ fundamental value. For John Lintner, on the other hand, it was more natural to abstract from business cycle fluctuations (market risk) and focus on firm-specific risk (the variance) instead. His stated rationale was to abstract from the noise introduced by speculation. The inconsistency of the empirical evidence with equilibrium, and the acknowledgment of speculators’ role, are probably why Sharpe later shifted away from his equilibrium argument. In his later work, Sharpe derived his asset pricing formula from the relationship between the return on an individual security and the return on any efficient portfolio containing that security.
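For readers who want the formula the founders were debating, the standard CAPM pricing relation can be written compactly. Only market risk, entering through beta (the scaled covariance with the market), is priced; firm-specific variance diversifies away:

```latex
\mathbb{E}[R_i] = R_f + \beta_i \left( \mathbb{E}[R_m] - R_f \right),
\qquad
\beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)}
```

Here R_i is the return on security i, R_m the market return, and R_f the risk-free rate at which, crucially, CAPM assumes everyone can borrow and lend without limit.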

CAPM might be confused about the kind of risk that matters most for asset pricing, but its punchline is clear: liquidity does not matter. The model’s central assumption is that all investors can borrow and lend at a risk-free rate, regardless of the amount borrowed or lent. In other words, liquidity provision is given, continuous, and free. By assuming free liquidity, CAPM disregards any “finance limit” for security dealers and, as a matter of logic, downplays the importance of value investing. In the CAPM, security dealers have constant and free access to funding liquidity. Therefore, there is no need for value investors to backstop asset prices when dealers reach their finance limit, a situation that would never occur in CAPM’s world.

Jack Treynor and Fischer Black partnered to emphasize value-based dealers’ importance in asset pricing, and both men wrote on the subject for the Financial Analysts Journal (FAJ). Treynor, writing under the pseudonym Walter Bagehot, thought through the economics of the dealer function in his paper “The Only Game in Town,” and Black responded with his visionary “Toward a Fully Automated Stock Exchange.” At the root of this lifelong dialogue lies a desire to clarify a dichotomy inside CAPM.

Black, despite his belief in CAPM, argued that “noise,” the notion that market prices deviate from fundamental value, is a reality that CAPM, built on the idea of market efficiency, must reconcile with. He offered a now-famous opinion that we should consider stock prices to be informative if they are between “one-half” and “twice” their fundamental values. The mathematician Benoit Mandelbrot supported this observation. He showed that individual asset prices fluctuate more widely than a normal distribution allows. Mandelbrot used this finding, later known as the problem of “fat tails,” or too many outliers, to call for “a radically new approach to the problem of price variation.”
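Black’s informal test is simple enough to state directly. The sketch below (with hypothetical prices and values) just checks whether a price falls inside Black’s band of one-half to twice fundamental value:

```python
# A direct statement of Black's informal "factor of two" band: a price is
# considered informative if it lies between one-half and twice the
# (unobservable) fundamental value. Numbers below are hypothetical.
def is_informative(price: float, fundamental_value: float) -> bool:
    """True if price is within Black's [V/2, 2V] band."""
    return 0.5 * fundamental_value <= price <= 2.0 * fundamental_value

print(is_informative(30.0, 50.0))   # True: 30 lies in [25, 100]
print(is_informative(120.0, 50.0))  # False: 120 exceeds 2 * 50
```

The striking width of the band is the point: even a “nearly efficient” market, on Black’s definition, leaves enormous room for noise.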

From Money View’s perspective, both the efficient market hypothesis and Mandelbrot’s “fat tails” hypothesis capture part of the data’s empirical character. CAPM, rooted in the efficient market hypothesis, captures arbitrage trading, which is partially responsible for asset price changes. At the same time, fat tails, or wide fluctuations in asset prices, are just as permanent a feature of the data. In other words, in the world of Money View, arbitrage trading and constant deviations from fundamental value go together as a package, as a matter of theoretical logic. Arbitrageurs connect different markets and transfer market liquidity from one market to another. Yet, despite what CAPM claims, their operation is not “risk-free”: it exposes them to certain risks, including liquidity risk. As a result, when arbitrageurs face risks that are too great to ignore, they reduce their activities and generate trade imbalances in different markets.

Security dealers who make markets in those securities are the entities that must absorb these trade imbalances in their balance sheets. If this process continues, at some point their long position pushes them to their finance limit, a point at which it becomes too expensive for security dealers to finance their inventories. To compensate for the risk of reaching this point and to deter potential sellers, dealers reduce their prices dramatically. These dramatic price cuts are the outliers behind Mandelbrot’s “fat tails.” At this point, dealers stop making the market unless value investors intervene to support the private dealing system by purchasing large blocks of securities. In doing so, they become market liquidity providers of last resort. For decades, value-based dealers used their balance sheets and capital to purchase these securities at a discounted price. The idea was to hold them for a long time and sell them back into the market when prices returned to fundamental value. The problem is that the value investing business, which is the private dealing system’s pillar of stability, is collapsing. In recent decades, value-oriented stocks have underperformed growth stocks and the S&P 500.
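The finance-limit mechanism can be sketched numerically. The toy model below is not Treynor’s own; the discount schedule and all numbers are assumptions, chosen only to show how a dealer’s bid falls away from fundamental value, slowly at first and then violently, as funding capacity is used up:

```python
# Illustrative toy model: a dealer absorbing one-sided sell flow lowers its
# bid as inventory approaches its finance limit, because each additional
# position must be financed at a rising marginal cost. The hyperbolic
# discount schedule is an assumption made purely for illustration.
def dealer_bid(fundamental: float, inventory: float, finance_limit: float,
               base_discount: float = 0.001) -> float:
    """Bid price, with a discount that steepens near the finance limit."""
    if inventory >= finance_limit:
        return float("nan")  # at the limit, the dealer stops making the market
    utilization = inventory / finance_limit
    discount = base_discount / (1.0 - utilization)
    return fundamental * (1.0 - discount)

for q in (0, 50, 90, 99):
    print(q, round(dealer_bid(100.0, q, 100.0), 2))
# 0 -> 99.9, 50 -> 99.8, 90 -> 99.0, 99 -> 90.0: narrow discounts in calm
# conditions, dramatic ones (the "fat tails") near the limit
```

The last few units of balance sheet capacity are what produce the outsized price moves; a value investor who buys the discounted inventory resets utilization and restores the narrow spread.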

The approach of favoring bargains, typically judged by comparing a stock price to the value of the firm’s assets, has a long history. But in the financial market, nothing lasts forever. In the equilibrium world imagined by CAPM, any deviation from fundamental value must offer an opportunity for “risk-free” profit somewhere. It might be hard to exploit, but profit-seeking arbitrageurs will always be “able” and “willing” to do so as a matter of logic. The Fischer Black–Jack Treynor dialogue, and their admission of the dealers’ function, is a crucial step away from pure CAPM and reveals an important fallacy at the heart of this framework. Like any model based on the efficient market hypothesis, CAPM abstracts from the liquidity risk that both dealers and arbitrageurs face.

Money View pushes this dialogue even further and asserts that at any moment, security prices depend on dealers’ inventories and their daily access to funding liquidity, rather than on security-specific risk or market risk. If Fischer Black was a futurist, Perry Mehrling, the founder of “Money View,” lives in the present. For Black, CAPM would become true in the “future,” and he devoted his life to realizing that ideal future. Perry Mehrling, on the other hand, considers the overnight funding liquidity that enables the private dealing system to provide continuous market liquidity to be an ideal system already. As value investing declines, Money View scholars should start reimagining the prospects of market liquidity and asset pricing outside the sphere of the private dealing system, even though, sadly, it is a future that neither Black nor Mehrling was looking forward to.

Elham Saeidinezhad is Term Assistant Professor of Economics at Barnard College, Columbia University. Previously, Elham taught at UCLA and served as a research economist in the International Finance and Macroeconomics research group at the Milken Institute, Santa Monica, where she investigated the post-crisis structural changes in the capital market as a result of macroprudential regulations. Before that, she was a postdoctoral fellow at INET, working closely with Prof. Perry Mehrling and studying his “Money View.” Elham obtained her Ph.D. in empirical Macroeconomics from the University of Sheffield, UK, in 2013. You may contact Elham via the Young Scholars Directory.

Can Algorithmic Market Makers Safely Replace FX Dealers as Liquidity Providers?

By Jack Krupinski


Financialization and electronification are long-term economic trends, and they are here to stay. It is essential to study how these trends will alter the world’s largest market: the foreign exchange (FX) market. In the past, electronification expanded access to the FX markets and diversified the demand side. More recently, technological developments have started to change the FX market’s supply side, away from the traditional FX dealing banks and towards principal trading firms (PTFs). Once the sole providers of liquidity in FX markets, dealers are facing increased competition from PTFs. These firms use algorithmic, high-frequency trading to leverage speed as a substitute for balance sheet capacity, which traditionally determined FX dealers’ comparative advantage. Prime brokerage services were critical in allowing such non-banks to infiltrate the once impenetrable inter-dealer market. Paradoxically, traditional dealers were the very institutions that offered prime brokerage services to PTFs, allowing them to use the dealers’ names and credit lines while accessing trading platforms. The rise of algorithmic market makers at the expense of small FX dealers is a potential threat to long-term stability in the FX market, as PTFs’ resilience to shocks is mostly untested. PTFs’ presence in the market, and the resulting narrow spreads, could create an illusion of free liquidity during normal times. During a crisis, however, that illusion will evaporate, and the lack of enough dealers in the market could increase the price of liquidity dramatically.

In normal times, PTFs’ presence could create an “illusion of free liquidity” in the FX market. The increasing presence of algorithmic market makers would increase the supply of immediacy services (a feature of market liquidity) in the FX market and compress liquidity premia. Because liquidity providers must directly compete for market share on electronic trading platforms, the price of liquidity would be compressed to near zero. This phenomenon manifests in a narrower inside spread when the market is stable. The FX market’s electronification makes it artificially easier for buyers and sellers to search for the most attractive rates. At the same time, the PTFs’ function makes market-making more competitive and reduces dealers’ profitability as liquidity providers. The inside spread represents the price that buyers and sellers of liquidity face, and it also serves as the dealers’ profit incentive to make markets. As a narrower inside spread makes every transaction less profitable for market makers, traditional dealers, especially the smaller ones, must either find new revenue sources or exit the market.
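The compression mechanism can be made concrete with a small sketch (the quotes are hypothetical). The inside spread is the gap between the best bid and the best ask across all quoters, so PTFs quoting inside the dealers’ prices can only narrow it:

```python
# Illustrative sketch: the inside spread across competing market makers.
# Quotes are hypothetical (bid, ask) pairs, one per liquidity provider.
def inside_spread(quotes):
    """Best ask minus best bid across all quoters."""
    best_bid = max(bid for bid, _ in quotes)
    best_ask = min(ask for _, ask in quotes)
    return best_ask - best_bid

dealers_only = [(99.90, 100.10), (99.88, 100.12)]
with_ptfs = dealers_only + [(99.97, 100.03), (99.96, 100.05)]

print(round(inside_spread(dealers_only), 2))  # 0.2
print(round(inside_spread(with_ptfs), 2))     # 0.06
```

The narrower spread is the compressed “price of liquidity” described in the text; it is also the dealers’ compressed profit margin, which is why the smaller ones exit.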

During a financial crisis, such as the post-COVID-19 turmoil in the financial market, these developments can lead to extremely high and volatile prices. The increased role of PTFs in the FX market could push smaller dealers to exit. Reduced profitability forces traditional FX dealers to adopt a new business model, but small dealers are mostly unable to make the necessary changes to remain competitive. Because a narrower inside spread reduces dealers’ compensation for providing liquidity, their willingness to carry exchange rate risk has correspondingly declined. Additionally, the post-GFC regulatory reforms reduced dealers’ balance sheet capacity by requiring larger capital buffers. Scarce balance sheet space has increased the opportunity cost of dealing.

Further, narrower inside spreads and the increased cost of dealing have encouraged FX dealers to offer prime brokerage services to leveraged institutional investors. The goal is to generate new revenue streams through fixed fees. PTFs have used prime brokerage to access the inter-dealer market and compete against small and medium dealers as liquidity providers. Order flow internalization is another strategy that large dealers have used to increase profitability. Rather than immediately hedge FX exposures in the inter-dealer market, dealers can wait for offsetting order flow from their client bases to balance their inventories, an efficient method of reducing fixed transaction costs. However, greater internalization reinforces the concentration of dealing in just a few large banks, as smaller dealers do not have the order flow volume to internalize a comparable percentage of trades.

Algorithmic traders could also intensify the riskiness of the market for FX derivatives. Compared to the small FX dealers they are replacing, algorithmic market makers face greater risk from hedging markets and exposure to volatile currencies. According to Mehrling’s FX dealer model, matched book dealers primarily use the forward market to hedge their positions in spot or swap markets and mitigate exchange rate risk. On the other hand, PTFs concentrate more on market-making activity in forward markets and use a diverse array of asset classes to hedge these exposures. Hedging across asset classes introduces more correlation risk—the likelihood of loss from a disparity between the estimated and actual correlation between two assets—than a traditional forward contract hedge. Since the provision of market liquidity relies on dealers’ ability to hedge their currency risk exposures, greater correlation risk in hedging markets is a systemic threat to the FX market’s smooth functioning. Additionally, PTFs supply more liquidity in EME currency markets, which have traditionally been illiquid and volatile compared to the major currencies. In combination with greater risk from hedging across asset classes, exposure to volatile currencies increases the probability of an adverse shock disrupting FX markets.
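The difference between the two hedging styles can be put in numbers. Below is a minimal sketch using the textbook two-asset hedge-variance formula; the volatilities, correlations, and hedge ratio are hypothetical:

```python
# Variance of a position hedged with h units of another asset:
#   var = s1^2 + (h*s2)^2 - 2*h*rho*s1*s2
# A cross-asset hedge sized on an *estimated* correlation leaves residual
# risk whenever the realized correlation differs -- correlation risk.
def hedged_variance(s1: float, s2: float, h: float, rho: float) -> float:
    """Variance of (position - h * hedge) given vols s1, s2 and correlation rho."""
    return s1 ** 2 + (h * s2) ** 2 - 2 * h * rho * s1 * s2

s1 = s2 = 0.10         # 10% volatility on both legs (hypothetical)
h = 0.9 * s1 / s2      # minimum-variance hedge ratio if rho really were 0.9

print(round(hedged_variance(s1, s2, h, 0.9), 6))  # 0.0019: small residual
print(round(hedged_variance(s1, s2, h, 0.5), 6))  # 0.0091: ~5x worse if rho drops
```

A forward contract on the same currency is closer to the rho = 1 case, which is why the traditional matched-book hedge carries far less correlation risk than a PTF’s cross-asset hedge.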

While correlation risk and exposure to volatile currencies have increased, the new FX market makers lack the safety buffers that help traditional FX dealers mitigate shocks. Because the PTF market-making model uses high transaction speed in place of balance sheet capacity, there is little buffer to absorb losses from an adverse exchange rate movement. Hence, algorithmic market makers are even more inclined than traditional dealers to pursue a balanced inventory. Since market liquidity, particularly during times of significant imbalances in supply and demand, hinges on market makers’ willingness and ability to take inventory risks, a lack of risk tolerance among PTFs harms market robustness. Moreover, the algorithms that govern PTF market-making tend to withdraw from markets altogether after aggressively offloading their positions in the face of uncertainty. This destabilizing feature of algorithmic trading catalyzed the 2010 Flash Crash in the stock market. Although the Flash Crash lasted only 30 minutes, flighty algorithms’ tendency to prematurely withdraw liquidity has the potential to spur more enduring market dislocations.

The weakening inter-dealer market will compound any dislocations that may occur as a result of liquidity withdrawal by PTFs. When changing fundamentals drive one-sided order flow, dealers will not internalize trades, and they will have to mitigate their exposure in the inter-dealer FX market. Increased dealer concentration may reduce market-making capacity during these periods of stress, as inventory risks become more challenging to redistribute in a sparser inter-dealer market. During crisis times, the absence of small and medium dealers will disrupt the price discovery process. If dealers cannot appropriately price and transfer risks amongst themselves, then impaired market liquidity will persist and affect deficit agents’ ability to meet their FX liabilities.

For many years, the FX market’s foundation has been built upon a competitive and deep inter-dealer market. The current phase of electronification and financialization is pressuring this long-standing system. The inter-dealer market is declining in volume due to dealer consolidation and competition from non-bank liquidity providers. Because the new market makers lack the balance sheet capacity of traditional FX dealers and operate outside their regulatory constraints, their behavior in crisis times is less predictable. Moreover, the rise of non-bank market makers like PTFs has come at the expense of small and medium-sized FX dealers. Such a development undermines the economics of the dealer function and reduces dealers’ ability to normalize the market should algorithmic traders withdraw liquidity. As the FX market is further financialized and trading shifts to more volatile EME currencies, risks must be appropriately priced and transferred. The new market makers must be up to the task.

Jack Krupinski is currently a fourth-year student at UCLA, majoring in Mathematics/Economics with a minor in Statistics. He is pursuing an actuarial associateship and has passed the first two actuarial exams (Probability and Financial Mathematics). Jack is working to develop a statistical understanding of risk that can be applied in actuarial and research roles. Jack’s economic research interests involve using the “Money View” and empirical methods to analyze international finance and monetary policy.

Jack is currently working as a research assistant for Professor Roger Farmer in the economics department at UCLA and serves as a TA for the rerun of Prof. Mehrling’s Money and Banking Course on the IVY2.0 platform. In the past, he has co-authored blog posts about central bank digital currency and FX derivatives markets with Professor Saeidinezhad. Jack hopes to attend graduate school after receiving his UCLA degree in Spring 2021. Jack is a member of the club tennis team at UCLA, and he worked as a tennis instructor for four years before assuming his current role as a research assistant. His other hobbies include hiking, kayaking, basketball, reading, and baking.

Are the Banks Taking Off their Market-Making Hat to Become Brokers?

“A broker is foolish if he offers a price when there is nothing on the offer side good to the guy on the phone who wants to buy. We may have an offering, but we say none.” –Marcia Stigum


Before the slow but eventual repeal of Glass-Steagall in 1999, U.S. commercial banks were institutions whose mission was to accept deposits, make loans, and trade exempt securities. In other words, banks were Cecchetti’s “financial intermediaries.” The repeal of Glass-Steagall allowed banks to enter the securities arena so long as they became financial holding companies. More precisely, the Act permitted banks, securities firms, and insurance companies to affiliate with investment banks. Investment banks, also called non-bank dealers, were allowed to use their balance sheets to trade and underwrite both exempt and non-exempt securities and to make markets in both capital market and money market instruments. Becoming a dealer brought significant changes to the industry. First, unlike traditional banking, the activities of investment banks, or merchant banks, as the British call them, require considerably less capital. Second, the profit comes from quoting different bid and ask prices and underwriting new securities, rather than from earning fees.

However, the COVID-19 crisis has accelerated an existing trend in the banking industry. Recent transactions highlight a shift in the balance of power away from the investment banking arm and market-making operations. In the primary markets, banks are expanding their brokerage role to earn fees. In the secondary market, banks have started to transform their businesses and diversify away from market-making activities into fee-based brokerage lines such as cash management, credit cards, and retail savings accounts. Two of the underlying reasons behind this shift are “balance sheet constraints” and declining credit costs, which reduced banks’ profits as dealers and improved their fee-based businesses. From the “Money View” perspective, this shift in banks’ activities away from market-making and towards brokerage has repercussions. First, it adversely affects the state of “liquidity.” Second, it creates a less democratic financial market, as it excludes smaller agents from the benefits of the financial market. Finally, it disrupts payment flows, given the credit character of the payments system.

When a bank acts as a broker, its income depends on fee-based businesses such as monthly account fees and fees for late credit card payments, unauthorized overdrafts, mergers, and issuing IPOs. These fees are independent of the level of the interest rate. A broker puts together potential buyers and sellers from his sheet, much in the way that real estate brokers do with their listing sheets and client listings. Brokers keep lists of the prices bid by potential buyers and offered by potential sellers, and they look for matches. Goldman, Merrill, and Lehman, all big dealers in commercial paper, wear their agent hat almost all the time when they sell commercial paper. Dealers, by contrast, take positions themselves by expanding their balance sheets. They earn the spread between bid and ask prices (or interest rates). When a bank puts on its hat as a dealer (principal), the dealer is buying for and selling from its own position. Put another way, in a trade, the dealer is the customer’s counterparty, not its agent.
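The two hats imply two different income statements. A stylized sketch with hypothetical numbers: broker income scales with matches and fees, while dealer income scales with volume, the spread, and the balance sheet needed to carry positions:

```python
# Stylized contrast between the two hats (hypothetical fees, prices, volume).
def broker_income(n_matches: int, fee_per_match: float) -> float:
    """Agent: earns a fixed fee per match, carries no position."""
    return n_matches * fee_per_match

def dealer_income(volume: float, bid: float, ask: float) -> float:
    """Principal: earns the bid-ask spread on volume carried on its own book."""
    return volume * (ask - bid)

print(broker_income(100, 50.0))                           # 5000.0 in fees
print(round(dealer_income(1_000_000, 99.95, 100.05), 0))  # ~100000 in spread
```

The dealer’s larger gross income is not free: it requires balance sheet expansion and exposes the bank to price risk on its inventory, which is exactly what the fee-based broker model avoids.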

Moving towards brokerage activity has adverse effects on liquidity. Banks are maintaining their dealer role in the primary market while abandoning the secondary market. In the primary market, part of the banks’ role as market makers involves underwriting new issues. In this market, a bank acts as a one-sided dealer: since it only sells the newly issued securities, it does not provide liquidity. In the secondary market, however, banks act as two-sided dealers and supply liquidity. Dealer banks supply funding liquidity in the short-term money market and market liquidity in the long-term capital market. The mission is to earn spreads by constantly quoting bids and offers at which they are willing to buy and sell. Some of these quotes are to other dealers; in many sectors of the money market, there is an inside market among dealers.

The money market, as opposed to the bond market, is a wholesale market for high-quality, short-term debt instruments, or IOUs. In the money market, dealing banks make markets in many money market instruments. Money market instruments are credit elements that lend elasticity to the payment system. Deficit agents, who do not have adequate cash at the moment, have to borrow from the money market to make their payments. Money market dealers expand this elasticity daily and enable deficit agents to make payments to surplus agents. Given the credit element in payments, it is not stretching the truth to say that these short-term credit instruments, not reserves, are the actual ultimate means of payment. Money market dealers resolve the problem of managing payments by enabling deficit agents to make payments before they receive payments.

Further, when dealers trade, they usually do not even know who their counterparty is. If banks become brokers, however, they need to “fine-tune” quotes because it matters who is selling and buying. Brokers prefer to trade with big investors and reduce their ties with smaller businesses. This is what Stigum called “line problems.” She explains that if, for example, Citi London offered to sell 6-month money at the bid rate quoted by a broker, and the bidding bank then told the broker she was off and had forgotten to call, the broker would be committed to making good on the bid by finding Citi a buyer at that price, or by selling Citi’s money at a lower rate and paying a difference equal to the dollar amount Citi would lose by selling at that rate. Since brokers operate on thin margins, a broker would not be around long if she often got “stuffed.” Good brokers take care to avoid errors by choosing their counterparties carefully.
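Stigum’s “stuffed broker” example reduces to simple arithmetic. Below is a rough sketch assuming a money-market style Actual/360 day count; the rates and trade size are hypothetical, not Stigum’s own figures:

```python
# If the bidder backs away, the broker must place the seller's 6-month money
# at the best available (lower) rate and pay the seller the interest
# shortfall. The day count and all numbers are illustrative assumptions.
def broker_shortfall(principal: float, quoted_rate: float,
                     cover_rate: float, days: int = 180) -> float:
    """Dollar amount the broker owes the seller, Actual/360 convention."""
    return principal * (quoted_rate - cover_rate) * days / 360

# $10mm of 6-month money bid at 5.00% but only placeable at 4.95%
print(round(broker_shortfall(10_000_000, 0.0500, 0.0495), 2))  # 2500.0
```

A 5-basis-point miss on a $10mm trade costs the broker $2,500, which on brokerage margins can wipe out many trades’ worth of fees; hence the care in choosing counterparties.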

After the COVID-19 pandemic, falling interest rates, lower overall demand for credit, and regulatory requirements that limit the use of balance sheets have reduced banks’ profits as dealers. In the meantime, banks’ fee-based businesses, which include credit card late fees, public offerings, and mergers, have become more attractive. The point to emphasize here is that the brokerage business does not include providing liquidity and making the market. Dealer banks, on the other hand, generate revenues by supplying funding and market liquidity in the money and capital markets. Further, brokers tend to trade only with large corporations, while dealers’ decisions to supply liquidity usually do not depend on who their counterparty is. Finally, the payment system is much closer to an ideal credit payment system than to an ideal money payment system. In this system, the liquidity of money market instruments is the key to a well-functioning payment system. Modern banks may wear one of two hats, agent (broker) or principal (dealer), in dealing with financial market instruments. The problem is that only one of these hats allows banks to make the market, facilitate the payment system, and democratize access to the credit market.

Elham Saeidinezhad is Term Assistant Professor of Economics at Barnard College, Columbia University. Previously, Elham taught at UCLA and served as a research economist in the International Finance and Macroeconomics research group at the Milken Institute, Santa Monica, where she investigated the post-crisis structural changes in the capital market as a result of macroprudential regulations. Before that, she was a postdoctoral fellow at INET, working closely with Prof. Perry Mehrling and studying his “Money View.” Elham obtained her Ph.D. in empirical Macroeconomics from the University of Sheffield, UK, in 2013. You may contact Elham via the Young Scholars Directory.

The Paradox of Yield Curve: Why is the Fed Willing to Flatten the Curve but Not Control It?

“From long experience, Fed technicians knew that the Fed could not control money supply with the precision envisioned in textbooks.” –Marcia Stigum


By Elham Saeidinezhad – In the last decade, monetary policy has wrestled with the problem of low inflation and has become a tale of three cities: the interest rate, asset purchasing, and the yield curve. The fight to reach the Fed’s inflation target started by lowering the overnight federal funds rate to a historically low level. The so-called “zero lower bound” restriction pushed the Fed towards alternative policy tools, including large-scale purchases of financial assets (“quantitative and qualitative easing”). This policy had several elements: first, a commitment to massive asset purchases that would increase the monetary base; second, a promise to lengthen the maturity of the central bank’s holdings and flatten the yield curve. In combination with low inflation (actual and expected), however, such actions have translated into persistently low real interest rates at both the long and short ends of the yield curve and, at times, into its inversion. The “whatever it takes” large-scale asset purchasing programs of central banks were pushing long-term yields towards negative territory. Outside the U.S., and especially in Japan, central banks stepped up their fight against deflation by adopting a new policy called yield curve control, which explicitly puts a cap on long-term rates. Even though the Fed has so far resisted following in the Bank of Japan’s footsteps, yield curve control is the first move towards building the world that “Money View” re-imagines for central banking. Yield curve control enables the Fed to assume its “dealer of last resort” role and increase its leverage over the yield curve, traditionally private dealers’ territory, without creating repeated dislocations in the private credit market.

To understand this point, let’s start by translating monetary policy’s evolution into the language of Money View. In traditional monetary policy, the Fed uses its control of reserves (at the top of the hierarchy of money) to affect credit expansion (at the bottom of the hierarchy). It also controls the fed funds rate (at the short end of the term structure) in an attempt to influence the bond rate of interest (at the long end). When credit is growing too rapidly, the Fed raises the federal funds target to impose discipline on the financial market. In standard times, this would immediately lower money market dealers’ profit. This kind of dealer borrows in the overnight funding market to lend in the term (i.e., three-month) market. The goal is to earn the liquidity spread.

After the Fed’s implementation of contractionary monetary policy, to compensate for the higher financing cost, money market dealers raise the term interest rate by the full amount (and perhaps a bit more, to compensate for anticipated future tightening as well). This term rate is the funding cost for another kind of dealer, the security dealer. Security dealers borrow from the term market (the repo market) to lend to the long-term capital market. Such operations involve the purchase of securities, which requires financing. A higher funding cost implies that security dealers are willing to hold existing security inventories only at a lower price, which increases the long-term yield. This chain of events sketches a monetary policy transmission that happens through the yield curve. The point to emphasize here is that in determining the yield curve, the private credit market, not the Fed, sets rates and prices. The Fed has only some leverage over the system.
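The transmission chain above can be sketched with hypothetical numbers (none of them the Fed’s): money market dealers pass a policy hike into the term rate; security dealers, funding bond inventories at that term rate, then bid less for bonds, which raises the long yield. A zero-coupon bond keeps the arithmetic simple:

```python
# Illustrative transmission arithmetic with hypothetical rates: a 25bp hike
# travels from the overnight market, through money market dealers, to
# security dealers' funding cost, and finally into long-term bond prices.
def zero_coupon_price(face: float, yield_rate: float, years: float) -> float:
    """Price of a zero-coupon bond discounted at a flat annual yield."""
    return face / (1 + yield_rate) ** years

overnight = 0.010
hike = 0.0025                          # a 25bp tightening (hypothetical)
term_rate = overnight + hike + 0.0010  # dealers pass it through, plus a cushion
long_yield = 0.030
long_yield_after = long_yield + hike   # funding pressure pushed into the long end

print(round(term_rate, 4))                                     # 0.0135
print(round(zero_coupon_price(100, long_yield, 10), 2))        # 74.41
print(round(zero_coupon_price(100, long_yield_after, 10), 2))  # 72.63
```

Note that each step of the pass-through is a private dealer’s pricing decision, which is the article’s point: the Fed moves the overnight rate, but the credit market sets the curve.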

After the GFC, as rates hit the zero lower bound, the Fed started to lose its leverage. In a very low-interest-rate environment, preferences shift in favor of money and against securities. One way to put it is that surplus agents become reluctant to “delay settlement” and reduce their credit market investment. They do not want promises to pay (i.e., holding securities); they want money instead. In this environment, to keep making markets and providing liquidity, money market and security dealers, who borrow to finance their short-term and long-term inventories, respectively, must be able to buy time. During this extended period, prices are pushed away from equilibrium. Ordinarily, market makers facing this kind of trouble turn to the banks for refinancing. After the GFC, however, very low interest rates meant that the banks themselves ran into trouble.

In a normal crisis, as the dealer system absorbs into its balance sheet the imbalances caused by this shift in preferences, the Fed tries to do the same thing and take the problem off the balance sheet of the banking system, usually by expanding its own balance sheet. The Fed’s willingness to lend to the banks at a rate lower than they would lend to each other makes it possible for the banks to lend to the dealers at a rate lower than they would otherwise charge. Putting a ceiling on the money rate of interest thus indirectly puts a floor on asset prices. In a severe crisis, however, this transmission usually breaks down. That is why, after the GFC, the Fed used its leverage to put a floor on asset prices directly by buying them, rather than indirectly by helping the banks finance dealers’ purchases.

The fundamental question is whether the Fed has any leverage over the private dealing system when interest rates are historically low. The Fed’s advantage is that it creates reserves, so there can be no short squeeze on the Fed. When the Fed helps the banks, it expands reserves, and hence the money supply grows. We have seen that the market makers are long securities and short cash. What the Fed does is backstop those short positions by shorting cash itself. However, the Fed’s leverage over the private dealer system is asymmetric. The Fed’s magic mostly works when it decides to increase elasticity in the credit market; it has lost its alchemy for creating discipline when needed. When rates are already very low, credit contraction happens neither quickly nor easily if the Fed raises rates by a few basis points. Only if the Fed raises rates high enough can it gain real leverage over this system and cause a credit contraction. Short of an aggressive rate hike, the dealer system widens its spread slightly, but not enough to change the quantity of credit supplied. In other words, the Fed’s actions do not translate automatically into a chain of credit contraction, and the Fed does not control the yield curve. The Fed knows that, and that is why it has entered large-scale asset purchasing programs. But it is tactful yet minimal purchases of long-term assets, rather than massive ones, that can restore the Fed’s control over the yield curve. Otherwise, the Fed’s actions could push long-term rates into negative territory and lead to a constant inversion of the yield curve.

Yield curve control aims at controlling interest rates along some portion of the yield curve. Its design combines elements of interest rate policy and the asset purchasing program: like interest rate policy, it targets short-term interest rates; like the asset purchasing program, it aims at controlling long-term interest rates. Mainly, however, it incorporates essential elements of a “channel” or “corridor” system. The policy targets longer-term rates directly by imposing interest rate caps on particular maturities. As in a corridor system, the target for the long-term yield is typically set within a bound created by a target price that establishes a floor for the long-term assets. Because bond prices and yields are inversely related, this price floor implies a yield ceiling for the targeted maturities. If bond prices (yields) of targeted maturities remain above (below) the floor (ceiling), the central bank does nothing. If prices fall below the floor (yields rise above the ceiling), the central bank buys targeted-maturity bonds, increasing demand and pushing the bonds’ price back up. This approach requires the central bank to use a powerful tool tactfully rather than massively: it intervenes only when interest rates at the targeted maturities exceed the target rates. Such a strategy reduces the central bank’s footprint in the capital market and prevents the yield curve inversions that have become typical episodes since the GFC.
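The intervention rule described above reduces to a single conditional; here is a minimal sketch, with a hypothetical yield cap:

```python
# Minimal sketch of a yield curve control rule: the central bank caps
# the yield on a targeted maturity (equivalently, floors its price) and
# intervenes only when the cap is breached. Numbers are hypothetical.

def ycc_intervention(market_yield, yield_cap):
    if market_yield <= yield_cap:
        return "do nothing"               # price above floor, yield below cap
    return "buy targeted-maturity bonds"  # buying raises the price, pulls the yield down

# Hypothetical 0.25% cap on the ten-year yield:
print(ycc_intervention(market_yield=0.0022, yield_cap=0.0025))  # do nothing
print(ycc_intervention(market_yield=0.0031, yield_cap=0.0025))  # buy targeted-maturity bonds
```

The asymmetry is the point: unlike a fixed-size purchase program, the rule commits to buying in unlimited size only in the breach state, which is why a credible cap can require few actual purchases.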

The “paradox of the yield curve” is that the Fed’s hesitation to adopt yield curve control to regulate longer-term rates contradicts its own reasoning for introducing a corridor framework to control the overnight rate. Once the FOMC determines a target interest rate, the Fed sets the discount rate above that target and the interest-on-reserves rate below it. These two rates form a “corridor” that contains the market interest rate; the target rate is often (but not always) set in the middle of the corridor. Open market operations are then used as needed to change the supply of reserve balances so that the market interest rate stays as close as possible to the target. A corridor operating framework can help a central bank achieve a target policy rate in an environment in which reserves are anything but scarce and the central bank has used its balance sheet as a policy instrument independent of the policy interest rate.
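The corridor’s containment logic can be written down directly; the rates below are hypothetical:

```python
# Sketch of the corridor operating framework: the discount rate caps the
# market rate (banks can always borrow from the Fed at the ceiling) and
# the interest-on-reserves rate floors it (banks will not lend below
# what the Fed pays on reserves). All rates are hypothetical.

def corridor_rate(market_rate, ior_rate, discount_rate):
    """Where the market rate ends up once the corridor binds."""
    return min(max(market_rate, ior_rate), discount_rate)

ior, discount = 0.0100, 0.0200  # target often (not always) at the midpoint
for r in (0.0050, 0.0150, 0.0250):
    print(corridor_rate(r, ior, discount))
```

A rate that would clear below the floor is pulled up to the interest-on-reserves rate, one that would clear above the ceiling is pulled down to the discount rate, and anything in between trades freely; yield curve control applies the same clamp to a longer maturity.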

In the world of Money View, the corridor system has the advantage of enabling the Fed to act as a value-based dealer, or as Mehrling put it, a “dealer of last resort,” without massively purchasing assets and constantly distorting asset prices. The value-based dealer’s primary role is to put a ceiling and a floor on asset prices when the dealer system has already reached its finance limits. Such a system can effectively stabilize the rate near its target. Stigum made clear that standard economic theory has no perfect answer to how the Fed gets leverage over the real economy. The question is why the Fed is willing to embrace frameworks that flatten the yield curve yet hesitates to adopt yield curve control, which explicitly puts a cap on long-term rates.

Elham Saeidinezhad is Term Assistant Professor of Economics at Barnard College, Columbia University. Previously, Elham taught at UCLA and served as a research economist in the International Finance and Macroeconomics research group at the Milken Institute, Santa Monica, where she investigated the post-crisis structural changes in the capital market resulting from macroprudential regulations. Before that, she was a postdoctoral fellow at INET, working closely with Prof. Perry Mehrling and studying his “Money View.” Elham obtained her Ph.D. in empirical Macroeconomics from the University of Sheffield, UK, in 2013. You may contact Elham via the Young Scholars Directory.

Is the New Chapter for the Monetary Policy Framework Too Old to Succeed?

“Money does not manage itself.”

–Walter Bagehot


By Elham Saeidinezhad – In this year’s Jackson Hole meeting, the Fed announced a formal shift away from its previously articulated longer-run inflation objective of 2 percent toward achieving inflation that averages 2 percent over time. The new accord aims at addressing the shortfalls of a low “natural rate” and persistently low inflation. More or less, all academic debates at that meeting were organized as arguments about the appropriate quantitative settings for a Taylor rule. The rule’s underlying idea is that the market tends to set the nominal interest rate equal to the natural rate plus expected inflation. The Fed’s role is to stabilize long-run inflation by changing the short-term federal funds rate whenever inflation deviates from the target. The Fed believes that the recent secular decline in natural rates relative to the historical average has constrained the federal funds rate. The expectation is that by tolerating a temporary overshoot of the longer-run objective after periods when inflation has run persistently below 2 percent, the Fed can keep inflation and inflation expectations centered on 2 percent, addressing the framework’s constant failure and restoring the magic of central banking. However, the enduring problem with Taylor rule-based monetary policy frameworks, including this recent one, is that they ask the Fed to overlook lasting trends in the credit market and focus only on developments in the real economy, such as inflation or past inflation deviations, when setting short-term interest rates. Rectifying such blind spots is what Money View scholars were hoping for when the Fed announced its intention to review the monetary policy framework.
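For readers who want the rule explicit, here is the standard Taylor (1993) specification, with Taylor’s original 0.5 response coefficients; the input values are illustrative:

```python
# The Taylor rule referenced in the text: the nominal policy rate equals
# the natural rate plus inflation, adjusted for the inflation gap and the
# output gap. Coefficients 0.5/0.5 are Taylor's (1993) originals.

def taylor_rate(natural_rate, inflation, inflation_target, output_gap,
                a_pi=0.5, a_y=0.5):
    return (natural_rate + inflation
            + a_pi * (inflation - inflation_target)
            + a_y * output_gap)

# With a 2% natural rate, inflation at target, and a closed output gap,
# the rule prescribes a 4% nominal funds rate; a lower natural rate
# mechanically drags the prescribed rate toward the zero lower bound.
print(taylor_rate(natural_rate=0.02, inflation=0.02,
                  inflation_target=0.02, output_gap=0.0))
```

Note that every input to the rule is a real-economy variable; nothing in it responds to credit market conditions, which is precisely the blind spot the text describes.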

The logic behind the new framework, known as the average inflation targeting strategy, is that undershooting inflation makes achieving the target unlikely in the future, as it pushes inflation expectations below the target. This being the case, after a long period of inflation undershooting the target, the Fed should act to undo the undershoot by overshooting the target for some time. The Fed sold average (or forecast) targeting to the public as a better way of accomplishing its mandate than the alternative strategies because the new framework makes the Fed more “history-dependent.” Translated into the Money View language, however, the new inflation-targeting approach merely delays the process of imposing excessive discipline in the money market when the consumer price index rises faster than the inflation target, and of providing excessive elasticity when prices grow more slowly than the target.
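The makeup logic can be sketched as follows; the inflation path and makeup window are hypothetical, chosen only to show the arithmetic:

```python
# Sketch of average inflation targeting: after a run of undershoots, the
# near-term implied target overshoots 2% by just enough that the average
# over the whole window returns to 2%. The path is hypothetical.

TARGET = 0.02

def makeup_target(past_inflation, makeup_horizon):
    """Inflation rate over the next `makeup_horizon` periods that
    restores the whole-window average to TARGET."""
    shortfall = sum(TARGET - pi for pi in past_inflation)
    return TARGET + shortfall / makeup_horizon

# Three years of 1.5% inflation, made up over the next three years:
implied = makeup_target([0.015, 0.015, 0.015], makeup_horizon=3)
print(round(implied, 4))  # 0.025, i.e., tolerate 2.5% for three years
```

The “history-dependence” is visible in the formula: the prescribed overshoot depends entirely on accumulated past deviations, not on any credit market variable.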

From the Money View perspective, the idea that the interest rate should not take private credit market trends into account will undermine central banking’s power in the future, as it has done in the past. The problem we face is not that the Fed failed to follow an appropriate version of the Taylor rule. Rather, and most critically, these policies tend to abstract from the plumbing behind the wall, namely the payment system, by disregarding the credit market. Such a bias may not have been significant in the old days, when the payment system was mostly a reserve-based system. In that world, even though it was mostly involuntary, the Fed used to manage the payment system through its daily interventions in the market for reserves. In the modern financial system, however, the payment system is a credit system, and its quality depends on the level of elasticity and discipline in the private credit market.

The long dominance of the economics and finance views implies that modern policymakers have lost sight of the Fed’s historical mission to manage the balance between discipline and elasticity in the payment system. Instead of monitoring that balance in the credit market, the modern Fed attempts to keep the bank rate of interest in line with an ideal “natural rate” of interest, introduced by Knut Wicksell. In the Wicksellians’ world, in contrast to the Money View, securing the continuous flow of credit through the payment system is not part of the Fed’s mandate. Instead, the Fed’s primary function is to ensure it does not choose a “money rate” of interest different from the “natural rate” of interest (the profit rate on capital). If the money rate is lower, the differential creates an incentive for new capital investment, and the new spending tends to cause inflation. If prices are rising, the money rate is too low and should be increased; if prices are falling, the money rate is too high and should be decreased. To sum up, Wicksellians do not consider private credit to be intrinsically unstable; inflation, instead, is viewed as the source of instability. Further, they see no systemic relation between the payment system and the credit market, as the payment system simply reflects the level of transactions in the real economy.

The clash between the standard economic view and the Money View is a battle between two different world views. Wicksell’s academic way of looking at the world had clear implications for monetary policy: set the money rate equal to the natural rate, then stand back and let markets work. Unfortunately, the natural rate is not observable, but missed payments and higher borrowing costs are. From the Money View perspective, the Fed should use its alchemy to strike a balance between elasticity and discipline in the credit market so as to ensure a continuous payment system. The Money View barometer for understanding the credit market cycle is asset prices, another observable variable. Since a crash can occur in commodities, financial assets, and even real assets, the Money View does not tell us which assets to watch. However, it emphasizes that assets not supported by a dealer system (such as residential housing) are more vulnerable to changes in credit conditions. These assets are the most likely to become overvalued on the upside and to suffer the most extensive correction on the downside. A central bank that understands its role as setting interest rates to meet inflation targets tends to exacerbate this natural tendency toward instability: while looking for a natural rate of interest, such policymakers can create unnaturally excessive discipline when credit conditions are already tight, or vice versa.
