CFA Institute

Times Change: The Era of the Private Equity Denominator Effect

After private equity’s extraordinary performance in 2021, private market valuations decoupled from those of both public equities and bonds in 2022, leaving many institutional investors over-allocated to private markets. This is the so-called denominator effect, whereby private asset allocations exceed the percentage threshold established in an allocation policy and must be corrected. At the same time, a negative cash flow cycle has reduced anticipated liquidity that latent paper losses in traditional asset portfolios had already compressed, making portfolio adjustment decisions even more challenging. Last year’s data show that the rebound in equity prices and the pause in interest rate hikes provided some relief, but they have not solved the private market liquidity issue or addressed the denominator effect’s implications. Liquidity needs drove a significant increase in limited partner (LP)-led secondary sales in 2023, according to recent Lazard research.

The economic paradigm may have changed and will remain uncertain. Given the potential for higher-for-longer interest rates, NAV staleness, and a negative cash flow cycle, the denominator effect may become more systematic in LP portfolios and force LPs to make more frequent allocation and liquidity decisions. So, what are some traditional strategies for addressing the denominator effect in private equity, and are there other, more innovative and efficient risk-transfer approaches available today?

The Current PE Denominator Effect

While 2021 was a year of extraordinary PE outperformance, 2022 was the real outlier: private markets showed unprecedented relative performance and valuation divergence from their public counterparts. A reverse divergence followed in 2023, with the highest negative return difference ever recorded, but it did not offset the current denominator effects. According to Cliffwater research, PE returned 54% in 2021, compared with 42% for public equities.
The following year, PE generated 21%, outperforming stocks by 36 percentage points. In 2023, however, PE returned only 0.8% compared with 17.5% for equities.

Impact of the Denominator Effect

For investors building up an allocation in PE who have not yet reached their target, the denominator effect, albeit painful from the standpoint of negative overall performance, could accelerate the optimal portfolio construction process. For the (many) other investors with a near-to-optimal allocation, and a related overcommitment strategy, the emergence of the denominator effect traditionally implies the following:

- Reduced allocations to current and possibly future vintages. Negative impact: lower future returns; out-of-balance vintage diversification.
- Smoothed compounding effect of PE returns amid curtailed reinvestment. Negative impact: lower returns.
- Latent/potential negative risk premium of the PE portfolio, since NAV staleness, which protected the downside, may limit the “upside elasticity” that accompanies any market rebound. Negative impact: compromised risk diversification; suboptimal asset allocation dynamics; potential impact on future return targets.
- Crystallization of losses. Negative impact: lower current returns; unbalanced vintage diversification.

Tackling the Denominator Effect

Investors counter the denominator effect with various portfolio rebalancing strategies based on their specific targets, constraints, and obligations. Traditionally, they either wait or sell the assets in the secondary market. Recently introduced collateralized fund obligations (CFOs) have given investors an additional, if more complex, tool for taking on the denominator effect.

1. The Wait-and-See Strategy

Investors with well-informed boards and flexible governance could rebalance their overall portfolio allocation with this technique. Often, the wait-and-see strategy involves adopting wider target allocation bands and reducing future commitments to private funds.
Wider bands make market volatility more tolerable and reduce the need for automatic, policy-driven adjustments. Of course, the wait-and-see strategy assumes that market valuations will mean revert within a given time frame. Cash flow simulations under different scenarios can examine how various commitment pacing strategies might, in theory, navigate different market conditions.

In practice, commitment pacing strategies are inherently rigid. Why? Because no adjustment can alter stipulated commitments, legacy portfolio NAVs, or the future cash flows thereof. Funding risk is a function of market risk, but private market participants have neglected this for two reasons: the secular abundance of liquidity, and the cash flow–based valuation perspective, which has limited structural sensitivity to market risk. Internal rates of return (IRRs) and multiples can’t be compared with time-weighted traditional asset returns. Moreover, NAVs have historically carried uneven information about market risk since they are non-systematically marked to market across funds. What does this mean? It indicates an unmeasured, implicit possibility that the existing stock of private asset investments is overvalued and that a negative risk premium could result, with private asset valuations rebounding less acutely than those of public assets.

According to Cliffwater commentary and analysis, private equity delivered a significant negative risk premium in 2023. As of June 2022, the long-term annual outperformance of PE vs. public stocks was worth 5.6 percentage points (11.4% – 5.8%), with excess performance of 12% and 36% for 2021 and 2022, respectively. The public markets rebounded through June 2023 by 17.5% compared with private equity’s 0.8%. As a consequence, the reported long-term trends adjust to 11% for PE and 6.2% for the public markets, and to 4.8% for the derived outperformance.
Compared with the 17.5% return of public stocks, there is a negative risk premium impact of 16.7% on the value of balance sheet assets, for which long-term outperformance data do not matter. The allocation strategy is long term, but an actual PE portfolio’s valuation is not: its true economics are a function of its actual liquidation and turnover terms.

Patience may be neither mandatory nor beneficial. Whether to hold on to private assets should always be considered from the expected risk premium perspective. Notably, the consequent reduction in future commitments, associated with negative cash flow cycles, may further reduce the benefits of return compounding for private assets.

2. The Secondary Sale Strategy

Investors may tap into secondary market liquidity by selling their private market stakes through LP-led secondaries, in which an LP sells its fund interests to another LP. Although such sales provide investors with liquidity and cash in hand, which is critical given reduced fund distributions, in 2022 LPs could sell their PE assets at an average of only 81% of NAV, according to Jefferies. By selling in the
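The allocation drift behind the denominator effect reduces to simple arithmetic. The sketch below uses invented figures (policy target, returns, markdowns), not data from the article:

```python
# A minimal sketch of the denominator effect. All figures are assumed
# for illustration, not sourced from the article.

def private_weight(public_value, private_value):
    """Private assets as a share of total portfolio value."""
    return private_value / (public_value + private_value)

# A $100M portfolio at its 20% policy target for private equity.
public, private = 80.0, 20.0

# A 2022-style year: public assets fall 18%, while stale private NAVs
# are marked down only 2%.
public_after = public * (1 - 0.18)    # 65.6
private_after = private * (1 - 0.02)  # 19.6

w = private_weight(public_after, private_after)
print(f"Private allocation drifts from 20.0% to {w:.1%}")  # ≈ 23.0%
```

In this hypothetical, rebalancing back to the 20% policy weight would require selling roughly $2.6M of private assets into a discounted secondary market, which is why the effect forces the liquidity decisions described above.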


The FX Global Code: Why Now Is the Time

Not for syndication. This article cannot be republished without the express permission of RBC Global Asset Management Inc. What if I told you that the largest, most liquid market in the world is also one of the least understood? Its $2.1 trillion daily spot turnover dwarfs that of bonds or equities, and all its transactions are conducted over the counter (OTC). The market also connects thousands of participants in 52 different jurisdictions and facilitates a further $5 trillion daily in forwards, swaps, and options, in addition to spot transactions. I am talking, of course, about the highly fragmented foreign exchange (FX) market. Such a large and interconnected market should operate in an open, liquid, fair, robust, and transparent manner. Since the global financial crisis (GFC), the FX market’s daily turnover has approximately doubled. That has raised expectations regarding transparency and liquidity and has necessitated increased oversight. I have had a front-row seat to the FX market’s evolution over the last 20 years and recall all too well how it often made headlines for all the wrong reasons as information imbalances between dealers and clients led to abuses. “Suspicion of Forex Gouging Spreads,” the Wall Street Journal blared in February 2011: “Some of the largest investment firms in the U.S. have been overcharged by banks for currency trades, bank insiders and others claim, broadening the scope of alleged abuses in pockets of the $4 trillion foreign-exchange market.” In response to such excesses, G10 central bank governors launched a global initiative to establish the FX Global Code (“the Code”) in May 2015. Over the next several years, representatives from 16 central banks, in collaboration with private market participants from both the buy-side and sell-side, drafted a comprehensive document. RBC Global Asset Management (RBC GAM) participated in one of the working groups. 
The final 70-plus-page document, published in 2018, went beyond ethics to embody industry best practices. Organized around six leading principles, the Code outlined what market participants expected from themselves and each other:

Ethics: “To behave in an ethical and professional manner to promote the fairness and integrity of the FX market.”

Governance: “To have a sound and effective governance framework to provide for clear responsibility for and comprehensive oversight of their FX market activity, and to promote responsible engagement in the FX market.”

Execution: “To exercise care when negotiating and executing transactions.”

Information Sharing: “To be clear and accurate in their communications and to protect confidential information.”

Risk Management and Compliance: “To promote and maintain a robust control and compliance environment to effectively identify, manage, and report on the risks associated with their engagement in the FX market.”

Confirmation and Settlement Processes: “To put in place robust, efficient, transparent, and risk-mitigating post-trade processes to promote the predictable, smooth, and timely settlement of transactions in the FX market.”

The Code is not part of regulatory frameworks in most jurisdictions, so adherence to it is voluntary and signifies the participant’s commitment to good governance and good practices as well as promoting fair, transparent, liquid, and robust markets. The Code is meant to apply to all wholesale FX market participants — both buy- and sell-side as well as trading venues and other entities that provide brokerage and execution services. The Code allows for proportional implementation, however, as specific circumstances and variations in business activities may dictate. This acknowledges that dealers’ activities are inherently different from those of asset managers, corporations, or central banks, and not every principle applies to every participant.
For example, as an asset manager, RBC GAM doesn’t make markets for clients and doesn’t conduct any proprietary trading on behalf of the firm, so many of the sell-side rules don’t apply to us. Determining which principles apply is the first step before a market participant can confirm adherence to the Code. As a living document, the Code is maintained and updated to reflect market changes, which is a key objective of the Global Foreign Exchange Committee (GFXC). The GFXC website is an excellent resource for information and tools to facilitate adoption. The original 2018 version of the Code was updated in 2021, and with each triennial revision, participants are expected to reaffirm their commitment to the latest document.

In the four years since the Code’s release, most sell-side FX market participants have signed on. Buy-side adoption, however, has been slow to follow. Limited resources, the small share of their business that FX represents, the Code’s voluntary nature, and the perception that it’s a “sell-side thing” are among the reasons cited for the poor buy-side uptake. Having worked as a portfolio manager for more than 20 years, I find this perplexing. We have relied on our in-house FX desk for execution for more than 25 years at RBC GAM. Based on our experience, we believe that as an ecosystem, the FX market requires all participants to know, follow, enforce, and uphold the principles. We care about best execution in FX just as we do in fixed income and equities: It’s an important part of our governance framework.

So, how has adhering to the Code helped us?

- It has become a training and education tool for new members of our FX, trade support, and operations teams and is part of our onboarding materials.
- It has prompted a review of our policies and an in-depth discussion about the Code’s applicability, which has strengthened our understanding of how the market functions as well as its best practices.
- It has empowered our trading staff to demand best execution practices, and all our counterparties must sign the Code.
- It has enabled us to continuously improve our policies and procedures. Each update has removed ambiguity.
- It has increased our confidence in our internal policies and procedures and highlighted the strength of our governance framework to clients.

As corporations and asset managers look to demonstrate their commitment to environmental, social, and governance (ESG) values, they should embrace the opportunity for a thorough review of the governance framework supporting their FX business. Signing the Code has also benefitted our clients.


The FX Swap Market: Growing in the Shadows

Introduction

The foreign exchange (FX) swap market generates almost $4 trillion in new contracts on any given day. To put that in perspective, imagine global equities had a daily trading volume of 12 billion. Such an enormous market ought to be both transparent and well regulated. Yet the rapidly expanding FX swap market is neither; it is instead exceedingly opaque, with many key statistics hard or impossible to find.

[Figure: Global Foreign Exchange Market Turnover: Instruments. Source: “Triennial Central Bank Survey of Foreign Exchange and Over-the-Counter (OTC) Derivatives Markets in 2022,” Bank for International Settlements (BIS)]

How Do FX Swaps Work?

FX swaps are derivatives through which counterparties exchange two currencies: one party borrows a currency and simultaneously lends another. The amount a party must later repay is fixed at the start of the contract, and the counterparty’s repayment obligation serves as the transaction’s collateral. FX swaps are thus an easy way for a party to quickly obtain dollar or other foreign-currency funds.

[Figure: FX Swaps: How They Work]

The on-balance-sheet currency gap is fully hedged by the off-balance-sheet FX swap: one counterparty obtains more lending in a foreign currency without an increase on its balance sheet. Though an FX swap in theory implies that the counterparties transact with each other, in fact, banks are the main intermediaries. When they receive a request from a client to hedge an exposure, banks source the funds through matched-book or reserve-draining intermediation. In the former, the banks finance expanded FX lending by increasing their repo borrowing and other liabilities. The main drawback of this approach is that it grows the bank’s balance sheet, which impacts its leverage ratio or liquidity coverage ratio. Since the global financial crisis (GFC), these Basel III ratios have been binding and costly.
Through reserve-draining intermediation, banks finance the dollar lending by reducing their excess reserve balances with the US Federal Reserve. This way the size of the balance sheet stays the same, and the bank avoids any potential Basel III regulatory implications. But there is more to the FX swap market: banks also conduct FX arbitrage and market making, so the real FX swap market resembles the following chart. Banks treat the three different positions — hedging, arbitrage, and market making — as fungible and simply manage the overall currency exposure across all their activities.

[Figure: FX Swaps: How They Work with Arbitrage and Market Making]

A Growing Market

Why is the FX swap market expanding at such a rapid clip? Profitability is one key factor. Banks lend dollars through FX derivatives that pay a dollar basis premium: what the banks earn on top of what they would accrue simply by lending in the money market. The dollar basis premium has been very lucrative, especially for banks with abundant dollar funding. At the same time, by turning to FX swaps, these banks accommodate their clients’ hedging requirements without affecting their Basel III ratios.

Technology is another often-overlooked contributor to the growing market. FX swaps are short-term instruments, with more than 90% maturing in under three months. Rolling the spot positions to the nearest date can impose an administrative burden; technology can automate many of these tasks and add other functionalities, such as automatic hedging and collateral management. Innovation is also disrupting how FX swaps are intermediated: phone-based dealing is declining, while electronic intermediation is expanding.

Such a large and lucrative market ought to be fiercely competitive. Yet US banks dominate, with the top 25 accounting for more than 80% of the positions. What explains this preeminence? Up to 90% of FX swaps involve the US dollar in one leg.
For example, a Dutch pension fund conducting a euro-to-yen FX swap would first swap euros into dollars and then dollars into yen.

Opaque and Fragile

The main risk posed by the FX swap market is a dollar squeeze. In this scenario, entities without access to Fed dollars acquire large, short-term payment obligations. When the market functions smoothly, these FX swaps can be rolled over. But amid increased market volatility, dollar funding may dry up, leaving non-US banks and entities scrambling to find dollars to make good on their commitments. During both the GFC and the COVID-19 pandemic, the Fed countered a dollar squeeze by providing swap lines to other central banks, funneling the needed dollars directly to them. However, these lines were extended with incomplete information, given the market’s opacity. Indeed, Dodd-Frank legislation exempted FX forwards and swaps from mandated clearing, so the market has no central clearinghouse.

Even without a legal obligation, about half of FX turnover was settled in 2022 by CLS, the largest global FX settlement system. By using CLS, banks mitigate their settlement risk. The system has held up during periods of severe financial distress, and more counterparties are choosing to settle with CLS. Still, the other half of the market remains over the counter (OTC) and unaccounted for. Which raises the question: What happens during the next period of market turmoil? How many dollars should the Fed provide? To which countries?

The FX swap market also suffers from a lack of price efficiency. Despite the enormous volumes traded, there is clear evidence of window dressing: as each month and quarter ends, intermediation spreads spike. In “FX Spot and Swap Market Liquidity Spillovers,” Ingomar Krohn and Vladyslav Sushko find that prices are not only distorted but liquidity is also impaired.
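The “dollar basis premium” mentioned above can be made concrete through covered interest parity (CIP): a forward priced above its CIP-fair level lets the dollar lender in a swap earn more than the money-market rate. The sketch below uses invented rates and a hypothetical forward; the helper names are illustrative, not market conventions:

```python
# Illustrative CIP / dollar-basis sketch. All rates and prices are
# assumptions for demonstration only.

def cip_forward(spot, r_usd, r_foreign):
    """Theoretical forward (USD per unit of foreign currency) under CIP."""
    return spot * (1 + r_usd) / (1 + r_foreign)

def implied_usd_rate(spot, forward, r_foreign):
    """Dollar rate implied by lending foreign currency and swapping back."""
    return (forward / spot) * (1 + r_foreign) - 1

spot = 1.10                 # USD per EUR (hypothetical)
r_usd, r_eur = 0.05, 0.03   # one-period money-market rates (hypothetical)

fair_forward = cip_forward(spot, r_usd, r_eur)
# With no basis, the swap-implied dollar rate equals the direct rate.
assert abs(implied_usd_rate(spot, fair_forward, r_eur) - r_usd) < 1e-12

# If the market forward trades above fair value, lending dollars via
# the swap earns a premium over the money market: the dollar basis.
market_forward = fair_forward * 1.002
basis = implied_usd_rate(spot, market_forward, r_eur) - r_usd
print(f"Dollar basis premium: {basis:.2%}")  # 0.21%
```

The same arithmetic explains why banks with abundant dollar funding find swap-based dollar lending attractive whenever the basis is positive.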
Globally systemically important banks (G-SIBs) periodically pull out of the swap market to avoid increasing the so-called complexity component of their capital assessments, which would lead to higher capital requirements. But reducing regulatory exposure does not reduce risk exposure. When banks intermediate in FX swaps, the activity affects their intraday liquidity and intra-bank credit and ultimately changes their asset composition. That’s why the FX swap market needs both regulatory oversight and effective risk management.

What’s Next?

Technology and increased settlement through CLS may help make the FX swap market more transparent and price efficient, but they are no substitute for what’s


How Goals-Based Portfolio Theory Came to Be

The following is excerpted from Goals-Based Portfolio Theory by Franklin J. Parker, CFA, published this year by Wiley. “I’ve heard people compare knowledge of a topic to a tree. If you don’t fully get it, it’s like a tree in your head with no trunk — when you learn something new about the topic there’s nothing for it to hang onto, so it just falls away.” —Tim Urban

When presented with a choice between multiple possibilities, which one should you choose? This simple question has perplexed many a human being. Modern economics found its beginning with an attempt to answer this basic question. The wealthy class of Europe had quite a bit of time on their hands, and, as it turned out, they enjoyed gambling on games of chance. The Renaissance had shifted the traditional view of these games — rather than simply accept randomness, some of these aristocrats began to analyze the games mathematically in an attempt to understand their randomness. It was not through any pure mathematical interest, of course, but rather an attempt to gain an edge over their fellow gamblers and thereby collect more winnings! The thinking of the time coalesced around a central idea: expected value theory. Expected value theory stated that a gambler should expect to collect winnings according to the summed product of the gains or losses and the probabilities of those outcomes (i.e., Σᵢ pᵢvᵢ, where pᵢ is the probability of gaining or losing vᵢ, and i indexes the possible outcomes). If, for example, you win $1 every time a six-sided die rolls an even number, and you lose $1 when it rolls odd, then the expected value of the game is (1/2 × $1) + (1/2 × (–$1)) = $0. In 1738, Daniel Bernoulli challenged that idea. As a thought experiment he proposed a game: a player is given an initial pot of $2, and a coin is flipped repeatedly. For every heads, the player doubles their money and the game continues until the coin lands on tails.
When tails comes up, the player collects winnings of $2^n, where n is the number of times the coin was flipped, and the game is over. Bernoulli’s question is, how much should you pay to play this game? Expected value theory fails us here because the payoff of the game is infinite! Clearly no one would pay an infinite amount of money to play the game, but why? Bernoulli’s answer is our first glimpse of a marginal theory of utility — a theory that would come to support all modern economics: “Thus it becomes evident that no valid measurement of the value of a risk can be obtained without consideration being given to its utility, that is to say, the utility of whatever gain accrues to the individual or, conversely, how much profit is required to yield a given utility. However it hardly seems plausible to make any precise generalizations since the utility of an item may change with circumstances. Thus, though a poor man generally obtains more utility than does a rich man from an equal gain, it is nevertheless conceivable, for example, that a rich prisoner who possesses two thousand ducats but needs two thousand ducats more to repurchase his freedom, will place a higher value on a gain of two thousand ducats than does another man who has less money than he.” The idea that humans do not value changes in wealth linearly, but rather find less value in the next ducat than they found in the first, launched the entirety of modern economics. Bernoulli went on to propose a logarithmic function for the utility of wealth — diminishing as the payoff grows. This, of course, solved the paradox. People are not willing to pay an infinite amount to play the game because they do not have infinite utility for that wealth. The value of each subsequent dollar is less than the previous one — that is the essence of marginal utility, and the foundation of modern economics.
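Bernoulli’s resolution is easy to check numerically. In the sketch below (simulation size and seed are arbitrary assumptions), the sample mean payoff is unstable because the expected value diverges, while log utility yields a finite certainty equivalent: exp(E[ln payoff]) = exp(2 ln 2) = $4.

```python
import math
import random

def play(rng):
    """One St. Petersburg game: flip until tails, pay $2^n
    where n counts every flip (the final tails included)."""
    n = 1
    while rng.random() < 0.5:  # heads: pot doubles, keep flipping
        n += 1
    return 2 ** n

def log_utility_price(max_n=60):
    """Certainty equivalent under Bernoulli's log utility:
    exp(sum over n of (1/2)^n * ln(2^n)), truncated at max_n."""
    expected_log = sum((0.5 ** n) * n * math.log(2) for n in range(1, max_n + 1))
    return math.exp(expected_log)

rng = random.Random(0)
payoffs = [play(rng) for _ in range(100_000)]
# The sample mean is finite but dominated by rare, huge payoffs.
print(f"Sample mean payoff: ${sum(payoffs) / len(payoffs):,.2f}")
print(f"Log-utility fair price: ${log_utility_price():.2f}")  # $4.00
```

The $4 figure is exactly why no one pays an infinite amount: the utility of the nth doubling shrinks as fast as its probability.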
Of more interest to this discussion, however, is that Bernoulli also gives a first glimpse of a goals-based theory of utility! Bernoulli points out that we must think of what the wealth can do for us, rather than the absolute value of that wealth. In other words, it is not the cash that we care about, but rather what that cash represents in the real world: freedom from prison in the case of Bernoulli’s prisoner, and transportation, housing, leisure, food, and so on, for the rest of us. What you wish to do with the money is an important consideration in how much you would pay to play Bernoulli’s game. This idea is echoed by Robert Shiller, winner of the 2013 Nobel Prize in Economics: “Finance is not merely about making money. It is about achieving our deep goals and protecting the fruits of our labor.” In short, investing is never done in the abstract! Investing is — and always has been — goals-based. It would be another two centuries before the theory underpinning rational choices was developed. John von Neumann and Oskar Morgenstern authored The Theory of Games and Economic Behavior in 1944, which has become the foundation upon which all theories of rational choice are built. Von Neumann was a mathematician (and a brilliant one at that), so their additional contribution — beyond the actual foundational ideas — was to apply a mathematical rigor to the theory of human choice. In 1948, Milton Friedman (later to win the 1976 Nobel Prize in Economics) and Leonard J. Savage explored the implications of von Neumann and Morgenstern’s rational choice theory for an economic conundrum: why do people buy both insurance and lottery tickets? Rational choice theory would generally expect individuals to be variance-averse, so the fact that people express preferences for both variance-aversion and variance-affinity in the same instance is troubling. This has since become known as the Friedman-Savage paradox, and their solution was that the utility curve of


Does a Stock’s Price Influence Its Risk Profile?

As a stock’s nominal share price falls, what happens to its risk profile? The answer to this question has important implications for managing investor expectations and reducing portfolio turnover. After all, investors often deviate from their chosen long-term strategies due to emotional reactions to unanticipated market movements. These market-timing actions present their own form of risk, adding to the existing risk of unpredictable markets. Some would argue that as a stock approaches the lower end of penny stock territory, volatility will moderate because there is an inherent threshold below which the price cannot drop. Others would contend that the stock will become more sensitive to market movements because market conditions dictate the survival of the company. We investigated what happens to systematic risk and the total volatility of a stock when it becomes a penny stock, i.e., its price drops below $5 per share. The results may surprise you. We found that as a stock declines in value, it becomes more sensitive to market movements. In other words, its beta increases and its total volatility increases accordingly. We pulled stock returns on all NASDAQ- and NYSE-listed firms going back 50 years. We examined stocks that during the 50-year period crossed the threshold of $1 a share, $2.50 a share, or $5 a share. We captured the instances when each stock crossed these thresholds for the first time. We then noted the betas of the stocks before the threshold crossovers and compared them with the betas of the same stocks two years after the crossover date.

The Findings

The first interesting finding is that when a stock dips below the $1 threshold, on average, its beta goes from 0.93 to 1.57. A beta greater than 1.0 means a stock’s price is more volatile than the overall market, i.e., its price swings more wildly. The opposite is true of a beta less than 1.0.
The jump in beta to 1.57 from 0.93 for stocks that dipped below the $1 threshold represents a significant shift in risk profiles. In fact, it is statistically significant at the 1% level. At the $1 threshold, the average penny stock has much more systematic risk and total volatility. And this shift is across the board. Stocks with negative betas go from an average of -0.62 to 1.14. Stocks with betas between 0 and 1.0 go from 0.55 to 1.37. And stocks with betas higher than 1.0 go from 1.95 to 1.88.

What happens to systematic risk and the total volatility of a stock when it becomes a penny stock:

Price Drop Cutoff: $1/share       Beta Before   Beta 2 Years After
  Average                             0.93            1.57
  Beta below 0                       -0.62            1.14
  Beta between 0 and 1.0              0.55            1.37
  Beta higher than 1.0                1.95            1.88

Price Drop Cutoff: $2.50/share    Beta Before   Beta 2 Years After
  Average                             0.90            1.56
  Beta below 0                       -0.55            1.01
  Beta between 0 and 1.0              0.52            1.27
  Beta higher than 1.0                1.90            1.94

Price Drop Cutoff: $5/share       Beta Before   Beta 2 Years After
  Average                             1.00            1.07
  Beta below 0                       -0.56           -0.51
  Beta between 0 and 1.0              0.47            0.50
  Beta higher than 1.0                2.02            2.17

The results highlight that this drastic increase in risk (volatility) is entirely due to increases in systematic risk, i.e., movement with the market index. Notably, these results are not driven by a reversion of betas to the mean over time. At the high end of our study, we examined when stocks crossed the $5-a-share barrier. The results look quite different: before a stock crossed the $5 threshold, on average, its beta was 1.0, and afterward it was 1.07. The other beta tiers at $5 a share showed the same pattern. This affirms that the $1 threshold results are truly due to the stock entering penny stock territory.
The results support the idea that penny stocks become much more risky (higher volatility) as they approach the zero-price barrier and that this risk is due to increases in systematic risk (increased sensitivity to market movements).


Private Equity: In Essence, Plunder?

Statistically, there is an increased risk of failure with private equity ownership: PE portfolio companies are about 10 times as likely to go bankrupt as non-PE-owned companies. Granted, one out of five companies going bankrupt doesn’t portend certain failure, but it is a startling statistic. The rejoinder, of course, is that PE firms gravitate toward companies in distress, a practice that weighs down their success rate. But understanding private equity at its worst is a call to action, personally and professionally. We need to monitor the specific and repetitive activities that benefit the operators and no one else.

That, in a nutshell, is the key takeaway from our conversation with Brendan Ballou, the award-winning author of Plunder: Private Equity’s Plan to Pillage America. Ballou, who has experience as a federal prosecutor and special counsel for private equity at the US Department of Justice, was speaking in a personal capacity at a fireside chat hosted by CFA Society Hong Kong. Drawing on his extensive background, Ballou is well placed to help us understand how PE firms leverage their influence to the detriment of the broader economy. He shared his insights on the inner workings and profound impact of private equity firms.

During our discussion, Ballou focused on leveraged buyouts (LBOs): PE firms typically invest a small amount of their own money, a significant amount of investor money, and borrowed funds to acquire portfolio companies, and they aim to profit within a few years. He emphasized the influence of private equity in the US economy, noting that top-tier PE firms collectively employ millions of people through their portfolio companies. Despite this significant presence, public awareness of their activities remains low. Ballou highlighted several adverse outcomes associated with PE ownership, including a higher likelihood of bankruptcy for portfolio companies, job losses, and negative impacts on industries such as retail and healthcare.
He cited three main reasons: PE firms’ short-term investment horizons, their heavy reliance on debt and extraction of fees, and insulation from legal consequences. He shared two case studies to demonstrate how PE firms can use financial engineering to benefit themselves while harming companies, employees, and customers. There are ways to mitigate the negative impacts of private equity, he maintained, advocating for regulatory changes to align sponsor activities with the long-term health of businesses and communities. Lightly Edited Excerpts From Our Conversation CFA Society Hong Kong: In Plunder, you discussed seven ways PE firms extract excessive profits from investments: sale-leaseback, dividend recapitalization, strategic bankruptcy, forced partnership, tax avoidance, roll-up, and a kind of operational efficiency that entails layoffs, price hikes, and quality cuts. Which one or two of these do you think are the most harmful and get to the core of your concerns? Brendan Ballou: It’s hard to pick just one or two. Sale-leasebacks, for instance, aren’t necessarily problematic but often can be, especially when the owner only plans to invest in the business for a few years. If you have a long-term perspective on a business, a sale-leaseback might make sense. However, a PE firm might buy the business and execute one primarily to maximize short-term value rather than to ensure a good real estate situation for the coming years. This was vividly demonstrated in the buyout of Shopko, a regional retailer like Walmart. The PE firm executed a sale-leaseback, locking Shopko into 15-year leases. In retail, owning property is valuable due to its cyclical nature, and it’s helpful to have assets to borrow against. The PE firm took that away from Shopko. The second example is dividend recapitalizations. The basic concept is that the portfolio company borrows money to pay a dividend to the PE firm.
The challenge is that a PE firm might only be invested in the company for a few years. Through some contractual arrangements, it can have significant control over the business despite a small equity investment (1% to 2%). This often leads the PE firm to execute a dividend recapitalization, directing the business to borrow and pay back the acquisition cost. This way, the PE firm is made whole on the purchase and turns subsequent income into pure profit. This approach makes sense for the PE firm but leaves the company saddled with debt it may or may not be able to manage. These examples illustrate that misalignments frequently create pain and controversy in PE acquisitions. Aren’t strategies like sale-leasebacks and dividend recapitalizations traditional business practices? None of them are illegal. Is it possible that you’re just focusing on the “wrong” data points? This is probably a very valid critique. However, it goes back to the basic problems we discussed earlier. PE firms have operational control over their businesses but often face very little financial or legal liability themselves. It means that PE firms can capture all the benefits when things go well in a business and sometimes benefit even when things go poorly. However, when things go poorly, there are often very few consequences for the PE firms. Tactics like sale-leasebacks, roll-ups, and dividend recapitalizations may be perfectly appropriate for a lot of businesses in various circumstances. But when you couple these tactics with a business model that operates on a “heads I win, tails you lose” basis, the outcome is often, maybe even most times, destructive for all stakeholders except the PE sponsors. The business practices you described in Plunder could be seen as capitalism at its finest. By reorganizing balance sheets, value is created without necessarily having to invent something new, like an iPhone.
Are you suggesting that these capitalists — by working within the system and collaborating with government officials — can do deals that exacerbate inequality? Absolutely. First, I often say that lawyers in the United States tend to invent a problematic business model every 20 years or so. Currently, I would argue it’s leveraged buyouts. Twenty years ago, it was subprime lending. Forty years ago, savings and loans. Sixty years ago, conglomerates. A hundred years ago, trusts. We can just create laws and regulations that incentivize short-term, extractive thinking. To be

The Yield Curve, Recessions, and Monetary Policy Blunders: EI Podcast Highlights

Editor’s Note: Our Enterprising Investor podcast features intimate conversations with some of the most influential people from the world of finance. This post highlights some key talking points from a conversation between the show’s host, Mike Wallberg, CFA, MJ, and Campbell Harvey, PhD. In this episode of Enterprising Investor podcast, Cam Harvey delves into his groundbreaking research on the yield curve as a predictor of economic recessions within the context of today’s economy and recent monetary policy actions. Harvey, a finance professor at Duke University, pioneered the study connecting inverted yield curves with impending recessions — a relationship that has proven remarkably reliable over the past four decades. Understanding Yield Curve Inversion A normal yield curve slopes upward, reflecting higher yields for longer-term investments due to their increased risk and time horizon. An inverted yield curve — where short-term interest rates exceed long-term rates — signals that investors expect lower economic growth or a recession soon. This inversion is considered a powerful leading indicator of economic downturns. Indeed, Harvey’s research made the yield curve one of the most closely monitored tools by economists, investors, and policymakers. Its predictive power has stood the test of time, maintaining its relevance across different economic environments. In this episode of EI podcast, Harvey shares the remarkable story of how he developed and tested his original theory. Current Economic Context Harvey addresses the current 20-month inversion of the yield curve and implications for the economy. He explains that the curve inverted again in late 2022, sparking widespread concern about an impending recession. There have been eight yield curve inversions since the 1960s, all of which were followed by recessions. “This is a very simple indicator that is eight out of eight with no false signals. 
The economy is so complex, it’s remarkable you can have something that does such a reliable job,” Harvey enthuses. He concedes that the lead time between inversion and recession is inconsistent, ranging from six months to 23 months. The current inversion is 20 months. Monetary Policy Harvey has been critical of the Federal Reserve in the press. In this EI podcast episode, he discusses the Fed’s role in the current yield curve inversion. He maintains that the Fed’s aggressive interest rate hikes aimed at combating inflation have contributed to the inversion. As the central bank increases short-term interest rates to curb inflation, long-term rates have not risen as quickly, leading to the inversion. CFA Institute Research and Policy Center’s “Monetary Policy: Current Events and Expert Analysis” curates a range of research and opinions across markets and asset classes. Nuances and Considerations While the yield curve is a critical tool for forecasting, Harvey emphasizes that it should not be used in isolation. He advises that other economic indicators and market conditions must be considered when assessing the risk of a recession. For instance, factors like employment rates, consumer confidence, and corporate earnings also play crucial roles in understanding the broader economic picture. He shares the data he believes market participants and policymakers are ignoring, to their detriment. Harvey also explores the potential consequences of a prolonged yield curve inversion. Historically, prolonged inversions have often led to deeper and more severe recessions. He warns that if the current inversion persists, it could indicate more significant economic troubles ahead. However, he also suggests that appropriate policy responses, particularly from the Federal Reserve, could mitigate these risks.

Manager Selection: The Power of Payoff

The most important portfolio manager skill metric is often overlooked. I often hear fund managers say, “I only need to get it right slightly more than 50% of the time.” What they are referring to is the hit rate. It’s similar to batting average in baseball: It represents the percentage of their decisions that make money, in absolute or relative terms. And yes, the ideal is to achieve a hit rate on decision making that is higher than 50% — whether you are a fund manager or a regular person in everyday life, right? Yet the fact is that most fund managers have a hit rate on their overall decision making of less than 50%. Our recent study, The Behavioral Alpha Benchmark, found that only 18% of portfolio managers make more value-additive decisions than value-destroying ones. We examined trading behavior in 76 portfolios over three years and isolated the outcome of investment decisions in seven key areas: stock picking, entry timing, sizing, scaling in, size adjusting, scaling out, and exit timing. Among our findings: While hit rate captures a lot of attention, it is often less consequential than payoff. A good payoff ratio can more than compensate for a sub-50% hit rate, and a poor payoff ratio can completely nullify the effect of a strong hit rate. Here’s why: Payoff measures whether a manager’s good decisions have typically made more than their bad decisions have lost. It is expressed as a percentage: Over 100% is good; under 100% is bad. A few decisions with payoffs well in excess of 100% can more than compensate for several that fall below the 100% mark. He didn’t use the term, but the legendary Peter Lynch emphasized payoff as a key theme: In 1990, he told Wall Street Week’s Louis Rukeyser that “You only need one or two good stocks a decade.” Those would need to be VERY good stocks, of course, but the point is that payoff is one of the most critical factors in successful professional investing.
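The interaction between hit rate and payoff can be sketched with a small, purely hypothetical set of decision-level P&Ls (the numbers below are illustrative and are not drawn from the study):

```python
def hit_rate_and_payoff(pnls):
    """Compute hit rate and payoff ratio from a list of decision P&Ls."""
    wins = [p for p in pnls if p > 0]
    losses = [-p for p in pnls if p < 0]
    hit_rate = len(wins) / len(pnls)          # fraction of decisions that made money
    avg_win = sum(wins) / len(wins)           # average gain on winning decisions
    avg_loss = sum(losses) / len(losses)      # average loss on losing decisions
    payoff = avg_win / avg_loss               # over 1.0 (i.e., over 100%) is good
    return hit_rate, payoff

# A manager who is right only 40% of the time, but whose average winner
# makes twice what the average loser loses, still comes out ahead:
pnls = [2.0] * 4 + [-1.0] * 6
hr, po = hit_rate_and_payoff(pnls)
print(f"hit rate {hr:.0%}, payoff {po:.0%}, net P&L {sum(pnls):+.1f}")
# → hit rate 40%, payoff 200%, net P&L +2.0
```

This is the arithmetic behind the claim above: a sub-50% hit rate paired with a payoff well over 100% still produces positive expectancy, while the reverse combination can destroy value.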
Successful managers need to make sure their winners win more in aggregate than their losers lose. Perhaps it’s ironic, then, that asset owners and allocators examine a wide variety of manager statistics in an effort to separate luck from skill but tend to overlook payoff. In fact, payoff is one of the purest skill metrics out there. Managers who consistently achieve a payoff over 100% exhibit true investment skill: They know when to hold ‘em, and when to fold ‘em.  Essential Behavioral Alpha Frontier The ability to cut losers — and, indeed, to cut winners before they become losers — is what the best investors are good at. And that manifests in a high payoff.  The diagram above comes from The Behavioral Alpha Benchmark. It looks at all of the trading decisions made by our sample of 76 active equity portfolios over the last three years and plots their hit rate against their payoff. The dashed line represents what would be achieved by chance: If the manager is correct half the time with a 50% hit rate and their average winner makes exactly as much as their average loser loses for a 100% payoff. While the managers’ hit rates fall in a pretty tight band along the X axis, their payoffs vary dramatically on the Y axis. The top five managers, colored in magenta, have both high hit rates and high payoffs.  This diagram, and its use of payoff as a key comparative metric for portfolio managers, represents an important next step in the evolution of manager assessment methodology. It enables us to look beyond traditional evaluative metrics based on past performance — which are highly subject to the random effects of luck and thus limited in their utility — and focus instead on the quality of a manager’s decision making. And that’s a far more accurate assessment of their skill.  If you liked this post, don’t forget to subscribe to the Enterprising Investor. All posts are the opinion of the author. 
As such, they should not be construed as investment advice, nor do the opinions expressed necessarily reflect the views of CFA Institute or the author’s employer. Image credit: ©Getty Images/Wachiwit Professional Learning for CFA Institute Members CFA Institute members are empowered to self-determine and self-report professional learning (PL) credits earned, including content on Enterprising Investor. Members can record credits easily using their online PL tracker.

Revisiting the Factor Zoo: How Time Horizon Impacts the Efficacy of Investment Factors

The returns of investments are not completely random over time (i.e., do not follow a perfect “random walk”). This contrasts with assumptions in common portfolio construction approaches, such as mean variance optimization (MVO), which generally assume that returns are independent and identically distributed (IID). In a recent CFA Institute Research Foundation brief, we demonstrated that serial dependence can have a notable impact on efficient portfolios for investors with varying time horizons. In this piece, we focus on how the optimal allocation to six risk factors (size, value, momentum, liquidity, profitability, and investment[1]) varies by investment horizon. We demonstrate that size and value factors become more attractive over longer time horizons, while momentum and profitability factors become less attractive, and that evidence for liquidity and investment factors is more mixed. While it is uncertain to what extent these historical relations will persist, this analysis provides additional evidence that serial correlations should be considered when building portfolios for investors. A Quick Visit to the Factor Zoo Factors are designed to capture the returns of a specific set of investments while largely controlling for overall market risk. For example, the value factor would be estimated by subtracting the return of a portfolio of growth stocks from a portfolio of value stocks. To the extent value stocks outperform growth stocks, the factor would have a positive average value, and vice versa. There are a variety of ways to define and build factors. For example, to determine where a security falls on the value/growth continuum, Fama and French use book-to-market. There are other potential definitions, however, including price-to-earnings, dividend yields, and price-to-sales, among others. The number of factors identified in research pieces continues to grow.
While some of these factors may add new ways to help explain the cross section of stock returns, many are likely to add little actual benefit, especially when considering the marginal contribution of the respective factor beyond existing identified factors. This is something Feng, Giglio, and Xiu (2020) dub the “factor zoo.” Among the 150+ factors reviewed in their research, only a few were economically significant when considered collectively. For this analysis, we focus on six relatively well-known factors: size, value, momentum, liquidity, profitability, and investment. Here is some additional information on each:
Size (SMB): small companies tend to outperform large companies, see Fama and French (1992)
Value (HML): value companies tend to outperform growth companies, see Fama and French (1992)
Momentum (MOM): stocks that have been trading up tend to continue performing well in the short term, see Jegadeesh and Titman (1993)
Liquidity (LIQ): less-liquid stocks offer higher expected returns to compensate for lower liquidity, see Pastor and Stambaugh (2003)
Profitability (RMW): companies with robust operating profitability outperform those with weak operating profitability, see Fama and French (2015)
Investment (CMA): companies that invest conservatively outperform those that invest aggressively, see Fama and French (2015)
These factors are not intended to span the universe of known factors. Rather, they reflect a set of factors that have a reasonable amount of freely available historical data for 60+ years. Data for each factor is obtained from Kenneth French’s data library[2] except for the liquidity factor (LIQ), which is obtained from Lubos Pastor’s data library[3]. For LIQ, we use the non-traded liquidity factor for the first four years (1964 to 1967, inclusive) and the traded liquidity factor thereafter. The analysis uses calendar year returns from 1964 to 2023 (60 years).
The analysis begins in 1964 because that’s when data on the profitability factor (RMW) and the investment factor (CMA) are first available on Kenneth French’s Data Library. Exhibit 1 includes data on rolling five-year cumulative returns for the factors. Exhibit 1. Five-Year Cumulative Returns: 1964-2023. Source: Authors’ Calculations, Kenneth French’s Data Library, Lubos Pastor Data Library, and Morningstar Direct. Data as of December 31, 2023. The historical differences in rolling five-year returns for some factors are relatively staggering. For example, for the five-year period ending December 31, 2013, MOM had a cumulative return of -78.95% while SMB had a cumulative return of +24.81%. Alternatively, SMB had a cumulative five-year return of -34.50% as of December 31, 1999, versus +132.90% for MOM. In other words, there have been significant periods of outperformance and underperformance among the factors, suggesting some potential diversification benefits for allocating across them historically. The recent returns of each of the factors have generally been lower than the long-term averages. For example, while SMB and HML had annual geometric returns of 4.22% and 4.97%, respectively, from 1968 to 1992 (i.e., pre-discovery), the annual geometric returns have only been 0.3% and 0.1%, respectively, from 1993 to 2023 (i.e., post-discovery), a relatively well-documented decline. Wealth Growth Over the Long Run First, to provide some perspective on how the risk of the factors varies by investment horizon, we estimate how the standard deviation of wealth changes for the factors for different investment horizons, looking at periods from one to 10 years. For each period, we compare the actual historical distribution of wealth growth using the actual historical sequential returns (e.g., all the rolling five-year periods available from 1964 to 2023) to the standard deviation of wealth using the same investment period but using bootstrapped returns.
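The comparison described above can be sketched in a few lines. The sketch below uses a synthetic return series rather than the actual factor data; the function names and the mean-reverting example series are illustrative assumptions, not part of the study:

```python
import random
import statistics

def wealth_growth(returns):
    """Cumulative wealth multiple from a sequence of annual returns."""
    w = 1.0
    for r in returns:
        w *= 1 + r
    return w

def horizon_stdev(annual_returns, horizon, bootstrap=False, n_sims=10_000, seed=0):
    """Std. dev. of terminal wealth over a horizon, sequential vs. bootstrapped."""
    rng = random.Random(seed)
    if bootstrap:
        # IID resampling: destroys any serial dependence in the series
        samples = [wealth_growth(rng.choices(annual_returns, k=horizon))
                   for _ in range(n_sims)]
    else:
        # Actual rolling windows: preserve the historical serial dependence
        samples = [wealth_growth(annual_returns[i:i + horizon])
                   for i in range(len(annual_returns) - horizon + 1)]
    return statistics.pstdev(samples)

# A deliberately mean-reverting series: every +10% year is followed by -10%.
rets = [0.10, -0.10] * 10
hist = horizon_stdev(rets, horizon=2)                  # sequential: near zero
boot = horizon_stdev(rets, horizon=2, bootstrap=True)  # IID: clearly positive
```

If the ratio of the sequential standard deviation to the bootstrapped one is below 1.0, the series is less risky over that horizon than IID assumptions imply (mean reversion); above 1.0 implies the opposite (momentum). In the extreme example here the sequential risk is essentially zero while the bootstrapped risk is not, which is exactly the kind of divergence the exhibit is measuring.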
Bootstrapping is an approach where the historical annual returns are used, but they are recombined in random order to generate simulated wealth growth paths. For each factor we consider 10,000 bootstrapped periods. Bootstrapping is useful when exploring serial correlation because it preserves the unique aspects of the time series data, capturing the means and covariances, as well as the annual skewness and kurtosis. But bootstrapping removes the serial dependence potentially present in the returns. In other words, the only difference in the analysis is how the returns are related to each other over time. If there is no type of serial dependence, the annualized standard deviation values would effectively be constant over time, consistent with the assumptions of IID. However, it’s possible that risk levels could increase or decrease, depending on the serial correlations present. Exhibit 2 includes the results of this analysis. Exhibit 2. Annualized Standard Deviation Ratios for Factors,

The Remarkable Story of Style Regimes: For the Data-Driven Investor

Style regimes constitute one of investors’ largest risk factors, second only to overall equity exposure. After 15 years of growth style dominance, the return of intra-market volatility has prompted renewed interest in style framework and cyclical rotations. By reacquainting ourselves with the dynamics of style cycles, we can better understand how these portfolio building blocks shape our financial futures. In this analysis, I will demonstrate that style returns are the market’s veritable gulf stream, and investors should not ignore their powerful currents. I will address three basic yet fundamental questions:  1. What is the typical duration of growth and value style regimes? 2. How impactful are oscillations between growth and value? 3. What are the mechanics of style transition? With its three simple, yet powerful inputs, I believe the Russell Style methodology can unravel some of the market’s most resonating behaviors. What is the typical duration of growth and value style regimes? With the sharp 2022 rotation to value stocks fresh in the memory, investors want to know whether rotations are transitory movements or durable market trends. To provide context and guidance, I measured the ratio of the total returns of the Russell 1000 Growth and Value Indexes from December 1978, rebased to 100 as an initial value. This methodology allows us to observe distinct periods of outperformance by either growth or value without distraction from the runaway compounding of equity returns. The approach is time-agnostic: cross-period comparisons, such as between the 1980s and the 2010s, can be made on a roughly equivalent basis. Depiction of Russell 1000 Growth Index total returns divided by Russell 1000 Value Index total returns, parity set to 100 with an inception date of December 31st, 1978. Source: FTSE Russell Data, February 2024. By connecting peaks and troughs in the chart above, 10 discrete periods of style performance can be readily identified. 
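The rebased-ratio methodology described above can be sketched as follows. The function name and the tiny return series are hypothetical, for illustration only (the article uses FTSE Russell index data from December 1978):

```python
def style_ratio(growth_returns, value_returns, base=100.0):
    """Cumulative ratio of growth to value total returns, rebased to `base`.

    A rising series indicates growth outperformance; a falling series
    indicates a rotation toward value. Rebasing strips out the runaway
    compounding of overall equity returns, so regimes in different decades
    can be compared on a roughly equivalent footing.
    """
    ratio, series = base, []
    for g, v in zip(growth_returns, value_returns):
        ratio *= (1 + g) / (1 + v)
        series.append(ratio)
    return series

# Illustrative monthly total returns (not actual index data):
print(style_ratio([0.02, 0.01], [0.01, 0.03]))
```

Peaks and troughs in the resulting series are then connected to delimit discrete style regimes, as in the chart above.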
Upward surges indicate the outperformance of growth, whereas downward trends reveal a rotation toward value. What is fascinating is that such clear cyclical patterns emerge, even though month-over-month style returns continue in the same direction only 51.9% of the time — a rate indistinguishable from a coin toss! Some model judgments are necessary in assigning style regimes. For example, regimes five and six are separated instead of counting one combined growth regime during the 1990s, because these two phases are more distinct from each other than growth and value are on average. Notwithstanding such discretionary calls, this framework offers an evidence-based approach to breaking down the wave function of style returns. Four different measures of trend size and intensity are depicted. PP Change denotes the percentage point change in the ratio of Russell 1000 Growth and Value Index total returns during each regime. Column PP/Month is the rate of change in the previous value and is the average slope for each regime. Regime 10 is still in progress and does not signify a completed regime. Source: FTSE Russell, February 2024. The average duration of style regimes is 64 months, but there is far more nuance than this headline number would suggest. First, there is a high dispersion in regime length, ranging from 13 months at the short end (regime nine) to 184 months at the long end (regime eight), a spread of more than one order of magnitude. In fact, the 15-year Great Growth Regime (GGR, regime eight), which lasted from July 2006 to November 2021, is a true outlier that skews the overall results. Notably, regime eight lies 2.3 standard deviations out from the mean regime length (4.6 if excluded from the sample). We arrive at a more representative understanding of style regime length by isolating the impact of the 15-year GGR. The overall average cycle length decreases to 46 months, and the average duration of growth regimes is nearly halved to 33 months.
Hence, we can conclude that style regimes are not flavor of the month phenomena, but rather they are generally multi-year trends. Furthermore, when excluding the GGR, value regimes tend to persist for twice as long as their growth brethren. How impactful are oscillations between growth and value? After 44 years, the annualized returns of these antithetical strategies differed by only 42 basis points, and growth and value achieved return parity as recently as March 14, 2023. If both style methodologies take investors to roughly the same destination, just how significant are style trends? Are they mere ripples on the overall surface of equity returns? It is more appropriate to talk of powerful waves: the oscillations between growth and value carry tremendous impact. Calculating the rates of change in the ratio of growth and value total returns shows that style trends progress on average at a rate of 1.15 percentage points per month (pp/m). For context, this style trend velocity is 44% greater than the expected monthly returns for equity markets, while progressing at only 55% of the latter’s volatility. This analysis demonstrates that style trends are both more forceful and more consistent than those of the underlying equity market. In sum, these gyrations equate to $600 billion in shareholder wealth being reallocated between growth and value each month. While the average style regime sees a 40.9 percentage point swing in the ratio of growth/value total return, there is great variance in the pacing of style returns at the regime level. Historically, value regimes have progressed 26% more quickly than their growth counterparts, owing to rapid value reversions after growth trends culminate. Excluding the mid-1990s style neutrality of regime five with its progression rate of only 0.12pp/m, the GGR was the least dynamic style trend, progressing at only 0.39 pp/m. 
Compare this slow pacing with the next value cycle (regime nine in the table) which was the most aggressive on record, surging at a negative 2.52pp/m clip. This reversal of style direction after a 15-year steady state, as well as a sixfold intensification of style, contributed to the market whiplash sensation experienced by many equity investors in 2022. Perfectly timing these 10 Russell style regimes would have meant a
