CFA Institute

Backtests, Causality, and Model Risk in Quantitative Investing

Quantitative finance continues to debate the reliability and limits of model-driven investment strategies. One central question is how much weight investors should place on backtesting. In The Factor Mirage: How Quant Models Go Wrong, Marcos López de Prado, PhD, and Vincent Zoonekynd, PhD, outline why investors should move beyond accepting historical performance at face value and focus on understanding why a model works. That is a valuable contribution to strengthening the rigor of quantitative investing — and one that invites further reflection on how that reasoning is structured.

It may help to frame the issue not as a binary choice between correlation and causation, but as a layered problem in which different forms of reasoning play distinct roles. In practice, the choice is rarely between simple correlation and fully specified causality. Most investment research operates somewhere in between. Sometimes we can describe and test a mechanism directly. Sometimes we cannot. The system may move too quickly, key variables may be only partially observable, or the time and resources required to build a richer model may not be available. In those settings, association-based reasoning still has value. That is not a defect of finance; it is a general feature of decision-making under uncertainty.

Association Under Constraint

Human beings often rely on associations when there is no time to construct a full causal account. That is not necessarily irrational; it can be adaptive. A fast association can guide action before slower, more elaborate reasoning is possible. The same is true in investment practice. When relevant drivers cannot be directly observed or causal structure is only partly understood, associational signals may still contain useful information. Association is not explanation. The question is not whether association has value, but whether it is sufficient.
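To make the limits of pure association concrete, here is a minimal, hypothetical simulation (not drawn from López de Prado and Zoonekynd's article): 200 signal-free "strategies" are backtested, and the best in-sample performer still looks impressive even though no mechanism exists behind it. The strategy count, sample length, and random seed are illustrative assumptions.

```python
import numpy as np

# Illustrative only: 200 strategies whose daily returns are pure noise.
# Selecting the best in-sample backtest produces a seemingly strong Sharpe
# ratio -- association without any underlying mechanism.
rng = np.random.default_rng(42)
n_strategies, n_days = 200, 504  # roughly two years of daily data
returns = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

in_sample, out_sample = returns[:, :252], returns[:, 252:]

def sharpe(r):
    # Annualized Sharpe ratio (zero risk-free rate assumed)
    return r.mean() / r.std() * np.sqrt(252)

is_sharpes = np.array([sharpe(r) for r in in_sample])
best = int(is_sharpes.argmax())

print(f"best in-sample Sharpe: {is_sharpes[best]:.2f}")
print(f"same strategy out of sample: {sharpe(out_sample[best]):.2f}")
```

Because the signals are noise, the selected strategy's out-of-sample performance has no reason to resemble its backtest — which is exactly why association alone cannot be a stopping point.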
For institutional investors, this distinction has practical implications for due diligence, including how managers justify the inclusion and exclusion of variables in systematic models. When stronger structural knowledge exists, ignoring it is not sophistication; it is a loss of information. Association has a place, but it should not become a stopping point. The call for greater causal discipline in finance is not new. The more interesting question is how to incorporate that discipline without oversimplifying the nature of markets themselves.

Epidemiology as a Model of Structured Reasoning

An epidemiologist would not analyze an epidemic as a purely statistical pattern detached from what is known about transmission. If susceptible individuals can become infected and infected individuals can recover or be removed, that knowledge becomes part of the model’s structure. Compartmental models such as SIR (susceptible, infected, recovered) and SEIR (susceptible, exposed, infected, recovered) formalize those transitions. Statistical methods remain essential for estimating parameters and testing fit. But the analysis does not begin from a blank slate; it begins from established causal structure.

Finance can draw a similar lesson. Where durable mechanisms are reasonably well understood, they should be represented explicitly. If leverage amplifies forced selling, refinancing conditions shape default risk, inventories influence pricing power, passive flows affect demand, or network structures transmit distress, these are more than recurring correlations. They are mechanisms that can be modeled, tested, and challenged.

Dynamic models can be especially useful here. A regression captures co-movement; a dynamic model represents stocks, flows, delays, and feedback. In finance, that may mean balance-sheet capacity, funding conditions, capital flows, or adoption dynamics. Such models help clarify how the state of the system evolves and how today’s conditions shape tomorrow’s outcomes.
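As an illustration of how causal structure enters before estimation, here is a minimal SIR sketch. The transition structure (susceptible to infected to recovered) is imposed by prior knowledge; only the rate parameters would be fitted to data. The parameter values below are illustrative assumptions, not estimates.

```python
# Minimal discrete-time SIR sketch: the S -> I -> R structure is fixed by
# causal knowledge; beta (transmission) and gamma (recovery) would be the
# statistically estimated parameters. Values here are illustrative only.
def sir(s0=0.99, i0=0.01, beta=0.30, gamma=0.10, days=160):
    s, i, r = s0, i0, 0.0
    path = []
    for _ in range(days):
        new_inf = beta * s * i   # transmission requires both S and I
        new_rec = gamma * i      # recovery/removal drains I
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        path.append((s, i, r))
    return path

path = sir()
peak_day = max(range(len(path)), key=lambda t: path[t][1])
print(f"infections peak on day {peak_day}, at {path[peak_day][1]:.1%} of the population")
```

The analogy to finance: a dynamic model of, say, leverage-driven forced selling would likewise encode the known mechanism in its state transitions and reserve statistics for the rates.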
Reflexivity and Adaptive Markets

Finance differs from epidemiology. Markets are reflexive. Beliefs influence prices, and prices in turn reshape beliefs, incentives, and financing conditions. A narrative can attract capital; capital flows can move prices; rising prices can reinforce the original narrative. What appears to be a durable relationship may, for a time, reflect a self-reinforcing loop. Causal reasoning remains essential, but the relevant structure may itself include feedback between beliefs, flows, and outcomes.

A Three-Layered Framework

Investment research can operate on three distinct but related layers:

- Association: What appears to predict, even imperfectly?
- Causal: What mechanism could plausibly generate that relationship?
- Reflexive: How might the use of the signal itself alter behavior, crowd the trade, change flows, or reshape the environment being modeled?

Seen this way, the debate is not about choosing correlation over causation. It is about knowing when association is sufficient, when mechanisms must be modeled explicitly, and when reflexive feedback makes the system more adaptive than either approach assumes. Few serious quantitative researchers would defend correlation without scrutiny. Robust practice already includes stress testing, economic intuition, and structural reasoning. The question is not whether causality matters, but whether we are explicit about which layer is doing the work — and how those layers interact.

Toward a More Disciplined Quantitative Practice

We should use causal knowledge when it is available and test causal hypotheses when we have them. When a phenomenon involves accumulation, delay, or feedback, dynamic models may be more appropriate than static statistical fits. Association-based thinking retains an important role, especially under constraints of time and observability. But where established structure exists, ignoring it is not sophistication; it is a loss of information.
The opportunity for quantitative finance is not to replace one methodological slogan with another. It is to become more disciplined and more transparent about how different forms of reasoning contribute to robust investment research — when patterns are enough, when mechanisms are required, and when reflexivity demands that we treat markets as adaptive systems shaped in part by our own participation. The future of investment research is therefore unlikely to be purely correlational or narrowly causal. It will be more plural, more dynamic, and more explicit about the difference between patterns that merely appear stable and mechanisms capable of sustaining them.

References

López de Prado, Marcos, and Vincent Zoonekynd. “The Factor Mirage: How Quant Models Go Wrong.” Enterprising Investor, CFA Institute, 30 October 2025.

Delli Gatti, D., F. Gusella, and G. Ricchiuti. “Endogenous vs Exogenous Fluctuations: Unveiling the Impact of Heterogeneous Expectations.” Macroeconomic Dynamics 29 (2025): e125. doi:10.1017/S1365100525100345.

Gigerenzer, Gerd, and Daniel G. Goldstein. “Reasoning the


AI Strategy After the LLM Boom: Maintain Sovereignty, Avoid Capture

Time to rethink AI exposure, deployment, and strategy

This week, Yann LeCun, Meta’s recently departed Chief AI Scientist and one of the fathers of modern AI, set out a technically grounded view of the evolving AI risk and opportunity landscape at an evidence session of the UK Parliament’s All-Party Parliamentary Group on Artificial Intelligence (APPG AI). This post is built around LeCun’s testimony to the group, with quotations drawn directly from his remarks.

His remarks are relevant for investment managers because they cut across three domains that capital markets often consider separately, but should not: AI capability, AI control, and AI economics. The dominant AI risks are no longer centered on who trains the largest model or secures the most advanced accelerators. They are increasingly about who controls the interfaces to AI systems, where information flows reside, and whether the current wave of LLM-centric capital expenditure will generate acceptable returns.

Sovereign AI Risk

“This is the biggest risk I see in the future of AI: capture of information by a small number of companies through proprietary systems.”

For states, this is a national security concern. For investment managers and corporates, it is a dependency risk. If research and decision-support workflows are mediated by a narrow set of proprietary platforms, trust, resilience, data confidentiality, and bargaining power weaken over time.

LeCun identified “federated learning” as a partial mitigant. In such systems, the central model never needs to see the underlying data for training; the parties exchange model parameters instead. In principle, this allows the resulting model to perform “…as if it had been trained on the entire set of data…without the data ever leaving (your domain).” This is not a lightweight solution, however.
Federated learning requires a new type of setup, with trusted orchestration between the parties and the central model, as well as secure cloud infrastructure at national or regional scale. It reduces data-sovereignty risk, but it does not remove the need for sovereign cloud capacity, reliable energy supply, or sustained capital investment.

AI Assistants as a Strategic Vulnerability

“We cannot afford to have those AI assistants under the proprietary control of a handful of companies in the US or coming from China.”

AI assistants are unlikely to remain simple productivity tools. They will increasingly mediate everyday information flows, shaping what users see, ask, and decide. LeCun argued that concentration risk at this layer is structural: “We are going to need a high diversity of AI assistants, for the same reason we need a high diversity of news media.”

The risks are primarily state-level, but they also matter for investment professionals. Beyond obvious misuse scenarios, a narrowing of informational perspectives through a small number of assistants risks reinforcing behavioral biases and homogenizing analysis.

Edge Compute Does Not Remove Cloud Dependence

“Some will run on your local device, but most of it will have to run somewhere in the cloud.”

From a sovereignty perspective, edge deployment may reduce some workloads, but it does not eliminate jurisdictional or control issues: “There is a real question here about jurisdiction, privacy, and security.”

LLM Capability Is Being Overstated

“We are fooled into thinking these systems are intelligent because they are good at language.”

The issue is not that large language models are useless. It is that fluency is often mistaken for reasoning or world understanding — a critical distinction for agentic systems that rely on LLMs for planning and execution. “Language is simple. The real world is messy, noisy, high-dimensional, continuous.”

For investors, this raises a familiar question: how much current AI capital expenditure is building durable intelligence, and how much is optimizing user experience around statistical pattern matching?

World Models and the Post-LLM Horizon

“Despite the feats of current language-oriented systems, we are still very far from the kind of intelligence we see in animals or humans.”

LeCun’s concept of world models focuses on learning how the world behaves, not merely how language correlates. Where LLMs optimize for next-token prediction, world models aim to predict consequences. This distinction separates surface-level pattern replication from models that are more causally grounded. The implication is not that today’s architectures will disappear, but that they may not be the ones that ultimately deliver sustained productivity gains or investment edge.

Meta and Open-Platform Risk

LeCun acknowledged that Meta’s position has changed: “Meta used to be a leader in providing open-source systems.” “Over the last year, we’ve lost ground.”

This reflects a broader industry dynamic rather than a simple strategic reversal. While Meta continues to release models under open-weight licenses, competitive pressure and the rapid diffusion of model architectures — highlighted by the emergence of Chinese research groups such as DeepSeek — have reduced the durability of purely architectural advantage. LeCun’s concern was not framed as a single-firm critique, but as a systemic risk: “Neither the US nor China should dominate this space.”

As value migrates from model weights to distribution, platforms increasingly favor proprietary systems. From a sovereignty and dependency perspective, this trend warrants attention from investors and policymakers alike.
Agentic AI: Ahead of Governance Maturity

“Agentic systems today have no way of predicting the consequences of their actions before they act.” “That’s a very bad way of designing systems.”

For investment managers experimenting with agents, this is a clear warning. Premature deployment risks hallucinations propagating through decision chains and poorly governed action loops. While technical progress is rapid, governance frameworks for agentic AI remain underdeveloped relative to professional standards in regulated investment environments.

Regulation: Applications, Not Research

“Do not regulate research and development.” “You create regulatory capture by big tech.”

LeCun argued that poorly targeted regulation entrenches incumbents and raises barriers to entry. Instead, regulatory focus should fall on deployment outcomes: “Whenever AI is deployed and may have a big impact on people’s rights, there needs to be regulation.”

Conclusion: Maintain Sovereignty, Avoid Capture

The immediate AI risk is not runaway general intelligence. It is the capture of information and economic value within proprietary, cross-border systems. Sovereignty, at both the state and firm level, is central, and that means a safety-first


Why Financial Advisors Struggle to Embrace Bitcoin’s Rise

Bitcoin is one of the most powerful technologies of our time. It has delivered financial freedom to millions and disrupted established financial players. Yet many of my fellow financial professionals remain deeply skeptical of its worth.

This skepticism is starting to shift, as recent headlines show. The rise of Bitcoin exchange-traded funds (ETFs) and the marketing push from giants like BlackRock are softening attitudes. BlackRock’s IBIT has attracted $100bn of flows, making it one of the most successful ETFs in history, so clearly many investors are taking notice. JPMorgan said last week it would allow institutional clients to use Bitcoin as loan collateral. The Trump Administration is examining adding crypto to the list of approved 401(k) investments.

To be sure, challenges and resistance remain. For many, everyday conversations with financial advisors still feel like hitting a wall. Young financial professionals tell me all the time, “If I mention Bitcoin at the office, people glaze over…” So why the resistance?

Tech Friction

With any shift from old to new, there will always be resistance. There is a learning curve to the internet, to artificial intelligence, or to any other breakthrough technology. These changes can be particularly challenging for older generations, but age alone is not the obstacle. Crypto’s user interface has presented additional challenges for the masses. Dealing directly with crypto assets onchain through hardware wallets and seed phrases is not particularly difficult, but large swathes of the population have neither the technical knowledge nor the desire to up-skill sufficiently to feel safe storing significant portions of their net worth in these assets. The launch of ETFs in the US in January 2024 changed this dynamic, allowing anyone with a brokerage account to invest.
I expect there will be other solutions that make self-custody security (security without a third-party intermediary) easier for non-technical users, allowing them to use the technology day-to-day, but it takes time for all these functionality layers to be built. We must also appreciate that there is a difference between using the internet to search for a product online, or using AI to plan a business project, and storing significant portions of one’s wealth in a new financial technology. The stakes are higher with crypto, and this could be hampering financial professionals’ approval. The higher stakes draw in some investors but are off-putting to others who would rather wait until the risks have declined and the technology is second nature.

But financial professionals are smart, tech-savvy people. Technical friction does not explain the visceral reaction when speaking to your resident economist.

Economic Ideology

Bitcoin is a non-state monetary asset. Its monetary policy is determined without a central bank. “Chancellor on brink of second bailout for banks” was embedded by its creator, Satoshi Nakamoto, into the blockchain’s first block, highlighting concern about the overuse of monetary and fiscal policy. The mindset required to understand its value and its unique proposition runs directly against economic orthodoxy.

Source: The Times of London

By contrast, traditional economists assume that central banks are necessary to set interest rates and manage inflation. In fact, many economists work at central banks, treasury departments, or private banks. They have a personal stake in maintaining the status quo. These same institutions dominate not just the profession, but also economic academia. As a result, this line of thinking is what gets taught to 95% of economics students around the world, and it becomes the foundation for most financial professionals. Economic ideology is similar to political ideology and religion — it is deep-rooted and difficult to change.
Once we have been taught that this is the way the world works, and we have espoused the virtues of that school of thought, we are deeply entrenched in its continuity. Financial professionals probably have far stronger ideological bias than we would like to admit.

Financial Valuation

Investments are grounded in quantitative methods, and for good reason. We want substance behind these important decisions. As the field of finance has developed, a set of generally accepted valuation methodologies has emerged. That makes complete sense. For example, dividend discount models, discounted cash flow models, credit spreads, and option-adjusted spreads are all well-established approaches to valuing different asset classes. But Bitcoin doesn’t have earnings, dividends, yields, or interest rates. The many ways to think about valuing Bitcoin do not neatly fit into traditional methodologies. This asset requires more abstract thinking. You may need to question the long-term sustainability of the dollar monetary system or the inherent value of our current forms of money. This kind of conceptual thinking, and its clash with conventional valuation methods, fuels both ideological and technological friction. How do you explain to Warren Buffett that the valuation methods he relies on do not apply to this asset? It sounds suspicious. From his perspective, skepticism makes sense.

Regulatory Restrictions

Finance is a heavily regulated industry. Professionals have significant reporting requirements and are often mandated to hold specific approved assets. Regulators are almost always behind the ball when it comes to innovative technology, so it has taken them a long time to respond to Bitcoin. Bitcoin has been around for more than 15 years now, and still regulated Bitcoin instruments are not available to many investors in various jurisdictions. Financial professionals are incentivized to promote the products that they manage and are licensed to sell.
If Bitcoin is not on that list, then there is a major incentive misalignment. Even if a financial professional had a constructive view on Bitcoin in a personal capacity, their hands might be tied when speaking to clients or in the media. With the advent of Bitcoin ETFs in the US and the GENIUS Act, which regulates stablecoins, regulatory restrictions are shifting. But regulations take time, and they still serve as another barrier hindering support from financial institutions.

Career Risk

Financial professionals spend years studying, earning the Chartered Financial Analyst designation, PhDs, MBAs, CFPs, CPAs, and more. We have built a major barrier to entry for the powerful industry


Why Static Portfolios Fail When Risk Regimes Change

How shifting correlations, volatility, and macro drivers undermine traditional diversification

In March 2020, diversification broke down because liquidity disappeared. In 2022, it failed because inflation overwhelmed both stocks and bonds at the same time. Yet many institutional portfolios remained anchored to static allocation frameworks that assume risk relationships will eventually revert to historical norms, even as the underlying drivers of risk changed.

This analysis examines why fixed portfolio structures struggle when regimes shift, and what portfolio managers must do differently when correlations, volatility, and macro forces no longer behave as expected. It is the first in a new series, Risk Regimes and Portfolio Resilience.

Two Crises, Different Breakdowns

March 16, 2020. The VIX hit 82.69, surpassing its 2008 crisis peak. Liquidity evaporated, correlations flipped, and diversification failed as markets moved from an initial flight to quality into widespread forced selling.

In 2022, the breakdown looked very different. Inflation, not liquidity stress, became the dominant risk. Rising rates drove stocks and bonds lower together, producing the first simultaneous calendar-year loss for both asset classes since the Bloomberg Aggregate Bond Index was created in 1980. The classic 60/40 portfolio lost 16.7%, its worst calendar-year performance in modern history.

The Question Every Portfolio Manager Should Ask

Here’s the uncomfortable truth: most institutional portfolios operate under a dangerous fiction — that risk relationships remain stable enough to justify fixed allocation frameworks. We build models assuming that correlations will revert to historical means, that volatility cycles predictably, and that monetary policy acts as a reliable backstop. Then reality intervenes, regimes shift, and these assumptions unravel precisely when portfolios need them most. The question isn’t whether your portfolio can weather volatility.
It’s whether it can recognize when the very nature of risk has fundamentally changed, and respond accordingly.

What Actually Changed and Why It Matters

Let’s be precise about what happened in the 2020 and 2022 regime shifts, because the details reveal why traditional approaches failed. In March 2020, we initially saw classic flight-to-quality dynamics. The S&P 500 lost a third of its value between February 20 and March 23. Treasury yields plummeted as investors stampeded into safe havens. The 10-year yield dropped below 0.71%, an unprecedented level. For roughly two weeks, the textbook negative stock-bond correlation held. Bonds rallied as stocks cratered. Then liquidity evaporated. Everything became a forced sale. Correlations flipped. The regime wasn’t just high volatility; it was a complete breakdown of market structure. Portfolio managers who relied on historical correlation matrices for their hedging strategies found themselves exposed on both sides.

Fast forward to 2022: a completely different regime break. This time, the enemy was inflation, the dominant macro variable for the first time in decades. The Fed’s aggressive rate-hiking cycle created a synchronized selloff across asset classes. Stocks and bonds declined together for 14 consecutive months, representing 31% of trading days. The 36-month stock-bond correlation spiked to 0.66 by December 2024, compared to a 20-year average of negative 0.10.

Think about that: two profound market dislocations within 30 months, each requiring opposite defensive positioning. A portfolio optimized for the 2020 regime would have been decimated in 2022. And vice versa.

The Tradeoffs Nobody Wants to Acknowledge

This creates a genuine strategic dilemma for portfolio construction. You can’t build for both regimes simultaneously using traditional tools alone.

Option 1: Optimize for the last crisis. This is the most common institutional response.
After 2008, portfolios tilted heavily toward tail-risk hedging and liquidity buffers. These positions offered little protection in 2022, when the threat wasn’t deflation and financial contagion but persistent inflation and rising rates.

Option 2: Stay perpetually defensive. Hold enough cash and short-duration bonds to weather any storm. But this comes at a massive opportunity cost. Over the past 20 years, equity risk premiums rewarded long-term holders handsomely. The price of permanent defensiveness is structural underperformance in non-crisis years, which are most years.

Option 3: Accept the whipsaw. Build for average conditions, acknowledge you’ll get hurt in regime shifts, and trust in mean reversion to bail you out eventually. This works until it doesn’t — typically when client redemptions or regulatory capital requirements force you to lock in losses at precisely the wrong time.

None of these are ideal responses. They’re just different ways of accepting static frameworks that can’t solve dynamic problems.

What Adaptive Portfolio Management Looks Like

The path forward requires acknowledging an uncomfortable reality: effective risk management in modern markets demands regime-aware positioning. Not prediction, but recognition. The distinction matters. Consider what you actually need to identify regime shifts as they’re happening, not six months after the damage is done:

Volatility isn’t a single number. Realized volatility and implied volatility can diverge dramatically during regime transitions. In early 2020, implied vol (VIX) spiked to 82 while many stocks showed relatively modest realized volatility in the weeks prior. The options market was screaming about a regime shift that backward-looking risk metrics hadn’t fully captured yet. You need frameworks that can synthesize these signals in real time.

Correlations are conditional, not constant. The relationship between stocks and bonds depends entirely on whether inflation or growth uncertainty dominates.
When inflation expectations are anchored and growth drives markets, you get the classic negative correlation. When inflation becomes the primary concern, correlations flip positive. Monitoring the ratio of inflation volatility to growth volatility gives you advance warning of these shifts.

Institutional flow matters more than most quantitative models acknowledge. In March 2020, the breakdown wasn’t just about fundamentals; it was about leveraged funds forced to deleverage, creating cascading liquidity crises. In 2022, the shift from QE to QT fundamentally altered the supply-demand dynamics for duration. Risk models that ignore these flow dynamics will consistently underestimate systemic stress.

The operational challenge is integration. Most firms run separate models for volatility forecasting, correlation estimation, fundamental analysis, and flow monitoring. Each produces valuable signals. But they rarely communicate with each other in a coherent framework.

A Framework for Thinking About Regime-Aware Positioning

What would regime-adaptive portfolio management look like in practice? Start with regime identification that’s actually implementable.
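As a sketch of what one implementable piece might look like, the fragment below flags a stock-bond correlation regime shift on synthetic data. The window length, threshold, and the two hard-coded correlation regimes are illustrative assumptions; a production version would use actual return series and calibrated parameters.

```python
import numpy as np

# Synthetic daily returns: a growth-driven regime (stock-bond corr < 0)
# followed by an inflation-driven regime (corr > 0). A rolling-window
# correlation with a simple threshold flags the shift after the fact,
# with a lag governed by the window length. Illustrative values only.
rng = np.random.default_rng(0)

def correlated_pair(n, rho, scale=0.01):
    z1, z2 = rng.normal(size=n), rng.normal(size=n)
    return scale * z1, scale * (rho * z1 + np.sqrt(1 - rho**2) * z2)

s1, b1 = correlated_pair(500, -0.4)   # growth regime: bonds hedge stocks
s2, b2 = correlated_pair(500, +0.6)   # inflation regime: they move together
stocks, bonds = np.concatenate([s1, s2]), np.concatenate([b1, b2])

window, threshold = 120, 0.25
flags = []
for t in range(window, len(stocks)):
    corr = np.corrcoef(stocks[t - window:t], bonds[t - window:t])[0, 1]
    flags.append(corr > threshold)

first_flag = window + flags.index(True)
print(f"positive-correlation regime flagged at day {first_flag}")
```

The lag between the true break (day 500) and the flag illustrates the tradeoff the text describes: shorter windows detect faster but whipsaw more, which is why realized-correlation monitors are best combined with forward-looking signals such as implied volatility.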


Building Commitment to Long-Term Investing

Long-term investing is one of the most widely accepted principles in finance. The strategy is well supported: the data is clear, the logic is sound, and the outcomes are well documented. So, when clients hesitate, many financial advisors assume the reason is risk tolerance, lack of conviction, or insufficient understanding. In practice, stalled decisions often have little to do with any of these. Clients don’t necessarily disagree with the strategy, but committing early can feel internally misaligned. They understand the rationale. And still, when it comes time to move forward, momentum slows.

Advisors may grow frustrated by the hesitation, but it helps to understand its source. The resistance is not about whether the strategy makes sense. It is about how the act of committing feels. For some clients, a decision is never just a choice — it is also a rejection of every other possibility. While the advisor points to the door labeled “long-term strategy,” the client’s attention lingers on all the other doors still open. Choosing one can feel like stepping onto ground that has not fully formed. This piece explores how to coach clients through that mental framework.

A Decision That Feels Premature

In conversations with clients, this often appears subtly:

- “I want to sit with it a bit longer.”
- “Let’s see how things evolve.”
- “I’m not against it — I just don’t feel ready yet.”

Unless there is clear urgency, these clients experience a decision as acting too early. Advisors, on the other hand, often operate through a different mental filter. They approach long-term planning as an act of control:

- Decide early
- Reduce noise
- Remove future pressure

For them, structure brings relief. For some clients, however, that same structure feels constraining. Planning and discipline can register as a loss of responsiveness — an obligation to follow a path even if conditions change.
When advisors reinforce confidence with statements like “the data supports it” or “we’ve thought this through,” they address the logic but miss the lived experience. When advice sounds final, the client’s instinct is to slow the process.

How to Spot It

In conversation, you may notice that these clients:

- Use language that softens conclusions: “maybe,” “it depends,” “for now”
- Rarely reject your advice outright
- Ask “What if?” more often than “Which one is best?”
- Feel more comfortable when decisions “emerge” rather than when they are scheduled

Coaching Shift #1: Reframe Commitment as Protection of Freedom

Stop emphasizing what is “right.” Start showing clients how the decision protects future flexibility. Logic is not the missing ingredient. Many clients equate indecision with freedom. From their perspective, postponement preserves optionality. Their attention is anchored in the present, where future consequences feel abstract. In this case, the advisor’s role is to gently redirect attention toward how acting now preserves choice later.

Language that helps:

- “Putting this in place now reduces the chance of being forced into a decision you don’t want.”
- “This keeps your options open when conditions are less favorable.”
- “Making a choice today protects your future freedom to choose.”

The shift is subtle but powerful: the decision is no longer about being right today, but about preserving choice tomorrow.

Coaching Shift #2: Reduce the Psychological Weight

For clients who resist long-term commitment, the difficulty is rarely the goal itself. It is the perceived size and finality of the step required to reach it. Large, one-time decisions carry a heavy psychological burden and ruminating thoughts: What if this is the wrong moment? What if I regret acting now? Progress often improves when the decision is broken into smaller, sequential steps. Instead of proposing a single decisive allocation, structure the strategy as a series of intentional moves.
The client is no longer deciding the entire future — only the next manageable step.

Coaching Shift #3: Make Flexibility Visible in the Design

For these clients, flexibility must be visible in the structure of the plan. One practical approach is to separate the portfolio into distinct sections rather than treating it as a single unified commitment. For example:

- A liquidity component for access and responsiveness
- A long-term component with a patient objective
- A more opportunistic component for optionality

The exact structure will vary by client, but the principle remains: different parts of the portfolio follow different rules. This accomplishes two things:

- It reassures the client that not everything is locked in at once.
- It allows long-term capital to remain invested without triggering constant second-guessing.

When flexibility is built into the design, commitment becomes easier.

Framing Decisions

Long-term investing often fails to gain traction not because clients lack discipline, but because the decision architecture does not match how they experience choice. When advisors adjust how decisions are framed — not just what is recommended — follow-through improves without pressure.

This blog is part of the author’s series on behavioral investing. See more here:

- Managing Client Fear: The Cognitive Skill Every Financial Advisor Should Master
- Coaching Investors Beyond Risk Profiling: Overcoming Emotional Biases
- How Clients’ Investment Goals Reflect Risk Behavior and Hidden Biases

Building Commitment to Long-Term Investing

Three Levers That Drive VC Returns

Venture capitalists often emphasize their ability to pick winners. Yet the data tell a harsher story: roughly 90% of early-stage VCs fail to outperform a simple Nasdaq ETF after fees. True outperformance is confined to a narrow slice of the top decile. The reason is not mystery or macro conditions. It is misplaced focus. Once you strip away what investors do not control, such as exit multiples, market cycles, acquirer behavior, or timing, early-stage venture capital reduces to just three economic levers: entry valuation, loss avoidance, and right-tail frequency. These determine how much cash limited partners ultimately keep. The three levers operate differently, and not equally.

- Entry valuation determines ownership. It scales all outcomes. Conditional on exit, it is the only direct way investors affect realized multiples.
- Loss avoidance reduces the share of capital that goes to zero. It shifts probability mass from complete failures into modest positive outcomes, reshaping the left tail of the distribution.
- Right-tail frequency determines whether a portfolio includes extreme outliers: 20x, 50x, or 100x returns on invested capital.

Stylized Portfolio

Consider a stylized portfolio consistent with the empirical venture literature: 100 equal investments of $1 million each. Sixty return zero; twenty-five return 1.8x; ten return 5x; four return 18x; and one returns 50x. Gross proceeds equal $260 million, implying a gross multiple of 2.6x. With a 23.8% capital gains tax rate and no venture-favorable treatment, the after-tax multiple falls to approximately 2.22x. With loss deductibility and qualified small business stock treatment, which reduces taxes on large gains, the after-tax multiple rises to roughly 2.6x. The precise distribution is not central. What matters is how expected returns respond to proportional improvements in each lever.
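Taking the stated figures at face value, the step from a 2.6x gross multiple to roughly 2.22x after tax can be reproduced by applying the 23.8% rate to the gain portion only. A minimal sketch; the function name and the simple flat-tax treatment (ignoring loss deductibility and QSBS relief) are illustrative assumptions:

```python
def after_tax_multiple(gross_multiple: float, tax_rate: float) -> float:
    """Apply a flat capital-gains tax to the gain portion of a gross multiple.

    Assumes every dollar of proceeds above invested capital is taxed at
    tax_rate, with no loss deductibility or QSBS relief.
    """
    gain = gross_multiple - 1.0           # gain per dollar invested
    return 1.0 + gain * (1.0 - tax_rate)  # principal back, gain taxed

# 2.6x gross at a 23.8% capital-gains rate -> about 2.22x after tax
print(round(after_tax_multiple(2.6, 0.238), 2))  # 2.22
```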
When modeled using a 10% proportional improvement, the results are revealing: a 10% improvement in loss avoidance or valuation discipline increases post-tax returns by roughly 10–12%. A 10% improvement in tail frequency increases returns by only a fraction of that. Now consider how each lever moves performance under that same 10% proportional improvement.

Entry Valuation: Ownership Is the Multiplier

A 10% improvement in entry valuation increases ownership across all deals and scales all outcomes proportionally. If you pay less for the same asset, you own more. If the company succeeds, you capture more upside. If it fails, you lose less: your downside is bounded by your smaller investment, while upside remains convex. Conditional on exit, entry valuation is the only direct way investors influence realized multiples. Exit size, market timing, and acquisition premiums are not controllable; ownership is. Importantly, valuation discipline is learnable. In bilateral transactions, which characterize much of early-stage venture, investors can improve pricing through structured negotiation, rules, and constraints. Evidence from illiquid markets suggests disciplined buyers can meaningfully improve entry pricing over time. In expected value terms, small improvements in valuation compound across every investment in the portfolio.

Loss Avoidance: The Hidden Engine of Returns

A 10% reduction in failures meaningfully lifts portfolio returns. In early-stage ventures, where failure rates are high, even modest reductions in wipeouts compound quickly across a portfolio. This lever works by reshaping the left tail of the distribution. Moving capital from complete losses into low-positive outcomes has an outsized impact on expected value, especially after tax. Losses are only partially deductible; avoided losses translate into retained capital. Unlike tail selection, loss avoidance does not inherently trade off against extreme winners.
Disciplined screening, staged commitments, and explicit downside checks can eliminate obvious false positives without excluding the right tail. Because zeros are common in VC, avoiding them is economically powerful, and empirically improvable.

Right-Tail Frequency: Necessary but Overemphasized

Right-tail frequency is the weakest lever in proportional terms. A 10% increase in the probability of an extreme winner raises the expected contribution of the 50x outcome by 10%, increasing the gross multiple from about 2.6x to roughly 2.65x, a pre-tax improvement of approximately 2%. Post-tax, this effect is amplified because extreme winners are exactly where favorable tax treatment applies. Even so, the post-tax improvement remains materially smaller than for the other two levers. While exposure to extreme outliers is necessary for top-decile performance, the key question is not whether they matter; it is whether investors can reliably increase their probability of selecting them. The evidence is thin. Venture outcomes are slow and noisy, limiting feedback. Even optimistic assumptions suggest that proportional improvements in tail selection move expected returns far less than improvements in valuation discipline or loss avoidance. Tails dominate outcomes ex post because they are rare and discrete, not because small improvements in selecting them are especially powerful in expectation.

Implications for Practitioners

Post-tax expected returns are most sensitive to loss avoidance, next most sensitive to valuation discipline, and least sensitive, by a meaningful margin, to proportional improvements in tail access. For practitioners deciding where to invest scarce learning effort, the implication is straightforward: focus less on trying to identify rare unicorns and more on pricing discipline and avoiding obvious losses. In venture capital, discipline moves expected value more than heroics.
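The contrast between the valuation and tail levers can be checked with rough pre-tax arithmetic on the stylized 2.6x portfolio. In this sketch the constants and function names are illustrative assumptions: 0.50 is the single 50x deal's contribution to the gross multiple ($50M of proceeds over $100M invested), and tax effects, which the article argues amplify loss avoidance in particular, are ignored:

```python
GROSS = 2.60         # stylized pre-tax gross multiple
TAIL_CONTRIB = 0.50  # the single 50x deal's share of the multiple ($50M / $100M)

def valuation_lever(gross: float, improvement: float = 0.10) -> float:
    """Paying 10% less buys ~11% more ownership, scaling every outcome."""
    return gross / (1.0 - improvement)

def tail_lever(gross: float, tail_contrib: float,
               improvement: float = 0.10) -> float:
    """A 10% higher chance of the 50x outcome lifts only that one deal's
    expected contribution by 10%."""
    return gross + tail_contrib * improvement

print(round(valuation_lever(GROSS), 2))           # 2.89 (~11% better)
print(round(tail_lever(GROSS, TAIL_CONTRIB), 2))  # 2.65 (~2% better)
```

The asymmetry is the point: the valuation lever scales the whole distribution, while the tail lever touches only the rare outlier's expected contribution.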


What Makes an Ideal Leveraged Buyout Candidate?

With more than $4.6 trillion of capital committed but yet to be invested across private markets (as of 30 June 2025),[1] fund managers face growing pressure to deploy capital while maintaining discipline in due diligence. Buyouts and growth capital, in particular, are highly competitive, with approximately $2 trillion of dry powder chasing a limited pool of suitable targets. Although the largest proportion of private equity (PE) performance is delivered by the mechanical benefits of leverage,[2] experienced fund managers know that it pays to be selective when making investment decisions.

Slow and Steady Wins the Race

Leveraged buyouts (LBOs) with the best odds of success share a common trait: recurring revenues and predictable cash flows. Indebted companies are exposed to years of compounding interest and, ultimately, the repayment of the loans they borrow. They therefore need to produce regular streams of cash flows. While the business should face no substantial capex or working capital requirements, the best way to secure such regularity in liquidity is to embrace a business model where profits and cash flows are not subject to much variability. As one example, software as a service (SaaS) is better than the delivery of software or hardware on its own. A SaaS provider offers solutions over time, not just a one-off product sale. Likewise, a smartphone maker like Apple is not just a hardware and software designer. The company provides application platforms that attract app developers, making its offering stickier with the end user. Once smartphone users have downloaded multiple apps on their phones, their apps sit in the cloud and are transferable from one phone to the next. The fact that app developers are independent, usually self-employed contractors, also reduces the risk profile of this revenue model from the app platform's standpoint. Apps follow a blockbuster profile, meaning that very few of them are winners.
If Apple had to develop all apps in-house, the fact that many of them generate limited demand would create an uncertain flow of revenue while the salaries of developers would remain fixed. In summary, businesses with a sticky revenue profile and variable (or outsourced) costs are great LBO targets. The value is no longer in a one-off product sale but in recurring platform access. This shift toward solutions rather than products reflects the business model General Electric introduced in the 1980s under Jack Welch's leadership. Moving beyond fridges and aircraft engines, GE became a supplier of options, accessories, maintenance, and even financing services. Proposing a complete, integrated solution makes cash flows more predictable because customer switching costs rise. Subscription- and fee-based revenue models, like the ones favored by fund managers themselves, are better than blockbuster projects like video games and movies because they provide strong visibility. Similarly, businesses with an installed base offer greater predictability. A commonly cited example is Gillette's razor-and-blade model, which ensures customer stickiness. Social networks like Facebook and search engines like Google also benefit from economies of scale through network effects, a modern extension of the installed base principle. Another strong point of predictable, positive cash flows is that they attract lenders, as loan agreements typically offer limited upside participation yet sizeable downside exposure.

Imperfect Market Structure

The best LBO candidates should hold a dominant market position with high barriers to entry. Monopolization favors profit maximization.[3] They should not face the risk of disruption from new technologies nor from new entrants or substitutes. Let's review a few practical implications:

Fragmentation of customer and supplier base: One way to protect cash flows is to trade with many suppliers and clients.
Conversely, being dependent on one or only a handful of key service providers or clients is risky. In the wake of the global financial crisis (GFC), for instance, TPG-sponsored broadcaster Univision was heavily dependent on one key content provider, namely Televisa, which negatively affected its performance during contract renegotiations. Companies with that sort of concentrated sourcing or sales profile represent too much of a risk to undergo an LBO.

Cyclical vs. cycle agnostic: Cyclical companies are not reliable sources of leverageable assets, either. Sectors like retail, especially fashion retail, as well as transaction-based industries like investment banking, air travel, commodities trading, and advertising-dependent segments are best avoided. There is a dangerously complacent phrase in the investing world: "recession proof." No company is truly safe from the negative effects of an economic downturn, especially if it is overleveraged. Nonetheless, subscription-based models, food & beverage manufacturing (a key staple of many PE firms), and businesses that operate on long-term contracts, like airport and toll-road operators, are more resilient.

Popular culture vs. tech culture: For years, outside of downturn-driven corporate turnarounds, LBO fund managers focused almost exclusively on value plays, namely sectors and companies with long product cycles and steady, if unremarkable, growth in sales and cash flows. These businesses rarely experienced large shifts in performance. The tech revolution that started in the business-to-business sectors of the economy and gradually infiltrated the consumer world over the past 30 years has changed the structure of many industries. Companies that were expected to adapt to popular culture, with trends measured in multi-year or even decades-long product life cycles, today face a much more dynamic boom-and-bust, fad-oriented market.
The digitalization of whole swathes of the economy, from information to retail and from entertainment to leisure, has shortened product upgrade cycles to a year, sometimes a few quarters for the most ephemeral video games. The consequences of technological disruption for companies trying to deliver predictability to service debt can be traumatic.[4] PE fund managers must refrain from investing in sectors exposed, or likely to become exposed, to technological changes. A reliable LBO target should require no major strategic changes or wide-scale rationalization.

Optimal Business Fundamentals

Besides market dominance and cash-flow predictability to cover debt commitments, the most sought-after LBO targets are mature, viable, stand-alone businesses. Two other criteria worth mentioning relate to assets and people.

Asset efficiency: For asset-rich businesses, the key question a fund manager must answer is how to get more out of the assets. High


Book Review: Principles of Bitcoin

Principles of Bitcoin: Technology, Economics, Politics, and Philosophy. 2025. Vijay Selvam. Columbia University Press.

Decentralized finance continues to evolve. The relative novelty of a digital asset and means of exchange (bitcoin is, after all, a mere sixteen years old) seems to be an unending source of fascination across all strata of society. The digital currency's mystique will likely only deepen given the heightened attention accorded it by the current American presidential administration, whose proclivity toward less regulation warrants, demands even, a more nuanced understanding of its multifaceted nature. Bitcoin sits at the axis of technology, economics, politics, and philosophy. Governments, policymakers, economists, information technology professionals, and risk officers will all welcome the author's rigorous analysis and lucid explication. CFA® charterholders and those aspiring to the designation will find the treatment of the subject matter a bit different from the more conventional valuation processes accorded public and private markets. Then again, bitcoin is anything but conventional. A skeptic by nature, a trait the author attributes to his métier of law, Vijay Selvam was educated in more traditional concepts of asset valuation to which bitcoin does not lend itself. Yet he brought a deep understanding of complexity to his work with real estate structured products and derivatives, whose performance was the proximate cause of the Great Recession. His involvement in 2008 with the creation of a bailout arrangement for a Wall Street bank in the midst of the debacle left him cynical. Bitcoin made its first appearance shortly thereafter as an alternative to the wreckage of centralized finance recently visited upon economies across the world. The author's self-awareness of a cognitive bias against bitcoin and toward conventional finance led him to the realization that a basic reference work on the subject was lacking.
Principles of Bitcoin offers a multifaceted evaluation of bitcoin in an attempt to place its reputation and notoriety in a thoughtful context. To understand bitcoin is to understand the ascent of money through the interrelationships between economics, politics, technology, and philosophy. It is as much about unlearning traditional concepts of asset valuation as it is about modifying one’s approach to understanding this new thing. Bitcoin’s inventor, Satoshi Nakamoto, anguished over how best to describe bitcoin. Cracking its recondite nature requires the use of first-principles thinking, a disassembly of the subject matter into its fundamental components, and a development and progression of one’s understanding of concepts. Indeed, this holistic approach is central to the book and helps shed light on bitcoin’s true purpose and mechanics. The technical discussion spans five chapters and at times can appear complex, though the author endeavors to make it accessible through numerous references to philosophy, technology, and literature. One may view bitcoin as a scarce digital commodity in some ways akin to gold, whose path-dependent nature and inextricable link to the internet make it a robust asset. Bitcoin’s technology employs cryptography, distributed systems, and economic motivations to produce a digital asset that is robust to the risk of double-spending and transparent on a public ledger. Proof of Work (PoW) ensures a form of decentralized agreement. Bitcoin technology accords it distinct traits of scarcity, divisibility, portability, verifiability, durability, resistance to censorship, and unconfiscatability. Its first-mover status and recognizability, coming on the heels of the global financial crisis, afford it an advantage that would be tough to replicate, let alone beat. 
Against the backdrop of monetary history, which has seen (hyper)inflation and currency debasement, and given that some governments weaponize money against their citizenry, bitcoin would appear to be a safe harbor. It is pseudonymous and knows no borders. It is able in many instances to escape confiscatory risk. It has the potential to serve the unbanked millions in far-flung corners of the world where conventional financial services don't reach. Bitcoin's decentralized architecture makes any attempt by governments to proscribe it difficult, if not impossible. Its transnational and apolitical features would also appear to address what Valéry Giscard d'Estaing, then France's finance minister and later its president, termed the US dollar's "exorbitant privilege," or transactional hegemony, over other currencies. The author argues for bitcoin as a global reserve asset. As a new arrival on the financial landscape, bitcoin has suffered, and will continue to suffer, from malign perception and skepticism, unquestionably a time-honored ritual in the history of finance. That cash and gold have been employed in criminal activity does not make them inherently flawed as instruments of value. Similarly, several well-publicized incidents in which bitcoin has been put to nefarious ends need not sully its reputation. Indeed, financial institutions have been implicated in money laundering schemes on a scale orders of magnitude greater than anything involving the digital currency. A separate and interesting topic is bitcoin's interaction with the environment. Here, the author seeks to dispel misperceptions regarding bitcoin's environmental unfriendliness, arguing that its production can work to facilitate a more efficient transition toward sustainable energy sources. He adduces numerous examples of countries using bitcoin to pursue energy-friendly solutions. Principles of Bitcoin is at once reportorial and editorial. The writing is clear, the references rich.
While it does evidence a bias in favor of bitcoin, Selvam's compendium informs and educates. Readers would do well to approach the subject matter with intellectual curiosity and patience. Though coverage of the topic from many perspectives has been extensive, these are nonetheless early days. You don't know what you don't know. Through this work, Vijay Selvam endeavors to close that gap.


Three Risks of Relying on the S&P 500 in Retirement Planning

For the past 15 years, investors have been rewarded for doing one thing well: owning the S&P 500. Cap-weighted, growth-heavy portfolios dominated returns and reinforced expectations that strong recent performance would persist. The risk is not what those portfolios delivered, but what investors now assume they will deliver next, and how those assumptions hold up once the objective shifts from beating a benchmark to funding retirement income. When success is defined by generating consistent, absolute returns rather than relative outperformance, the trade-offs change. Drawdowns matter more, volatility becomes asymmetric, and the order of returns can overwhelm long-term averages, particularly once withdrawals begin. Using rolling 15-year data across major US equity styles, this analysis addresses three practical questions that matter for retirement outcomes:

- How do trailing returns influence future return expectations?
- How often do different portfolio designs meet an 8% long-term return target?
- How do withdrawals affect drawdown risk once investors shift from accumulation to spending?

1. Trailing Returns and Forward Expectations

One of the hardest habits for investors to break is assuming that recent performance will continue, even when "recent" means a decade or more. That may sound discouraging for investors in broad market passive or growth-oriented portfolios, but history has also shown a better outcome for strategies that emphasized diversification or valuation discipline, such as equal-weight, value, or defensive approaches.
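The sequence-of-returns point above, that the order of returns can overwhelm long-term averages once withdrawals begin, can be illustrated with a toy calculation; the starting balance, return path, and withdrawal amount below are hypothetical:

```python
def ending_balance(returns, start=1_000_000.0, withdrawal=50_000.0):
    """Grow the balance by each year's return, then take a fixed withdrawal."""
    balance = start
    for r in returns:
        balance = balance * (1.0 + r) - withdrawal
    return balance

good_first = [0.20, 0.10, -0.15]        # strong years first
bad_first = list(reversed(good_first))  # same returns, loss first

# Without withdrawals the order is irrelevant; with them, the paths diverge.
print(round(ending_balance(good_first)))  # 982750
print(round(ending_balance(bad_first)))   # 946000
```

Both paths have identical average returns, yet the early loss combined with fixed spending leaves the second retiree permanently behind.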
For these portfolios, looking back on the last 15 years has historically had little bearing on what the next 15 would bring. Even after strong periods, diversified, value-focused, or defensive quality-oriented styles did not experience the same sharp drop-off in returns that cap-weighted or growth investors often faced. One potential cause of this divergence is portfolio construction. Cap-weighted and growth portfolios systematically increased exposure to recent winners, magnifying returns during strong periods while embedding risks that only surfaced during market stress. By contrast, diversified, value-focused, or defensive quality-oriented portfolios relied less on multiple expansion and more on fundamental drivers, while systematic rebalancing trimmed winners and added to laggards. These structural features enforced valuation discipline over time and helped mitigate the boom-bust pattern that historically plagued concentrated growth exposures. The data confirmed this intuition. As illustrated in Figures 1 to 7, rolling 15-year analysis showed a strong inverse relationship between trailing and forward returns for cap-weighted and growth portfolios. Diversified, value-focused, or defensive quality-oriented styles, on the other hand, exhibited muted cyclicality. In other words, the portfolios that looked safest based on strong trailing performance carried the greatest forward risk, and those that appeared "boring" often delivered more stable outcomes across full cycles.

Figure 1: The Next 15 Years: Rethinking Equity Style Risk.

| Portfolio | Trailing 15-Year Return | Estimated Next 15-Year Return | Median 15-Year Return | R² (Trailing vs. Forward) |
|---|---|---|---|---|
| Top 500 Growth | 17.8% | 6.1% | 11.4% | .79 |
| Top 500 Cap Weighted | 14.2% | 8.3% | 10.5% | .74 |
| Top 500 Equal Weighted | 12.3% | 11.7% | 11.7% | .54 |
| Top 500 Value | 12.9% | 14.5% | 13.3% | .47 |
| Top 500 Low Vol VMQ | 12.1% | 13.9% | 12.9% | .28 |
| Top 500 Low Vol | 11.5% | 11.1% | 10.3% | .51 |

Disclosures: Past performance is no guarantee of future results.
All the returns in the chart above are in reference to unmanaged, hypothetical security groupings created exclusively for analytical purposes. These are hypothetical styles based on descriptive characteristics. Please see the appendix for definitions and citations.

Figure 2: Growth's Next 15 Years May Not Look Like the Last 15 Years.
Figure 3: Market Cap-Weighting's Next 15 Years May Not Look Like the Last 15 Years.
Figure 4: Equal Weight's Last 15 Years Have Been Consistent With Long-Term Norms.
Figure 5: Value's Last 15 Years: Right in Line With Its Long-Term Return Profile.
Figure 6: Low Vol VMQ's Forward Prospects Look More Constructive.
Figure 7: Low Vol's Next 15 Years May Look Like the Last 15 Years.

For cap-weighted and growth portfolios, the regression lines showed a pronounced negative slope: periods of exceptional trailing returns were typically followed by much lower forward returns. For example, over the last 15 years the Top 500 Growth delivered 17.8%, but the forward 15-year expectation is just 6.1%. This pattern is consistent with valuation mean reversion and the cyclicality of market leadership.

2. Benchmark Performance vs. Your Retirement Target

This section analyzes rolling 15-year returns for major US equity styles with a focus on the practical implications for retirement savers. Their success does not depend on beating the S&P 500 but rather on achieving the consistent, absolute returns required to hit retirement savings targets. Most retirement plans rely on a return from equities of about 8% per year, a number baked into many glide paths, actuarial models, and retirement calculators. That assumption is critical because it determines whether portfolios grow enough to fund future withdrawals. Overshooting that target, thanks to strong markets or product outperformance, is a welcome bonus. But undershooting it may be catastrophic. It may mean delaying retirement, at the cost of precious time, or accepting a lower standard of living for decades.
On the surface, the average cap-weighted or growth portfolio return looked very attractive, even across decades that included both bull and bear markets. But a closer look revealed something troubling: in nearly a third of the 15-year periods, these portfolios failed to reach the critical 8% annualized return. By contrast, diversified, value-focused, or defensive quality-oriented portfolios dramatically reduced that risk. In fact, the chance of missing the 8% target dropped to nearly zero for value-focused portfolios, and simple equal-weighted portfolios had only a 15% shortfall risk. While these approaches were less likely to fully capture the best periods (think fewer "home runs"), they had better odds of meeting the goal that mattered most: fully funding a secure retirement.

Figure 8: Market Cap-Weighting Had the Most Sub-8% Returns.

Disclosures: Past performance is no guarantee
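The rolling-window shortfall test described above can be sketched as follows; the helper names and the return stream are hypothetical stand-ins for the study's actual style data:

```python
def rolling_cagr(annual_returns, window=15):
    """Annualized return over every rolling `window`-year span."""
    cagrs = []
    for i in range(len(annual_returns) - window + 1):
        growth = 1.0
        for r in annual_returns[i:i + window]:
            growth *= 1.0 + r
        cagrs.append(growth ** (1.0 / window) - 1.0)
    return cagrs

def shortfall_rate(annual_returns, target=0.08, window=15):
    """Share of rolling windows whose annualized return misses `target`."""
    cagrs = rolling_cagr(annual_returns, window)
    return sum(c < target for c in cagrs) / len(cagrs)

# Hypothetical stream: a strong decade, a slump, then a flat stretch.
returns = [0.15] * 10 + [-0.05] * 5 + [0.02] * 10
print(shortfall_rate(returns))  # 1.0 -- every 15-year window misses 8%
```

The example shows why shortfall frequency, not average return, is the relevant statistic: even a stream that begins with a strong decade can miss the target in every rolling window once weaker years enter the calculation.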


Lincoln’s Blueprint for Ethical AI

"Let us have faith that right makes might." — Abraham Lincoln, Cooper Union Address[1]

Abraham Lincoln, the 16th president of the United States, forged his leadership during a period of profound national upheaval and rapid technological change. Just as the telegraph, railroad, and printing press transformed the 19th century, artificial intelligence (AI), digital networks, machine learning, and automated decision-making systems are reshaping modern life. The values Lincoln emphasized in the 1860s (responsibility, transparency, and moral restraint) offer a timely framework for guiding AI development with ethical guardrails that ensure technology serves humanity, not the reverse. While we can only speculate about what Lincoln would have thought of AI, history suggests he would have embraced its potential while insisting that its advancement remain grounded in law, ethics, and human dignity. Business leaders and investors can draw from Lincoln's conviction that free enterprise and technological innovation should elevate fundamental human worth rather than erode it.

An Innovator with Moral Restraint

To be sure, Lincoln was himself an innovator. He remains the only US president to hold a patent, awarded in 1849 for a device to lift stranded boats over shoals, an innovation designed to improve transportation efficiency and expand commercial access.[2] As president, he championed federal investment in railroads and telegraph networks, signing the Pacific Railway Act in 1862 to connect the nation through infrastructure that expanded commerce and communication.[3] Lincoln notably embraced the transformational power of the telegraph as a tool for instantaneous communications. During the Civil War, he put considerable effort into centralizing and ramping up the US Military Telegraph Corps.
David Homer Bates, who managed the telegraph office, reported that "during the Civil War the President spent more of his waking hours in the War Department telegraph office than in any other place, except the White House."[4] Yet Lincoln never conflated technological speed with sound judgment. For example, he often waited for additional dispatches during the Overland Campaign before approving military movements, resisting the urge to allow the speed of information to supplant sober judgment.[5] Historians describe the telegraph office as Lincoln's "war room," where he took in real-time intelligence but insisted that decisions remain a matter of human responsibility.[6] Similarly, AI should be viewed as an enhancement to human decision-making, not a replacement. Recent advancements in medicine have allowed AI to make faster, more accurate diagnoses of breast cancer than human radiologists, but practitioners caution that algorithms should inform rather than override the judgment of clinical professionals.[7] History suggests Lincoln would surely embrace this idea rather than swap out human judgment and intuition.

Ethics Over Efficiency

In his First Annual Message, delivered to Congress on December 3, 1861, Lincoln declared that "labor is prior to and independent of capital," adding that capital is only the "fruit of labor."[8] In this speech, where he uses the word "labor" thirty-one times, Lincoln argues for maintaining a moral foundation for business operations in which human labor, creativity, and dignity are the dominant factors over capital, profits, and efficiency. That perspective resonates amid modern debates over AI and automation. While some business leaders predict widespread job displacement, Lincoln viewed labor as central to human purpose and self-worth. Innovation, in his view, should expand opportunity rather than reduce people to expendable inputs.
Rather than viewing labor as merely a means to an end whose sole purpose is the generation of financial profit, Lincoln considered labor an essential element in defining one's purpose in life, a core foundation of one's own human dignity.[9] In today's AI paradigm, Lincoln's message remains as relevant as ever. Some of the nation's most prominent business leaders predict that AI will eventually eliminate all human work,[10] and the largest corporations plan to invest in automation at the expense of human labor and welfare.[11] A recent report suggests algorithmic scheduling systems in retail and logistics tend to prioritize speed and profit at the expense of employee stability and well-being.[12] By contrast, AI-powered education platforms that allow workers to retrain and advance into higher-skilled roles echo Lincoln's belief that labor should be elevated rather than replaced.[13] Lincoln's belief that innovation should elevate rather than replace human work suggests he would support the latter and reject the former, where automation is used solely to maximize profits by displacing labor.

Law as the Moral Boundary of Innovation

Before entering politics, Lincoln was a lawyer who believed deeply in the rule of law. He warned that respect for law must become the nation's "political religion," providing a safeguard against injustice and abuse of power.[14] While he respected the constitutional boundaries of his office, even while stretching them in times of crisis, he consistently viewed his legal decisions through a lens of ethical responsibility. AI presents similar challenges. Trained on imperfect human data, AI systems can perpetuate bias, undermine privacy, and concentrate power. Documented failures, from discriminatory hiring algorithms to biased facial-recognition systems, underscore the risks of unregulated deployment.
From unregulated facial-recognition systems to loose oversight of large language models (LLMs), there has never been a more pressing time than now to take Lincoln's advice fully under consideration.[15][16] Lincoln's legal sensibility suggests that regulation should not stifle innovation but guide it. Clear, enforceable guardrails can help ensure that AI strengthens democratic equality and civil rights rather than eroding them. For long-term investors, legal clarity and ethical governance are not obstacles to growth but rather prerequisites for sustainable value creation.[17]

Human Dignity at the Center of Progress

Lincoln's vision for America was not limited to preserving the Union. He wanted to preserve a Union "dedicated to the proposition that all men are created equal."[18] Human dignity stood at the center of his moral and political vision. Scholars of AI ethics note that LLMs and predictive tools, if left unchecked, could reinforce social biases or marginalize vulnerable groups. They can reduce people to data points, make decisions without human oversight, invade privacy through surveillance, or reinforce unfair stereotypes.[19] Whether in his debates with Stephen Douglas or in his public
