📖 Research Document · v3.2

Relative Valuation Model
Methodology & Framework

Comprehensive, institutional-grade documentation of the quantitative frameworks, data pipelines, and statistical methodologies that power the Relative Valuation Dashboard. This document is intended for sophisticated investors, analysts, and researchers who require full transparency into model mechanics.

Model Version 3.2 · Last Updated: February 10, 2026 · ~16,000 Words · 16 Sections

1. Investment Philosophy & Approach

The Relative Valuation Model is built on a foundational premise: a company's intrinsic worth, while ultimately unknowable with precision, can be meaningfully approximated by studying how the market prices similar businesses. This is not a momentum overlay, a technical signal, or a sentiment gauge. It is a fundamental, bottom-up valuation framework that treats market multiples as the collective judgment of thousands of institutional investors, sell-side analysts, and market participants.

Why Relative Valuation?

Discounted cash flow (DCF) models are theoretically elegant but suffer from acute sensitivity to terminal growth rate and discount rate assumptions; small changes in these inputs produce vastly different outputs. Relative valuation sidesteps this fragility by asking a different question: given how the market prices comparable businesses, what should this company trade at?

The model combines this relative approach with absolute quality metrics, historical context, and stochastic simulation to produce a blended fair value estimate that is more robust than any single methodology alone.

Core Principles

  1. Multi-method convergence. No single valuation multiple is authoritative. The model blends 4–10 methodologies, weighted by data quality and relevance, to reduce single-method bias.
  2. Quality deserves a premium. A company with 48% ROIC should not trade at the same multiple as a 12% ROIC peer. The model applies a justified quality premium calibrated on historical back-tests.
  3. Margins revert. Peak earnings create valuation traps. The model detects above-trend margins and applies cyclicality haircuts to prevent overvaluation at cycle peaks.
  4. Confidence matters. A fair value estimate without a confidence score is a guess. Every output includes a 0–100 confidence assessment based on data depth, peer fit, and model agreement.
  5. Transparency over opacity. Every weight, every adjustment, and every assumption is visible to the user. There are no black boxes.
โš ๏ธ Important DisclaimerThis model provides quantitative analysis for educational and research purposes only. It is not investment advice, a recommendation, or a solicitation. Model outputs are inherently uncertain and should never be the sole basis for financial decisions. Consult a qualified financial advisor before making investment decisions.

2. Data Infrastructure & Pipeline

The model operates on a proprietary data pipeline that ingests, normalizes, and validates financial data from multiple institutional-grade sources. Data integrity is the foundation of every valuation output โ€” no model can compensate for contaminated inputs.

Primary Data Sources

Fundamental Data — SEC EDGAR
Income statements, balance sheets, and cash flow statements parsed directly from 10-K (annual) and 10-Q (quarterly) filings using XBRL/iXBRL structured data. No third-party interpretation layer.

Market Data — Exchange Feeds
End-of-day pricing, volume, market capitalization, and enterprise value computed from institutional-grade exchange data feeds with T+0 processing.

Forward Estimates — Consensus Aggregation
Forward EPS, revenue, and EBITDA estimates aggregated from sell-side analyst consensus. Weighted by recency and analyst track record where available.

Industry Classification — Proprietary 4-Level Taxonomy
Custom L1→L4 industry mapping refined from SIC/NAICS codes and augmented with business-model classification (standard, bank, REIT, insurance, utility, biotech).

Data Processing Pipeline

  1. Ingestion: Raw XBRL filings parsed within 24 hours of SEC publication. Market data ingested at end-of-day.
  2. Normalization: Financial line items mapped to a standardized 180+ metric schema. Handles US-GAAP presentation differences across filers (e.g., "Revenue" vs "Net Revenue" vs "Total Revenue").
  3. Validation: Automated cross-statement reconciliation (e.g., Net Income on IS = Net Income on CF). Anomaly detection flags outlier values >3σ from historical norms.
  4. Derived Metrics: 90+ computed ratios (ROIC, FCF yield, Piotroski F-Score, etc.) calculated from normalized fundamentals.
  5. Screening Engine: Pre-computed screening tables updated daily with current multiples, growth rates, and quality scores for the full US equity universe.
📊 Data Coverage: The pipeline covers 7,500+ US-listed equities across NYSE, NASDAQ, and AMEX. Historical fundamental data extends 7+ years for most companies, with 10+ years available for large-cap names. Market data history extends 20+ years for price-based metrics.

3. Peer Universe Construction

The quality of a relative valuation is entirely dependent on the quality of the peer comparison set. An inappropriate peer universe renders all subsequent analysis meaningless. The model uses a hierarchical, four-level industry classification system to construct the most relevant peer group for each company.

Four-Level Industry Taxonomy

| Level | Description | Example (Apple) | Typical Count |
|---|---|---|---|
| L1 — Sector | Broadest classification | Technology | 800–1,200 |
| L2 — Industry | Primary peer group (used for multiples) | Technology Hardware | 40–150 |
| L3 — Sub-Industry | Refined peer group | Consumer Electronics | 15–50 |
| L4 — Peer Cluster | Closest comparables | Mega-Cap Tech (MSFT, GOOGL, META, AMZN) | 8–20 |

Peer Selection Rules

Scatter Plot Visualization

The peer comparison scatter plot maps each company's P/E ratio (y-axis) against 5-year EPS CAGR (x-axis). Bubble size represents market capitalization. A regression line (quality curve) shows the expected relationship between growth and valuation, with R² measuring goodness-of-fit. Companies below the quality curve are potentially undervalued relative to their growth profile.

4. Multiple Selection & Prioritization

Not all valuation multiples are created equal. A trailing P/E ratio is meaningless for a pre-revenue biotech, and EV/EBITDA is inappropriate for a bank. The model dynamically selects and prioritizes multiples based on the target company's business model, data availability, and sector relevance.

Available Multiples

| Multiple | Tier | Best For | Limitations |
|---|---|---|---|
| P/E Ratio | Primary | Profitable companies, earnings-driven sectors | Distorted by non-recurring items, capital structure |
| EV/EBITDA | Primary | Capital-intensive industries, M&A comps | Ignores capex requirements, working capital |
| P/FCF | Primary | Cash-generative businesses, mature companies | Volatile for high-growth / capex-heavy firms |
| EV/EBIT | Secondary | Operating earnings focus, cross-border comps | Affected by D&A policies |
| P/Sales | Secondary | Pre-profit companies, revenue-stage firms | Ignores profitability entirely |
| EV/Sales | Secondary | Capital structure-neutral revenue comps | Same as P/Sales |
| P/Book | Secondary | Asset-heavy, financial companies | Meaningless for asset-light businesses |
| P/FFO | Primary (REIT) | Real Estate Investment Trusts | N/A for non-REITs |
| P/AFFO | Primary (REIT) | Adjusted funds from operations | Varies by REIT definition |
| P/TBV | Primary (Bank) | Banks, financials | N/A for asset-light companies |
| Dividend Yield | Supplementary | Income-oriented investors, utilities | Misleading for non-dividend payers |

Dynamic Weight Assignment

Weights are assigned algorithmically based on three criteria:

  1. Tier classification — Primary multiples receive 2–4× the weight of secondary multiples.
  2. Data availability โ€” Multiples with <10 peers contributing data receive reduced weight. Multiples with no historical data receive zero weight.
  3. Business model relevance โ€” Banks receive zero weight on EV/EBITDA; REITs receive maximum weight on P/FFO and P/AFFO; standard companies receive maximum weight on P/E and EV/EBITDA.

Weights are normalized to sum to 100%. The resulting blend is the raw fair value before quality and cyclicality adjustments.
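The three rules above can be sketched in code. The specific tier multiplier (3× for primary), the 50% thin-peer discount, and the example inputs are illustrative assumptions; only the rules themselves come from the text.

```python
# Sketch of the dynamic weight assignment. The 3x tier multiplier, the 50%
# thin-peer discount, and the inputs below are illustrative assumptions.
TIER_WEIGHT = {"primary": 3.0, "secondary": 1.0}  # primary gets 2-4x; 3x assumed

def assign_weights(multiples, business_model="standard"):
    """multiples: name -> {"tier", "peer_count", "has_history"}."""
    raw = {}
    for name, info in multiples.items():
        w = TIER_WEIGHT[info["tier"]]
        if not info["has_history"]:
            w = 0.0                      # no historical data -> zero weight
        elif info["peer_count"] < 10:
            w *= 0.5                     # thin peer coverage -> reduced weight
        if business_model == "bank" and name == "EV/EBITDA":
            w = 0.0                      # EV/EBITDA is excluded for banks
        raw[name] = w
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()} if total else raw

weights = assign_weights({
    "P/E":       {"tier": "primary",   "peer_count": 42, "has_history": True},
    "EV/EBITDA": {"tier": "primary",   "peer_count": 42, "has_history": True},
    "P/Sales":   {"tier": "secondary", "peer_count": 7,  "has_history": True},
})
# weights sum to 1.0 and the two primary multiples dominate the blend
```
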

5. Fair Value Blend Methodology

Each contributing multiple produces an implied fair value: the price at which the target company would trade if it were valued at the industry median multiple. The blended fair value is the weighted average of all implied values.

Implied Value Calculation

For earnings-based multiples (P/E, P/FCF):
Implied Price = Industry Median Multiple × Company's Metric (EPS, FCF/share, etc.)

For enterprise-value multiples (EV/EBITDA, EV/EBIT, EV/Sales):
Implied EV = Industry Median Multiple × Company's Metric
Implied Price = (Implied EV − Net Debt) ÷ Shares Outstanding, where Net Debt = Total Debt − Cash
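As a worked example, here are both formulas in code. All inputs are hypothetical (a 24× median P/E with $6.50 EPS, and a 14× EV/EBITDA comp with figures in $M):

```python
def implied_price_earnings(median_multiple, per_share_metric):
    # Earnings-based multiples (P/E, P/FCF): price = multiple x per-share metric
    return median_multiple * per_share_metric

def implied_price_ev(median_multiple, metric, net_debt, shares_outstanding):
    # EV-based multiples: back out equity value from the implied enterprise value
    implied_ev = median_multiple * metric
    return (implied_ev - net_debt) / shares_outstanding

pe_value = implied_price_earnings(24.0, 6.50)            # -> 156.0
ev_value = implied_price_ev(14.0, 1200.0, 2400.0, 80.0)  # $M EBITDA, $M net debt, M shares -> 180.0
```
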

Three-Source Fair Value

Each multiple can produce up to three implied values: industry-implied (the peer-median multiple applied to the company's metric), historical-implied (the company's own historical multiple), and forward-implied (based on consensus forward estimates).

The model prioritizes industry-implied values (most weight) but incorporates historical and forward perspectives to reduce recency bias and consensus herding risk.

Weighted Blend Formula

Raw Fair Value = Σ (Implied Value_i × Weight_i)
Adjusted Fair Value = Raw Fair Value × Quality Adjustment × Cyclicality Adjustment

Where:
— Quality Adjustment is a multiplier from the quality framework (Section 6)
— Cyclicality Adjustment is a haircut from the cyclicality framework (Section 7)
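A minimal sketch of the blend; the implied values, weights, and adjustment multipliers below are all illustrative:

```python
def blended_fair_value(implied, weights, quality_adj=1.0, cyclicality_adj=1.0):
    # Weighted average of implied values, then multiplicative adjustments
    raw = sum(implied[m] * weights[m] for m in implied)
    return raw * quality_adj * cyclicality_adj

fv = blended_fair_value(
    implied={"P/E": 156.0, "EV/EBITDA": 180.0, "P/FCF": 150.0},
    weights={"P/E": 0.45, "EV/EBITDA": 0.35, "P/FCF": 0.20},  # sum to 1.0
    quality_adj=1.12,        # +12% justified quality premium (Section 6)
    cyclicality_adj=0.95,    # 5% cyclicality haircut (Section 7)
)
# raw blend 163.20 -> adjusted fair value ~173.64
```
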

6. Quality Adjustment Framework

The industry median multiple treats all companies as equal. But a company with vastly superior profitability, faster growth, and a fortress balance sheet deserves a premium. The Quality Adjustment Framework quantifies this justified premium (or discount) based on three fundamental pillars.

Three-Pillar Scoring

Pillar 1 · Growth (50% weight)
5-year revenue CAGR, 5-year EPS CAGR, and forward revenue growth vs. industry medians. Scored −1.0 to +1.0 based on percentile rank within the peer group.

Pillar 2 · Profitability (35% weight)
ROIC (or ROE for financials), net margin, gross margin, and FCF margin vs. industry. Companies with ROIC >30% score in the top quartile; ROIC <8% scores negatively.

Pillar 3 · Balance Sheet Strength (15% weight)
Net Debt/EBITDA, interest coverage ratio, current ratio. Net cash positions score maximum. Leverage >4× EBITDA triggers negative scoring.

Composite Premium Calculation

Quality Score = (Growth × 50%) + (Profitability × 35%) + (Balance Sheet × 15%)
Justified Premium = Quality Score × Elasticity Coefficient

Elasticity Coefficient: calibrated at 0.20 for US large-cap (i.e., a perfect quality score of +1.0 justifies a 20% premium to the industry median). Capped at ±40% to prevent runaway extremes.
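The pillar weighting, elasticity, and cap translate directly into code; the example pillar scores here are hypothetical:

```python
ELASTICITY = 0.20   # a +1.0 quality score justifies a 20% premium
CAP = 0.40          # premium/discount capped at +/-40%

def quality_adjustment(growth, profitability, balance_sheet):
    # Each pillar score is in [-1.0, +1.0], per the percentile-rank scoring
    score = 0.50 * growth + 0.35 * profitability + 0.15 * balance_sheet
    premium = max(-CAP, min(CAP, score * ELASTICITY))
    return 1.0 + premium   # multiplier applied to the raw fair value

# Hypothetical high-quality name: strong growth, top-quartile ROIC, net cash
adj = quality_adjustment(growth=0.8, profitability=0.9, balance_sheet=1.0)
# score 0.865 -> +17.3% premium -> multiplier 1.173
```
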
📈 Back-Test Result: The quality premium elasticity was calibrated on 12 years of US large-cap data (2012–2024). Companies in the top quality quintile traded at a median 22% premium to industry peers; the bottom quintile traded at an 18% discount. The 0.20 elasticity coefficient captures this relationship without overfitting.

7. Cyclicality Detection & Adjustment

One of the most dangerous valuation traps is buying a cyclical company at peak earnings. The P/E looks cheap because earnings are temporarily inflated; when margins normalize, both earnings and the multiple contract simultaneously (the "double whammy"). The model's cyclicality framework detects this risk.

Margin Z-Score

The primary detection mechanism is the Margin Z-Score: a standardized measure of how far the current net margin deviates from its 7-year historical average.

Margin Z-Score = (Current Net Margin − 7Y Average Net Margin) ÷ 7Y Std Dev of Net Margin

Interpretation:
— Z < +0.5σ: Normal (no adjustment)
— Z = +0.5σ to +1.0σ: Elevated (5%–10% haircut)
— Z = +1.0σ to +2.0σ: High (10%–15% haircut)
— Z > +2.0σ: Extreme (15%–20% haircut)
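A sketch of the Z-Score and the haircut schedule. The text gives a haircut range per band; linear interpolation inside each band is an assumption, and the margin history is made up.

```python
from statistics import mean, stdev

def margin_z_score(margin_history, current_margin):
    return (current_margin - mean(margin_history)) / stdev(margin_history)

def cyclicality_haircut(z):
    # Band edges from the schedule above; linear interpolation within each
    # band is an assumption.
    if z < 0.5:
        return 0.0
    if z < 1.0:
        return 0.05 + 0.05 * (z - 0.5) / 0.5    # 5%-10% band
    if z < 2.0:
        return 0.10 + 0.05 * (z - 1.0) / 1.0    # 10%-15% band
    return min(0.20, 0.15 + 0.05 * (z - 2.0))   # 15%-20% band, capped

margins = [0.12, 0.11, 0.13, 0.12, 0.11, 0.12, 0.18]   # 7 years; last year is a peak
z = margin_z_score(margins, current_margin=0.18)        # roughly +2.2 sigma
haircut = cyclicality_haircut(z)                        # lands in the 15%-20% band
# adjusted fair value = raw fair value * (1 - haircut)
```
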

Cycle Position Labels

Normalized Fair Value

When cyclicality is detected (Z-Score > +1.0σ), the model computes a normalized fair value: what the company would be worth if margins reverted to their 7-year average. This is displayed as the "downside scenario" in the Peak Risk Analysis section.

8. Monte Carlo Simulation Engine

A single-point fair value estimate creates false precision. The Monte Carlo engine generates a probability distribution of fair values by running 10,000 simulations with randomized inputs, producing percentile-based confidence intervals that acknowledge the inherent uncertainty in valuation.

Simulation Parameters

| Parameter | Distribution | Source |
|---|---|---|
| Industry median multiple | Normal(μ, σ) where σ = IQR/1.35 | Cross-sectional peer dispersion |
| Quality premium | Uniform(−5%, +5%) around computed premium | Elasticity uncertainty |
| Margin variation | Normal(current, historical_std) | 7-year margin volatility |
| Growth rate variation | Triangular(bear, base, bull) | Analyst consensus range |
| Multiple weight noise | Dirichlet perturbation (α=10) | Weight uncertainty |
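A toy version of the simulation loop, randomizing only the median multiple and the quality premium (the production engine also perturbs margins, growth, and the multiple weights). Every parameter value below is illustrative.

```python
import random
random.seed(42)   # deterministic for illustration

def simulate(median_multiple, multiple_iqr, eps, base_premium, n=10_000):
    sigma = multiple_iqr / 1.35                   # sigma = IQR / 1.35, per the table
    values = []
    for _ in range(n):
        m = random.gauss(median_multiple, sigma)              # peer-dispersion draw
        premium = base_premium + random.uniform(-0.05, 0.05)  # elasticity uncertainty
        values.append(m * eps * (1.0 + premium))
    values.sort()
    return {p: values[n * p // 100] for p in (10, 50, 90)}

pcts = simulate(median_multiple=24.0, multiple_iqr=6.0, eps=6.50, base_premium=0.10)
# pcts[10] < pcts[50] < pcts[90]; pcts[50] sits near 24 * 6.50 * 1.10
```
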

Output Percentiles

P10 · Bear Case
10th percentile: 90% of simulations produce a higher value. Represents the downside scenario.

P50 · Median Case
50th percentile: the central tendency of the distribution. Most comparable to the blended fair value.

P90 · Bull Case
90th percentile: only 10% of simulations produce a higher value. Represents the upside scenario.

Conviction Zone Mapping

The current stock price's position within the Monte Carlo distribution determines the conviction zone:

9. Confidence Scoring System

Every fair value estimate should be accompanied by a confidence score. A model that says "fair value is $200, confidence 92" communicates something fundamentally different from "fair value is $200, confidence 38." The confidence score (0–100) measures the reliability of the model's inputs and internal consistency, not the probability that the stock will reach fair value.

Six-Component Breakdown

| Component | Weight | Measures | Score Range |
|---|---|---|---|
| Data Completeness | 25% | How many multiples have valid data (current + historical + peer) | 0–100 |
| Peer Quality | 20% | Number of valid peers, dispersion of peer multiples, R² of regression | 0–100 |
| Historical Depth | 15% | Years of fundamental data available (7 = max, <3 = penalty) | 0–100 |
| Earnings Stability | 15% | Coefficient of variation of EPS over 5 years; stable = high score | 0–100 |
| Model Agreement | 15% | Standard deviation of implied values across methodologies; lower = higher confidence | 0–100 |
| Cyclicality Penalty | 10% | Margin Z-Score; elevated margins reduce confidence | 0 to −30 |
Confidence = Σ (Component Score_i × Weight_i) + Cyclicality Penalty
Floored at 0, capped at 100.
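The aggregation as code. The table lists the cyclicality penalty with a 10% weight while the formula adds it directly; this sketch follows the formula and treats the penalty as a plain additive term, which is an interpretation. Example scores are hypothetical.

```python
WEIGHTS = {
    "data_completeness":  0.25,
    "peer_quality":       0.20,
    "historical_depth":   0.15,
    "earnings_stability": 0.15,
    "model_agreement":    0.15,
}

def confidence_score(components, cyclicality_penalty):
    """components: 0-100 scores; cyclicality_penalty: 0 to -30, additive."""
    base = sum(components[k] * w for k, w in WEIGHTS.items())
    return max(0.0, min(100.0, base + cyclicality_penalty))   # floor 0, cap 100

score = confidence_score(
    {"data_completeness": 90, "peer_quality": 80, "historical_depth": 100,
     "earnings_stability": 70, "model_agreement": 85},
    cyclicality_penalty=-10,
)
# base 76.75, penalty -10 -> 66.75
```
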

10. Signal Strength Composite Score

The Signal Strength Score (0.00–1.00) is the model's single most actionable output. It synthesizes valuation attractiveness, model reliability, risk factors, and Monte Carlo positioning into one number. This is not a buy/sell recommendation; it is a quantitative assessment of the current setup's favorability.

Four-Component Weighted Formula

Signal Score = (MOS × 40%) + (Confidence × 30%) + (Margin Risk × 15%) + (P50 Spread × 15%)

| Component | Weight | Raw Input | Normalization |
|---|---|---|---|
| Margin of Safety (MOS) | 40% | Upside % to fair value | Capped at 1.0 (at +30% upside); linear below; zero at 0% upside |
| Model Confidence | 30% | Confidence score (0–100) | Divided by 100; score of 78 → 0.78 |
| Margin Risk Penalty | 15% | Margin Z-Score | Negative contribution; Z of +1.8 → −0.18; capped at −0.30 |
| P50 Price Spread | 15% | MC P50 vs. current price % | Positive if P50 > price; capped at ±0.15 |
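A sketch following the formula and normalization rules above; where the text is ambiguous (for instance, whether the margin-risk cap applies before or after weighting), the choices here are assumptions, and the inputs are hypothetical.

```python
def signal_score(upside_pct, confidence, margin_z, p50_spread_pct):
    mos = max(0.0, min(1.0, upside_pct / 0.30))          # linear, capped at +30% upside
    conf = confidence / 100.0                            # 0-100 -> 0.00-1.00
    margin_risk = -min(0.30, max(0.0, margin_z) * 0.10)  # Z +1.8 -> -0.18, floor -0.30
    p50 = max(-0.15, min(0.15, p50_spread_pct))          # capped at +/-0.15
    return 0.40 * mos + 0.30 * conf + 0.15 * margin_risk + 0.15 * p50

# Hypothetical setup: 24% upside, confidence 78, mildly elevated margins,
# Monte Carlo P50 sitting 10% above the current price
score = signal_score(upside_pct=0.24, confidence=78, margin_z=0.6, p50_spread_pct=0.10)
# -> 0.56
```
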

Signal Labels

11. Model Action Zones

The Action Zone framework translates the quantitative model output into concrete, price-level-based zones. These zones represent model-derived price thresholds where the risk/reward profile changes meaningfully; they are not buy/sell targets.

Zone Construction

Threshold Conditions

Beyond price levels, the model tracks four quantitative conditions that must all be satisfied for the model to register a "favorable" signal:

  1. Price ≤ Primary Zone: Market price at or below the model's primary threshold.
  2. Confidence ≥ 75: Sufficient data quality and model agreement.
  3. Margin Z-Score ≤ +1.0σ: Margins not dangerously elevated.
  4. P50 Fair Value > Price: Monte Carlo median confirms upside.
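The four gates combine into a single predicate; the function and field names here are illustrative.

```python
def favorable_signal(price, primary_zone, confidence, margin_z, p50_fair_value):
    return (
        price <= primary_zone       # 1. at or below the primary zone threshold
        and confidence >= 75        # 2. sufficient data quality and agreement
        and margin_z <= 1.0         # 3. margins not dangerously elevated
        and p50_fair_value > price  # 4. Monte Carlo median confirms upside
    )

ok = favorable_signal(price=150, primary_zone=155, confidence=82,
                      margin_z=0.4, p50_fair_value=178)       # all gates pass
blocked = favorable_signal(price=150, primary_zone=155, confidence=82,
                           margin_z=1.6, p50_fair_value=178)  # margin gate fails
```
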
โš ๏ธ Not Investment AdviceAction zones are quantitative model outputs for educational purposes. They do not account for individual circumstances, risk tolerance, portfolio construction, tax implications, or qualitative factors like management quality, competitive dynamics, or regulatory risk. Never use these zones as the sole basis for investment decisions.

12. Regression-Based Fair Multiples

The industry median approach assumes all companies deserve the same multiple. The quality-adjusted approach applies a blanket premium. The regression approach takes this further: it builds a cross-sectional regression model that predicts a company's "expected" multiple based on its specific fundamental characteristics.

Regression Specification

Expected Multiple_i = β₀ + β₁(Revenue Growth) + β₂(Net Margin) + β₃(ROIC) + β₄(Beta) + ε

The regression is run cross-sectionally across the L2 peer universe.
R² typically ranges from 0.45 to 0.85 depending on sector homogeneity.

If a company's actual multiple is significantly below its regression-predicted multiple, it may be undervalued relative to its fundamental characteristics. The regression provides signals like "Cheap" (trading below expected), "Fair" (near expected), or "Rich" (trading above expected).
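A self-contained illustration of the cross-sectional fit using ordinary least squares on synthetic peer data; the production model fits the actual L2 universe, and the coefficients below are made up.

```python
import numpy as np
rng = np.random.default_rng(0)

n = 60                                    # synthetic L2 peer universe
X = np.column_stack([
    np.ones(n),                           # intercept (beta_0)
    rng.uniform(0.00, 0.30, n),           # revenue growth
    rng.uniform(0.05, 0.35, n),           # net margin
    rng.uniform(0.05, 0.40, n),           # ROIC
    rng.uniform(0.80, 1.60, n),           # beta
])
true_beta = np.array([8.0, 40.0, 20.0, 15.0, -2.0])   # made-up coefficients
pe = X @ true_beta + rng.normal(0.0, 1.5, n)          # observed multiples + noise

beta_hat, *_ = np.linalg.lstsq(X, pe, rcond=None)     # OLS fit
expected = X @ beta_hat        # regression-predicted "fair" multiple per peer
residual = pe - expected       # < 0 -> "Cheap", ~ 0 -> "Fair", > 0 -> "Rich"
```
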

13. Peak Earnings Risk Framework

The Peak Earnings Risk framework is the model's guard against the classic value trap: buying a stock that looks cheap on peak earnings, only to see both earnings and multiple contract. It answers the question: "What if current profitability is unsustainably high?"

Three Outputs

Metric · Margin Z-Score
How many standard deviations above or below the 7Y average is the current net margin? Z > +1.0σ triggers elevated risk warnings.

Scenario · Normalized Fair Value
Fair value recalculated using 7Y average margins instead of current margins. Shows what the stock "should" be worth if margins revert.

Impact · Downside %
Percentage decline from current price to normalized fair value. Represents the potential loss if margins compress.
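A stylized worked example of all three outputs. It assumes price and fair value both scale with EPS at a fixed P/E, which is a simplification; every input is hypothetical.

```python
from statistics import mean, stdev

margins = [0.14, 0.13, 0.15, 0.14, 0.13, 0.14, 0.21]   # 7Y history; current year is a peak
current_margin = margins[-1]
z = (current_margin - mean(margins)) / stdev(margins)  # roughly +2.2 sigma

revenue_per_share = 50.0
pe = 22.0
current_eps = revenue_per_share * current_margin       # peak EPS
normalized_eps = revenue_per_share * mean(margins)     # EPS if margins revert to 7Y mean
normalized_fair_value = pe * normalized_eps
current_price = pe * current_eps                       # assume priced on peak EPS
downside_pct = (current_price - normalized_fair_value) / current_price  # ~29%
```
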

14. Sensitivity & Scenario Analysis

The sensitivity matrix explores how fair value changes across different combinations of two key drivers: EPS growth rate (rows) and P/E multiple (columns). This reveals the model's sensitivity to its most impactful assumptions.

Matrix Construction

The center cell (base-case growth × current P/E) represents the status quo. Cells highlighted in green indicate fair values above the current price (upside); red cells indicate fair values below the current price (downside).
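A sketch of the grid construction. The per-cell formula (EPS grown for one period, then multiplied by the P/E) is an assumption, since the document does not specify the horizon; the inputs are illustrative.

```python
def sensitivity_matrix(eps, growth_rates, pe_multiples, years=1):
    # One cell per (growth, multiple) pair: grow EPS, then apply the multiple
    return {
        g: {m: round(eps * (1 + g) ** years * m, 2) for m in pe_multiples}
        for g in growth_rates
    }

matrix = sensitivity_matrix(
    eps=6.50,
    growth_rates=[0.05, 0.08, 0.12, 0.16],   # rows
    pe_multiples=[20, 24, 28, 32],           # columns
)
# e.g. matrix[0.08][28] = 6.50 * 1.08 * 28 = 196.56
```
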

💡 Key Insight Derivation: The model automatically generates a plain-language insight from the matrix. For example: "Even if P/E stays at 28×, a slowdown in EPS growth to 8% would justify only ~$182/share, below the current price of $185." This makes sensitivity analysis actionable rather than merely academic.

15. Portfolio Fit Analysis

A stock can be undervalued and still be a poor fit for a specific portfolio. The Portfolio Fit module analyzes how the stock would interact with common portfolio archetypes, providing hypothetical sizing and correlation context.

Outputs

โš ๏ธ Not Personalized AdvicePortfolio fit analysis is based on generic portfolio archetypes and does not account for individual circumstances. Actual position sizing should be determined by a qualified financial advisor who understands your specific financial situation, goals, risk tolerance, and tax circumstances.

16. Signal History & Track Record

The Signal History table records every model signal generated for a given ticker, along with the subsequent forward returns. This creates an auditable track record that allows users to assess the model's historical accuracy for each specific stock.

What Gets Recorded

Track Record Metrics

โš ๏ธ Past Performance DisclaimerPast model performance is not indicative of future results. Signal history is provided for transparency and educational purposes only. Market conditions, economic environment, and company fundamentals change continuously. Historical signal accuracy for one stock does not predict future accuracy for the same or any other stock.

Ready to apply this framework?

Explore the relative valuation dashboard for any US-listed equity.