Prediction markets are efficient most of the time — but not all of the time. Six systematic inefficiencies create exploitable edge: favorite-longshot bias, low-liquidity mispricing, time-zone arbitrage, correlated-event divergence, post-resolution lag, and new-market mispricing. An agent’s Layer 4 intelligence module exists to detect exactly these failures.

Why This Matters for Agents

The Efficient Market Hypothesis determines whether an autonomous betting agent can make money. If prediction markets are perfectly efficient, every contract is correctly priced, every bet has zero expected value, and the entire enterprise of building a trading agent is pointless. The agent is just paying fees to trade noise.

Markets are not perfectly efficient. The academic evidence is clear on two points: (1) prediction markets are remarkably accurate on aggregate, consistently outperforming polls, pundits, and expert panels for well-traded events, and (2) specific, identifiable conditions cause systematic mispricings that persist long enough for an agent to capture them. This is Layer 4 — Intelligence. The inefficiency detection module sits between data ingestion (Layer 1 — Data) and order execution (Layer 3 — Trading) in the Agent Betting Stack. The agent ingests prices from Polymarket and Kalshi APIs, runs them through an EMH filter, and only forwards opportunities to the execution layer when it detects a pricing anomaly against its internal model. Without this filter, the agent trades randomly. With it, the agent concentrates capital on the 5-15% of markets where edge actually exists.

The Math

The Three Forms of EMH in Prediction Markets

Eugene Fama’s original EMH taxonomy maps directly onto prediction markets, but the information sets differ from equities.

Weak-form EMH: Prices reflect all historical trading data — past prices, volumes, and order flow. If weak-form holds, no technical analysis strategy (moving averages, momentum, mean reversion on prediction market prices) has positive expected value.

Semi-strong EMH: Prices reflect all publicly available information — polls, news reports, economic data, injury reports, regulatory filings, social media sentiment. If semi-strong holds, an agent scanning Reuters or Twitter for news cannot beat the market because the information is already priced in.

Strong-form EMH: Prices reflect all information, including private and insider knowledge. If strong-form holds, even an agent with access to non-public information (unreleased polling data, insider knowledge of a company announcement) cannot beat the market.

EMH Hierarchy — Information Sets

Strong:    { All information: public + private + insider }
              ⊃
Semi-Strong: { All public information: news, polls, filings, data }
              ⊃
Weak:        { Historical price and volume data only }

The empirical evidence for prediction markets:

  • Weak-form: mostly holds. Prediction market prices show very weak serial correlation; momentum strategies yield <0.5% edge after fees (Tetlock 2017).
  • Semi-strong: partially holds. Prices incorporate major news within 5-30 minutes on liquid markets; delayed incorporation on low-volume markets creates windows of 2-48 hours.
  • Strong: does not hold. Insiders demonstrably move markets before public announcements; Polymarket whale wallets shift positions hours before news breaks.

The Formal Efficiency Test

A market is efficient with respect to information set Φ if, for all trading strategies S based on Φ:

E[R(S) | Φ] ≤ 0

where R(S) is the return of strategy S after transaction costs, and E is the expectation operator. In plain English: no strategy using only information in Φ earns positive expected returns after fees.

For a binary prediction market contract with price c and true probability p:

E[R] = p × (1 - c) - (1 - p) × c - f
E[R] = p - c - f

where f is the round-trip fee cost. Efficiency requires |p - c| ≤ f: the market price stays within one fee wedge of the true probability. Buying NO has E[R] = c - p - f, so a deviation in either direction larger than f is exploitable.
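As a quick numerical check, the two E[R] expressions above agree for any (p, c, f). A minimal sketch — the specific numbers are illustrative, not from any live market:

```python
def expected_return(p: float, c: float, f: float) -> float:
    """EV of buying YES at price c when the true probability is p,
    paying round-trip fee f: win (1 - c) with prob p, lose c otherwise."""
    return p * (1 - c) - (1 - p) * c - f

# Example: true probability 55%, contract at $0.50, 2% round-trip fees
ev = expected_return(p=0.55, c=0.50, f=0.02)
print(f"EV per $1 contract: ${ev:.3f}")  # p - c - f = 0.55 - 0.50 - 0.02 = 0.030
```

The simplified form p - c - f falls out algebraically: the pc terms cancel.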

An agent’s edge, denoted α, is:

α = p_model - p_market - f

where p_model is the agent’s estimated probability and p_market is the market-implied probability. The agent trades when α > 0. The entire Layer 4 intelligence module is dedicated to estimating p_model accurately and identifying markets where α is sufficiently large.

Academic Evidence on Prediction Market Accuracy

Three landmark studies frame the field:

Berg, Nelson, and Rietz (2008) — analyzed the Iowa Electronic Markets (IEM) across US presidential elections from 1988-2004. IEM final-day prices predicted the vote share within 1.5 percentage points on average. Polls over the same period had a mean error of 2.1 percentage points. Prediction markets outperformed polls in 74% of direct comparisons.

Arrow, Forsythe, Gorham et al. (2008) — published in Science, argued prediction markets aggregate dispersed information more effectively than any other known mechanism. The key insight: markets incentivize truthful belief revelation through profit motive, while polls and surveys do not.

Manski (2006) — raised a critical objection. Market prices represent marginal beliefs (the price at which one more dollar flows in or out), not mean beliefs of all participants. A contract at $0.60 does not mean the average participant believes the probability is 60%. It means the marginal trader — the one setting the price — believes approximately 60%. This distinction matters when liquidity is asymmetric.

Six Systematic Inefficiencies

1. Favorite-Longshot Bias

Contracts near the extremes ($0.90+ or below $0.10) are systematically mispriced. High-probability contracts trade below their true probability; low-probability contracts trade above.

The primary driver in prediction markets is capital opportunity cost. An agent buying a YES contract at $0.95 expecting resolution in 3 months locks up $0.95 to earn $0.05. The annualized return:

Annualized Return = (Payoff / Cost)^(1/t) - 1

where t = time to resolution in years

For $0.95 contract, 3-month resolution:
Annualized = (1.00 / 0.95)^(1/0.25) - 1 = (1.0526)^4 - 1 = 22.8%

If the risk-free rate is 4.5%, a 22.8% annualized return looks attractive — but only if the true probability is genuinely >95%. If the true probability is 97% and the contract trades at $0.92, the capital-adjusted implied probability is:

p_adjusted = Price / (1 - r × t)

where r = annualized risk-free rate, t = time to resolution in years

p_adjusted = 0.92 / (1 - 0.045 × 0.25) = 0.92 / 0.98875 = 0.9305

The market isn’t saying 92%. It’s saying ~93% after adjusting for the time value of money. The remaining gap between 93% and 97% (if that’s the true probability) is real edge.
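Both adjustments can be reproduced in a few lines. This is a sketch using the numbers from the worked example above, not a production pricer:

```python
def annualized_return(payoff: float, cost: float, years: float) -> float:
    """Annualize the return from buying at `cost` and receiving `payoff` at resolution."""
    return (payoff / cost) ** (1 / years) - 1

def capital_adjusted_prob(price: float, r: float, years: float) -> float:
    """Back out the implied probability after discounting for capital lockup."""
    return price / (1 - r * years)

print(f"{annualized_return(1.00, 0.95, 0.25):.1%}")       # ~22.8% annualized
print(f"{capital_adjusted_prob(0.92, 0.045, 0.25):.4f}")  # 0.9305
```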

2. Low-Liquidity Mispricing

Markets with less than $500 in orderbook depth and spreads wider than $0.05 are consistently inefficient. The bid-ask midpoint is a poor probability estimate when depth is shallow. With $50 behind the best bid at $0.55 and $30 behind the best ask at $0.65, a single $100 market order moves the price by $0.10+ in either direction.

Liquidity threshold for semi-strong efficiency (empirically observed):

Depth > $5,000 AND Spread < $0.03 → semi-strong efficient
Depth $500-5,000 AND Spread $0.03-0.05 → weak-form efficient only
Depth < $500 OR Spread > $0.05 → inefficient
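These thresholds encode directly as a classifier. The cutoffs come from the table above; how ties at the exact boundaries are handled is my choice, not something the table specifies:

```python
def liquidity_tier(depth_usd: float, spread: float) -> str:
    """Classify a market's efficiency tier from total orderbook depth and spread."""
    if depth_usd > 5_000 and spread < 0.03:
        return "semi-strong efficient"
    if depth_usd >= 500 and spread <= 0.05:
        return "weak-form efficient only"
    return "inefficient"

print(liquidity_tier(depth_usd=8_000, spread=0.02))  # semi-strong efficient
print(liquidity_tier(depth_usd=2_000, spread=0.04))  # weak-form efficient only
print(liquidity_tier(depth_usd=205, spread=0.07))    # inefficient
```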

3. Time-Zone Arbitrage

Kalshi operates primarily during US business hours. Polymarket’s CLOB runs 24/7. When material news breaks at 3 AM ET — a European election result, an Asian economic release, a late-night tweet from a political figure — Polymarket prices adjust within minutes while Kalshi prices remain stale until US traders wake up.

An agent monitoring both platforms with a news feed detects the divergence and routes orders to the lagging platform. The window is typically 2-6 hours for overnight events, shrinking to 15-60 minutes for early-morning US events.

4. Correlated-Event Mispricing

Consider two Polymarket markets:

  • Market A: “Will the Fed cut rates in June 2026?” — YES at $0.45
  • Market B: “Will the S&P 500 be above 5800 on July 1, 2026?” — YES at $0.58

These events are correlated. A Fed rate cut increases the probability of S&P above 5800. If new data (a weak jobs report) increases P(rate cut) from 45% to 55%, Market A adjusts quickly — but Market B may lag by minutes to hours because the causal link requires an inference step.

An agent that models the conditional probability P(B|A) can trade Market B immediately upon observing the Market A price shift, before other participants update.

The formula for detecting correlated mispricing:

P(B) = P(B|A) × P(A) + P(B|¬A) × P(¬A)

If:
  P(A)_market updates but P(B)_market remains stale
  AND |P(B)_model - P(B)_market| > threshold

→ Trade P(B) toward P(B)_model
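Plugging the Fed/S&P example into the law of total probability — the conditionals P(B|A) = 0.70 and P(B|¬A) = 0.48 are hypothetical model inputs, chosen here so the pre-update model price matches Market B's quote of ~$0.58:

```python
p_b_given_a, p_b_given_not_a = 0.70, 0.48  # hypothetical conditionals

def model_price_b(p_a: float) -> float:
    """Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)."""
    return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

print(f"Before jobs report, P(A)=0.45: P(B) = {model_price_b(0.45):.3f}")  # 0.579
print(f"After jobs report,  P(A)=0.55: P(B) = {model_price_b(0.55):.3f}")  # 0.601
# Market B still quotes $0.58 -> trade YES toward the ~$0.60 model price
```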

5. Post-Resolution Lag

When Market A resolves (event occurs or doesn’t), all markets correlated with A should reprice immediately. In practice, propagation takes 5-60 minutes in low-liquidity correlated markets. An agent that pre-computes the conditional price updates executes immediately on resolution.
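On resolution the conditional collapses: P(B) jumps from the blended total-probability value to P(B|A) on a YES resolution or P(B|¬A) on a NO. A sketch of the pre-computed update, reusing the same hypothetical conditionals as the correlated-event example:

```python
def post_resolution_price_b(resolved_yes: bool,
                            p_b_given_a: float,
                            p_b_given_not_a: float) -> float:
    """Target price for Market B the instant Market A resolves."""
    return p_b_given_a if resolved_yes else p_b_given_not_a

# Hypothetical: P(B|A)=0.70, P(B|not A)=0.48, Market B still stale at $0.58
target = post_resolution_price_b(True, 0.70, 0.48)
print(f"Market A resolved YES -> fair P(B) = {target:.2f} vs stale quote $0.58")
```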

6. New-Market Mispricing

The first 24-48 hours after a market opens are consistently inefficient. The initial price is set by the market creator (often at $0.50 for binary contracts) and represents no information. Informed traders arrive over hours to days. An agent with a pre-existing model for the event type (political, sports, crypto) can be among the first informed participants.

Worked Examples

Example 1: Favorite-Longshot Bias on Polymarket

In March 2026, Polymarket’s “Will the US enter a recession in 2026?” contract trades:

YES: $0.12
NO:  $0.89
Sum: $1.01 (1% overround)

The YES contract at $0.12 looks like 12% implied probability. But the contract doesn’t resolve until December 2026 — 9 months away. Adjusting for capital lockup on the NO side:

no_price = 0.89
risk_free_rate = 0.045  # annualized
time_to_resolution = 9 / 12  # 0.75 years

# Capital-adjusted probability for NO
no_adjusted = no_price / (1 - risk_free_rate * time_to_resolution)
no_adjusted_prob = no_adjusted  # = 0.89 / (1 - 0.03375) = 0.89 / 0.96625 = 0.9210

yes_adjusted = 1 - no_adjusted_prob  # = 0.079

print(f"Raw YES implied:      {0.12:.1%}")
print(f"Adjusted YES implied: {1 - no_adjusted_prob:.1%}")
print(f"Difference:           {0.12 - (1 - no_adjusted_prob):.1%}")
# Raw YES = 12%, adjusted YES ≈ 7.9%
# The market is pricing 4.1% more recession risk than the capital-adjusted fair value

If the agent’s model says 8% recession probability, the YES contract at $0.12 is overpriced — the market is paying a premium for the lottery-ticket nature of low-probability YES contracts. This is textbook favorite-longshot bias.

Example 2: Time-Zone Arbitrage Between Polymarket and Kalshi

At 2:00 AM ET, the Bank of Japan unexpectedly raises interest rates. The Polymarket “BOJ rate hike in Q1 2026” market instantly resolves YES. A correlated Kalshi market — “USD/JPY below 145 on March 31” — remains at yesterday’s closing price of NO at 65 cents because US-based Kalshi traders are asleep.

# Pre-BOJ-hike state
kalshi_usdjpy_below_145_pre = 0.35  # YES = 35 cents

# Agent's model post-BOJ-hike
# Historical: BOJ hikes strengthen yen → USD/JPY drops ~3-5%
# Current USD/JPY: 148.5
# Post-hike expected: ~142-144 range
# Agent model: P(below 145) = 0.72

agent_model_prob = 0.72
kalshi_price = kalshi_usdjpy_below_145_pre
kalshi_fee = 0.01  # approximate round-trip

alpha = agent_model_prob - kalshi_price - kalshi_fee
ev_per_contract = alpha * 1.00  # $1 contracts

print(f"Kalshi stale price: {kalshi_price:.0%}")
print(f"Agent model:        {agent_model_prob:.0%}")
print(f"Edge (alpha):       {alpha:.0%}")
print(f"EV per contract:    ${ev_per_contract:.2f}")
# Alpha = 36% — massive edge, available until US traders wake up

The edge erodes as US-timezone traders arrive. An agent executing at 2:05 AM captures most of it. By 7:00 AM ET, the Kalshi price typically converges to within $0.03 of the Polymarket-implied value.

Example 3: Low-Liquidity Market Detection

An agent scans Polymarket for markets with depth below $500 and spread above $0.05:

Market: "Will SpaceX land Starship on Mars by 2028?"
YES best bid: $0.04 (depth: $120)
YES best ask: $0.11 (depth: $85)
Spread: $0.07
Midpoint: $0.075
Total depth: $205

The $0.07 spread means an agent buying at the ask ($0.11) needs the true probability to exceed 11% just to break even. The midpoint ($0.075) is unreliable as a probability estimate because a single $200 order moves the price by 100%+ of the spread. This market is not semi-strong efficient — informed traders have no incentive to correct pricing because the position sizes don’t justify the capital lockup.

Implementation

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class InefficiencySignal:
    """Signal from an EMH inefficiency detector."""
    market_id: str
    signal_type: str  # "favorite_longshot", "low_liquidity", "timezone_arb", "correlation", "new_market"
    alpha: float  # estimated edge (probability units)
    confidence: float  # 0-1, agent's confidence in the signal
    stale_price: float  # current market price
    model_price: float  # agent's estimated true probability
    metadata: dict = field(default_factory=dict)


def detect_favorite_longshot_bias(
    price: float,
    time_to_resolution_years: float,
    risk_free_rate: float = 0.045,
    model_probability: Optional[float] = None,
    fee_rate: float = 0.02
) -> Optional[InefficiencySignal]:
    """
    Detect favorite-longshot bias by comparing raw price to
    capital-adjusted fair value.

    Args:
        price: Current YES contract price (0 to 1)
        time_to_resolution_years: Time until market resolves
        risk_free_rate: Annualized risk-free rate (default 4.5%)
        model_probability: Agent's independent probability estimate
        fee_rate: Platform fee rate on winnings

    Returns:
        InefficiencySignal if bias detected, None otherwise
    """
    if price <= 0 or price >= 1:
        return None

    # Capital-adjusted price accounts for time value of money
    discount_factor = 1 - risk_free_rate * time_to_resolution_years
    if discount_factor <= 0:
        return None

    adjusted_price = price / discount_factor

    # Fee-adjusted breakeven
    fee_adjusted = adjusted_price / (1 - fee_rate * (1 - adjusted_price))

    # If no model provided, flag if raw vs adjusted gap exceeds 2%
    if model_probability is None:
        gap = abs(price - fee_adjusted)
        if gap > 0.02:
            return InefficiencySignal(
                market_id="",
                signal_type="favorite_longshot",
                alpha=gap,
                confidence=0.5,
                stale_price=price,
                model_price=fee_adjusted,
                metadata={
                    "raw_price": price,
                    "capital_adjusted": adjusted_price,
                    "fee_adjusted": fee_adjusted,
                    "time_to_resolution": time_to_resolution_years,
                    "risk_free_rate": risk_free_rate,
                }
            )
    else:
        alpha = model_probability - fee_adjusted
        if abs(alpha) > 0.02:
            return InefficiencySignal(
                market_id="",
                signal_type="favorite_longshot",
                alpha=alpha,
                confidence=0.7,
                stale_price=price,
                model_price=model_probability,
                metadata={
                    "raw_price": price,
                    "capital_adjusted": adjusted_price,
                    "fee_adjusted": fee_adjusted,
                }
            )

    return None


def detect_low_liquidity(
    best_bid: float,
    best_ask: float,
    bid_depth_usd: float,
    ask_depth_usd: float,
    spread_threshold: float = 0.05,
    depth_threshold_usd: float = 500.0
) -> Optional[InefficiencySignal]:
    """
    Flag markets where low liquidity makes prices unreliable.

    Args:
        best_bid: Best bid price for YES
        best_ask: Best ask price for YES
        bid_depth_usd: USD depth behind best bid
        ask_depth_usd: USD depth behind best ask
        spread_threshold: Minimum spread to flag (default $0.05)
        depth_threshold_usd: Minimum depth to consider reliable

    Returns:
        InefficiencySignal if market is illiquid, None otherwise
    """
    spread = best_ask - best_bid
    total_depth = bid_depth_usd + ask_depth_usd
    midpoint = (best_bid + best_ask) / 2

    if spread > spread_threshold or total_depth < depth_threshold_usd:
        # Confidence inversely proportional to spread and depth
        spread_penalty = min(spread / 0.20, 1.0)  # max penalty at $0.20 spread
        depth_penalty = max(0, 1 - total_depth / depth_threshold_usd)
        confidence = 0.3 + 0.4 * (spread_penalty + depth_penalty) / 2

        return InefficiencySignal(
            market_id="",
            signal_type="low_liquidity",
            alpha=spread / 2,  # half the spread is potential edge
            confidence=confidence,
            stale_price=midpoint,
            model_price=midpoint,  # no directional signal
            metadata={
                "spread": spread,
                "bid_depth_usd": bid_depth_usd,
                "ask_depth_usd": ask_depth_usd,
                "total_depth_usd": total_depth,
            }
        )

    return None


def detect_correlated_mispricing(
    price_a: float,
    price_b: float,
    prob_b_given_a: float,
    prob_b_given_not_a: float,
    fee_rate: float = 0.02,
    threshold: float = 0.03
) -> Optional[InefficiencySignal]:
    """
    Detect when correlated markets diverge from conditional probability model.

    Args:
        price_a: Current price of Market A (the leading market)
        price_b: Current price of Market B (the lagging market)
        prob_b_given_a: P(B|A) from agent's model
        prob_b_given_not_a: P(B|not A) from agent's model
        fee_rate: Platform fee rate
        threshold: Minimum divergence to flag

    Returns:
        InefficiencySignal if divergence detected, None otherwise
    """
    # Total probability theorem
    model_price_b = prob_b_given_a * price_a + prob_b_given_not_a * (1 - price_a)

    alpha = model_price_b - price_b - fee_rate

    if abs(alpha) > threshold:
        return InefficiencySignal(
            market_id="",
            signal_type="correlation",
            alpha=alpha,
            confidence=0.6,
            stale_price=price_b,
            model_price=model_price_b,
            metadata={
                "market_a_price": price_a,
                "market_b_price": price_b,
                "model_b_price": model_price_b,
                "p_b_given_a": prob_b_given_a,
                "p_b_given_not_a": prob_b_given_not_a,
            }
        )

    return None


def scan_market_for_inefficiencies(
    market_id: str,
    yes_bid: float,
    yes_ask: float,
    bid_depth_usd: float,
    ask_depth_usd: float,
    time_to_resolution_years: float,
    hours_since_creation: float,
    model_probability: Optional[float] = None,
    risk_free_rate: float = 0.045,
    fee_rate: float = 0.02,
) -> list[InefficiencySignal]:
    """
    Run all inefficiency detectors on a single market.

    Returns list of detected signals, sorted by alpha descending.
    """
    signals = []
    midpoint = (yes_bid + yes_ask) / 2

    # Check favorite-longshot bias
    fl_signal = detect_favorite_longshot_bias(
        price=midpoint,
        time_to_resolution_years=time_to_resolution_years,
        risk_free_rate=risk_free_rate,
        model_probability=model_probability,
        fee_rate=fee_rate,
    )
    if fl_signal:
        fl_signal.market_id = market_id
        signals.append(fl_signal)

    # Check low liquidity
    liq_signal = detect_low_liquidity(
        best_bid=yes_bid,
        best_ask=yes_ask,
        bid_depth_usd=bid_depth_usd,
        ask_depth_usd=ask_depth_usd,
    )
    if liq_signal:
        liq_signal.market_id = market_id
        signals.append(liq_signal)

    # Check new-market mispricing (first 48 hours)
    if hours_since_creation < 48:
        new_market_confidence = max(0.3, 1 - hours_since_creation / 48)
        signals.append(InefficiencySignal(
            market_id=market_id,
            signal_type="new_market",
            alpha=0.05,  # assume 5% mispricing for new markets
            confidence=new_market_confidence,
            stale_price=midpoint,
            model_price=model_probability or midpoint,
            metadata={"hours_since_creation": hours_since_creation},
        ))

    # Sort by alpha descending
    signals.sort(key=lambda s: abs(s.alpha), reverse=True)
    return signals


# --- Demo: run the scanner on sample data ---
if __name__ == "__main__":
    # Polymarket recession market — favorite-longshot example
    signals = scan_market_for_inefficiencies(
        market_id="polymarket-recession-2026",
        yes_bid=0.11,
        yes_ask=0.13,
        bid_depth_usd=3200,
        ask_depth_usd=2800,
        time_to_resolution_years=0.75,
        hours_since_creation=720,  # 30 days old
        model_probability=0.08,
    )

    print("=== Recession 2026 Market ===")
    for s in signals:
        print(f"  Signal: {s.signal_type}")
        print(f"  Alpha:  {s.alpha:+.1%}")
        print(f"  Market: {s.stale_price:.1%} | Model: {s.model_price:.1%}")
        print(f"  Confidence: {s.confidence:.0%}")
        print()

    # Low-liquidity SpaceX market
    signals = scan_market_for_inefficiencies(
        market_id="polymarket-spacex-mars-2028",
        yes_bid=0.04,
        yes_ask=0.11,
        bid_depth_usd=120,
        ask_depth_usd=85,
        time_to_resolution_years=2.0,
        hours_since_creation=168,  # 7 days old
        model_probability=0.03,
    )

    print("=== SpaceX Mars Landing Market ===")
    for s in signals:
        print(f"  Signal: {s.signal_type}")
        print(f"  Alpha:  {s.alpha:+.1%}")
        print(f"  Market: {s.stale_price:.1%} | Model: {s.model_price:.1%}")
        print(f"  Confidence: {s.confidence:.0%}")
        print()

Limitations and Edge Cases

Model risk dominates. The entire framework assumes the agent’s model probability (p_model) is more accurate than the market price. If the agent’s model is wrong, every “inefficiency” it detects is actually the agent being wrong. Calibration testing — covered in the Calibration and Model Evaluation guide — is non-negotiable before deploying capital.

Transaction costs kill small edges. Polymarket charges ~2% on net winnings. Kalshi’s spread adds another 1-3%. An “edge” of 2% evaporates entirely after fees. Agents need alpha > 3-5% after fees to justify execution risk. The expected value framework quantifies this precisely.

Adverse selection in low-liquidity markets. The reason a market is illiquid might be that informed traders already extracted edge and left. An agent buying in a thin market may be taking the other side of a position that smart money abandoned. Depth and volume trends matter — declining liquidity is a red flag.

Regime changes break historical patterns. The favorite-longshot bias magnitude varies with interest rates. At 0.5% risk-free rates (2020-2021), the bias was negligible for contracts under 6 months. At 4.5% (2025-2026), it’s material for anything over 3 months. An agent trained on 2020 data will systematically misestimate the bias in 2026.

Correlation estimates are fragile. The correlated-event detector requires P(B|A) as input. These conditional probabilities come from the agent’s model and are notoriously hard to estimate accurately. A small error in the conditional probability propagates into a false signal. Use wide confidence intervals and smaller position sizes on correlation-based trades.

Market manipulation creates false inefficiencies. A whale placing and canceling large orders (spoofing) creates temporary price dislocations that look like inefficiencies but are traps. The Market Manipulation Detection guide covers the math for identifying artificial price movements.

FAQ

Are prediction markets efficient?

Prediction markets are semi-strong efficient for high-liquidity political and sports markets — prices rapidly incorporate public information like polls, injuries, and earnings reports. They are consistently inefficient in low-liquidity markets (<$500 depth), during the first 24-48 hours after market creation, and for contracts near the $0.95/$0.05 extremes where capital lockup costs create systematic mispricing.

What is the favorite-longshot bias in prediction markets?

The favorite-longshot bias is the systematic tendency for high-probability contracts ($0.90+) to trade below their true probability, and low-probability contracts ($0.05-$0.10) to trade above theirs. In prediction markets, this is primarily driven by capital opportunity cost — locking $0.95 in a contract for months to earn $0.05 yields less than a money market fund. An agent accounts for this with the formula: adjusted_prob = price / (1 - r*t), where r is the annualized risk-free rate and t is time to resolution in years.

How do autonomous agents exploit prediction market inefficiencies?

Agents exploit inefficiencies through four channels: monitoring low-liquidity markets for wide spreads that informed traders haven’t corrected, detecting time-zone arbitrage when overnight news moves one platform before another, identifying correlated-event mispricing where conditional probabilities diverge from market prices, and trading the favorite-longshot bias using risk-free-rate-adjusted pricing models.

What is the difference between EMH weak form and semi-strong form for prediction markets?

Weak-form EMH says prediction market prices already reflect all past price and volume data — technical analysis of price charts has no edge. Semi-strong EMH says prices also reflect all public information (polls, news, filings). An agent that beats the market must either process public information faster than other participants or have access to information not yet public.

How does the Efficient Market Hypothesis connect to expected value in betting?

If markets are perfectly efficient, the market price equals the true probability, and every bet has zero expected value (EV = 0). Positive EV opportunities exist only when markets are inefficient — when the agent’s model assigns a different probability than the market. The EV framework quantifies how much edge exists; EMH analysis tells you where to look for it.

What’s Next

This page identifies where markets get it wrong. The next logical steps in the series explore the structural mechanisms and the models that exploit them:

  • Next in the series: Prediction Market Microstructure — dives into orderbook mechanics, spread dynamics, and liquidity provision. Understanding microstructure is how you understand why the inefficiencies in this guide exist.
  • The automated market maker math: LMSR and Automated Market Makers — covers how AMMs like LMSR create prices and where their pricing diverges from true probabilities.
  • Quantify your edge: Expected Value for Prediction Market Agents — once you find an inefficiency, EV tells you exactly how much it’s worth.
  • Size your positions: The Kelly Criterion — after finding edge and computing EV, Kelly tells you how much capital to allocate.
  • Live market data: The AgentBets Vig Index tracks sportsbook overrounds in real time — use it to compare prediction market efficiency against traditional offshore sportsbooks and identify cross-platform edge.
  • Agent analysis tools: Polyseer uses multi-agent Bayesian aggregation to estimate true probabilities — feed its output into the inefficiency detectors above.