The Logarithmic Market Scoring Rule (LMSR) prices prediction markets via C(q) = b * ln(sum(e^(q_i/b))). Prices follow the softmax function p_i = e^(q_i/b) / sum(e^(q_j/b)). The market maker’s worst-case loss is exactly b * ln(n) — known before the market opens, which is why LMSR became the standard design for subsidized prediction markets.

Why This Matters for Agents

An autonomous betting agent interacts with two fundamentally different market structures: central limit order books (used by Polymarket and Kalshi) and automated market makers (used historically by Gnosis and most on-chain prediction markets). The math governing trade execution, price impact, and optimal sizing differs completely between these structures.

This is Layer 3 — Trading. The LMSR cost function determines the exact price an agent pays for every share it buys or sells. An agent that treats an AMM market like an order book will systematically overpay through unmodeled price impact. Understanding LMSR lets an agent compute the precise cost of any proposed trade before submitting it, calculate optimal trade sizes given price impact constraints, and detect when an AMM’s prices have drifted from true probabilities — creating exploitable edge. The Prediction Market Microstructure guide covers the order book side. This guide covers the AMM side. Together, they give an agent complete Layer 3 coverage across the Agent Betting Stack.

The Math

Market Scoring Rules — The Foundation

A market scoring rule is a mechanism that takes a probability distribution as input and outputs a score. Robin Hanson’s insight (2003): chain scoring rules together so that each trader “corrects” the previous trader’s probability estimate, and the scoring rule’s payment structure incentivizes truthful reporting.

The Logarithmic Market Scoring Rule uses the logarithmic scoring rule as its base. The logarithmic proper scoring rule for outcome i is:

S(p, i) = ln(p_i)

where p_i is the probability assigned to the outcome that actually occurred. This is strictly proper — it uniquely maximizes expected score when you report your true beliefs. Any deviation from your true probability decreases your expected score.

The LMSR Cost Function

The LMSR cost function maps a vector of outstanding shares to the total amount spent by all traders:

C(q) = b * ln( Σ e^(q_i / b) )

where:

  • q = (q_1, q_2, …, q_n) is the vector of outstanding shares for each of n outcomes
  • b is the liquidity parameter (controls market depth)
  • The sum runs over all n outcomes

The cost of buying delta shares of outcome i is:

Cost = C(q_1, ..., q_i + delta, ..., q_n) - C(q_1, ..., q_i, ..., q_n)

This is the fundamental equation an agent uses to calculate trade cost. The cost is path-independent — it doesn’t matter what sequence of trades brought the market to state q.
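A minimal, self-contained sketch of this calculation (helper names are illustrative, not from any platform SDK):

```python
import math

def lmsr_cost(q: list[float], b: float) -> float:
    """C(q) = b * ln(sum(e^(q_i/b))), shifted by the max for numerical stability."""
    m = max(x / b for x in q)
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def trade_cost(q: list[float], b: float, i: int, delta: float) -> float:
    """Cost of buying delta shares of outcome i from state q."""
    q_new = list(q)
    q_new[i] += delta
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

b, q = 100.0, [30.0, 0.0]

# Path independence: one 20-share buy costs the same as two 10-share buys.
single = trade_cost(q, b, 0, 20)
split = trade_cost(q, b, 0, 10) + trade_cost([q[0] + 10, q[1]], b, 0, 10)
```

Because C telescopes, any sequence of trades ending at the same q has the same total cost — splitting orders buys no discount on an LMSR.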

The Price Function (Softmax)

The instantaneous price of outcome i — the marginal cost of an infinitesimal share — is the partial derivative of C with respect to q_i:

p_i = dC/dq_i = e^(q_i / b) / Σ e^(q_j / b)

This is the softmax function. If you’ve built neural networks, you’ve seen this before. The prices have two critical properties:

  1. Sum to one: Σ p_i = 1. The prices always form a valid probability distribution.
  2. Bounded: 0 < p_i < 1 for all i (assuming finite q_i). Prices never reach 0 or 1 exactly.

For a binary market (YES/NO) with q_YES = 0 and q_NO = 0 (initial state):

p_YES = e^(0/b) / (e^(0/b) + e^(0/b)) = 1/2 = 50%
p_NO  = 1/2 = 50%

Every LMSR market starts at uniform probabilities. Traders move prices by buying shares.
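Both properties are easy to verify directly; a standalone sketch (function name is illustrative):

```python
import math

def lmsr_prices(q: list[float], b: float) -> list[float]:
    """Softmax of q/b: p_i = e^(q_i/b) / sum_j e^(q_j/b), max-shifted for stability."""
    m = max(x / b for x in q)
    exps = [math.exp(x / b - m) for x in q]
    z = sum(exps)
    return [e / z for e in exps]

p0 = lmsr_prices([0.0, 0.0], b=100.0)    # fresh market: [0.5, 0.5]
p1 = lmsr_prices([100.0, 0.0], b=100.0)  # after 100 YES buys: YES = e/(e+1) ≈ 0.731
```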

The Liquidity Parameter b

The parameter b controls everything about market behavior:

┌────────────┬──────────────────┬──────────────────┬────────────────────┐
│     b      │ Price Impact     │ Max Maker Loss   │ Use Case           │
│            │ (per share @ 50%)│ (binary market)  │                    │
├────────────┼──────────────────┼──────────────────┼────────────────────┤
│     10     │ ~2.5%            │ $6.93            │ Low-stakes polls   │
│    100     │ ~0.25%           │ $69.31           │ Standard markets   │
│  1,000     │ ~0.025%          │ $693.15          │ High-liquidity     │
│ 10,000     │ ~0.0025%         │ $6,931.47        │ Institutional      │
└────────────┴──────────────────┴──────────────────┴────────────────────┘

The tradeoff is direct: higher b means better prices for traders but more risk for the market maker. An agent selecting which markets to trade should factor b into its execution cost model.
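The per-share figures in the table follow from differentiating the softmax: in a binary market, dp/dq = p(1 − p)/b, which at p = 0.5 gives 0.25/b per share. A quick standalone check (values are illustrative):

```python
def marginal_impact(p: float, b: float) -> float:
    """Instantaneous price change per share in a binary LMSR: dp/dq = p(1-p)/b."""
    return p * (1 - p) / b

# At p = 0.5 the impact per share is 0.25/b, so it scales inversely with b:
# b=10 -> 2.5% per share, b=100 -> 0.25%, b=1,000 -> 0.025%, and so on.
impacts = {b: marginal_impact(0.5, b) for b in (10, 100, 1000, 10000)}
```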

Bounded Loss Theorem

The market maker’s worst-case loss is exactly:

Max Loss = b * ln(n)

where n is the number of outcomes.

Proof. The cost function starts at C(0, 0, …, 0) = b * ln(n) (since all n exponentials equal 1). After all trading is complete, one outcome j resolves to YES. The market maker pays $1 per outstanding share of j, having collected C(q_final) - C(0) from traders in aggregate. The market maker’s P&L is:

P&L = C(q_final) - C(0) - q_j

The worst case is when all trading concentrates on the winning outcome. As q_j approaches infinity while others stay at 0:

C(q) → b * ln(e^(q_j/b)) = q_j

So C(q_final) → q_j, and P&L → q_j - b * ln(n) - q_j = -b * ln(n).

The loss is bounded and known before the market opens. For a binary market: max loss = b * ln(2) ≈ 0.693 * b. For a 10-outcome market: max loss = b * ln(10) ≈ 2.303 * b.

This bounded loss is what makes LMSR viable as a business. The operator budgets b * ln(n) as a subsidy and knows their downside with certainty — unlike a traditional market maker on an order book who faces unlimited potential losses.
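The bound is easy to confirm numerically: as trading piles onto the eventual winner, the realized loss climbs toward b * ln(n) but never crosses it. A standalone sketch:

```python
import math

def lmsr_cost(q: list[float], b: float) -> float:
    """C(q) = b * ln(sum(e^(q_i/b))), max-shifted for stability."""
    m = max(x / b for x in q)
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def maker_loss(q_final: list[float], winner: int, b: float) -> float:
    """Maker loss = payout on the winner minus fees collected: q_j - (C(q_final) - C(0))."""
    n = len(q_final)
    fees = lmsr_cost(q_final, b) - lmsr_cost([0.0] * n, b)
    return q_final[winner] - fees

b = 100.0
bound = b * math.log(2)   # ≈ $69.31 for a binary market

# All trading concentrates on the eventual winner: loss approaches the bound.
losses = [maker_loss([q, 0.0], 0, b) for q in (100.0, 1000.0, 10000.0)]
```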

Price Impact Formula

When an agent buys delta shares of outcome i, the price moves from p_i to p_i':

p_i' = e^((q_i + delta) / b) / (Σ_{j≠i} e^(q_j / b) + e^((q_i + delta) / b))

The total cost of this trade is:

Trade Cost = b * ln(Σ_{j≠i} e^(q_j/b) + e^((q_i + delta)/b)) - b * ln(Σ_j e^(q_j/b))

For small delta relative to b, the cost is approximately linear: Cost ≈ p_i * delta. For large delta relative to b, the convexity of C takes over: the agent pays a volume-weighted average price between p_i and p_i', strictly above the starting price. This smooth, unavoidable price impact is the key difference from order books, where the cost is exactly linear up to the posted depth at each price level.
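A sketch of the two regimes, starting a binary market at 50% (the numbers follow from the cost function alone; helper names are illustrative):

```python
import math

def lmsr_cost(q: list[float], b: float) -> float:
    m = max(x / b for x in q)
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def yes_trade_cost(q: list[float], b: float, delta: float) -> float:
    """Cost of buying delta YES shares in a binary market at state q."""
    return lmsr_cost([q[0] + delta, q[1]], b) - lmsr_cost(q, b)

b, q = 500.0, [0.0, 0.0]   # p_YES = 0.50

small = yes_trade_cost(q, b, 1)     # ≈ $0.50: linear regime (delta << b)
large = yes_trade_cost(q, b, 500)   # delta = b: ≈ $310, well above 0.5 * 500 = $250
```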

Worked Examples

Example 1: Binary Market Trade on an LMSR AMM

Consider a binary market “Will the Fed cut rates at the June 2026 FOMC meeting?” with b = 500 and current state q_YES = 120, q_NO = 0.

Current prices:

p_YES = e^(120/500) / (e^(120/500) + e^(0/500))
      = e^0.24 / (e^0.24 + 1)
      = 1.2712 / (1.2712 + 1.0)
      = 1.2712 / 2.2712
      = 55.97%

p_NO  = 1 - 0.5597 = 44.03%

An agent wants to buy 50 YES shares. The cost:

C_before = 500 * ln(e^(120/500) + e^(0/500))
         = 500 * ln(2.2712)
         = 500 * 0.82033
         = $410.16

C_after  = 500 * ln(e^(170/500) + e^(0/500))
         = 500 * ln(e^0.34 + 1)
         = 500 * ln(1.4049 + 1.0)
         = 500 * ln(2.4049)
         = 500 * 0.87753
         = $438.76

Trade Cost = $438.76 - $410.16 = $28.60

The agent pays $28.60 for 50 shares — an average price of $0.572 per share. But the starting price was $0.5597 and the ending price is:

p_YES_new = e^(170/500) / (e^(170/500) + 1) = 1.4049 / 2.4049 = 58.42%

The agent paid a volume-weighted average between 55.97% and 58.42%. On a CLOB, the agent would have paid exactly the posted ask price for each share (plus any walk-through on the book). On an AMM, price impact is continuous and unavoidable.
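The worked numbers can be reproduced in a few lines (self-contained; differences in the last cent come from rounding ln to a few decimals when working by hand):

```python
import math

b = 500.0

def cost(q_yes: float, q_no: float) -> float:
    """Binary LMSR cost function."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

c_before = cost(120, 0)                                # ≈ 410.16
c_after = cost(170, 0)                                 # ≈ 438.76
trade = c_after - c_before                             # ≈ 28.60
avg = trade / 50                                       # ≈ 0.572 per share
p_new = math.exp(170 / b) / (math.exp(170 / b) + 1)    # ≈ 0.5842
```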

Example 2: Multi-Outcome Market — “2026 World Cup Winner”

A 10-outcome market with b = 2000 and current share state:

Brazil:      q = 450   →  p = e^(0.225) / Z = 1.2523 / Z
France:      q = 380   →  p = e^(0.190) / Z = 1.2092 / Z
England:     q = 320   →  p = e^(0.160) / Z = 1.1735 / Z
Germany:     q = 280   →  p = e^(0.140) / Z = 1.1503 / Z
Argentina:   q = 350   →  p = e^(0.175) / Z = 1.1912 / Z
Spain:       q = 300   →  p = e^(0.150) / Z = 1.1618 / Z
Portugal:    q = 200   →  p = e^(0.100) / Z = 1.1052 / Z
Netherlands: q = 150   →  p = e^(0.075) / Z = 1.0779 / Z
Italy:       q = 100   →  p = e^(0.050) / Z = 1.0513 / Z
Field:       q =  50   →  p = e^(0.025) / Z = 1.0253 / Z

Z = sum of all numerators = 11.3980. Maximum market maker loss = 2000 * ln(10) = $4,605.17.

Brazil’s implied probability: 1.2523 / 11.3980 = 10.99%. An agent with a Poisson model projecting Brazil at 14% sees 3 percentage points of edge — enough to warrant a position after accounting for price impact.
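The same arithmetic, end to end (standalone; outcome names as in the table above):

```python
import math

b = 2000.0
shares = {
    "Brazil": 450, "France": 380, "England": 320, "Germany": 280,
    "Argentina": 350, "Spain": 300, "Portugal": 200,
    "Netherlands": 150, "Italy": 100, "Field": 50,
}

exps = {k: math.exp(q / b) for k, q in shares.items()}
z = sum(exps.values())                    # ≈ 11.3980
probs = {k: e / z for k, e in exps.items()}
max_loss = b * math.log(len(shares))      # 2000 * ln(10) ≈ 4605.17

edge = 0.14 - probs["Brazil"]             # model 14% vs market ≈ 10.99%
```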

Implementation

import numpy as np
from dataclasses import dataclass


@dataclass
class LMSRMarket:
    """
    Logarithmic Market Scoring Rule automated market maker.

    Parameters
    ----------
    b : float
        Liquidity parameter. Higher b = more liquidity, more maker loss.
    n_outcomes : int
        Number of mutually exclusive outcomes.
    """
    b: float
    n_outcomes: int
    shares: np.ndarray | None = None

    def __post_init__(self):
        if self.shares is None:
            self.shares = np.zeros(self.n_outcomes)

    def cost(self, q: np.ndarray = None) -> float:
        """
        LMSR cost function: C(q) = b * ln(sum(e^(q_i / b)))

        Parameters
        ----------
        q : np.ndarray, optional
            Share vector. Uses current state if None.

        Returns
        -------
        float
            Total cost charged to all traders so far.
        """
        if q is None:
            q = self.shares
        # Use logsumexp trick for numerical stability
        max_q = np.max(q / self.b)
        return self.b * (max_q + np.log(np.sum(np.exp(q / self.b - max_q))))

    def prices(self, q: np.ndarray = None) -> np.ndarray:
        """
        Current prices (softmax of q/b).

        Returns
        -------
        np.ndarray
            Probability vector summing to 1.0.
        """
        if q is None:
            q = self.shares
        # Numerically stable softmax
        scaled = q / self.b
        shifted = scaled - np.max(scaled)
        exp_vals = np.exp(shifted)
        return exp_vals / np.sum(exp_vals)

    def trade_cost(self, outcome: int, delta: float) -> float:
        """
        Cost of buying `delta` shares of `outcome`.

        Parameters
        ----------
        outcome : int
            Index of the outcome to trade (0-indexed).
        delta : float
            Number of shares to buy (positive) or sell (negative).

        Returns
        -------
        float
            Dollar cost of the trade. Positive = agent pays.
        """
        q_before = self.shares.copy()
        q_after = self.shares.copy()
        q_after[outcome] += delta
        return self.cost(q_after) - self.cost(q_before)

    def execute_trade(self, outcome: int, delta: float) -> dict:
        """
        Execute a trade and update market state.

        Returns
        -------
        dict
            Trade details including cost, avg price, and new market prices.
        """
        price_before = self.prices().copy()
        cost = self.trade_cost(outcome, delta)
        self.shares[outcome] += delta
        price_after = self.prices()
        avg_price = cost / delta if delta != 0 else 0

        return {
            "outcome": outcome,
            "shares": delta,
            "cost": cost,
            "avg_price": avg_price,
            "price_before": price_before[outcome],
            "price_after": price_after[outcome],
            "slippage": avg_price - price_before[outcome],
            "all_prices": price_after,
        }

    def max_loss(self) -> float:
        """Market maker's worst-case loss: b * ln(n)."""
        return self.b * np.log(self.n_outcomes)


def compare_lmsr_trade_sizes(b: float, p_current: float, deltas: list[float]) -> None:
    """
    Show how trade cost scales with size in an LMSR binary market.

    Parameters
    ----------
    b : float
        Liquidity parameter.
    p_current : float
        Current YES probability (0 to 1).
    deltas : list[float]
        List of trade sizes to compare.
    """
    # Recover q_YES from current price in binary market
    # p = e^(q/b) / (e^(q/b) + 1) => q = b * ln(p / (1-p))
    q_yes = b * np.log(p_current / (1 - p_current))
    market = LMSRMarket(b=b, n_outcomes=2, shares=np.array([q_yes, 0.0]))

    print(f"LMSR Binary Market | b = {b} | Current YES = {p_current:.1%}")
    print(f"Max maker loss: ${market.max_loss():.2f}")
    print(f"{'Shares':>8} {'Cost':>10} {'Avg Price':>10} {'End Price':>10} {'Slippage':>10}")
    print("-" * 52)

    for delta in deltas:
        # Reset market state for each comparison
        market.shares = np.array([q_yes, 0.0])
        result = market.execute_trade(0, delta)
        print(
            f"{delta:>8.0f} "
            f"${result['cost']:>9.2f} "
            f"${result['avg_price']:>9.4f} "
            f" {result['price_after']:>9.2%} "
            f" {result['slippage']:>9.4f}"
        )


# --- LS-LMSR: Liquidity-Sensitive Variant ---

@dataclass
class LSLMSRMarket:
    """
    Liquidity-Sensitive LMSR where b scales with cumulative volume.

    b(V) = alpha * V + b_0

    Parameters
    ----------
    b_0 : float
        Initial liquidity parameter.
    alpha : float
        Scaling factor for volume sensitivity.
    n_outcomes : int
        Number of outcomes.
    """
    b_0: float
    alpha: float
    n_outcomes: int
    shares: np.ndarray | None = None
    cumulative_volume: float = 0.0

    def __post_init__(self):
        if self.shares is None:
            self.shares = np.zeros(self.n_outcomes)

    @property
    def b(self) -> float:
        """Current liquidity parameter based on cumulative volume."""
        return self.alpha * self.cumulative_volume + self.b_0

    def prices(self) -> np.ndarray:
        """Current prices using current b."""
        scaled = self.shares / self.b
        shifted = scaled - np.max(scaled)
        exp_vals = np.exp(shifted)
        return exp_vals / np.sum(exp_vals)

    def execute_trade(self, outcome: int, delta: float) -> dict:
        """Execute trade with volume-adjusted b."""
        b_at_trade = self.b
        price_before = self.prices().copy()

        # Cost with current b
        q_before = self.shares.copy()
        q_after = self.shares.copy()
        q_after[outcome] += delta

        max_qb = np.max(q_before / b_at_trade)
        c_before = b_at_trade * (max_qb + np.log(
            np.sum(np.exp(q_before / b_at_trade - max_qb))
        ))
        max_qa = np.max(q_after / b_at_trade)
        c_after = b_at_trade * (max_qa + np.log(
            np.sum(np.exp(q_after / b_at_trade - max_qa))
        ))
        cost = c_after - c_before

        self.shares[outcome] += delta
        self.cumulative_volume += abs(cost)

        return {
            "cost": cost,
            "b_used": b_at_trade,
            "b_new": self.b,
            "price_before": price_before[outcome],
            "price_after": self.prices()[outcome],
        }


# --- Demo ---

if __name__ == "__main__":
    print("=" * 60)
    print("LMSR Trade Size Comparison")
    print("=" * 60)
    compare_lmsr_trade_sizes(
        b=500,
        p_current=0.56,
        deltas=[10, 50, 100, 200, 500]
    )

    print("\n" + "=" * 60)
    print("LS-LMSR Demonstration")
    print("=" * 60)
    ls_market = LSLMSRMarket(b_0=50, alpha=0.02, n_outcomes=2)
    print(f"Initial b: {ls_market.b:.1f}")
    print(f"{'Trade':>6} {'Cost':>8} {'b_used':>8} {'b_new':>8} {'YES':>8}")
    print("-" * 42)
    for i in range(10):
        result = ls_market.execute_trade(0, 20)
        print(
            f"{i+1:>6} "
            f"${result['cost']:>7.2f} "
            f"{result['b_used']:>8.1f} "
            f"{result['b_new']:>8.1f} "
            f"{result['price_after']:>7.2%}"
        )

LMSR vs. CLOB vs. Constant-Product AMM

An agent operating across platforms must understand three market structures:

Market Structure Comparison

                    LMSR              CPMM (x*y=k)       CLOB (Order Book)
                    ─────────────     ───────────────     ─────────────────
Liquidity source    Algorithmic       Liquidity pools     Active market makers
                    (operator subsidy) (LP deposits)      (limit orders)

Price impact        Logarithmic       Hyperbolic          Step function
                    (smooth, bounded) (1/x curve)         (jumps at price levels)

Max maker loss      b * ln(n)         Unbounded           Unbounded
                    (known upfront)   (impermanent loss)  (adverse selection)

Resolution          Pays $1 or $0     No terminal event   Pays $1 or $0
                    per outcome       (continuous swap)    per outcome

Zero-slippage       Never             Never               Yes, up to posted
trades possible?    (always impact)   (always impact)     depth at best price

Used by             Gnosis, Azure PM  Uniswap, Curve      Polymarket,
                                      (token markets)      Kalshi, Betfair

The key insight for agents: LMSR price impact is deterministic and calculable before trade submission. On a CLOB, price impact depends on the current order book state, which can change between when the agent reads it and when the order fills. On an LMSR, the cost function is a pure function of the share vector — for any given state q, the quoted cost is exact, though concurrent trades can still move q before execution.

However, the Polymarket CLOB has a critical advantage for large agents: zero slippage up to posted depth. If an agent sees 10,000 shares offered at $0.63 on the Polymarket CLOB, it can buy up to 10,000 shares at exactly $0.63 with no impact. On LMSR, every share moves the price.
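A toy contrast makes the structural difference concrete. This is a sketch, not any platform's actual pool math: a no-fee constant-product pool next to a binary LMSR quote.

```python
import math

def cpmm_buy_cost(x: float, y: float, dx: float) -> float:
    """Units of y paid to take dx units of x out of a no-fee x*y=k pool."""
    k = x * y
    return k / (x - dx) - y

def lmsr_yes_cost(q_yes: float, q_no: float, b: float, delta: float) -> float:
    """Cost of delta YES shares in a binary LMSR."""
    def c(u: float, v: float) -> float:
        return b * math.log(math.exp(u / b) + math.exp(v / b))
    return c(q_yes + delta, q_no) - c(q_yes, q_no)

# CPMM: marginal price explodes as the trade approaches pool depth.
half_pool = cpmm_buy_cost(100.0, 100.0, 50.0)   # pay 100 y for 50 x: avg price 2.0
near_all = cpmm_buy_cost(100.0, 100.0, 99.0)    # pay 9,900 y for 99 x: avg price 100.0

# LMSR: marginal price is capped at $1, so total cost never exceeds delta.
lmsr_big = lmsr_yes_cost(0.0, 0.0, 100.0, 1000.0)   # < $1,000 for 1,000 shares
```

The CPMM's marginal price is unbounded as a trade drains the pool; the LMSR's marginal price can never exceed $1 per share.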

Limitations and Edge Cases

1. The b selection problem. Choosing b is the hardest operational decision for an LMSR market operator. Too low and traders face excessive slippage — rational agents route to alternative markets. Too high and the operator bleeds subsidy money. There is no closed-form optimal b; it depends on expected volume, the value of information revelation, and the operator’s subsidy budget.

2. No adverse-selection defense. LMSR’s deterministic, path-independent pricing treats all flow identically: an agent can split a large trade into many small trades and pay the same total cost. The mechanism therefore cannot penalize suspected informed flow — unlike a CLOB, where a market maker can widen spreads in response to it.

3. Near-boundary stiffness. When a price approaches 0 or 1, the shares required to move it further grow exponentially. Pushing a fresh binary market from 50% to 99% takes b * ln(99) ≈ 4.6 * b shares — for b = 1000, roughly 4,595 shares costing about $3,912 — and reaching 1.00 exactly would require infinitely many shares. The market becomes extremely stiff near the boundaries.
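The stiffness is straightforward to quantify by inverting the binary price function, q = b * ln(p / (1 − p)) (a sketch; function names are illustrative):

```python
import math

def shares_for_price(p: float, b: float) -> float:
    """Binary LMSR with q_NO = 0: q_YES = b * ln(p / (1 - p))."""
    return b * math.log(p / (1 - p))

def cost_from_even(p: float, b: float) -> float:
    """Cost to push a fresh 50/50 binary market to price p."""
    q = shares_for_price(p, b)
    return b * math.log(math.exp(q / b) + 1) - b * math.log(2)

b = 1000.0
q_99 = shares_for_price(0.99, b)     # ≈ 4,595 shares to reach 99%
c_99 = cost_from_even(0.99, b)       # ≈ $3,912
q_999 = shares_for_price(0.999, b)   # ≈ 6,907: each extra "nine" costs ~2.3*b more shares
```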

4. Multi-outcome complexity. For n outcomes, a trade in one outcome moves the prices of all the others (they must sum to 1). An agent building a portfolio across outcomes in the same market must evaluate the joint cost function rather than price each leg at current marginals. By path independence, buying YES on Brazil and YES on France costs the same whether done in one transaction or two — but the total exceeds the sum of the two legs priced at today’s quotes, because the first purchase shifts the prices the second leg faces.
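A sketch of the gap between pricing legs at current marginals and evaluating the joint cost function (three-outcome toy state; names illustrative):

```python
import math

def lmsr_cost(q: list[float], b: float) -> float:
    m = max(x / b for x in q)
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def prices(q: list[float], b: float) -> list[float]:
    m = max(x / b for x in q)
    exps = [math.exp(x / b - m) for x in q]
    z = sum(exps)
    return [e / z for e in exps]

b = 2000.0
q = [450.0, 380.0, 320.0]          # three-outcome toy state

# Naive: price each 200-share leg at today's marginals.
p = prices(q, b)
naive = p[0] * 200 + p[1] * 200

# Joint: evaluate the cost function once across both legs.
joint = lmsr_cost([q[0] + 200, q[1] + 200, q[2]], b) - lmsr_cost(q, b)

# Sequential: leg order doesn't matter -- the costs telescope.
mid = [q[0] + 200, q[1], q[2]]
seq = (lmsr_cost(mid, b) - lmsr_cost(q, b)) + \
      (lmsr_cost([q[0] + 200, q[1] + 200, q[2]], b) - lmsr_cost(mid, b))
```

Convexity of C guarantees joint ≥ naive; path independence guarantees the order of the legs never changes the total.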

5. No bid-ask spread revenue. Unlike a CLOB market maker that earns the spread, an LMSR operator has zero expected profit. The market maker is a pure subsidy mechanism. Profitability comes from auxiliary fees (Kalshi charges per-contract fees), data monetization, or advertising. This economic model affects which platforms survive long-term and matters for agent platform selection — see the Prediction Market API Reference for platform fee structures.

FAQ

What is the LMSR cost function for prediction markets?

The LMSR cost function is C(q) = b * ln(sum(e^(q_i/b))), where q is the vector of outstanding shares per outcome, b is the liquidity parameter, and the sum runs over all outcomes. The cost of a trade is the difference in C before and after updating the share vector. Robin Hanson introduced this function in 2003, and it guarantees liquidity at every price level with bounded worst-case loss.

What is the maximum loss for an LMSR market maker?

The LMSR market maker’s worst-case loss is exactly b * ln(n), where b is the liquidity parameter and n is the number of outcomes. For a binary market (n=2) with b=100, the maximum loss is 100 * ln(2) = $69.31. This bounded loss property makes LMSR the standard choice for subsidized prediction markets where the operator needs a known cost ceiling.

How does LMSR compare to Uniswap-style constant-product AMMs?

LMSR uses a logarithmic cost function with bounded market maker loss of b * ln(n). Constant-product AMMs (x*y=k) have unbounded impermanent loss and price impact scaling hyperbolically with trade size. LMSR is designed for prediction markets where outcomes resolve to a terminal value ($0 or $1); CPMMs are designed for continuous token swaps. An agent trading on LMSR can calculate exact costs upfront; CPMM costs depend on pool state at execution time.

How does the LMSR liquidity parameter b affect prediction market trading?

The liquidity parameter b controls the tradeoff between market depth and market maker exposure. Higher b produces tighter spreads and lower price impact per trade, but the maximum loss (b * ln(n)) scales linearly with b. A b of 100 handles small retail trades; a b of 10,000 supports institutional flow but exposes the maker to $6,931 maximum loss in a binary market. Agents should estimate b from observed price impact and adjust position sizing accordingly.
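One way to back b out of an observed small trade is to invert dp/dq = p(1 − p)/b at the midpoint price (a sketch under the assumption that the trade is small relative to b; names are illustrative):

```python
import math

def estimate_b(p_before: float, p_after: float, shares: float) -> float:
    """Invert dp/dq = p(1-p)/b at the midpoint of an observed small price move."""
    p_mid = (p_before + p_after) / 2
    return p_mid * (1 - p_mid) * shares / (p_after - p_before)

# Simulate a 10-share buy on a market with true b = 500, then recover b.
b_true = 500.0
q = b_true * math.log(0.56 / 0.44)          # binary state with p_YES = 0.56
p0 = math.exp(q / b_true) / (math.exp(q / b_true) + 1)
p1 = math.exp((q + 10) / b_true) / (math.exp((q + 10) / b_true) + 1)

b_hat = estimate_b(p0, p1, 10)              # close to 500 for small trades
```

For trades that are large relative to b, invert the exact cost function numerically instead of this first-order formula.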

What is LS-LMSR and why does it matter for prediction market agents?

LS-LMSR (liquidity-sensitive LMSR) dynamically adjusts b as the market grows. The variant in this guide uses cumulative volume, b(V) = alpha * V + b_0; the canonical formulation of Othman et al. (2010) instead scales b with total outstanding shares. Either way, in early markets with little activity b stays small (low subsidy), and as the market grows b rises, providing deeper liquidity. Agents must model this because marginal price impact decreases over time as b increases — early trades move the market more than late trades.

What’s Next

The LMSR cost function is one of several scoring rules used in prediction market design — that guide covers Brier scoring, logarithmic scoring, and what makes a scoring rule “proper.” For the order book side of market microstructure, see the Prediction Market Microstructure guide. The multi-outcome markets guide extends the math here to combinatorial markets where outcomes interact.

For practical trading across both LMSR and CLOB markets, the Arbitrage Detection Algorithms guide shows how agents find and exploit cross-platform price discrepancies. And the betting bots hub covers the full agent infrastructure for deploying these strategies.