Monte Carlo simulation generates thousands of random outcome scenarios to map the full distribution of your bankroll’s future. Run 10,000+ simulated bet sequences, measure median profit, 5th-percentile drawdown, and ruin probability. Use these outputs to set position limits that keep your agent solvent through worst-case variance.
Why This Matters for Agents
An autonomous betting agent doesn’t get to see the future. It estimates probabilities, sizes positions with Kelly, and executes trades. But every input carries uncertainty: the probability estimate has error, the correlation structure between positions is approximate, and the sequence of outcomes is random. A single-point estimate of expected profit tells the agent nothing about the range of possible outcomes or the probability of catastrophic drawdown.
Monte Carlo simulation solves this. By generating 10,000-100,000 synthetic outcome scenarios, an agent maps the full probability distribution of its bankroll trajectory. This is Layer 4 — Intelligence. The simulation engine sits between the agent’s model outputs (probabilities, edges, Kelly fractions) and its Layer 2 wallet guardrails. The simulation answers: “Given my current portfolio of positions, what is the probability my bankroll drops below my ruin threshold?” If that probability exceeds the agent’s tolerance, the wallet module rejects the trade before it hits the orderbook. This feedback loop — simulate, evaluate, constrain — is how production agents manage risk without human intervention.
The Math
The Core Algorithm
Monte Carlo simulation for betting follows a simple structure:
```
for i = 1 to N_simulations:
    B = initial_bankroll
    for j = 1 to N_bets:
        sample outcome_j ~ Bernoulli(p_j)
        B += outcome_j * profit_j - (1 - outcome_j) * stake_j
    record terminal_bankroll[i] = B
analyze distribution of terminal_bankroll
```
Each simulation is one possible future. The collection of N futures approximates the true distribution of outcomes.
Why this works: The Law of Large Numbers guarantees that as N grows, the sample statistics (mean, percentiles, ruin frequency) converge to their true population values. The standard error of the sample mean decreases as sigma / sqrt(N), where sigma is the standard deviation of terminal bankroll across simulations. With N = 10,000, the standard error is 1% of sigma. With N = 100,000, it’s 0.3%.
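The sigma / sqrt(N) error law is easy to check empirically. A short sketch (the win probability is illustrative): repeat the "estimate the mean from N draws" experiment many times per N and compare the spread of the sample means against theory.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.55  # per-bet win probability (illustrative)

# Repeat the estimation experiment 200 times at each N and measure
# the spread of the resulting sample means.
for n in (1_000, 10_000, 100_000):
    sample_means = rng.binomial(n, p, size=200) / n
    empirical_se = sample_means.std()
    theoretical_se = np.sqrt(p * (1 - p) / n)  # sigma / sqrt(N)
    print(f"N={n:>7,}: empirical SE={empirical_se:.5f}  theory={theoretical_se:.5f}")
```

Each tenfold increase in N shrinks the standard error by a factor of sqrt(10), which is why tail metrics need far more simulations than means.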
Key Output Metrics
From the distribution of terminal bankrolls, an agent extracts:
| Metric | Formula | Use |
|---|---|---|
| Median outcome | 50th percentile of terminal_bankroll | Expected “typical” result |
| Value at Risk (VaR 5%) | 5th percentile of terminal_bankroll | Worst case in 95% of scenarios |
| Tail risk (VaR 1%) | 1st percentile of terminal_bankroll | Extreme downside |
| Ruin probability | P(terminal_bankroll <= 0) | Probability of total loss |
| Expected drawdown | Mean of max drawdown across paths | Average worst peak-to-trough |
| Sharpe analog | Mean(return) / Std(return) | Risk-adjusted performance |
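A sketch of extracting these metrics with numpy. The lognormal terminal bankrolls here are stand-ins; in practice the array comes from the simulation loop above.

```python
import numpy as np

rng = np.random.default_rng(1)
initial = 10_000.0
# Stand-in terminal bankrolls (always positive, so ruin is zero here);
# real values come from the Monte Carlo paths.
terminal = initial * rng.lognormal(mean=0.05, sigma=0.4, size=10_000)

median = np.median(terminal)                 # typical outcome
var_5 = np.percentile(terminal, 5)           # Value at Risk (5%)
var_1 = np.percentile(terminal, 1)           # tail risk
ruin = np.mean(terminal <= 0)                # ruin probability
returns = terminal / initial - 1
sharpe = returns.mean() / returns.std()      # Sharpe analog

print(f"median={median:,.0f}  VaR5={var_5:,.0f}  VaR1={var_1:,.0f}  "
      f"ruin={ruin:.2%}  sharpe={sharpe:.2f}")
```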
Modeling Correlated Positions
Independent simulations assume each bet outcome is statistically independent. That assumption fails for prediction market portfolios. If an agent holds YES on “Democrats win the presidency” and YES on “Democrats win the Senate,” those outcomes are positively correlated — the joint probability P(both YES) is higher than P(pres) * P(senate).
To model correlated binary outcomes:
Step 1: Define the correlation matrix R for n positions. R is n x n with R[i,i] = 1 and R[i,j] = rho_ij, where rho_ij is the pairwise correlation between outcomes i and j.
Step 2: Generate correlated standard normal draws using the Cholesky decomposition. If R = L * L^T, then Z = L * X where X ~ N(0, I) produces Z ~ N(0, R).
Step 3: Transform correlated normals to correlated uniforms via the standard normal CDF: U_i = Phi(Z_i).
Step 4: Convert uniforms to Bernoulli outcomes: outcome_i = 1 if U_i < p_i, else 0, where p_i is the probability of outcome i.
This Gaussian copula approach preserves the marginal probabilities p_i while introducing the desired correlation structure.
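The four steps, sketched with numpy and scipy (the marginal probabilities and correlation values are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
p = np.array([0.55, 0.50, 0.66])           # marginal probabilities
R = np.array([[1.00, 0.70, 0.25],          # Step 1: correlation matrix
              [0.70, 1.00, 0.15],
              [0.25, 0.15, 1.00]])

L = np.linalg.cholesky(R)                  # Step 2: R = L @ L.T
X = rng.standard_normal((100_000, 3))
Z = X @ L.T                                # correlated normals ~ N(0, R)
U = norm.cdf(Z)                            # Step 3: correlated uniforms
outcomes = (U < p).astype(int)             # Step 4: Bernoulli with marginal p_i

print(outcomes.mean(axis=0))               # marginals are preserved
```

Note that the induced correlation between the binary outcomes is lower than the Gaussian rho; the copula controls dependence in the latent normal space, not directly on the Bernoulli scale.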
Variance Reduction Techniques
Raw Monte Carlo converges slowly. Three techniques accelerate convergence:
Antithetic variates. For every vector of uniform draws U, also simulate with 1 - U. These two simulations are negatively correlated, so their average has lower variance than two independent simulations. Variance reduction: typically 20-40% for bankroll simulations.
Standard estimate: E[f(U)]
Antithetic estimate: [f(U) + f(1 - U)] / 2
Variance: Var_antithetic <= Var_standard whenever f(U) and f(1 - U) are negatively correlated, which is guaranteed when f is monotonic.
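A minimal sketch of the pairing on a 50-bet log-growth sequence (stake fraction and odds are illustrative). For a fair comparison, 10,000 antithetic pairs consume the same number of draws as 20,000 independent paths, so the antithetic estimator wins exactly when the pair-average variance is below half the single-path variance.

```python
import numpy as np

rng = np.random.default_rng(3)
p, b, f = 0.55, 0.909, 0.0275  # win prob, net odds, stake fraction (illustrative)

def terminal_log_growth(U):
    """Log bankroll growth over 50 bets given uniform draws U (n_paths, 50)."""
    wins = U < p
    step = np.where(wins, np.log(1 + f * b), np.log(1 - f))
    return step.sum(axis=1)

U = rng.uniform(size=(20_000, 50))
standard = terminal_log_growth(U)                        # 20k independent paths
paired = 0.5 * (terminal_log_growth(U[:10_000])
                + terminal_log_growth(1.0 - U[:10_000]))  # 10k antithetic pairs

print(f"single-path variance:   {standard.var():.5f}")
print(f"pair-average variance:  {paired.var():.5f}")
```

Because log growth is monotone in each uniform draw, the pair members are strongly negatively correlated and the reduction here is substantial; 20-40% is a more typical figure for less symmetric payoffs.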
Stratified sampling. Divide the [0, 1] interval into N equal strata. Draw exactly one uniform sample from each stratum. This eliminates clustering — every region of the probability space is represented, preventing the case where random draws over-represent the middle and under-represent the tails.
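A sketch of stratified uniforms: one draw per equal-width stratum, then shuffled so path order carries no structure. Estimating P(U < 0.55) this way is exact to within 1/N, while plain sampling carries the usual sqrt(N) noise.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10_000

# One uniform draw inside each of N equal-width bins of [0, 1)
strata = (np.arange(N) + rng.uniform(size=N)) / N
rng.shuffle(strata)  # remove the ordering the construction imposes

plain = rng.uniform(size=N)
print(f"stratified estimate of P(U < 0.55): {(strata < 0.55).mean():.4f}")
print(f"plain estimate:                     {(plain < 0.55).mean():.4f}")
```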
Importance sampling. To estimate rare events like ruin probability P(bankroll <= 0), standard Monte Carlo is inefficient — most simulations never hit ruin, so the estimate has high variance. Importance sampling overweights scenarios that lead to ruin by sampling from a shifted distribution q(x) and correcting with the likelihood ratio p(x)/q(x). Effective for estimating ruin probabilities below 1%.
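A minimal sketch on a tractable case: estimating the probability of winning at most 30 of 85 bets at p = 0.55, where the binomial CDF gives the exact answer for comparison (all parameters are illustrative). Sampling from a loss-tilted distribution q and correcting with the likelihood ratio puts most simulations in the tail of interest.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(5)
p, n_bets, k = 0.55, 85, 30    # win prob, bets, tail threshold (illustrative)
q = 0.40                        # shifted distribution that over-samples bad seasons
N = 50_000

# Sample season win counts under q, then reweight by the likelihood ratio p/q
wins = rng.binomial(n_bets, q, size=N)
lr = (p / q) ** wins * ((1 - p) / (1 - q)) ** (n_bets - wins)
est_is = float(np.mean((wins <= k) * lr))

naive = float(np.mean(rng.binomial(n_bets, p, size=N) <= k))  # few or no hits
exact = float(binom.cdf(k, n_bets, p))
print(f"importance sampling: {est_is:.2e}  naive: {naive:.2e}  exact: {exact:.2e}")
```

The naive estimate sees only a handful of tail events in 50,000 draws; the importance-sampled one hits the tail in roughly a fifth of its draws and lands within a few percent of the exact value.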
Worked Examples
Example 1: Stress-Testing Kelly Under Probability Uncertainty
An agent on Polymarket estimates the probability of “Fed cuts rates by June 2026” at p = 0.62. The YES price is $0.55. Using Kelly with net odds b = 1/0.55 - 1 = 0.818:
f* = (bp - q) / b = (0.818 * 0.62 - 0.38) / 0.818 = (0.507 - 0.38) / 0.818 = 0.155
Kelly says bet 15.5% of bankroll. But how confident is the agent in p = 0.62? If the true probability is actually 0.57, f* drops to 0.044 (4.4%). If it's at or below the market price of 0.55, f* is zero or negative and the agent shouldn't bet at all.
Monte Carlo stress test: model the agent’s uncertainty about p as Beta(31, 19), which has mean 0.62 and standard deviation 0.068. Draw 10,000 samples of p, compute Kelly fraction for each, simulate 50-bet sequences, and examine the terminal bankroll distribution.
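The Beta parameterization and its consequences can be checked directly (a quick sketch):

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, beta = 31, 19  # Beta(31, 19): mean 0.62, std ~0.068

mean = alpha / (alpha + beta)
std = np.sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))

# Draw candidate true probabilities and the implied Kelly fractions at price $0.55
b = 1 / 0.55 - 1                      # net odds
p_draws = rng.beta(alpha, beta, size=10_000)
kelly = np.maximum(0, (b * p_draws - (1 - p_draws)) / b)

print(f"mean p: {mean:.3f}, std p: {std:.3f}")
print(f"% of draws with zero Kelly (p <= 0.55): {(kelly == 0).mean():.1%}")
```

A meaningful fraction of plausible probability draws fall at or below the market price, where the correct bet size is zero; that is the uncertainty full Kelly ignores.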
Example 2: Correlated Polymarket Political Portfolio
An agent holds three positions on Polymarket:
| Position | Entry Price | Agent’s Prob | Edge |
|---|---|---|---|
| Democrats win presidency YES | $0.48 | 0.55 | 7% |
| Democrats win Senate YES | $0.42 | 0.50 | 8% |
| Fed cuts rates by Dec YES | $0.60 | 0.66 | 6% |
Estimated correlation matrix:
```
        Pres  Senate  Fed
Pres    1.00   0.70   0.25
Senate  0.70   1.00   0.15
Fed     0.25   0.15   1.00
```
If the agent treats these as independent and sizes each at Kelly, it underestimates portfolio risk. The presidency and Senate positions tend to resolve together, so a Republican sweep sinks both at once, and joint losses are more likely than independence suggests. Monte Carlo with the correlation matrix reveals the true portfolio drawdown distribution.
Example 3: Season-Long Sportsbook Bankroll Trajectory
An agent betting NFL sides on BetOnline at -110 lines with a 55% historical win rate (after accounting for vig). Over a 17-week season with 5 bets per week = 85 total bets. Half-Kelly sizing with net odds b = 0.909: f = 0.5 * (0.55 * 0.909 - 0.45) / 0.909 = 0.5 * 0.055 = 2.75% per bet.
Simulating 10,000 seasons reveals the distribution of season-end bankroll and the probability of a losing season despite positive expected value.
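The half-Kelly arithmetic for this example can be computed directly (a quick sketch; b is the net payout per dollar staked at -110):

```python
# Half-Kelly sizing for -110 lines with a 55% win rate
p, q = 0.55, 0.45
b = 100 / 110                      # net odds at -110: ~$0.909 won per $1 staked
half_kelly = 0.5 * (b * p - q) / b
print(f"half-Kelly fraction: {half_kelly:.4f} of bankroll per bet")
```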
Implementation
```python
import numpy as np
from scipy import stats
from dataclasses import dataclass


@dataclass
class Position:
    """A single betting position for Monte Carlo simulation."""
    name: str
    probability: float     # agent's estimated true probability
    entry_price: float     # price paid (0 to 1 for prediction markets)
    kelly_fraction: float  # fraction of bankroll to allocate

    @property
    def edge(self) -> float:
        return self.probability - self.entry_price

    @property
    def decimal_odds(self) -> float:
        return 1.0 / self.entry_price

    @property
    def profit_if_win(self) -> float:
        return self.decimal_odds - 1.0


@dataclass
class SimulationResult:
    """Output of a Monte Carlo bankroll simulation."""
    terminal_bankrolls: np.ndarray
    max_drawdowns: np.ndarray
    n_simulations: int

    @property
    def median_bankroll(self) -> float:
        return float(np.median(self.terminal_bankrolls))

    @property
    def mean_bankroll(self) -> float:
        return float(np.mean(self.terminal_bankrolls))

    @property
    def var_5(self) -> float:
        """5th percentile — Value at Risk."""
        return float(np.percentile(self.terminal_bankrolls, 5))

    @property
    def var_1(self) -> float:
        """1st percentile — tail risk."""
        return float(np.percentile(self.terminal_bankrolls, 1))

    @property
    def ruin_probability(self) -> float:
        """Fraction of simulations ending at or below zero."""
        return float(np.mean(self.terminal_bankrolls <= 0))

    @property
    def mean_max_drawdown(self) -> float:
        return float(np.mean(self.max_drawdowns))

    def summary(self) -> str:
        return (
            f"Simulations: {self.n_simulations:,}\n"
            f"Mean bankroll: ${self.mean_bankroll:,.2f}\n"
            f"Median bankroll: ${self.median_bankroll:,.2f}\n"
            f"VaR 5%: ${self.var_5:,.2f}\n"
            f"VaR 1%: ${self.var_1:,.2f}\n"
            f"Ruin prob: {self.ruin_probability:.4%}\n"
            f"Mean max DD: {self.mean_max_drawdown:.2%}"
        )
```
```python
def simulate_sequential_bets(
    positions: list[Position],
    n_bets_per_position: int,
    initial_bankroll: float = 10000.0,
    n_simulations: int = 10000,
    use_antithetic: bool = True,
    rng_seed: int = 42,
) -> SimulationResult:
    """
    Monte Carlo simulation of sequential bets across multiple positions.

    Each simulation runs through all positions n_bets_per_position times
    in round-robin order. Kelly fractions are applied to the current
    bankroll at each step (dynamic sizing).

    Args:
        positions: List of Position objects defining the bets.
        n_bets_per_position: Number of times each position is bet.
        initial_bankroll: Starting bankroll in dollars.
        n_simulations: Number of Monte Carlo paths to generate.
        use_antithetic: If True, use antithetic variates for variance reduction.
        rng_seed: Random seed for reproducibility.

    Returns:
        SimulationResult with terminal bankrolls and drawdown data.
    """
    rng = np.random.default_rng(rng_seed)
    n_positions = len(positions)
    total_bets = n_positions * n_bets_per_position

    # Adjust simulation count for antithetic variates
    if use_antithetic:
        n_base = n_simulations // 2
        n_actual = n_base * 2
    else:
        n_base = n_simulations
        n_actual = n_simulations

    # Generate uniform random draws: shape (n_base, total_bets)
    uniforms = rng.uniform(0, 1, size=(n_base, total_bets))
    if use_antithetic:
        # Stack original and antithetic (1 - U) draws
        uniforms = np.vstack([uniforms, 1.0 - uniforms])

    # Build probability and payoff arrays
    probs = np.array([p.probability for p in positions])
    kelly_fracs = np.array([p.kelly_fraction for p in positions])
    profits = np.array([p.profit_if_win for p in positions])

    # Tile for round-robin: [pos0, pos1, ..., pos0, pos1, ...]
    bet_indices = np.tile(np.arange(n_positions), n_bets_per_position)
    bet_probs = probs[bet_indices]
    bet_kelly = kelly_fracs[bet_indices]
    bet_profits = profits[bet_indices]

    # Convert uniforms to outcomes: 1 if U < p, else 0
    outcomes = (uniforms < bet_probs[np.newaxis, :]).astype(np.float64)

    # Simulate bankroll paths with dynamic Kelly sizing
    bankrolls = np.full(n_actual, initial_bankroll)
    peak_bankrolls = np.full(n_actual, initial_bankroll)
    max_drawdowns = np.zeros(n_actual)

    for t in range(total_bets):
        stake = bankrolls * bet_kelly[t]
        # Clamp stake to available bankroll (can't bet more than you have)
        stake = np.minimum(stake, bankrolls)
        stake = np.maximum(stake, 0)
        pnl = outcomes[:, t] * stake * bet_profits[t] - (1 - outcomes[:, t]) * stake
        bankrolls += pnl
        bankrolls = np.maximum(bankrolls, 0)  # floor at zero (ruin)
        peak_bankrolls = np.maximum(peak_bankrolls, bankrolls)
        current_dd = (peak_bankrolls - bankrolls) / np.maximum(peak_bankrolls, 1e-10)
        max_drawdowns = np.maximum(max_drawdowns, current_dd)

    return SimulationResult(
        terminal_bankrolls=bankrolls,
        max_drawdowns=max_drawdowns,
        n_simulations=n_actual,
    )
```
```python
def simulate_correlated_positions(
    positions: list[Position],
    correlation_matrix: np.ndarray,
    initial_bankroll: float = 10000.0,
    n_simulations: int = 10000,
    rng_seed: int = 42,
) -> SimulationResult:
    """
    Monte Carlo simulation for a portfolio of correlated binary positions.

    Uses a Gaussian copula to generate correlated Bernoulli outcomes.
    All positions resolve simultaneously (single resolution event).

    Args:
        positions: List of Position objects.
        correlation_matrix: n x n correlation matrix (must be positive semi-definite).
        initial_bankroll: Starting bankroll in dollars.
        n_simulations: Number of Monte Carlo paths.
        rng_seed: Random seed.

    Returns:
        SimulationResult with terminal bankrolls.
    """
    rng = np.random.default_rng(rng_seed)
    n = len(positions)

    # Validate correlation matrix
    assert correlation_matrix.shape == (n, n), "Correlation matrix shape mismatch"

    # Cholesky decomposition for correlated normal generation
    L = np.linalg.cholesky(correlation_matrix)

    # Generate independent standard normals: (n_simulations, n)
    Z_independent = rng.standard_normal(size=(n_simulations, n))

    # Correlate via Cholesky: Z_correlated = Z_independent @ L^T
    Z_correlated = Z_independent @ L.T

    # Transform to uniforms via normal CDF
    U_correlated = stats.norm.cdf(Z_correlated)

    # Transform to Bernoulli outcomes
    probs = np.array([p.probability for p in positions])
    outcomes = (U_correlated < probs[np.newaxis, :]).astype(np.float64)

    # Compute portfolio PnL for each simulation
    bankrolls = np.full(n_simulations, initial_bankroll)
    for j, pos in enumerate(positions):
        stake = initial_bankroll * pos.kelly_fraction
        pnl = outcomes[:, j] * stake * pos.profit_if_win - (1 - outcomes[:, j]) * stake
        bankrolls += pnl
    bankrolls = np.maximum(bankrolls, 0)

    # Drawdown is single-step here (simultaneous resolution)
    drawdowns = np.maximum(0, initial_bankroll - bankrolls) / initial_bankroll

    return SimulationResult(
        terminal_bankrolls=bankrolls,
        max_drawdowns=drawdowns,
        n_simulations=n_simulations,
    )
```
```python
def kelly_stress_test(
    entry_price: float,
    prob_mean: float,
    prob_std: float,
    n_bets: int = 50,
    initial_bankroll: float = 10000.0,
    n_simulations: int = 10000,
    kelly_multiplier: float = 0.5,
    rng_seed: int = 42,
) -> dict:
    """
    Stress-test Kelly sizing under probability estimation uncertainty.

    Models the agent's uncertainty about true probability as a Beta
    distribution with the given mean and standard deviation. For each
    simulation, draws a 'true' probability from the Beta, computes
    Kelly fraction, then simulates a sequence of n_bets.

    Args:
        entry_price: Market price (cost of YES contract).
        prob_mean: Agent's point estimate of true probability.
        prob_std: Standard deviation of probability estimate.
        n_bets: Number of sequential bets per simulation.
        initial_bankroll: Starting bankroll.
        n_simulations: Number of Monte Carlo paths.
        kelly_multiplier: Fractional Kelly (0.25 = quarter, 0.5 = half, 1.0 = full).
        rng_seed: Random seed.

    Returns:
        Dict with simulation results and Kelly fraction distribution.
    """
    rng = np.random.default_rng(rng_seed)

    # Fit Beta distribution to mean and std.
    # Beta params: alpha = mean * kappa, beta = (1 - mean) * kappa,
    # where kappa = mean * (1 - mean) / var - 1
    variance = prob_std ** 2
    kappa = prob_mean * (1 - prob_mean) / variance - 1
    alpha = prob_mean * kappa
    beta_param = (1 - prob_mean) * kappa

    # Draw 'true' probabilities for each simulation
    true_probs = rng.beta(alpha, beta_param, size=n_simulations)

    # Compute Kelly fraction for each drawn probability
    decimal_odds_minus_1 = (1.0 / entry_price) - 1.0
    kelly_fracs = np.maximum(
        0,
        (decimal_odds_minus_1 * true_probs - (1 - true_probs)) / decimal_odds_minus_1
    ) * kelly_multiplier

    # Simulate bankroll paths
    terminal_bankrolls = np.full(n_simulations, initial_bankroll)
    for t in range(n_bets):
        # Outcomes drawn from the 'true' probability (not the estimate)
        outcomes = rng.uniform(0, 1, size=n_simulations) < true_probs
        stakes = terminal_bankrolls * kelly_fracs
        stakes = np.minimum(stakes, terminal_bankrolls)
        stakes = np.maximum(stakes, 0)
        pnl = np.where(
            outcomes,
            stakes * decimal_odds_minus_1,
            -stakes
        )
        terminal_bankrolls += pnl
        terminal_bankrolls = np.maximum(terminal_bankrolls, 0)

    result = SimulationResult(
        terminal_bankrolls=terminal_bankrolls,
        max_drawdowns=np.zeros(n_simulations),  # not tracked per-step here
        n_simulations=n_simulations,
    )
    return {
        "result": result,
        "summary": result.summary(),
        "kelly_fracs_mean": float(np.mean(kelly_fracs)),
        "kelly_fracs_std": float(np.std(kelly_fracs)),
        "prob_draws_mean": float(np.mean(true_probs)),
        "pct_negative_kelly": float(np.mean(kelly_fracs == 0)),
        "beta_alpha": alpha,
        "beta_beta": beta_param,
    }
```
```python
# ── Worked Example: Run all three scenarios ──────────────────────────
if __name__ == "__main__":
    # --- Example 1: Sequential NFL season on BetOnline ---
    print("=" * 60)
    print("EXAMPLE 1: NFL Season Simulation (85 bets, half-Kelly)")
    print("=" * 60)
    nfl_position = Position(
        name="NFL sides at -110",
        probability=0.55,
        entry_price=0.5238,  # -110 American = 52.38% implied
        kelly_fraction=0.5 * ((0.55 * (1 / 0.5238 - 1) - 0.45) / (1 / 0.5238 - 1))
    )
    print(f"Kelly fraction (half): {nfl_position.kelly_fraction:.4f}")
    nfl_result = simulate_sequential_bets(
        positions=[nfl_position],
        n_bets_per_position=85,
        initial_bankroll=10000.0,
        n_simulations=10000
    )
    print(nfl_result.summary())
    print()

    # --- Example 2: Correlated Polymarket portfolio ---
    print("=" * 60)
    print("EXAMPLE 2: Correlated Polymarket Political Portfolio")
    print("=" * 60)
    positions = [
        Position("Dem Pres YES", probability=0.55, entry_price=0.48,
                 kelly_fraction=0.5 * ((0.55 * (1 / 0.48 - 1) - 0.45) / (1 / 0.48 - 1))),
        Position("Dem Senate YES", probability=0.50, entry_price=0.42,
                 kelly_fraction=0.5 * ((0.50 * (1 / 0.42 - 1) - 0.50) / (1 / 0.42 - 1))),
        Position("Fed Cuts YES", probability=0.66, entry_price=0.60,
                 kelly_fraction=0.5 * ((0.66 * (1 / 0.60 - 1) - 0.34) / (1 / 0.60 - 1))),
    ]
    corr_matrix = np.array([
        [1.00, 0.70, 0.25],
        [0.70, 1.00, 0.15],
        [0.25, 0.15, 1.00]
    ])

    # Correlated simulation
    corr_result = simulate_correlated_positions(
        positions=positions,
        correlation_matrix=corr_matrix,
        initial_bankroll=10000.0,
        n_simulations=10000
    )
    print("WITH correlation:")
    print(corr_result.summary())
    print()

    # Compare: independent simulation (identity correlation)
    indep_result = simulate_correlated_positions(
        positions=positions,
        correlation_matrix=np.eye(3),
        initial_bankroll=10000.0,
        n_simulations=10000
    )
    print("WITHOUT correlation (independent):")
    print(indep_result.summary())
    print()

    # --- Example 3: Kelly stress test ---
    print("=" * 60)
    print("EXAMPLE 3: Kelly Stress Test Under Probability Uncertainty")
    print("=" * 60)
    stress = kelly_stress_test(
        entry_price=0.55,
        prob_mean=0.62,
        prob_std=0.068,
        n_bets=50,
        initial_bankroll=10000.0,
        n_simulations=10000,
        kelly_multiplier=0.5
    )
    print(stress["summary"])
    print(f"Mean Kelly fraction: {stress['kelly_fracs_mean']:.4f}")
    print(f"Std Kelly fraction: {stress['kelly_fracs_std']:.4f}")
    print(f"% sims with zero Kelly: {stress['pct_negative_kelly']:.2%}")
    print(f"Beta params: alpha={stress['beta_alpha']:.1f}, "
          f"beta={stress['beta_beta']:.1f}")
```
Limitations and Edge Cases
Garbage in, garbage out. Monte Carlo amplifies whatever distributions you feed it. If your probability estimates are biased, the simulation faithfully propagates that bias across 10,000 paths. The simulation doesn’t fix bad models — it quantifies risk conditional on the model being approximately correct.
Correlation estimation is hard. The Gaussian copula assumes linear correlation captures the dependency structure. In reality, prediction market outcomes can have tail dependence (both go wrong simultaneously more often than linear correlation suggests). The copula underestimates joint tail risk. For critical portfolio decisions, run simulations with correlation bumped up 20-30% as a stress scenario.
Static vs. dynamic strategies. The sequential simulation above uses a fixed Kelly fraction computed from the initial probability estimate. A real agent updates its estimates mid-sequence as new information arrives — Bayesian updating changes p, which changes f*. Simulating adaptive strategies requires an inner model update loop inside the Monte Carlo, increasing computational cost by 10-100x.
Computational cost scales linearly. 10,000 simulations x 1,000 bets = 10 million random draws. With numpy vectorization, this runs in under a second. But adding correlation (Cholesky decomposition), adaptive strategies (model updates per step), or high-dimensional portfolios (50+ correlated positions) pushes runtime into minutes. For real-time agent decisions, pre-compute simulation tables offline and interpolate.
Bernoulli outcomes are a simplification. Prediction market contracts are binary, so Bernoulli sampling is exact. But sportsbook outcomes have continuous payoffs (point spreads, totals), and the Bernoulli model misses partial outcomes. For continuous payoffs, sample from the appropriate distribution (Poisson for scores per the Poisson modeling guide, normal for spreads).
Ruin probability estimation in the tails. A stable estimate of P(ruin) = 0.1% needs on the order of 1,000 observed ruin events, and at a hit rate of 1 in 1,000 that means N > 1,000,000 simulations. For rare-event estimation, use importance sampling; standard Monte Carlo is too slow.
FAQ
How do you use Monte Carlo simulation for sports betting bankroll management?
Generate 10,000+ simulated bankroll paths by sampling each bet outcome from a Bernoulli distribution using your estimated win probability. For each simulation, apply your staking strategy (Kelly, flat, proportional) sequentially across all bets and record the terminal bankroll. The distribution of terminal bankrolls gives you median expected profit, worst-case drawdown at the 5th percentile (Value at Risk), and ruin probability P(bankroll <= 0). This is how agents on BetOnline or Bookmaker quantify season-level risk before committing capital.
What are variance reduction techniques for Monte Carlo betting simulations?
Three primary techniques: antithetic variates (pair each uniform draw U with its reflection 1 - U to create a negatively correlated twin simulation, reducing variance 20-40%), importance sampling (oversample rare tail events like ruin to get tighter estimates with fewer simulations), and stratified sampling (divide the [0,1] probability range into N equal strata and sample one point per stratum for guaranteed coverage of the full distribution). Antithetic variates are simplest to implement and should be the default for any production agent.
How do you simulate correlated prediction market positions?
Build a correlation matrix reflecting the dependencies between positions (e.g., Democratic presidential win and Democratic Senate win are positively correlated at roughly rho = 0.6-0.8). Use the Gaussian copula: generate correlated standard normal draws via Cholesky decomposition, transform to uniform via the normal CDF, then convert to Bernoulli outcomes by comparing against your probability thresholds. This captures portfolio-level risk that independent simulations miss entirely.
How many Monte Carlo simulations do you need for reliable betting results?
For median and mean estimates, 10,000 simulations typically suffice — the standard error of the mean scales as sigma / sqrt(N). For tail risk metrics like 1st percentile drawdown or ruin probability below 1%, you need 50,000-100,000 simulations to get stable estimates. The required N depends on the quantity you’re estimating. A useful rule: to estimate a probability p with relative error epsilon, you need approximately 1 / (p * epsilon^2) simulations.
How does Monte Carlo simulation connect to Kelly criterion bet sizing?
Kelly tells you the theoretically optimal fraction to bet, but it assumes you know the true probability exactly. Monte Carlo stress-tests Kelly by drawing probabilities from an uncertainty distribution (e.g., Beta distribution centered on your estimate) and simulating bankroll paths under each draw. This reveals how Kelly sizing performs under realistic estimation error. The typical finding: full Kelly is dangerously aggressive when probability estimates have standard deviation above 0.05 — half-Kelly or quarter-Kelly survives the uncertainty much better, aligning with the analysis in the drawdown math guide.
What’s Next
Monte Carlo simulation quantifies portfolio risk. The next step is understanding how correlated positions amplify that risk — and how to construct portfolios that diversify it away.
- Next in the series: Correlation and Portfolio Theory for Multi-Market Agents extends the correlation framework to full portfolio optimization with mean-variance frontiers.
- Parlay risk: Correlation Risk in Parlays applies Monte Carlo to multi-leg bets where “independent” legs share hidden correlation.
- Wallet enforcement: The Agent Wallet Comparison documents how Layer 2 wallets (Coinbase Agentic, Safe) enforce the position limits that Monte Carlo outputs recommend.
- Full stack context: See the Agent Betting Stack for how Monte Carlo simulation fits between Layer 4 intelligence and Layer 2 wallet guardrails.
- Multi-agent aggregation: Polyseer uses ensemble methods to aggregate probability estimates from multiple models — feed those aggregated probabilities into the simulation framework for more robust inputs.
