Elo ratings convert game results into team strength estimates using E_A = 1 / (1 + 10^((R_B - R_A) / 400)). Add margin-of-victory adjustments, home-field corrections, and season regression to build a model that outputs calibrated win probabilities — then compare those probabilities against sportsbook lines to find edge.

Why This Matters for Agents

An autonomous betting agent needs a model that answers one question: what is the true probability that Team A beats Team B? Elo is the simplest rating system that produces calibrated probabilities from nothing but historical results. No play-by-play data required. No roster information. No injury reports. Just outcomes.

This is Layer 4 — Intelligence. An Elo engine sits in the agent’s model layer, ingesting game results and outputting win probabilities. Those probabilities flow to the decision module, which compares them against implied probabilities from sportsbook odds (pulled via The Odds API pipeline) and calculates expected value. If EV > 0, the agent sizes the bet with Kelly and routes the order to the best-priced book. The Elo model is the foundation of that pipeline — and it’s surprisingly effective. With margin-of-victory adjustments, FiveThirtyEight’s NFL Elo model picked straight-up winners at roughly a 60-65% rate over its published history, competitive with far more elaborate systems built on play-by-play data. An agent running a well-calibrated Elo model against sharp offshore books like BookMaker or BetOnline has a legitimate starting point for finding edge.

The Math

The Standard Elo Formula

Arpad Elo designed the system for chess in the 1960s. The core insight: model the probability of player A beating player B as a logistic function of the rating difference.

The expected score for player A against player B:

E_A = 1 / (1 + 10^((R_B - R_A) / 400))

Where R_A and R_B are the current ratings. The function maps any rating difference to a probability between 0 and 1.

Why the logistic function? Elo originally modeled each player’s performance as normally distributed, which makes the difference between two performances normal as well; the logistic curve is a close and mathematically convenient approximation to that normal CDF, and it is what most modern implementations use. The base-10 exponent and the 400 scaling factor are conventions from chess — a 400-point rating gap corresponds to a 10:1 expected score ratio (E_A ≈ 0.909).

Rating Difference → Expected Score

R_A - R_B = +400  →  E_A = 0.909  (91% expected)
R_A - R_B = +200  →  E_A = 0.760  (76% expected)
R_A - R_B = +100  →  E_A = 0.640  (64% expected)
R_A - R_B =    0  →  E_A = 0.500  (50% — even match)
R_A - R_B = -100  →  E_A = 0.360  (36% expected)
R_A - R_B = -200  →  E_A = 0.240  (24% expected)
R_A - R_B = -400  →  E_A = 0.091  ( 9% expected)
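A few lines reproduce the table (a standalone sketch of the formula, separate from the full implementation later in this guide):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """E_A = 1 / (1 + 10^((R_B - R_A) / 400))."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# Reproduce the table: rating gaps map to expected scores
for diff in (400, 200, 100, 0, -100, -200, -400):
    print(f"{diff:+5d} -> {expected_score(1500 + diff, 1500):.3f}")
```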

The Update Rule

After a game, ratings update:

R_new = R_old + K × (S - E)

Where K is the update sensitivity (higher K = bigger swings), S is the actual score (1 for win, 0.5 for draw, 0 for loss), and E is the expected score from the formula above.

The term (S - E) is the surprise factor. If Team A was expected to win (E = 0.75) and did win (S = 1), the surprise is small: S - E = 0.25. If Team A was expected to win and lost (S = 0), the surprise is large: S - E = -0.75. Bigger surprises produce bigger rating changes.
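A minimal sketch of the update rule, using the two scenarios just described (K = 20 assumed):

```python
def elo_update(r_old: float, k: float, s: float, e: float) -> float:
    """R_new = R_old + K * (S - E)."""
    return r_old + k * (s - e)

# Expected to win (E = 0.75) and won: small surprise, small gain
print(elo_update(1600, 20, 1.0, 0.75))  # 1605.0
# Expected to win (E = 0.75) and lost: large surprise, large drop
print(elo_update(1600, 20, 0.0, 0.75))  # 1585.0
```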

K-Factor Selection

K controls how quickly ratings respond to new information. This is the single most important tuning parameter.

K Value   Behavior                           Use Case
4-8       Very slow updates, highly stable   Large-sample sports (MLB 162-game season)
15-20     Moderate updates                   NFL, NBA regular season
24-32     Fast updates, responsive           New teams, volatile leagues
40+       Very reactive, high noise          Not recommended for betting models

For NFL betting models, K = 20 is the standard starting point. FiveThirtyEight used K = 20 for their base NFL Elo before margin-of-victory adjustments.

The optimal K depends on the sport’s signal-to-noise ratio. Baseball has high game-to-game variance (the best team loses 40% of its games), so you need small K and many games to get stable ratings. Football has lower variance per game outcome but far fewer games — 17 regular season NFL games versus 162 MLB games — so you need higher K to react to genuine team changes within a season.

Season-to-Season Regression

Teams change between seasons. Players retire, rosters turn over, coaching staffs change. A team’s end-of-season rating shouldn’t carry over unchanged.

The standard approach: regress each team’s rating toward the mean by one-third at the start of each new season.

R_new_season = R_end × (2/3) + R_mean × (1/3)

If the mean rating is 1505 (slightly above the 1500 default to account for expansion teams entering at 1500 and initially losing), a team that finished at 1650 starts the next season at:

R_new = 1650 × 2/3 + 1505 × 1/3 = 1100.0 + 501.7 = 1601.7

This 1/3 regression is well-calibrated for the NFL. In the NBA, where rosters are more stable year-to-year due to longer contracts and fewer players per team, some modelers use 1/4 regression.
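As a quick sketch (the function name is illustrative):

```python
def regress(r_end: float, r_mean: float = 1505.0, frac: float = 1 / 3) -> float:
    """R_new_season = R_end * (1 - frac) + R_mean * frac."""
    return r_end * (1 - frac) + r_mean * frac

print(round(regress(1650), 1))             # the worked example above: ~1601.7
print(round(regress(1650, frac=0.25), 1))  # gentler NBA-style 1/4 regression
```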

Margin-of-Victory Adjustment

Basic Elo treats all wins equally. A 1-point victory updates the same as a 35-point blowout. That throws away information. The FiveThirtyEight NFL model addresses this with a margin-of-victory (MOV) multiplier:

MOV_mult = ln(|MOV| + 1) × (2.2 / ((R_winner - R_loser) × 0.001 + 2.2))

Two components work together here:

1. Logarithmic compression: ln(|MOV| + 1) compresses blowouts. A 7-point win gets multiplier ln(8) = 2.08. A 28-point win gets ln(29) = 3.37. The 4x larger margin produces only a 1.6x larger multiplier. This is correct because the difference between winning by 7 and winning by 14 is more meaningful than the difference between winning by 28 and winning by 35.

2. Autocorrelation correction: The denominator (R_winner - R_loser) × 0.001 + 2.2 prevents a feedback loop. Without it, strong teams would accumulate inflated ratings because they beat weak teams by large margins, gaining extra Elo, which makes the expected margin even larger, which makes the next big win look less surprising than it should. The correction reduces the MOV multiplier when the winner was already heavily favored.

The adjusted update becomes:

R_new = R_old + K × MOV_mult × (S - E)
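Both components are visible in a direct translation of the formula (standalone sketch; the example ratings are hypothetical):

```python
import math

def mov_multiplier(margin: int, r_winner: float, r_loser: float) -> float:
    """MOV_mult = ln(|MOV| + 1) * (2.2 / ((R_w - R_l) * 0.001 + 2.2))."""
    return math.log(abs(margin) + 1) * (2.2 / ((r_winner - r_loser) * 0.001 + 2.2))

# Same 28-point blowout, opposite pre-game ratings (100-point gap):
print(round(mov_multiplier(28, 1500, 1600), 3))  # underdog wins big: boosted
print(round(mov_multiplier(28, 1600, 1500), 3))  # favorite wins big: damped
```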

Glicko-2: Adding Uncertainty

Standard Elo assumes all ratings are equally reliable. They aren’t. A team that has played 50 games this season has a well-established rating. A team returning from a bye week or starting a new season has a less certain rating.

Glicko-2 (developed by Mark Glickman) adds two parameters beyond the rating:

  • Rating Deviation (RD): Standard deviation of the rating estimate. High RD = uncertain rating. Low RD = well-known rating.
  • Volatility (sigma): How erratically the team performs. High sigma = inconsistent results.

Key mechanics:

RD grows during inactivity:

RD_new = sqrt(RD_old^2 + sigma^2)

After each rating period without games, RD increases (Glicko-2 applies this step on its internal scale, where phi = RD / 173.7178, so the growth is gradual). A team that hasn’t played in 3 weeks has a wider confidence interval around its rating.
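A sketch of that widening on Glicko-2’s internal scale (the helper name and the RD = 50 starting value are illustrative):

```python
import math

GLICKO2_SCALE = 173.7178  # conversion between display-scale RD and internal phi

def widen_rd(rd: float, sigma: float = 0.06, periods: int = 1) -> float:
    """Grow RD over idle rating periods: phi' = sqrt(phi^2 + sigma^2)."""
    phi = rd / GLICKO2_SCALE
    for _ in range(periods):
        phi = math.sqrt(phi ** 2 + sigma ** 2)
    return phi * GLICKO2_SCALE

for weeks in (1, 3, 10):
    print(f"after {weeks} idle periods: RD = {widen_rd(50.0, periods=weeks):.1f}")
```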

Updates scale with opponent RD: When a team with low RD (well-known strength) beats a team with high RD (uncertain strength), the update is smaller than if both teams had low RD. Beating an unknown opponent tells you less.

Initial values: Rating = 1500, RD = 350 (very uncertain), sigma = 0.06.

For betting agents, Glicko-2’s main advantage is that RD provides a built-in confidence interval. An agent can say “Team A’s rating is 1620 +/- 45” rather than just “1620.” When comparing model probabilities to market odds, the agent can account for its own uncertainty — widening the required edge threshold when RD is high.

TrueSkill: Teams and Multiplayer

Microsoft’s TrueSkill extends Elo concepts to team games where individual contributions matter. Each player has a skill rating (mu) and uncertainty (sigma), and team strength is the sum of individual skills.

The math uses Gaussian belief propagation — messages pass between factor nodes representing match outcomes and player skills. This is computationally heavier than Elo but handles partial team observations (when some players are unknown or substituted).

For most sports betting applications, team-level Elo is sufficient. TrueSkill becomes valuable for esports (variable rosters), tennis doubles, or any context where individual player ratings need to compose into team ratings.

Worked Examples

Example 1: NFL Week 10 — Chiefs vs. Bills

Pre-game ratings (hypothetical mid-season values calibrated to recent performance):

Kansas City Chiefs:  R = 1635  (strong team)
Buffalo Bills:       R = 1610  (also strong)
Home team: Bills

Rating difference (with home-field advantage):
R_Bills_adj = 1610 + 48 = 1658  (home-field = +48 Elo points)
R_Chiefs    = 1635

E_Chiefs = 1 / (1 + 10^((1658 - 1635) / 400))
         = 1 / (1 + 10^(23/400))
         = 1 / (1 + 10^0.0575)
         = 1 / (1 + 1.1415)
         = 1 / 2.1415
         = 0.467

E_Bills  = 1 - 0.467 = 0.533

The model gives the Bills a 53.3% win probability at home. Check this against the line: if BetOnline has the Bills’ moneyline at -110, the implied probability is roughly 52.4% (the breakeven win rate at that price, vig included). The model’s 53.3% barely clears it, so there is little edge on this game.

Now suppose the Bills win 27-20 (MOV = 7):

MOV_mult = ln(7 + 1) × (2.2 / ((1658 - 1635) × 0.001 + 2.2))
         = ln(8) × (2.2 / (0.023 + 2.2))
         = 2.079 × (2.2 / 2.223)
         = 2.079 × 0.9897
         = 2.058

K = 20
Update_Bills  = 20 × 2.058 × (1 - 0.533) = 20 × 2.058 × 0.467 = 19.22
Update_Chiefs = 20 × 2.058 × (0 - 0.467) = 20 × 2.058 × (-0.467) = -19.22

New ratings:
Bills:  1610 + 19.22 = 1629.2
Chiefs: 1635 - 19.22 = 1615.8
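The arithmetic in this example can be checked end-to-end with a short standalone script (same constants as the worked numbers above; variable names are ad hoc):

```python
import math

def expected(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

HOME = 48
r_bills_adj = 1610 + HOME  # 1658 with home-field advantage
e_chiefs = expected(1635, r_bills_adj)
e_bills = 1 - e_chiefs

# Bills win by 7; MOV multiplier uses the winner-minus-loser rating gap
mov = math.log(7 + 1) * (2.2 / ((r_bills_adj - 1635) * 0.001 + 2.2))
update = 20 * mov * (1 - e_bills)  # K=20, Bills were ~53.3% favorites

print(f"E_Bills = {e_bills:.3f}, MOV mult = {mov:.3f}, update = {update:+.2f}")
```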

Example 2: Detecting Value Against the Spread

An agent runs Elo on every NFL team and gets these Week 12 probabilities:

Game: Packers at Vikings
Elo probability (Packers win): 0.42
Elo probability (Vikings win): 0.58

Sportsbook line (Bovada): Vikings -3 at -110
Implied probability of Vikings covering -3: ~0.524 (the breakeven rate at -110, vig included)

The agent’s Elo model says the Vikings have a 58% chance of winning outright. But the spread is -3 — the Vikings need to win by more than 3. Using the standard NFL conversion of roughly 25 Elo points per point of spread, the +58 Elo gap (Vikings adjusted rating minus Packers adjusted rating) maps to an expected margin of about 2.3 points, with a standard deviation of 13.5 points (the historical spread of NFL scoring margins).

P(Vikings cover -3) = P(margin > 3)
                    = P(z > (3 - 2.3) / 13.5)
                    = P(z > 0.052)
                    = 0.479

The model says a 47.9% chance the Vikings cover, while breaking even at -110 requires 52.4%. No edge on the Vikings, and at 52.1% the Packers +3 side falls just short of breakeven as well. The agent passes on this game.
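The normal-tail step can be sketched with math.erf (a standalone helper; the 13.5-point standard deviation is the historical NFL margin spread used in this section):

```python
import math

def cover_prob(expected_margin: float, line: float, sd: float = 13.5) -> float:
    """P(actual margin > line) under a normal model of the scoring margin."""
    z = (line - expected_margin) / sd
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(round(cover_prob(3.5, 3.0), 3))  # expected margin above the line: over 50%
print(round(cover_prob(2.3, 3.0), 3))  # expected margin below the line: under 50%
```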

Example 3: Season Regression

End of 2025 NFL season ratings:

Team          End Rating    Regressed: End × 2/3 + 1505 × 1/3    New Season Start
Chiefs        1672          1672 × 2/3 + 1505 × 1/3              1616.3
Lions         1648          1648 × 2/3 + 1505 × 1/3              1600.3
Ravens        1630          1630 × 2/3 + 1505 × 1/3              1588.3
Patriots      1385          1385 × 2/3 + 1505 × 1/3              1425.0
Panthers      1370          1370 × 2/3 + 1505 × 1/3              1415.0

The compression is deliberate. The gap between the best and worst teams shrinks from 302 points to 201 points. This reflects genuine offseason uncertainty — free agency, the draft, and coaching changes narrow the gap between contenders and rebuilders.

Implementation

"""
Elo Rating System for Sports Betting Agents

A complete, runnable implementation with margin-of-victory adjustment,
home-field advantage, season regression, and calibration evaluation.

pip install numpy pandas
"""

import math
import numpy as np
import pandas as pd
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class EloConfig:
    """Configuration for the Elo rating system."""
    initial_rating: float = 1500.0
    k_factor: float = 20.0
    home_advantage: float = 48.0
    mean_rating: float = 1505.0
    regression_fraction: float = 1 / 3
    use_mov: bool = True
    scale_factor: float = 400.0


@dataclass
class TeamRating:
    """Current rating state for a single team."""
    rating: float = 1500.0
    games_played: int = 0
    wins: int = 0
    losses: int = 0


class EloRatingSystem:
    """
    Elo rating engine for autonomous betting agents.

    Supports:
    - Standard Elo with configurable K-factor
    - Margin-of-victory adjustment (FiveThirtyEight method)
    - Home-field advantage
    - Season-to-season regression
    - Win probability output for betting decisions
    """

    def __init__(self, config: Optional[EloConfig] = None):
        self.config = config or EloConfig()
        self.ratings: dict[str, TeamRating] = {}

    def get_rating(self, team: str) -> TeamRating:
        """Get or initialize a team's rating."""
        if team not in self.ratings:
            self.ratings[team] = TeamRating(rating=self.config.initial_rating)
        return self.ratings[team]

    def expected_score(
        self,
        rating_a: float,
        rating_b: float
    ) -> float:
        """
        Compute expected score for team A against team B.

        E_A = 1 / (1 + 10^((R_B - R_A) / 400))

        Returns probability between 0 and 1.
        """
        exponent = (rating_b - rating_a) / self.config.scale_factor
        return 1.0 / (1.0 + 10.0 ** exponent)

    def win_probability(
        self,
        team_a: str,
        team_b: str,
        home_team: Optional[str] = None
    ) -> dict[str, float]:
        """
        Return win probabilities for both teams.

        Args:
            team_a: First team name
            team_b: Second team name
            home_team: Which team (if any) has home-field advantage

        Returns:
            Dict with team names as keys and win probabilities as values.
        """
        r_a = self.get_rating(team_a).rating
        r_b = self.get_rating(team_b).rating

        if home_team == team_a:
            r_a += self.config.home_advantage
        elif home_team == team_b:
            r_b += self.config.home_advantage

        prob_a = self.expected_score(r_a, r_b)
        return {team_a: prob_a, team_b: 1.0 - prob_a}

    def _mov_multiplier(
        self,
        margin: int,
        winner_rating: float,
        loser_rating: float
    ) -> float:
        """
        Margin-of-victory multiplier (FiveThirtyEight method).

        MOV_mult = ln(|MOV| + 1) * (2.2 / ((R_w - R_l) * 0.001 + 2.2))

        The log compresses blowouts. The denominator prevents
        autocorrelation between rating gap and margin.
        """
        log_component = math.log(abs(margin) + 1)
        rating_diff = winner_rating - loser_rating
        autocorr_correction = 2.2 / (rating_diff * 0.001 + 2.2)
        return log_component * autocorr_correction

    def update(
        self,
        team_a: str,
        team_b: str,
        score_a: int,
        score_b: int,
        home_team: Optional[str] = None
    ) -> dict[str, float]:
        """
        Update ratings after a game.

        Args:
            team_a: First team
            team_b: Second team
            score_a: Points scored by team_a
            score_b: Points scored by team_b
            home_team: Which team had home-field advantage

        Returns:
            Dict with new ratings for both teams.
        """
        tr_a = self.get_rating(team_a)
        tr_b = self.get_rating(team_b)

        r_a = tr_a.rating
        r_b = tr_b.rating

        # Adjust for home field in expected score calculation
        r_a_adj = r_a + (self.config.home_advantage if home_team == team_a else 0)
        r_b_adj = r_b + (self.config.home_advantage if home_team == team_b else 0)

        e_a = self.expected_score(r_a_adj, r_b_adj)
        e_b = 1.0 - e_a

        # Actual scores: 1 = win, 0 = loss, 0.5 = draw
        if score_a > score_b:
            s_a, s_b = 1.0, 0.0
        elif score_b > score_a:
            s_a, s_b = 0.0, 1.0
        else:
            s_a, s_b = 0.5, 0.5

        # Margin-of-victory multiplier
        k = self.config.k_factor
        if self.config.use_mov:
            margin = abs(score_a - score_b)
            if score_a > score_b:
                mov_mult = self._mov_multiplier(margin, r_a_adj, r_b_adj)
            elif score_b > score_a:
                mov_mult = self._mov_multiplier(margin, r_b_adj, r_a_adj)
            else:
                mov_mult = 1.0
            k = k * mov_mult

        # Update ratings (update on raw ratings, not home-adjusted)
        tr_a.rating = r_a + k * (s_a - e_a)
        tr_b.rating = r_b + k * (s_b - e_b)

        # Update records
        tr_a.games_played += 1
        tr_b.games_played += 1
        if score_a > score_b:
            tr_a.wins += 1
            tr_b.losses += 1
        elif score_b > score_a:
            tr_b.wins += 1
            tr_a.losses += 1

        return {team_a: tr_a.rating, team_b: tr_b.rating}

    def regress_to_mean(self) -> None:
        """
        Apply season-to-season regression.
        Pulls every team's rating toward the mean by regression_fraction.

        R_new = R_old * (1 - frac) + R_mean * frac
        """
        frac = self.config.regression_fraction
        mean = self.config.mean_rating
        for team in self.ratings.values():
            team.rating = team.rating * (1 - frac) + mean * frac
            team.games_played = 0
            team.wins = 0
            team.losses = 0

    def standings(self) -> pd.DataFrame:
        """Return current ratings as a sorted DataFrame."""
        data = []
        for name, tr in self.ratings.items():
            data.append({
                "team": name,
                "rating": round(tr.rating, 1),
                "games": tr.games_played,
                "wins": tr.wins,
                "losses": tr.losses,
            })
        df = pd.DataFrame(data)
        return df.sort_values("rating", ascending=False).reset_index(drop=True)


def evaluate_calibration(
    predictions: list[float],
    outcomes: list[int],
    n_bins: int = 10
) -> dict:
    """
    Evaluate model calibration using Brier score, log-loss,
    and a calibration table.

    Args:
        predictions: List of predicted probabilities (for the team
                     considered the 'positive' outcome)
        outcomes: List of actual outcomes (1 = positive occurred, 0 = not)
        n_bins: Number of bins for calibration curve

    Returns:
        Dict with brier_score, log_loss, and calibration_table (DataFrame).
    """
    preds = np.array(predictions)
    acts = np.array(outcomes)

    # Brier score: (1/N) * sum((p - o)^2)
    brier = np.mean((preds - acts) ** 2)

    # Log-loss: -(1/N) * sum(o*ln(p) + (1-o)*ln(1-p))
    eps = 1e-15  # prevent log(0)
    preds_clipped = np.clip(preds, eps, 1 - eps)
    logloss = -np.mean(
        acts * np.log(preds_clipped) + (1 - acts) * np.log(1 - preds_clipped)
    )

    # Calibration table
    bin_edges = np.linspace(0, 1, n_bins + 1)
    rows = []
    for i in range(n_bins):
        # Include the right edge in the final bin so a prediction of 1.0 is counted
        if i == n_bins - 1:
            mask = (preds >= bin_edges[i]) & (preds <= bin_edges[i + 1])
        else:
            mask = (preds >= bin_edges[i]) & (preds < bin_edges[i + 1])
        if mask.sum() > 0:
            rows.append({
                "bin_low": round(bin_edges[i], 2),
                "bin_high": round(bin_edges[i + 1], 2),
                "mean_predicted": round(preds[mask].mean(), 3),
                "mean_actual": round(acts[mask].mean(), 3),
                "count": int(mask.sum()),
            })

    return {
        "brier_score": round(brier, 4),
        "log_loss": round(logloss, 4),
        "calibration_table": pd.DataFrame(rows),
    }


# --- Demo: Run a mini-season ---

if __name__ == "__main__":
    elo = EloRatingSystem(EloConfig(k_factor=20, home_advantage=48, use_mov=True))

    # Simulate some NFL-like results
    games = [
        ("Chiefs", "Lions", 24, 20, "Chiefs"),
        ("Bills", "Ravens", 31, 27, "Bills"),
        ("Lions", "Packers", 34, 20, "Lions"),
        ("Ravens", "Chiefs", 28, 24, "Ravens"),
        ("Bills", "Packers", 21, 17, "Packers"),
        ("Chiefs", "Packers", 30, 14, "Chiefs"),
        ("Lions", "Bills", 27, 24, "Lions"),
        ("Ravens", "Packers", 38, 10, "Ravens"),
    ]

    predictions = []
    outcomes = []

    for team_a, team_b, score_a, score_b, home_team in games:
        probs = elo.win_probability(team_a, team_b, home_team=home_team)
        pred_home = probs[home_team]

        # The home team may be listed first or second in the tuple
        home_score = score_a if home_team == team_a else score_b
        away_score = score_b if home_team == team_a else score_a
        away_team = team_b if home_team == team_a else team_a
        actual_home = 1 if home_score > away_score else 0

        predictions.append(pred_home)
        outcomes.append(actual_home)

        elo.update(team_a, team_b, score_a, score_b, home_team=home_team)
        winner = team_a if score_a > score_b else team_b
        margin = abs(score_a - score_b)
        print(
            f"{away_team:>10} {'@':>2} {home_team:<10}  "
            f"{away_score}-{home_score}  "
            f"P(home)={pred_home:.3f}  "
            f"Winner: {winner} (+{margin})"
        )

    print("\n--- Standings ---")
    print(elo.standings().to_string(index=False))

    print("\n--- Season Regression ---")
    elo.regress_to_mean()
    print(elo.standings().to_string(index=False))

    # Calibration (small sample, illustrative only)
    cal = evaluate_calibration(predictions, outcomes)
    print(f"\nBrier Score: {cal['brier_score']}")
    print(f"Log Loss:    {cal['log_loss']}")

Limitations and Edge Cases

Small sample sizes. NFL seasons have 17 games. An Elo system needs roughly 30-50 games to converge on a team’s true strength. This means early-season Elo ratings are heavily influenced by preseason regression and the first few results. An agent should widen its confidence threshold for bets in Weeks 1-4 and tighten as the season progresses. Glicko-2 handles this natively via the RD parameter.

Roster changes mid-season. Elo tracks team strength as a single number. It cannot account for a starting quarterback injury, a trade deadline acquisition, or a coaching change. When Josh Allen gets injured, the Bills’ Elo doesn’t drop — but their true win probability does. An agent that relies solely on Elo will be slow to react. The fix is to combine Elo with a roster-adjustment layer that modifies the base rating when key personnel changes occur.

Home-field advantage is not constant. The +48 Elo points for home-field is a league average. Some teams (the Seahawks at Lumen Field, the Broncos at altitude) have stronger home advantages. Some teams (the Chargers) historically have weaker home-field edges. A more sophisticated model uses team-specific home-field adjustments.

Elo doesn’t model the spread directly. Elo outputs a win probability, not a point spread. Converting Elo win probability to a predicted spread requires a separate mapping function. The empirical relationship for the NFL is approximately: predicted_spread = (R_A_adj - R_B_adj) / 25. A 100-point Elo gap corresponds to roughly a 4-point spread. But this mapping introduces additional error.
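As a sketch (the helper name is hypothetical; 25 Elo points per spread point is the approximate conversion stated above):

```python
def elo_gap_to_spread(r_a_adj: float, r_b_adj: float) -> float:
    """Approximate NFL mapping: ~25 Elo points per point of spread."""
    return (r_a_adj - r_b_adj) / 25.0

print(elo_gap_to_spread(1658, 1558))  # 100-point Elo gap -> ~4-point favorite
```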

The K-factor tradeoff is fundamental. High K reacts quickly to genuine team changes but also overreacts to random variance. Low K is stable but slow to adapt. There is no K that eliminates this tension — it’s a bias-variance tradeoff. Dynamic K (higher early in the season, lower later) helps but doesn’t solve it.

Elo assumes a single dimension of skill. A team’s ability to win is compressed into one number. In reality, teams have strengths and weaknesses: a dominant rushing offense paired with a weak pass defense. Elo cannot distinguish between a team that wins by controlling possession and one that wins with explosive plays. For dimensional modeling, see Regression Models for Sports Betting.

FAQ

What is the Elo rating formula for sports betting?

The Elo expected score formula is E_A = 1 / (1 + 10^((R_B - R_A) / 400)), where R_A and R_B are the ratings of team A and team B. This outputs a win probability between 0 and 1. After the game, ratings update via R_new = R_old + K × (S - E), where K controls sensitivity (typically 20 for established teams) and S is the actual outcome (1 for win, 0 for loss).

How do you adjust Elo ratings for margin of victory?

Multiply the K-factor by a margin-of-victory multiplier: MOV_mult = ln(|MOV| + 1) × (2.2 / ((R_winner - R_loser) × 0.001 + 2.2)). The logarithm compresses blowouts so a 28-point win isn’t valued 4x more than a 7-point win. The autocorrelation correction in the denominator prevents strong teams from being double-rewarded for beating weak opponents by large margins.

What is the difference between Elo and Glicko-2 rating systems?

Elo tracks only a single number per player/team. Glicko-2 adds two parameters: rating deviation (RD), which measures confidence in the rating, and volatility (sigma), which captures how erratically a team performs. A team with low RD and low sigma has a stable, well-known strength. A team with high RD hasn’t played recently and its rating is uncertain — Glicko-2 adjusts update magnitude accordingly.

How do you evaluate if an Elo model is well-calibrated for betting?

Use Brier score and log-loss against historical outcomes. Brier score = (1/N) × sum((predicted_prob - actual_outcome)^2) — lower is better, with 0.25 as the baseline for coin-flip predictions. Log-loss penalizes confident wrong predictions more heavily. Compare your model’s calibration curve against the 45-degree line: if you predict 70% and the team wins 70% of the time, the model is well-calibrated.

How does Elo connect to expected value in sports betting?

An Elo model outputs a win probability (e.g., 0.63 for Team A). Compare this against the sportsbook implied probability derived from the odds. If Elo says 63% and the book implies 55%, you have an 8-percentage-point edge. Feed this into the expected value formula EV = p × payout - (1-p) × stake, then size with Kelly. See the Expected Value guide for the full framework.
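A minimal sketch of that comparison, converting American odds to an implied probability and computing EV per dollar staked (helper names are illustrative):

```python
def implied_prob(american_odds: int) -> float:
    """Raw implied probability from American odds (vig still included)."""
    if american_odds < 0:
        return -american_odds / (-american_odds + 100)
    return 100 / (american_odds + 100)

def ev_per_dollar(model_prob: float, american_odds: int) -> float:
    """EV = p * payout - (1 - p) * stake, per $1 staked."""
    payout = 100 / -american_odds if american_odds < 0 else american_odds / 100
    return model_prob * payout - (1 - model_prob)

# Model says 63%, book offers -110 (implies ~52.4%): positive expected value
print(round(implied_prob(-110), 3))
print(round(ev_per_dollar(0.63, -110), 3))
```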

What’s Next

Elo gives you team strength estimates and win probabilities. The next step is building richer models that incorporate more variables:

  • Next in the series: Regression Models for Sports Betting — move beyond single-number ratings to multi-feature models using linear, logistic, and ridge regression.
  • Apply Elo to NFL modeling: NFL Mathematical Modeling builds a full NFL prediction system with point spreads, totals, and player props — Elo is the backbone.
  • Size your bets: Once you have calibrated probabilities, feed them into Kelly Criterion for optimal bet sizing.
  • Compare against the market: The AgentBets Vig Index shows real-time sportsbook overrounds — compare your Elo-derived probabilities against implied odds from BetOnline, Bovada, and BookMaker to find edge.
  • Full agent architecture: See the Agent Betting Stack for how the Elo engine fits into the four-layer agent pipeline, and browse the Marketplace for pre-built rating tools.