Offshore Sportsbook APIs: The Complete Developer Guide

You want programmatic access to offshore sportsbook odds. You’re building an arb scanner, a line-shopping bot, or a quantitative model that needs prices from BetOnline, Bovada, or MyBookie alongside the regulated U.S. books. The problem: offshore sportsbooks were never designed for developers. There are no published API docs, no developer portals, no OAuth flows. You’re on your own.

This guide maps every viable path to offshore sportsbook data — official APIs (where they exist), undocumented internal endpoints, third-party aggregators, and raw scraping. By the end, you’ll know exactly which approach fits your use case and have working code to start pulling odds.

Disclaimer: Accessing offshore sportsbook data may violate those books’ Terms of Service. Some offshore sportsbooks operate in legal gray areas depending on your jurisdiction. This guide is for educational and informational purposes. You are responsible for understanding and complying with all applicable laws and terms of service. Nothing here constitutes legal advice or an endorsement of any sportsbook.


The Offshore API Landscape

The regulated U.S. sportsbook ecosystem — DraftKings, FanDuel, BetMGM — is slowly moving toward developer-friendly APIs. Offshore books are a different world. Books like Bovada and BetOnline serve millions of bettors but invest almost nothing in developer tooling. Their incentive structure doesn’t reward it: they profit from recreational bettors, not from quants building automated systems.

This creates a fragmented landscape with four tiers of data access:

Tier 1: Official APIs

Almost nonexistent. No major offshore sportsbook publishes a documented, versioned REST API with authentication, rate limit headers, and a developer portal. If you’re coming from the Kalshi or Polymarket world where APIs are first-class products, recalibrate your expectations.

The closest thing to an “official” API in the offshore space is BetOnline’s XML/JSON odds feed, which has been intermittently available and is not formally documented. Some books offer affiliate data feeds, but these are limited to marketing data (sign-up links, promotional odds) rather than full market data.

Tier 2: Undocumented Internal Endpoints

Every sportsbook website loads its odds from somewhere. Modern sportsbook frontends are single-page applications that fetch data from internal API endpoints. These endpoints return structured JSON, are publicly accessible (no authentication required for read-only odds data), and can be called directly.

Bovada is the canonical example. Its frontend makes requests to internal endpoints that return full odds data as JSON. BetOnline similarly exposes data feeds that power its odds display. These endpoints work — until they don’t. The sportsbook can change URLs, response formats, or add bot detection at any time, with no notice and no changelog.

Tier 3: Third-Party Aggregator APIs

This is the pragmatic path for most developers. Services like The Odds API, OpticOdds, OddsJam, and Unabated aggregate odds from dozens of sportsbooks — including offshore books — and expose them through clean, documented REST APIs.

| Provider | Offshore Coverage | Update Speed | Free Tier | Pricing |
| --- | --- | --- | --- | --- |
| The Odds API | BetOnline, Bovada, MyBookie, BetUS | 5–30s | 500 requests/month | From $20/mo |
| OpticOdds | BetOnline, Bovada, Sportsbetting.ag | Sub-second | Limited trial | Custom pricing |
| OddsJam | 40+ books incl. major offshore | 2–10s | No | From $99/mo |
| Unabated | 20+ books incl. offshore | 5–15s | No | From $50/mo |

The tradeoff: you pay for the data, you’re subject to the aggregator’s rate limits, and you don’t control data freshness. But you get stable endpoints, structured responses, and someone else dealing with the scraping and maintenance nightmare.

Tier 4: Direct Scraping

When no other option works, you can scrape the sportsbook website directly. This means headless browsers (Playwright, Puppeteer), anti-bot bypass tooling, and constant maintenance as the site changes its frontend.

Scraping is the most fragile and resource-intensive approach. It’s also the only option for books that no third-party aggregator covers, or when you need data points (e.g., player props, live betting feeds) that aggregators don’t include. Budget for 10–20 hours per month of maintenance per scraped site.
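A minimal Playwright sketch of this approach, assuming a hypothetical page URL and CSS selector (both are placeholders you would replace after inspecting the real site):

```python
def parse_american(text: str) -> int:
    """Parse displayed odds like '+150' or '-110'. Some sites render the
    Unicode minus sign (U+2212) instead of ASCII '-'."""
    return int(text.strip().replace("\u2212", "-"))

def scrape_odds(url: str, odds_selector: str) -> list[int]:
    """Render the page in headless Chromium and pull the odds the SPA drew.
    The import is deferred so the parser above works without Playwright."""
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Wait for the SPA to finish its XHR calls before reading the DOM.
        page.goto(url, wait_until="networkidle")
        texts = page.locator(odds_selector).all_inner_texts()
        browser.close()
    return [parse_american(t) for t in texts]
```

In practice you would add proxy rotation, stealth plugins, and retry logic on top of this skeleton — that overhead is where the maintenance hours quoted above go.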


Individual Book Overviews

BetOnline.ag

BetOnline is the most developer-accessible of the major offshore books. Its odds data has been consistently available through various endpoints, and multiple third-party aggregators include BetOnline in their coverage. The site’s frontend architecture is relatively straightforward compared to competitors.

BetOnline exposes several data feeds that return odds in structured formats. The feed URLs have remained more stable than Bovada’s, though they still change periodically. BetOnline also appears in most third-party aggregator services — The Odds API, OpticOdds, and OddsJam all include BetOnline data.

For agent developers, BetOnline is often the first offshore book to integrate because the data is available, the odds are competitive (BetOnline is known for sharp-friendly lines in some markets), and account longevity for winning bettors tends to be better than at some competitors. See the full technical breakdown in the BetOnline API guide.

Bovada

Bovada is the highest-volume offshore sportsbook serving U.S. bettors and one of the most technically interesting from a data access perspective. Its frontend is a modern SPA that loads all odds data from internal JSON endpoints. These endpoints are well-structured, return rich data including market metadata, and have been reverse-engineered by the community multiple times.

The challenge with Bovada’s internal endpoints is stability. Bovada periodically changes its API structure, adds Cloudflare-style bot protection, or restructures URL patterns. What works today may break next week. The community typically figures out the new endpoints within days, but if you’re building a production system, you need a fallback.

Bovada’s lines are soft — they cater primarily to recreational bettors, which means they’re slower to move and more likely to have arb opportunities against sharp books. This makes Bovada data particularly valuable for arbitrage and positive-EV strategies. For the full deep dive, see the Bovada API guide.

Sportsbetting.ag

Sportsbetting.ag shares infrastructure with BetOnline — they’re sister sites running on the same backend platform. This means the data access methods that work for BetOnline generally work for Sportsbetting.ag with minor URL differences. The odds are often identical or very close between the two books.

Because the backend is shared, Sportsbetting.ag’s internal endpoints mirror BetOnline’s structure. Third-party aggregators that cover BetOnline typically also include Sportsbetting.ag data, sometimes under the same data source. For practical purposes, if you’ve built a BetOnline integration, adapting it for Sportsbetting.ag is minimal work.

The main reason to track Sportsbetting.ag separately is that odds can diverge slightly from BetOnline, particularly on props and live betting. These divergences occasionally create micro-arb opportunities between the sister sites. Full details in the Sportsbetting.ag API guide.

MyBookie

MyBookie is one of the more popular offshore sportsbooks but also one of the harder to access programmatically. The site uses aggressive bot detection, and its internal endpoint structure is less predictable than BetOnline’s or Bovada’s. MyBookie has been known to serve different content to headless browsers versus real users.

Third-party aggregator coverage for MyBookie varies. The Odds API includes MyBookie data for major markets. OddsJam covers it as well. But prop market and live betting data from MyBookie are harder to find in aggregator feeds compared to BetOnline or Bovada.

MyBookie’s lines tend to be recreational-facing, which means softer odds on popular markets. For strategies that depend on identifying soft lines (positive-EV betting, middles), MyBookie data is worth tracking even if the integration is more painful. See the MyBookie API guide for workarounds and known endpoints.

BetUS

BetUS is the oldest continuously operating offshore sportsbook and has a more traditional web architecture than its competitors. Its frontend is less of a modern SPA and more of a server-rendered application, which means odds data isn’t always available from clean JSON endpoints. This makes direct endpoint access harder and scraping more complex.

Third-party coverage for BetUS is spottier than for BetOnline or Bovada. Some aggregators include BetUS for major moneyline and spread markets, but prop and live betting coverage is limited. If you specifically need BetUS data, you may need to combine an aggregator feed for core markets with direct scraping for props.

BetUS differentiates itself with aggressive promotional odds on high-profile events. These promotional lines can create genuine arb opportunities against other books, making BetUS data valuable for event-driven strategies even if day-to-day integration is more effort. Full analysis in the BetUS API guide.


Data Access Comparison

| Book | Official API | Internal Endpoints | Third-Party Coverage | Scraping Difficulty |
| --- | --- | --- | --- | --- |
| BetOnline | Partial (feeds) | Structured JSON, moderately stable | Excellent (all major aggregators) | Medium |
| Bovada | None | Rich JSON, changes frequently | Good (The Odds API, OpticOdds) | Medium-Hard |
| Sportsbetting.ag | Partial (shared with BetOnline) | Mirrors BetOnline | Good (often bundled with BetOnline) | Medium |
| MyBookie | None | Unstable, bot detection | Moderate (The Odds API, OddsJam) | Hard |
| BetUS | None | Limited (server-rendered pages) | Limited (major markets only) | Hard |

Read this table as a decision matrix. If you need data from all five books, use a third-party aggregator. If you need a single book with the deepest data, hit BetOnline or Bovada’s internal endpoints directly. If you’re building an arb scanner, you likely need both — aggregator data for breadth and direct endpoints for latency.


Quick Start: Fetching Offshore Odds via The Odds API

The fastest path to offshore sportsbook odds is through The Odds API. Here’s a working example that pulls odds from multiple offshore books for a given sport:

import requests
from datetime import datetime

API_KEY = "YOUR_ODDS_API_KEY"
BASE_URL = "https://api.the-odds-api.com/v4"

OFFSHORE_BOOKS = [
    "betonlineag",
    "bovada",
    "mybookieag",
    "betus",
    "sportsbettingag",
]

def fetch_offshore_odds(sport: str = "americanfootball_nfl") -> list[dict]:
    """Fetch current odds from offshore sportsbooks for a given sport."""
    resp = requests.get(
        f"{BASE_URL}/sports/{sport}/odds",
        params={
            "apiKey": API_KEY,
            "regions": "us,us2",
            "markets": "h2h,spreads,totals",
            "bookmakers": ",".join(OFFSHORE_BOOKS),
            "oddsFormat": "american",
        },
        timeout=10,
    )
    resp.raise_for_status()

    remaining = resp.headers.get("x-requests-remaining", "?")
    print(f"API requests remaining: {remaining}")

    return resp.json()


def extract_best_odds(events: list[dict]) -> list[dict]:
    """Find the best available odds per outcome across offshore books."""
    results = []
    for event in events:
        game = {
            "id": event["id"],
            "matchup": f"{event['away_team']} @ {event['home_team']}",
            "commence": event["commence_time"],
            "best_odds": {},
        }

        for bookmaker in event.get("bookmakers", []):
            book_name = bookmaker["key"]
            for market in bookmaker.get("markets", []):
                market_key = market["key"]
                for outcome in market.get("outcomes", []):
                    label = f"{market_key}|{outcome['name']}"
                    point = outcome.get("point")
                    if point is not None:  # a 0-point line (pick'em) is still valid
                        label += f"|{point}"

                    price = outcome["price"]
                    if label not in game["best_odds"] or _is_better(
                        price, game["best_odds"][label]["price"]
                    ):
                        game["best_odds"][label] = {
                            "price": price,
                            "book": book_name,
                        }
        results.append(game)
    return results


def _is_better(new_price: int, old_price: int) -> bool:
    """Higher is better for both positive and negative American odds."""
    return new_price > old_price


if __name__ == "__main__":
    events = fetch_offshore_odds("americanfootball_nfl")
    best = extract_best_odds(events)

    for game in best:
        print(f"\n{game['matchup']} (starts {game['commence']})")
        for label, info in game["best_odds"].items():
            print(f"  {label}: {info['price']:+d} ({info['book']})")

This gives you a normalized view of the best available odds across offshore books for every event. To extend it into an arb scanner, compare these odds against regulated book data and calculate implied probability totals — see the Sports Betting Arbitrage Bot guide for the full pipeline.
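The implied-probability check mentioned above is simple arithmetic: for American odds, implied probability is 100/(odds + 100) for positive odds and |odds|/(|odds| + 100) for negative odds, and a two-way arb exists when the best prices on both sides sum to less than 1:

```python
def implied_prob(american: int) -> float:
    """Implied probability of American odds (vig included)."""
    if american > 0:
        return 100 / (american + 100)
    return -american / (-american + 100)

def two_way_arb_margin(odds_a: int, odds_b: int) -> float:
    """Total implied probability minus 1; a negative result means a
    guaranteed-profit arb of that size."""
    return implied_prob(odds_a) + implied_prob(odds_b) - 1.0
```

For example, +120 on one side and +110 on the other sums to roughly 0.931 (about a 6.9% arb margin), while a standard -110/-110 market sums to roughly 1.048 — the extra 4.8% is the book's vig.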


Rate Limiting, Data Freshness & Operational Considerations

Rate Limits

Every data access method has rate constraints:

  • The Odds API: 500 requests/month on free tier. Paid tiers scale up. Each request returns all bookmakers, so you can fetch efficiently.
  • OpticOdds: Custom rate limits based on your plan. WebSocket feeds avoid the request-per-poll problem entirely.
  • Direct endpoints (Bovada, BetOnline): No published rate limits, but aggressive polling triggers bot detection. Keep requests under 1 per 5 seconds per endpoint as a baseline. Rotate user agents, add jitter, respect Retry-After headers.
  • Scraping: Self-imposed rate limits are critical. One request every 10–30 seconds with randomized delays. Headless browser sessions are expensive — pool and reuse them.
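The jitter and Retry-After handling above can be sketched in a few lines. The interval and cap values are the baselines suggested in this guide, not anything the books publish:

```python
import random

def jittered_interval(base: float, jitter_frac: float = 0.3) -> float:
    """Base polling interval plus random jitter, so requests don't arrive
    at a machine-perfect fixed rate that bot detection can flag."""
    return base + random.uniform(0.0, base * jitter_frac)

def retry_delay(headers: dict, attempt: int, cap: float = 300.0) -> float:
    """Honor a Retry-After header when the server sends one; otherwise
    back off exponentially, capped at `cap` seconds."""
    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        return float(retry_after)
    return min(cap, 2.0 ** attempt)
```

Call `jittered_interval(5.0)` between successive polls of a direct endpoint, and `retry_delay(resp.headers, attempt)` after any 429 or 5xx response.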

Data Freshness

Freshness requirements depend on your strategy:

| Strategy | Required Freshness | Recommended Approach |
| --- | --- | --- |
| Arbitrage | < 5 seconds | Direct endpoints + WebSocket aggregator |
| Positive-EV | 5–30 seconds | Third-party aggregator API |
| Line shopping | 30–60 seconds | Aggregator API with polling |
| Historical analysis | Minutes–hours | Aggregator API, cached locally |

For arbitrage strategies, stale data kills you. An opportunity detected with 30-second-old odds may already be gone. If you’re building a serious arb scanner against offshore books, you need the lowest-latency data source available — which usually means direct endpoint access or OpticOdds’ WebSocket feed, not poll-based APIs.

Handling Endpoint Breakage

Offshore book endpoints break regularly. Your system needs:

  1. Health checks — Ping each data source on a schedule. If a source stops responding or returns stale data (check timestamps), flag it.
  2. Fallback chains — For each book, configure a primary source (e.g., direct endpoint) and a fallback (e.g., aggregator API). Switch automatically on failure.
  3. Alerting — When a data source degrades, you want to know immediately. Don’t discover it when your bot makes a bad trade on stale odds.
  4. Schema validation — Validate response structure before parsing. Offshore endpoints change their JSON schema without warning. A missing field that causes a KeyError at 2am is a solvable problem if you validate first.
A minimal fallback chain with validation looks like this:

from dataclasses import dataclass
from typing import Callable

@dataclass
class OddsSource:
    name: str
    fetch: Callable[[str], dict | list]
    priority: int  # lower value = tried first

class OddsFetcher:
    def __init__(self, sources: list[OddsSource]):
        self.sources = sorted(sources, key=lambda s: s.priority)

    def get_odds(self, sport: str) -> dict | list | None:
        for source in self.sources:
            try:
                data = source.fetch(sport)
                if self._validate(data):
                    return data
            except Exception as e:
                print(f"[{source.name}] failed: {e}")
                continue
        return None

    def _validate(self, data: dict | list) -> bool:
        if not data:
            return False
        # Accept either a dict with an "events" key or a bare list of events.
        if isinstance(data, dict):
            return "events" in data
        return isinstance(data, list)

Odds Normalization Challenges

One of the hardest parts of working with offshore sportsbook data isn’t getting the data — it’s making it comparable.

The Problem

Each book formats odds differently:

  • American odds are standard for U.S.-facing offshore books, but some internal endpoints return decimal odds or raw implied probabilities.
  • Market naming varies wildly. BetOnline might call it “Over 45.5”, Bovada might call it “Over 45½”, and a third-party aggregator might normalize to “over” with a point field of 45.5.
  • Team naming is inconsistent. “LAL” vs “Lakers” vs “Los Angeles Lakers” vs “LA Lakers”. Player name formatting varies even more.
  • Prop market structure differs between books. One book might offer “Player X Over 25.5 Points” as a single market; another splits it into “Over” and “Under” as separate markets.

The Solution

You need a normalization layer that:

  1. Converts all odds to a single format (decimal is best for calculations — convert to/from American for display).
  2. Maps team and player names to canonical IDs (build a lookup table or use a sports data API like ESPN or SportsDataIO for entity resolution).
  3. Standardizes market types into a common schema (e.g., {type: "total", side: "over", point: 45.5}).
  4. Handles edge cases like half-points, alternate lines, and markets that exist at one book but not another.
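Steps 1 and 4 are straightforward to sketch. The conversion formulas are standard; the canonical market dict at the end is one reasonable shape, not an established schema:

```python
def american_to_decimal(american: int) -> float:
    """American -> decimal: +150 -> 2.5, -110 -> ~1.909."""
    if american > 0:
        return 1 + american / 100
    return 1 + 100 / -american

def decimal_to_american(decimal: float) -> int:
    """Decimal -> American, rounded to the nearest integer price."""
    if decimal >= 2.0:
        return round((decimal - 1) * 100)
    return round(-100 / (decimal - 1))

def parse_point(text: str) -> float:
    """Handle half-point glyphs: '45½' and '45.5' both parse to 45.5."""
    return float(text.replace("½", ".5"))

def canonical_total(side: str, point_text: str, american: int) -> dict:
    """One possible canonical representation of a totals market."""
    return {
        "type": "total",
        "side": side.lower(),
        "point": parse_point(point_text),
        "decimal": american_to_decimal(american),
    }
```

With every book's "Over 45½ -110" and "over / 45.5 / -110" mapped through a layer like this, prices become directly comparable across sources.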

This is a large enough topic that it has its own dedicated guide. See Odds Normalization for Offshore Sportsbooks for the full implementation, including a Python normalization library and mapping tables for all five books covered here.


Building Your Data Pipeline

A production-grade offshore odds pipeline looks like this:

  1. Ingest layer — Parallel fetchers for each data source (aggregator APIs + direct endpoints). Each fetcher runs on its own schedule and writes to a shared queue.
  2. Normalization layer — Consumes raw odds from the queue, normalizes team names, market types, and odds formats. Outputs canonical odds objects.
  3. Storage layer — Write normalized odds to a time-series store (TimescaleDB, InfluxDB, or even SQLite for small-scale). Keep historical data for backtesting.
  4. Consumer layer — Your arb scanner, line shopping tool, or model reads from storage. Optionally subscribe to real-time updates via pub/sub.

The pipeline architecture matters more than any individual data source. Sources will break, change, and degrade. Your pipeline needs to absorb that volatility and still deliver clean, comparable odds to your strategy layer.
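A toy end-to-end version of the four layers, using a stdlib queue in place of a message broker and SQLite in place of a time-series database (all names here are illustrative, and it runs single-threaded for clarity):

```python
import queue
import sqlite3

def run_pipeline(fetchers, normalize, db_path=":memory:") -> sqlite3.Connection:
    """Ingest -> normalize -> store. `fetchers` are callables returning lists
    of raw odds records; `normalize` maps a raw record to the canonical
    shape stored below."""
    q: queue.Queue = queue.Queue()

    # Ingest layer: every fetcher dumps raw records onto the shared queue.
    for fetch in fetchers:
        for record in fetch():
            q.put(record)

    # Storage layer: a minimal time-series table keyed by timestamp.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS odds (ts REAL, book TEXT, market TEXT, price INTEGER)"
    )

    # Normalization layer: drain the queue into canonical rows.
    while not q.empty():
        rec = normalize(q.get())
        conn.execute(
            "INSERT INTO odds VALUES (?, ?, ?, ?)",
            (rec["ts"], rec["book"], rec["market"], rec["price"]),
        )
    conn.commit()
    return conn
```

In production each layer runs continuously on its own schedule, and the consumer layer queries the odds table (or subscribes to inserts) rather than being called inline.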


What’s Next

This hub page gives you the landscape. The spoke articles go deep on each book and technique:

Individual Book Guides

BetUS API and Automation: What You Need to Know

Developer guide to BetUS odds data and automation. Covers API access options, third-party providers, BetUS-specific characteristics, and practical approaches for integrating BetUS into automated multi-book betting pipelines.

Mar 1, 2026

Bovada API: What Developers Need to Know About Accessing Bovada Odds

Bovada has no public API, but its internal JSON endpoints are semi-discoverable. This guide covers the reality of Bovada API access — internal endpoints, the GitHub scraper ecosystem, and the recommended path via third-party providers like The Odds API and OpticOdds — with working Python code for building a Bovada odds pipeline.

Mar 1, 2026

Does BetOnline Have an API? How Developers Access BetOnline Odds Data

BetOnline has no public API. This guide covers how developers actually access BetOnline odds data — via third-party APIs like The Odds API and OpticOdds — with working Python code, comparison tables, and real-time pipeline architecture.

Mar 1, 2026

MyBookie API and Odds Data Access

Developer guide to accessing MyBookie odds data for automated betting strategies. Covers API options, third-party data providers, unique line characteristics, and how MyBookie fits into a multi-book automated pipeline.

Mar 1, 2026

Offshore Sportsbook Odds: How to Normalize Data Across Books

How to normalize odds data from multiple offshore sportsbooks into a unified format. Covers odds format conversion, market matching, timestamp alignment, and building a normalization pipeline in Python.

Mar 1, 2026

Sportsbetting.ag API and Automation Guide

Developer guide to accessing Sportsbetting.ag odds data programmatically. Covers API access options, third-party data providers, automation possibilities, and how Sportsbetting.ag compares to other offshore books for developers.

Mar 1, 2026