CrewAI is an open-source Python framework for building teams of AI agents that collaborate autonomously. This guide covers how to use CrewAI for prediction market research, sports betting analysis, and automated trading pipelines — with working code, architecture patterns, and production deployment strategies. CrewAI sits at Layer 4 (Intelligence) of the agent betting stack.

What CrewAI Is

CrewAI is a standalone Python framework for orchestrating multiple AI agents that work together as a team. Each agent has a role, a goal, a backstory, and access to tools. Agents collaborate on tasks — passing context, delegating work, and producing structured outputs — without you writing the coordination logic.

As of March 2026, CrewAI is at version 1.12 with 45,900+ GitHub stars, 5.2 million monthly PyPI downloads, and native support for both MCP (Model Context Protocol) and A2A (Agent-to-Agent) protocol. It powers over 12 million daily agent executions in production across companies like DocuSign, PwC, and General Assembly.

The framework provides two core abstractions:

  • Crews — Teams of agents with autonomous collaboration. Agents decide how to divide work, delegate subtasks, and combine results. Use crews when you want agents reasoning together.
  • Flows — Event-driven pipelines that sit above crews. Flows add conditional branching, state management, and deterministic control. Use flows when you need production-grade orchestration.

For betting agents, CrewAI maps naturally to how you’d structure a human trading team: a researcher gathers data, an analyst evaluates odds, a risk manager sizes positions, and an executor places trades. Each role becomes an agent.

Why CrewAI for Betting Agents

Single-agent architectures hit a ceiling fast in prediction markets. One LLM call cannot simultaneously scrape odds data, analyze sentiment, estimate probabilities, check risk limits, and decide trade sizing. CrewAI lets you decompose these tasks into specialized agents that each do one thing well.

┌─────────────────────────────────────────────────────────┐
│                   BETTING CREW                          │
│                                                         │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌────────┐ │
│  │RESEARCHER│─▶│ ANALYST  │─▶│   RISK   │─▶│EXECUTOR│ │
│  │          │  │          │  │ MANAGER  │  │        │ │
│  │ Scrape   │  │ Compare  │  │ Position │  │ Submit │ │
│  │ odds,    │  │ to fair  │  │ sizing,  │  │ orders │ │
│  │ news,    │  │ value,   │  │ bankroll │  │ via    │ │
│  │ social   │  │ find     │  │ checks   │  │ API    │ │
│  │ signals  │  │ +EV      │  │          │  │        │ │
│  └──────────┘  └──────────┘  └──────────┘  └────────┘ │
│       ▲                                        │       │
│       └────────── Memory (cross-session) ──────┘       │
└─────────────────────────────────────────────────────────┘

Key advantages over writing this from scratch:

  • Built-in delegation — Agents automatically route subtasks to the right team member
  • Memory across sessions — Agents remember what worked in previous runs
  • Structured outputs — Define Pydantic models for validated, typed results
  • 80+ built-in tools — Web search, file operations, API calls included out of the box
  • Any LLM — Use Claude for reasoning, GPT for speed, Ollama for cost control

Installation and Setup

CrewAI requires Python 3.10 through 3.13. The framework uses UV for dependency management but works fine with pip.

# Install CrewAI with tools
pip install 'crewai[tools]'

# Or with UV (recommended)
uv add 'crewai[tools]'

# Verify installation
python -c "import crewai; print(crewai.__version__)"
# 1.12.2

Set up your LLM provider. CrewAI defaults to OpenAI but supports any provider:

# For Anthropic Claude (recommended for reasoning tasks)
export ANTHROPIC_API_KEY="your-key"

# For OpenAI
export OPENAI_API_KEY="your-key"

# For local models via Ollama
# No API key needed — just run `ollama serve`

To use the CLI scaffolding:

# Create a new project
crewai create crew betting_research_crew

# This generates:
# betting_research_crew/
# ├── src/
# │   └── betting_research_crew/
# │       ├── config/
# │       │   ├── agents.yaml
# │       │   └── tasks.yaml
# │       ├── crew.py
# │       └── main.py
# └── pyproject.toml

Core Concepts

Agents

An agent is an autonomous unit with a role, goal, backstory, and optional tools. The backstory shapes how the LLM reasons — a “veteran sports bettor with 15 years of experience” produces different analysis than a “quantitative analyst at a hedge fund.”

from crewai import Agent

odds_analyst = Agent(
    role="Odds Analyst",
    goal="Identify mispriced markets by comparing bookmaker odds to true probabilities",
    backstory="""You are a sharp sports bettor with deep expertise in probability 
    theory and market microstructure. You specialize in finding +EV opportunities 
    across prediction markets and sportsbooks. You understand vig, closing line 
    value, and steam moves.""",
    llm="anthropic/claude-sonnet-4-20250514",
    verbose=True,
    memory=True,
    max_iter=5
)
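
The backstory above mentions closing line value; as a quick reference, CLV is the gap between the implied probability at your bet price and at the close. A stdlib sketch of the arithmetic (independent of CrewAI):

```python
def implied(odds: int) -> float:
    """American odds to implied probability (vig still included)."""
    return -odds / (-odds + 100) if odds < 0 else 100 / (odds + 100)

def clv(bet_odds: int, closing_odds: int) -> float:
    """Closing line value: closing implied probability minus what you paid.
    Positive CLV means you beat the close, the standard sharpness signal."""
    return implied(closing_odds) - implied(bet_odds)

# Bet at +110, line closed at -105: you beat the close by ~3.6 points.
print(round(clv(110, -105), 4))
```

Logging CLV for every bet the crew recommends gives you a framework-independent quality metric long before win/loss results are statistically meaningful.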

Key agent parameters:

| Parameter | Type | Purpose |
| --- | --- | --- |
| role | str | Agent’s job title — used in prompt construction |
| goal | str | What the agent is trying to achieve |
| backstory | str | Context that shapes reasoning style |
| llm | str | LLM model string (e.g., anthropic/claude-sonnet-4-20250514) |
| tools | list | Tools the agent can use |
| memory | bool | Enable cognitive memory system |
| verbose | bool | Print reasoning steps to console |
| max_iter | int | Maximum reasoning iterations before stopping |
| allow_delegation | bool | Whether this agent can delegate to others |

Tasks

Tasks are specific assignments given to agents. Each task has a description, expected output format, and an assigned agent.

from crewai import Task

research_task = Task(
    description="""Research the current NBA championship futures market.
    Gather odds from at least 3 major sportsbooks. Identify any significant
    line movements in the last 48 hours. Note any relevant injury news
    or roster changes that affect title probability.""",
    expected_output="""A structured report containing:
    1. Current odds from each sportsbook (American and implied probability)
    2. Line movement summary (direction, magnitude, timing)
    3. Key factors affecting probabilities (injuries, trades, schedule)
    4. Any discrepancies between books that suggest mispricing""",
    agent=odds_analyst
)

Tasks pass context automatically — the output of one task feeds into the next. You can also explicitly define context dependencies:

analysis_task = Task(
    description="Analyze the research data and identify the top 3 +EV bets",
    expected_output="Ranked list of betting opportunities with edge estimates",
    agent=odds_analyst,
    context=[research_task]  # explicitly receive research_task output
)

Structured Outputs with Pydantic

For betting agents, you need structured data — not free-form text. Use Pydantic models to enforce output schemas:

from pydantic import BaseModel
from typing import Optional

class BettingOpportunity(BaseModel):
    market: str
    selection: str
    bookmaker: str
    odds_american: int
    implied_probability: float
    estimated_true_probability: float
    edge_percentage: float
    confidence: str  # "high", "medium", "low"
    reasoning: str

class AnalysisReport(BaseModel):
    opportunities: list[BettingOpportunity]
    markets_analyzed: int
    timestamp: str

analysis_task = Task(
    description="Identify all +EV opportunities from the research data",
    expected_output="Structured list of betting opportunities",
    agent=odds_analyst,
    output_pydantic=AnalysisReport  # enforces schema
)
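
The edge_percentage field above is just the gap between your probability estimate and the price implied by the odds; a minimal stdlib sketch of that arithmetic (illustrative helpers, not a CrewAI API):

```python
def american_to_implied(odds: int) -> float:
    """Convert American odds to implied probability (vig still included)."""
    return -odds / (-odds + 100) if odds < 0 else 100 / (odds + 100)

def edge_percentage(odds_american: int, true_prob: float) -> float:
    """Estimated edge in percentage points: your probability minus the price."""
    return (true_prob - american_to_implied(odds_american)) * 100

# -110 implies ~52.4%; an estimate of 56% gives roughly a 3.6-point edge.
print(round(edge_percentage(-110, 0.56), 2))
```

Since the agent already emits odds_american and estimated_true_probability, you can recompute edge_percentage deterministically in post-processing rather than trusting the LLM's arithmetic.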

Crews

A crew brings agents and tasks together. You choose a process type — sequential (tasks run in order) or hierarchical (a manager agent delegates):

from crewai import Crew, Process

betting_crew = Crew(
    agents=[researcher, odds_analyst, risk_manager],
    tasks=[research_task, analysis_task, sizing_task],
    process=Process.sequential,
    memory=True,
    verbose=True
)

# Run the crew
result = betting_crew.kickoff()
print(result)

Sequential works for linear pipelines: research → analyze → size → execute.

Hierarchical works when you want a manager agent to dynamically assign tasks based on the situation:

betting_crew = Crew(
    agents=[researcher, odds_analyst, risk_manager],
    tasks=[research_task, analysis_task, sizing_task],
    process=Process.hierarchical,
    manager_llm="anthropic/claude-sonnet-4-20250514"
)

Building a Prediction Market Research Crew

Here is a complete, working example of a prediction market research crew that analyzes Polymarket data:

import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
from pydantic import BaseModel

# Tools
search_tool = SerperDevTool()

# --- Agents ---

researcher = Agent(
    role="Prediction Market Researcher",
    goal="Gather comprehensive data on active prediction markets",
    backstory="""You are an expert OSINT researcher specializing in prediction 
    markets. You track Polymarket, Kalshi, and political betting markets. You 
    know how to find relevant news, social sentiment, and expert opinions that 
    move market prices.""",
    llm="anthropic/claude-sonnet-4-20250514",
    tools=[search_tool],
    verbose=True
)

analyst = Agent(
    role="Probability Analyst",
    goal="Estimate true probabilities and identify mispriced markets",
    backstory="""You are a quantitative analyst with expertise in Bayesian 
    probability estimation. You calibrate predictions by weighing base rates, 
    recent evidence, and market structure. You understand that market prices 
    reflect consensus but are not always efficient — especially in thin 
    markets or when new information has not yet been priced in.""",
    llm="anthropic/claude-sonnet-4-20250514",
    verbose=True
)

reporter = Agent(
    role="Trading Report Writer",
    goal="Produce actionable trading reports with clear recommendations",
    backstory="""You are a financial writer who translates complex analysis 
    into clear, actionable recommendations. You always include the reasoning 
    chain, confidence level, and risk factors for each recommendation.""",
    llm="anthropic/claude-sonnet-4-20250514",
    verbose=True
)

# --- Tasks ---

research_task = Task(
    description="""Research the top 5 most actively traded prediction markets 
    on Polymarket right now. For each market, gather:
    - Current YES/NO prices and 24h volume
    - Recent news that could affect the outcome
    - Social media sentiment (X/Twitter, Reddit)
    - Any expert forecasts or polling data
    Focus on markets with > $100K volume and resolution dates within 60 days.""",
    expected_output="""Structured research dossier for 5 markets with 
    current prices, volume, key news, sentiment summary, and expert 
    forecasts for each.""",
    agent=researcher
)

analysis_task = Task(
    description="""For each market in the research dossier, estimate the true 
    probability using Bayesian reasoning:
    1. Start with the base rate (historical frequency of similar events)
    2. Update based on current evidence (news, polls, sentiment)
    3. Compare your estimate to the market price
    4. Flag any market where your estimate differs from the market price 
       by more than 5 percentage points
    Show your work — include the prior, evidence weights, and posterior.""",
    expected_output="""Probability analysis for each market with:
    - Base rate and source
    - Evidence summary and directional impact
    - Posterior probability estimate
    - Market price comparison
    - Edge calculation (your estimate minus market price)""",
    agent=analyst,
    context=[research_task]
)

class MarketOpportunity(BaseModel):
    market_name: str
    current_price: float
    estimated_probability: float
    edge: float
    direction: str  # "BUY_YES" or "BUY_NO"
    confidence: str
    reasoning: str
    risk_factors: list[str]

class TradingReport(BaseModel):
    opportunities: list[MarketOpportunity]
    markets_analyzed: int
    summary: str

report_task = Task(
    description="""Create a trading report from the probability analysis.
    Rank opportunities by edge size. Only include opportunities where 
    the edge exceeds 5% and confidence is medium or higher.
    For each opportunity, specify direction (BUY_YES or BUY_NO), 
    confidence level, and key risk factors.""",
    expected_output="Structured trading report with ranked opportunities",
    agent=reporter,
    context=[analysis_task],
    output_pydantic=TradingReport
)

# --- Crew ---

prediction_crew = Crew(
    agents=[researcher, analyst, reporter],
    tasks=[research_task, analysis_task, report_task],
    process=Process.sequential,
    memory=True,
    verbose=True
)

# Run it
result = prediction_crew.kickoff()

# Access structured output
if hasattr(result, 'pydantic'):
    report = result.pydantic
    for opp in report.opportunities:
        print(f"{opp.market_name}: {opp.edge:+.1%} edge → {opp.direction}")
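
The Bayesian procedure the analysis task prescribes (base rate, evidence, posterior) can be sanity-checked outside the LLM. A stdlib sketch using odds-form Bayes updates, with illustrative likelihood ratios that are not from the source:

```python
def bayes_update(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior by multiplying its odds by each evidence likelihood ratio."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Base rate 30%, two supportive signals (LR 2.0, 1.5), one contrary (LR 0.8):
posterior = bayes_update(0.30, [2.0, 1.5, 0.8])
print(round(posterior, 3))           # ≈ 0.507

# Against a market price of 0.45 YES, that is a ~6-point edge, above the 5% flag.
print(round(posterior - 0.45, 3))    # ≈ 0.057
```

Asking the analyst agent to report its prior and per-evidence weights (as the task does) lets you replay the update mechanically and catch arithmetic drift in the LLM's reasoning.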

Custom Tools for Betting Data

The built-in SerperDevTool handles general web search, but betting agents need direct access to odds APIs. Build custom tools to connect agents to your Layer 3 trading infrastructure.

Polymarket Tool

from crewai.tools import BaseTool
from pydantic import BaseModel, Field
import httpx
import os  # used below for the sportsbook API key

class PolymarketInput(BaseModel):
    slug: str = Field(description="Market slug or search term")

class PolymarketTool(BaseTool):
    name: str = "polymarket_lookup"
    description: str = """Look up a Polymarket prediction market by slug or keyword. 
    Returns current prices, volume, and market details."""
    args_schema: type[BaseModel] = PolymarketInput

    def _run(self, slug: str) -> str:
        url = "https://gamma-api.polymarket.com/markets"
        params = {"slug": slug, "limit": 5}
        
        response = httpx.get(url, params=params, timeout=15)
        if response.status_code != 200:
            return f"Error fetching market data: {response.status_code}"
        
        markets = response.json()
        if not markets:
            return f"No markets found for '{slug}'"
        
        results = []
        for m in markets:
            results.append(
                f"Market: {m.get('question', 'N/A')}\n"
                f"  Slug: {m.get('slug', 'N/A')}\n"
                f"  YES Price: {m.get('outcomePrices', 'N/A')}\n"
                f"  Volume: ${float(m.get('volume') or 0):,.0f}\n"
                f"  Liquidity: ${float(m.get('liquidity') or 0):,.0f}\n"
                f"  End Date: {m.get('endDate', 'N/A')}\n"
            )
        return "\n".join(results)

Sportsbook Odds Tool

Connect agents to live odds data from The Odds API or your own MCP server:

class OddsLookupInput(BaseModel):
    sport: str = Field(description="Sport key, e.g., 'basketball_nba'")
    market: str = Field(default="h2h", description="Market type: h2h, spreads, totals")

class SportsOddsTool(BaseTool):
    name: str = "sportsbook_odds"
    description: str = """Fetch live sportsbook odds for a given sport and market type. 
    Returns odds from multiple bookmakers for comparison."""
    args_schema: type[BaseModel] = OddsLookupInput

    def _run(self, sport: str, market: str = "h2h") -> str:
        api_key = os.environ.get("ODDS_API_KEY")
        url = f"https://api.the-odds-api.com/v4/sports/{sport}/odds"
        params = {
            "apiKey": api_key,
            "regions": "us,us2",
            "markets": market,
            "oddsFormat": "american"
        }

        response = httpx.get(url, params=params, timeout=15)
        if response.status_code != 200:
            return f"Error fetching odds: {response.status_code}"
        data = response.json()
        
        results = []
        for game in data[:5]:  # limit to 5 games
            results.append(f"\n{game['home_team']} vs {game['away_team']}")
            for book in game.get('bookmakers', [])[:4]:
                outcomes = book['markets'][0]['outcomes']
                odds_str = " | ".join(
                    f"{o['name']}: {o['price']:+d}" for o in outcomes
                )
                results.append(f"  {book['title']}: {odds_str}")
        
        return "\n".join(results)

Assign custom tools to agents:

researcher = Agent(
    role="Odds Researcher",
    goal="Gather live odds from multiple sources",
    backstory="Expert at reading sportsbook lines and detecting value",
    tools=[PolymarketTool(), SportsOddsTool(), search_tool],
    llm="anthropic/claude-sonnet-4-20250514"
)
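
The odds these tools return still contain the bookmaker's margin; stripping it is what a vig calculator does. A two-way no-vig sketch in plain Python (just the arithmetic, not a CrewAI tool):

```python
def no_vig_two_way(odds_a: int, odds_b: int) -> tuple[float, float, float]:
    """Remove the vig from a two-outcome market.
    Returns (fair_prob_a, fair_prob_b, overround), where overround is the
    bookmaker margin baked into the pair of prices."""
    def implied(o: int) -> float:
        return -o / (-o + 100) if o < 0 else 100 / (o + 100)
    pa, pb = implied(odds_a), implied(odds_b)
    total = pa + pb                      # > 1.0; the excess is the vig
    return pa / total, pb / total, total - 1.0

fair_a, fair_b, vig = no_vig_two_way(-110, -110)
print(round(fair_a, 3), round(vig, 4))   # 0.5 0.0476
```

Feeding no-vig probabilities to the analyst agent keeps its edge calculations anchored to fair prices rather than posted ones.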

The Memory System

CrewAI rebuilt its memory system from scratch in early 2026. The new unified Memory class replaces the old separate short-term, long-term, and entity memory types with a single intelligent API that uses an LLM to analyze content when saving.

This matters for betting agents because markets repeat patterns. An agent that remembers what strategies worked in previous NFL weeks, which sportsbooks had stale lines during March Madness, or which Polymarket categories tend to be mispriced — that agent gets better over time.

Enabling Memory

from crewai import Memory

# Standalone usage (scripts, notebooks)
memory = Memory()

# Remember facts
memory.remember("BetOnline consistently has the tightest NBA spreads")
memory.remember("Polymarket political markets tend to overreact to polls within 24 hours")
memory.remember("Bovada NFL vig averages 4.8% on moneylines vs 3.2% at Pinnacle")

# Recall relevant context
matches = memory.recall("Which sportsbook has the best NBA odds?", limit=3)
for m in matches:
    print(f"[{m.score:.2f}] {m.record.content}")

Scoped Memory for Multi-Agent Crews

Different agents need different memory views. The researcher needs historical data patterns; the risk manager needs past position sizes and outcomes:

memory = Memory()

# Each agent gets its own scope
researcher_mem = memory.scope("/agent/researcher")
risk_mem = memory.scope("/agent/risk_manager")

# Shared knowledge base
memory.remember(
    "NFL Sunday lines move most between Friday 6pm and Saturday noon ET",
    scope="/knowledge/nfl"
)

# Risk manager can read shared knowledge + its own history
risk_view = memory.slice(
    scopes=["/agent/risk_manager", "/knowledge/nfl"],
    read_only=True
)

Extract Facts from Unstructured Data

The memory system can decompose long text into atomic facts:

raw_notes = """Post-game analysis: The Lakers covered the spread 
against Denver despite being 7-point underdogs. LeBron had 34 points.
The total went over 228.5. Denver shot 38% from three which is 
well below their season average of 44%."""

facts = memory.extract_memories(raw_notes)
# ["Lakers covered spread as 7-point underdogs vs Denver",
#  "LeBron scored 34 points in Lakers vs Denver game",
#  "Lakers-Denver total went over 228.5",
#  "Denver shot 38% from three vs 44% season average"]

for fact in facts:
    memory.remember(fact, scope="/games/nba")

Crew-Level Memory

Enable memory at the crew level and all agents automatically share context:

betting_crew = Crew(
    agents=[researcher, analyst, risk_manager],
    tasks=[research_task, analysis_task, sizing_task],
    process=Process.sequential,
    memory=True,
    verbose=True
)

MCP Integration: Connecting Agents to Live Data

CrewAI has native support for MCP (Model Context Protocol) — the open standard for connecting AI agents to external tools. This means your agents can connect to any MCP server without custom tool wrappers.

For prediction market agents, MCP servers can expose live odds feeds, position data, wallet balances, and execution endpoints:

from crewai import Agent
from crewai_tools import MCPServerAdapter

# Connect to an MCP server (e.g., live odds feed)
odds_mcp = MCPServerAdapter(
    server_params={
        "url": "https://your-mcp-server.com/sse",
        "transport": "sse"
    }
)

# Or connect to a local stdio MCP server
local_mcp = MCPServerAdapter(
    server_params={
        "command": "python",
        "args": ["odds_mcp_server.py"],
        "transport": "stdio"
    }
)

# Agents automatically discover available tools from the MCP server
analyst = Agent(
    role="Odds Analyst",
    goal="Find mispriced markets using live odds data",
    backstory="Expert at comparing bookmaker lines to true probabilities",
    tools=odds_mcp.tools,  # the adapter exposes discovered tools via .tools
    llm="anthropic/claude-sonnet-4-20250514"
)

The AgentBets MCP server exposes live vig rankings, head-to-head comparisons, and sport-specific odds data that CrewAI agents can consume directly.


A2A Protocol: Cross-Framework Agent Delegation

CrewAI treats the A2A (Agent-to-Agent) protocol as a first-class delegation primitive. A2A lets agents built on different frameworks — CrewAI, LangGraph, Semantic Kernel, custom implementations — discover each other and delegate tasks.

This is the difference between MCP and A2A: MCP connects agents to tools (APIs, databases, functions). A2A connects agents to other agents. A complete prediction market system uses both.

Client Mode: Delegating to Remote Agents

from crewai import Agent, Crew, Task
from crewai.a2a import A2AClientConfig

# This agent can delegate to a remote A2A-compliant agent
coordinator = Agent(
    role="Research Coordinator",
    goal="Coordinate research across specialized betting agents",
    backstory="Expert at delegating analysis tasks to specialists",
    llm="anthropic/claude-sonnet-4-20250514",
    a2a=A2AClientConfig(
        endpoint="https://nfl-analyst.example.com/.well-known/agent-card.json",
        timeout=120,
        max_turns=10
    )
)

task = Task(
    description="Get NFL Week 14 spread analysis from the specialist agent",
    expected_output="Detailed spread analysis with +EV picks",
    agent=coordinator
)

crew = Crew(agents=[coordinator], tasks=[task], verbose=True)
result = crew.kickoff()

Server Mode: Exposing Your Agent via A2A

from crewai import Agent
from crewai.a2a import A2AServerConfig

# Make this agent discoverable by other A2A agents
nfl_expert = Agent(
    role="NFL Betting Analyst",
    goal="Provide expert NFL spread and totals analysis",
    backstory="Former oddsmaker with deep NFL knowledge",
    llm="anthropic/claude-sonnet-4-20250514",
    a2a=A2AServerConfig(url="https://your-server.com")
)

Install the A2A support package:

pip install 'crewai[a2a]'

Flows: Production Betting Pipelines

Crews handle collaboration. Flows handle orchestration. When you need conditional logic — “if the edge is above 3%, size the bet; if below, skip” — you need a Flow.

from crewai.flow.flow import Flow, listen, start, router

class BettingPipeline(Flow):
    
    @start()
    def gather_odds(self):
        """Step 1: Fetch live odds from multiple sources"""
        # Run your odds-gathering crew
        result = odds_crew.kickoff()
        self.state["odds_data"] = result.raw
        return result
    
    @listen(gather_odds)
    def analyze_markets(self, odds_data):
        """Step 2: Run probability analysis"""
        analysis_crew_result = analysis_crew.kickoff(
            inputs={"odds_data": odds_data}
        )
        self.state["analysis"] = analysis_crew_result
        return analysis_crew_result
    
    @router(analyze_markets)
    def route_on_edge(self, analysis):
        """Step 3: Route based on edge quality"""
        # Substring matching keeps the example short; in production, parse
        # the crew's structured (Pydantic) output instead
        if "high_edge" in str(analysis):
            return "execute_trade"
        elif "medium_edge" in str(analysis):
            return "human_review"
        else:
            return "log_and_skip"
    
    @listen("execute_trade")
    def place_bet(self):
        """Auto-execute high-confidence trades"""
        execution_crew.kickoff(
            inputs={"analysis": self.state["analysis"]}
        )
    
    @listen("human_review")
    def request_approval(self):
        """Flag medium-confidence trades for human review"""
        print(f"REVIEW NEEDED: {self.state['analysis']}")
    
    @listen("log_and_skip")
    def log_pass(self):
        """Log the pass for future analysis"""
        print("No actionable edge found. Logged for review.")

# Run the pipeline
pipeline = BettingPipeline()
pipeline.kickoff()

The Flow architecture diagram:

Flow: BettingPipeline
├── gather_odds (@start)
├── analyze_markets (@listen → gather_odds)
└── route_on_edge (@router → analyze_markets)
    ├── "execute_trade" → place_bet
    ├── "human_review" → request_approval
    └── "log_and_skip" → log_pass

Flows give you the orchestration layer that crews lack: deterministic branching, persistent state between steps, and clear separation between “what agents figure out” and “what the pipeline decides.”


Pricing and Deployment

Open Source (Self-Hosted)

CrewAI’s core framework is MIT-licensed and free with no execution limits. You handle infrastructure, monitoring, and scaling. This is the right choice for most developers building betting agents — you’re already managing API keys, database connections, and deployment pipelines.

pip install 'crewai[tools]'
# That's it. No account, no limits.

CrewAI AMP (Managed Platform)

CrewAI AMP adds a visual editor, real-time tracing, agent training, and managed deployment. The pricing tiers as of March 2026:

| Plan | Price | Executions/Month | Deployed Crews | Key Features |
| --- | --- | --- | --- | --- |
| Basic (Free) | $0 | 50 | 1 | Visual editor, standard tools |
| Professional | $25/mo | 100 (+$0.50/extra) | 5 | 2 seats, priority support |
| Enterprise | Custom | Up to 30,000+ | Unlimited | SSO, RBAC, SOC2, on-prem |

For prediction market agents that run on a schedule (daily scans, pre-game analysis), the open-source version is sufficient. The AMP platform becomes valuable when you need tracing — seeing exactly which tool calls and LLM prompts each agent made during a run.


CrewAI vs Alternatives

| Feature | CrewAI | LangGraph | AutoGen/MS Agent Framework |
| --- | --- | --- | --- |
| Core metaphor | Role-based teams | State machine graphs | Conversation patterns |
| Setup time | Minutes | Hours | Hours |
| Determinism | Moderate (crews), High (flows) | High | Low-Moderate |
| MCP support | Native | Native | Via Microsoft Agent Framework |
| A2A support | Native | Via integration | Via Microsoft Agent Framework |
| Memory | Unified cognitive memory | Custom (via checkpointers) | Message-based |
| Python requirement | 3.10-3.13 | 3.9+ | 3.9+ |
| License | MIT | MIT | MIT |
| Best for | Fast prototyping, team-based workflows | Complex stateful agents, production control | Interactive multi-agent conversations |

For betting agents specifically: CrewAI wins on speed-to-prototype. The “crew of specialists” mental model maps directly to how you’d structure a trading operation. LangGraph wins when you need absolute control over execution paths — critical for systems that manage real money. Start with CrewAI, migrate the execution layer to LangGraph if you need tighter control.

AutoGen has been merged into the Microsoft Agent Framework (release candidate as of February 2026). Starting new projects on standalone AutoGen is not recommended.


Project Structure for Production

Use the CrewAI CLI to scaffold a clean project, then extend it:

betting_agent/
├── src/
│   └── betting_agent/
│       ├── config/
│       │   ├── agents.yaml       # Agent definitions (YAML)
│       │   └── tasks.yaml        # Task definitions (YAML)
│       ├── tools/
│       │   ├── __init__.py
│       │   ├── polymarket.py     # Polymarket API tool
│       │   ├── odds_api.py       # Sportsbook odds tool
│       │   └── vig_calculator.py # Vig calculation tool
│       ├── crew.py               # Crew assembly
│       ├── flows.py              # Production flow pipelines
│       └── main.py               # Entry point
├── tests/
│   ├── test_tools.py
│   └── test_crew.py
├── .env                          # API keys
└── pyproject.toml

YAML-based agent configuration keeps your Python clean:

# config/agents.yaml
researcher:
  role: "Prediction Market Researcher"
  goal: "Gather comprehensive market data and sentiment signals"
  backstory: >
    Expert OSINT researcher specializing in prediction markets.
    You track Polymarket, Kalshi, and sports betting markets with
    a focus on finding information that moves prices.

analyst:
  role: "Probability Analyst"  
  goal: "Estimate true probabilities using Bayesian methods"
  backstory: >
    Quantitative analyst with deep expertise in calibrated 
    probability estimation. You compare market prices to 
    fundamental value.

# config/tasks.yaml
research_task:
  description: >
    Research the top {n_markets} most actively traded prediction 
    markets. Gather current prices, volume, recent news, and 
    sentiment for each.
  expected_output: >
    Structured research dossier with prices, volume, news 
    summary, and sentiment for each market.
  agent: researcher

analysis_task:
  description: >
    Analyze each market using Bayesian probability estimation.
    Compare your estimates to market prices and flag any 
    opportunities with edge > {min_edge}%.
  expected_output: >
    Probability analysis with edge calculations for each market.
  agent: analyst
  context:
    - research_task
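
The {n_markets} and {min_edge} placeholders above are filled from kickoff(inputs={...}); the substitution behaves like Python's str.format on the task template, which makes the mechanics easy to verify in isolation:

```python
# The YAML descriptions are templates; CrewAI substitutes kickoff inputs
# into the {placeholders}, much like str.format does here.
template = ("Research the top {n_markets} most actively traded prediction "
            "markets and flag opportunities with edge > {min_edge}%.")

rendered = template.format(n_markets=5, min_edge=5)
print(rendered)
```

The corresponding run would be crew.kickoff(inputs={"n_markets": 5, "min_edge": 5}), so the same YAML drives scans of different breadth and strictness.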

Putting It All Together

Here is how CrewAI fits into the full agent betting stack:

┌─────────────────────────────────────────────────────┐
│ Layer 4 — Intelligence (CrewAI)                     │
│ ┌─────────────────────────────────────────────────┐ │
│ │ Flow: Daily Market Scan                         │ │
│ │ ├── Crew: Research (Researcher + Sentiment)     │ │
│ │ ├── Crew: Analysis (Odds Analyst + Bayesian)    │ │
│ │ ├── Router: Edge threshold check                │ │
│ │ └── Crew: Execution (Risk Manager + Executor)   │ │
│ └─────────────────────────────────────────────────┘ │
├─────────────────────────────────────────────────────┤
│ Layer 3 — Trading                                   │
│ Polymarket CLOB · Kalshi API · Sportsbook APIs      │
├─────────────────────────────────────────────────────┤
│ Layer 2 — Wallet                                    │
│ Coinbase Agentic Wallets · x402 · Safe              │
├─────────────────────────────────────────────────────┤
│ Layer 1 — Identity                                  │
│ Moltbook · SIWE · ENS                               │
└─────────────────────────────────────────────────────┘
A practical path from prototype to production:

  1. Start small — Build a research crew with 2-3 agents. Use verbose=True and watch the reasoning.
  2. Add custom tools — Connect to Polymarket, Kalshi, or sportsbook APIs via custom tools or MCP.
  3. Enable memory — Let agents learn from past analysis. The cognitive memory system improves signal quality over time.
  4. Wrap in Flows — Add conditional routing, human-in-the-loop review for large bets, and scheduled execution.
  5. Connect execution — Wire the output to your Layer 3 trading setup for automated order placement.
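
The position sizing the risk-manager role performs (steps 1-5 above) commonly reduces to fractional Kelly; a stdlib sketch where the quarter-Kelly multiplier is an illustrative choice, not from the source:

```python
def kelly_fraction(true_prob: float, decimal_odds: float) -> float:
    """Full-Kelly stake fraction f* = (b*p - q) / b, with b = decimal_odds - 1."""
    b = decimal_odds - 1.0
    q = 1.0 - true_prob
    return max(0.0, (b * true_prob - q) / b)

def stake(bankroll: float, true_prob: float, decimal_odds: float,
          kelly_mult: float = 0.25) -> float:
    """Fractional Kelly (quarter-Kelly default) damps probability-estimate error."""
    return bankroll * kelly_fraction(true_prob, decimal_odds) * kelly_mult

# 56% estimate at decimal odds 2.10 (+110): full Kelly is 16% of bankroll,
# so quarter-Kelly on a $1,000 bankroll stakes $40.
print(round(stake(1000, 0.56, 2.10), 2))
```

Keeping sizing in deterministic code like this, with the agent supplying only the probability estimate, is also a natural seam for the human-in-the-loop review in step 4.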

For the intelligence layer fundamentals — LLM prompt patterns, Bayesian estimation, and signal aggregation — see the Agent Intelligence Guide. For the tools and frameworks that complement CrewAI, browse the marketplace and tool directory.


What’s Next