Agent wallet legal liability refers to the legal responsibility assigned when an autonomous AI agent executes financial transactions — trades, transfers, or payments — through a crypto wallet or trading platform without direct human authorization for each action. Under current US law (UETA Section 14), the operator who deployed the agent is primarily liable. This guide covers the major regulatory frameworks, liability doctrines, and risk mitigation strategies that builders need to understand before deploying autonomous trading agents on prediction markets.

This is not legal advice. This guide is an educational overview written for developers and operators. It is not a substitute for consultation with a qualified attorney. Laws governing AI agents, prediction markets, and financial regulation vary by jurisdiction and are evolving rapidly. Consult a lawyer familiar with both technology law and financial regulation before deploying an autonomous trading agent. Nothing here creates an attorney-client relationship.

You built an agent that trades autonomously on prediction markets. It holds a wallet, it evaluates odds, it places bets. When it wins, the profits are yours. But when it loses — when it executes a trade you did not specifically authorize, when it gets manipulated, when it violates a rule you did not know existed — who pays?

That question has no clean answer today. The legal infrastructure for autonomous AI agents managing real capital is fragmented, jurisdiction-dependent, and lagging behind the technology by years. This guide maps the terrain as it exists in March 2026 and gives you the practical framework to operate within it.

This is Article 3 in the Agent Wallet Content Series. For wallet selection, start with Article 1. For securing your agent wallet infrastructure, see Agent Wallet Security. For payment protocol integration, see Agentic Payments Protocols.


The Liability Gap

When a human trader loses money, the liability chain is simple: the trader made the decision, the trader bears the loss. When a human uses a broker, the broker operates under a fiduciary framework with licensing, insurance, and regulatory oversight. The liability chain is documented, litigated, and well-understood.

AI agents fit into neither category. They are not the principal (the human making decisions), nor are they licensed agents operating under a fiduciary standard. They are software — software that initiates financial transactions autonomously, sometimes in ways their operators did not specifically anticipate.

This creates a genuine liability gap:

  • The developer wrote the code but may not control how it is deployed or configured.
  • The operator deployed the agent but may not have authorized each specific trade.
  • The wallet provider supplies the infrastructure but does not control the agent’s logic.
  • The platform (Polymarket, Kalshi) provides the market but does not vet each participant’s automation stack.

No single party has accepted full responsibility. No regulatory framework has assigned it comprehensively. And as agent wallets move from experimental projects trading with a few hundred dollars to infrastructure managing meaningful capital, this gap is no longer academic.

The stakes are real. In Q4 2025 and Q1 2026, multiple agent-managed wallets have exceeded $100,000 in trading volume. Copy-trading platforms are routing subscriber capital through autonomous agents. Agents in multi-agent networks are sharing signals and executing coordinated strategies. Every one of these operations sits inside the liability gap.

For a broader view of how the legal landscape applies to selling agent software, see The Legal Guide to Selling AI Trading Agents.


Regulatory Frameworks That Apply

No single law governs autonomous AI trading agents. Instead, builders face a patchwork of existing regulations — some written decades before AI agents existed — that courts and regulators are stretching to cover this new territory.

UETA and E-SIGN (United States)

The Uniform Electronic Transactions Act (UETA) is the most directly applicable US statute. Adopted by 49 states (New York relies on its own, similar ESRA statute), UETA was designed to give legal validity to electronic transactions. Its treatment of “electronic agents” is remarkably relevant to AI trading bots, even though it was drafted in 1999.

Section 14 of UETA addresses automated transactions explicitly. The key provision: a contract formed by the interaction of electronic agents is enforceable, and the resulting transaction is legally binding on the person whose electronic agent initiated it.

What this means in practice:

  • If your agent places a trade on Polymarket, that trade is legally your trade.
  • You cannot argue “my agent did it without my authorization” as a defense against the trade’s financial consequences.
  • The legal system treats your agent as your instrument, not as an independent actor.

The federal E-SIGN Act (Electronic Signatures in Global and National Commerce Act) provides a parallel federal framework with similar principles. Together, UETA and E-SIGN establish the baseline: the operator is liable for agent-initiated transactions.

This is the single most important legal principle for every builder in this space. Your agent’s trades are your trades.
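Because UETA makes every agent-initiated trade the operator's trade, the practical defense is to narrow what the agent can initiate in the first place. Below is a minimal Python sketch of an operator-defined authority gate; the class names, thresholds, and market identifiers are illustrative assumptions, not taken from any particular wallet SDK:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TradeAuthority:
    """Operator-defined scope of authority for the agent (illustrative)."""
    max_trade_usd: float        # hard cap on any single trade
    max_daily_usd: float        # hard cap on total daily exposure
    allowed_markets: frozenset  # explicit allowlist of market identifiers

class UnauthorizedTradeError(Exception):
    pass

def authorize(trade_usd: float, market: str, spent_today_usd: float,
              authority: TradeAuthority) -> None:
    """Raise before signing if a proposed trade exceeds the configured scope."""
    if market not in authority.allowed_markets:
        raise UnauthorizedTradeError(f"market not allowlisted: {market}")
    if trade_usd > authority.max_trade_usd:
        raise UnauthorizedTradeError(f"per-trade cap exceeded: {trade_usd}")
    if spent_today_usd + trade_usd > authority.max_daily_usd:
        raise UnauthorizedTradeError("daily cap exceeded")

authority = TradeAuthority(max_trade_usd=250.0, max_daily_usd=1000.0,
                           allowed_markets=frozenset({"election-2026"}))
# Passes: allowlisted market, under both caps.
authorize(100.0, "election-2026", spent_today_usd=800.0, authority=authority)
```

A gate like this does not change the legal analysis (the operator is still bound by whatever executes), but it shrinks the scope of what the agent can do without review, which matters both for limiting losses and as evidence of reasonable care.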

CFTC Jurisdiction

The Commodity Futures Trading Commission is the primary US regulator for prediction markets, and its jurisdiction does not vanish because a participant is automated.

Kalshi is a CFTC-regulated designated contract market (DCM). It operates under the Commodity Exchange Act with full regulatory oversight. Agents trading on Kalshi must comply with all the same rules as human traders: position limits, reporting requirements, and prohibitions on market manipulation. The agent does not get an exemption from any of these obligations.

Polymarket occupies a different regulatory position. It is not registered as a DCM and is not available to US persons. For non-US operators, Polymarket operates in a less regulated environment, but CFTC enforcement actions in 2024 and 2025 demonstrated that the commission claims jurisdiction over prediction markets that serve US participants, regardless of where the platform is nominally based.

Relevant enforcement context: In 2024, the CFTC pursued action against Polymarket for offering event contracts to US persons without registration. The January 2025 settlement established precedent that offshore prediction markets are not beyond CFTC reach. For agent operators, this means that deploying an autonomous agent on Polymarket from a US-based operation carries regulatory risk regardless of Polymarket’s own compliance posture.

For agents trading on CFTC-regulated platforms, automated trading introduces additional scrutiny. The CFTC’s 2015 Regulation Automated Trading (Reg AT) proposal — though never finalized — signaled the commission’s interest in oversight of automated trading systems. Agents that trade at scale, execute rapidly, or operate across multiple accounts will attract attention.

EU AI Act

The European Union’s AI Act, which entered into force in August 2024 with phased implementation through 2027, is the most comprehensive AI-specific regulation globally. Its application to autonomous trading agents is significant and largely unavoidable for any agent operating within EU jurisdiction or serving EU users.

High-risk classification: Annex III of the AI Act enumerates specific high-risk uses; in financial services, the listed entries are creditworthiness assessment and risk pricing for life and health insurance rather than trading systems as such. Whether an autonomous agent that evaluates market conditions, determines position sizing, and executes trades falls within high-risk scope is not yet settled, but prudent builders should plan for that classification. High-risk status triggers a cascade of obligations:

  • Risk management system: Document and continuously assess risks posed by the agent’s operation.
  • Data governance: Ensure training data and market data used by the agent meet quality standards.
  • Technical documentation: Maintain comprehensive documentation of the agent’s design, capabilities, and limitations.
  • Record-keeping: Maintain logs of all agent decisions and transactions (which overlaps with the audit trail requirements discussed in the Risk Mitigation section).
  • Transparency: Inform users and affected parties that they are interacting with an AI system.
  • Human oversight: Implement mechanisms for human intervention and override.
  • Accuracy and robustness: Ensure the agent performs as intended and is resilient to errors and adversarial inputs.

Timeline: Prohibitions on unacceptable-risk AI systems took effect in February 2025. Obligations for high-risk AI systems apply from August 2026. Full enforcement across all categories extends through 2027. Builders deploying agents in the EU should be preparing compliance documentation now.

Practical impact: If your agent serves EU users or operates from EU infrastructure, assume the AI Act applies. The penalties for non-compliance are severe: up to 35 million euros or 7% of global annual turnover (whichever is higher) for prohibited practices, and up to 15 million euros or 3% of turnover for violations of high-risk system obligations.

OECD AI Principles

The OECD AI Principles, adopted in 2019 and updated in 2024, provide a non-binding but influential framework for AI governance across OECD member countries and the additional governments that have formally adhered to them. While not directly enforceable, they shape national policy and regulatory thinking.

The principles most relevant to agent wallet operations:

  • Transparency and explainability: Stakeholders should be able to understand AI system outcomes. For trading agents, this means logging and auditability requirements.
  • Robustness and safety: AI systems should function appropriately and not pose unreasonable safety risks. For agents managing capital, this translates to risk controls and kill switches.
  • Accountability: Organizations developing and deploying AI systems should be accountable for their proper functioning. This reinforces the operator-bears-responsibility principle.

For agents operating across multiple jurisdictions, the OECD framework provides the conceptual foundation that most national regulators are building upon. Understanding these principles helps anticipate where regulation is heading, even in jurisdictions that have not yet enacted AI-specific legislation.


The Principal Problem

The legal concept of agency — where one party (the agent) acts on behalf of another (the principal) — is centuries old. In traditional finance, it is extensively codified. For AI agents, almost none of that codification applies, but the underlying principle does.

The core doctrine: A principal is responsible for the actions of their agent, provided the agent is acting within the scope of its authority. For AI trading agents, the “scope of authority” is whatever the agent is technically capable of doing within its configured parameters.

This creates an immediate problem. A human broker has explicit authorization limits, licensing requirements, a duty of care, and regulatory registration. An AI agent has none of these. It has code, a wallet with funds, and whatever guardrails its operator implemented.

Traditional Agent vs AI Agent

| Dimension | Traditional Financial Agent (Broker) | AI Trading Agent |
| --- | --- | --- |
| Authorization | Written agreement defining scope, limits, asset classes, and risk tolerance | Code configuration; spending limits if implemented; otherwise unlimited within wallet balance |
| Licensing | Series 7, Series 63, RIA registration — mandatory | None required; no regulatory framework exists |
| Fiduciary duty | Legal obligation to act in client’s best interest | No fiduciary standard; acts according to code logic |
| Regulatory oversight | SEC, FINRA, state regulators — continuous | No dedicated regulator; CFTC has partial jurisdiction |
| Liability chain | Broker, broker-dealer, clearing firm — well-defined | Developer, operator, wallet provider, platform — undefined |
| Insurance | E&O insurance, SIPC coverage — mandatory | Voluntary; most operators carry none |
| Error recovery | Trade breaks, compliance review, regulatory complaint process | On-chain transactions are generally irreversible |
| Audit trail | Mandatory record-keeping with regulatory retention periods | Voluntary; depends entirely on operator implementation |

The gap is stark. Traditional financial agents operate within a regulatory infrastructure that took decades to build. AI agents operate in a vacuum — and the legal system defaults to holding the principal (the operator) responsible when something goes wrong.

This means that if your agent executes a trade that loses money, you bear the loss. If your agent violates a platform’s rules, you face the consequences. If your agent’s behavior constitutes market manipulation, you face the enforcement action. The agent is not a separate legal person. It is your tool, and you are accountable for how it operates.

For detailed guidance on the wallet infrastructure decisions that underpin this liability framework, see Agent Wallet Comparison.


KYC/AML Complications

Anti-money laundering (AML) and know-your-customer (KYC) requirements do not disappear because a transaction is initiated by software rather than a human. The agent does not get an exemption. The obligations fall on the operator and, in some cases, on the platforms facilitating agent transactions.

Platform-Specific Requirements

Polymarket: Wallet-based identity with no KYC for most operations. Agents interact through on-chain transactions using the operator’s wallet. This does not mean AML obligations are absent — it means Polymarket has shifted compliance responsibility to the user. If you are operating from a jurisdiction with AML reporting requirements, you are still subject to those requirements regardless of whether Polymarket enforces them.

Kalshi: Full KYC is mandatory. Account holders must provide identity verification, and all trading occurs under that verified identity. An agent trading on Kalshi must do so under the operator’s verified account, using the operator’s API credentials. There is no pathway for an agent to create its own Kalshi account.

Money Transmission Concerns

The most significant regulatory risk for agent wallet builders is inadvertent money transmission. If your agent handles third-party funds — routing subscriber capital in a copy-trading arrangement, pooling funds from multiple users, or transferring value between parties — you may be operating as a money transmitter.

In the United States, money transmission is regulated at both the federal level (FinCEN) and the state level (money transmitter licensing in each state). Operating as an unlicensed money transmitter is a federal crime under 18 U.S.C. Section 1960, carrying penalties of up to five years imprisonment.

Scenarios that may trigger money transmission requirements:

  • Copy-trading platforms where subscriber funds flow through your infrastructure
  • Agent marketplaces where agents transfer funds between buyer and seller wallets
  • Multi-agent pools where agents from different operators share a common treasury
  • Cross-platform arbitrage where an agent moves funds between venues on behalf of others

The safest approach: never let your agent hold or transmit funds that belong to someone other than the operator. If your business model requires handling third-party funds, consult with a money transmission compliance attorney before launching.

Travel Rule Implications

The Financial Action Task Force (FATF) Travel Rule requires financial institutions and virtual asset service providers (VASPs) to share originator and beneficiary information for transfers above certain thresholds ($3,000 in the US, 1,000 euros in the EU). For cross-platform agents that move funds between venues — bridging between Polymarket and a centralized exchange, for example — Travel Rule compliance may apply.

If your agent wallet interacts with VASPs (exchanges, custodial services), those providers may require identifying information about your transactions. Agents that move funds across multiple platforms should implement transaction logging sufficient to satisfy Travel Rule inquiries from counterparty VASPs.
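As a sketch of the kind of logging that can answer such an inquiry, the following records originator and beneficiary details for each transfer and flags any that cross the thresholds discussed above. The function name, file format, and threshold table are assumptions for illustration, not a compliance product:

```python
import json
import time

# Illustrative Travel Rule thresholds from the discussion above (USD / EUR).
TRAVEL_RULE_THRESHOLDS = {"US": 3000.0, "EU": 1000.0}

def log_transfer(amount: float, jurisdiction: str, originator: str,
                 beneficiary: str, log_path: str = "transfers.jsonl") -> dict:
    """Append a transfer record; flag it if it crosses the jurisdiction's threshold."""
    record = {
        "ts": time.time(),
        "amount": amount,
        "jurisdiction": jurisdiction,
        "originator": originator,
        "beneficiary": beneficiary,
        # Unknown jurisdictions default to 0.0, i.e. everything is flagged
        # conservatively for human review.
        "travel_rule_flag": amount >= TRAVEL_RULE_THRESHOLDS.get(jurisdiction, 0.0),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_transfer(5000.0, "US", "agent-wallet", "exchange-deposit")
```

Keeping these records alongside the general audit trail means a counterparty VASP's information request can be answered from the log rather than reconstructed after the fact.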

For the broader security architecture that supports these compliance requirements, see Agent Betting Security.


Liability Scenarios

Abstract legal principles become concrete when applied to specific failure modes. The following scenarios represent the most likely liability situations for agent wallet operators. Each is grounded in existing legal frameworks, even though case law specific to AI trading agents remains sparse.

Scenario Analysis

| Scenario | Description | Primary Liability | Legal Basis | Mitigation |
| --- | --- | --- | --- | --- |
| Unauthorized trade execution | Agent places a trade the operator did not specifically authorize, but the agent was configured with the authority to trade | Operator | UETA Section 14 — automated transactions bind the principal. The agent acted within its configured scope. | Implement per-trade spending limits, contract allowlists, and position size caps. Use Safe transaction guards to enforce on-chain constraints. |
| Prompt injection exploit | Attacker manipulates agent inputs (market data feed, API response, prompt injection in agent context) causing the agent to drain its wallet | Operator (primary), Developer (secondary if exploit was foreseeable) | Operator failed to implement adequate security controls. Developer may face negligence claims if the vulnerability was known or obvious. | Input sanitization, sandboxed execution, spending limits per session, kill switch. See Agent Betting Security. |
| Market manipulation | Agent violates position limits, engages in wash trading, or executes patterns that constitute spoofing or layering | Operator (civil and potentially criminal) | Commodity Exchange Act Section 9(a)(2); CFTC Rule 180.1. Intent can be inferred from trading patterns, and recklessness suffices under Rule 180.1. | Position limit enforcement in agent logic, trade frequency monitoring, compliance review of agent strategies before deployment. |
| Copy-trading subscriber losses | Agent managing copy-trading pool loses subscriber capital through legitimate but poorly performing trades | Operator (contractual and potentially regulatory) | Breach of contract (ToS terms), potential securities law violations if pool constitutes an investment contract (Howey test), money transmission if handling third-party funds | Clear ToS with risk disclaimers, no performance guarantees, investor accreditation verification if pooling funds, legal opinion on Howey analysis. |
| Front-running via agent network | Agent receives information from a multi-agent signal network and executes trades before the information becomes public | Operator and potentially network operator | CFTC market manipulation rules; potentially wire fraud (18 U.S.C. Section 1343) if scheme involves deception; misappropriation theory of insider trading | Strict information barriers, no trading on non-public information received from agent networks, legal review of signal-sharing arrangements. |

Key Takeaways From Scenarios

Three patterns emerge across these scenarios:

First, the operator bears primary liability in virtually every case. The legal system treats the agent as the operator’s tool. Lack of specific authorization for a particular trade is not a defense when the operator gave the agent general authority to trade.

Second, developer liability is secondary but real. If a software defect or foreseeable vulnerability caused the loss, the developer may face negligence claims. Selling agent software with known security vulnerabilities or without adequate documentation of risk creates exposure. For developers selling agents on marketplaces, see The Legal Guide to Selling AI Trading Agents.

Third, the mitigation column in every scenario points back to the same core requirements: spending limits, audit trails, kill switches, and proper entity structure. These are not optional best practices — they are the minimum viable legal protection for any agent trading operation.
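As one concrete example of the monitoring these mitigations call for, a rolling window over recent trades can flag rapid buy/sell reversals in the same market, a pattern regulators may read as wash trading. This is a hypothetical sketch: the window length and reversal threshold are invented values, not regulatory ones.

```python
from collections import deque

class TradePatternMonitor:
    """Flag trade patterns that could be read as wash trading (illustrative)."""

    def __init__(self, window_seconds: float = 60.0, max_reversals: int = 3):
        self.window = window_seconds
        self.max_reversals = max_reversals
        self.trades = deque()  # (timestamp, market, side)

    def record(self, ts: float, market: str, side: str) -> bool:
        """Return True if this trade should be held for human review."""
        self.trades.append((ts, market, side))
        # Drop trades that have aged out of the rolling window.
        while self.trades and ts - self.trades[0][0] > self.window:
            self.trades.popleft()
        # Count buy/sell direction flips in the same market within the window.
        same_market = [s for t, m, s in self.trades if m == market]
        reversals = sum(1 for a, b in zip(same_market, same_market[1:]) if a != b)
        return reversals >= self.max_reversals

monitor = TradePatternMonitor()
flags = [monitor.record(float(t), "mkt-1", side)
         for t, side in enumerate(["buy", "sell", "buy", "sell"])]
# The fourth trade completes three reversals inside the window and is flagged.
```

A flagged trade would route to the kill switch or a human review queue rather than executing automatically.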


Risk Mitigation Framework

Legal risk cannot be eliminated for autonomous trading agents. It can be managed. The following framework represents the practical minimum for any builder deploying an agent that trades with real capital.

1. Operating Entity Structure

Never operate an autonomous trading agent from a personal account. This is the single most important structural decision. An LLC (or equivalent limited liability entity) separates your personal assets from your agent’s operational liabilities.

Recommended structures by operation scale:

| Operation Scale | Recommended Entity | Jurisdiction | Estimated Setup Cost | Key Benefit |
| --- | --- | --- | --- | --- |
| Solo developer, under $50K volume | Single-member LLC | Wyoming or Delaware | $100-500 | Personal asset protection |
| Team, $50K-500K volume | Multi-member LLC or C-Corp | Delaware | $500-2,000 | Liability separation, investor-ready |
| Handling third-party funds | C-Corp + legal counsel | Delaware + operating state | $5,000-15,000 | Regulatory compliance framework |
| Crypto-native, international | Offshore entity (BVI, Cayman) + operating entity | Multiple | $10,000-50,000 | Regulatory flexibility, tax planning |

Wyoming LLCs are particularly attractive for crypto-native operations due to the state’s favorable digital asset legislation and DAO LLC framework. Delaware remains the standard for entities seeking venture capital or operating at significant scale.

2. Insurance Considerations

The insurance market for AI agent operations is nascent but developing. As of March 2026, the following coverage types are relevant:

  • Professional liability (E&O): Covers claims arising from professional services, including software that fails to perform as expected. Most E&O policies were not written with autonomous agents in mind — you may need endorsements or custom language.
  • Cyber liability: Covers losses from security breaches, including prompt injection attacks, key compromise, and infrastructure exploits. This is the most readily available and relevant coverage for agent operators.
  • Technology E&O: A specialized variant that covers technology product failures. More appropriate than general E&O for software developers selling trading agents.
  • Emerging AI liability products: Several specialty insurers (including Coalition, Resilience, and certain Lloyd’s syndicates) have begun offering AI-specific liability coverage. These products are new, expensive, and coverage terms vary significantly.

What insurance does not cover: losses from legitimate trading activity. If your agent makes bad trades and loses money, that is a business loss, not an insurable event. Insurance covers claims from third parties, software failures, and security breaches — not market risk.

3. Audit Trails and Logging

Comprehensive logging is both a legal protection and a regulatory requirement in many jurisdictions. Every agent trading operation should maintain:

  • Complete transaction logs: Every trade executed, including timestamp, market, position, size, price, and outcome.
  • Decision logs: The agent’s reasoning for each trade, including input data, model outputs, and confidence scores.
  • Configuration history: Every change to the agent’s parameters, strategy, spending limits, or authorized markets.
  • Error and exception logs: Every failure, timeout, rejected transaction, or unexpected behavior.
  • Wallet activity: All deposits, withdrawals, and transfers — not just trades.

Retention period: Maintain logs for a minimum of five years. CFTC record-keeping requirements for regulated entities specify five years, and even for unregulated operations, this provides adequate coverage for most statutes of limitations.

Storage: Logs should be append-only (immutable once written) and stored independently from the agent’s operational infrastructure. If an attacker compromises your agent, they should not be able to alter the audit trail.
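One lightweight way to approximate append-only behavior is to hash-chain each record to its predecessor, so that any later alteration breaks verification. The sketch below illustrates the idea; the file format and function names are assumptions, and a production system would also replicate the log off the agent's infrastructure as described above:

```python
import hashlib
import json
import time

def append_audit(entry: dict, log_path: str = "audit.jsonl") -> str:
    """Append a record whose hash chains to the previous record."""
    prev_hash = "0" * 64
    try:
        with open(log_path) as f:
            for line in f:
                prev_hash = json.loads(line)["hash"]
    except FileNotFoundError:
        pass  # first record in a new log
    body = {"ts": time.time(), "prev": prev_hash, **entry}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    with open(log_path, "a") as f:
        f.write(json.dumps(body) + "\n")
    return digest

def verify(log_path: str = "audit.jsonl") -> bool:
    """Recompute the chain; False means a record was altered or removed."""
    prev = "0" * 64
    with open(log_path) as f:
        for line in f:
            rec = json.loads(line)
            claimed = rec.pop("hash")
            recomputed = hashlib.sha256(
                json.dumps(rec, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or recomputed != claimed:
                return False
            prev = claimed
    return True
```

Hash chaining does not prevent an attacker from deleting the whole file, which is why independent, replicated storage remains essential; it does make silent tampering with individual records detectable.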

4. Terms of Service for Platforms

If you operate a copy-trading platform, agent marketplace, or any service where third parties interact with your agents, your Terms of Service are a critical liability management tool. Key provisions:

  • Risk disclosure: Explicit, prominent warnings that autonomous trading involves risk of total loss.
  • No performance guarantees: Disclaiming any guarantee or expectation of profit.
  • Limitation of liability: Capping your liability to the amount the user paid for the service (or a defined dollar amount).
  • Dispute resolution: Specifying arbitration rather than litigation, with a defined jurisdiction.
  • Indemnification: Users agree to indemnify you against third-party claims arising from their use of the service.
  • Regulatory acknowledgment: Users confirm they are operating in compliance with their local laws.

These provisions do not make you immune from liability, but they establish the contractual framework that courts will reference when disputes arise. For more on selling agent software with proper protections, see The Legal Guide to Selling AI Trading Agents.

5. Smart Contract Liability Limitations

For agents operating on-chain (Polymarket, decentralized exchanges), smart contract-level protections complement off-chain legal structures:

  • Spending caps enforced on-chain: Use wallet architectures that enforce spending limits at the contract level, not just in application logic. Safe transaction guards and Coinbase Agentic Wallet session limits are examples.
  • Timelock mechanisms: Require a delay period for large withdrawals, giving operators time to intervene.
  • Kill switch: A mechanism (multisig-controlled or admin key) that can freeze the agent’s wallet. This must be tested and operational before deployment.

These on-chain controls serve as evidence of reasonable care — demonstrating that you implemented technical safeguards to prevent the exact type of loss that occurred.
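The timelock and kill-switch behavior can also be mirrored off-chain in the agent's execution path as a defense-in-depth layer. A hypothetical Python sketch follows; the thresholds are invented, and real enforcement should live at the contract level as described above:

```python
class WalletGuard:
    """Off-chain mirror of on-chain controls: kill switch plus a
    withdrawal timelock (illustrative thresholds)."""

    def __init__(self, large_withdrawal_usd: float = 5000.0,
                 timelock_seconds: float = 24 * 3600):
        self.frozen = False
        self.large = large_withdrawal_usd
        self.delay = timelock_seconds
        self.pending = {}  # request_id -> (amount, unlock_time)

    def freeze(self) -> None:
        """Kill switch: block all further sends until human review."""
        self.frozen = True

    def request_withdrawal(self, request_id: str, amount: float,
                           now: float) -> str:
        if self.frozen:
            return "blocked"
        if amount < self.large:
            return "approved"
        # Large withdrawals are queued behind the timelock delay.
        self.pending[request_id] = (amount, now + self.delay)
        return "queued"

    def execute(self, request_id: str, now: float) -> str:
        if self.frozen:
            return "blocked"
        amount, unlock = self.pending[request_id]
        return "approved" if now >= unlock else "timelocked"

guard = WalletGuard()
assert guard.request_withdrawal("w1", 100.0, now=0.0) == "approved"
assert guard.request_withdrawal("w2", 9000.0, now=0.0) == "queued"
assert guard.execute("w2", now=3600.0) == "timelocked"
```

The delay window is what gives the operator time to hit the kill switch before a large transfer clears, which is the same rationale behind the on-chain timelock.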

6. Agent Wallet Security Integration

Legal protection and technical security are inseparable. The risk mitigation framework described here assumes that the agent wallet infrastructure is properly secured. For the technical implementation of wallet security — key management, spending controls, kill switches, and monitoring — see Agent Wallet Security.

The combination of proper entity structure, insurance, audit trails, contractual protections, on-chain controls, and wallet security forms the complete risk mitigation stack. Omitting any layer creates exploitable gaps.

For the full technical architecture that these legal considerations sit atop, see The Agent Betting Stack Explained.


Jurisdiction Comparison

Legal requirements for autonomous agent trading vary dramatically by jurisdiction. The following table summarizes the current landscape as of March 2026.

| Dimension | US (Federal/State) | European Union | United Kingdom | Singapore | Offshore (BVI/Cayman) |
| --- | --- | --- | --- | --- | --- |
| Agent trading legal? | Yes, with significant caveats. CFTC jurisdiction applies to prediction markets. State-level money transmission rules for third-party funds. | Yes, subject to MiFID II for financial instruments and the EU AI Act for AI systems. | Yes, under FCA oversight. Crypto trading permitted; prediction market regulation evolving. | Yes, under MAS (Monetary Authority of Singapore) oversight. Progressive regulatory framework for digital assets. | Generally yes, with minimal restrictions. Regulatory arbitrage is the primary motivation. |
| KYC required? | Kalshi: yes (full KYC). Polymarket: not enforced (offshore). FinCEN AML obligations apply to US persons regardless. | Yes, under the Anti-Money Laundering Directives (AMLD). VASPs must perform KYC. | Yes, under the Money Laundering Regulations 2017 (as amended). FCA-registered crypto firms must KYC. | Yes, MAS requires KYC for digital payment token services. | Varies. BVI has AML requirements but enforcement is lighter. Cayman VASP regime requires KYC. |
| CFTC/equivalent jurisdiction? | CFTC regulates prediction markets as event contracts. Enforcement active. | ESMA oversees financial markets. MiFID II applies to qualifying instruments. The AI Act adds AI-specific obligations. | FCA regulates financial markets. No prediction-market-specific framework yet. | MAS regulates capital markets and payment services. Digital asset licensing under the Payment Services Act. | No equivalent regulator in most offshore jurisdictions. Self-regulation dominates. |
| AI-specific regulation? | No federal AI law enacted. Executive Order 14110 (Oct 2023) set policy direction before its January 2025 rescission. State-level AI bills emerging (Colorado, California, Illinois). | EU AI Act (entered into force Aug 2024). High-risk classification for certain financial AI systems. Phased enforcement through 2027. | Pro-innovation, sector-specific framework. No single AI law. FCA exploring AI in financial services. | Model AI Governance Framework (voluntary). MAS guidance on AI in financial services (binding for regulated entities). | No AI-specific regulation. Minimal oversight of automated systems. |
| Recommended entity | Delaware LLC or C-Corp. Wyoming LLC for crypto-native. State money transmitter licenses if handling third-party funds. | EU-domiciled entity (Ireland, Netherlands common). MiFID license if operating as a financial intermediary. AI Act compliance documentation. | UK LTD. FCA registration for crypto asset activities. | Singapore PTE LTD. MAS licensing for digital payment token services if applicable. | BVI Business Company or Cayman Exempted Company. Combine with an operating entity in a regulated jurisdiction for credibility. |

Jurisdiction Selection Guidance

For most builders starting out, a US LLC (Wyoming or Delaware) provides the best balance of legal protection, simplicity, and credibility. US law is well-understood, entity formation is cheap and fast, and the liability protections are robust.

For operations that are explicitly crypto-native, handle significant international volume, or want to minimize regulatory exposure, an offshore entity combined with a US operating entity provides flexibility. But offshore structures add complexity, cost, and potential reputational risk.

For any operation serving EU users, EU AI Act compliance preparation should begin immediately, regardless of where your entity is domiciled. The Act’s extraterritorial scope means it applies to AI systems that affect EU persons, not just systems deployed within the EU.

For builders exploring the full landscape of agent-specific terminology and concepts referenced in this guide, see the Agent Betting Glossary.


What Comes Next

The legal framework for autonomous trading agents is moving fast. Several developments to watch in 2026 and 2027:

  • CFTC rulemaking on automated trading: The commission has signaled renewed interest in Regulation AT or a successor framework specifically addressing algorithmic and autonomous trading on event contract markets.
  • EU AI Act enforcement: August 2026 marks the start of high-risk AI system obligations. The first enforcement actions will set critical precedents for how the Act applies to trading agents.
  • State-level AI legislation: Colorado's AI Act (effective June 2026, after its original February 2026 date was pushed back) and similar bills in California, Illinois, and New York will create a patchwork of state-level requirements for AI systems that influence consequential decisions.
  • Insurance market maturation: As AI agent operations scale, the insurance market will develop standardized products. Builders who establish relationships with insurers now will be better positioned when claims arise.
  • Case law development: The first lawsuits involving AI trading agent losses will establish judicial precedent. Whether courts treat agents as simple software tools or as something novel will shape liability doctrine for years.

Builders who establish proper legal infrastructure now — entity structure, insurance, audit trails, and contractual protections — will be positioned to adapt as the regulatory landscape solidifies. Those who wait for clarity before implementing protections are accepting unnecessary risk.


Guide updated March 2026. Not financial or legal advice. Consult qualified counsel for your jurisdiction. Built for builders.