Texas has enacted groundbreaking artificial intelligence governance laws that will reshape how businesses operate in the state’s $2.4 trillion economy. Signed into law on June 22, 2025, House Bill 149 — the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) — and Senate Bill 1188 create the nation’s most comprehensive state-level AI regulatory framework, effective January 1, 2026, and September 1, 2025, respectively.
The legislation positions Texas as the third state to adopt comprehensive AI laws, but with a business-friendly twist that sets it apart from California’s strict approach. Unlike impact-focused regulations elsewhere, Texas requires proof of intentional misconduct rather than mere discriminatory outcomes — a distinction that provides businesses with greater legal certainty while maintaining consumer protections.
Healthcare Providers Face New Transparency Requirements
Healthcare providers face the most immediate changes under both laws. TRAIGA mandates that providers disclose AI system use to patients before or during interactions, except in emergencies where disclosure must happen as soon as reasonably possible. This requirement applies whenever AI influences diagnosis or treatment decisions.
SB 1188 adds another layer of oversight, requiring licensed practitioners to review all AI-generated medical records according to Texas Medical Board standards. The law ensures that human physicians make ultimate medical decisions, even when AI provides diagnostic support or treatment recommendations.
The healthcare provisions also include strict data localization requirements. SB 1188 prohibits the physical offshoring of electronic medical records, applying to both direct storage by healthcare providers and third-party cloud services. This mandate could force major healthcare systems to restructure their data infrastructure, potentially affecting relationships with global cloud providers.
Strategic Advantage: Intent-Based Liability Framework
TRAIGA’s most significant innovation lies in its intent-based liability standard. The law prohibits AI systems developed with specific harmful intents, including behavioral manipulation to encourage self-harm, constitutional rights violations, and discriminatory targeting of protected classes. Critically, disparate impact alone cannot establish discriminatory intent — a provision that shields businesses from liability for unintended algorithmic outcomes.
This approach creates practical documentation requirements. While TRAIGA doesn’t mandate extensive record-keeping, proving lack of discriminatory intent effectively requires organizations to maintain detailed documentation of AI system purposes, design decisions, and intended use cases. Companies must document legitimate business purposes, testing protocols that demonstrate efforts to prevent prohibited uses, and clear policies restricting deployment to lawful purposes.
Market Impact: First-in-Nation AI Regulatory Sandbox
Texas introduces a pioneering 36-month regulatory sandbox program, administered by the Department of Information Resources. Approved participants can test innovative AI applications without standard state licensing requirements, creating a competitive advantage for businesses willing to pilot cutting-edge solutions.
The sandbox waives certain regulations but maintains TRAIGA’s core prohibitions. Participants must submit quarterly performance reports and engage with consumer feedback, but gain protection from punitive actions during the testing period. This framework could attract AI companies seeking regulatory clarity while developing new technologies.
Robust Safe Harbor Protections
TRAIGA includes multiple safe harbor provisions for organizations demonstrating good faith compliance efforts. Companies may avoid liability if they discover violations through internal testing, substantially comply with the NIST AI Risk Management Framework, follow state agency guidelines, or experience third-party misuse of their systems.
These protections reward proactive compliance efforts and encourage investment in AI safety measures. Organizations implementing adversarial testing and red team exercises gain explicit legal protection, incentivizing robust security practices.
Enforcement Authority and Penalty Structure
The Texas Attorney General holds exclusive enforcement authority, providing a single point of regulatory contact rather than oversight spread across multiple agencies. Before pursuing action, the Attorney General must provide a 60-day cure period, giving businesses an opportunity to address compliance issues.
Civil penalties follow a tiered structure: curable violations carry fines of $10,000 to $12,000 per violation, while uncurable violations range from $80,000 to $200,000. Continuing violations incur daily penalties of $2,000 to $40,000, creating strong incentives for swift remediation.
Government Entity Restrictions
State agencies face additional constraints under TRAIGA. The law prohibits governmental entities from using AI to assign social scores that could lead to detrimental treatment, or to identify individuals through biometric data from public sources without consent. These restrictions don’t apply to private businesses, creating competitive advantages for commercial AI applications.
Governmental entities must also disclose AI system interactions to consumers before or at the point of contact, ensuring transparency in government services.
Biometric Privacy Updates
TRAIGA amends Texas’s Capture or Use of Biometric Identifier Act (CUBI), clarifying that public availability of biometric data doesn’t constitute consent unless individuals made it public themselves. The law creates new exemptions for biometric use in AI systems deployed for security, fraud detection, or preventing illegal activity.
These changes provide clearer guidance for businesses using biometric data in AI applications while maintaining privacy protections for individuals.
What Business Leaders Should Know
Companies operating in Texas should begin immediate compliance preparations. Key steps include conducting comprehensive AI system audits, establishing internal governance policies aligned with TRAIGA requirements, and implementing testing protocols that qualify for safe harbor protections.
Organizations should also evaluate opportunities to participate in the regulatory sandbox for high-risk or novel AI use cases. Early participation could provide competitive advantages and shape future regulatory guidance.
The January 1, 2026, effective date leaves roughly six months from the laws’ June 2025 signing to prepare, but healthcare providers face a tighter timeline under SB 1188’s September 1, 2025, implementation.
Global Regulatory Context
Texas’s approach contrasts sharply with the European Union’s AI Act and California’s pending legislation. By focusing on intentional misconduct rather than algorithmic outcomes, Texas creates a more business-friendly environment while maintaining consumer protections.
This framework could influence nationwide AI regulation, potentially serving as a model for other states seeking to balance innovation with oversight. Companies with multi-state operations must coordinate TRAIGA compliance with other regulatory requirements, but Texas’s approach may provide templates for harmonized governance.
The laws demonstrate Texas’s growing influence in technology policy, leveraging the state’s business-friendly reputation to attract AI investment while establishing clear regulatory boundaries. As enforcement begins, Texas could emerge as a preferred jurisdiction for AI development and deployment.
What’s your take on Texas’s intent-based approach to AI regulation — does it strike the right balance between innovation and protection? Share your perspective on how this could reshape business AI strategies.