TRAIGA: The Texas Responsible Artificial Intelligence Governance Act

Executive Summary

Effective January 1, 2026, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), enacted via House Bill 149, establishes a unique regulatory framework for artificial intelligence in the Lone Star State. Unlike the risk-based models adopted by the European Union or Colorado, TRAIGA focuses on intent-based liability, prohibiting specific malicious uses of AI while fostering innovation through safe harbors and a regulatory sandbox. For legal practitioners and corporate counsel, the Act represents a shift from compliance based on potential harm to liability based on intentional misconduct, specifically rejecting “disparate impact” as a sole basis for discrimination claims.

Legislative Context and Scope

TRAIGA emerged as a business-friendly alternative to the earlier House Bill 1709, which proposed impact assessments and risk-tiering similar to EU AI laws. The final law applies broadly to any person who develops or deploys an AI system in Texas, produces products used by Texas residents, or conducts business in the state.

The statute defines an “Artificial Intelligence System” expansively. However, liability is bifurcated between “Developers” (those creating the system) and “Deployers” (those using it commercially) – the goal being that responsibility generally tracks with the actor’s intent.

The Intent-Based Liability Standard

The most significant legal innovation in TRAIGA is its rigorous mental state requirement. The Act explicitly states that disparate impact alone is insufficient to demonstrate an intent to unlawfully discriminate. This distinguishes Texas from jurisdictions where algorithmic bias discovered post-deployment can trigger strict liability. Instead, the Texas Attorney General must prove that a developer or deployer intended to discriminate or infringe upon rights.

That is, of course, a substantially higher bar for enforcement to clear.

To aid compliance, TRAIGA creates a rebuttable presumption (think of it as a qualified "safe harbor") that an entity exercised reasonable care if it complies with recognized standards, such as the NIST AI Risk Management Framework, or follows certain other internal protocols. This incentivizes robust internal governance as a partial litigation shield.

Specific Prohibitions: The “Red Lines”

TRAIGA prohibits the development or deployment of AI systems for specific harmful purposes. It is unlawful to create systems intentionally designed to incite physical self-harm, facilitate criminal activity, or generate child sexual abuse material (CSAM) or non-consensual deepfakes.

Furthermore, the Act bans AI systems that intentionally limit expression based on political beliefs – though it includes exceptions for obscenity, credible threats of violence, and illegal content to balance First Amendment concerns.

Affirmative Obligations: Healthcare and Government

While private sector obligations are largely negative (prohibitions), the Act does impose affirmative duties on government and healthcare entities.

State agencies must disclose when a consumer is interacting with an AI system. These disclosures must be clear, conspicuous, and free of “dark patterns”—interfaces designed to manipulate user autonomy or consent. Additionally, agencies are barred from using AI for “social scoring” or using biometric data to uniquely identify individuals without consent.

Healthcare providers face a distinct mandate to disclose the use of AI systems “in relation to” any service or treatment. This disclosure must occur before or at the time the service is provided, except in emergencies. This broad requirement suggests that any AI involvement in diagnostics, scribing, or clinical decision-making triggers a duty to warn the patient.

Innovation and Enforcement

To attract technology investment, TRAIGA establishes an AI Regulatory Sandbox administered by the Department of Information Resources. This program allows approved applicants to test innovative AI systems for 36 months under a suspended regulatory regime, offering a safe harbor from state enforcement actions during the testing period.

Critically, enforcement is the exclusive domain of the Texas Attorney General; the Act does not create a private right of action, insulating companies from class-action litigation. The AG must provide a 60-day “right to cure” notification before initiating action. Penalties for uncurable violations can reach $200,000 per violation, with continuing violations accruing fines of up to $40,000 per day.

Conclusion

TRAIGA represents a distinctive attempt at a "Third Way" in AI governance, prioritizing intent over impact and specific harms over precautionary bureaucracy. For lawyers, the key to navigating TRAIGA lies in documenting "reasonable care" through NIST alignment and ensuring that client disclosures, where required, remain clear, conspicuous, and compliant.

As to whether this attempt will be successful… time will tell.
