Researching and aligning artificial general intelligence systems to unlock unprecedented human progress — safely, responsibly, and openly.
Our research spans the most pressing open problems in AGI — from alignment and interpretability to long-horizon planning and autonomous agent coordination.
Active Research: Developing robust methods for ensuring that frontier AI systems reliably pursue intended goals across novel contexts. We study RLHF, Constitutional AI, and scalable oversight techniques. (A toy sketch of the RLHF preference loss appears after the research areas below.)
Active Research: Understanding the internal computations of large neural networks. We reverse-engineer circuits, identify features, and map causal structures inside transformer models. (An activation-patching sketch also follows below.)
In Progress: Building AI agents that autonomously plan and execute complex, multi-step goals. We focus on safe autonomy envelopes, agent coordination, and error recovery.
Upcoming: Advancing AI's capacity for physical intuition, counterfactual reasoning, and mental simulation. Grounding intelligence in structured world representations.
Active Research: Designing rigorous evaluations for capability and safety. We contribute to ARC-AGI-style benchmarks and red-teaming protocols for autonomy, deception, and emergent behavior.
Planned: Investigating cross-modal understanding — vision, language, audio, and action. Building toward embodied AI that perceives and acts in physical and simulated environments.
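For readers who want the RLHF mention above made concrete: the reward models behind RLHF are typically trained on pairwise human preferences with a Bradley-Terry loss. The sketch below is a toy illustration in PyTorch, not our training code; the tiny `RewardModel` and the random embeddings are stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Toy scalar reward head over pooled response embeddings."""
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # (batch, hidden_dim) -> one scalar reward per response
        return self.score(pooled).squeeze(-1)

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in embeddings for (chosen, rejected) response pairs.
chosen, rejected = torch.randn(8, 64), torch.randn(8, 64)

# Bradley-Terry preference loss: push the chosen response's reward
# above the rejected response's reward.
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
optimizer.step()
print("preference loss:", loss.item())
```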
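Likewise, the circuit work described under interpretability often starts with activation patching: cache a hidden activation from a "clean" run, splice it into a "corrupted" run, and measure how much the output shifts toward the clean answer. A minimal sketch, assuming a hypothetical two-layer toy model in place of a real transformer:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical two-layer stand-in; real audits hook attention heads
# and MLP blocks inside trained transformers.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
site = model[1]  # activation site whose causal role we probe

clean_x, corrupt_x = torch.randn(1, 16), torch.randn(1, 16)
cache = {}

# 1. Run the clean input and cache the activation at the site.
handle = site.register_forward_hook(lambda m, inp, out: cache.update(act=out))
model(clean_x)
handle.remove()

# 2. Run the corrupted input, splicing the cached clean activation
#    back in (a forward hook that returns a value replaces the output).
handle = site.register_forward_hook(lambda m, inp, out: cache["act"])
patched = model(corrupt_x)
handle.remove()

baseline = model(corrupt_x)

# 3. How far patching moves the output measures the site's causal effect.
print("causal effect of site:", (patched - baseline).norm().item())
```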
Three distinct model architectures, each targeting a critical capability dimension on the path to beneficial AGI.
Designed for safe, constrained deployment in high-stakes environments. Enforces explicit safety envelopes, constitutional constraints, and human-oversight requirements at inference time. (A minimal sketch of the envelope pattern follows the three models.)
A general reasoning engine built to operate reliably under uncertainty, stress, and distribution shift. Optimized for causal inference, multi-step planning, and out-of-distribution generalization.
A multi-agent coordination framework for distributed intelligence. NOVA enables networks of specialized agents to collaborate on complex goals through adaptive task routing and shared memory. (A sketch of this coordination pattern also appears below.)
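To make the first model's "safety envelope at inference time" concrete, here is a minimal sketch of the general pattern: every proposed action is checked against explicit constraints, and failures escalate to a human before anything executes. All names here (`Action`, `SafetyEnvelope`, the thresholds) are hypothetical illustrations, not the deployed API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    name: str
    risk_score: float   # model-estimated risk in [0, 1]
    reversible: bool

@dataclass
class SafetyEnvelope:
    """Inference-time constraints; violations route to a human."""
    max_risk: float = 0.3
    require_reversible: bool = True

    def permits(self, action: Action) -> bool:
        if action.risk_score > self.max_risk:
            return False
        if self.require_reversible and not action.reversible:
            return False
        return True

def execute(action: Action, envelope: SafetyEnvelope,
            ask_human: Callable[[Action], bool]) -> str:
    # Constraints are enforced here, at inference time,
    # not as a post-hoc filter on outputs.
    if envelope.permits(action) or ask_human(action):
        return f"executed {action.name}"
    return f"blocked {action.name}"

print(execute(Action("delete_records", 0.8, reversible=False),
              SafetyEnvelope(), ask_human=lambda a: False))
```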
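And the coordination pattern NOVA describes, adaptive task routing over a shared memory, can be sketched in a few lines. This illustrates the general pattern only; the agent names, routing rule, and blackboard structure are our hypothetical examples, not NOVA's interface.

```python
from typing import Callable, Dict

SharedMemory = Dict[str, str]   # blackboard visible to every agent

def plan_agent(task: str, mem: SharedMemory) -> str:
    mem["plan"] = f"steps for: {task}"
    return mem["plan"]

def code_agent(task: str, mem: SharedMemory) -> str:
    # Later agents can read what earlier agents wrote.
    return f"code for {task!r} following {mem.get('plan', 'no plan')!r}"

AGENTS: Dict[str, Callable[[str, SharedMemory], str]] = {
    "plan": plan_agent,
    "code": code_agent,
}

def route(task: str) -> str:
    """Adaptive routing: pick the specialist matching the task."""
    return "code" if "implement" in task else "plan"

memory: SharedMemory = {}
for task in ["design a data pipeline", "implement the ingest step"]:
    agent = AGENTS[route(task)]
    print(agent(task, memory))
```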
Safety and alignment objectives are core to the training pipeline — not post-hoc filters. We use Constitutional AI and scalable oversight from day one. (A sketch of the critique-and-revision loop follows these principles.)
We cannot trust what we cannot understand. Every model we build undergoes a mechanistic audit to map its internal representations and identify deceptive features.
Our deployment protocols maintain meaningful human control at every capability tier. We operate with explicit autonomy envelopes and corrigibility constraints.
We publish our safety findings — including failures — to contribute to the global field. A rising tide of safety knowledge lifts all boats.
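Constitutional AI, referenced in the first principle above, trains models on their own critiques and revisions of draft responses against a set of written principles. A minimal sketch of that critique-and-revision loop, assuming a hypothetical `generate(prompt)` completion function in place of a real model call:

```python
CONSTITUTION = [
    "Identify ways the response could be harmful or deceptive.",
    "Identify claims made without adequate support.",
]

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a language-model completion call.
    return f"<completion for: {prompt[:40]}...>"

def constitutional_revise(question: str) -> str:
    draft = generate(question)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique this response. {principle}\nResponse: {draft}")
        draft = generate(
            f"Revise the response to address the critique.\n"
            f"Response: {draft}\nCritique: {critique}")
    return draft  # revised drafts become supervised training data

print(constitutional_revise("How should I secure my home network?"))
```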
“We are at the most consequential moment in the history of intelligence. The decisions made in the next decade will shape civilization for centuries. We believe that safety and capability are complements, not trade-offs — and that the best AGI is one humanity can trust unconditionally.”
— Texas AGI Labs Research Team, 2025
Texas AGI Labs is an independent AI research institution based in McKinney, Texas. We exist because we believe the frontier of intelligence research should not be concentrated in a single city or shaped by a single worldview.
Whether you're a researcher, engineer, institution, or simply curious about the future of intelligence — we want to hear from you.