Thinking Machines Lab Aims to Make AI Models More Predictable: What Startups Should Know

Thinking Machines Lab, the high-profile AI startup founded by former OpenAI CTO Mira Murati and backed by $2 billion in seed funding, has set its sights on making AI model responses more consistent. The initiative was highlighted recently on the lab's new research blog, which gives the tech world a rare look at its early progress.
Challenging AI’s Non-Determinism
A core issue in today’s generative AI systems is unpredictability: ask a model like ChatGPT the same question several times, and it will often generate different answers. This randomness, known as non-determinism, has generally been accepted as an inherent property of large language models. But the team at Thinking Machines Lab argues it can—and should—be solved.
According to researcher Horace He, the root cause of this unpredictability isn't the models themselves. Rather, it lies in how GPU kernels, the small programs that run on Nvidia chips, are orchestrated during inference (the stage where the model generates a response for the user). He argues that with tighter control over this orchestration layer, AI outputs could be made far more consistent.
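To make the underlying issue concrete, here is a minimal Python sketch (an illustration of the general phenomenon, not code from the lab's post): floating-point addition is not associative, so a kernel that reduces the same numbers in a different order, for example because the batch size or thread scheduling changed, produces results that are not bit-identical.

```python
# Illustration only (not Thinking Machines Lab code): floating-point
# addition is not associative, so summing the same values in a different
# order can give slightly different results.
import random

random.seed(0)
values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

ordered_sum = sum(values)            # one reduction order

shuffled = values[:]
random.shuffle(shuffled)
shuffled_sum = sum(shuffled)         # same values, different order

print(ordered_sum, shuffled_sum)
print(ordered_sum == shuffled_sum)   # frequently False: the last bits differ
```

Inside a large language model, differences this small can flip which token ends up with the highest score, so a response generated under one kernel schedule can diverge from the "same" response generated under another. That is why consistency at the kernel-orchestration level matters for reproducible outputs.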
Beyond Reliability: The Strategic Implications
Consistency in AI model outputs isn't just a matter of academic interest. For enterprises and scientific research, reproducible results are vital: they enable robust decision-making, support regulatory compliance, and make debugging easier. Moreover, as Thinking Machines Lab highlights, more consistent AI responses could meaningfully improve reinforcement learning (RL) methods, which rely on predictable feedback to refine models. Smoother RL training, in turn, leads to better, more reliable AI for end users.
The lab also revealed plans to leverage these techniques in building customized enterprise AI solutions, potentially making bespoke models safer and more trustworthy for business-critical applications. For founders exploring AI, these developments offer both inspiration and a blueprint for competitive differentiation.
A Philosophy of Openness (with Questions)
Thinking Machines Lab has promised to share frequent blog posts, open-source code, and research insights as part of its commitment to openness, echoing OpenAI's original mission. Its new blog, titled "Connectionism," aims to foster collaboration with the broader tech community and maintain transparency. Industry watchers will be keen to see whether this open stance endures as the company grows and commercializes its offerings.
Industry Context and Future Outlook
This glimpse into the inner workings of one of Silicon Valley's most secretive AI startups signals a broader trend: as generative AI ambitions outpace old technical assumptions, new frontiers like controllable consistency are emerging as differentiators. If Thinking Machines Lab can engineer predictability into large models and turn that capability into real products, it may redefine the standard for enterprise-ready AI and justify its multi-billion-dollar valuation.
Deep Founder Analysis
Why it matters
For startups and founders building on top of AI, model consistency isn’t just an engineering challenge—it’s a market enabler. Reliable, repeatable AI is crucial for industries such as healthcare, finance, and legal tech, where unpredictable outputs create real business risk. Thinking Machines Lab’s research hints at a new set of quality standards that could unlock enterprise adoption at scale.
Risks & opportunities
Startups face the risk of being left behind if they ignore rising expectations for AI reliability. On the flip side, there’s an opportunity for founders to turn technical rigor into a go-to-market advantage, especially by targeting sectors with regulatory or safety needs. Parallels can be drawn to early cloud computing, where “enterprise-grade” security unlocked massive growth.
Startup idea or application
A compelling opportunity exists for a startup to build a “deterministic AI inference layer” as an API or middleware plug-in for popular cloud providers and AI platforms. By ensuring reproducible outputs across different hardware/software deployments, such a tool could become the gold standard for companies seeking certification-ready or regulated AI services.
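As a rough sketch of what such a middleware layer might standardize, the snippet below uses PyTorch's existing determinism controls; the wrapper name is hypothetical, and true bit-for-bit reproducibility across different GPUs or library versions would still require the kind of kernel-level work Thinking Machines Lab describes.

```python
# Hypothetical sketch of a deterministic-inference wrapper built on
# PyTorch's stock determinism switches; names here are illustrative only.
import os
import torch

def deterministic_generate(model, inputs, seed: int = 0):
    """Run a forward pass with as much determinism as stock PyTorch offers."""
    # cuBLAS reads this setting (ideally before any CUDA work starts) to
    # choose deterministic matrix-multiply algorithms.
    os.environ.setdefault("CUBLAS_WORKSPACE_CONFIG", ":4096:8")
    # Raise an error if any op would fall back to a non-deterministic kernel.
    torch.use_deterministic_algorithms(True)
    # Pin every source of randomness (sampling, dropout, etc.).
    torch.manual_seed(seed)
    with torch.no_grad():
        return model(inputs)
```

Even with these switches, results typically hold only on identical hardware and software stacks; guaranteeing identical outputs across deployments is the harder problem such a startup would be selling.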
Tags: AI Models, Consistency, Enterprise AI, Reinforcement Learning, Thinking Machines Lab
For founders interested in broader AI trends and how startups are leveraging them, check out our article on AI safety regulation’s market impact and startup pitch strategies from TechCrunch Disrupt 2025.
Visit Deep Founder to learn how to start your own startup, validate your idea, and build it from scratch.
📚 Read more articles in our Deep Founder blog.