AI Governance Through Markets
Case studies on how market forces, not regulation, are creating trustworthy AI
The conventional debate over AI governance mistakenly pits regulation against innovation: governments rush to curb risks while companies and investors fear oversight will stifle progress.
But this is a false dichotomy. Real change happens when incentives align with business realities.
Startups, enterprises, and investors are already quietly reshaping AI’s trajectory through private markets — not merely through research papers, but through intentional decisions that reward reliability.
While governments and industry are only now beginning to recognize this shift, with some calling it ‘private governance’, I’ve been pioneering these models as an operator, investor, and policy adviser: approaches that bypass traditional regulation by aligning incentives to build truly trustworthy AI.
Here’s what I’ve learned — and where we need to go next.
Why I’m Focused on Market-Driven AI Governance
Over the past 15 years I’ve operated at the nexus of tech and policy, building products, scaling startups, and designing ecosystems that drive safer, more transparent technology.
From my academic work on early alignment challenges to startups, Google, and Meta, I witnessed governance failures firsthand: everything from AI-driven online harms to content-integrity problems and regulatory shortcomings.
Embedding governance into products taught me a crucial lesson: Retrofitting safety after scaling is nearly impossible.
Frustrated by tech giants overlooking novel solutions, I pivoted to early-stage investing, advising AI startups on risk and safety.
I learned that effective AI governance can’t just be about regulation. It must provide a tangible competitive edge, yielding products that are not only compliant but inherently more valuable.
This insight underpins my investment thesis: Companies prioritizing truth and safety will define the next generation of trusted AI.
Trustworthy AI isn’t only a function of what companies decide to build. Products are also shaped by who funds them, who buys them, and who sets the rules of engagement.
That’s why I’ve spent the last few years focused on three key leverage points:
Startups & Investors: Influencing AI governance at the earliest stages through initiatives like Responsible Innovation Labs.
Enterprise Adoption: Creating AI procurement frameworks that determine what gets bought and deployed, in partnership with the Data & Trust Alliance.
Market Standards & Liability Frameworks: Developing transparency tools—such as AI Vendor Cards and contractual requirements—to make governance enforceable. I also led opposition to California’s AI bill SB 1047, mobilizing investors and startups to challenge legislation that misaligns incentives for safe AI development.
Rather than relying on a single approach, I’ve combined these forces to harness network effects and mainstream trustworthy AI.
Case Studies
I. Building AI governance into startups from Day 1: Responsible Innovation Labs
Problem: AI startups often prioritize speed over governance, risking expensive retrofits. And when they sell to enterprises, trustworthy products are non-negotiable.
Hypothesis: Embedding governance early can prevent costly fixes and attract investor support, drawing on Responsible Innovation Labs’ network of multistage VCs and startups.
What We Did: We adapted the 2023 NIST AI Risk Management Framework into a lightweight toolkit, developed 'safety by design' resources, partnered with VCs to integrate it into their due diligence processes, and secured endorsements from 100+ policymakers, leading AI labs, startups, and investors.
Key Learnings:
VCs influence startups, but without enterprise demand for safety, change won’t stick
Startups will invest in safety if it helps them sell — governance must be a competitive advantage, not a compliance burden
An enforcement mechanism is crucial; without contractual standards, adoption remains fragmented
That led me to my next move: Enterprise AI procurement.
II. Creating AI procurement standards for Fortune 100 companies: Data & Trust Alliance
Problem: Enterprises are deploying AI tools without complete information on value, risk, and liability. Vendors tout “state-of-the-art” products, but buyers have limited avenues to validate these claims. Procurement teams often lack the expertise to ask the right questions, at times signing off without proper assessments of risk and value, which has led to astonishing failures.
Hypothesis: New disclosures about AI product and vendor trustworthiness can empower enterprises to assess these systems more effectively. Procurement teams can act as central nodes, convening cross-functional teams (legal, engineering, privacy, governance) to evaluate AI purchases across multiple dimensions.
What We Did:
I worked closely with executives from 26 Fortune 100 companies to develop practical AI procurement standards. While the standards are still in the pilot phase, these early conversations have laid a promising foundation for how enterprises can more rigorously assess AI risks and value, ensuring deployments that benefit companies and their customers.
Key Learnings:
Standardize Disclosures: Provide clear, consistent information about the entire AI product stack (not just one layer) so companies don’t have to guess how an AI system works.
Extend Existing Tools: We conceived of new tools like an ‘AI Vendor Card’, building on established go-to-market practices such as AI model and system cards, which are a widely accepted way for AI labs to communicate key product attributes to technical audiences (a rough sketch of what such a card might contain follows this list).
Ensure Ongoing Accountability: Shift governance from a one-time checklist to continuous oversight
Facilitate Procurement of Trustworthy Systems: Help enterprises purchase AI products that are both safe and valuable
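To make the Vendor Card idea concrete, here is a minimal, hypothetical sketch of the kind of structured disclosure it could capture. The field names and example values are my illustrative assumptions, not a published schema from the Data & Trust Alliance or any vendor.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AIVendorCard:
    """Hypothetical 'AI Vendor Card': a structured disclosure a vendor hands to an
    enterprise procurement team. All field names here are illustrative assumptions."""
    vendor: str
    product: str
    intended_use: str                   # what the system is (and is not) designed for
    model_stack: List[str]              # every layer of the product, not just one model
    training_data_summary: str          # provenance and licensing of training data
    evaluation_results: Dict[str, str]  # benchmark and red-team results, with dates
    known_limitations: List[str]        # failure modes the buyer should plan around
    monitoring_commitments: List[str]   # what the vendor will report after deployment
    incident_contact: str               # who the buyer calls when something goes wrong
    liability_terms: str                # indemnity scope agreed in the contract

# Example card for a fictional vendor and product.
card = AIVendorCard(
    vendor="ExampleAI Inc.",
    product="Contract Summarizer v2",
    intended_use="Summarizing commercial contracts for legal review; not for financial or medical advice",
    model_stack=["third-party foundation model", "in-house fine-tuned summarizer", "retrieval layer"],
    training_data_summary="Licensed legal corpora plus public filings; no customer data",
    evaluation_results={"summary_accuracy": "0.92 on internal benchmark", "last_red_team": "2024-11"},
    known_limitations=["Accuracy degrades on non-English contracts", "May omit deeply nested clauses"],
    monitoring_commitments=["Quarterly accuracy reports", "72-hour incident notification"],
    incident_contact="trust@exampleai.example",
    liability_terms="Indemnity limited to IP claims arising from generated output",
)
```

A single disclosure like this gives procurement one artifact that legal, engineering, privacy, and governance reviewers can all interrogate, rather than guessing how the system works.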
III. What Healthcare Can Teach Us About AI Governance
Healthcare Has a Built-In Governance Advantage
Regulated industries like healthcare and finance are ahead of others in identifying AI product risk and value, because the cross-functional risk assessment this requires is already part of their DNA. Teams from across the company discuss how new technologies affect their domains, then work together to build in safeguards and make build-or-buy decisions about third-party AI systems.
I want to be clear: while regulation facilitated the company-wide collaboration needed to do private AI governance well, it is not a precondition for it. Rather, other industries should take lessons from healthcare about how to coordinate internally and emulate that collaboration.
I spent half my career in healthcare and know, for example, that hospitals don’t wait for failure; they undertake proactive audits, such as JCAHO performance assessments, to ensure compliance across data, risk, and patient outcomes.
The Coalition on Health AI (CHAI) was among the first to adapt the NIST AI Risk Management Framework for hospitals, proving that industry groups can set effective standards for trustworthy AI. When I worked with Responsible Innovation Labs, CHAI was my first proof point that a collaborative, standards-driven approach to shaping trustworthy systems can work – something many AI companies still lack.
The Lesson: AI companies need more than one-off audits. They require continuous monitoring and coordinated multi-stakeholder oversight, just as hospitals do for patient safety.
IV. The Liability Gap: AI Insurance & Risk Allocation
Many enterprises assume they can offload AI risk to vendors or insurance policies. But this is a flawed assumption.
The largest AI companies and consultancies actually sell AI indemnity insurance alongside their products, but the most widely used plans are too narrowly scoped, covering mostly content generation risks (e.g., deepfake lawsuits) rather than full-system failures.
The AI insurance market mirrors the challenges seen in cyberinsurance, with insurers struggling to assess AI’s risk landscape and price plans accordingly. Working with major vendors like OpenAI, Accenture, or Microsoft may appear to offset liability for the enterprise buyer, but in reality those vendors do not usually bear the full cost if something goes wrong.
This is why contractual AI governance is crucial. AI transparency - sharing information and adhering to agreed-upon thresholds - must be built into procurement agreements rather than treated as an afterthought.
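As one rough illustration of what ‘adhering to agreed-upon thresholds’ could look like operationally, the sketch below compares a vendor’s periodic report against contractual limits. The metric names, threshold values, and report format are hypothetical assumptions, not terms from any real agreement.

```python
# Hypothetical thresholds an enterprise might negotiate into an AI procurement
# agreement; metric names and limits are illustrative only.
CONTRACT_THRESHOLDS = {
    "hallucination_rate": 0.02,   # ceiling: max share of outputs flagged as unsupported
    "pii_leak_incidents": 0,      # ceiling: max incidents per reporting period
    "uptime": 0.995,              # floor: minimum availability
}

def check_vendor_report(report: dict) -> list:
    """Compare a vendor's periodic report against contractual thresholds and
    return human-readable violations (an empty list means compliant)."""
    violations = []
    for metric, limit in CONTRACT_THRESHOLDS.items():
        value = report.get(metric)
        if value is None:
            violations.append(f"{metric}: not reported (transparency clause breached)")
        elif metric == "uptime" and value < limit:
            violations.append(f"{metric}: {value} is below the contractual floor of {limit}")
        elif metric != "uptime" and value > limit:
            violations.append(f"{metric}: {value} exceeds the contractual ceiling of {limit}")
    return violations

# Example quarterly report from a vendor (hypothetical numbers).
print(check_vendor_report({"hallucination_rate": 0.035, "pii_leak_incidents": 0, "uptime": 0.999}))
```

The point is not the specific metrics but the mechanism: once disclosures and thresholds live in the contract, compliance becomes something a buyer can check continuously rather than assume.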
Where AI Market Incentives Go Next
We’re at a tipping point. AI governance isn’t just a policy issue; it’s a market design challenge.
To advance market-driven AI governance, we need industry standards like AI Vendor Cards, procurement teams that serve as governance stewards, and government incentives that promote transparency over rigid regulation.
This isn’t a theoretical framework; we’re actively building and investing in early-stage AI startups that embody this vision. Forward-thinking LPs and industry leaders can transform safety from a mere compliance checkbox into a genuine competitive advantage.
A Vision for the Future:
The Astera Institute sums up my approach:
“We have two very good engines for scaling and distributing ‘good things’ in the world (however we define those things): markets and governments. If those ‘good things’ can generate equivalent value at a lower cost—or better yet, create new value—markets will deploy them widely and quickly. Meanwhile, governments can organize and shape markets, operate at scale, and allocate resources to public goods that markets alone cannot efficiently deliver.”
Both markets and governments can scale ‘good things’ when incentives align. Together, they can drive innovation that delivers public value at lower costs.
Share this mission? Let’s chat.