Seed · 2026

WeaveStudio

Physics for synthetic societies.

A simulation platform where thousands of AI people live inside worlds you design: consumer launches, policy changes, game narratives, sales conversations, clinical trials, and more. Run the same setup twice and you get the same answer. Every outcome comes with a record of how it happened.

$5M Seed · Platform Infrastructure
The thesis

Every agent platform made the same mistake.

They let the AI model run the world. The model decides what is possible, what happens next, what each character sees. Fine for a demo. Impossible to ship.

We flipped it. The rules of the world are fixed. The AI only chooses from a list of moves the world allows. It never decides what is possible.

That one inversion is why the platform works across any domain: a consumer launch, a policy intervention, a game world, a clinical trial. The rules change. The engine does not.
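The inversion can be sketched as a validate-then-choose loop: the world enumerates the legal moves, the model only picks among them, and anything outside the list is rejected. A minimal sketch under stated assumptions; the names (`World`, `legal_moves`, `model_pick`) are hypothetical stand-ins, not WeaveStudio's actual API:

```python
import random
from dataclasses import dataclass, field

@dataclass
class World:
    """Fixed rules: the world, not the model, decides what is possible."""
    seed: int
    state: dict = field(default_factory=lambda: {"tick": 0, "log": []})

    def legal_moves(self, agent: str) -> list[str]:
        # Rules live here. The model never sees a move the world forbids.
        moves = ["wait", "speak"]
        if self.state["tick"] > 0:
            moves.append("reply")
        return moves

    def apply(self, agent: str, move: str) -> None:
        # Reject anything outside the allowed list.
        assert move in self.legal_moves(agent), "illegal move"
        self.state["log"].append((self.state["tick"], agent, move))
        self.state["tick"] += 1

def model_pick(rng: random.Random, moves: list[str]) -> str:
    # Stand-in for the LLM: it only chooses from what the world allows.
    return rng.choice(moves)

def run(seed: int, ticks: int = 5) -> list:
    rng = random.Random(seed)  # one seed, one story
    world = World(seed=seed)
    for _ in range(ticks):
        for agent in ("ami", "ken"):
            world.apply(agent, model_pick(rng, world.legal_moves(agent)))
    return world.state["log"]

# Same seed, same story, move for move.
assert run(42) == run(42)
```

Because the model's only degree of freedom is the choice among allowed moves, replaying with the same seed reproduces the run exactly.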

[Diagram]
OTHERS: the AI model decides what is possible, decides what happens, decides what each character sees, and says the dialogue; the world is wherever the AI says.
WEAVESTUDIO: rules of the world are fixed, provable, unchanging; the world state is one source of truth; the AI model picks an allowed move.
Consequence

Six things you cannot do when the AI runs the world.

This is why the well-known agent demos are demos. Google's Concordia, Stanford's Smallville, Altera's Project Sid. Impressive on video. Unshippable in production, no matter the domain.

The platform

An engine for synthetic societies.

The engine is domain-agnostic. It knows nothing about consumer research, gaming, policy, or sales. Domain lives in a template you write on top. The same engine runs any of them.

[Diagram]
YOU DESIGN: Places (rooms, channels, markets) · People (traits, networks, histories) · Rules (what can happen, when) · Live data (prices, news, schedules)
THE ENGINE: WeaveStudio, same answer twice, every cause on file, any domain
YOU GET BACK: A full story (everyone's moves, in order) · Replay & fork (save, change, compare) · Interviews (ask any character why) · Cheaper models (trained on your runs)
The moat

Five things nobody else can offer.

01
Same answer, twice.
Press run on the same brief and you get the same story, move for move. The basis of everything else.
02
What-if on rails.
Save a moment. Change one thing. Re-run. The before-and-after is fair because everything else is held identical.
03
Receipts on every outcome.
Every event links back through the chain of causes that produced it. The room can defend the answer.
04
Nobody sees everything.
Characters only see what they should. Information asymmetry is a rule of the world, not a suggestion.
05
Each run is cheaper than the last.
Simulations produce training data. Smaller models replace expensive ones. Cost drops 80 to 95% per decision.

All five hold across every domain. That is the platform.
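The first two moat items compose into the fork-and-compare mechanic: save a moment, change one thing, re-run both arms on the same seed, and the difference is exactly attributable to the change. A minimal sketch, assuming a toy world; `simulate`, `rebuttal`, and the drift model are hypothetical, not the engine's API:

```python
import copy
import random

def simulate(state: dict, rng_seed: int, steps: int) -> dict:
    """Advance a toy world deterministically from a given state."""
    rng = random.Random(rng_seed)
    state = copy.deepcopy(state)  # never mutate the saved moment
    for _ in range(steps):
        # Sentiment drifts; a live rebuttal adds a fixed boost each step.
        drift = rng.uniform(-2.0, 1.0) + (1.5 if state["rebuttal"] else 0.0)
        state["sentiment"] += drift
        state["trace"].append(("drift", round(drift, 3)))  # every cause on file
    return state

start = {"sentiment": 50.0, "rebuttal": False, "trace": []}

# Save a moment, change one thing, re-run. Same seed on both arms,
# so the before-and-after differs only by the intervention.
baseline = simulate(start, rng_seed=7, steps=20)
branch = copy.deepcopy(start)
branch["rebuttal"] = True  # the one change
intervention = simulate(branch, rng_seed=7, steps=20)

lift = intervention["sentiment"] - baseline["sentiment"]
assert abs(lift - 1.5 * 20) < 1e-6  # twenty rebuttal boosts, nothing else
```

Holding the seed fixed is what makes the comparison fair: every random draw is identical across arms, so no noise leaks into the measured lift.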

Landscape

Everyone else is either a demo or a survey.

Player | Where they come from | Where they break
Concordia | Google DeepMind | Runs twice, different answers. Nothing to trace.
Smallville | Stanford paper | Research prototype. No product. No replay.
Project Sid | Altera, $9M+ raised | AI still runs the world. No scoped vision. No cause trail.
AgentSociety | Research group | Scales large. No replay. No branching. No rules.
Synthetic Users, Aaru, Evidenza | Early-stage startups | Sell the output, not the engine. Cannot repeat or trace.
CrewAI, AutoGen, LangChain | Agent frameworks | Orchestrators, not worlds. No rules layer.
Ipsos, Nielsen, Kantar | Traditional research | Three to six weeks. No what-if. Stated preference, not revealed behavior.

The pattern is simple. Competitors are demos (not shippable) or surveys (no rules). The layer between is a world with physics. That layer is empty.

Reach

Eight worlds. One engine.

The same platform runs all of the below. Each is a template written on top, not a separate product we rebuilt. The engine stayed still. The domain changed.

Lokara
Consumer Research
"Will this launch work, and why, for whom?"
LoreEngine
Gaming Narratives
"What happens in this world when the player logs off?"
CityLab
Public Policy
"What actually happens under rent control?"
DealForge
Sales & GTM
"How does this pitch land with a skeptical CFO?"
TrialMesh
Clinical Trials
"Which patients will drop out, and when?"
MarketMind
Financial Markets
"How does this shock propagate through the market?"
OrgScope
Org Design
"What breaks if we merge these two teams?"
PollPulse
Campaigns
"How does the message travel through the bubble?"

Lokara is our first commercial product. The other seven are built and waiting. We bring them online when the market signals, not when the engineering allows.

One example

A Tokyo skincare launch, run before it shipped.

This is the platform running a consumer-research template. Thirty consumers and three creators across six Tokyo wards and four channels: TikTok, Instagram, a skincare community, and a private group chat.

At second 75, a skeptic turns sentiment negative. Baseline arm lands at 14% purchase. The intervention branch wakes a creator with a targeted rebuttal. Purchase lifts to 24.2%. Seventy-three percent of the lift traces back to that one rebuttal.

Replay on the same brief, get the same story. Swap the template and the same engine runs a game world, a policy change, or a sales call instead.

[Chart: launch window, 0s to 260s. Skeptic breaks at 0:75; rebuttal at 1:40. Baseline 14% · Intervention 24.2%]
Sentiment over time. Both arms identical until the fork.
Why now

Three forces crossed the line at the same moment.

  • 01 · AI models can hold a character. They speak in persona across hundreds of turns without drifting. Two years ago, not possible.
  • 02 · Custom models got cheap. Training a smaller model on your own data costs a few hundred dollars. The savings loop is live for the first time.
  • 03 · Trust in stated opinion collapsed. Across research, policy, campaigns, and product, buyers everywhere are shopping for something that shows behavior, not attitude.

Two years ago we could not have built this. Two years from now, someone else builds it if we do not.

[Chart: three curves crossing into the viable zone between 2020 and 2027: AI in-character, cheap custom models, stated-opinion fatigue]
Market

One engine. Many markets.

Segment | Global (2025) | Reachable
Consumer research | $90B | $12B
Simulation software | $15B | n/a
Gaming NPCs and narrative | $3B | $0.5B
Sales training | $7B | $1B
Policy modeling | $2B | $0.3B
Total | ~$117B | ~$14B

Because the engine is domain-agnostic, we do not pick one market and defend it. We pick the one that lands first, then open the next.

[Chart: nested markets. Total market $117B · reachable $14B · wedge: consumer first, then the next]
Go-to-market

Four stages. One arc.

A design partner signs. The six-week pilot ships a case study. The case study lands the next three. First-vertical customers eventually open the engine for their own developers.

1 · DISCOVERY: design partner via conferences, academic channels, warm intros (two to three weeks)
2 · PILOT: six-week build; co-design, run, deliver a case study (six weeks)
3 · PRODUCTION: shipped logo, live inside the customer workflow (ongoing)
4 · EXPANSION: platform access; second vertical or engine for their devs (year two and on)

Inbound only, for now. Academic papers, conferences, warm intros. We do not sell what we have not built.

Traction

The engine is built. The first wedge is active.

Shipped

  • Full platform: replay, branching, cause tracing, scoped vision.
  • Custom-model training pipeline end to end.
  • Eight domain templates built. One commercial, seven ready.
  • Flagship run on the consumer-research template.
  • Academic partnership: Hamilton lab at George Mason University.
ROADMAP · 12 MONTHS
Q1: 5 pilots across domains
Q2: First vertical alpha; Lokara live
Q3: 10 customer logos; platform hardened
Q4: Platform API opens; savings loop live
YEAR 2: Gaming and sales verticals (LoreEngine and DealForge launch); Series A, ready by month 15
The ask

$5M seed. 18 months. Series-A ready by month 15.

55% · $2.75M
Engineering
15 hires over 8 quarters. Platform hardening. Vertical templates on deck.
20% · $1.0M
Go-to-market
Two GTM hires. Conferences across verticals. Design partners converted to shipped logos.
15% · $0.75M
Compute
Model budget for pilots. Custom-model training runs. Data infrastructure.
10% · $0.5M
Operations
Finance, legal, success, the security groundwork for enterprise deals.
Milestones: 5 customer logos shipped · 2 verticals in production · 1 published paper · platform API open to developers
The math

Why these numbers.

Three scenarios. One recommended. $5M is the raise that gets us to Series A on milestones rather than timeline. Less is a bridge. More is a second vertical pulled forward.

Floor · $3M · 9 months of runway, to a bridge
Ship first vertical to production. Land three design-partner logos. Raise an A on thin traction, or bridge for more runway.
Target · $5M · 18 months of runway, to Series A
Full 12-month roadmap. Five logos live by month nine. Platform API open by month twelve. A-round on milestones at month fifteen.
Cap · $8M · 26 months of runway, plus a second vertical
Everything in the $5M plan, plus a second vertical (gaming or sales) in production before the A.
Valuation framing, for the lead conversation

Comps. Agent-platform seeds in the last 18 months range from $8M to $25M pre-money. Project Sid raised $9M-plus without the physics layer. Aaru and Evidenza priced higher on weaker technical claims.

Our anchor. Production platform shipped. Eight templates built. Design partners in motion. Academic collaboration live. This is not a deck-and-a-prototype raise.

Terms. Priced round or SAFE, open. Standard pro-rata. Board seat open to the lead.