A New Laboratory for Human Behavior
Researchers at multiple institutions are now building what they call AI societies: simulated environments populated by AI agents trained to replicate how individual humans and groups behave. The goal is to study social, economic, and political dynamics that are difficult or impossible to test directly on human subjects.
The approach is gaining traction for a straightforward reason: real human experiments are slow, expensive, and ethically constrained. AI agent simulations can model millions of interactions, run thousands of scenarios, and do so in hours rather than years.
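To make the scale argument concrete, here is a minimal, hypothetical sketch of an agent-society simulation loop. The `Agent` class and its fixed cooperation tendency are illustrative assumptions only; real systems of this kind would drive each agent's choices with a language model rather than a random draw.

```python
import random


class Agent:
    """Toy simulated actor with a fixed cooperation tendency.

    Hypothetical stand-in: in a real AI society, each agent's decision
    would come from an LLM conditioned on a persona and situation.
    """

    def __init__(self, coop_prob: float, rng: random.Random):
        self.coop_prob = coop_prob
        self.rng = rng

    def act(self) -> str:
        return "cooperate" if self.rng.random() < self.coop_prob else "defect"


def run_scenario(n_agents: int = 1000, n_rounds: int = 50, seed: int = 0) -> float:
    """Run many interactions and return the overall cooperation rate."""
    rng = random.Random(seed)
    agents = [Agent(rng.uniform(0.2, 0.9), rng) for _ in range(n_agents)]
    coop = total = 0
    for _ in range(n_rounds):
        for agent in agents:
            if agent.act() == "cooperate":
                coop += 1
            total += 1
    return coop / total


rate = run_scenario()
print(f"cooperation rate: {rate:.3f}")
```

Even this toy loop executes 50,000 agent decisions in well under a second, which is the point: a scenario sweep that would take years with human subjects reduces to a parameter grid over `run_scenario`.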
Simile Raises $100M to Commercialize Agent Simulations
Simile, a Palo Alto-based startup, announced a $100 million funding round to build commercial simulation products using AI agents that model human behavior "in any situation." The company's target applications include:
- Policy decision-making: Testing proposed regulations or interventions in simulation before real-world deployment
- Conflict resolution modeling: Running scenarios across negotiation and dispute contexts
- Consumer market simulation: Predicting how populations will respond to product launches, pricing changes, or communications campaigns
Open Questions About Validity
The fundamental challenge for AI societies is behavioral fidelity—how accurately do AI agents actually replicate human decision-making, especially in novel or high-stakes situations? Critics point out that current LLMs are trained on text produced by humans, meaning agent behavior reflects language patterns more than the full complexity of human psychology, embodiment, and social context.
Proponents counter that even imperfect simulations offer value if they surface dynamics and failure modes that human planners would otherwise miss—and that fidelity will improve as training data and agent architectures evolve.