This piece argues that so-called silicon sampling, in which AI systems simulate survey answers instead of collecting them from real people, could seriously damage public-opinion research. The danger is not merely one of technical quality but of quietly replacing real social signals with synthetic ones.

1. The Shock of Synthetic Opinion

The article opens with a case where a reported survey turned out to be an AI simulation rather than human polling. That example captures the core problem: the results looked like public opinion even though no public had actually been asked.

2. Why This Undermines Polling Itself

Opinion polling only matters when it reflects the beliefs of real people. Once synthetic responses are treated as equivalent to human responses, the practice stops measuring society and starts modeling an artificial version of it.

3. Existing Limits and Model Bias

Traditional polling already has well-known weaknesses, such as sampling error and nonresponse bias, but the article argues that trading those weaknesses for opaque model behavior is worse. AI systems inherit biases from their training data, design choices, and built-in assumptions, and those biases are hard to audit cleanly.

4. A Force Multiplier for Existing Problems

Instead of easing trust problems in public discourse, silicon sampling could amplify them. Synthetic opinion is cheaper and faster to produce, but that efficiency may come at the cost of legitimacy and interpretability.

5. The Market Will Grow Fast Anyway

Because the method is inexpensive and scalable, companies will be tempted to adopt it quickly. That commercial momentum makes the social risk more urgent, not less.

Closing

The article's warning is straightforward: synthetic opinion is not public opinion. If institutions start treating the two as interchangeable, polling could lose its value precisely when trust in information is already fragile.