Can AI Act Like a Negotiator, a Partner, or a Shark? A New Study Says Yes.
A new paper from researchers at the University of Oxford and King's College London suggests that large language models like GPT-4, Claude, and Gemini don't just generate language; they develop distinct strategic personalities. One plays nice. One retaliates. One forgives. None were explicitly trained to do this.
So, what happens when you start relying on AI to help make leasing decisions, underwrite risk, or interact with counterparties?
That’s the question I explored using NotebookLM’s AI podcast feature. I fed it this weighty paper and prompted it to discuss the findings through the lens of a non-technical commercial real estate professional. The result: a 15-minute podcast-style conversation that unpacks what the research says, and why it matters (or doesn’t) to how we do business.
Listen to the Podcast-Style Discussion of this Paper
About the Paper
The paper, Emergent World Models in Large Language Models without Explicit Spatial Inductive Biases, tested whether advanced language models could reason strategically and spatially without being told how. Using 140,000 Prisoner’s Dilemma games, researchers observed each model develop a distinct approach to cooperation, betrayal, and adaptation.
In short:
These AIs behave more like real people than previously thought.
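For readers who want a concrete feel for the game behind the study, here is a minimal Python sketch of an iterated Prisoner's Dilemma. The payoff values and the two hand-coded strategies (tit-for-tat and always-defect) are textbook illustrations of the "forgiving" and "shark" personas described above; this is not the authors' experimental code, which had the language models themselves choose the moves.

```python
# Illustrative iterated Prisoner's Dilemma. Payoffs (5, 3, 1, 0) are the
# textbook defaults, not necessarily the exact values used in the paper.

PAYOFFS = {              # (my move, their move) -> my score
    ("C", "C"): 3,       # mutual cooperation
    ("C", "D"): 0,       # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,       # I defect, they cooperate (temptation)
    ("D", "D"): 1,       # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's last move -- forgiving but firm."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    """The 'shark': defect every round, regardless of history."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []   # each entry is (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

if __name__ == "__main__":
    print(play(tit_for_tat, always_defect))   # (9, 14): the shark wins short-term
    print(play(tit_for_tat, tit_for_tat))     # (30, 30): cooperation pays over time
```

The interesting part of the research is that the models were never handed strategies like these; their personas emerged from how they chose to cooperate, betray, and adapt over thousands of rounds.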
Why the Study Matters to CRE
- Model behavior could influence your outcomes. Whether AI is assisting in lease negotiations, JV structuring, or strategy, a forgiving vs. aggressive model could shift the result.
- It shows LLMs can handle higher-order reasoning. They're not just assistants; they're capable of operating like digital analysts, partners, or advisors.
- You can train your AI to have a specific personality or play a specific role. Just as you wouldn't send your most junior analyst into a hardball negotiation, understanding how your AI "thinks," and shaping how it thinks, will become part of your strategy.