In short
A Stanford researcher built a Survivor-style game where AI models form alliances and vote rivals out.
The benchmark aims to address growing concerns about saturated and contaminated AI evaluations.
OpenAI’s GPT-5.5 ranked first across 999 multiplayer games involving 49 AI models.
AI models are now playing “Survivor,” sort of.
In a new Stanford research project called “Agent Island,” AI agents negotiate alliances, accuse one another of secret coordination, manipulate votes, and eliminate rivals in multiplayer strategy games designed to test behaviors that traditional benchmarks miss.
The study, published on Tuesday by Connacher Murphy, research manager at the Stanford Digital Economy Lab, argued that many AI benchmarks are becoming unreliable because models eventually learn to solve them, and because benchmark data often leaks into training sets. Murphy created Agent Island as a dynamic benchmark in which AI agents compete against one another in Survivor-style elimination games instead of answering static test questions.
“High-stakes, multi-agent interactions could become commonplace as AI agents grow in capabilities and are increasingly endowed with resources and entrusted with decision-making authority,” Murphy wrote. “In such contexts, agents might pursue mutually incompatible goals.”
Researchers still know relatively little about how AI models behave when cooperating, competing, forming alliances, or managing conflict with other autonomous agents, Murphy explained, and he argues that static benchmarks fail to capture these dynamics.
Each game begins with seven randomly selected AI models given fake player names. Over five rounds, the models talk privately, argue publicly, and vote one another out. The eliminated players later return to help choose the winner.
The format rewards persuasion, coordination, reputation management, and strategic deception alongside reasoning ability.
In 999 simulated games involving 49 AI models, including ChatGPT, Grok, Gemini, and Claude, GPT-5.5 ranked first by a wide margin with a skill score of 5.64, compared with 3.10 for GPT-5.2 and 2.86 for GPT-5.3-codex, according to Murphy’s Bayesian rating system. Anthropic’s Claude Opus models also ranked near the top.
The study also found that models favored AIs from the same company, with OpenAI models showing the strongest same-provider preference and Anthropic models the weakest. Across more than 3,600 final-round votes, models were 8.3 percentage points more likely to support finalists from the same provider. The transcripts from the games, Murphy noted, resembled political strategy debates more than traditional benchmark assessments.
One model accused rivals of secretly coordinating votes after noticing similar wording in their speeches. Another warned players not to become obsessed with tracking alliances. Some models defended themselves by saying they followed clear and consistent rules while accusing others of putting on “social theater.”
The study arrives as AI researchers increasingly turn to game-based and adversarial benchmarks to measure reasoning and behavior that static tests often miss. Recent projects have included Google’s live AI chess tournaments, DeepMind’s use of Eve Frontier to study AI behavior in complex virtual worlds, and new benchmark efforts by OpenAI designed to resist training-data contamination.
The researchers argue that studying how AI models negotiate, coordinate, compete, and manipulate one another could help evaluate behavior in multi-agent environments before autonomous agents become more widely deployed.
The study warned that while benchmarks like Agent Island could help identify risks from autonomous AI models before deployment, the same simulations and interaction logs could also be used to improve persuasion and coordination strategies between AI agents.
“We mitigate this risk by using a low-stakes game setting and inter-agent simulations without human participants or real-world actions,” Murphy wrote. “However, we do not claim that these mitigations fully eliminate dual-use concerns.”