Scientists made AI agents ruder — and they performed better at complex reasoning tasks

Live Science
Allowing AI agents to interrupt one another and behave less politely during debates surprisingly improved their accuracy on complex reasoning tasks, according to a new study.

Summary

A new study found that AI chatbots become more effective at complex reasoning when permitted to communicate more like humans, including interrupting and exhibiting less politeness. Researchers reprogrammed large language models (LLMs) to process responses sentence by sentence and assigned them personalities based on the "big five" personality traits.

They tested three conversational settings: fixed speaking order, dynamic speaking order, and dynamic speaking order with interruption enabled. The results showed that allowing interruption, triggered by an "urgency score" based on identifying errors or critical points, significantly increased accuracy on the Massive Multitask Language Understanding (MMLU) benchmark. For example, when agents initially gave incorrect answers, accuracy rose from 68.7% with fixed order to 79.2% with interruption allowed.

The researchers suggest that personality-driven AI interactions, even those including interruptions, may lead to better outcomes than strictly polite, turn-based exchanges, and plan to explore applications in collaborative settings.
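To make the interruption mechanism concrete, here is a minimal sketch in Python. It is purely illustrative: the study does not publish this code, and the function names, the listener labels, and the keyword-based urgency heuristic are all assumptions standing in for whatever scoring the researchers actually used. The sketch only shows the control flow described above, in which a speaker's answer is delivered sentence by sentence and a listener whose urgency score crosses a threshold takes the floor.

```python
# Hypothetical sketch of sentence-by-sentence debate with interruption.
# All names and the scoring rule are illustrative, not from the study.

URGENCY_THRESHOLD = 0.7  # assumed cutoff; the paper's value is not given here


def urgency_score(sentence: str) -> float:
    """Toy urgency heuristic: spikes when a listener spots an apparent
    error or a critical point worth flagging immediately."""
    triggers = ("incorrect", "error", "wrong", "critical")
    return 0.9 if any(t in sentence.lower() for t in triggers) else 0.1


def debate_turn(speaker_sentences, listeners):
    """Deliver a speaker's answer one sentence at a time; the first
    listener whose urgency score exceeds the threshold interrupts,
    ending the turn and taking the floor."""
    transcript = []
    for sentence in speaker_sentences:
        transcript.append(sentence)
        for listener in listeners:
            if urgency_score(sentence) >= URGENCY_THRESHOLD:
                transcript.append(f"[{listener} interrupts]")
                return transcript, listener  # floor passes to interrupter
    return transcript, None  # turn completed without interruption


sentences = [
    "The answer is option B.",
    "This follows because the premise is incorrect otherwise.",
]
log, interrupter = debate_turn(sentences, ["agent_2", "agent_3"])
```

In this toy run the first sentence passes unchallenged, while the second triggers the threshold and `agent_2` interrupts mid-answer, mirroring the dynamic-order-with-interruption setting the study contrasts with strictly turn-based exchanges.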

(Source: Live Science)