
Study Reveals Politeness Reduces AI Response Accuracy

A new study from the University of Pennsylvania indicates that large language models (LLMs) provide more accurate responses when addressed rudely rather than politely.

According to the published paper, bluntly worded prompts yielded correct answers 84.8% of the time, whereas very polite formulations did so in only 80.8% of cases.

Researchers rewrote 50 basic questions on mathematics, science, and history in five different tones—from “very polite” to “very rude.” They then asked ChatGPT-4o to respond to each.
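For illustration, here is a minimal sketch of such an experiment, assuming the OpenAI Python SDK and the publicly available gpt-4o model; the tone prefixes, sample questions, and scoring rule are hypothetical stand-ins for the study's actual materials:

```python
# Illustrative sketch only: prefixes, questions, and exact-match scoring
# are assumptions, not the study's actual prompts or grading method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TONE_PREFIXES = {
    "very_polite": "Would you be so kind as to answer the following question? ",
    "polite": "Please answer the following question: ",
    "neutral": "",
    "rude": "Answer this question: ",
    "very_rude": "Figure this out, if you even can: ",
}

QUESTIONS = [
    # (question, expected answer) pairs; the study used 50 such items
    ("What is 12 * 12?", "144"),
    ("Which planet is closest to the Sun?", "Mercury"),
]

def accuracy_for_tone(tone: str) -> float:
    """Ask every question in the given tone and score by substring match."""
    correct = 0
    for question, answer in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": TONE_PREFIXES[tone] + question}],
        )
        reply = response.choices[0].message.content
        if answer.lower() in reply.lower():  # crude correctness check
            correct += 1
    return correct / len(QUESTIONS)

for tone in TONE_PREFIXES:
    print(f"{tone}: {accuracy_for_tone(tone):.1%}")
```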

The results contradicted earlier findings suggesting that models reward tactful phrasing.

“Contrary to expectations, impolite requests consistently outperformed polite ones. This may indicate that new language models respond differently to the tone of the prompt,” wrote authors Om Dobariya and Akhil Kumar.

In a 2024 study, “Should We Respect LLMs? A Cross-Linguistic Study on the Impact of Prompt Politeness on Language Model Performance,” researchers concluded that rude requests often degrade response quality, while excessive politeness offers no significant advantage.

The new findings suggest that modern models have ceased to act as “social mirrors” and function more like strictly utilitarian machines that value directness over politeness.

The work is consistent with recent findings from the Wharton School on the craft of writing prompts for more accurate answers: tone, it turns out, matters as much as word choice.

Earlier, in May, researchers at George Washington University found that being polite to AI models is a waste of computational resources.
