{"id":89866,"date":"2025-10-14T11:22:05","date_gmt":"2025-10-14T08:22:05","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=89866"},"modified":"2025-10-14T11:25:15","modified_gmt":"2025-10-14T08:25:15","slug":"study-reveals-politeness-reduces-ai-response-accuracy","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/study-reveals-politeness-reduces-ai-response-accuracy\/","title":{"rendered":"Study Reveals Politeness Reduces AI Response Accuracy"},"content":{"rendered":"<p>A new <a href=\"https:\/\/www.arxiv.org\/pdf\/2510.04950\">study<\/a> from the University of Pennsylvania indicates that large language models (LLMs) provide more accurate responses when addressed rudely rather than politely.<\/p>\n<p>The published paper claims that straightforward prompts yielded correct answers 84.8% of the time, whereas softer formulations did so in 80.8% of cases.<\/p>\n<p>Researchers rewrote 50 basic questions on mathematics, science, and history in five different tones\u2014from &#8220;very polite&#8221; to &#8220;very rude.&#8221; They then asked ChatGPT-4o to respond to each.<\/p>\n<p>The results contradicted previous findings suggesting that models supposedly &#8220;encourage&#8221; tactfulness.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;Contrary to expectations, impolite requests consistently outperformed polite ones. This may indicate that new language models respond differently to the tone of the prompt,&#8221; wrote authors Om Dobariya and Akhil Kumar.<\/p>\n<\/blockquote>\n<p>In a 2024 study <a href=\"https:\/\/arxiv.org\/pdf\/2402.14531\">&#8220;Should We Respect LLMs? 
A Cross-Linguistic Study on the Impact of Prompt Politeness on Language Model Performance&#8221;<\/a>, scientists concluded that rude requests often degrade response quality, while excessive politeness offers no significant advantages.<\/p>\n<p>The new findings suggest that modern models have ceased to act as &#8220;social mirrors&#8221; and now function more like strictly utilitarian machines that value directness over politeness.<\/p>\n<p>The work corroborated more <a href=\"https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=5165270\">recent data<\/a> from the Wharton School on the art of crafting prompts for more accurate answers. Tone, it turns out, has become as crucial an element as word choice.<\/p>\n<p>In May, research by scientists from George Washington University <a href=\"https:\/\/forklog.com\/en\/news\/study-finds-politeness-with-ai-models-is-futile\">revealed<\/a> that being polite to AI models is a waste of computational resources.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A new study from the University of Pennsylvania indicates that large language models (LLMs) provide more accurate responses when addressed rudely rather than politely.<\/p>\n","protected":false},"author":1,"featured_media":89867,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"Study shows LLMs respond more accurately to rude prompts than polite ones.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,167],"class_list":["post-89866","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-research"],"aioseo_notices":[],"amp_enabled":true,"views":"401","promo_type":"1","layout_type":"1","short_excerpt":"Study shows LLMs respond more accurately to rude prompts than polite 
ones.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/89866","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=89866"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/89866\/revisions"}],"predecessor-version":[{"id":89868,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/89866\/revisions\/89868"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/89867"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=89866"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=89866"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=89866"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}