Too Much Social Media Causes ‘Brain Rot’ Even in AI: Study


Researchers warn that large language models begin to reason less effectively and produce errors more often when they are trained on large volumes of low-quality content, especially material that is popular on social networks. The findings appear in a paper posted on the preprint server arXiv, reports the news service of Nature.

Scientists from the University of Texas at Austin studied how “junk” data – short, superficial posts and sensational material – affects the behavior of AI. The analysis focused on the models’ ability to extract information from long texts, the logic of their responses, their ethics, and the “personality traits” they display.

It turned out that the larger the proportion of such data in training, the more often the models skipped logical steps and gave incorrect answers, including in multiple-choice tests. Research leader Zhangyang Wang recalls an old AI engineering maxim: “garbage in, garbage out.” The new analysis only confirms the importance of careful data selection.

The scientists used a million public posts from a popular social network to retrain the open-source models Llama 3 and Qwen. Llama is an instruction-tuned model, while Qwen is oriented toward reasoning.
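For readers who want a concrete picture of what this kind of setup looks like in practice, here is a minimal sketch of continued training of an open model on a corpus of short social posts, using the Hugging Face transformers and datasets libraries. The file name posts.jsonl and the choice of a small Llama-family checkpoint are assumptions for illustration only, not details taken from the paper.

# Minimal sketch: continued training of an open LLM on short social posts.
# Assumptions (not from the paper): a local file posts.jsonl with a "text"
# field per line, and a small checkpoint the machine can hold in memory.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-3.2-1B"  # hypothetical checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# The "junk" corpus of public posts would live here; each line: {"text": "..."}
dataset = load_dataset("json", data_files="posts.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama-social-posts",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=100,
    ),
    train_dataset=tokenized,
    # Causal language modeling: the model simply learns to continue the posts.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The point of the sketch is only that the model absorbs whatever distribution of text it is fed; nothing in the training loop distinguishes careful writing from clickbait.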

The Impact of Bad Content on AI

After training on low-quality data, Llama’s behavior changed: according to standard psychological questionnaires, its “positive” traits weakened and negative ones strengthened, including signs of narcissism and even psychopathy.

Attempts to correct the situation – for example, additional training on high-quality data or adjusting the instructions – produced only partial improvement. The model still skipped important steps in its reasoning.

Experts say these results highlight the need to filter training data strictly, especially sensational and distorted material. Otherwise, AI systems risk “degrading” and the quality of their responses declining. Whether the negative changes are reversible if the model is later “fed” good data remains an open question.
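As a toy illustration of the kind of filtering experts call for, here is a hedged sketch of a heuristic pre-filter that drops very short or heavily sensationalized posts before they enter a training corpus. The thresholds and the word list are invented for illustration; the study’s actual “junk” criteria, which combine engagement signals with judgments of content quality, are more involved.

# Toy sketch of a heuristic pre-filter for "junk" social posts.
# The thresholds and marker list below are invented for illustration only.
CLICKBAIT_MARKERS = {"shocking", "you won't believe", "goes viral", "must see"}

def looks_like_junk(post: dict) -> bool:
    """Flag posts that are very short or lean on sensational hooks."""
    text = post["text"].strip()
    if len(text.split()) < 10:                      # extremely short post
        return True
    lowered = text.lower()
    if any(marker in lowered for marker in CLICKBAIT_MARKERS):
        return True
    if text.count("!") >= 3:                        # heavy exclamation usage
        return True
    return False

def filter_corpus(posts):
    """Keep only posts that pass the heuristic check."""
    return [p for p in posts if not looks_like_junk(p)]

if __name__ == "__main__":
    sample = [
        {"text": "Shocking! You won't believe what this model did!!!"},
        {"text": "A longer post explaining, step by step, how the experiment was set up and what the results showed."},
    ]
    print(len(filter_corpus(sample)))  # -> 1 post survives the filter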

The topic is especially relevant amid news that social networks intend to expand the collection of user content for AI training – for example, LinkedIn plans to use data from European users in its generative systems starting in November.



Disclaimer: This news article has been republished exactly as it appeared on its original source, without any modification.
We do not take any responsibility for its content, which remains solely the responsibility of the original publisher.


Author:
Published on: 2025-11-01 10:01:00
Source: naukatv.ru



Author: uaetodaynews
Published on: 2025-11-01 06:52:00
Source: uaetodaynews.com

