Just like humans, AI can get ‘brain rot’ from low-quality text and the effects appear to linger, pre-print study says


Studies suggest humans experience shorter attention spans, distorted memories, and shifts in self-esteem due to “brain rot,” or a dependence on low-quality online content. Researchers now say the same phenomenon can affect artificial intelligence (AI) models, too.

Heavy consumption of viral short-form videos, such as those on TikTok, is associated with increased anxiety and depression, as well as shorter attention spans in young people, according to a Stanford University study.

In AI models, continual exposure to the kind of short, viral social media posts that make up a growing share of the internet “induces lasting cognitive decline in large language models,” researchers from Texas A&M University, the University of Texas at Austin, and Purdue University found in a new pre-print study.

To test their hypothesis, the researchers continually fed LLMs posts from X that were either short and highly viral or engineered to grab users’ attention. They found this junk training caused “nontrivial” declines in reasoning and long-context understanding, driven in part by a jump in “thought-skipping”: the models increasingly failed to plan out an answer, omitted steps of the reasoning process, or skipped that reflection entirely.

The study, published on arXiv, the open-access scholarly article archive, has not yet been peer-reviewed.

Contrasting with previous criticism of AI models’ kiss-up tendencies, the study found that when LLMs, including Meta’s open-source Llama 3 as well as versions of Alibaba’s Qwen LLM, were trained on junk, they became less agreeable. Worse yet, the researchers found that AI brain rot brought out an LLM’s darkest traits, including higher rates of psychopathy and narcissism.

When researchers tried to “heal” the LLMs with higher-quality, human-written data through a process called “instruction tuning,” the models still showed lingering effects, with a significant gap in reasoning quality compared to their baseline performance before the junk diet.

“The gap implies that the Brain Rot effect has been deeply internalized, and the existing instruction tuning cannot fix the issue. Stronger mitigation methods are demanded in the future,” the researchers wrote.

Because AI models are trained on trillions of data points from across the internet, the researchers warned that LLMs, just like humans, are “inevitably and constantly” exposed to this low-quality content, which could pose risks for the technology as a whole.

Previous studies have shown that training data is essential to AI models’ performance. A July 2024 study published in the peer-reviewed journal Nature found that AI models eventually collapse if continually trained on AI-generated content. Another study showed AI models can be manipulated into breaking their own guardrails using persuasion techniques that work on humans.

All of this adds up to a potential danger posed by AI models trained on low-quality data, one that could ultimately affect human safety.

The researchers’ recommendation: AI companies need to stop merely hoarding massive amounts of data and focus on the quality of the data being used to train their LLMs. They may also need to conduct routine “cognitive health checks” on the models—or else risk a full-blown safety crisis.

“Such persistent Brain Rot effect calls for future research to carefully curate data to avoid cognitive damages in pre-training,” the researchers wrote.
