24/7 Space News
ROBO SPACE
New Study Confirms Large Language Models Pose No Existential Risk
by Sophie Jenkins
London, UK (SPX) Aug 13, 2024

ChatGPT and other large language models (LLMs) do not have the capability to learn independently or develop new skills, meaning they pose no existential threat to humanity, according to recent research conducted by the University of Bath and the Technical University of Darmstadt in Germany.

Published as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), the study reveals that while LLMs are proficient in language and capable of following instructions, they lack the ability to master new skills without direct guidance. As a result, they remain inherently controllable, predictable, and safe.

The researchers concluded that despite LLMs being trained on increasingly large datasets, they can continue to be used without significant safety concerns, though the potential for misuse still exists.

As these models evolve, they are expected to generate more sophisticated language and improve in responding to explicit prompts. However, it is highly unlikely that they will develop complex reasoning skills.

"The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study on the 'emergent abilities' of LLMs.

Led by Professor Iryna Gurevych at the Technical University of Darmstadt, the collaborative research team conducted experiments to evaluate LLMs' ability to tackle tasks they had not previously encountered, often referred to as emergent abilities.

For example, LLMs can answer questions about social situations without having been explicitly trained to do so. While earlier research suggested this capability stemmed from models 'knowing' about social situations, the researchers demonstrated that it is actually a result of LLMs' proficiency in a process known as in-context learning (ICL), where they complete tasks based on examples provided.

Through extensive experimentation, the team showed that the combination of LLMs' abilities to follow instructions (ICL), their memory, and their linguistic proficiency can account for both their capabilities and limitations.
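The in-context learning the researchers describe can be pictured as prompt construction rather than training: the model's weights never change, it is simply shown worked examples inside the prompt and asked to continue the pattern. The sketch below illustrates the idea with a hypothetical sentiment-labelling task; the task, examples, and helper function are illustrative assumptions, not drawn from the study itself.

```python
# Illustrative sketch of in-context learning (ICL): no retraining occurs.
# The "learning" is entirely in the prompt, which interleaves worked
# (input, output) examples and ends where the model should continue.

def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt from (input, output) example pairs."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The prompt ends mid-pattern; a model would complete the final label.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The food was wonderful.", "Positive"),
    ("Service was slow and rude.", "Negative"),
]
prompt = build_icl_prompt(examples, "A delightful evening all round.")
print(prompt)
```

On the study's account, a model that answers such queries correctly is not displaying a newly emerged reasoning skill; it is extending the pattern the examples establish, which is why performance collapses when no instructions or examples are supplied.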

Dr. Tayyar Madabushi explained, "The fear has been that as models grow larger, they will solve new problems that we cannot currently predict, potentially acquiring hazardous abilities like reasoning and planning. This concern was discussed extensively, such as at the AI Safety Summit last year at Bletchley Park, for which we were asked to provide commentary. However, our study shows that the fear of a model going rogue and doing something unexpected, innovative, and dangerous is unfounded."

He further emphasized, "Concerns over the existential threat posed by LLMs are not limited to non-experts and have been expressed by some leading AI researchers worldwide. However, our tests clearly demonstrate that these fears about emergent complex reasoning abilities in LLMs are not supported by evidence."

While acknowledging the need to address existing risks like AI misuse for creating fake news or facilitating fraud, Dr. Tayyar Madabushi argued that it would be premature to regulate AI based on unproven existential threats.

He noted, "For end users, relying on LLMs to interpret and execute complex tasks requiring advanced reasoning without explicit instructions is likely to lead to errors. Instead, users will benefit from clearly specifying their requirements and providing examples whenever possible, except for the simplest tasks."

Professor Gurevych added, "Our findings do not suggest that AI poses no threat at all. Rather, we demonstrate that the supposed emergence of complex thinking skills linked to specific threats is unsupported by evidence, and that we can effectively control the learning process of LLMs. Future research should, therefore, focus on other potential risks, such as the misuse of these models for generating fake news."

Research Report: Are Emergent Abilities in Large Language Models just In-Context Learning?

Related Links
University of Bath






The content herein, unless otherwise known to be public domain, is Copyright 1995-2024 - Space Media Network.