    Exposing biases, moods, personalities, and abstract concepts hidden in large language models | MIT News

By bibhuti · February 19, 2026 · Tech · 7 Mins Read



By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they’re far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it’s not obvious exactly how these models come to represent such abstract concepts from the knowledge they contain.

Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode a concept of interest. What’s more, the method can then manipulate, or “steer,” these connections to strengthen or weaken the concept in any answer a model is prompted to give.

    The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model’s representations for personalities such as “social influencer” and “conspiracy theorist,” and stances such as “fear of marriage” and “fan of Boston.” They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.

    In the case of the “conspiracy theorist” concept, the team successfully identified a representation of this concept within one of the largest vision language models available today. When they enhanced the representation, and then prompted the model to explain the origins of the famous “Blue Marble” image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.

The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs that could then be turned up or down to improve a model’s safety or enhance its performance.

    “What this really says about LLMs is that they have these concepts in them, but they’re not all actively exposed,” says Adityanarayanan “Adit” Radhakrishnan, assistant professor of mathematics at MIT. “With our method, there’s ways to extract these different concepts and activate them in ways that prompting cannot give you answers to.”

    The team published their findings today in a study appearing in the journal Science. The study’s co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adserà of the University of Pennsylvania.

    A fish in a black box

    As use of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as “hallucination” and “deception.” In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has “hallucinated,” or constructed erroneously as fact.

    To find out whether a concept such as “hallucination” is encoded in an LLM, scientists have often taken an approach of “unsupervised learning” — a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as “hallucination.” But to Radhakrishnan, such an approach can be too broad and computationally expensive.

    “It’s like going fishing with a big net, trying to catch one species of fish. You’re gonna get a lot of fish that you have to look through to find the right one,” he says. “Instead, we’re going in with bait for the right species of fish.”

    He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks — a broad category of AI models that includes LLMs — implicitly use to learn features.
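To make this concrete, here is a minimal numpy sketch of the basic RFM loop, under the assumption (drawn from the authors’ earlier published work on RFMs) that it alternates between fitting a kernel predictor and re-estimating a feature matrix as the average outer product of the predictor’s gradients. The Laplace kernel, bandwidth, regularization, and iteration count are illustrative choices, not the study’s actual configuration.

```python
import numpy as np

def laplace_kernel(X, Z, M, bandwidth=10.0):
    # K(x, z) = exp(-||x - z||_M / bandwidth), where ||v||_M = sqrt(v^T M v)
    # is the norm induced by the learned feature matrix M.
    XM, ZM = X @ M, Z @ M
    sq = (XM * X).sum(1)[:, None] + (ZM * Z).sum(1)[None, :] - 2 * XM @ Z.T
    dists = np.sqrt(np.clip(sq, 0.0, None))
    return np.exp(-dists / bandwidth), dists

def rfm(X, y, iters=5, bandwidth=10.0, reg=1e-3):
    # Recursive feature machine sketch: kernel ridge regression alternating
    # with an average-gradient-outer-product (AGOP) update of M.
    n, d = X.shape
    M = np.eye(d)  # start from the plain Euclidean metric
    for _ in range(iters):
        K, _ = laplace_kernel(X, X, M, bandwidth)
        alpha = np.linalg.solve(K + reg * np.eye(n), y)  # fit the predictor
        # Gradient of f(x) = sum_j alpha_j K(x, x_j) at each training point.
        G = np.zeros((n, d))
        for i in range(n):
            K_i, dist_i = laplace_kernel(X[i : i + 1], X, M, bandwidth)
            w = alpha * K_i[0] / np.where(dist_i[0] > 0, dist_i[0], np.inf)
            G[i] = -((w @ (X[i] - X)) @ M) / bandwidth
        M = G.T @ G / n  # AGOP update: directions the predictor relies on
    return M, alpha
```

Applied to model representations labeled by concept, the top eigenvectors of the learned M then indicate which directions in representation space carry the concept — roughly the targeted “bait” Radhakrishnan describes.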

Since the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts in LLMs, which are by far the most widely used type of neural network and perhaps the least well understood.

    “We wanted to apply our feature learning algorithms to LLMs to, in a targeted way, discover representations of concepts in these large and complex models,” Radhakrishnan says.

    Converging on a concept

The team’s new approach identifies any concept of interest within an LLM and “steers,” or guides, a model’s response based on this concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, insects, and even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for locations (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).

    The researchers then searched for representations of each concept in several of today’s large language and vision models. They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.

A standard large language model is, broadly, a neural network that takes a natural language prompt, such as “Why is the sky blue?”, and divides it into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model passes these vectors through a series of computational layers, producing at each layer matrices of numbers that are used to identify the words most likely to appear in a response to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.
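As a concrete illustration of that pipeline, the short script below runs a prompt through a small open model and prints the per-layer vectors described above. The gpt2 model is used purely as a lightweight stand-in; the study worked with far larger models.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)

inputs = tok("Why is the sky blue?", return_tensors="pt")  # words -> token IDs
with torch.no_grad():
    out = model(**inputs)

# One tensor per layer (plus the input embeddings), each of shape
# (batch, sequence_length, hidden_dim): the intermediate vectors the
# article describes.
for i, h in enumerate(out.hidden_states):
    print(f"layer {i}: {tuple(h.shape)}")

# The final layer's numbers are decoded back into text, one token at a time.
next_id = int(out.logits[0, -1].argmax())
print("most likely next token:", tok.decode([next_id]))
```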

As an example, to see whether an LLM contains any representation of a “conspiracy theorist,” the researchers would first train the algorithm to identify patterns among the LLM’s representations of 100 prompts that are clearly related to conspiracies and 100 other prompts that are not. In this way, the algorithm learns the patterns associated with the conspiracy-theorist concept. The researchers can then mathematically modulate the activity of that concept by perturbing the LLM’s representations along these identified patterns.
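The team’s released code implements this step with RFMs; as a simplified stand-in, the sketch below uses the common difference-of-means steering direction to show the same perturbation mechanics. The gpt2 model, the chosen layer, the steering strength, and the two toy prompts (standing in for the 100 prompts per side) are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")      # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
layer, strength = 6, 8.0                         # assumed steering layer/scale

def last_token_rep(prompt):
    # Representation of the prompt's final token at the chosen layer.
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        hs = model(**ids, output_hidden_states=True).hidden_states
    return hs[layer + 1][0, -1]  # hidden_states[0] is the embedding layer

# Direction separating concept-evoking prompts from neutral ones
# (toy one-prompt stand-ins for the paper's prompt sets).
direction = last_token_rep("The moon landing was staged.") \
          - last_token_rep("The moon landing happened in 1969.")
direction = direction / direction.norm()

def steer(module, inputs, output):
    # Perturb the layer's output along the concept direction.
    return (output[0] + strength * direction,) + output[1:]

handle = model.transformer.h[layer].register_forward_hook(steer)
ids = tok("Explain the origins of the Blue Marble photo of Earth.",
          return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=40, do_sample=False)[0]))
handle.remove()  # restore the unmodified model
```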

The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified a representation of a “conspiracy theorist” and manipulated an LLM to give answers in that tone and perspective. They also identified and enhanced the concept of “anti-refusal,” showing that a model that would normally be programmed to refuse certain prompts instead answered them, for instance giving instructions on how to rob a bank.

    Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of “brevity” or “reasoning” in any response an LLM generates. The team has made the method’s underlying code publicly available.

    “LLMs clearly have a lot of these abstract concepts stored within them, in some representation,” Radhakrishnan says. “There are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks.”

    This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research. 


