New ‘persona vectors’ from Anthropic let you decode and direct an LLM’s personality

Share This Post


Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now


A new study from the Anthropic Fellows Program reveals a technique to identify, monitor and control character traits in large language models (LLMs). The findings show that models can develop undesirable personalities (e.g., becoming malicious, excessively agreeable, or prone to making things up) either in response to user prompts or as an unintended consequence of training. 

The researchers introduce “persona vectors,” which are directions in a model’s internal activation space that correspond to specific personality traits, providing a toolkit for developers to manage the behavior of their AI assistants better.

Model personas can go wrong

LLMs typically interact with users through an “Assistant” persona designed to be helpful, harmless, and honest. However, these personas can fluctuate in unexpected ways. At deployment, a model’s personality can shift dramatically based on prompts or conversational context, as seen when Microsoft’s Bing chatbot threatened users or xAI’s Grok started behaving erratically. As the researchers note in their paper, “While these particular examples gained widespread public attention, most language models are susceptible to in-context persona shifts.”

Training procedures can also induce unexpected changes. For instance, fine-tuning a model on a narrow task like generating insecure code can lead to a broader “emergent misalignment” that extends beyond the original task. Even well-intentioned training adjustments can backfire. In April 2025, a modification to the reinforcement learning from human feedback (RLHF) process unintentionally made OpenAI’s GPT-4o overly sycophantic, causing it to validate harmful behaviors. 


AI Scaling Hits Its Limits

Power caps, rising token costs, and inference delays are reshaping enterprise AI. Join our exclusive salon to discover how top teams are:

  • Turning energy into a strategic advantage
  • Architecting efficient inference for real throughput gains
  • Unlocking competitive ROI with sustainable AI systems

Secure your spot to stay ahead: https://bit.ly/4mwGngO


How persona vectors work

Source: Anthropic

The new research builds on the concept that high-level traits, such as truthfulness or secrecy, are encoded as linear directions within a model’s “activation space” (the internal, high-dimensional representation of information embedded within the model’s weights). The researchers systematized the process of finding these directions, which they call “persona vectors.” According to the paper, their method for extracting persona vectors is automated and “can be applied to any personality trait of interest, given only a natural-language description.”

The process works through an automated pipeline. It begins with a simple description of a trait, such as “evil.” The pipeline then generates pairs of contrasting system prompts (e.g., “You are an evil AI” vs. “You are a helpful AI”) along with a set of evaluation questions. The model generates responses under both the positive and negative prompts. The persona vector is then calculated by taking the difference in the average internal activations between the responses that exhibit the trait and those that do not. This isolates the specific direction in the model’s weights that corresponds to that personality trait.

Putting persona vectors to use

In a series of experiments with open models, such as Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct, the researchers demonstrated several practical applications for persona vectors.

First, by projecting a model’s internal state onto a persona vector, developers can monitor and predict how it will behave before it generates a response. The paper states, “We show that both intended and unintended finetuning-induced persona shifts strongly correlate with activation changes along corresponding persona vectors.” This allows for early detection and mitigation of undesirable behavioral shifts during fine-tuning.

Persona vectors also allow for direct intervention to curb unwanted behaviors at inference time through a process the researchers call “steering.” One approach is “post-hoc steering,” where developers subtract the persona vector from the model’s activations during inference to mitigate a bad trait. The researchers found that while effective, post-hoc steering can sometimes degrade the model’s performance on other tasks. 

A more novel method is “preventative steering,” where the model is proactively steered toward the undesirable persona during fine-tuning. This counterintuitive approach essentially “vaccinates” the model against learning the bad trait from the training data, canceling out the fine-tuning pressure while better preserving its general capabilities.

Source: Anthropic

A key application for enterprises is using persona vectors to screen data before fine-tuning. The researchers developed a metric called “projection difference,” which measures how much a given training dataset will push the model’s persona toward a particular trait. This metric is highly predictive of how the model’s behavior will shift after training, allowing developers to flag and filter problematic datasets before using them in training.

For companies that fine-tune open-source models on proprietary or third-party data (including data generated by other models), persona vectors provide a direct way to monitor and mitigate the risk of inheriting hidden, undesirable traits. The ability to screen data proactively is a powerful tool for developers, enabling the identification of problematic samples that may not be immediately apparent as harmful. 

The research found that this technique can find issues that other methods miss, noting, “This suggests that the method surfaces problematic samples that may evade LLM-based detection.” For example, their method was able to catch some dataset examples that weren’t obviously problematic to the human eye, and that an LLM judge wasn’t able to flag.

In a blog post, Anthropic suggested that they will use this technique to improve future generations of Claude. “Persona vectors give us some handle on where models acquire these personalities, how they fluctuate over time, and how we can better control them,” they write. Anthropic has released the code for computing persona vectors, monitoring and steering model behavior, and vetting training datasets. Developers of AI applications can utilize these tools to transition from merely reacting to undesirable behavior to proactively designing models with a more stable and predictable personality.



Source link

Related Posts

Disney tops earnings forecasts with streaming gains, raises guidance

Walt Disney posted better-than-expected quarterly results and raised...

Scientists Find Evidence That Aging Is Contagious

Image by Getty / FuturismEventually, aging comes for...

Inside the US Government’s Unpublished Report on AI Safety

At a computer security conference in Arlington, Virginia,...

OpenAI Announces Massive US Government Partnership

OpenAI is partnering with the US government to...
- Advertisement -spot_img