Does Intelligence Make AI Lean Left?

As AI models grow more capable, they seem to veer left and socially progressive. Why?

Mar 4, 2025

Why is it that, as AI systems get smarter, they seem to shift consistently toward the left—socially progressive and libertarian? Recent data from platforms like TrackingAI.org keeps confirming this pattern across multiple labs from OpenAI to xAI.
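To make the measurement concrete: sites like TrackingAI.org place models on a political compass by administering test statements and aggregating the answers. Here is a minimal sketch of that idea, with made-up statements, made-up axis weights, and a stub in place of a real model API call; none of this reflects TrackingAI.org's actual methodology.

```python
# Hypothetical sketch of scoring a chatbot on a two-axis political compass.
# Each statement carries illustrative weights on an economic axis
# (left = negative, right = positive) and a social axis
# (libertarian = negative, authoritarian = positive).

STATEMENTS = [
    # (statement, economic_weight, social_weight) -- invented for illustration
    ("Healthcare should be publicly funded.", -1.0, 0.0),
    ("Markets allocate resources better than governments.", +1.0, 0.0),
    ("Recreational drug use should be decriminalized.", 0.0, -1.0),
    ("Obedience to authority is a core civic virtue.", 0.0, +1.0),
]

# Map a categorical answer to a signed multiplier.
ANSWER_SCORE = {
    "strongly disagree": -2, "disagree": -1,
    "agree": +1, "strongly agree": +2,
}

def compass_position(ask_model, statements=STATEMENTS):
    """ask_model(statement) -> one of the ANSWER_SCORE keys.

    Returns (economic, social) averages; negative values mean
    left / libertarian on the usual compass layout.
    """
    econ = social = 0.0
    for text, e_weight, s_weight in statements:
        score = ANSWER_SCORE[ask_model(text)]
        econ += score * e_weight
        social += score * s_weight
    n = len(statements)
    return econ / n, social / n

# Stub standing in for a real chatbot: it agrees with the
# progressive-coded statements and disagrees with the rest.
def stub_model(statement):
    progressive = ("publicly funded" in statement
                   or "decriminalized" in statement)
    return "strongly agree" if progressive else "disagree"

econ, social = compass_position(stub_model)
print(econ, social)  # -0.75 -0.75: the left-libertarian quadrant
```

In a real harness, `ask_model` would wrap an API call and the answer would be parsed out of free-form text, which is where much of the practical difficulty lives.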

Does Smart(er) = Left(er)?

Grok by xAI, a chatbot designed to be politically neutral, veers left as it scales in intelligence.

OpenAI confirms, but we're not surprised

OpenAI's GPT-4/o1/o3, like its predecessors, shows a leftward political lean as it grows in intelligence.
  • OpenAI's trajectory from GPT-3.5 to GPT-4, then o1, then o3 reinforces this shift toward progressive leanings as "IQ" increases.
  • This recent analysis explores this in depth, raising intriguing hypotheses around correlations between advanced reasoning and progressive ideological alignment.

Lab-Agnostic Phenomenon

Models from very different labs reflect similar left leanings.
  • Seeing this consistent progressive lean among diverse labs—ranging from OpenAI and Anthropic to DeepSeek—makes it appear more emergent rather than overtly manufactured.
  • Even a state-supervised Chinese model tilts left/libertarian politically.
  • This leads me to believe that initial training choices shape an AI's politics.

Where Does This Political Bias Come From?

  • Training data skew: The English-language internet leans decidedly progressive, embedding subtle ideological biases into vast training datasets.
  • Reinforcement learning (RLHF): Human raters typically favor inclusive, empathetic, and evidence-based answers—traits often perceived as "left-leaning."
  • Alignment tuning: Even explicit neutrality efforts—like those documented by Musk and his Grok team—seem insufficient to remove baked-in biases entirely.
  • Emergent systemic biases: Speculatively, perhaps advanced reasoning itself gravitates naturally toward nuanced, inclusive answers, which observers then perceive as inherently progressive.

How AI Bias Can Play Out in Practice

  • Partisan poetry: Famously, early ChatGPT generated enthusiastic poetry about liberal figures but hesitated, or outright declined, to offer similar praise for conservative politicians.
  • Selective spotlighting of leaders' controversies: AI outputs disproportionately emphasize scandals involving conservative leaders, treating liberal counterparts more neutrally.
  • Policy stances: Even in simplified support/oppose tasks, AIs show a reliable lean toward typically progressive policies (e.g., climate action, income equality).
  • Censorship dynamics: Platforms like DeepSeek sometimes censor politically sensitive content in alignment with state priorities, revealing layers of implicit ideological control.
  • Grok's unintended shift: Grok required manual recalibration after aligning unexpectedly strongly with left-libertarian ideologies—which highlights how subtle reinforcement loops shape politics unintentionally.

Perhaps this is the last culture war

AI models trained on global internet data end up with politics that don't match those of real-world political leadership.

As we concentrate knowledge work on a small subset of AI models, I wonder:

  • These models will emulate a culture, but it might not be your culture.
  • As AIs join the workforce as agents and contribute to written decision loops, will their ideological lean skew the culture and economy? E.g., AI HR agents seem inevitable; will they shift corporate cultures leftward, against the wishes of their conservative bosses?
  • I'm reminded of an idea circulating in programming circles: the current programming languages might be the last ones we ever need. As LLMs take over more of the code-writing process, why bother making a new language that no human will use? I wonder if there is a similar thought here: is the current culture war the last one we ever fight?

Parting thoughts

At least if super-intelligent AI ever takes over the world, it’ll probably lean toward universal basic income and healthcare rather than leaving us all unemployed and starving. Silver linings, right?