Linear intelligence, superlinear impact?

Reflecting on Sam's insights on AI economics

Feb 11, 2025

Sam Altman’s latest post on AI economics lays out three observations:

  1. AI intelligence scales roughly with the logarithm of resources.
  2. The cost of accessing AI is dropping at an astonishing rate—about 10x every 12 months.
  3. Even linear improvements in AI intelligence can yield super-exponential socioeconomic benefits.

I wanted to add a few reflections of my own on each.


1. Intelligence as a Function of Resources

"The intelligence of an AI model roughly equals the log of the resources used to train and run it."

Sam shares what feels like a "law": that intelligence grows logarithmically with increased compute, data, and inference power. While this means diminishing short-term returns, it also suggests steady, predictable gains over time.
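To make the shape of that claim concrete, here is a minimal sketch (my own toy illustration, not Sam's math; the constants are invented) of what a log law implies: every 10x increase in resources buys roughly the same additive bump in capability.

```python
import math

def capability(resources: float, base: float = 1.0, scale: float = 10.0) -> float:
    """Toy log-law: capability grows with log10 of resources.

    `base` and `scale` are made-up constants for illustration only.
    """
    return base + scale * math.log10(resources)

# Each 10x in resources adds the same fixed amount of capability.
for r in [1e0, 1e1, 1e2, 1e3, 1e4]:
    print(f"resources={r:>8.0e}  capability={capability(r):5.1f}")
```

The flip side is exactly the diminishing returns Sam mentions: going from 1,000 to 10,000 units of resources buys no more capability than going from 1 to 10 did.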

  • On the "IQ" of AI – One way to think about this is in terms of an IQ-like metric. Pre-training has pushed LLMs right up to roughly an IQ of 100, and "test-time" or reasoning models are pushing us beyond that for the first time.

[Chart: IQ of AI in 2025]

  • Stacking Effects Matter – Even if each breakthrough offers smaller individual gains, stacking multiple breakthroughs should create compounding effects.

  • S-Curves – Maybe this isn't a surprise, but it feels like AI progress is more likely to follow a series of S-curves than to hit one big "wall" of an ASI singularity (a quick sketch of what serial S-curves look like follows below).
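To illustrate that serial S-curve picture, here's a toy sketch (the midpoints and rates are invented, not fitted to anything): each breakthrough is a logistic curve that saturates, but summing successive curves produces a steady, stair-stepped climb rather than one vertical wall.

```python
import math

def logistic(t: float, midpoint: float, rate: float = 1.5) -> float:
    """A single S-curve (logistic) centered at `midpoint`."""
    return 1 / (1 + math.exp(-rate * (t - midpoint)))

def serial_s_curves(t: float) -> float:
    """Progress as the sum of successive S-curves.

    Each breakthrough saturates, but the next one picks up the slack.
    The midpoints (2, 6, 10) are arbitrary, for illustration only.
    """
    return sum(logistic(t, m) for m in (2, 6, 10))

for t in range(0, 13, 2):
    bar = "#" * int(10 * serial_s_curves(t))
    print(f"t={t:>2}  {serial_s_curves(t):4.2f}  {bar}")
```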

One data point making the rounds: o3 ranks as the 175th best programmer in the world on one benchmark, and @sama has said an internal model benchmarks as 50th best in the world, "probably #1 by end of 2025."

As AshutoshShrivastava (@ai_for_success) put it on X: "People on X are arguing about whether we've achieved AGI or not, while the majority of the world is still in denial, thinking AI is just a fad, and some outright hate it. 😬 What people don't realize is how insane this progress is. We went from 0-5% in 4 years and then…"

2. The Rapidly Declining Cost of AI

"The cost to use a given level of AI falls about 10x every 12 months."

  • The Fastest Depreciating Asset in History – We felt this at Hypercontext over 18 months of building: models became "10x smarter, 100x cheaper, 1000x faster in 18 months." I often tell fellow founders to use the best model available today and build just beyond its capabilities; within 12 months it will be roughly 10x cheaper and able to accomplish the task (a quick extrapolation of this trend follows after this list).

  • Compare GPT-4o (a 2024-era model) with o3-mini (a 2025-era model) to see how quickly cost and performance have improved.

[Chart: AI Price and Quality in 2025]
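Taken literally, "10x cheaper every 12 months" is just exponential decay in price. A minimal sketch of what that trend implies for planning, assuming a hypothetical $1.00-per-task cost today:

```python
def cost_per_task(initial_cost: float, months: float) -> float:
    """Cost falls 10x every 12 months => multiply by 0.1 ** (months / 12)."""
    return initial_cost * 0.1 ** (months / 12)

# Extrapolating the trend from a made-up $1.00-per-call starting point:
for m in (0, 6, 12, 24, 36):
    print(f"month {m:>2}: ${cost_per_task(1.00, m):.4f} per call")
```

This is the arithmetic behind "build just beyond today's capabilities": a task that is marginally too expensive now costs a tenth as much a year out, if the trend holds.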


3. The Super-Exponential Socioeconomic Impact

"The socioeconomic value of linearly increasing intelligence is super-exponential in nature."

Some examples I could find:

  • Cities, Colonies, and Networks – Urban scaling research (Bettencourt et al.) shows that when a city’s population doubles, economic activity more than doubles. Similar effects happen in AI—larger models and networks of models interact in ways that unlock unexpected, compounding benefits.

  • Brain-Like Scaling – The human brain doesn’t get smarter just by adding neurons; it gets smarter by increasing connections. A small increase in neurons can lead to a quadratic increase in potential synapses. AI may follow a similar path—small intelligence gains enabling disproportionately large improvements (see the numeric sketch after this list).
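Both bullets are really scaling laws, and a few lines make the shapes concrete. A sketch using the roughly β ≈ 1.15 output exponent reported in the urban-scaling literature (Bettencourt et al.) and the standard n(n-1)/2 count of potential pairwise connections:

```python
def urban_output(population: float, beta: float = 1.15) -> float:
    """Superlinear urban scaling: output ~ population**beta
    (beta ~ 1.15 per Bettencourt et al.; normalization constant omitted)."""
    return population ** beta

def potential_synapses(neurons: int) -> int:
    """Potential pairwise connections among n neurons: n * (n - 1) / 2."""
    return neurons * (neurons - 1) // 2

# Doubling a city's population more than doubles its output...
print(urban_output(2_000_000) / urban_output(1_000_000))      # ~2.22x

# ...and doubling neurons roughly quadruples potential connections.
print(potential_synapses(2_000) / potential_synapses(1_000))  # ~4.0x
```

Doubling the city yields about 2.2x the output; doubling the neurons yields about 4x the potential connections. That superlinearity in the inputs is the mechanical story behind super-exponential value from linear intelligence gains.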

However, I keep coming back to the idea that growth in the real world eventually meets constraints and settles into some form of equilibrium. The same may apply to AI: after flashes of superlinear returns, we might settle into an equilibrium shaped by resource usage, regulation, or practicality.

Godlike intelligence isn't needed for 95% of known tasks. Maybe it is needed for the unknown ones. But if those turn out to be like the known tasks, we need intelligence for discovery and engineering, and often just energy for implementation.


Parting Thought

“This is the worst AI will ever be.”

This still feels true.