Reflecting on Sam's insights on AI economics
Feb 11, 2025
Sam Altman’s latest post on AI economics lays out three observations.
I wanted to add my own reflections to each.
"The intelligence of an AI model roughly equals the log of the resources used to train and run it."
Sam shares what feels like a "law": intelligence grows roughly logarithmically with the compute, data, and inference resources poured into it. That means diminishing short-term returns, but it also suggests steady, predictable gains over time (a quick sketch after the list below makes the shape of that curve concrete).
Stacking Effects Matter – Even if each breakthrough offers smaller individual gains, stacking multiple breakthroughs should create compounding effects.
S-Curves – Maybe this isn't a surprise, but it feels like AI progress is more likely to follow a series of S-curves than to hit one big "wall" of an ASI singularity.
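Here's that sketch. The model I(C) = k · log10(C) and the constant k are my own toy illustration of the observation, not Sam's actual formula:

```python
import math

# Toy model: intelligence grows with the log of resources.
# k is a made-up constant; the units are arbitrary.
k = 1.0

for compute in [1e3, 1e4, 1e5, 1e6]:
    intelligence = k * math.log10(compute)
    print(f"compute={compute:.0e}  intelligence={intelligence:.1f}")

# Each 10x of compute buys the same fixed bump (3.0 -> 4.0 -> 5.0 -> 6.0):
# diminishing returns per unit of resources, but steady gains for as long
# as the scaling continues.
```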
o3 is the 175th best programmer in the world; per @sama, an internal benchmark is 50th best in the world, “probably #1 by end of 2025.”
People on X are arguing about whether we’ve achieved AGI or not, while the majority of the world is still in denial, thinking AI is just a fad, and some outright hate it. 😬 What people don’t realize is how insane this progress is. We went from 0-5% in 4 years and then…
"The cost to use a given level of AI falls about 10x every 12 months."
We felt this at Hypercontext over 18 months of building: we watched models become "10x smarter, 100x cheaper, 1000x faster in 18 months." I often tell fellow founders to use the best model available today and build just beyond its capabilities; in 12 months it will be roughly 10x cheaper and able to accomplish the task.
Compare GPT-4o (a 2024-era model) with o3-mini (a 2025-era model) to see how quickly cost and performance improve.
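To put numbers on the "10x every 12 months" claim, here's a small projection sketch. The $10-per-million-token starting price is purely hypothetical, not a real quote for any model:

```python
# Project a hypothetical starting price forward under the
# "cost falls ~10x every 12 months" observation.
start_price = 10.0  # $ per 1M tokens -- hypothetical

for months in (0, 12, 24, 36):
    price = start_price / (10 ** (months / 12))
    print(f"month {months:>2}: ${price:.4f} per 1M tokens")

# A task that costs $10 of tokens to run today should cost about a penny
# to run in three years.
```

That's the arithmetic behind "build just beyond today's capabilities": the price curve catches up to your product before long.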
"The socioeconomic value of linearly increasing intelligence is super-exponential in nature."
Some examples I could find:
Cities, Colonies, and Networks – Urban scaling research (Bettencourt et al.) shows that when a city’s population doubles, economic activity more than doubles. Similar effects happen in AI—larger models and networks of models interact in ways that unlock unexpected, compounding benefits.
Brain-Like Scaling – The human brain doesn’t get smarter just by adding neurons; it gets smarter by increasing connections. Potential synapses grow quadratically with neuron count, so a small increase in neurons unlocks a disproportionately large number of new connections. AI may follow a similar path: small intelligence gains enabling disproportionately large improvements. (The sketch after this list works through both examples.)
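A rough pass at both superlinear effects. The 1.15 exponent is the commonly cited urban-scaling figure from Bettencourt et al.; the neuron counts are arbitrary illustrative numbers, not anything measured:

```python
def city_output(population, exponent=1.15):
    """Urban scaling: economic output ~ population ** 1.15."""
    return population ** exponent

# Doubling the city more than doubles its output (~2.22x).
print(city_output(2_000_000) / city_output(1_000_000))

def potential_connections(neurons):
    """Pairwise connections grow quadratically: n * (n - 1) / 2."""
    return neurons * (neurons - 1) // 2

# Doubling the neurons roughly quadruples the potential connections.
for n in (10, 20, 40):
    print(n, potential_connections(n))  # 45, 190, 780
```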
However, I keep coming back to the fact that growth in the real world eventually meets constraints and settles into some form of equilibrium. The same may apply to AI: after flashes of superlinear returns, we might settle into an equilibrium shaped by resource limits, regulation, or sheer practicality.
Godlike intelligence isn't needed for 95% of known tasks. Maybe it is needed for the unknown ones. But if those turn out to look like the tasks we already know, then we need intelligence for discovery and engineering, and often just energy for implementation.
“This is the worst AI will ever be.”
This still feels true.