If AI-2027 Is Right: What Should We Do?

A practical guide for SaaS companies in light of AI-2027 predictions.

Apr 8, 2025

I've been reviewing the predictions laid out in AI-2027 (Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean) on events leading up to 2027.


They specifically forecast:

  • Mid-2025: A new model that’s ~1000x more powerful than GPT-4 (citing Sam Altman’s hints and their prior track record).
  • 2026: AI assistants (“Agent-1” in their text) that can solve coding tasks very quickly, especially under human oversight (increasing research efficiency by 50%).
  • 2027: AI agents that can code 30x faster than the best human engineers, leading to a massive increase in productivity and a shift in the nature of software development.

Let’s assume, for a moment, that these forecasts are roughly correct. As someone actually running a SaaS company, what do you do?

Below is my take.


Prediction 1: A Model 1000x More Powerful Than GPT-4 by Mid-2025


AI-2027 predicts: An upcoming megamodel that merges multimodal capabilities, code generation, and advanced reasoning, going far beyond GPT-4.

Sam Altman basically alludes to the same in a February 2025 roadmap update aiming to "unify o-series models and GPT," with an April 2025 update suggesting that "o4" technology will go into this unified megamodel.

All of this is to say: yes, this will happen. It's not much of a prediction, really.

Because this will happen

Starting today, retrain your brain:

  • Building software is faster, cheaper, and more efficient for every company. Any prior advantage you may have had with your software will shrink (think nicer UX, more integrations, more features). Take a look at Same.dev, for instance. Instead, value will continue to move toward distribution, relationships, unique data (more on this later), and strongly opinionated workflows (think Linear vs. Notion).
  • If you're a "big startup", consider: Investing in building competitive advantages from deep user relationships, strong brand recognition, and genuinely unique insights rather than just "fast coding."
  • If you're a "small startup": You have an engineering advantage for the first time ever. Remain small, build quickly, and leverage the shortest customer -> build cycle possible.
  • Keep your eye tuned to 18 months from now: You probably aren't ever going to be a frontier SOTA AI lab. Any ideas to venture in that direction are probably a waste of time. Instead, skate to where the puck is going and let the labs do the R&D for you. To do this well, you need to be keenly aware of what ChatGPT will cannibalize next.

Prediction 2: AI Agents Automating a Lot of Our Work by 2026


Start organizing your company around this

  • Internal tools like PR reviewers, QA bots, test suite generators, and release managers go from “assistive” to fully autonomous. OpenAI already measures model performance on internal GitHub PRs. AI-2027 assumes this trend goes vertical.
  • Most software companies will restructure workflows to include autonomous agents. Humans do prioritization, agents execute.
  • The value shifts to management interfaces, control layers, and orchestrators. If everyone has the same AI agents, then what matters is how well you assign work, detect failure, and recover. Think JIRA-for-robots (a minimal sketch of that control layer follows this list).
  • Developer teams shrink. Engineers become agent integrators and AI behavior debuggers.
  • Founders should build products as if they had 10 interns doing repetitive work for free. What would you automate first? Start there.
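
To make "JIRA-for-robots" concrete, here is a minimal sketch of what that control layer could look like: humans set priorities, agents execute, and the orchestrator detects failures, retries, and escalates anything it can't recover to a person. Every name here (Task, run_agent, orchestrate) is hypothetical; the point is the shape of the loop, not any specific agent API.

```python
# A minimal, hypothetical "JIRA-for-robots" control layer: humans prioritize,
# agents execute, and the orchestrator handles failure detection and recovery.
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    QUEUED = "queued"
    DONE = "done"
    FAILED = "failed"
    NEEDS_HUMAN = "needs_human"


@dataclass
class Task:
    description: str
    priority: int              # set by a human during planning
    max_retries: int = 2
    attempts: int = 0
    status: Status = Status.QUEUED
    result: str | None = None


def run_agent(task: Task) -> str:
    """Stand-in for a call to whatever coding agent you actually use."""
    raise NotImplementedError("wire your agent API in here")


def orchestrate(tasks: list[Task]) -> None:
    # Humans decide priority; agents execute in that order.
    for task in sorted(tasks, key=lambda t: t.priority):
        while task.attempts <= task.max_retries:
            task.attempts += 1
            try:
                task.result = run_agent(task)
                task.status = Status.DONE
                break
            except Exception:
                task.status = Status.FAILED
        if task.status is not Status.DONE:
            # Recovery path: a human reviews whatever the agents couldn't finish.
            task.status = Status.NEEDS_HUMAN
```

The interesting product work lives in the last two branches: how you detect that an agent has actually failed, and how gracefully you hand the mess back to a human.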

Prediction 3: Superhuman Coding & AI Research


Strategize now so you're ready

  • If you're a startup:
    • If AI agents can code 30x faster than your best engineer, and you can afford to run 30 of them for the same cost, the shape of your company changes. Experimentation speed explodes. You can run 100 variations of a feature in parallel. The bottleneck becomes knowing exactly what to build. The limiting factor is signal: what do users actually want?
    • This makes user insight both your lead and your speed. Companies that take direct user feedback loops (conversations, analytics, embedded prompts, support logs, domain intuition) seriously have a huge advantage: they can feed that signal back into their product overnight.
    • You should also expect competitors to clone your feature set quickly. If it can be seen and described, it can be copied.
  • As a founder:
    • Why should your company buy software at all when an AI agent can build and customize what you need overnight?
      • For integrations with software that's walled off to agents (Facebook, X, etc.) or already ubiquitous (Google Sheets, etc.).
      • When the cost is lower than the value of your attention over time? When the software helps enforce better patterns that you want but don't know how to build and maintain? When there are social benefits for a market to use the same software (LinkedIn, Flexport, Shopify, etc.)?
    • If that's what and when you would buy, consider how you would sell to another founder. Why should they buy your app given the same?
  • Rise of personal software:
    • At one point, only the richest people could afford a car. Then, most people needed it. (~50 years)
    • At one point, only the richest people could afford a computer. Then, most people needed it. (~20 years)
    • At one point, only the richest people could afford a cellphone. Then, most people needed it. (~15 years)
    • At one point, only the richest people could afford a smartphone. Then, most people needed it. (~5-10 years)
    • At one point, only the richest people could afford apps built for their specific needs. Now, most people need it?
  • Put the "Superhuman coder" on AI research tasks, not just app tasks. And make sure those tasks are things only you can do (and other can't clone).

Parting Thought

AI benchmarks have historically been good predictors of rapid progress: what gets measured tends to quickly improve.

If we take the predictions of AI-2027 seriously, then we must assume frontier labs saturate most known benchmarks. As founders, our job is to build tech that's only possible if that level of intelligence is achieved and, more importantly, in a way that's not rendered obsolete by that same intelligence.

Here’s a concise thesis about where to build value that general-purpose AI won’t easily commoditize over the next few years:

Where to Build Your Moat

Exclusive, Hard-to-Collect Real-World Data (The Strongest Moat)

Data that's uniquely yours, especially if it's expensive, regulated, or logistically challenging to gather, is your strongest shield. For example:

  • Tesla leads in autonomous driving data because its fleet captures roughly 100,000 real-world driving miles every minute. Waymo might catch up, but Tesla's approach is profitable while Waymo's isn't, and no other competitor can realistically replicate that data collection.
  • Flatiron Health transformed oncology treatment through exclusive patient records gathered painstakingly from hundreds of medical centers.

Consider industries where regulatory, ethical, or practical hurdles prevent competitors and frontier labs from easily acquiring similar datasets. If you can uniquely gather the data required for a reward function in your industry, your position is nearly unassailable. When the AI super coder arrives, you can RL-finetune a model on your data in a way no one else can.
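
To make the reward-function point concrete, here is a minimal sketch, assuming you hold proprietary outcome records (which treatments, quotes, or routes actually worked) that competitors can't collect. The names (ProprietaryOutcome, reward) and the token-overlap scoring are purely illustrative; a real scorer would be domain-specific, and the RL fine-tuning loop itself would come from whatever framework you use.

```python
# A hypothetical reward function built on outcome data only you have gathered.
from dataclasses import dataclass


@dataclass
class ProprietaryOutcome:
    prompt: str           # the situation your users faced
    good_answer: str      # what actually worked, per your private records
    outcome_score: float  # the measured result, e.g. margin or recovery rate


def reward(model_output: str, record: ProprietaryOutcome) -> float:
    """Score a model's output against outcomes only you have observed."""
    # Token overlap is just a stand-in for a real domain-specific scorer.
    predicted = set(model_output.lower().split())
    actual = set(record.good_answer.lower().split())
    overlap = len(predicted & actual) / max(len(actual), 1)
    # Weight agreement by how well that answer actually performed in the field.
    return overlap * record.outcome_score
```

You would hand a function like this to whatever RL fine-tuning framework you end up using, as the scorer for sampled completions. The moat is the outcome records behind it, not the code.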

Deeply Embedded Workflow Integrations (The Stickiest Moat)

Standalone apps become commoditized when AI can generate them instantly. Instead, defensibility lies in deeply integrating into widely adopted systems where users are entrenched, approvals are complex, and switching is costly. For example:

  • Zapier/Stripe: Embedded across thousands of business apps, costly and difficult to replace—even with perfect coding automation.
  • Salesforce/Shopify: Businesses run critical workflows through them; migrating away risks disruption and demands extensive integration re-approvals.

Build solutions that become part of entrenched user workflows—requiring integrations, permissions, and deep user buy-in that generic AI can't replicate overnight.

Human-in-the-Loop Oversight (Critical Where Required by Regulation & Trust)

In many industries, removing human judgment entirely isn't just unwanted; it's often legally or ethically impossible.

Human judges, juries, and lawyers will always be required to represent humans in court. Governments will want a human accountable for the decisions made by AI systems that could have tax or criminal implications.

Look for workflows where professional judgment and accountability remain mandatory (and in your opinion should remain mandatory even when AI is better than humans). Design your products around seamless human-in-the-loop integration. These industries won’t rapidly commoditize precisely because pure automation isn’t actually what's desired.
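
As a rough sketch of what seamless human-in-the-loop integration could look like in code: the AI drafts, a named professional signs off, and the sign-off is recorded before anything executes. All names here are hypothetical; the design choice that matters is that the accountable human is captured in the audit trail rather than bolted on afterwards.

```python
# A hypothetical human-in-the-loop gate: AI drafts, a named human approves,
# and the approval is logged before the work is acted on.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Draft:
    content: str
    model: str            # which AI system produced the draft


@dataclass
class Approval:
    approver: str         # the accountable human, by name
    approved: bool
    note: str
    timestamp: datetime


def execute(draft: Draft, approval: Approval) -> str:
    """Only act on AI-drafted work after a named human has signed off."""
    if not approval.approved:
        raise PermissionError(f"Rejected by {approval.approver}: {approval.note}")
    # Persist the audit trail before acting (stubbed as a print here).
    print(f"{approval.timestamp.isoformat()} approved by {approval.approver}")
    return draft.content


# Usage (purely illustrative)
draft = Draft(content="Proposed tax filing adjustments ...", model="agent-1")
signoff = Approval("J. Alvarez, CPA", True, "Figures reviewed", datetime.now(timezone.utc))
execute(draft, signoff)
```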