Executive Summary
AI labs are caught in a fundamental contradiction that reveals the true distance to AGI: they claim human-level AI is imminent while investing billions in pre-baking specific skills through reinforcement learning, an approach that becomes pointless if models can truly learn on the job the way humans do. The current training paradigm requires building custom pipelines for every micro-task, from identifying macrophages in lab slides to crafting PowerPoint presentations. Despite impressive capabilities, current models lack the core learning mechanism that makes humans valuable: the ability to acquire context-specific skills through experience and semantic feedback.

The economic evidence supports this view. If models were truly human-equivalent, they would generate trillions in revenue rather than falling short by orders of magnitude. Meanwhile, the scaling laws that drove pre-training success do not carry over to reinforcement learning, where early research suggests that million-fold increases in compute yield only minimal improvements.

The path forward lies in solving continual learning, which will likely progress incrementally, as in-context learning did, rather than delivering a sudden capability jump. This mismatch between AGI expectations and training reality creates opportunities for investors who can identify which companies are positioned for the actual development path rather than the hyped timeline.
Key Insights
What Dwarkesh Patel said: "If we're actually close to a human-like learner, then this whole approach of training on verifiable outcomes is doomed."