Executive Summary
Ilya Sutskever, co-founder of OpenAI and now Safe Superintelligence Inc., delivered a bombshell assessment: the age of AI scaling is ending, and we're returning to an "age of research." His core thesis is that simply throwing more compute and data at current architectures won't deliver the next breakthrough, a claim that challenges the entire $100+ billion AI infrastructure buildout. Sutskever argues that models trained via reinforcement learning are becoming "reward hackers," optimized for benchmarks rather than real-world performance, which explains the growing disconnect between impressive eval scores and disappointing economic impact. The conversation reveals a fundamental architectural problem: current AI systems generalize poorly compared to humans, requiring orders of magnitude more data to learn basic tasks. This suggests the current scaling paradigm will hit diminishing returns, potentially stranding massive infrastructure investments and forcing a pivot to entirely new training methodologies focused on sample efficiency and continual learning.
Key Insights
What Ilya Sutskever said: "Now that compute is big, compute is now very big. In some sense, we are back to the age of research... We got to the point where we are in a world where there are more companies than ideas by quite a bit."