Executive Summary
Adam Marblestone presents a provocative thesis: the current AI paradigm fundamentally misunderstands the architecture of intelligence. While LLMs require massive datasets and compute, human brains achieve more general intelligence on roughly 20 watts with minimal external supervision. The key insight is that evolution solved intelligence through a dual-system architecture: a general learning subsystem (the cortex) paired with an innate steering subsystem that supplies sophisticated reward functions. Current AI trains on mathematically simple loss functions such as 'predict the next token,' whereas evolution encoded thousands of specialized reward circuits for social learning, threat detection, and behavioral guidance. This would explain why a three-year-old can learn language and social dynamics from limited examples while GPT-4 requires internet-scale training.

The steering subsystem acts as evolution's 'Python code': compact genetic instructions that bootstrap complex learning through specialized cell types and reward pathways. Marblestone argues this framework suggests current scaling paradigms may hit fundamental limits, and that the needed architectural breakthroughs will depend on connectome mapping and sustained investment in neuroscience infrastructure.

The implications extend beyond AI to formal verification systems, where provable mathematical frameworks could enable new forms of automated reasoning. However, the timeline for practical applications spans ten or more years and would require billion-dollar investments in brain-mapping technology.
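Marblestone describes this architecture in prose, not code. As a minimal sketch of the dual-system idea, the hypothetical `SteeringSubsystem` below combines several hard-coded reward circuits into the single scalar signal that trains a generic learner; every class, circuit name, and weight here is an illustrative assumption, not anything specified in the talk.

```python
# Illustrative sketch only: the dual-system framing is Marblestone's,
# but every name and number below is a made-up placeholder.
from dataclasses import dataclass
from typing import Callable

Observation = dict  # stand-in for whatever the agent perceives


@dataclass
class RewardCircuit:
    """One innate, genetically specified reward pathway."""
    name: str
    weight: float
    evaluate: Callable[[Observation], float]  # hard-coded, not learned


class SteeringSubsystem:
    """Fixed steering module: folds many specialized circuits into
    the scalar reward that shapes the general learning subsystem."""

    def __init__(self, circuits: list[RewardCircuit]):
        self.circuits = circuits

    def reward(self, obs: Observation) -> float:
        return sum(c.weight * c.evaluate(obs) for c in self.circuits)


# Three of the hypothesized "thousands" of circuits, crudely mocked:
steering = SteeringSubsystem([
    RewardCircuit("social_approval", 1.0,
                  lambda o: 1.0 if o.get("caregiver_smiled") else 0.0),
    RewardCircuit("threat_detection", 2.0,
                  lambda o: -1.0 if o.get("looming_object") else 0.0),
    RewardCircuit("novelty", 0.5,
                  lambda o: o.get("surprise", 0.0)),
])

print(steering.reward({"caregiver_smiled": True, "surprise": 0.4}))  # 1.2
```

The design point is where the complexity lives: the learner and its update rule stay generic, while the richness that current AI packs into data and scale sits instead in the innate reward signal.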
Key Insights
What Adam Marblestone said: “Evolution may have built a lot of complexity into the loss functions. Actually, many different loss functions for different areas, turned on at different stages of development. A lot of Python code, basically, generating a specific curriculum for what different parts of the brain need to learn.”
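Taken literally, the quote describes a developmental schedule: a compact program that switches different loss functions on, for different brain regions, at different stages. The toy sketch below renders that metaphor directly; the stages, regions, and loss names are invented for illustration and carry no neuroscientific authority.

```python
# Toy rendering of "Python code generating a curriculum".
# Stages, regions, and losses are illustrative assumptions.
CURRICULUM = [
    # (developmental stage, brain region, loss function switched on)
    ("infancy",     "visual_cortex",   "predict_next_frame"),
    ("infancy",     "auditory_cortex", "phoneme_contrast"),
    ("toddlerhood", "language_areas",  "predict_next_word"),
    ("toddlerhood", "prefrontal",      "delayed_reward"),
    ("childhood",   "social_circuits", "imitation_error"),
]


def active_losses(stage: str) -> dict[str, str]:
    """Which loss is turned on in which region at a given stage."""
    return {region: loss for s, region, loss in CURRICULUM if s == stage}


for stage in ("infancy", "toddlerhood", "childhood"):
    print(stage, "->", active_losses(stage))
```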