are we thinking about AI wrong?
everyone's racing to build one superintelligent agent that can do everything.
but is that actually the move?
humans didn't evolve to solve theoretical physics or complex mathematics.
those abilities are byproducts of the general reasoning skills we developed to survive uncertain, social environments.
sure, it took us hundreds of thousands of years to reach our current intelligence.
but that slow progress was partly because we were bad at sharing information.
knowledge kept getting lost and rediscovered over time.
so, Michael Jordan dropped a paper: "A Collectivist, Economic Perspective on Artificial Intelligence."
(yeah, the machine learning goat, not the basketball guy)
he points out a critical gap: our whole approach to AI is missing social and economic mechanisms.
instead of one massive model scaled with data and compute, Jordan argues for an ecosystem of AI agents.
think specialized agents interacting socially, governed by market-based mechanisms and aligned incentives.
rather than one centralized model predicting everything, multiple specialized agents could each predict a different piece of the problem.
they'd trade information through market incentives to achieve better overall predictions.
this could lead to more robust, interpretable, and socially aligned AI systems.
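to make that concrete, here's a tiny python sketch of the general flavor. to be clear: this is not the mechanism from Jordan's paper, and every name in it (Agent, market_round, the wealth update) is made up for illustration. the idea: specialists sell predictions into a shared pool, accuracy earns payoff, and payoff becomes weight in the next consensus.

```python
# toy sketch: specialized agents selling predictions into a shared market.
# hypothetical names throughout -- this illustrates incentive-weighted
# aggregation in general, not the proposal in Jordan's paper.
import random

class Agent:
    def __init__(self, name, bias, noise):
        self.name = name
        self.bias = bias      # systematic error of this specialist
        self.noise = noise    # how noisy its estimates are
        self.wealth = 1.0     # accumulated payoff = its market weight

    def predict(self, truth):
        # each specialist sees the target through its own distorted lens
        return truth + self.bias + random.gauss(0, self.noise)

def market_round(agents, truth):
    # collect predictions, weight them by each agent's track record
    preds = {a: a.predict(truth) for a in agents}
    total = sum(a.wealth for a in agents)
    consensus = sum(a.wealth / total * p for a, p in preds.items())
    # pay for accuracy: closer predictions keep more wealth, so the
    # next consensus leans harder toward the reliable specialists
    for a, p in preds.items():
        a.wealth *= 1.0 / (1.0 + abs(p - truth))
    return consensus

agents = [Agent("vision", bias=0.5, noise=0.2),
          Agent("language", bias=-0.3, noise=0.5),
          Agent("planner", bias=0.0, noise=1.0)]

truth = 10.0
for _ in range(50):
    consensus = market_round(agents, truth)
print(f"consensus after 50 rounds: {consensus:.2f} (truth: {truth})")
print({a.name: round(a.wealth, 6) for a in agents})
```

the multiplicative payoff here is basically a crude prediction-market scheme: agents that keep being right accumulate influence, agents that keep being wrong fade out, and nobody had to retrain a giant model.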
but shifting to this approach faces serious headwinds: the industry's momentum is all behind scaling one giant model with more data and compute.
still, maybe we're chasing the wrong thing entirely...