- AI skeptics, the general public, and AI alarmists each get AI risks wrong
- Risks are real, and our society has yet to face them — AI skeptics are wrong
- The risks that don’t matter (general public misconceptions)
- Automation and reduced job satisfaction
- Mass unemployment
- AI will change our relationship with labor and the economy, but this is not necessarily a bad thing
- AI will improve the living standards of billions
- Significant boost in productivity and innovation
- Automation can drive down costs, boost efficiency, and facilitate new markets and opportunities
- Improvement in human welfare
- Healthcare, education, personalized services
- Unlocking innovation and creativity
- Automating routine tasks and complementing human intelligence
- Long-term, AI is very likely a complement; short-term, it may be a substitute
- Better decision-making and governance
- Most interesting applications of AI that I see: bio, crypto (can finally make it useful! Solana?), materials science, mathematics, physics, econ/social science, business analytics
- The risks that do matter
- Bioweapons
- AI-enabled weaponry
- Stable authoritarian regimes / surveillance
- Authoritarian control
- Undermining democracy and freedom of expression
- Centralization of power
- Personalized manipulation
- Loss of autonomy and individual freedom
- Economic and social instability (particularly in labor markets)
- The transition period is the problem, not the economy itself
- Inequality and social division
- Good evidence suggests AI benefits the already skilled
- We will need:
- Better education and retraining programs
- The next generation may become AI-dependent idiots with no original ideas, or it may be the most exciting generation ever
- Higher taxes for the rich
- Social support
- Some aspects of alignment
- Blackbox problem / interpretability
- Misalignment
- A practical issue at the current level of LLMs — no need to speculate about AGI
- AGI plausibility
- I don’t rule out the possibility of AGI (plausible within the next century)
- However, not with the current architecture and data sources
- There is no honest way to assign a probability estimate since it’s a black swan
- AGI is a narrative pushed by top AI labs to justify their insane investments
- AGI is not possible with the current generation of neural networks
- Gen AI shows that supposedly hard tasks (writing, image generation) are easier than previously thought
- However, current LLMs still cannot abstract and generalize well (e.g., multiplication, video games); see the sketch after this list
- Doing well on grad school tests is a bad proxy for real-world value
- Recursive self-improvement doesn’t hold: LLMs can automate straightforward tasks but still leave rigorous reasoning to humans
- We can see a slowdown in LLM development: the difference between GPT-4 and GPT-4.5 is much smaller than between GPT-3 and GPT-3.5
- I’m not fully sure of this claim; it would be good to verify
- However, I am interested to see whether Boston Dynamics-style movement algorithm improvements + LLMs + computer vision with large capability improvements can lead to another leap in AI progress
- I still believe deep learning is the most exciting technology of this decade. That’s why I take classes and attend conferences on it
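A concrete way to probe the multiplication claim above is to score exact-match accuracy as operand length grows. A minimal sketch in Python, assuming a hypothetical `ask` callable that wraps whatever LLM API you use (the `perfect_oracle` stand-in is only there so the script runs end to end):

```python
import random
from typing import Callable

def multiplication_accuracy(ask: Callable[[str], str], digits: int, trials: int = 20) -> float:
    """Exact-match accuracy on random digits-by-digits multiplication prompts."""
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
        reply = ask(f"What is {a} * {b}? Reply with only the number.")
        correct += reply.strip().replace(",", "") == str(a * b)
    return correct / trials

def perfect_oracle(prompt: str) -> str:
    # Stand-in that parses the prompt and answers correctly;
    # swap in a real LLM call to run the actual probe.
    expr = prompt.split("is ", 1)[1].split("?", 1)[0]
    a, b = (int(x) for x in expr.split(" * "))
    return str(a * b)

# If accuracy drops sharply as `digits` grows, the model is pattern-matching
# familiar magnitudes rather than executing the multiplication algorithm.
for d in (2, 4, 8, 16):
    print(f"{d}-digit accuracy: {multiplication_accuracy(perfect_oracle, d):.2f}")
```

Plotting accuracy against operand length is what separates memorized arithmetic from a learned algorithm, which is the distinction the generalization bullet is pointing at.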
Pushback I receive and my responses:
- Point 6 seems weak; of course current models don't seem capable, but once many systems are aggregated (I'm thinking Boston Dynamics movement algorithm improvements + LLMs + computer vision with large capability improvements), an AGI-like system (at the very least) seems likely within the next 10 years seeing how fast progress has gone in the last 4. [Mario]
- As I say, I allow the possibility that some new innovation or architecture could lead to AGI, but I maintain that the architecture of modern LLMs is incapable of reaching it. Good point about the combination, though
- Additionally, strong reasoning skills with large context windows (most recently o3) are a strong counter to 6.3; might not be "rigorous" yet, but getting close. [Mario]
- In my opinion, reasoning models are a way to harness the results of LLMs, not a new step that is orders of magnitude different (nothing compared to the GPT-2 to GPT-3 transition)