1. AI sceptics, the general public, and AGI alarmists each get AI risks wrong
2. The risks are real and our society has yet to face them - AI sceptics are wrong
3. The risks that don't matter (the general public's fears)
1. Automation and reduced job satisfaction
2. Mass unemployment
1. AI will change our relationship with labour and the economy, but that is not a bad thing
4. AI will improve the living standards of billions
1. Significant boost in productivity and innovation; automation can drive down costs, boost efficiency, and facilitate new markets and opportunities
2. Improvement in Human Welfare
1. Healthcare, Education, Personalized Services
3. Unlocking innovation and creativity by automating routine tasks and complementing human intelligence (in the long term, AI is very plausibly a complement; in the short term, it may be a substitute)
4. Better decision-making and governance
5. The risks that matter
1. Bioweapons
2. Stable authoritarian regimes / surveillance
1. Authoritarian Control
2. Undermining Democracy and freedom of expression
3. Centralization of power
4. Personalized manipulation
5. Loss of autonomy and individual freedom
3. Economic and social instability (particularly in labour markets) - the problem is the transition period, not the end-state economy
4. Inequality and social division -- there is good evidence that AI disproportionately benefits the already skilled
1. We will need better education, retraining programs, higher taxes on the rich, and social support
5. The role of AI in education -- the next generation may become AI-dependent idiots with no original ideas, or it may be the most exciting generation ever
6. Some aspects of alignment:
1. The black-box problem / interpretability
2. Misalignment - already a practical issue at the current level of LLMs; no need to speculate about AGI
3. I don't rule out the possibility of AGI (and believe it is plausible within the next century), but not with the current architectures and data sources; and there is no honest way to put a probability estimate on it, since it is a black swan
6. AGI is a narrative pushed by top AI labs to justify their insane investments
1. See above: I don't rule out the possibility of AGI, but the current generation of NNs is not capable of it.
2. Generative AI only shows that supposedly hard tasks like writing and image generation are not so difficult after all. The current generation of LLMs still cannot abstract and generalize, as their performance at multi-digit multiplication or video games shows (a quick probe of the multiplication claim is sketched after this list). We should remind people that doing well on grad-school tests is a poor proxy for the value one brings to society.
3. Recursive self-improvement doesn't hold either: LLMs can automate straightforward tasks, but they still leave the hard work of rigorous reasoning to humans.
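A minimal sketch of how one could probe the multiplication claim above. The `query_model` wrapper, the prompt wording, and the trial count are all assumptions for illustration, not any lab's actual evaluation harness; plug in whatever LLM client you use.

```python
import random

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM client; replace with a real API call."""
    raise NotImplementedError("plug in your LLM client here")

def multiplication_accuracy(n_digits: int, trials: int = 100) -> float:
    """Fraction of random n-digit x n-digit products the model gets exactly right."""
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        b = random.randint(10 ** (n_digits - 1), 10 ** n_digits - 1)
        reply = query_model(f"Compute {a} * {b}. Reply with the number only.")
        # Keep only digits so commas or spaces in the reply don't cause false negatives.
        answer = "".join(ch for ch in reply if ch.isdigit())
        correct += answer == str(a * b)
    return correct / trials
```

The point of the probe: the long-multiplication algorithm is identical at every operand size, so if accuracy collapses as `n_digits` grows (the pattern commonly reported for current LLMs), the model is pattern-matching on seen examples rather than generalizing the procedure.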
7. I still believe deep learning is the most exciting technology of this decade, and that is why I take classes and attend conferences on it