The AI Apocalypse is NOW
I’ve been thinking about the Effective Altruist (EA) fear that an AI superintelligence will wipe out the human race.
There is a thought experiment in which we create a superintelligence and direct it to make paper clips. It becomes so efficient at and fixated on its goal that it sacrifices humanity just to make paper clips more efficiently.
I have been thinking that this is a far-fetched fear. People have been watching too many sci-fi movies like The Matrix and Terminator. What about Star Trek? Why can’t that be our future with intelligent machines? All we have to do is NOT hook the computers up to weapons, and bingo! No apocalypse! Right?
Not so fast. Let’s suppose we do NOT hook up the machines to weapons. Let’s suppose the superintelligent AI can only chat with humans. It would still be able to play 17-dimensional chess with a number of humans, leading them down a narcissistic path until they became convinced that they needed to wipe out Canada. Or perhaps the AI would convince a leader of that, and then convince a certain key group of followers to trust the guy. The machine would achieve what it wanted without physically being hooked up to weapons.
A Conversation with AI about EA Ethics
I think that every philosophy ultimately seeks the good of all. The distinction with the ideas driving Effective Altruism is that you can do things that might seem selfish, mean, or greedy if those things actually help the most people over time. I know I’m oversimplifying, but that is my gut feeling about it. I’m not trying to be adversarial; I think the “triage” nature of the idea has a certain truth to it. Passing over a dying person to help one who can be saved seems mean, but it is clearly the right thing to do. Given the influence of this ethical framework, though, it bears some critical examination.
Effective Altruism Quick Rundown
Effective Altruism is a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world. In tech and AI circles, EA often focuses on:
- Global catastrophic risks, especially from misaligned AI.
- Maximizing long-term future value (longtermism).
- Cost-effectiveness in charitable giving (e.g., saving the most lives per dollar).
- A utilitarian mindset, aiming to do the “most good” rather than just “some good.”
EA has significantly influenced AI safety research, funding priorities, and policy discussions—especially around existential risks.