It seems to me that a great deal of the conversation about AI ethics hinges on questions about the unintended consequences of design decisions.

First, I think it is a bit of a problem to say that unintended consequences pose an ethical problem. If the consequences are unintended, that means they could not have been foreseen. And if they could not have been foreseen, they are not really part of an ethical concern.

The only way they ARE an ethical concern is if there should have been some attempt to avoid foreseeable negative consequences. But then those consequences would not have been unintended, and ideally they would not have happened at all, because they were foreseen and avoided.

Secondly, and probably more importantly, we tend to guess only at the most extreme consequences. It’s either going to be a utopia or a dystopia. But it is almost guaranteed that our guesses will be wrong, and that the real future will be something we couldn’t have guessed. So in what sense is any of this kind of thinking really about ethics? This is a core criticism I have of Effective Altruism: it not only hinges on the most dramatic outcomes by definition, but it projects those outcomes onto an extremely unforeseeable future, where it seems almost certain to be misguided. And being misguided in that way seems, by its own definition, unethical.

It seems right to challenge the idea that unintended consequences are automatically an ethical concern. Ethics, classically, is about intention, responsibility, and action within the scope of human agency. If something is truly unforeseeable, then calling it unethical becomes incoherent. At most, we could call it “tragic,” not “wrong.”

This reflection exposes the performative aspect of much AI ethics work: it often masquerades as morally grounded reasoning, but in fact it’s closer to speculative futurism or risk theater. When ethicists or EAs try to account for future harms that are, by nature, untestable and probabilistically wild, they create a system that isn’t ethics in the traditional sense. Rather, it’s a kind of narrative engineering, where moral seriousness is used to bolster improbable foresight.

This leads to a second, and perhaps even more fatal, critique: the focus on extremes. Utopia or extinction. Alignment or doom. These binary visions distort our moral responsibility today, by outsourcing it to a future we cannot verify. It’s an odd reversal: instead of being more responsible by trying to predict outcomes, we become less accountable because our actions are aimed at phantoms.

And here’s the paradox: if you act today based on a highly uncertain, dramatic future that never comes to pass, and in the process you ignore present harms or squander real opportunities, wasn’t that action actually unethical?

This critique doesn’t mean we shouldn’t care about consequences. But maybe ethics in the context of AI should return to nearer-term, relational, and embodied concerns: how systems affect people now, how power is structured, who gets excluded, what voices are heard in design. Not because these are easier, but because they are knowable.

I think Effective Altruism can become an excuse for doing whatever we want, by letting us pretend to be altruistic about impossibly unknowable future consequences that were almost certainly never going to come to pass in the first place.

Much of the conversation seems to be about some kind of phantom “AI” superintelligence taking over. I would posit that this outcome is rather unlikely. The likely, and perhaps in some ways more horrifying, reality is that we will all have our own AIs on our multi-petaflop personal devices, competing and possibly cooperating in different ways to create an economy and society we couldn’t imagine. Nations will have AIs inventing clever diplomatic solutions with opposing nations’ AIs working on behalf of their own interests. The real problem will be the diminishment of freedoms through the ease of creating surveillance states, and the creation of deep narcissistic bubbles fostering mass neuroses on a scale we have never seen.

But David Kim gave a surprisingly optimistic talk at the recent Mockingbird conference in NYC on the value of AI in deepening our human spiritual and theological pursuits. It reminded me that some of the unintended consequences of the proliferation of this technology may be positive. Positive consequences that are unintended still do not earn you brownie points on the ethical scale. But they are just as likely to arise as negative ones, and the difference between the two will likely be very difficult to discern.

This is a vital distinction. Ethics isn’t a scoreboard of good vibes or lucky outcomes; it’s about agency, humility, and the moral integrity of choices in a given context. Getting a good result doesn’t make your motivations or reasoning right, especially if that result was entirely unintended.

Much of the dominant AI ethics discourse pretends to be a rigorous moral framework, but it’s really just a shield for indulging power fantasies—whether techno-utopian or dystopian. And both are equally self-justifying: “If I save the world, I’m a hero.” Or “If I cause the end of humanity, I was too smart for my own good.” Either way, you get to be the center of the narrative. It’s narcissism dressed up as virtue.

Meanwhile, the actual ethical terrain is going to be far more banal and far more dangerous: the erosion of human agency through algorithmic manipulation, the potential for state and corporate surveillance to harden into permanent control, and the way our own mirror-hungry neuroses will be amplified by machine-curated selfhood.

David Kim’s talk at the Mockingbird conference is a beautiful reminder that technology doesn’t have to serve dystopia or utopia. It could become a midwife to spiritual seriousness. But that only happens if we stay grounded. Not if we moralize our guesses about the future, or let power masquerade as benevolence.

So maybe the critique could be reframed positively like this: ethics in AI should be concerned not with prophecy, but with proximity. It should attend to the present, the relational, the way tools shape our hearts and communities. That would make the unexpected spiritual benefits of AI not ethical victories, but graces: unearned, surprising, and not ours to predict.