Without explanations, Artificial Intelligence is just an illusion of control

by Ignacio Gutiérrez Peña · May 6, 2025 · Prescriptive Decision AI

On April 28, 2025, the entire country literally went dark. I am not a power grid engineer; my world is Artificial Intelligence. A clear explanation of what happened that day has not yet been given, but the blackout reminded me of a risk we see every day: entrusting critical decisions to predictive models that no one understands.

At AyGLOO we have argued for years that AI is only useful (especially in critical processes) when it can be explained. Our work consists of holding a magnifying glass up to existing algorithms: showing why they make each prediction, where they can fail, and what will happen if conditions change.

When we apply this magnifying glass to the energy system, four decisive things happen:

1. Blind spots come to light. We identify "critical segments": time slots or locations where the model itself recognizes its own uncertainty.

2. We can anticipate the future. With simplified "twin models" we recreate alternative scenarios in seconds, including exogenous factors, without retraining the original model (see the first sketch after this list).

3. Each alert comes with its why. We offer multi-scale explainability (from the global picture down to the hourly detail) and complete traceability, so that an operator does not just see the alarm but understands the mechanism that triggers it and the levers that can deactivate it.

4. We measure fairness and robustness. We detect biases and misalignments in the data, preventing automatic decisions from amplifying risks precisely when reliability is vital (see the second sketch after this list).
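
To make points 1 and 2 more concrete, here is a minimal, hypothetical sketch in Python: it flags "critical segments" as the hours where a small bootstrap ensemble of forecasters disagrees the most, and distils the ensemble into a lightweight "twin" that answers a what-if question without retraining. The synthetic data, feature names, and thresholds are invented for illustration; this is not AyGLOO's actual tooling.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy hourly data: [hour_of_day, temperature_C, wind_output_MW] -> net demand (MW).
X = np.column_stack([
    rng.integers(0, 24, 2000),
    rng.normal(18, 6, 2000),
    rng.uniform(0, 5000, 2000),
])
y = (25000 + 400 * np.sin(X[:, 0] / 24 * 2 * np.pi)
     - 30 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 300, 2000))

# 1) Blind spots: train a small bootstrap ensemble and flag the hours where
#    its members disagree the most -- the model "recognizing its uncertainty".
members = []
for seed in range(5):
    idx = rng.choice(len(X), size=len(X), replace=True)
    members.append(GradientBoostingRegressor(random_state=seed).fit(X[idx], y[idx]))

preds = np.stack([m.predict(X) for m in members])          # shape (5, n_hours)
uncertainty = preds.std(axis=0)                            # disagreement per hour
critical = uncertainty > np.quantile(uncertainty, 0.95)    # top-5% most uncertain
print(f"Flagged {critical.sum()} critical segments out of {len(X)} hours")

# 2) Twin model: distil the ensemble mean into a small tree, then run a
#    what-if scenario (a +10 °C heat wave) in milliseconds, without retraining.
twin = DecisionTreeRegressor(max_depth=6).fit(X, preds.mean(axis=0))
scenario = X.copy()
scenario[:, 1] += 10.0                                     # exogenous shock: +10 °C
delta = twin.predict(scenario) - twin.predict(X)
print(f"Estimated average demand shift under the heat wave: {delta.mean():+.0f} MW")
```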
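
And a similarly hypothetical sketch of points 3 and 4, using only scikit-learn built-ins: permutation importance for the global picture, a crude one-feature-at-a-time perturbation for the hourly detail of a single alert, and an error comparison across zones as a basic fairness and robustness check. Again, the features, the zone split, and the numbers are assumptions for the example, not our product.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
feature_names = ["hour_of_day", "temperature_C", "wind_output_MW", "zone_is_rural"]

# Toy hourly data for two zones; the last column is a hypothetical zone flag.
X = np.column_stack([
    rng.integers(0, 24, 3000),
    rng.normal(18, 6, 3000),
    rng.uniform(0, 5000, 3000),
    rng.integers(0, 2, 3000),
])
y = (25000 + 400 * np.sin(X[:, 0] / 24 * 2 * np.pi)
     - 30 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 300, 3000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# 3a) Global picture: which drivers move the forecast the most, on average.
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean), key=lambda t: -t[1]):
    print(f"global  {name:>16}: importance {score:.3f}")

# 3b) Hourly detail: a crude local explanation for one flagged hour, obtained by
#     replacing one feature at a time with its average and measuring the shift.
x0 = X_te[:1]
base = model.predict(x0)[0]
for j, name in enumerate(feature_names):
    x_pert = x0.copy()
    x_pert[0, j] = X_tr[:, j].mean()
    print(f"local   {name:>16}: contribution ~ {base - model.predict(x_pert)[0]:+,.0f} MW")

# 4) Fairness / robustness: does the error concentrate on one zone?
err = np.abs(model.predict(X_te) - y_te)
for zone in (0, 1):
    mask = X_te[:, 3] == zone
    print(f"zone {zone}: mean absolute error {err[mask].mean():,.0f} MW")
```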

All this is Prescriptive Decision AI: making machines speak our language, instead of demanding blind faith. We do not replace our clients' models; we make them transparent, auditable and, above all, actionable.

From kilowatts to banking, from medical diagnosis to cybersecurity

The blackout highlighted the urgency in energy, but the challenge is universal. A bank fighting fraud, a hospital allocating ICU beds, or a company under cyberattack faces the same dilemma: if the algorithm makes a mistake, who explains the error and corrects course?

The recipe is always the same: combine predictive power with explainability. Tools like ours expose data biases, point out failure modes, and offer simulations that turn an opaque number into an informed decision.

An invitation to distrust, so we can trust better

I do not write to boast about energy expertise, but to point out something more basic: in 2025 we cannot afford mute algorithms managing critical infrastructure. AI must be accountable, and Prescriptive Decision AI is not a luxury: it is the sensible way to combine automation with responsibility.

If the April blackout drives operators, regulators, and companies to demand explanations as rigorously as they demand accuracy, we will have learned the lesson. At AyGLOO we are ready to put that magnifying glass wherever it is needed. Because, in the end, trust in technology is born from its ability to tell us the truth.

🔗 Read the article on TecnoNews