Since our founding in 2021, AyGLOO has chosen a different path: Prescriptive Decision AI and democratizing the use of Machine Learning. Anyone who works with advanced models knows what this means: powerful algorithms that, without explainability, become true black boxes.
To set ourselves apart from what already exists, we have invested decisively in R&D&I. In just four years we have launched five projects in which explainability is the central pillar, all with a strong innovative component and significant financial investment, four of them funded by CDTI.
Today we want to talk about one of the most recent, started in early 2025 and running until September 2026: a project that poses a real technological challenge, explainability in graph models.
The potential (and challenge) of GNNs
Graph Neural Networks (GNNs) are among the most powerful and promising technologies in modern artificial intelligence. Why? Because they can model and analyze complex relationships between entities, something traditional algorithms struggle to do.
They are especially useful in areas where relationships between elements are key:
- Financial fraud: they help detect hidden networks of fraudsters acting in coordination, going far beyond the linear patterns of a single suspicious transaction.
- Cybersecurity: they make it possible to identify sophisticated attacks that propagate across devices, users or network access points and would otherwise go unnoticed.
- Recommendation and personalization: they power smarter recommendation systems in e-commerce, content and digital advertising by better understanding the relationships between users, products and usage contexts.
Thanks to GNNs, platforms can discover affinities and similar behaviors even among users who never interacted with the same products, improving the precision, diversity and discovery capabilities of recommendation systems.
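To make the mechanics concrete, here is a minimal sketch of how a GNN frames fraud detection as node classification on a transaction graph. It assumes PyTorch Geometric is installed, and the toy graph, feature dimensions and labels are invented purely for illustration:

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Hypothetical toy graph: 4 accounts (nodes) with 3 features each,
# linked by transactions (edges). Labels: 0 = legitimate, 1 = suspicious.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
x = torch.randn(4, 3)
y = torch.tensor([0, 0, 1, 1])
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    """Two-layer graph convolutional network for node classification."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        # Each layer aggregates information from a node's neighbors:
        # this is what lets the model exploit relationships, not just features.
        x = F.relu(self.conv1(x, edge_index))
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=-1)

model = GCN(in_dim=3, hidden_dim=16, num_classes=2)
out = model(data.x, data.edge_index)  # one prediction per account
```

The prediction for each account depends on its whole neighborhood in the graph, which is precisely why explaining it is harder than explaining a row-by-row tabular model.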
In sectors where every decision directly affects reputation, security and millions of euros, model opacity is unacceptable.
The big challenge is that explainability for GNNs is still in its infancy. Solid scientific research is scarce, and the techniques that do exist remain very basic.
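As a reference point for that state of the art, here is a minimal sketch, continuing the toy example above and assuming a recent version of PyTorch Geometric, of one of the best-known existing techniques, GNNExplainer. Note that its output is raw importance masks over nodes and edges: numbers a data scientist can work with, not an explanation a business user can read.

```python
from torch_geometric.explain import Explainer, GNNExplainer

# `model` and `data` come from the sketch above.
explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=200),
    explanation_type='model',        # explain the model's own prediction
    node_mask_type='attributes',     # importance score per node feature
    edge_mask_type='object',         # importance score per edge
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='log_probs',
    ),
)

# Explain the prediction for one node of interest (e.g. a flagged account).
explanation = explainer(data.x, data.edge_index, index=2)

# The result is numeric masks, not a human-readable narrative.
print(explanation.edge_mask)   # one importance score per edge
print(explanation.node_mask)   # importance scores per node feature
```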
A distinctive tool: intuitive and accessible to all profiles
At AyGLOO we are addressing this gap by developing a pioneering Prescriptive Decision AI tool applied to GNNs. It's not just about opening the black box, but about doing so in a way that is intuitive, clear and accessible to any user, with no technical knowledge required.
Imagine a fraud executive at a bank who doesn't know how to program and has no background in advanced AI. With our tool, they will be able to easily understand:
- Why the model has detected a possible fraud case.
- What connections or patterns in the relationship network have been key to that conclusion.
- How to act with better information and greater confidence when facing a possible attack.
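Purely as an illustration of the direction, not of AyGLOO's actual tool: a hypothetical post-processing step that turns raw edge-importance scores, like the masks produced above, into plain-language statements an executive could read. Every name in it is invented for the example.

```python
def top_reasons(explanation, data, account_names, k=3):
    """Hypothetical helper: translate edge-importance scores into sentences.

    `explanation` is an explainer output with an `edge_mask` (one score
    per edge); `account_names` maps node indices to readable labels.
    """
    scores = explanation.edge_mask
    top = scores.topk(min(k, scores.numel())).indices
    reasons = []
    for i in top.tolist():
        src, dst = data.edge_index[:, i].tolist()
        reasons.append(
            f"The link between {account_names[src]} and {account_names[dst]} "
            f"weighed heavily in this alert (importance {scores[i].item():.2f})."
        )
    return reasons

# e.g. top_reasons(explanation, data, {0: 'Account A', 1: 'Account B',
#                                      2: 'Account C', 3: 'Account D'})
```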
Explainability thus becomes not just a technical component, but a bridge between the most advanced AI and the people who make strategic decisions.
Conclusion
GNNs represent a qualitative leap in artificial intelligence's ability to address critical challenges such as fraud, cybersecurity or intelligent personalization. But their true value only emerges when their results are explainable and understandable.
Since 2021, we have maintained the same commitment: that artificial intelligence stop being a black box and become a tool that provides clarity, transparency and actionability, within everyone's reach, from engineers to executives.