Explainable AI Platform

Explainable Artificial Intelligence


INDIA – Tabular data

AyGLOO helps you interpret AI models through intuitive, immediate, varied, customised and comprehensive analysis, so you can make better and faster decisions without depending on a data scientist.

If you are a technical user, it also provides valuable information while the model is being built.


INDIA – Natural language

Incorporates interpretability of natural language into your analysis.

We use our own interpretability models.


INDIA – Images

It incorporates interpretability of computer vision models into your analysis to understand, for example, how the neural network classifies your images.

• INDIA – Tabular data •


AyGLOO uses its own Explainable AI models.

AyGLOO works with a wide variety of technologies, including Python, R, H2O, SAS, SPSS, etc.

AyGLOO works with a wide range of models, including neural networks, random forest, XGBoost, logistic regression, reinforcement learning (RL) and clustering models.

Exploratory data analysis (EDA)

Control and customise the analysis by selecting the desired model variable with one click and adding other variables.

The analyses include potential clusters or subsets of variables that reveal relationships between them.
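As a minimal sketch of the idea behind variable clustering (AyGLOO's own grouping algorithm is not public, so the data, threshold and greedy grouping below are purely illustrative), variables can be collected into potential clusters by the strength of their pairwise correlations:

```python
# Illustrative sketch: group variables into clusters by |correlation|.
# The data, the 0.8 threshold and the greedy strategy are assumptions,
# not AyGLOO's actual EDA algorithm.
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=300)
b = a + rng.normal(scale=0.1, size=300)   # strongly tied to a
c = rng.normal(size=300)                  # unrelated to both
data = np.stack([a, b, c])
corr = np.abs(np.corrcoef(data))

# Greedy grouping: a variable joins a cluster if |corr| > 0.8 with its seed.
clusters, assigned = [], set()
for i in range(corr.shape[0]):
    if i in assigned:
        continue
    members = [j for j in range(corr.shape[0])
               if j not in assigned and corr[i, j] > 0.8]
    clusters.append(members)
    assigned.update(members)

print(clusters)   # with this seed: [[0, 1], [2]]
```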

Global analysis

Analysis of the model with interpretable rules, providing the contribution of each variable, its importance at a global level, and whether that contribution is positive or negative. It also includes segment analysis, which greatly strengthens the analysis by enabling the visual identification of bias, hidden relationships between variables, and flaws of the model in specific data segments.

We use powerful algorithms of our own, such as SHAP-based intelligent segment analysis, smart rule generation, etc.
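To make the idea of signed, global variable contributions concrete (AyGLOO's own algorithms are proprietary, so scikit-learn's permutation importance stands in here, with the sign approximated by correlating each feature with the predicted probability):

```python
# Sketch of global variable importance with a sign, on synthetic data.
# Permutation importance is a stand-in for the platform's own method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Labels depend strongly on feature 0 and weakly on feature 1.
y = (2.0 * X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.3, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Approximate the direction of each contribution by correlating the
# feature with the predicted probability of the positive class.
proba = model.predict_proba(X)[:, 1]
for i, imp in enumerate(result.importances_mean):
    sign = "+" if np.corrcoef(X[:, i], proba)[0, 1] >= 0 else "-"
    print(f"feature_{i}: importance={imp:.3f} ({sign})")
```

Feature 0, which drives the labels, comes out with the largest global importance.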

Analysis by instances

Let’s imagine we want to analyse the decision of a credit-scoring model for a customer who has been denied credit. Our analysis by instances makes it possible to trace the model’s decision through intuitive rules and to understand the contribution of each variable to that decision.

Furthermore, our what-if analysis shows which variables, if their values were changed, would lead to the customer’s credit request being accepted, providing valuable insight into the model’s decision.
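The what-if idea can be sketched on a toy credit model (the feature names, data and thresholds below are invented for illustration, not AyGLOO's schema): change one input at a time and observe whether the decision flips.

```python
# Toy what-if sketch for a credit-scoring decision.
# Features and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Toy features: [income in k EUR, debt ratio, years employed]
X = rng.uniform([10, 0.0, 0], [100, 1.0, 30], size=(400, 3))
y = (0.05 * X[:, 0] - 4.0 * X[:, 1] + 0.1 * X[:, 2]
     + rng.normal(0, 0.5, 400) > 2).astype(int)
model = LogisticRegression(max_iter=5000).fit(X, y)

applicant = np.array([[35.0, 0.6, 5.0]])          # this request is denied
print("original decision:", model.predict(applicant)[0])

# What-if: change one variable at a time, keeping the rest fixed.
whatif = {}
for col, name, new in [(0, "income_k", 100.0), (1, "debt_ratio", 0.0)]:
    alt = applicant.copy()
    alt[0, col] = new
    whatif[name] = int(model.predict(alt)[0])
    print(f"if {name} -> {new}: decision = {whatif[name]}")
```

Here raising the income feature flips the model's decision from denied to approved, which is exactly the information a what-if analysis surfaces.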


Bias and fairness analysis

It identifies model bias relating to user-defined variables (for example, gender or race).

Our tool lets the user define a wide variety of bias and fairness analyses: prediction composition analysis, data parity, analysis of false positives and false negatives, and attribute importance.
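One of the checks named above, comparing false-positive rates across groups of a user-defined sensitive attribute, can be sketched in a few lines (the labels and groups here are made up for illustration):

```python
# Sketch of a fairness check: false-positive-rate parity across groups
# of a sensitive attribute. Data is illustrative.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

def false_positive_rate(y_t, y_p):
    """Share of true negatives the model wrongly flags as positive."""
    negatives = (y_t == 0)
    return (y_p[negatives] == 1).mean()

rates = {g: false_positive_rate(y_true[group == g], y_pred[group == g])
         for g in np.unique(group)}
print(rates)
print("FPR gap:", abs(rates["A"] - rates["B"]))
```

A large gap between the groups' rates is a signal of the kind of bias the analysis is designed to surface.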

Causal analysis

Intuitively identifies causal relationships between the problem’s variables in the sample, even when these relationships are complex.
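A classic building block behind this kind of analysis, not AyGLOO's actual algorithm, is distinguishing a spurious correlation from a direct link: two variables driven by a common cause correlate strongly, yet their partial correlation (conditioning on the cause) is near zero. A minimal numeric sketch:

```python
# Sketch: pairwise vs partial correlation on synthetic data with a
# common cause. Illustrative only; not the platform's causal method.
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=5000)                    # common cause
x = z + rng.normal(scale=0.5, size=5000)
y = z + rng.normal(scale=0.5, size=5000)     # x and y share no direct link

data = np.stack([x, y, z])
corr = np.corrcoef(data)
# Partial correlation of x and y given z, from the precision matrix.
prec = np.linalg.inv(np.cov(data))
partial_xy = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

print("corr(x, y)          =", round(corr[0, 1], 2))   # large but spurious
print("partial corr(x,y|z) =", round(partial_xy, 2))   # near zero
```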


Technical graphics

Incorporates more technical graphs that provide valuable information to data scientists during the construction of the model and throughout its life cycle.

It analyses in detail the different types of model error: false positives, false negatives, the ROC curve, residual analysis and correlations.
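The error analyses listed above can be reproduced on a toy classifier with scikit-learn, which stands in here for the platform's own plots (the dataset and model are arbitrary placeholders):

```python
# Sketch of the listed error analyses: confusion counts and ROC/AUC
# for a toy classifier. scikit-learn stands in for the platform's plots.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
fpr, tpr, _ = roc_curve(y_te, proba)       # points of the ROC curve
auc = roc_auc_score(y_te, proba)

print(f"false positives={fp}, false negatives={fn}")
print(f"AUC={auc:.3f}")
```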


• INDIA – Natural language •

Natural language

It understands the semantics, syntax and complex relationships captured by the most sophisticated NLP models. It locates bias, model flaws and adversarial examples in the data using our own algorithms for sequential text interpretability, attribution algorithms such as SHAP, and local explanation algorithms such as LIME.
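The core idea behind token-level attribution (shared by SHAP- and LIME-style explainers) can be sketched with a simple occlusion test: drop each word and measure how much the predicted probability moves. The tiny corpus and classifier below are invented for illustration and are far simpler than the models the platform targets.

```python
# Sketch of per-word attribution by occlusion on a toy sentiment
# classifier. Data and model are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible quality, broke fast",
         "excellent and reliable", "awful, do not buy",
         "really great value", "broke after one day, terrible"]
labels = [1, 0, 1, 0, 1, 0]                 # 1 = positive sentiment
clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)

sentence = "great camera but terrible battery"
base = clf.predict_proba([sentence])[0, 1]  # P(positive) for full sentence

# Occlusion: drop each word and measure how much P(positive) changes.
attributions = {}
words = sentence.split()
for i, w in enumerate(words):
    occluded = " ".join(words[:i] + words[i + 1:])
    attributions[w] = base - clf.predict_proba([occluded])[0, 1]
    print(f"{w:>8}: {attributions[w]:+.3f}")  # > 0 pushes toward positive
```

"great" pushes the prediction toward positive and "terrible" toward negative, the kind of local explanation an end user can read at a glance.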

AyGLOO works on everything from classic models, such as recurrent or convolutional networks, to the most powerful state-of-the-art models: transformers.

When these models are applied to classification problems, the end user can interpret the model through a simple, intuitive display.

It recognises entities in the text and verifies their relevance in the model’s decision-making.

It extracts the most common topics handled in your texts and checks the similarities between them.
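One simple way to check similarities between texts (the platform's own topic models are not public, so TF-IDF with cosine similarity stands in here) looks like this:

```python
# Sketch: document similarity via TF-IDF vectors + cosine similarity.
# A stand-in for the platform's topic/similarity analysis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["interest rates and central bank policy",
        "the central bank raised interest rates",
        "new striker scores twice in cup final"]
tfidf = TfidfVectorizer().fit_transform(docs)
sim = cosine_similarity(tfidf)

print(sim.round(2))   # docs 0 and 1 are far more similar than 0 and 2
```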


• INDIA – Images •

Images

It uses machine learning visualisation techniques to explain the output of other machine learning models.

It produces a heat map of another model’s output in which each pixel represents the importance of that part of the input to the model’s output. This enables users to see which parts of the input matter most and, therefore, to better understand how the model works.
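The heat-map principle can be sketched with occlusion sensitivity: mask each patch of the input and record how much the model's score drops. The "model" below is a toy scoring function that only looks at the top-left quadrant; real deployments would use an actual network, but the mechanics are the same.

```python
# Sketch of an occlusion heat map. The toy scoring function stands in
# for a real image model; the occlusion loop is the actual technique.
import numpy as np

def model_score(img):
    # Toy "model": responds only to brightness in the top-left quadrant.
    return img[:4, :4].sum()

img = np.zeros((8, 8))
img[:4, :4] = 1.0                      # the "object" the model detects

base = model_score(img)
heat = np.zeros((4, 4))                # one cell per 2x2 input patch
for i in range(4):
    for j in range(4):
        occluded = img.copy()
        occluded[2*i:2*i+2, 2*j:2*j+2] = 0.0   # mask this patch
        heat[i, j] = base - model_score(occluded)

print(heat)   # large values mark the input regions the model relies on
```

The heat map is non-zero exactly over the top-left quadrant, the only region this toy model actually uses.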


In all of our products

• Cloud Computing •

All of our solutions use cloud computing with parallel serverless techniques.

Dedicate your resources to processes with a higher added value: