Explainable Artificial Intelligence
INDIA – Tabular data
AyGLOO helps interpret AI models with intuitive, immediate, varied, customised and comprehensive analysis, without depending on a data scientist, so you can make better and faster decisions.
If you are a technical user, it also provides valuable information during model construction.
INDIA – Natural language
Incorporates natural-language interpretability into your analysis.
We use our own interpretability models.
• INDIA – Tabular data •
Exploratory data analysis (EDA)
Control and customise the analysis by selecting the desired model variable with one click and adding other variables.
Analysis of the model with interpretable rules, providing the contribution of each variable, its importance on a global level, and whether that contribution is positive or negative. It also includes segment analysis, which adds great power to the analysis by enabling the visual identification of bias, hidden relationships between variables, and model flaws in specific data segments.
We combine powerful algorithms such as SHAP with our own techniques: intelligent segment analysis, smart rule generation, and more.
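The per-variable contributions described above can be illustrated with a minimal sketch. For a linear model with independent features, SHAP values have a closed form: the contribution of feature i is its weight times the feature's deviation from its training mean. The weights, means and instance below are hypothetical, not AyGLOO's model.

```python
# Sketch: exact SHAP contributions for a linear model (hypothetical data).
# For f(x) = b + sum(w_i * x_i) with independent features, the SHAP value
# of feature i is w_i * (x_i - mean_i): positive pushes the score up.

def linear_shap(weights, instance, feature_means):
    """Per-feature contributions for a single row being explained."""
    return [w * (x - m) for w, x, m in zip(weights, instance, feature_means)]

weights = [0.8, -1.5, 0.3]   # hypothetical model coefficients
means = [2.0, 1.0, 4.0]      # feature means over the training data
instance = [3.0, 2.0, 4.0]   # the row being explained

print(linear_shap(weights, instance, means))  # [0.8, -1.5, 0.0]
```

Summing these contributions (plus the model's base value) recovers the prediction exactly, which is what makes this style of decomposition so readable.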
Analysis by instances
Let’s imagine we want to analyse the decision of a credit scoring model for a customer whose credit application was denied. Our analysis by instances enables the model’s decision to be traced, using intuitive rules, and the contribution of each variable to that decision to be understood.
Furthermore, our what-if analysis enables us to understand which variables, by changing the value, would ensure the credit requested by the customer was accepted, providing valuable information to understand the model’s decision.
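A what-if search of this kind can be sketched in a few lines. The scoring rule, threshold and step size below are toy assumptions, not AyGLOO's actual model: the idea is simply to vary one variable until the decision flips.

```python
# Sketch of a what-if analysis on a toy credit score (hypothetical rule):
# find the smallest income increase that flips a denial into an approval.

def score(income, debt):
    """Hypothetical credit score: higher income helps, higher debt hurts."""
    return 0.01 * income - 0.05 * debt

def what_if_income(income, debt, threshold=5.0, step=100, max_steps=1000):
    """Raise income in fixed steps until the score crosses the threshold."""
    for k in range(max_steps + 1):
        candidate = income + k * step
        if score(candidate, debt) >= threshold:
            return candidate
    return None  # no feasible value found within the search range

print(what_if_income(income=300, debt=20))  # 600
```

In practice the same search can run over any variable the user picks, which is what turns a single denied prediction into actionable feedback.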
It identifies model bias relating to user-defined variables (for example, gender or race).
Our tool enables the user to define a wide variety of analyses of bias and fairness (analysis of the composition of predictions, data parities, analysis of false positives, false negatives and importance of attributes).
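One of the parity checks mentioned above, false-positive-rate parity across a protected attribute, can be sketched as follows. The records are invented toy data; a real analysis would use the model's predictions.

```python
# Sketch: false-positive-rate (FPR) parity across groups (toy data).

def false_positive_rate(labels, preds):
    """Share of true negatives the model wrongly flagged as positive."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

# Each record: (group, true label, predicted label) -- hypothetical values.
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 0), ("B", 0, 0),
]

rates = {}
for group in sorted({g for g, _, _ in records}):
    labels = [y for g, y, _ in records if g == group]
    preds = [p for g, _, p in records if g == group]
    rates[group] = false_positive_rate(labels, preds)

print(rates)                          # group A: 1/3, group B: 2/3
print(abs(rates["A"] - rates["B"]))   # the FPR parity gap
```

The same pattern extends to the other checks listed: swap the metric (false negatives, prediction composition, attribute importance) and compare it across groups.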
It intuitively identifies causal relationships between the variables of the problem and the sample, even when those relationships are complex.
It incorporates more technical graphs that provide valuable information to data scientists during model construction and throughout the model's life cycle.
It analyses the model's different error types in detail: false positives, false negatives, ROC curves, residual analysis, and correlations.
• INDIA – Natural language •
It understands the semantics, syntax and complex relationships captured by the most sophisticated NLP models. It locates bias, model flaws and adversarial examples in the data using our own algorithms for sequential text interpretability, attribution algorithms such as SHAP, and local explanation algorithms such as LIME.
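A simple member of this family of techniques is leave-one-out (occlusion) attribution: remove each token and measure the drop in the model's score. The lexicon-based scorer below is a toy stand-in for a real NLP model, used only to make the idea concrete.

```python
# Sketch: leave-one-out (occlusion) attribution for text classification,
# a simple local-explanation technique in the same family as LIME/SHAP.
# The lexicon scorer is a hypothetical toy model.

LEXICON = {"great": 2.0, "good": 1.0, "bad": -1.0, "awful": -2.0}

def sentiment(tokens):
    """Toy model: sum of word scores from a small lexicon."""
    return sum(LEXICON.get(t, 0.0) for t in tokens)

def occlusion_attribution(tokens):
    """Each token's importance = score drop when that token is removed."""
    base = sentiment(tokens)
    return {
        t: base - sentiment(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

print(occlusion_attribution(["the", "movie", "was", "great"]))
# {'the': 0.0, 'movie': 0.0, 'was': 0.0, 'great': 2.0}
```

With a transformer in place of the toy scorer, the occlusion is done on the input text or embeddings, but the attribution logic is the same.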
AyGLOO works on everything from classic models like Recurrent or Convolutional Networks to the most powerful state-of-the-art models: transformers.
When these models are applied to classification problems, the end user can interpret the model through an intuitive, simple display.
It recognises entities in the text and verifies their relevance in the model’s decision-making.
It extracts the most common topics handled in your texts and checks the similarities between them.
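Topic similarity of this kind is often measured with cosine similarity between word-count profiles. The sketch below uses two hypothetical topic vectors; it is one common way to compare topics, not necessarily the method AyGLOO uses.

```python
# Sketch: cosine similarity between two topics represented as
# bag-of-words count vectors (hypothetical topic profiles).

import math

def cosine(a, b):
    """Cosine similarity of two sparse word-count dicts (1.0 = identical)."""
    words = set(a) | set(b)
    dot = sum(a.get(w, 0) * b.get(w, 0) for w in words)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

topic_loans = {"credit": 3, "loan": 2, "rate": 1}
topic_rates = {"rate": 2, "loan": 1}

print(round(cosine(topic_loans, topic_rates), 3))  # 0.478
```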
• INDIA – Images •
It uses machine learning visualisation techniques to explain the output of other machine learning models.
It produces a heat map over the input of another model, where each pixel represents the importance of that part of the input to the model's output. This enables users to see which parts of the input matter most to the model and, therefore, to better understand how the model works.
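One simple way to build such a heat map is occlusion: zero out each pixel in turn and record how much the model's output drops. The tiny image and "model" below are hypothetical stand-ins used to show the mechanics.

```python
# Sketch: occlusion-based heat map on a toy grayscale image.
# The "model" is hypothetical: it responds to bright pixels in the
# top-left 2x2 corner. Bigger output drops mark more important pixels.

def model_output(image):
    """Toy model: sum of intensities in the top-left 2x2 window."""
    return sum(image[r][c] for r in range(2) for c in range(2))

def occlusion_heatmap(image):
    base = model_output(image)
    heat = []
    for r, row in enumerate(image):
        heat_row = []
        for c in range(len(row)):
            occluded = [list(x) for x in image]  # copy, then zero one pixel
            occluded[r][c] = 0
            heat_row.append(base - model_output(occluded))
        heat.append(heat_row)
    return heat

image = [
    [5, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]
print(occlusion_heatmap(image))
# [[5, 1, 0], [1, 0, 0], [0, 0, 0]] -- only the corner pixels matter
```

For real images, the occlusion is usually done patch by patch rather than pixel by pixel, and gradient-based saliency methods serve the same purpose more cheaply.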