Currently available Explainable AI mechanisms present significant challenges. On one hand, they require highly qualified specialists who are scarce and heavily booked, so they are rarely available at the exact moment they are needed. On the other hand, existing techniques are not yet fully effective at unraveling the internal processes of AI models in an intuitive, practical way. Let's examine this last point more closely.
The Case of SHAP Values
One of the most widely used techniques for explaining model decisions is SHAP (SHapley Additive exPlanations), which breaks down the contribution of each variable to the prediction for each individual case. However, this methodology also presents drawbacks in high-dimensionality scenarios.
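To make this concrete, here is a minimal sketch of how per-case contributions are obtained with Python's shap library. The model choice (scikit-learn's GradientBoostingClassifier) and the synthetic data are assumptions for illustration, not the bank's actual setup:

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical data: 1,000 cases described by 10 variables
# (the real bank model would use 100).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy fraud label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # array of shape (1000, 10)

# For a single case, its 10 SHAP values plus the base value
# reconstruct the model's raw (log-odds) prediction.
print(shap_values[0])
print(shap_values[0].sum() + explainer.expected_value)
```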
Example: Bank Fraud Detection with High Dimensionality
Situation: A bank implemented a fraud detection system that uses an artificial intelligence model based on 100 different variables, ranging from transaction patterns and online behaviors to geographical and temporal indicators. To provide transparency in decision-making, the technical team decided to apply SHAP values to explain how each variable influenced the fraud prediction for each transaction.
Implementation of SHAP Values:
- Objective: Identify the individual contribution of each of the 100 variables to the model's decision for each transaction.
- Process: Each time a transaction was evaluated, a vector of 100 SHAP values was generated, each indicating the impact of the corresponding variable on the prediction (fraud or not fraud).
- Problem Scale: With approximately 1,000,000 transactions analyzed, the result was an explanation matrix of 100 rows (variables) by 1,000,000 columns (individual cases): 100 million values in total.
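The scale of that matrix is easy to underestimate. A minimal sketch with the numbers from the example (the matrix is simulated rather than computed, since running an explainer over a million transactions is precisely the expensive step in question):

```python
import numpy as np

# Hypothetical scale from the example: 100 variables,
# 1,000,000 scored transactions.
n_transactions, n_features = 1_000_000, 100

# The shap library returns one row of 100 values per case
# (cases x variables); the chapter's 100 x 1,000,000 view is
# the transpose. Either way it holds 100 million numbers.
shap_matrix = np.zeros((n_transactions, n_features))

print(shap_matrix.size)                      # 100,000,000 values
print(f"{shap_matrix.nbytes / 1e9:.1f} GB")  # 0.8 GB as float64
```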
Detected Problems:
- Information Overload: The resulting matrix, with 100 × 1,000,000 = 100 million values, is far too large to interpret manually.
- Difficulty in Identifying Global Patterns: Although individual cases can be analyzed, detecting trends or common anomalies across millions of transactions requires additional Explainable AI and visualization techniques (see the aggregation sketch after this list).
- Human Limitations: Human processing capacity is overwhelmed by this volume of data, which can lead to critical patterns being overlooked or to decisions based on incomplete analysis.
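As noted above, extracting global patterns from the raw matrix requires an aggregation step. Here is a sketch of the most common one, the mean absolute SHAP value per variable, computed on a simulated subsample (the data and variable indices are hypothetical):

```python
import numpy as np

# Simulated subsample of the explanation matrix (10,000 cases).
rng = np.random.default_rng(1)
shap_matrix = rng.normal(size=(10_000, 100))

# Mean |SHAP| per variable: a rough global importance ranking.
global_importance = np.abs(shap_matrix).mean(axis=0)
top10 = np.argsort(global_importance)[::-1][:10]
print("Top 10 variables by mean |SHAP|:", top10)

# The ranking compresses millions of values into 100 numbers,
# but it hides interactions, subpopulations, and time trends --
# the very patterns fraud analysts need to see.
```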
Consequences:
- Reduced Interpretability: Analysts found it difficult to synthesize the information needed for audits or for strategic decisions based on a global view of the model.
- Need for Complementary Tools: Given the data overload, it became imperative to adopt additional Explainable AI methods and intuitive visualization tools (a sketch follows this list).
- Risk of Loss of Transparency: Without an effective way to interpret the vast matrix of explanations, the original goal of transparent decision-making was compromised.
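Some of the complementary visualizations mentioned above ship with the shap library itself. A sketch reusing the toy model from the first example (the plots open in a matplotlib window; a production system would need these at far larger scale):

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Toy setup from the first sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Beeswarm summary: one row per variable, all cases condensed
# into a single global view, colored by the variable's value.
shap.summary_plot(shap_values, X)

# Dependence plot: how variable 0's contribution varies across
# cases, surfacing interactions the raw matrix hides.
shap.dependence_plot(0, shap_values, X)
```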
Conclusion
This example illustrates that, although SHAP values are a powerful tool for understanding the influence of each variable on predictions, applying them to high-dimensionality systems and large data volumes can be counterproductive: information overload prevents actionable knowledge from being extracted efficiently. It is therefore essential to complement the technique with other Explainable AI methods and intuitive visualizations.
In upcoming chapters, we will explore additional techniques that seek to overcome these limitations and offer more practical and effective solutions for explainability in artificial intelligence.
Join us on this journey towards more explainable and understandable AI!

