Actionable AI for business with counterfactual analysis
What good is an accurate AI model if you can't act on its results?
Accuracy is important, yes, but in our opinion the true competitive advantage emerges when accuracy is combined with explainability. That is the point at which artificial intelligence becomes truly actionable and useful for business. To illustrate this, I will focus on a very powerful capability that nevertheless often stays in the background: the counterfactual analysis module.
This module answers a key question that complements the traditional explanation of AI. It's no longer just about understanding "why did the model predict this?", but about going one step further: "what would have to change to get a different result?" Simply put, it identifies the minimum change needed in the data of a specific case for the prediction to flip, showing us the shortest path between result A and result B.
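To make the idea concrete, here is a minimal sketch of a counterfactual search: a brute-force scan over a single numeric feature of a toy scikit-learn model, looking for the smallest change that flips the prediction. The data, feature name and thresholds are illustrative assumptions for this example, not anything from a real product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data (hypothetical): one feature, "discount offered" (%),
# and a label for whether the customer accepted the offer.
rng = np.random.default_rng(0)
discount = rng.uniform(0, 30, 200)
accepted = (discount + rng.normal(0, 5, 200) > 15).astype(int)
model = LogisticRegression().fit(discount.reshape(-1, 1), accepted)

def minimal_counterfactual(model, x, step=0.5, max_delta=30.0):
    """Smallest increase in the feature that flips the model's prediction."""
    base = model.predict([[x]])[0]
    delta = step
    while delta <= max_delta:
        if model.predict([[x + delta]])[0] != base:
            return delta
        delta += step
    return None  # no flip found within the search range

# Smallest extra discount that would flip this customer's prediction
delta = minimal_counterfactual(model, 5.0)
```

Real counterfactual engines search over many features at once and add constraints (plausibility, immutable attributes), but the core question is the same one this loop asks: how little do we need to change to cross the decision boundary?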
Why is this so valuable in a business environment? Because it converts complex outputs from machine learning models into concrete and understandable actions. It's not enough to know that a customer won't accept an offer; what's really useful is knowing what small adjustment would be enough for them to do so. This analytical capability opens new opportunities, fine-tunes processes and directly improves the customer experience.
When artificial intelligence is intuitive, fast and designed to respond to business needs, its impact ceases to be theoretical and translates into concrete results. And the best part: you don't need to know how to program or understand complex algorithms. Common sense, well-posed questions and a tool designed for decision makers are enough. Below, I share some examples that illustrate this.
Airlines & Travel: the art of the perfect upgrade offer
In the airline sector, where competition is fierce and margins are tight, every empty Business class seat represents a lost opportunity.
A common question in this environment is: How can we increase the acceptance rate of Business class upgrade offers in the days before the flight?
To answer this, many companies already use models that predict the probability of a passenger accepting a paid upgrade to Business class. With counterfactual analysis, we can go one step further: for each passenger, identify the minimum change to the offer that would move them from "reject" to "accept". For example, we might discover that for certain customer segments simply lowering the upgrade price by 10% is enough, while for others the key may be including VIP lounge access at no cost. These small adjustments, tailored to each traveler, can make the difference.
In practice, this allows us to say things as concrete as: "If we had offered this passenger the upgrade for €40 instead of €50, they would have accepted it".
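As an illustration of how such a price counterfactual might be computed, the sketch below trains a toy scikit-learn model on synthetic price/acceptance data and walks the price down from the original offer until the predicted outcome flips to "accept". All names and numbers here are assumptions made up for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data (hypothetical): upgrade price in euros vs. acceptance.
rng = np.random.default_rng(1)
price = rng.uniform(20, 80, 300)
accepted = (price + rng.normal(0, 8, 300) < 45).astype(int)
model = LogisticRegression().fit(price.reshape(-1, 1), accepted)

def highest_accepted_price(model, offered, step=1.0, floor=20.0):
    """Walk the price down from the offered level until the model first
    predicts 'accept'; that price is the counterfactual offer."""
    p = offered
    while p >= floor:
        if model.predict([[p]])[0] == 1:
            return p
        p -= step
    return None  # no acceptable price within the allowed range

# The counterfactual price for a passenger originally offered €50
best = highest_accepted_price(model, offered=50.0)
```

In production you would search over several offer attributes at once (price, lounge access, miles), for which dedicated libraries such as DiCE exist; the single-feature walk above just makes the "€40 instead of €50" statement computable.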
The benefit is twofold: on the one hand, more revenue and better premium cabin occupancy; on the other, more satisfied customers because they only receive offers appropriate to their sensitivity. In a sector so focused on passenger experience, optimizing the right offer at the right time before the flight can make the difference between a lost opportunity and an effective conversion.
Conclusions: from explanation to strategic action
From these examples, it is clear that counterfactual analysis is a key piece in the strategy of companies that use explainable AI. Executives take notice when they discover that an AI-generated explanation doesn't stay in the technical realm but can be turned into concrete decisions and profitable actions that improve results. That is when AI starts making sense for the business: when you know which levers to move to shift an outcome in favor of both the company and the customer.
In summary, these are some key benefits when applying counterfactual analysis in business decisions:
- Decision optimization: Allows fine-tuning offers, prices and actions at the optimal point, maximizing conversions (airlines and retail examples) and focusing resources where they generate more return.
- Identification of hidden opportunities: Reveals borderline segments or cases where a small change opens the door to new revenue (for example, customers who are almost eligible for credit or insurance and who could be won over with slight improvements).
- Friction reduction: Decreases barriers in the customer experience by providing clear explanations and steps to follow (fewer frustrated customers due to unexplained rejections, fewer complaints and more trust in AI).
- Real and measurable impact: Contributes to tangible results – more upgrades sold, more policies approved without risk, more claims resolved on first try, less churn – all of which is reflected in revenue, savings and compliance.
- Better operational planning in critical infrastructure: In sectors such as energy, water or telecommunications, it allows anticipating with precision the factors that trigger technical failures or service losses, optimizing predictive maintenance and avoiding unplanned stops that involve high costs and operational risks.
Ultimately, this tool is an ally that helps bridge the gap between AI models and business decisions since "every prediction hides an alternative story", and having the ability to explore those stories is what turns explainable AI into a competitive advantage.
It's also very important not to forget the judgment that a human in the loop provides: discerning whether a suggested change is viable for the customer, profitable for the business and, above all, fair in its context. Because the real challenge is not technical, but ethical. It's not just about knowing which levers we can move, but asking ourselves whether we should move them. Are we helping people overcome reasonable barriers, or just optimizing how to work around a system that perhaps should change?
Additionally, although counterfactual analysis is very useful, it's not infallible. Some suggestions may not be viable and, if misinterpreted, can generate false expectations. That's why it's also important to combine it with other explainability and fairness methods to ensure balanced decisions and avoid biases.
Because the true value of explainable AI is not just in turning it into an actionable and useful tool for optimizing decisions, but in achieving that those decisions are also more human, more transparent and more responsible. That's the kind of intelligence that truly deserves to be applied.

