Prescriptive Decision AI

Your models predict. Our agents act.

Connects to your ML models. Acts autonomously when it can. When it can’t, it guides the analyst with context, explanation, and traceability.

Models supported

Classification · Regression · Forecasting · Computer Vision · NLP/LLM · Graph-based

A cross-cutting decision layer

Beyond traditional prescriptive analytics and Explainable AI.

01 · VERIFY

The agent acts only when it can do so with confidence

Before executing a decision, the system evaluates the model’s reliability for that specific case or context. If it meets the defined threshold, it acts. If not, it routes the case to an analyst. It never automates blindly.
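The gating logic described above can be sketched as follows. This is a minimal illustration, not AyGLOO’s actual API: the threshold value, the `Prediction` shape, and the function names are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical names for illustration only; not AyGLOO's actual API.
CONFIDENCE_THRESHOLD = 0.90  # assumed value, configured per use case


@dataclass
class Prediction:
    case_id: str
    action: str
    confidence: float  # model reliability estimate for this specific case


def route(prediction: Prediction) -> str:
    """Act autonomously only above the threshold; otherwise hand off."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return f"EXECUTE:{prediction.action}"  # agent acts on its own
    return f"ESCALATE:{prediction.case_id}"    # analyst decides


print(route(Prediction("c1", "close_case", 0.97)))  # EXECUTE:close_case
print(route(Prediction("c2", "close_case", 0.62)))  # ESCALATE:c2
```

The key design point is that the gate is evaluated per case, not per model: the same model can be trusted in one context and escalated in another.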

02 · EXPLAIN

Every action is explained and documented

Whether the agent decides or an analyst intervenes, the system records what was done, why, with which data, and at what confidence level. The team can review it. Audit and compliance can too.
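A record like the one described might look as follows. This is a hedged sketch: the field names and the JSON shape are assumptions for illustration, not AyGLOO’s actual audit format.

```python
import datetime
import json


def audit_record(case_id, actor, action, rationale, inputs, confidence):
    """Log what was done, why, with which data, and at what confidence."""
    record = {
        "case_id": case_id,
        "actor": actor,            # "agent" or an analyst identifier
        "action": action,
        "rationale": rationale,    # why the decision was taken
        "inputs": inputs,          # data the decision relied on
        "confidence": confidence,  # reliability level at decision time
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)


print(audit_record("AML-49271", "agent", "escalate",
                   "structuring + network pattern", {"score": 0.94}, 0.96))
```

Because every field is structured rather than free text, the same record serves the team’s review and the audit/compliance trail.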

03 · SCALE

It knows when to act and when to escalate to a human

Autonomy isn’t all-or-nothing. The agent resolves cases autonomously when they meet the defined criteria, and escalates those requiring human judgment—with all the information prepared to speed up the decision.

Research-driven Technologies

Built on NEOTEC & CERVERA R&D: EU Next Generation funds managed by CDTI.

Twin Models (≥95% fidelity) · Intelligent Segment Analysis (ISA) · Graph XAI · Explainable AI in LLMs

Unlike generic post-hoc explanations, our Twin Models structurally replicate your original model's behavior with ≥95% fidelity while remaining fully transparent. You get the full decision logic, not just feature importances.

This enables:

  • Global explanations: how the model works overall
  • Local explanations: why this specific prediction
  • Counterfactual analysis: what to change for a different outcome
  • Regulatory-grade audit trails
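As an illustration of the twin-model idea (not AyGLOO’s actual implementation), fidelity can be measured as the fraction of cases where a transparent rule set reproduces the black-box decision. The black box, the rule, and the data below are all toy stand-ins.

```python
import random

random.seed(0)


# Stand-in black box: any opaque scorer (here a simple nonlinear rule).
def black_box(case):  # case = (volume, geo_flag)
    volume, geo_flag = case
    return 1 if volume * (1.5 if geo_flag else 1.0) > 10 else 0


# Transparent "twin": an explicit IF-THEN rule we can show to auditors.
def twin(case):
    volume, geo_flag = case
    if geo_flag and volume > 7:
        return 1
    return 1 if volume > 10 else 0


# Fidelity = fraction of cases where the twin reproduces the black box.
cases = [(random.uniform(0, 20), random.random() < 0.5) for _ in range(1000)]
fidelity = sum(black_box(c) == twin(c) for c in cases) / len(cases)
print(f"fidelity = {fidelity:.3f}")
```

On this toy data the twin agrees with the black box on well over 95% of cases while staying fully readable, which is the property the ≥95% fidelity claim refers to.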

ISA automatically detects critical data segments where the model shows systematic errors, hidden bias, abnormally high uncertainty, or exceptionally strong performance.

Shifts you from reactive (waiting for failures) to proactive (catching issues before production impact).
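A minimal sketch of the segment-scanning idea, on toy data. The segment labels and the error-rate tolerance are illustrative assumptions; in practice segments are discovered automatically from feature combinations.

```python
from collections import defaultdict

# Toy scored cases: (segment, model_was_correct).
cases = [
    ("EU->EU, low volume", True), ("EU->EU, low volume", True),
    ("EU->EU, low volume", True), ("EU->non-EU, structuring", False),
    ("EU->non-EU, structuring", False), ("EU->non-EU, structuring", True),
]


def critical_segments(cases, min_error_rate=0.3):
    """Flag segments whose error rate exceeds the tolerance."""
    stats = defaultdict(lambda: [0, 0])  # segment -> [errors, total]
    for segment, correct in cases:
        stats[segment][0] += 0 if correct else 1
        stats[segment][1] += 1
    return {s: e / n for s, (e, n) in stats.items() if e / n >= min_error_rate}


print(critical_segments(cases))  # flags only the structuring segment
```

The same scan run on uncertainty or performance instead of errors yields the other ISA signals mentioned above.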

Breakthrough research in explaining network-based models: a reference Graph XAI solution for production in Europe, backed by CDTI R&D.

Essential for:

  • Fraud detection in networks
  • Supply chain risk
  • Any domain where relationships between entities matter

Prompting for Transparency

  • Chain-of-Thought (CoT): LLM reasons out loud with logical steps (AyGLOO method)
  • Tree / Graph-of-Thought: flow diagrams of model reasoning (AyGLOO method)

Post-hoc Audit & Trust

  • Attribution: identifies influential prompt sentences and words
  • Visual attribution: detects influential image regions via controlled perturbations
  • Fact checking to reduce hallucinations
  • Uncertainty metrics

Who benefits from our solution?

Business teams

Concrete, prioritized actions with clear visibility on impact, cost, and risk.

Data science teams

Pinpoint model errors, inconsistencies, and segment anomalies to drive accuracy improvements.

Compliance teams

Traceable explainability, clear decision rationale, and structured regulatory reporting.


How it works

1

CONNECT

Connect AyGLOO to your existing AI/ML models via API or batch — SaaS or self-hosted, framework-agnostic.
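A purely illustrative sketch of what a batch connection payload might look like. The field names and the `model_id` are hypothetical, not AyGLOO’s actual API; the point is that your existing model is referenced as-is, with no retraining.

```python
import json


# Hypothetical payload; field names are illustrative only, not AyGLOO's API.
def score_request(model_id, cases):
    """Build a batch scoring request for an already-deployed model."""
    return json.dumps({
        "model_id": model_id,  # your existing model, any framework
        "mode": "batch",       # or "api" for case-by-case calls
        "cases": cases,        # raw features as your model expects them
    })


payload = score_request("aml-alerts-v3", [{"volume": 12.4, "geo_flag": True}])
print(payload)
```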

2

DECIDE & ACT

AyGLOO evaluates the reliability of each decision and acts autonomously when it can. When it can’t, it recommends the best action to the analyst, with context and confidence level.

3

AUDIT & SCALE

Every decision is explained, monitored, and traced. The system detects drift, controls fairness, and escalates to a human when needed.
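Drift detection can be illustrated with a Population Stability Index check over model scores. This stdlib sketch and the 0.2 alert threshold are a common industry rule of thumb, not AyGLOO’s actual implementation.

```python
import math


def psi(expected, actual, bins=4):
    """Population Stability Index between two score samples.

    Rule of thumb (assumed here): PSI > 0.2 signals meaningful drift.
    Scores outside the baseline range are ignored in this sketch.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(sample, i):
        in_bin = sum(edges[i] <= x < edges[i + 1] or (i == bins - 1 and x == hi)
                     for x in sample)
        return max(in_bin / len(sample), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))


baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
recent = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9]
print("drift" if psi(baseline, recent) > 0.2 else "stable")  # drift
```

When the check fires, the routing logic described in step 2 falls back to human escalation instead of autonomous action.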

Banking

Anti-Money Laundering

Automate what can be automated. Prepare analysts for what cannot.

High reliability: the agent acts and closes the case with no human intervention

The alert exceeds the defined confidence threshold. The agent executes the action, generates a SAR draft, and logs everything end-to-end. The analyst does not intervene.

Medium or low reliability: the agent prepares the analyst to decide in minutes
  • Concrete action recommendation with a confidence level
  • Explanation of why the alert was generated, signal by signal
  • Network analysis: linked accounts, jurisdictions, and layering patterns invisible at single-customer level
  • Automatic identification of where the model is most uncertain for this case type
  • Pre-filled SAR draft, ready to review and sign

Typical impact: High-reliability alerts are resolved without analyst time. The rest arrive with full context: investigation time drops from hours to minutes, and the regulatory report is generated automatically.

Security Services

Threat detection in video surveillance

Automate visual alert triage when the model is reliable. Assist the operator when it is not.

High reliability: the agent escalates the alert to the defined protocol

No operator intervention. The agent acts, notifies, and generates the incident report. Full audit trail from the first second.

Low reliability or ambiguous context: the agent assists the operator with visual context and a recommendation
  • The agent highlights which image region triggered the alert and why
  • Natural-language explanation ready for the incident report
  • Model confidence level in this specific context
  • The agent detects conditions where the model fails systematically: low light, angle, occlusion
  • Action recommendation with exportable traceability

Typical impact: High-reliability alerts are escalated without operator effort. Uncertain ones arrive with visual context and a ready-to-use recommendation. Triage time drops sharply and false positives stop being a black box.

Utilities

Demand and renewable generation forecasting

Automate dispatch and bidding decisions when the model is reliable. Alert the trader when it is not.

High reliability: the agent executes the bid or dispatch directly

Within the defined operational limits. No friction, no waiting. The agent acts, logs every decision, and makes it fully traceable.

Low reliability or atypical context: the agent alerts the trader with full context
  • Forecast broken down by variable: weather, seasonality, load
  • The agent detects conditions where the model fails before the error reaches the market
  • Scenario simulation: what happens if wind, temperature, or expected demand changes
  • Risk-profile-adjusted bid recommendation, ready to approve

Typical impact: High-reliability dispatch decisions run without intervention. The trader acts only where the model is uncertain, with context and simulation prepared. Result: lower imbalance costs and better positioning in day-ahead and intraday (DA/ID) markets.

Product snapshot: AML alert

Illustrative example of what AyGLOO offers for a use case in anti-money laundering.

Introduction and benefits

AML systems generate a large volume of alerts that compliance teams struggle to investigate in depth. An analyst spends 30 to 45 minutes per alert on manual investigation, and 70–80% of those alerts end up closed with no action. The real problem is not detecting more alerts: it is minimising regulatory risk with compliance capacity that is always limited. When a model flags an alert, the analyst receives a risk score, not an explanation.

AyGLOO adds an agentic layer on top of that process: the agent evaluates model reliability and the signals supporting each specific alert. When the signals hold and reliability is total, it escalates automatically. When they do not, it closes the case with full traceability and regulatory rationale ready for any review. When there is uncertainty, it routes the case to the analyst with the full file prepared, so the decision takes minutes, not hours.

1
The agent escalates automatically when reliability is total and generates the SAR draft. When signals do not hold, it closes with full traceability. The analyst intervenes only where there is real uncertainty. → Reduces false positives and backlog without increasing headcount
2
Network analysis detects coordinated laundering patterns across linked accounts, invisible when analysing each customer in isolation. → Organised crime represents 30–40% of the total value laundered
3
ISA is the agent’s double-check before acting: it confirms whether the model behaves well in that specific segment. If not, the agent routes to the analyst even if the score is high. → 70–90% of AML alerts are false positives: ISA prevents escalating them
4
The regulatory report draft and full traceability are generated automatically, covering AML supervision requirements and GDPR Art. 22. → Regulator-ready audit trail: mitigates sanction risk
Illustrative example
With AyGLOO. Same alert, fully enriched
Alert #AML-49271 · Priority: High · Risk score: 0.94 · The agent acts
XAI: Alert explanation · Twin: Reliability-level rules · ISA: Model reliability by segment · Graph: Network analysis · What-if: Simulation analysis · CF: Minimal evidence changing the decision · Action: Agent decision · PDF: Regulatory traceability
1. Why this alert was generated. Twin model (fidelity 96.3%)
100% · IF operational_pattern_A > threshold AND geographic_indicator_B = true → Escalate immediately. Full confidence: act without manual review.
81% · IF profile_changes_30d ≥ threshold AND operational_volume > segment_mean × factor → Possible instrumental account. Verify before escalating.
63% · IF geographic_indicator_B = true with no other factors → Weak signal in isolation. Do not escalate based on this rule alone.
The full-confidence rule triggers autonomous execution by the agent: it escalates the alert and generates the SAR draft without waiting for an analyst. Lower-confidence rules are routed to the level-1 analyst, with the full context already prepared.
XAI · Twin
2. Linked network analysis
7 linked accounts share 3 beneficiaries across 4 jurisdictions · Staggered transfer pattern consistent with layering · 2 possible instrumental accounts: inactive 11 months, reactivated with high-volume outflows
This pattern is not visible when analysing each customer separately. Network analysis detects it by connecting entities through relationships, not just individual values.
Graph
3. Model double-check in this segment (ISA)
Micro-segment: "high structuring volume, EU to non-EU flows" · Conversion to regulatory report: 73% · False positive rate: 8% · No drift detected in the last 12 weeks · High reliability: act with confidence.
ISA is the agent’s double-check before automatic escalation. If the traffic light were amber, the agent would not act even if the score is high — it would route to the analyst with full context.
ISA
4. What-if. Which signal drives the alert
Remove structuring signal: score drops to Medium (0.61) · Remove jurisdiction signal: remains High (0.82) · The alert holds even without the main factor.
Simulation analysis evaluates alert sensitivity signal by signal: it confirms what is determinant and what is secondary before the agent executes or routes.
What-if
5. CF. Minimal evidence that would change the agent’s decision
Without the network layering pattern (7 linked accounts): score drops to Medium (0.67) · the agent would not auto-escalate — it would route to level-1 analyst for review · Without the structuring signal AND without the linked network: score drops to Low (0.41), case can be closed · The structuring + network combination is what sustains auto-escalation; neither signal alone would justify it
CF is not sensitivity: it is the causal chain a level-2 analyst needs to defend the SAR to the regulator. It is not "the score was 0.94" — it is "these two combined signals justify escalation, and without either one the decision would be different".
CF
6. Agent decision
The agent escalates — no level-1 analyst intervention
Signals hold and reliability is total · Auto-escalates to a level-2 analyst · Generates SAR draft with full narrative and linked evidence · Traceability exportable to PDF/CSV · Compliant with AML supervision requirements and GDPR Art. 22 · Level-2 analyst reviews and signs in minutes, not from scratch
or, if signals do not hold (CF confirms none is sufficient on its own)
→ The agent closes automatically: exportable full reasoning traceability · regulator-ready rationale · no analyst intervention · the closure is documented and auditable
or, if reliability is lower or ISA flags uncertainty
→ The agent routes to a level-1 analyst: enriched alert with signal-by-signal explanation, linked network analysis, segment ISA, and partial SAR draft · The analyst decides whether to escalate or close with full context in minutes
The agent is not biased towards escalation: it has criteria both ways. It escalates when signals hold and reliability guarantees it. It closes when they do not, with the same regulatory traceability. It routes to a human when there is real uncertainty. The agent does not replace the level-2 analyst’s signature because regulation requires human accountability for SAR closure — what it removes is all the work before that signature.
Action · PDF
Estimated portfolio impact · 20,000 alerts/year
−75%
Investigation time per alert
From 30–45 min to under 10 min · The agent prepares the file, the analyst decides
+60%
Alerts resolved with no human intervention
The agent acts autonomously in full-reliability cases
€0
Regulatory report preparation cost
SAR generated automatically · Audit trail ready for AML supervisor and GDPR Art. 22

Two paths. One platform.

Whether you already have AI/ML models or you're starting from scratch, AyGLOO adapts to you.

You already have AI/ML deployed

AyGLOO connects as a prescriptive decision layer on top of your existing models: SaaS or on-prem, any ML/AI model via API, use-case-specific configuration, minimal disruption.

You don't have AI/ML yet

We build the predictive model together with the prescriptive layer embedded from day one: end-to-end from data to actionable decisions, with explainability and governance built-in.

Realistic timeline

PHASE 1

2–3 weeks

Scoping & data assessment

PHASE 2

6–8 weeks

Working pilot

PHASE 3

10–14 weeks

Production rollout

Timelines based on standard single-use-case deployment. Multi-use-case programs follow a phased roadmap.

Do you already have models in production, but not automated decisions?

In 45 minutes, we’ll identify which decisions you can automate with the models and data you already have.