Introduction
Machine learning is revolutionizing the insurance world. From pricing policies to detecting fraud, these tools promise faster, more accurate work. However, not everyone is convinced. Many professionals within insurance companies, especially those who are not technology experts, distrust these models because they don't understand them. Why does this happen? In this post, we explore where this distrust comes from, what the research says about it, and how the obstacle can be overcome.
The problem: Why do they distrust machine learning?
Imagine being handed a magic box that tells you how much to charge for a policy or whether a claim is suspicious. It sounds great, but there's one detail: you don't know how the box works. That is how many employees in the insurance sector feel about machine learning. According to the studies cited below, this distrust has three main causes:
- They are "black boxes": Complex models, such as neural networks, can make remarkably accurate predictions, but they don't explain how they arrive at them. A study by Kuo and Lupton (2020) notes that actuaries prefer traditional methods because they are easier to understand.
- Lack of technical skills: Many managers and workers lack training in advanced technology. A Dataiku article (2020) highlights that this knowledge gap is a major barrier in companies with legacy systems.
- Resistance to change: Moving from manual processes to something automated isn't easy. According to The Actuarial Club (2020), distrust grows when models replace what has always been done by hand.
What does science say?
Research confirms that the problem is not merely one of perception. A recent study in Scientific Reports (2025) on health insurance fraud detection explains that the most advanced models need explainable-AI tools such as SHAP or LIME to be understandable. Without these explanations, employees feel they are trusting something magical rather than something logical.
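To give a feel for what a tool like SHAP actually computes, here is a minimal sketch of exact Shapley values for a toy fraud score. Everything in it is hypothetical: the `toy_fraud_score` model, its feature names, and its weights are invented for illustration, and the real SHAP library approximates this calculation efficiently for large models rather than enumerating every feature coalition as done here.

```python
from itertools import combinations
from math import factorial

def toy_fraud_score(features):
    """Hypothetical claim-fraud score: two additive red flags plus an interaction."""
    score = 0.4 * features.get("high_amount", 0)
    score += 0.2 * features.get("new_customer", 0)
    # A high amount from a new customer is more suspicious than the sum of the parts.
    if features.get("high_amount") and features.get("new_customer"):
        score += 0.3
    return score

def shapley_values(model, instance):
    """Exact Shapley attribution: average each feature's marginal contribution
    over every possible coalition of the other features (missing = baseline 0)."""
    names = list(instance)
    n = len(names)
    contrib = {}
    for name in names:
        others = [m for m in names if m != name]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                without = {m: instance[m] for m in subset}
                with_i = dict(without, **{name: instance[name]})
                total += weight * (model(with_i) - model(without))
        contrib[name] = total
    return contrib

claim = {"high_amount": 1, "new_customer": 1}
attribution = shapley_values(toy_fraud_score, claim)
```

The useful property for building trust is that the attributions add up exactly to the model's output for the claim, so an adjuster can see how much each red flag contributed to the final score instead of taking the number on faith.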
Solutions: How to gain trust?
Fortunately, there are ways to close this gap. Here are three evidence-based ideas:
- Make it understandable: Using explainable artificial intelligence (XAI) can show how models make decisions. A ResearchGate article (2023) suggests this increases trust by making transparent what was previously a mystery.
- Train the team: Teaching employees about machine learning is key. KPMG (2023) found that 52% of insurance leaders see AI as the future, but getting everyone on board requires investing in training.
- Work together: Creating a culture where technical and business teams collaborate can reduce fear of change. Intelliarts (2024) highlights that this is already working in some companies.
Conclusion
Machine learning has the power to transform insurance, but only if professionals trust it. Science shows that lack of understanding is the major obstacle: models are complex, technical skills are scarce, and change is frightening. However, with clear explanations, training, and teamwork, insurers can overcome this barrier. The result? A more efficient industry ready for the future.

