AI can’t stay a black box – it’s time to understand why it makes every decision.

3 min read

That idea became the central theme of the Winter School on Causality and Explainable AI at Sorbonne Université.

Last month, I had the privilege to join the Winter School on Causality and Explainable AI, hosted at Sorbonne Université. It was an inspiring week that brought together some of the world’s leading researchers in causality, explainability, and responsible AI — including professors from UCL, TUM, Charité Berlin, Inria, and DeepMind.

And honestly? It was amazing.
Not only because of the depth of the lectures, but also because of the people. Despite the incredible level of expertise in the room, there was no sense of hierarchy — professors, PhD students, and industry researchers all exchanged ideas on equal footing. After finishing their own lectures, world-renowned scientists sat in the audience, took notes, and asked questions just like everyone else.
Add to that the welcoming atmosphere, great campus energy, and (yes!) fantastic food — and it became a week to remember.

What I Took Away for Our Work at Asseco Platform

At Asseco Platform, we train and deploy vision models that support clients across FMCG and Pharma with advanced Image Recognition solutions, helping ensure products are available, properly displayed, and effectively promoted at the point of sale.

What I learned at the Winter School will directly influence how we explain, train, and improve these models.

One key message echoed through many lectures:

“We can’t treat AI as a black box anymore – we must understand why it makes each decision.”

That principle aligns perfectly with our current R&D direction. In the coming months, we'll start adapting explainability techniques discussed during the school, such as concept bottleneck models (CBMs) and structure-aware recourse, to make our visual recognition systems more transparent and interpretable.
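
To give a flavour of the concept-based direction, here is a minimal PyTorch sketch of a CBM head on top of a vision backbone. Everything in it, from the concept names to the class count, is an illustrative assumption rather than a description of our production architecture: the model first predicts a small set of human-readable concepts, and the final label is computed only from those concepts.

```python
# Minimal sketch of a concept bottleneck model (CBM) for image recognition.
# Concept names, class count, and backbone are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

CONCEPTS = ["price_tag_visible", "facing_front", "top_shelf"]  # hypothetical concepts
NUM_CLASSES = 50                                               # hypothetical number of SKUs

class ConceptBottleneckModel(nn.Module):
    def __init__(self, num_concepts: int, num_classes: int):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any vision backbone (torchvision >= 0.13)
        backbone.fc = nn.Identity()                # strip the original classification head
        self.backbone = backbone
        self.concept_head = nn.Linear(512, num_concepts)        # predicts interpretable concepts
        self.label_head = nn.Linear(num_concepts, num_classes)  # label computed from concepts only

    def forward(self, x):
        features = self.backbone(x)
        concept_logits = self.concept_head(features)   # the interpretable bottleneck
        concepts = torch.sigmoid(concept_logits)
        class_logits = self.label_head(concepts)
        return concept_logits, class_logits

# Joint training: supervise the concepts and the final label together,
# so every prediction can be traced back to the concepts that drove it.
model = ConceptBottleneckModel(len(CONCEPTS), NUM_CLASSES)
images = torch.randn(4, 3, 224, 224)                            # dummy batch
concept_logits, class_logits = model(images)
concept_targets = torch.randint(0, 2, (4, len(CONCEPTS))).float()
label_targets = torch.randint(0, NUM_CLASSES, (4,))
loss = (nn.BCEWithLogitsLoss()(concept_logits, concept_targets)
        + nn.CrossEntropyLoss()(class_logits, label_targets))
loss.backward()
```

Because the label head sees only the concept layer, an analyst can inspect the predicted concepts, or override one at test time, and immediately see how the final decision changes.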

In practice, adopting these techniques means two things:

  • Smarter training – We'll design our architectures more consciously, using causality-inspired approaches to ensure models truly learn what matters in the data (see the short sketch after this list).
  • Better quality control – By understanding the “why” behind each prediction, we’ll be able to reduce false recognitions and improve the overall accuracy and reliability of our solutions.
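
The sketch below is one example of such a causality-inspired penalty, in the spirit of Invariant Risk Minimization: it penalises a classifier whose optimal scaling differs across environments (different stores, lighting conditions, or camera setups), nudging the model toward features that stay predictive everywhere. The environment split, the `irm_penalty` helper, and the weighting are hypothetical illustrations, not part of our current pipeline.

```python
# Hypothetical sketch of a causality-inspired penalty in the spirit of IRMv1.
# Environment labels, batch sizes, and the lambda weight are illustrative assumptions.
import torch
import torch.nn.functional as F

def irm_penalty(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Squared gradient of the risk w.r.t. a fixed classifier scale of 1.0:
    # a non-zero gradient means this environment would prefer a rescaled
    # classifier, i.e. the current one is not optimal in every environment at once.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2)

# Tiny demo with random data from two hypothetical environments (e.g. store A, store B).
lam = 1.0
total_loss = torch.tensor(0.0)
for _ in range(2):
    logits = torch.randn(8, 5, requires_grad=True)   # stand-in for model outputs
    labels = torch.randint(0, 5, (8,))
    total_loss = total_loss + F.cross_entropy(logits, labels) + lam * irm_penalty(logits, labels)
total_loss.backward()
```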

Ultimately, these improvements will make our Image Recognition models not only perform better, but also explain themselves better – a crucial step toward more trustworthy AI in commercial applications.

A Personal Reflection

Beyond the technical insights, Sorbonne left me thinking about the beauty of AI research itself.
The lectures weren’t just about algorithms – they were built around proofs, reasoning, and the mathematical elegance behind machine learning. We didn’t just accept what we “knew”; we challenged it, rebuilt it, and proved it again — this time, more rigorously.

It reminded me why I love this field. AI isn’t just about performance metrics. It’s about understanding, discovery, and continuous questioning.

And yes – I’ll definitely come back. Maybe even for a PhD one day. The atmosphere of curiosity and open collaboration at Sorbonne was something truly special.

Closing Thought

Events like the Winter School show where AI is heading – toward systems that are not only intelligent but also explainable.
And for us at Asseco Platform, this is a direction we’re fully embracing.

Causality helps us understand how things work.
Explainability helps us trust them.
Together, they help us build better AI.

