
By Sylvain Melchior, Guilhem Barroyer
Becoming AI-Ready
AI adoption is no longer a forward-looking topic: it has already become widespread, often without organisations fully measuring its impact.
According to the Stanford Human-Centered AI Index, 78% of organisations already use AI in some form, yet only 13% consider themselves truly ready to benefit from it at scale.
This gap creates a paradox: usage grows while control stagnates. The result is Shadow AI: unregulated usage, often invisible to IT, that exposes the organisation to risks of data leakage and misalignment.
AI Readiness: What the Field Shows
Only 13% of companies report being fully ready to leverage AI’s potential. (Cisco - 2024)
74% of organisations still fail to demonstrate concrete value from their AI initiatives due to gaps in governance, prioritisation or skills. (BCG - 2024)
AI does not lack potential; what it lacks is a framework.
And that is exactly what mapping brings back into the picture: it enables organisations to become AI-Ready.
AI Reveals the Structural Weaknesses of the Information System
AI spread extremely fast: office copilots, writing assistants, summarisation tools, automated analysis…
But this diffusion, without a proper framework, exposes several vulnerabilities.
A Loss of Visibility
AI tools are used without inventory, supervision or any governance. Office copilots, writing assistants, browser extensions: adoption is fast, yet completely opaque to IT.
Result: it becomes impossible to know who uses what, and with which data.
This grey zone now has a name: Shadow AI, "the unsanctioned use of any artificial intelligence (AI) tool or application by employees or end users without the formal approval or oversight of the information technology (IT) department".
Data Leaks
The most common scenario: employees copy code, contracts, customer information or ticket content directly into external platforms (ChatGPT, Gemini, Claude…).
This risk is no longer theoretical: several companies have already pulled back, sometimes abruptly.
Samsung banned the use of generative AI
In May 2023, Samsung restricted its employees from using ChatGPT after several “abusive” uses involving sensitive information pasted into the tool.
The company explicitly banned any upload of professional information to these platforms when employees work outside corporate premises.
(Le Monde - 2023)
This type of incident illustrates a simple reality: AI lowers the barrier to accessing data, and mechanically expands the surface of risk.
Decisions Without Traceability
AI also produces outputs that are impossible to explain afterwards.
In many organisations, no one knows:
- where the information came from,
- which data was used,
- which model produced the answer,
- which context influenced the output.
This absence of a responsibility chain creates major problems for compliance, security… and collective trust.
Building a Shared Foundation
AI entered organisations through individual usage: each person tests a copilot, an agent, a model.
This abundance drives innovation, but weakens collective control: low visibility, porous security, diffuse responsibility.
In other words, organisations experiment a lot… but learn very little at enterprise scale.
In the European financial sector, nearly 90% of institutions have integrated AI at some level, yet most remain at an experimental stage. (EY Survey - 2024)
The problem is not AI itself but the way it spreads: without coordination, transparency or governance.
Absorbing AI Without Fragmentation
Models create new flows, new connectors, new dependencies.
Without a shared foundation, each use case becomes its own mini-system: a model, its data, its integration… and its risks.
Local solutions multiply, but the global architecture fragments.
With a shared foundation, AI connects to coherent structures:
- documented APIs,
- standardised events,
- qualified data,
- identified processes.
Instead of stacking isolated initiatives, organisations build a platform capable of hosting AI without drifting.
This foundation also enables something essential: industrialising what works.
Today, too many AI successes remain stuck at the POC stage, not because the technology is missing, but because there is no stable environment where they can be reproduced, audited, improved, and deployed at scale.
With a shared foundation, every AI capability stops being a one-off: it becomes part of an architecture built to absorb, trace, and orchestrate intelligence.
A Common Language Between IT, Data, Business and Compliance
AI forces collaboration between teams that historically do not share priorities or tools.
- IT thinks in terms of integration and reliability
- Data teams think in terms of models and quality
- Business thinks in terms of operational value
- Compliance thinks in terms of risk and accountability
Without a shared foundation, these perspectives collide: what is valuable for one becomes risky for another; what is technically feasible may not be industrialisable; what is promising may be impossible to audit.
A shared reference (a map of data, models, flows, impacts) reconciles these views and turns AI into a collective project, not a constellation of isolated tests.
The Five Foundations of an AI-Ready Information System
Being AI-Ready is not about adding one more layer of technology. It means rethinking the foundations so that AI can integrate without creating risks, silos or unpredictable behaviours.
An AI-Ready information system is connected, governed and modelled: its data, processes and decisions follow a shared logic.
1. Open and Modern Infrastructures
AI does not live in an isolated bubble: it relies on the entire application ecosystem.
It requires fluid, real-time exchanges, scalable environments and secure data exposure.
This means evolving rigid architectures into interconnected platforms: hybrid or multicloud environments, event-driven mechanisms, clear and documented APIs.
Above all, mature organisations know where their technical dependencies lie, because they have mapped them.
2. Governed and Traceable Data
AI is only as good as the data that feeds it. Quality, traceability, and contextualisation become structural imperatives.
Building a reliable data foundation means harmonising sources, clarifying flows, and ensuring continuity between operational, analytical and training data.
This is what separates an AI that shines in a demo from an AI that performs reliably in production.
3. Interdisciplinary Governance
AI crosses organisational boundaries. It involves IT, data, business, compliance, legal, HR, and none of these teams can own it alone.
Mature organisations deploy networked governance with clear responsibilities: who owns the model, who validates usage, who monitors drift, who handles incidents.
Governance is not a constraint; it is a trust framework.
4. A Culture of Understanding
AI success is not only infrastructural; it is cultural. Advanced organisations ensure that people understand what AI does, which data it uses, and how it shapes decisions.
Mapping plays a key role: it shows where AI intervenes, which flows it touches, and how it interacts with the information system.
It turns AI into a readable and governable subject.
5. Rethinking Security and Sovereignty
Every model, API or connector introduces new sensitivity points.
AI increases exposure: external dependencies, data location questions, opaque decisions, risks of leakage or unintended training.
Security must now be built-in from the start: control hybrid environments, know where sensitive data resides, choose model sovereignty levels, isolate experimentation environments.
The goal is not to restrict innovation, but to enable innovation without losing control.
Conclusion
AI can accelerate, automate and transform the organisation, but only if it rests on a solid, shared and governed foundation. An AI-ready information system is not one that multiplies models, but one that knows:
- where they connect,
- which data they consume,
- which processes they influence,
- and how to control them over time.
This is where mapping becomes decisive: it makes model behaviour tangible, exposes hidden dependencies, clarifies responsibilities and enables organisations to steer a constantly moving environment.
With Boldo, this understanding becomes visual, actionable and shareable, a backbone for orchestrating AI, documenting risks, aligning teams and building an architecture where innovation never compromises control.
The challenge is no longer to bring AI into the organisation, but to give it a framework clear, governed and adaptable enough to host intelligence while retaining full control.

