Artificial Intelligence
Enterprise Architecture

By Guilhem Barroyer

Artificial Intelligence Explained: A Modern Overview

Artificial Intelligence (AI) refers to the set of technologies that enable a system to learn, reason and adapt in order to replicate certain human cognitive abilities. It relies on algorithms capable of perceiving an environment, interpreting signals, and acting to maximise the probability of reaching a goal.

In other words, an AI is not just a calculator: it is a decision-making system—sometimes reacting in milliseconds, sometimes drawing on millions of examples.

Its behaviour is often compared to that of a chess grandmaster: a strategist who observes patterns, evaluates countless scenarios, anticipates consequences, and proposes optimal moves. Like an expert whispering the next “likely plays”, AI brings speed, consistency, and the ability to see further than what a human alone could infer.
This metaphor resonates particularly well in enterprise architecture:

AI gives organisations an expanded awareness of their own systems. It highlights invisible dependencies, accelerates understanding, and enables simulation before action.

A long history of artificial intelligence

Looking back at the evolution of AI reveals a clear pattern: the history of artificial intelligence is far older than ChatGPT.
It did not emerge from a single breakthrough, but from a succession of advances aimed at understanding, formalising and automating aspects of human reasoning. This trajectory can be read in three major phases.

AI Milestones

1. From calculating machines to reasoning machines

The idea of delegating part of human reasoning to machines dates back to the 19th century.
With Charles Babbage and Ada Lovelace, logic became executable: a machine could chain instructions to simulate simple reasoning.

A century later, Alan Turing posed the foundational question “Can machines think?”, proposing a test to evaluate their ability to imitate human behaviour.

This period established two intuitions that shaped the entire history of AI:

  • thought can be formalised,
  • a machine can execute part of it.

In the 1950s-1970s, these ideas materialised in symbolic AI and the first expert systems, which attempted to encode human logic into rule-based structures.

2. Learning: letting knowledge emerge rather than programming it

From the 1980s onwards, a major shift occurred: AI stopped being a collection of rules and became a process of learning.
The work of Hinton, LeCun and Rumelhart demonstrated that neural networks could learn from data.

But infrastructure at the time was limited: it took until the 2010s, with the rise of the cloud, GPUs and massive datasets, for deep learning to reach its full potential.

This convergence gave rise to modern AI:

  • image recognition,
  • machine translation,
  • speech synthesis,
  • image generation with GANs.

At that stage, AI could recognise, classify and predict, but not yet understand.

3. Context and language: AI becomes an interface

The real breakthrough came in 2017 with Google’s Transformer architecture and its attention mechanism, which enables models to identify the most relevant elements of a sequence.
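
For reference, the core of this mechanism can be summarised by the standard scaled dot-product attention formula from the original 2017 paper, where Q, K and V are the query, key and value matrices derived from the input sequence and d_k is the key dimension (a deliberately simplified view: real models stack many such layers, each with several attention heads):

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
\]

Intuitively, each element of a sequence asks “which other elements matter for interpreting me?” and receives a weighted combination of their representations.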

This innovation enabled:

  • large language models (LLMs),
  • contextual understanding,
  • coherent writing,
  • multimodal generation (text, image, code).

From there, AI moved out of research labs, gained consistency, memory and versatility, and became an interface: a tool capable of interacting, reformulating, reasoning and acting.

Recently, progress has accelerated at an unprecedented pace:

  • 2020 - GPT-3: large-scale text generation.
  • 2021 - CLIP & DALL·E: rise of modern multimodality.
  • 2022 - ChatGPT: conversational AI goes mainstream.
  • 2023 - Claude, Gemini, Copilot: AI becomes a work companion.
  • 2024 - European sovereign models: growing need for control and transparency.
  • 2025-2026 - AI agents: towards systems that act and orchestrate workflows.

In a few years, AI evolved from an analytical tool to an interaction technology, then to a transversal cognitive layer embedded in processes and applications.

Convergence, models and the rise of agents

Recent progress in AI stems from the convergence of three technological shifts:

1. Computing power: the GPU explosion

Public cloud and dedicated GPUs (NVIDIA, AMD) enabled the training of models with hundreds of billions of parameters, unifying language, vision, code and more.

2. Data: massive and diverse corpora

The availability of vast, heterogeneous and contextual datasets greatly expanded the scope of what AI can learn.

3. Modelling: the pivotal role of Transformers

Since 2017, the attention mechanism has allowed models to interpret, summarise, reformulate and generate with unprecedented quality.

Generative AI

This convergence led to models capable of producing text, images, code and structured explanations. The key shift lies in synthesis, not just prediction. AI becomes a production system. And above all, natural language becomes the primary interface.

Since 2022, these capabilities have reached everyday life and enterprise workflows through conversational interfaces.

As of today (19/11/2025), several major players shape the AI landscape, each with its own philosophy, strengths and trade-offs:


OpenAI - GPT-5 / GPT-5.1

Excellent conversational quality, strong multimodality, and deep integration within the Microsoft ecosystem.
Limitations: closed-source model, high cost, and strong dependency on U.S. cloud infrastructure.

Anthropic - Claude 3 Opus / Sonnet / Haiku

Known for its safety, long-form reasoning, and “Constitutional AI,” an approach designed to keep behaviour controlled and predictable.
Limitations: slightly less performant in code generation.

Google - Gemini 3.0

Native multimodality (text, image, video) with strong integration into Google Search and Workspace.
Limitations: variable stability and less predictable behaviour in advanced use cases.

Mistral (France) - Mixtral 8x7B / 8x22B / Mistral Large

Fast, lightweight European models, open-source or sovereign, highly customisable and efficient.
Limitations: require tuning and more internal expertise to reach optimal performance.

Meta - LLaMA 3.1 / 3.2

Open-source, retrainable, with a large derivative ecosystem and an active community.
Limitations: lower coherence and robustness in production environments.

DeepSeek (China) - V3 / R1

Strong performance in mathematical and logical reasoning, with very low inference cost.
Limitations: limited transparency regarding training data.

xAI - Grok 3

Able to access real-time data from X (Twitter) and adopt a more free-form conversational tone.
Limitations: limited documentation, and the model remains relatively unstable.

The visible tip of the iceberg

Generative AI dominates public attention but represents only one subset of the broader AI ecosystem.
To understand its role, it is useful to distinguish between:

  • generative AI (produce),
  • interpretive AI (understand),
  • perceptive AI (see, hear),
  • symbolic AI (rule-based reasoning),
  • orchestration (connect models to business tools).

The Place of LLMs Within the AI Landscape


A complete architecture of modern AI

While generative AI has made artificial intelligence visible to the general public, it represents only a fraction of the field.
Modern AI forms a coherent ecosystem, organised into three complementary layers: functional domains, orchestration layers, and technological ecosystems.

This architectural perspective is essential: it clarifies how AI integrates into an information system, where it connects, what it consumes, what it produces, and what conditions its reliability, governance and security.

Functional domains: AI’s cognitive capabilities

Functional domains group the major “families of skills” an AI system can perform, ranging from perception to action.

  • Natural Language Processing (NLP): understanding, summarisation, extraction, translation, generation.
  • Computer Vision (CV): image and video analysis, object detection.
  • Generative AI (GenAI): creation of text, images, code, audio.
  • Reasoning and Decision-Making: symbolic systems, logic engines, hybrid models.
  • Autonomous Action (Agentic AI): planning, API-based execution, multi-step task chaining.

These domains act as cognitive capabilities, much like business capabilities within an organisation.
Together, they form a mental map of what an AI system can actually do.
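
As an illustration of the last domain above (Agentic AI), the sketch below shows the plan-act-observe loop that agent-style systems typically implement. It is a minimal Python sketch under stated assumptions: fake_model stands in for a call to a language model and lookup_invoice_policy for a business API; neither name refers to a real library.

```python
# Minimal, illustrative agent loop: the "model" plans one step at a time,
# the loop executes the chosen tool and feeds the observation back,
# until the model produces a final answer or the step budget runs out.

def fake_model(question: str, observations: list[str]) -> dict:
    # Stand-in for an LLM call that decides the next action.
    if not observations:
        return {"action": "lookup_invoice_policy", "input": question}
    return {"action": "final_answer",
            "input": f"Based on policy: {observations[-1]}"}

TOOLS = {
    # Stand-in for an API-based tool the agent is allowed to call.
    "lookup_invoice_policy": lambda _q: "Invoices are approved within five days.",
}

def run_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        decision = fake_model(question, observations)  # 1. plan
        if decision["action"] == "final_answer":       # 2. stop when done
            return decision["input"]
        tool = TOOLS[decision["action"]]               # 3. execute a tool
        observations.append(tool(decision["input"]))   # 4. observe, iterate
    return "No answer within the allotted steps."

print(run_agent("How long does invoice approval take?"))
```

Production agent frameworks wrap this same loop with memory, guardrails and audit trails, which is exactly where the governance questions discussed later come into play.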

Orchestration layers: connecting, contextualising, activating

An AI system is not useful simply because it exists; it is useful because it fits into a workflow, a tool, or a process.
Orchestration layers ensure this integration.

  • APIs & connectors: link models to business applications.
  • Orchestration frameworks (LangChain, LlamaIndex, Dust…): manage memory, context, data retrieval, and action sequencing.
  • RAG systems (Retrieval-Augmented Generation): connect models to internal documentation to reduce hallucinations and improve factuality.
  • MCP and standardised protocols: give AI governed access to internal applications and tools.

This is where AI becomes truly effective: when it connects to the information system, not when it remains an isolated cloud model.
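
To make the RAG pattern listed above more concrete, here is a minimal, illustrative Python sketch of the retrieve-then-augment flow. Everything in it is an assumption made for the example: the toy embed function replaces a real embedding model, the in-memory list replaces a vector database, and the resulting prompt would normally be sent to the chosen LLM.

```python
def embed(text: str) -> set[str]:
    # Toy "embedding": a bag of lower-cased words. A real system would call
    # an embedding model here and return a dense vector instead of a set.
    return {word.strip(".,?!").lower() for word in text.split()}

def similarity(a: set[str], b: set[str]) -> float:
    # Word-overlap (Jaccard) score, standing in for vector similarity.
    return len(a & b) / (len(a | b) or 1)

# 1. Index internal documentation (normally chunked and stored in a vector database).
documents = [
    "Invoices are approved by the finance department within five days.",
    "The CRM exposes a REST API secured with OAuth2.",
    "Architecture reviews are required before any production deployment.",
]
index = [(doc, embed(doc)) for doc in documents]

# 2. Retrieve: pick the document most similar to the user question.
question = "Who approves invoices and how long does it take?"
best_doc, _ = max(index, key=lambda item: similarity(embed(question), item[1]))

# 3. Augment: ground the model's answer in the retrieved context.
prompt = (
    "Answer using only the context below.\n"
    f"Context: {best_doc}\n"
    f"Question: {question}"
)
print(prompt)  # this prompt would then be sent to the chosen language model
```

Even in this toy form, the value of the pattern is visible: the model is asked to answer from governed internal content rather than from its training data alone, which is what reduces hallucinations and keeps responses traceable.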

Technological ecosystems: openness, performance, sovereignty

The AI landscape is structured around three major technological approaches, each offering distinct advantages:

  • Proprietary models: focused on performance, stability and integration.
    Examples: OpenAI, Anthropic, Google.
  • Open-source models: focused on transparency, customisation and local deployment.
    Examples: Meta (LLaMA), Mistral, Falcon.
  • Sovereign models: focused on data control and strategic independence.
    Examples: Aleph Alpha, European initiatives.

A still-fragile ecosystem

Despite rapid progress, AI remains a sensitive technology whose structural limitations must be considered from the outset.

Models learn from imperfect data: they can therefore reproduce biases or generate convincing but incorrect responses (hallucinations). This is not a malfunction; it is a direct consequence of their statistical nature. These risks are compounded by issues of security and governance.
Models may be manipulated through targeted attacks (prompt injection, rule bypassing), and it is not always possible to trace the source of a response or guarantee the compliance of an agent executing multiple steps.
Technological dependency is another key concern: most advanced capabilities are concentrated among a few actors, raising questions of sovereignty, cost and operational control.

Finally, AI only creates value when the organisation is ready to support it: coherent data, clear processes, explicit responsibilities. Without this alignment, even the best models remain limited.

Conclusion

Recent AI evolution shows a consistent pattern: the more capable the models become, the more they depend on the context in which they operate.
AI is no longer an isolated prediction engine; it is a system that interacts, retrieves information, and fits into organisational workflows — especially with the rise of autonomous agents.

This shift turns AI into a transversal layer of the information system, able to observe, connect and act. But this capability only works if the underlying system is readable and coherent: clean data, governed references, clear processes.

An AI is never better than the environment it integrates with. (See our article on AI-Readiness)

The challenge is therefore not to “add AI on the side”, but to understand where and how it connects to the existing landscape. This is the natural bridge between AI and enterprise architecture: representing dependencies, clarifying interfaces, identifying impacts.

This is where Boldo plays its role: providing a visual framework to map cognitive capabilities, define integration points and guide decisions. As AI becomes a core actor of the information system, architecture becomes the foundation of control, and cartography the essential starting point for turning AI into a strategic asset rather than an operational risk.
