Artificial Intelligence · Enterprise Architecture

AI and Enterprise Architecture: For Now, We Mostly Have Questions

Sylvain Melchior

We'll be upfront: we don't fully know yet what AI will change about enterprise architecture.

It's not for lack of thinking about it. As the makers of an EA tool, it's a topic we've been turning over from every angle for the past two years. We read the same articles you do, test the same tools, and talk to architects who are experimenting in the field.

But if we're being honest, we're still at the stage where the questions are more interesting than the answers. And we think that's fine. Better, even: we think asking the right questions now is probably more useful than pretending to have the answers.

So here are ours.

Will AI Model for Us?

It's the most obvious question, and the one we hear the most.

With current LLM capabilities, you can imagine a scenario where an AI analyzes IT landscape data (logs, CMDBs, tickets, documentation) and automatically generates an application map. No more spending weeks interviewing teams and piecing flows together by hand.

It's an appealing picture. And technically, we're starting to see interesting things emerge. Prototypes that parse technical documentation to extract components and relationships. Tools that analyze network flows to detect undocumented dependencies.

But we still have a few questions.

Would an auto-generated map actually be understood by the teams who need to use it? One of the key challenges in EA is that the map needs to be a shared object, something people recognize themselves in, where they can say "yes, that's how it works" or "no, this flow is missing." If the map is generated by a machine, do we lose that co-construction dimension that gives it its value?

And then there's the abstraction question. Technical data (logs, network flows) tells a very granular story. But enterprise architecture often operates at a higher level: business domains, capabilities, processes. Can an AI make that abstraction leap? Or do we risk ending up with maps that are technically detailed but hard to read from a business perspective?
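To make the abstraction question concrete, here is a toy sketch of what "making the leap" involves: collapsing component-level flows into domain-level relationships. The component-to-domain mapping is exactly the part an AI would have to get right, and every name below is made up for illustration.

```python
# Hypothetical mapping from technical components to business domains.
COMPONENT_DOMAIN = {
    "svc-invoice-01": "Billing",
    "svc-invoice-02": "Billing",
    "db-customers": "Customer Management",
    "etl-reporting": "Analytics",
}

# Hypothetical granular flows, e.g. extracted from network logs.
TECHNICAL_FLOWS = [
    ("svc-invoice-01", "db-customers"),
    ("svc-invoice-02", "db-customers"),
    ("etl-reporting", "db-customers"),
]

def domain_level_flows(flows):
    """Collapse component-level flows into unique domain-level edges."""
    edges = set()
    for src, dst in flows:
        src_dom = COMPONENT_DOMAIN.get(src, "Unknown")
        dst_dom = COMPONENT_DOMAIN.get(dst, "Unknown")
        if src_dom != dst_dom:  # drop intra-domain noise
            edges.add((src_dom, dst_dom))
    return sorted(edges)

print(domain_level_flows(TECHNICAL_FLOWS))
```

The mechanics are trivial; the hard part is the mapping table itself, which encodes business knowledge that rarely exists in logs or CMDBs. That table is where the abstraction leap actually happens.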

We're not saying it's impossible. We're saying we haven't seen a convincing answer to these questions yet.

AI as the Architect's Assistant: Yes, But for What Exactly?

There's a second scenario, maybe more realistic in the short term: AI not as a replacement for the architect, but as an assistant.

Asking a question in natural language to your architecture map ("which applications depend on our customer database?"), requesting an impact analysis ("if we decommission this middleware, what breaks?"), getting an automatic summary of an application landscape for an architecture review board.

That we find promising. And it's something we're actively exploring at Boldo.
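The "if we decommission this middleware, what breaks?" question above can be framed as a reverse traversal of the dependency graph: invert the edges, then walk outward from the component. A minimal sketch, with made-up application names:

```python
from collections import deque

# Hypothetical dependency edges: "app depends on X" stored as app -> [X, ...].
DEPENDS_ON = {
    "billing-app": ["customer-db", "legacy-middleware"],
    "crm-portal": ["customer-db"],
    "reporting": ["billing-app"],
    "customer-db": [],
    "legacy-middleware": [],
}

def impacted_by(component: str) -> set[str]:
    """Return every application that transitively depends on `component`,
    i.e. what could break if it is decommissioned."""
    # Invert the edges so we can walk from the component to its dependents.
    dependents: dict[str, list[str]] = {}
    for app, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(app)

    impacted, queue = set(), deque([component])
    while queue:
        current = queue.popleft()
        for app in dependents.get(current, []):
            if app not in impacted:
                impacted.add(app)
                queue.append(app)
    return impacted

print(sorted(impacted_by("legacy-middleware")))
```

The traversal itself is easy; the value of an AI assistant would be in keeping the graph current and answering in natural language. Which is also where the trust questions below come from: the answer is only as good as the edges.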

But here too, there are open questions. The most important one might be about trust. When an architect does an impact analysis manually, they know which assumptions they made, which sources they used, what they verified and what they estimated. When an AI produces the same analysis, how do we make sure the architect understands what's reliable and what isn't?

The risk is over-reliance: getting a well-formulated answer and taking it at face value, when it's built on incomplete data or fragile inferences. In EA, an error in an impact analysis can have very concrete consequences for migration or investment decisions.

We think the right approach is probably one of transparency: the AI shows its sources, its reasoning, its areas of uncertainty. The architect stays in the loop, not as a passive validator, but as a critical partner. But honestly, we haven't seen many implementations that do this really well yet.
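One way to picture that transparency is structured output rather than free text: every answer carries its sources, its assumptions, and a confidence level the architect can challenge. This is a sketch of the shape, not any real tool's API, and all field names and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AssistedAnalysis:
    """An AI-generated analysis that exposes its provenance."""
    answer: str
    sources: list[str] = field(default_factory=list)      # where the data came from
    assumptions: list[str] = field(default_factory=list)  # inferred, not verified
    confidence: str = "low"                               # "low" | "medium" | "high"

    def render(self) -> str:
        lines = [self.answer, f"Confidence: {self.confidence}"]
        lines += [f"Source: {s}" for s in self.sources]
        lines += [f"Assumption: {a}" for a in self.assumptions]
        return "\n".join(lines)

result = AssistedAnalysis(
    answer="Decommissioning the middleware impacts 3 applications.",
    sources=["CMDB export 2024-05", "network flow logs"],
    assumptions=["flows with no traffic in 90 days treated as inactive"],
    confidence="medium",
)
print(result.render())
```

The point of the shape is that the assumptions list is never empty in practice, and surfacing it is what keeps the architect a critical partner rather than a passive validator.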

What Does This Change About Data Governance?

When you start plugging AI into an IT landscape, governance questions come up, and we hear them more and more from our customers.

What data feeds the model? Who validated that it's reliable, current, complete? What are the access rules? Is it okay to send this data to an LLM hosted in the US? Do the results generated by the AI become reference data themselves, or do they remain suggestions?

These are classic governance questions, but AI makes them much more urgent. Because AI consumes data at a scale and speed that no human process did before. And because AI outputs travel fast: a summary generated by a copilot can end up in an executive slide deck in under an hour.

The enterprise architect probably has an interesting role to play here. They have (in theory) the big picture view of data, flows, and responsibilities. But are current EA tools designed to integrate this AI dimension into governance? We're not sure. It's something we're thinking about.

Shadow AI Is Already Here

Something we hear more and more often when talking with CIOs and architects: AI is being deployed in organizations with or without architecture's involvement.

Business teams using ChatGPT to process internal data. Developers plugging AI APIs into applications without going through the architecture review. Managers feeding copilots with Excel files containing sensitive data.

It's very reminiscent of shadow IT in the 2010s: business units deploying SaaS without informing the IT department. EA was slow to react, and when it did, the response was often "control" rather than "support."

We wonder whether history will repeat itself with shadow AI, or whether this time architects will manage to position themselves earlier. Not to block, but to map: knowing where AI is being used, what data it consumes, and what risks that represents. It may be the most natural extension of the EA role in the coming years.

But once again, we're at observations, not certainties.

What About AI in the EA Tool Itself?

This is obviously the question that concerns us most directly.

As a vendor, it's tempting to put AI everywhere in the product. Auto-generated views, relationship suggestions, built-in chatbot, automatic summaries. And there are clearly use cases where that makes sense.

But we're careful not to fall into the same trap we described in our article about the electrician metaphor: plugging in the current before checking that the grid can handle it.

Our current questions:

Which AI features bring real value to the architect, and which ones are just "wow effect" that fades after the demo?

How do we make sure AI embedded in an EA tool is reliable on topics as critical as dependency analysis or compliance?

How do we keep the human at the center, so AI augments the architect's judgment rather than replacing it?

We're making progress on these topics. We're testing things. Some work well, others less so. We'll share more when we have more perspective.

Why We're Sharing All This

We could have written an article titled "5 Ways AI Is Transforming Enterprise Architecture." It would have been more comfortable, more shareable, more SEO-friendly.

But we preferred to be honest about where we are. Because we think our industry has a bit of a tendency to jump to conclusions when a topic is as hyped as AI. And enterprise architects, who are by nature people who think in systems and dependencies, deserve better than a list of promises.

The questions we've raised here don't have definitive answers today. And that's okay. What matters, maybe, is to keep them in mind as we move forward, rather than rushing ahead and reflecting afterward.


If these questions resonate with what you're experiencing on the ground, we'd love to hear about it. And if you want to see how we approach enterprise architecture (AI or otherwise), Boldo is open access.