Enterprise IT leaders want to build an AI modernization capability that works like a factory – standardized processes with best-in-class tools for each stage, delivering predictable ROI across the portfolio. Yet using agentic AI on real-world applications is falling short of expectations.

The barrier isn’t AI’s ability to write code and tests – it’s AI’s ability to understand applications.

That’s why today, we’re announcing Swimm 2.0, the understanding platform serving both sides of your modernization program: the AI agents that need verified context to execute from, and the business teams that need to govern how applications behave and control modernization decisions.

Swimm 2.0 is the foundation for repeatable, predictable outcomes across your entire enterprise legacy footprint.

AI agents aren’t good enough at understanding legacy applications

Every stage of modernization – from initial analysis through planning, requirements development, forward engineering, and test generation – depends on accurate, complete understanding of how applications actually behave. When that understanding is wrong or incomplete, every downstream step inherits the risk.

AI is remarkably capable at code generation, test creation, and transformation tasks – when it has accurate context to work from. The problem, however, is that AI tools are not reliable at producing that context in the first place, especially for complex legacy applications.

For example, we tested Claude Code on a government COBOL codebase. The results: only 35% code coverage, 70% accuracy, and outputs that varied by 42% between runs. The same codebase also produced a different explanation of how the application behaved on each attempt. This is a structural limitation of how LLMs process complex, interconnected legacy code – not a failing of any single model.

End-to-end agentic approaches compound the problem. When understanding errors feed directly into planning, code generation, and testing without a checkpoint, mistakes propagate through the entire pipeline. And even if you could isolate the understanding stage to fix it, debugging an LLM’s reasoning across millions of lines of interconnected code is a Sisyphean task.

A production AI modernization factory cannot run on best-attempt understanding. It needs deterministic results that hold up under scrutiny and don’t change between Tuesday and Thursday.

Swimm 2.0: Understanding for the agentic age

Our focus over the years has been making Swimm the most accurate solution for legacy and mainframe application understanding. Swimm’s deterministic analysis derives application behavior directly from source code – extracting business rules, decision logic, data flows, and process dependencies across your entire codebase. AI is then used only as a translation layer into business language, which lets the underlying understanding layer reach 100% accuracy.

The result: customer evaluations show up to a 90% reduction in the time manual reverse engineering requires. What remains for teams is lightweight validation and adding the organizational context that doesn’t live in the code.

Swimm 2.0 adds business control, collaboration, and agentic usage on top of our understanding layer to enable end-to-end understanding for AI modernization – enabling the factory.

New in Swimm 2.0:

Business-focused navigation

Swimm arranges applications by domains – collections of processes aggregated by business function. Your team navigates from business capability to individual processes to specific code steps through a platform designed to enable non-technical stakeholders.

Collections: plan and govern modernization

Collections bring together the relevant parts of a modernization plan into a single context – combining business requirements, validated understanding, and technical dependencies. Everything in a collection is exposed as context to AI agents, so the planning and validation work your team does directly feeds downstream execution.

Every rule traced to source code

When a business rule says “accounts over 90 days receive a penalty rate,” your team can see exactly which lines of code implement that logic.
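Conceptually, rule-to-source traceability pairs each extracted rule with pointers to the code that implements it. Here is a minimal sketch of what such a record might look like – the rule ID, file name, and field names are illustrative, not Swimm’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class SourceRef:
    """A pointer to the code span that implements part of a rule."""
    file: str
    start_line: int
    end_line: int

@dataclass
class BusinessRule:
    """An extracted business rule with its supporting code locations."""
    rule_id: str
    description: str
    sources: list[SourceRef]

# Hypothetical example: a penalty-rate rule traced to a COBOL program.
rule = BusinessRule(
    rule_id="AR-017",
    description="Accounts over 90 days past due receive a penalty rate",
    sources=[SourceRef("ARCALC.cbl", 412, 437)],
)

# Verifying the rule is then a lookup, not a codebase-wide search.
for ref in rule.sources:
    print(f"{rule.rule_id}: {ref.file} lines {ref.start_line}-{ref.end_line}")
```

The key property is that every statement about behavior stays auditable: a reviewer can jump from the business-language claim straight to the lines that prove it.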

Structured business rules

Business analysts can access detailed business rules, organized by business function, without relying on scarce subject-matter experts.

MCP server

AI agents access verified, organized context directly from Swimm. Instead of generating understanding from raw code, agents consume the business rules, dependencies, and organizational context your team has already validated – as just-in-time context for each modernization task.
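The shift is from "agent reads raw code" to "agent requests validated context per task." As a toy illustration – the task names, context shapes, and lookup function below are hypothetical stand-ins, not Swimm’s actual MCP interface – a just-in-time context lookup might behave like this:

```python
# Illustrative only: a toy stand-in for a tool that serves validated
# context to an agent, scoped to one modernization task at a time.
VALIDATED_CONTEXT = {
    "penalty-rate": {
        "rules": ["Accounts over 90 days past due receive a penalty rate"],
        "dependencies": ["ARCALC.cbl", "ARUPDT.cbl"],
    },
    "statement-generation": {
        "rules": ["Statements are issued on the last business day of the month"],
        "dependencies": ["STMTGEN.cbl"],
    },
}

def get_context(task: str) -> dict:
    """Return just-in-time, human-validated context for one task,
    instead of handing the agent the entire raw codebase."""
    return VALIDATED_CONTEXT.get(task, {"rules": [], "dependencies": []})

ctx = get_context("penalty-rate")
print(ctx["rules"][0])
```

Because the agent only ever sees context a human has already validated, understanding errors can’t silently re-enter the pipeline at execution time.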

Broad language support

Swimm supports legacy and mainframe languages including COBOL with ancillary technologies like CICS and JCL, along with modern legacy-adjacent languages. The platform is built to accommodate the custom frameworks enterprises have built into their codebases over decades.

The vision: your AI modernization factory

Modernization should be measured by success rate, time to completion, and cost. Manufacturing taught us – and DevOps later borrowed the lesson – that the only way to improve these metrics is to standardize, then optimize.

Thus, whether you run modernization internally or through system integrator partners, the goal is the same: best-in-class tools at each stage and a standardized approach for each modernization pattern (rehost, replatform, refactor, rearchitect, rebuild, replace, retire).

With Swimm 2.0, understanding becomes a standardized process with predictable results for each application. That lets AI agents be applied and optimized where they excel – code and test generation – without propagating understanding mistakes.

Understanding that survives modernization

Swimm’s understanding layer stays accurate as codebases evolve – including code generated by AI during forward engineering. Humans maintain full visibility and control over how applications behave, while AI agents get the verified context they need to operate efficiently.

The understanding doesn’t decay after modernization. It updates as the codebase changes, becoming the foundation for ongoing development rather than a planning artifact that gets filed away.

Our benchmarking shows 75% time savings and a 61% cost reduction when AI agents work from Swimm’s understanding layer rather than raw code alone, driven by large gains in token efficiency and indexing. These savings compound at enterprise scale and unlock significant value in existing AI investments.
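To make the compounding concrete, here is a back-of-envelope calculation – the portfolio size and per-application cost are hypothetical figures, only the 61% reduction comes from the benchmark above:

```python
def portfolio_savings(apps: int, cost_per_app: float,
                      reduction: float = 0.61) -> float:
    """Back-of-envelope: total spend saved across a portfolio at a
    given per-application cost reduction (default: 61%)."""
    return round(apps * cost_per_app * reduction, 2)

# Hypothetical portfolio: 200 applications at $50k of agent compute each.
print(portfolio_savings(200, 50_000))  # → 6100000.0
```

At a few hundred applications, even this rough arithmetic lands in the millions – which is why per-task efficiency gains matter at portfolio scale.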