SAP explains that enterprise AI governance protects profit margins by shifting from statistical estimates to precise, deterministic control.
If you ask a consumer-level AI model to count the words in a document, it will frequently be off by around ten percent. Manos Raptopoulos, Global President of Customer Success for Europe, APAC, Middle East & Africa at SAP, points out that in enterprise operations, the gap between nearly perfect and fully accurate is not a small step—it is a critical divide.
“The jump from 90% to 100% accuracy is not gradual. In our context, it is a matter of survival,” Raptopoulos states.
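The distinction is easy to see with a toy comparison: a deterministic routine gives the same exact answer every time, whereas a generative model only estimates. A minimal sketch (the document and count below are illustrative):

```python
def word_count(text: str) -> int:
    """Deterministic word count: same input, same answer, every time."""
    return len(text.split())

doc = "Enterprise systems demand exact answers, not estimates."
print(word_count(doc))  # always 7, never "roughly 7"
```

An LLM asked the same question returns a plausible number; the function above returns the correct one, which is the kind of determinism enterprise controls depend on.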
As companies deploy large language models into live production systems, Raptopoulos notes that the benchmarks for success have shifted to accuracy, governance, scalability, and measurable business outcomes.
The urgent issue for corporate leadership involves moving from passive tools to active digital agents—a shift Raptopoulos describes as the defining governance challenge. This topic will be a key focus for SAP at this year’s AI & Big Data Expo North America.
Today’s agentic AI systems can plan, reason, coordinate with other agents, and carry out workflows independently. Because these systems directly handle sensitive data and shape decisions at scale, Raptopoulos contends that failing to manage them with the same rigor applied to human employees puts the organization at serious operational risk. He cautions that unchecked agent proliferation will resemble the shadow IT problems of the last decade, but with far greater consequences.
His framework insists on implementing agent lifecycle management, setting clear limits on autonomy, enforcing policies, and maintaining continuous performance oversight as essential practices.
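Such a framework can be sketched in code. The following is a minimal, hypothetical illustration of the pattern (the class names, policy fields, and thresholds are assumptions for this sketch, not an SAP API): every agent carries an explicit autonomy limit, an action whitelist, and an audit log that records each decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPolicy:
    max_transaction_value: float   # autonomy limit: spend above this needs a human
    allowed_actions: set           # explicit whitelist of permitted actions
    review_interval_days: int      # lifecycle: periodic re-certification

@dataclass
class Agent:
    name: str
    policy: AgentPolicy
    audit_log: list = field(default_factory=list)

    def execute(self, action: str, value: float = 0.0) -> str:
        decision = "executed"
        if action not in self.policy.allowed_actions:
            decision = "blocked: action not whitelisted"
        elif value > self.policy.max_transaction_value:
            decision = "escalated: human approval required"
        # Continuous oversight: every decision is logged for audit.
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), action, value, decision)
        )
        return decision

policy = AgentPolicy(
    max_transaction_value=10_000,
    allowed_actions={"create_po", "route_invoice"},
    review_interval_days=90,
)
agent = Agent("procurement-agent", policy)
print(agent.execute("create_po", value=2_500))   # executed
print(agent.execute("create_po", value=50_000))  # escalated: human approval required
print(agent.execute("delete_ledger"))            # blocked: action not whitelisted
```

The point of the sketch is structural: autonomy limits and policy checks run before the action, and the audit trail is written regardless of the outcome.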
Merging modern vector databases—which capture the meaning behind enterprise language—with older relational systems requires significant engineering resources. Teams must carefully limit the agent’s reasoning process to prevent errors from disrupting financial or supply chain operations. Enforcing these tight controls increases computational delays and raises cloud computing costs, which can alter initial financial forecasts.
When an autonomous model depends on frequent, high-speed database queries to ensure reliable results, token expenses accumulate rapidly. Governance thus becomes a core engineering challenge, not just a compliance formality.
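The hybrid pattern described above, semantic retrieval followed by a deterministic lookup in the system of record, can be sketched in a few lines. Everything here is a toy stand-in: the embeddings are hard-coded three-dimensional vectors and the "relational table" is a dictionary, but the two-step shape is the one the engineering work has to preserve.

```python
import math

# Toy in-memory stand-ins for a vector index and a relational table.
vector_index = {
    "INV-001": [0.9, 0.1, 0.0],   # embedding of an invoice description
    "INV-002": [0.1, 0.8, 0.1],
}
relational_table = {              # the system of record: exact and deterministic
    "INV-001": {"amount": 1200.00, "status": "open"},
    "INV-002": {"amount": 340.50, "status": "paid"},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def grounded_lookup(query_embedding):
    # 1. Semantic step: find the closest record by meaning.
    doc_id = max(vector_index, key=lambda k: cosine(query_embedding, vector_index[k]))
    # 2. Deterministic step: fetch the authoritative row before the agent acts.
    return doc_id, relational_table[doc_id]

doc_id, row = grounded_lookup([0.85, 0.15, 0.0])
print(doc_id, row)
```

The cost pressure the article describes comes from step 2: each agent action that must be grounded in the system of record adds a round trip, and each retry adds tokens.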
Raptopoulos maintains that corporate boards must address three fundamental questions before launching agentic models: determining who is responsible for an agent’s mistakes, creating clear audit trails for automated decisions, and setting precise criteria for when human intervention is required. Geopolitical complexities make resolving these questions even more difficult.
Sovereign cloud platforms, localized AI models, and data residency requirements are now regulatory realities in key markets including New York, Frankfurt, Riyadh, and Singapore. Businesses must embed deterministic control directly into probabilistic AI systems. Raptopoulos sees this as a strategic leadership priority, not merely an IT initiative.
Organizing relational intelligence for business operations
AI systems are only as reliable as the data and processes they rely on—a point Raptopoulos refers to as the data foundation moment.
Disorganized master data, isolated business systems, and heavily customized ERP setups create dangerous unpredictability at critical times. Raptopoulos notes that if an autonomous agent uses flawed data to make recommendations affecting cash flow, customer relationships, or compliance status, the resulting harm can escalate immediately.
Unlocking real business value requires moving beyond general-purpose large language models trained on public internet content. According to Raptopoulos, true enterprise intelligence must be rooted in proprietary corporate data—such as orders, invoices, supply chain logs, and financial entries—woven directly into business workflows. He asserts that relational foundation models designed specifically for structured business data will consistently outperform generic models in forecasting, detecting anomalies, and optimizing operations.
The difficulty of making a heavily customized ERP environment understandable to a foundation model stalls many AI projects. Data engineering teams often spend excessive time cleaning up fragmented master data just to establish a usable baseline for the AI.
When a relational model must accurately analyze complex, proprietary supply chain records alongside raw invoice data, the underlying data pipelines must operate without delay. If data ingestion fails, the model’s predictive accuracy drops instantly, making the agent a potential risk to the business.
Connecting legacy systems with modern relational AI demands a major overhaul of long-established data pipelines. Engineering teams must index years of poorly categorized planning data so that embedding models can produce accurate vector representations. Following Raptopoulos’s reasoning, leadership must honestly assess whether their current data infrastructure is truly ready, rather than simply adding probabilistic AI on top of disconnected systems.
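One concrete piece of that overhaul is a quality gate in front of the embedding step, so fragmented master data is routed back to stewards instead of silently poisoning the index. The sketch below assumes a fictional planning record shape and substitutes a deterministic hash-derived vector for a real embedding model:

```python
import hashlib

# Hypothetical completeness check for legacy planning records.
REQUIRED_FIELDS = {"material_id", "plant", "demand_qty"}

def is_clean(record: dict) -> bool:
    """A record qualifies for indexing only if its master-data fields are complete."""
    return REQUIRED_FIELDS.issubset(record) and all(
        record[f] not in (None, "") for f in REQUIRED_FIELDS
    )

def fake_embed(text: str) -> list:
    """Stand-in for a real embedding model: a deterministic hash-derived vector."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:4]]

def build_index(records):
    index, rejected = {}, []
    for rec in records:
        if is_clean(rec):
            index[rec["material_id"]] = fake_embed(str(sorted(rec.items())))
        else:
            rejected.append(rec)  # routed back to data stewards, not silently indexed
    return index, rejected

records = [
    {"material_id": "M-100", "plant": "DE01", "demand_qty": 500},
    {"material_id": "M-101", "plant": "", "demand_qty": 200},  # fragmented master data
]
index, rejected = build_index(records)
print(len(index), len(rejected))  # 1 1
```

The rejected list is the honest-readiness signal Raptopoulos is asking leadership to look at: if most records fail the gate, the infrastructure is not ready for probabilistic AI on top.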
Creating intent-driven interfaces
How users interact with enterprise software is evolving from fixed interfaces to dynamic, generative experiences—a shift Raptopoulos identifies as the employee interaction moment.
Instead of manually navigating complex software, employees will simply state their intent to the system. For example, Raptopoulos describes a user asking the software to prepare a briefing for their highest-revenue customer meeting that week. The AI agents then coordinate the necessary workflows, gather relevant context, and suggest recommended actions.
However, Raptopoulos emphasizes that employee adoption depends entirely on trust. Workers will only accept these digital collaborators when they are confident the system’s outputs follow governance rules, align with real business logic, and deliver clear productivity improvements.
Building these systems requires role-specific AI personas designed for positions like CFO, CHRO, or head of supply chain. Raptopoulos observes that these personas must be grounded in trusted data and integrated into familiar business processes to bridge the adoption gap.
This level of integration is a high-stakes design choice. Companies that invest in AI-native architecture see faster returns, while those trying to attach probabilistic models to outdated interfaces face major challenges with trust, usability, and scalability.
Technology leaders attempting to layer modern AI orchestration onto rigid, monolithic applications often encounter serious integration bottlenecks. Routing probabilistic API calls through legacy middleware slows down user interfaces, breaking the seamless intent-based workflow. Developing role-specific personas goes beyond prompt engineering—it requires embedding complex access controls, permissions, and business rules into the model’s active memory.
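The access-control point can be made concrete with a small sketch: role-scoped filtering applied to retrieved context before anything reaches the model's prompt. The role names and scope labels below are illustrative, not an actual SAP permission model.

```python
# Illustrative role-to-data-scope mapping; not an actual permission model.
ROLE_SCOPES = {
    "CFO": {"financials", "forecasts"},
    "CHRO": {"headcount", "compensation"},
    "supply_chain_head": {"inventory", "logistics"},
}

documents = [
    {"id": 1, "scope": "financials", "text": "Q3 margin bridge"},
    {"id": 2, "scope": "compensation", "text": "Merit cycle bands"},
    {"id": 3, "scope": "inventory", "text": "Safety stock levels"},
]

def build_context(role: str, docs: list) -> list:
    """Filter retrieved context by role BEFORE it ever reaches the model's prompt."""
    allowed = ROLE_SCOPES.get(role, set())
    return [d for d in docs if d["scope"] in allowed]

print([d["id"] for d in build_context("CFO", documents)])  # [1]
```

Filtering at retrieval time, rather than trusting the model to withhold what it has seen, is what makes a persona governable rather than merely prompted.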
Building a competitive moat
The financial payoff from AI becomes most visible in customer-facing interactions. Raptopoulos explains that training models on proprietary records, internal policies, and historical data creates a layer of customer-specific insight that competitors cannot easily replicate. These models excel in exception-heavy workflows such as dispute resolution, claims processing, returns management, and service routing.
By deploying autonomous agents that can classify cases, surface relevant documentation, and recommend policy-aligned resolutions, organizations can transform these high-cost processes into a clear competitive advantage.
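That classify-surface-recommend loop can be sketched minimally. The categories, policy table, and confidence threshold below are assumptions for illustration; a production system would use a tuned classifier rather than keyword matching, but the escalation structure is the part that matters.

```python
# A minimal dispute-triage sketch: classify, attach policy, escalate low confidence.
RESOLUTION_POLICY = {
    "billing_error": "refund_within_policy",
    "late_delivery": "offer_credit",
}
CONFIDENCE_FLOOR = 0.8  # assumed threshold below which a human must decide

def classify(case_text: str):
    """Stand-in classifier; a production system would use a tuned model."""
    if "invoice" in case_text or "charged" in case_text:
        return "billing_error", 0.92
    if "late" in case_text:
        return "late_delivery", 0.85
    return "unknown", 0.30

def triage(case_text: str) -> dict:
    category, confidence = classify(case_text)
    if confidence < CONFIDENCE_FLOOR:
        return {"category": category, "action": "escalate_to_human",
                "confidence": confidence}
    return {"category": category, "action": RESOLUTION_POLICY[category],
            "confidence": confidence}

print(triage("I was charged twice on one invoice"))
print(triage("Package arrived damaged"))  # low confidence -> escalated to a human
```

The confidence floor encodes the "rigorous oversight of final outputs" the article describes: the agent resolves the routine majority and hands the ambiguous minority to people.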
These models continuously adapt based on the outcomes of each interaction. Raptopoulos highlights that corporate buyers prioritize dependable, relevant, and responsive service over technological novelty. Companies that leverage AI to manage heavy workloads, while maintaining rigorous oversight of final outputs, build barriers to entry that generic tools simply cannot overcome.
Orchestrating a three-layer strategy

Deploying corporate intelligence requires the C-suite to orchestrate three distinct layers simultaneously, which Raptopoulos refers to as the strategy moment.
The first layer involves embedding functionality directly into core applications, enabling persona-driven productivity gains for quick returns. The second layer requires agentic orchestration, enabling multi-agent coordination across cross-system workflows. The third layer centers on industry-specific intelligence, featuring deeply specialized applications co-developed to tackle the highest-value challenges unique to a particular sector.
A common pitfall awaits leaders who fall into false sequencing. Focusing exclusively on embedded tools leaves significant financial value on the table, while rushing toward deep industry applications without first establishing proper governance and data maturity amplifies corporate risk.
Raptopoulos advises that scaling these models requires aligning corporate ambition with actual technical readiness. Leadership teams must invest in clean core architectures, modernize data pipelines, and enforce cross-functional ownership to move beyond the pilot phase. The most profitable deployments treat AI as a central operating layer that demands the same level of governance as human staff.
The financial gap between 90 percent accuracy and full certainty defines where true enterprise value resides. Governance decisions made in the coming months will determine whether specific AI deployments become a powerful source of lasting advantage — or an expensive lesson.
See also: AI agent governance takes focus as regulators flag control gaps
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is part of TechEx and is co-located with other leading technology events including the Cyber Security & Cloud Expo. Click here for more information.
AI News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



