Over its 100-plus-year history, IBM has watched many technology trends rise and fall. In its view, what tends to win is technology that offers choice.
At today’s VB Transform 2025, Armand Ruiz, VP of IBM’s AI platform, detailed how Big Blue thinks about generative AI and how its enterprise users are actually deploying the technology. A key theme Ruiz highlighted is that, at this point, it is not about choosing a single large language model (LLM) provider or technology. Increasingly, enterprise customers are systematically rejecting single-vendor AI strategies in favor of a multi-model approach that matches specific LLMs with targeted use cases.
IBM has its own family of open-source AI models, Granite, but it does not position its technology as the only option, or even the right choice, for all workloads. This enterprise behavior has led IBM to establish itself as what Ruiz calls a control tower for AI workloads, rather than a competitor to the underlying models.
“When I’m sitting in front of a customer, they use everything they have access to,” Ruiz explained. “For coding, they love Anthropic; for some other use cases, such as reasoning, they like o3; and for LLM customization, with their own data and fine-tuning, they go with Mistral or our Granite series of small models. They match the LLM to the right use case.”
Multi-LLM Gateway Strategy
IBM’s response to this market reality is a newly released model gateway that provides businesses with a single API to switch between different LLMs while maintaining observability and governance across all deployments.
The technical architecture allows customers to run open-source models on their own inference stacks for sensitive use cases, while reaching public APIs such as AWS Bedrock and Google Cloud’s Gemini for less sensitive applications.
“That gateway offers customers a single layer with a single API to switch from one LLM to another, while adding all the observability and governance,” Ruiz said.
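The gateway pattern Ruiz describes can be sketched in a few lines: one interface, pluggable model backends, and a shared observability layer that logs every call regardless of which provider served it. This is a hypothetical illustration of the pattern, not IBM’s actual API; the class, backend names, and log fields are all invented for the example.

```python
import time

class ModelGateway:
    """Single entry point that routes prompts to pluggable LLM backends."""

    def __init__(self):
        self.backends = {}   # model name -> callable(prompt) -> str
        self.audit_log = []  # shared observability across all backends

    def register(self, name, backend):
        self.backends[name] = backend

    def complete(self, model, prompt):
        if model not in self.backends:
            raise ValueError(f"unknown model: {model}")
        start = time.time()
        reply = self.backends[model](prompt)
        # Governance/observability: every call is recorded centrally,
        # no matter which provider handled it.
        self.audit_log.append({
            "model": model,
            "latency_s": time.time() - start,
        })
        return reply

# Stub backends stand in for real providers behind the single API.
gateway = ModelGateway()
gateway.register("granite", lambda p: f"[granite] {p}")
gateway.register("mistral", lambda p: f"[mistral] {p}")

print(gateway.complete("granite", "Summarize the Q3 report"))
print(gateway.complete("mistral", "Draft a customer email"))
print(len(gateway.audit_log))
```

Because callers only ever see `complete(model, prompt)`, switching a workload from one LLM to another is a one-argument change, while the audit log keeps governance uniform.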
This approach stands in direct contrast to the typical vendor strategy of locking customers into a proprietary ecosystem. IBM is not alone in taking a multi-vendor approach to model selection: in recent months, multiple model-routing tools have emerged that aim to direct workloads to the right models.
Agent orchestration protocols emerge as critical infrastructure
Beyond multi-model management, IBM is tackling new challenges in agent-to-agent communication via open protocols.
The company developed the Agent Communication Protocol (ACP) and contributed it to the Linux Foundation. ACP is a competing effort to Google’s Agent2Agent (A2A) protocol, which Google donated to the Linux Foundation this week.
Ruiz noted that both protocols aim to enable communication between agents and reduce custom development work. He expects the different approaches to converge eventually, and said the differences between A2A and ACP are now mostly technical.
Agent orchestration protocols provide a standardized way for AI agents to interact across platforms and vendors.
Enterprise scale reveals why this matters technically. Some IBM customers already have over 100 agents in their pilot programs. Without standardized communication protocols, each agent-to-agent interaction requires custom development, creating an unsustainable integration burden.
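The arithmetic behind that integration burden is worth making explicit. With N agents and no shared protocol, every ordered pair of agents that needs to talk requires its own bespoke adapter; with a standard such as ACP or A2A, each agent implements the protocol once. The functions below are illustrative back-of-the-envelope math, not part of either specification.

```python
def custom_adapters(n_agents: int) -> int:
    """Worst case without a standard: one bespoke adapter per directed agent pair."""
    return n_agents * (n_agents - 1)

def protocol_implementations(n_agents: int) -> int:
    """With a shared protocol: one implementation per agent."""
    return n_agents

for n in (10, 100):
    print(f"{n} agents: {custom_adapters(n)} custom adapters "
          f"vs {protocol_implementations(n)} protocol implementations")
```

At the 100-agent scale of the pilots Ruiz mentions, the worst case is 9,900 pairwise integrations versus 100 protocol implementations, which is why standardization becomes a prerequisite rather than a nicety.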
AI is about transforming workflows and how work gets done
Asked how he sees AI affecting businesses today, Ruiz suggested it needs to be about more than chatbots.
“If you’re just doing a chatbot, or trying to cut costs with AI, you’re not doing AI,” Ruiz said. “I think AI is about completely changing workflows and the way work gets done.”
The distinction between AI implementation and AI transformation centers on how deeply the technology is integrated into existing business processes. IBM’s internal HR deployment illustrates the shift: instead of employees asking a chatbot for HR information, specialized agents handle routine questions about compensation, hiring and promotions, automatically route requests to the right systems, and escalate to humans only when necessary.
“I used to spend a lot of time talking with my HR partners about a lot of things. Now I do most of that with HR agents,” Ruiz explained. “Depending on the question, whether it’s something about compensation, or handling separation, or hiring someone, or doing promotions, all of these connect with different HR internal systems, and they become like different agents.”
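The routing pattern Ruiz describes can be sketched simply: a front-line router inspects each question, hands it to a specialized agent, and escalates to a human when nothing matches. The agent names, keyword matching, and response strings below are invented for illustration; a production system would use an LLM classifier rather than keywords.

```python
# Hypothetical sketch of topic-based routing to specialized HR agents.
HR_AGENTS = {
    "compensation": lambda q: f"compensation agent handled: {q}",
    "hiring": lambda q: f"hiring agent handled: {q}",
    "promotion": lambda q: f"promotion agent handled: {q}",
}

def route_hr_question(question: str) -> str:
    """Send the question to the first matching agent, else escalate to a human."""
    text = question.lower()
    for topic, agent in HR_AGENTS.items():
        if topic in text:
            return agent(question)
    return f"escalated to human HR partner: {question}"

print(route_hr_question("How is my compensation calculated?"))
print(route_hr_question("I need help with a sensitive situation"))
```

The key design point is the fallback branch: automation covers the routine questions, while anything unrecognized still reaches a person.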
This represents a fundamental architectural shift from human-computer interaction patterns to computer-mediated workflow automation. Rather than employees learning to interact with AI tools, AI learns to run complete business processes end to end.
Technical implication: Companies need engineering that goes beyond API integration, enabling AI agents to execute multi-step workflows autonomously.
Strategic impact on enterprise AI investments
IBM’s real-world deployment experience points to several important shifts in enterprise AI strategy.
Abandon chatbot-first thinking: Organizations should identify complete workflows for transformation, rather than adding conversational interfaces to existing systems. The goal is to eliminate human steps, not to improve human-computer interactions.
Architect for multi-model flexibility: Rather than committing to a single AI provider, enterprises need integration platforms that allow them to switch between models based on use-case requirements while maintaining governance standards.
Invest in communication standards: Organizations should prioritize AI tools that support emerging protocols such as MCP, ACP and A2A, rather than proprietary integration approaches that create vendor lock-in.
“There’s a lot to build. I keep saying that everyone needs to learn AI, especially business leaders, who need to be AI-first leaders and understand the concepts,” Ruiz said.