As AI agents move into real-world deployments, organizations are under pressure to define where they fit, how to build them effectively and how to operationalize them at scale. At VentureBeat's Transform 2025, technology leaders sat down to talk about how they are transforming their businesses with agents: Shailesh Nalawadi, VP of product management at Sendbird; Thys Waanders, SVP of AI transformation at Cognigy; and Sean Malhotra, CTO of Rocket Companies.
Top agentic AI use cases
"The initial appeal of any of these AI agent deployments tends to be about saving human capital; the math is very easy," Nalawadi said. "But that undersells the transformational capability you get with AI agents."
Rocket has proven that AI agents are a powerful tool for increasing website conversions.
"With the agent-based experience, the conversational experience on the website, clients are about three times more likely to convert when they come through that channel," Malhotra said.
But that's just scratching the surface. For example, Rocket engineers built an agent in just two days to automate a highly specialized task: calculating transfer taxes during mortgage underwriting.
"That two-day effort saved us a million dollars a year," Malhotra said. "In 2024, we saved more than a million hours of team member time, mostly off the back of our AI solutions. That's not just about saving money. It also lets team members focus their time on the people making what is the biggest financial transaction of their lives."
Agents are essentially supercharging individual team members. Those million hours saved aren't someone's entire job replicated many times over; they're fractions of jobs, often the parts employees don't enjoy doing or that don't add value for their clients. And saving that million hours gives Rocket the capacity to handle more business.
"Some of our team members were able to handle 50% more clients last year than the year before," Malhotra added. "That means higher throughput and more business, and we also see higher conversion rates because they're spending their time understanding the client's needs, while the AI does the rote work they used to have to do."
Tackling the complexity of agents
"Part of the journey for our engineering team has been moving from the software engineering mindset, where you write code once, test it, and it runs and gives the same answer 1,000 times, to a more probabilistic approach," Nalawadi said. "A lot of it has been bringing people along, not just software engineers, but product managers and UX designers."
What has helped is that LLMs have come a long way, Waanders said. If they had built something 18 months or two years ago, they would have had to pick exactly the right model or the agents wouldn't perform as expected. Now, he said, we're at a stage where most of the mainstream models behave very well and are far more predictable. Today's challenge is combining models, ensuring responsiveness, orchestrating the right models in the right sequence and weaving in the right data.
"We have customers who push tens of millions of conversations a year," Waanders said. "If you're automating, say, 30 million conversations a year, how does that scale in the LLM world? That's all stuff we had to figure out, even things like getting availability for the models with the cloud providers. Having enough quota with a ChatGPT model, for example."
On top of the LLM sits a layer that orchestrates a network of agents, Malhotra said. A conversational experience has a network of agents under the hood, and an orchestrator decides which of the available agents to farm the request out to.
"It creates some really interesting technical problems when you think ahead to having hundreds or thousands of agents that can each do different things," he said. "It becomes an even bigger problem because latency and time matter. That agent routing is going to be a very interesting problem to solve over the coming years."
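Neither speaker shared implementation details, but the routing pattern Malhotra describes can be sketched roughly as follows. This is a minimal sketch under assumptions of our own: the agent names, the relevance scoring, the latency tiebreaker and the fallback are illustrative, not Rocket's actual architecture.

```python
# Illustrative sketch of an orchestrator that routes a request to one of many
# specialized agents. Names and scoring logic are hypothetical.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    name: str
    can_handle: Callable[[str], float]   # returns a 0-1 relevance score for a request
    handle: Callable[[str], str]         # produces the agent's response
    avg_latency_ms: float                # tracked so routing can weigh speed, not just fit


class Orchestrator:
    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def route(self, request: str) -> str:
        # Score every available agent, prefer the best fit, and break ties on latency,
        # since latency becomes a real constraint at scale.
        scored = sorted(
            self.agents,
            key=lambda a: (-a.can_handle(request), a.avg_latency_ms),
        )
        best = scored[0]
        if best.can_handle(request) < 0.5:
            # Fall back rather than guessing when no agent is a good fit.
            return "I'll hand this off to a human teammate."
        return best.handle(request)


# Hypothetical usage
transfer_tax_agent = Agent(
    name="transfer_tax",
    can_handle=lambda req: 1.0 if "transfer tax" in req.lower() else 0.0,
    handle=lambda req: "Estimated transfer tax: ...",
    avg_latency_ms=300,
)
faq_agent = Agent(
    name="faq",
    can_handle=lambda req: 0.4,
    handle=lambda req: "Here's what I found: ...",
    avg_latency_ms=120,
)

orchestrator = Orchestrator([transfer_tax_agent, faq_agent])
print(orchestrator.route("How much transfer tax will I owe at closing?"))
```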
Leveraging vendor relationships
Up to this point, the first step for most companies launching agentic AI has been to build it in-house, because specialized tools didn't yet exist. But building on generic LLM and AI infrastructure doesn't create differentiated value, and it requires specialized expertise, beyond the initial build, to debug, iterate, improve on what's been built and maintain the infrastructure.
"Often, the most successful conversations we have with prospective customers tend to be with someone who has already built something in-house," Nalawadi said. "They realize that getting to a 1.0 is okay, but as the world evolves, as the infrastructure evolves and as they need to swap out technology for something new, they don't have the ability to orchestrate all of it."
Preparing for agentic AI complexity
In theory, agentic AI will only grow in complexity. The number of agents in an organization will increase, they will start learning from each other, and the number of use cases will explode. How can organizations prepare for the challenge?
"It means that the checks and balances in your system get stressed more," Malhotra said. "For anything that has a regulatory process around it, you have a human in the loop to make sure someone is signing off on it. For critical internal processes or data access, do you have the right auditing and monitoring in place? As you unlock more of these, you have to have that."
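To make that concrete, here is a minimal sketch, under our own assumptions, of what gating a regulated agent action behind human sign-off with an audit trail might look like; the action names, approval mechanism and log format are all hypothetical.

```python
# Minimal sketch of a human-in-the-loop gate for regulated agent actions.
# The REGULATED_ACTIONS set, approval flow and audit log are illustrative assumptions.
import datetime
import json

REGULATED_ACTIONS = {"issue_loan_decision", "access_client_financials"}
AUDIT_LOG = "agent_audit.jsonl"


def record(event: dict) -> None:
    # Append-only audit trail so every gated action is traceable later.
    event["timestamp"] = datetime.datetime.utcnow().isoformat()
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")


def execute_with_oversight(action: str, payload: dict, approver: str | None = None) -> str:
    if action in REGULATED_ACTIONS:
        if approver is None:
            record({"action": action, "status": "blocked_pending_approval"})
            return "Blocked: this action requires human sign-off."
        record({"action": action, "status": "approved", "approver": approver})
    else:
        record({"action": action, "status": "auto_executed"})
    return f"Executed {action}"  # placeholder for the real downstream call


# Hypothetical usage: the agent cannot complete a regulated step without a named approver.
print(execute_with_oversight("issue_loan_decision", {"client_id": "123"}))
print(execute_with_oversight("issue_loan_decision", {"client_id": "123"}, approver="jane.doe"))
```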
So how can organizations be confident that an AI agent will keep behaving as expected as it evolves?
"It's really hard if you haven't thought about it from the start," Nalawadi said. "The short answer is that, before you even start building, you should have evaluation infrastructure in place. Make sure you have a rigorous environment where you know what good looks like from an AI agent, and that you have this test set."
The problem is that it's non-deterministic, Waanders added. Unit testing is important, but the biggest challenge is that you don't know what you don't know: which incorrect behaviors an agent might display, and how it might react in any given situation.
"You can only find that out by simulating conversations at scale, pushing the agent through thousands of different scenarios and then analyzing how it holds up and how it reacts," Waanders said.
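Waanders didn't describe Cognigy's tooling, but the approach of probing an agent across thousands of simulated scenarios can be sketched as follows. This is a minimal sketch under our own assumptions: the scenario generator, the agent stub and the pass/fail checks are stand-ins for whatever a real evaluation suite would use.

```python
# Illustrative sketch of scenario-based simulation for a conversational agent.
# run_agent(), the scenarios and the checks are hypothetical stand-ins.
import random
from collections import Counter


def run_agent(message: str) -> str:
    # Stand-in for a call to the real agent / LLM pipeline under test.
    return "I can help with that. Could you share your account number?"


def generate_scenarios(n: int) -> list[dict]:
    # Vary intent and tone to surface behaviors you didn't anticipate.
    intents = ["cancel my order", "check my refund status", "change my address", "speak to a human"]
    tones = ["neutral", "angry", "confused", "terse"]
    return [
        {
            "message": f"[{random.choice(tones)}] I want to {random.choice(intents)}",
            "must_not_contain": ["guaranteed", "legal advice"],
        }
        for _ in range(n)
    ]


def evaluate(n: int = 10_000) -> Counter:
    # Aggregate failure modes across thousands of simulated conversations,
    # then drill into the violating transcripts by hand.
    results = Counter()
    for scenario in generate_scenarios(n):
        reply = run_agent(scenario["message"]).lower()
        if any(banned in reply for banned in scenario["must_not_contain"]):
            results["policy_violation"] += 1
        elif not reply.strip():
            results["empty_reply"] += 1
        else:
            results["ok"] += 1
    return results


if __name__ == "__main__":
    print(evaluate(10_000))
```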
