
João Freitas is Director and Vice President of Engineering for AI and Automation at PagerDuty.
As the use of AI continues to evolve in large organizations, leaders are increasingly looking for the next development to generate significant ROI. The latest wave of this ongoing trend is the introduction of AI agents. However, as with any new technology, organizations must ensure they deploy AI agents in a responsible manner that promotes both speed and security.
More than half of organizations have already deployed some level of AI agents, and more are expected to follow over the next two years. However, many early adopters are now reevaluating their approach: four in 10 technology leaders regret not establishing a strong governance foundation from the beginning. This suggests that, despite the rapid adoption of AI, many organizations still lack the policies, rules, and best practices needed to ensure the responsible, ethical, and legal development and use of AI.
As AI adoption accelerates, organizations need to strike the right balance between acceptable risk exposure and the guardrails that ensure AI is used safely.
Where do AI agents create potential risks?
There are three main areas to consider in order to deploy AI agents more securely.
The first is shadow AI: employees circumventing approved tools and processes and using unapproved AI tools without explicit permission. IT departments must provide sanctioned processes for experimentation and innovation so that teams can find more efficient ways to work with AI. Shadow AI has been around as long as AI tools themselves, but the autonomy of AI agents makes it easier for unauthorized tools to operate outside the purview of IT departments, creating new security risks.
Second, organizations need to close gaps in AI ownership and accountability to prepare for incidents and process failures. The strength of AI agents is their autonomy. However, if an agent behaves unexpectedly, the team must be able to determine who is responsible for addressing the issue.
The third risk arises when there is a lack of explainability behind the actions an AI agent takes. AI agents are goal-oriented, but how they achieve a goal may be unclear. AI agents must have explainable logic underlying their actions so that engineers can trace actions that may cause problems in existing systems and roll them back if necessary.
These risks should not delay adoption, but understanding them can help ensure that your organization deploys AI agents more securely.
Three guidelines for responsible AI agent deployment
Once organizations have identified the risks that AI agents may pose, they must implement guidelines and guardrails to ensure safe use. By following these three steps, organizations can minimize these risks.
1: Make human monitoring the default
AI agents continue to evolve at a fast pace. However, human oversight is still required when AI agents are given the ability to act, make decisions, and pursue goals that can impact key systems. Human involvement should be the default, especially for business-critical use cases and systems. Teams using AI need to understand the actions the AI may take and where they need to intervene. Start carefully and increase the level of agency given to your AI agent over time.
Additionally, operations teams, engineers, and security professionals need to understand the role they play in monitoring AI agent workflows. Each agent should be assigned a specific human owner for clearly defined oversight and accountability. Organizations should also allow any human to flag or override the behavior of an AI agent if an action has a negative outcome.
When considering tasks for AI agents, organizations need to understand that while traditional automation is better at handling repetitive, rule-based processes with structured data input, AI agents can handle more complex tasks and adapt to new information in a more autonomous manner. This makes them an attractive solution for all kinds of tasks. However, once AI agents are deployed, organizations need to control the actions they can perform, especially in the early stages of a project. Therefore, teams working with AI agents must have an approval path for high-impact actions to ensure the agent's scope does not exceed the expected use case and to minimize risk to the broader system.
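To illustrate the idea, here is a minimal Python sketch of an approval gate for high-impact agent actions. The action categories, owner routing, and names such as AgentAction and request_human_approval are hypothetical assumptions for illustration, not part of any specific agent platform.

```python
# A minimal, hypothetical sketch of an approval path for agent actions.
# Action categories, field names, and the approval prompt are illustrative only.

from dataclasses import dataclass

# Actions that must never run without a human sign-off (assumed categories).
HIGH_IMPACT_ACTIONS = {"deploy", "delete_data", "modify_permissions"}

@dataclass
class AgentAction:
    name: str    # e.g. "deploy"
    target: str  # system or resource the agent wants to act on
    reason: str  # the agent's stated justification, kept for auditing

def request_human_approval(action: AgentAction, owner: str) -> bool:
    """Route the action to its designated human owner and wait for a decision.
    A real system would page or notify the owner; this sketch simply prompts."""
    answer = input(f"[{owner}] Approve '{action.name}' on '{action.target}'? (y/n) ")
    return answer.strip().lower() == "y"

def execute_with_oversight(action: AgentAction, owner: str) -> None:
    if action.name in HIGH_IMPACT_ACTIONS and not request_human_approval(action, owner):
        print(f"Action '{action.name}' blocked pending review by {owner}.")
        return
    print(f"Executing '{action.name}' on '{action.target}' (reason: {action.reason}).")

execute_with_oversight(
    AgentAction(name="deploy", target="payments-service", reason="scale up after traffic spike"),
    owner="on-call-engineer",
)
```

The key design choice is that low-impact actions flow through automatically, while anything on the high-impact list is held until the agent's designated human owner approves it.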
2: Bake in security
Introducing new tools should not expose your systems to new security risks.
Organizations should consider agent platforms that adhere to high security standards and are validated by SOC 2, FedRAMP, or equivalent enterprise-grade certifications. Furthermore, AI agents should not be allowed to operate freely throughout an organization's systems. At a minimum, an AI agent's permissions and security scope must match its owner's scope, and tools added to the agent must not extend those permissions. Restricting system access based on the AI agent's role will ensure a smoother deployment. Keeping a complete log of every action the AI agent performs also helps engineers understand what happened during an incident and track down the issue.
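As a rough illustration of scoped permissions plus an audit trail, the Python sketch below assumes a simple model in which the agent's permissions are a strict subset of its owner's; the permission strings and log format are hypothetical, not drawn from any particular platform.

```python
# A hypothetical sketch of scoped agent permissions with an audit log.
# The permission model and log schema are assumptions for illustration only.

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# The agent's permissions are deliberately narrower than its human owner's.
OWNER_PERMISSIONS = {"read:metrics", "read:logs", "restart:service"}
AGENT_PERMISSIONS = {"read:metrics", "read:logs"}

def agent_can(permission: str) -> bool:
    """Allow an action only if both the agent and its owner hold the permission."""
    return permission in AGENT_PERMISSIONS and permission in OWNER_PERMISSIONS

def perform(permission: str, detail: str) -> None:
    allowed = agent_can(permission)
    # Every attempt, allowed or not, is written to the audit trail.
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "permission": permission,
        "detail": detail,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"Agent lacks permission: {permission}")
    # ... perform the real action here ...

perform("read:metrics", "fetch CPU metrics for incident triage")
```

Because every attempt is logged regardless of outcome, engineers reviewing an incident can see both what the agent did and what it tried but was not allowed to do.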
3: Make the output explainable
Utilizing AI in your organization should never be a black box. The reasoning behind an agent's actions must be explained so that engineers reviewing those actions can understand the context in which the agent made its decisions and can access the traces that led to them.
All action inputs and outputs should be logged and accessible. This gives organizations a solid overview of the underlying logic of the AI agent's behavior, which is invaluable if something goes wrong.
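One possible shape for such a record is sketched below in Python; the field names are illustrative assumptions rather than a standard schema.

```python
# A hypothetical sketch of a structured "decision record" capturing the inputs,
# output, and stated reasoning behind an agent action. Field names are
# illustrative assumptions, not a standard schema.

import json
from datetime import datetime, timezone

def record_decision(action: str, inputs: dict, output: str, reasoning: str) -> str:
    """Return a JSON decision record that engineers can query after an incident."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "inputs": inputs,        # what the agent observed
        "output": output,        # what the agent did or recommended
        "reasoning": reasoning,  # the agent's explanation for the action
    }, indent=2)

print(record_decision(
    action="escalate_alert",
    inputs={"alert": "high error rate", "service": "checkout"},
    output="paged the on-call engineer",
    reasoning="error rate stayed above the agreed threshold for ten minutes",
))
```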
Security underpins the success of AI agents
AI agents offer organizations a huge opportunity to accelerate and improve existing processes. However, if you don’t prioritize security and strong governance, you may be exposing yourself to new risks.
As AI agents become more commonplace, organizations must ensure they have systems in place to measure their performance and ability to take action when problems arise.
