Rogue agents: Securing the foundation of enterprise GenAI
"Organisations need to secure AI agents at scale, potentially overseeing thousands or even millions at once."

Over the past two years, the rapid growth of generative AI (GenAI) has sparked widespread innovation and a surge in demand from enterprises globally. However, this push for fast progress has increased risks, as the pressure to move quickly often leads to security shortcuts.
Malicious actors are exploiting GenAI to scale their operations, making attacks more frequent and damaging than ever. Securing GenAI-powered enterprise applications requires implementing key security controls to protect the infrastructure, which holds and processes vast amounts of sensitive enterprise data. It is vital to ensure that these security measures are in place in order for businesses to deploy these applications with confidence.
The emergence of AI agents
GenAI has rapidly evolved from content creation tools to the phenomenon of autonomous agents capable of making decisions and taking actions. While not yet widely used in production, these AI agents are expected to see rapid adoption due to their benefits. However, this shift introduces security challenges, particularly in managing machine identities (AI Agents) that may behave unpredictably. Enterprises will need to secure AI agents at scale, potentially overseeing thousands or even millions at once.
Key considerations include authenticating AI agents, managing and restricting their access and controlling their lifecycle to prevent rogue agents from retaining unnecessary permissions. It's also crucial to ensure AI agents carry out their intended functions within enterprise systems. As this technology advances, best practices for secure integration will emerge. However, securing the backend infrastructure of GenAI implementations will be essential to running AI agents on a robust and protected platform.
Evolving security challenges
As with any emerging technology, it is vital to secure agents as they become mainstream. In tandem, as with any major technology innovation, identity security practices must evolve to address new challenges. GenAI introduces unique security concerns that require continuous adaptation, such as protecting against prompt injection attacks, which can expose sensitive data or cause unintended actions.
However, it's important to remember that GenAI-powered applications rely on underlying systems and databases. Without securing this core infrastructure, enterprise applications become vulnerable to serious attacks, such as data leaks, poisoning, model manipulation, or service disruption.
Many identities - human or machine - that have access to critical infrastructure are prime targets for attackers. Identity-related breaches are a leading cyberattack vector, so identifying, managing, and securing these identities is vital. Fortunately, securing these identities aligns with established best practices for protecting other environments, especially cloud infrastructure, where most GenAI components are deployed.
Key components of enterprise GenAI-powered applications
When developing GenAI-powered applications, several critical components must be considered. Application interfaces, such as APIs, act as gateways for users and applications to interact with GenAI systems, making their security essential to prevent unauthorised access and ensure only legitimate requests are processed.
Additionally, learning models and large language models (LLMs) analyse vast amounts of data to identify patterns and make predictions, with most enterprises relying on leading LLMs from providers like OpenAI, Google, and Meta. While these models are trained on public data, enterprises must further refine them with proprietary data to gain a competitive advantage. However, while leveraging internal data is key to developing unique GenAI applications, protecting sensitive information from leaks or loss is a top priority. Finally, deployment environments, whether on-premises or in the cloud, must be secured with stringent identity security measures to ensure the safe operation of AI applications.
Establishing strong identity security measures
Implementing strong identity security measures is essential to mitigate risks and protect the integrity of GenAI applications. Many identities have high levels of access to critical infrastructure and, if compromised, could provide attackers with multiple entry points. It is important to emphasise that privileged users include not just IT and cloud teams but also business users, data scientists, developers and DevOps engineers.
A compromised developer identity, for instance, could grant access to sensitive code, cloud functions, and enterprise data. Additionally, the GenAI backbone relies heavily on machine identities to manage resources and enforce security. As machine identities often outnumber human ones, securing them is crucial. Adopting a Zero Trust approach is vital, extending security controls beyond basic authentication and role-based access to minimise potential attack surfaces.
To enhance identity security across all types of identities, several key controls should be implemented. Enforcing strong adaptive multi-factor authentication (MFA) for all user access is essential to prevent unauthorised entry. Securing access to credentials, keys, certificates, and secrets—whether used by humans, backend applications, or scripts—requires auditing their use, rotating them regularly, and ensuring that API keys or tokens that cannot be automatically rotated are not permanently assigned.
This means that only the minimum necessary systems and services should be exposed. By implementing zero standing privileges (ZSP), it ensures that users do not have permanent access rights and can only assume specific roles when required. Where ZSP is not feasible, applying least-privilege access minimises the attack surface in case of user compromise. Additionally, by isolating and auditing sessions for all users accessing the GenAI backend infrastructure, you are strengthening your security. Finally, centrally monitoring user behaviour for forensics, audits, and compliance—along with logging and tracking any changes—helps maintain a secure and well-governed AI environment.
Ensuring security without compromising
When planning your approach to implementing security and privilege controls, it’s crucial to recognise that GenAI-related projects will likely be highly visible within the organisation. Development teams and corporate initiatives may view security controls as inhibitors in these scenarios. The complexity increases as you need to secure a diversified group of identities, each requiring different levels of access and using various tools and interfaces. That’s why the controls applied must be scalable and sympathetic to users’ experience and expectations; they should not negatively impact productivity and performance.
Yuval Moss is Vice President of Solutions for Global Strategic Partners at CyberArk.
Have you got a story or insights to share? Get in touch and let us know.