In the world of enterprise security, we're witnessing a technological shift that's changing the very nature of how organizations operate. Five years ago, the primary concern for security teams was managing access for human employees. Today, according to RSA's 2025 ID IQ report, non-human identities (NHIs) outnumber employees by 5:1 in large enterprises. This isn't just a statistic; it represents a fundamental change in how work gets done.
New and existing identity challenges
The rise of AI agents and autonomous systems has transformed an already beleaguered IT task — managing access rights — into one of the most complex, high-stakes questions facing nearly every enterprise: "Who (or what) should be allowed to do what?"
This question keeps us up at night, and I know it does the same for security leaders across industries. The problem is that most organizations are still treating authorization as an operational issue rather than the security imperative it has become.
When we speak with our customers, we hear the same challenges repeatedly:
"We don’t know if we can trust our source of truth for access in our most critical systems — across human employees, service accounts, and, possibly, AI agents."
"Our current tools can't adapt fast enough to keep pace with automated processes and business changes."
"We lack confidence in our access management across human employees, service accounts, and AI agents."
"Legacy identity providers and IGAs force us to map agentic entities into outdated paradigms designed for humans, not automated workflows with low explainability."
"Though we recognize risks, we struggle to prioritize and efficiently remediate what matters most."
"Our security team is overwhelmed by alerts without context or clear remediation paths."
"We can't find a vendor capable of both mastering the fundamentals and scaling with our growth."
These aren't just operational pain points — they represent fundamental security gaps that leave organizations vulnerable to breaches, data leaks, and compliance violations.

Why we built our risk layer
This is precisely why we're releasing our risk features and have enhanced our authorization reasoning platform. It is a structural step toward solving this problem holistically. We believe security teams deserve more than superficial visibility — they need intelligence, context, and, most importantly, the power to take direct action.
The Risk Layer is built on three core principles:
1. Structured and accurate visibility of entitlements across identity types
All risk platforms are built on the notion of visibility — the holy grail of a single pane of glass. This is easier said than done. The difficulty lies in safely extracting an accurate model of what is happening with access at a given point in time. Useful visibility requires both the policy assigned to an entity for a particular asset and the reality of how that policy is actually used. Together, these give a clearer picture of the nature of access.
Constructing as accurate a picture as possible requires a real-time data layer, along with first-party data from core systems. This is the backbone of everything we build. We are continuing to deepen our view of what is truly happening in these systems, and to broaden it across identity types, so that teams can safely take proactive action.
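As a rough sketch of this idea — the policy that was assigned compared against the access actually observed — consider the following minimal Python illustration. The `Entitlement` shape and the in-memory usage log are hypothetical stand-ins for illustration only, not Opal's data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entitlement:
    identity: str      # human employee, service account, or AI agent
    resource: str
    action: str

def unused_entitlements(assigned, usage_log):
    """Entitlements granted by policy but never exercised in the usage log."""
    used = {(e["identity"], e["resource"], e["action"]) for e in usage_log}
    return [e for e in assigned
            if (e.identity, e.resource, e.action) not in used]

# Toy data: two grants on record, only one of which is ever exercised.
assigned = [
    Entitlement("alice", "prod-db", "write"),
    Entitlement("billing-agent", "prod-db", "read"),
]
usage_log = [
    {"identity": "billing-agent", "resource": "prod-db", "action": "read"},
]

# alice's never-used write grant on prod-db surfaces as a cleanup candidate.
stale = unused_entitlements(assigned, usage_log)
```

The gap between the two views — granted but unused access — is exactly the kind of signal that makes visibility actionable rather than decorative.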
As we build, we follow an 80-20 rule for risk that reflects the reality of security programs: we focus on depth for high-stakes systems and identities, while our architecture captures a less granular view of the long tail of less critical systems and identities. That combination provides the full picture needed to implement a holistic IAM solution. Threats do not respect the artificial boundaries we've created between systems — or whatever the industry's "acronym of the year" claims is the issue.
2. Intelligent prioritization and explainable remediation
Today, access has a low signal-to-noise ratio. In an ideal world, you should not be alerted to an access problem unless it really is a problem — and then you definitely need to know. It is the type of system that should "just work," like any other critical infrastructure.
The reality is, there’s a lot to clean up in most deployments before you can get there. Our risk layer uses machine learning to dynamically surface and prioritize issues discovered in a sea of “access bloat.” Opal ranks issues based on factors such as historical access patterns, resource sensitivity, and behavioral intelligence that learns both from your users as they access sensitive systems and from your security team as they remediate risks in Opal. The cleanup phase is human-in-the-loop, so our systems can learn which heuristics are effective in a given organization. On top of this, a dedicated model gives the user remediation guidance and further context drawn from how Opal is used internally (provided to the model via RAG), helping teams understand not just what to fix, but why it matters — especially in instances where tabular data alone might not provide all the relevant context.
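To make the ranking idea concrete, here is a toy scoring sketch that combines resource sensitivity, staleness of last use, and an identity-type weight into a single number. The factor names, weights, and formula are illustrative assumptions and do not reflect Opal's actual model:

```python
# Hypothetical inputs: per-resource sensitivity and per-identity-type weights.
SENSITIVITY = {"prod-db": 0.9, "wiki": 0.2}
IDENTITY_WEIGHT = {"human": 0.5, "service": 0.7, "agent": 1.0}

def risk_score(finding):
    """Score a finding: sensitive, long-unused access held by agents ranks highest."""
    sensitivity = SENSITIVITY.get(finding["resource"], 0.5)
    # Staleness saturates at 90 days: beyond that, older is not riskier.
    staleness = min(finding["days_since_last_use"] / 90, 1.0)
    weight = IDENTITY_WEIGHT[finding["identity_type"]]
    return round(sensitivity * weight * (0.5 + 0.5 * staleness), 3)

findings = [
    {"resource": "prod-db", "identity_type": "agent", "days_since_last_use": 120},
    {"resource": "wiki", "identity_type": "human", "days_since_last_use": 10},
]

# The stale agent grant on the sensitive database sorts to the top.
ranked = sorted(findings, key=risk_score, reverse=True)
```

In practice, a static formula like this is only a starting point; the human-in-the-loop remediation described above is what lets the weights adapt to a given organization.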
We could not afford to just apply an off-the-shelf LLM to such a high-stakes problem. That would result in a mediocre user experience — garbage in, garbage out. But we cannot think of a better application for AI today if done well, which is why we’ve built our infrastructure to be AI native from day one. We've built a system that reasons about authorization in context, has interpretability, and considers the downstream impact of changes before they're made. This reduces the likelihood of downstream failures in provisioning and de-provisioning from overly aggressive access cleanups — what our team jokingly calls “SCIMcidents.”
3. Composable authorization for hybrid workflows
The future of work is hybrid — humans and AI working together. Traditional authorization models assume static, human-centric access patterns, but that world is rapidly becoming irrelevant. Our platform is built for dynamic, agent-aware authorization that adapts to how work actually happens today.
This means security teams can allow AI agents to call Opal for data or access decisions — with human oversight — applying consistent policy across both human and automated actors. It means delegating tasks to agents securely, with auditability and built-in guardrails. And it means being able to federate authorization in multi-agent environments, composing custom automations that align with your specific security requirements and driving clarity on what humans and machines can access in the same workflow. For example, an agent must not reveal sensitive information, like upcoming layoffs, unless its human counterpart is also authorized — and must never become a backdoor to high-risk systems for malicious or misguided actors.
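One way to picture that guardrail is a dual-authorization check: an agent's request is denied unless both the agent and the human it acts on behalf of hold the grant. This is a hypothetical sketch with an invented in-memory grants table, not Opal's API:

```python
# Illustrative grants table: (principal, resource) -> authorized?
GRANTS = {
    ("hr-agent", "layoff-plans"): True,
    ("alice", "layoff-plans"): False,   # alice is not cleared for this data
    ("bob", "layoff-plans"): True,
}

def agent_may_access(agent, on_behalf_of, resource):
    """Deny unless the agent AND its delegating human both hold the grant."""
    return (GRANTS.get((agent, resource), False)
            and GRANTS.get((on_behalf_of, resource), False))

agent_may_access("hr-agent", "alice", "layoff-plans")  # denied: human not cleared
agent_may_access("hr-agent", "bob", "layoff-plans")    # allowed: both authorized
```

Defaulting to deny for any principal not in the table reflects the same principle: the agent's authority can never exceed that of the human it serves.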
Redefining security's role in the autonomous era
What we're building at Opal isn't just another security tool — it's a fundamental reimagining of how authorization should work in an era of autonomous systems and AI agents.
For too long, identity security has been a high-stakes but low-ownership task passed between IT, DevOps, governance, and security teams until something inevitably breaks. Security teams are held accountable when breaches occur, yet they've lacked the tools to proactively prevent them. We hope to change this equation by giving security teams direct control over identity risk.
When we founded Opal, our vision was to transform access from a passive review process into an active control plane, with real technical vision and might behind it. As an engineer who has worked in high-security environments, this is the challenge that compelled me. This release is a step toward realizing that vision — a system that not only identifies risks but empowers security teams to take immediate action.

The path forward
As we look to the future, authorization challenges will only grow more complex. Agent-to-agent interactions, chain-of-thought reasoning, and autonomous decision-making will become standard operating procedure for enterprises seeking to remain competitive. And these are the same technologies that will finally let us parameterize this obscure mess and intelligently hack our way out of it.
Security teams need to be equipped not just for today's challenges, but for tomorrow's innovations. That's why we're continuing to invest in our composable, agent-aware authorization framework. We're building toward a future where authorization isn't just about preventing breaches — it's about enabling safe innovation at scale.
Tying our data layer to the risk center is a significant step on this journey, but it's just the beginning. We're committed to partnering with security teams to build a future where identity security is no longer a barrier to innovation but an enabler of it.
In the future, authorization becomes the critical security layer that makes everything else possible. I believe that our approach can be the foundation that security teams build upon as they navigate this new landscape — ensuring resilience, control, and the ability to move faster without increasing risk.
Let's build a future where security enables innovation, where clarity leads to action, and where every entity — human or machine — is secure by design.