Open Policy Agent

If you're working with infrastructure as code, especially with Terraform, you know how quickly things can get out of hand without proper governance. This is where policy as code comes in handy. Today we’ll discuss Open Policy Agent. OPA is one of the best tools to help you maintain security while using Terraform.

What exactly is Policy as Code?

Policy as Code (PaC) is the practice of defining your governance, compliance, and security rules using code. Think of it as version-controlling your rules, making them auditable, and automatically enforcing them. For Terraform users, this means you can evaluate infrastructure changes before they're deployed and roll them back quickly if something unexpected happens after they're deployed. This significantly reduces the risk of misconfigurations and non-compliance. By integrating PaC into your CI/CD pipelines, you gain consistency, traceability, and faster remediation across your infrastructure workflows.

How Open Policy Agent works with Terraform

OPA is a general-purpose policy engine that evaluates policies written in a language called Rego against JSON-based data inputs. When it comes to Terraform, OPA can evaluate your policies against the JSON output of a Terraform plan. Tools like Conftest or OPA’s HTTP API can then be used to test whether your proposed infrastructure changes meet your defined policy requirements. For example, you can use OPA to validate that resources are only deployed in approved regions or that all required tags are present. The results of these evaluations can then be used to block, warn, or simply log non-compliant infrastructure changes in your automated workflows.
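
For example, here's a minimal sketch of what a Conftest check might look like in CI (the policy/ directory name is an assumption; point it at wherever your Rego files live):

# tfplan.json is the JSON export of your Terraform plan
# (terraform show -json tfplan.binary > tfplan.json)
conftest test tfplan.json -p policy/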

OPA vs. Sentinel: a quick comparison

If you've been in the Terraform ecosystem for a while, you might be familiar with Sentinel. So, what's the difference between OPA and Sentinel? OPA is a more flexible, open-source policy engine that works across a much broader cloud-native stack – Terraform, Kubernetes, APIs, and more. Sentinel, on the other hand, is developed by HashiCorp and is tightly coupled with Terraform and the HashiCorp ecosystem. While Sentinel is great for what it does within its scope, OPA provides broader integration opportunities, a larger community, and greater extensibility. If you're a team looking to standardize policy enforcement across diverse environments and tools, OPA is definitely the way to go. For an introduction to using OPA with Kubernetes, check out our guide here.

The benefits of integrating OPA with Terraform

Integrating OPA with Terraform offers several compelling benefits:

  • Preventing Non-Compliant Resources: You can stop non-compliant resources from ever being provisioned.
  • Enforcing Standards: Easily enforce tagging, naming, or security requirements across your infrastructure.
  • Early Feedback: Get immediate feedback in your CI/CD pipelines, catching issues before they become problems.

Writing an OPA policy with Terraform—an example

Let's walk through a simple example of how you might write an OPA policy to ensure that all AWS S3 buckets are encrypted. First, you'll need a Terraform plan output in JSON format. You can generate this using terraform plan -out=tfplan.binary and then terraform show -json tfplan.binary > tfplan.json.

Now, let's write a Rego policy (e.g., s3_encryption.rego):

package terraform.aws.s3

default allow = false

# Allow when a changed S3 bucket is configured with AES256 encryption
allow {
 some i
 resource := input.resource_changes[i]
 resource.type == "aws_s3_bucket"
 resource.change.after.server_side_encryption_configuration[_].rule[_].apply_server_side_encryption_by_default[_].sse_algorithm == "AES256"
}

# Deny if any S3 bucket is created or updated without AES256 encryption
deny[msg] {
 some i
 resource := input.resource_changes[i]
 resource.type == "aws_s3_bucket"
 resource.change.actions[_] == "create"
 not resource.change.after.server_side_encryption_configuration[_].rule[_].apply_server_side_encryption_by_default[_].sse_algorithm == "AES256"
 msg := sprintf("S3 bucket '%s' must have AES256 encryption enabled.", [resource.address])
}

deny[msg] {
 some i
 resource := input.resource_changes[i]
 resource.type == "aws_s3_bucket"
 resource.change.actions[_] == "update"
 not resource.change.after.server_side_encryption_configuration[_].rule[_].apply_server_side_encryption_by_default[_].sse_algorithm == "AES256"
 msg := sprintf("S3 bucket '%s' must have AES256 encryption enabled.", [resource.address])
}


To test this policy, you can use the opa eval command:

opa eval -i tfplan.json -d s3_encryption.rego "data.terraform.aws.s3.deny"

If your Terraform plan includes an S3 bucket without AES256 encryption, the opa eval command will return a denial message, indicating a policy violation.
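
For reference, opa eval wraps its results in a JSON envelope. For a non-compliant plan, the (abridged) output looks roughly like the following; the bucket address shown is illustrative:

{
  "result": [
    {
      "expressions": [
        {
          "value": [
            "S3 bucket 'aws_s3_bucket.example' must have AES256 encryption enabled."
          ],
          "text": "data.terraform.aws.s3.deny"
        }
      ]
    }
  ]
}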

How does OPA differ from Oso?

OPA is a general-purpose policy engine used to evaluate rules against JSON-based data inputs. It’s commonly applied to infrastructure and admission control policies, as with Terraform and Kubernetes. Oso, on the other hand, focuses on application-level access control. It helps with authorization challenges within your application, such as determining whether a user is allowed to take a certain action or view a certain document. They are complementary tools specializing in authorization for different parts of your stack.

As an example, engineering teams can centralize infrastructure policy using OPA (e.g., deny insecure configurations) and centralize application authorization logic using Oso (e.g., only grant edit access to users with ‘admin’ or ‘editor’ status). Each tool serves a purpose, and together they support a broader policy-as-code initiative.

When to choose Oso or OPA

This is a common question, and it's important to understand their distinct roles:

  • Use Oso when implementing user and object level authorization logic inside an app (e.g., feature gating, permissions, role hierarchies).
  • Use OPA when you need to validate resource configuration across environments, especially in CI/CD or with infrastructure-as-code tools.

OPA shines in its ability to integrate natively with your infrastructure, like Kubernetes and Terraform. Oso’s strength is in its ability to integrate easily into your application, simplifying and centralizing authorization logic.

Conclusion

Adopting policy as code with OPA is great for managing your infrastructure. It brings the rigor and benefits of software development practices to your infrastructure, making it more secure, compliant, and manageable. While OPA handles the infrastructure side, remember that Oso is there to provide robust, fine-grained authorization within your applications. Used together, they create a powerful, layered approach to policy enforcement across your entire stack.

Open Policy Agent

How to secure Kubernetes with Open Policy Agent

In this article, we’ll cover using OPA Gatekeeper to maintain security and compliance in your Kubernetes environment. Open Policy Agent (OPA) and its Kubernetes-specific integration, OPA Gatekeeper, address the challenges of security and compliance through a clean policy-as-code approach. This article explores what OPA and Gatekeeper are, how they integrate with your Kubernetes environment, and how to use them to enforce organizational security standards. Specifically, this guide will show you how to make your Kubernetes clusters more robust and less prone to misconfigurations.

What is Open Policy Agent (OPA)?

Open Policy Agent is an open-source, general-purpose policy engine that defines and enforces policies as code across your entire infrastructure stack. You can write rules once and apply them everywhere from microservices and APIs to CI/CD pipelines and Kubernetes.

OPA uses Rego for writing policy rules. Rego is designed to query and manipulate structured data like JSON. For example, you could use OPA to deny requests when container images do not come from approved registries. This approach decouples policy decision making from application logic. Your services ask OPA for decisions rather than containing hardcoded rules.

The policy-as-code approach enables version control, testing, and reuse of policies across different environments, making your security posture more consistent and manageable. OPA exposes APIs through HTTP or library calls to evaluate policy queries, acting as a centralized decision point where any component can ask, "Is this action allowed?" or "Does this configuration comply with our policies?".
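
For example, a service running alongside OPA can ask for a decision over OPA's REST Data API. The policy path app/authz/allow and the input fields below are illustrative:

# OPA's HTTP server listens on port 8181 by default
curl -s -X POST http://localhost:8181/v1/data/app/authz/allow \
  -H 'Content-Type: application/json' \
  -d '{"input": {"user": "alice", "action": "read", "resource": "report-42"}}'

# Typical response: {"result": true}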

If you’d like to go deeper on OPA and Rego, we have an entire tutorial with examples.

How OPA helps secure Kubernetes environments

In Kubernetes environments, admission controllers serve as the first line of defense for security and compliance enforcement. These plugins intercept API server requests before objects are persisted. OPA can be deployed as a dynamic admission controller to enforce custom policies on Kubernetes resources.

OPA integration provides flexible mechanisms for implementing fine-grained controls beyond Kubernetes' built-in validations. For example, you could use the OPA integration to mandate specific labels for auditing purposes, enforce resource limits, allow images from approved sources only, etc.

OPA evaluates each incoming object against organizational rules. Non-compliant configurations (such as Pods missing required securityContext settings) are rejected with clear explanatory messages, preventing misconfigurations from being applied.
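
As a rough sketch of what such a check can look like when OPA receives the Kubernetes AdmissionReview as input (the package name and message are illustrative), a rule like this rejects Pods whose containers don't set runAsNonRoot:

package kubernetes.admission

# Reject Pods with containers that do not set securityContext.runAsNonRoot
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not container.securityContext.runAsNonRoot
    msg := sprintf("container %v must set securityContext.runAsNonRoot", [container.name])
}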

Beyond real-time enforcement, OPA constantly audits existing resources for violations, detecting any drift from desired states. This provides a comprehensive approach to defining and enforcing cluster governance rules.

What is OPA Gatekeeper?

OPA Gatekeeper is the result of a collaboration between Google, Microsoft, Red Hat, and Styra to provide native OPA support in Kubernetes. It’s the Kubernetes-specific integration of OPA designed to simplify policy decisions in Kubernetes environments. Gatekeeper extends the Kubernetes API with Custom Resource Definitions (CRDs) for policy enforcement. OPA Gatekeeper is implemented as a webhook that can both validate incoming requests and modify requests before allowing them to pass.

Gatekeeper enhances OPA with several Kubernetes-native features:

  • ConstraintTemplates and Constraints: CRDs that declare policies as Kubernetes objects rather than raw configuration files, enabling policy management through kubectl
  • Parameterization and Reusability: ConstraintTemplates serve as reusable policy definitions, while Constraints are parameterized instances, creating extensible policy libraries
  • Audit Functionality: Continuous resource auditing against enforced policies, identifying violations in resources created before policy implementation
  • Native Integration: Built-in Kubernetes tooling that registers as ValidatingAdmissionWebhook and MutatingAdmissionWebhook, ensuring real-time policy enforcement

Gatekeeper transforms OPA into a Kubernetes-native admission controller using a "configure, not code" approach. Instead of building custom webhooks, you write Rego policies and JSON configurations while Gatekeeper handles admission flow integration.

Working within the Kubernetes control plane

Gatekeeper integrates as a “validating admission webhook” within the API server's admission control pipeline. What does that actually mean? When requests to create or modify Kubernetes resources are sent, the API Server authenticates and authorizes them before invoking admission controllers.

The integration process works as follows: Gatekeeper registers a webhook with the API Server for admission events (Pod creation, Deployment updates, etc.). The API Server pauses requests and sends objects (wrapped in AdmissionReview) to Gatekeeper/OPA for evaluation. Using OPA, Gatekeeper evaluates objects against active policies (Constraints). Policy violations result in rejection responses with explanatory messages, while compliant requests are accepted and fulfilled.

A look at K8s admission control phases

Gatekeeper's admission webhook translates Kubernetes AdmissionReview requests into OPA's input format and queries the loaded Rego policies. The JSON structure passed to OPA includes object content, operations (CREATE/UPDATE), and user information. OPA outputs violations, which Gatekeeper translates into admission responses.

Beyond real-time enforcement, Gatekeeper provides background caching and auditing capabilities. It can replicate Kubernetes objects into OPA's data store, enabling policies to reference other cluster resources (e.g., "deny this Ingress if any other Ingress has the same hostname"). The audit controller periodically scans resources against policies, storing violation results in Constraint status fields for governance reporting.
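
Replication is opt-in. At the time of writing, you tell Gatekeeper which kinds to cache using a Config resource along these lines (which kinds you sync is up to you):

apiVersion: config.gatekeeper.sh/v1alpha1
kind: Config
metadata:
  name: config
  namespace: gatekeeper-system
spec:
  sync:
    syncOnly:
      # Cache Ingresses so policies can compare hostnames across the cluster
      - group: networking.k8s.io
        version: v1
        kind: Ingress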

To recap, Gatekeeper extends the Kubernetes control plane by adding two main things. The first is policy enforcement at admission time. The second is continuous audit capabilities. With OPA Gatekeeper, you’re able to get both of these functionalities without replacing core components of your Kubernetes environment. This architecture integrates cleanly with Kubernetes API machinery while respecting the platform's design principles.

Next we’ll go a little deeper into Constraints and take a look at a real-world example.

ConstraintTemplates and Constraints

ConstraintTemplate is a fundamental concept in OPA Gatekeeper. This Kubernetes Custom Resource Definition (CRD) defines new policy types, serving as blueprints that contain Rego evaluation code and parameter schemas for different policy uses.

When creating a ConstraintTemplate, you define a new constraint type for the Kubernetes API. For example, a template named "k8srequiredlabels" can declare a new constraint kind "K8sRequiredLabels". Templates consist of two main components:

  • Targets & Rego: The actual policy code that runs for admission requests. In Gatekeeper, the target is typically admission.k8s.gatekeeper.sh, applying to Kubernetes object admission events. Rego code produces violation[] or deny[] rules when policies are violated, causing Gatekeeper to block requests with explanatory messages.
  • CRD Schema: Defines the structure of spec.parameters that users provide in Constraints. This enables policy reusability by allowing administrators to specify inputs (required labels, value ranges, etc.) when instantiating policies.

ConstraintTemplates alone do not enforce policies until you create Constraints, which are template instances. The workflow involves applying a ConstraintTemplate (registering the policy type), then applying Constraint resources to activate enforcement. Gatekeeper compiles Rego from all ConstraintTemplates and enforces policies when corresponding Constraints exist.

This pattern enables reusability and separation of concerns. Policy authors provide generic templates while cluster administrators instantiate them with organization-specific settings. For instance, a K8sRequiredLabels template can generate multiple Constraints: one requiring "owner" labels on Deployments, another requiring "environment" labels on Namespaces.

A Real-World Policy Example: Enforcing required labels

Let's make this concrete with an example. Imagine you want to ensure every Kubernetes Namespace has specific labels, perhaps to indicate the department or owner. Here is how you would do it with OPA Gatekeeper:

1. ConstraintTemplate example—required labels policy

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          properties:
            message:
              type: string
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          required := input.parameters.labels
          provided := input.review.object.metadata.labels
          missing := required[_]
          not provided[missing]
          msg := sprintf("Missing required label: %v", [missing])
        }


This ConstraintTemplate defines a new policy type called K8sRequiredLabels. It specifies that this policy will take a message (string) and a list of labels (array of strings) as parameters. The Rego code then checks if all the specified labels are present on the incoming Kubernetes object.

2. Constraint Example—enforcing labels on namespaces

To actually enforce this, you would create a Constraint that uses this template:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: namespace-must-have-owner-and-env
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    message: "Namespaces must have 'owner' and 'environment' labels."
    labels:
      - owner
      - environment

This Constraint named namespace-must-have-owner-and-env uses our K8sRequiredLabels template. It is configured to match Namespace objects and requires them to have both owner and environment labels. If someone tries to create a Namespace without these labels, Gatekeeper will block the request and return the specified message.

Getting started with OPA Gatekeeper

Getting started with OPA Gatekeeper is straightforward. You can install it in your Kubernetes cluster using Helm or by applying the raw YAML manifests. The official Gatekeeper documentation provides detailed instructions for installation.
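
For reference, the Helm install documented by the Gatekeeper project currently looks roughly like this:

helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper/gatekeeper \
  --name-template=gatekeeper \
  --namespace gatekeeper-system \
  --create-namespace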

Once installed, you will want to:

  1. Deploy ConstraintTemplates: Start by deploying the ConstraintTemplates that define the types of policies you want to enforce. You can find a library of common ConstraintTemplates in the Gatekeeper policy library.
  2. Create Constraints: Instantiate Constraints from your ConstraintTemplates, specifying the parameters and the resources they should apply to.
  3. Test your policies: Always test your policies thoroughly in a non-production environment first. Make sure they behave as expected and do not inadvertently block legitimate operations.
  4. Monitor and audit: Use Gatekeeper's audit functionality to continuously monitor your cluster for policy violations and ensure compliance.

The best way to get started is to pick a simple policy, like requiring specific labels or enforcing resource limits, and implement that first. Get comfortable with the ConstraintTemplate/Constraint pattern and how Rego works. From there, you can gradually build up more complex policies as your needs evolve.

Conclusion

OPA Gatekeeper provides a powerful way to implement policy-as-code in your Kubernetes clusters. By combining the flexibility of OPA with Kubernetes-native integration, it enables you to enforce security and compliance policies consistently across your infrastructure. The ConstraintTemplate and Constraint pattern makes policies reusable and maintainable, while the audit functionality helps you maintain ongoing compliance.

OPA Gatekeeper is a robust solution that integrates seamlessly with your existing Kubernetes workflows. Start with simple policies and gradually build up your policy library as you become more comfortable with the system. Best of luck!

Open Policy Agent

1. Introduction to Open Policy Agent (OPA)

If you're building modern applications, especially those using microservices or Kubernetes, you've probably bumped into the challenge of authorizing access to infrastructure. It's not just about who can log in, but what services they can access once they're in, and what those services can access. Today I want to discuss OPA and how we can use it to reason about and implement policy enforcement.

So, what exactly is OPA? At its core, OPA is an open-source, general-purpose policy engine. Think of it as a brain that makes decisions about whether something is allowed or not. The beauty of OPA is that it decouples policy decision-making from policy enforcement. This means your application or service doesn't need to know how to make a policy decision, it just needs to ask OPA, "Is this allowed?" and OPA will give a clear answer.

This is super important, because without proper consideration, you can end up with a hard to manage mess of policies sprinkled all throughout your code. You have policies for who can access your APIs, what containers can run in Kubernetes, how your CI/CD pipelines behave, and even what data can be accessed in your databases. Trying to hardcode these policies into every single service is a nightmare to manage and update. OPA provides a unified way to manage all these policies as code, making them consistent, auditable, and much easier to maintain.

Here's how it generally works: Your software queries OPA, providing structured data (usually JSON) as input. This input could be something like a user's identity, the resource they're trying to access, the time of day, or even environmental variables. OPA then evaluates this input against its policies, which are written in a high-level declarative language called Rego. The output is a decision, which can be a simple allow/deny or more complex structured data, depending on what your policy needs to convey.

In this article, I'm going to walk you through some practical examples and use cases where OPA truly shines. We'll dive into the details of Rego and see how you can use OPA to solve real-world authorization challenges. Let's get started!

2. Understanding Rego: OPA's Policy Language

Now that you have a grasp of what OPA is, let's talk about Rego. Rego is the language you'll use to write your policies in OPA. It's a declarative language, which means you describe what you want to achieve, not how to achieve it. It's specifically designed for expressing policies over complex, hierarchical data structures like JSON.

In Rego, policies are defined as rules. These rules essentially define what is true or false based on the input data. Let's look at some core concepts:

Rules

Rules in Rego can be either complete or partial.

  • Complete Rules: These assign a single value to a variable. For example, a rule that defines whether a request is allowed or denied.
default allow = false

allow {
    input.method == "GET"
    input.path == ["users"]
}

In this simple example, allow is true only if the input method is "GET" and the path is ["users"]. Otherwise, it defaults to false. This means that the following input would result in true.

{
  "method": "GET",
  "path": ["users"]
}


  • Partial Rules: These generate a set of values and assign that set to a variable. This is useful for collecting multiple results that satisfy a condition.
allowed_users[user] {
    data.users[user].role == "admin"
}

This rule would create a set of all users who have the role "admin".

So given the following data document,

{
  "users": {
    "alice": { "role": "admin" },
    "bob": { "role": "user" },
    "charlie": { "role": "admin" }
  }
}


you’d get this output:

allowed_users = {"alice", "charlie"}


Expressions

Rego policies are built using expressions. Multiple expressions within a rule are implicitly joined by a logical AND. All expressions must evaluate to true for the rule to be true. For example:

allow {
    input.user == "alice"
    input.action == "read"
    input.resource == "data"
}


Here, all three conditions must be met for allow to be true. Here’s an example input that would result in true.

{
  "user": "alice",
  "action": "read",
  "resource": "data"
}

Variables

You can use variables to store intermediate values or to iterate over collections. Variables are assigned using the := operator. OPA will find values for variables that make all expressions true.

allow {
    user := input.user
    data.roles[user] == "admin"
}


In this case, user is assigned the value of input.user, and then that user is checked against the data.roles to see if they are an "admin".

Iteration

Iteration in Rego often happens implicitly when you use variables in expressions. For example, to check if any element in a list meets a condition, you can use a variable to represent each element:

allow {
    some i
    input.roles[i] == "admin"
}


This rule would be true if any of the roles in input.roles is "admin". The some keyword is used to declare local variables that are used for iteration.

If you want allow to be true only when roles contains nothing but "admin", you can negate a helper rule:

allow {
    not has_non_admin_role
}

has_non_admin_role {
    input.roles[_] != "admin"
}


This says “Allow if there does not exist any role in input.roles that is not "admin"”. In other words, all roles must be “admin”.

This is just a quick overview of Rego basics. The official OPA documentation is an excellent resource for a deeper dive. Now, let's get into some real-world use cases!

3. OPA Use Cases and Practical Examples

Now let’s see OPA in action! I’ve picked out five common scenarios where OPA can be incredibly powerful. For each, I’ll give you a brief scenario, explain how OPA fits in, and provide a Rego policy example.

Use Case 1: Attribute-Based Access Control (ABAC) for Cloud Resource Access

Imagine you’re managing access to cloud resources (e.g., AWS S3 buckets, Azure Blob Storage, Google Cloud Storage). It’s not enough to just know who is trying to access a resource; you need granular control based on various attributes. For instance, a user might only be allowed to access a specific S3 bucket if they are part of a certain team, the request originates from a whitelisted IP range, and the time of day is within business hours. This is a classic ABAC problem, where access decisions are based on attributes of the user, the resource, the environment, and the action.

How OPA helps: OPA is perfectly suited for ABAC because Rego excels at evaluating complex conditions based on structured data. You can feed OPA all the relevant attributes (user roles, source IP, time of day, resource tags, etc.), and your Rego policy will determine if the access is authorized.

Let’s look at a simplified example for controlling access to an S3 bucket. We want to ensure that:

1. The user is part of the 'devops' team.

2. The request comes from an IP within the corporate network range.

3. The access attempt is during business hours (9 AM to 5 PM UTC).

Here’s how you might write a Rego policy for this:

package app.abac

default allow = false

allow {
    input.user.team == "devops"
    is_from_whitelisted_ip
    is_during_business_hours
}

is_from_whitelisted_ip {
    ip_range_contains("192.168.1.0/24", input.source_ip)
}

is_during_business_hours {
    # time.clock returns [hour, minute, second] for the given timezone
    current_hour := time.clock([time.now_ns(), "UTC"])[0]
    current_hour >= 9
    current_hour < 17
}

# Helper function to check if an IP is within a CIDR range
ip_range_contains(cidr, ip) {
    # Simplified for the example; a real policy would use the
    # net.cidr_contains built-in to perform proper network calculations.
    startswith(ip, "192.168.1.")
}

Input Query (example input.json):

{
  "user": {
    "name": "alice",
    "team": "devops"
  },
  "source_ip": "192.168.1.50",
  "action": "read",
  "resource": "s3_bucket_logs"
}


If you were to run this with opa eval -d policy.rego -i input.json "data.app.abac.allow", the output would be true (assuming the current time is within business hours and the IP is whitelisted). This demonstrates how OPA can enforce attribute-based policies for infrastructure access.

Use Case 2: Role-Based Access Control (RBAC) for Server Access

Let's say you have a fleet of servers, and different teams or individuals need varying levels of access to them. For example, 'network admins' can configure network interfaces, 'devops' can deploy applications, and 'auditors' can only view logs.

In this kind of scenario, OPA can centralize your infrastructure’s RBAC policies. This makes it easier to manage user roles and their associated permissions across your server fleet. Instead of scattering authorization logic across SSH configurations or individual server scripts, you define it once in OPA. When a user tries to perform an action on a server, your system queries OPA with the user's role and the requested action. Then OPA returns whether it's allowed.

Here’s a Rego policy that implements a basic RBAC system for server access:

package app.rbac

import future.keywords.in

default allow = false

# Allow if the user has the required role for the action on the resource
allow {
    some role in data.user_roles[input.user]
    some grant in data.role_grants[role]
    input.action == grant.action
    input.resource_type == grant.resource_type
}


The above Rego code is analogous to the following pseudocode in Python.

for role in data.user_roles[input.user]:
    for grant in data.role_grants[role]:
        if input.action == grant["action"] and input.resource_type == grant["resource_type"]:
            allow = True

Input Data (example data.json):

In this data, an auditor can view logs, devops can additionally deploy applications, and a network_admin can also configure network interfaces:

{
    "user_roles": {
        "alice": [
            "auditor"
        ],
        "bob": [
            "devops",
            "network_admin"
        ],
        "charlie": [
            "auditor",
            "devops"
        ]
    },
    "role_grants": {
        "auditor": [
            {
                "action": "view",
                "resource_type": "logs"
            }
        ],
        "devops": [
            {
                "action": "view",
                "resource_type": "logs"
            },
            {
                "action": "deploy",
                "resource_type": "application"
            }
        ],
        "network_admin": [
            {
                "action": "view",
                "resource_type": "logs"
            },
            {
                "action": "deploy",
                "resource_type": "application"
            },
            {
                "action": "configure",
                "resource_type": "network_interface"
            }
        ]
    }
}


Input Query (example input.json for Alice trying to view logs):

{
  "user": "alice",
  "action": "view",
  "resource_type": "logs"
}

Running opa eval -d policy.rego -d data.json -i input.json "data.app.rbac.allow" would return true for Alice. If Alice tried to deploy an application, it would return false. This example shows how you can define roles and their permissions for infrastructure components, and then easily check if a user, based on their assigned roles, is authorized to perform a specific action. It’s a clean and scalable way to manage access control for your infrastructure.

Use Case 3: Kubernetes Admission Control

Scenario: Kubernetes is fantastic for orchestrating containers, but how do you ensure that only approved images are deployed, or that all deployments have specific labels for cost allocation or security? Kubernetes Admission Controllers intercept requests to the Kubernetes API server before objects are persisted. This is a perfect choke point for policy enforcement.

How OPA helps: OPA can act as a validating or mutating admission controller in Kubernetes. This means you can write policies in Rego that dictate what can and cannot be deployed to your clusters, or even modify resources on the fly. This is incredibly powerful for maintaining security, compliance, and operational best practices across your Kubernetes environments. If you're a company that needs strict control over your Kubernetes deployments, OPA as an admission controller is, in my opinion, a must-have.

Let's consider a policy that prevents deployments from using images from unapproved registries and ensures all deployments have a team label.

package kubernetes.admission

default allow = false

allow {
    input.request.kind.kind == "Pod"
    image_is_approved
    has_team_label
}

# All containers must use images from the approved registry
image_is_approved {
    not has_unapproved_image
}

has_unapproved_image {
    image := input.request.object.spec.containers[_].image
    not startswith(image, "approved-registry.com/")
}

has_team_label {
    input.request.object.metadata.labels.team
}

Input Query (example input.json for a Pod creation request):

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "705ab455-63f4-11e8-b7ad-0242ac110002",
    "kind": {
      "group": "",
      "version": "v1",
      "kind": "Pod"
    },
    "resource": {
      "group": "",
      "version": "v1",
      "resource": "pods"
    },
    "object": {
      "metadata": {
        "labels": {
          "app": "my-app",
          "team": "devops"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "my-container",
            "image": "approved-registry.com/my-image:latest"
          }
        ]
      }
    }
  }
}


Running opa eval -d policy.rego -i input.json "data.kubernetes.admission.allow" would return true for this valid request. If the image was from unapproved-registry.com or the team label was missing, the policy would evaluate to false, and Kubernetes would reject the admission request. This provides a robust and centralized way to enforce policies in your Kubernetes clusters.

Use Case 4: API Gateway Authorization

Scenario: Your microservices architecture likely exposes APIs through an API Gateway. This gateway is the first line of defense for your backend services. You need to authorize incoming requests, perhaps based on JWT claims, IP addresses, or even rate limits. Hardcoding this logic into each microservice is inefficient and error-prone.

How OPA helps: OPA can be integrated with API Gateways (like Envoy, Kong, or AWS API Gateway) to offload authorization decisions. When a request comes into the gateway, it sends the request details (headers, body, path, method) to OPA. OPA evaluates these details against your policies and sends back an allow/deny decision. This centralizes your API authorization logic, making it easier to manage and update policies across all your APIs.

Let's imagine a policy where only authenticated users with a specific role can access a sensitive API endpoint, and only from a whitelisted IP range.

package api.authz

default allow = false

allow {
    input.method == "GET"
    input.path == ["v1", "sensitive_data"]
    is_authenticated
    has_required_role
    is_from_whitelisted_ip
}

is_authenticated {
    input.headers.authorization
    # In a real scenario, you'd decode and validate the JWT here
    # For simplicity, we just check for presence of the header
}

has_required_role {
    input.jwt_claims.roles[_] == "admin"
}

is_from_whitelisted_ip {
    ip_range_contains("192.168.1.0/24", input.source_ip)
}

# Helper function to check if an IP is within a CIDR range
# A real policy would use the net.cidr_contains built-in for this
ip_range_contains(cidr, ip) {
    # Simplified for example, real implementation would parse CIDR and IP
    # and perform network calculations.
    # For demonstration, let's assume a simple string match for now.
    startswith(ip, "192.168.1.")
}

Input Query (example input.json for a valid API request):

{
  "method": "GET",
  "path": ["v1", "sensitive_data"],
  "headers": {
    "authorization": "Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
  },
  "jwt_claims": {
    "sub": "user123",
    "roles": ["user", "admin"]
  },
  "source_ip": "192.168.1.100"
}

Running opa eval -d policy.rego -i input.json "data.api.authz.allow" would return true for this request. If the user didn't have the admin role, or the request came from an unwhitelisted IP, OPA would return false, and the API Gateway would block the request. This centralized approach to API authorization is, in my opinion, a much cleaner way to manage access to your services.

Use Case 5: CI/CD Pipeline Policy Enforcement

Scenario: In a fast-paced development environment, ensuring that your CI/CD pipelines adhere to security, compliance, and operational best practices is crucial. This could involve mandating code reviews for all merges to main, ensuring that specific security scans are run before deployment, or restricting deployments to production environments based on certain criteria (e.g., only from specific branches or by authorized personnel).

How OPA helps: OPA can be integrated into various stages of your CI/CD pipeline to enforce policies. This allows you to "shift left" on security and compliance, catching issues earlier in the development lifecycle. By externalizing these policies to OPA, you keep your pipeline scripts clean and focused on their primary tasks, while OPA handles the complex policy evaluations. If you're looking to automate and standardize your pipeline governance, OPA is an excellent choice.

Let's create a policy that ensures all deployments to the production environment must originate from the main branch and have been approved by at least two reviewers.

package ci_cd.policy

import future.keywords.if

default allow = false

allow if {
    input.environment == "production"
    input.source_branch == "main"
    input.pull_request.approvals >= 2
}


Input Query (example input.json for a production deployment request):

{
  "environment": "production",
  "source_branch": "main",
  "pull_request": {
    "id": "pr-123",
    "approvals": 2,
    "status": "merged"
  },
  "user": "dev_lead"
}

Running opa eval -d policy.rego -i input.json "data.ci_cd.policy.allow" would return true for this deployment. If the source_branch was feature/new-feature or approvals was less than 2, the policy would return false, preventing the deployment. This helps ensure that only well-vetted and compliant code makes it to your critical environments. In my experience, this kind of automated governance is invaluable for maintaining high standards in continuous delivery.

4. OPA vs. Oso: When to Use Which?

So now that we have a good idea of how OPA and Rego work, you’re probably wondering whether you should implement this in your own system. One question you’ll want to answer is whether to go with native OPA or use Oso. Both are fantastic tools for authorization, but they shine in different contexts. Think of it this way:

OPA: The Heavyweight, Flexible, Infra-Friendly Policy Engine

In my opinion, OPA is your go-to when you need to enforce policies outside your application. It's designed to be a general-purpose policy engine that can be integrated across your entire stack. This includes:

  • Infrastructure: Kubernetes admission control, Terraform policy enforcement.
  • Microservices: API Gateway authorization, service-to-service authorization.
  • CI/CD Pipelines: Ensuring compliance and security throughout your deployment process.

OPA is incredibly flexible because it decouples policy from enforcement. You write your policies in Rego, and then various services query OPA for decisions. This makes it ideal for broad, system-level policy enforcement where you need a consistent policy layer across diverse technologies. If you're building a large, distributed system and need a centralized policy decision point for your infrastructure and services, OPA is, in my humble opinion, the clear winner.

Oso: The Batteries-Included Tool for Modern Apps

Oso, on the other hand, is built for enforcing access logic inside your application. It's a batteries-included framework that simplifies adding authorization directly into your application code. Oso provides libraries for various languages (Python, Node.js, Go, Rust, etc.) and a declarative policy language called Polar.

Oso is perfect for scenarios where you need fine-grained, application-specific authorization, such as:

  • User Permissions: Determining what a specific user can do within your application (e.g., "Can Alice edit this document?").
  • Multi-tenancy: Managing access to resources across different tenants.
  • Resource-level Authorization: Controlling access to individual resources based on ownership or relationships.

If you're a developer building a new application and you want to quickly and effectively implement authorization logic directly within your codebase, Oso is an excellent choice. It's designed to be developer-friendly and provides a more integrated experience for application-level authorization.

Practical Guidance on Choosing

Here's how I think about it:

  • Use OPA where you enforce policies outside the app. This means policies related to your infrastructure, network, or inter-service communication. It's about controlling the environment your applications run in.
  • Use Oso where you enforce access logic inside the app. This is about controlling what users can do with the data and features within your application.

It's not necessarily an either/or situation. Many organizations might find value in using both. For example, you could use OPA for your Kubernetes admission control and API Gateway authorization, while using Oso to manage user permissions within your application's backend. The key is to understand their strengths and apply them where they make the most sense for your specific authorization challenges.

5. Conclusion

It was quite the journey, but we did it. We looked at five practical examples showcasing the power and versatility of Open Policy Agent. From fine-grained ABAC and RBAC within your applications to robust policy enforcement in Kubernetes, API Gateways, and CI/CD pipelines, OPA provides a consistent and scalable way to manage authorization across your entire stack. Its declarative policy language, Rego, allows you to express complex rules with clarity and precision.

We also touched upon the distinction between OPA and Oso. In my opinion, understanding their core strengths is key: OPA excels at externalizing policy decisions for your infrastructure and services, while Oso is a fantastic tool for building application-level authorization directly into your code. Both are powerful, and often, they can complement each other beautifully in a comprehensive authorization strategy.

Open Policy Agent
August 2025 Update: Apple has hired the maintainers of OPA and the commercial offerings around OPA will be maintained by the open source community.

This article explores five alternatives to Open Policy Agent (OPA) that offer compelling features for different authorization requirements. We'll examine what makes each solution unique and help you determine which might be the right choice for you.

Understanding Open Policy Agent and Its Limitations

Before diving into alternatives, let's establish what Open Policy Agent is, the use cases where it shines, and where it might fall short for certain applications.

Open Policy Agent is an open-source, general-purpose policy engine that provides unified policy enforcement across the stack. It uses a high-level declarative language called Rego for policy definition and can be deployed as a sidecar, host-level daemon, or library. In general, teams use Open Policy Agent to enforce policy within cloud infrastructure.

While OPA offers flexibility as a general-purpose policy engine, this broad focus comes with tradeoffs:

  • The Rego language has a significant learning curve
  • As a general-purpose tool, it lacks application-specific authorization primitives
  • Implementation requires substantial custom integration work
  • Performance can be a concern in high-throughput scenarios

These limitations have led many development teams to seek alternatives that better align with their specific authorization needs.

Alternative 1: Oso

Oso takes a fundamentally different approach to authorization by focusing specifically on application authorization rather than being a general-purpose policy engine. This specialized focus translates to practical advantages for development teams.

Key Differentiators:

  • Purpose-built for application authorization: Unlike OPA's general-purpose approach, Oso’s policy language, Polar, provides primitives specifically designed for application authorization patterns[1]
  • High-performance data model: Oso’s data model is optimized for authorization operations. Oso can even work directly with your application data when you need to squeeze every last bit of performance out of larger operations like filtering lists.
  • Developer-friendly implementation: The authorization logic can mirror your application code, reducing the complexity of implementation

Oso's specialized focus makes it particularly well-suited for teams that need to implement application authorization models like role-based access control (RBAC), attribute-based access control (ABAC), or relationship-based access control (ReBAC) without the overhead of a general-purpose policy engine.

Alternative 2: AWS Cedar

AWS Cedar represents another specialized approach to authorization, with a focus on readability and application-level authorization.

Key Differentiators:

  • Readability focus: Cedar's policy language prioritizes human readability and understanding while also providing a syntax that resembles AWS IAM definitions. It occupies a middle ground between Rego and Polar.
  • Structured design: Cedar offers a more structured approach to policy definition compared to OPA's Rego
  • Application-level authorization: Like Oso, Cedar focuses specifically on application authorization rather than general policy enforcement

Cedar's safety-oriented approach and fine-grained permissions make it a strong contender, particularly for applications on AWS. However, it has limited tooling and smaller community support compared to more established alternatives.

Alternative 3: Google Zanzibar Based Tools

For applications that need to manage complex relationship-based permissions at scale, tools like AuthZed (SpiceDB) or Auth0 FGA, which are based on Google Zanzibar, offer a compelling alternative to OPA.

Key Differentiators:

  • Graph-based authorization model: Zanzibar clones excel at managing access control via relationships
  • Single source of truth: Systems based on Zanzibar centralize the source of authorization decisions
  • Relationship-focused: Particularly strong for applications where permissions depend on complex relationships between users and resources

While Zanzibar offers powerful capabilities for relationship-based authorization, it introduces system complexity by requiring centralization of all authorization data. As a result, you will need to store, copy, and sync data across your application and your authorization service. It also forces you to model your authorization logic as relationships, which makes it challenging to implement ABAC.

Alternative 4: XACML

The eXtensible Access Control Markup Language (XACML) represents a standards-based approach to authorization that predates OPA and other newer alternatives.

Key Differentiators:

  • Standardized approach: As an OASIS standard, XACML offers a well-defined, standardized framework
  • Mature ecosystem: With a longer history, XACML has established patterns and implementations
  • Comprehensive model: Includes a complete policy language, architecture, and request/response protocol

However, XACML's XML-based approach can be verbose and complex compared to newer alternatives, and it may not be as well-suited for modern cloud-native applications as some of the other options discussed here[2].

Alternative 5: Hashicorp Sentinel

Rounding out our alternatives is Hashicorp Sentinel, which takes yet another approach to policy as code.

Key Differentiators:

  • Infrastructure focus: Particularly strong for infrastructure-related authorization decisions
  • Hashicorp ecosystem integration: Works seamlessly with other Hashicorp products
  • Embedded policy engine: Designed to be embedded within other Hashicorp applications and services

Sentinel's focus on infrastructure makes it particularly valuable for teams that need to enforce policies across Hashicorp-based infrastructure as code deployments. It’s not suited for application authorization [3].

Choosing the Right Authorization Solution

When evaluating these alternatives to Open Policy Agent, consider these key factors:

  • Use case: Are you looking for infrastructure or application authorization?
  • Integration complexity: How much custom work will be required to implement the solution?
  • Performance requirements: Can the solution meet your latency and throughput needs?
  • Team expertise: Which solution aligns best with your team's existing knowledge and skills?
  • Deployment model: Does the solution support your required deployment scenarios?

The right choice depends heavily on your specific requirements. For teams building complex applications with sophisticated authorization needs, purpose-built solutions like Oso often provide advantages over general-purpose policy engines like OPA.

Comparing Key Features

  • OPA – Primary focus: infrastructure authorization. Language: Rego. Learning curve: steep. Deployment model: open source. Best for: general policy.
  • Oso – Primary focus: application authorization. Language: Polar, Oso’s purpose-built language for authorization. Learning curve: moderate. Deployment model: flexible (cloud or self-hosted). Best for: application auth with RBAC, ReBAC, ABAC, and custom roles.
  • AWS Cedar – Primary focus: application authorization. Language: Cedar DSL. Learning curve: moderate. Deployment model: AWS only. Best for: AWS applications.
  • Google Zanzibar-based – Primary focus: relationship-based authorization. Language: variations on the Zanzibar configuration language. Learning curve: steep. Deployment model: vendor-dependent. Best for: relationship-based authorization.
  • XACML – Primary focus: attribute-based authorization. Language: XML-based. Learning curve: steep. Deployment model: open standard. Best for: standards compliance.
  • HashiCorp Sentinel – Primary focus: infrastructure authorization. Language: Sentinel. Learning curve: steep. Deployment model: packaged with Terraform Cloud (HCP Terraform). Best for: general policy if you're all-in on HashiCorp.

Implementation Considerations

Before implementing any authorization solution, consider these questions:

  • How easy is it to get started? Is it a cloud service or do you have to deploy it to your infrastructure?
  • How much support does it require? Do you have the capacity and infrastructure to provide it?
  • What’s the developer experience like? Is it easy to onboard new developers and integrate with your existing development process?
  • How is the documentation? Can you quickly find the information you need?
  • Does the solution support your authorization needs? Will it support them in 6 months? A year? Beyond?
  • Will the solution meet your current performance requirements? Will it continue to do so as your application grows?

Conclusion

While Open Policy Agent offers a flexible, general-purpose approach to policy enforcement, purpose-built alternatives often provide advantages for specific authorization scenarios. By understanding the strengths and limitations of each option, you can select the solution that best fits your unique requirements.

For teams building complex applications with sophisticated authorization needs, solutions like Oso that focus specifically on application authorization often provide the best balance of power, flexibility, and developer experience. The right choice ultimately depends on your specific requirements, existing technology stack, and team expertise.

Citations

[1] https://www.osohq.com/post/oso-vs-opa-open-policy-agent-alternatives

[2] https://www.styra.com/blog/opa-vs-xacml-which-is-better-for-authorization/

[3] https://www.jit.io/resources/security-standards/5-use-cases-for-using-open-policy-agent

Open Policy Agent
August 2025 Update: Apple has hired the maintainers of OPA and the commercial offerings around OPA will be maintained by the open source community.

Open Policy Agent (OPA) is a general-purpose policy engine that helps with policy enforcement in cloud infrastructure. OPA allows users to define and enforce policies across a wide range of systems, ensuring compliance and security in dynamic environments.

I have always considered OPA to be one of the most important advancements for cloud infrastructure. In my experience, absent a bespoke solution (e.g. Oso), a rigorous OPA implementation makes for stronger enterprise software.

To implement the OPA engine, you submit a request in the form of a JSON or YAML object. OPA evaluates this incoming request against its policies and data to give a policy determination. You can think of the determination as a decision: approved or declined. It is now up to your software to enforce this decision.

Decoupling Policy Logic and Business Logic

One of the main reasons to use OPA is that it allows you to decouple policy decision making from the business logic of your services. OPA helps you determine the decision of a policy while your software enforces that decision. This allows you to manage policies in one place rather than coordinating policy changes in the business logic of several systems which could be written in different languages and managed by different teams.

What is Rego?

Rego is the custom language that OPA uses for writing policies. It is a declarative language designed for inspecting and transforming structured data like JSON and YAML, and it is commonly used to express access rules over cloud infrastructure.

Rego was inspired by Datalog, but extended to support structured document models such as JSON and YAML. Some developers consider it particularly confusing (evidenced by this Reddit thread), but Rego is the language of choice for OPA users.

How do you create an OPA policy?

Authorization in OPA starts with loading your authorization data in a structured format like JSON. You then write policy rules in Rego to transform the data as needed in order to derive the authorization context of the data. An example of this could be determining a user’s role or figuring out which organization a file belongs to. When the data is in the right structure, you inspect it to determine whether to allow or deny a request.

How would you design RBAC using OPA?

RBAC, which stands for Role-Based Access Control, is a broad classification for using roles to control the access that users have to resources. Most people are familiar with the concept of roles, and expect them to be a part of any authorization system. For many app developers, roles are the first and fastest step in implementing application authorization.

To design an RBAC system using OPA, you’ll first need to assign roles to all of your users. We’ll do that using a dictionary.  In our case we have two roles: admin and member. Alice is both an admin and a member while Bob is only a member.
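
As a sketch, that role dictionary could be expressed as JSON data along these lines (the exact shape is up to you):

{
  "user_roles": {
    "alice": ["admin", "member"],
    "bob": ["member"]
  }
}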

Based on their roles, Alice and Bob will have different permissions. We’ll need to define the various permissions that each role has. In the example below, a member can only “read” the Acme repository while an admin can “write” and “delete” the repository.
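
Continuing the sketch, the per-role permissions on the Acme repository could be captured like this:

{
  "role_permissions": {
    "member": ["read"],
    "admin": ["write", "delete"]
  }
}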

With these data structures in place, we’ll need to implement logic to determine whether a user has permission to perform a given action based on their role. Given a user, we’ll need to first determine what roles they have. Then, across all of their roles, we need to see what permissions they have.
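
Putting those two structures together (assuming they're loaded into OPA as data), a minimal Rego rule might look like this:

package app.rbac

default allow = false

# Allow if any role assigned to the user grants the requested action
allow {
    role := data.user_roles[input.user][_]
    data.role_permissions[role][_] == input.action
}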

In this simple example, you can see that we defined a policy that determines what permissions our users have. With this policy defined, we can determine if a given user has permissions to do the requested action.

What does OPA do well?

OPA is a great general-purpose policy engine. It’s designed to accept data from a variety of systems in their native format. Its rules language, Rego, provides primitives that allow you to transform and inspect its data as needed during evaluation to make authorization decisions. In this way, OPA emphasizes interoperability with third-party systems, where the data isn’t under your direct control. It is also well suited to machine-to-machine operations.

What are Open Policy Agent alternatives?

Oso is a good alternative to OPA for use cases like application authorization.

Oso’s policy language, Polar, is built around the higher-order entities that you model in applications, such as actors, roles, and relationships. This makes it a natural fit for the application authorization domain.

If this space is relevant to you, I would recommend reading our overview of the distinctions between Open Policy Agent and Oso.

Looking for an authorization service?

Engineering teams are increasingly adopting services for core infrastructure components, and this applies to authorization too. There are a number of authorization-as-a-service options available. OPA is a popular general-purpose policy engine that implements authorization logic as low-level operations on structured data.

Oso Cloud is a managed authorization service that is tailored to Application Authorization. You use Oso Cloud to provide fine-grained access to resources in your app, to define deep permission hierarchies, and to share access control logic between multiple services in your backend. You do all this by using the same sorts of higher-order entities that you’re already modeling in your application: users, roles, relationships, attributes.

Oso also comes with built-in primitives for patterns like RBAC and ReBAC, and it is extensible for other use cases like attribute-based access control (ABAC). It is built using a best practices data model that makes authorization requests fast and ensures that you don’t need to make schema changes to make authorization changes. It provides APIs for enforcement and data filtering. Oso Cloud is also deployed globally for high availability and low latency.

Oso Cloud is free to get started – try it out. If you’d like to learn more about Oso Cloud or ask questions about authorization more broadly, set up a 1:1 with an Oso engineer.

Common questions related to OPA

How much does OPA cost?

OPA is free and is available under an Apache License 2.0.

Who develops OPA?

OPA is developed by the OPA community. It was originally created by Styra, but Apple hired the maintainers of OPA in August 2025.

How do I get started with Rego?

The best place to get started with Rego is to read the Rego documentation on the OPA website.
