
Network & Egress Policy

Multi-layered egress control for PCI-DSS 1.2 + 1.3 (segmentation + outbound filtering). Default posture: deny all outbound; allow specific destinations per role.


The layers

```mermaid
flowchart TB
    subgraph Pod["Pod / workload"]
        App[App process]
    end
    subgraph NS["OCP Namespace"]
        NP["NetworkPolicy<br/>(egress rules)"]
    end
    subgraph Cluster["OpenShift cluster"]
        EF["EgressFirewall<br/>(OVN-Kubernetes native)<br/>Default-deny + allow list"]
        EIP["EgressIP<br/>(pin source IP per namespace)"]
    end
    subgraph VMHost["VM host (Ubuntu 24.04)"]
        NFT["nftables rules<br/>via Ansible role<br/>Default-deny OUTPUT"]
    end
    subgraph NetEdge["Network edge (future)"]
        GW["Optional:<br/>OPNsense/pfSense<br/>gateway VM"]
    end

    App --> NP
    NP --> EF
    EF --> EIP
    EIP --> NFT
    NFT --> GW
    GW --> Internet["Internet /<br/>approved external"]

    classDef layer fill:#263238,stroke:#455a64,color:#fff
    class Pod,NS,Cluster,VMHost,NetEdge layer
```

Four mandatory layers (plus an optional fifth at the network edge), defense-in-depth: if one layer is misconfigured, at least one other still catches a given rule violation.


Layer 1 — Pod-level NetworkPolicy (K8s native)

Deployed via ACM Policy → all namespaces on all 4 clusters. Default-deny-all-egress per namespace; workloads opt-in via specific NetworkPolicy egress allowances.

Example (baseline deployed by ACM Policy to every non-system namespace):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress: []   # deny all
```

Then per-workload additions:

```yaml
# Allow brac-poc-demo-app to reach the WSO2 APIM gateway only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: brac-poc-demo-app-egress
  namespace: sample-app
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: brac-poc-demo-app
  policyTypes: [Egress]
  egress:
    - to:
        - ipBlock:
            cidr: 26.26.200.0/24   # WSO2 APIM Gateway VMs
      ports:
        - protocol: TCP
          port: 9443
    - to:   # allow DNS
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: openshift-dns
      ports:
        - protocol: UDP
          port: 53
```


Layer 2 — OpenShift EgressFirewall (OVN-Kubernetes native)

Cluster-wide egress policy per namespace, enforced at the OVN-Kubernetes data plane. Goes beyond NetworkPolicy by letting us allow traffic to specific external hostnames (DNS-resolved dynamically).

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: sample-app
spec:
  egress:
    # Allow specific internal VMs
    - type: Allow
      to:
        cidrSelector: 26.26.200.0/24
    # Allow specific external hosts (DNS resolved)
    - type: Allow
      to:
        dnsName: registry.redhat.io
    - type: Allow
      to:
        dnsName: quay.io
    - type: Allow
      to:
        dnsName: registry.connect.redhat.com
    # Allow Red Hat subscription / telemetry
    - type: Allow
      to:
        dnsName: api.openshift.com
    # Deny everything else
    - type: Deny
      to:
        cidrSelector: 0.0.0.0/0
```

One EgressFirewall CR per namespace. Committed to openshift-platform-gitops/policies/egress-firewalls/.
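A possible layout for that directory (the per-namespace file names here are illustrative, not the actual repo contents):

```
openshift-platform-gitops/
└── policies/
    └── egress-firewalls/
        ├── sample-app.yaml      # one EgressFirewall CR, metadata.name: default
        ├── wso2-apim.yaml
        └── kustomization.yaml   # aggregates one CR per namespace
```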


Layer 3 — EgressIP (pin outbound source IP per namespace)

Pins the source IP of outbound traffic leaving the cluster to a specific IP. Useful when upstream firewalls/gateways whitelist by source IP (e.g., "BRAC's mainframe API only accepts calls from these 3 IPs").

```yaml
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: wso2-egress
spec:
  egressIPs:
    - 59.153.29.108
  namespaceSelector:
    matchLabels:
      egress-group: wso2
  podSelector:
    matchLabels:
      egress-group: wso2
```

Reserved egress IPs on br-real: 59.153.29.108-.114 (7 IPs) — one per egress group (WSO2, demo-app, operators, etc.).
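The group-to-IP mapping can be generated instead of hand-maintained. A minimal sketch — the reserved block is the one above, but group names beyond `wso2` are illustrative:

```python
import ipaddress

# Reserved egress block on br-real: 59.153.29.108 through .114 (7 IPs)
RESERVED = [str(ipaddress.ip_address("59.153.29.108") + i) for i in range(7)]

# One egress group per IP; names other than "wso2" are hypothetical examples
GROUPS = ["wso2", "demo-app", "operators"]

def egressip_manifest(group: str, ip: str) -> dict:
    """Build an OVN-Kubernetes EgressIP CR for one egress group."""
    return {
        "apiVersion": "k8s.ovn.org/v1",
        "kind": "EgressIP",
        "metadata": {"name": f"{group}-egress"},
        "spec": {
            "egressIPs": [ip],
            "namespaceSelector": {"matchLabels": {"egress-group": group}},
            "podSelector": {"matchLabels": {"egress-group": group}},
        },
    }

manifests = [egressip_manifest(g, ip) for g, ip in zip(GROUPS, RESERVED)]
```

Feeding the output through a YAML serializer into the GitOps repo keeps the allocation table and the CRs from drifting apart.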


Layer 4 — VM-level nftables (Ansible-managed)

VMs don't have OCP policy enforcement, so we enforce at the host level with nftables (modern iptables replacement, default on Ubuntu 24.04).

Ansible role egress-firewall applies a standardized rule set based on VM role labels:

```
Role: MinIO
  Allow OUT tcp/9000,9001 → other minio-vm* (cluster sync)
  Allow OUT tcp/443 → api.openshift.com (if OCP backup target)
  Deny all other outbound

Role: Vault
  Allow OUT tcp/8200 → other vault-vm (Raft peer traffic)
  Allow OUT tcp/9000 → minio-vm (snapshot upload)
  Deny all other outbound

Role: WSO2 APIM Gateway
  Allow OUT tcp/5432 → wso2-apim-pg-vm (DB)
  Allow OUT tcp/8243 → wso2-apim-km-vm (key manager)
  Allow OUT tcp/9443 → wso2-is-vm* (identity federation)
  Allow OUT tcp → 26.26.200.0/24 (spoke API VIPs for calling backend APIs)
  Deny all other outbound
```

Base template in the Ansible role; per-role overlays are short YAML lists ingested into the nftables template.
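A sketch of what the rendered template could look like for a Vault VM — table/chain names, peer addresses, and the DNS/NTP baseline are assumptions here, not the actual role output:

```
# /etc/nftables.conf — rendered for role: vault (illustrative)
table inet egress {
    chain output {
        type filter hook output priority 0; policy drop;

        ct state established,related accept
        oifname "lo" accept

        # Baseline: DNS + NTP (assumed allowance)
        udp dport { 53, 123 } accept

        # Raft peer traffic to the other Vault node (placeholder IP)
        ip daddr 26.26.200.21 tcp dport 8200 accept

        # Snapshot upload to MinIO (placeholder IP)
        ip daddr 26.26.200.31 tcp dport 9000 accept

        # Log, then fall through to the drop policy
        log prefix "egress-deny: " counter
    }
}
```

The trailing `log prefix` rule is what feeds the journald → Loki evidence trail referenced under 10.2.7 below.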


Layer 5 (optional, Phase 2) — network edge gateway

A dedicated OPNsense or pfSense VM acts as the egress NAT for the whole POC subnet. Centralizes logging + rule set + IDS.

Out of POC scope (~2 days to configure properly). Flagged for Phase 2.


How to add a new allowed destination

Every new app needs egress rules. Workflow:

  1. Engineer drafts NetworkPolicy + EgressFirewall YAML + nftables Ansible vars in the appropriate repo
  2. MR review: Security Lead verifies the destination is business-justified
  3. CI: kubeconform + ansible-lint + nft -c syntax check
  4. Merge → ArgoCD propagates OCP policies, AWX runs the nftables role on VMs
  5. No fast-path approvals: adding a new external domain is a policy change, and the MR trail makes it auditable
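The CI step could look like this in a .gitlab-ci.yml job (job name and paths are illustrative; the three tools are the ones listed above):

```yaml
egress-policy-checks:
  stage: validate
  script:
    # Schema-check the NetworkPolicy / EgressFirewall manifests
    - kubeconform -strict -ignore-missing-schemas policies/
    # Lint the egress-firewall Ansible role and its vars
    - ansible-lint roles/egress-firewall
    # Syntax-check the rendered nftables rule set without loading it
    - nft -c -f rendered/nftables.conf
```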

Evidence for PCI-DSS audit

  • 1.2.1 (restrict inbound/outbound): EgressFirewall + NetworkPolicy + nftables all default-deny with explicit allow-list
  • 1.3.2 (outbound traffic restrictions): EgressIP pins source for whitelisted destinations
  • 1.3.4 (prevent internal network details from being exposed): NAT at the egress gateway (when Phase 2 lands)
  • 10.2.7 (log all egress allow/deny): OVN-Kubernetes egress logs → Loki → Splunk; nftables rules with a log prefix → journald → Loki via the node-level log collector

Full mapping in a separate audit-evidence doc that we produce during Day 6.


Commonly allowed external destinations (pre-approved)

| Destination | Reason | Who can reach it |
|---|---|---|
| api.openshift.com | Cluster telemetry + Insights | openshift-system namespaces |
| registry.redhat.io, quay.io, registry.connect.redhat.com | OCP image pulls | openshift-system + image-registry |
| sso.redhat.com | Red Hat token exchange | namespaces running RH operators |
| gitlab.apps.brac-poc.comptech-lab.com (internal but via br-real) | CI/CD sync | workload namespaces |
| *.comptech-lab.com via Cloudflare | cert-manager DNS-01 cert | cert-manager namespace |
| deb.debian.org, ubuntu.com, pypi.org, github.com | apt / pip / module downloads | strictly sample-app dev namespaces; prod namespaces blocked |

Everything else is DENY unless a specific EgressFirewall or nftables rule allows it.


Version pins

  • OVN-Kubernetes: shipped with OCP 4.21.9 (no separate pin)
  • nftables: Ubuntu 24.04 default (1.0.9 as of Noble)
  • No extra operator needed — egress firewall is built into OCP

Created: 2026-04-24 · Owner: Security + Infrastructure Leads · Decision: #040