
Security Toolchain

The scanning + runtime-security tools chosen for the POC, with rationale for what's in and what's out.


Tool matrix

| Tool | Role | Status |
| --- | --- | --- |
| Trivy (CLI + image scanner) | Image vuln scanning in CI, SBOM, IaC scanning | ✅ IN — CI step + scheduled Nexus repo scans |
| Red Hat Quay Container Security Operator | Cluster-level image vulnerability dashboard | ✅ IN — on spokes |
| Red Hat Advanced Cluster Security (ACS / StackRox) | Runtime security + policy admission + anomaly detection + vulnerability management | ✅ IN — Central on hubs, SecuredCluster on spokes |
| Red Hat Gatekeeper Operator (OPA Gatekeeper, 3.21.0 stable) | Admission-time org policies + mutation (custom rules beyond security) | ✅ IN — all 4 clusters; scoped to org/naming/labeling/resource-caps |
| Red Hat File Integrity Operator | Host-level file integrity monitoring (PCI-DSS 11.5) | ✅ IN — all 4 clusters |
| Gitleaks | Secret-scanning in Git repos | ✅ IN — pre-commit hook + GitLab CI step |
| OpenSCAP via Compliance Operator | CIS + PCI-DSS-4 compliance scans | ✅ IN — scheduled via ACM Policy |
| SonarQube Community Edition | SAST + code quality (source-code bugs, vulnerabilities, code smells) | ✅ IN — VM tier DC/DR (sonarqube-vm1-dc/dr + PG); GitLab CI quality gate |
| Red Hat Network Observability Operator (1.11.1 stable) | eBPF-based network flow logging per cluster — L3/L4 traffic audit | ✅ IN — on all 4 clusters; flows → Loki; flow-visualization plugin in OCP console |
| OpenVAS / Greenbone Community Edition | Network + host vulnerability scanning | ✅ IN — VM tier (replaces Tenable Nessus unless license available) |
| OWASP ZAP | OSS DAST | ✅ IN — scheduled AWX job (replaces Acunetix unless license available) |
| OWASP Dependency-Check | Language-specific dependency CVE scan | ✅ IN — in GitLab CI (complements Trivy's container focus) |
| Kyverno | Admission policy engine (YAML syntax) | ❌ OUT — community only, no Red Hat operator; Gatekeeper (RH-supported) covers the role |
| Falco | Runtime anomaly detection | ❌ OUT — ACS covers this; Falco is community |
| Black Duck (Synopsys SCA) | Enterprise SCA + license compliance | ⚠️ CONDITIONAL — use only if BRAC/comptech-lab has a license; else Trivy + DepCheck + ScanCode CLI cover basics |
| Tenable Nessus Pro | Network vuln scanning | ⚠️ CONDITIONAL — use only if license available; else OpenVAS CE |
| Acunetix | DAST | ⚠️ CONDITIONAL — use only if license available; else OWASP ZAP |

Per-tool plan

Trivy — image & IaC scanning

Uses:

  1. GitLab CI step — every build of brac-poc-demo-app and any other image we produce runs trivy image before pushing to Nexus. Fails the build on Critical CVEs with known fixes.
  2. Nexus scheduled scan — a daily AWX job runs trivy image against all images in our Nexus registry and emits reports to MinIO.
  3. SBOM generation — every image build produces an SBOM (SPDX + CycloneDX formats) via trivy image --format cyclonedx and stores it alongside the image in Nexus.
  4. IaC scan — trivy config on every OpenTofu + Kubernetes YAML change (GitLab CI merge-request check) catches misconfigurations (root containers, missing resource limits, etc.).
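
The CI gate (use 1) plus SBOM emission (use 3) could be wired as a single job. A sketch — the job name, stage, registry variable, and Trivy image tag are illustrative assumptions, not the final pipeline:

```yaml
# Hypothetical .gitlab-ci.yml fragment — $NEXUS_REGISTRY and the image name
# are placeholders; in practice the Trivy image is pulled via the Nexus mirror.
trivy-image-scan:
  stage: scan
  image: aquasec/trivy:latest
  script:
    # Fail only on Critical CVEs that have a known fix (--ignore-unfixed
    # skips vulnerabilities with no upstream fix yet).
    - trivy image --exit-code 1 --severity CRITICAL --ignore-unfixed "$NEXUS_REGISTRY/brac-poc-demo-app:$CI_COMMIT_SHORT_SHA"
    # Emit a CycloneDX SBOM and keep it as a pipeline artifact.
    - trivy image --format cyclonedx --output sbom.cdx.json "$NEXUS_REGISTRY/brac-poc-demo-app:$CI_COMMIT_SHORT_SHA"
  artifacts:
    paths:
      - sbom.cdx.json
```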

Version: Trivy 0.57.x (or current; single binary, updated via Nexus mirror).

Why Trivy over alternatives: open-source, CLI + operator variants, widely used in OSS CI pipelines. We use the CLI, not the operator (the operator is community-maintained, which our "Red Hat-only operator" rule excludes). The cluster-runtime role is covered by Quay Security Operator + ACS, both Red Hat.


Quay Container Security Operator

Role: Red Hat's cluster-integrated image-scanning dashboard. Visible in OCP console.

Deployment: Installed on spoke-dc + spoke-dr via the spoke-platform ApplicationSet (version 3.13.11 per reference pin). Exposes ImageManifestVuln CRs for each pulled image.

What it gives:

  - Per-image vulnerability summary in OCP console
  - Alerts when a cluster runs an image with new known CVEs
  - Feed into ACS for deployment-time policy enforcement


Advanced Cluster Security (ACS / StackRox)

Role: runtime security, admission policy, anomaly detection, and vulnerability management in one platform — ACS replaces Falco/Aqua/etc. for us.

Deployment: ACS Central on both hubs; SecuredCluster on all 4 clusters.

Policies we enable for POC (mapped to PCI-DSS):

  - Deployment with Critical CVEs → block (PCI 6.3.3)
  - Container running as root → warn (PCI 2.2)
  - Privileged container → block
  - Secret detected in env var → alert (PCI 3.6)
  - Cluster admin creation attempt → alert (PCI 8.3)
  - Image from disallowed registry → block (tied to Image.spec.registrySources.allowedRegistries — PCI 6.3.2)
  - Crypto miner runtime behaviour → block (anomaly)
  - File integrity anomaly → alert

ACS violations ship to Splunk via ACS's syslog/HTTP integration.


File Integrity Operator

Role: PCI-DSS 11.5 — monitor critical host files (/etc/passwd, /etc/shadow, /etc/rancher/, /var/lib/etcd/) for unexpected changes.

Deployment: all 4 clusters via all-clusters-baseline ApplicationSet. Uses OpenSCAP + AIDE under the hood.

Alerts: violations posted to ACS + Splunk.


Gitleaks — secret scanning

Two enforcement points:

  1. Pre-commit hook (developer workstation): every commit runs gitleaks protect --staged, which blocks the commit if a potential secret is detected. Each repo has a .pre-commit-config.yaml we ship.
  2. GitLab CI (MR validation): every MR runs gitleaks detect over the diff. Fails the pipeline on hit.
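
The shipped pre-commit config can use the standard upstream gitleaks hook — a sketch; the rev pin is an assumption (pin to whichever 8.x we mirror):

```yaml
# .pre-commit-config.yaml shipped in each repo. rev is an assumed pin —
# update it to the gitleaks version mirrored in Nexus.
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks
```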

Pattern config: we start with gitleaks defaults + add specific patterns for BRAC's likely leak shapes (Cloudflare, GitHub, AWS, GitLab, Keycloak client secrets).

Version: gitleaks 8.x (latest at install).

Recovery: if a secret ever lands in history despite the gates, rotate immediately + git-filter-repo to scrub + force-push. Documented in SECURITY-AND-COMPLIANCE-GUIDE.md.


OWASP Dependency-Check

Role: language-level dependency CVE scan (Java/Node/Python/Go). Complements Trivy, which is container-focused.

Deployment: GitLab CI step on any repo producing code artifacts.

Version: dependency-check 11.x (latest).


OpenSCAP / Compliance Operator

Role: PCI-DSS-4 + OCP4-CIS compliance scans.

Deployment: All 4 clusters via Compliance Operator + ACM Policy. Scheduled scans via ScanSettingBinding. Reports uploaded to MinIO daily.
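
The scheduled scan binding might look like this — a sketch only; profile names vary by Compliance Operator version, so verify with `oc get profiles.compliance -n openshift-compliance` before applying:

```yaml
# Hypothetical ScanSettingBinding — the PCI-DSS v4 profile name below is an
# assumption; the CIS profile (ocp4-cis) ships with the Compliance Operator.
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: pci-and-cis
  namespace: openshift-compliance
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-cis
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: Profile
    name: ocp4-pci-dss-4-0     # assumed name — confirm against installed profiles
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default                # 'default' ScanSetting carries the scan schedule
```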

Demo artifact: the Compliance scan PDF/HTML export for Day 6.


Acunetix — conditional

Status: ⚠️ requires license. Acunetix Standard is ~$4,500/yr. If BRAC / comptech-lab has a license → use it. If not → OWASP ZAP as the free alternative.

Role if used: DAST on WSO2 APIM gateway + brac-poc-demo-app HTTPS endpoints. Scheduled weekly.

Decision needed from you:

  - Do you have an Acunetix license for comptech-lab?
  - If no → proceed with OWASP ZAP (OSS, comparable capability for POC scope)


OWASP ZAP (default DAST if Acunetix absent)

Role: same as Acunetix — DAST on public endpoints (*.routes.<cluster>.opp... ingress URLs, WSO2 gateway, brac-poc-demo-app).

Deployment: scheduled AWX job runs ZAP in daemon mode against targets, publishes report to MinIO + notifies Slack via n8n on new high-severity findings.
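
The AWX job body could be a small wrapper around ZAP's baseline scanner (zap-baseline.py ships in the official ZAP container image). A sketch — the target URL and the dry-run toggle are illustrative assumptions; the MinIO upload and n8n notification steps are omitted:

```shell
#!/bin/sh
# Sketch of the weekly DAST job. ZAP_TARGET is a placeholder URL; AWX would
# template the real gateway / app endpoints in.
TARGET="${ZAP_TARGET:-https://brac-poc-demo-app.example}"
REPORT="zap-baseline-$(date +%Y%m%d).html"

# zap-baseline.py runs a passive baseline scan and writes an HTML report
# into the mounted work dir.
CMD="docker run --rm -v $PWD:/zap/wrk ghcr.io/zaproxy/zaproxy:stable \
zap-baseline.py -t $TARGET -r $REPORT"

# Dry-run by default so the job definition can be reviewed; the AWX job
# template sets ZAP_EXEC=1 to actually execute the scan.
if [ "${ZAP_EXEC:-0}" = "1" ]; then
  $CMD
else
  echo "$CMD"
fi
```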

Version: OWASP ZAP 2.16.x.


Gatekeeper — Red Hat OPA Gatekeeper Operator (3.21.0)

Role: admission-time validation + mutation for organizational policies, scoped to areas ACS and ACM Policy don't cover.

Why it's in (despite overlap with ACS): Red Hat ships the operator (channel stable, version 3.21.0 verified via OperatorHub). Integrates natively with ACM Policy — Gatekeeper ConstraintTemplate + Constraint resources propagated to all 4 clusters via the hub.

Scope split with ACS (no overlap):

| Concern | Enforced by |
| --- | --- |
| Image Critical CVEs | ACS admission |
| Privileged pods, hostPath, root | ACS + Pod Security Admission |
| Cluster config drift (audit profile, registries) | ACM Policy ConfigurationPolicy |
| Organizational labels / naming / resource caps / image tags | Gatekeeper (this tool's territory) |
| Mutations (auto-inject labels, default resource requests) | Gatekeeper mutation webhook |

Initial policy set (authored in openshift-platform-gitops/policies/gatekeeper/):

  1. require-standard-labels — every Deployment/StatefulSet must carry app.kubernetes.io/name, app.kubernetes.io/part-of, brac.poc/cost-center
  2. require-resource-requests — no pod without CPU + memory requests (prevents noisy-neighbor issues)
  3. max-replicas — cap replicas at 10 unless namespace has exception annotation
  4. namespace-naming — namespaces must match ^(kube-|openshift-|staxv-|brac-|sample-|<app>-).+
  5. no-latest-tag — block images using :latest or no tag (supply-chain hygiene)
  6. auto-add-cost-center-label — mutation: if a Deployment has no brac.poc/cost-center label, pull from namespace annotation and inject

All distributed via ACM Policy + PlacementBinding so they apply uniformly across hub-dc / hub-dr / spoke-dc / spoke-dr.
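
Policy 1 (require-standard-labels) could follow Gatekeeper's well-known required-labels pattern — a sketch under the assumption we author our own template rather than import the community library; names are illustrative:

```yaml
# ConstraintTemplate: defines the reusable check and its parameters.
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          required := input.parameters.labels[_]
          not input.review.object.metadata.labels[required]
          msg := sprintf("missing required label: %v", [required])
        }
---
# Constraint: binds the template to Deployments/StatefulSets with our label set.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-standard-labels
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet"]
  parameters:
    labels:
      - app.kubernetes.io/name
      - app.kubernetes.io/part-of
      - brac.poc/cost-center
```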

Demo artifact: intentionally deploy a non-compliant Deployment (missing label, :latest tag) → show Gatekeeper blocking it → fix labels + re-deploy.


SonarQube Community Edition — SAST + quality gate

Role: static analysis on all our code repos (the few we write: brac-poc-demo-app, any custom Go/Python, WSO2 custom mediations, Ansible custom modules, OpenTofu modules where applicable).

Deployment: VM tier, sonarqube-vm1-dc + sonarqube-vm1-dr + sonarqube-pg-vm1-dc/dr. Exposed via staxv HAProxy at sonarqube.apps.brac-poc.comptech-lab.com.

GitLab CI integration: every MR runs sonar-scanner. Quality Gate result posts to the MR — FAIL blocks merge.
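
The MR-side job could look like this — a sketch; the stage name and image tag are assumptions, and SONAR_HOST_URL / SONAR_TOKEN are expected as GitLab CI variables:

```yaml
# Hypothetical GitLab CI job. sonar.qualitygate.wait makes the scanner block
# until the Quality Gate is computed and exit non-zero on FAIL, which fails
# the MR pipeline and blocks merge.
sonarqube-check:
  stage: scan
  image: sonarsource/sonar-scanner-cli:latest
  script:
    - sonar-scanner -Dsonar.qualitygate.wait=true
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```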

Coverage: Java, Go, TypeScript/JS, Python, C# — all languages we're likely to touch. Plus Kubernetes YAML (Sonar K8s plugin) as an additional IaC review layer.

Version: SonarQube Community Edition 10.x LTS (latest).


OpenVAS / Greenbone CE — network vulnerability scanner

Role: scans VMs + network services for known vulnerabilities (CVE-based). Complements Trivy (images) + Compliance Operator (OCP hosts) by covering the VM tier + network surfaces.

Deployment: one VM per site (openvas-vm1-dc/dr); daily scheduled scans against the full VM fleet + exposed network services. Reports to MinIO + alerts via n8n for High/Critical findings.

Version: Greenbone Community Edition (OpenVAS) 23.x or current LTS.

Note: if Tenable Nessus Pro license is provided later, swap this out — same role, better scan engine.


Egress firewall

See NETWORK-EGRESS-POLICY.md for the full multi-layered egress plan.


Falco — considered, then OUT

Why we skip it:

  1. Falco is community (no Red Hat-supported build as an operator); our "Red Hat operator only" policy excludes it.
  2. ACS Runtime Policies cover the use cases (syscall anomaly detection, process execution whitelist, file modification monitoring).
  3. One less moving part for a 6-day POC.

If Phase 2 wants Falco specifically (some orgs mandate it), it can be revisited as a workload (non-operator deployment).


CI/CD security gates — summary

Every merge request to brac-poc-infrastructure, brac-poc-ansible, openshift-platform-gitops, brac-poc-demo-app:

  1. gitleaks detect (block on hit)
  2. ansible-lint (if Ansible) + tflint (if OpenTofu) + kubeconform (if K8s YAML)
  3. yamllint
  4. trivy config (IaC misconfig scan, block on HIGH)
  5. trivy image (only for repos building images, block on CRITICAL with fix)
  6. dependency-check (only for code repos)
  7. owasp-zap --passive (baseline scan of PR-preview env for app repos)
  8. unit + integration tests

No merge without all gates green.
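
Mapped onto pipeline stages, the gates above might group like this — an ordering sketch only; actual job definitions live in each repo's .gitlab-ci.yml:

```yaml
# Stage ordering sketch — cheap gates first, expensive scans later.
stages:
  - secrets    # gitleaks detect
  - lint       # ansible-lint / tflint / kubeconform / yamllint
  - scan       # trivy config, trivy image, dependency-check
  - dast       # owasp-zap passive baseline (app repos only)
  - test       # unit + integration tests
```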


Demo day — security section (~5 min)

  1. Show Compliance Operator scan report (PCI-DSS-4 + OCP4-CIS) → >95% compliant, remediation plan for rest
  2. Attempt to deploy an image with Critical CVE → blocked by ACS admission webhook → show the violation event
  3. Open ACS Central dashboard → show runtime policy violations across all 4 clusters
  4. Open Quay Container Security Operator dashboard in OCP console → vulnerability overview
  5. Show a GitLab MR that tripped Gitleaks → dev's console output + remediation (removed the fake secret, re-push)
  6. Show Splunk dashboard with all ACS + Compliance + File Integrity alerts funneled in for PCI audit trail
  7. If ZAP/Acunetix ran: show the DAST report

Open decisions you need to call

  1. Acunetix license: yes (use it) or no (OWASP ZAP)?
  2. ACS policies: start with OOB Red Hat defaults + add our ~8 PCI-mapped policies; or build a custom bank-specific policy bundle from scratch? Default = OOB + PCI policies, ~3 hours of config.

Created: 2026-04-24 · Owner: Security Lead · Decision: #034