
BRAC POC - Deployment Guide

Step-by-step guide to deploying the entire 9-component POC in the aggressive 6-day timeline.

Prerequisites

Required Tools

  • Terraform >= 1.5: terraform version
  • Kubernetes CLI (kubectl) >= 1.24: kubectl version --client
  • OpenShift CLI (oc) >= 4.12: oc version
  • Helm >= 3.12: helm version
  • Docker >= 24.0: docker version (for building sample apps)
  • Git >= 2.30: git --version

System Requirements

  • 3-node OpenShift cluster: 12+ vCPU, 24+ GB RAM
  • Networking: Nodes can reach each other, external internet access
  • Storage: 500+ GB available (for ClickHouse, artifact storage)
  • Cloud/On-Premise: AWS, GCP, Azure, or on-premise VMs

Environment Setup

```bash
# Clone the repository
git clone <repository-url>   # replace with the actual repo URL
cd brac-poc

# Verify Git is configured
git config user.name "Your Name"
git config user.email "your.email@example.com"

# Create develop branch
git checkout -b develop
```


Phase 1: Foundation & Infrastructure (Days 1-3)

Step 1A: OpenShift Cluster Provisioning

Estimated Time: 3-4 hours

1. Configure Terraform Variables

```bash
cd terraform/openshift
cp terraform.tfvars.example terraform.tfvars
```

Edit `terraform.tfvars`:

```hcl
# Infrastructure
cluster_name = "brac-poc-ocp"
region       = "us-east-1"
environment  = "poc"

# Node Configuration
node_count        = 3
node_machine_type = "t3.2xlarge" # 8 vCPU, 32 GB RAM per node
storage_size      = 500          # GB per node

# OpenShift
openshift_version = "4.12"
pull_secret       = file("~/.openshift/pull-secret.txt")

# ODF Storage
enable_odf       = true
odf_storage_size = 200 # GB
```

2. Initialize Terraform

```bash
terraform init

# Verify state backend, then plan
terraform plan -out=tfplan
```

3. Provision Cluster

```bash
# Review plan
terraform show tfplan

# Apply
terraform apply tfplan

# Export kubeconfig
terraform output -raw kubeconfig > ~/.kube/brac-poc
export KUBECONFIG=~/.kube/brac-poc

# Verify cluster access
oc whoami
oc get nodes
```

Success Criteria:

  • ✅ 3 nodes in "Ready" state
  • ✅ OpenShift console accessible
  • ✅ Storage classes available: `ocs-storagecluster-ceph-rbd`, `ocs-storagecluster-ceph-rgw`
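To confirm the ODF storage classes actually provision volumes, you can apply a throwaway PVC against the block storage class. This manifest is a sketch; the claim name, namespace, and size are arbitrary choices, not repo facts:

```yaml
# test-pvc.yaml — throwaway claim to verify the ODF block storage class binds
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: odf-smoke-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ocs-storagecluster-ceph-rbd
  resources:
    requests:
      storage: 1Gi
```

Apply it with `oc apply -f test-pvc.yaml`, check that `oc get pvc odf-smoke-test` reaches `Bound`, then delete it.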

4. Document Cluster Info

```bash
# Save outputs for later reference
terraform output > cluster-info.txt
echo "Cluster Console: $(terraform output console_url)"
echo "API Server: $(terraform output api_server_url)"
```

Commit to Git:

```bash
git add terraform/openshift/
git commit -m "infra: provision 3-node openshift cluster with odf

- Automated cluster provisioning with Terraform
- 3 nodes, 8 vCPU / 32 GB RAM each
- ODF storage configured (Block + Object)
- Cluster info captured in terraform outputs
- Kubeconfig exported"
```

Step 1B: CI/CD Infrastructure (GitLab + Jenkins)

Estimated Time: 2-3 hours
Can run in parallel with 1A

1. GitLab HA Deployment

```bash
cd terraform/cicd/gitlab

# Configure variables
cat > terraform.tfvars <<EOF
gitlab_hostname   = "gitlab.brac-poc.local"
gitlab_replicas   = 2
postgres_replicas = 2
EOF

terraform init && terraform apply
```

Verify:

```bash
# Get GitLab URL
GITLAB_URL=$(terraform output -raw gitlab_url)
echo "GitLab URL: $GITLAB_URL"

# Wait for deployment (~5 min)
watch kubectl get pods -n gitlab

# Get initial root password
kubectl exec -n gitlab gitlab-0 -- grep 'Password:' /etc/gitlab/initial_root_password
```

2. Jenkins HA Deployment

```bash
cd ../jenkins

cat > terraform.tfvars <<EOF
jenkins_replicas = 2
jenkins_plugins  = "git,docker,kubernetes,pipeline"
EOF

terraform init && terraform apply
```

Verify:

```bash
JENKINS_URL=$(terraform output -raw jenkins_url)
echo "Jenkins URL: $JENKINS_URL"

# Get initial admin password
kubectl exec -n jenkins jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
```

3. Configure Monorepo

```bash
# Create sample monorepo structure
mkdir -p git-repos/brac-poc-services/{service-a,service-b,service-c}

# Add a root .gitlab-ci.yml for the monorepo
cat > git-repos/brac-poc-services/.gitlab-ci.yml <<'EOF'
stages:
  - build
  - push
  - deploy

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - kubectl set image deployment/app app=$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  only:
    - main
EOF

# Push to GitLab
cd git-repos/brac-poc-services
git init
git add .
git commit -m "feat: initialize monorepo with sample services"
git remote add origin "$GITLAB_URL/root/brac-poc-services.git"
git push -u origin main
```

Commit:

```bash
git add terraform/cicd/
git commit -m "infra: deploy gitlab and jenkins ha clusters

- GitLab HA: 2 replicas with PostgreSQL
- Jenkins HA: 2 replicas with Kubernetes plugin
- Monorepo structure for CI/CD demonstration"
```

Step 1C: Kafka + Redis Infrastructure

Estimated Time: 2-3 hours
Can run in parallel with 1A & 1B

1. Kafka KRaft Cluster

```bash
cd terraform/kafka

cat > terraform.tfvars <<EOF
kafka_brokers          = 3
broker_replicas        = 2
enable_schema_registry = true
EOF

terraform init && terraform apply

# Get Kafka bootstrap servers
KAFKA_BROKERS=$(terraform output -raw kafka_brokers)
echo "Kafka brokers: $KAFKA_BROKERS"
```

2. Create Kafka Topics

```bash
# Connect to a Kafka broker pod
kubectl exec -it kafka-0 -n kafka -- bash

# Create topics (run inside the pod)
kafka-topics.sh --bootstrap-server kafka:9092 --create \
  --topic telemetry.logs --partitions 3 --replication-factor 2
kafka-topics.sh --bootstrap-server kafka:9092 --create \
  --topic telemetry.metrics --partitions 3 --replication-factor 2
kafka-topics.sh --bootstrap-server kafka:9092 --create \
  --topic telemetry.traces --partitions 3 --replication-factor 2

# List topics
kafka-topics.sh --bootstrap-server kafka:9092 --list
```

3. Redis Sentinel HA

```bash
cd ../redis

cat > terraform.tfvars <<EOF
redis_nodes    = 3
sentinel_nodes = 3
replicas       = 2
EOF

terraform init && terraform apply

# Verify Redis
REDIS_MASTER=$(terraform output -raw redis_master_host)
echo "Redis master: $REDIS_MASTER"
```

Validate:

```bash
# Test Redis failover: delete the master pod so Sentinel elects a new master
kubectl delete pod redis-0 -n redis

# Watch the remaining pods; a replica should be promoted within seconds
kubectl get pods -n redis -w
```

Commit:

```bash
git add terraform/kafka terraform/redis
git commit -m "infra: provision kafka kraft cluster and redis sentinel ha

- Kafka: 3 brokers, KRaft mode, 3 topics for the observability pipeline
- Redis: 3 nodes, Sentinel HA with 2 replicas
- Both ready for Phase 2 telemetry pipeline"
```

Phase 2: Kubernetes Components (Days 3-5)

Step 2A: Compliance & Security

Estimated Time: 1-2 hours

1. Deploy Compliance Operator

```bash
cd k8s/operators

kubectl apply -f compliance-operator.yaml

# Wait for the operator
kubectl wait --for=condition=Available --timeout=300s \
  deployment/compliance-operator -n openshift-compliance
```
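The guide applies `compliance-operator.yaml` from the repo without showing its contents. If it has to be recreated, the Compliance Operator is typically installed through OLM with a Subscription; a minimal sketch (the `openshift-compliance` namespace and an OperatorGroup targeting it must already exist):

```yaml
# Illustrative OLM Subscription for the Compliance Operator — a sketch,
# not necessarily the repo's actual compliance-operator.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: compliance-operator
  namespace: openshift-compliance
spec:
  channel: stable
  name: compliance-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```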

2. Create PCI-DSS Scan Profile

```bash
cat > k8s/operators/pci-dss-scan.yaml <<'EOF'
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceScan
metadata:
  name: pci-dss-scan
  namespace: openshift-compliance
spec:
  profile: xccdf_org.ssgproject.content_profile_pci-dss
  scanType: Node
  nodeSelector:
    scan: "true"
EOF

kubectl apply -f k8s/operators/pci-dss-scan.yaml

# Monitor scan progress
kubectl get compliancescans -n openshift-compliance -w
```

3. Deploy ACS (Advanced Cluster Security)

```bash
kubectl apply -f k8s/operators/acs-operator.yaml

# Configure policy to block Critical vulnerabilities
kubectl apply -f k8s/operators/acs-policy-block-critical.yaml
```

4. Generate Remediation Report

```bash
# Export scan results
kubectl get compliancescans pci-dss-scan -n openshift-compliance -o json > \
  compliance-scan-results.json

# Generate report
python3 scripts/generate-compliance-report.py \
  --input compliance-scan-results.json \
  --output compliance-report.pdf
```

Commit:

```bash
git add k8s/operators/
git commit -m "sec: deploy compliance operator with pci-dss scanning

- Compliance Operator scans cluster against PCI-DSS baseline
- ACS policy blocks deployment of Critical vulnerability images
- Auto-remediation report generation added"
```

Step 2B: Observability Stack (OpenTelemetry)

Estimated Time: 3-4 hours
Critical path component

1. Deploy OTel Collector

```bash
cd k8s/observability

# Install Helm chart for the OTel Collector
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

helm install otel-collector open-telemetry/opentelemetry-collector \
  -f k8s/observability/otel-collector-values.yaml \
  -n observability --create-namespace

# Verify DaemonSet
kubectl get daemonset -n observability
```
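The install above assumes `otel-collector-values.yaml` already exists in the repo. If it has to be reconstructed, a minimal sketch for DaemonSet mode with an OTLP receiver feeding the Kafka buffer might look like the following — the topic and broker address match Phase 1C, but the rest is illustrative, not the repo's actual file:

```yaml
# otel-collector-values.yaml — illustrative sketch for the
# opentelemetry-collector Helm chart, not the repo's actual values
mode: daemonset

config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  exporters:
    kafka:
      brokers:
        - kafka.kafka.svc.cluster.local:9092
      topic: telemetry.traces
  service:
    pipelines:
      traces:
        receivers: [otlp]
        exporters: [kafka]
```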

2. Deploy SigNoz Backend

```bash
helm repo add signoz https://charts.signoz.io
helm repo update

helm install signoz signoz/signoz \
  -f k8s/observability/signoz-values.yaml \
  -n observability

# Wait for pods
kubectl wait --for=condition=Ready pod -l app=signoz \
  -n observability --timeout=600s

# Get SigNoz URL
SIGNOZ_URL=$(kubectl get ingress -n observability signoz -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "SigNoz UI: http://$SIGNOZ_URL"
```

3. Configure ClickHouse Retention

```bash
kubectl apply -f k8s/observability/clickhouse-retention-policy.yaml

# Verify hot/cold storage
kubectl exec -n observability clickhouse-0 -- \
  clickhouse-client -q "SHOW TABLES IN default"
```

4. Deploy Sample Application (Instrumented)

```bash
# App instrumented with the OTel SDK
kubectl apply -f k8s/sample-app/app-with-otel-sdk.yaml

# App without the SDK (to demonstrate the difference)
kubectl apply -f k8s/sample-app/app-without-sdk.yaml

# Generate traffic
kubectl port-forward -n default svc/sample-app 8080:8080 &
for i in {1..100}; do curl http://localhost:8080/api/data; done
```
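If `app-with-otel-sdk.yaml` needs to be reconstructed, the essential part is pointing the SDK at the collector through the standard OTLP environment variables. A hedged excerpt of the container spec (the image and sampler choice are placeholders, not repo facts):

```yaml
# Excerpt from a hypothetical app-with-otel-sdk.yaml — only the env vars
# the OTel SDK reads; the image name is a placeholder
containers:
  - name: sample-app
    image: sample-app:latest
    env:
      - name: OTEL_SERVICE_NAME
        value: sample-app
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: http://otel-collector.observability.svc.cluster.local:4317
      - name: OTEL_TRACES_SAMPLER
        value: parentbased_always_on
```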

5. Verify End-to-End Flow

```bash
# Check OTel Collector logs
kubectl logs -n observability -l app=otel-collector --tail=50

# Check Kafka topics
kubectl exec -it kafka-0 -n kafka -- \
  kafka-console-consumer.sh --bootstrap-server kafka:9092 \
  --topic telemetry.traces --from-beginning --max-messages 10

# View in SigNoz: navigate to the SigNoz URL → Traces → select a service
```

Commit:

```bash
git add k8s/observability/ k8s/sample-app/
git commit -m "feat: deploy opentelemetry observability stack

- OTel Collector as DaemonSet on each node
- SigNoz with ClickHouse backend (2d hot retention)
- Kafka integration for logs/metrics/traces buffering
- Sample apps with and without SDK instrumentation
- End-to-end tracing verified"
```

Step 2C: WSO2 APIM + Identity Server

Estimated Time: 2-3 hours

1. Deploy WSO2 APIM

```bash
cd k8s/wso2

helm repo add wso2 https://wso2.github.io/helm-charts
helm repo update

helm install wso2-apim wso2/apim \
  -f k8s/wso2/apim-values.yaml \
  -n wso2 --create-namespace

# Wait for deployment
kubectl wait --for=condition=Ready pod -l app=apim \
  -n wso2 --timeout=600s

APIM_URL=$(kubectl get ingress -n wso2 apim -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "WSO2 APIM: https://$APIM_URL"
```

2. Deploy WSO2 Identity Server

```bash
helm install wso2-is wso2/is \
  -f k8s/wso2/is-values.yaml \
  -n wso2

# Verify
kubectl get pods -n wso2 | grep is
```

3. Configure SSO (SAML/OIDC)

```bash
# Create SAML service provider
kubectl apply -f k8s/wso2/saml-sp-config.yaml

# Create OIDC client
kubectl apply -f k8s/wso2/oidc-client-config.yaml

# Test the login flow: open the APIM console in a browser and attempt SSO login
```

Commit:

```bash
git add k8s/wso2/
git commit -m "feat: deploy wso2 apim and identity server

- Distributed APIM with HA
- Identity Server integrated for SSO
- SAML and OIDC providers configured
- Rate limiting policies applied"
```

Step 2D: Middleware (NGINX + Open Liberty)

Estimated Time: 2 hours

1. Deploy Open Liberty

```bash
cd k8s/middleware

kubectl apply -f k8s/middleware/open-liberty-deployment.yaml

# Verify
kubectl get pods -n default | grep open-liberty
```

2. Deploy NGINX Load Balancer

```bash
kubectl apply -f k8s/middleware/nginx-canary-routing.yaml

# Verify canary routing
kubectl get virtualservices
```
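The weighted routing relies on a VirtualService with two routes. If `nginx-canary-routing.yaml` needs reconstructing, a sketch of the relevant resource — the host, destination, and subset names here are assumptions, not repo facts:

```yaml
# Illustrative VirtualService for a 90/10 canary split (names are placeholders)
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-gateway
spec:
  hosts:
    - app-gateway
  http:
    - route:
        - destination:
            host: sample-app
            subset: stable
          weight: 90
        - destination:
            host: sample-app
            subset: canary
          weight: 10
```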

3. Test Canary Deployment

```bash
# Route 10% of traffic to the new version, 90% to stable
kubectl patch vs app-gateway --type=json -p='[
  {"op": "replace", "path": "/spec/http/0/route/0/weight", "value": 90},
  {"op": "replace", "path": "/spec/http/0/route/1/weight", "value": 10}
]'

# Generate traffic and monitor
for i in {1..1000}; do curl http://app-gateway/api/; done

# Check the traffic split in the SigNoz dashboard
```

Commit:

```bash
git add k8s/middleware/
git commit -m "feat: deploy open liberty and nginx with canary routing

- Open Liberty app server deployed
- NGINX load balancer configured
- Canary routing (10/90 split) for traffic management
- Health checks and observability enabled"
```

Phase 3: Supporting Components (Days 5-6)

Step 3A: Trivy SCA & SBOM

Estimated Time: 1-2 hours

```bash
cd k8s/trivy

# Deploy Trivy dashboard
kubectl apply -f k8s/trivy/trivy-dashboard.yaml

# Generate SBOM for the sample app
trivy image --format cyclonedx --output sbom.json sample-app:latest

# Upload to the dashboard
curl -X POST http://trivy-dashboard:8080/sbom \
  -F "file=@sbom.json"
```
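The commit below mentions CI/CD integration. A hedged sketch of a scan job that could be added to the monorepo's `.gitlab-ci.yml` — the stage placement and fail-on-Critical policy are choices for illustration, not taken from the repo:

```yaml
# Illustrative GitLab CI job: fail on Critical findings, publish the SBOM
scan:
  stage: build
  script:
    - trivy image --exit-code 1 --severity CRITICAL $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - trivy image --format cyclonedx --output sbom.json $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  artifacts:
    paths:
      - sbom.json
```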

Commit:

```bash
git add k8s/trivy/
git commit -m "sec: deploy trivy sca dashboard and sbom generation

- Central Trivy dashboard for vulnerability tracking
- Automated SBOM generation per image
- Integration with CI/CD pipeline"
```

Step 3B: Nexus Artifact Repository

Estimated Time: 1-2 hours

```bash
cd terraform/nexus

terraform init && terraform apply

NEXUS_URL=$(terraform output -raw nexus_url)
echo "Nexus: $NEXUS_URL"
```

Configure repositories:

```bash
kubectl port-forward -n nexus svc/nexus 8081:8081 &

# Configure via the UI or API:
# - Docker repository
# - Maven repository
# - npm repository
```

Commit:

```bash
git add terraform/nexus/
git commit -m "infra: deploy nexus artifact repository

- Docker, Maven, npm repositories configured
- Integration with GitLab CI/CD
- Storage tiering configured"
```

Step 3C: ArgoCD GitOps

Estimated Time: 1.5 hours

```bash
cd k8s/argocd

# Install ArgoCD
kubectl apply -f k8s/argocd/argocd-installation.yaml

# Wait for ArgoCD
kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=argocd-server \
  -n argocd --timeout=300s

# Get ArgoCD UI
ARGOCD_URL=$(kubectl get svc argocd-server -n argocd -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "ArgoCD: http://$ARGOCD_URL"

# Get initial admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```

Create sample applications:

```bash
kubectl apply -f k8s/argocd/apps/sample-app-argocd.yaml

# Monitor sync
argocd app get sample-app
argocd app sync sample-app
```
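If `sample-app-argocd.yaml` needs reconstructing, a minimal Application manifest could look like this — the `repoURL`, `path`, and target namespace are placeholders, not repo facts:

```yaml
# Illustrative ArgoCD Application — repoURL and path are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.brac-poc.local/root/brac-poc-services.git
    targetRevision: main
    path: service-a/k8s
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

The automated `syncPolicy` gives the pull-based model the commit message describes: ArgoCD reconciles cluster state from Git without the CI pipeline pushing manifests.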

Commit:

```bash
git add k8s/argocd/
git commit -m "feat: deploy argocd for gitops workflow

- ArgoCD managing cluster state
- Sample applications synchronized from Git
- Pull-based deployment model"
```

Step 3D: JBoss Domain Mode

Estimated Time: 1-2 hours

```bash
cd k8s/jboss

kubectl apply -f k8s/jboss/domain-controller.yaml
kubectl apply -f k8s/jboss/managed-servers.yaml

# Verify deployment
kubectl get pods -n jboss | grep domain

# Access domain console
kubectl port-forward -n jboss svc/domain-controller 9990:9990 &
echo "Domain Console: http://localhost:9990"
```

Deploy sample application:

```bash
# Via the domain console or CLI
kubectl exec -it -n jboss domain-controller-0 -- \
  jboss-cli.sh --connect --command="deploy /path/to/app.war"
```

Commit:

```bash
git add k8s/jboss/
git commit -m "feat: deploy jboss domain mode

- Domain controller with managed servers
- Application deployment via domain console
- Metrics exported to OpenTelemetry"
```

Validation & Testing

Run Full Validation Suite

```bash
./scripts/validate.sh

# Expected output:
# ✅ OpenShift cluster: 3 nodes Ready
# ✅ ODF storage: Block + Object classes available
# ✅ Compliance scan: Passed (PCI-DSS)
# ✅ ACS policy: Blocking Critical images
# ✅ OTel traces: Flowing into SigNoz
# ✅ ClickHouse: Hot retention active
# ✅ WSO2 SSO: SAML + OIDC working
# ✅ NGINX canary: 10/90 routing verified
# ✅ Kafka KRaft: 3 brokers operational
# ✅ Redis Sentinel: Failover tested
# ✅ Trivy dashboard: Accessible
# ✅ ArgoCD: Apps synced
```

Generate Demo Report

```bash
python3 scripts/generate-demo-report.py \
  --cluster-info cluster-info.txt \
  --compliance-report compliance-report.pdf \
  --output demo-report.md

# Creates a markdown report with:
# - Architecture diagram (ASCII)
# - Component status table
# - Compliance findings
# - Performance metrics
# - Deployment timeline
```


Cleanup (if needed)

Preserve Code, Destroy Infrastructure

```bash
# Keep all code in Git; destroy only the cloud resources
cd terraform/openshift
terraform destroy -auto-approve

cd ../kafka
terraform destroy -auto-approve

cd ../redis
terraform destroy -auto-approve

# Destroy GitLab, Jenkins, and Nexus the same way
```


Troubleshooting

OTel Collector Not Receiving Traces

```bash
# Check collector logs
kubectl logs -n observability -l app=otel-collector

# Verify the sample app is instrumented
kubectl logs -n default svc/sample-app | grep -E "OTEL|tracing"

# Test the OTLP endpoint
telnet otel-collector.observability.svc.cluster.local 4317
```

Kafka Topics Not Receiving Data

```bash
# Verify exporter configuration
kubectl get configmap -n observability otel-collector-config -o yaml

# Check Kafka connectivity from a collector pod (substitute the actual pod name)
kubectl exec -it otel-collector-xxx -n observability -- \
  nc -zv kafka.kafka.svc.cluster.local 9092
```

WSO2 SSO Not Working

```bash
# Check Identity Server logs
kubectl logs -n wso2 -l app=is

# Verify SAML/OIDC configuration
kubectl get secrets -n wso2 | grep saml
```




Last Updated: 2026-04-24
Timeline: 6 days
Status: Ready for implementation