
BRAC POC - Validation & Testing

Comprehensive testing and validation procedures to verify all 9 components are operational and meet BRAC requirements.

Pre-Validation Checklist

Before running validation:

- [ ] All Terraform modules applied successfully
- [ ] All Kubernetes manifests deployed
- [ ] Cluster nodes are Ready
- [ ] Pods are Running (no CrashLoopBackOff)
- [ ] All required namespaces created
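The checklist above can be spot-checked with a short script. This is a sketch: it assumes `kubectl` access to the cluster, and the namespace list is illustrative — adjust it to match your deployment.

```shell
#!/usr/bin/env bash
# Pre-flight sanity check. Namespace list below is an assumption — edit to taste.
set -uo pipefail

echo "--- Nodes not Ready ---"
kubectl get nodes --no-headers | awk '$2 != "Ready"'

echo "--- Pods not Running/Completed ---"
kubectl get pods -A --no-headers | awk '$4 != "Running" && $4 != "Completed"'

echo "--- Missing namespaces ---"
for ns in observability wso2 gitlab jenkins trivy redis kafka jboss openshift-compliance; do
  kubectl get namespace "$ns" >/dev/null 2>&1 || echo "$ns"
done
```

Empty sections mean the corresponding checklist item passes.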


Component Validation

1. OpenShift Platform

Cluster Health

```bash
# Check node status
oc get nodes
# Expected: 3 nodes, STATUS=Ready, ROLES=master,worker

# Check cluster operators
oc get clusteroperators
# Expected: all operators Available

# Check OpenShift version
oc version
```

ODF Storage

```bash
# Verify storage classes
oc get storageclass
# Expected: ocs-storagecluster-ceph-rbd, ocs-storagecluster-ceph-rgw

# Check ODF cluster status
oc get storagecluster -n openshift-storage
# Expected: PHASE=Ready

# Test block storage
oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ocs-storagecluster-ceph-rbd
  resources:
    requests:
      storage: 5Gi
EOF

oc get pvc test-block-pvc
# Expected: STATUS=Bound after ~10 seconds
```
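Once the claim binds, it is worth removing the test artifact so it does not linger in the namespace — this also exercises the delete/reclaim path:

```shell
# Remove the test claim; with the default Delete reclaim policy the backing PV goes too
oc delete pvc test-block-pvc
oc get pv | grep test-block-pvc || echo "Backing PV released"
```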

Compliance Operator

```bash
# Check scan status
oc get compliancescans pci-dss-scan -n openshift-compliance
# Expected: PHASE=DONE, RESULT=PASSED (or passed with remediations)

# View the scan result (jsonpath returns a plain string, e.g. PASS/FAIL)
oc get compliancescans pci-dss-scan -n openshift-compliance \
  -o jsonpath='{.status.result}'

# Export detailed results (oc get -o name already includes the configmap/ prefix)
oc extract $(oc get cm -n openshift-compliance -o name | grep pci-dss) \
  -n openshift-compliance --to=-
```

ACS Image Policy

```bash
# Try deploying an image with Critical vulnerabilities
oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-critical-vuln
spec:
  containers:
    - name: app
      image: library/alpine:3.8  # Old release with known CVEs
EOF
# Expected: Pod creation fails with an ACS policy violation
```

Validation Checkpoint:

- [ ] 3 nodes Ready
- [ ] All cluster operators Available
- [ ] ODF storage classes available
- [ ] Compliance scan completed
- [ ] ACS policy blocking vulnerability


2. Logging & Observability (OpenTelemetry)

OTel Collector

```bash
# Check collector deployment
kubectl get daemonset -n observability otel-collector
# Expected: DESIRED=3, READY=3

# Check collector logs
kubectl logs -n observability -l app=otel-collector --tail=20
# Expected: no errors; listening on 4317 (OTLP gRPC) and 8888 (Prometheus metrics)

# Test OTLP connectivity
kubectl port-forward -n observability svc/otel-collector 4317:4317 &
grpcurl -plaintext localhost:4317 list
# Expected: opentelemetry.proto.collector.metrics.v1.MetricsService, etc.
```
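Beyond listing services, an end-to-end check is to post a hand-built span. This sketch assumes the collector's OTLP/HTTP receiver is enabled on its default port 4318; the `validation-probe` service name and the trace/span IDs are arbitrary test values.

```shell
# Forward the OTLP/HTTP port (4318 is the OTLP/HTTP default)
kubectl port-forward -n observability svc/otel-collector 4318:4318 &
sleep 2

# Post one minimal span in OTLP/JSON form
curl -s -X POST http://localhost:4318/v1/traces \
  -H 'Content-Type: application/json' \
  -d '{
    "resourceSpans": [{
      "resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "validation-probe"}}]},
      "scopeSpans": [{
        "spans": [{
          "traceId": "5b8aa5a2d2c872e8321cf37308d69df2",
          "spanId": "051581bf3cb55c13",
          "name": "validation-span",
          "kind": 1,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001000000000"
        }]
      }]
    }]
  }'
# Expected: a 2xx response; the validation-probe service should appear in SigNoz shortly after
```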

SigNoz Backend

```bash
# Check SigNoz pods
kubectl get pods -n observability -l app=signoz
# Expected: all pods Running

# Access SigNoz UI
SIGNOZ_URL=$(kubectl get ingress -n observability signoz \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Open browser to: http://$SIGNOZ_URL"
# Expected: SigNoz UI loads, no 404

# Verify services are sending data
kubectl logs -n observability -l app=signoz-backend --tail=50 | grep -iE "received|ingested"
```

ClickHouse Data Storage

```bash
# Check ClickHouse status
kubectl exec -n observability clickhouse-0 -- \
  clickhouse-client -q "SELECT version()"

# Verify tables created
kubectl exec -n observability clickhouse-0 -- \
  clickhouse-client -q "SHOW TABLES IN default"
# Expected: logs, metrics, traces tables present

# Check data insertion
kubectl exec -n observability clickhouse-0 -- \
  clickhouse-client -q "SELECT count() FROM logs"
# Expected: row count > 0 (after traffic is generated)

# Verify retention policy
kubectl exec -n observability clickhouse-0 -- \
  clickhouse-client -q "SHOW CREATE TABLE logs"
# Expected: 2-day TTL clause in the output
```

Sample Application Instrumentation

```bash
# Generate traffic to the sample apps
SAMPLE_URL=$(kubectl get service sample-app \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
for i in {1..50}; do curl -s "http://$SAMPLE_URL/api/data"; done

# Verify traces in SigNoz:
#   navigate to SigNoz UI → Traces → select the "sample-app" service
# Expected: 50+ traces visible with proper span hierarchy

# Compare with-SDK vs without-SDK
# Expected: the SDK-instrumented app shows more detail (database calls, HTTP calls)
```

Kafka Telemetry Pipeline

```bash
# Check Kafka topics
kubectl exec -it kafka-0 -n kafka -- \
  kafka-topics.sh --bootstrap-server kafka:9092 --list
# Expected: telemetry.logs, telemetry.metrics, telemetry.traces visible

# Verify data flowing into topics
kubectl exec -it kafka-0 -n kafka -- \
  kafka-console-consumer.sh --bootstrap-server kafka:9092 \
  --topic telemetry.traces --max-messages 5 --from-beginning
# Expected: JSON trace data visible
```

Validation Checkpoint:

- [ ] OTel Collector running on all nodes
- [ ] SigNoz UI accessible and showing data
- [ ] ClickHouse storing logs/metrics/traces
- [ ] Retention policy active (2-day hot)
- [ ] Kafka topics receiving telemetry data
- [ ] End-to-end trace visible in SigNoz


3. Identity & API Management (WSO2)

WSO2 APIM Deployment

```bash
# Check APIM pods
kubectl get pods -n wso2 -l app=apim
# Expected: 2+ replicas Running

# Access APIM console
APIM_URL=$(kubectl get ingress -n wso2 apim \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "APIM: https://$APIM_URL"
# Login: admin / admin (or configured credentials)
```

WSO2 Identity Server

```bash
# Check IS pods
kubectl get pods -n wso2 -l app=is
# Expected: Running

# Verify SSO integration
kubectl get configmap -n wso2 wso2-is-config -o yaml | grep -i saml
# Expected: SAML configuration present
```

SAML SSO Flow

```bash

Navigate to APIM console

Click login → Select "SSO"

Expected: Redirects to IS, can log in, returns to APIM authenticated

Verify SAML assertion

Browser dev tools → Network tab → Look for SAML Response header

Expected: Valid SAML assertion with user claims

```

OIDC Token Flow

```bash
# Get OIDC client credentials
kubectl get secret -n wso2 oidc-client -o jsonpath='{.data.client_id}' | base64 -d

# Request a token (host and client credentials elided)
curl -X POST https:///oauth2/token \
  -d "grant_type=password&username=admin&password=admin&client_id=&client_secret="
# Expected: access token returned (JWT)
```
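To inspect the claims inside the returned JWT, the payload segment can be base64url-decoded. This is a sketch for eyeballing claims only — it does not verify the signature:

```shell
# Decode a JWT's payload (the second dot-separated segment) without verifying it
decode_jwt_payload() {
  local payload
  payload=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')
  # base64 needs padding to a multiple of 4 characters
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Usage, assuming the token from the curl call above is in $ACCESS_TOKEN:
#   decode_jwt_payload "$ACCESS_TOKEN" | jq .
```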

API Gateway Rate Limiting

```bash
# Create a test API in the APIM console
# Configure rate limit: 10 requests / minute

# Test (gateway host elided)
for i in {1..15}; do
  curl -s http:///test-api/resource
  echo "Request $i"
done
# Expected: requests 1-10 succeed, 11-15 return 429 Too Many Requests
```
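To see exactly where the 429s begin, the loop can print only the status codes and tally them. The gateway host is elided in this template; `$APIM_URL` from the APIM step above is one candidate, and `/test-api/resource` is the illustrative path:

```shell
# One status code per request, then a tally; with a 10 req/min quota,
# expect roughly "10 200" followed by "5 429"
for i in $(seq 1 15); do
  curl -s -o /dev/null -w '%{http_code}\n' "https://$APIM_URL/test-api/resource"
done | sort | uniq -c
```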

Validation Checkpoint:

- [ ] APIM running with 2+ replicas
- [ ] Identity Server running
- [ ] SAML SSO login working
- [ ] OIDC token endpoint functional
- [ ] Rate limiting enforced
- [ ] Both accessible with no 404/500 errors


4. CI/CD & DevOps Tooling

GitLab HA

```bash
# Check GitLab pods
kubectl get pods -n gitlab
# Expected: gitlab, postgresql, redis pods Running

# Access GitLab
GITLAB_URL=$(kubectl get ingress -n gitlab gitlab \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "GitLab: $GITLAB_URL"

# Test CI/CD pipeline: push code to a feature branch
git push origin feature/test-pipeline
# Expected: GitLab CI/CD triggers, pipeline visible in the UI
```

Jenkins HA

```bash
# Check Jenkins pods
kubectl get pods -n jenkins
# Expected: jenkins-0, jenkins-1 Running

# Access Jenkins
JENKINS_URL=$(kubectl get service -n jenkins jenkins \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Jenkins: http://$JENKINS_URL"

# Create a test job:
#   New Job → Pipeline → poll Git repo → Build
```
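Jobs can also be triggered over Jenkins' REST API, which is handy for scripted validation. In this sketch, `test-job` and `$JENKINS_API_TOKEN` are placeholders; CSRF-protected instances additionally require a crumb header.

```shell
# POST to /job/<name>/build queues a build; 201 Created means it was accepted
curl -s -o /dev/null -w '%{http_code}\n' -X POST \
  --user "admin:$JENKINS_API_TOKEN" \
  "http://$JENKINS_URL/job/test-job/build"
```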

Monorepo Pipeline

```bash
# Verify .gitlab-ci.yml triggers correctly
git log --all --grep="ci" --oneline
# Expected: CI commits visible

# Check pipeline status (token and host elided)
curl -H "PRIVATE-TOKEN: " https:///api/v4/projects/1/pipelines
# Expected: pipeline status visible
```

Validation Checkpoint:

- [ ] GitLab console accessible and responsive
- [ ] Jenkins console accessible
- [ ] Monorepo pipeline triggered on commit
- [ ] Build logs visible
- [ ] Artifacts generated


5. Trivy SCA & SBOM

Trivy Dashboard

```bash
# Check Trivy deployment
kubectl get deployment -n trivy trivy-dashboard
# Expected: Running

# Access dashboard
TRIVY_URL=$(kubectl get service -n trivy trivy-dashboard \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Trivy: http://$TRIVY_URL"

# Verify image scans are visible
# Expected: dashboard shows vulnerability counts by severity
```

SBOM Generation

```bash
# Generate an SBOM for a sample image
trivy image --format cyclonedx sample-app:latest > sbom.json

# Verify SBOM format
jq '.components | length' sbom.json
# Expected: number > 0 (list of components)

# Upload to dashboard
curl -X POST http://$TRIVY_URL/sbom -F "file=@sbom.json"
```
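Before uploading, a quick structural check confirms the file really is CycloneDX — `bomFormat` and `components` are fields defined by the CycloneDX specification:

```shell
# Exit the happy path only if the SBOM declares CycloneDX and has at least one component
if jq -e '.bomFormat == "CycloneDX" and (.components | length > 0)' sbom.json > /dev/null; then
  echo "SBOM OK"
else
  echo "SBOM failed validation"
fi
```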

Validation Checkpoint:

- [ ] Trivy dashboard accessible
- [ ] Image scans showing results
- [ ] SBOM files generated
- [ ] SBOM uploaded to dashboard


6. Redis HA

Sentinel Deployment

```bash
# Check Redis pods
kubectl get pods -n redis
# Expected: redis-0, redis-1, redis-2 Running
# Expected: redis-sentinel-0, redis-sentinel-1, redis-sentinel-2 Running

# Check the Redis role
REDIS_MASTER=$(kubectl exec redis-0 -n redis -- redis-cli role)
echo "$REDIS_MASTER"
# Expected: first line shows "master" or "slave"

# Test SET/GET
kubectl exec redis-0 -n redis -- redis-cli SET test-key "hello"
kubectl exec redis-1 -n redis -- redis-cli GET test-key
# Expected: "hello" returned from the replica
```

Failover Validation

```bash
# Kill the Redis master
kubectl delete pod redis-0 -n redis

# Wait for Sentinel to detect the failure
sleep 10

# Check the new master
kubectl exec redis-1 -n redis -- redis-cli role
# Expected: either redis-1 or redis-2 is now master

# Verify no data loss
kubectl exec redis-1 -n redis -- redis-cli GET test-key
# Expected: "hello" still available

# Restore the original pod
kubectl apply -f k8s/redis/deployment.yaml
```
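Sentinel's own view of the topology can be queried directly. Here `mymaster` is the conventional default master name — adjust it if the Sentinel configuration names the master group differently:

```shell
# Ask Sentinel which address it currently considers master (Sentinel listens on 26379)
kubectl exec redis-sentinel-0 -n redis -- \
  redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster
# Expected: the IP/port of the post-failover master
```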

Validation Checkpoint:

- [ ] 3 Redis nodes running
- [ ] 3 Sentinel nodes running
- [ ] Master selected correctly
- [ ] Data replicates to slaves
- [ ] Failover works automatically
- [ ] No data loss on failover


7. Kafka KRaft Cluster

Broker Status

```bash
# Check Kafka brokers
kubectl get statefulset -n kafka
# Expected: kafka, 3 replicas Ready

# List brokers
kubectl exec kafka-0 -n kafka -- \
  kafka-broker-api-versions.sh --bootstrap-server kafka:9092
# Expected: all 3 brokers responsive
```

Schema Registry

```bash
# Check Schema Registry
kubectl get deployment -n kafka schema-registry
# Expected: Running

# Register a schema
curl -X POST http://schema-registry.kafka.svc.cluster.local:8081/subjects/test/versions \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d '{"schema": "{\"type\": \"string\"}"}'
# Expected: schema ID returned
```

Topic Operations

```bash
# Create a test topic
kubectl exec kafka-0 -n kafka -- \
  kafka-topics.sh --bootstrap-server kafka:9092 \
  --create --topic test-topic --partitions 3 --replication-factor 2

# Describe the topic
kubectl exec kafka-0 -n kafka -- \
  kafka-topics.sh --bootstrap-server kafka:9092 \
  --describe --topic test-topic
# Expected: 3 partitions, replication factor 2
```
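A produce/consume round trip on the new topic confirms the brokers are actually serving traffic, not just answering metadata requests:

```shell
# Produce a single, uniquely tagged record
echo "smoke-test-$(date +%s)" | kubectl exec -i kafka-0 -n kafka -- \
  kafka-console-producer.sh --bootstrap-server kafka:9092 --topic test-topic

# Read it back (gives up after 10s if nothing arrives)
kubectl exec kafka-0 -n kafka -- \
  kafka-console-consumer.sh --bootstrap-server kafka:9092 \
  --topic test-topic --from-beginning --max-messages 1 --timeout-ms 10000
```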

DLQ Testing

```bash
# Produce a malformed record (the consuming pipeline routes failures to the DLQ)
echo "not-a-valid-record" | kubectl exec -i kafka-0 -n kafka -- \
  kafka-console-producer.sh --bootstrap-server kafka:9092 --topic invalid-topic

# Check the DLQ
kubectl exec kafka-0 -n kafka -- \
  kafka-topics.sh --bootstrap-server kafka:9092 --list | grep dlq
# Expected: DLQ topic present
```

Validation Checkpoint:

- [ ] All 3 brokers Running
- [ ] Schema Registry responsive
- [ ] Topics created successfully
- [ ] Replication factor respected
- [ ] DLQ operational


8. Middleware (Open Liberty + NGINX)

Open Liberty Deployment

```bash
# Check Open Liberty pods
kubectl get pods -n default -l app=open-liberty
# Expected: 1+ replicas Running

# Test the application
LIBERTY_URL=$(kubectl get service open-liberty \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl http://$LIBERTY_URL/health
# Expected: 200 OK with health status
```
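Open Liberty's MicroProfile Health support also exposes separate readiness and liveness sub-endpoints, which is useful when wiring Kubernetes probes — a quick check, assuming the same `$LIBERTY_URL` as above:

```shell
# Both should return 200 when the server is healthy
curl -s -o /dev/null -w 'ready: %{http_code}\n' "http://$LIBERTY_URL/health/ready"
curl -s -o /dev/null -w 'live:  %{http_code}\n' "http://$LIBERTY_URL/health/live"
```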

NGINX Load Balancing

```bash
# Check NGINX pod
kubectl get pods -n default -l app=nginx-ingress
# Expected: Running

# Test routing to Open Liberty (ingress host elided)
curl http:///app/
# Expected: response from Open Liberty

# Check canary routing: tally versions across 100 requests
for i in {1..100}; do
  curl -s http:///app/
done | grep -o "version: [0-9]" | sort | uniq -c
# Expected: ~90 requests to v1, ~10 to v2 (canary)
```

Observability Metrics

```bash
# Check metrics are exported
kubectl exec -n observability otel-collector-0 -- \
  curl -s localhost:8888/metrics | grep liberty
# Expected: Open Liberty metrics visible (http.requests, jvm.memory, etc.)
```

Validation Checkpoint:

- [ ] Open Liberty running
- [ ] NGINX routing traffic correctly
- [ ] Canary split observed (10/90)
- [ ] Metrics exported to OTel
- [ ] Health checks passing


9. JBoss Domain Mode

Domain Controller

```bash
# Check Domain Controller pod
kubectl get pod -n jboss domain-controller-0
# Expected: Running

# Access the domain console
kubectl port-forward -n jboss svc/domain-controller 9990:9990 &
curl http://localhost:9990/
# Expected: domain console UI loads
```

Managed Servers

```bash
# Check managed server pods
kubectl get pods -n jboss -l app=managed-server
# Expected: 2+ replicas Running

# Verify servers registered with the domain controller
kubectl exec -n jboss domain-controller-0 -- \
  jboss-cli.sh --connect --command="/server-group=default:read-resource"
# Expected: server group configuration shown
```

Application Deployment

```bash
# Deploy a test application
kubectl exec -n jboss domain-controller-0 -- \
  jboss-cli.sh --connect --command="deploy /path/to/app.war --server-groups=default"

# Verify the deployment
kubectl exec -n jboss domain-controller-0 -- \
  jboss-cli.sh --connect --command="/deployment=app.war:read-resource"
# Expected: deployment info shown
```

Validation Checkpoint:

- [ ] Domain Controller running
- [ ] Managed servers running
- [ ] Domain console accessible
- [ ] Applications deployable via domain
- [ ] Servers registered with controller


Comprehensive Validation Script

Run all validations automatically:

```bash
./scripts/validate.sh --full
```

Expected output:

```
============================================
 BRAC POC - Comprehensive Validation Report
============================================

1. OpenShift Platform
   ✅ Cluster health: 3 nodes Ready
   ✅ ODF storage: Block + Object available
   ✅ Compliance scan: PASSED (PCI-DSS)
   ✅ ACS policy: ENFORCED (blocking Critical)

2. Observability Stack
   ✅ OTel Collector: Running on 3 nodes
   ✅ SigNoz backend: Accessible
   ✅ ClickHouse: Storing data (123,456 logs)
   ✅ Kafka topics: Receiving telemetry
   ✅ E2E trace: Visible in SigNoz

3. Identity & API Management
   ✅ APIM: Running (2 replicas)
   ✅ Identity Server: Running
   ✅ SSO (SAML): Functional
   ✅ SSO (OIDC): Functional
   ✅ Rate limiting: Enforced (10 req/min)

4. CI/CD & DevOps
   ✅ GitLab: Running (accessible)
   ✅ Jenkins: Running (accessible)
   ✅ Monorepo pipeline: Triggered
   ✅ Builds: Successful

5. Trivy SCA
   ✅ Trivy dashboard: Accessible
   ✅ Image scans: Completed
   ✅ SBOM: Generated (45 components)

6. Redis HA
   ✅ Redis cluster: 3 nodes Ready
   ✅ Sentinel: 3 sentinels Running
   ✅ Failover: Automatic (tested)
   ✅ Data replication: Working

7. Kafka KRaft
   ✅ Brokers: 3 Running
   ✅ Schema Registry: Operational
   ✅ Topics: Created (3/3)
   ✅ DLQ: Operational

8. Middleware
   ✅ Open Liberty: Running
   ✅ NGINX: Load balancing
   ✅ Canary routing: 10/90 split verified
   ✅ Metrics: Exported

9. JBoss
   ✅ Domain Controller: Running
   ✅ Managed servers: 2 Running
   ✅ Applications: Deployable
   ✅ Domain console: Accessible

============================================
OVERALL STATUS: ✅ ALL COMPONENTS PASSING
Timeline: 5.5 days (ahead of schedule)
============================================
```


Performance Validation

Load Testing

```bash
# Generate sustained load
artillery run load-test.yml --target http://$NGINX_URL

# Monitor in SigNoz
# Expected: P95 latency < 200ms, P99 < 500ms
```

Throughput Testing

```bash
# Kafka producer throughput
kafka-producer-perf-test.sh --topic telemetry.logs \
  --num-records 100000 --record-size 1024 \
  --throughput 10000 --producer-props bootstrap.servers=kafka:9092
# Expected: sustained 10k msg/sec
```
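The consumer side can be measured with the matching perf tool to confirm reads keep pace with writes:

```shell
# Drain 100k records from the topic and report MB/sec and records/sec
kafka-consumer-perf-test.sh --bootstrap-server kafka:9092 \
  --topic telemetry.logs --messages 100000
```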


Compliance Validation

Security Checklist

- [ ] No exposed credentials in Git history
- [ ] All services use TLS/HTTPS
- [ ] Network policies restrict pod-to-pod traffic
- [ ] RBAC configured for all services
- [ ] Audit logging enabled
- [ ] PCI-DSS compliance scan passed
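The first checklist item can be automated — one option is the gitleaks CLI, assuming it is installed on the machine holding the clone:

```shell
# Scan the full git history for committed secrets; exits non-zero if any are found
gitleaks detect --source . --verbose
```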

Sign-Off

When all validations pass:

```bash
./scripts/generate-validation-report.sh > validation-report.md

# Commit results
git add validation-report.md
git commit -m "docs: validation complete, all components passing

- All 9 components verified operational
- Performance metrics within targets
- Compliance requirements met
- POC ready for BRAC demo"
```

Last Updated: 2026-04-24
Status: Validation template ready