A six-tier regulatory operations platform on GCP that encodes monthly PUC filing knowledge into a structured, queryable form — so the next time the report owner leaves, the system doesn't have to be rebuilt from scratch.
Lumen files monthly PUC regulatory reports across 8 states — Oregon, Idaho, Minnesota, Nebraska, New Mexico, Montana, Iowa, Wyoming — each carrying monetary penalties when service-quality thresholds are missed.
Three problems kept compounding. Single-person dependency: when the Oregon owner left, the SQL, exception rules, and threshold knowledge weren't fully transferable. Context-free monitoring: existing tools could tell you a value dropped 5.6% — they couldn't tell you whether that crossed a regulatory threshold. Rule drift: report-specific logic lived in people's heads, so new operators ran reports without institutional knowledge and downstream decisions used wrong numbers.
RADAR is built so this knowledge can no longer leave with a single person.
The central idea: every recurring report gets a Genome — a structured, queryable knowledge profile that captures what the report is, what it depends on, and what makes it pass or fail.
Each Genome encodes purpose and regulatory obligation, upstream ETL freshness SLAs, 90-day rolling baselines per metric, regulatory thresholds sourced from published PUC rules, cross-jurisdiction Pearson correlations with neighbour states, failure history with recovery playbooks, cost and bytes scanned per run, and the blast radius — which other reports break if this one fails.
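The Genome fields above can be sketched as a structured record. This is an illustrative shape only, with hypothetical field names and example values, not RADAR's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ReportGenome:
    """Illustrative knowledge profile for one recurring regulatory report."""
    report_id: str
    purpose: str                  # regulatory obligation in plain language
    upstream_slas: dict           # ETL source -> freshness SLA (hours)
    baselines: dict               # metric -> 90-day rolling baseline
    thresholds: dict              # metric -> limit from published PUC rules
    neighbour_correlations: dict  # neighbour state -> Pearson r
    failure_history: list = field(default_factory=list)  # past failures + playbooks
    blast_radius: list = field(default_factory=list)     # downstream reports at risk

# Hypothetical example entry
genome = ReportGenome(
    report_id="OR-PUC-monthly",
    purpose="Oregon PUC monthly service-quality filing",
    upstream_slas={"etl_service_quality": 24},
    baselines={"repair_interval_hours": 26.4},
    thresholds={"repair_interval_hours": 28.0},
    neighbour_correlations={"ID": 0.87},
    blast_radius=["OR-PUC-annual-summary"],
)
```

Because the profile is plain structured data, it can live in a BigQuery table and be queried like anything else.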
Hosted on GCP (prj-mmopsrpt-prod-001) with BigQuery as the analytical core. Five deterministic analytical methods run on every execution, with a thin Vertex AI layer above them as a translation surface only.
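One of those deterministic checks, pairing baseline drift with the regulatory threshold, might look like the sketch below. The function name, signature, and numbers are assumptions for illustration; the point is that the output is a structured result, produced without any LLM:

```python
def evaluate_metric(value, baseline, threshold, higher_is_worse=True):
    """Report drift vs. the 90-day baseline AND whether the published
    regulatory threshold is crossed. Purely deterministic."""
    drift_pct = round((value - baseline) / baseline * 100, 1)
    breached = value > threshold if higher_is_worse else value < threshold
    return {"value": value, "drift_pct": drift_pct, "threshold_breached": breached}

# A 5.6% drop alone is ambiguous; pairing it with the threshold is not.
result = evaluate_metric(value=85.0, baseline=90.0, threshold=86.0,
                         higher_is_worse=False)
# result["drift_pct"] == -5.6, result["threshold_breached"] is True
```

This is the distinction from context-free monitoring: the same drift is an FYI in one state and a penalty exposure in another, and only the threshold from the Genome can tell them apart.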
Vertex AI (Gemini) sits above the deterministic core, never inside it. It receives structured analytical outputs and generates natural-language advisories, filing narratives, root-cause hypotheses, and conversational answers.
Gemini does not analyse. If you removed it tomorrow, RADAR would continue to function using template text. A second LLM provider is wired in as a backup model. This separation is deliberate: analysis is deterministic and auditable; the LLM is a translator only.
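That "remove the LLM and the system still works" property reduces to a fallback pattern like the following. The `llm_client` interface and all strings here are hypothetical, not RADAR's actual code:

```python
def render_advisory(finding, llm_client=None):
    """Translate a structured analytical finding into advisory text.
    The analysis is already done upstream; if the LLM is absent or fails,
    fall back to deterministic template text so the pipeline keeps running."""
    template = (
        f"Metric {finding['metric']} moved {finding['drift_pct']}% vs. its "
        f"90-day baseline; regulatory threshold "
        f"{'BREACHED' if finding['threshold_breached'] else 'not breached'}."
    )
    if llm_client is None:
        return template
    try:
        return llm_client.narrate(finding)  # e.g. Gemini via Vertex AI
    except Exception:
        return template  # any LLM failure degrades to the template

print(render_advisory(
    {"metric": "held_orders", "drift_pct": -5.6, "threshold_breached": True}))
```

Keeping the LLM behind an optional, replaceable interface is also what makes swapping in the second provider a configuration change rather than a redesign.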
Microsoft Teams alerts, email digests, QlikSense dashboards. Full audit trail in Cloud Logging. Submitted through Lumen's AI governance process as OneTrust Project 565 — approved.