AssureNest AI | AI ASSURANCE PLATFORM

AI assurance platform that audits LLM outputs for hallucination, bias, PII, and quality issues, with a browser extension and a dashboard.

Tags: AI Assurance, LLM, Dashboard, Compliance, Browser Extension, SaaS

Problem

Businesses adopting AI face real risks: hallucinated answers, biased outputs, confidential-data leaks, and compliance uncertainty. There is little transparency into how LLMs behave, and SMEs have no simple tooling to track and audit their AI use.

Solution

AssureNest AI collects LLM prompts and responses, runs assurance checks (hallucination, bias, PII, sentiment), and displays findings in an explainable risk panel. Results are logged in a dashboard for future audits and compliance reviews.
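As a minimal sketch of what one such assurance check might look like on the backend, here is a hypothetical PII detector that scans an LLM response and emits findings in a shape a risk panel could render. The patterns and field names are illustrative assumptions, not the project's actual schema.

```python
import re

# Illustrative PII patterns (assumed, not the production rule set):
# real checks would cover more identifiers and locales.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\+?\d[\d\s().-]{7,}\d\b"),
}

def check_pii(text: str) -> list[dict]:
    """Return one finding per PII match found in an LLM response."""
    findings = []
    for kind, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append({"check": "pii", "type": kind, "span": match.group()})
    return findings

print(check_pii("Contact me at jane@example.com"))
```

Each engine (hallucination, bias, sentiment) could return findings in the same shape, so the dashboard and extension render them uniformly.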

Architecture

Browser Extension → SSE/REST → FastAPI Backend → AI Assurance Engines → Firestore DB → Dashboard (React + Tailwind). Deployed across Firebase + GCP Cloud Run with token-based access and role-based permissions.
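To illustrate the SSE leg of this flow, the sketch below formats assurance findings as Server-Sent Event frames, the wire format the backend would stream to the extension. The event names and payloads are assumptions for illustration; the real endpoint would sit behind FastAPI with token checks.

```python
import json

def sse_event(event: str, data: dict) -> str:
    """Format one payload as an SSE frame: event/data lines, blank-line terminated."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_findings(findings):
    """Yield each finding as a frame, then a terminal 'done' frame."""
    for f in findings:
        yield sse_event("finding", f)
    yield sse_event("done", {"count": len(findings)})

frames = list(stream_findings([{"check": "pii", "type": "email"}]))
print("".join(frames))
```

On the extension side, a standard `EventSource` (or a fetch-based reader, since `EventSource` cannot send auth headers) would consume these frames and update the risk panel incrementally.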

What I learned

Model assurance workflows, token gating, extension-to-backend communication, SSE streaming, GCP deployment, role-based dashboards, product thinking for compliance and enterprise adoption.

Tech stack

Next.js, React, TypeScript, FastAPI, Firebase, Firestore, SSE, Tailwind, Hugging Face, GCP