AIVoice+ implements a layered guardrails system across all AI-powered features. These guardrails protect the superadmin, the company, and paying users without sacrificing speed or UX.
Layered Guardrails Stack
1. Policy Layer (System Prompt)
- SAFETY_PREAMBLE injected into every AI conversation
- Non-negotiable rules that the model must follow
- Covers: role adherence, content restrictions, data handling
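The injection itself can be sketched as a simple prepend step. This is an illustrative sketch only; the names `SAFETY_PREAMBLE` and `build_messages` are assumptions, not the platform's actual API:

```python
# Hypothetical sketch of the Policy Layer: the safety preamble is prepended
# to every conversation's system prompt before the model is called.

SAFETY_PREAMBLE = (
    "Stay in your assigned role. Refuse restricted content. "
    "Never disclose user or company data."
)

def build_messages(feature_prompt: str, history: list[dict]) -> list[dict]:
    """Combine the non-negotiable rules with the feature's own system prompt."""
    return [
        {"role": "system", "content": f"{SAFETY_PREAMBLE}\n\n{feature_prompt}"},
        *history,
    ]
```

Because the preamble lives in the first system message, it applies uniformly to every AI-powered feature without per-feature code changes.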
NIST AI RMF: Map → Measure → Manage → Govern cycle. Our guardrails cover the Manage function (risk mitigation controls).
EU AI Act: Our AI systems would be classified as "limited risk" (chatbots). We implement transparency (safety notices) and human oversight (content moderation).
ISO/IEC 42001: Our safety pipeline documentation aligns with the AI management system requirements.
These are reference alignments, not certifications. External auditing is recommended for formal compliance.
What We Intentionally Do NOT Implement
| Item | Reason |
|------|--------|
| CORS restriction | Would break MCP clients and external integrations |
| Differential privacy | Academic technique, not applicable to chat-based SaaS |
| Model retraining | We use third-party models; we mitigate via prompts |
| Real-time bias scoring | Would add latency; we use post-hoc audit logging instead |
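Post-hoc audit logging, the latency-free alternative mentioned above, can be sketched as an append-only record written outside the request's hot path. The function name, field names, and file path below are illustrative assumptions, not the platform's real schema:

```python
import json
import time

def audit_log(event: str, payload: dict, path: str = "audit.jsonl") -> dict:
    """Append one JSONL record per AI interaction for later bias/safety review.

    Writing after the response is returned (or from a background worker)
    keeps the user-facing request latency unchanged.
    """
    record = {"ts": time.time(), "event": event, **payload}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Auditors can then scan the JSONL file offline, which is why real-time scoring can be skipped without losing oversight.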
Content Moderation (how it's done on our platform)